
Critical Infrastructure: Stop Whistling Past the Cyber Graveyard

An open letter to former colleagues in Homeland Security, peers in private sector cybersecurity firms, those who own and operate critical systems, academics, and politicians.

I woke up to a cyberattack double-whammy that frankly made me want to go straight back to bed.

First, the Department of Homeland Security and the FBI issued an alert about the Russian government’s targeting of US critical infrastructure — nuclear power plants, chemical plants, heavy manufacturing facilities, and so on. The joint alert was an extraordinary and unprecedented move by two agencies that traditionally have avoided pointing the finger at nation-state actors. From my time as the founding director of the United States Computer Emergency Readiness Team (US-CERT), I can say this is highly unusual.

As if that were not enough, the New York Times published a lengthy analysis of a cyberattack on a Saudi petrochemical plant that took place in the summer of 2017. Though investigators have yet to publish their findings as to who was behind the attack and what the attackers hoped to achieve, cyber experts speaking on the condition of anonymity told the Times that they believe the attack was intended to cause an explosion and kill or injure hundreds of people.

These scenarios may read like a summary of the latest must-see episode from Homeland or the latest superhero flick, but they’re not fiction — far from it. They reflect the stark and sobering reality of living in our digital-everything world. The fact that they are surprising to anybody is the most shocking (and some might say terrifying) thing of all. According to a study of the oil and gas industry by the Ponemon Institute, 68% of respondents report at least one security compromise. As recently as last year, the Department of Energy reported that the American electrical grid was in “imminent danger” from cyberattacks that are “growing more frequent and sophisticated.”

The signs are all around us and they’re multiplying and growing more strident. At best, the string of cyberattacks on petrochemical plants in Saudi Arabia is an alarming reminder of the threats facing critical infrastructure everywhere. At worst, they’re a stark warning, if not a promise, of what’s to come.

Let me put this another way: all of the hand-wringing and face-palming in Congress and in the media over the Equifax breach, which jeopardized the personal information of roughly 148 million Americans, will look like a walk in the park compared to what happens should a US energy facility be successfully attacked. And with good reason. It’s the difference between damages that can be more easily dismissed as a nuisance — a compromised driver’s license number, for example — versus those with the potential to wreak widespread havoc in our communities. We’re talking about the kind of cyberattack that jumps the digital-physical divide and does physical damage with the intent to injure or kill people.

Securing decades-old power plants and manufacturing facilities that were deemed safe from cyberattack precisely because they were never designed to be connected to digital devices is incredibly complex, and I acknowledge that. But the fact is that these plants were designed for the old-school way of doing things, not for a digital world brimming with smart, connected heaters, window shades, cars, and phones.

We must view these attacks as an urgent call to change the way we handle the threats targeting the world’s most valuable and vulnerable systems. Otherwise, the next story won’t be about what could have happened. It’ll be about the real-world consequences of what did happen. We’ll be looking in the rearview mirror asking ourselves why we, collectively, were asleep at the proverbial wheel.

Securing the critical infrastructure that powers our modern lives has to be made a global priority. This is a sacred trust shared by both private and public sectors. This is an all-hands effort for cybersecurity — my former colleagues in Homeland Security, my peers in private sector cybersecurity firms, those who own and operate critical systems, academics, and politicians — to come together to address this issue now. We can’t solve the security challenges facing these delicate, mission-critical systems by working in isolation. Industry experts and government agencies around the world need to work together to develop modern standards, processes, and regulations to address today’s modern threat landscape. Let’s start by protecting the systems that matter most.


Amit Yoran is chairman and CEO of Tenable, overseeing the company’s strategic vision and direction. As the threat landscape expands, Amit is leading Tenable into a new era of security solutions, empowering organizations to meet the challenges of evolving threats with …

Article source: https://www.darkreading.com/critical-infrastructure-stop-whistling-past-the-cyber-graveyard/a/d-id/1331308?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Azure Guest Agent Design Enables Plaintext Password Theft

Researchers find attackers can abuse the design of Microsoft Azure Guest Agent to recover plaintext administrator passwords.

The design of Microsoft’s Windows Azure Guest Agent could let hackers steal plaintext administrator passwords from target machines, researchers at Guardicore reported this week. If abused, the flaw could enable a cross-platform attack affecting every machine type Azure provides.

Analysts discovered the attack vector while researching the Azure Guest Agent, which provides plugins for Azure’s infrastructure-as-a-service (IaaS) offerings. The agent receives tasks from the Azure infrastructure and executes them; tasks are saved on the machine’s disk. VM Access is a plugin for recovering VM access when users are accidentally locked out, or credentials are lost.

This research uncovered several security issues, all of which have been shared with Microsoft. Guardicore claims that since 2015, attackers who take over an Azure virtual machine have been able to access plaintext credentials – provided the VM Access plugin had been used on the machine.

“When we looked into the communication channels between the guest agent and the underlying infrastructure, we were alarmed by the way the guest agent handles sensitive information,” says Ofri Ziv, head of Guardicore Labs.

Microsoft says the attack method divulged by Guardicore doesn’t constitute a security vulnerability, however.

“The technique described is not a security vulnerability and requires administrator privileges. We are continuing to investigate new features to improve customer experiences, and recommend customers follow best security practices, including setting unique passwords across their services,” a Microsoft spokesperson said in a statement provided to Dark Reading.

Researchers found a flaw in the way the Azure Guest Agent receives the “Reset password” command from the Azure portal. The reset function contains encrypted sensitive data, and an attacker can abuse the workflow to steal new credentials in plaintext. The data can be decrypted using the certificate that resides on the machine, Ziv explains.

For the attack to be possible, the machine has to have used the VM Access plugin to reset a password at least once. The attacker must also have admin permissions to the Azure machine. With privileged access, it’s “fairly easy” to take advantage of this flaw, he points out.

This design affects every Azure machine, Windows or Linux, that runs the Azure Guest Agent (which comes bundled with every Azure VM) if the Azure reset password tool was used, says Ziv. Windows defenses like Credential Guard don’t protect machines from this exploitation.

The business implications of stolen plaintext passwords are powerful, he continues. Attackers can reuse them to access enterprise services, machines, and databases, testing credentials across different environments. This isn’t possible with a password hash, which, if stolen, can’t be used as-is against services that don’t support Microsoft authentication protocols.

Plaintext passwords can also be easily manipulated, Ziv explains. “For example, if the stolen machine’s password is ‘AzureForTheWin1,’ the attacker might follow this pattern and log in (successfully) to the Azure portal using ‘AzureForTheWin!’ as a password,” he says. Attackers wouldn’t be able to test different variations with a password hash.
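That pattern-guessing is trivial to automate. Here is a minimal sketch of the idea – the suffix list is invented for illustration, and the point is that none of these candidate logins could be derived from a hash alone (output order varies):

```python
# Sketch: deriving candidate logins from a known plaintext password.
def variants(password):
    base = password.rstrip("!0123456789")  # drop trailing digits/punctuation
    return {base + suffix for suffix in ("!", "1", "123", "2018")} | {password}

print(variants("AzureForTheWin1"))
# e.g. {'AzureForTheWin!', 'AzureForTheWin1', 'AzureForTheWin123', 'AzureForTheWin2018'}
```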

Further, he points out, companies keeping plaintext credentials could violate compliance regulations including PCI-DSS and SWIFT.

Microsoft has buckled down on credential security in recent versions of Windows. Credential storage has been hardened over time, and since Windows 8.1 and Windows Server 2012 the operating system has not kept plaintext passwords at all. Guardicore approached the company with its findings six months ago, says Ziv, and Microsoft reported the behavior is “by design.”

“We don’t know about attackers that exploited this security flaw, but we believe that the popularity of the Azure reset password plugin exposes many machines to this flaw by leaving their plaintext passwords on machines’ disks,” he adds.

Azure users are advised to check if they have reset password configuration files stored on their Azure machines. If so, they should be deleted. Anyone who has used the Azure reset password should “assume it got compromised,” says Ziv, and replace it on every machine or service where it is used.
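On Linux VMs, that check can be scripted. The sketch below is a starting point built on assumptions, not a Guardicore or Microsoft tool: it assumes the guest agent’s default state directory (/var/lib/waagent) and the usual extension layout of <extension>/config/*.settings files carrying a "protectedSettings" blob when credentials were passed in, and it needs root to read them. Windows VMs keep similar state under C:\WindowsAzure.

```python
#!/usr/bin/env python3
"""Sketch: flag leftover extension settings with encrypted credentials
on a Linux Azure VM. Paths and file layout are assumptions."""
import json
import pathlib

WAAGENT_DIR = pathlib.Path("/var/lib/waagent")  # assumed default state dir

def leftover_settings(base=WAAGENT_DIR):
    hits = []
    for settings_file in base.glob("*/config/*.settings"):
        try:
            blob = json.loads(settings_file.read_text())
        except (OSError, ValueError):
            continue  # unreadable or not JSON: skip
        for runtime in blob.get("runtimeSettings", []):
            # Encrypted credentials, if any, live under "protectedSettings".
            if runtime.get("handlerSettings", {}).get("protectedSettings"):
                hits.append(settings_file)
                break
    return hits

if __name__ == "__main__":
    found = leftover_settings()
    if not found:
        print("no settings files with encrypted credentials found")
    for path in found:
        print(f"[!] encrypted settings still on disk: {path}")
```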

This type of attack could be avoided by employing temporary credentials when resetting an account password, and then creating a new password for GuestOS, for example. 


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial …

Article source: https://www.darkreading.com/vulnerabilities---threats/azure-guest-agent-design-enables-plaintext-password-theft-/d/d-id/1331317?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fake Amazon ad ranks top on Google search results

Dang! Don’t you just hate it when you search for Amazon on Google, you click on the top link (which of course must be legit, right? – it’s from Google!) and then you somehow wind up infected with “Malicious Pornographic Spyware” with a dab of “riskware” on top?

Yep, not for the first time, Google’s been snookered into serving a scam tech support ad posing as an Amazon ad.

This is déjà vu. Thirteen unlucky months ago, scammers slipped a fake Amazon ad under Google’s nose. Anybody who clicked on it was whisked to a Windows support scam.

ZDNet reported on that one in February 2017, and now it brings us news of the bad ad’s rebirth. On Friday, ZDNet’s Zack Whittaker reported that for hours on Thursday, the top Google search result for “Amazon” was pointing to a scam site.

Top, as in, it outranked even the legitimate search result for Amazon.com. Users who clicked on the bad ad were whisked to a page that tried to terrify them with reports of malware infection so they’d call a number for “help.” The ad masqueraded as an official Apple or Windows support page, depending on the type of computer in use.

Then, just as fake tech support ads tend to do, and just as the fake Amazon ad did last February, the bad ad shrugged off users’ attempts to dismiss a popup box that warned them about malicious pornographic spyware and riskware etc. (What IS “pornographic spyware?” Spyware accompanied by heavy breathing?).

According to ZDNet’s analysis of the code, trying to close the popup would have likely triggered the browser to expand and fill up the entire screen, making it look like a system had been grabbed by ransomware.

ZDNet says it appeared through a proxy script on a malicious domain to make it look as though the link fully resolved to an Amazon.com page, “likely in an effort to circumvent Google’s systems from flagging the ad.”

The malicious domain was registered by GoDaddy, and the apparent domain owner didn’t respond to ZDNet’s inquiries. A spokesperson for Google told ZDNet that the company doesn’t tolerate advertising of illegal activity and takes “immediate action to disable the offending sources” when it finds ads that violate its policies.

GoDaddy pulled the site offline within an hour of being contacted by ZDNet. A GoDaddy spokesperson said that its security team found the ad violated its terms of service, so it took the site down.

Google’s swimming in these bad ads.

Last week, it announced that in 2017 it took down more than 3.2 billion ads that violated its advertising policies.

That’s an average of 100 per second, Google said, and it’s up from 1.7 billion bad-ad removals the prior year. Google also booted 320,000 online publishers off its ad network for violations like showing Google-supplied ads alongside inappropriate or controversial content, according to Scott Spencer, Google’s director of sustainable ads.

What to do?

Google’s working hard to kill bad ads, but they’re obviously still getting through, including those that contain malware. So to help you stay vigilant, here are some suggestions on what to do if you get hit with one of these fake tech support scams, be it on the phone or as “Riskware! Spyware!” taking over your browser:

  • If you receive a cold call about accepting support, just hang up.
  • If you receive a web popup or ad urging you to call for support, ignore it.
  • If you need help with your computer, ask someone whom you know and trust.
  • When searching for Amazon, remember that you don’t need to use Google. Simply go straight to Amazon.com.

DEALING WITH FAKE SUPPORT CALLS

Here’s a short podcast you can recommend to friends and family. We make it clear that these guys are scammers (and why), and offer some practical advice on how to deal with them.

(Originally recorded 05 Nov 2010, duration 6’15”, download size 4.5MB)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DaYFrzCN0SM/

US spy lab wants to geolocate any video or photo taken outdoors

How do you track down a terrorist captured in a photo as he poses in front of a cave in the Syrian outback?

It’s not easy. If he’s smart enough to turn off geotagging, you don’t have Exchangeable Image File Format (EXIF) data as a beacon to his location.

If the cave happened to be around the corner from the town library or a Starbucks, you might be in luck. Then, you could get help from something like Google PlaNet: a deep-learning machine that was initially trained on 126 million photos with EXIF data in order to learn how to work out the location of almost any photo, just going by its pixels, no EXIF data needed.

But most caves aren’t so conveniently located. PlaNet had a lot more images to go on when dealing with photos of cities or places to which tourists flock, snip-snapping away and producing scads of information-filled imagery. The neural network had far fewer images to rely on when it came to remote places where people don’t take many photos, such as oceans or polar regions, so it’s going to be pretty useless in cave-land.

This difficulty of tracking down outdoor photos that haven’t been geotagged has led a US spy lab to launch Finder: a research program of the Intelligence Advanced Research Projects Agency (IARPA), the Office of the Director of National Intelligence’s dedicated research organization.

The project aims to build on existing research systems to develop technology that augments analysts’ geolocating skills. At this point, analysts rely on information such as visible skyline and terrain; digital elevation data; existing, well-understood image collections; surface geology; geography; and architecture (think red phone booths).

It’s an “extremely time-consuming and labor-intensive activity” that often meets with limited success, according to IARPA.

The goal is all-encompassing: IARPA wants Finder to find everything, as in, the ability to geolocate any video or photo taken anywhere outdoors.

It’s going to take integration of analysts’ abilities and automated geolocation technologies along the lines of PlaNet, fusion of publicly available data sources, and expansion of automated geolocation technologies that can efficiently and accurately crawl over all manner of terrain, no matter how vast or how rugged.

When it comes to law enforcement legally searching for criminals, including terrorists, we don’t like to tell people how to hide. But it’s easy to see that if Finder gets to the outdoors-omniscient level that IARPA intends to take it, it’s going to be able to find anybody, anywhere, and that includes non-criminals/terrorists.

We’ve seen kitty cats stalked and fugitives tracked to the jungles of Guatemala via photo EXIF data. We’ve seen police act like kids in a candy store with tracking technology.

If all goes as planned, Finder, like PlaNet, at some point won’t rely on EXIF data. But that point hasn’t yet arrived, and there are issues to consider in the meantime for those who are privacy-conscious. So, for what it’s worth:

Here’s how to disappear

On most Android phones, you can just open your camera app and tap on Settings. Scroll down until you see the option for “Location tags,” and slide it off.

For iOS devices, you can turn it off for the camera app entirely by going to Settings from the home screen. From there go to Privacy and then tap Location Services. Find the Camera option, tap it and choose “Never” in the “Allow Location Access” menu.

Alternatively, you can set it to prompt you on a case-by-case basis. From the Settings menu go to General, tap Reset. On the Reset screen choose Reset Location & Privacy. At this point you may be asked to input your passcode in order to make the changes, which will reset Location & Privacy settings for all apps.

From now on all apps, including the camera, wanting to use your location information will prompt you to ‘Allow Access’.

Also, here’s Apple’s support guide to turning off location services for specific apps on iOS devices.

That’s all well and good for future photos, but once you realize how many of your photos are out there, bearing EXIF data that contains times, dates and locations of, say, your kids in the playground, you might want to start scrubbing your old photos clean. Here’s a guide on that from How-To Geek.
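If you have a lot of photos to clean, scripting the job beats clicking through them one at a time. Here is a minimal sketch using the Pillow imaging library – the photos/ directory and naming scheme are placeholders, and note that re-encoding the pixels drops all metadata, including things you might want to keep such as ICC color profiles:

```python
"""Sketch: strip EXIF (including GPS tags) from saved photos with Pillow."""
import pathlib

from PIL import Image  # pip install Pillow

def strip_exif(src, dst):
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels only; metadata left behind
        clean.save(dst)

# Hypothetical layout: clean copies written alongside the originals.
for photo in pathlib.Path("photos").glob("*.jpg"):
    strip_exif(photo, photo.with_name(photo.stem + "_clean.jpg"))
```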


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3EQ09OzKEUk/

Apple burns the HSTS super cookie

Want to know something cool?

Quietly and without fanfare Apple has rolled out a change to its Safari browser that munches one of the web’s most advanced “super cookies” into crumbs.

So-called “super cookies” are tracking methods that rely on esoteric things like browser fingerprints, ETags, Local Storage and Flash LSOs rather than cookies. They’re popular with people who really, really want to track you because they’re much harder for you to block, purge or manage than plain old regular cookies.

A few years ago I wrote about a theoretical super cookie that could defeat Incognito mode by abusing HSTS, a technology that’s designed to make your browsing more secure.

Abusing HSTS would allow these imagined super cookies to hide in plain sight because removing them results in reduced security. In the face of HSTS cookies in the wild, users and browser vendors would be forced to trade privacy and security off against each other.

At the time we weren’t aware of anyone actually using them.

Well, that’s changed now, and Apple has responded.

How HSTS can be used for tracking

HSTS is a simple instruction that websites can send to browsers that says, in effect, “remember not to talk to me in an insecure way”.

After receiving an HSTS instruction from a website, a browser will use HTTPS – the encrypted form of HTTP – to talk to that site, even if a user clicks on or types a link that uses the insecure HTTP.

Just like a cookie, an HSTS instruction is a piece of information that a website can ask a browser to remember.

However, to track a browser you need to be able to assign it a unique ID, and a single HSTS instruction just doesn’t hold enough information to do that.

While a cookie can contain thousands of bits of information, a single HSTS instruction holds just a single bit (because it’s either on or off).

However, if somebody can get your browser to make HTTP requests to a handful of websites under their control (which could be done easily by embedding a number of tiny images in a web page, for example) they can set enough HSTS on/off switches in your browser to store an ID.

An array of 30 images would be enough to track just over 1 billion different IDs (2³⁰ is roughly 1.07 billion).

It works like this…

A visitor goes to a web page with a nefarious HSTS tracking code provided by Evil Marketing Corp Inc.

The code asks for 30 different images from 30 different websites under the control of Evil Marketing Corp. The images are all fetched using HTTP – some respond with an HSTS instruction, and some don’t.

The specific pattern of 30 on/off responses is that visitor’s unique ID.

The next time that same visitor goes to a page with the tracking code on it, their browser will ask for the same images from the same websites as before. This time though, the browser will remember its HSTS instructions and ask for some of the images over HTTPS instead of HTTP.

The pattern of HTTPS/HTTP requests received by Evil Marketing Corp across its 30 websites matches the pattern of on/off responses it sent earlier – the visitor’s unique ID.
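A toy simulation makes the scheme concrete. Nothing below touches a real browser – a Python set stands in for the browser’s HSTS store, and the tracker hostnames are hypothetical:

```python
"""Toy simulation of the HSTS super-cookie scheme described above."""

N_BITS = 30
TRACKER_HOSTS = [f"img{i}.tracker.example" for i in range(N_BITS)]

def assign_id(visitor_id, hsts_store):
    """First visit: answer some image requests with an HSTS header (bit = 1)."""
    for bit, host in enumerate(TRACKER_HOSTS):
        if visitor_id >> bit & 1:
            hsts_store.add(host)  # browser remembers: always use HTTPS here

def read_id(hsts_store):
    """Later visit: which requests arrive over HTTPS leaks the bits back."""
    visitor_id = 0
    for bit, host in enumerate(TRACKER_HOSTS):
        if host in hsts_store:  # browser silently upgrades this request
            visitor_id |= 1 << bit
    return visitor_id

browser_hsts = set()
assign_id(123_456_789, browser_hsts)
assert read_id(browser_hsts) == 123_456_789
print(f"{N_BITS} bits -> {2 ** N_BITS:,} distinct IDs")  # 1,073,741,824
```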

For a fully illustrated, step-by-step example of HSTS tracking, take a look at my article about how HSTS ‘supercookies’ make you choose between privacy or security.

How Apple ate the cookie

WebKit is the open source browser engine that powers Apple’s Safari browser. Writing on the WebKit blog, Brent Fulgham volunteered a hint that HSTS tracking had recently moved from theory to practice.

Recently we became aware that this theoretical attack was beginning to be deployed against Safari users. We therefore developed a balanced solution that protects secure web traffic while mitigating tracking.

Finding itself on the horns of the privacy vs security dilemma, Apple looked for a way to step on the neck of the trackers without compromising the benefits of HSTS.

Taking a good look at how HSTS was being abused in the wild, it came up with two tactics.

Firstly, to prevent tracking codes from using an array of websites to set HSTS instructions, Safari now blocks HSTS instructions from everything other than the site you’re on, or its root domain (or the Top Level Domain + 1, as it’s described on the WebKit blog).

So, if you visit tracking.website.example.org then you can only get HSTS instructions from tracking.website.example.org (the hostname you’re on) and example.org (the root domain).

This reflects the fact that the HSTS tracking it spotted was using arrays of related subdomains, like this:

http://a.tracking.website.example.org
http://b.tracking.website.example.org
http://c.tracking.website.example.org
http://d.tracking.website.example.org

Or like this:

http://a.tracking.website.example.org
http://b.a.tracking.website.example.org
http://c.b.a.tracking.website.example.org
http://d.c.b.a.tracking.website.example.org
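In code, that first countermeasure amounts to a membership check against just two names. A toy sketch, using a naive “last two labels” heuristic for the root domain (a real implementation, Safari’s included, has to consult the Public Suffix List, since registrable domains like example.co.uk have three labels):

```python
# Toy version of Safari's rule: accept HSTS only from the page's own hostname
# or its root domain. The "last two labels" heuristic is a simplification.
def hsts_allowed(page_host, hsts_host):
    root_domain = ".".join(page_host.split(".")[-2:])
    return hsts_host in (page_host, root_domain)

page = "tracking.website.example.org"
assert hsts_allowed(page, "tracking.website.example.org")        # host itself
assert hsts_allowed(page, "example.org")                         # root domain
assert not hsts_allowed(page, "a.tracking.website.example.org")  # tracker array
```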

Safari’s second countermeasure is to ignore HSTS instructions from websites that its Intelligent Tracking Prevention blocks cookies from.

Having deployed and monitored the changes, Fulgham writes that Apple may have successfully sidestepped the privacy vs security problem that we feared:

Telemetry gathered during internal regression testing, our public seeds, and the final public software release indicates that the two mitigations described above successfully prevented the creation and reading of HSTS super cookies while not regressing the security goals of first party content.

Let’s hope it’s enough to blunt the progress of HSTS tracking in the wild and return it to a theoretical curiosity.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FKTMjhpC_RM/

Nine years on, Firefox’s master password is still insecure

Developer Wladimir Palant (of Adblock Plus fame) has uncovered a big security weakness in the way Firefox secures browser passwords behind a master password.

Firefox users who save browser passwords without setting a master password are, in theory, protected by encryption from attackers with access to their computer. The problem is that the key to unlock the logins.json file used to store these passwords can be found in an adjacent file called key3.db.

This design is secure from only the most casual attacks, as Palant notes:

It is common knowledge that storing passwords there without defining a master password is equivalent to storing them in plain text.

Which is why Mozilla offers users the option to protect passwords behind a master password, set through Tools > Privacy & Security > Use a master password.

In Firefox’s case, this turns the master password into a hash value by adding a random string to the password (a ‘salt’) and applying the SHA-1 algorithm. Thereafter, when the user enters the master password, the software simply compares a hash of the entered password with the stored hash – if the two match, the user has entered the correct password.

The first problem is that using the aging SHA-1 is considered weak because, as Palant says, “GPUs [Graphics Processing Units] are extremely good at calculating SHA-1 hashes.”

This is called brute forcing, where the attacker uses a commodity graphics card to calculate huge numbers of possible hashes until a match with the target hash generated by SHA-1 is found.

It sounds like an impossible task, but GPUs can churn through billions of these per second.

The standard technique for increasing the time it would take an attacker to brute-force a password hash is to re-apply (or iterate) the hashing.

So, having generated a hash, you add a salt to it and make another hash. Then you add some salt to the resulting hash and make a hash of that, and so on until you hit a target number of salt+hash iterations (for an in-depth look at this, read our article on how to store your users’ passwords safely).

Any attacker wanting to crack your password hash will have to perform exactly the same number of iterations, with the same salt, to find a match.

Choosing an iteration count is a matter of balancing the inconvenience you’re prepared to inflict on users when they log in against the amount of obstruction you want to put in a password cracker’s way.

The good news is you don’t have to pick one iteration count and stick to it – you can increase the iteration count over time to keep pace with improvements in hardware.

Unfortunately, Palant noticed, Firefox performs just one iteration.

By comparison, password managers such as LastPass (which uses SHA-256 rather than SHA-1) default to 5,000 iterations, with some software taking that to 100,000 if speed is less of a concern.
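To get a feel for the asymmetry, here’s a minimal sketch of salted, iterated hashing and what the iteration count does to the cost of each guess. It deliberately uses SHA-1 to mirror Firefox’s scheme, but it illustrates the principle rather than Firefox’s actual code; production systems should reach for hashlib.pbkdf2_hmac or a dedicated KDF such as scrypt or Argon2:

```python
"""Sketch: salted, iterated hashing and the per-guess cost it imposes."""
import hashlib
import os
import time

def iterated_hash(password, salt, iterations):
    digest = hashlib.sha1(salt + password).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha1(salt + digest).digest()  # re-salt, re-hash
    return digest

salt = os.urandom(16)
for count in (1, 100_000):  # a single pass vs a modern-ish iteration count
    start = time.perf_counter()
    iterated_hash(b"correct horse battery staple", salt, count)
    print(f"{count:>7} iterations: {time.perf_counter() - start:.4f}s per guess")
```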

However, the most extraordinary part of this story is that the inadequate iteration count was first reported to Mozilla nine years ago in a Bugzilla report.

Developer Justin Dolske:

A higher iteration count would make this more resistant to brute forcing (by increasing the cost of testing password), the PKCS#5 spec suggests a “modest value” of 1000 iterations. And that was 10 years ago. :).

Mozilla acknowledged the issue, and senior manager Brendan Eich followed up around 2014, stating simply: “need an owner for this.” Somehow, the issue fell through the cracks.

Solutions?

An update could raise the iteration count of the hashing, although this would require users to immediately reset their master password. Mozilla also recently announced a project to build a native password manager extension called Lockbox, but that does require Firefox users to create an account.

For most users, the easiest alternative is to use an independent password manager and abandon Firefox’s integrated manager entirely.

Mozilla and its loyal users can at least console themselves with one thought: if Firefox wasn’t open source, and therefore open to scrutiny, the issue might have sat out of sight indefinitely. It is always better to know than not.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XCbTyoMbJXY/

Leading by example: UK.gov’s secure server setup is patchy at best

The security of UK government websites is inconsistent, and local authorities are among the worst offenders.

Ministers have for years spoken about making the UK “one of the most secure places in the world to do business in cyberspace”, one component of which is making government services available online.

The government also promotes best practice for secure server setup, not least through a handy guide published by the National Cyber Security Centre.

El Reg recently reported how one key e-government service (renewing driving licences online) was not as secure as it ought to be because of the use of weak ciphers and an improperly installed digital certificate, among other issues.

The issues meant reader Andy was unable to access the Driver and Vehicle Licensing Agency (DVLA) site using either Firefox or a Samsung S7 browser. In response, the DVLA said the “security certificates of all of our websites meet industry standards”, a response our tipster and security experts including Paul Moore said missed the point that the certificate was installed incorrectly.

Andy concluded that there was still a problem with the site, despite recent improvements that brought its Qualys SSL Labs rating up from a failing “F” to a “B”.

[Image: Passport Service’s Qualys SSL Labs rating]

The security headers rating for the Passport Service site is still only a “D”.

[Image: Passport Service security headers rating]

Last week, Moore needed to use the passport service’s site to track a passport application. He discovered on Monday that the cert for the site – https://www.gov.uk/track-passport-application – wasn’t installed correctly, and the page wouldn’t load on a Galaxy S8’s browser, throwing errors instead.

[Image: Passport Service SSL error message encountered by Moore]

Soon after Moore publicly complained about the issue, and El Reg began asking its own questions, the security of the passport service tracking website was improved to achieve an A+ rating. This is a good thing and the timing might all be a coincidence. The passport service has yet to respond to a request for comment so we can’t say either way.

The two cases prompted us to take a wider look at the security of UK.gov SSL servers in general, which some experts reckon is generally lamentable.

“The sheer amount of .gov sites which are either broken, misconfigured or insecure is shocking,” Moore told El Reg.

We began looking at a sample of central government websites. Websites run by tax collectors at HM Revenue & Customs and those related to the Department for Work and Pensions’ oft-criticised Universal Credit service, it turns out, are both securely set up.

[Image: Tax service Qualys SSL Labs rating]

[Image: Universal Credit Qualys SSL Labs rating]

Moneyclaim.gov.uk – a site for submitting or defending a small claim – got a failing “F” grade last Monday before improving to achieve a “C” grade by Tuesday.

[Image: Moneyclaim Qualys SSL Labs rating]

“MCOL isn’t the worst I’ve seen, but certainly could benefit from an upgrade,” Moore remarked.

An inconsistent picture for central government SSL servers was developing.

“Some departments manage it internally, others outsource,” Moore said. “Many are services through http://gov.uk which scores very well. Some haven’t transitioned fully, so rely on old and outdated services.”

The picture when it comes to local government SSL servers is far bleaker.

One site – https://www.birmingham.gov.uk/pcn – run by Birmingham City Council and designed to allow motorists to pay their penalty charge notices, “isn’t even PCI compliant,” Moore observed. “I’m struggling to find *any* site at @BhamCityCouncil which is actually secure,” he added.

The site scored a failing “F” grade when accessed using Qualys SSL Labs server testing tool. The site’s certificate setup and configuration were found to be inadequate. El Reg raised this as an issue with the council but we’ve yet to hear back.

[Image: Birmingham City Council SSL fail – motorists face infosec payment peril]

Supposedly “secure sites” run by West Sussex County Council and the council of the Lancashire town of Wigan were also anything but.

All of this matters because failure to get it right with a site’s HTTPS certificate and server settings for encrypting traffic leaves people’s personal information at risk of interception. More immediately, using badly set up sites is likely to throw up browser errors and warnings that are likely to confound and frustrate citizens.
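You don’t need a full Qualys SSL Labs scan to spot the basics. Here’s a rough self-check – the hostname is a placeholder, and this only verifies that the certificate chain validates and lists a few common security headers, a small subset of what the graders test:

```python
"""Rough self-check of a site's HTTPS setup and security headers."""
import http.client
import ssl

HOST = "www.gov.uk"  # placeholder target

context = ssl.create_default_context()  # verifies chain and hostname
conn = http.client.HTTPSConnection(HOST, context=context, timeout=10)
conn.request("HEAD", "/")
response = conn.getresponse()

print("negotiated protocol:", conn.sock.version())  # e.g. TLSv1.2
for header in ("Strict-Transport-Security", "Content-Security-Policy",
               "X-Content-Type-Options", "X-Frame-Options"):
    print(f"{header}: {response.getheader(header) or 'MISSING'}")
conn.close()
```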

Even those sites getting an F aren’t necessarily exposed to a vulnerability that might be readily exploited, but a failing grade does show that they’re not taking basic precautions to protect their users. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/20/uk_gov_ssl_servers/

Brit police forces spend peanuts on cybercrime training

The police force covering the base of the UK’s electronic spy agency, GCHQ, in Cheltenham, England, has admitted that it has spent nothing at all on cybercrime training over the past few years.

That force, Gloucestershire Police, whose patch ironically covers the sigint specialists’ headquarters, said it has just 11 trained cybercrime cops.

Across the country, police forces have spent just £1.3m on cybercrime training courses over the past few years, according to a survey by the Conservative-leaning think tank Parliament Street.

Top of the table was North Wales Police, which spent a whopping £375,488 on cybercrime training, including putting 147 personnel through a dedicated five-day course. All new North Wales coppers also get a cybercrime bolt-on to their basic training.

In all, some 39,500 British bobbies have received some form of training on digital naughtiness. Police Scotland came fifth in the spending table, having shelled out just £83,000 between 2015 and 2017.

Norfolk and Suffolk police put no fewer than 3,882 personnel through a “Cyber Crime and Digital Policing First Responder” course, and just under 150 bods through a “digital media investigator course”.

West Midlands Police, the second largest force in the UK after London’s Metropolitan Police, had spent just £91,200 over the three-year period. Meanwhile, the City of London Police, which leads the Action Fraud online police initiative for tackling fraud, has trained two thirds (448 of 684) of its constables in cybercrime stuff.

At the very bottom of the league table was the Port of Dover Police, a force so small that most forget it exists. That force said none of its staff were trained on cybercrime matters and none of its budget was spent on counter-cybercrime training.

The survey (PDF) was carried out using Freedom of Information requests to all of the UK’s police forces. The think tank recommended that police forces “increase recruitment of officers with existing cyber skills” and work with the private and educational sectors “to ensure a pipeline of highly skilled workers are encouraged to join the police”. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/20/police_cybercrime_training_spend/

7 Spectre/Meltdown Symptoms That Might Be Under Your Radar

The Spectre/Meltdown pair has a set of major effects on computing, but there are impacts on the organization that IT leaders might not have considered in the face of the immediate problem.

Spectre and Meltdown are awful. That much goes without saying. When a vulnerability in the heart of the CPU can bring your secure authentication efforts to naught, it’s a bad thing.

But in addition to the obvious security threats, there could be significant impact on an organization’s budgets, schedules, vendor relationships, and product plans. And for many companies, these “secondary” effects could have far more impact than the initial security vulnerabilities.

While the basic impact of Spectre and Meltdown lies deep within a silicon wafer, the real impact could be felt in boardrooms, datacenters, and product planning meetings around the globe. Whether the planning involved is for products for sale, internal projects, or capital expenses, you can expect to see an impact from Spectre and Meltdown.

Here, then, are seven implications of Spectre/Meltdown that might not have been at the top of your list of worries, but that should be on your radar.

(Image: Myriams-Fotos, via Pixabay)

 

Curtis Franklin Jr. is executive editor for technical content at InformationWeek. In this role he oversees product and technology coverage for the publication. In addition he acts as executive producer for InformationWeek Radio and Interop Radio, where he works with …

Article source: https://www.darkreading.com/risk/7-spectre-meltdown-symptoms-that-might-be-under-your-radar/d/d-id/1331299?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Case for Integrating Physical Security & Cybersecurity

Aggregating threat intel from external data sources is no longer enough. You must look inside and outside your traditional knowledge base for the best way to defend against attacks.

Early last year, in “Grizzly Steppe and Carbanak: The Dangers of Miscalculation in Cyberspace,” TruSTAR researchers outlined the overlap of tactics, techniques, and procedures (TTP) between Russian state organizations and criminal organizations like the Carbanak hacking group. We found that Carbanak and attacks attributed to Russian state security agencies were utilizing some of the same infrastructure to launch attacks. CrowdStrike’s new 2018 Threat Report expands the aperture beyond Russia to include North Korea, China, and Iran. There’s evidence hacktivists borrow these TTPs too.

The overlap of TTP raises serious questions for defenders of corporate and government networks, and poses a danger of miscalculation for governments responding to attacks. Overlapping TTP also drives home the need to change our security strategy at the organizational level, toward a unified security data model that can help organizations better defend themselves and collaborate with other companies, sharing organizations, and even government agencies.

Too often, security teams silo event data into multiple categories like fraud, phishing, malware, DDoS, insider threats, and physical breaches, to name a few. These are often handled by separate teams requiring different skill sets, which is understandable. What is surprising is that we separate the data around these events and fail to correlate it in a common repository to identify trends and patterns in TTP.

Take spear phishing, for example. We know spear phishing campaigns often insert malware strains that can lead to advanced persistent threats through command-and-control servers. DDoS obviously disrupts networks, but it is also used as a means to establish a persistent presence. Physical breaches lead to malware implants. Our failure to fuse this data leaves us vulnerable to adversaries, creating dangerous inefficiency for security operators. Without a comprehensive understanding of event data across an entire organization, we place ourselves at a permanent disadvantage.

Where Collaboration Is Already Happening
Several large companies in finance, cloud services, insurance, health, and retail are now integrating their event data associated with fraud, malware, DDoS, and phishing. (Physical breach data is a laggard.) For example, Rackspace CISO and TruSTAR adviser Brian Kelly recently broke down his decision to combine physical security and cybersecurity in The Wall Street Journal. Kelly argued that in the case of executive protection, the number of spear phishing and spoofing attacks against top executives clearly mark this area as both a physical and cyber problem.

Progressive security teams are also integrating relevant data associated with the protection of their own infrastructure as well as that of their customers. This data model does not rely on adoption of a particular data format or protocol such as STIX. Companies using this approach can leverage internal resources including security information and event management (SIEM) systems, case management, endpoint detection, and vulnerability data with relevant external data feeds including everything from threat intelligence to insights from information sharing analysis centers (ISACs) to government insights.

The key component to a unified security data model relies on a centralized common knowledge repository. A common knowledge repository of security-related events can align teams and make working together more effective. Security teams can then visualize relationships in real time and exchange notes to streamline responses and save time. This approach also creates a historical reference point, which can expedite a forensic investigation when a breach or disruption occurs.
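A toy illustration of the idea: events from separate teams are normalized into one store and correlated on shared indicators. All event fields and values below are invented:

```python
"""Toy common knowledge repository: one store, correlated on indicators."""
from collections import defaultdict

repository = defaultdict(list)  # indicator -> every event mentioning it

def ingest(event):
    for indicator in event["indicators"]:
        repository[indicator].append(event)

ingest({"team": "phishing", "summary": "spear-phish of CFO",
        "indicators": ["evil.example", "203.0.113.7"]})
ingest({"team": "malware", "summary": "C2 beacon from laptop-42",
        "indicators": ["203.0.113.7", "laptop-42"]})
ingest({"team": "physical", "summary": "badge tailgating near server room",
        "indicators": ["laptop-42"]})

# Cross-team patterns fall out of the shared keys:
for indicator, events in repository.items():
    teams = {e["team"] for e in events}
    if len(teams) > 1:
        print(f"{indicator}: seen by {sorted(teams)}")
```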

This framework extends beyond individual organizations. Like-minded organizations can easily leverage insights from others using cloud-based technology. Machine learning can identify trending TTPs in real time, enabling others to proactively defend themselves by ingesting insights and modifying their SIEM and firewall profiles accordingly.

Adoption of a unified security data model is a step beyond a traditional threat intelligence platform. Aggregating data from external sources is no longer enough. You must look at your entire organizational knowledge base to accurately determine relevance, context, and priority.


Paul Kurtz is the CEO and cofounder of TruSTAR Technology. Prior to TruSTAR, Paul was the CISO and chief strategy officer for CyberPoint International LLC, where he built the US government and international business verticals. Prior to CyberPoint, Paul was the managing partner …

Article source: https://www.darkreading.com/threat-intelligence/the-case-for-integrating-physical-security-and-cybersecurity/a/d-id/1331292?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple