
Why your website is officially ‘not secure’ from today

In 2017, Google’s Chrome browser started marking transactional sites that weren’t using HTTPS as “not secure”.

Starting 24 July – think of it as the Google Chrompocalypse – that “transactional” vs. “everything else” difference comes to a decisive end. As of Tuesday, all HTTP pages will be slapped with the “not secure” label, regardless of whether they’re transactional or not.

How important is it? In some ways, not very. The world won’t end. A lot of alarm-fatigued people will probably learn to ignore the little “not secure” message, if they had ever bothered to check the address bar for it to begin with.

On the other hand, it is important, because of a few things. First off, at long last, it ushers in the much-heralded reversal of what’s considered “exceptional”.

For more than a decade, the browser address bar has been the place where we all (hopefully!) looked to see whether the site we were visiting had the reassuring “Secure” padlock, letting us know that the pages we were about to view were coming to us over a secure connection. That padlock let us know that nobody else on the network path could eavesdrop on the information we exchanged with a given site.

Then too, of course, the address bar also showed us the “not secure” red warning triangle. Really, with all these icons, it was getting a bit crowded up there, as we noted recently.

As part of Google’s ongoing effort to make encrypted – i.e., HTTPS – web connections the norm, as opposed to the exception, we can all welcome Chrome version 68, the stable version of which is due on Tuesday.

With Chrome 68, Google takes one more step toward streamlining that address bar, moving to the point where it only informs users when a site is insecure. It gets better from here: Starting with Chrome version 69, due 4 Sept., the “Secure” label will disappear from HTTPS sites, and the green padlock will turn grey.

At some point after that, the padlock will go “Poof!”, completely disappearing from the address bar, leaving it empty save for the URL. No more telling us when something is good (HTTPS). We’ll just be told when it’s bad (HTTP).

Google has been twisting arms for a while to get us here.

In 2014, the company declared that it would be giving preferential treatment to pages that use HTTPS, proclaiming that “HTTPS everywhere” would be the security priority for all web traffic. From there, Google went on to add a dash of pain to the security push by downranking plain HTTP URLs in search results in favor of ones using HTTPS wherever available.

Google announced in 2016 that it would label sites offering logins or collecting credit cards without HTTPS as “not secure”, a warning that began appearing in Chrome in early 2017.

Now, we’re moving toward a place where HTTPS is a given. But will it solve all security threats?

Of course not. There’s nothing stopping crooks from using HTTPS on scam sites or phishing sites, after all.

We still have to be careful. We’re not putting our tinfoil hats away in the closet just yet. But we’re waving a hearty hello to Chrome 68 just the same: it’s one important stepping stone on the road to a more secure web.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/36u-1ikEmiI/

Google hasn’t suffered an employee phishing compromise in over a year

Phishing attackers have failed to compromise a single employee account at Google since the company mandated authentication using U2F hardware tokens in early 2017.

That’s the remarkable claim made to security writer Brian Krebs, who received the following statement on the topic from a company spokesperson:

We have had no reported or confirmed account takeovers since implementing security keys at Google.

Given that Google has 85,050 employees, all of whom would be prized targets for phishing attacks, this is a remarkable advert for the tokens, which reports suggest are Yubico’s Universal 2nd Factor (U2F) YubiKeys.

This doesn’t rule out the possibility that phishing attackers have been able to steal employee credentials – only that they haven’t been able to overcome the extra layer provided by token security to take control of an account.

Naked Security has discussed U2F tokens before. The basic principle is that users authenticate to their accounts with a username and password, but also by plugging in a token that is unique to each user.

This is what is meant by old-school two-factor authentication – users authenticate themselves with something they know (their password) and something they have (their token).
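To make the “something you have” factor concrete, here’s a minimal Python sketch of the challenge-response principle behind U2F. It uses the third-party cryptography package, and it simulates the token’s key pair in software for illustration – a real token generates and keeps the private key inside the hardware, and the full FIDO protocol adds app IDs, counters and attestation that we omit here:

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: in real U2F the key pair is generated inside the token and
# the private key never leaves it; here it's simulated in software.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login, step 1: the server issues a fresh, unpredictable challenge.
challenge = os.urandom(32)

# Login, step 2: the token signs the challenge with its private key.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Login, step 3: the server verifies the signature with the stored public
# key. A phished password alone cannot produce a valid signature.
try:
    server_stored_public_key.verify(signature, challenge,
                                    ec.ECDSA(hashes.SHA256()))
    print("second factor accepted: user holds the registered token")
except InvalidSignature:
    print("second factor rejected")
```

Because the signature covers a one-time challenge, a crook who phishes the password still can’t replay anything useful – which is exactly why Google’s numbers above look the way they do.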

Google has long recommended that consumers use this kind of security when accessing its services, even offering an Advanced Protection Program (APP) for users who think they might be at high risk of attack, in which U2F keys are mandatory. Tokens can also be used to add security to a growing number of other sites, including Dropbox, Facebook, and all major password managers.

Google’s statement to Krebs hinted at other security layers:

Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time.

This appears to be a reference to the fact that Google’s systems can ask employees to present their keys in a number of contexts and not only when logging on to email when they start work. It’s a secondary trend in which regular re-authentication slows attackers who do somehow compromise an account.

Is the future U2F?

If U2F tokens are such an effective way to boost security, why do so few people beyond Google use them?

One would expect Google to be a big advocate as it was one of the founding backers of the FIDO Alliance under whose auspices the U2F standard was developed.

And Google has a good reason to persevere with U2F tokens in the form of another emerging standard called WebAuthn under which passwords will be consigned to history in favour of strong authentication.

Sadly, although the enthusiasm for U2F has spread to some other big companies, Google admits the same can’t be said for its own users, most of whom have failed to turn on two-step verification in any form.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2PfK5CrjNQ8/

Names and photos of Venmo ‘drug buyers’ published on Twitter

There are a few options available to get people’s attention when you find their really, really personal information dribbled onto the web for anybody to see.

For one, you could pull it all together and publish it in a nice, sober, insightful analysis that anonymizes all their Venmo drug purchases and pizza-eating habits. Or, another option is to whip up a bot to automatically tweet out the profile photos of anybody making those drug deals…

…or, to be more specific, the profile photos and first names of people whose public-by-default Venmo transactions include words such as heroin, marijuana, cocaine, meth or speed; emojis that denote drugs; or non-drug-related words such as sex, porn or hookers.

As we wrote about last week, researcher Hang Do Thi Duc took the analysis route, scraping a year’s worth of data from Venmo’s public API to find out what people are buying, who they’re sending money to, why they’re sending money, first and last names, profile pictures, times of the transactions, messages attached to the transactions, and more. Using 207,984,218 transactions, she chronicled Venmo users’ lives, everything from cannabis sales to budding romances and breakups, and eating habits.

She wanted to make it clear that anybody can find out a whole lot about you if you don’t make your Venmo account private. To do so, she used the gentle touch of anonymized data.

Joel Guerra, the creator of the Who’s buying drugs on Venmo? bot, took a more slap-happy approach when he got his hands on Venmo’s public API. As he explained in a post on Medium, when he saw that the endpoint for Venmo’s API had been posted publicly on Twitter, he quickly did “what any software engineer would do”: he started digging through the data.

Ah, he realized: I’m not the only one who likes to put salacious nonsense in Venmo’s transaction description field.

I thought about the many times I had filled that out myself with joke descriptions like “baby oil backrub” or “plan B pills” when splitting restaurant tabs with friends.

The key difference: Guerra’s Venmo account has his transactions set to “private” to ensure that he’s not spewing his baby oil backrubs all over the place.

Venmo’s default setting is public, for whatever “we also want to be a social app” reasons there might be. Don’t like the idea of somebody such as Guerra coming along and tweeting out your transactions, along with your profile picture, name, and any other information Venmo makes public by default? You can change your privacy settings.

Guerra wanted to do something fun to call people’s attention to the lack of privacy in Venmo’s default settings, so he whipped up a 70-line Python script and made a new Twitter account. Then, he set it free. For about 24 hours, his script diligently, automatically tweeted the first names and profile pictures of users making “drug” transactions on Venmo.

I chose drugs, sex and alcohol keywords as the trigger for the bot because they were funny and shocking. I removed the last names of users because I didn’t want to actually contribute to the problem of lack of privacy.
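Guerra’s actual script isn’t reproduced in his post, but the heart of any such bot is simply a keyword filter over the transaction notes. A rough sketch of that idea in Python (the trigger list echoes the words named above; the helper is ours, purely for illustration):

```python
import re

# Illustrative trigger list, echoing the keywords named earlier in this
# piece; Guerra's real list isn't public in full.
TRIGGER_WORDS = {"heroin", "marijuana", "cocaine", "meth", "speed"}

def flag_transaction(note: str) -> set:
    """Return any trigger words found in a transaction note.

    Whole-word matching avoids some false positives, but not the
    out-of-context kind: 'God speed' still matches 'speed'.
    """
    words = set(re.findall(r"\b\w+\b", note.lower()))
    return words & TRIGGER_WORDS

print(flag_transaction("Funding for your Scotland Ireland trip. God speed"))
# -> {'speed'}
```

As the examples below show, matching words with no sense of context is exactly how a trip-money transfer ends up flagged as a “drug” deal.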

The transactions aren’t necessarily actual drug deals. In fact, most aren’t, Guerra told Motherboard. Rather, he believes he caught a net-full of tongue-in-cheek transaction descriptions, on a par with his Plan B pills.

Either that, or the simple Python script took things out of context. For example, one transaction posted on Thursday included the message “Your love is my drug.” The profile picture showed the user with somebody who could have been a spouse or significant other.

Another message read “not drugs,” while yet another contained the phrase “Funding for your Scotland Ireland trip. God speed,” with the script presumably only plucking the word “speed” out of context.

As Guerra said in his Medium post, the response to his “Who’s buying drugs on Venmo?” tweets was overwhelmingly positive. People heard his privacy message loud and clear. But even if we all got it, we didn’t all like it. Just because data is public doesn’t mean that more exposure will make things better. In fact, it makes things worse.

As Motherboard’s Joseph Cox notes, there are plenty of examples of researchers and coders who’ve scraped sites for publicly available data and then handled the data in ways that have rubbed people the wrong way.

One example: in 2016, without users’ permission, Danish researchers publicly released data scraped from 70,000 OkCupid profiles, including their usernames, age, gender, location, what kind of relationship (or sex) they’re interested in, personality traits and answers to thousands of profiling questions used by the site.

On Friday, after getting “more attention than [he] had imagined possible,” Guerra shut down the script.

I saw no further value in tweeting out anyone’s personal transactions anymore.

But just because he’s not sniffing around for public mentions of meth or hookers anymore doesn’t mean they’re not out there. He invited us all to have a peek at Venmo’s API and see for ourselves:

You’ll see all the details of the last one thousand Venmo transactions including last names and usernames that I chose not to include in my bot’s tweets.
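For the curious, pulling that public feed took nothing more than an unauthenticated HTTP request. A hedged sketch using Python’s requests library – the endpoint path and field names below are as they circulated publicly in mid-2018, and Venmo may well have changed or closed them since:

```python
import requests

# Historical illustration: this endpoint was reachable with no
# authentication in mid-2018. The response shape and field names are as
# observed at the time and may no longer be accurate.
URL = "https://venmo.com/api/v5/public"

resp = requests.get(URL, params={"limit": 20}, timeout=10)
resp.raise_for_status()

for txn in resp.json().get("data", []):
    # Each record carried real names, messages and profile links by default.
    actor = txn.get("actor", {}).get("name")
    message = txn.get("message")
    print(f"{actor}: {message}")
```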

Or, then again, maybe we shouldn’t look at that public API. Maybe instead, check your own account, and please do advise your friends if you think they need to zip up their Venmo pants.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/89V70arUguU/

The Bluetooth “device snooping bug” – what you need to know

“Curiouser and curiouser,” said Alice. “I wonder what all these security bulletins are for?”

That was how I felt this morning, seeing that a sea of Apple security announcements had dropped into my inbox overnight.

But a quick visit to Settings > General > Software Update on my iPhone took just a couple of seconds to confirm:

iOS 11.4.1
Your software is up to date.

I had a similar outcome on my Mac when I went to Apple menu > About This Mac > Software Update…

When I reached my desk at Sophos HQ, the mystery deepened: I was asked to comment on Carnegie Mellon CERT’s newly published Vulnerability Note VU#304725, detailing a security hole dubbed CVE-2018-5383.

The CVE-2018-5383 bug has the full and imposing title:

Bluetooth implementations may not sufficiently validate elliptic curve parameters during Diffie-Hellman key exchange.

Simply put, a crook who is in the right place at the right time might be able to figure out the encryption key that one of your Bluetooth devices is using to talk to your laptop, or your bicycle computer, or your phone, or whatever it’s paired with.

This sounds serious – presumably, this sort of bug could allow crooks to listen to, or even to interfere with, data coming out of your Bluetooth devices, from heart rate monitors and bicycle power meters all the way to mice and keyboards.

Sniffed-out keyboard data could be used to get hold of passwords you just typed in; modified keyboard transmissions could be used to alter data you’d just entered, leaving you with wrongly-edited documents or taking you to websites you never intended to visit.

Diffie-Hellman is more correctly called Diffie-Hellman-Merkle (DHM) after its three co-inventors. Martin Hellman himself has urged that Ralph Merkle’s name ought to be included, and we agree, so DHM is how we shall refer to it here.

Sharing secret keys securely

Diffie-Hellman-Merkle was a breakthrough when it was published in the 1970s, because it was the first public and practical algorithm for sharing secret encryption keys without needing a pre-existing secure channel.

Imagine where internet commerce would be without a system like DHM – every time you wanted to do business with someone online, you’d have to visit their offices first with a USB drive so you could securely collect the encryption keys to use when you got back home.

In theory, even if crooks can intercept all the network data exchanged at the start of a session by a correctly implemented DHM algorithm, they won’t learn enough to be able to figure out the encryption key agreed upon for that session.

In other words, the crooks won’t subsequently be able to eavesdrop on or to tamper with the traffic that’s exchanged, even if they were there and listening in right from the start.
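To see why, it helps to watch the arithmetic of classic finite-field DHM with toy numbers. Real deployments use enormous primes or, as Bluetooth does, elliptic curves; the tiny values below are purely illustrative:

```python
# Toy Diffie-Hellman-Merkle over a tiny prime field. Everything marked
# "public" is visible to an eavesdropper; the secrets never cross the wire.
p, g = 23, 5                 # public: prime modulus and generator

a = 6                        # Alice's secret exponent
b = 15                       # Bob's secret exponent

A = pow(g, a, p)             # public: Alice sends g^a mod p  -> 8
B = pow(g, b, p)             # public: Bob sends   g^b mod p  -> 19

# Each side raises the other's public value to its own secret exponent.
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
assert shared_alice == shared_bob == 2

# An eavesdropper holding p, g, A and B must solve a discrete logarithm
# to recover the shared value - easy at this toy size, infeasible at
# real-world parameter sizes.
```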

But CVE-2018-5383 is an implementation flaw discovered by Israeli cryptographers Lior Neumann and Eli Biham – some Bluetooth firmware has bugs that get in the way of the anti-snooping/anti-tampering properties of DHM.

A crook within Bluetooth range might be able to interfere with the device connection process – what’s called a Man-in-the-Middle, or MiTM, attack – in such a way as to extract the secret encryption key that each end of the Bluetooth session just agreed on.
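The remedy implied by the bug’s title is conceptually simple: before using a public key received from a peer, check that it really is a point on the agreed curve. A simplified Python sketch of that check against the published NIST P-256 constants (the helper function is ours for illustration; production code would use a vetted crypto library rather than hand-rolled arithmetic):

```python
# CVE-2018-5383 in a nutshell: some pairing code skipped checking that the
# peer's public-key coordinates satisfy the curve equation, letting an
# attacker substitute points with weak properties. The check itself is one
# line of modular arithmetic: y^2 == x^3 + ax + b (mod p).
# Constants below are the published NIST P-256 parameters.
P = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
A = P - 3
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

def on_curve(x: int, y: int) -> bool:
    """Reject any claimed public key that is not a point on P-256."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The P-256 base point must pass; a tampered coordinate must fail.
GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5
assert on_curve(GX, GY)
assert not on_curve(GX, GY + 1)
```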

What to do?

According to Carnegie Mellon’s vulnerability notice, Apple, amongst others, only published an update for this bug on 2018-07-23 – the date that the vulnerability was publicised, and the date of Apple’s latest wave of security bulletin emails.

Yet Apple’s own products will report that “your software is up to date” if you try to get this apparently important fix.

That’s because Apple’s latest security bulletin isn’t announcing that a fix is available, merely that this issue was already fixed – in the update before last, in fact, namely when iOS 11.4 and macOS 10.13.5 came out. (Current Apple version numbers are iOS 11.4.1 and macOS 10.13.6.)

So, it looks as though 2018-07-23 was merely the pre-arranged date for talking publicly about CVE-2018-5383, rather than the date at which vendors started fixing it.

Are you safe?

According to Carnegie Mellon’s chart, Bluetooth code from Android, Microsoft and Apple is either already updated or was never affected…

…but you will need to check with your phone vendor or mobile carrier to make sure that the patches added to the Android Open Source Project have made their way to your handset.

In the meantime:

  • To exploit this vulnerability, crooks need to be in range when you first connect to a Bluetooth device. As far as we can see, they can’t start snooping on a device that’s already connected, which limits the extent of any attacks.
  • If you’re not using Bluetooth, turn it off. You’ll save battery and avoid any sort of unwanted pairings or connections, as well as neutralising any cryptographic attacks that might exist. You’ll also avoid inadvertently broadcasting your Bluetooth hardware address, a detail that makes you easier to track.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/l4VMiq5_gaM/

Insecure web still too prevalent: Boffins unveil HSTS wall of shame

How’s that migration to “HTTPS everywhere” going? With some Chrome browsers* now flagging insecure sites, there’s a lot of work still to do, according to security bods Troy Hunt and Scott Helme.


In particular, while some holdouts exist who haven’t applied HTTPS to their sites, many websites that people expect to be secure can be accessed insecurely because of HSTS (HTTP Strict Transport Security) configuration problems.

HSTS is a policy mechanism that allows a web server to enforce the use of TLS in browsers and other web agents. The cryptographic technology was designed to protect websites against protocol downgrade attacks and cookie hijacking.
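In practice, HSTS is just one response header. A quick way to see whether a site sends it, sketched with Python’s requests library (the domain is a placeholder):

```python
import requests

# Fetch a page and print its HSTS policy, if any. The domain below is a
# placeholder; a typical strong policy looks like
# "max-age=31536000; includeSubDomains; preload".
resp = requests.get("https://example.com/", timeout=10)
print(resp.headers.get("Strict-Transport-Security", "no HSTS header sent"))
```

The preload token in that header matters later in this piece.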

What started as “a fun way to spend an afternoon” with coffee, Hunt told Vulture South today, turned into a week-long project documenting the many ways in which HSTS configurations can unintentionally leave pages unencrypted, even when sites can present their SSL certificates.

The pair have documented their efforts at whynohttps.com, foreshadowed yesterday, where among other things they list globally top-rated Alexa sites that can load insecurely, along with country-specific analysis – all with the hope that sites listed on the wall of shame will lift their game.

Even if you ignore the 35 Chinese sites on the list (there are, after all, special circumstances in the Middle Kingdom we’ll discuss later), there are still 65 out of the world’s 502 largest websites that can, always or sometimes, load insecurely.

The 100 sites listed, Hunt noted, are 20 per cent of the top 502 sites (ranked by Alexa).

Making the assessments, Hunt said, brought him and Helme into contact with some odd site behaviours.

Australia provides a useful, if unfortunate, example in the form of its Department of Home Affairs, https://www.homeaffairs.gov.au. That site loads securely for Hunt, but it popped up as “insecure” in Helme’s crawl.


If you load the site from the link above, it will load correctly and securely: the configuration error the pair found was an HTTP maintenance page that Helme somehow landed on, and from which users could navigate to other links without HTTPS.

The worst that could happen, Hunt told The Register, is that “the site can be requested insecurely, and serve content insecurely – so that page can become a phishing or malware page”.

So while Australia’s Department of Home Affairs doesn’t deserve a screaming “it’s insecure!” headline (because most people will never land on the page Helme’s crawler found), there is a configuration error that drops the site’s guard under some circumstances.

Getting it right, Hunt told us, needs HTTPS, HSTS, and HSTS pre-loading. Having worked through all three on his own HaveIBeenPwned.com site, he said pre-loading is important because it stops the browser from ever making that first request over insecure HTTP: “Even if you’ve never been to a site before, [HTTPS] is baked into the browser… even an insecure request redirects to a secure request.”
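We don’t have the pair’s actual crawler, but the checks behind a listing like whynohttps.com can be approximated in a few lines: does a plain HTTP request end up on HTTPS, and does the site send HSTS with the preload token? A rough sketch, our reconstruction of the idea rather than their code:

```python
import requests

def https_posture(domain: str) -> dict:
    """Approximate the whynohttps.com checks (our reconstruction, not the
    pair's actual crawler): does a plain-HTTP request end up on HTTPS,
    and does the site send HSTS with the preload token?"""
    resp = requests.get(f"http://{domain}/", timeout=10, allow_redirects=True)
    hsts = resp.headers.get("Strict-Transport-Security", "")
    return {
        "ends_on_https": resp.url.startswith("https://"),
        "sends_hsts": bool(hsts),
        "preload": "preload" in hsts,
    }

# Hunt's own site sets all three, so it makes a handy positive control.
print(https_posture("haveibeenpwned.com"))
```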

What he and Helme found was a lot of edge cases. They decided to keep those edge cases in whynohttps.com – any site that will serve insecure requests is included, even though the pair agree that often “it’s a matter of degree” for an individual site.

As for China, which figures prominently in the top 100 offenders, Hunt said the Middle Kingdom makes for an interesting case.

The lack of HTTPS and HSTS in the country could in part reflect national security attitudes to encryption, state censorship, and the heavy presence of the state in infrastructure ownership, he maintained.

Twitter’s T.co URL shortener, the BBC (.com), Fox News, Speedtest.net, Fedex, 4chan, and Australia’s ABC and Bureau of Meteorology (to name just a few) have no such excuse. ®

*At the time of publication, the Chrome 68 update was not yet available on our Macs

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/24/insecure_web_hsts_https/

Dust yourself off and try again: Ancient Solaris patch missed the mark

A vulnerability first detected and “resolved” years ago in Oracle’s Unix OS, Solaris, has resurfaced, necessitating a fix in Big Red’s latest quarterly patch batch.

Rather than a Lazarus-like return from the dead, it’s more a case of security researchers discovering that the original fix, for a component that’s become known as Solaris Availability Suite Service, isn’t good enough.


Oracle agreed with security researchers at Trustwave, who flagged up the issue, and pushed out a new fix.

The vulnerability (CVE-2018-2892) lends itself solely to local exploitation – it’s not a remote hacking threat – so it could be worse, though it would be foolish to ignore it on insecure systems. The vulnerability rates a 7.8 classification on the Common Vulnerability Scoring System (CVSS) 3.0, well towards the upper end of the 0-10 scale.

The “easily exploitable vulnerability allows low privileged attacker with logon to the infrastructure where Solaris executes to compromise Solaris”, according to a bug listing on the CVE database.


The practical upshot is that users running Sun StorageTek Availability Suite (AVS) on Oracle Solaris 10 or 11 need to patch their systems to resolve an incompletely fixed flaw rooted in security bugs in legacy versions of the tech.

The vulnerability is a memory corruption bug that would allow an attacker to write malicious code to memory and execute it with kernel-level (highest) privileges. The flaw was first discovered in 2007 and made public during CanSec West 2009, a security conference held in Vancouver, Canada. A fix was applied shortly after the event.

Trustwave found that the original fix was insufficient.

“Exploiting the vulnerability can only be done by a locally logged in user (no direct remote exploitation),” the researchers said. “The vulnerability lets you execute code in the root/kernel context. Typically, this would be a root shell.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/24/oracle_repatch_old_solaris_bug/

Threat Hunting: Rethinking ‘Needle in a Haystack’ Security Defenses

In cyber, needles (that is, threats) can disappear quickly, for a variety of reasons, and often long after hackers have completed what they came to do.

Business executives are finally getting the message from IT and security leaders that they need to be more proactive when it comes to cybersecurity. They can’t afford to let their cybersecurity teams wait for alerts that may come too late to stop a minor intrusion from becoming a major breach. Threat hunting is the approach business leaders need to detect these incidents early enough to stop them.

Where leadership holds both authority and responsibility for these functions, they may not know enough about threat hunting to provide much-needed direction. It’s often up to frontline defenders to figure out how to get that initiative on stable footing.

At any organization, the sheer number of “events” to sort through can make early detection daunting. It’s easy to reach for the “needle in a haystack” metaphor, but this is a flawed perception of the problem. The old saying assumes that you know there is a needle, that you know what a needle looks like, and that it is in fact a needle you’re looking for. This doesn’t address the fact that, in the cyber world, needles (that is, threats) can disappear quickly for a wide variety of reasons — and often long after the malicious party has completed what he or she came to do.

Although there are many factors for cybersecurity teams to juggle, getting started isn’t hard.

Read the Hacker Playbook 
Cybersecurity professionals who support detection and response have an advantage over their adversaries that might not be obvious. Independent groups like MITRE have conducted research on the techniques and tactics used by threat actors, which they have released under the ATT&CK framework. By studying and understanding this knowledge base, analysts and other professionals can focus their efforts to remain ahead of threats.

Where other models oversimplify categories of techniques, attempt to apply a one-size-fits-all approach to complex behaviors, or assign too much significance to the early pre-compromise stages of an attack, ATT&CK is a comprehensive and threat-agnostic resource that emphasizes the importance of a data-driven approach. By using a resource like ATT&CK and adopting a quantitative method of measuring coverage, teams responsible for monitoring and response can hunt more effectively.

Take Action
The ATT&CK framework can seem overwhelming at first, given that it enumerates hundreds of individual techniques and tactics across Windows, Linux, and macOS systems. New threat-hunting teams without clear direction from their leaders may feel they need to tackle everything at once. That leads to doing none of it well, and contributes to the poor retention and satisfaction rates that leave major gaps in cybersecurity teams.

Fortunately, full coverage isn’t necessary to significantly improve a cybersecurity program. Starting small and building momentum gives threat-hunting teams a chance to earn some early success and learn more about how to conduct threat hunts.

There is no prescribed approach to getting started, but a data-driven approach helps provide some guidance. In my experience, the most effective place to start is an assessment of available sources of evidence such as running processes and network metadata for availability, timeliness, and quality. By understanding your data, security teams can understand which threat-hunting actions are possible in their environment. They can also learn where they need to make visibility improvements to be able to do more.
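One toy way to make that assessment concrete is to score each candidate hunt by whether the telemetry it depends on is actually being collected. In the Python sketch below, the technique IDs are real MITRE ATT&CK entries, but the required-source mappings and the inventory are invented for illustration:

```python
# Toy readiness check: start hunting where the evidence already exists.
# The technique IDs are real MITRE ATT&CK entries; the required-source
# mappings and the inventory below are invented for illustration.
AVAILABLE_SOURCES = {"process creation", "network metadata"}

HUNTS = {
    "T1059 Command and Scripting Interpreter": {"process creation"},
    "T1021 Remote Services": {"network metadata", "authentication logs"},
    "T1003 OS Credential Dumping": {"process creation", "memory access"},
}

for technique, needed in HUNTS.items():
    missing = sorted(needed - AVAILABLE_SOURCES)
    if missing:
        print(f"{technique}: blocked, missing {', '.join(missing)}")
    else:
        print(f"{technique}: ready to hunt")
```

Even a simple tally like this tells a team which hunts are possible today and which visibility gaps to close first.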

Each new hunt will become easier for the defenders as they get a better understanding of the processes. They will also improve their understanding of their operating environment. From there, they can expand the scope of adversary behaviors they’re looking for to find more malicious activity and prepare a defense for a wider variety of attacks.

Important to the continuing support of this program is active, quantifiable measurement. Being able to show IT cybersecurity and organizational leaders that threat hunts are having a measurable impact on the team’s ability to stop breaches helps them justify continuing to provide or even increase budgets and other resources.

Qualitative Assessments
The ATT&CK matrix can help by giving cybersecurity teams a concrete pin on which to hang their results. Using qualitative scales for assessment — such as “low,” “medium,” or “high” — leaves organizations guessing about whether adversaries are active in their environment. But those who adopt a quantitative scale can point to entire categories or individual techniques where attacks weren’t active or where they were prevented.

This continuing stream of information about the success of the threat-hunting program as it expands will win friends with the relevant decision makers. It is also important for cybersecurity teams to have a champion in the organization to enable continued success.

Cybersecurity teams are sometimes seen as the “bad guys” of the IT department because the controls they impose often make other employees’ jobs harder to do. Having a champion who can demonstrate the unseen benefits of a cybersecurity program will reduce the amount of “political capital” executives need to spend to maintain an effective threat-hunting program. Organizations that are struggling to make progress with threat hunting and detection may be trying to take on too much too soon, failing to quantify their results, or expending more political capital than they earn.

Threat hunting may seem like a daunting task, and the bigger the enterprise to defend, the more daunting it seems. Starting small against the most common hacker techniques and building steadily will make each successive hunt a little easier. (You can click here for further tips on setting up a threat-hunting program.) Some of the most common techniques and data sources for threat hunting are covered in this recent talk at BSides Charm 2018. Knowing the hacker playbook and using it against the attackers makes it easier to stop threats before they make the company another breach headline.


Devon Kerr is a principal researcher at Endgame, focusing on detection and response technologies. Formerly a Mandiant incident response and remediation lead, Devon has over 6 years of experience in security professional services where he has worked with clients in a nearly …

Article source: https://www.darkreading.com/threat-intelligence/threat-hunting-rethinking-needle-in-a-haystack-security-defenses/a/d-id/1332341?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Ways to Better Secure Electronic Health Records

Healthcare data is a prime target for hackers. What can healthcare organizations do to better protect all of that sensitive information?

January was not a particularly bad month for electronic health record (EHR) breaches. Still, in just those 31 days, nearly a half-million records were exposed to unauthorized viewers.

According to the HIPAA Journal, the top four breaches in January were all the result of hacking or an IT incident, exposing more than 387,000 records. While these numbers pale in comparison to the tens of millions of records involved in recent credit bureau and social media hacks, the sensitive nature of the records amplifies the damage done.

What’s more, the number of records lost to hacking or IT incidents has steadily increased year over year since 2009 (though the authors of the “January 2018 Healthcare Data Breach Report” note that at least some of that increase could be due to a lack of reporting in earlier years).

The report points to several reasons why healthcare breaches continue to occur. First, they’re valuable records that have currency with criminals and nation-state actors. Next, healthcare organizations come in a dazzling array of sizes, with an equivalent array of IT security skill levels at their service. Finally, almost every step along the records trail involves a human, and humans are infamously fallible. So what’s a conscientious organization to do?

In this article, we look at seven ways to better secure this sensitive healthcare data. This is far from an exhaustive list, but each one is something that an organization can reasonably do to reduce its risk. Of note, many of these points can be applied to any organization with sensitive data to protect.

Have you found other steps worth taking to protect sensitive data? What have you tried and found effective? Let us know in the comments section, below.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/application-security/7-ways-to-better-secure-electronic-health-records/d/d-id/1332365?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Report Shows Pen Testers Usually Win

Pen testers are successful most of the time, and it’s not all about stolen credentials, according to a new report based on hundreds of tests.

Penetration testers are able to gain complete administrative control of their target network 67% of the time when that function is within the scope of the test.

That’s just one stat from the Under the Hoodie 2018 report from the pen testing team of Rapid7 Global Consulting. Taken from 268 engagements from September 2017 through June 2018, the report looks at both the broad conclusions and security details from a wide variety of industries and company sizes.

Tod Beardsley, Rapid7’s research director, says that the broad conclusions are not surprising. “I wouldn’t say shocking but it’s sobering and it tells me that edge defenses are great, but we really still have a ton of work to do when it comes to securing that internal network,” he says.

When it comes to securing the network, two points of vulnerability stood out: Software and credentials. Software vulnerabilities are a reliable point of entry for intruders, and that reliability is growing. According to the report, there was “a significant increase in the rate that software vulnerabilities are exploited in order to gain control over a critical networked resource.”

In fact, only 16% of the organizations the group tested did not have an exploitable vulnerability, down from 32% of organizations included in last year’s report.

The pen testers weren’t relying on finding novel software exploits; in only one engagement was a zero-day exploit used, and that was in conjunction with other, previously known vulnerabilities. Virtually every vulnerability exploited was well documented, including SMB relay, broadcast name resolution, cross-site scripting, and SQL injection.

User credentials are the next most exploitable point of entry, with at least one credential captured in more than half (53%) of all the tests, and testers reported that simple password-guessing was the most effective method of gaining those credentials. The guessing game is assisted by users who include the company name (5%), “Password” (3%), or the season (1.4%) in their password – a password that will be 10 characters or shorter 84% of the time.
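Those giveaway patterns are easy to screen for on the defensive side. A minimal Python sketch of a policy check that flags exactly the habits Rapid7’s testers exploited (the company name is a placeholder, and the rules are only the ones named above, not a complete password policy):

```python
SEASONS = ("spring", "summer", "autumn", "fall", "winter")

def weak_password_flags(password: str, company: str = "acme") -> list:
    """Flag the giveaways pen testers guess first: the company name,
    the word 'password', a season, or 10 characters or fewer.
    The company name is a placeholder."""
    pw = password.lower()
    flags = []
    if company.lower() in pw:
        flags.append("contains company name")
    if "password" in pw:
        flags.append("contains 'password'")
    if any(season in pw for season in SEASONS):
        flags.append("contains a season")
    if len(password) <= 10:
        flags.append("10 characters or shorter")
    return flags

print(weak_password_flags("Summer2018"))
# -> ['contains a season', '10 characters or shorter']
```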

Beardsley says that he was surprised by credentials not being a larger factor in exploits. “I think the reason why is that organizations don’t really tend to sign up for that kind of test,” he says. “They really do want to just focus on vulnerabilities and network configuration issues and they’re not super interested in the credential part of it.”

Helping clients understand what needs to be tested is a huge part of a successful pen test. “A lot of the work in pen testing is convincing the client of what they need,” Beardsley says. “It’s real tempting to scope out your past so that the testers won’t find your dirty laundry,” he explains.

But cutting corners isn’t the right approach: “If you just want me to tell you what you want to hear, that’s not going to get you very far from a practical point of view,” he explains.

One of the practical issues raised in the tests is whether or not the target organization knows that it has been breached. In 61% of the cases, the organization did not detect the breach within the time that the test covered.

But there was some good news for defenders: Some 22% of intrusions were found within a day of their occurrence. Overall, though, Beardsley says, “I do think that your post-breach detection capability is still pretty amateur today.”

Beardsley was surprised that the report shows little difference in detection capabilities between large enterprises and small enterprises. “It continues to be a little concerning that large enterprises don’t really outperform small enterprises when it comes to that detection part,” he says.

He expected, he says, for the resources of the larger organizations to allow them to be better at detecting breaches, but for all organizations the basic state is the same: If an intruder is not detected within the first day, then it’s likely that they will be in your network long enough to do serious damage.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/threat-intelligence/new-report-shows-pen-testers-usually-win/d/d-id/1332368?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

No big deal… Kremlin hackers ‘jumped air-gapped networks’ to pwn US power utilities

The US Department of Homeland Security is once again accusing Russian government hackers of penetrating America’s critical infrastructure.

Uncle Sam’s finest reckon Moscow’s agents managed to infiltrate computer networks within US power utilities – to the point where the miscreants could have virtually pressed the off switch in control rooms, yanked the plug on the Yanks, and plunged America into darkness.

The hackers, dubbed Dragonfly and Energetic Bear, struck in the spring of 2016, and continued throughout 2017 and into 2018, even invading air-gapped networks, it is claimed.

This seemingly Hollywood screenplay emerged on Monday in the pages of the Wall Street Journal (paywalled) which spoke to Homeland Security officials on the record.

The Energetic Bear crew – first fingered in 2014 by Crowdstrike – was inside “hundreds” of power grid control rooms by last year, it is claimed. Indeed, since 2014, power companies have been warned by Homeland Security to be on the lookout for state-backed snoops.


The Russians hacked into the utilities’ equipment vendors and suppliers by spear-phishing staff for their login credentials or installing malware on their machines via boobytrapped webpages, it is alleged.

The miscreants then leveraged their position within these vendors to infiltrate the utilities and squeeze into the isolated air-gapped networks in control rooms, it is further alleged. The hacker crew also swiped confidential internal information and blueprints to learn how American power plants and the grid system work.

We’re told, and can well believe, that the equipment makers and suppliers have special access into the utilities’ networks in order to provide remote around-the-clock support and patch deployment – access that, it seems, turned into a handy conduit for Kremlin spies.

The attacks are believed to be ongoing, and some utilities may not yet be aware they’ve been pwned, we were warned. It is feared the stolen information, as well as these early intrusions, could be part of a much larger looming assault.

“They got to the point where they could have thrown switches,” Jonathan Homer, chief of industrial control system analysis for Homeland Security, told the paper.

The Register will watch developments; however, some caution is probably a useful prescription at this stage.

After all, an attack on the American grid reported in late 2016 turned out to be far less than was first feared: it was one infected laptop in a relatively small operator, Burlington Electric, and the attack didn’t reach control systems.

On the other hand, the Kremlin has developed a keen interest in America’s computer systems. The Putin government has denied any wrongdoing. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/07/24/russia_us_energy_grid_hackers/