Yet another Apple password leak – how to avoid it

Mac forensics guru Sarah Edwards, who blogs under the cool nickname of mac4n6 (say it out aloud slowly and deliberately), recently wrote about a rather worrying Mac password problem.

Another Mac password problem, that is – or, to be more precise, yet another password problem.

Apple has ended up with password egg on its face twice before since the release of macOS 10.13 (High Sierra).

First was the “password plaintext stored as password hint” bug, where macOS used your password as your password hint, so that clicking the [Show Hint] button after plugging in a removable drive would immediately reveal the actual password instead.

Second was the “blank root password” hole, whereby trying to log on as root with a blank root password would inadvertently enable the root account, and leave it enabled with no password.

Mac4n6’s new password bug isn’t quite that serious, but it’s still a bad look for Apple: under some conditions, the password you choose when creating an encrypted disk ends up written into the system log.

That sort of behaviour is a serious no-no: some sorts of data just aren’t meant to be stored.

The 3-digit CVV code on the back of your credit card is one example – you’re only supposed to use CVVs to validate individual transactions, and you’re not allowed to store them for later.

And passwords are another example – a password should never be stored in its raw, plaintext form, so that it’s never left lying around where someone else might stumble upon it, such as in the system logs.

Apple’s leaky logging

Here’s what we did and what happened, based on what we learned from Mac4n6’s article.

We plugged in a blank USB drive that macOS wouldn’t recognise or mount. (To blank the disk in the first place, we used macOS’s handy but potentially dangerous diskutil zeroDisk utility.)
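
If you want to reproduce that blank-disk starting point, the commands look something like the sketch below – the disk2 identifier is only an example, so check it carefully first, because zeroDisk irreversibly destroys everything on the device you name:

$ diskutil list external      # find the USB stick's device identifier (we assume disk2 here)
$ diskutil zeroDisk disk2     # overwrite the whole device with zeros - there is no undo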

After a short while, macOS popped up its familiar “disk not readable” popup, at which we clicked [Initialize...] to launch the Mac Disk Utility app:

We used the Erase option to initialise a blank volume using APFS, Apple’s new-and-groovy filing system introduced in macOS 10.13.

We created an APFS (Encrypted) volume called TEST, entering the password text when requested:

Once the new APFS volume was created and visible in Disk Utility, we reviewed the system log for recent invocations of the newfs system command.

On Macs, newfs is much like mkfs on Linux and FORMAT on Windows – the low-level command used to prepare a new volume for use with a specific filing system:

$ log stream --info --predicate 'eventMessage contains "newfs"'
Filtering the log data using "eventMessage CONTAINS "newfs""
Timestamp      .....    Command            
2018-03-28 10:59:23     /System/Library/Filesystems/hfs.fs/Contents/Resources/newfs_hfs -J -v TEST /dev/rdisk2s1 .
2018-03-28 10:59:35     /System/Library/Filesystems/apfs.fs/Contents/Resources/newfs_apfs -C disk2s1 .

Curiously, Disk Utility first created an old-school HFS volume called TEST using the dedicated utility newfs_hfs – an apparently redundant operation considering what happens next, presumably a side-effect of partitioning the uninitialised USB device for use.

Next, Disk Utility used newfs_apfs to convert the drive into a so-called APFS container – that’s like an APFS “disk within a disk” that can be divided up further.

However, the final newfs_apfs command by which the APFS encrypted volume itself gets created wasn’t logged, presumably as a security measure to prevent the password ending up in the logs.

Reformatting an APFS drive

Unfortunately, if you create an encrypted volume on a USB device that already contains an APFS container – for example, a device you’ve initialised or reinitialised since macOS 10.13 came out – then no such logging precautions are taken.

We went back into Disk Utility, clicked on the already-mounted volume TEST, and once again used Erase to create an APFS (Encrypted) volume with a password.

This is a handy way to reformat a USB drive and choose a new password at the same time:

This time, the system log revealed just one invocation of newfs, with the command line parameters used to reformat the existing APFS volume, including the plaintext of the password we just typed in:

$ log stream --info --predicate 'eventMessage contains "newfs"'
Filtering the log data using "eventMessage CONTAINS "newfs""
Timestamp      .....    Command            
2018-03-28 11:01:23     /System/Library/Filesystems/apfs.fs/Contents/Resources/newfs_apfs -i -E -S pa55word -v TEST disk3s1 .

According to Mac4n6, this “leak the password to the system log” behaviour always happens in macOS 10.13 and 10.13.1, even when you’re initialising a brand new device, so it’s a good guess that Apple made some changes in 10.13.2 in order to be more cautious with the plaintext password…

…but it’s also a good guess that Apple didn’t identify all possible Disk Utility workflows in which passwords get logged, and thus didn’t get around to fixing this one.

What to do?

Assuming you have your Mac’s built-in disk encrypted – you really should! – an attacker can’t just steal your computer and read off passwords for all your other devices – they’d have to know your Mac password first.

But a quick-fingered crook (or an ill-intentioned colleague) with access to your unlocked Mac, even for just a few seconds, could use this bug to try to recover any disk passwords you’ve chosen recently.

Fortunately, you can change the password on existing APFS volumes in a way that doesn’t (as far as we can see) leave any password traces in the log.

Mount your encrypted disk and figure out its macOS device name using diskutil apfs list:
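
The output below is trimmed and uses illustrative identifiers – on your Mac the container and volume numbers will almost certainly differ – but the line to look for is the encrypted volume’s device name (here, disk3s1):

$ diskutil apfs list
APFS Container (1 found)
|
+-- Container disk3
    APFS Container Reference:     disk3
    ...
    +-> Volume disk3s1
        Name:                     TEST
        FileVault:                Yes (Unlocked)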

Once you know the drive’s device name, you can change the APFS password directly, without reformatting it, with diskutil apfs changePassphrase, like this:
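
Here’s a sketch of the sort of session to expect, again using disk3s1 as an example identifier (the exact prompt wording may vary between macOS versions); running the command with no passphrase options means it prompts you interactively, so neither the old nor the new password ends up on the command line or in your shell history:

$ diskutil apfs changePassphrase disk3s1
Old passphrase: ********
New passphrase: ********
Repeat new passphrase: ********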

If you’re worried about passwords left behind in your current system logs, you can purge the logs with the log erase command:

$ sudo log erase --all
Password: ********
Deleted selected logs
$ 

(You need to use sudo to promote yourself to a system administrator, to prevent just anyone deleting your logs.)

Where next?

This isn’t a show-stopping bug, and it’s easy to work around it if you’re happy using the command line, but it’s still a bad look for Apple.

We’re guessing it will quietly get fixed – for good, this time – in the not-too-distant future.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/01JlRDH_l5k/

Unmasking Monero: stripping the currency’s privacy protection

Monero is a cryptocurrency designed for privacy, promising “all the benefits of a decentralized cryptocurrency, without any of the typical privacy concessions”.

It’s where Dark Web market AlphaBay, at the time the most popular site of its kind, looked in 2016 when it wanted to adopt a cryptocurrency that offered users more protection than Bitcoin.

It’s also where the authors of WannaCry, the infamous ransomware that went global in May 2017, turned when they wanted to transform their ill-gotten ransoms into something harder to trace.

But recently updated research on traceability in the Monero blockchain suggests that the currency’s privacy protections can be weakened, and in many cases stripped away entirely, leaving users exposed.

The researchers detail a pair of attacks, one that works on transactions up to the beginning of 2017 and one that still works today.

In this article we’ll examine the first of those attacks, but we’ll begin by looking at how Monero attempts to avoid the pitfalls of Bitcoin.

Exposing Bitcoin users

Bitcoin and Monero are both cryptocurrencies that rely on a blockchain, a cryptographically protected, decentralised ledger of transactions.

The robustness of each relies, in part, on transparency: there are thousands of copies of both the Bitcoin and Monero blockchains in existence and every copy carefully details every single transaction ever made in that currency.

Changing the history enshrined in those blockchains is effectively impossible. If you’ve ever spent a bitcoin or a monero then the proof that it happened is etched indelibly into that currency’s blockchain, forever.

In the Bitcoin blockchain each transaction points to a previous transaction, making it possible to see what any given Bitcoin wallet (and by extension, any given Bitcoin wallet owner) has spent and received.

That makes Bitcoin users pseudonymous – their privacy is protected by one or more false names, their wallet addresses.

Bitcoin users can be exposed if any one of a wallet’s transactions can be linked to a real identity.

If a Bitcoin user pays for something at an online market that requires personal information, such as a delivery address, then that one single transaction creates a link between the user’s real identity and every other transaction they’ve made with that Bitcoin wallet.

A similar link is created if a Bitcoin user signs up to an online exchange that requires an ID to open an account.

Even usernames can be used to unmask Bitcoin users if they’re reused across, say, a Dark Web site where bitcoins have been spent and a public site like Reddit or GitHub that requires a login.

Monero attempts to make users fully anonymous by obscuring the links between transactions. Unmasking the person behind a single transaction does not unmask their other transactions too.

It does this using decoy coins, known as mixins.

Whereas the Bitcoin paper trail clearly identifies the coin being spent in every transaction, Monero identifies a number of coins in every transaction, one real one and at least four mixins.

Anyone attempting to piece together a user’s transaction history from the Monero blockchain will find themselves running down blind alleyways.

However, if an attacker can find a way to tell the real coins from the decoys then Monero users are no better off than Bitcoin users and just as vulnerable to the tactics used to expose them.

And that’s exactly what the researchers did.

Exposing Monero users

Just like any software, cryptocurrencies can adapt and change over time. However, while the rules that govern transactions can evolve, old transactions made under older rules (including rules their writers may come to regret) cannot be erased.

There is a fee for adding mixins to a transaction and until a couple of years ago adding them wasn’t mandatory.

This created an incentive for users who weren’t particularly interested in Monero’s privacy protections to set them aside.

Because of this, at the time the research was conducted, about two thirds of the transactions in the Monero blockchain had been made without any mixins. These transactions can be linked to previous transactions in the same way as with Bitcoin transactions.

The people who did this didn’t care about their own anonymity enough to pay for mixins but inadvertently weakened the protection of people who did (my emphasis).

0-mixin transactions not only provide no privacy to the users that created them, but also present a privacy hazard if other users include the provably-spent outputs as mixins in other transactions. When the Monero client chooses mixins, it does not take into account whether the potential mixins have already been spent.

In other words, the potential pool of decoys includes coins that an attacker can prove have been spent elsewhere.

That’s a problem because if you’re presented with a Monero transaction that contains a number of coins (the ‘real’ one and a number of mixin phantoms) and you know for sure that some of the coins have been spent before, then they cannot be the real coin.

Given their prevalence, these zero-mixin transactions are actually very likely to be deployed as mixins in other transactions.

So, the researchers began by removing all of the decoys that they could prove had already been spent, stripping the camouflage from a number of previously obscured transactions.

Once the decoys had gone, these transactions were not only provably linked to previous transactions but also no longer useful as mixins either, which exposed another layer of transactions, which exposed another, which exposed another and so on.

According to the researchers, this recursive “chain-reaction analysis” can be used to remove all of the decoys from two thirds of the transactions that used them, prior to 2017.

We find that among Monero transaction inputs with one or more mixins, 63% of these are deducible, i.e. we can irrefutably identify the prior TXO that they spend.
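
To make the chain reaction concrete, here’s a minimal, illustrative Python sketch of the deduction loop – a toy model with invented data structures, not the researchers’ actual code. Each transaction input is modelled as the set of candidate coins (TXOs) it might be spending; whenever all but one candidate is already provably spent, the survivor must be the real coin, and that new fact feeds the next pass:

def chain_reaction(inputs):
    """inputs: a list of sets, one per transaction input, each holding candidate TXO ids."""
    provably_spent = set()
    changed = True
    while changed:
        changed = False
        for candidates in inputs:
            unspent = candidates - provably_spent
            if len(unspent) == 1:             # every other candidate is a known decoy
                real = unspent.pop()
                if real not in provably_spent:
                    provably_spent.add(real)
                    changed = True
    return provably_spent

# A 0-mixin input gives away txo1; that deduction unmasks txo2, which in turn unmasks txo3.
example_inputs = [{"txo1"}, {"txo1", "txo2"}, {"txo2", "txo3"}]
print(chain_reaction(example_inputs))         # {'txo1', 'txo2', 'txo3'}

In the real analysis the candidate sets come straight from the blockchain and the loop runs over millions of inputs, but the deduction rule is exactly the one sketched here.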

Two changes, the last in early 2017, prevent this kind of attack on more recent transactions.

From January 2016 all new Monero transactions required a minimum of two mixins. That was followed a year later by a hard fork that introduced a new type of transaction called RingCT that can only contain other RingCT transactions as mixins.

Since all RingCT transactions exist after the two mixin minimum was introduced, they form a separate pool of transactions without a zero-mixin foothold.

Without that foothold, the chain-reaction analysis doesn’t work.

That’s good news for people who are new to Monero (although the research details another, less effective, attack for them that we’ll cover in a later article) but cold comfort to anyone who used it for its anonymisation features prior to 10 January 2017, such as buyers on AlphaBay:

Users who made privacy-sensitive transactions prior to February 2017 are at significant risk of post hoc deanonymization.

The research shows that the transparency and immutability that make blockchains trustworthy may also leave users vulnerable to retrospective action.

The transactions inside them are artefacts frozen in time according to rules that were considered good enough or strong enough at the time.

Immune to correction, their protections have to survive the cycles of Moore’s Law, and as yet unseen advances in technology, techniques and research.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PRc6aNfzzuk/

Privacy activists to UK plod: Wanna slurp folks’ phone records? Come back with a warrant

Cops should need a search warrant to slurp information from peoples’ phones, Privacy International has said as it calls for a government review into police data-extraction tech.

The civil rights group has published a report into the use of such kit by forces across the UK, finding that more than half are already using it, and a further 17 per cent have trialled or plan to trial the tech.

However, Privacy International said the use of data-extraction technologies is taking place in the “absence of national guidance and paucity of local policy” with “conflicting views between police forces as to the legal basis to search, download and store personal data”.

It said this lack of clear independent regulation “creates a serious risk of abuse and discriminatory practices”, while the public have little knowledge of the activities.

The group is calling for an immediate review into the practice by the Home Office and College of Policing, along with independent oversight into compliance, clear cybersecurity standards for data storage, access and deletion and better information for the public about their rights.

Moreover, the group argued that gathering information contained by a phone – which can include location, medical and banking data as well as call and text records – should only happen if the force is issued with a warrant from a court.

Traditional search methods that don’t require a warrant “are wholly inappropriate for such a deeply intrusive search”.

The report also indicated a lack of clear local and national guidance, calling for definitive rules to be set for forces using the tech.

Privacy International sent Freedom of Information requests to 47 forces across the UK, to which all but five responded.

It found that for local level guidance, six forces said it was being developed, while others said they relied on the police’s Digital Investigation and Intelligence guide – and two said they rely on the supplier’s manuals.

“We are concerned that some of the processes or procedures that do exist are written by the technology manufacturers, not by the police, thus abrogating responsibility for designing policing procedures to private companies,” the report stated.

A similar confusion exists at the national level: the report notes that the Lancashire force said national guidance came from the National Policing Improvement Agency. However, this agency has since been replaced by the College of Policing, which said that it doesn’t provide any such guidance.

Privacy International also raised concerns about the lawful basis police rely on for extracting the evidence.

The National Police Chiefs’ Council has said the use of self-service kiosks (SSKs) – in-house kit that cops can use to extract information – is governed by section 20 of the Police and Criminal Evidence Act (PACE), which gives police the “power to require any information stored in any electronic form”.

But when Privacy International asked forces what the legal basis was for extracting data, not all responses pointed to PACE. For instance, Derbyshire, Gwent, Norfolk, Suffolk and West Yorkshire Police identified no clear legal basis.

The group questioned whether it was appropriate to rely on legislation written before mobile phone technology effectively turned devices into “a pocket surveillance tool”.

Among its other recommendations, Privacy International called for further assessment of whether the use of such intrusive technology should be limited to serious crimes. At the moment it is also used in low-level crimes.

It also warned against “replicating or exacerbating” existing discrimination against minority groups in the criminal justice system.

“It is disturbing that the police have such a highly draconian power, operating in secret, without any accountability to the public,” said Millie Graham Wood, solicitor at Privacy International.

“Given the serious problems we still face in the UK with discriminatory policing, we need to urgently address how this new frontier of policing might be disproportionately and unfairly impacting on minority ethnic groups, political demonstrators, environmental activists and many other groups that can find themselves in the crosshairs of the police.”

The calls come as government takes heat over its other policing record systems, such as the retention of images in its growing custody image database and its reluctance to put the Automatic Number Plate Recognition database on a statutory footing. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/28/warrant_to_extract_phone_data_police_privacy_international/

Most FTSE 100 boards kept in the dark about cyber resilience plans

Only one in five FTSE 100 companies disclose testing of online business protection plans.

Most (57 per cent) of FTSE 100 companies talk about their overall crisis management, contingency or disaster recovery plans within their annual reports but few in comparison mention cybersecurity. Just 21 per cent of UK Blue Chip businesses regularly share security updates with the board at least twice a year, according to a study by management consultancy Deloitte.

Cyber risk testing would include services such as “ethical hacking” (AKA penetration testing) to find vulnerabilities in their IT systems. Security testing will become even more important with the advent of the EU’s General Data Protection Regulation, due to swing into effect in May, under which data breaches in the UK and other member states will be punished with much tougher financial sanctions.

Phill Everson, head of cyber risk services at Deloitte UK, said: “Would-be hackers look for weaknesses in a system to gain access, so testing remains vital in ensuring strong cyber resilience. The 20 per cent of companies that disclosed testing for these vulnerabilities in our analysis demonstrate to investors that the company has ways to continually and proactively test for flaws, whilst also showing commitment in fixing them if identified.”

Rob Norris, VP head of enterprise and cybersecurity EMEIA at Fujitsu, argued that a reluctance to reveal cybersecurity plans can often be explained.

“Whilst the forthcoming GPDR will require organisations be honest when a breach takes place, forcing companies to disclose details of specific cyber risk testing may be more difficult as it can allow hackers to understand what defences a company has in place.

“Companies need to ensure they are at the very least reporting openly and honestly about their cyber risk testing to the board.”

Brian Honan, founder and head of Ireland’s first CSIRT and special adviser on internet security to Europol, agreed that (by itself) firms not disclosing their security testing isn’t much of a concern.

“It is quite common for companies not to disclose their testing,” Honan told El Reg. “They may fear the info can be used by nefarious actors for their needs or it may draw negative public attention.”

But infosec veteran Stephen Bonner said: “Testing is essential, disclosure is a choice, but increasingly firms realise transparency breeds trust. And soon not disclosing may be indistinguishable from not testing.”

Despite the small proportion of FTSE 100 companies providing security updates to the board, 89 per cent recognise cyber as a “principal risk” and identified a number of consequences in the event of a breach. Disruption to business and operations was of greatest concern but damage to reputation and financial loss occasioned by a breach also featured as a worry.

Deloitte found 8 per cent of companies had a member of the board with specialist technology or cybersecurity experience, up from 5 per cent last year. The figure is matched by the number of companies that also disclose having a chief information security officer (CISO) in the executive team this year.

The much-publicised skills gap may affect the ability of large companies to increase cybersecurity expertise, according to Fujitsu’s Norris. “Many organisations will be using cyber threat intelligence (CTI) as an early warning system to help identify and block potential threats before they escalate and become problems,” he said. “But with the skills gap affecting IT departments in particular, the board should be made aware if their organisation is in need of additional support, and this can only come from regular security updates.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/28/cyber_resilience_planning_ftse_100/

Internet of insecure Things: Software still riddled with security holes

An audit of the security of IoT mobile applications available on official stores has found that tech to safeguard the world of connected things remains outstandingly mediocre.

Pradeo Security put a representative sample of 100 iOS and Android applications developed to manage connected objects (heaters, lights, door-locks, baby monitors, CCTV etc) through their paces.

Researchers at the mobile security firm found that around one in seven (15 per cent) applications sourced from the Google Play and Apple App Store were vulnerable to takeover. Hijacking was a risk because these apps were found to contain flaws that lend themselves to man-in-the-middle attacks.

Four in five of the tested applications carry vulnerabilities, with an average of 15 per application.

Around one in 12 (8 per cent) of applications phoned home or otherwise connected to uncertified servers. “Among these, some [certificates] have expired and are available for sale. Anyone buying them could access all the data they receive,” Pradeo warns.

Pradeo’s team also discovered that the vast majority of the apps leaked the data they processed. Failings in this area were many and varied.

  • Application file content: 81 per cent of applications
  • Hardware information (device manufacturer, commercial name, battery status…): 73 per cent
  • Device information (OS version number…): 73 per cent
  • Temporary files: 38 per cent
  • Phone network information (service provider, country code…): 27 per cent
  • Video and audio records: 19 per cent
  • Files coming from app static data: 19 per cent
  • Geolocation: 12 per cent
  • Network information (IP address, 2D address, Wi-Fi connection state): 12 per cent
  • Device identifiers (IMEI): 8 per cent

Pradeo Security said it had notified the vendors involved about the security problems it uncovered in their kit. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/03/28/iot_software_still_insecure/

Getting Ahead of Internet of Things Security in the Enterprise

In anticipation of an IoT-centric future, CISOs must be rigorous in shoring up defenses that provide real-time insights across all network access points.

One of the prevailing critiques of the Internet of Things (IoT) has been targeted at manufacturers who only consider cybersecurity an afterthought. As a result, the burden to protect these devices from massive botnet attacks and hacking attempts generally falls on information security teams and consumers themselves, who are rushing to purchase the latest gadgets – from kids’ toys to smart thermostats – at a faster pace than manufacturers can defend them. 

This is especially worrisome as specialized IoT devices are adopted in specific industries and sectors. Consider the potentially catastrophic consequences if IoT implants used in healthcare are compromised, or IoT tools tracking safety conditions in a factory are rendered nonfunctional by a DDoS attack.

In an attempt to turn the tide on rampant security flaws surrounding IoT in almost every context, the United Kingdom’s Department for Culture Media and Sport – in conjunction with the country’s National Cyber Security Centre – published the “Secure By Design” report, which outlines 13 directives that manufacturers should consider when designing connected products.

IoT Innovation Versus IoT Security
The goal of the guidance is to throttle – only slightly – the rapid pace of innovation with IoT to protect industries and consumers that are already highly vulnerable to cybersecurity threats. It’s an early-stage attempt to regulate the endpoint security on IoT products in the same way the FDA holds food producers to standards of health and safety stateside, barring unfit products from store shelves if they don’t pass muster. The problem here, however, is that all of the guidance is optional, and none of the standards outlined in the report can be enforced.

That said, despite the best early and admirable efforts of the UK government to beef up device-level security, network and information security teams are really going to have to lead the charge in keeping user data protected as the IoT continues to proliferate. In anticipation of an IoT-centric future, chief information security officers will need to make sure that their current network architecture and infrastructure is streamlined and functional to accommodate the larger cybersecurity burdens to come.

Take Stock of All “Periphery” Devices
For starters, it’s important for CISOs to understand the full scope of their organization’s connected footprint. It may sound easy enough, but there are many periphery technologies, multifunction printer/copier/fax machines, for instance, that are less scrutinized than the smart phones or laptops that get the most attention.

Tying up all the loose ends and ensuring that an older fax machine, for instance, enjoys the same protections and feature parity from the security tools servicing tablet computers is essential. This will make it easier to tailor protections for the lower-bandwidth, beacon-sensor communications that the network will need to support in tomorrow’s wider-scale IoT rollouts.

Assign Permissions to Employees and Assets
Network access control (NAC) schemes need to be drafted that anticipate an IoT-heavy future, but with an eye to the past. For instance, controls must be configured that make sure that unrecognized or unauthorized devices aren’t using access to an oft-forgotten printer/copier/fax as a pathway to more valuable network data. This requires teams to not only reference device and user registries – and to update them regularly – when mapping out NAC architectures, but to use security tools that provide real-time traffic insights across all network access points.

The biggest challenge to network security in any context is mapping just how large the scope of connected devices already in use really is. Not only are consumers bringing their own IoT gadgets into the office – Amazon Echos in the C-Suite, for instance, or smart picture frames – but the peripheral technology found in almost every office – security cameras, smart TVs in the lobby – are prime targets by hackers because they often get overlooked.

Until manufacturers can catch up with device-level defenses, IoT cybersecurity will continue to fall on the shoulders of network and security teams, both of which must be rigorous in scrutinizing all network defenses.

Simon Eappariello is the senior vice president of product and engineering, EMIA at iboss. He has a long history working in cybersecurity, networking, and information technology for global organizations in both the private and public sectors. Simon heads up iboss engineering … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/iboss/getting-ahead-of-internet-of-things-security-in-the-enterprise/a/d-id/1331384?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Kaspersky Lab Open-Sources its Threat-Hunting Tool

‘KLara’ was built to speed up and automate the process of identifying malware samples.

Kaspersky Lab is now offering its homegrown threat-hunting application KLara as an open-source tool, the company said today.

KLara is a YARA rules-based malware scanner that runs multiple YARA identifier rules across multiple databases simultaneously as a way to speed up the process of malware identification. Kaspersky Lab said it created the tool as a distributed system for YARA searches that includes researchers’ own malware collections as well as others. 

“Detecting cyberthreats requires tools and systems that can hunt effectively for malware – particularly when tracking advanced targeted threat campaigns through months or even years of activity,” said Dan Demeter, security researcher at Kaspersky Lab and one of the creators of KLara. “We created KLara to help us hunt threats better and faster,” he said, adding that the company is now sharing it with the security community.

The open source tool is available via GitHub.
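
KLara itself is the distributed search layer; the actual detection logic lives in the YARA rules it runs. For readers who haven’t met YARA before, here’s a minimal, hypothetical sketch of a rule and a single-file scan using the open-source yara-python bindings – the rule name, marker string and file path are invented for illustration, and this isn’t KLara’s own code:

# Compile one toy YARA rule and run it against a single file.
# The marker string and path below are placeholders, not real indicators.
import yara

rule_source = r'''
rule example_marker
{
    strings:
        $marker = "example-malware-marker"   // hypothetical byte pattern
    condition:
        $marker
}
'''

rules = yara.compile(source=rule_source)
for match in rules.match('/path/to/suspect_sample.bin'):
    print('Matched rule:', match.rule)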

 

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/perimeter/kaspersky-lab-open-sources-its-threat-hunting-tool/d/d-id/1331388?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Automating Ethics for Cybersecurity

Having a code of ethics and enforcing it are two different things.

Doctors, accountants, and lawyers all operate under a code of ethics, but what about security professionals? Shouldn’t they, too? While cybersecurity breaches don’t necessarily have the life-and-death consequences of, say, brain surgery, the more vicious cyberattacks can and do cripple livelihoods, often en masse. Witness last year’s WannaCry ransomware attack, the Equifax breach, and the more recent processor flaws Spectre and Meltdown.

A number of IT security organizations do have codes of ethics — SANS, ISSA, and GIAC, for example. They spell out the do’s and don’ts that should already be inscribed in the heart of every security professional. Things like, I will not advance private interests at the expense of end users, colleagues, or my employer; I will not abuse my power; and I will obtain permission before probing systems on a network for vulnerabilities.

But having a code of ethics and enforcing it are two different things. Some organizations may have security pros sign off on such frameworks, but this is little more than a move that allows employers to prosecute the signer if she later abuses her power or simply makes a mistake. And mistakes do happen. A novice or unskilled IT operator, like a novice or unskilled plumber, can screw up. Badly.

Don’t Regulate — Automate
This question of enforcement is a tricky one. In a recent op-ed in The New York Times, cybersecurity executive Nathaniel Fick compares cybersecurity today to accounting in the pre-Enron era. Just as the Enron scandal inspired higher standards for corporate disclosure with the Sarbanes-Oxley Act, Fick proposes that cybersecurity breaches like WannaCry and Equifax should spur increased regulation of cybersecurity practices by the federal government.

While I applaud the intent here, governmental intervention does not solve the enforcement issue. Regulations can be ignored, subverted, forgotten. Even if enforced after the fact — by an army of auditors, for example — damage has already been done, and victims may not necessarily be made whole. On a personal level, I’m not much of a fan of interventionist approaches in general, and I will choose non-intervention every time. Instead of regulating, how about automating?

One of the best ways to implement an ethical framework is to automate it. It’s not yet perfect, but in the world of driverless cars, automation enforces traffic rules and regulations without giving the driver a chance to make a mistake. Automation keeps the vehicle in the correct lane, makes it adhere to speed limits, avoid pedestrians, cyclists, and kids darting from behind ice-cream trucks — regardless of the experience level or skill of the operator. Today, we benefit from automation of this kind in a wide variety of scenarios. Imagine what automation could do as a means of enforcing “the right thing” in large, complex data centers.

Instead, we entrust the running of these huge IT environments to system administrators, many of whom have been gaming, hacking, and cracking since their early teens. Their tech smarts are beyond reproach, but how many of them have the ethical foundation needed to handle such a responsibility? In some ways, it’s like putting a regular motorist behind the wheel of a Formula One racecar.

Sometimes the enemy of doing the right thing is simply too much data — and here, too, automation can play a role. The Equifax breach is a case in point. The data was all there to indicate that something was going wrong, but the sheer amount of it was so overwhelming that the security teams couldn’t separate the signal from the noise. Automation doesn’t allow bad actors to take advantage of the system. With the right investment, automation could have prevented the breach.

Separate the Signal from the Noise
Many solutions today automate functions such as patching, software updates and determining whether or not a system is vulnerable before it attaches to the network. It helps keep human error and questionable ethics out of the equation. Take Microsoft Windows. One reason it’s so vulnerable to attack is that, philosophically speaking, it was created to be open (unlike UNIX, which was created closed). Its creators worked under the assumption that Windows operators would have a strong ethical compass and would not be bad actors. As a result, many exploits in Windows have stemmed from people uncovering a door left open or system unpatched. Automation pre-empts bad actors from exploiting these vulnerabilities.

We live in an era where ethics are given scant regard — in which our leaders almost daily eschew the moral high ground. Compromise your ethics, hold your nose and get the vote to hold the party line seems to be the order of the day. How long before we see a trickle-down of this attitude into cybersecurity, if it hasn’t already happened?

In the absence of an ingrained ethical framework and assurance of skill levels, security professionals need tools that dynamically, and in real time, enforce good, skilled, and ethical behavior. The end-state we should seek is a strong ethical culture embedded in system and network administrators, in security practitioners, in database administrators – in short, in all those who have access to the keys of the kingdom.

John De Santis has operated at the bleeding edge of innovation and business transformation for over 30 years — with international and US-based experience at venture-backed technology start-ups as well as large global public companies. Today, he leads HyTrust, whose … View Full Bio

Article source: https://www.darkreading.com/operations/automating-ethics-for-cybersecurity-/a/d-id/1331333?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fixing Hacks Has Deadly Impact on Hospitals

A study from Vanderbilt University shows that remediating data breaches has a very real impact on mortality rates at hospitals.

Breaches of private information in hospital records are serious and expensive security events but remediating them can be deadly. That’s the conclusion of a study presented last week at the 4A Security and Compliance Conference.

The data shows that the type and scale of a breach make little difference to patient outcomes, but the breach itself does have an effect – and it appears to come from the hospital’s response rather than the attack itself. The effect is serious: mortality rates go up significantly.

Dr. Sung Choi, a post-doctoral fellow at Vanderbilt University, says that the study looked at a common metric available to researchers: the 30-day mortality rate from AMI (acute myocardial infarction), which is basically how many people who come through the hospital door because of a heart attack are still alive 30 days later.

They chose that number because it’s commonly collected and frequently used by researchers, which makes it possible to compare the impact of different factors – and of different facilities – on the same metric.

The 30-day mortality rate also allows for tracking a hospital’s performance through time, and that’s where this study gets very interesting.

The general 30-day mortality rate has been falling at a fairly consistent rate for at least the last five years, which is good news. But, according to the study, “The .34 to .45 percentage point increase in 30-day AMI mortality rate after a breach was comparable to undoing a year’s worth of improvement in mortality rate.”

Behind the Bad Number

There are two key findings in the study’s working paper that are surprising from a computer security perspective. “The association between data breaches and AMI mortality rate did not differ significantly by the magnitude of the breach,” the paper said. So the outcome wasn’t significantly different whether there were 1,000 records hit or 500,000.

The second key finding contains an important caveat. According to the paper, “The relation between breaches and AMI mortality did not differ significantly by the type of breach.” The caveat is the timing of the study’s data; the last year included was 2015, before ransomware became a major malware issue.

Choi says this appears to point in the direction of a cause for the worsening mortality rate. “It’s not the immediate effect of the breach but what happens afterward that has such an impact on the patients,” he says. And the research paper begins to explore why that is so: “…regardless of the source the resulting discovery and mitigation of a breach can be viewed as a random shock to a hospital’s care-delivery system.”

(Lack of) Speed Kills

Healthcare IT systems may feel that shock as slower and more disruptive change than systems in other industries because they start from a relatively weakened position security-wise. “For the most part the healthcare industry, and especially the providers, has been a laggard for information security,” says Larry Ponemon, founder and chairman of the Ponemon Institute.

When hospitals respond to a breach, the response tends to have a major impact on their legitimate users. According to Choi’s research, “new access and authentication procedures, new protocols, new software after any breach incident is likely to disrupt clinicians.”

That disruption is where the patient is affected, through inaccurate or delayed information reaching the people caring for them. And how much, in blunt terms, can that effect be? The study puts it at an additional 34 to 45 deaths per 10,000 heart attack discharges every year.

Good and Bad on the Horizon

Choi says that hospitals should be careful to focus changes in their security processes, procedures, and technology to improve both data security and patient outcomes.

Ponemon sees healthcare organizations starting to improve in security. “We do see healthcare organizations starting to take care of security and rising to the next level of security. I think the public demands it,” he says.

Two factors contribute to the improvement across the industry, he says. The first is the simple acknowledgement that doctors and hospitals are targets – an acknowledgement that was a long time coming. The next is the march of technology. “There are technologies that healthcare can now afford because they’re available in the cloud and it provides the opportunities for healthcare security to improve,” Ponemon says.

The improved security may come just in time to have an impact on a looming area of security concern: the medical IoT. “There’s a universe of devices, many of which are implanted and many can be communicated with through Wi-Fi or Bluetooth,” Ponemon says. “Right now, the providers are looking at records but the devices are really an area of huge concern.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/fixing-hacks-has-deadly-impact-on-hospitals/d/d-id/1331386?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FTC goes after Facebook

The US Federal Trade Commission (FTC) confirmed on Monday that it’s investigating how the personal information of 50 million users slipped through Facebook’s grasp and wound up with data analytics firm Cambridge Analytica (CA).

Last week, the FTC declined to confirm that it was investigating Facebook, including whether the company violated a consent decree signed with the agency in 2011. That decree required that Facebook notify users and receive explicit permission before sharing personal data beyond their specified privacy settings.

Any violations of the consent decree could carry a penalty of $40,000 a pop.

Tom Pahl, Acting Director of the FTC’s Bureau of Consumer Protection, issued this statement about the concerns regarding Facebook’s privacy practices:

The FTC is firmly and fully committed to using all of its tools to protect the privacy of consumers. Foremost among these tools is enforcement action against companies that fail to honor their privacy promises, including to comply with Privacy Shield, or that engage in unfair acts that cause substantial injury to consumers in violation of the FTC Act.

Companies who have settled previous FTC actions must also comply with FTC order provisions imposing privacy and data security requirements. Accordingly, the FTC takes very seriously recent press reports raising substantial concerns about the privacy practices of Facebook. Today, the FTC is confirming that it has an open non-public investigation into these practices.

Privacy practices? What privacy practices?

According to multiple whistleblowers, Facebook basically rolled over and played dead while CA and other developers blithely scraped away its users’ data.

Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, spoke to British MPs last week. Parakilas said that during his tenure, he got the impression that Facebook feared looking too closely at the unvetted developers who’d been given access to Facebook servers and the user data therein, frightened as it was that lifting up the rock could lead to liability over policies or laws being broken in data breaches.

As it is, CA is believed to have used such data to create what it’s dubbed “psychographic profiles” with which to microtarget Facebook users in the 2016 presidential campaign, the Brexit campaign, and the campaigns of US Republicans including Ted Cruz, Ben Carson, Tom Cotton, John Bolton, et al.

And just where was Facebook during all this? The FTC says its probe will determine whether the company “failed” to honor its privacy promises.

Former FTC officials told the Washington Post that the investigation could lead to fines in the trillions of dollars (at $40,000 per violation, 50 million affected users works out to a theoretical maximum of about $2 trillion).

Facebook’s putting on its game face. Rob Sherman, deputy chief privacy officer, told CNBC that the company would “appreciate the opportunity to answer questions the FTC may have.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nAW4ckWoXOM/