STE WILLIAMS

Companies’ ‘Anonymized’ Data May Violate GDPR, Privacy Regs

A new study found that almost any database containing 15 pieces of demographic data can be used to identify individuals.

For more than two decades, researchers have chipped away at the assumption that anonymized collections of data could protect the identities of research subjects as long as the datasets did not include any of a score of identifying attributes.

In the latest research highlighting the ease of what is known as “re-identification,” three academic researchers have shown that 99.98% of Americans could be re-identified from an otherwise anonymized dataset, if it included 15 demographic attributes.

The findings suggest that even current policies surrounding the protection of customer identities, such as the General Data Protection Regulation (GDPR), fall short of truly protecting citizens.

In the paper, which appeared in Nature on July 23, the researchers conclude that “even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.” 

The paper adds to the mountain of research suggesting that any dataset that contains useful information about individuals likely could be used to re-identify those subjects and link individuals to information that may be protected by privacy regulations or law. The research could lead to a rethinking of whether all big data sets need to be significantly better protected.

“Many companies think that, if it’s anonymous, I don’t need to secure it, but the data is likely not as anonymous as they think,” says Bruce Schneier, a lecturer at Harvard University’s Kennedy School of Government and the author of Data and Goliath, a book about how companies’ data collection results in a mass-surveillance infrastructure. “Again and again and again, we have learned that anonymization of data is extremely hard. People are unique enough that data about them is enough to identify them.”
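That uniqueness is easy to demonstrate for yourself. Here is a toy sketch using fabricated data and standard Unix tools: strip the names from a tiny dataset and the remaining quasi-identifiers still single out most rows.

```shell
# people.csv: a "de-identified" extract – names removed, but ZIP code,
# gender and date of birth retained (all records here are made up)
cat > people.csv <<'EOF'
02138,F,1945-07-31
02138,M,1945-07-31
90210,F,1970-01-01
90210,F,1970-01-01
EOF

# rows that occur exactly once are unique attribute combinations – anyone
# holding this file plus, say, a voter roll can put a name back on them
sort people.csv | uniq -u
```

In this toy file, two of the four records are already unique on just three attributes; with 15 attributes per record, as in the Nature study, collisions all but vanish.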

The findings mean that companies and government agencies need to reassess how they deal with “anonymized” data, says Scott Giordano, vice president of data protection at Spirion, a provider of data-security services. The US Department of Health and Human Services, for example, currently requires that businesses remove 18 classes of information from files, or have an expert review their anonymization techniques, in order to certify data as non-identifying.

That may not be enough, he says.

“It is too easy, with advances in big data, to de-anonymize things that maybe you couldn’t have de-anonymized five years ago,” Giordano says. “We are in an arms race between the desire to anonymize data and our collection of big data, and big data is winning.”

Zip Code, Gender, and DoB

The concerns over re-identification appeared in the late 1990s, when then-graduate student Latanya Sweeney conducted research into the possibility of combining voter rolls and medical research records on Massachusetts state employees to de-anonymize patients’ information. Famously, Sweeney, now a professor of government and technology in residence at Harvard University, was able to find then-Governor William Weld’s medical record in the dataset. In a 2000 paper, she estimated that 87% of US citizens could be identified using just three pieces of information: their 5-digit zip code, gender, and date of birth.
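A back-of-the-envelope calculation shows why those three attributes go so far. There are roughly 42,000 five-digit zip codes, two gender values on most forms, and about 36,500 plausible birth dates in a century (the figures are illustrative round numbers, not Sweeney’s exact inputs):

```shell
# ~42,000 ZIP codes x 2 genders x ~36,500 birth dates (100 years):
# around 3 billion possible triples for ~300 million people, so most
# occupied combinations contain exactly one person
echo $((42000 * 2 * 36500))    # prints 3066000000
```

With roughly ten times as many combinations as people, a randomly chosen resident is far more likely than not to be the only person with their particular triple.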

With data collection proliferating across personal devices — not just smartphones, but everything from Apple Watches to connected mattresses — technology firms and data aggregators are making choices that affect the rights of US citizens, she argued in a speech at Stanford University’s School of Engineering in 2018.

“We live in a technocracy — that is, we live in a world in which technology design dictates the rules we live by,” she said. “We don’t know these people, we didn’t vote for them in office, there was no debate about their design, but yet, the rules that they determined by the design decisions they make — and many of them somewhat arbitrary — end up dictating how we will live our lives.” 

The Nature paper, written by a team of three researchers from Imperial College London and Belgium’s UCLouvain, shows that the sheer number of attributes collected about people makes them easier to single out. For companies, the lesson is that any sufficiently detailed dataset cannot, by definition, be anonymous. Even releasing partial datasets runs the risk of re-identification, the researchers found.

“Moving forward, [our results] question whether current de-identification practices satisfy the anonymization standards of modern data protection laws such as GDPR and CCPA (the California Consumer Privacy Act) and emphasize the need to move, from a legal and regulatory perspective, beyond the de-identification release-and-forget model,” the researchers stated in the paper.

This leaves companies with no easy answers on whether following current guidelines is enough to protect the anonymity of the information in their care, says Pravin Kothari, CEO of CipherCloud, a data-security provider. 

“This finding proves that re-identification is easy, so companies need to make sure they are anonymizing all demographic data, not just names,” he says. “The removal of names is simply not enough to properly de-identify a person. We’ll need to ensure that all personally identifiable information is anonymized in order to remove the risk of re-identification of individuals.”


Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions, and service providers in the Business Hall. Click for information on the conference and to register.

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/endpoint/privacy/companies-anonymized-data-may-violate-gdpr-privacy-regs/d/d-id/1335361?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Complete Personal Fraud Kits Sell for Less Than $40 on Dark Web

The low cost of records reflects the huge supply of PII after many breaches at hospitals, government agencies, and credit bureaus.

When a complete set of personally identifiable information (PII) is sold on the Internet, it’s all a criminal needs to steal an identity. And research has shown that the cost to buy that identity is only $30 to $40.

A “fullz” for a US consumer contains a person’s full name, birth date, Social Security number, address, phone number, driver’s license number, and mother’s maiden name. For an extra $10 to $25, sellers will add an individual’s credit card data, bank account data, bank security questions and answers, employer, or other critical information.

The new research, by Armor Defense, found PII merchants who gave instructions on how the information can be used to commit bank fraud and tips on getting information that might be missing from a record — one seller, for example, suggested going to Ancestry.com to find a victim’s mother’s maiden name.

Although costs for a fullz differ from country to country, the generally low cost of the records reflects the huge supply of PII after a series of breaches at hospitals, government agencies, and credit bureaus.

For more, read here

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/complete-personal-fraud-kits-sell-for-less-than-$40-on-dark-web/d/d-id/1335362?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Malware Researcher Hutchins Sentenced to Supervised Release

Marcus Hutchins, the researcher known for stopping WannaCry, avoids jail time over charges of creating and distributing Kronos malware.

Marcus Hutchins, a security researcher known for creating the “kill switch” that stopped the 2017 WannaCry ransomware attack, has been sentenced to time served and a year of supervised release for charges of creating and distributing the Kronos banking malware.

Judge J.P. Stadtmueller, who presided over today’s hearing, said 25-year-old Hutchins had served his time and considered his age at the time of the offense, which occurred when he was a teenager. He credited the researcher for having made positive changes in his life prior to his arrest; over the years, Hutchins has developed a reputation as a top industry analyst.

“It’s going to take the people like [Hutchins] with your skills to come up with solutions because that’s the only way we’re going to eliminate this entire subject of the woefully inadequate security protocols,” said Stadtmueller at the hearing, as reported by TechCrunch.

The judge waived any fines. Hutchins, who had been in Los Angeles on bail, can return to his home in the United Kingdom. His criminal record will likely prevent him from re-entering the United States.

Months after WannaCry, Hutchins was arrested in the Las Vegas airport on his way home to the UK after the DEF CON security conference. He was charged with creating and distributing Kronos banking malware, conspiracy to commit computer fraud, illegally accessing computers, and advertising an illegal communication-interception device, among other things. Hutchins and a co-conspirator were also accused of creating and distributing Upas Kit malware.

Hutchins initially pleaded not guilty in August 2017 to charges of developing and distributing Kronos. Earlier this year, he pleaded guilty to two counts of hacking for writing malware. “I regret these actions and accept full responsibility for my mistakes,” he wrote in a statement.

“Having grown up, I’ve since been using the same skills that I misused several years ago for constructive purposes,” Hutchins said. “I will continue to devote my time to keeping people safe from malware attacks.” He apologized again today to family, friends, and victims.

 



Article source: https://www.darkreading.com/attacks-breaches/malware-researcher-hutchins-sentenced-to-supervised-release/d/d-id/1335363?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

EvilGnome – Linux malware aimed at your desktop, not your servers

Some of our readers asked us this week, “What do you guys think of EvilGnome?”

#ICYMI, EvilGnome is a recent malware sample that’s made a few headlines, and although we haven’t seen any examples of it actually popping up in the wild, we thought we’d answer the question anyway.

Because Linux!

As you probably know, Linux malware and hacked Linux systems are very common, for the simple reason that most of the servers that power today’s internet run Linux in some form.

If you’re a cybercrook who wants to spread your Windows malware widely – keyloggers, for example, or banking Trojans, or other network nasties that thieve people’s digital stuff so it can be sold on to the next crook on the cyberunderground…

…then you’re probably going to be relying on hacked or compromised Linux systems for the bulk of your malware distribution.

For that reason, Linux malware generally doesn’t look like Windows malware, and isn’t supposed to, either.

But EvilGnome, rare and unusual though it may be, gets its media-friendly name because it was clearly written to target the comparatively small but committed community who use Linux on their laptops.

EvilGnome starts life as a self-contained file that consists of 522 lines of text – what’s called a shell script because it’s designed to run directly inside a Linux command shell, such as the command prompt you get in a terminal window – followed by a compressed blob of data that carries the rest of the malware along with it.

If you glance at the start of the malware file, all you’ll see is this:

#!/bin/sh
# This script was generated using Makeself 2.3.0

ORIG_UMASK=`umask`
if test "n" = n; then
    umask 077
fi

CRCsum="XXXXXXXXXX"
MD5="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
TMPROOT=${TMPDIR:=/tmp}
USER_PWD="$PWD"; export USER_PWD

label="setup files..."
script="./setup.sh"
scriptargs=""
licensetxt=""
. . .

That looks pretty unexceptionable – in fact, this is what’s called a self-extracting archive, and it was created with a legitimate and widely-used free software packaging system called Makeself.

Several mainstream software tools, such as Oracle’s VirtualBox software, make use of the Makeself toolkit, so the presence of Makeself’s auto-self-extraction code at the start of a Linux file isn’t itself cause for alarm.

After all, the idea is a good one – to make installing your software easier.

Instead of downloading a file in a static archive format such as ZIP, gzip, or bzip2, and then decompressing and unpacking the bundle yourself before digging around to figure out how to install it, you just download one self-contained Makeself file and run it.

The shell script then extracts the embedded app into a temporary directory and automatically hands control over to a component that’s just been extracted – in this case, the uncontroversial-looking setup.sh.
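To make the mechanism concrete, here’s a minimal, harmless sketch of the same trick – a shell stub with a tarball glued on the end. The marker name and layout are our own invention for illustration, not Makeself’s actual code:

```shell
#!/bin/sh
# Build and run a tiny self-extracting archive (illustrative layout only)
set -e

# 1. the payload: an installer entry point, packed into a tarball
mkdir -p payload
printf '#!/bin/sh\necho payload ran\n' > payload/setup.sh
chmod +x payload/setup.sh
tar -czf blob.tgz -C payload setup.sh

# 2. the stub: treats everything after the marker line as a tar.gz blob;
# `exec` replaces the shell, so it never tries to parse the binary data
cat > stub.sh <<'EOF'
#!/bin/sh
OFFSET=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
DEST=$(mktemp -d)
tail -n +"$OFFSET" "$0" | tar -xzf - -C "$DEST"
exec "$DEST/setup.sh"
__ARCHIVE__
EOF

# 3. stub + blob = one runnable installer
cat stub.sh blob.tgz > installer.run
chmod +x installer.run
./installer.run    # prints: payload ran
```

The real Makeself stub is more elaborate (checksums, options, cleanup), but the core idea – find your own archive offset, extract, hand off control – is the same.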

Self-extracting archives and installers are commonplace on Windows; this is a way of achieving a similarly simple way of installing even very complex Linux software tools.

Forget about ./configure; make; make install – just run thisfile.sh or thisfile.run directly instead.

(Linux doesn’t need file extensions in quite the same way Windows does, but the creators of the Makeself tool recommend adding an extension of .sh or .run anyway, just for clarity.)

What’s good for the goose

Unfortunately, the very tools that make it easier for us to construct self-installing software bundles also make things easier for the crooks.

If you run the EvilGnome self-extractor you will end up with malware installed in a directory called:

~/.cache/gnome-software/gnome-shell-extensions/

To explain:

In Unix-speak, the special filename ~/ means your home directory.

The rest of the file path refers to a temporary subdirectory used by the popular Linux desktop software known as Gnome.

Note that Unix filenames that start with a dot (also known as a period, displayed as “.”) aren’t shown in most directory listings, so they’re essentially invisible by default.

In any case, .cache is a standard place for apps to store files they think they’ll need again but don’t need to keep forever.

In other words, the ~/.cache/gnome-software/ directory is a great place for malware to hide in plain sight – you’ll probably never see it, but if you do you’ll expect it to be full of random-looking stuff that can largely be ignored.
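You can see the hiding trick for yourself in a scratch directory:

```shell
# dot-directories are invisible to a plain listing
cd "$(mktemp -d)"
mkdir -p .cache/gnome-software
ls         # prints nothing
ls -A      # prints: .cache
```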

If you look in the hiding place used by the malware, you’ll find the innocent-sounding files:

gnome-shell-ext
gnome-shell-ext.sh

The names make them look like a Gnome shell extension – a kind of Gnome desktop plugin – but they are, respectively, the malware app itself and a shell script that launches it in the background.

The gnome-shell-ext file is a compiled C++ program; dumping some of the debugging symbols that the crooks left behind gives an immediate hint of what it’s for:

$ nm -C gnome-shell-ext

000000000040b650 T ShooterKey::threadKeysBody()
000000000040b850 T ShooterKey::sendKeys()
000000000040b700 T ShooterKey::ShooterKey()
. . .
0000000000409ce0 T ShooterFile::scanFolder()
0000000000409cb0 T ShooterFile::ShooterFile()
. . .
000000000040bc10 T ShooterPing::sendStoredPackets()
000000000040c560 T ShooterPing::ShooterPing()
. . .
000000000040b280 T ShooterImage::takeScreenshot()
000000000040b260 T ShooterImage::ShooterImage()
. . .
000000000040c610 T ShooterSound::takeSound()
000000000040c5f0 T ShooterSound::ShooterSound()
. . .

According to Intezer, which first broke the news of this malware and gave it the name EvilGnome, these functions do pretty much what their names suggest.

The takeSound() function can capture audio and upload it; takeScreenshot() speaks for itself, and scanFolder() looks for files to steal.

Intezer says that the ShooterKey:: components aren’t finished (and therefore aren’t used), but it’s easy to guess what these functions might do in a future version – log keystrokes and thereby sniff out passwords.

Lastly, ShooterPing:: not only communicates back to the crooks but can also download new malware and run it.

That makes this into a general-purpose zombie or bot, namely a remotely controllable software agent that the crooks can harness later for whatever they think of next.

The EvilGnome malware also adds itself to your crontab (a Linux tool for running programs in the background at predetermined times) so that it gets re-launched within a minute if it ever crashes or gets killed off.

That means it not only survives a reboot but also comes back to life if you notice it and terminate the suspicious process.

What to do?

As mentioned at the start, we haven’t seen this in the wild, so it’s unlikely you’ll encounter it.

But here are some tips anyway:

  • Check for a process called gnome-shell-ext. If found, use kill -9 to terminate it. If it comes back after a minute, then this malware is probably already active on your system. Do steps 2 and 3, then repeat this step to kill it completely.
  • Check your crontab for an entry like 0-59 * * * * ~/.cache/gnome-software/gnome-shell-extensions/gnome-shell-ext.sh. That’s a sign that the auto-reloading script has been installed. Remove it from your crontab.
  • Check for the above-mentioned gnome-shell-ext* files. If you remove them, the malware can’t reload even if you haven’t cleaned the crontab.
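Step 2 can be done safely against a saved copy of your crontab, so you can review the change before loading it back. The entry below uses the format quoted above (adjust the pattern if yours differs):

```shell
# a saved crontab (made with: crontab -l > cron.before) containing the
# loader entry; the MAILTO line stands in for your legitimate entries
cat > cron.before <<'EOF'
MAILTO=""
0-59 * * * * ~/.cache/gnome-software/gnome-shell-extensions/gnome-shell-ext.sh
EOF

# strip the loader line; inspect cron.after, then apply it with:
#   crontab cron.after
grep -v 'gnome-shell-ext.sh' cron.before > cron.after
cat cron.after    # only the MAILTO line remains
```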

By the way, Sophos Anti-Virus for Linux is 100% free for home and business use – why not try it?

Our product detects and blocks all types of malware on a Linux system, including Windows and Mac malware.

That means it also stops you serving up dodgy files to other people if some rogue has deliberately uploaded malware to use your server as a temporary malware repository.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/E4CH_8oYLak/

S2 Ep1: FaceApp, logic bombs and stranger danger – Naked Security Podcast

We’re finally back with Series 2 of the Naked Security Podcast. While you’ve been missing us, we’ve been working out how to improve the show and kitting out a dedicated studio.

You’ll now find longer episodes with more opportunities to get involved. Send us your general cybersecurity questions and join the discussion via social media or by commenting on our relevant articles.

In this week’s episode, host Anna Brading is joined by Paul Ducklin, Mark Stockley and Matt Boddy.

We investigate whether FaceApp is as dangerous as they say [12’57”], how to keep logic bombs out of your software [24’14”], and how to help youngsters stay safe online [35’06”].

Listen now and share your thoughts with us.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/o9M6vKHufP0/

Browser plug-ins peddled personal data from over 4m browsers

Eight catastrophically leaky browser extensions were discovered by researcher Sam Jadali, working with Washington Post columnist Geoffrey A. Fowler.

Together, they traced the privacy train wreck, dubbed DataSpii, to browser extensions (also known as add-ons or plug-ins) that run around doing things like making browsing better by finding coupons or remembering passwords or whatever.

Peel back the “whatever” and this is what you find: those extensions, offered up on stores run by Google and Mozilla and therefore presumably legit, are running a side hustle, watching every click we make online and then putting it all up for sale.

Jadali published his findings last Thursday.

He found that the extensions were leaking, in near real-time, personal, sensitive data on the websites you’re browsing, primarily on Chrome but also on Firefox. Ditto for sensitive business information. Jadali’s Security with Sam firm found that the leaked data included these types of personal and corporate data:

Personal data

  • personal interests
  • tax returns
  • GPS location
  • travel itineraries
  • gender
  • genealogy
  • usernames
  • passwords
  • credit card information
  • genetic profiles

Corporate data

  • company memos
  • employee tasks
  • API keys
  • proprietary source code
  • LAN environment data
  • firewall access codes
  • proprietary secrets
  • operational material
  • zero-day vulnerabilities

As Ars Technica reported last Thursday, by Google’s account, we’re talking about data from as many as 4.1 million users. The extensions collected “the URLs, webpage titles, and in some cases the embedded hyperlinks of every page that the browser user visited,” Ars reported.

They didn’t just slurp up web histories – some of the extensions then peddled them, publishing the histories through a fee-based service called Nacho Analytics that markets itself as “God mode for the Internet” and which uses the tag line “See Anyone’s Analytics Account.”

The extensions

  • Hover Zoom
  • SpeakIt!
  • SuperZoom
  • SaveFrom.net Helper
  • FairShare Unlock
  • PanelMeasurement
  • Branded Surveys
  • Panel Community Surveys

This is the data that Fowler says he found for sale:

I’ve watched you check in for a flight and seen your doctor refilling a prescription.

I’ve peeked inside corporate networks at reports on faulty rockets. If I wanted, I could’ve even opened a tax return you only shared with your accountant.

I found your data because it’s for sale online. Even more terrifying: It’s happening because of software you probably installed yourself.

Google removed the extensions from its Chrome Web Store a day after Jadali and the Post published their stories. It also remotely disabled those extensions on the millions of computers that had them installed. Mozilla removed and disabled its one DataSpii extension in February. About a week later, Nacho Analytics announced a “data outage.”

Ars reports that in an 11 July 2019 email, Nacho Analytics founder and CEO Mike Roberts told customers that the site had suffered a “permanent data outage” due to a third-party supplier no longer being available. He told customers that the site would no longer accept new customers or provide new data, but that customers who kept their accounts open would still be able to access any data they’d previously bought.

However, Nacho Analytics – which sells “links to tax returns, prescription refills, and reams of other sensitive information collected from more than four million browsers” – is still making the data available to existing customers.

Here’s how it works: URL data from websites is imported directly into customers’ Google Analytics accounts. That data includes the kind of sensitive information that got Nacho Analytics shut off in the first place, such as the names of medical patients who received test results via a patient-care cloud platform used by medical services.

Ars displayed a few redacted screenshots in its writeup: one shows data slurped by the extensions from inside Tesla’s network that was sent on to Nacho Analytics, and then imported into Google Analytics.

Once this type of data is out there, what are you supposed to do to get it back? Ars Security Editor Dan Goodin compares the situation to putting toothpaste back into a tube. Once data is out, it’s out, and it ain’t going back in. Such is the case with the Nacho Analytics customers who bought data: they can hold on to what’s potentially gigabytes’ worth of browsing histories collected from millions of people, thanks to the help of Nacho Analytics and Google Analytics.

Is any of this against Google’s terms of service? Here’s what a company spokesperson told Ars:

Passing data that personally identifies an individual, such as email addresses or mobile numbers, through Google Analytics is prohibited by our terms of service, and we take action on any account found doing so intentionally.

The spokesperson said that Google has suspended multiple Google Analytics properties owned by Nacho Analytics for violating Google terms of service and that Google’s investigating additional accounts that may be connected or integrated with Nacho Analytics.

What to do?

You can find out if DataSpii is spying on your every click by viewing your extensions.

In Chrome, manually enter this URL in your browser: chrome://extensions

In Firefox, manually enter this URL in your browser: about:addons

If you see any of the extensions from the list above, remove them. Note that in one instance, Jadali says, a remotely deactivated extension didn’t stop collecting data. You’ve got to remove the extension to make the data collection stop.

Besides removing the extensions, Jadali recommends that those who downloaded the add-ons change their passwords. Also, if you access services through an API via a URL, consider changing your API keys. Security with Sam has more recommendations in Section 4.6 of its report on DataSpii.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hh3QByv2ph0/

BlueKeep guides make imminent public exploit more likely

A public exploit for Microsoft’s apocalyptic BlueKeep vulnerability is just days away. In fact, for those with deep enough pockets, it’s already here.

To refresh your memory: BlueKeep is a vulnerability in the Remote Desktop Protocol (RDP) implementation affecting Windows XP, Windows 7, Windows Server 2003, and Windows Server 2008.

An attacker who exploits it can do two things. First, they can run code remotely on the compromised machine. Secondly, they can use RDP to exploit other machines without any human interaction. That’s a worm, and that’s bad, because it can spread on its own, infecting potentially hundreds of thousands of machines in short order.

The problem is exploiting it properly. Getting code to run on targeted machines without crashing them is technically difficult. That’s why, even though Microsoft acknowledged the vulnerability and patched it on 14 May 2019, we haven’t seen BlueKeep worms swarming across the internet yet.

Working exploits for BlueKeep have been developed by a number of ethical hackers and security companies, including Sophos, who decided to keep the details secret.

As the time people have had to patch increases, and as more people develop exploits, the omertà that’s keeping offensive code under wraps is starting to unravel.

This week, one technical expert released workable exploit code, while others posted detailed instructions on how to produce it.

On Tuesday, security company Immunity Inc claimed to have added a module to its CANVAS automated exploitation system with a working BlueKeep exploit.

A subscription to the service costs tens of thousands of dollars, though, which should keep it out of the hands of the script kiddies.

The same day, a researcher posted a detailed technical analysis of the vulnerability, along with some Python proof-of-concept code, explaining exactly how to bridge the technical gap. The analysis omits an executable shellcode payload, and doesn’t explain where to put it, instead calling those “exercises left to the reader”. Still, it gets coders far closer to an executable attack.

The details are, as you might expect, extremely technical. BlueKeep is a use-after-free vulnerability, meaning that the program tries to use memory after it is supposed to have discarded it. The vulnerability lies in termdd.sys, which is the RDP kernel driver. An attacker can exploit it by opening a virtual channel over an RDP connection – in this case a default channel called MS_T120 – and sending specially crafted data to it.

The exploit runs code on Windows XP, the researcher said, but would probably crash Windows 7 or Server 2008 machines.

They justified the release of the information by saying that it is “largely already available within the Chinese hacker community”. They might have been referring to a series of Chinese-language slides, purportedly explaining how to exploit the vulnerability and execute remote code, that someone else posted on GitHub on Monday.

We’re not linking to either of the GitHub repos here, because why make it easier for someone to develop a worm? They’re easy enough for people to find, though.

How many people could a working exploit hit? A scan from security firm BitSight on 2 July 2019 identified 805,665 vulnerable computers, down from almost a million in May. That’s worrying, because it shows that not enough people are patching. So the message is clear: If you’re running Windows XP, 7, Server 2003 or Server 2008, patch them, please.

More on RDP attacks

BlueKeep isn’t the only problem facing machines running RDP. Recent research by Sophos showed that criminals are performing massive numbers of simple but effective RDP password guessing attacks every day against internet-facing Windows machines.

Anna Brading talks to Matt Boddy, Ben Jones and Mark Stockley about their research in the Naked Security podcast series 2 launch episode, entitled RDP Exposed.

Listen now, and let us know what you think!

You can find out more about our RDP research here on Naked Security, or by reading the full report.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ECYOzBCkJ4g/

Happy SysAdminDay 2019!

Can it really be 101101101₂ days since the last System Administrator Appreciation Day, you ask.

Yes, yes, it can. And it is.

But you didn’t need me to tell you that, because you could hardly miss the #SysAdminDay banners, the bunting, the witty cards, the mysterious deliveries of baked goods and the personal message of thanks from the CEO.

Wait, what, that didn’t happen in your organisation?

Ha. We’re kidding. Of course it didn’t – we know you’re invisible, but we see you. We see you in the server room, behind the pile of boxen carcasses (or should that be box carcassen?), checking stackoverflow, on the end of the ssh connection, down in the config and all up in the logs.

We see you, and we just wanted to say: nice tee.

It would be an exaggeration to say that sysadmins draw their power from amusing, esoteric and slightly faded T-shirts, but only just. So today we’re celebrating the style, self expression and social signalling of the sysadmins’ second skin.

If you’ve got a smartphone or a webcam handy, we’d love to see what you’ve got: film references, witty slogans or niche Norwegian heavy metal bands, we love them all.

So, on behalf of all your users, and from your friends at Sophos: Happy #SysAdminDay, everyone!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HYLsCyS13qg/

Cyberlaw wonks squint at NotPetya insurance smackdown: Should ‘war exclusion’ clauses apply to network hacks?

Analysis The defining feature of cyberwarfare is that both the weapon and the target are the network itself. In June 2017, the notorious file-scrambling software nasty NotPetya caused global havoc that affected government agencies, power suppliers, healthcare providers and big biz.

The ransomware sought out vulnerabilities and used a modified version of the NSA’s leaked EternalBlue SMB exploit, generating one of the most financially costly cyber-attacks to date.

Among the victims was US food giant Mondelez – the parent firm of Oreo cookies and Cadbury’s chocolate – which is now suing insurance company Zurich American for denying a £76m claim (PDF) filed in October 2018, a year after the NotPetya attack. According to the firm, the malware rendered 1,700 of its servers and 24,000 of its laptops permanently dysfunctional.

In January, Zurich rejected the claim, citing a single policy exclusion for “hostile or warlike action in time of peace or war” by a “government or sovereign power; the military, naval, or air force; or agent or authority”.


Mondelez, meanwhile, suffered significant loss as the attack infiltrated the company – affecting laptops, the company network and logistics software. Zurich American claims the damage, as the result of an “act of war”, is therefore not covered by Mondelez’s policy, which states coverage applies to “all risks of physical loss or damage to electronic data, programs, or software, including loss or damage caused by the malicious introduction of a machine code or instruction.”

While war exclusions are common in insurance policies, the court papers themselves refer to the grounds as “unprecedented” in relation to “cyber incidents”.

Previous claims have only been based on conventional armed conflicts.

Zurich’s use of this sort of exclusion in a cybersecurity policy could be a game-changer, with the obvious question being: was NotPetya an act of war, or just another instance of ransomware?

The UK, US and Ukrainian governments, for their part, blamed the attack on Russian state-sponsored hackers, claiming it was the latest act in an ongoing feud between Russia and Ukraine.

Either way, it is evident that the result of the case will have enormous ramifications for cyber insurance policies and a significant impact on the monetisation of cybercrime. If Zurich’s approach is successful, it could also lead to a loss of confidence in cyber insurance as an investment – ironically devaluing Zurich’s product.

Are war exclusion clauses fit for purpose under International Humanitarian Law when applied to cyber-attacks?

The contrast between cyber-attacks and war – a term carrying connotations of devastation and loss of life – raises the question of whether the NotPetya attack would meet the standards set under International Humanitarian Law (IHL). For IHL to be applicable, there needs to be an “armed conflict”; however, the term itself is not defined within the treaties.

Notably, there are two types of conflict governed by IHL: International Armed Conflict (IAC) and Non-International Armed Conflict (NIAC).

Due to the ongoing conflict between Russia and Ukraine, we’ll look at whether the NotPetya attack could be considered an International Armed Conflict; if it were, it could possibly fulfil that exclusionary clause. There are three points to consider.

1: Was the attack ‘international’ in nature?

The US and UK accusations against Russia raised the often problematic question of attribution, and possibly gave Zurich grounds to invoke the war exclusion clause.

There are competing views regarding the attribution, with both the GRU – the Russian Military Intelligence Agency – and Russian-sponsored hackers accused. Legally, an IAC exists when hostilities between two states occur, so if it were the Russian military agency (being an organ of the state), the international element would suffice.

However, if the attackers were non-state actors, then for the conflict to be classed as international, a state would have to have had “overall control” of those actors. For those interested in the case law, this principle is outlined by the International Criminal Tribunal for the former Yugoslavia (ICTY) in The Prosecutor v Dusko Tadic*.

If there was sufficient control of these groups, where a state has issued directions on specific cyber acts to cause significant damage, the international aspect could be fulfilled. However, it is clear from jurisprudence that mere support alone in the form of financing, training and equipping falls below this threshold. Therefore, the difficult burden of attribution will lie with the defence of Zurich.

2: Was the ‘armed conflict’ requirement satisfied?

Due to an absence of treaty definition, there have been competing views on what level of “armed” is required. It has been argued that the traditional approach cannot govern cyber-attacks as these are not kinetic acts. However, the growing consensus is that IHL is applicable.

The minds behind the Tallinn Manual – the international cyberwar rules of engagement – were divided as to whether damage caused met the armed criterion. However, they noted there was a possibility that it could in rare circumstances.

Professor Michael Schmitt, director of the Tallinn Manual project, indicated (PDF) that it is reasonable to extend the notion of armed attacks to cyber-attacks. The International Committee of the Red Cross (ICRC) went further, stating that cyber operations that merely disable certain objects still qualify as an attack, despite causing no physical damage. There is no doubt Zurich will have to consider the wider implications and rising tensions between Russia and Ukraine for the attack to be considered an armed conflict – which, given the lack of precedent from previous cyber operations, seems unlikely.

3: Was the threshold of ‘armed attack’ met?

An “attack” is defined as an act of violence against the adversary in Article 49(1) of the Additional Protocols to the Geneva Conventions. Applying this to the cyber domain is controversial because of the requirement of physical damage, which is usually associated with violence involving physical force, and it is unclear where the line would be drawn. The consensus, however, is that non-violent operations such as psychological cyber operations or espionage would not qualify as an attack.

Different approaches have been taken to assess what physical force is required for a cyber equivalent. Tallinn Manual’s Schmitt insists (PDF) that the attack must result in injury or physical damage to objects. Dr Knut Dörmann, head of the legal division at the ICRC, extends the concept, saying that though an attack might not necessarily result in injury or damage, partial destruction could suffice (see here).

A competing view requires a greater degree of duration and intensity, meaning that a single cyber incident causing limited damage, destruction, injury or even death would neither suffice nor be classified as an IAC. Given this uncertainty, the current proceedings will have to tread carefully in how they define the level of damage, as widening the threshold could invite an avalanche of insurance claims and also lower the bar for classifying conflicts.

The future outcome… or just the beginning?

The outcome of the case will be highly anticipated. However, it seems likely that the NotPetya cyber-attack does not reach the high thresholds currently set out by the IHL framework for an IAC.

The proceedings highlight the inadequacies of current international regulation. The case will hopefully help define the limits of insurance coverage. However, it may leave questions unanswered and create new ones, such as: is IHL the right framework for addressing cyber damage? How should cyber conflict be defined?

And lastly: if it is decided that this kind of damage from cyber-conflicts is uninsurable, how will this impact the companies that are hacked? ®

* Dusko Tadic was charged by the ICTY with a list of crimes allegedly committed in the Prijedor region of Bosnia-Herzegovina between 25 May 1992 and early August of the same year [PDF]. The Appeals Chamber found that “the armed forces of the Republika Srpska were to be regarded as acting under the overall control of and on behalf of the [Federal Republic of Yugoslavia]”. (Our emphasis.)


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/26/do_insurance_war_exclusion_clauses_apply_to_cyberattacks/
