
Open AWS S3 bucket leaked hotel booking service data

Another day, another unsecured AWS storage bucket leaking corporate data, this time from hotel booking service Groupize.

The find was made by Kromtech Security Center researchers and is detailed at MacKeeper.

The find has sparked a spat between Kromtech and Groupize, with the latter denying that anything sensitive had leaked.

Au contraire, writes MacKeeper’s Bob Diachenko, claiming that before they were locked down on August 15 the exposed folders included nearly 3,000 documents detailing “contracts or agreements between hotels, customers and Groupize, including credit cards’ payment authorization forms, with full CC#, expiration date and CVV code”, a leads folder with more than 3,000 spreadsheets, and another folder with more than 32,000 “menus, images and more”.

Diachenko says Kromtech first notified Groupize on August 9.

The company told Kaspersky’s Threatpost it’s grateful for Kromtech shedding “light on a potential vulnerability”, and added that it’s been in touch with customers about the issue and “… steps we’ve taken to further secure our systems.”

The Register has contacted Groupize for comment.

AWS S3 leaks are becoming the flavour of 2017. Verizon leaked 14 million customer records, and other open buckets researchers have spotted include those belonging to Dow Jones and voting machine supplier ESS (both found by former MacKeeper security bod Chris Vickery).

In his new job at UpGuard, Vickery also turned up a bunch of sensitive US geospatial data, while Kromtech went public about WWE fan data leaking in July.

With white-hat-plus-dog Googling for “password AWS”, we expect plenty of others will emerge, even though the default configuration for new AWS storage is that it’s private.
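For admins wondering whether their own buckets are sitting open, a quick programmatic check is straightforward. Here is a minimal sketch using the standard boto3 SDK that flags a bucket whose ACL grants access to the AllUsers or AuthenticatedUsers groups; the bucket name is hypothetical, and a thorough audit would also inspect bucket policies and Public Access Block settings.

```python
# Minimal sketch: flag an S3 bucket whose ACL grants public access.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_acl_is_public(bucket_name: str) -> bool:
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    return any(
        grant.get("Grantee", {}).get("Type") == "Group"
        and grant["Grantee"].get("URI") in PUBLIC_GROUPS
        for grant in acl["Grants"]
    )

if __name__ == "__main__":
    print(bucket_acl_is_public("example-corp-backups"))  # hypothetical bucket name
```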

Earlier this month, Amazon unveiled its “patrol bot” service Macie, which tries to identify and help shut down unsecured corporate data repositories. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/22/open_aws_s3_bucket_leaked_hotel_booking_service_data_says_kromtech/

‘Gloomy times ahead’ for security on critical infrastructure, warn experts

It looks like pretty good timing. Less than a week after a couple of critical infrastructure experts bemoaned the ongoing lack of security in the industry, the US National Institute of Standards and Technology (NIST) is out with the latest (fifth) draft of its Security and Privacy Controls for Information Systems and Organizations, with some specific emphasis on that sector.

Early on, in a section titled Notes to Reviewers, NIST declares:

There is an urgent need to further strengthen the underlying information systems, component products, and services that we depend on in every sector of the critical infrastructure – ensuring those systems, components, and services are sufficiently trustworthy and provide the necessary resilience to support the economic and national security interests of the United States.

So, will that call to action move the industry at least a credible step toward retiring the specter of a “cyber Pearl Harbor” attack against the US grid or other critical infrastructure?

Not likely, it seems. While the draft – nearly 500 pages long – makes more than a dozen mentions of critical infrastructure, along with a couple of references to industrial control systems (ICS), it is not expected to move the needle all that much.

The most compelling evidence for that is right in the document, if you can make it to page 174, where it says:

The requirement and guidance for defining critical infrastructure and key resources and for preparing an associated critical infrastructure protection plan are found in applicable laws, Executive Orders, directives, policies, regulations, standards, and guidelines.

In other words, nothing new to see here. The “requirement and guidance” are based on what already exists, which has resulted in what Galina Antova, cofounder and chief business development officer at Claroty, called “The Lost Decade of Information Security”.

Joe Weiss, managing partner of Applied Control Solutions, who complained over the past several weeks in posts on his Unfettered blog about a lack of security in ICS process sensors, noted that in spite of numerous references in this new draft to “sensors” and “process controls”, there is nothing in all the “laws, Executive Orders, directives …” etc. that even mentions process sensors. He said:

This is a really big deal, but still, not one of our [ICS] vendors makes authenticated process sensors today.

Another ICS expert, who couldn’t speak for attribution “because of employment stuff,” said in his view such documents amount to

… lots of words and lofty goals, but I really don’t see much changing until things gets out of control and dangerous to the point of diminishing returns at all levels of society. Then there won’t be any other choice, but at that point will we even have a choice? Gloomy times ahead from my perspective, sorry to say.

Not everybody’s view is that bleak. David Shearer, CEO of (ISC)2, an international nonprofit membership association for information security professionals, said he agrees that if attackers are able to get control of elements of ICS infrastructure and change what they do, there could be “catastrophic outcomes” in everything from medicine to food safety, manufacturing and critical infrastructure, adding:

A threat actor assuming control of an ICS for a dam floodgate, electrical infrastructure, a fossil or nuclear fueled power plant could have life, limb and property implications.

But, he said he is encouraged that NIST is raising awareness that “the argument that ICS enjoys security through obscurity has quickly become a thing of the past,” and that “critical infrastructure cannot be an afterthought”.

And James Scott, cofounder and senior fellow at ICIT (Institute for Critical Infrastructure Technology), said it is important to note that in the private sector, NIST can only persuade, since it doesn’t have the authority to sanction private-sector organizations that fail to meet its standards.

Their standards are just that – standards – and there are no actual requirements on industry to use them to make their organizational IoT microcosms more cybersecure. NIST is doing all they can with the powers that have been allocated to them.

Of course, NIST can and does mandate that the controls it lists apply to all “federal information systems and organizations … in accordance with the provisions of the Federal Information Security Modernization Act (FISMA)”.

And, given that the federal government is, by orders of magnitude, the biggest “business” in the country, that should carry considerable weight.

But Scott said Congress still needs to get a lot more aggressive with the private sector, passing legislation that will

…enforce these standards with heavy penalties for those organizations that are breached due to not using these standards. A gentle tap on the shoulder with a hint at standards is the reason our critical infrastructure lacks resiliency.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/02eUsNSQsrQ/

News in brief: ban on killer robots urged; Apple’s hidden job advert; torrent sites blocked Down Under

Your daily round-up of some of the other stories in the news

Experts call for ban on killer robots

Elon Musk and Google DeepMind’s Mustafa Suleyman are among a group of 116 experts in robotics and AI who have written an open letter calling on the UN to ban the use of “killer robots” such as military drones.

In their letter, they warn that “lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever … they can be weapons of terror, weapons that despots and terrorists use against innocent populations”.

The letter was published at the opening of the International Joint Conference on Artificial Intelligence in Melbourne, Australia, at the weekend. It’s the first time that AI and robotics companies have taken a joint stand on the issue.

The group, drawn from companies across 26 countries, ends its letter with the warning that “once this Pandora’s box is opened, it will be hard to close”.

Hidden Apple job advert uncovered

Big tech companies are known for their challenging interview process, but Apple seems to have upped the ante by deliberately hiding a job advert, burying it on its website.

Zack Whittaker, security editor for ZDNet, stumbled across the advert when he was analysing data on iPhones to work out what personal data was being sent to advertisers.

The job advert said Apple was “looking for a talented engineer to develop a critical infrastructure component that is to be a key part of Apple’s ecosystem”.

Writing for ReCode, Whittaker said: “In fairness, Apple is not looking for me, but someone who’s far smarter and qualified”.

Torrent sites blocked Down Under

Australians who look to torrent sites to source TV programmes and films are going to find it that much harder after the country’s federal court ordered ISPs to block dozens of torrent sites.

The ruling is in response to legal action brought by Foxtel and Roadshow Films against Australian ISPs, and is the latest skirmish in an ongoing war against the pirate sites: last year the rights-holders won a bid to have the Pirate Bay blocked, and in April ISPs were ordered to block KickAssTorrents.

If you’re torrenting – in Australia or anywhere else – now seems like a good time to remind you that it’s not a great idea from a security point of view: you risk being caught, the forums where you chat about torrenting can be compromised, the software itself can be hacked and the sites from which you download torrenting clients can also be hacked.

If you’re caught in Australia, you face fines of up to A$117,000, and you could go to prison.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EisvvTlIj48/

10% of UK’s top firms would be screwed in a cyber attack – survey

Most of the UK’s top businesses are underprepared for new data protection rules, while 10 per cent have no response plan for a cyber attack, according to a government survey.

This year’s annual cyber governance health check (PDF) asked FTSE 350 companies about both their cyber security and data protection measures – the latter being a new introduction for the 2017 report.

It found that 10 per cent of businesses don’t have a plan in place for a cyber incident – which the government noted should be addressed as soon as possible, “given that their organisations are likely to be subject to regular attempts at cyber breaches owing to their high-profile status”.

Meanwhile, a quarter of boards said they have no defined role in a company-wide response to an attack – and 68 per cent said their board had not received any incident training.

The survey did find, however, that cyber risk is now seen as a top or group-level risk for most (54 per cent) of company boards – although 13 per cent still ranked it as a low, or operational-level, risk.

Just over half of company boards said they set their business’s appetite for cyber risk – up from a third in last year’s survey – and 50 per cent said the board does review and challenge reports on the security of customers’ data.

The number of boards that believe they have a clear understanding of the impact of a cyber attack was also higher this year, rising from 49 per cent to 57 per cent.

The survey also posed a set of questions about May 2018’s EU General Data Protection Regulation, which found that 97 per cent of the UK’s top firms had at least heard of the new rules.

However, most responses indicated that it is not classed as a board-level concern: only 13 per cent said they regularly consider GDPR at board level.

Just 6 per cent of businesses said they were completely prepared for GDPR, but almost three-quarters said they considered themselves “somewhat” prepared.

When asked what their biggest concerns were about the new laws, two areas topped the list: the requirement that companies delete a person’s data and the tightening of the consent requirements.

Although experts in data sanitisation have previously told The Reg that companies should expect data deletion terms to be tougher than anticipated, the UK’s data watchdog has taken aim at overhyped concerns about consent in a “myth-busting” article published last week.

Indeed, the government recommends in the cyber health report that businesses consult the Information Commissioner’s Office’s guidance. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/one_in_ten_of_the_uks_top_firms_dont_have_a_plan_for_cyber_attack/

Hackers scam half a million from Enigma digital currency investors

Cunning hackers have successfully duped investors out of almost $500,000 after compromising the servers of the online currency platform Enigma.

The organization, set up by MIT whiz kids and due to launch its new cryptocurrency on September 11, had its website, email servers and Slack channel hacked. The attackers then used these channels to spam out a message to those interested in the group, asking for money.

“We are pleased with the enormous support we have gotten in the last few weeks,” the bogus message reads. “The Enigma team has decided to open the Pre-Sale to the public. The hard cap for this presale will be 20 million. Please note that tokens will be calculated and distributed based on how much the pre sale raises.”

Meanwhile, the hackers had put their own digital wallet address on Enigma’s website and directed would-be investors to it. At the time of going to press they had reaped nearly $500,000, but the word is out: Enigma has shut down the offending Slack channel and is warning investors about the scam.

In a statement, Enigma said that the group had not lost any funds itself and was still planning to make its initial coin offering (think IPO but for digital currency) on September 11 as planned.

“We’re changing all passwords, engaging 2FA, and taking other security precautions,” Enigma said on its Telegram group. “It is a very very hectic time for all of us. I realize some of you lost money and are very very upset. We hear you. Give us some time and we will soon announce the next steps that concern the victims of this attack.”

The fact that the organization didn’t have two-factor authentication turned on in the first place is a red flag, and early indications are that the scam was made possible by sloppy password use or reuse. Some on social media suggest that the CEO had his password pwned on another site and was reusing it for Enigma’s servers, but that hasn’t been confirmed.
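Password reuse of the kind suspected here is easy to test for. Below is a minimal sketch using Troy Hunt’s Pwned Passwords range API, which lets you check a password against known breach corpora without ever sending the full hash off your machine; it assumes the `requests` package is installed.

```python
# Sketch: check a password against the Pwned Passwords corpus via the
# k-anonymity range API. Only the first five hex chars of the SHA-1
# hash are sent; the rest of the comparison happens locally.
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("hunter2"))  # a heavily reused password returns a large count
```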

Enigma said that it was working with the bitcoin exchange Bitfinex on freezing accounts to stop the purloined e-currencies from being moved, though it hasn’t said whether this has been successful. Any freeze is also going to be of limited use in the US after Bitfinex pulled out of the American market earlier this month. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/enigma_digital_currency_investors_scammed/

Concerns ignored as Home Office pushes ahead with facial recognition

Sure, automatic facial recognition (AFR) has its problems.

Those problems don’t seem to be troubling the UK government, though. As The Register reports, the Home Office is rushing full-speed ahead with the controversial, inaccurate, largely unregulated technology, having issued a call for bids on a £4.6m ($5.9m) contract for facial recognition software.

The Home Office is seeking a company that can set it up with “a combination of biometric algorithm software and associated components that provide the specialized capability to match a biometric facial image to a known identity held as an encoded facial image”. The main point is to integrate its Biometric Matcher Platform Service (BMPS) into a centralized biometric Matching Engine Software (MES) and to standardize what is now a fractured landscape of FR legacy systems.
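The tender doesn’t specify the matching technique, but one-to-many biometric search engines of this kind typically reduce each face to a fixed-length numeric template and compare templates by similarity. The sketch below illustrates that general idea with cosine similarity over hypothetical embeddings; it is not the actual BMPS/MES design.

```python
# Illustrative one-to-many match: compare a probe face embedding against
# a gallery of enrolled templates. Generic technique only; the embeddings
# are assumed to come from some face-encoding model, and the threshold
# is arbitrary.
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    identity, score = max(
        ((name, cosine(probe, tpl)) for name, tpl in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (identity, score) if score >= threshold else (None, score)
```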

All this in spite of an AFR pilot having proved completely useless when London police used it last year to pre-emptively spot “persons of interest” at the Notting Hill Carnival, which draws some 2m people to the west London district on the last weekend of August every year. Out of 454 people arrested last year, the technology didn’t tag a single one as a prior troublemaker.

Failure be damned, and likewise for protests over the technology’s use: London’s Metropolitan Police plan to use AFR again to scan the faces of people partying at Carnival this year, in spite of the civil rights group Liberty having called the practice racist.

The carnival is rooted in the capital’s African-Caribbean community. AFR is insult added to injury: the community which police plan to subject to face scanning is still reeling from the horrific June 14 fire at Grenfell Tower, the blackened shell of which looms over the area where Carnival takes place. Out of at least 80 missing or dead victims, many were from this community.

It’s probably safe to say that no group likes to be treated like a bunch of criminals by law enforcement grabbing their mugshots via AFR.

But those with dark complexions have even more reason to begrudge the surveillance treatment from a technological point of view.

Studies have found that black faces are disproportionately targeted by facial recognition. They’re over-represented in face databases to begin with: according to a study from Georgetown University’s Center for Privacy and Technology, in certain states black Americans are arrested at up to three times the rate their share of the population would suggest. A demographic’s over-representation in the database means that whatever error rate accrues to a facial recognition technology will be multiplied for that demographic.
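A quick back-of-envelope calculation shows why over-representation matters; the numbers below are deliberately hypothetical:

```python
# Hypothetical illustration: a fixed false-match rate generates
# disproportionately many false matches for any group over-represented
# in the database, because false matches land roughly in proportion to
# database share rather than population share.
false_match_rate = 0.15                 # hypothetical per-search error rate
population_share = 0.13                 # hypothetical share of general population
database_share = 3 * population_share   # arrested at up to 3x that rate

searches = 100_000
false_matches = searches * false_match_rate        # 15,000 in total
on_group = false_matches * database_share          # ~5,850 fall on the group
vs_population = false_matches * population_share   # ~1,950 if proportional
print(on_group, vs_population)
```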

Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.

During a recent, scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.

That’s a lot of people wrongly identified as persons of interest to law enforcement. According to a Government Accountability Office (GAO) report from August 2016, the FBI’s massive face recognition database has 30m likenesses.

The problems with American law enforcement’s use of AFR are replicated across the pond. The Home Office’s database of 19m mugshots contains hundreds of thousands of facial images that belong to individuals who’ve never been charged with, let alone convicted of, an offense.

Another commonality: in the US, one of the things the House committee focused on in its review of the FBI’s database was the FBI’s retention policy with regards to facial images. In the UK, controversy has also arisen over police’s retention of images. According to biometrics commissioner Paul Wiles, the UK’s National Police Database holds 19m images: a number that doesn’t even include all police forces. Most notably, it lacks those of the largest police force, the Metropolitan Police. A Home Office review was bereft of statistics on how those databases are being used, or to what effect, Wiles said.

How did we get to this state of pervasive facial recognition? It certainly hasn’t been taking place with voter approval. In fact, campaigners in the US state of Vermont in May demanded a halt to the state’s use of FR.

The American Civil Liberties Union (ACLU) pointed to records that show that the Vermont Department of Motor Vehicles (DMV) has conducted searches involving people merely alleged to be involved in “suspicious circumstances”. That includes minor offenses such as trespassing or disorderly conduct. Then again, some records fail to reference any criminal conduct whatsoever.

UK police have been on a similarly non-sanctioned spree. The retention of millions of people’s faces was declared illegal by the High Court back in 2012. At the time, Lord Justice Richards told police to revise their policies, giving them a period of “months, not years” to do so.

“Months”, eh? Let’s hope nobody was holding their breath, given that it took five years. The Home Office only came up with a new set of policies in February of this year.

The upshot of the new policies: police have to delete the photos. If, that is, the people in the photos complain about them. And if the photo doesn’t promise to serve some vague, undefined “policing purpose”.

In other words, police will delete the photos if they feel like it.

As The Register notes, there’s simply no official biometrics strategy in the UK, despite the government having promised to produce one back in 2013.

That sounds familiar to American ears. The FBI is also flying its FR technologies without being tethered by rules. For example, it’s required, by law, to first publish a privacy impact assessment before it uses FR. For years, it did no such thing, as became clear when the FBI’s Kimberly Del Greco – deputy assistant director of the bureau’s Criminal Justice Information Services Division – was on the hot seat, being grilled by that House committee in March.

The omission of a privacy impact assessment means that we don’t know the answer to questions such as: what happens if the system misidentifies a suspect and an innocent person is arrested?

Nobody knows, apparently. States have no rules or regulations governing the use of real-time or static facial data, or whether this data can be accessed for less serious crimes that don’t require a warrant.

It’s almost as if law enforcement in both countries have discovered a new tool to make their job easier but want to use it on the quiet, with as little fuss as possible, and hopefully without all these messy, inconvenient civil rights questions and all those tiresome protests.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EChr_hvB-aA/

Return to sender: military will send malware right back to you

Planning to weaponize malware against the US? The US military will grab it, reprogram it and send it right back to you, warned lieutenant-general Vincent Stewart of the US Defense Intelligence Agency last week.

Once we’ve isolated malware, I want to reengineer it and prep to use it against the same adversary who sought to use it against us. We must disrupt to exist.

Stewart was speaking at the Department of Defense Intelligence Information System Worldwide Conference, which includes commanders from American, Canadian and British military intelligence.

Attendees included the FBI, the CIA, the National Security Agency, the National Geospatial-Intelligence Agency and the Office of the Director of National Intelligence, along with organizations such as Microsoft, Xerox, the NFL, FireEye, and DataRobot.

The meeting focused on the growing and international nature of cyberattacks. Commander William Marks of the US Navy explained why discussing cybersecurity is important for them:

Threats are no longer constrained by international borders, economics or military might; they have no borders, age limits or language barriers, or identity. The threat could be a large nation-state or a 12-year-old hacking our network from a small, isolated country.

Janice Glover-Jones, chief information officer of the DIA, added:

In the past, we have looked inward, focusing on improving our internal processes, business practices and integration. Today we are looking outward, directly at the threat. The adversary is moving at a faster pace than ever before, and we must continue to stay one step ahead.

There are concerns about the DIA’s strategy of retooling malware and sending it back like a boomerang to attackers. Sophisticated attacks make it even more difficult to determine an origin and specific attacker – what if the malware the DIA sends attacks a teenage script kiddie? What if the DIA ends up attacking people who are unaware that their computers are part of a botnet? There’s also the concern of the DIA’s counter-attacks damaging innocent bystanders such as ISPs and web hosts.

Is this a good tactic? What do you think?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GKMPvh5uTuQ/

The $500 gizmo that cracks iPhone passcodes – and how to stop it

A recent YouTube video shows a phone-sized hacking device recovering the passcode of an iPhone 7 or 7 Plus in just a few minutes.

Posted by an American YouTuber going by the name of EverythingApplePro, the video features a $500 “iPhone unlocker” apparently bought online and imported from China.

Rather than bypassing the passcode, the $500 gizmo (which can automatically try out passcodes on up to three iPhones at the same time) keeps trying codes in sequence – e.g. 0000, 0001, and so on – until it figures out that it just entered the right one, presumably from how the phone reacts.

You then read the code off the gizmo and you should be able to unlock the phone for yourself any time the lockscreen comes up.

According to the video, there are some special situations on some iPhone versions, halfway through a firmware update, in which you don’t get locked out after making too many wrong guesses.

The gizmo, it seems, exploits these conditions so it can keep guessing pretty much for ever.
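In code terms, the attack is nothing cleverer than sequential enumeration with the lockout removed. A minimal simulation might look like this, where `try_passcode` is a hypothetical stand-in for the gizmo’s hardware interaction with the handset:

```python
# Sketch of the gizmo's logic: enumerate passcodes in order, pacing
# guesses at the exploit's best-case rate, until the phone unlocks.
# `try_passcode` is hypothetical; the real device does this over USB.
import time

GUESSES_PER_MINUTE = 6  # best-case rate reported in the video

def brute_force(try_passcode, digits=4):
    for n in range(10 ** digits):
        code = f"{n:0{digits}d}"      # 0000, 0001, 0002, ...
        if try_passcode(code):        # hypothetical device call
            return code
        time.sleep(60 / GUESSES_PER_MINUTE)
    return None                       # keyspace exhausted
```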

Sounds scary!

Fortunately – although we don’t have a spare iPhone or one of the $500 unlockers to verify any of this – the reality is less dramatic than you might at first think.

Firstly, you need to have changed your password very recently (TechCrunch says “within the last minute or so”) for the gizmo to be able to guess at a non-glacial rate.

Secondly, you need to force a firmware update to get the phone into a state where the repeated guesses will work.

Thirdly, you need to have a short passcode.

According to the video, the cracking device can only try out about six passwords a minute at best; according to TechCrunch, this guessing rate seems to be 20 times slower if your password was last changed more than about 10 minutes ago. The three phones cracked in the 12-minute video were deliberately configured with the passcodes 0015, 0016 and 0012 so they would fall to the gizmo – which started at 0000 on each phone.

So even if your iPhone falls into the wrong hands, a cracker using this gizmo is only likely to succeed if you have a very short passcode, or you have chosen one that is likely to be at the top of any “try these first” list, such as 123456, 111111 or 5683 (it spells out LOVE, in case you are wondering).

Apparently, only iPhone 7 and 7 Plus models (plus some iPhone 6 and 6s models) have this vulnerability, if that’s not an overstated way to describe it, and the bug will be eliminated anyway when iOS 11 comes out.

We’ve seen speculation that the vendor of the gizmo has started advertising it pretty openly – rather than just promoting it quietly to law enforcement or in underground forums – because it will be even less useful than it is now once iOS 11 ships.

Assuming TechCrunch is correct, if you have a six-digit passcode and haven’t changed your password in the past minute or so, you can expect to keep this gizmo guessing for about 10 years on average.

Presumably, all other things being equal, every extra digit in your passcode slows down the guessing time by another factor of 10, so a seven-digit passcode ought to hold out until the 22nd century – if your iPhone’s battery keeps going that long.
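For the curious, the arithmetic behind those estimates is easy to reproduce. Note that the slow-rate figure below is an assumption back-solved from the article’s “about 10 years” claim, which implies roughly one guess every ten minutes:

```python
# Expected time to crack a passcode by sequential guessing: on average
# the attacker searches half the keyspace. The 6/min figure is the
# video's best case; 0.1/min is inferred from the ten-year estimate.
MINUTES_PER_YEAR = 365 * 24 * 60

def avg_years(digits, guesses_per_minute):
    return (10 ** digits / 2) / guesses_per_minute / MINUTES_PER_YEAR

print(avg_years(4, 6))     # ~0.0016 years, i.e. about 14 hours
print(avg_years(6, 0.1))   # ~9.5 years for a six-digit code
print(avg_years(7, 0.1))   # ~95 years: each extra digit adds a factor of 10
```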

What to do?

Our suggestions, admittedly based only on hearsay so far, are:

  • Keep your phone close at hand immediately after you change the password. As far as we can see, the crook needs to pounce on it soon after you’ve done so for the attack to be even vaguely practicable.
  • Choose the longest passcode you can tolerate. Six digits is the minimum Apple will currently permit; try going longer than that.
  • Upgrade to iOS 11 as soon as you can when it comes out. There will almost certainly be dozens of other critical security bug fixes included in iOS 11, giving you plenty of good reasons to patch early anyway.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jR0vbZJT8iM/

British snoops at GCHQ knew FBI was going to arrest Marcus Hutchins

Secretive electronic spy agency GCHQ was aware that accused malware author Marcus Hutchins, aka MalwareTechBlog, was due to be arrested by US authorities when he travelled to the United States for the DEF CON hacker conference, according to reports.

The Sunday Times – the newspaper where the Brit government of the day usually floats potentially contentious ideas – reported that GCHQ was aware that Hutchins was under surveillance by the American FBI before he set off to Las Vegas.

Hutchins, 23, was arrested on August 2 as he boarded his flight home. He had previously been known to the public as the man who stopped the WannaCry ransomware outbreak.

Government sources told The Sunday Times that Hutchins’ arrest in the US had freed the British government from the “headache of an extradition battle” with the Americans. This is a clear reference to the cases of alleged NASA hacker Gary McKinnon, whose attempted extradition to the US failed in 2012, and accused hacker Lauri Love, who is currently fighting an extradition battle along much the same lines as McKinnon.

One person familiar with the matter told the paper: “Our US partners aren’t impressed that some people who they believe to have cases against [them] for computer-related offences have managed to avoid extradition.”

Hutchins had previously worked closely with GCHQ through its public-facing offshoot, the National Cyber Security Centre, to share details of how malware operated and the best ways of neutralising it. It is difficult to see this as anything other than a betrayal of confidence, particularly if British snoopers were happy for the US agency to make the arrest – as appears to be the case.

American prosecutors charged Hutchins with six counts related to the creation of the Kronos banking malware. He faces a potential sentence of 40 years in prison. He pleaded not guilty to the charges last week.

Hutchins’ bail conditions are unusually lenient for an accused hacker, with the Milwaukee court hearing his plea more or less relaxing all restrictions on him – with the exception of not allowing him to leave the US and prohibiting him from visiting the domain that sinkholed the WannaCry malware.

The man himself has been active on Twitter again since his bail restrictions were lifted.

Previously, FBI agents had tried claiming Hutchins might try obtaining firearms to commit crimes, based solely on his having tweeted about visiting a shooting range in Las Vegas – a common tourist pastime in Sin City. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/gchq_knew_marcus_hutchins_risked_arrest_fbi/

Bitcoin-accepting sites leave cookie trail that crumbles anonymity

Bitcoin transactions might be anonymous, but on the Internet, its users aren’t – and according to research out of Princeton University, linking the two together is trivial on the modern, much-tracked Internet.

In fact, linking a user’s cookies to their Bitcoin transactions is so straightforward, it’s almost surprising it took this long for a paper like this to be published.

The paper sees privacy researcher Dillon Reisman and Princeton’s Steven Goldfeder, Harry Kalodner and Arvind Narayanan demonstrate just how straightforward it can be to link cookies to cryptocurrency transactions:

[Figure from the Arxiv paper: “Sorry Alice: we know who you are”]

Only small amounts of transaction information need to leak, they write, in order for “Alice” to be associated with her Bitcoin transactions. It’s possible to infer users’ identities even if they use privacy-protecting services like CoinJoin, a protocol designed to make Bitcoin transactions more anonymous by making it impossible to infer which inputs and outputs belong to each other.

Of 130 online merchants that accept Bitcoin, the researchers say, 53 leak payment information to 40 third parties, “most frequently from shopping cart pages,” and most of these leaks are deliberate (for advertising, analytics and the like).

Worse, “many merchant websites have far more serious (and likely unintentional) information leaks that directly reveal the exact transaction on the blockchain to dozens of trackers”.

Of the 130 sites the researchers checked, a total of 49 merchants shared users’ identifying information, and 38 did so even when the user tried to stop them with tracking protection.

In other words, even someone running tracking protection had a substantial amount of personal information passed around by the sites examined in the study.
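The linkage itself needs nothing more exotic than matching what a tracker sees on a payment page – an approximate amount and a timestamp – against the public blockchain. Here is a toy sketch of the idea, with simplified stand-in data rather than the paper’s actual pipeline:

```python
# Toy illustration: match an (amount, time) pair leaked to a tracker
# against public blockchain transactions. Real analysis handles fees,
# exchange-rate conversion and CoinJoin mixing; this is the bare idea.
from datetime import datetime, timedelta

def link_purchase(leak, txs, amount_tol=0.0001, window_minutes=30):
    amount, seen_at = leak
    window = timedelta(minutes=window_minutes)
    return [
        tx["txid"] for tx in txs
        if abs(tx["amount"] - amount) <= amount_tol
        and abs(tx["time"] - seen_at) <= window
    ]

txs = [
    {"txid": "a1", "amount": 0.0421, "time": datetime(2017, 8, 1, 12, 5)},
    {"txid": "b2", "amount": 0.9000, "time": datetime(2017, 8, 1, 12, 7)},
]
print(link_purchase((0.0421, datetime(2017, 8, 1, 12, 0)), txs))  # ['a1']
```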

Users have very little protection against all this, the paper says: the danger is created by pervasive tracking, and it’s down to merchants to give users better privacy.

Since, as they write, “most of the privacy-breaching data flows we identify are intentional”, that seems a forlorn hope. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/20/bitcoins_anonymity_easy_to_penetrate/