STE WILLIAMS

Smartmobe Wi-Fi blabs FAR TOO MUCH about us, warn experts

Smartphones leak far more personal information about their users than previously imagined, according to new research.

Security researchers at Sensepost were able to track and profile punters and their devices by observing the phones’ attempts to join Wi-Fi networks. Daniel Cuthbert and Glenn Wilkinson created their own distributed data interception framework, dubbed Snoopy, that profiled mobiles, laptops and their users in real-time.

Smartphones tend to keep a record of Wi-Fi base stations their users have previously connected to, and often poll the airwaves to see if a friendly network is within reach. Although this is supposed to make joining wireless networks seamless for punters, it also makes it too easy for the researchers to link home addresses and other information to individually identifiable devices.

“We tested in numerous countries and during one rush-hour period in central London,” Cuthbert told El Reg. “We saw over 77,000 devices and as a result, were able to map device IDs to the last 5 APs they connected to. Then using geo-location, we were able to map them out to physical locations.”

“Apple devices were the noisiest based upon our observations,” he added.

This phase of the project involved only passively listening to Wi-Fi network requests, rather than complete interception, making it legal under UK law. To process the huge volume of data collected, the pair used Maltego Radium, a visualisation tool developed by third-party firm Paterva.

Cuthbert and Wilkinson set up Wi-Fi access points that collected probe requests of smartphones and other wireless devices before deploying a few of these around London, and using Maltego Radium to make sense of the data collected in real-time. “We could work out the most common movement patterns using the SSID probes sent out from their mobile phones,” Cuthbert explained.
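The first profiling step is conceptually tiny: group the SSIDs each device probes for by its MAC address. The sketch below uses an invented probe log (real capture code would pull these pairs from 802.11 probe-request frames; Snoopy's actual implementation is not shown here):

```python
from collections import defaultdict

# Hypothetical probe-request log: (device MAC, SSID probed for). In a live
# capture these pairs would come from 802.11 probe-request frames.
probe_log = [
    ("aa:bb:cc:00:00:01", "HomeHub-1234"),
    ("aa:bb:cc:00:00:01", "Starbucks WiFi"),
    ("aa:bb:cc:00:00:01", "Acme-Corp-Guest"),
    ("aa:bb:cc:00:00:02", "HomeHub-1234"),
    ("aa:bb:cc:00:00:02", "Heathrow-Free-WiFi"),
]

def profile_devices(log):
    """Group probed SSIDs by device MAC, the first profiling step."""
    profiles = defaultdict(set)
    for mac, ssid in log:
        profiles[mac].add(ssid)
    return profiles

profiles = profile_devices(probe_log)
# Two handsets probing for the same (presumably home) SSID were probably
# at the same address at some point.
print(profiles["aa:bb:cc:00:00:01"] & profiles["aa:bb:cc:00:00:02"])
# → {'HomeHub-1234'}
```

Once grouped like this, each SSID can be geo-located (as the researchers did) to turn a device's probe history into a map of places its owner has been.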

A similar system could be used by the unscrupulous to carry out targeted attacks.

“If we wanted to do illegal activities, we could pretend to be one of those networks, route all traffic through our central server and then perform analysis on the traffic,” Cuthbert explained. “This would allow us to dump all credentials, strip down SSL connections, injecting malicious code into all web pages requested, grab social media credentials etc.”

The research established that smartphones leak a lot more information than even tech-savvy people would imagine. “Apple, Google and so on do not have any documentation about how noisy their devices are,” Cuthbert said.

The security bod advised users to apply more common sense and to disable Wi-Fi scanning until they actually need to access the web. “We will click on anything, and rarely turn bits off when outside, for example,” Cuthbert explained. “People are more used to ensuring their laptops are secure.”

The two researchers embarked on the six-month surveillance project as governments stepped up efforts to monitor citizens’ internet-based communications – such as recording websites visited and emails sent – under the guise of countering terrorism.

Several private organisations, such as Palantir, have developed technologies to identify undesirable activities from collected data, or perhaps – according to the more paranoid – to profile all citizens.

Cuthbert and Wilkinson outlined the fruits of their research in a well-received presentation titled Terrorism, tracking, privacy and human interactions at the recent 44con conference in London. A first-hand account of this talk can be found here. A Naked Security podcast featuring an interview with the Sensepost team and banking insecurity expert Ross Anderson can be found here [MP3]. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/14/smartphone_tracking_research/

The perfect CRIME? New HTTPS web hijack attack explained

More details have emerged of a new attack that allows hackers to hijack encrypted web traffic – such as online banking and shopping protected by HTTPS connections.

The so-called CRIME technique lures a vulnerable web browser into leaking an authentication cookie created when a user starts a secure session with a website. Once the cookie has been obtained, it can be used by hackers to log in to the victim’s account on the site.

The cookie is deduced by tricking the browser into sending compressed encrypted requests for files to an HTTPS website and exploiting information inadvertently leaked in the process. During the attack, the encrypted requests – each of which contains the cookie – are continually modified by malicious JavaScript code, and the changing size of the compressed message is used to determine the cookie’s contents character by character.

CRIME (Compression Ratio Info-leak Made Easy) was created by security researchers Juliano Rizzo and Thai Duong, who cooked up the BEAST SSL exploit last year. CRIME works on any version of TLS, the underlying technology that protects HTTPS connections. The number of requests an attacker would need to make to pull off the hijack is fairly low – up to six requests per cookie byte. Unlike the BEAST attack, CRIME can’t be defeated by configuring the web server to use a different encryption algorithm.

Punters using web browsers that implement either TLS or SPDY compression are potentially at risk – but the vulnerability only comes into play if the victim visits a website that accepts the affected protocols. Support is widespread but far from ubiquitous.

The researchers worked with Mozilla and Google to ensure that both Firefox and Chrome are protected. Microsoft’s Internet Explorer is not vulnerable to the attack, and only beta versions of Opera support SPDY. Smartphone browsers and other applications that rely on TLS may be vulnerable, according to Ars Technica.

“Basically, the attacker is running a script on Evil.com,” Rizzo explained to Kaspersky Labs’ Threatpost. “He forces the browser to open requests to Bank.com by, for example, adding <img> tags with the src pointing to Bank.com. Each of those requests contains data from mixed sources.”

Each encrypted request includes an image file name – a constantly changing detail that is generated by the malicious script; the browser’s identification headers, which don’t change; and the login cookie, the target of the attack. When the file name matches part of the login cookie, the size of the message drops because the compression algorithm removes this redundancy.

“The problem is that compression combines all those sources together,” Rizzo added. “The attacker can sniff the packets and get the size of the requests that are sent. By changing the [file name] path, he could attempt to minimise the request size, ie: when the file name matches the cookie.”
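The shrinking-message oracle Rizzo describes is easy to reproduce with nothing more than zlib. The sketch below is purely illustrative (a real CRIME attack measures ciphertext lengths of live TLS/SPDY requests, not a local zlib call, and the cookie name and value here are invented), but the principle is identical: the guess that matches the cookie compresses smaller.

```python
import string
import zlib

SECRET = "sessionid=7f3a9"  # the cookie the attacker wants (unknown to them)

def compressed_len(path):
    # Model of one request: the attacker-chosen path is compressed together
    # with fixed headers and the secret cookie, as TLS/SPDY compression would.
    request = f"GET /{path} HTTP/1.1\r\nCookie: {SECRET}\r\n"
    return len(zlib.compress(request.encode()))

def recover(known="sessionid="):
    alphabet = string.ascii_lowercase + string.digits + "="
    while len(known) < 40:  # safety cap for the demo
        # The candidate that extends the true cookie compresses smallest,
        # because DEFLATE can back-reference one more byte of redundancy.
        best = min(alphabet, key=lambda c: compressed_len(known + c))
        if compressed_len(known + best) >= compressed_len(known + "\x00"):
            break  # no candidate beats a byte that cannot match: cookie done
        known += best
    return known

print(recover())  # → sessionid=7f3a9
```

In practice byte-granular ciphertext lengths can blur differences of a few bits, which is why the real attack needs several requests per cookie byte rather than the single clean measurement this toy makes.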

This brute-force attack has been demonstrated against several sites including Dropbox, Github and Stripe. Affected organisations were notified by the pair, and the websites have reportedly suspended support for the leaky encryption compression protocols. Ivan Ristic, director of engineering at Qualys, estimates 42 per cent of sites support TLS compression.

The researchers will present their work at the Ekoparty security conference in Buenos Aires, Argentina next week. In the meantime, Jeremiah Grossman, founder and chief technology officer of WhiteHat Security, has a detailed take on the attack here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/14/crime_tls_attack/

UK boffins get £3.8m pot to probe ‘science of cyber-security’

GCHQ, the UK’s nerve-centre for eavesdropping spooks, has established what’s billed as Blighty’s first academic research institute to investigate the “science of cyber security”.

The lab – which was set up with the Research Councils’ Global Uncertainties Programme and the government’s Department for Business, Innovation and Skills – is a virtual organisation involving several universities.

The institute will throw together leading computer science academics, mathematicians and social scientists – the latter being useful for studying the behaviours of criminals – to improve the nation’s security strategies. The outfit, funded by a £3.8m grant, will link up experts in the UK and overseas. It’s hoped that both the private sector and government will gain from the fruits of this collaboration, which will run for at least three and a half years, starting next month.

Cabinet Office minister Francis Maude, who oversees cyber-security, said: “The UK is one of the most secure places in the world to do business – already 8 per cent of our GDP is generated from the cyber world and that trend is set to grow. But we are not complacent.”

Participating universities were selected following a tough competitive process. The successful teams were: University College London, working with University of Aberdeen; Imperial College, working with Queen Mary College and Royal Holloway, University of London; and Newcastle University and Royal Holloway, working with Northumbria University.

University College London was selected to host the Research Institute, and Professor Angela Sasse will take the role of director of research.

The establishment of the institute is part of the UK’s wider IT security plan, which aims to make the UK one of the most secure places in the world to do business among other objectives. GCHQ has been given a starring role in putting together the nation’s cyber-security strategy, and the lion’s share of a £650m budget following a recent defence review.

Future plans include a scheme to establish a second research institute, increased sponsorship of PhD research, and a scheme to highlight so-called Academic Centres of Excellence in Cyber Security Education. Eight UK universities have already gained this status: University of Bristol, Imperial College London, Lancaster University, University of Oxford, Queen’s University Belfast, Royal Holloway, University of Southampton and University College London.

No sign of Cambridge on GCHQ list

A lot of cutting-edge security research is taking place at universities outside this scheme, most obviously by Ross Anderson’s group at the University of Cambridge Computer Laboratory. Anderson’s team is well-known for exposing shortcomings in banking security, providing expert advice to various parliamentary committees, and criticising trusted computing and government policy that relates to information security – such as the protection of massive databases and health records. Anderson’s group guards its independence jealously and that probably explains why it’s not on the GCHQ list.

Other interesting courses outside the centre of excellence programme include the ethical hacking degree course at Abertay University in Dundee. The course has been successful in getting people into work in various companies, by developing security skills suitable for the real business world. The course aims to develop practical skills and a hacker’s mindset in students, but with a grounding in ethics, giving graduates a skill-set absent from traditional computer science degrees.

News of the institute emerged in the same week that professional security certification body (ISC)² suggested that, instead of cooperating with security organisations and building on their existing know-how, government bigwigs often just start from scratch with IT security. The certification body said that collaboration with experts and academics was the only solution that would work in the long term.

Paul Davis, the director for Europe at security tools firm FireEye, commented: “When it comes to IT security and international cybercrime, there seems to be an ongoing sense of inaction and complacency. In fact, GCHQ recently admitted that businesses are failing to do enough to protect themselves from ‘real and credible threats to cybersecurity’ – and in that respect, this is very welcome news indeed.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/14/uni_security_think_tank/

‘Over half’ of Android devices have unpatched holes

Duo Security is claiming that “over half” of Android devices have unpatched vulnerabilities.

The company’s Jon Oberheide says in this blog post that the results come from the first slew of users of the company’s X-Ray Android vulnerability scanner.

Promising to announce detailed results on Friday (September 14) at the Rapid7 United Summit conference in San Francisco, Oberheide says the results come from X-Ray scans of more than 20,000 users of the software – the sample base from which Duo draws its “50 percent” claim.

The vulnerabilities X-Ray tests for include an ASHMEM bug that allows devices to be rooted; Exploid, in which Android’s init daemon fails to confirm that Netlink messages are coming from the trusted kernel; Gingerbreak, which exploits the same Netlink issue but uses the volume manager as its vector; the Levitator privilege-escalation bug; and the Mempodroid, Wunderbar, ZergRush and Zimperlich bugs.
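Conceptually, a scanner like X-Ray just compares what is on the device against a table of known-vulnerable component versions. The table and version numbers below are invented for illustration and are not X-Ray's actual database (its real checks probe the vulnerable components directly rather than trusting version strings):

```python
# Invented vulnerability table: bug name -> last affected Android version.
# Illustrative only; not X-Ray's real data.
KNOWN_BUGS = {
    "Gingerbreak": (2, 3, 6),
    "Zimperlich": (2, 2, 2),
    "Mempodroid": (4, 0, 3),
}

def unpatched(bugs, device_version):
    """Return the bugs whose affected range still covers this version."""
    return sorted(name for name, last in bugs.items()
                  if device_version <= last)

print(unpatched(KNOWN_BUGS, (2, 3, 4)))  # → ['Gingerbreak', 'Mempodroid']
```

The version-tuple comparison is the whole trick: an old build that nobody has pushed a patch to simply stays inside the affected range forever, which is why the patching bottleneck described below matters.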

Android patching is a pain in the neck, involving as it does the complex ecosystem of Google, device makers and carriers. The easiest way to get an up-to-date version of Android is to buy a new device.

Alternatively, we could just wait until Android is sued off the face of the planet and replaced by a new Google operating system. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/14/duo_says_android_security_nightmare/

Chinese man on trial for smuggling hi-tech military secrets

The federal trial of a Chinese man accused of smuggling hi-tech military secrets from the US into his homeland in the hope of landing a better-paid job began this week.

Both sides exchanged opening salvoes on Wednesday in the case of Sixing Liu, an employee at New Jersey-based firm Space Navigation, a division of US defence contractor L3 Communications which produces a range of navigation, intelligence and surveillance technologies.

Prosecutors allege that Liu, who is a permanent US resident, downloaded trade secrets to his laptop and took them to China to present at conferences on nanotechnology in Chongqing in 2009 and Shanghai in 2010, according to AP.

He has apparently been charged with exporting defence-related data without a licence from the Department of State, possessing stolen trade secrets and telling porkies to US officials.

On the latter, Liu apparently lied to a US customs officer on return from the Shanghai trip about his visit to a conference there, despite the official spotting him with a VIP conference badge.

Liu’s defence team are portraying him as a hard-working employee who downloaded emails to his laptop to work on them without internet access, but who naively wasn’t up to speed with US import-export laws.

Assistant US Attorney Gurbir Singh Grewal alleged that Liu deliberately broke company rules about taking work home without a supervisor’s permission because he was looking to land himself a better job back in China.

“Is the fact he went to his alma mater and spoke some sort of motive in this case?” Liu’s attorney James Tunick countered, according to AP.

“There was nothing nefarious about this conference. He wasn’t looking for work in China. There was no motive in this case because there was no crime.”

The case comes at a rather delicate time for US-China relations, given the high profile Congressional investigation into national security concerns which have been raised over allowing Huawei and ZTE access to the US telecoms infrastructure market.

Tensions were already heightened after top US defence contractor United Technologies was fined $75m (£47.8m) by a federal court in July after confessing to more than 500 violations of export restrictions, including supplying software used in China’s first attack helicopter – claims which China denies. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/14/sixing_liu_trade_secrets_smuggle_china/

Microsoft seizes Chinese dot-org to kill Nitol bot army

Microsoft has disrupted the emerging Nitol botnet – and more than 500 additional strains of malware – by taking control of a rogue dot-org website. The takedown is the latest in Microsoft’s war against armies of hacker-controlled PCs.

The Windows 8 giant’s Operation b70 team discovered crooks were selling computers loaded with counterfeit software and malware – including a software nastie that takes control of each machine to carry out orders from the Nitol central command server.

Operation b70 uncovered the industrial-scale scam during an investigation into insecure supply chains [PDF]. Microsoft blames corrupt but unnamed resellers in China.

Computers in the Nitol botnet would communicate with a command server whose DNS was provided by Chinese-run 3322.org, which has been linked to malicious activity since 2008. Microsoft investigators also discovered that other servers using 3322.org, which offers its services for free, harboured more than 500 different strains of malware across more than 70,000 sub-domains. These nasties included key-stroke loggers and banking Trojans.

Microsoft obtained a US court order to seize control of 3322.org – a site Google’s Safe Browsing system warned was home to “malicious software including 1609 exploits, 481 trojans and 6 scripting exploits”. The order instructs the US-based Public Interest Registry, which operates the DNS for all .org domains, to redirect internet traffic for 3322.org to the Redmond giant’s servers.

Sub-domains associated with the malware have been blocked while legitimate domains have been allowed to stay online, as a statement from Microsoft on the takedown explains:

On Sept 10, the court granted Microsoft’s request for an ex parte temporary restraining order against Peng Yong, his company and other John Does. The order allows Microsoft to host the 3322.org domain, which hosted the Nitol botnet, through Microsoft’s newly created domain name system (DNS). This system enables Microsoft to block operation of the Nitol botnet and nearly 70,000 other malicious subdomains hosted on the 3322.org domain, while allowing all other traffic for the legitimate subdomains to operate without disruption.
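The filtering Microsoft describes amounts to a per-subdomain allow/deny decision at the DNS layer. A minimal sketch of that policy follows; the subdomain names, blocklist and addresses are all invented for illustration (the sinkhole address is an RFC 5737 documentation IP), not Microsoft's or Nominum's actual configuration:

```python
# Hypothetical sinkhole policy for a seized domain: known-malicious
# subdomains are answered with a sinkhole address, while legitimate
# subdomains keep resolving to their real hosts.
MALICIOUS_SUBDOMAINS = {"evil1.3322.org", "bot-cc.3322.org"}
SINKHOLE_IP = "192.0.2.1"  # RFC 5737 documentation address
LEGIT_UPSTREAM = {"home.3322.org": "198.51.100.7"}  # invented example

def resolve(name):
    """Return the address a filtering DNS server would hand out."""
    if name in MALICIOUS_SUBDOMAINS:
        return SINKHOLE_IP           # blocked: bot traffic hits the sinkhole
    return LEGIT_UPSTREAM.get(name)  # legitimate subdomains stay online

print(resolve("bot-cc.3322.org"))  # → 192.0.2.1
print(resolve("home.3322.org"))    # → 198.51.100.7
```

Because the bots locate their command server by name, redirecting those answers is enough to cut them off without touching the infected machines themselves.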

DNS security firm Nominum helped in the legal case, filed in the US District Court for the Eastern District of Virginia, as well as assisting Microsoft in filtering the 3322.org domain traffic.

The operation was part of the ongoing Project MARS (Microsoft Active Response for Security), which previously led to the successful takedown of the Waledac, Rustock and Kelihos botnets. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/13/botnet_takedown/

IT chiefs’ purses drained, security budget still safe

Security looks set to be one part of companies’ IT budgets that will be comparatively safeguarded in the recession, if the beancounters at Gartner are to be believed.

Global spending is forecast to rise more than 8 per cent this year to $60bn, reaching $86bn by 2016.

Gartner research director Lawrence Pingree said that, based on its CIO spending survey in June, businesses are clearly “prioritising on security budgets”.

“The security infrastructure market is expected to experience positive growth over the forecast period, despite risks of further economic turbulence,” he said.

Security services and security software will lead the growth stakes driven by the “persistent threat landscape…and evolving attack patterns that are growing in sophistication”.

Roughly 45 per cent of CIOs surveyed told Gartner they would plough more cash into security; one half expected to at least maintain budgets; and just 5 per cent planned to spend less.

Based on its findings, Gartner reckons the economic meltdown in the US and Europe has had “some impact” on companies’ security buying habits in those regions, unlike firms in emerging countries.

“We expect current market trends will keep security infrastructure growth at between 9 per cent and 11 per cent from 2011 through 2013, but we are factoring in a higher degree of caution in terms of buying behaviour,” said Pingree. ®
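Those two forecasts hang together: growing $60bn in 2012 to $86bn by 2016 implies a compound annual growth rate of roughly 9.4 per cent, in line with the 9 to 11 per cent near-term range Pingree quotes. A quick arithmetic check:

```python
# Gartner's figures: $60bn in 2012 rising to $86bn by 2016 (four years).
start, end, years = 60.0, 86.0, 4
cagr = (end / start) ** (1 / years) - 1  # compound annual growth rate
print(round(cagr * 100, 1))  # → 9.4
```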

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/13/gartner_security/

Blackhole 2: Crimeware kit gets stealthier, Windows 8 support

Cybercrooks have unveiled a new version of the Blackhole exploit kit. Version 2 of Blackhole is expressly designed to better avoid security defences. Support for Windows 8 and mobile devices is another key feature, a sign of the changing target platforms for malware-based cyberscams.

The release also includes a spruced-up user interface – so the tool can now be used by the less technically able criminal – as well as a revised licensing structure that puts a greater emphasis on renting rather than buying the application.

Rental prices run from $50 a day while leasing the software for a year costs around $1,500.

The Blackhole exploit kit has been around for about two years, during which time it has become the preferred tool for running drive-by download attacks. Cybercrooks must first find a site that can be exploited to insert malicious code, thus exposing users of often legitimate sites to attacks from linked hacker-controlled portals powered by Blackhole. The exploit kit will also attempt to download malware onto the PCs of visiting surfers by taking advantage of any unpatched Java, browser or Adobe Flash vulnerability it manages to find.

The end result is that an unpatched Windows PC becomes infected with a banking Trojan, fake anti-virus or botnet agent after visiting a compromised website.

A good write-up on the new features of Blackhole and what this means for security firms and enterprise security defenders can be found in a blog post by Sophos here. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/13/blackhole_exploit_kit_revamp/

Prof casts doubt on Stuxnet’s accidental ‘great escape’ theory

Analysis An expert has challenged a top theory on how the infamous Stuxnet worm, best known for knackering Iranian lab equipment, somehow escaped into the wild.

New York Times journalist David Sanger wrote what’s become the definitive account of how Stuxnet was jointly developed by a US-Israeli team. The sophisticated malware was deployed to sabotage high-speed centrifuges at Iran’s nuclear fuel processing plant by infecting and commandeering the site’s control systems.

According to Sanger’s sources, an Iranian technician’s laptop was plugged into a Stuxnet-sabotaged centrifuge and was infected by the malfunctioning equipment. The worm then “escaped into the wild” when the laptop was connected to the internet, granting the software nastie safe passage to the wider world, according to the newspaper journalist’s contacts.

Now Prof Larry Constantine, a software engineer with years of experience in industrial control systems, claims some parts of Sanger’s account are just not possible. According to the prof, Sanger may have been misled by his political sources.

In an IEEE Spectrum Techwise Conversations podcast, Prof Constantine explained that the Stuxnet worm is like a military missile: one half of it is the rocket engine, designed to spread the malware from PC to PC by exploiting security vulnerabilities in Microsoft’s Windows operating system; the other half is the explosive payload, a block of malicious code injected into Siemens-built industrial controllers.

Prof Constantine asserted that the specialised payload hidden away in the control systems was incapable of infecting a Windows PC, thus it is impossible for the Iranian technician’s laptop to have picked up the worm from the uranium enrichment machinery. It is not known exactly how the engineer’s portable PC was infected.

The academic also said the malware was designed to restrict itself to local-area networks, specifically the plant’s internal LAN, and could not have spread to the wider internet under its own steam.

The prof claimed in the podcast:

First of all, the Stuxnet worm did not escape into the wild. The analysis of initial infections and propagations by Symantec show that, in fact, that it never was widespread, that it affected computers in closely connected clusters, all of which involved collaborators or companies that had dealings with each other.

Secondly, it couldn’t have escaped over the internet, as Sanger’s account maintains, because it never had that capability built into it: it can only propagate over [a] local-area network, over removable media such as CDs, DVDs, or USB thumb drives. So it was never capable of spreading widely, and in fact the sequence of infections is always connected by a closed chain.

Another thing that Sanger got wrong… was the notion that the worm escaped when an engineer connected his computer to the programmable logic controllers (PLCs) that were controlling the centrifuges and his computer became infected, which then later spread over the internet. This is also patently impossible because the software that was resident on the PLCs is the payload that directly deals with the centrifuge motors; it does not have the capability of infecting a computer because it doesn’t have any copy of the rest of the Stuxnet system, so that part of the story is simply impossible.

In addition, the explanation offered in his book and in his article is that Stuxnet escaped because of an error in the code, with the Americans claiming it was the Israelis’ fault that suddenly allowed it to get onto the internet because it no longer recognised its environment. Anybody who works in the field knows that this doesn’t quite make sense, but in fact the last version, the last revision to Stuxnet, according to Symantec, had been in March, and it wasn’t discovered until June 17. And in fact the mode of discovery had nothing to do with its being widespread in the wild because in fact it was discovered inside computers in Iran that were being supported by a Belarus antivirus company called VirusBlokAda.

Prof Constantine, an academic in the mathematics and engineering department at the University of Madeira in Portugal, argued that these technical details matter “because it raises broad questions about the nature of the so-called leaks from administration personnel to Sanger”. The academic does not dispute that Stuxnet was a joint US-Israeli operation to create malware specifically to sabotage Siemens equipment at processing plants in Natanz, Iran.

Costin Raiu, a senior security researcher at Kaspersky Lab, said the professor was right to question the infected laptop theory, and added “Stuxnet did not ‘escape’ into the wild by accident”. It’s possible that, rather than admit the worm was deployed wider than a specific Iranian installation, the US administration let it be known that its super-weapon had accidentally broken free of its constraints.

El Reg asked several independent experts for a reality check on the technical aspects of Prof Constantine’s criticism of Sanger’s account. Folks at security tools biz Sourcefire and antivirus firm Eset agreed that it was unlikely the laptop could have been compromised by plugging it into a Stuxnet-infected PLC. However the experts were split on whether or not Stuxnet was capable of spreading across the internet.

Eset earlier published a report [PDF] – jointly written by David Harley, Eugene Rodionov, Juraj Malcho and Aleksandr Matrosov – on the Stuxnet outbreak. The team was sympathetic to the prof’s fresh take on several counts, but dismissed his suggestion that Stuxnet was unable to escape into the wild:

The way the IEEE story describes it, there’s a confusion somewhere between the infection mechanism and the payload that clearly casts doubt on Sanger’s account, if it really has to do with a backward infection from a PLC. The account of the backward infection doesn’t sound convincing technically. A vulnerability in software interfacing between PLC and another system might account for it in principle, but doesn’t seem likely given the nature of the payload programming.

Constantine is more or less correct in that Stuxnet spread by USB device, removable media or network shares rather than normal internet channels. But network share infection is kind of ambivalent and [network file system protocol] SMB/CIFS is certainly capable of being used beyond the local-area network. Stuxnet’s primary infection vector was USB, but it also infected through the MS08-67 RPC vulnerability initially exploited by Conficker, the MS10-061 print spooler vulnerability, and network shares. However it might be able to propagate through the internet under some circumstances via network shares along with VPN and using the RPC vulnerability.

Although MS08-067, MS10-061 are mainly used to propagate inside the local-area network, Eugene thinks that it is possible for these vulnerabilities to allow the malware to cross the borders of adjacent networks. But did it? As Juraj points out, there’s no reason (apart from Sanger telling us so) to assume that if Stuxnet “escaped” it did so by leaking from a developer’s PC via the internet.

The bods at Eset also rubbished Prof Constantine’s contention that Stuxnet did not spread widely:

It’s nonsense to say that Stuxnet didn’t get into the wild. Constantine cites Symantec as demonstrating that Stuxnet was never widespread, but Symantec itself stated: “As of September 29, 2010, the data has shown that there are approximately 100,000 infected hosts… We have observed over 40,000 unique external IP addresses, from over 155 countries.”

However Dominic Storey, Sourcefire’s technical director for EMEA, told El Reg that the local-area network protocols exploited by Stuxnet to spread across a nuclear plant’s internal systems would be blocked at the firewall of any corporate network – or even any sensible home user’s set-up. Even a badly managed enterprise set-up would block incoming file and print sharing connections. If it didn’t, Stuxnet would be the least of the organisation’s problems.

Storey trained as a plasma physicist at the UK’s Atomic Energy Authority, specialising in nuclear fusion research, prior to embarking on a career in network security. Recently he has carried out a lot of work looking into vulnerabilities in industrial control (SCADA) systems.

“Stuxnet was not like a worm. It was written for a specific platform and its vector for spreading was from laptop to laptop or USB drive – it didn’t rip through the cosmos,” Storey said.

The physicist argued there was “merit” in Prof Constantine’s argument, which if nothing else adds a dash of further intrigue to the heated debate about the origins and purpose of Stuxnet, generally considered the world’s first cyber-weapon. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/13/stuxnet/

Cambridge boffins: Chip-and-PIN cards CAN be cloned – here’s how

Boffins at Cambridge University have uncovered shortcomings in ATM security that might be abused to create a mechanism to clone chip-and-PIN cards.

The security shortcoming may already be known to criminals, and offers an explanation for what might have happened in some otherwise baffling “phantom” withdrawal cases.

Each time a consumer uses their chip-and-PIN card, a unique “unpredictable number” is created to authenticate the transaction. Mike Bond, a research associate at the University of Cambridge Computer Lab, explains that elements of these “unique” transaction authentication numbers appear to be predictable.

The cryptographic flaw – the result of mistakes by both banks and card manufacturers in implementing the EMV* protocol – creates a means to predict that authentication code (the “unpredictable” number). If a crook had access to several other authentication codes generated by a particular card – in their paper, Bond and his associates posit a scenario in which a programmer sits behind the till at a mafia-owned shop – it would be possible to extract sensitive data from a chip-and-PIN card and thereby clone it. Bond explains further in a blog post (extract below).

An EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It’s called a “pre-play” attack. Just like most vulnerabilities we find these days some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us for which we had until recently no explanation.

The security weakness might also be used (with somewhat greater difficulty) to run man-in-the-middle attacks, or in conjunction with malware on an ATM or point-of-sale terminal, Bond adds.

Bond said he discovered the security shortcoming almost by accident, while studying a list of disputed ATM withdrawals relating to someone who had their wallet stolen in Mallorca, Spain. The consumer’s card was subsequently used to make five withdrawals, totalling €1,350, over the course of just an hour.

While studying EMV numbers for each transaction, Bond realised that the numbers shared 17 bits in common while the remaining 15 bits appeared to be some sort of counter, rather than a random number.
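
A toy version of that analysis is easy to sketch: given UNs logged from successive transactions, find which bits never change, and check whether the varying part steps like a counter. The sample values below are made up for illustration; they are not real transaction data.

```python
# Spotting a counter posing as a random number (invented sample UNs).
observed = [0x6A0C1001, 0x6A0C1002, 0x6A0C1003, 0x6A0C1004, 0x6A0C1005]

# A bit stays set in `common` only if it is identical in every observation.
common = ~0
for a, b in zip(observed, observed[1:]):
    common &= ~(a ^ b)
fixed_bits = bin(common & 0xFFFFFFFF).count("1")

# Successive differences: a counter steps by a constant, usually 1.
deltas = [b - a for a, b in zip(observed, observed[1:])]

print(fixed_bits)  # 29 fixed bits out of 32 in this contrived sample
print(deltas)      # [1, 1, 1, 1] -- the varying part is a counter
```

A genuinely random 32-bit UN would show almost no fixed bits across a handful of samples; a large fixed mask plus constant deltas is the signature Bond noticed.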

In the course of their research, the Cambridge boffins examined data from previous disputed ATM transactions as well as fresh data from ATM machines and retail Chip-and-PIN terminals – altogether 1,000 transactions at 20 different ATMs and POS terminals. This ongoing research has already “established non-uniformity of unpredictable numbers in half of the ATMs we have looked at,” according to the researchers.

‘We’ve never claimed chip-and-PIN is 100 per cent secure’

The idea that debit and credit cards fitted with supposedly tamper-proof chips might be vulnerable to a form of cloning sits awkwardly with assurances from the banking sector that the technology is highly reliable, if not foolproof.

In a statement, the UK’s Financial Fraud Action told El Reg:

We’ve never claimed that chip and PIN is 100 per cent secure and the industry has successfully adopted a multi-layered approach to detecting any newly-identified types of fraud. What we know is that there is absolutely no evidence of this complicated fraud being undertaken in the real world. It is a complicated attack. It requires considerable effort to set up and involves a series of co-ordinated activities, each of which carries a certain risk of detection and failure for the fraudster. All these features are likely to make it less attractive to a criminal than other types of fraud.

We are confident that banks are refunding customers and upholding the law – this clearly states that the innocent victim of fraud should have their money reimbursed promptly.

Bond and his colleagues were due to present a paper (PDF) based on their research at the Cryptographic Hardware and Embedded System (CHES) 2012 conference in Leuven, Belgium this week. The paper explains how the cryptographic howler might be exploited in practice.

Many ATMs and point-of-sale terminals have seriously defective random number generators. These are often just counters, and in fact the EMV specification encourages this by requiring only that four successive values of a terminal’s “unpredictable number” have to be different for it to pass conformance testing. The result is that a crook with transient access to a payment card (such as the programmer of a terminal in a Mafia-owned shop) can harvest authentication codes which enable a “clone” of the card to be used later in ATMs and elsewhere.
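
The conformance loophole the paper describes is striking enough to spell out: if the test only demands that four successive UNs differ, a bare counter sails through. This is an illustrative check written to match that description, not the official EMV test suite.

```python
# Hypothetical conformance check mirroring the EMV requirement quoted
# above: four successive "unpredictable numbers" merely have to differ.
def passes_conformance(next_un, n: int = 4) -> bool:
    values = [next_un() for _ in range(n)]
    return len(set(values)) == n  # all n values distinct?

counter = iter(range(10_000)).__next__   # a plain counter posing as an RNG
print(passes_conformance(counter))       # True: fully predictable, yet passes
print(passes_conformance(lambda: 7))     # False: only a constant would fail
```

Any generator that avoids immediate repeats passes, which is why a counter-based “unpredictable number” can be both conformant and trivially predictable.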

More commentary on the information security aspects of the potential plastic card security weakness identified by the Cambridge boffins can be found in a blog post by Sophos here. ®

Bootnote

*EMV, also known as “Chip-and-PIN”, is used in debit and credit cards issued throughout Europe and much of Asia. The technology is also beginning to be introduced in North America. The Cambridge team estimates 1.34 billion cards issued worldwide already rely on the technology, which is primarily designed to prevent card cloning – an attack that was relatively straightforward against older magnetic-stripe cards.

EMV stands for Europay, MasterCard and Visa – the three backers of the technology.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2012/09/13/chip_and_pin_security_flaw_research/