
Code-busters lift RSA keys simply by listening to the noises a computer makes


Computer scientists have shown how it might be possible to capture RSA decryption keys using the sounds emitted by a computer while it runs decryption routines.

The clever acoustic cryptanalysis attack was developed by Adi Shamir (the “S” in RSA) of the Weizmann Institute of Science along with research colleagues Daniel Genkin and Eran Tromer and represents the practical fulfillment of an idea first hatched nearly 10 years ago. Back in 2004 Shamir and his colleagues realised that the high-pitched noises emitted by computers could leak sensitive information about security-related computations.


At the time they established that different RSA keys induce different sound patterns, but they weren’t able to turn the observation into anything practical. Fast forward 10 years and the researchers have come up with a practical attack that uses everyday items of electronics, such as mobile phones, to carry out the necessary eavesdropping. The attack rests on the sounds generated by a computer during the decryption of ciphertexts selected by an attacker, as their paper, RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis, explains:

We describe a new acoustic cryptanalysis key extraction attack, applicable to GnuPG’s current implementation of RSA. The attack can extract full 4096-bit RSA decryption keys from laptop computers (of various models), within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts. We experimentally demonstrate that such attacks can be carried out, using either a plain mobile phone placed next to the computer, or a more sensitive microphone placed four meters away.

Put simply, the attack relies on using a mobile phone or other microphone to recover, bit by bit, RSA private keys. The process involves bombarding a particular email client with thousands of carefully crafted encrypted messages, on a system configured to open these messages automatically. The private key to be broken can’t be password protected, because that would mean a human would need to intervene to open every message.

There are other limitations too, including the requirement that the target is running the GnuPG 1.4.x RSA encryption software. And because the whole process is an adaptive ciphertext attack, a potential attacker needs a live listening device providing continuous acoustic feedback in order to work out what the next encrypted message needs to be. The attack requires an evolving conversation of sorts, rather than the delivery of a fixed (albeit complex) script.

Mitigating the complex attack is as simple as using the more modern GnuPG 2.x instead of the vulnerable GnuPG 1.4.x software, which ought to plug the problem at least until more powerful attacks come along.

“The Version 2 branch of GnuPG has already been made resilient against forced-decryption attacks by what is known as RSA blinding,” explains security industry veteran Paul Ducklin in a post on Sophos’ Naked Security blog.

Even aside from this, all sorts of things are likely to get in the way of the attack, including background noise and the possibility that an intended target keeps his or her mobile phone tucked away in a pocket or bag, rather than next to the computer, while reading encrypted emails.

Key recovery might also be possible by other types of side channel attacks, the crypto boffins go on to explain. For example, changes in the electrical potential of the laptop’s chassis – which can be measured at a distance if any shielded cables (e.g. USB, VGA, HDMI) are plugged in because the shield is connected to the chassis – can provide a source for analysis at least as reliable as emitted sounds. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/19/acoustic_cryptanalysis/

Crooks target Target: 40 MILLION bank cards imperiled in cyber-heist


Target says 40 million credit and debit card accounts are at risk after crooks infiltrated the US megastore chain’s payment systems.

It’s feared criminals harvested customers’ sensitive banking details between November 27 and December 15, right in the middle of America’s peak shopping season. The leak leaves shoppers vulnerable to potential fraud.


The retail giant today confirmed there had been “unauthorized access to payment card data”, after investigative journo Brian Krebs broke the news of the infiltration on his website. His sources claimed crims had lifted data from the magstripes on victims’ cards.

The breach was perfectly timed to take advantage of Americans flooding stores to buy holiday gifts. Target is now probing the attack, with the help of the authorities and banks, to ensure it doesn’t happen again.

“Target’s first priority is preserving the trust of our guests [customers] and we have moved swiftly to address this issue, so guests can shop with confidence. We regret any inconvenience this may cause,” Target president and CEO Gregg Steinhafel said in a statement.

“We take this matter very seriously and are working with law enforcement to bring those responsible to justice.”

Based out of Minneapolis, Target is among the largest of the “big box” retailers in North America. The company reports that it has 1,778 stores in the US and Canada, and logged more than $73.3bn in revenue last year.

Krebs cites sources at credit-card issuers, who believe the breach could have involved nearly every Target outlet in the country.

Should the breach be as massive as believed, the fallout could rival that of the 2007 wireless network breach of TJ Maxx parent TJX Companies: sensitive data on 45.7 million credit card accounts was harvested from compromised systems within the retail giant’s infrastructure.

That incident proved to have major repercussions for TJX as the company spent years rebuilding its reputation, and ultimately cost it more than $110m – perhaps as much as $256m.

Target will likely now have a long road of investigations, audits and reconciliations in the coming months while customers will need to keep a close eye on their bank and credit card statements for signs of fraud. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/19/target_scoped_for_black_friday_breach/

Proposed California law demands anti-theft ‘kill switch’ in all smartphones


California may soon become the first US state to require mobile phone makers to include a feature that can remotely disable their handsets in the event they are stolen.

A new law proposed by California state Senator Mark Leno and San Francisco District Attorney George Gascón would require all smartphones sold in the state to include a remote-controllable “kill switch” as a deterrent against theft.


“One of the top catalysts for street crime in many California cities is smartphone theft, and these crimes are becoming increasingly violent,” Leno said in a statement.

The issue is not a new one for Gascón, who along with New York Attorney General Eric Schneiderman, has been crusading for greater smartphone security in light of what has been described as an epidemic of mobile phone theft.

In Gascón’s home city of San Francisco, which falls within Leno’s jurisdiction, more than half of all robberies now involve a smartphone, according to police statistics.

But although some phone vendors offer remote deactivation – Apple introduced Activation Lock for iPhones with the release of iOS 7, for example, to Gascón’s approval – most do not, which means smartphone thieves can quickly and easily sell the devices to eager bargain-hunters.

Gascón has accused the mobile phone industry of dragging its feet on the issue, saying that although he had managed to convince smartphone titan Samsung to bundle LoJack security software with its devices, the major wireless carriers rejected the proposal, believing it would eat into their profits: fewer thefts would mean fewer sales.

Now Gascón’s patience is running out. “I appreciate the efforts that many of the manufacturers are making, but the deadline we agreed upon is rapidly approaching and most do not have a technological solution in place,” Gascón said.

If Leno’s proposed law passes the California legislature, all new smartphones sold in the state will be required to include some form of kill switch.

Leno says he plans to formally introduce the bill in early January at the start of the 2014 legislative session. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/19/california_smartphone_kill_switch_law/

Encrypted Cloud Startup Tresorit Reveals Pricing, Challenges Hackers To Break Their Service For $25,000

NEW YORK, Dec. 19, 2013 /PRNewswire-iReach/ — A challenge was issued to top hackers a week ago: break into secure cloud service Tresorit for $25,000. Some 700 hackers from 49 countries have already taken up the challenge, hailing from top universities like MIT, Stanford and Princeton and corporations like Vodafone and Tata Consulting.

(Photo: http://photos.prnewswire.com/prnh/20131219/MN35866)

On the strength of six months of unsuccessful hacking attempts since its beta launch in April, the company announced its Pro package today, offering plans starting at 100GB of secure storage for $12.99 a month. The service is also moving to target businesses.

“We’re confident in our system, having withstood 6 months of hacking attempts since we posted the first bounty on our head when launching our beta, in April”, says Istvan Lam, founder and CEO.

Tresorit is built on patented cryptographic tech, and uses shareable client-side encryption. Its users allegedly don’t have to fear any admin, hacker, or even the NSA accessing their data, even when they share it with others or sync it to mobile devices.

Tresorit’s approach to privacy has made the service popular with a crowd of NSA-averse individuals – it enjoys 20% monthly user growth. Users are leaving competitors like Box and Dropbox behind for the service – its top ranking on Google Play (a best-in-class 4.6, matched only by Dropbox) seems to confirm its dedication to the user experience.

But Tresorit’s cryptographic assurance that data stays under the users’ control, together with its EU-based servers, is an even bigger draw among businesses that face compliance requirements or business needs for security. This fall the company even moved to Switzerland to enjoy increased legal protection from surveillance.

“We see great demand from industries like financial and legal advisories, healthcare, biotech or construction. These verticals combine the need for strict control over data with collaboration across dispersed teams, off-site work or frequent file sharing with parties outside the company,” says Lam.

Moving into the SMB space will see Tresorit launch industry-focused features like timestamping, e-signatures and delivery reports early next year. Its roadmap also contains several features that should help it woo prospective enterprise clients – full audit trails, file-based policies, and central admin controls.

About Tresorit

Tresorit is an encrypted collaboration and storage service based on patented cryptographic research from a team of European security experts, started in 2009.

The company received a Series A funding of 1.7M USD in 2012 and grew to 30 people by the end of 2013. Tresorit emphasizes usability as well as security.

It is aimed at concerned individuals and small and medium-sized businesses that need compliance, security and confidentiality as well as collaboration and backup.

Article source: http://www.darkreading.com/authentication/encrypted-cloud-startup-tresorit-reveals/240164910

Jail for phishing gang member who stole £393k from students

A man who stole hundreds of thousands of pounds from UK students has been jailed for three years and nine months.

Nigerian Olajide Onikoyi, 29, of Blackley, Manchester, was one of many criminals who tricked students via a phishing campaign. Victims received emails prompting them to visit a student loans website but, as I am sure you have already guessed, that website was a fake.

When students visited the bogus site they were invited to update various details which, presumably, included bank details as Onikoyi was then able to access accounts and make large withdrawals.

In total, he stole close to £400,000 from 238 students, with one having £19,000 taken from their account.

When Onikoyi appeared before Southwark Crown Court on Friday he admitted conspiracy to defraud UK financial organisations out of £393,000, which earned him a three-year custodial sentence. He also pleaded guilty to a charge of money laundering, which earned him a further nine months in jail, set to run consecutively with his other sentence.

During the investigation police worked closely with internet service providers, financial institutions and the Student Loans Company, whose fraud prevention and detection manager, Heather Laing, said:

All students should be alert to these scams and remain vigilant to attempts to defraud them and steal their funds, particularly around payment times.

It is absolutely vital for students to protect themselves and their finances by keeping personal and account information safe. We will never request a customer to provide or confirm their account or banking details by email or text.

This case and the sentence received has sent a clear message that the courts take fraud against the taxpayer very seriously.

Once we identified the extent of the fraud and alerted the police to the issue, they worked with the wider financial community to investigate further, resulting in the sentencing.

As part of their inquiries, police from the now-defunct Metropolitan Police Central e-Crime Unit seized computer equipment belonging to Onikoyi.

On examination they discovered chat log evidence which proved that he had conspired with other individuals in the UK, Russia and Lithuania in order to compromise computers and bank accounts.

Detective chief inspector Jason Tunn of the Metropolitan Police’s new cyber crime unit said:

My officers worked doggedly to secure Onikoyi’s conviction. They examined numerous leads to identify members of this phishing gang, of which Onikoyi was a key member.

He played a significant role in the scam by systematically targeting British students and UK financial institutions in order to steal large amounts of money that were then dispersed across numerous bank accounts.

We’ve had a number of bank accounts and properties connected to Onikoyi restrained under the Proceeds of Crime Act. This is now subject to a financial investigation.

A number of other cyber criminals have also been jailed recently over the scam which saw UK students defrauded out of a total of £1.5m.

In July last year Damola Clement Olatunji was jailed for a total of six and a half years for money laundering and conspiracy to defraud the Student Loans Company and Halifax Bank.

In February of this year Christopher Inokwere received a sentence of three and a half years for conspiracy to defraud UK banks and their customers.

Earlier this month Ruth Smith-Ajala was sentenced to five years of imprisonment for conspiracy to defraud.

A further five individuals were arrested as part of the investigation. Of those, four have been released with no further action planned and one is scheduled for a plea and case management hearing later this month.


Image of phishing courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/J1jnaSsyUA4/

Obama’s NSA panel recommends new hands on the reins of same old mass data collection

The White House on Wednesday released a 303-page report from a panel of presidential advisors who recommended that the National Security Agency’s (NSA’s) massive data trawling carry on, but that the data be kept in private hands for “queries and data mining” only by court order.

The panel – former White House counter-terrorism advisor Richard A. Clarke, Michael J. Morell, Geoffrey R. Stone, Cass R. Sunstein, and Peter Swire – delivered 46 recommendations to US President Barack Obama in the report.

According to the Agence France-Presse (AFP), Obama spokesman Jay Carney said that the report was released earlier than a planned January date due to the media getting the contents wrong:

While we had intended to release the review group’s full report in January … given the inaccurate and incomplete reports in the press about the report’s content, we felt it was important to allow people to see the full report to draw their own conclusions.

Obama met with members of the panel earlier on Wednesday to work through the recommendations.

As far as surveillance of US persons goes, the panel isn’t recommending that the government stop collecting and storing bulk telephony metadata – i.e., telephone numbers that originate and receive calls, along with the time and date of calls.

Rather, the panel wants to see Congress merely transfer all that metadata over to private hands, from whence it can be queried “when necessary for national security purposes.”

The panel also recommended boosting the privacy of non-US persons to the point where they would get the same protections now given to Americans under the Privacy Act of 1974.

That act keeps the government from disclosing information about people without the written consent of a given individual – unless, that is, disclosing the information falls under a smorgasbord of statutory exceptions, one of which being law enforcement purposes.

(Am I missing something here? One imagines that “for law enforcement purposes” could actually be used to exempt pretty much all intelligence agency access to people’s records without their permission. Legal experts, your input would be welcome in the comments section below.)

Another recommendation must surely have been dubbed the “Appease the Very Indignant and Very Spied Upon German Chancellor Angela Merkel” clause when the panelists were working on it, given that it addresses “unjustified or unnecessary” surveillance of foreign leaders – particularly leaders of countries with which the US shares “fundamental values and interests”.

The group also suggested that any operation that entails spying on foreign leaders should pass a rigorous test to see if the intelligence gained would outweigh the economic and diplomatic problems that could erupt if the operation were to become public.

The panel also wants the NSA to back off from its work to undercut attempts to create secure encryption standards.

One such effort is the NSA’s attempts to peel apart the layers of the Tor anonymizing service.

The recommendation:

We recommend that, regarding encryption, the US Government should:

(1) fully support and not undermine efforts to create encryption standards;

(2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and

(3) increase the use of encryption, and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage.

The panel would also like to see the NSA be headed up by a Congressional appointee, which could be a civilian – a possibility the panel suggested President Obama seriously consider.

Beyond maybe sticking a civilian into the top job at the NSA, the panel also thinks it would be nice to split the NSA between a military commander in charge of the Pentagon’s cyberwarfare unit – US Cyber Command – and another individual as director of the NSA.

That recommendation was dead in the water before the panel’s report ever saw the light of day, however.

Last week, the White House said that the Obama administration likes the positions of NSA Director and Cyber Command commander just fine the way they are, all rolled up into one “dual-hatted” position.

The recommendations are just that: recommendations. It’s unclear which, if any, will actually be adopted, particularly given that, as the New York Times pointed out, some would require Congress to enact new legislation.

At any rate, the recommendations shy away from the strong condemnation delivered by the US federal judge who on Monday ordered the NSA to stop collecting phone metadata, calling the agency’s collection technology “almost Orwellian” and deeming it likely unconstitutional.

It’s also worth noting how dated much of the material Edward Snowden has disclosed in the months since he triggered NSA-gate in June actually is.

For example, the presentation published by The Guardian concerning XKeyscore, the NSA search engine, goes back to 2008. So is the panel five years behind the curve? Are the recommendations based on current technologies and practices?

Also, might we perhaps demand deeper change than tweaks that mostly concern who gets to authorize searches and whether the NSA is run by one or two heads?

It’s the trawling of both domestic and foreign data that seems to be the biggest problem, not who issues the warrants for searching it.

Image of spyglass courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6ni6JBGo2XI/

Five-minute fix: Setting up parental controls on Android

Parents of Android fans have the ability to set up parental controls on their child’s device, and the recent release of Android 4.3 has enhanced the restricted profiles feature that was introduced with version 4.2 of the operating system.

(Note, this is currently only available on Android tablets).

1. Tap Settings and select Users.

2. Tap Add user or profile.

3. Tap to add a Restricted profile.

At this point you will be prompted to set up a lock using a PIN, password or a pattern if you haven’t already done so. Choose your PIN or password carefully and make it difficult to break.

[Screenshot: adding a restricted profile]

4. All the installed apps on the device will then be displayed. Each of these can be toggled on or off as you deem appropriate for the child you have set the account up for. However, not all apps can be disabled this way as some do not support restricted profiles.

[Screenshot: toggling apps on and off]

5. Choose Settings. From here you can choose to disable location services.

6. To start using the restricted account go to Settings, choose the newly created account and it will be set up for you by Android. From then on every time you lock the screen you will be given an option to use the restricted account you created.

Find instructions for parental controls for other operating systems here.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5-QMsGnwfnY/

MacBook webcams CAN spy on you


Security researchers have confirmed that MacBook webcams can spy on their users without the warning light being activated.

Apple computers have a “hardware interlock” between the camera and the light that is supposed to ensure the camera can’t be activated without alerting the user by lighting a tell-tale LED above the screen.


However, Stephen Checkoway, a computer science professor at Johns Hopkins University, and graduate student Matthew Brocker were able to circumvent this security feature by reprogramming the microcontroller chip inside the camera.

Normally, any program running on a MacBook’s central processing unit that takes images through Apple’s iSight camera would turn on the tell-tale light. Brocker and Checkoway’s reprogramming tactic allows the camera and its light to be activated independently, so that the camera can be running while the light is switched off.

The researchers have released proof-of-concept software to demonstrate the trick, along with a paper entitled iSeeYou: Disabling the MacBook Webcam Indicator LED. The study provides the first public confirmation that a sophisticated hacker tactic long suspected to be part of the playbook of intelligence agencies, feds and others is not only possible but relatively straightforward.

The same technique that allows us to disable the LED, namely reprogramming the firmware that runs on the iSight, enables a virtual machine escape whereby malware running inside a virtual machine reprograms the camera to act as a USB Human Interface Device (HID) keyboard which executes code in the host operating system. We build two proofs-of-concept: (1) an OS X application, iSeeYou, which demonstrates capturing video with the LED disabled; and (2) a virtual machine escape that launches Terminal.app and runs shell commands. To defend against these and related threats, we build an OS X kernel extension, iSightDefender, which prohibits the modification of the iSight’s firmware from user space.

The research focused on MacBook and iMac computers released before 2008 (iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros) but other security researchers reckon the same tactics would work on more recent models from multiple vendors, not just Apple. This means that any laptop with a built-in camera could be used by a skilled hacker to spy on its users without giving the game away. It’s unclear whether or not Apple or other manufacturers are developing any mitigation plans.

Attacks on micro-controllers, particularly on Mac machines, are becoming an increasingly fruitful area of security research. For example, security researcher Charlie Miller demonstrated a hack on systems that control Apple batteries, causing the battery to discharge rapidly and potentially leading to explosive consequences. Other attacks target Apple’s keyboard controllers.

Spying on people without turning on warning lights is far from an academic concern. A tell-tale flickering light was a central feature of a notorious case involving school-supplied laptops in Pennsylvania back in 2008.

Administrators at Lower Merion High School near Philadelphia reportedly captured 56,000 images of students by using a trojan installed on school-issued laptops. “Students reported seeing a ‘creepy’ green flicker that indicated that the camera was in use. That helped to alert students to the issue, eventually leading to a lawsuit,” the Washington Post reported.

More sophisticated hackers have developed the apparent ability to suppress any warning light. This may be a feature of commercial spyware packages, such as FinSpy from FinFisher, which marketing documents covertly leaked through WikiLeaks claim can be “covertly deployed on target systems” to allow “live surveillance through webcam and microphone.”

A surveillance program called Ghostnet, reckoned to be a Chinese spying operation against prominent Tibetans including the Dalai Lama, involved “web cameras [which are] silently triggered, and audio inputs surreptitiously activated,” according to a 2009 report into the snooping by the University of Toronto.

Marcus Thomas, a former assistant director of the FBI’s Operational Technology Division, recently told the Washington Post that the FBI has long been able to covertly activate a computer’s camera, without triggering any “recording in progress” warning light.

Privacy conscious users have one ready means to protect themselves from spying. “The safest thing to do is to put a piece of tape on your camera,” Miller told the Washington Post. ®

Camnote

A video demonstrating how the iSight camera can be turned on without activating the small green LED light on older Macs can be found here.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/19/apple_isight_webcam_led_hack/

First China banned Bitcoin. Now its crooks are using malware to steal traders’ wallets


Cybercrooks have developed a strain of malware that actively targets BTC China and other Bitcoin exchanges.

A Zeus P2P/Gameover variant discovered by Trusteer is designed to steal the passwords of traders in the virtual currency. A blog post by the IBM-owned transaction security firm (extract below) explains that the malware is specially designed to trick potential victims into supplying one time passwords that might be needed for successful account takeovers.


This Gameover variant waits until an infected user attempts to log into the BTC China website. When this occurs, the malware steals the victim’s username and password and suspends the session temporarily. Once the cybercriminal has the victim’s credentials he can easily perform an account takeover and assume control of the Bitcoins associated with the account.

The reason for pausing the session is that the cybercriminal may need to ask the victim for their one time password (OTP). To do so, the malware will use simple social engineering techniques, combined with HTML injection, and present the victim with a request for the OTP under the false pretense of a security measure.

ZeuS variants are commonly used for conventional electronic banking account takeovers and looting.

The arrival of the Bitcoin-targeting malware variant came shortly before BTC China, China’s largest exchange, began blocking new deposits. This and a related regulatory clampdown by the Chinese government are blamed for taking a huge toll on the crypto-currency’s value over recent days. Interest from Chinese speculators was credited with playing a big part in a previous stratospheric rise that saw Bitcoin values soar to well over $1000, before dropping to $600 this week.

Tough new restrictions are thought to have motivated a denial-of-service attack against China’s central bank on Wednesday.

Etay Maor, fraud prevention manager at Trusteer, added that cybercrooks in general are interested in Bitcoin mainly as a means of laundering ill-gotten funds.

“So far, it is worth noting that most cybercriminals, whether they see Bitcoins as a platform or a target, are not keen on keeping their capital in Bitcoins,” Maor writes. “Rather, criminals use the currency as a middleman for laundering funds without leaving any tracks. They sometimes use additional services such as the Tor hidden service “Bitcoin Fog Company” as an additional anti-trace back step.”

“With the growing use and popularity of this currency we can expect to see more Man in the Browser (MitB) malware variants targeting Bitcoin exchanges and related sites,” he added. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/19/zeus_variant_specialises_in_bitcoin_china_account_hijacking/

The sound made by your computer could give away your encryption keys…

One of the first computers I was ever allowed to use all on my own was a superannuated ICL-1901A, controlled from a Teletype Model 33.

One of the processor’s address lines was wired up to a speaker inside the teletype, producing an audible click every time that address bit changed.

The idea was that you could, quite literally, listen to your code running.

Loops, in particular, tended to produce recognisable patterns of sound, as the program counter iterated over the same set of memory addresses repeatedly.

This was a great help in debugging – you could count your way through a matrix multiplication, for instance, and keep track of how far your code ran before it crashed.

You could even craft your loops (or the data you fed into them) to produce predictable frequencies for predictable lengths of time, thus producing vaguely tuneful – and sometimes even recognisable – musical output.
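If you fancy recapturing a little of that spirit on modern hardware, here’s a minimal, entirely unscientific Python sketch of the idea (all the timings are illustrative assumptions): alternate bursts of computation with idle gaps, and on some machines you can hear the fan or power circuitry respond to the changing duty cycle.

```python
import time

def burst(seconds, busy_fraction):
    """Alternate busy-spinning and sleeping in 10 ms slots for `seconds`."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        busy_until = time.perf_counter() + 0.01 * busy_fraction
        while time.perf_counter() < busy_until:
            pass                                  # busy: CPU draws more power
        time.sleep(0.01 * (1.0 - busy_fraction))  # idle: CPU draws less power

for duty in (0.9, 0.2, 0.9, 0.2):                 # a crude two-"note" pattern
    burst(0.5, duty)
```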

Plus ça change, plus c’est la même chose

So it was with considerable amusement that I read a recent Debian security advisory that said:

[Screenshot of the Debian security advisory, which says:]

Genkin, Shamir and Tromer discovered that RSA key material could be extracted by using the sound generated by the computer during the decryption of some chosen ciphertexts.

(The Shamir amongst the authors is the same Adi Shamir that is the S in RSA.)

Their paper, which is well worth reading even if you are neither a mathematician nor a cryptographer, just to see the research process that the authors followed, reaches a remarkable conclusion.

You can still “listen to your loops,” even on a recent multi-gigahertz laptop.

In fact, as the authors show by means of a working (if somewhat impractical) attack, you can listen in to other people’s loops, assuming you have a mobile phone or other microphone handy to do your audio eavesdropping, and recover, bit by bit, their RSA private keys.

Remember that with a victim’s private email key, you can not only read their most confidential messages, but also send digitally signed emails of apparently unimpeachable veracity in their name.

First steps

The authors started out by creating a set of contrived program loops, just to see if there was any hope of telling which processor instructions were running based on audio recordings made close to the computer.

The results were enough to convince them that it was worth going further:

[Spectrogram from the original paper [PDF], which is well worth reading]

In the image above, time runs from top to bottom, showing the audio frequency spectrum recorded near the voltage regulation circuitry (the acoustic behaviour of which varies with power consumption) as different instructions are repeated for a few hundred milliseconds each.

ADDs, MULtiplies and FMULs (floating point multiplications) look similar, but nevertheless show differences visible to the naked eye, while memory accesses cause a huge change in spectral fingerprint.
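To give a feel for the sort of signal processing involved – this is our own illustrative sketch, not the authors’ code, and the file name, window size and frequency band are assumptions – a spectrogram of this kind can be produced from a recording in a few lines of Python:

```python
from scipy.io import wavfile
from scipy.signal import spectrogram

# Hypothetical recording made with a microphone held near the laptop.
# A high sample rate is needed to capture frequencies in the tens of kHz.
rate, samples = wavfile.read("near_laptop_recording.wav")
if samples.ndim > 1:
    samples = samples[:, 0]            # use a single channel

# Slice the recording into short windows and take the spectrum of each one.
freqs, times, power = spectrogram(samples, fs=rate, nperseg=4096)

# Keep only a high-frequency band (the informative band varies per machine).
band = (freqs > 10_000) & (freqs < 40_000)
print(power[band].shape)               # rows: frequencies in band, columns: time slices
```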

Digging deeper

Telling whether a computer is adding or multiplying in a specially-coded loop doesn’t get you very far if your aim is to attack the internals of an encryption system.

Nevertheless, the authors went on to do just that, encouraged by their initial, albeit synthetic, success.

Their next result was to discover that they could determine which of a number of RSA keys was being used, just by listening in to the encryption of a fixed message using each key in turn:

[Spectrogram from the original paper [PDF], which is well worth reading]

Above, with time once again running from top to bottom, you can see slight but detectable differences in acoustic pattern as the same input text is encrypted five times with five different keys.

This is called a key distinguishing attack.

Differentiating amongst keys may not sound like much of a result, but an attacker who has no access to your computer (or even to the network to which it is connected) should not be able to tell anything about what or how you are encrypting your traffic.

Anyway, that was just the beginning: the authors ultimately went much further, contriving a way in which a particular email client, bombarded with thousands of carefully-crafted encrypted messages, might end up leaking its entire RSA private key, one bit at a time.

Mitigations

The final attack presented by the authors – recovering an entire RSA private key – requires:

  1. A private key that is not password protected, so that decryption can be triggered repeatedly without user interaction.
  2. An email client that automatically decrypts incoming emails as they are received, not merely if or when they are opened.
  3. The GnuPG 1.4.x RSA encryption software.
  4. Accurate acoustic feedback from the decryption of message X, needed to compute what data to send in message (X+1).

→ Feature (4) means that this is an adaptive ciphertext attack: you need feedback from the decryption of the first message before you can decide what to put into the second message, and so on. You can’t simply construct all your attack messages in advance, send them in bulk, and then extract the key material. Of course, this means you need a live listening device that can report back to you in real time – a mobile phone rigged for surveillance, for example – somewhere near the victim’s computer.
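Purely schematically – every callable below is a hypothetical stand-in for the ciphertext-crafting and signal-processing machinery described in the paper – the adaptive loop has this shape:

```python
def adaptive_key_recovery(key_bits, craft_ciphertext, deliver, listen, classify_bit):
    """Schematic shape of the attack: each new ciphertext depends on
    acoustic feedback from the decryption of the previous one."""
    recovered = []
    for _ in range(key_bits):
        ct = craft_ciphertext(recovered)       # needs every bit learned so far
        deliver(ct)                            # emailed, auto-decrypted by the victim's client
        trace = listen()                       # live feedback from the nearby microphone
        recovered.append(classify_bit(trace))  # decide this key bit before crafting the next
    return recovered
```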

The easiest mitigation, therefore, is simply to replace GnuPG 1.4.x with its more current cousin, GnuPG 2.x.

The Version 2 branch of GnuPG has already been made resilient against forced-decryption attacks by what is known as RSA blinding.

Very greatly simplified, this involves a quirk of how RSA encryption works, allowing you to multiply a random number into the input before encryption, and then to divide it out after decryption, without affecting the result.

This messes up the “adaptive” part of the attack, which relies on each ciphertext having a bit pattern determined by the attacker.
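For the mathematically curious, here is a toy Python sketch of the blinding trick, using a tiny textbook key rather than anything like a real 4096-bit GnuPG key:

```python
import random
from math import gcd

# Toy RSA key (p=61, q=53): never use numbers this small in real life.
n, e, d = 3233, 17, 2753

def blinded_decrypt(c):
    while True:
        r = random.randrange(2, n - 1)      # random blinding factor...
        if gcd(r, n) == 1:                  # ...coprime to the modulus
            break
    c_blind = (c * pow(r, e, n)) % n        # the bit pattern actually decrypted is randomised
    m_blind = pow(c_blind, d, n)            # equals m * r mod n
    return (m_blind * pow(r, -1, n)) % n    # divide the random factor back out (Python 3.8+)

m = 65
c = pow(m, e, n)                            # encrypt
assert blinded_decrypt(c) == m              # the blinding doesn't affect the result
```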

→ If you are a GnuPG 1.x user and don’t want to upgrade to Version 2, be sure to get the latest version, as mentioned in the Debian advisory above. GnuPG 1.4.16 has been patched against this attack.

Other circumstances that may make things harder for an attacker include:

  • Disabling auto-decryption of received emails.
  • Putting your mobile phone in your pocket or bag before reading encrypted emails.
  • The presence of background noise.
  • “Decoy processes” running on other CPU cores at the same time.

Note, however, that the authors explain that background noise often has a narrow frequency band, making it easy to filter out.

Worse still, they show some measurements taken while running a decoy process in parallel, aiming to interfere with their key-recovery readings:

[Spectrogram from the original paper [PDF], which is well worth reading]

The extra CPU load merely reduced the frequency of the acoustic spikes they were listening out for, ironically making them easier to detect with a lower-quality microphone.

What next?

As the authors point out in two appendixes to the paper, data leakage of this sort is not limited to the acoustic realm.

They also tried measuring fluctuations in the power consumption of their laptops, by monitoring the voltage of the power supply between the power brick and the laptop power socket.

They didn’t get the accuracy needed to do full key recovery, but they were able to perform their key distinguishing attack, so exploitable data is almost certainly leaked by your power supply, too.

The authors further claim that changes in the electrical potential of the laptop’s chassis – which can be measured at a distance if any shielded cables (e.g. USB, VGA, HDMI) are plugged in, as the shield is connected to the chassis – can give results at least as accurate as the ones they achieved acoustically.

In short: expect more intriguing research into what’s called side channel analysis, and in the meantime, upgrade to GnuPG 2!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OIdNAmJA_xw/