STE WILLIAMS

FAA to pilots: Expect ‘unreliable or unavailable’ GPS signals

The US Federal Aviation Administration is warning pilots to expect “unreliable or unavailable” signals from their global positioning gear as a result of unspecified tests being carried out by the Department of Defense.

The Notice to Airmen, or NOTAM (PDF), said the GPS tests will be carried out beginning Thursday and are expected to last through February 22. They will cause spotty GPS signals within a several-hundred-mile radius centered off the coast of Florida.

Map showing affected area of Department of Defense GPS tests. Source: FAA

A second NOTAM (PDF) warns of similar GPS disruptions centered in Southern California and Nevada around the same period.

The notices advise: “Pilots are highly recommended to report anomalies during testing to the appropriate [Air Route Traffic Control Center] to assist in the determination of the extent of GPS degradation during tests.”

During the effective period, test events will be active for 45 minutes followed by 15 minutes of off time.

It’s not clear if GPS apps in smartphones and car navigation systems will be affected. We’re guessing they will. Readers who know for sure are encouraged to leave a comment.

GCHQ goes Google

Britain’s digital spies have turned to Google for help making sense of the floods of data now inundating their powerful computing resources.

GCHQ, the Cheltenham-based signals intelligence agency, is recruiting an expert on MapReduce, the patented number-crunching technique previously behind the dominant web search engine.

The agency’s new lead researcher on data mining will be responsible for “developing MapReduce analytics on parallel computing clusters”, a job advertisement reveals.

MapReduce was developed by Google to index billions of web pages across its cluster of hundreds of thousands of commodity servers. It breaks up complicated tasks into smaller, easier computing problems that cheap hardware is capable of solving quickly.
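The split-then-combine flow can be sketched in miniature with a single-process word count. The function names below are illustrative only, not Google's internals or Hadoop's actual API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit an intermediate (word, 1) pair for every word in one document shard.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Combine all values seen for one key into a final result.
    return key, sum(values)

# Each shard would normally live on a different commodity server.
shards = ["to be or not to be", "to index the web"]
intermediate = chain.from_iterable(map_phase(s) for s in shards)
counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
# counts["to"] == 3
```

In a real cluster the map calls run in parallel on the machines holding each shard, and only the small grouped results move across the network, which is what keeps cheap hardware viable.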

Google patented the technique earlier this year, but it remains free for other organisations to adopt via Hadoop, an open source project. Originally described in a 2004 research paper, MapReduce has allowed Google’s algorithms to index a rapidly expanding web while keeping costs down.

GCHQ faces a similar challenge as it gathers more and more raw data from internet communications, including email, social networks and VoIP.

“Successful data-driven organisations must be able to process, interpret and rapidly respond to indicators derived from unprecedented volumes of data from disparate information sources,” its recruitment advertisement says.

The Register understands that GCHQ now has a cluster of more than 250,000 commodity servers under its Cheltenham “doughnut” building. In recent years it has developed this Google-style infrastructure instead of the very expensive, bespoke supercomputers it used to analyse microwave intercepts during the Cold War.

While spies are planning research on MapReduce, Google has already moved on to BigTable, its newer distributed database.

Gov will spend £400k to destroy ID card data

Taxpayers will finally see some value for money out of the former government’s ID card scheme.

The cost of destroying the personal data collected under the ill-starred programme will be a mere £400,000, Home Office minister Damian Green revealed yesterday.

The figure came in a commons reply to Paul Goggins MP, who’d asked what security standards would be applied in the destruction of the National Identity Register, what the arrangements were for the data destruction, and what the cost would be.

Green replied that the standards applied had been set out in a document placed in the House of Lords Library last November.

The destruction will be carried out by a CESG-accredited and approved supplier, securely and in accordance with established secure destruction policy, procedures and guidelines, Green said. These include compliance with HMG IA Standard No. 5 – Secure Sanitisation of Protectively Marked Sensitive Information. Physical equipment holding the data will be degaussed and physically shredded.

While scrapping the system will save £86m over the next four years, said Green, costs from asset write-offs and the like will be £5m in 2010-2011.

The actual dismantling of the systems and the destruction of the personal data will be a mere £400,000, though. Which seems like a bargain compared to the £330m Labour spent on the scheme, of which £41m went on “developing the policy, legislation and business case for the introduction of identity cards”.

A cheaper option, of course, might have been to simply shove the data in the Lords Library. As Green himself demonstrated to Goggins, no one thinks of looking for anything in there.

More privacy for the Queen, less for everyone else

The coalition government has detailed the changes it wishes to make to the Freedom of Information Act – reducing the 30-year rule and increasing the number of bodies which must obey the law.

Secretary of State for Justice Kenneth Clarke told the House the Freedom of Information Act would be extended to include the Association of Chief Police Officers (ACPO), the Financial Ombudsman Service and the University and Colleges Admissions Service (UCAS).

Clarke said the government would consult with other bodies on their inclusion into the remit of the Act including Examination Boards, Harbour Authorities, the Local Government Association and the NHS Confederation.

The coalition is also speeding up the release of public documents by changing the 30-year rule to a 20-year rule. It will also look at ways to reduce the time that some other information like court records and ministerial correspondence is kept secret.

Clarke also promised to enhance the independence of the Information Commissioner’s Office.

But there will also be changes to the Constitutional Reform Act to strengthen privacy rights for the Queen, the heir to the throne (Prince Charles) and the second-in-line (Prince William) or anyone acting on their behalf. The changes mean any communication between the government and these people is now an absolute rather than a qualified exemption.

The exemption will last for 20 rather than 30 years, or the lifetime of the person plus five years.

Clarke said the changes were needed to “protect the long-standing conventions surrounding the monarchy and its records, for example the sovereign’s right and duty to counsel, encourage and warn her Government, as well as the heir to the throne’s right to be instructed in the business of Government”.

Finally Clarke said the coalition would engage in “post-legislative scrutiny” to see what impact the changes have and whether more tinkering is required.

Go here to read Clarke’s statement on Freedom of Information, from Hansard.

WikiLeaks accused of tapping P2P for secret docs

As much as half of the secret documents posted by WikiLeaks may have been siphoned from peer-to-peer users who incorrectly configured their file-sharing software, according to evidence gathered by a security firm.

Tiversa, a Pennsylvania company that in 2009 uncovered confidential blueprints of the US President’s Marine One helicopter being traded over P2P networks, told Bloomberg News the evidence suggests that WikiLeaks volunteers actively sought out confidential documents, despite claims by the whistle-blower website that it doesn’t know who provides it with the information it gets.

“There are not that many whistleblowers in the world to get you millions of documents,” Tiversa chief executive Robert Boback told Bloomberg. “However, if you are getting them yourselves, that information is out there and available.”

The company has turned the evidence over to government officials investigating WikiLeaks, Boback told the news service. An attorney for WikiLeaks called the claim “completely false in every regard.”

Among the findings leading to Tiversa’s claim:

  • Over a stretch of 60 minutes on February 7, 2009, four computers with Swedish IP addresses issued 413 searches over LimeWire and Kazaa for government documents. The searches unearthed a survey of the Pentagon’s Pacific Missile Range Facility stored on a computer in Hawaii. A little more than two months later, the document was renamed and posted to WikiLeaks. The post said the sensitive information “was first publicly revealed by WikiLeaks working with our source.”
  • In late 2009, WikiLeaks published a spreadsheet detailing potential terrorist targets in California’s Fresno County. The document, which noted locations of caches of bomb-grade fertilizers and other potentially vulnerable sites, was inadvertently indexed on P2P networks by a California state employee in August, 2008, more than a year before the secret-spilling site posted it.
  • Also in 2009, WikiLeaks published Army intelligence documents that reported on the movements of Taliban leaders and other confidential details. Those documents were exposed on P2P networks as early as September of 2008, eight months earlier.
  • The Pentagon’s 58-page Afghanistan Order of Battle was available on P2P networks in January 2009. It was posted to WikiLeaks four months later.

It’s not the first time WikiLeaks has been accused of trawling public networks for the confidential material it posts. Last year, The New Yorker reported that WikiLeaks obtained “millions of secret transmissions” that passed over the Tor anonymizing network. WikiLeaks vehemently denied the claim, but so far no correction has been issued by the magazine.

Bloomberg said the information scavenging by WikiLeaks, if true, “would contradict its stated mission as a facilitator of leaked material by insiders whose identities, [founder Julian] Assange has said, the group takes measures not to know.”

But it seems just as plausible that someone not affiliated with WikiLeaks performed the P2P searches and anonymously provided the resulting documents to WikiLeaks.

Lame Stuxnet worm ‘full of errors’, says security consultant

Far from being cyber-spy geniuses with ninja-like black-hat coding skills, the developers of Stuxnet made a number of mistakes that exposed their malware to earlier detection and meant the worm spread more widely than intended.

Stuxnet, the infamous worm that infected SCADA-based computer control systems, is sometimes described as the world’s first cyber-security weapon. It managed to infect facilities tied to Iran’s controversial nuclear programme before re-programming control systems to spin up high-speed centrifuges and slow them down, inducing more failures than normal as a result. The malware used rootkit-style functionality to hide its presence on infected systems. In addition, Stuxnet made use of four zero-day Windows exploits as well as stolen digital certificates.

All this failed to impress security consultant Tom Parker, who told the Black Hat DC conference on Tuesday that the developers of Stuxnet had made several mistakes. For one thing, the command-and-control mechanisms used by the worm were inelegant, not least because they sent commands in the clear. The worm spread widely across the net, something Parker argued was ill-suited for the presumed purpose of the worm as a mechanism for targeted computer sabotage. Lastly, the code-obfuscation techniques were lame.

Parker doesn’t dispute that the worm is as sophisticated as most previous analysis would suggest, or that it took considerable skills and testing to develop. “Whoever did this needed to know WinCC programming, Step 7, they needed platform process knowledge, the ability to reverse engineer a number of file formats, kernel rootkit development and exploit development,” Parker said, Threatpost reports. “That’s a broad set of skills.”

Parker floated the theory that two teams might have been involved in the release of Stuxnet: one a crew of skilled black-hat programmers, who worked on the code and exploits, and the second a far less adept group who weaponised the malware – the point where most of the shortcomings of the code are located. He suggested that a Western state was unlikely to be responsible for developing Stuxnet because its intelligence agencies would have done a better job at packaging the malware payload.

Nate Lawson, an expert on the security of embedded systems, also criticised the cloaking and obfuscation techniques applied by the malware’s creators, arguing that teenage Bulgarian VXers had managed a much better job on those fronts as long ago as the 1990s.

“Rather than being proud of its stealth and targeting, the authors should be embarrassed at their amateur approach to hiding the payload,” Lawson writes. “I really hope it wasn’t written by the USA because I’d like to think our elite cyberweapon developers at least know what Bulgarian teenagers did back in the early 90s.”1

He continues: “First, there appears to be no special obfuscation. Sure, there are your standard routines for hiding from AV tools, XOR masking, and installing a rootkit. But Stuxnet does no better at this than any other malware discovered last year. It does not use virtual machine-based obfuscation, novel techniques for anti-debugging, or anything else to make it different from the hundreds of malware samples found every day.

“Second, the Stuxnet developers seem to be unaware of more advanced techniques for hiding their target. They use simple “if/then” range checks to identify Step 7 systems and their peripheral controllers. If this was some high-level government operation, I would hope they would know to use things like hash-and-decrypt or homomorphic encryption to hide the controller configuration the code is targeting and its exact behavior once it did infect those systems,” he adds.
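Lawson's hash-and-decrypt point works like this: instead of comparing the observed plant configuration against a hard-coded value, the payload is encrypted under a key derived from the target's configuration, so an analyst who doesn't already know the exact configuration can recover neither the trigger condition nor the payload. The sketch below is a toy illustration of the idea only (SHA-256 keystream, made-up configuration strings), not anything from Stuxnet itself:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash key || counter until we have enough bytes.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(payload: bytes, target_config: bytes):
    # Encrypt the payload under a key derived from the *target's* configuration.
    key = hashlib.sha256(b"kdf|" + target_config).digest()
    ct = bytes(p ^ k for p, k in zip(payload, keystream(key, len(payload))))
    # A digest of (key, payload) lets the code recognise a successful decrypt.
    tag = hashlib.sha256(key + payload).digest()
    return ct, tag

def try_open(ct: bytes, tag: bytes, observed_config: bytes):
    # On an infected host: hash whatever configuration we observe and try it.
    key = hashlib.sha256(b"kdf|" + observed_config).digest()
    pt = bytes(c ^ k for c, k in zip(ct, keystream(key, len(ct))))
    return pt if hashlib.sha256(key + pt).digest() == tag else None

ct, tag = seal(b"attack routine", b"S7-315-2|profibus|cascade-config")
assert try_open(ct, tag, b"S7-315-2|profibus|cascade-config") == b"attack routine"
assert try_open(ct, tag, b"some other plant") is None
```

An analyst holding the binary sees only the ciphertext and the tag; brute-forcing the configuration space is the only way back to the "if" condition a plain range check would have revealed for free.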

Several theories about the development of Stuxnet exist, the most credible of which suggests it was developed by US and Israeli intelligence agencies as a means of sabotaging Iran’s nuclear facilities without resorting to direct military action. A report by the New York Times earlier this week suggested Stuxnet was a joint US-Israeli operation that was tested by Israel on industrial control systems at the Dimona nuclear complex during 2008 prior to its release a year later, around June 2009. The worm wasn’t detected by anyone until a year later, suggesting that for all its possible shortcomings the worm was effective at escaping detection on compromised systems. ®

1 This is a reference to the then revolutionary virus mutation (polymorphic) technique popularised by a VXer called Dark Avenger, from Bulgaria, back in 1991. The true identity of Dark Avenger has never been established, though there is no shortage of conspiracy theories floating around the net.

Carbon trading registry suspends ops following hack attack

A carbon emissions trading registry in Austria has suspended operations until at least 21 January following a hacking attack earlier this month.

The registry has been disconnected from the EU and UN carbon trading registries in response to the 10 January attack, details on which are unclear. A statement on the trading registry website (extract below) explains that the disconnection from other registries and suspension of operations is a security precaution taken to safeguard the operation of wider EU systems while problems on the Austrian site are identified and resolved.

Umweltbundesamt GmbH as registry and ECRA GmbH as registry service provider inform that for security reasons all access to the Austrian emissions trading registry has been locked because of a hacker attack on 10 January 2011. The Austrian registry can therefore not be reached until further notice. Since the registry also had to be disconnected from the CITL and the ITL to ensure security, it is currently not foreseeable when trading in the Austrian emissions trading registry may continue.

The Austrian site is one of a network of sites across Europe that apply a market-based approach to tackling carbon emissions. Green activists rubbish this notion while cybercrooks look at carbon exchanges as a left-field source of illicit income, so sites are subject to hacking attacks or scams from multiple sources.

Last July, an EU Climate Exchange website was hacked by green-hat hackers as part of a political protest against carbon credits. Phishing fraudsters periodically try to con their way into carbon trading accounts. One phishing attack in February 2010 resulted in estimated losses of €3m to six German firms and prompted the temporary closure of registries across the EU for one day, Business Green reports. More recently, in November, 1.6m carbon emission permits were looted from a Romanian trading account maintained by cement-maker Holcim, Reuters adds.

Cambridge boffins rebuff banking industry takedown request

Computer scientists from Cambridge University have rebuffed attempts by a banking association to persuade them to take down a thesis covering the shortcomings of Chip-and-PIN as a payment verification method.

Omar Choudary’s master’s thesis contains too much information about how it might be possible to fool a retail terminal into thinking a PIN authorising a purchase had been entered, as far as the bankers are concerned. Noted cryptographer and banking security expert Professor Ross Anderson gives short shrift to the argument that publishing the research exceeds the bounds of responsible disclosure, politely but firmly telling the UK Cards Association that the research was already in the public domain and that Choudary’s work would stay online.

Anderson is one of Choudary’s supervisors in the latter’s research.

Choudary’s research on so-called NO-PIN attacks builds on work by Steven Murdoch, Saar Drimer and Anderson that was disclosed to the banking industry last year and published back in February.

Chip-and-PIN is used throughout Europe and in Canada as a method to authorise credit and debit card payments. The attack unearthed by the Cambridge researchers creates a means to trick a card into thinking a chip-and-signature transaction is taking place while the terminal thinks it’s authorised by chip-and-PIN. The flaw creates a means to make transactions that are “Verified by PIN” using a stolen (uncancelled) card without knowing the PIN code. The ruse works by installing a wedge between the card and terminal.

The same approach cannot be applied to make ATM transactions.
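The wedge's trick can be illustrated at the smartcard command level. In ISO 7816 terms, a terminal checks a PIN by sending a VERIFY command (instruction byte 0x20) and treating the status word 0x9000 as success; a wedge that answers VERIFY itself, without ever forwarding it, satisfies the terminal while the card never sees a PIN attempt. This is a toy sketch of that one step, not the researchers' actual code:

```python
VERIFY_INS = 0x20           # ISO 7816 VERIFY (PIN check) instruction byte
SW_OK = b"\x90\x00"         # "success" status word
SW_PIN_WRONG = b"\x63\xc3"  # failure: wrong PIN, 3 tries remaining

def honest_card(apdu: bytes) -> bytes:
    # A stand-in card that would reject the guessed PIN.
    if apdu[1] == VERIFY_INS:
        return SW_PIN_WRONG
    return SW_OK  # all other commands succeed, for this sketch

def wedge(apdu: bytes, card) -> bytes:
    # The man-in-the-middle: never forward VERIFY, just claim the PIN was right.
    if apdu[1] == VERIFY_INS:
        return SW_OK
    return card(apdu)

# CLA=00 INS=20 (VERIFY) with an arbitrary guessed PIN block attached.
verify_apdu = bytes([0x00, VERIFY_INS, 0x00, 0x80]) + b"\x24\x12\x34\xff\xff\xff\xff\xff"
assert honest_card(verify_apdu) == SW_PIN_WRONG   # the card alone says no
assert wedge(verify_apdu, honest_card) == SW_OK   # the terminal sees "PIN verified"
```

Every other message still passes through unchanged, which is why both sides believe they completed a legitimate, if different, transaction.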

In the months since the potential loophole was uncovered only Barclays Bank has responded by modifying its technology to block the potential scam, Anderson reports.

Choudary is one of the authors of an upcoming paper on Chip-and-PIN security, due to be unveiled at the Financial Cryptography 2011 conference in February.

Cellphone snooping now easier and cheaper

Cryptographers have devised a low-cost way to intercept phone calls and text messages sent over the majority of the world’s mobile networks.

The attack, which requires four $15 Motorola handsets, a medium-end computer and a 2TB hard drive, was demonstrated last week at the 27th annual Chaos Communication Congress in Berlin. It builds on last year’s crack of the A5/1 encryption algorithm used to protect communications sent using GSM, or Global System for Mobile Communications, technology, which carries an estimated 80 percent of the world’s mobile traffic.

The method, cooked up by researchers Karsten Nohl and Sylvain Munaut, is a significant improvement over previous techniques, which required two USRP2 receivers and software to rapidly change radio frequencies over a spectrum of 80 channels. Equipment costs of the new attack are about $650, compared with more than $4,000 using the previous method.

“GSM is as insecure as Wi-Fi was ten years ago,” Nohl, who is chief scientist at Berlin-based Security Research Labs, told The Register. “It will be attacked by the same ‘war-driving’ script kiddies soon. Any discussion over whether the attacks available in the community are incomplete or impractical should have been put to rest with the last demonstration so that we can now start discussing how to fix the networks.”

Nohl, a cryptographer who has identified gaping holes in smart cards, cordless phones and the car immobilizers designed to thwart auto thieves, was alluding to comments last year from the GSM Association, which claimed eavesdropping on GSM communications wasn’t practical.

Nohl has long nudged mobile operators to adopt the significantly more secure A5/3 algorithm, which still isn’t widely deployed – presumably because of the cost of upgrading a huge amount of equipment that’s already in place. He also counsels them to take several “low-hanging fruit” measures. One fix involves restricting access to the HLR, or Home Location Register, which is the database that keeps track of a handset’s location on a carrier’s network. Another suggestion is for operators to randomize message padding when encrypting communications.
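The padding suggestion targets known plaintext: GSM signalling messages are filled out to a fixed frame length with a predictable fill byte, so an eavesdropper knows part of what was encrypted, which is exactly what precomputation attacks feed on. A toy sketch of the two approaches (23-byte frames and the 0x2B fill byte follow the GSM specifications, but everything else is simplified):

```python
import os

FRAME = 23  # GSM signalling messages are carried in 23-byte frames

def pad_fixed(msg: bytes) -> bytes:
    # Classic GSM-style padding: a fixed, predictable fill byte.
    return msg + b"\x2b" * (-len(msg) % FRAME)

def pad_random(msg: bytes) -> bytes:
    # Nohl's suggestion in spirit: random fill denies the attacker known plaintext.
    return msg + os.urandom(-len(msg) % FRAME)

msg = b"LOCATION UPDATE"
assert len(pad_fixed(msg)) % FRAME == 0
assert len(pad_random(msg)) % FRAME == 0
assert pad_random(msg).startswith(msg)
```

With fixed fill, every short message hands the attacker several bytes of guaranteed plaintext per frame; with random fill, only the genuinely unknown message content remains to work with.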

GSM is the most widely used mobile phone technology. It connects more than 5 billion phones, according to the GSMA. In the US, it’s used by AT&T and T-Mobile. It’s used by all major carriers in the UK.

The revised attack uses home-brewed firmware to turn the Motorola phones into wire-tapping devices that pull conversations and text messages off of a carrier’s base station. They are connected to a PC that has access to a 2TB rainbow table used to decrypt messages protected by the decades-old A5/1 algorithm. H-online.com and Wired.com have more technical details here and here. Slides from the presentation are here.
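The 2TB table embodies a time-memory trade-off: long chains of keystream-to-key mappings are precomputed once, and only each chain's start and endpoint are stored, so most of the keyspace can be "covered" in a few terabytes. The sketch below is a toy, Hellman-style version over a 16-bit key space, with SHA-256 standing in for A5/1 and without the position-dependent reduction functions that real rainbow tables add; it shows the structure, not the real attack:

```python
import hashlib

def F(k: bytes) -> bytes:
    # Toy 16-bit one-way function standing in for "keystream from key".
    return hashlib.sha256(k).digest()[:2]

CHAIN_LEN = 64

def build_table(starts):
    # Precompute chains but store only endpoint -> chain starts;
    # the chain interiors are recomputed on demand during lookup.
    table = {}
    for start in starts:
        z = start
        for _ in range(CHAIN_LEN):
            z = F(z)
        table.setdefault(z, []).append(start)
    return table

def invert(table, y):
    # Walk forward from the observed value; on hitting a stored endpoint,
    # replay that chain from its start looking for a preimage of y.
    z = y
    for _ in range(CHAIN_LEN + 1):
        for w in table.get(z, []):
            for _ in range(CHAIN_LEN):
                if F(w) == y:
                    return w
                w = F(w)
        z = F(z)
    return None

starts = [i.to_bytes(2, "big") for i in range(0, 2000, 7)]
table = build_table(starts)
secret_key = F(F(starts[3]))   # a key that happens to lie on one of our chains
observed = F(secret_key)       # the attacker sees only the "keystream"
recovered = invert(table, observed)
assert recovered is not None and F(recovered) == observed
```

Storage covers chains, not individual keys: lookups cost up to CHAIN_LEN recomputations, which is the "time" side of the trade-off that lets 2TB stand in for the full A5/1 key table.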

10 million website accounts breached

A website that helps drivers avoid speeding tickets is warning its 10 million registered users that their email addresses and passwords may be in the hands of hackers who breached the site’s security.

The advisory was issued on Thursday by Trapster, which boasts more than 10 million users on its front page. The site uses crowd-sourcing techniques to compile locations of police who are using radar to catch speeding drivers.

Trapster said the hack amounted to a “single event,” and that the company has since taken steps to “prevent this type of attack from happening again, and continue to implement additional security measures to further protect your data.” Trapster didn’t say whether it planned to begin hashing passwords, which is considered a basic security precaution to prevent their disclosure.
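Hashing here means storing a salted, deliberately slow digest instead of the password itself, so a database leak doesn't hand attackers ready-to-use credentials. A minimal sketch using Python's standard-library PBKDF2 (the iteration count is illustrative, and this is not Trapster's code):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for ~100ms per hash

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed (rainbow) tables.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; the salt is not secret

def check_password(password, salt, digest):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
assert check_password("correct horse", salt, digest)
assert not check_password("wrong guess", salt, digest)
```

Even if the table leaks, each password then costs the attacker ITERATIONS hash computations per guess per account, instead of a single lookup.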

Trapster’s gaffe comes a little more than a month after hackers rooted Gawker Media servers and made off with some 1.5 million user passwords and corresponding email addresses. After a file containing the booty was posted online, many users of Twitter, Facebook, and other popular websites reported a spike in account breaches, indicating the sad fact that some folks can’t be bothered to use a unique password for different sites.

This fact hasn’t been lost on the security team at Twitter, which warned Trapster users to change their passwords shortly after Thursday’s advisory was released.