STE WILLIAMS

Lawsuit alleges that Windows Phone 7 tracks users

Microsoft is facing a lawsuit that claims it tracks the location of its smartphone users, even if they ask not to be followed.

A class action suit was proposed against Microsoft in a filing in the Seattle federal court. The filing alleged that the software giant had lied in its letter to Congress in May this year, when it said it only collected location data with the express permission of the user, according to Reuters.

“Collection is always with the express consent of the user and the goal of our collection is never to track where a specific device has been or is going,” Microsoft said in the letter.

Worries about how much info smartphones collect and how the captains of the technology industry use that info were kicked into high gear in April, when security researchers said the iPhone kept track of everywhere you went and saved the details to a secret file. The files were stored on the phones and were also copied over to users’ computers when they synched with the iPhone, with up to a year’s worth of location data kept.

After the story broke, US lawmakers started a probe into location tracking on mobiles, and sent letters to Apple, Microsoft and other mobile OS developers, as well as carriers, asking for information on location data on their phones and tablets.

Today’s suit has been brought by a Windows Phone 7 user and claims that Microsoft transmits latitude and longitude coordinates, a unique ID and nearby Wi-Fi access points when the camera application is activated, even when the user has not given it permission to do so. The suit is seeking an injunction and punitive damages.

Microsoft declined to comment when contacted by The Reg. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/09/01/allegations_of_location_data_transmission/

Haiti study: Mass mobile phone tracking can be laudable

A new study uses the movements of mobile phones during the Haiti earthquake, and cholera epidemic, to accurately show where people went during the disasters, and where help should be delivered.

Studying location data stored by Haiti’s biggest network operator, Digicel, Swedish boffins got more accurate estimates of population movements over the period of both earthquake and epidemic than rescue workers on the ground, demonstrating the value of anonymous real-time tracking during national disasters.

The research is published in the peer-reviewed PLoS Medicine journal, and provides a detailed breakdown of the data gathered by the Swedish researchers (one of whom was American, to be fair).

It might seem obvious that the location of every mobile phone would tell you where the population was, but proving that required extensive analysis of the data, as well as adjustments for the penetration of mobile telephones and the demographic differences involved.

The first problem is that not everyone in Haiti has a mobile. The earthquake study tracked 1.9 million SIMs, having discarded numbers that had not made a call in the preceding month (to exclude rescue workers) and those that did not make a call afterwards (euphemistically described as “lost”).

Around 200,000 of those SIMs exited Port-au-Prince following the earthquake, which, given the mobile penetration of just under one-third, multiplies up to 630,000 people leaving the capital.
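That scaling step is simple enough to sketch. A minimal illustration in Python, where the 0.317 penetration figure is an assumption chosen to match the article's "just under one-third":

```python
def sims_to_people(sims_observed: int, penetration: float) -> int:
    """Scale observed SIM movements up to an estimated head count,
    assuming roughly one active SIM per phone-owning person."""
    return round(sims_observed / penetration)

# ~200,000 SIMs left Port-au-Prince; mobile penetration just under one-third
print(sims_to_people(200_000, 0.317))  # roughly 630,000 people
```

The real study also adjusted for demographic differences between phone owners and the wider population, which this one-line scaling glosses over.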

That tallies well with official UN figures, compiled through questionnaires following the disaster, but diverges from estimates made at the time (which were based on a count of buses on the roads leaving the town).

Even more interesting were the results from the cholera epidemic, which were compiled in less than 12 hours and demonstrated that such a process could provide real-time advice to healthcare workers and governments in containing and treating, as well as tracking, outbreaks.

The researchers suggest that localised SMS could have been used to advise people within affected areas, or to discourage them from travelling elsewhere, as well as to let the authorities know where problems might surface next.

That does, of course, require close integration between the operators’ systems and those of the government, which will make many people uncomfortable. Real-time tracking, even by cell site, would have been valuable (for example) to the police during the recent UK riots. That data is already being used retrospectively to work out where people were, and with better integration it could show where people are.

The Chinese government recently launched a research project working out how such data could be effectively used in urban planning and traffic management, but if it were to be available in real-time it could also be used to police demonstrations, football matches and all sorts.

This needn’t be an invasion of privacy – anonymous data is still valuable – but we have to decide, as a society, if we think that allowing governments access to the location of their citizens is a risk worth taking in exchange for the benefits it gives. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/09/01/haiti_mobile_tracking_natural_disasters/

Scotland Yard cancels £20m IT tender

The Metropolitan Police Authority has cancelled its tender for a new custody and case-preparation system in favour of using a framework run by the Government Procurement Service.

It has published a cancellation of the tender notice, which was posted in May and valued at up to £20m, in the Official Journal of the European Union. It says that procuring the services under the Framework Agreement for Computer Services and Supplies “is more likely to achieve its purchasing objectives than continuing with the current exercise”.

The tender was for a managed support service to support applications used in managing custody and casework, interfaces with other criminal justice systems, hardware and infrastructure.

The framework the authority has chosen to use was set up by Buying Solutions, now part of the Government Procurement Service, in 2009 and covers a range of ICT products and services, including software packages and information systems.

This article was originally published at Guardian Government Computing.

Guardian Government Computing is a business division of Guardian Professional, and covers the latest news and analysis of public sector technology. For updates on public sector IT, join the Government Computing Network here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/09/01/met_police_cancels_custody_system_tender/

Kernel.org Linux repository rooted in hack attack

Updated Multiple servers used to maintain and distribute the Linux operating system were infected with malware that gained root access, modified system software, and logged passwords and transactions of the people who used them, the official Linux Kernel Organization has confirmed.

The infection occurred no later than August 12 and wasn’t detected for another 17 days, according to an email John “Warthog9” Hawley, the chief administrator of kernel.org, sent to developers on Monday. It said a trojan was found on the personal machine of kernel developer H Peter Anvin and later on the kernel.org servers known as Hera and Odin1. A secure shell client used to remotely access servers was modified, and passwords and user interactions were logged during the compromise.

“Intruders gained root access on the server Hera,” kernel.org maintainers wrote in a statement posted to the site’s homepage shortly after Hawley’s email was leaked. “We believe they may have gained this access via a compromised user credential; how they managed to exploit that to root access is currently unknown and is being investigated.”

The maintainers said they believed the repositories used to store Linux source code were unaffected by the breach, although they said they were in the process of verifying its security. They went on to say the potential damage that can be done by rooting kernel.org is less than typical software repositories because of safeguards built in to the system.

“For each of the nearly 40,000 files in the Linux kernel, a cryptographically secure SHA-1 hash is calculated to uniquely define the exact contents of that file,” the statement explained. “Once it is published, it is not possible to change the old versions without it being noticed.”

Each hash is stored on thousands of different systems all over the world, making it easy for users to check the validity of Linux files before running them on their machines.
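Those per-file hashes come from git, the version-control system the kernel source is managed with: each file is stored as a content-addressed "blob" object whose SHA-1 covers a short header plus the file's raw bytes, so any tampering changes the ID. A minimal sketch of the computation:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the git-style SHA-1 object ID for a file's contents.

    Git hashes the header "blob <size>\\0" followed by the raw bytes,
    so any change to the file yields a different object ID.
    """
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The well-known object ID for a file containing "hello\n"
print(git_blob_sha1(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Because every clone of the repository recomputes and checks these IDs, a modified file on kernel.org would fail to match the hashes already held by thousands of developers worldwide.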

Linux kernel maintainers didn’t respond to an email seeking comment for this story, but two security researchers who were briefed on the breach said the infected systems were hit by a self-injecting rootkit known as Phalanx, variants of which have attacked sensitive Linux systems before.

“It’s sort of surprising,” said Jon Oberheide, one of the Linux security researchers briefed on the breach. “If this was a very sophisticated attack, it’s very unlikely that the attackers would use an off-the-shelf rootkit like Phalanx. Normally if you were to target a high-value target you would potentially use something that’s more tailored to your specific target, something that’s not going to be flagged or potentially detected.”

Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines. Both Rosenberg and Oberheide confirmed that Hawley’s email was sent to Linux kernel developers. It was also signed using Hawley’s private encryption key.

The first indication of a compromise came shortly after an error message related to Xnest was displayed on a machine that didn’t have the X Window application installed. Linux maintainers are advising developers to carefully investigate any systems that don’t have the program installed and display the /dev/mem message anyway.

Been down this road before

It’s not the first breach to hit a venerable organization that distributes open-source software that thousands of sensitive organizations rely on to remain secure. In December, GNU Savannah, the main source-code repository for the Free Software Foundation, was taken down following a hack that compromised passwords. Admins at the time couldn’t rule out the possibility the attackers gained root access.

And in April 2010, the Apache Software Foundation, which maintains the world’s most widely used webserver, suffered a direct targeted attack that captured the passwords of anyone who used the website’s bug-tracking service over a three-day span. It was the second major compromise of Apache.org in eight months.

Kernel.org members have taken the infected servers offline and are in the process of completely reinstalling the operating system on each machine in the organization. They are also working with all 448 users of kernel.org to change their authentication credentials, including SSH keys. They have also notified authorities in the US and Europe to assist in the ongoing probe of the breach.

“The Linux community and kernel.org take the security of the kernel.org domain extremely seriously, and are pursuing all avenues to investigate this attack and prevent future ones,” Wednesday’s statement said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/linux_kernel_security_breach/

Mozilla addons site targeted in same attack that hit Google

The secure webpage hosting addons for Mozilla Firefox was targeted in the same attack that minted a fraudulent authentication credential for Google websites, the maker of the open-source browser said.

“DigiNotar informed us that they issued fraudulent certs for addons.mozilla.org in July, and revoked them within a few days of issue,” Johnathan Nightingale, Mozilla’s director of Firefox development, wrote in a statement. “In the absence of a full account of mis-issued certificates from DigiNotar, the Mozilla team moved quickly to remove DigiNotar from our root program and protect our users.”

Nightingale didn’t say how many Mozilla certificates were issued or whether they were actively used to intercept the communications of people accessing the address. The site hosts hundreds of thousands of addons that give the Thunderbird and Firefox programs powerful functions not included by default.

On Tuesday, Google said a bogus secure sockets layer certificate issued by Dutch firm DigiNotar was used to spy on people located in Iran while visiting Gmail. Counterfeit credentials for “a number of domains” were minted following a security breach on its systems, DigiNotar has said.

Mozilla’s confirmation came a few hours after a Dutch news service reported (English translation here) that the DigiNotar breach affected Mozilla and at least four other organizations. Fraudulent certificates for Yahoo.com, the Tor Project, WordPress and the Baladin blogging service in Iran were also generated, the service said without citing any source for that information.

Representatives from Yahoo, the Tor Project, and WordPress didn’t immediately respond to inquiries seeking confirmation.

The breach of DigiNotar gave the attackers the digital credentials needed to host spoofs of virtually any Google property that were almost indistinguishable to people using networks controlled by the hackers. The fraudulent certificate showed it was issued on July 10, but it came to light only on Monday. Google hasn’t said how long the counterfeit certificate was actively being used in the wild.

An update to Google’s open-source Chromium browser published Tuesday appeared to blacklist 247 certificates issued by DigiNotar, suggesting the breach may have been more widespread than previously believed. The certificate authority, which is owned by Vasco Data Security, an Oakbrook Terrace, Illinois-based provider of two-factor authentication products, has declined to quantify the extent of the attack, except to say it affected “a number of domains” including Google’s.
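Browser-side blacklisting of this kind amounts to comparing a served certificate's fingerprint against a hard-coded deny list. A simplified sketch, where the certificate bytes and deny-list entry are made up purely for illustration:

```python
import hashlib

def sha1_fingerprint(der_cert: bytes) -> str:
    """SHA-1 fingerprint of a DER-encoded certificate, as browsers display it."""
    return hashlib.sha1(der_cert).hexdigest()

# Hypothetical deny list; a real browser ships hundreds of such entries
BLACKLIST = {sha1_fingerprint(b"example-bad-diginotar-cert")}

def is_revoked(der_cert: bytes) -> bool:
    """True if this certificate's fingerprint appears on the deny list."""
    return sha1_fingerprint(der_cert) in BLACKLIST

print(is_revoked(b"example-bad-diginotar-cert"))  # True
print(is_revoked(b"some-other-cert"))             # False
```

Shipping the list inside the browser means the block works even when the compromised certificate authority's own revocation infrastructure can't be trusted.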

Both Google and Microsoft have declined to say how many DigiNotar certificates they plan to blacklist in their software. Representatives from Mozilla have yet to respond to inquiries.

The possibility that other sensitive websites were also targeted only adds to the uncertainty about how widespread the attack was. ®

This story was updated to add comment from Johnathan Nightingale and details about the potential number of fraudulent certificates issued.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/more_site_certificates_forged/

New UK ‘leccy meters remotely run via Voda 2G

British Gas is to deploy meters with embedded mobile phones, and Zigbee networking, to ensure we know how much electricity we’re using, and they do too.

The smart meters use Vodafone’s 2G network to send back readings and allow British Gas to see exact levels of ‘leccy consumption at any time, while the Zigbee connectivity pushes the same information to a touch-screen display we can use to set our own targets for saving energy and see how much our solar cells are contributing.

Unlike clip-on systems, which estimate how much energy is flowing but can be fitted to any supply by wrapping them around the incoming cable, the new system requires a replacement meter – but that’s rather the point as this is all part of the industry’s push for greater control over our electricity consumption.

Both the “pebble” display and the meter itself can have their software updated remotely, and the use of Zigbee means that instructions could be sent to (for example) a Zigbee-enabled freezer or washing machine, to reduce consumption at short notice or run when there’s surplus power available in the grid.

But that’s for the future: today it is just about getting the remotely readable meters into homes, which will apparently start next month, and letting people know how much energy they’re using on a minute-by-minute basis.

Mobile networks aren’t ideal for such things, as the White Space crowd keep reminding us. GPRS has a massive communications overhead, connectivity won’t stretch into every basement meter, and data traffic gets a low priority on the network. Mobile-data kit around Heathrow stopped working for a few days during last winter’s snow troubles, as travellers calling home knocked M2M communications off the network.

But White Space technology isn’t available yet, and with the government committed to getting smart meters into every home by 2019, the cellular operators are the only ones able to provide blanket coverage today – even if the blanket is a little moth-eaten. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/british_gas_meters/

Quarter of NHS data collection to go in red-tape slash plan

The UK’s Department of Health hopes to save millions by cutting data collection and red tape across the NHS.

A 12-week consultation, launched on Tuesday and due to run until 22 November, aims to pool ideas for plans to slash data collection across the NHS by a quarter (25 per cent), saving an estimated £10m in the process. The scheme aims to release administrative and clerical resources to better support frontline patient care.

Patient groups, research organisations, academic institutions and NHS trusts are invited to contribute to streamlining the data collection process. Around 300 separate data collections commissioned by the Department of Health and affiliated bodies are up for review. A second phase of the process will review how data returns that survive the first phase can be “rationalised”, so that data can be collected and processed more efficiently.

David Harley, a former NHS IT manager who now works as a senior researcher with net security firm ESET, told El Reg that he had concerns over the direction of the project.

“It’s likely that appreciable savings can be made if this is done properly, though I suspect that there will be some impact on patient care,” Harley said. “It seems to me that in the long term more aggregation of data rather than less would make the NHS more efficient, but I can’t see the cost of doing that properly being countenanced in the present economic climate.”

In a statement, public health minister Anne Milton said: “Meaningful information is the lifeblood of the NHS. The data we collect must be of real value to help us improve patient outcomes, patient choice and clinical decisions. We know that some of the data that is being gathered is of limited use, taking up valuable staff time and resources.

“This is why we want to cut red tape in the NHS so that staff can focus on what matters most – improving frontline care and services for patients,” she added.

Tim Straughan, chief executive of the NHS Information Centre, added: “The purpose of this review is to make sure we collect data that can make a real impact in helping to improve care while stopping data returns that are no longer needed and only continue for historical reasons.”

“In reaching our recommendations, we looked at more than 300 data returns, covering 12 distinct themes and involving contributions from over 200 people. We believe the result of review will free local NHS staff from unnecessary administrative burdens while at the same time supporting patient choice and better decision-making within the NHS,” he added. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/nhs_data_collection_red_tape_cut_plan/

Don’t buy your iPad in a McDonald’s car park

A young woman in South Carolina has had her dream of buying a brand-new shiny iPad in the car park of a McDonald’s crushed by two swindlers, who left her with a wooden dud.

The 22-year-old saw nothing suspicious in being approached by two men in the aforementioned car park nor in being offered an iPad for $300.

Claiming they had bought iPads in bulk, the fraudsters enticed the naive Apple fan by showing her an actual iPad and agreed to take $180 for it, which was all she said she had, according to a Spartanburg County Sheriff’s report.

She was handed one of several FedEx boxes from the boot of their car and, presumably happy and satisfied with her lovely new purchase, she drove home without looking inside.

Imagine her chagrin when she opened the box at home only to discover “a piece of wood painted black with an Apple logo”, as the report described it.

While not exactly the most cunning of disguises, you have to commend the effort and attention to detail. Those conmen sat down with their paints and didn’t think it was enough to just splash a bit of black paint on the mockPad; it had to have that all-important Apple logo too. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/mcdonalds_woman_wooden_ipad/

DARPA software to trawl Bin Laden laptops, thumb drives

Famous Pentagon mad-scientist bureau DARPA says it would like some miracle software to help hard-pressed US spooks and military intelligence officers trawl through the massive troves of imagery retrieved from gadgetry seized in the ongoing massive campaign of special-forces raids across southwest Asia.

The new project has been dubbed Visual Media Reasoning (VMR) and is the subject of a new solicitation (44-page/364KB PDF) issued this week by the military boffins. In the announcement, DARPA says:

VMR’s focus is on extracting as much information as possible from fuzzy, noisy, and ill-posed images that have little or no associated metadata and originate from unknown sources and unknown context. These images may be from captured adversary cameras, laptops, and related devices …

Another intended use case for the VMR program is longer-term forensic analysis on captured media done by investigators in support of theater operations.

It was widely reported that super-elite US Navy SEALs from the unit variously known as Team Six or the Naval Special Warfare Development Group (aka DevGru) seized a large stash of thumb drives, laptops and other gadgets at Osama bin Laden’s Pakistani hideout after shooting the famous terrorist dead. That raid alone probably provided enough data to tie up a large number of human intelligence analysts from the military, CIA etc for a long time.

The elite-of-the-elite “Tier One” operatives of the US Joint Special Operations Command* alone are carrying out more than 1,700 raids in a year at the moment and the many other kinds of special forces active across Afghanistan and sometimes Pakistan, the Horn of Africa etc must be carrying out many more such operations. It seems likely that the haul of gadgetry loaded with data must be quite considerable, hence the DARPA push to automate more of the analysis.

According to DARPA:

Our adversaries frequently use video, still and cellphone cameras to document their training and operations and occasionally post this content to widely available websites. The volume of this visual media is growing rapidly and is quickly outpacing our ability to review, let alone analyze, the contents of every image. The VMR technology will serve as a “force-multiplier” by extracting tactically relevant information for the human analyst and alerting the analyst to scenes that warrant the analyst’s expert attention …

With the VMR solution, a warfighter in one military service branch can use the same cutting-edge facial recognition and geo-location technologies as an analyst in an intelligence agency and can avoid applying multiple software applications to extract different visual information from the same image. Imagine needing to use scores of different internet search engines in order to research a complex subject, because each search engine was limited to a particular type of content …

If DARPA can crack the problem of automated image analysis, there would of course be many other applications for the tech than merely trawling through Osama bin Laden’s personal gadgets. But there have been quite a few such military- and intelligence-funded efforts before, so the debut of VMR is probably more an indication that the need for such kit is growing, rather than a sign that it will soon be here. ®

Bootnote

*JSOC is thought to be made up of just three main operational units: Team 6/DevGru, Delta Force, and the constantly name-changing Intelligence Support Activity. There are tens of thousands of other US “Tier Two” special ops personnel such as ordinary SEALs, Rangers, Green Berets, Marine Force Recon etc – and allied units such as the British SAS, SBS and SFSG to boot.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/darpa_imagery_analysis_ware/

WikiLeaks weathers DDoS assault

WikiLeaks’ website slowed to a crawl on Tuesday night following an apparent denial of service attack.

The assault came days after the accelerated publication of thousands of the remaining US diplomatic cables, which the whistleblowing website first began publishing last autumn.

The initial publication of the controversial cables was also accompanied by a DDoS attack. Patriot hacker Jester claimed credit for running the November assault, but no one has come forward to ‘fess up to the latest assault.

In the latest of a series of updates to its Twitter account, WikiLeaks said: “WikiLeaks.org still down for some. You will need to wait 10 minutes or so until DNS cache timeout. Until then, http://wikileaks.lu/ etc”. The whistleblowing site also repeated earlier calls for individuals to donate to its cause, via bank transfer or Bitcoin payment.
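The advice boils down to trying alternative hostnames until stale cached DNS entries expire. A toy sketch of that fallback logic, with the hostnames and the injectable resolver hook being illustrative only:

```python
import socket

MIRRORS = ["wikileaks.org", "wikileaks.lu"]  # mirror named in the tweet

def first_resolving(hosts, resolve=socket.gethostbyname):
    """Return the first hostname that resolves to an address, or None.

    The resolver is injectable so the fallback logic can be exercised
    without touching the network.
    """
    for host in hosts:
        try:
            resolve(host)
            return host
        except OSError:  # socket.gaierror is an OSError subclass
            continue
    return None
```

With the default resolver this performs real DNS lookups; while a cached entry for the primary name is stale or the site is under attack, a mirror further down the list is returned instead.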

At the time of writing on Wednesday afternoon, the site appears to be operating normally.

WikiLeaks directed interested parties to mirror sites during the assault, so its effects were symbolic rather than amounting to a genuine blockade against those interested in hunting down the latest cache of leaked cables.

Unlike the earlier releases, which were vetted by mainstream media organisations and only released gradually, the latest batch of cables has been released in a firehose-style torrent, sparking fears that US agents and informants might be inadvertently identified. The whistleblowing site itself says such concerns are baseless. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2011/08/31/wikileaks_weathers_ddos_assault/