
Android AV Improves But Still Can’t Nuke Malware

Good news: Antivirus and anti-malware scanners designed for the Android operating system continue to improve.

So says a new report, released this week by independent German testing lab AV-Test. The November and December study of 28 different Android antivirus tools found that the apps’ ability to protect devices — by detecting a representative set of more than 2,000 malicious apps discovered in the four weeks prior to the test — reached an average success rate of 96.6%, up from 90.5% in September.

The tests evaluated the antivirus apps not only on the aforementioned “protection” front, but also looked at usability: the app’s hit on battery life and processing speed, how much data it loaded in the background, and also whether it triggered false alerts when testers attempted to install 500 different clean apps via Google Play and third-party app stores. The tests also looked at a variety of app features with security implications, including any anti-theft technology, parental controls, encryption, call blocking, and backup capabilities.



Article source: http://www.darkreading.com/mobile/android-av-improves-but-still-cant-nuke/240164857

Blue Coat Acquires Norman Shark

SUNNYVALE, Calif., December 18, 2013– Blue Coat Systems, Inc., the market leader in business assurance technology, today announced it has acquired Norman Shark, the global leader in malware analysis solutions for enterprises, service providers and government. The acquisition unites the industry’s most proven zero-day sandboxing technology with Blue Coat’s leading Advanced Threat Protection solution to help customers bridge the gap between identification of advanced persistent threats and resolution of those threats.

“We have a long history of high performance web traffic processing, and by combining the industry’s most proven sandboxing technology in-line with our market leading secure web gateway, we are removing an obstacle enterprises have faced up to now,” said Greg Clark, CEO of Blue Coat Systems. “Integration of the Norman Shark technology with the Blue Coat Content Analysis System and Security Analytics Platform delivers a solution that protects against zero-day threats before, during and after an attack by detecting, analyzing, blocking, resolving and fortifying the network against unknown and latent threats.”

Norman Shark brings to Blue Coat a world class research and malware analysis team, and its sandboxing technology represents the most mature and proven in the industry with the ability to scale to support enterprise requirements. Norman Shark’s IntelliVM and SandBox emulation technologies provide advanced security teams the ability to analyze any threat type, in any version of any application they choose. This allows security teams to gather intelligence on malware targeting their specific environment and application vulnerabilities in order to more effectively contain and resolve the incident. Blue Coat already integrates the Norman Shark technology into its Malware Analysis Appliance to deliver the industry’s most robust sandboxing capability.

“Blue Coat’s acquisition of Norman Shark is a great addition to the company’s advanced malware prevention, detection, and analytics capabilities,” said Jon Oltsik, Senior Principal Analyst at the Enterprise Strategy Group. “The market is growing very quickly, and ESG believes that enterprises need strong integrated solutions to identify, contain and resolve advanced persistent threats in a more automated way.”

Unlike point solutions, the new Blue Coat Advanced Threat Protection solution, which integrates the Norman Shark malware analysis technology, provides a lifecycle defense that fortifies the network against unknown and latent threats. The combined advanced threat protection technologies improve the efficiency and effectiveness of the Blue Coat Global Intelligence Network’s ability to dynamically protect all Blue Coat customers. This comprehensive approach allows IT organizations to move beyond protection to empowering the business.

“Our team is a trusted provider to some of the largest, most demanding companies in the world and earned that position with a commitment to excellence in technology,” said Stein Surlien, CEO of Norman Shark. “The power of our sandboxing technology combined with Blue Coat’s gateway security and security analytics gives enterprises unmatched advanced persistent threat detection, root-cause analysis and resolution.”

As part of its commitment to securely empowering businesses, Blue Coat continues to execute on its strategy of acquiring strategically important, best-of-breed technologies to deliver Business Assurance Technology solutions. The Norman Shark acquisition complements earlier acquisitions of Solera Networks and SSL technology from Netronome to provide customers with the industry’s only lifecycle defense against advanced persistent threats.

About Blue Coat Systems

Blue Coat empowers enterprises to safely and quickly choose the best applications, services, devices, data sources, and content the world has to offer, so they can create, communicate, collaborate, innovate, execute, compete and win in their markets. Blue Coat is a portfolio company of Thoma Bravo, LLC. For additional information, please visit www.bluecoat.com.

Article source: http://www.darkreading.com/management/blue-coat-acquires-norman-shark/240164858

UK payday loan spammers fined £175K for "Hi, Mate!" texts

Hey, nice, you got a text message from your mate.

Looks like he’s doing well, too – at any rate, he’s apparently getting his account stuffed with cash even when he’s out of town:

Hi Mate hows u? I’m still out in town, just got £850 in my account from these guys www.firstpaydayloanuk.co.uk.

But hang on, it’s not actually from your mate.

That message and others like it were actually sent by a UK-based payday loan company that just got fined £175,000 ($283,500) for sending millions of spam text messages, in the process needling thousands of consumers who then complained to officials.

The Advertising Standards Authority (ASA), which is the UK’s independent regulator of advertising across all media, had already taken the loan company – First Financial Ltd – to task back in June.

At that time, the ASA said that the SMS spam was unsolicited, was being sent to people who’d registered with the Telephone Preference Service so they wouldn’t get this kind of marketing, was irresponsible in encouraging people to take out loans to fund partying, and was misleading in that the messages pretended to come from people’s friends.

The ASA’s solution: tell First Financial, and the ISP they rode in on, to knock it off.

To wit:

The ads should not appear again in their current form. We told First Financial and Akklaim Telecoms to ensure text message ads were clearly identifiable as marketing communications and were only sent to those who had given explicit consent to receive them. We also told them to ensure ads did not imply that payday loans were suitable for spending on a social life.

First Financial, apparently, wasn’t convinced.

The Information Commissioner’s Office (ICO) announced on Tuesday that First Financial had been fined after being found to have sent millions of spam text messages that provoked thousands of complaints.

First Financial was found to have violated The Privacy and Electronic Communications Regulations governing electronic marketing by sending SMS messages without consent.

Thousands of complaints flooded data privacy watchdogs at the ICO, above and beyond the 13 complaints that spurred the ASA’s regulatory action in June.

The ICO investigated, tracing 4,031 of the spammy messages back to First Financial.

In order to avoid detection, the spam texts were sent using unregistered subscriber identity module (SIM) cards.

Despite the use of unregistered SIMs, however, identifying the sender must have been pretty straightforward, given that the messages’ content referred recipients to a website belonging to firstpaydayloanuk.co.uk, which is a trading name used by First Financial.

The company’s former sole director, Hamed Shabani, had been prosecuted on 8 October 2013 after he failed to notify the ICO that First Financial was processing personal information, which is a legal requirement under the Data Protection Act.

The ICO reports that Shabani was fined £1,180.66 ($1,912.67), despite trying to claim he had no affiliation with the company.

The Register’s John Leyden reports that, in an effort to avoid prosecution, prior to a hearing in front of City of London Magistrates Court, Shabani had attempted to remove his name from the company’s registration at Companies House.

In its news release, ICO Director of Operations Simon Entwisle said that the office is working with the government to make it easier for it to slap spammers down earlier than it could in this case:

People are fed up with this menace and they are not willing to be bombarded with nuisance calls and text messages at all times of the day trying to get them to sign up to high interest loans. The fact that this individual tried to distance himself from the unlawful activities of his company shows the kind of individuals we’re dealing with here.

We will continue to target these companies that continue to blight the daily lives of people across the UK. We are also currently speaking with the government to get the legal bar lowered, allowing us to take action at a much earlier stage.

The ICO advises people to avoid replying to unsolicited text messages and to instead report the message using the survey on the ICO website or by forwarding the texts to their network operator at ‘7726’, given that the networks are working to block the worst offenders.

The ICO has also provided guidance for direct marketers that explains their legal requirements under the Data Protection Act and Privacy and Electronic Communications Regulations.

The materials detail how organisations are allowed to market via phone, text, email, post or fax.

Image of phone courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T-pFgzXAu_o/

Lovers of Tor can now sprinkle Bitcoins on its developers as thanks


The folks behind web privacy tool Tor will now accept donations in Bitcoins.

The project, which attempts to anonymize connections across the internet, will team up with payment biz Bitpay to allow users to donate using the crypto-currency; BTC contributions will be ultimately converted into dollars for the developers’ coffers.


“Our decision to accept Bitcoins has been well thought out and researched from a financial accounting perspective with an eye on passing our required annual A-133 audit,” the Tor Project announced.

“We believe we are the first US 501(c)3 non-profit organization to test acceptance of Bitcoins and attempt to pass the US Government A-133 Audit Standard.”

The donations will be used to build and maintain the infrastructure of the public Tor network, we’re told. Designed, ideally, as a secure means for routing communications over the web, the Tor platform is popular among privacy advocates and anyone else who simply doesn’t want to be easily traced online. But security experts warn that simply installing Tor is not enough to protect oneself on the internet.

The group said that, in addition to Bitcoins, it also accepts cash via PayPal, Amazon and Givv as well as traditional check and money order payments.

The announcement comes as speculation continues over the future of Bitcoins. In China, markets have plummeted amid a government ban on Bitcoin transactions, but the overall global USD-BTC exchange rate remains high. Some backers predict the currency will hit $40,000 per ‘coin in the near future. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/12/18/tor_project_to_accept_bitcoin_donations/

(ISC)² Delivers Recommendations For Solving The U.S. Government Cyber Security Skills Gap Challenge

Clearwater, FL., U.S.A., December 17, 2013 – (ISC)² (“ISC-squared”), the world’s largest information security professional body and administrators of the CISSP, today announced a series of recommendations for the U.S. government to consider in order to more effectively solve the cyber security workforce skills gap challenge. The recommendations were delivered early this month directly to government officials at the White House, U.S. Department of Homeland Security, U.S. Department of Defense, and National Institute of Standards and Technology, as well as members of academia and other influencers within the federal workforce community.

As supported by data from the 2013 (ISC)² Global Information Security Workforce Study, the known gap between the supply of and demand for qualified information security professionals around the world has become acute. Over half of U.S. government survey respondents said the greatest reason their agency has too few information security workers is that business conditions can’t support additional personnel at this time. Yet other experts around the world claim the problem lies primarily with the difficulty of finding qualified personnel and with funding challenges.

During the 10th anniversary gathering of (ISC)²’s U.S. Government Advisory Board for Cyber Security (GABCS), (ISC)² officials led a discussion with former and current board members representing CISO-level executives from federal agencies and departments in an effort to gain greater understanding of the underlying challenge facing the federal environment. As a result, (ISC)² developed a series of recommendations that address the following topics:

ensuring security in the cloud, software, and the supply chain;

establishing a cyber “special forces” team;

aligning existing workforce programs such as the Scholarship for Service (SFS) and Centers for Academic Excellence (CAE) programs to the NICE Framework;

implementing the DoD 8570.01-M model across all government agencies;

assigning accountability for information security failures to mission and business owners, and recognizing successes, among other recommendations.

“Based on our research, 61% of U.S. government information security professionals believe that their agency has too few information security workers to manage threats now, let alone in the future. Yet, information security positions are going unfilled,” says W. Hord Tipton, CISSP, executive director of (ISC)² and former CIO of the U.S. Department of the Interior. “Our goal in delivering these recommendations to key influencers is to help the U.S. government close the workforce skills gap and to strengthen information security via avenues such as existing frameworks, the acquisition process, and personal accountability, among others.”

For a copy of the letter sent to members of the U.S. government information security community that includes a complete list of (ISC)²’s recommendations, please visit https://www.isc2.org/government.aspx.

Article source: http://www.darkreading.com/government-vertical/isc-delivers-recommendations-for-solving/240164847

Five Ways Cloud Services Can Soothe Security Fears in 2014

Enterprise use of cloud services grew tremendously in 2013, but perceived security shortfalls continue to be the biggest obstacle to companies adopting the services.

For most industries, cloud services have already become part of the corporate infrastructure, either by design or, more often, by workers adopting cloud services without the approval of the IT department. Cloud-service assessment firm Skyhigh Networks, for example, adds approximately 500 cloud services to those that it already tracks, according to CEO and co-founder Rajiv Gupta.

“Employees are using cloud services almost with abandon, without assessing the risk of those services,” Gupta says. For that reason, the security requirements will move front and center in 2014, he says.

No wonder, then, that nearly half of all IT managers continue to be concerned about the security of their cloud resources, even though 35 percent believe the security of the cloud to be superior to on-premise deployments. One reason: Many cloud providers continue to fail to address the concerns of their clients, says Charles Burckmyer, president of security-service provider Sage Data Security, whose clients often work with the firm to assess the security of third-party cloud services.

“Clients need to build a structured approach to working with cloud vendors, have a process for creating permissible exceptions, assigning risks and mitigating that risk,” he says. “Support around and by cloud services is vital for most clients today.”

By opening a dialog with their cloud providers, companies can create a secure hybrid infrastructure. Here are five topics that companies should discuss with their cloud providers in 2014, according to security experts.

1. Make security responsibilities clear
Cloud-service providers continue to place the responsibility for securing business data on the client, while many clients assume that their cloud services will take responsibility for the data stored in those services.

The gap in expectations narrowed in 2013 compared to previous years, but more than a third of customers still expect their software-as-a-service provider to secure the applications and data, according to a Ponemon Institute study released in March. Only 8 percent of companies assess the security of the applications using their information-technology and security teams, the study found.

While many industries have moved to the cloud without concern, security-conscious industries and those that have to comply with regulations are balking because cloud providers are not clarifying their risk, says Sage Data Security’s Burckmyer.

“Cloud-vendor due diligence and understanding what your responsibilities are, as a client, and what your vendor is doing to support you in those responsibilities is a very necessary topic,” he says. “There has been a reticence about moving to the cloud, from a regulatory and from a security standpoint, because many providers are not doing enough.”

2. Design systems to provide meaningful log data
Companies increasingly want to collect security information on what is happening to their data and applications out in the cloud. Yet many cloud providers do not supply detailed log files or cannot adequately separate the events pertaining to one customer from those pertaining to another.

“We need to make that the default standard practice, that there is a certain amount of logging information that is available proactively for all the different analytics that companies need to track,” says Jim Reavis, CEO of the Cloud Security Alliance. “A big sore spot has been log file information, and that has been a sticking point.”

[With cloud services collecting more data from businesses, firms should prepare for potential breaches that involve their providers. See Enterprises Should Practice For Cloud Security Breaches.]

Keeping audit logs of admin access is especially important, but most smaller cloud services do not provide such information.
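As a rough illustration of what tenant-separated audit logging might look like – the field names and values here are hypothetical, not any provider’s actual schema – a provider could tag every admin event with the customer it belongs to, so that per-customer logs can be exported cleanly:

    // Sketch: tenant-tagged audit log entries (hypothetical schema).
    interface AuditEvent {
      tenantId: string;   // which customer the event belongs to
      actor: string;      // who performed the action, e.g. an admin account
      action: string;     // what was done
      timestamp: string;  // ISO 8601 time of the event
    }

    const auditLog: AuditEvent[] = [];

    function recordAdminAction(tenantId: string, actor: string, action: string): void {
      auditLog.push({ tenantId, actor, action, timestamp: new Date().toISOString() });
    }

    // Export only one customer's events, so tenants never see each other's logs.
    function exportTenantLog(tenantId: string): AuditEvent[] {
      return auditLog.filter(event => event.tenantId === tenantId);
    }

    recordAdminAction("acme-corp", "admin@provider.example", "reset-user-password");
    console.log(exportTenantLog("acme-corp"));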

3. Encryption needs to be pervasive
Companies are not only demanding end-to-end encryption in the cloud, but increasingly asking cloud providers to let them encrypt data on-premise before sending it to the cloud.

Cloud providers should not only work with their customers, but also develop strong encryption solutions that let companies be confident their data is secure while still preserving some application features, says Sanjay Beri, CEO of cloud-service management firm Netskope.

“Encryption is the one thing that they, as an app provider, can do better than anyone in the middle,” Beri says. “No one knows the app better than they do, and as long as they expose the keys to be managed by someone else, many customers will be very happy.”
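To make the idea concrete, here is a minimal browser-side sketch – not any vendor’s actual product, and the upload endpoint is a placeholder – that uses the standard Web Crypto API to encrypt data with AES-GCM before it ever leaves the client, so the provider only stores ciphertext:

    // Sketch: encrypt data on the client before it goes to a cloud service.
    // Uses the standard Web Crypto API; the upload URL is a placeholder.
    async function encryptAndUpload(plaintext: string): Promise<void> {
      // The key is generated and kept on the client side.
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        true,
        ["encrypt", "decrypt"]
      );
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext)
      );
      // Only the ciphertext and IV go to the provider; the key never leaves.
      await fetch("https://cloud.example.com/store", { // hypothetical endpoint
        method: "POST",
        body: new Blob([iv, new Uint8Array(ciphertext)]),
      });
    }

The design point is that the key is generated and held on the customer’s side, so the provider never has the means to read the data it stores.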

4. Alert users to anomalies
Encryption, however, is not sufficient to protect a customer’s data if an attacker has gained access to account credentials. For that reason, cloud providers must also maintain good anomaly detection systems and share the information and audit records from those systems with the client, says Skyhigh’s Gupta.

“You need all these different tools to make sure that the cloud provider meets the customer’s requirement,” he says. “It is a layered approach.”
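A toy version of the kind of check involved – real systems use far richer signals, and everything here is made up for illustration – might simply flag any login from a country the account has never been seen in before:

    // Sketch: flag logins from countries an account has never used before.
    // A real system would use many more signals (device, IP, time, velocity).
    const seenCountries = new Map<string, Set<string>>(); // account -> countries

    function loginLooksAnomalous(account: string, country: string): boolean {
      const history = seenCountries.get(account) ?? new Set<string>();
      const anomalous = history.size > 0 && !history.has(country);
      history.add(country);
      seenCountries.set(account, history);
      return anomalous; // true means: alert the customer and write an audit record
    }

    loginLooksAnomalous("alice", "GB");              // first sighting: no alert
    console.log(loginLooksAnomalous("alice", "RU")); // true: new country, alert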

5. Discuss protections from third-party access
While cloud providers have to abide by the jurisdiction of the nation in which they do business and in which their data resides, the revelations about the massive data collection conducted by the United States’ National Security Agency and other nations’ intelligence agencies have left companies increasingly asking cloud providers who requests their data, how frequently, and whether the provider complies with the requests.

“It is very clear that providers need to help consumers understand how they manage and handle requests for information,” says the CSA’s Reavis. “Providers are now beginning to see that they need to keep government requests at arm’s length.”

That clarity needs to extend to the ownership of the information as well, says Skyhigh’s Gupta. Cloud providers need to emphasize that their clients continue to own their own data, and be as explicit as possible about the provider’s use of that data.

“How long do they keep your data? In some cases, they keep your data longer than you want them to; in others, they don’t give you enough time to retrieve your data if you leave the service,” he says.


Article source: http://www.darkreading.com/services/five-ways-cloud-services-can-soothe-secu/240164848

Are the websites you’re using tracking what you type?

Let’s say that, mid-oversharing, I thought better about writing a Facebook post about how the rash has now spread to my … (cue the backspacing, the select all/delete, hitting cancel or whatever it takes to avoid telling the world about that itch).

If that text were a Facebook status update (or a Twitter tweet, a Yahoo email, a comment on a blog or any other typing on a web page), cancelling it wouldn’t necessarily have mattered: what I wrote could still have been recorded, even if I decided not to post it.

That’s a point brought up on Friday by Jennifer Golbeck, director of the Human-Computer Interaction Lab and an associate professor at the University of Maryland.

Slate published an article by Golbeck about a paper, titled Self-Censorship on Facebook (PDF), that describes a study conducted by two Facebook researchers: Sauvik Das, a PhD student at Carnegie Mellon and a summer software engineer intern at Facebook, and Adam Kramer, a Facebook data scientist.

Over the course of 17 days in July 2012, the two researchers collected self-censorship data from a random sample of about 5 million English-speaking Facebook users in the US or UK.

How did they know when one of the Facebook users under their microscope had decided to back out of a post?

That’s simple as pie, really: they used code they had embedded in the web pages to determine if anything had been typed into the forms in which we compose status updates or comment on people’s posts.

To protect users’ privacy, the researchers decided to record “only the presence or absence of text entered, not the keystrokes or content” – a quote that serves as a helpful reminder that they could have tracked your keystrokes if they had wanted to.

(Note: logging keystrokes is no super secret, privacy-sucking vampire sauce. It’s plain old Web 1.0. This is not news, but it’s certainly worth repeating: anybody with a website can capture what you type, as you type it, if they want to.)

The researchers counted a user as having started writing content only if he or she typed at least five characters into a compose or comment box. If the content wasn’t shared within 10 minutes, it was marked as self-censored.
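The paper doesn’t publish its instrumentation code, but the mechanics it describes are straightforward. Here is a minimal sketch – with made-up element IDs and a placeholder reporting endpoint, not Facebook’s actual implementation – of client-side code that reports only a binary self-censorship flag:

    // Sketch: client-side self-censorship detection along the lines the
    // paper describes. Only a binary flag is sent, never the text itself.
    // The element IDs and the "/log" endpoint are hypothetical.
    const composer = document.querySelector<HTMLTextAreaElement>("#composer")!;
    let timer: number | undefined;

    composer.addEventListener("input", () => {
      if (composer.value.length >= 5 && timer === undefined) {
        // At least five characters typed: start the 10-minute countdown.
        timer = window.setTimeout(() => {
          // Not shared within 10 minutes: report censored = true, nothing else.
          navigator.sendBeacon("/log", JSON.stringify({ censored: true }));
        }, 10 * 60 * 1000);
      }
    });

    document.querySelector("#post-button")!.addEventListener("click", () => {
      if (timer !== undefined) window.clearTimeout(timer); // shared: no report
      timer = undefined;
    });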

Why in the world would Facebook, Twitter, or similar care so much about my rash and subsequent decision not to tell the world about it?

While second thoughts come in handy to stop people who might otherwise post truly embarrassing Facebook or other social media content, as far as the social networks themselves are concerned, self-censoring users just starve sites of the content they otherwise feed upon.

From the paper:

Users and their audience could fail to achieve potential social value from not sharing certain content, and the [social network] loses value from the lack of content generation…

… Understanding the conditions under which censorship occurs presents an opportunity to gain further insight into both how users use social media and how to improve [social networks] to better minimize use-cases where present solutions might unknowingly promote value diminishing self-censorship

In her Slate article, Golbeck interprets Facebook’s 17-day collection of self-censorship data for this research to be an invasion of privacy in that, as she writes, “the things you explicitly choose not to share aren’t entirely private.”

The problem with this thinking is that it conflates two things: 1) Facebook’s ability to capture data about users who started typing something but then didn’t publish it, and 2) the incorrect notion that Facebook tracked the content of what users typed.

Could Facebook have captured my need for salve? Absolutely. As I said above, anybody with a website can capture what we type into their website as we type it. It’s the nature of the web.

But the researchers took pains to state that while they did track the presence or absence of text entered, they explicitly did not listen in on the abandoned content; indeed, they tracked neither the keystrokes nor the content entered.

From the paper:

All instrumentation was done on the client side. In other words, the content of self-censored posts and comments was not sent back to Facebook’s servers. Only a binary value that content was entered at all.

That said, Facebook was still looking over its users’ shoulders in a fashion that would likely come as an unpleasant surprise to many of them.

Golbeck’s conflation isn’t surprising. Particularly given NSA-gate and the heightened awareness about pervasive surveillance it’s bestowed upon us, we’re ready to see eavesdropping governments and their corporate lackeys lurking in every corner of the internet.

But there’s a yawning gap between what people think can and cannot be monitored and what is actually possible.

The reality is that JavaScript, the language that makes this kind of monitoring possible, is both powerful and ubiquitous.

It’s a fully featured programming language that can be embedded in web pages and all browsers support it. It’s been around almost since the beginning of the web, and the web would be hurting without it, given the things it makes happen.

Among the many features of the language are the abilities to track the position of your cursor, track your keystrokes and call ‘home’ without refreshing the page or making any kind of visual display.

Those aren’t intrinsically bad things. In fact, they’re enormously useful. Without those sorts of capabilities, sites like Facebook and Gmail would be almost unusable, searches wouldn’t auto-suggest and Google Docs wouldn’t save our bacon in the background.

There are countless examples of useful, harmless things this (very old) functionality enables.

But yes, it also provides the foundation for any sufficiently motivated website owner to track more or less everything that happens on their web pages.
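For instance, a few lines of page script – a generic illustration, not code from any particular site – are enough to stream pointer positions and keystrokes back to a server without the page visibly changing at all:

    // Sketch: generic pointer and keystroke capture. Any site can do this;
    // the "/track" endpoint is a placeholder. Real trackers batch events
    // rather than sending one beacon per event.
    document.addEventListener("mousemove", (e: MouseEvent) => {
      navigator.sendBeacon("/track", JSON.stringify({ x: e.clientX, y: e.clientY }));
    });

    document.addEventListener("keydown", (e: KeyboardEvent) => {
      navigator.sendBeacon("/track", JSON.stringify({ key: e.key }));
    });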

This is the same old web we’ve been using since forever, but a lot of people don’t realize it. When they find out, they’re often horrified.

This was illustrated by a recent news piece about Facebook mulling the tracking of cursor movements (actually, technically, it would be tracking the movement of users’ pointers on the screen) to see which ads we like best.

The comments on that story make clear that many people are utterly creeped out by the idea that websites can track their pointers. One commenter likened pointer tracking to keylogging.

But as Naked Security’s Mark Stockley pointed out in a subsequent comment on that article, none of this is new, and the capability is certainly not confined to Facebook:

If Facebook [wants] to do key logging then [it] can – so long as you’re browsing one of their pages they can capture everywhere your cursor goes and everything you type. I’m not saying they do, I’ve no idea, I’m just saying it’s possible – any website can do it and it’s very easy.

In fact, as Mark noted in his comment on the pointer-tracking story, if he had decided to ditch the comment he was writing halfway through, the Naked Security site could still have captured everything he typed, even if he’d never hit submit (it didn’t by the way, we don’t do that).

In sum: Facebook spent 17 days tracking abandoned posts in a manner that some might find discomforting and readers are reminded that the internet allows website owners to be far, far more invasive.

If you want to be sure that nobody is tracking your mouse pointer or what you type then you’ll have to turn off JavaScript or use a browser plugin like NoScript that will allow you to choose which scripts you run or which websites you trust.

Image of backspace key courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/I4xemn4JiPE/

Facebook and Apple to help draft facial recognition rules

Facebook, Apple, Wal-Mart and other companies that plan to use facial-recognition scans for security will be helping to write the rules for how images and online profiles can be used.

The US Department of Commerce will start meeting with industry and privacy advocates in February to draft a voluntary code of conduct for using facial recognition products, according to a public notice.

The draft will be ready by June.

Tech giants – at least, those with the most popular smartphones – have been interested in facial recognition technology for quite a while.

Apple, for its part, has filed two patents for using facial recognition to control iPhones and other igadgets: one in 2011, and the other one earlier this month.

This past spring, Google filed a patent to let us unlock our phones by grimacing at them.

Google has said that Glass isn’t going to get facial recognition until the privacy wrinkles get ironed out.

Facebook users have been particularly queasy about the company zeroing in on their likenesses by default – a concept with which the company has had an on-again, off-again relationship.

Of course, the prospect of the foxes writing the rules for how the hen house is run doesn’t bode well, privacy advocates say.

Christopher Calabrese, an ACLU (American Civil Liberties Union) lawyer in Washington, told Bloomberg Businessweek that voluntary standards written primarily by the companies that benefit from them probably can’t be trusted to keep people’s privacy top of mind.

One of the more chilling scenarios involves secret surveillance, he said:

One of the most serious concerns about facial recognition is it allows secret surveillance at a distance. … Suddenly, you’re really not anonymous in public anymore.

That’s certainly not an unreasonable fear, given how the US city of San Diego, for one, has quietly slipped facial recognition into law enforcers’ hands.

At any rate, what are the chances that facial recognition standards are going to fare better than other standards that have gone to die under the scalpel of technology companies?

I speak here of Do Not Track, a woebegone standard that, as Naked Security’s Mark Stockley put it, has been dying the death of a thousand conference calls ever since stakeholder companies got involved.

If the self-regulation process succeeds then the companies’ options about how to use facial recognition are narrowed.

And if the self-regulation process fails utterly then it’s an open invitation for government to step in, which may well result in stronger regulation than self-regulation.

But if the self-regulation process just takes a very, very long time, then everyone can continue to use facial recognition in an unregulated environment, they have an opportunity to establish favourable de-facto standards outside the official process, and they can be seen to be ‘playing ball’ by being part of the official process.

The point is, at least some of the companies who’ll be whispering into the ear of the Commerce Department are the same ones who are filing facial recognition patents, selling Glass devices that will someday very likely be equipped with facial recognition, and/or have shown that they want their users to be automatically identified.

They clearly want facial recognition enabled far and wide.

Exactly how they plan to protect the privacy of the identities behind those faces remains to be seen, and that’s why we’ll be keeping a sharp eye on how these voluntary standards develop.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8HEu2E4JkdI/

Google accidentally improves Android privacy, just for a moment

Android 4.3, released in July, had a feature that enabled users to install apps but also keep them from collecting sensitive data such as a user’s location or address book.

That app, App Ops Launcher, was “a huge step” in the right direction for Android privacy, said Peter Eckersley, technical projects director at the Electronic Frontier Foundation (EFF), in a laudatory writeup posted last Wednesday.

From the Wednesday post:

Want to install Shazam without having it track your location? Easy. Want to install SideCar without letting it read your address book? Done.

… App Ops Launcher is a huge advance in Android privacy [that makes] Android 4.3+ a necessity for anyone who wants to use the OS while limiting how intrusive those apps can be.

The Android team at Google deserves praise for giving users more control of the data that others can snatch from their pockets.

One day later, Eckersley snatched that praise from Google’s pockets.

It turns out that Google, when it released Android 4.4.2 earlier in December, removed App Ops – a situation that Eckersley discovered on Thursday, when the EFF installed the 4.4.2 update on its test device, only to find that App Ops had blinked out of existence.

In fact, Google had, well, oops! never meant to release it in the first place.

The app, Google reportedly told Eckersley, was only released by accident, was experimental, and, for all Google knew, could well break some of the apps it’s supposed to police.

In a second posting, on Thursday, Eckersley said that the EFF is “suspicious” of Google’s explanation, saying that the EFF doesn’t think it justifies removing the feature rather than improving it.

The EFF not only wants Google to bring back the feature; it also wants to see other critical protections, such as letting users disable “all collection of trackable identifiers by an app with a single switch, including data like phone numbers, IMEIs, information about the user’s accounts.”

The EFF also wants Google to find a way to let users disable network access for certain applications that really shouldn’t need it, such as flashlights, wallpapers, UI skins, and “many” games.

Google hasn’t chosen to comment further on the accidentally-improved-Android-privacy issue, but the situation doesn’t necessarily call for pitchfork-wielding mobs.

As InformationWeek’s Thomas Claburn pointed out, it’s pretty easy to see that App Ops was leaked by accident, as Google told Eckersley, given that you had to be a fairly devoted Android fan to discover it.

In fact, when Android Police ferreted out and then reported on App Ops back in July, Ron Amadeo described it in his writeup as a “hidden” feature, tucked away in the package installer.

“It’s not really ready yet, so Google has hidden it,” Amadeo wrote at the time.

The feature seemed to work, he said, so he gave instructions on trying it out to those who might be “feeling adventurous”.

The choice to feel adventurous and get frisky with your Android is yours to make.

But if you want to strap a life jacket on before tossing your Droid into choppy water, I suggest reading about Sophos’s free Mobile Security for Android app.

Image of y fronts from Flickr.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2ViK8EI5AYI/

Apple updates Mavericks to 10.9.1, issues security fixes for Safari

Apple just announced the first point update for its recently released OS X 10.9, better known as Mavericks.

Most of the fixes and enhancements are of the not-really-to-do-with-security sort, but the update also bundles in a new version of Safari, with security patches.

That takes Apple’s latest browser version number to 7.0.1, dealing with a number of security holes including a data leakage flaw and eight vulnerabilities that “may lead to an unexpected application termination or arbitrary code execution.”

Remember that arbitrary code execution inside a browser generally means that a drive-by malware install is possible.

That’s where simply viewing a web page can cause malware to be downloaded, installed and activated, without any outward or visible signs: no warnings, no pop-ups, and no are-you-sures.

The Mavericks update can be fetched, as usual, by using the Software Update… option in the Apple menu, or by fetching a standalone installer in the form of a DMG (Apple disk image) file.

If you look after more than one Mac, or simply want to keep a complete set of reinstallation tools for your own Mac, getting and keeping the standalone installers for each point release is a handy thing to do.

It means that you can reinstall OS X and apply all the current security patches before going online for the first time on your newly-rebuilt Mac.

→ Point releases after 10.x.1 are usually available as a regular download or as what’s called a combo, which packages together all previous point releases as well. That way you only need to install OS X plus one (typically jumbo-sized) patch, rather than installing OS X and then applying each point release in numeric sequence. Of course, 10.x.1 updates have no combo flavour, as there are no previous point releases with which to combine.

There are two DMG installers to pick from:

  • DL1712: OS X Mavericks 10.9.1 Update for MacBook Pro with Retina Display (Late 2013). [363MB]
  • DL1707: OS X Mavericks 10.9.1 Update (for all other supported Macs). [243MB]

If you have earlier versions of OS X (10.7.5, aka Lion, and 10.8.5, aka Mountain Lion), the Safari security fixes are available on their own as Safari 6.1.1.

→ The Safari 6.1.1 update seems to bring earlier OS X versions into line with Mavericks as far as browser security patches are concerned, but none of the security fixes in Mavericks itself seem to have been backported to the Lion and Mountain Lion flavours. It’s looking increasingly certain that they never will be.

I’ll admit that my first thought on hearing about 10.9.1 was, “Gosh, that was quick. Mavericks itself hasn’t been out long.”

But a glance at Rob Griffiths’ long-running (and very handy) release history table for OS X versions shows that 10.9.1 came 55 days after 10.9, longer than any other 10.x.1 release so far.

How time flies when you’re having fun!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/R3Rm9cXUbVw/