STE WILLIAMS

Two million recordings of kids, parents leaked from cloud-connected toys’ crappy MongoDB

Two million voice recordings of kids and their families were exposed online and repeatedly held to ransom – because the maker of microphone-fitted, internet-connected stuffed toys used an insecure MongoDB installation.

Essentially, the $40 toys, built by CloudPets, connect to the internet via an iOS or Android app on a nearby smartphone or tablet, and exchange voice messages between children and their friends and relatives.

For example, a parent away on a work trip can open the app on their smartphone, record an audio message, and beam it to their kid’s CloudPets toy via a smartphone within Bluetooth range of the gizmo at home; the recording plays when the tyke presses a button on a paw.

Similarly, the youngsters can record messages using the stuffed animal, and send the audio over to their mom, dad, grandparent, and so on, via the internet-connected app.

Cute … How Cloudpets passes messages from app to toy

These recordings, along with records of 820,000 CloudPets.com accounts associated with each of the bears, have leaked into the hands of criminals due to a poorly secured NoSQL database holding 10GB of internal information.

CloudPets’ internet-facing MongoDB installation, on port 2701 at 45.79.147.159, required no authentication to access, and was repeatedly extorted by miscreants. The database contains links to .WAV files of Cloudpets’ voice messages hosted in the Amazon cloud, again with no authentication, allowing the mass slurping of more than two million highly personal conversations between families and their little ones.

It appears crooks found the database, presumably by scanning the public ‘net for insecure MongoDB installations, took a copy of all the data, deleted that data on the server, and left a note demanding payment for the safe return of a copy of the database. This happened three times, we’re told.

Of course, anyone else wandering by the database could have swiped the records for themselves and kept quiet, so the information could be in the hands of just about any miscreant. The IP address of the database is also the address of the backend web server used by the Android and iOS app accompanying the toy. That app was developed by Romanian biz mReady. The IP belongs to server host Linode, which is presumably providing the machine.

Computer security breach expert Troy Hunt, who maintains the HaveIBeenPwned website, was tipped off about the insecurity of CloudPets, a brand of Spiral Toys, and went public today with details of the cockup.

“This is kids’ voices recorded on teddy bears,” Hunt told The Register after spending a week investigating the security blunder. “I can picture my four-year-old girl, sitting in her room – it’s hard to picture a more innocent scenario – and all these actors have access to what she says to her teddy bear.”

As proof that something was wrong, Hunt’s informant provided more than 580,000 records from the CloudPets database, along with screenshots of his three attempts to alert the toy manufacturer to the gaping hole. Each warning had, we’re told, fallen on deaf ears.

As Hunt dug deeper, things got more bizarre: yes, the account passwords in the database were hashed with bcrypt, but the website had no password rules, and its tutorial used only a three-character password – meaning many of the passwords were crackable anyway. The account records contained email addresses, hashed passwords, user IDs and login times.
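Hunt’s point about crackability is just arithmetic: with no password rules, a three-character lowercase-plus-digits password has only 36³ = 46,656 possibilities, so even a deliberately slow hash falls to exhaustive search. A minimal sketch of the idea (SHA-256 stands in for bcrypt here to keep the example dependency-free, and the password “abc” is a hypothetical example, not one from the CloudPets data):

```python
import hashlib
from itertools import product
from string import ascii_lowercase, digits

def crack_short_password(target_hash, alphabet=ascii_lowercase + digits, max_len=3):
    """Exhaustively try every candidate up to max_len characters.

    SHA-256 stands in for bcrypt so the sketch runs with the standard
    library alone; the search-space arithmetic is identical either way.
    """
    for length in range(1, max_len + 1):
        for combo in product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
                return guess
    return None  # not found within max_len characters

# Hypothetical three-character password, hashed as a site might store it:
target = hashlib.sha256(b"abc").hexdigest()
print(crack_short_password(target))  # -> abc
```

Even at bcrypt speeds of, say, ten hashes per second on a single core, the full 46,656-candidate space is exhausted in under 80 minutes; it is password length and complexity rules, not the hash alone, that make offline cracking impractical.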

Niall Merrigan, a Capgemini solution architect who investigates breaches on his own time, has been tracking MongoDB installations held hostage on the open internet. He helped Hunt confirm the CloudPets’ database had been hit multiple times by extortionists.

Using internet device search engine Shodan.io to look at historic snapshots of exposed online systems, Merrigan found that in January, Cloudpets’ database was again and again deleted and replaced with DBs called “PLEASE_READ”, “README_MISSING_DATABASES”, and “PWNED_SECURE_YOUR_STUFF_SILLY” – a sign the data was being held to ransom for one Bitcoin at a time.

Hunt concluded: “The CloudPets data was accessed many times by unauthorised parties before being deleted and then on multiple occasions, held for ransom.” He’s added the 800,000-plus email addresses in the vulnerable database to HaveIBeenPwned.com, so as to alert owners of Cloudpets toys. He also warns against acquiring web-connected toys because it’s too easy for a single error to expose children to snooping.

His advice to MongoDB sysadmins is simple: don’t accept the default configuration that allows anonymous unauthenticated access, and instead secure your installation.
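In configuration terms, that means binding the server to a private interface and turning authentication on. A sketch of the relevant mongod.conf fragment (the option names, net.bindIp and security.authorization, are standard MongoDB settings, but check the documentation for the version you run):

```yaml
# mongod.conf -- hardening sketch, not a complete configuration
net:
  bindIp: 127.0.0.1        # listen on loopback only, not every interface
  port: 27017
security:
  authorization: enabled   # reject commands from unauthenticated clients
```

Since authorization: enabled locks out clients without accounts, an administrative user has to be created first (with db.createUser() from a local shell). Either setting alone would have kept the CloudPets database from being slurped wholesale off the public internet.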

“I can see both sides of this,” Hunt told The Register. “People are screwing up, but to be honest, I haven’t seen this with SQL Server, because you can’t stand it up with anonymous open access. You need a baseline that forces you to have an account and forces you to have a password.”

A spokesperson for Cloudpets and Spiral Toys, based in California, was not available for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/28/cloudpets_database_leak/

20 Questions for SecOps Platform Providers

Bringing security operations capabilities to the masses is long overdue. Here’s how to find a solution that meets your budget and resources.

The security operations platform is quickly emerging as a favorite talking point for 2017, even for organizations that do not find themselves with an expansive budget to improve their security maturity and posture. Of course, doing so is a complex undertaking with a wide variety of moving parts. Or is it?

Historically, advanced SecOps has been beyond the reach and resources of all but the most elite organizations. Today, the cloud has opened up new possibilities for these enhanced capabilities at reduced cost. This, in turn, creates new opportunities for mid-sized and smaller enterprises.

Of course, where there is interest, there are vendors ready to pounce. Lately, there are quite a few vendors talking about their security operations platforms. How can the conscientious security buyer interrogate potential vendors to make the most-informed decision? As you might guess, I would suggest a game of 20 questions.


1. How do you make it easy to seamlessly operationalize intelligence? Reliable, high-fidelity intelligence is an important component of a mature security operations capability. Plenty of vendors offer intelligence, and I have already discussed how to differentiate between different intelligence offerings. But there is another important point worth mentioning here. The greatest intelligence in the world won’t help an organization if it can’t operationalize it. In other words, if it isn’t easy for you to leverage intelligence to help defend your organization, it is more or less useless.

2. How do you facilitate risk mitigation? Everyone knows that security is all about risk mitigation. But if knowledge about risks and threats to the organization cannot be operationalized to help manage and mitigate risk, that knowledge is wasted.

3. Do you honestly believe that I want more alerts? I am suffering from a bad case of alert fatigue. What I need is help making order out of the chaos, and turning all of that information into knowledge. 

4. Where is my context? Alerts without the appropriate context do not provide a true understanding of what is going on. That makes it difficult for organizations to make educated, informed decisions. Context is king.

5. Can you provide me protection against a variety of attack vectors that compromise organizations? If a security operations platform cannot cover multiple different attack vectors, it isn’t going to cut it.

6. Can you help me see? The importance of proper visibility across the network, endpoints, mobile, cloud, and SaaS is huge. If you can’t see it, you can’t detect it.

7. How do you model attacker behavior? The best way to identify attacker behavior within an organization is to deeply understand different characteristics of that behavior, model them, and subsequently develop algorithms that recognize them. Simply developing algorithms without understanding how attackers attack isn’t going to be very productive.

8. How is your performance? Security operations is about both collection and analysis. It isn’t enough to collect vast quantities of data. Any reasonable SecOps platform needs to be able to allow analysts to interrogate that data rapidly.

9. Do you have integrated case management? The “swivel chair” effect and the days of manually cutting and pasting between different systems need to come to an end. If the analysis and investigation I am doing cannot be fed automatically into a case or ticket, that isn’t going to work for me.

10. How do you scale? I want to know that as my needs grow, I can buy additional capacity and functionality as necessary without a long, complex, and disruptive deployment cycle.

11. How do you provide integration between distinct components in a diverse security ecosystem? My security ecosystem is diverse, and you need to be able to help me maximize and optimize my existing investments.

12. How flexible is your query language? Can I ask precise, incisive, targeted questions? If your query language does not support that, it is not helpful.

13. Can you augment my existing talent? Although I want to run security operations 24×7, that’s not a realistic expectation, given my current resources. How can you augment my staff to help us get there?

14. Do you provide seamless pivoting across a wide variety of data sources? I don’t have time to issue multiple queries across multiple different systems to get the relevant data that I need.  If you can’t provide me a single interface to all of the data across my security ecosystem, I’m not interested.

15. Do you have an integrated automation and orchestration capability? Manual processes are inefficient and error-prone. I need to take advantage of automation and orchestration, but it needs to be integrated into the platform for that platform to be realistic.

16. Will you end my cutting and pasting nightmare? In 2017, seamless integration between alerting, analysis, investigation, case management, and documentation should be a given.

17. Can you help me free up resources for higher order work? It is not a good use of time or money to have analysts spending most of their time performing clerical tasks. I need them to focus on higher-order work.


18.  Do you have real analytics based on real knowledge of attacker behavior? Everyone talks about analytics these days. But the only analytics that stand a chance of reliably detecting attacker behavior with low noise are analytics based on intimate knowledge of attacker behavior.

19. Do you support flexible deployment options? Any realistic platform needs to be easily consumable in a variety of different ways.

20. Is your solution affordable? The time to bring security operations to the masses is long overdue. In order to make that a reality, any solution needs to suit my budget.

Related Content:

Josh is an experienced information security analyst with over a decade of experience building, operating, and running Security Operations Centers (SOCs). Josh currently serves as VP and CTO – Emerging Technologies at FireEye. Until its acquisition by FireEye, Josh served as … View Full Bio

Article source: http://www.darkreading.com/operations/20-questions-for-secops-platform-providers/a/d-id/1328272?_mc=RSS_DR_EDT

Tomorrow on Dark Reading: Your Costs, Risks & Metrics Questions Answered

First up on the Dark Reading upcoming events calendar is our Dark Reading Virtual Event Tuesday, Feb. 28.

It’s almost here! Tomorrow, Tuesday, Feb. 28, beginning at 11:00 a.m. Eastern Time, we’ll host our next Dark Reading Virtual Event and devote the day to tackling Cybersecurity: Costs, Risks, and Benefits.

Afraid you might have forgotten a few expensive items when estimating the costs of a data breach? Need more satisfying answers to the “are we secure” question? About to invest in cyber insurance and want to find the potential holes in your policy before it’s time to file that first claim? Need to make a business case for increasing your budget, but need better ways to measure performance first? 

Then this is the event for you. Experts from the Verizon Global Investigative Response Team, Deloitte Cyber Risk Services, Forrester, Optiv, Advisen, CenturyLink, RiskLens, and more will guide you to answers for your most pressing security management questions.

 IN CASE YOU MISSED IT

Check out these webinars you might have missed over the last week:

COMING SOON

Wednesday, March 15, Building a Cybersecurity Architecture to Combat Today’s Risks: “Layered defense” has traditionally been the modus operandi of IT security, but this approach can’t be counted on to stand up to today’s threats and attacks. In addition, attack surfaces are growing every day as companies adopt technologies like cloud and the Internet of Things. So how can you combat today’s risks? Christie Terrill, partner at BishopFox, will provide some answers.

Thursday, March 16, Becoming a Threat Hunter in Your Enterprise: You’re tired of waiting. Tired of waiting for your technology to alert you that there’s already a problem. You want to be more proactive, sink your hands into those threat intelligence feeds, dig into those behavioral analytics reports, follow one clue after another after another, until it leads you to a would-be attacker before they finish carrying out their grand plan. What you want is to be a threat hunter. Learn how, and what a formal threat hunting program looks like, from John Sawyer, senior security analyst at InGuardians, and Chris Pace, technology advocate, EMEA, at Recorded Future.

DOWN THE ROAD

 More on Security Live at Interop ITX

Interop ITX is coming to the MGM Grand in Las Vegas May 15-19. The conference program is overflowing with security sessions this year. Plus, the Dark Reading team will be back with the Cybersecurity Summit – a two-day crash course that will bring security teams, from newbies to time-crunched pros, up to speed. 

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: http://www.darkreading.com/risk/tomorrow-on-dark-reading-your-costs-risks-and-metrics-questions-answered/a/d-id/1328271?_mc=RSS_DR_EDT

Microsoft Adds Technical Updates to SDL Site

Microsoft releases a new round of updates and technical content additions to its Security Development Lifecycle website.

Microsoft has released a new wave of updates and technical content additions for its Security Development Lifecycle (SDL) website, which contains resources for securely developing and testing software.

These changes affect the SDL Developer Starter Kit, security tooling guidance, and compiler and cryptographic recommendations. Microsoft has consolidated its Security Tools recommendations, replaced BinScope with BinSkim, and published guidance to help developers with cryptography, among other updates. 

“Detailed Cryptographic Recommendations taken from Microsoft internal standards are now available for the first time – providing valuable guidance for developers looking to build cryptography into applications and services in line with Microsoft’s own practices,” Andrew Marshall, Microsoft’s principal security program manager for security engineering, wrote in a blog post today announcing the updates.

The SDL updates are part of a broader investment to improve security at Microsoft, and the company promises additional changes in coming months.

Read more details on the Microsoft blog


Article source: http://www.darkreading.com/application-security/microsoft-adds-technical-updates-to-sdl-site/d/d-id/1328275?_mc=RSS_DR_EDT

Google’s Ease-of-Use Email Encryption Project Goes Open Source

E2EMail, together with the open-source Key Transparency project, is meant to take on the challenges that have dogged end-to-end email encryption adoption for decades.

Engineers at Google have for several years been doing battle against the perception that end-to-end email encryption must necessarily be complex and hard to use. They’ve been cooking up a project called E2EMail that’s designed to offer a simpler alternative to PGP for Gmail through a Chrome extension, and now they’re taking the project to the open-source community.

Google’s internal security team released the extension over a year ago, and hopes that going open source will advance the project and easy-to-use email encryption.

“E2EMail is built on a proven, open source Javascript crypto library developed at Google,” wrote KB Sriram, Eduardo Vela Nava, and Stephan Somogyi of Google’s security and privacy engineering team in an announcement on Friday. “It’s now a fully community-driven open source project, to which passionate security engineers from across the industry have already contributed.” 

According to the engineers, Google initiated the project because PGP in command-line form “clumsily interoperates with Gmail” and is “too hard to use.”

E2EMail integrates OpenPGP into a simpler-to-use extension that keeps the cleartext of the message exclusively on the client. This announcement follows close on the heels of another made last month about another open-source initiative called Key Transparency, which Google believes fits hand-in-glove with E2EMail.

The Google security team kicked off the Key Transparency project to tackle the challenges of discovery and distribution in OpenPGP implementations. The goal is to create an open-source and transparent directory of public keys with an ecosystem of mutually auditing directories.

“We’ve spent a lot of time working through the intricacies of making encrypted apps easy to use and in the process, realized that a generic, secure way to discover a recipient’s public keys for addressing messages correctly is important,” wrote Ryan Hurst and Gary Belven of Google’s security and privacy team last month. “Not only would such a thing be beneficial across many applications, but nothing like this exists as a generic technology.”

They explain that the manual verification required by the PGP web-of-trust model has proven over the last 20 years to be too difficult to gain widespread use. They believe that the relationships between online personas and public keys need to be automatically verifiable and publicly auditable, and that’s what the Key Transparency project intends to strive for.
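The core mechanism behind such public auditability is an append-only log whose entries are hash-chained, so a directory that rewrites history is caught by anyone replaying the chain. A toy sketch of that idea (a drastic simplification for illustration, not Google’s actual Key Transparency design):

```python
import hashlib
import json

class KeyLog:
    """Toy append-only, hash-chained directory of (email -> public key).

    Auditors who replay the chain from the start detect any entry that
    was rewritten after publication.
    """
    def __init__(self):
        self.entries = []  # each entry: [email, pubkey, prev_hash]

    def _hash(self, entry) -> str:
        return hashlib.sha256(json.dumps(entry).encode()).hexdigest()

    def publish(self, email: str, pubkey: str) -> None:
        prev = self._hash(self.entries[-1]) if self.entries else "0" * 64
        self.entries.append([email, pubkey, prev])

    def audit(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            if entry[2] != prev:   # chain link broken: history was rewritten
                return False
            prev = self._hash(entry)
        return True

log = KeyLog()
log.publish("alice@example.com", "PUBKEY_A")
log.publish("bob@example.com", "PUBKEY_B")
assert log.audit()

# A malicious directory that swaps Alice's key after the fact
# breaks the chain for every later entry:
log.entries[0][1] = "ATTACKER_KEY"
assert not log.audit()
```

The real system adds verifiable lookups, signed tree heads, and mutually auditing directories, but the detectability property rests on the same append-only hashing idea.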

When it comes to email encryption, the Google security team believes that Key Transparency is “crucial” to E2EMail’s evolution, and that’s where it hopes to lead the open-source efforts in the near future. Google is encouraging community participation by directing interested contributors to the E2EMail repository on GitHub.

Related Content:

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: http://www.darkreading.com/endpoint/privacy/googles-ease-of-use-email-encryption-project-goes-open-source/d/d-id/1328277?_mc=RSS_DR_EDT

Cloudbleed’s silver lining: the response system worked

Cloudbleed is a serious vulnerability in Cloudflare’s Internet infrastructure that Google Project Zero researcher Tavis Ormandy discovered in mid-February. Much has been made of its severity, and rightly so. But there’s another part of the story.

Though not perfect, many industry experts believe the incident was handled well by all sides, and is an example other companies can follow if they someday find themselves in Cloudflare’s position.

There have been some points of contention along the way. Some believe Ormandy jumped the gun and announced the vulnerability before the date he had worked out with Cloudflare, throwing the company into an unnecessary scramble.

But industry experts also praise Ormandy for finding this proverbial needle in a haystack, and Cloudflare for patching the vulnerability with lightning speed. Cloudflare is also getting credit for its honest, detailed public response.

Cloudbleed defined

Ormandy contacted Cloudflare to report a vulnerability in its edge servers on Feb. 17. It turned out that a single character in Cloudflare’s code caused the problem. In its blog post, Cloudflare said the issue stemmed from its decision to use a new HTML parser called cf-html.

From the Cloudflare blog:

It turned out that in some unusual circumstances, our edge servers were running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data. And some of that data had been cached by search engines. We quickly identified the problem and turned off three minor Cloudflare features (email obfuscation, Server-side Excludes and Automatic HTTPS Rewrites) that were all using the same HTML parser chain that was causing the leakage. At that point it was no longer possible for memory to be returned in an HTTP response.
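The bug class described here, a parser reading past the end of its buffer, is easy to model in miniature. In the sketch below (a toy analogue, not Cloudflare’s actual code), two tenants’ data sit side by side in one flat byte string, as they would in a shared process heap, and an unchecked read length leaks the neighbour’s secrets:

```python
# Toy model of a buffer over-read; not Cloudflare's actual parser code.
memory = b"<p>public page content</p>" + b"Cookie: session=SECRET123"
PAGE_LEN = 26  # length of the public buffer at the start of `memory`

def read_page_unchecked(length):
    """The bug: trusts a caller-supplied length instead of the buffer bound."""
    return memory[:length]

def read_page_checked(length):
    """The fix: clamp every read to the buffer's real size."""
    return memory[:min(length, PAGE_LEN)]

leaked = read_page_unchecked(PAGE_LEN + 25)   # runs past the "buffer"
assert b"SECRET123" in leaked                 # adjacent memory comes along
assert b"SECRET123" not in read_page_checked(PAGE_LEN + 25)
```

In C the out-of-bounds read returns whatever happens to live next to the buffer; in Cloudflare’s case that adjacent memory held other customers’ HTTP traffic, which is why the leak crossed tenant boundaries.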

Ormandy also laid out the details in this advisory. He said:

This situation was unusual, PII was actively being downloaded by crawlers and users during normal usage, they just didn’t understand what they were seeing. Seconds mattered here, emails to support on a Friday evening were not going to cut it. I don’t have any Cloudflare contacts, so reached out for an urgent contact on twitter, and quickly reached the right people. After I explained the situation, Cloudflare quickly reproduced the problem, told me they had convened an incident and had an initial mitigation in place within an hour.

Despite (or perhaps because) Tavis started his advisory with “It took every ounce of strength not to call this issue ‘cloudbleed’”, the flaw quickly received the same branding treatment given to such previous blockbuster vulnerabilities as Heartbleed and Shellshock. It got a catchy name and logo.

A rush to go public?

On the surface, this researcher-to-vendor collaboration went well. But in recent days, some in the security industry have suggested that Ormandy announced the bug too soon – specifically, sooner than the window he had originally worked out with Cloudflare.

When a researcher works with a vendor to mitigate a vulnerability, a window between discovery and public announcement is typically worked out so the affected organization has time to properly close the security hole and make sure customers are adequately protected.

Sources close to Cloudflare say that Ormandy went public earlier than promised, sending Cloudflare into a scramble to complete its investigation and communicate with customers.

Ormandy did not return requests for comment.

Misplaced rage

Wim Remes, CEO and principal consultant at NRJ Security, said criticism toward Ormandy is misplaced. In a conversation on Facebook Messenger Friday, he said the social media echo chamber was distorting matters.

“You’re either for or against Tavis Ormandy, you’re ok with Cloudflare’s approach or you aren’t, and so on,” he said. “It doesn’t really matter.”

In a blog post, he described this as misplaced rage. If companies using third-party service providers like Cloudflare took the time to understand what they were paying for and did more on their end to ensure security, the issues described above wouldn’t matter. He wrote:

I think I’ve been repeating the same mantra to companies for at least a decade: You outsource process and function, but never responsibility. If you include third-party services in your product, no matter what they are, you need to go beyond having the supplier fill in a 400-question SIG questionnaire. You have to actually freaking test that component as if it is a pacemaker that your mother will get implanted. Third-party components remain your responsibility!

Since it’s a responsibility companies don’t seem to take seriously, people should simply be thankful to Ormandy and Cloudflare for getting Cloudbleed sorted out, Remes said.

Defensive measures

Though Cloudflare dealt swiftly with Cloudbleed, there’s still concern about any potential damage done. Ryan Lackey, a well-known industry professional and former Cloudflare employee, mapped out the risks in his blog post:

While Cloudflare’s service was rapidly patched to eliminate this bug, data was leaking constantly before this point — for months. Some of this data was cached publicly in search engines such as Google, and is being removed. Other data might exist in other caches and services throughout the Internet, and obviously it is impossible to coordinate deletion across all of these locations. There is always the potential someone malicious discovered this vulnerability independently and before Tavis, and may have been actively exploiting it, but there is no evidence to support this theory. Unfortunately, it is also difficult to conclusively disprove.

With that in mind, Lackey suggested site owners and administrators who use Cloudflare take the following steps:

  • Change your passwords. “While this is in all probability not necessary (it is unlikely your passwords were exposed in this incident), it will absolutely improve your security from both this potential compromise and many other, far more likely security issues,” he said.
  • Use this incident to improve response plans. The situation presents a prime opportunity for users to put their incident handling process to the test. Lackey suggests companies and individuals discuss the specific impact to their application and what response makes the most sense.
  • Invalidate authentication credentials for mobile applications and other machine-to-machine communications such as IoT devices. This forces users to re-enroll apps and devices if they used Cloudflare as an infrastructure provider. It may not be as effective as having everyone change their passwords, Lackey wrote, but it’s still a useful exercise.
  • Review what this means from a compliance perspective. Lackey said that if an application or website is on Cloudflare and is subject to industry or national regulation, Cloudbleed may count as a reportable incident. “Your security and compliance teams should evaluate. Obviously, full compliance with applicable regulations is an essential part of security,” he said.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vTGFYVHbm4c/

News in brief: D-Link vulnerabilities; SHA-1 woe; MySQL hacks

Your daily round-up of some of the other stories in the news

Update your D-Link switches

D-Link has released a support announcement explaining that the firmware used in its DGS-1510 Websmart Switch Series has been “found to have security vulnerabilities”, and the company is urging users to install the latest firmware update.

All firmware prior to version 1.31.B003 is affected. The firmware is used in all revisions of the DGS-1510-28XMP, DGS-1510-28X, DGS-1510-52X, DGS-1510-52, DGS-1510-28P, DGS-1510-28, and DGS-1510-20 switches.

The vulnerabilities have been given the CVE number CVE-2017-6206, although almost no details have been disclosed. Firmware and release notes are available from the support announcement.

SHA-1 collision breaks WebKit repository

The first real world consequences of last week’s SHA-1 collision have started to emerge.

Whilst creating a test to see if the collision made WebKit software (the Safari browser’s HTML rendering engine) vulnerable to cache poisoning, the WebKit team ground their subversion source control repository to a halt with this ominous message:

svn: E200014: Checksum mismatch for ‘[…] shattered-2.pdf’

The team have since worked around the problem.

Meanwhile Linus Torvalds has taken to Google Plus (yes, I had no idea people used Google Plus either) to reassure users of his own source control software, git, that the sky is not falling in.

The Linux founder wrote that SHA-1 issues in git should be “pretty easy to mitigate against” and that there’s “a reasonably straightforward transition to some other hash”.
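The “Checksum mismatch” error above arises because Subversion, like git, stores and verifies content by its hash, so two different byte streams with the same digest confuse the store. A toy content-addressed store shows the mechanism (SHA-256 is used here for the sketch; the affected systems used SHA-1):

```python
import hashlib

class BlobStore:
    """Toy content-addressed store: blobs are keyed by their hash,
    and every read re-hashes the bytes to detect corruption."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blobs[key]
        if hashlib.sha256(data).hexdigest() != key:
            raise ValueError(f"Checksum mismatch for {key[:12]}...")
        return data

store = BlobStore()
key = store.put(b"hello shattered world")
assert store.get(key) == b"hello shattered world"

# Tamper with the stored bytes and the next read fails the same way
# subversion did when it met the colliding PDFs:
store._blobs[key] = b"tampered content"
try:
    store.get(key)
except ValueError as e:
    print(e)  # the checksum no longer matches the key
```

A hash collision defeats exactly this check: two different contents legitimately carry the same key, so the store can silently serve the wrong one, which is why Torvalds’ “transition to some other hash” matters.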

If you haven’t read Paul Ducklin’s excellent, detailed, plain English explanation of the SHA-1 collision do yourself a favour and read it now.

MySQL databases held to ransom

The recent attacks that have seen MongoDB and Hadoop databases held to ransom seem to have evolved to include a new target: MySQL.

A 30-hour attack against installations of the popular database software was picked up by GuardiCore:

Similarly to the MongoDB attacks, owners are instructed to pay a 0.2 Bitcoin ransom (approx. $200) to regain access to their content … The attack starts with ‘root’ password brute-forcing. Once logged-in, it fetches a list of the existing MySQL databases and their tables and creates a new table called ‘WARNING’ that includes a contact email address, a bitcoin address and a payment demand.

MySQL database operators are reminded to make sure that their databases are properly hardened, protected by strong passwords and, as the MySQL reference manual itself states: “may also wish to restrict MySQL so that it is available only locally on the MySQL server host, or to a limited set of other hosts”. Amen to that.
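A minimal my.cnf fragment along those lines (illustrative; both option names appear in the MySQL reference manual, but verify them against your server version):

```ini
# my.cnf -- hardening sketch, not a complete configuration
[mysqld]
bind-address = 127.0.0.1   # accept TCP connections from localhost only
# skip-networking          # stricter still: no TCP at all, Unix socket only
```

Combined with a strong, non-default root password, this takes a server out of the population that the brute-forcing campaign described above can even reach.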

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6f1jIyafcmA/

Brit cops can keep millions of mugshots of innocent folks on file

After unlawfully hoarding millions of mugshots of one-time suspects, police chiefs in England and Wales were this week told to delete the snaps – but only if people in the photos complain. And even then, requests can be easily waved away.

This is all set out in the UK Home Office’s “Review of the Use and Retention of Custody Images” [PDF], which was published on Friday. That review was launched in 2015 and discovered that police had amassed more than 19 million mugshot photos.

That’s rather unfortunate because the High Court in London had ruled this practice of keeping pictures of presumed innocent people on file unlawful in 2012.

Essentially, detectives were stashing away photos of people arrested, quizzed and ultimately not convicted of any crime, just in case. The review sought to set in place rules for police forces in England and Wales maintaining libraries of photos following that High Court showdown. Whereas there are regulations on the storage of DNA evidence and fingerprints, there are few or no controls on photographs.

That High Court judgment, in R (RMC and FJ) v MPS, recognised that mugshots of people taken into custody were useful to officers: for example, frontline coppers can use them to clock known suspects and people out on bail. Crucially, the court called for an urgent review on police retaining the images. Lord Justice Richards said: “It should be clear in the circumstances that a ‘reasonable further period’ for revising the policy is to be measured in months, not years.”

Five years later and the UK government has finally officially offered some guidance to police forces in Blighty.

Astonishingly, the Home Office stopped short of demanding the deletion of all images of innocent people no longer under investigation because, apparently, there are so many photos on file, it would be impractical to ask officers to go through all their databases and remove photographs of individuals who have never been convicted of an offence.

Instead, if your mug is still on file, and you haven’t been found guilty of anything, you can ask nicely to be removed. If investigators think they need to keep your snap “for a policing purpose”, your request can be denied.

And if you have been convicted and were released from prison more than six years ago, you can ask the plod to delete your mugshots, the review suggested, although, again, it’s entirely up to the police whether to fulfil your request.

“Following consultation with key partners, the principal recommendation [of the review] is to allow ‘unconvicted persons’ to apply for deletion of their custody image, with a presumption that this will be deleted unless retention is necessary for a policing purpose and there is an exceptional reason to retain it,” said Home Secretary Amber Rudd on publication of the review.

Big Brother Watch’s Renate Samson responded: “The opportunity for people to have their custody photo deleted from the database is welcome, [but] we believe they shouldn’t have to ask, it should be an automatic process. The explanation as to why this can’t be done reveals a poorly designed IT system which is impacting innocent people’s right to privacy. Going forward, a system should be created whereby those who are found to be innocent have their images deleted automatically, as is the case with DNA and fingerprints.”

Meanwhile, Biometrics Commissioner Professor Paul Wiles said of the Home Office’s recommendation that people can ask to be removed: “This limited application process does add a degree of proportionality, but whether this would be enough in the face of any future [legal] challenge may depend on how many presumed innocent people apply successfully to have their images deleted before the minimum six-year review period.”

He continued: “The review leaves the governance and decision making of this new process entirely in the hands of the police, but future public confidence might require a greater degree of independent oversight, transparency and assurance than is proposed.”

Extensive

Police forces hoarding archives of photos is even more troubling when you consider the plod’s dalliances with automated facial recognition (AFR) technology. Of those 19 million custody images held in the UK, 16.6 million are held within the Police National Database’s facial recognition gallery. According to the UK’s previous Biometrics Commissioner, “hundreds of thousands” of these images belong to “individuals who have never been charged with, let alone convicted of, an offence.” And yet their faces are being stored in a system designed, in theory, to pick them out of crowds and potentially link them to wrongdoing.

The Register has reported at length on officers using AFR in the UK, including two public trials of real-time recognition systems, which snapped away at Download Festival in 2015 and the Notting Hill Carnival in 2016 – and both of which failed to detect any criminals.

In his annual report for 2015 as the then-Biometrics Commissioner, Alastair MacGregor QC wrote that “a searchable police database of facial images arguably represents a much greater threat to individual privacy than searchable databases of DNA profiles or fingerprints.” He noted that “this new database is subject to none of the governance controls or other protections which apply as regards the DNA and fingerprint databases by virtue of [the Protection of Freedoms Act 2012]” … and “has been put into operation without public or Parliamentary consultation or debate.”

On Friday, Big Brother Watch’s Samson said: “Whilst [this week’s] review addresses custody photos, it makes no recommendations regarding the revelation that three police forces are using facial recognition systems to create biometric records of people’s faces. We trust that the government’s other long-overdue strategy into biometrics will deal with this when it is eventually published.”

Last year, the Home Office was warned that its delays in addressing police use of AFR technology with innocents’ custody photographs risked inviting a legal challenge. A month following this warning, The Register revealed that the department had been seeking secret out-of-court settlements with claimants who had alleged that police had unlawfully retained their biometric information.

This week’s lightweight guidance now fills what was a vacuum of legislation regulating police use of AFR, with the promised Biometrics Strategy now four years late. The Register understands that a draft of the strategy has been completed but was judged unsatisfactory and has yet to be published, with the Home Office telling us it would be released “in due course” every time we have enquired over the past two years.

This has not stalled development of the Home Office’s £640m (US$797m) Biometrics Programme, which “will provide a single platform across the Home Office and wider government for biometrics, focusing on fingerprints, DNA and facial recognition services.” According to the eighth annual report of the DNA Ethics Group [PDF], the Home Office has continued to work on its Biometrics Programme and Biometrics Strategy throughout 2016.

Home Office data regarding its major projects portfolio from September 2015 stressed the need for the Biometrics Strategy, explaining that “mechanisms will be required to ensure that the ethical and legal issues are considered in relation to the storage of the data and the strategy will set out clear regulatory regimens to ensure the legal and ethical legitimacy of data sharing and linkage.”

Finally, this week’s custody image review made one other recommendation: “Future local or national IT systems, that will be used for the storage of images, should be designed to integrate relevant data – including court and CPS data – to facilitate a more efficient method of search, retrieval and deletion of images and aim to link and integrate intelligence, crime and prosecution data (subject to relevant legal considerations), and that this should be regularly reviewed. This will help facilitate a more efficient method of search, retrieval and deletion of images.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/25/custody_images_review/

Monday review – the hot 30 stories of the week

Get yourself up to date with everything we’ve written in the last seven days – it’s weekly roundup time.

Monday 20 February 2017

Tuesday 21 February 2017

Wednesday 22 February 2017

Thursday 23 February 2017

Friday 24 February 2017

News, straight to your inbox

Would you like to keep up with all the stories we write? Why not sign up for our daily newsletter to make sure you don’t miss anything? You can easily unsubscribe if you decide you no longer want it.

Image of days of week courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SN5Cazk3SSQ/

IT admin was authorized to trash employer’s network, he says

Meet Michael Thomas, real-life BOFH.

On Dec. 5, 2011, he quit his job as IT admin for a startup called ClickMotive.

This was no ordinary resignation. This was the mother of all IT admin resignations: the type of blow-it-all-to-smithereens resignation that some – many? Please, Lord, let it not be all – sysadmins dream about.

On the day he called it quits, he left a few things: his resignation letter, his keys, his laptop, his entry badge, his offer to stay on as a consultant, and a trail of tears for whoever came in on Monday to find that Thomas had deleted 615 of ClickMotive’s backup files, the pager notification system for network problems, half a dozen wiki pages, and employees’ access to the VPN. According to court documents, he also “tinkered” with email servers at the Texas company, which sets up and runs car dealership sites.

Thomas also cut off contact with the company’s customers – large auto companies and dealerships – by snipping the names of company employees and executives from email distribution groups created for customer support.

In June 2016, Thomas was convicted of a single federal count of violating the Computer Fraud and Abuse Act (CFAA) in the Eastern District of Texas. He was sentenced to the four months he already spent in pre-trial detention, three years supervised release, and to pay restitution of $131,391.21.

After a three-day trial, a jury found Thomas guilty of knowingly transmitting programs, information, codes, or commands that intentionally caused damage to his employer’s computer system, of lacking authorization to cause that damage, and of thereby causing losses to the employer in excess of $5,000.

But hold on a minute, said his lawyer, well-known hacker defense attorney Tor Ekeland: Thomas’s role as a sysadmin gave him all the authorization he needed to routinely delete the sort of files he deleted on his last days at ClickMotive.

Now, Thomas is appealing (PDF) his conviction in the Fifth Circuit Court of Appeals in New Orleans, on those grounds.

His defense: sure, he did damage to ClickMotive’s systems. Intentionally. But it certainly wasn’t “without authorization.”

In fact, every sysadmin is authorized to access all the systems he accessed, and they’re all authorized to do the things he did: delete backups, edit notification systems, and tweak email systems. That’s part of their job, his argument goes.

Another part of his appeal that should have managers jumping on the phone with their lawyers and digging up their policy manuals: there was nothing in ClickMotive’s policies that said Thomas couldn’t do exactly what he did.

From the appeal, filed on Tuesday:

Michael Thomas had unlimited authorization to access, manage, and use ClickMotive’s computer systems, and was given broad discretion in his exercise of that authority.

Thomas was handling all of the routine duties of a sysadmin – deleting data, managing user privileges and more – because his friend, colleague, and the only other employee working in IT administration had recently been fired from ClickMotive. If carrying out those parts of the job constitutes “damage,” isn’t every sysadmin liable for getting sued under the CFAA?

Yes, Ekeland has argued: Thomas’s guilty verdict is “dangerous for anyone working in the IT industry.” It should worry any IT admin who’s ever hit the “delete” key in the course of their duties, he said:

If you get in a dispute with your employer, and you delete something even in the routine course of your work, you can be charged with a felony.

From the appeal:

The central issue in this case is whether Thomas acted ‘without authorization’ if he performed these same actions in a manner that was contrary to the company’s interests.

During his trial, Thomas’s defense team explained how over the weekend during which he did the damage and quit, he had been in the office to deal with a denial-of-service attack on ClickMotive’s site and to repair a cascading power outage problem.

Those 615 backup files he deleted? They were all replicated at other servers on the network.

Ekeland told Wired that ClickMotive’s treatment of Thomas has been pretty shabby, considering:

They’ve destroyed this guy’s life over the fact that he worked on a Sunday to keep the company going, and then deleted some files on the way out to say f*** you to his boss.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DTuzmqGuqMM/