STE WILLIAMS

Foot Lose: Perv’s upskirt scheme draws too much heat

A Wisconsin man has surrendered to police after a plan to take ‘upskirt’ photos of women blew up in his face… er, ankle.

The unnamed creep walked into the Madison West District police station earlier this week complaining of a foot injury and seeking to turn himself in. It turns out the bloke’s bad wheel was the result of an unsuccessful attempt to shoot covert clips of local women.

Apparently, the gimpy perp had bought a shoe-mounted camera with the intent of taking upskirt videos, only to have the setup catch fire on top of his laces at around 5:00 PM on Tuesday.

In an amazing show of restraint, police didn’t publish the name of the moron in question, who failed so hard he couldn’t even be charged with a crime: thankfully, the rig’s battery ignited before it could be put to use.


“The subject reported he had purchased a shoe camera that he intended to use to take ‘upskirt’ videos of females, but the camera battery had exploded prior to obtaining any video, injuring the subject’s foot,” the police blotter reads.

“The subject was counseled on his actions and released from the scene as no illicit video had been taken.”

It has been a bad week in general for scummy men. In addition to our shoe-burning subject in Wisconsin, there was the story of Troy George Skinner, the New Zealand bloke who managed to catch a neckful of lead from the mother of a teenage girl he had been cyberstalking. Unlike Madison’s toe-scorcher, Skinner is going to face criminal charges for his actions. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/06/29/creep_blows_up_foot/

Equifax Software Manager Charged with Insider Trading

Sudhakar Reddy Bonthu used insider information about the company’s 2017 data breach to profit in stock transaction.

Fallout from the epic Equifax data breach just keeps coming: A second employee of the credit-monitoring firm has been charged with insider trading in connection with the attack.

Former Equifax software development manager Sudhakar Reddy Bonthu, 44, was arraigned in US District Court in Atlanta this week. Bonthu allegedly used his inside knowledge of the 2017 data breach, before it was made public, to purchase so-called “put” stock options that earned him more than $75,000 in profit when the announcement was made and Equifax’s stock dropped.

Equifax’s former chief information officer, Jun Ying, earlier this year pleaded not guilty to charges of insider trading related to the data breach.

Read more here

 

Black Hat USA returns to Las Vegas with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/equifax-software-manager-charged-with-insider-trading/d/d-id/1332188?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Natural Language Processing Fights Social Engineers

Instead of trying to detect social engineering attacks based on a subject line or URL, a new tool conducts semantic analysis of text to determine malicious intent.

Social engineering is a common problem with few solutions. Now, two researchers are trying to bring down attackers’ success rate with a new tool designed to leverage natural language processing (NLP) to detect questions and commands and determine whether they are malicious.

Ian Harris, professor at the University of California, Irvine, and Marcel Carlsson, principal consultant at Lootcore, decided to combat social engineering attacks after many years of friendship and discussions about how effective, yet poorly researched, such attacks were.

“The reason why social engineering has always been an interest … it’s sort of the weakest link in any infosec conflict,” Carlsson says. “Humans are nice people. They’ll usually help you. You can, of course, exploit that or manipulate them into giving you information.”

Aside from the detection of email phishing, little progress has been made in stopping the rapid rise and success of social engineering attacks. And it’s getting harder for defenders: Adversaries are increasingly better at learning their targets, sending emails that seem legitimate, and integrating outside technologies to make their campaigns more powerful.

Many companies believe new technology is the answer, Carlsson says, and there’s often a disproportionate focus on preventing attacks but not detecting and responding to them. Much of the research on social engineering detection has relied on analysis of metadata related to email as an attack vector, including header information and embedded links.

Carlsson and Harris decided to take a different approach and focus on the natural language text within messages. Instead of trying to detect social engineering attacks based on a subject line or URL, they built a tool to conduct semantic analysis of text to determine malicious intent.

Harris, whose research has also focused on hardware design and testing, was using NLP to design hardware components when he recognized its applicability to social engineering defense. “It occurred to me after a while that the best way to understand social engineering attacks was to understand the sentences,” he explains.

By focusing on the text itself, this tactic can be used to detect social engineering attacks on non-email attack vectors, including texting applications and chat platforms. With a speech-to-text tool, it also can be used to scan for attacks conducted over the phone or in person.

How It Works
For a social engineering attack to succeed, the actor has to either ask a question whose answer is private or command a target to perform an illicit operation. The researchers’ approach detects questions or commands in an email. It flags questions requesting private data and private commands requesting performance of a secure operation.

Their tool doesn’t need to know the answer to the question in order to classify it as private, Harris explains. It evaluates statements by using the main verb and object of that verb to summarize their meaning. For example, the command “Send money” would be summed up in the verb-object pair “send, money.”

Verb-object pairs are compared with a blacklist of verb-object pairs known to describe forbidden actions. Harris and Carlsson scoured randomly selected phishing emails to identify private questions and commands, taking into consideration synonyms of each word so attacks were not incorrectly classified.
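As a rough illustration of the verb-object matching described above, consider the following toy sketch. It is not the researchers’ tool: a real system would use proper NLP parsing (e.g., a dependency parser) to extract verb-object pairs, and the blacklist and synonym sets here are invented purely for illustration.

```python
# Toy sketch of verb-object blacklist matching, in the spirit of the
# approach described above. The word-list "parsing" below is a naive
# stand-in for real NLP; the blacklist and synonyms are invented examples.

# Synonym sets let "send money" and "wire cash" map to the same pair.
VERB_SYNONYMS = {"send": {"send", "wire", "transfer"}, "reset": {"reset", "change"}}
OBJECT_SYNONYMS = {"money": {"money", "cash", "funds"}, "password": {"password", "credentials"}}

# Blacklisted (verb, object) pairs describing forbidden requests.
BLACKLIST = {("send", "money"), ("reset", "password")}

def normalize(word, synonyms):
    """Map a word to its canonical form if it appears in any synonym set."""
    for canonical, variants in synonyms.items():
        if word in variants:
            return canonical
    return word

def extract_pairs(sentence):
    """Naively pair each known verb with the next known object after it."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    pairs = []
    for i, w in enumerate(words):
        verb = normalize(w, VERB_SYNONYMS)
        if verb in VERB_SYNONYMS:
            for later in words[i + 1:]:
                obj = normalize(later, OBJECT_SYNONYMS)
                if obj in OBJECT_SYNONYMS:
                    pairs.append((verb, obj))
                    break
    return pairs

def is_malicious(sentence):
    """Flag a sentence if any of its verb-object pairs hits the blacklist."""
    return any(pair in BLACKLIST for pair in extract_pairs(sentence))
```

In the real system, the parsing and synonym handling would come from NLP machinery rather than word lists, but the shape of the check – extract verb-object pairs, normalize them, compare against a blacklist – is the same.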

“Part of the difficulty of publishing this type of work is getting example attacks,” says Harris, explaining why the pair chose to use phishing emails to inform the blacklist. They have tested their approach with more than 187,000 phishing and non-phishing emails.

Going forward, the team plans to bring their desktop tool to both email and chat clients to scan for social engineering attacks. They also hope to expand their technique to improve on detection for highly individualized attacks, Carlsson adds.

“Phishing emails are generally scattershot – you’ve gotten these, they’re generic for everybody,” he explains. “The really personalized and painful attacks are the ones where someone is talking on the phone and they know something about you, so they adjust according to the conversation.”

The duo will present their approach to detecting social engineering attacks, and release the tool so attendees can test it, at Black Hat 2018 in a panel entitled “Catch me, Yes we can! Pwning Social Engineers Using Natural Language Processing Techniques in Real-Time.”


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/natural-language-processing-fights-social-engineers/d/d-id/1332189?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Linux distro hacked on GitHub, “all code considered compromised”

Data breaches are always bad news, and this one is peculiarly bad.

Gentoo, a popular distribution of Linux, has had its GitHub repository hacked.

Hacked, as in “totally pwned”, taken over, and modified; so far, no one seems to be sure quite how or why.

That’s the bad news.

Fortunately (we like to find silver linings here at Naked Security):

  • The Gentoo team didn’t beat around the bush, and quickly published an unequivocal statement about the breach.
  • The Gentoo GitHub repository is only a secondary copy of the main Gentoo source code.
  • The main Gentoo repository is intact.
  • All changes in the main Gentoo repository are digitally signed and can therefore be verified.
  • As far as we know, the main Gentoo signing key is safe, so the digital signatures are reliable.

Like Drupal before it, the Gentoo team has started by assuming the worst, and figuring out how to make good from there.

That way, if things turn out to be better in practice than in theory, you’re better off, too.

Here’s what they said, less than an hour after they spotted the compromise:

[On] 28 June [2018] at approximately 20:20 UTC unknown individuals have gained control of the Github Gentoo organization, and modified the content of repositories as well as pages there. We are still working to determine the exact extent and to regain control of the organization and its repositories.

All Gentoo code hosted on github should for the moment be considered compromised. This does NOT affect any code hosted on the Gentoo infrastructure. Since the master Gentoo ebuild repository is hosted on our own infrastructure and since Github is only a mirror for it, you are fine as long as you are using rsync or webrsync from gentoo.org.

Also, the gentoo-mirror repositories including metadata are hosted under a separate Github organization and likely not affected as well.

All Gentoo commits are signed, and you should verify the integrity of the signatures when using git.

More updates will follow.

If you aren’t a Linux user, you might be thinking of letting out a sly snigger round about now – you’re probably tired of hearing from the small minority of ultrafans who not only love Linux but also can’t bear to hear anything negative about any part of the Linux ecosystem.

Please don’t gloat: this isn’t about Linux, or Windows, or macOS, or any other operating system’s attitude to cybersecurity.

This breach is a reminder of the difficulty of keeping everything secure in a cloud-centric world, where you have multiple people who need the keys to the castle, multiple repositories to deal with traffic, and an apparently ever-increasing number of attackers with an enormous range of motivations for breaking into and messing with your digital stuff.

(We don’t yet know the motivation of the attackers in this case – a grudge against Linux? a grudge against Gentoo? a grudge against Microsoft for acquiring GitHub? an attempt to spread malware? – but the reasons aren’t immediately important.)

What to do?

Gentoo is a “build it yourself” sort of Linux distribution, where instead of downloading a set of ready-to-run files as you would with, say, Ubuntu – or macOS, or Windows, for that matter – you download the source code and compile it yourself.

The good news, of course, is that if you built it once, you can build it again – so if you fetched anything from the GitHub-hosted version of Gentoo during the danger period, get rid of it and fetch it again, using the master repository instead.

At worst, you may need or want to rebuild from scratch, bootstrapping your system from the master repository so that you’ve got a fresh start.

Then, keep your eye out for Gentoo’s official updates on what the crooks changed, and how that might have affected you during the thankfully very short window that this breach went unnoticed.

By the way, you can learn from Gentoo, even though it’s in a bit of a crisis right now:

  • Divide and conquer. The master repository is safe, so the crooks didn’t get the crown jewels.
  • Sign everything. Give your users a way to spot imposter files.
  • Tell the plain truth. Say what you know, and be clear what you don’t.
  • Respond quickly. Don’t find excuses to keep your users in the dark.

Happy recompiling 🙂


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mqqeIbNLg50/

Facebook and Google accused of manipulating us with “dark patterns”

By now, most of us have seen privacy notifications from popular web sites and services. These pop-ups appeared around the time that the General Data Protection Regulation (GDPR) went into effect, and they are intended to keep the service providers compliant with the rules of GDPR. The regulation requires that companies using your data are transparent about what they do with it and get your consent for each of these uses.

Facebook, Google and Microsoft are three tech companies that have been showing their users these pop-ups to ensure that they’re on the right side of European law. Now, privacy advocates have analysed these pop-ups and have reason to believe that the tech trio are playing subtle psychological tricks on users. They worry that these tech giants are guilty of using ‘dark patterns’ – design and language techniques that make it more likely that users will give up their privacy.

In a report called Deceived By Design, the Norwegian Consumer Council (Forbrukerrådet) calls out Facebook and Google for presenting their GDPR privacy options in manipulative ways that encourage users to give up their privacy. Microsoft is also guilty to a degree, although it performs better than the other two, the report said. Forbrukerrådet also made an accompanying video:

Tech companies use so-called dark patterns to do everything from making it difficult to close your account through to tricking you into clicking online ads (for examples, check out darkpatterns.org‘s Hall of Shame).

In the case of GDPR privacy notifications, Facebook and Google used a combination of aggressive language and inappropriate default selections to keep users feeding them personal data, the report alleges.

A collection of privacy advocacy groups joined Forbrukerrådet in writing to the Chair of the European Data Protection Board, the EU body in charge of the application of GDPR, to bring the report to its attention. Privacy International, BEUC (an umbrella group of 43 European consumer organizations), ANEC, a group promoting European consumer rights in standardization, and Consumers International are all worried that tech companies are making intentional design choices to make users feel in control of their privacy while using psychological tricks to do the opposite. From the report:

When dark patterns are employed, agency is taken away from users by nudging them toward making certain choices. In our opinion, this means that the idea of giving consumers better control of their personal data is circumvented.

The report focuses on one of the key principles of GDPR, known as data protection by design and default. This means that a service is configured to protect privacy and transparency. It makes this protection the default option rather than something that the user must work to enable. Their privacy must be protected even if they don’t opt out of data collection options. As an example, the report states that the most privacy-friendly option boxes should be those that are ticked by default when a user is choosing their privacy settings.

Subverting data protection by default

Facebook’s GDPR pop-up failed the data protection by default test, according to the report. It forced users to select a data management settings option to turn off ads based on data from third parties, whereas simply hitting ‘accept and continue’ automatically turned that advertising delivery method on.

Facebook was equally flawed in its choices around facial recognition, which it has recently introduced in Europe after a six-year hiatus due to privacy concerns. It turns on this technology by default unless users actively turn it off, making them go through four more clicks than those that just leave it as-is.

The report had specific comments about this practice of making users jump through hoops to select the most privacy-friendly option:

If the aim is to lead users in a certain direction, making the process toward the alternatives a long and arduous process can be an effective dark pattern.

Google fared slightly better here. While it forced users to access a privacy dashboard to manage their ad personalization settings, it turned off options to store location, history, device information and voice activity by default, the report said.

The investigators also criticized Facebook for wording that strongly nudged users in a certain direction. If they selected ‘Manage Data Settings’ rather than simply leaving facial recognition on, Facebook’s messaging about the positive aspects of the technology – and the negative implications of turning it off – became more aggressive.

“If you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you,” its GDPR pop-up messaging said. “If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged,” it goes on.

The report argues that these messages imply that turning the technology off is somehow unethical. The message also contains no information on how else Facebook would use facial recognition technology.

Microsoft drew less heat from the investigators, who tabulated each tech provider’s transgressions in the report.

We would have liked to see Apple included, as the company has long differentiated itself on privacy, pointing out that it sells devices, not users’ data.

If nothing else, this report shows that reading and thinking about privacy options is important. Paying attention to these GDPR notifications and taking time to think about what they’re asking is worthwhile, even if it means taking a few minutes before accessing your favourite service. If you already shrugged and clicked ‘accept and continue’, there’s still an option to go in and change your privacy settings later. Just watch for those dark patterns: forewarned is forearmed.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NmIyAp_g474/

Adidas US breach may have exposed millions of customers’ personal info

Adidas warned late on Thursday that hackers may have lifted customer data from its US website.

The sportswear maker said personal data, including contact information (addresses and email addresses), and encrypted passwords may have fallen into the hands of criminals, but was able to reassure customers that neither financial nor fitness information was at risk.

“According to the preliminary investigation, the limited data includes contact information, usernames and encrypted passwords,” it said. “adidas has no reason to believe that any credit card or fitness information of those consumers was impacted.”

The company has notified law enforcement and brought in experts to help investigate the breach, which Adidas said it became aware of on 26 June after claims by “an unauthorized party”, implying the breach was only detected once hackers attempted to sell the data.

Adidas said it is alerting affected customers.

This leaves an as-yet-unspecified number of customers at heightened risk of unusually convincing phishing emails. Extra vigilance and a password change are advisable. Only consumers who made purchases through adidas.com/US are thought to be affected.

Adidas is yet to respond to a request from El Reg to comment on the root cause of the breach or the number of records potentially exposed, which reportedly run into the millions. We’ll update this story as and when we learn more. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/06/29/adidas_breach/

The 6 Worst Insider Attacks of 2018

Stalkers, fraudsters, saboteurs, and all nature of malicious insiders have put the hurt on some very high-profile employers.

Image Source: Adobe Stock (Andrea Danti)


If recent statistics are any indication, enterprise security teams might be greatly underestimating the risk that insider threats pose to their organizations. One study, by Crowd Research Partners, shows just 3% of executives pegged the potential cost of an insider threat at more than $2 million. Yet, according to Ponemon Institute, the average cost of insider threats per year for an organization is more than $8 million.

And those are just the quantifiable risks. When insider attackers hit hardest — particularly malicious insiders who are looking to commit fraud or intentionally do bad — the ramifications can be much more widespread than the typical data breach.

We’re just six months into the year, and already we’ve seen some particularly damaging malicious insider events illustrate this truth. Here are some of the highest-profile incidents, all of which can act as a warning to enterprises to get serious about their monitoring and controls around employee activity.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/the-6-worst-insider-attacks-of-2018---so-far/d/d-id/1332183?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Adidas US Website Hit by Data Breach

The athletic apparel firm was hacked, and data on potentially ‘millions’ of customers is now at risk.

Adidas is the latest retailer to get hit with a data breach: the athletic apparel firm said it’s alerting some customers that their data may have been exposed due to a newly discovered hack of its US website.

“On June 26, Adidas became aware that an unauthorized party claims to have acquired limited data associated with certain Adidas consumers,” the company said in a statement on its website. Customer contact information, usernames, and encrypted passwords were exposed in the data breach.

According to some press reports, an Adidas spokesperson said the attack could have affected “millions” of customers. Adidas did not elaborate on the number of victims in its statement.

“Adidas has no reason to believe that any credit card or fitness information of those consumers was impacted,” the company said, and it’s currently working with security firms and law enforcement in an investigation into the attack. 

Read more here. 


Article source: https://www.darkreading.com/cloud/adidas-us-website-hit-by-data-breach/d/d-id/1332186?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Sharing Intelligence Makes Everyone Safer

Security teams must expand strategies to go beyond simply identifying details tied to a specific threat to include context and information about attack methodologies.

Cybersecurity is sometimes viewed as being inherently reactive. But given the security issues we face today, security professionals must push beyond merely blocking an attack before a network breach. Cybersecurity teams must also have the ability to disrupt an attack from achieving its goal. This might sound similar to blocking an attack, but there’s more to it.

This foresight can be acquired through knowledge of the kill chain, which refers to models that map the stages of attacks from initial system probing and network penetration to the final exfiltration of valuable data. Some people in our industry describe this process as “cyber threat intelligence.”

The Strategy Behind Cyber Threat Intelligence
Such a strategy goes beyond signatures or details tied to a specific threat. It could also include context and information about attack methodologies, tools utilized to obscure an infiltration, methods that hide an attack within network traffic, and tactics that evade detection.

It is also important to understand the different kinds of data under threat, the malware in circulation, and, more importantly, how an attack communicates with its controller. These elements of foresight enable the disruption of an attack at any of the points mentioned above.

But threat intelligence is also about being qualitative, at least to the degree that it can be leveraged to respond to an attack, whether that means a forensic analysis for full recovery or the attribution and prosecution of the people responsible for the attack.

Sources of Cyber Threat Intelligence
Information sharing is a critical aspect of any security strategy. It’s essential to compare the network or device you are trying to protect against the set of currently active threats; this allows you to assign the right resources and countermeasures to different attacks.

To leverage intelligence, start by accessing a variety of threat intelligence sources, some of which might include:

  • Actionable insights from manufacturers: These arrive as a part of regular security updates or, more accurately, as a signature with the ability to detect a known threat.
  • Intelligence from local systems and devices: When you establish a baseline for normal network behavior, it becomes easier to assess when something is out of whack. Spikes in data, an unauthorized device attempting contact with other devices, unknown applications rummaging the network, or data being stored or collected in an unlikely location are all forms of local intelligence. This can be used to identify an attack and even triangulate on compromised devices.
  • Intelligence from distributed systems and devices: As is the case with local intelligence, similar intelligence can be collected from other areas of the network. As networks expand, they create new infiltration opportunities for attackers. Also, different network environments (virtual or public cloud, for example) often run on separate, isolated networking and security tools. In those cases, a centralized process for collecting and correlating these different intelligence threads becomes necessary.
  • Intelligence from threat feeds: Subscribing to public or commercial threat feeds helps organizations enhance their data collection, supplementing intelligence from their own environment with intelligence gathered from a regional or global footprint in real time. These feeds generally come in two formats:
    • Raw feeds: Security devices generally cannot consume raw data because it lacks context. This intelligence is better used after post-processing by customized tools or local security teams, which converts the raw data into a more practical format. The advantages of raw feeds are that they’re much closer to real time and are often cheaper to subscribe to.
    • Custom feeds: Information processed with context is easily consumed by security tools; an example could be specific information delivered using tailored indicators of compromise. Vendors may customize the data for consumption by an identified set of security devices. At the same time, organizations also need to ensure that their existing tools support common protocols for reading and utilizing the data.
  • Intelligence between industry peers: Information sharing has become an advantageous norm for many. Several groups, such as ISACs (information sharing and analysis centers) or ISAOs (information sharing and analysis organizations), share threat intelligence within the same market sphere, geographic region, or vertical industry. They are especially useful for identifying threats or trends affecting your peers with the potential to impact your own organization.
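To make the raw-feed format concrete, here is a minimal sketch of consuming a raw indicator feed and matching it against log lines. The one-indicator-per-line format and all data below are invented for illustration; real feeds vary widely, and structured custom feeds typically use formats such as STIX.

```python
# Minimal sketch of turning a raw indicator feed into something actionable.
# Assumes a simple one-indicator-per-line text feed -- an invented format
# standing in for the many real feed formats in use.

def parse_raw_feed(feed_text):
    """Extract indicators, skipping comments and blank lines.

    This is the 'post-processing' step the text describes: raw feeds
    carry no context, so local tooling must shape them for consumption.
    """
    indicators = set()
    for line in feed_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            indicators.add(line)
    return indicators

def match_logs(log_lines, indicators):
    """Return log lines that mention any known-bad indicator."""
    return [line for line in log_lines if any(ioc in line for ioc in indicators)]

# Invented sample data for illustration.
feed = """
# sample raw feed -- invented indicators
198.51.100.23
malicious.example.net
"""

logs = [
    "ACCEPT src=10.0.0.5 dst=198.51.100.23 dport=443",
    "DNS query for www.example.org",
]

hits = match_logs(logs, parse_raw_feed(feed))
```

A custom feed would carry the missing context (indicator type, confidence, expiry) so that security tools could consume it directly; the post-processing step sketched above is exactly the work that raw feeds push onto the consumer.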

Intelligence in the corporate ecosystem is important, but the opportunity to reduce the number of threats, potentially exposing everyone to less risk, is more valuable than the advantage received from holding on to this information. Sharing is an important aspect of any security strategy. Then again, so is access to actionable intelligence in real time.

Whatever the case, just remember that sharing your own threat intelligence serves to make everyone safer.

Related Content:

Learn from the industry’s most knowledgeable CISOs and IT security experts in a setting that is conducive to interaction and conversation. Register before July 27 and save $700! Click for more info

Sanjay Vidyadharan heads Marlabs’ innovations team, which is responsible for next-gen digital technology services and digital security. Sanjay’s team plays a key role in innovating new technology platforms and intellectual properties. Under his leadership, Marlabs has … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/why-sharing-intelligence-makes-everyone-safer/a/d-id/1332127?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

UK.gov’s long-awaited, lightweight biometrics strategy fails to impress

Analysis The UK government’s lightweight biometrics strategy has failed to make any serious policy recommendations – instead it reiterates a series of already announced promises and promises further consultation on governance.

The long-awaited strategy – first promised in 2012 – landed at 4pm on Thursday with a rather light thud, running to just 27 pages. Of these, three are cover and contents, one is a ministerial foreword, two are a glossary, and seven an annex.

This leaves a whopping 14 pages to detail the Home Office’s approach to the increased use of biometric information in everyday public services – and unsurprisingly, the pamphlet falls short of the mark.

“The government’s biometrics strategy is a major disappointment,” said Big Brother Watch director Silkie Carlo. “After five years of waiting, it reads like a late piece of homework with a remarkable lack of any strategy.”

Norman Lamb, chairman of Parliament’s Commons Science and Technology Select Committee, agreed, saying that a 27-page document “simply does not do justice to the critical issues involved,” and lamenting the fact it doesn’t say what actions the government will take or, “just as importantly, what outcomes it wants to avoid.”

Perhaps in anticipation of such criticisms, the Home Office noted that the strategy “does not seek to address all the current or future uses of biometrics.”

However, even with this proviso, the recommendations leave much to be desired, acting more like a scene-setter to the existing use of biometrics by the Home Office that pulls together previous announcements, while offering few concrete overarching policy objectives.

‘Kicking the can down the road’

Among the bigger-picture promises on oversight are an already announced board to provide the government with policy recommendations on the use of facial biometrics, and a plan to seek opinions on the governance of biometrics through a 12-month consultation.

But Lamb said this exercise “smacks of continuing to kick the can down the road,” adding that it was “simply not good enough” to wait another year for a proper strategy to be produced.

Elsewhere in this section – which is somewhat optimistically titled “maintaining public trust” – the government promised to carry out legally required data protection impact assessments before it uses a new piece of biometric technology or applies an existing one to a new problem.

The strategy also fails to do more than make passing references to some of the most controversial and widely debated aspects of the Home Office’s use of biometrics.

For instance, on the continued retention of photos of people held in police custody who haven’t been convicted, despite this practice being ruled unlawful, the government simply reiterated that its computer systems do not support the automatic removal of images, and that new systems should help.

“When the Law Enforcement Data Service, which will replace the Police National Computer (PNC) and the PND, is in place it will enable more efficient review and where appropriate, automatic deletion of custody images by linking them to conviction status, more closely replicating the system for DNA and fingerprints,” it said.

Automated facial recognition? Yep, we’re still trialling it

It’s a similar story on the police’s use of automated facial recognition – something that has stirred up public debate and is the subject of two legal challenges backed by Liberty and Big Brother Watch.

Although the Home Office pledged to work with regulators to update codes of practice and “ensure that standards are in place to regulate the use of AFR [automatic facial recognition] in identification before it is widely adopted for mainstream law enforcement purposes,” it failed to offer a detailed explanation of how these standards would be developed or what they might include.

And, in the meantime, the police will continue with their trials of AFR, which have been criticised for a lack of transparency and an apparent ad hoc nature. As Carlo noted, the capital’s Met Police was out using the kit in Stratford, London, on Thursday.

The Home Office also outlined its own plan to “run proof of concept trials to develop this work, including at the UK border,” and mooted allowing forces access to facial image collections at custody suites and on mobile devices.

It added it was considering sharing and matching facial images held by the Home Office and those of other government departments, but again offered precious little extra detail.

Other plans included an increased use of biometrics at ports, extending access to fingerprints within the criminal justice system – including a trial to allow prisons to cross-reference local and national databases – and improving automation of fingerprint enrolment at visa centres.

The overall effect is of a shopping list of ways the government could use biometrics, combined with earnest but thin references to the importance of ethics and oversight – at odds with the detailed and considered reports drawn up by smaller organisations in less time.

Summing up the mood, Carlo said: “While Big Brother Watch and others are doing serious work to analyse the rights impact of the growing use of biometrics, the Home Office appears to lack either the will or competence to take the issues seriously.

“For a government that is building some of the biggest biometric databases in the world, this is alarming.”

‘Disappointing and short-sighted’

The biometrics commissioner, Paul Wiles, issued his response to the strategy late last night, complaining that the document lays out the current uses of biometric information and says little about future uses.

“It is disappointing that the Home Office document is not forward looking as one would expect from a strategy,” he said, pointing out that it falls short of proposing legislation to set rules on the use and oversight of new biometrics.

This failure to set out a definitive picture of the future landscape is “short sighted at best”, Wiles said.

He also noted that the proposed oversight and advisory board is described as focusing only on police use of facial images.

“What is actually required is a governance framework that will cover all future biometrics rather than a series of ad hoc responses to problems as they emerge,” Wiles said.

“I hope that the Home Office will re-consider and clearly extend the advisory board’s remit to properly consider all future biometrics and will name the board accordingly.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/06/29/uk_biometrics_strategy/