London cop illegally used police database to monitor investigation into himself

A serving Metropolitan police officer who illegally accessed a police database to monitor a criminal investigation into his own conduct has pleaded guilty to crimes under the UK’s Computer Misuse Act.

Sergeant Okechukwu Efobi, of Byron Road, Wealdstone, Harrow, was ordered to complete 150 hours of community service and pay a total of £540, comprising a £90 victim surcharge tax and £450 of prosecution costs.

Efobi, who remains employed by the Met and is currently on restricted duty, had been accessing a police database to view details of suspects in an ongoing criminal investigation.

Between November 2017 and October 2018, at the force’s high security Empress State Building HQ in southwest London, Efobi trawled the unidentified database, sending himself documents from it and viewing details of other suspects in criminal investigations.

He pleaded guilty to three charges under sections 1(1) and (3) of the Computer Misuse Act 1990 last week at Westminster Magistrates’ Court. An internal misconduct review into Efobi’s actions is currently under way, the Met told The Register.

Police misuse of access to internal databases remains a persistent problem, quite separate from British cops’ use of Chinese-inspired mass surveillance tech, whose legality has been repeatedly questioned by the public and the authorities alike.

Back in 2015, the Met recorded a tripling of computer misuse allegations over the year, with police employees alleged to have abused their privileges 173 times. That picture was mirrored more widely across the country in 2017, when it was found that police forces had investigated a total of 779 cases of potential data misuse within their own ranks. Even the police trade union confessed that same year that their members were “persistently” committing data breaches.

A few years ago HM Inspectorate of Constabulary discovered that a number of non-police organisations were merrily trawling through the Police National Computer at will. Legal agreements intended to regulate that access were vague and in many cases had been allowed to expire. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/11/met_police_sgt_pleads_guilty_computer_misuse_crimes/

Train maker’s coder goes loco, choo-choo-chooses to flee to China with top-secret code – allegedly

A software developer fled to China from America with vital train transportation system computer code, US prosecutors have alleged.

Xudong “William” Yao stole the software blueprints from his former employer, an unnamed locomotive manufacturer based in Chicago, it is claimed, flew to the Middle Kingdom, and took up a job with a Chinese biz that specializes in automotive telematics – think vehicle monitoring, tracking, and communications.

Yao was indicted by Uncle Sam in December 2017 roughly two years after he bailed out of the United States in 2015. His indictment [PDF] was unsealed by the court on Thursday this week after prosecutors agreed there was no longer a reason to keep the allegations hush-hush.

According to the indictment, Yao, 57, joined the unnamed locomotive builder in August 2014 as a software engineer and almost immediately began hoarding commercially sensitive documents. Just weeks into his employment, prosecutors say, he had already amassed a cache of 3,000 files containing trade secrets belonging to his employer, including source code for the control system software used to drive the locomotives. At the same time, he made contact with the Chinese company to negotiate a job deal.


Fast forward to February 2015 when, for reasons unrelated to this case, Yao was fired by the Chicago train firm. Later that month he made copies of the pilfered files, and attempted to find work with other businesses, the Feds claim. In July, we’re told, Yao visited China to finalize a job deal with the aforementioned car telematics provider.

In November that year, the actual movement of the stolen documents is said to have gone down: Yao, carrying nine digital copies of the train company’s control system source code among other secrets, flew out of Chicago’s O’Hare International Airport for the last time on his way to China, where he is believed to still be residing.

US prosecutors indicted Yao on nine counts of theft of trade secrets. Should he ever return to the United States and be arrested, he would be formally charged and tried.

The case is one of a number involving allegations of US-based developers and engineers fleeing to China while in possession of trade secrets. In March, a former Tesla engineer was sued for lifting trade secrets from the Musk-y auto outfit with the intent of taking them to a Chinese rival, and last year a trio of Micron engineers were charged with stealing confidential docs from the chipmaker on behalf of two China-based outfits. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/12/train_software_theft/

Oh, lovely, a bipartisan election hack alert law bill for Mitch McConnell to feed into the shredder

Two US lawmakers are pushing a bipartisan bill that would force the Department of Homeland Security (DHS) to alert the public of hacking attempts on election computer systems.

House reps Mike Waltz (R-FL) and Stephanie Murphy (D-FL) agreed to reach across the aisle to sponsor HR 3259, the Achieving Lasting Electoral Reforms on Transparency and Security (ALERTS Act).

The bill, right now resting in the hands of the House Administration Committee, would require Homeland Security officials to issue a notification to Congress, state governments, and local officials whenever they, or any other federal agency, “have credible evidence of an unauthorized intrusion into an election system and a basis to believe that such intrusion could have resulted in voter information being altered or otherwise affected.”

It seems incredible that this wouldn’t already happen, but then we remembered we’re living in America in 2019.

In addition to state and local authorities, the bill would require individual members of the public be notified when any of their personal information – such as information on voter rolls – is thought to have been pilfered by hackers.

That the bill would come from a pair of Florida reps is no accident. The state has been a pivotal battleground in presidential elections for decades and in 2016 multiple Florida counties were targeted by hackers.

“The one thing that is indisputable in the Mueller report is the fact that Russia interfered in our election. In Florida, it is unacceptable that the Russians know which systems were hacked but not the American voters who are the true victims of this intrusion,” Rep Murphy said on Wednesday.


“Just like consumers expect credit card or social media companies to disclose when their personal data has been compromised, voters also expect their government to notify them when their voting information is improperly accessed.”

Bipartisan backing is an important step for the bill, as Democrats and Republicans have been at odds over exactly how to go about implementing election security in the aftermath of the 2016 election.

“Voters in these counties still don’t know if Russians have accessed their personal data,” Rep Waltz said yesterday.

“Our elections system is perhaps the most critical of all infrastructure to our democracy – and it is constantly under attack from foreign powers who do not share our values. After we adequately harden our infrastructure, the federal government needs to have an honest conversation about deterrent strategy.”

Even with the backing of lawmakers from both sides, the bill will face an uphill battle to get to the White House and be signed into law.

The House has passed multiple bills aimed at stopping foreign hacking in elections, only to see the measures discarded in the Senate by the chamber’s majority leader Mitch McConnell (R-KY), on the reasoning that today’s election computer security defenses – despite objections from experts – are sufficient to protect future elections from foreign hackers. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/11/election_security_bill/

Software Engineer Charged for Taking Stolen Trade Secrets to China

Xudong Yao reportedly stole proprietary information from his employer and brought it to China, where he is believed to currently reside.

A newly unsealed federal indictment charges a software engineer for stealing proprietary information from his workplace and bringing it to China, the Department of Justice reports.

Xudong Yao was a software engineer for a locomotive manufacturer in suburban Chicago, where he began working in August 2014. Within two weeks of his hiring date, Yao downloaded more than 3,000 files containing proprietary and trade secret data related to the system that runs the company’s locomotives. Over the following six months he continued to download electronic files containing technical documents and software source code, the indictment says.

At the same time he was working for his Chicago employer and stealing information, Yao sought and accepted employment at a business providing automotive telematics service systems based in China. His Chicago employer terminated him in February 2015 for reasons unrelated to the theft, of which it was still unaware. Soon after his termination, Yao copied the stolen data, traveled to China, and began his new job. 

According to the indictment, Yao later flew back to Chicago in November 2015 with the stolen information, which included copies of his former employer’s control system source code and content explaining how the code worked. He later went back to China, where he is believed to reside.


 


Article source: https://www.darkreading.com/risk/software-engineer-charged-for-taking-stolen-trade-secrets-to-china/d/d-id/1335224?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Catch a Phish: Where Employee Awareness Falls Short

Advanced phishing techniques and poor user behaviors that exacerbate the threat of successful attacks.

Teaching employees how to spot malicious emails is one of many steps toward keeping phishing attacks at bay. As attackers adopt more advanced techniques, it’s imperative that teams also learn how behavior inside and outside the inbox can put a business at risk.

For the fourth annual “Beyond the Phish” report, Proofpoint researchers pulled data from nearly 130 million responses submitted to its Security Education Platform between Jan. 1, 2018, and Feb. 28, 2019. It’s tough to compare the newest 2019 results with previous years because this time employees were quizzed on a newly expanded range of more advanced cybersecurity topics.

Simulated phishing attacks are handy for evaluating a portion of users’ weaknesses but don’t fully reflect how well employees understand phishing. After all, you can’t get a sense of someone’s password hygiene, mobile device security, or confidential data security by seeing whether or not they fall for a fake phishing attack. Instead, they have to answer questions.

“We obviously do look at phishing but also take a broader look at the cybersecurity landscape and behaviors that influence cybersecurity posture,” says Gretel Egan, security awareness and training strategist at Proofpoint. “Beyond email are behaviors and risk that influence cybersecurity for an organization.”

This year, users answered 22% of questions incorrectly, on average, across 14 subjects – up from 19% in Proofpoint’s 2018 analysis. Given the expansion of assessment programs and the addition of tougher questions, Egan says the uptick isn’t a surprise. The drop in scores doesn’t indicate a lack of awareness, she says; it’s a sign some organizations are starting to challenge people.

“It points to the complexity of these topics and the nuances around phishing, around data protection, and around understanding some compliance directives related to cybersecurity,” she explains. “It’s bigger than one decision inside of an email.”

Categories with the greatest percentage of wrong answers included “identifying phishing threats” (25%), “protecting data throughout its lifecycle” (25%), “compliance-related cybersecurity directives” (24%), and “protecting mobile devices and information” (24%). Those with the most correct answers? “avoiding ransomware attacks” (11%), “passwords and account authentication” (12%), and “unintentional and malicious insider threats” (13%).

Users struggled to answer questions about mobile device encryption, securing personally identifiable information (PII), technical safeguards in blocking social engineering attacks, distinguishing public from private data, and responding to a suspected physical security breach.

There was also good news, researchers found: Employees demonstrated mastery in questions on identifying potentially risky communication channels, physical security safeguards while traveling, recognizing ransomware and malicious pop-ups, and risks linked to Bluetooth pairing.

Egan describes how users’ actions can unknowingly put their employers at risk and exacerbate the phishing threat. Some overshare information on social media, for example: A post saying “my boss is out of town this week” may seem benign but can be valuable intel for an attacker.

“We also see users struggling to understand how their actions on local devices can impact the security of corporate data and sometimes personal data,” she continues. People have been educated on how to use devices from a functional standpoint but not a secure one. For example, letting family members use corporate devices and using the same device for personal and business matters are both common behaviors that can put sensitive information at risk.

Attackers Get Sophishticated
The need to educate employees on secure behavior grows stronger as cybercriminals adopt sophisticated phishing tactics, as researchers found in INKY’s “2019 Special Phishing Report.”

“The evolution of attackers’ techniques is really quite striking,” says INKY CEO Dave Baggett.

“In terms of trends we see, we’re seeing a ton of brand forgery emails whose goal is credential harvesting,” he continues. Attackers often disguise emails as coming from legitimate Microsoft or Amazon accounts, trying to get users to enter credentials on a fake login page. With usernames and passwords, they attempt logging into banking websites or webmail accounts.

Many people are still under the impression phishing is intrinsically complicated, he adds, and it often isn’t. In terms of a brand forgery, for example, “it’s incredibly easy,” Baggett says. More advanced actors know how secure email gateways (SEGs) work and how to bypass them.

One of these subtle tactics is “hidden text,” a specific way for attackers to sneak malicious code into an email, Baggett says. Most email is now designed using HTML, which is complex and difficult to properly interpret, making it tough for software to determine what users will see. This gives attackers new opportunities to slip malicious content through security systems.

SEGs often look for specific brand names or text that could indicate an email is brand spoofing. Cybercriminals can bypass this by inserting random small, white-text letters between the letters or phrases that are visible to users. Adding gibberish text, which is invisible to security systems and end users, will let phishing emails slip past SEGs and into unsuspecting users’ inboxes.
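To make the hidden-text trick concrete, here is a minimal Python sketch – the email snippet and the style heuristics are invented for illustration, not taken from INKY’s product. A naive keyword scan over the raw HTML misses the spoofed brand name, while extracting only the text a recipient would actually see reveals it:

```python
# Illustrative only: invented HTML with white, 1px "gibberish" spans
# breaking up the brand name, as described above.
from html.parser import HTMLParser

RAW_HTML = (
    '<p>P<span style="font-size:1px;color:#ffffff">zq</span>a'
    '<span style="font-size:1px;color:#ffffff">xk</span>yPal'
    ' account alert: verify your login</p>'
)

class VisibleTextExtractor(HTMLParser):
    """Collect only text a user would see, skipping elements styled
    to be invisible (tiny font or white-on-white)."""
    def __init__(self):
        super().__init__()
        self.hidden_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        # Enter hidden mode for invisible styling, or track nesting inside it
        if self.hidden_depth or "font-size:1px" in style or "color:#ffffff" in style:
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth:
            self.chunks.append(data)

parser = VisibleTextExtractor()
parser.feed(RAW_HTML)
visible = "".join(parser.chunks)

print("paypal" in RAW_HTML.lower())  # False: raw scan misses the brand
print("paypal" in visible.lower())   # True: rendered view exposes it
print(visible)  # PayPal account alert: verify your login
```

Real SEG-evasion HTML is messier than this (inline CSS is only one of many ways to hide text), but the principle holds: filter on what renders, not on what is in the source.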

Some attackers craft emails to appear more conversational and forgo attachments or links altogether in order to bypass SEGs. Security tools relying on traditional spam filtering techniques will likely let through a casual-sounding message from an attacker impersonating a CEO or vendor.



Article source: https://www.darkreading.com/risk/how-to-catch-a-phish-where-employee-awareness-falls-short/d/d-id/1335228?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

APT Groups Make Quadruple What They Spend on Attack Tools

Some advanced persistent threat actors can spend north of $1 million on attacks, but the return on that investment can be huge.

Advanced persistent threat (APT) groups can sometimes spend a substantial amount of money mounting attacks on large, well-protected organizations. But for every dollar they spend, the payoff can be four times as much or more, a new study from Positive Technologies has found.

The security vendor analyzed the tools and tactics that 29 active APT groups are currently using in campaigns worldwide against organizations in multiple sectors, including finance, manufacturing, and government.

For the analysis, Positive Technologies looked at how much these groups have been spending, on average, to gain initial access to a target network and how much they are spending on developing the attack after they gain a foothold. The security vendor considered data both for financially motivated APT groups and separately for groups focused on cyberespionage and spying. The data was obtained from Positive Technologies’ monitoring of active threat groups and from Dark Web and publicly available sources.

The exercise shows that the starting price for a full set of tools for attacks on large financial enterprises can be as high as $55,000, while some cyber espionage campaigns can start at over $500,000. But when the attacks are successful, the payoffs can be enormous as well.

For instance, “Silence,” a well-known, financially motivated cybercrime group, last year stole the equivalent of $930,000 from Russia’s PIR Bank. To pull off the caper, the group likely spent about $66,000 upfront on tools for creating malicious email attachments, stealing from the bank’s ATMs, and spying on the bank’s employees, as well as on legitimate penetration testing tools and homegrown malware, Positive Technologies estimates.

In addition, Silence likely forked out between 15% and 50% of the loot on money mules and other services that actually withdrew cash from PIR Bank’s ATMs — still leaving the threat actor with substantially more than it spent.
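As a quick sanity check on those numbers, here is the arithmetic spelled out – a rough sketch that assumes the mule cut comes out of the full haul, which the article implies but does not state:

```python
# Back-of-the-envelope check on the Silence/PIR Bank figures above.
loot = 930_000    # USD equivalent stolen from PIR Bank
toolkit = 66_000  # estimated upfront spend on the attack toolset

for mule_share in (0.15, 0.50):  # range paid to mules / cash-out services
    net = loot - toolkit - loot * mule_share
    print(f"mule share {mule_share:.0%}: net ~${net:,.0f}, "
          f"~{net / toolkit:.1f}x the toolset cost")
# mule share 15%: net ~$724,500, ~11.0x the toolset cost
# mule share 50%: net ~$399,000, ~6.0x the toolset cost
```

Even at the expensive end of the mule range, the net haul comfortably clears the “more than quadruple” multiple quoted below.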

“The potential benefit from an attack far exceeds the cost of a starter kit,” says Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies. For groups like Silence, the profit from one attack is typically more than quadruple the cost of the attack toolset, she says.

The ROI for some APT groups can be far higher still. Positive Technologies, for instance, estimated that APT38, a profit-driven threat group with suspected backing from the North Korean government, spends more than $500,000 on carrying out attacks on financial institutions but gets over $41 million in return on average. A lot of the money that APT38 spends is on tools similar to those used by groups engaged in cyber espionage campaigns.

Building an effective system of protection against APTs can be expensive, Galloway says. For most organizations that have experienced an APT attack, the cost of restoring infrastructure in many cases is the main item of expenditure. “It can be much more than direct financial damage from an attack,” she says.

Positive Technologies’ breakdown of attack costs shows that financially motivated APT groups typically spend a relatively low amount on gaining initial access. In nine out of 10 attacks, the threat actors use spear-phishing as a way to penetrate the company’s internal network.

From $100 to Over $1 Million
Tools for creating the malicious attachments – or exploit builders – used in these email campaigns can range from as little as $300 to $2,500 for a monthly subscription to services for creating documents with malicious content. In some cases, exploit builders can cost substantially more. Positive Technologies estimates that the Cobalt Group, a group associated with attacks on numerous financial institutions, in 2017 paid $10,000 for malware it used in phishing emails to exploit a remote code execution vulnerability in Microsoft Office.

Meanwhile, APT groups focused on spying and cyber espionage rarely buy their initial access tools from Dark Web marketplaces and instead tend to use custom exploit builders. Prices for these are impossible to estimate precisely, but evidence shows such groups are willing to pay $20,000 or more for them, Positive Technologies said. For zero-day vulnerabilities, some APT groups don’t flinch at paying as much as $1 million.

Once inside a network, APT groups — both the financially motivated ones and the cyberspies — tend to rely heavily on legitimate, publicly available tools and custom products rather than Dark Web tools. The most commonly used legitimate tools are penetration-testing platforms such as Cobalt Strike and Metasploit, Galloway says. Legal utilities for administration, such as Sysinternals Suite, and remote access tools, like TeamViewer, Radmin, and AmmyAdmin, are all popular as well.

While these tools can be obtained legally via public access, APT actors are often forced to shop for them in underground forums because of how some vendors vet their buyers before selling to them. Prices for these tools can range from as little as $100 for a modified version of TeamViewer to $15,000 for a modified version of Metasploit Pro with one year of technical support.

The cost of some specialized tools that APT groups use can be relatively steep. Tools for escalating OS privileges can easily cost $10,000, while those that take advantage of zero-day vulnerabilities in Adobe products, for instance, can fetch over $130,000. Positive Technologies estimates that cyber espionage group FinSpy has spent some $1.6 million on FinFisher, a framework that allows it to spy on users through webcam and microphone, capture email and chat messages, steal sensitive data, and employ a variety of anti-analysis techniques.

These tools can be hard to defend against, which is why many APT groups are willing to spend on them. “It is almost impossible to stop APT attacks at the stage of infrastructure penetration, and it is extremely difficult to do it at the stages of consolidation and distribution in the infrastructure,” Galloway says.



Article source: https://www.darkreading.com/attacks-breaches/apt-groups-make-quadruple-what-they-spend-on-attack-tools/d/d-id/1335229?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Data Center Changes Push Cyber Risk to Network’s Edge

Changes in fundamental enterprise architectures coupled with shifts in human resources mean that companies are considering new risks to their infrastructure.

Data centers face a huge increase in compute demand while looking at a precipitous drop in trained IT personnel. Add to those factors executive demand for changes in how data centers are powered, and the stage is set for shifts that could leave server farms, central storage, and enterprise network stacks open to cyberattacks. 

These are some of the points raised in a new report looking at the data center in 2025. The report, sponsored by Vertiv, is an update to a report first issued in 2014. In the original report, the data center brain drain was highlighted, with only 56% of survey participants expected to still be in the industry by 2025, and with retirement as the main reason for employees leaving.

But as the new report shows, the problem is much bigger. While the skills shortage in cybersecurity has been well-documented, it’s also an overall problem in IT. These shortages in trained IT professionals are looming as the industry sees a change in the way that data centers are structured – a change that may be as large as the shift to cloud computing. Enterprise computing, the new 2019 report says, is heading to the edge.

“Edge computing” in this context is computing pushed closer to users and devices, rather than delivering all compute services from central locations. Among organizations that have edge sites today or expect to have them in 2025, more than half (53%) expect the number of edge sites they support to grow by at least 100% between now and then, with 20% expecting an increase of 400% or more, according to the report.

Overall, survey participants said that they expect their total number of edge computing sites to grow 226% between now and 2025.

“The pressure on the edge has pushed the requirement for understanding IT applications out into places where it didn’t exist just one generation ago,” says Peter Panfil, vice president of global power at Vertiv. “We’re going through this generational change and at the same time the industry is undergoing fairly significant changes in the way it’s gonna be able to deploy its workforce.”

One way organizations are responding to the lack of trained professionals is by increasing the machine intelligence and automation capacity of different components in the data center. “If it’s not a smart cluster, it’s a smart rack, or a smart row, or a smart aisle where they can have complete flexibility in dropping ‘IT-capability delivery systems’ into places where before they just didn’t have them,” Panfil says.

Concerns about whether these more intelligent systems might become an attack vector for the enterprise have had an impact on how the intelligence is deployed. “For example, we offer a feature where we monitor the health of the UPS system,” he says. “We’ve got customers who say, ‘Nope, we are not going to let you even connect to the network.’ So the system has to be self-contained and self-optimizing.”

“More and more of our customers are saying that a connection into the system is a way for people to get in and fiddle with it in a nefarious way,” Panfil says. And that means hard limits on the connectivity that physical infrastructure components are allowed.

Fortunately, there are physical infrastructure components that fall into what Panfil calls the “blinking and breathing” part of the operation, akin to the human body’s autonomic systems, which breathe and blink without conscious intervention.

Even in complex situations – those involving varying percentages of green power at different times of day, say, or cooling driven by ambient temperatures and moment-by-moment energy costs – the data center’s physical infrastructure has to run on that self-contained, blinking-and-breathing basis to stay secure.

Security-conscious IT executives are in a bind: cloud-based control and automation systems could provide solutions to the functional gaps left by the growing skills shortage. But the network connections to critical infrastructure in the data center are, to many, unacceptable risks. The question is whether the self-contained solutions can provide the proper balance between function and security.



Article source: https://www.darkreading.com/vulnerabilities---threats/data-center-changes-push-cyber-risk-to-networks-edge/d/d-id/1335230?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Intel patches SSD firmware and microprocessor diagnostic tool

Intel has issued security updates for two of its products that enterprise and expert users will want to patch as soon as possible.

On paper, the more serious of the two affects 32/64-bit versions of the Intel Processor Diagnostic Tool (IPDT), a Windows utility used to test Intel microprocessor behaviour and troubleshoot faults.

Discovered by researcher Jesse Michael of firmware security company Eclypsium, the severity rating for this flaw (CVE-2019-11133) is ‘high’, which under the industry CVSS scoring system is a notch below critical.

The full details have yet to be released but are described in general terms as allowing:

An authenticated user to potentially enable escalation of privilege, information disclosure or denial of service via local access.

In the hands of an attacker, that would be carte blanche to do what they wanted. The limitation indicated by the use of the word “authenticated” means that local access to the computer is needed for an attack, but that could happen if a system were infected with malware.

On the other hand, the IPDT is a tool that only a subset of users, mostly specialists and admins, should have installed on their computers. The fix for anyone using it is to download version 4.1.2.24 or later.

SSD fix

Although the second flaw, affecting Intel’s Data Center S4500/S4600 Series Solid State Drive (SSD) firmware, is only rated ‘medium’ on CVSS, arguably it’s the more widespread and inconvenient of the two.

Identified as CVE-2018-18095 after being discovered internally by Intel, exploiting the vulnerability would allow privilege escalation on drives using firmware before version SCV10150.

Again, an attacker would need physical access to the management interface for the affected SSDs, which takes it out of the league of opportunist attackers.

However, although these drives launched less than two years ago in capacities up to 4TB, they are likely to have been installed in numerous data centers that invested in the claimed lower failure rates and higher performance of enterprise SSDs.

The good news is that updating multiple drives can be achieved using the Intel SSD Data Center Tool, which also automates finding updated firmware images.
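For admins with more than a handful of drives, that workflow can be scripted. Below is a hedged Python sketch that shells out to the isdct command-line tool; the show/load verbs follow Intel’s public documentation for the utility, but output formats vary by version, so treat the details as assumptions to verify against your installation:

```python
# Sketch: inventory Intel SSDs and (optionally) push firmware via isdct.
# Verify commands against Intel's documentation for your tool version.
import subprocess

def show_drives() -> str:
    """Print and return the detected Intel SSDs with current firmware."""
    result = subprocess.run(
        ["isdct", "show", "-intelssd"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
    return result.stdout

def load_firmware(index: int) -> None:
    """Apply the bundled firmware image to the drive at <index>."""
    subprocess.run(["isdct", "load", "-intelssd", str(index)], check=True)

if __name__ == "__main__":
    show_drives()       # check any S4500/S4600 running firmware < SCV10150
    # load_firmware(0)  # then update each affected drive, one index at a time
```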

Intel posts regular security updates across its product families on its support website.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/udI21dsbAVo/

Facial recognition surveillance must be banned, says Fight for the Future

The online activist group Fight for the Future is calling for a Federal ban on facial recognition surveillance.

Evan Greer, deputy director of Fight for the Future, compared facial recognition to nuclear or biological weapons: we can’t go back in time and ban the development of those technologies, but there is still time to stop facial recognition before we’re living in what the campaign calls a nation under “automated and ubiquitous monitoring of an entire population.”

Greer:

This surveillance technology poses such a profound threat to the future of human society and basic liberty that its dangers far outweigh any potential benefits. We don’t need to regulate it, we need to ban it entirely.

This is the latest campaign from the group that led a targeted internet blackout in 2015: thousands of sites blocked and redirected Congressional URLs to a Patriot Act protest page. Then, in 2017, Fight for the Future launched a last-ditch attempt to save net neutrality with its Break the Internet campaign.

Its latest call to action, BanFacialRecognition.com, offers visitors a form that connects them to their Congressional and local lawmakers in order to ask them to ban this “unreliable, biased” technology, which the group calls “a threat to basic rights and safety.”

Fight for the Future charges Silicon Valley lobbyists with “disingenuously calling for light ‘regulation’” of facial recognition so they can continue to profit from the rapid spread of this “surveillance dragnet,” thereby ducking the real debate: namely, should this technology even exist?

Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s use of facial recognition: we need an all-out ban.

‘It’s broken’

The campaign includes a laundry list of the criticisms that stick to facial recognition technology like so many civil rights burrs. One of its many problems is a high error rate: as the Independent reported last year, freedom of information requests show that the facial recognition software used by the UK’s biggest police force – London’s Metropolitan Police – generated false positives in more than 98% of the alerts it raised. The UK’s biometrics commissioner, Professor Paul Wiles, told the news outlet that the technology is “not yet fit for use”.
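A figure like that is largely a base-rate effect: when genuinely wanted faces are a tiny fraction of all faces scanned, even a fairly accurate matcher produces alerts that are overwhelmingly false. A toy calculation makes the point – all parameters here are hypothetical, not the Met’s actual numbers:

```python
# Toy base-rate illustration (hypothetical parameters, not Met data).
watchlist_rate = 1 / 10_000  # fraction of scanned faces actually wanted
tpr = 0.90                   # chance a wanted face triggers an alert
fpr = 0.001                  # chance an innocent face triggers an alert

true_alerts = watchlist_rate * tpr
false_alerts = (1 - watchlist_rate) * fpr
share_false = false_alerts / (true_alerts + false_alerts)
print(f"{share_false:.1%} of alerts are false positives")  # 91.7%
```

Push the false positive rate up, or the watchlist prevalence down, and the share of bogus alerts climbs toward the Met’s reported 98%.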

As we reported in 2017, the Met’s use of facial recognition fell flat on its face two years in a row. Its “top-of-the-line” automated facial recognition (AFR) system, which it trialled at London’s Notting Hill Carnival, couldn’t even tell the difference between a young woman and a balding man. One man was wrongfully detained after being erroneously tagged as being wanted on a warrant for a rioting offense.

Multiple studies have found that AFR is an inherently racist technology: facial recognition algorithms have been found to be less accurate at identifying black faces.

During a scathing US House oversight committee hearing on the FBI’s use of the technology in 2017, it emerged that 80% of the people in the FBI’s facial recognition database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.

From the Ban Facial Recognition site:

These errors have real-world impacts: wrongful imprisonment, deportation, or worse.

Even though facial recognition has proved highly error-prone, it’s being widely deployed by law enforcement in multiple countries. And once governments have our biometric information in their databases, it’s “an easy target for identity thieves or state-sponsored hackers,” Fight for the Future says.

In fact, our biometric data has already been ripped off, the group said, pointing to last month’s theft of a US Customs and Border Protection (CBP) database full of travelers’ photos and license plates.

‘It’s invasive’

Fight for the Future points out that “Law enforcement officers frequently search facial recognition databases without warrants – or even reasonable suspicion that you’ve done anything wrong.”

It’s not just facial image databases that police turn to for non-sanctioned purposes: officers have looked up romantic partners, business associates, neighbors, journalists and others in criminal-history and driver databases for reasons that have nothing to do with official police work. We’ve seen multiple cases of cops treating their state’s driver license database like a kind of Facebook, using it to look up and ogle female colleagues’ images hundreds of times – a hobby that has set taxpayers back hefty amounts when those women have sued over breach of privacy.

‘It threatens our future’

Fight for the Future says that facial recognition is uniquely Orwellian, and that we’ve got to stop its spread before we’re living under an authoritarian state:

Facial recognition is unlike any other form of surveillance. It enables automated and ubiquitous monitoring of an entire population, and it is nearly impossible to avoid. If we don’t stop it from spreading, it will be used not to keep us safe, but to control and oppress us – just as it is already being used in authoritarian states.

The backlash is growing

As Fight for the Future pointed out in a press release about the campaign to ban facial recognition, police use of the technology has already been banned in San Francisco and Somerville, Massachusetts.

The group said that Axon, which makes tasers and body cams for police officers, has said that it wouldn’t commercialize facial recognition because it currently can’t “ethically justify” its use.

Fight for the Future also cited recent revelations that the FBI and Immigration and Customs Enforcement (ICE) are reportedly using driver’s license photos for facial recognition searches without license holders’ knowledge or consent. Doing so gives them access to millions of Americans’ driver’s license photos, creating what critics have called an “unprecedented surveillance infrastructure.”

Both Democrats and Republicans have been dumbfounded by law enforcement’s audacity – no elected official gave permission for 18 state DMVs to share their driver’s license databases – and have looked to ban it in the absence of rules about its use by law enforcement and government agencies.

Rep. Jim Jordan, R-Ohio, said during a House Oversight Committee hearing that it’s “scary.”

It doesn’t matter what side of the political spectrum you’re on. This should concern us all.

Fight for the Future:

We’re joining this outcry to call for a complete ban on facial recognition. It’s time the federal government take a stand now to prevent this technology from proliferating across the country.

But while there well may be bipartisan support for banning facial recognition, there’s also bipartisan support for keeping it, propped up by strong tech lobbying efforts. During the House Oversight Committee hearing, “Facial Recognition Technology (Part 1): Its Impact on our Civil Rights and Liberties,” which took place in May, Rep. Alexandria Ocasio-Cortez, D-N.Y., had this to say:

The consensus on this issue I think is bipartisan, but also the opposition is bipartisan as well. You know, big tech is a very strong lobby that has captured a lot of members of both parties.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gDvAWIaBBtI/
