What do Windows 10 and Uber or Lyft have in common? One bad driver can really ruin your day. And 40 can totally ruin your month

DEF CON Too many trusted Windows 10 peripheral drivers, signed off by Microsoft and running with powerful kernel-level privileges, are riddled with exploitable security vulnerabilities, according to infosec biz Eclypsium.

During a talk [PDF] at this year’s DEF CON hacking shindig in Las Vegas, Eclypsium’s Jesse Michael and Mickey Shkatov warned that the driver software, which is developed by major vendors to ensure their devices work with Windows 10 systems, can be compromised by malware or rogue logged-in users to elevate their privileges and gain total control over otherwise fully patched computers.

The crux of the issue is that the drivers, which support gear from graphics cards to hard drives, run at the same level as the operating system’s kernel, which grants them pretty much free access to the underlying hardware and motherboard firmware. If a miscreant manages to exploit an escalation-of-privilege or information-disclosure hole in one of these drivers, they will gain the same level of control over the box. When that happens, it’s pretty much game over.
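
To make that attack surface concrete, below is a minimal user-mode sketch, in Python via ctypes, of how a driver’s IOCTL interface is reached on Windows. The device path and control code are hypothetical placeholders rather than anything from Eclypsium’s findings; the point is simply that any logged-in user can hand a kernel driver a request buffer, and a driver that acts on that buffer without validation is handing out kernel-level power:

```python
# Minimal user-mode sketch of reaching a Windows driver's IOCTL interface.
# The device path and IOCTL code are hypothetical placeholders -- real
# drivers expose their own. A driver that trusts the contents of in_buf
# in kernel mode (say, treating it as an address and value to write)
# turns this into an escalation-of-privilege primitive.
import ctypes
import ctypes.wintypes as wt
import struct

GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = ctypes.c_void_p  # avoid handle truncation on 64-bit

DEVICE_PATH = r"\\.\ExampleVendorDriver"  # hypothetical device object name
IOCTL_EXAMPLE_WRITE = 0x222003            # hypothetical control code

handle = kernel32.CreateFileW(DEVICE_PATH, GENERIC_READ | GENERIC_WRITE,
                              0, None, OPEN_EXISTING, 0, None)
if handle in (None, ctypes.c_void_p(-1).value):  # INVALID_HANDLE_VALUE
    raise OSError(ctypes.get_last_error(), "could not open device")

# Attacker-controlled request: a safe driver must validate every field
# before acting on it with kernel privileges.
in_buf = struct.pack("<QQ", 0xFFFF800000000000, 0x41414141)
returned = wt.DWORD(0)
kernel32.DeviceIoControl(handle, IOCTL_EXAMPLE_WRITE,
                         in_buf, len(in_buf), None, 0,
                         ctypes.byref(returned), None)
kernel32.CloseHandle(handle)
```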

And it turns out dozens of these drivers, which are cryptographically signed by Microsoft so that Windows 10 trusts them, have exploitable flaws.

“Our analysis found that the problem of insecure drivers is widespread, affecting more than 40 drivers from at least 20 different vendors – including every major BIOS vendor, as well as hardware vendors like ASUS, Toshiba, Nvidia, and Huawei,” said Team Eclypsium in a memo accompanying its conference presentation.

“However, the widespread nature of these vulnerabilities highlights a more fundamental issue – all the vulnerable drivers we discovered have been certified by Microsoft.”

In practice, this means malware already on a machine could gain enough control to evade antivirus detection, snoop on users or steal data undetected, and cause other mischief.

Eclypsium alerted vendors to the security holes in the drivers: manufacturers including Intel and Huawei pushed patched versions of their code to users, while some others were less responsive.

Below is a list of hardware makers who, we’re told, have patched their drivers. If you use their kit with Windows 10, run Windows Update to fetch the new driver builds, or otherwise check that you’re on the latest driver software so the security fixes are in place (one way to take stock is sketched after the list):

ASRock; ASUSTeK Computer; ATI Technologies (AMD); Biostar; EVGA; Getac; GIGABYTE; Huawei; Insyde; Intel; Micro-Star International (MSI); NVIDIA; Phoenix Technologies; Realtek Semiconductor; SuperMicro; and Toshiba.
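
Here’s a rough way to inventory what’s installed, assuming Python and PowerShell are to hand: the standard WMI class Win32_PnPSignedDriver reports each signed driver’s provider and version, which you can then compare against the vendor’s latest release. The vendor shortlist in the script is illustrative, not exhaustive:

```python
# Quick inventory of installed signed drivers on Windows 10 so versions can
# be compared against each vendor's latest release. Uses the standard WMI
# class Win32_PnPSignedDriver via PowerShell; run as administrator for
# complete results.
import subprocess

ps_cmd = (
    "Get-WmiObject Win32_PnPSignedDriver | "
    "Select-Object DeviceName, DriverProviderName, DriverVersion | "
    "Sort-Object DriverProviderName | Format-Table -AutoSize"
)

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps_cmd],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# Flag vendors named in the advisory for a closer look (illustrative list).
vendors_of_interest = ("ASUSTeK", "NVIDIA", "Intel", "Toshiba", "Huawei")
for line in result.stdout.splitlines():
    if any(v.lower() in line.lower() for v in vendors_of_interest):
        print("CHECK:", line.strip())
```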

As Microsoft was quick to note in a statement to El Reg on the matter, this isn’t a remote-code execution scenario: to abuse the drivers, you already need to be running code on a target machine. According to the Redmond giant, if you keep antivirus tools, drivers, operating system software, and applications up to date, and refrain from opening downloads, programs, and email attachments from untrusted sources, and stay away from bad websites, you’ll hopefully prevent malware from getting a foothold on your computer.

Ultimately, the Eclypsium crew points out, even though the driver code is signed by Microsoft, the onus to secure drivers falls on the vendors themselves; there is only so much the Windows goliath and administrators can do to protect machines from driver exploits.

“Organizations should not only continuously scan for outdated firmware, but also update to the latest version of device drivers when fixes become available from device manufacturers,” the Eclypsium team noted. “Organizations may also want to keep their firmware up to date, scan for vulnerabilities, monitor and test the integrity of their firmware to identify unapproved or unexpected changes.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/12/microsoft_windows_bad_drivers/

More Focus on Security as Payment Technologies Proliferate

Banks and merchants are expanding their payment offerings but continue to be wary of the potential fraud risk.

More than three-quarters of companies are investing in new payment technologies, despite concerns about the security of specific implementations and the potential for transaction fraud, according to a Forrester Research report released on August 12. 

The report, “Understanding the Evolving Payments Landscape,” says that retail merchants and online businesses expect the way that consumers pay for goods to change relatively quickly, with about half expecting to face increasing choices in payment offerings. At the same time, 61% of financial institutions and merchants believe that fraud will also increase.

This bifurcated outlook on the financial future is not all that surprising because many consumers are quick to adopt the latest technology, while companies often distrust new technologies until they prove themselves, says R.L. Prasad, senior vice president of payment system risk at Visa, which commissioned the Forrester report.

“Consumers are becoming more and more comfortable with carrying money on their phone, so mobile wallet expansion is an area that will see a lot of growth,” he says. “Peer-to-peer payments are also here to stay. The convenience is unparalleled right now.”

As merchants and credit-card issuers continue to suffer significant breaches, Visa and other payment technology companies are looking to improve analytics and deploy machine learning that can detect and prevent fraud. The Forrester report says that while digital payments have lower fraud rates, the impact of fraud in card-not-present transactions — the most common type of digital transaction — is much worse. Card-not-present fraud affected 28% of companies surveyed but accounted for 40% of fraud volume.
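
As a rough illustration of the kind of signal such analytics hunt for — a minimal rule-based sketch, not Visa’s or anyone else’s production system — distributed card testing tends to surface as a burst of very small authorization attempts sharing a card prefix (BIN) across many merchants in a short window:

```python
# Minimal sketch of one card-testing signal, not any vendor's actual system:
# flag a burst of small card-not-present authorization attempts sharing a
# BIN (first six digits) spread across many merchants in a short window --
# the pattern distributed enumeration produces.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    card_number: str
    merchant_id: str
    amount: float
    timestamp: float  # seconds since epoch

def flag_card_testing(attempts, window=300.0, min_attempts=20, min_merchants=5):
    """Return the set of BINs whose recent activity looks like enumeration."""
    by_bin = defaultdict(list)
    for a in attempts:
        by_bin[a.card_number[:6]].append(a)

    flagged = set()
    for bin_prefix, rows in by_bin.items():
        rows.sort(key=lambda a: a.timestamp)
        # only attempts inside the window ending at the latest attempt
        recent = [a for a in rows if rows[-1].timestamp - a.timestamp <= window]
        merchants = {a.merchant_id for a in recent}
        small = [a for a in recent if a.amount <= 2.00]
        if len(small) >= min_attempts and len(merchants) >= min_merchants:
            flagged.add(bin_prefix)
    return flagged

# Example: 25 one-cent attempts on one BIN across 6 merchants inside a minute
attempts = [AuthAttempt(f"4242420000{i:06d}", f"m{i % 6}", 0.01, float(i))
            for i in range(25)]
print(flag_card_testing(attempts))  # {'424242'}
```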

These continued concerns come even as merchants and banks adopt more payment technologies. Fifty-eight percent of those surveyed support digital wallets, and 60% support peer-to-peer payments, according to the Forrester report. Almost three-quarters of respondents support paying bills through mobile banking.

“Consumers’ usage of new payment technologies is expected to increase substantially over the next five years,” according to the report. “Banks, merchants, and fintechs are working hard to ensure they offer these capabilities to their customers.”

Yet many merchants and banks do not have the security maturity to guard against fraudulent transactions. Most are still using usernames and passwords to protect accounts, without two-factor authentication. Half of merchants use some device data to identify the user, and 43% use some form of biometrics. Measures that counter fraud generally add friction to a transaction, potentially resulting in a lost sale. For that reason, many companies dislike their options for combating fraud.

“There are a lot of merchants and stakeholders in the payment ecosystem who are immature in their processes,” Prasad says. “Increasingly, those smaller and less-mature clients are seeking ways of improving their security.”

Most companies are hiring staff specifically tasked with security and anti-fraud roles, while more than three-quarters are spending on new tools to help secure transactions. 

Merchants and banks need to improve their anti-fraud defenses because cybercriminals are already doing so, Visa’s Prasad says. In the past three years, the company has seen fraudsters adopt a new technique, using machine learning to attempt to find valid credit card numbers by generating account numbers and then distributing the testing of those numbers across a large number of sites to evade detection.

“Fraudsters are using artificial intelligence to advance their techniques,” he says. “We see large-scale attacks happening, [where] they don’t have to steal a card from a wallet, but they can actually enumerate the numbers.”
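
A back-of-the-envelope sketch shows why enumeration is cheap to attempt: the final digit of a card number is a public Luhn checksum, so invalid candidates can be discarded offline and only plausible ones tested against live sites. The prefix below is the well-known 4242… test prefix, used purely for illustration:

```python
# Why enumeration is cheap to attempt: the last digit of a card number is
# a public Luhn checksum, so syntactically invalid candidates can be
# discarded offline before any "test" transaction. Illustrative only --
# the prefix is the well-known 4242... test prefix, not a real issuer range.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# Of the 10,000 candidates below, exactly 1,000 pass the checksum.
candidates = [f"424242424242{i:04d}" for i in range(10000)]
plausible = [c for c in candidates if luhn_valid(c)]
print(len(plausible))  # 1000
```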

The best approach to combating fraud is to improve security holistically, investing in people, products, and collaboration with service providers, Prasad says. About two-thirds of companies with mature security processes say that partners are critical to the effort, the report states.

Article source: https://www.darkreading.com/attacks-breaches/more-focus-on-security-as-payment-technologies-proliferate/d/d-id/1335496?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Pros, Congress Reps Talk National Cybersecurity at DEF CON

Cybersecurity and government leaders discussed why Congress is unprepared for a major cyberattack and how the two parties can collaborate.

Is the United States prepared to handle a societywide cyberattack spanning industries and government jurisdictions? “No, we are not,” said Rep. Ted Lieu (D-CA) in a talk at last week’s DEF CON 2019.

Lieu was among the participants on a DEF CON panel entitled “Hacking Congress: The Enemy of My Enemy Is My Friend.” He, along with fellow congressman Jim Langevin (D-RI), Wilson Center president and former US representative (D-CA) Jane Harman, IBM X-Force Red Team director Cris Thomas, and Rapid7 director of public affairs Jen Ellis, discussed Congress’ responsibility in cybersecurity.

“Politics is analog,” said Harman in her opening remarks. “But the world, and the problems politicians confront, is digital.” These problems will never be solved without bridging silos between security and government, she added, noting “no one solves problems in isolation.”

Imagine if a massive destructive malware attack struck the US, shutting down hospitals and transportation systems. We have yet to see the type of “true doomsday” attack that would lead policymakers to the realization that they need to do something about cybersecurity, said Ellis in an interview with Dark Reading. One goal of including representatives from Congress at DEF CON was to bring the two communities together and introduce the lawmakers to the experts and culture of cybersecurity.

There is an old perception that policymakers don’t know anything about tech, Ellis explained, but lots of people are interested and trying to do the right thing. The problem is, they’re dealing with an “unbelievably diverse” range of life-and-death issues. Cybersecurity is only one of them, but interest in privacy and security is growing as regulations like the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) take effect, on top of 50-plus state-specific standards for breach notification.

Another challenge is the pace at which government moves and the structure of how its branches handle security. As Lieu pointed out, when a crisis hits, it’s generally too late for Congress to do anything. Its job is to make laws and do oversight, “none of which is speedy or quick,” he noted. Further, multiple areas of government – Department of Homeland Security, Department of Defense, the Office of Management and Budget – have a role in cybersecurity.

“If we could centralize a single point of contact, I think that would make things easier,” Lieu said.

Representatives and security experts discussed current and future steps that could improve the government’s security posture. Langevin emphasized the importance of practicing an incident response plan. “It needs to be exercised and drilled over and over again,” he said. The US was “totally caught off guard” by Russian interference in the 2016 election, Langevin noted, and while security improved for the 2018 midterms, he believes the Russians will be back in 2020.

“The other thing we need to do, I believe, is engage more with the cybersecurity research community,” he added. Hack the Pentagon is one example of how government is working with researchers, Lieu said, but more can be done. The Wilson Center, the Hewlett Foundation, and I Am The Cavalry are three organizations working to bring policymakers and security pros together. Another, the DC to DEF CON program, brings congressional staffers to the annual conference.

Of course, the security research and government communities work in entirely different ways, Ellis pointed out. Security experts want problems to be fixed immediately, while public policy is a slow-moving process. Congress works as an “evolution,” while the tech sector is a binary world. “We can be a bit absolutist about things,” she said, but changing policy “won’t be one and done.”

“The current congresspeople I see want to be sophisticated,” Thomas said. They want to be knowledgeable and understand cybersecurity issues, but those issues are complex, he added. The security community should engage with them and educate them on what they should be worried about. All panelists encouraged the audience of researchers, many of whom expressed interest in working to improve security policy, to contact their representatives and share their concerns.

Article source: https://www.darkreading.com/endpoint/security-pros-congress-reps-talk-national-cybersecurity-at-def-con/d/d-id/1335497?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hackers Can Hurt Victims with Noise

Research presented at DEF CON shows that attackers can hijack Wi-Fi and Bluetooth-connected speakers to produce damaging sounds.

Sound can be damaging to physical health — even lethal. And a hacker can generate sounds that can do damage through common Wi-Fi- and Bluetooth-connected devices, according to a research presentation at DEF CON 27.

Matt Wixey, research lead for the PwC UK Cyber Security practice and a doctoral student, found that he could access the speaker and volume controls for a number of different devices and use them to produce sounds at volumes that could distract and annoy humans almost instantly, damage human hearing with a relatively short exposure, and even damage the device itself.

Wixey has reported his findings to a number of different device manufacturers, some of which have made changes to their firmware, but he found that there are viable attacks on many different devices (details of which he didn’t release, to minimize possible public harm). In general, though, he reported that audio levels are a legitimate attack vector in the realm of cyberattacks intended to do physical, rather than data-based, damage.

For more, read here. (Note: the link does not work in all browsers, but the report currently opens in Firefox and Tor.)

Article source: https://www.darkreading.com/attacks-breaches/hackers-can-hurt-victims-with-noise/d/d-id/1335498?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FBI Plans to Monitor Social Media May Spark Privacy Issues

A new initiative to pull data from social media platforms may clash with policies prohibiting the use of information for mass surveillance.

The Federal Bureau of Investigation wants to intensify its monitoring of Facebook, Twitter, and other social media platforms to detect potential threats. But its plans may collide with privacy policies and, potentially, Facebook’s ability to comply with its recent $5 billion settlement with the FTC.

FBI officials are soliciting vendor proposals for a contract that would gather “vast quantities” of publicly available data from social media sites including Facebook and Twitter, The Wall Street Journal reports. While vendors would not be able to access direct messages or other private content, they would collect data including names, user IDs, and photos, which could paint an accurate picture of individuals’ lives when merged with external data and enable a third party to track them online.

Facebook, which works with law enforcement when issued a warrant or subpoena for user information, also has a ban on the use of its data for surveillance and prohibits law enforcement from analyzing data without users’ permission. Similarly, Twitter does not allow use of its data for surveillance purposes. The FBI, under pressure to address waves of violence across the country, believes it can collect data and learn about potential threats without breaching privacy compliance requirements.

These plans may also conflict with Facebook’s ability to comply with its privacy settlement with the Federal Trade Commission, which mandates the social media giant maintain a data security program. As part of this, Facebook must stop the misuse of publicly available data, the type of information the FBI wants to scan.

Read more details here.

Article source: https://www.darkreading.com/endpoint/fbi-plans-to-monitor-social-media-may-spark-privacy-issues/d/d-id/1335499?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GDPR privacy can be defeated using right of access requests

A British researcher has uncovered an ironic security hole in the EU’s General Data Protection Regulation (GDPR) – right of access requests.

Right of access, also called subject access, is the part of the GDPR regulation that allows individuals to ask organisations for a copy of any data held on them.

This makes sense because, as with any user privacy system, there must be a legally enforceable mechanism which allows people to check the accuracy and quantity of personal data.

Unfortunately, in what can charitably be described as a massive GDPR teething problem, Oxford University PhD student James Pavur has discovered that too many companies are handing out personal data when asked, without checking who’s asking for it.

In his session entitled GDPArrrrr: Using Privacy Laws to Steal Identities at this week’s Black Hat show, Pavur documents how he decided to see how easy it would be to use right of access requests to ‘steal’ the personal data of his fiancée (with her permission).

After he contacted 150 UK and US organisations while posing as her, the answer turned out to be: not hard at all.

According to accounts by journalists who attended the session, for the first 75 organisations contacted by letter, he impersonated her using only information he was able to find online – full name, email address, phone numbers – and some companies responded by supplying her home address.

Armed with this extra information, he then contacted a further 75 by email, which satisfied some to the extent that they sent back his fiancée’s social security number, previous home addresses, hotel logs, school grades, whether she’d used online dating, and even her credit card numbers.

Pavur didn’t even need to fake identity documents or forge signatures to back up his requests and didn’t spoof her real email addresses to make his requests seem more genuine.

Lateral thinking

Pavur hasn’t revealed which companies failed to authenticate his bogus right of access requests, but named three – Tesco, Bed Bath and Beyond, and American Airlines – which performed well because they challenged his requests after spotting missing authentication data.

Nevertheless, a quarter handed over his fiancée’s data without any identity verification, 16% asked for an easily forged type of ID that he decided not to provide, while 39% asked for strong proof of identity.

Curiously, 13% ignored his requests entirely, which at least meant they weren’t handing over data willy nilly.

The potential for identity theft doesn’t need spelling out here, notes Pavur’s session blurb:

While far too often no proof of identity is required at all, even in the best cases the GDPR permits someone capable of stealing or forging a driving license nearly complete access to your digital life.

The danger is that criminals might already have been exploiting this without anybody noticing.

As Pavur points out, automating bogus standardised access requests wouldn’t be hard to do at scale by using the sort of basic name and email address data that many people make public on social media.

Whose fault?

If Pavur’s research shows a failing, it’s that too many organisations still don’t understand GDPR.

It isn’t enough to secure data in a technical sense if you don’t also secure access to it. If someone phones up requesting to know what data is held on them, and the request isn’t authenticated, the mechanism becomes a bypass that ends up endangering privacy rather than protecting it.
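
What a minimal authentication gate might look like – a sketch of one possible control, not something prescribed by the GDPR or demonstrated in Pavur’s talk – is to fulfil a subject access request only once the requester proves control of a contact address already on file, never one supplied in the request itself:

```python
# Minimal sketch of one mitigation, assuming the organisation already holds
# a verified email address for the data subject: before fulfilling a subject
# access request, email a one-time token to the address *on file* (never an
# address supplied in the request) and require it back within a deadline.
import hmac
import secrets
import time

PENDING = {}            # request_id -> (email_on_file, token, expiry)
TOKEN_TTL = 24 * 3600   # seconds

def open_request(request_id: str, email_on_file: str) -> str:
    token = secrets.token_urlsafe(16)
    PENDING[request_id] = (email_on_file, token, time.time() + TOKEN_TTL)
    # send_email(email_on_file, token) -- delivery left out of the sketch
    return token

def verify_request(request_id: str, presented_token: str) -> bool:
    entry = PENDING.pop(request_id, None)  # single use: pop, don't get
    if entry is None:
        return False
    _, token, expiry = entry
    if time.time() > expiry:
        return False
    # constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(token, presented_token)

token = open_request("sar-0001", "person@example.com")  # token emailed out-of-band
print(verify_request("sar-0001", token))   # True
print(verify_request("sar-0001", token))   # False: tokens are single use
```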

While it’s true that this could have been happening long before GDPR existed, giving citizens the legal right to request data has handed people with bad intentions a standardised mechanism to try and manipulate.

But there are deeper failures here too – if organisations try to verify someone’s identity, what should they ask for? GDPR or not, there is still no universal and reliable identity verification system to check that someone is who they say they are.

Today, the systems that do exist are all about identifying people for one app or service. Government schemes still feel as if they’re going sideways.

Until that’s solved, GDPR will remain legislation that closes some big privacy holes by creating a few new ones.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EyJdhSW2bqw/

Facebook facial recognition: class action suit gets court’s go ahead

Yes, yet another US court has reaffirmed, Facebook users can indeed sue the company over its use of facial recognition technology.

The US Court of Appeals for the Ninth Circuit on Thursday affirmed the district court’s certification of a class action suit – Patel v. Facebook – that a steady progression of courts has allowed to proceed since it was first filed in 2015.

Though a stream of courts has refused to let Facebook wiggle out of this lawsuit – and boy oh boy, has it tried – this is the first decision by an American appellate court that directly addresses what the American Civil Liberties Union (ACLU) calls the “unique privacy harms” of ever-more ubiquitous facial recognition technology, which is increasingly being foisted on the public without our knowledge or consent.

The lawsuit was initially filed by some Illinois residents under Illinois law, but the parties agreed to transfer the case to the California court.

What the suit claims: Facebook violated Illinois privacy law by “secretly” amassing the biometric data of the plaintiffs – Nimesh Patel, Adam Pezen and Carlo Licata – without their consent, and squirreling it away in what Facebook claims is the largest privately held database of facial recognition data in the world.

Specifically, the suit claims that Facebook didn’t do any of the following:

  • Properly inform users that their biometric identifiers (face geometry) were being generated, collected or stored.
  • Properly inform them, in writing, what it planned to do with their biometrics and how long the company planned to collect, store and use the data.
  • Provide a publicly available retention schedule and guidelines for permanently destroying the biometric identifiers of users who don’t opt out of “Tag Suggestions”.
  • Receive a written release from users to collect, capture, or otherwise obtain their biometric identifiers.

The Illinois law in question – the Illinois Biometric Information Privacy Act (BIPA) – bans collecting and storing biometric data without explicit consent, including “faceprints.” This is one of the first tests of the powerful biometrics privacy law. Another test of BIPA is a class action suit, proposed in September 2018, brought against the US fast-food chain Wendy’s over its use of biometric clocks that scan employees’ fingerprints to track them at work.

Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, had this to say about the court’s decision to let the Facebook facial recognition class action go ahead:

This decision is a strong recognition of the dangers of unfettered use of face surveillance technology.

The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.

In her opinion, Judge Sandra Segal Ikuta wrote that the court concludes that Facebook’s development of a “face template” using facial recognition, allegedly without consent, could well invade an individual’s privacy rights:

The facial-recognition technology at issue here can obtain information that is ‘detailed, encyclopedic, and effortlessly compiled,’ which would be almost impossible without such technology.

In short, yes, the court concluded: the plaintiffs have made a case for having allegedly suffered sufficient privacy injuries to have standing to sue.

Rebecca Glenberg, senior staff attorney at the ACLU of Illinois, said that with this court go-ahead, Illinois’s BIPA law has passed legal muster. Citizens can let the lawsuits fly for having their faceprints taken without consent, even if nobody has actually stolen or misused the data:

BIPA’s innovative protections for biometric information are now enforceable in federal court. If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court.

As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/En8PlzNDHD0/
