
Google: You know we said that Chrome tracker contained no personally identifiable info? Yeah, about that…

Updated Google has seemingly stopped claiming an identifier it uses internally to track experimental features and variations in its Chrome browser contains no personally identifiable information.

In February, Arnaud Granal, a software developer who works on a Chromium-based browser called Kiwi, claimed the X-client-data header, which Chrome sends to Google when a Google webpage has been requested, represents a unique identifier that can be used to track people across the web. As such, it could run afoul of Europe’s tough privacy regulations.

When The Register reported these claims, Google insisted the X-client-data header only includes information about the variation of Chrome being used, rather than a unique fingerprint. “It is not used to identify or track individual users,” the ad giant said.

The Register has no reason to believe the X-client-data header was ever used to track and identify people across websites – Google has better ways of doing that. Concern about the identifier has more to do with insufficient disclosure, inaccurate description, legal compliance, and the possibility that it might be abused for identifiable tracking.

The specific language appeared in the Google Chrome Privacy Whitepaper, a document the company maintains to explain the data Chrome provides to Google and third parties.

Last month, Google’s paper said, “This Chrome-Variations header (X-client-data) will not contain any personally identifiable information, and will only describe the state of the installation of Chrome itself, including active variations, as well as server-side experiments that may affect the installation.”

That language is no longer present in the latest version of the paper, published March 5, 2020.


Asked why the change was made, a Google spokesperson said only, “The Chrome white paper is regularly updated as part of the Chrome stable release process.”

In place of the old language, seen in this diff image, is a slightly more detailed explanation of the X-client-data header. The header comes in two variations: a low-entropy (13-bit) version whose value ranges from 0 to 7999, and a high-entropy version, which is what most Chrome users will send if they have not disabled usage-statistics reporting.
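As a back-of-the-envelope illustration of why a 13-bit low-entropy value is weakly identifying on its own: with roughly 8,000 possible values, each value is shared by a very large pool of users. A hedged sketch, in which the global user count is an assumed round figure, not from this article:

```python
import math

LOW_ENTROPY_BUCKETS = 8000            # the header's low-entropy value ranges 0-7999
ASSUMED_CHROME_USERS = 2_000_000_000  # hypothetical round figure, not from the article

# Average number of users sharing any single low-entropy value
anonymity_set = ASSUMED_CHROME_USERS // LOW_ENTROPY_BUCKETS
print(anonymity_set)                  # 250000 users per bucket, on average

# Bits of identifying information the value contributes by itself
print(round(math.log2(LOW_ENTROPY_BUCKETS), 2))  # 12.97 bits
```

On its own, ~13 bits cannot single anyone out; the privacy concern is that, combined with other fingerprinting signals such as IP address and user-agent, every additional bit roughly halves the pool of candidate users.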

The Register asked whether the change was made to avoid liability under Europe’s GDPR for claiming incorrectly that the X-client-data header contained no information that could be used to personally identify the associated Chrome user. But Google’s spokesperson didn’t address that question.

In an email to The Register, Granal said, “Knowing a bit the inner-workings on both sides (including Google’s lawyers), this is certainly a sensitive issue and it can be costly to Google if the issue is not addressed properly.

“As a user, in the current state, it’s important to understand that no matter if you use a proxy, a VPN, or even Tor (with Google Chrome), Google (including DoubleClick) may be able to identify you using this X-Client-Data. Do you want Google to be able to recognize you even if you are not logged-in to your account or behind a proxy? Personally, I am not comfortable with that, but each person has a different sensitivity with regards to privacy.

“I’m sure if you explain in simple words, to national data protection offices that Google can track your computer with a ‘permanent cookie’ they wouldn’t be happy with that at all.” ®

Updated to add

After this story was published, a Google spokesperson pointed out the Chrome privacy paper still says the X-client-data header doesn’t include personally identifiable information, but in different words. The relevant paragraph, we’re told, is:

Additionally, a subset of low entropy variations are included in network requests sent to Google. The combined state of these variations is non-identifying, since it is based on a 13-bit low entropy value

Also, we’re told our claim that Chrome sends high-entropy variations in the header is incorrect: only low-entropy variations are sent.


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/11/google_personally_identifiable_info/

Hey, friends. We know it’s a crazy time for the economy, but don’t forget to enable 2FA for payments by Saturday

Saturday is the delayed deadline for UK banks and financial institutions to have implemented two-factor authentication for payment transactions.

This is the result of the EU Payment Services Directive 2 (PSD2) for “Strong Customer Authentication” (SCA). This requires institutions to have two levels of authentication in place for online transactions to reduce fraud. Providers can choose two out of three – something the customer knows, like a PIN, something they have, like a phone or hardware token, and something they are – a biometric check.
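The “something they have” factor is commonly implemented as a one-time code generated on the customer’s phone. As a rough sketch only, not any bank’s actual scheme, an RFC 6238-style TOTP check can be written with the Python standard library:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step +/- `window` steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, at=now + w * 30), submitted)
               for w in range(-window, window + 1))
```

A provider would pair a check like `verify(secret, submitted_code)` with a knowledge factor such as a PIN to satisfy the two-of-three requirement.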

James Stickland, chief executive officer at authentication platform Veridium, said the huge growth in use of digital services made better authentication vital.

Stickland said: “This Saturday’s deadline is a long-awaited triumph for consumer security and combatting online fraud. Ever-rising fraud levels are linked to the consumer preference of mobile e-commerce, forcing regulation to keep pace with innovation. Businesses have had an extended period of six months, in addition to the two years since the initial announcement, and there is no legitimate reason not to be compliant. A failure to integrate Strong Customer Authentication demonstrates a disregard for consumer protection – it should have been prioritised long ago and viewed as a business differentiator.”

Stickland warned that banks face large fines for not complying with the rules. Though the changes could inconvenience customers, he said, they could actually improve the user experience, increase confidence, and allow new services to be offered.

There are some exemptions to SCA, such as recurring payments (only the first payment in a series needs authenticating), low-value payments of less than €30, and merchant-initiated transactions.
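Those exemptions amount to a small decision procedure a payment provider might run before prompting for a second factor. A hedged sketch with invented field names; the real PSD2 rules include further conditions (such as cumulative caps on consecutive low-value payments and transaction-risk analysis) not modelled here:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount_eur: float
    recurring: bool = False          # part of a fixed recurring series
    first_in_series: bool = True     # only the first recurring payment needs SCA
    merchant_initiated: bool = False

def sca_required(p: Payment) -> bool:
    """Apply the exemptions named above; default to requiring SCA."""
    if p.merchant_initiated:
        return False
    if p.recurring and not p.first_in_series:
        return False
    if p.amount_eur < 30:            # low-value exemption
        return False
    return True
```

For example, `sca_required(Payment(amount_eur=9.99))` returns `False` under the low-value exemption, while a €250 one-off payment returns `True`.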

How big a problem online fraud is remains a hotly disputed subject.

Government quango Action Fraud is the UK’s central referral point for online crime, but it referred fewer computer misuse cases to police in 2019. The Crime Survey for England and Wales also recorded a fall, though its total remained far higher than Action Fraud’s.

In other news, Halifax and Lloyds’ online services were struggling to stay available this morning.

We contacted Lloyds, FirstDirect, the Royal Bank of Scotland as well as the Financial Conduct Authority but none were able to respond. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/13/eu_multi_factor_auth_banking/

Your data was ‘taken without permission’, customers told, after personal info accessed in O2 UK partner’s database

Hackers have slurped biz comms customers’ data from a database run by one of O2’s largest UK partners.

In an email sent to its customers, the partner, Aerial Direct, said that an unauthorised third party had been able to access customer data on 26 February through an external backup database, which included personal information on both current and expired subscribers from the last six years.

The data accessed included personal information, such as names, dates of birth, business addresses, email addresses, phone numbers, and product information. The company said no passwords or financial information were taken.

“As soon as we became aware of this unauthorised access we shut down access to the system and launched a full investigation, with assistance from experts, to determine what happened and what information was affected. We immediately reported this matter to the Information Commissioner’s Office and are actively working on fully exploring the details of how it happened.”

‘Sophisticated’

The company said that it was unsure who was responsible for the hack or what their intentions were. It added that it has “sophisticated safeguards in place to protect customer information”, and was “working to further enhance security by taking advice from relevant experts”.

Based in Fareham, England, Aerial Direct is O2’s largest direct business partner in the UK with more than 130,000 customers. The company provides IP telephony services and equipment, including mobile, fixed lines, as well as call, broadband, conferencing and hosting telecoms. In its most recent accounts, for FY2018, filed in May last year (PDF), it turned over £21.6m and chalked up earnings before interest, taxes, depreciation and amortization of £6.9m.

The company has set up a support website for customers affected by the breach, suggesting they change their passwords and advise their banks, building societies and credit card companies if they see any dodgy transactions on their statements.

The company did not reply to The Register’s requests for further information on how it locked down that info. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/13/o2_customer_data_slurped_through_partner_databse/

A Lesson in Social Engineering

What kind of school project is this?

Source: Habitu8

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/a-lesson-in-social-engineering/b/d-id/1337309?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Princess Cruises Confirms Data Breach

The cruise liner, forced to shut down operations due to coronavirus, says the incident may have compromised passengers’ personal data.

Carnival-owned Princess Cruises — the cruise line forced to suspend operations after two ships became hot spots for coronavirus — reports that a breach may have compromised passenger data.

A notice published on the Princess website says suspicious activity was identified in late May 2019. Forensics experts were hired to launch an investigation, which found an unauthorized party gained access to some employee accounts between April 11 and July 23, 2019. It’s unclear why Princess waited to post the notice, which is believed to have gone live in early March 2020.

The employee accounts accessed contained personal data regarding Princess employees, crew, and guests. While the type of data compromised “varies by individual,” officials say, it could include name, address, Social Security number, government identification number (passport number or driver’s license number), credit card and bank account information, and health data.

Princess does not have any evidence indicating this personal data has been misused. The matter has been reported to law enforcement and an investigation is still ongoing. In addition, the company is reviewing its security policies and implementing changes to strengthen its security program.

Read the full breach notice here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “CASB 101: Why a Cloud Access Security Broker Matters.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/princess-cruises-confirms-data-breach/d/d-id/1337311?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What Cybersecurity Pros Really Think About Artificial Intelligence

While there’s a ton of unbounded optimism from vendor marketing and consultant types, practitioners are still reserving a lot of judgment.

Image Source: Adobe

The cybersecurity industry has been targeted by technology and business leaders as one of the top advanced use cases for artificial intelligence (AI) and machine learning (ML) in the enterprise today. According to the latest studies, AI technology in cybersecurity is poised to grow over 23% annually through the second half of the decade. That’ll have the cybersecurity AI market growing from $8.8 billion last year to $38.2 billion by 2026.

The question seasoned cybersecurity veterans are asking themselves right now is, “How much does AI really help security postures and security operations?” There’s a ton of unbounded optimism from the vendor marketing and consultant types, but practitioners are still reserving a lot of judgment. As we piece together the surveys of cybersecurity industry perceptions, it becomes clear that a big part of the industry’s evolution in the 2020s will be how it can effectively balance AI and human intelligence. Here’s what the data shows at the moment.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/what-cybersecurity-pros-really-think-about-artificial-intelligence/d/d-id/1337308?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Beyond Burnout: What Is Cybersecurity Doing to Us?

Infosec professionals may feel not only fatigued, but isolated, unwell, and unsafe. And the problem may hurt both them and the businesses they aim to protect.

 

(image by alex.pin, via Adobe Stock)

“Sometimes I feel like I’m just yelling into the chasm.”

“I began to question everything that I believed to be true about myself.”  

“They see stuff that makes them wish they had bleach for their brains.”

You can hear it gurgling through every conversation at a cybersecurity conference, from the expo floor to the press room to the neighborhood bar – that telltale combination of giddy fascination, wry gallows humor, and weary frustration. The field often attracts clever and creative individuals who want to help people. However, over time, curious minds crackling with ideas for how to fix the world’s cybercrime problems may fizzle out.

The industry is beginning now to talk openly about “burnout” – but beyond leaving infosec professionals feeling frustrated and tired, the job can leave some feeling isolated, unwell, and unsafe.

And that’s a problem not just for the professionals in the industry – it’s an issue that reverberates into their families, their world views, and the cybersecurity of the businesses and systems they aim to protect.

Cybersecurity professionals are trying to save everyone. Does someone need to save them?

The Impact: ‘The Only Ones to Feel Any Pain’
Over 400 CISOs and 400 C-suite executives revealed some sobering truths in a survey recently conducted by Vanson Bourne on behalf of Nominet. The “CISO Stress Report” found:

  • 21% of CISOs said they have taken a leave of absence because of job-related stress. Some CISOs took this significant step even though many reported being afraid to take sick days (41%) and neglecting to take all of their allotted time off (35%).  
  • 48% of CISOs said their work stress has impacted their mental health, and 35% said it has impacted their physical health.
  • 40% of CISOs said their work stress has impacted their relationships with their families or children, 32% said it has impacted their relationships with spouses or romantic partners, and 32% said it has impacted their relationships with friends.
  • 23% said they are using medication or alcohol to manage stress.
  • 94% of American CISOs and 95% of UK CISOs reported working more than their contracted hours – on average, 10 hours per week more. In addition, 83% of American C-suite execs and 73% of UK execs confirmed they do, indeed, expect security teams to work longer hours.

Curtis Simpson, now CISO of Armis, says he’s begun to find some balance and even pick up hobbies, but it took him a long time “in the salt mines” before he reached this point.

“I personally spent my daughter’s entire high school graduation ceremony having to quarterback the global response to an attack – an attack that would have been easily prevented if any of the specific guidance we had been sharing with the business was followed,” says Simpson. “None of the guidance was followed, but the security team was, as is common, the only ones to feel any pain.”

Simpson’s experience is not uncommon; 45% of respondents to the Nominet survey stated their work as a CISO had caused them to miss a family milestone or activity.

However, long hours are something that workers in many fields suffer. So what makes infosec people special?

‘Bleach for Their Brain’
Observing the habits of cybercriminals day in and day out can leave its mark – particularly on threat researchers and forensic investigators.

“You do see the darker side of humanity,” says Adam Kujawa, director of Malwarebytes Labs.

He speaks specifically about stalkerware and of ransomware that extorts victims by threatening to dox them with false evidence that they viewed child pornography.

“That kind of stuff just breaks my heart,” he says.

And as Marcus Carey (who has worn many security hats, from Navy cryptographer, to entrepreneur, to his current status as Reliaquest enterprise architect) points out, digital forensics specialists don’t just face the fraudulent threats of child pornography, but the reality of it. Because psychologists have already determined that researchers who investigate child sexual abuse material may have responses similar to post-traumatic stress disorder, and even one individual investigation may deal with terabytes of data, technologists are beginning to search for ways to better automate this process.

In reference to the digital forensic investigators who conduct these cases and many other kinds of cybercrimes, Carey says, “They see stuff that makes them want to bleach their brain.”

‘Always on a Swivel’
“I actually draw several parallels between [the cybersecurity profession] and the homeless population,” says Ryan Louie, MD, PhD, a San Francisco-based board-certified psychiatrist who has worked with the homeless population and specializes in the mental health impacts of entrepreneurship and technology. He presented a session on the subject at the RSA Conference (RSAC) last month.

Louie explains that both infosec pros and homeless individuals are always looking to see who might hurt them. “[The homeless are] out in the open,” he says. “They don’t have the shelter at nighttime. They always have to look out if someone’s going to take their belongings, if anyone’s going to harm them, where are they going to get help.”

It’s a constant, 24/7 effort to address threats and an inability to “turn off,” he says, and he has seen it in both groups of people.

Carey says he’s rather amazed at the accuracy of this comparison. “Wow. You just blew my mind,” he says. “My head is always on a swivel. It drives my wife crazy.”

In a recent poll on Dark Reading’s The Edge, 83.1% of respondents indicated that working in infosec had made them a “less trusting person,” 59% said they were grateful for their increased caution, 4.9% said they wished they were more trusting, and 19.2% said that while they valued their caution, sometimes they wished they were more trusting.

When the need for safety or the fear of being harmed again becomes too great, it can become an illness, Louie says.  

This matter of safety was also discussed by NSA senior researcher Dr. Celeste Paul in her recent RSAC keynote session about the “fundamental needs of security professionals.” She referenced that a century ago, famed physician and educator Maria Montessori laid out the fundamental needs of humans, one of which was safety.

But cybersecurity professionals have a complicated relationship with safety. 

The infosec job is largely to keep people (organizations, systems, individuals) safe. But because so many cyberattacks exploit the end user, infosec pros rarely try to make anyone feel safe – quite the contrary.


 

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/beyond-burnout-what-is-cybersecurity-doing-to-us/b/d-id/1337310?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Big BEC Bust Brings Down Dozens

Two dozen individuals have been named in the latest arrests of alleged participants in a business email compromise scheme that cost victims $30 million.

Federal officials have arrested two dozen individuals on charges related to a series of business email compromise (BEC) fraud and money-laundering schemes. The individuals, most of whom live in or around Atlanta, are alleged to have committed fraud against individuals and companies using BEC schemes, romance fraud scams, and retirement account scams, among others.

According to a statement released by the Justice Department, those arrested this week join 17 individuals already in federal custody as charged in the series of alleged crimes. The department says that those charged collected more than $30 million from their victims, laundering the money through accounts often opened in victims’ names and used to both defraud the victim and launder the criminal proceeds.

More than two dozen local, state, and federal law enforcement agencies participated in the investigation of the defendants.

For more, read here.


Article source: https://www.darkreading.com/attacks-breaches/big-bec-bust-brings-down-dozens/d/d-id/1337314?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Confessions app Whisper spills almost a billion records

Researchers who uncovered a data exposure from mobile app Whisper earlier this week have released more details about the incident.

Whisper is an app from MediaLab, a mobile app company that owns a host of other apps including the popular messaging service Kik. It offers a kind of anonymous social network service that allows people to post their innermost fears and desires, supposedly without risk.

Its users post everything from dark family secrets to stories of infidelity. It gathers these up and uses them for articles on its website, including “Naughty Nannies Confess To Sleeping With The Fathers They Work For”, “Alcoholism Runs In My Family”, and “I Married The Wrong Person”.

The problem, according to researcher Dan Ehrlich of cybersecurity consultancy Twelve Security, is that Whisper didn’t steward that data very well. He says that he and his colleague Matthew Porter accessed 900m records in a 5 TB database spanning 75 different servers, logged between the app’s release in 2012 and the present day. The data was stored in plain text on ElasticSearch servers and included 90 metadata points per account.

The Washington Post broke the story about the app on Monday 10 March, having worked with the researchers.

The records didn’t include real names, but did divulge users’ stated age, gender, ethnicity, home town, and nickname, the story said, adding that the data also revealed membership of groups devoted to intimate confessions.

Ehrlich has since followed this up with the first two posts of a planned five-part blog series going into more depth, dropping more details about the alleged exposure. He said:

… one has the geocoordinates of nearly every place they’ve visited, and the ability to log into their account with their password/credentials. Depending on when the account was created and how much the user engaged with the app, dozens and dozens of fields of metadata can be reviewed.

These amounted to 90 data points including some bizarre ones, according to Ehrlich’s posts, such as predator_probability and banned_from_high_schools. He added:

Sexual fetish groups, suicide groups, and hate group membership of users can all be seen. Whether or not a user is a predator, if they are banned from posting near high schools, and their private messages can all be viewed.

Worst of all, perhaps, is the disclosure of the exact coordinates of a user’s most recent post. This not only affects children posting highly sensitive information from schools but also service members on military bases and in US embassies around the world, the researchers warned.

A MediaLab spokesperson responded:

[…] no personally identifiable data was exposed as Whisper does not collect any PII such as names, phone numbers or email addresses. The referenced data is all accessible to users from public API’s [sic] exposed within the app. The data is a consumer-facing feature of the application which users can choose to share or not share depending on which features of the application they wish to utilize.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mkTNVHP-K2g/

Homeland Security sued over secretive use of face recognition

The American Civil Liberties Union (ACLU) is suing the Department of Homeland Security (DHS) over its failure to cough up details about its use of facial recognition at airports.

Along with the New York Civil Liberties Union, the powerful civil rights group filed the suit in New York on Thursday. Besides the DHS, the suit was also filed against US Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and the Transportation Security Administration (TSA).

The ACLU says that the lawsuit challenges the secrecy that shrouds federal law enforcement’s use of face recognition surveillance technology.

Ashley Gorski, staff attorney with the ACLU’s National Security Project, said in a release that pervasive use of face surveillance “can enable persistent government surveillance on a massive scale.”

The public has a right to know when, where, and how the government is using face recognition, and what safeguards, if any, are in place to protect our rights. This unregulated surveillance technology threatens to fundamentally alter our free society and is in urgent need of democratic oversight.

The ACLU had filed Freedom of Information Act (FOIA) requests to find out how the agencies are using the surveillance technologies at airports – requests that the agencies ignored.

In its suit, the ACLU demands that the agencies turn over records concerning:

  • Plans for further implementation of face surveillance at airports;
  • Government contracts with airlines, airports, and other entities pertaining to the use of face recognition at the airport and other ports of entry;
  • Policies and procedures concerning the acquisition, processing, and retention of our biometric information; and
  • Analyses of the effectiveness of facial recognition technology.

As the ACLU’s complaint tells it, in 2017, CBP began a program called the Traveler Verification Service (TVS) that involves photographing travelers during entry or exit from the country.

The program involves the use of facial recognition technology to compare the photographs with faceprints that the government already has – a huge collection of biometrics that just keeps getting bigger. In June 2019, the Government Accountability Office (GAO) said that the FBI’s facial recognition office can now search databases containing more than 641 million photos, including 21 state databases (a number that’s ballooned from the 412 million images the FBI’s Face Services unit had access to at the time of a GAO report from three years prior).

CBP’s piece of that burgeoning pie: as of June 2019, the agency had processed more than 20 million travelers using facial recognition, the ACLU says.

Major airlines and airports have partnered with CBP on TVS. As of August 2019, 26 airlines and airports had committed to employing CBP’s face-matching technology, and several airlines have already incorporated it into boarding procedures for outbound international flights.

It’s being done behind closed doors, the ACLU says. The public knows little about the nature of these partnerships, nor about the policies and privacy safeguards governing the processing, retention, and dissemination of data collected or generated through TVS.

It’s certainly not the first time that CBP has kept details about its images to itself. In June 2019, hackers managed to steal photos of travelers and license plates from a CBP database. In violation of CBP policies, the database had been copied by a subcontractor to its own network. Then, the subcontractor’s network had been hacked.

Initial reports indicated that the breach involved images of fewer than 100,000 people in vehicles coming and going through a few specific lanes at a single port of entry into the US over the previous one-and-a-half months.

While the image data wasn’t immediately put up for sale on the Dark Web, the breach showed that this type of data is of interest to hackers… and that government agencies are capable of losing control of it.

Separately, the TSA has outlined a plan to implement face surveillance for both international and domestic travelers, the ACLU’s lawsuit says. The complaint points to a document published by the TSA entitled “TSA Biometrics Roadmap” that describes how the TSA intends to partner with CBP on face recognition for international travelers; apply face recognition to TSA PreCheck travelers; and ultimately expand face recognition to domestic travelers more broadly.

Congress has authorized the DHS to collect biometrics from certain categories of noncitizens at border crossings. It hasn’t expressly given the go-ahead to collect faceprints from citizens, though. Citizens were given the right to opt out of the facial scans after the DHS faced intense backlash over a proposed regulation change that would have allowed the technology to be used on all people coming in or leaving the US.

Despite the backlash, however, the DHS hasn’t given up on those plans, the ACLU says.

Running tally of pushback

Opposition to the government’s pervasive use of the technology continues to strengthen. As of Thursday, Washington’s state Senate and House were still debating a bill to rein in facial recognition.

In October, the state of California outlawed facial recognition in police bodycams. Some of its biggest cities have gone further still in restricting the controversial technology, including San Francisco, Berkeley, and Oakland.

Outside of California, government use of facial recognition has also been banned in three Massachusetts municipalities: Somerville, Northampton and Brookline. New York City tenants also successfully stopped their landlord’s efforts to install facial recognition technology to open the front door to their buildings.

The ACLU’s Gorski said that when it comes to finding out how this technology is being used, it shouldn’t have to come to lawsuits:

That we even need to go to court to pry out this information further demonstrates why we should be wary of weak industry proposals and why lawmakers urgently need to halt law and immigration enforcement use of this technology. There can be no meaningful oversight or accountability with such excessive, undemocratic secrecy.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FBDwZ3c7DNg/