STE WILLIAMS

Supermicro: We told you the tampering claims were false

Computer manufacturer Supermicro is still trying to lay to rest reports that the Chinese government tampered with its equipment to spy on Western cloud users. The San Jose-based company published a letter this week claiming that independent tests had cleared its equipment of any compromise.

Supermicro sells data centre computers to Western customers using components made by contractors in China. It has spent the last two months denying that Chinese subcontractors secretly embedded microscopic chips on its motherboards, enabling attackers to remotely control the computers’ operating systems and watch what they’re doing.

In the letter, posted on the company’s website, president and CEO Charles Liang, along with two senior vice presidents, said that the company had completed an independent audit looking for malicious hardware on its motherboards. The audit found nothing, they said:

Because the security and integrity of our products is our highest priority, we undertook a thorough investigation with the assistance of a leading, third-party investigations firm. A representative sample of our motherboards was tested, including the specific type of motherboard depicted in the article and motherboards purchased by companies referenced in the article, as well as more recently manufactured motherboards.

This latest missive follows a letter to customers issued on 18 October 2018 that condemned a story published by Bloomberg on 4 October 2018. The story claimed that the Chinese government had coerced contractors to implant tiny monitoring devices on motherboards sold to Supermicro.

Apple and Amazon, which Bloomberg said knew about the compromised motherboards, both denied the tampering claims along with the manufacturer shortly after the story was published. Bloomberg didn’t back down, though: in a follow-up story on 9 October 2018, it claimed that a security expert, Yossi Appleboum, had discovered embedded monitoring devices in the ethernet connectors on Supermicro motherboards sold to a major US telco. However, in neither story did it publish hard evidence, such as photos or analysis data, to support its claims.

Mind you, Supermicro didn’t publish the evidence in this latest report either, which Reuters says was conducted by investigations and cybersecurity forensics firm Nardello & Co. Supermicro said:

Today, we want to share with you the results of this testing: After thorough examination and a range of functional tests, the investigations firm found absolutely no evidence of malicious hardware on our motherboards.

Why would Supermicro keep flogging this horse rather than letting the story silently die? Its share price might have something to do with it. It dropped from $21.40 to $12.60 on the day that the Bloomberg story broke, and has only just broken $16. The question is whether this new report will do anything to boost its fortunes or whether it will spark controversy all over again.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QZoA5b81U0Q/

Border agents are copying travelers’ data, leaving it on USB drives

Are you one of the travelers to the US who’ve been stopped, questioned, and required to hand over your electronic devices for search?

Our apologies: there’s a good chance that we still have your data kicking around on a USB drive. Somewhere. Maybe. Unless we lost it, I guess.

The Office of Inspector General issued a report, published on the Department of Homeland Security (DHS) website earlier this week, that details how well US Customs and Border Protection (CBP) agents have been following standard operating procedures for searching travelers’ electronic devices, as authorized by the Trade Facilitation and Trade Enforcement Act of 2015 (TFTEA).

The findings: not so great.

CBP agents are allowed to carry out warrantless device searches at all 328 ports of entry in the US: they’re allowed to manually – i.e., visually – inspect travelers’ phones, laptops, tablets, thumb drives or other electronic devices as they look for content related to terrorism, child abuse imagery, or other material that smells of the criminal.

Beyond that, in a pilot program launched in 2007 at 67 ports, they’re also allowed to copy device data onto a USB thumb drive and upload it to a platform called the Automated Targeting System (ATS) so they can carry out more complex searches on travelers’ data.

The OIG found quite a few standard operating procedure (SOP) SNAFUs besides the unwiped drives, and they’re laying the blame on the CBP’s Office of Field Operations (OFO), which handles training manuals and conducts training sessions.

That’s the internet you’re searching, not my phone

One SOP that’s unfortunately not all that standard: agents aren’t always turning off the internet access of the devices they search. That’s a no-no, since search is supposed to be restricted to only the data that’s physically on the device, not information stored on a remote server who knows where.

In fact, even after an April 2017 memo was issued that required documentation of network connections having been disabled prior to a search, the OIG found that more than one-third of searches – 14 out of 40 – had no documentation of internet access having been turned off, leaving the results of those searches “questionable.”

And that’s when the CBP could manage to carry out any searches at all: a situation that came to a grinding halt when…

Oops – I forgot to renew the license for the search software!

According to the report, it’s up to the OFO to manage the equipment used to search electronic devices. Well, somebody dropped the ball on that one: some manager somewhere forgot to renew the annual software license for the search equipment on time.

An OFO official blamed the budget: there’s no dedicated funding for advanced searches of electronic devices, he said, because the effort is still only a pilot program.

That left a gap of about 7.5 months – between 31 January and 13 September 2017 – when agents couldn’t conduct any software-assisted searches. In the SNAFU soup of lacking earmarked funds, budgetary issues and getting elbowed aside by other funding priorities, the initial estimate to purchase the equipment expired, and the vendor had to scrape up a new estimate… which evidently took quite a while.

Leftovers on the thumb drives

Searches and seizures of travelers’ devices aren’t being properly documented, meaning that devices could be lost or misplaced, the OIG found. But besides not taking care to document device seizures and to keep track of where the devices are, the CBP is having problems properly using its own USB drives.

This should sound familiar: Some of us have a USB drive kicking around the office. We use it to copy stuff and move it around. Some of us are not diligent about deleting the files from that thumb drive after we’ve plunked them where we want them to go.

Those somebodies are not supposed to be border agents who copy material from travelers’ laptops, tablets, USB drives, phones and multimedia cards for inspection purposes, but that’s exactly what they’re doing: leaving people’s content on thumb drives. Agents are supposed to use a thumb drive to copy material and transfer it to the ATS for search purposes, and then they’re supposed to delete the material – immediately.

Ain’t happening. The OIG inspected drives at five ports of entry. At three of them, the OIG found material copied from past searches – in other words, nobody wiped the drive.

That leaves travelers’ data susceptible to being disclosed should the drives be lost or stolen, the OIG said.

The upshot being… well, who knows?

While many travelers have been affronted by the CBP’s device searches, it’s worth noting, as the OIG does, that the program has led to at least a few success stories – in other words, dangerous or criminal individuals have been stopped from entering the country. One example: in March 2018, agents found images and videos of terrorist-related materials. In another incident, they found “graphic and violent” videos, including images of child abuse. Both travelers were denied entry.

But then there are the innocent travelers whose data is taken and who have found it impossible to get it deleted. In August, for example, an American Muslim woman sued the CBP for seizing her iPhone at an airport, keeping it for 130 days, failing to explain why, and refusing to destroy whatever copies of her data that they might have grabbed, including photos of her when she wasn’t wearing a hijab, which she wears as an expression of her Islamic faith.

It wasn’t her physical phone that she wanted back. She got that back after 130 days.

Rather, she wanted assurances that copies of her data were deleted. She wanted the CBP to wipe out copies that were taken without the CBP explaining the reason for seizure and without her being charged with a crime.

How can anybody determine whether the CBP has deleted data? Unfortunately, even the CBP can’t tell how effective its electronics search is. While the pilot program is producing quantitative data, it’s impossible to tell if the searches are worth the trouble. That’s because the OFO hasn’t come up with any performance metrics.

Have these searches led to prosecutions? Convictions? Nobody can say. The OFO doesn’t track that information.

From the OIG report:

These deficiencies in supervision, guidance, and equipment management, combined with a lack of performance measures, limit OFO’s ability to detect and deter illegal activities related to terrorism; national security; human, drug, and bulk cash smuggling; and child pornography.

The OIG has come up with a list of recommendations for the OFO, including proper documentation of searches; disabling of data connections when searching devices; expeditious software license renewal; immediate deletion of travelers’ data from thumb drives; and creating and implementing program performance metrics.

The OFO has agreed to implement the OIG’s recommendations.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-KJpWMVeUJ8/

‘Exclusive swag’ up for grabs as GitLab flings bug bounty scheme open to world+dog

DevOps outfit GitLab has opened its bug bounty scheme to world+dog, having paid out $200,000 last year and fixed “nearly 200 vulnerabilities reported to us”.

“In managing a public bug bounty program, we will now be able to reward our hacker community for reporting security vulnerabilities to us directly through the program,” said security director Kathy Wong in a blog post.


Through its HackerOne page, GitLab promised to pay out up to $12,000 for critical bugs responsibly disclosed to it. It also pledged to respond to submitted reports “within 5 business days”.

Back in 2014, GitLab first ran a public vuln disclosure programme, according to an online Q&A with Wong. While that did not offer bug bounties, the code repo site did start coughing up in December 2017 to selected partners.

As for why GitLab is taking the bug bounty program public, Wong said it was all down to “open source contribution values”.

“We currently make the details of security vulnerabilities public 30 days after the mitigations have been released,” she said, which compares rather well with some firms who take months to mention anything publicly – if at all.

GitLab will also be killing off support for TLS 1.0 and 1.1 in a couple of weeks’ time, and bounty-hunting hackers can look forward to receiving “exclusive HackerOne-only GitLab swag” as well as reasonably sized cheques in return for disclosing vulns.
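For client code that talks to GitLab – or any site dropping the old protocols – it is straightforward to refuse TLS 1.0 and 1.1 up front, so a downgraded handshake fails fast rather than silently succeeding. A minimal sketch using Python’s standard-library `ssl` module; this is a generic client-side setting, not GitLab’s own configuration:

```python
import ssl

# Build a client context with sane defaults (certificate verification on),
# then pin the floor to TLS 1.2 so the retiring 1.0/1.1 versions are refused.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
```

Any socket wrapped with this context (for example via `ctx.wrap_socket(...)`) will now abort the handshake if the server offers nothing newer than TLS 1.1.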

GitLab was last in the news for accidentally splitting its brains in half, as well as shifting its main site onto Google Cloud after Microsoft bought out rival site GitHub. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/13/gitlab_bug_bounty_program_public/

UK white hats blacklisted by Cisco Talos after smart security code stumbles

UK security training company Hacker House briefly had its site blocked after being mistaken for malware by the smart “threat intelligence” software of Talos, Cisco’s security wing.

Hacker House runs training classes on ethical hacking and defense techniques, as well as its own business security services in areas like penetration testing or network analysis. But on Wednesday morning things started to go awry.

The company’s training programs include things like security sandboxes and hands-on exercises with code samples. This, apparently, triggered the Talos service to label the site as malicious and block it for customers.

Hacker House co-founder Matthew ‘Hacker Fantastic’ Hickey told The Register the issue began when some of his customers reported being unable to access his site.

“They categorised our website as malware, or rather their machine learning did, and blocked access to our website, we only found out when our customers complained they couldn’t reach our site due to it being labelled as malware,” Hickey explained.

“Obviously that can harm our business and everything that we try to do.”

Fortunately, word of the block made its way to Talos within a few hours and the Cisco-owned security outfit was able to resolve the matter.

A Cisco spokesperson later confirmed this and said Hacker House would not be charged for any service related to lifting the block.

“Cisco Talos tracks 1.5 billion instances of malware daily, and helps stop more than 7.2 trillion attacks each year. Occasionally, there is a false positive reading, which can be addressed by submitting a ticket. There is no charge for submission,” the spokesperson told El Reg.

“This matter has been resolved and no fee was charged as is consistent with Cisco Talos’ policy.”

While this case had a happy ending, the story does point to a potential problem on the horizon as security training, machine learning, and antimalware services all see their usage skyrocketing.

Without a close eye being kept by all parties, legitimate research and training tools can inadvertently get swept up by automated detection and users can end up being blocked from legitimate sites and services that could keep them safe. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/13/talos_hacker_house/

Taylor’s gonna spy, spy, spy, spy, spy… fans can’t shake cam off, shake cam off

Spotify’s one-time nemesis Taylor Swift has reportedly used controversial facial recognition tech on fans while they’ve been getting down to her sick beats.

According to Rolling Stone, the Rose Bowl venue in California rolled out the tech at her 18 May concert, in a bid to spot anyone on Tay Tay’s long list of stalkers.

In what appears to be a nightmare dressed like a daydream, concertgoers’ images were snapped up when they watched a display screening Swift’s rehearsal clips, because there wasn’t just a blank space behind – rather, a camera was hidden inside.

“Everybody who went by would stop and stare at it, and the software would start working,” Mike Downing, chief security officer of an advisory board for concert venues, who saw a demo of the tech, is quoted as saying.

The images were then sent to a “command centre” and cross-referenced against potential stalkers. There is no detail of which company makes the kiosks, where the images are stored or how long they are kept for.

The move could create bad blood with Swifties who would prefer not to be covertly filmed, but the venue isn’t the first to use face-scanning technology.

In August, it was revealed that the Tokyo 2020 Olympics will roll out automated systems from Japanese biz NEC to speed up security checks for staffers and athletes.

And in the UK, the police have been using the tech at major sporting events and demonstrations in the hope of spotting known troublemakers, while the Department of Homeland Security was recently reported to be testing it to track people walking in and around the White House.

The use of the technology is controversial, due in part to a lack of evidence that it works all that well, and the fact that there is little in the way of regulation for the technology in the countries using it.

Meanwhile, as privacy becomes an increasingly mainstream debate, critics are taking the opportunity to emphasise how invasive the widespread use of such technology could be for large groups of people.

But perhaps Swifty would tell those with privacy fears to shake it off. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/13/taylor_swift_facial_recognition/

It is with a heavy heart that we must inform you hackers are targeting ‘nuclear, defense, energy, financial’ biz

Hackers are targeting critical infrastructure providers, including nuclear power and defense agencies, in what may be a state-sponsored attack that’s hiding behind North Korean code.

Discovered by McAfee and dubbed “Sharpshooter”, the operation has been running since November, largely focusing on US-based or English-speaking companies and agencies around the world with an emphasis on nuclear, defense, energy, and financial businesses.

It appears that, for now, the hacking operation is focused mostly on reconnaissance and harvesting sensitive information from the infected machines. McAfee did not note any behavior related to damaging or sabotaging infrastructure.

As with most well-organized cyber-raids, the Sharpshooter operation goes after key members of the targeted companies with phishing emails that pretend to be from a job recruiting agency seeking English-speaking applicants, we were told today.

The emails contain poisoned Word documents (researchers note that the version of Word used to craft them was Korean-localized) that then install the first piece of malware: an in-memory module that dials up a control server.

Once connected to the control server, the infected PC then downloads and executes a secondary malware payload known as Rising Sun. The Rising Sun malware does most of the heavy lifting in the campaign, monitoring network activity as well as collecting information on the infected machine that is encrypted and sent back to the control servers.

McAfee noted that the attack, particularly the malware payload used, borrows heavily from source code associated with the Lazarus Group, a North Korean hacking operation blamed for attacks on both infrastructure and financial agencies.


That doesn’t, however, mean that the group is behind the operation. In fact, McAfee says it strongly suspects the connections to be a red herring.

“Operation Sharpshooter’s numerous technical links to the Lazarus Group seem too obvious to immediately draw the conclusion that they are responsible for the attacks, and instead indicate a potential for false flags,” McAfee explained.

It would not be unheard of for another group or government to be borrowing source code from Lazarus. Earlier this year researchers showed how the US government’s own attack tools had been torn down, repackaged, and sent back into the wild against new targets.

Because of this, McAfee says that, for now, it will hold off on any speculation as to who might be behind the attack. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/12/nuclear_defense_security_threat/

Samsung fixes flaws that could have let attackers hijack your account

A recently patched trio of flaws in Samsung’s mobile site was leaving users vulnerable to attackers who could have reset their user passwords and hijacked their accounts, The Register reports.

The flaws were found by security researcher Artem Moskowsky, who said that they were all cross-site request forgery (CSRF, also written XSRF) bugs.

Moskowsky said that the problem was with the way that the Samsung.com account page handled password-reset security questions.

What should have been happening: the Samsung.com web app would check the “referer” header (yes, that’s the way it’s spelled) to verify that data requests were coming from sites that were legitimately supposed to have access.

What glitched: the checks weren’t working properly. Any site could have gotten the security question answers, enabling an attacker to access user profiles, change information such as usernames, or even disable two-factor authentication (2FA), change passwords and thereby steal accounts.

The Register reports that in one proof of concept, Moskowsky showed how an attacker could exploit the CSRF flaw to change security questions – and answers – to whatever they want. From there, it would have been an easy hop to reset the password and take over a Samsung account.
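In outline, the referer check that apparently failed here looks something like the following. This is a minimal, hypothetical sketch – the host names and the handler shape are illustrative, not Samsung’s actual code:

```python
from urllib.parse import urlparse

# Hosts allowed to originate state-changing requests.
# Illustrative names only, not Samsung's real configuration.
TRUSTED_HOSTS = {"account.example.com", "www.example.com"}

def is_trusted_referer(headers: dict) -> bool:
    """Reject a state-changing request unless its Referer header names a
    trusted host. A missing header is rejected too, since an attacker's
    page can often suppress the header entirely."""
    referer = headers.get("Referer")
    if not referer:
        return False
    host = urlparse(referer).hostname
    return host in TRUSTED_HOSTS

# A request forged from an attacker's page carries the wrong Referer:
print(is_trusted_referer({"Referer": "https://evil.example.net/csrf.html"}))   # False
print(is_trusted_referer({"Referer": "https://account.example.com/profile"}))  # True
```

Even implemented correctly, referer checking is a weak defense on its own, since the header can be absent or stripped in legitimate traffic; per-session anti-CSRF tokens embedded in forms are the more robust standard defense.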

Moskowsky:

Due to the vulnerabilities, it was possible to hack any account on account.samsung.com if the user goes to my page. The hacker could get access to all the Samsung user services, private user information, to the cloud.

When reporting what he originally thought were two CSRF flaws to Samsung – via that same Samsung.com site – Moskowsky came across a third bug that could have let him forcibly change security questions and answers.

I first discovered two vulnerabilities. But then when I logged in to security.samsungmobile.com to check my report, I was redirected to the personal information editing page.

This page didn’t look like a similar page on account.samsung.com. There was an additional ‘secret question’ field on it.

Samsung hadn’t yet responded to a request for comment from The Register as of Tuesday evening. It reportedly paid Moskowsky a total of $13,300 for the three vulnerabilities, which were rated medium, high, and critical.

He also picked up $20,000 last month for finding a big (now patched!) hole in Steam that gave him every game’s license keys.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wozvCaKbQ5E/

Britain approved £2.5m of snooping kit exports to thoroughly snuggly regime in Saudi Arabia

British ministers have approved the export of more than £2.4m worth of telecoms snooping gear to Saudi Arabia, in spite of its very obvious human rights problems, according to a report.

Five licences were granted to send “telecommunications interception equipment” to the controversial kingdom, which was most recently in the news over the murder of dissident journalist Jamal Khashoggi inside its consulate in Istanbul.

The deal was brought to light by Freedom of Information requests from political news website Politics Home.

This year, the site reported, ministers from the Department for International Trade signed off three permanent contracts worth £2.4m for the export of interception kit. International Trade Secretary Liam Fox is a former Defence Secretary, while Saudi Arabia remains one of the top destinations for British-made military kit such as fighter jets. It appears that the list also includes British-designed equipment whose primary purpose is internal surveillance.

Politics Home also reported that “previous UK exports of spy tech have included controversial IMSI Catchers”, which are used to precisely identify who is in a given location by exploiting mobile phone telecoms specs to make handsets give up their unique IMSI numbers to a fake base station operated by state agents.

Such equipment has obvious uses in a tightly-controlled country ruled by theocrats.

A DIT spokesman told Politics Home: “Risks around human rights abuses are a key part of our licensing assessment and the government will not license the export of items where to do so would be inconsistent with any provision of the Consolidated EU and National Arms Export Licensing Criteria. All export license applications are considered on a case-by-case basis against the Consolidated Criteria, based on the most up-to-date information and analysis available, including reports from Non-Government Organisations and our overseas networks.”

Edin Omanovic, head of Privacy International’s state surveillance programme, told the site that some countries “have a track record of targeting commercially-available surveillance technology against activists and journalists. Such exports should absolutely not have been approved. By empowering authoritarian agencies, it not only undermines people’s human rights, but the work of activists, journalists, and opposition groups which is key for promoting democratisation and the UK’s own long-term security interests.”

Three years ago, Saudi Arabia came close to buying a controlling stake in infamous Italian offensive hacking tech company Hacking Team. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/12/britain_2_5m_spytech_exports_saudi_arabia/

Bulk surveillance is always bad, say human rights orgs appealing against top Euro court

A band of human rights organisations have appealed against a top European court’s ruling on bulk surveillance, arguing that any form of mass spying breaches rights to privacy and free expression.

The group, which includes Liberty, Privacy International and the American Civil Liberties Union, has taken issue with parts of a September judgment from the European Court of Human Rights.

That ruling said oversight of the UK government’s historic regime for bulk interception of communication was insufficient and violated privacy rights under the European convention.

However, it did not say that bulk interception was unlawful in and of itself; neither did it rule that sharing information with foreign governments breached the rules.

It is these elements of the ruling that the groups disagree with, arguing that bulk surveillance can never be lawful, and that sharing intelligence with other governments is just another form of bulk surveillance and also unlawful.

They argue that any use of such intrusive powers should be lawful, targeted and proportionate – and that bulk powers can never meet these bars.

The original case was launched after former NSA sysadmin Edward Snowden’s 2013 revelations that GCHQ was secretly intercepting communications traffic via fibre-optic undersea cables.

It was the first time the European Court of Human Rights had considered UK regimes – although it only looked at procedures governing bulk cable-tapping that have since been replaced – and the first time it looked at intelligence-sharing programmes.

The court considered three aspects of the UK’s spying laws, and the first two were found to have breached the European Convention on Human Rights: the regime for bulk interception of communications (under section 8(4) of the Regulation of Investigatory Powers Act 2000); the system for collecting communications data (under Chapter II of RIPA); and the intelligence-sharing programme.

It ruled that the system governing the bulk interception of communications was “incapable” of keeping interference to what is “necessary in a democratic society”.

Broadly, this was due to a lack of oversight of the selection process at various stages of the surveillance, and a lack of safeguards applied to the selection process for which related communications data to probe.

Liberty has also tackled the current surveillance regime introduced in the Investigatory Powers Act. It won the first challenge in April and was last month given the go-ahead by the High Court to launch a full legal challenge of the regime. This will be heard next year. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/12/human_rights_groups_appeal_echr_decision/

“Trusted” apps are selling your phone’s location data

A New York Times investigation has found that apps such as GasBuddy and The Weather Channel are among at least 75 companies getting purportedly “anonymous” but pinpoint-precise location data from about 200 million smartphones across the US.

They’re often sharing it or selling it to advertisers, retailers or even hedge funds seeking valuable insights into consumer behavior. One example: Tell All Digital, a Long Island advertising firm, buys location data and uses it to run ad campaigns for personal injury lawyers, aimed at people who wind up in emergency rooms.

The Times reviewed a database holding location data gathered in 2017 and held by one company, finding that it held “startling detail” about people’s travels, accurate to within a few yards and in some cases updated more than 14,000 times a day. Several of the businesses whose practices were analyzed by the Times claim to track up to 200 million mobile devices in the US.

The data being sold is supposedly anonymous – as in, not tied to a phone number. But the Times could still easily figure out who mobile device owners were from their daily routines, including where they live, where they work, and what businesses they frequent.

The data can reveal intimate details. For example, the Times used the database to track a 46-year-old math teacher, starting from leaving her home, traveling to her school 14 miles away, attending a Weight Watchers meeting after work, visiting her dermatologist’s office for a procedure, hiking with her dog, and staying over at her boyfriend’s house – information sold without her knowledge that she found highly disturbing after giving the newspaper the go-ahead to review her location data:

It’s the thought of people finding out those intimate details that you don’t want people to know.

The teacher’s location was recorded over 8,600 times – on average, once every 21 minutes. Sometimes, the frequency went up to once every two seconds.
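The re-identification trick is simple: a device’s most frequent overnight location is usually its owner’s home, and its most frequent working-hours location is usually a workplace. A toy sketch with invented coordinates – nothing here comes from the Times’ actual data:

```python
from collections import Counter
from datetime import datetime

# Invented trace of (timestamp, rounded latitude, rounded longitude) pings.
pings = [
    ("2017-03-06 01:10", 40.71, -73.52),  # overnight pings cluster at home
    ("2017-03-06 02:40", 40.71, -73.52),
    ("2017-03-06 10:05", 40.76, -73.41),  # daytime pings cluster at work
    ("2017-03-06 14:30", 40.76, -73.41),
    ("2017-03-07 01:55", 40.71, -73.52),
    ("2017-03-07 11:15", 40.76, -73.41),
]

def top_spot(pings, hour_range):
    """Most frequent coordinate among pings whose hour falls in hour_range."""
    lo, hi = hour_range
    spots = Counter(
        (lat, lon)
        for ts, lat, lon in pings
        if lo <= datetime.strptime(ts, "%Y-%m-%d %H:%M").hour < hi
    )
    return spots.most_common(1)[0][0]

home = top_spot(pings, (0, 6))   # where the device sleeps
work = top_spot(pings, (9, 17))  # where it spends working hours
print(home, work)  # (40.71, -73.52) (40.76, -73.41)
```

Once home and workplace fall out of the trace, cross-referencing against public records (property registers, staff directories) typically narrows “anonymous device” down to a single person.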

Many companies claim that the data is up for grabs once users enable location services. But the Times found that when apps prompt users for permission, the explanations they were given were often incomplete or misleading. And whatever fuzzy, vague disclosure there is, it’s often buried in a hard-to-read privacy policy.

For example, an app might tell a user that enabling location data will get them the latest weather or traffic updates, but not mention that the data will be shared and sold – a fact tucked away in the privacy policy.

In the US, legislators such as Senator Ron Wyden have proposed bills to limit the collection and sale of this type of data, as well as to punish company execs when it’s mishandled.

The way location data is being treated by profiteers is a classic example of the type of privacy invasion that should be regulated, Wyden told the Times:

Location information can reveal some of the most intimate details of a person’s life – whether you’ve visited a psychiatrist, whether you went to an A.A. meeting, who you might date. It’s not right to have consumers kept in the dark about how their data is sold and shared and then leave them unable to do anything about it.

Good luck legislating this away, Senator Wyden: the market for location-targeted advertising is on track to hit an estimated $21 billion this year.

Let’s not hold our breath for the US to turn into the European Union anytime soon when it comes to giving us control over our data, but there are still things we can do to limit this pervasive spying.

How to keep apps from tracking your location

There’s no definitive list of the hundreds of apps that are constantly dogging your heels and profiting off of your location data. Besides the apps the New York Times picked up on in testing, there are an untold number of apps flying under the radar: they could well be gathering and saving your data and not selling it straight away, meaning that such apps wouldn’t have shown up in the Times’ tests. (Speaking of which, here’s their testing methodology.)

Your best bet, the Times says, is to find out which apps have permission to get your location in the first place.

The newspaper has compiled thorough instructions on how to stop apps from tracking you on iOS and Android, be it app-by-app or by toggling tracking off on the phone itself, as well as how to delete what those mobile operating systems already have on you.

Scraping yourself out of their databases

While we can turn off location sharing and delete web activity histories, that still leaves the data that the apps have already collected about us and tucked away in their money-maker databases.

However, a lack of transparency or regulation in the US makes it crazy tough to get access to, or to delete, the data from companies’ databases (or from the databases of whoever they sold it to or shared it with). You might recall that in August, after the General Data Protection Regulation (GDPR) came into full force, a researcher banged on the door at Facebook’s data warehouse to get all the data it had on him (which the GDPR had granted citizens as their right).

Sorry, Facebook said: it’s too tough to find your information in our ginormous data warehouse.

Same story with the other tracking apps, the Times writes:

Most of them store location data attached not to a person’s name or phone number, but to an ID number, so it may be cumbersome for them to identify your data if you want to delete it – and they are under no obligation to do so.

Unless they’re in the EU, that is, where – thanks to the GDPR – people now have the legal right to request a copy of the data that companies hold about them, and to ask that it be deleted.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GQALsGJSfhc/