
Kubernetes Shows Built-in Weakness

A ShmooCon presentation points out several weaknesses built into Kubernetes configurations and shows how a researcher can exploit them.

Containers — single processes virtualized in isolated environments — are becoming important parts of the IT infrastructure at many companies, especially those embracing DevOps or continuous deployment methodologies. And Kubernetes, an open source system for automating container deployment and management, is being embraced by a growing number of companies that use containers. So naturally, testing and improving Kubernetes’ security has become an important topic for security professionals.

At the recent ShmooCon in Washington, DC, Mark Manning, technical director of NCC Group, gave a presentation on Kubernetes for penetration testers, examining the system’s vulnerabilities and the tools available for testing them. Explaining why he thinks the topic is important, he says the reasons come down to numbers.

“Eighty-five percent of customers are definitely deploying containers or they plan to; that’s the most common thing I see,” Manning says. “I think it’s pretty safe to say that with the Fortune 500, half of them are currently using containers in production.”

Built-in Dangers
In his presentation, Manning began his look at vulnerabilities with those arising through Kubernetes’ configuration — something he says is the root of many dangers.

“I think I’d go so far as saying that by default, it is actually an unsafe platform,” Manning says. “And if you took just the code from Kubernetes and deployed it into your own environment, the defaults that are provided create a lot of security vulnerabilities in itself.” That risk, he says, is why so many companies have abandoned the idea of deploying Kubernetes on their own servers and now use cloud service providers for their Kubernetes platforms.

One of the issues companies face with configurations, Manning says, is that there are so many ways to do it badly. And that wide variety of possible problems informs the way that Manning approaches security testing in Kubernetes deployments.

Divide and Test
“The way that we do assessments is we will look at the application separately from the infrastructure,” Manning says. As part of that process, he says, the application testing and evaluation is the same as that performed on any platform. The source code is evaluated for vulnerabilities and bugs, all of which are reported to the client for remediation. Then comes the Kubernetes evaluation.

In his presentation, Manning described a process of setting up “attack pods” (a pod is a collection of containers) using standard Kubernetes tools such as kubectl (the command-line interface for Kubernetes) and cURL (a command-line tool for talking directly to the API-only components of Kubernetes). His team will also use standard security assessment tools like Metasploit, aimed at a Kubernetes target. The point of all this creation and “curling” of the components is that a successful attack could allow someone to gain access to multiple containers within the pod, violating the separation that’s supposed to protect every application.
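To make the idea concrete, here is a minimal sketch (in Python, standing in for the cURL step) of the kind of probe an attack pod might run from inside a cluster. It assumes the pod has the default service account token auto-mounted, which is Kubernetes’ out-of-the-box behaviour; the API paths are standard Kubernetes routes, but this is an illustration, not NCC Group’s actual tooling.

```python
# Sketch: from inside an "attack pod," test whether the default service
# account token grants more API access than the workload should have.
# Assumes the token is auto-mounted at the standard path (the Kubernetes
# default unless automountServiceAccountToken is set to false).
import requests  # third-party: pip install requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
API = "https://kubernetes.default.svc"  # in-cluster API server address

with open(TOKEN_PATH) as f:
    headers = {"Authorization": f"Bearer {f.read()}"}

# Resources a locked-down workload should NOT be able to list cluster-wide.
for path in ("/api/v1/pods", "/api/v1/secrets", "/api/v1/nodes"):
    resp = requests.get(API + path, headers=headers, verify=CA_PATH)
    print(f"{path}: HTTP {resp.status_code}")  # 200 suggests over-broad RBAC
```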

“We start inside a container and we do what we call ‘breakout assessments’ that will validate whether or not the container technology is effective at isolating an attacker to only be able to affect this one application,” Manning explains. And in a Kubernetes deployment, that separation technology might have been intentionally weakened.
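A breakout assessment typically begins with quick in-container checks for exactly that kind of weakening. The sketch below shows a few common heuristics; the specific indicators (a full capability set, a reachable Docker socket) are well-known red flags rather than Manning’s actual methodology.

```python
# Sketch: in-container heuristics for weakened isolation. Illustrative
# only; a real breakout assessment goes much deeper than this.
from pathlib import Path

# A full effective capability set (CapEff ending in 3fffffffff on many
# kernels) usually means the container is running privileged.
for line in Path("/proc/self/status").read_text().splitlines():
    if line.startswith("CapEff"):
        print(line)

# A mounted Docker socket or a crowded /dev are classic breakout footholds.
print("docker.sock mounted:", Path("/var/run/docker.sock").exists())
print("/dev entries:", len(list(Path("/dev").iterdir())))
```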

Finding Weakness by Design
“Kubernetes is actively making choices designed to make things faster with higher performance, and make things work easier, at the cost of security,” Manning says. He contrasts this with Docker, which employs technology similar to that used in the Google Chrome browser’s sandbox to maintain separation and security.

In Kubernetes’ case, he explains, the security has been intentionally weakened to make it easier for administrators to rapidly create and manage containers and pods. It’s part of a pattern that Manning says he’s seen in other products, in which they’re made feature-rich and easy to use, with the hope that security will somehow be bolted on later.

The main criminal beneficiaries of all this, Manning says, are cryptominers, who love to find unprotected pods. And they don’t have to research or use zero-day exploits to do it. “Throughout this whole presentation, I’m not talking about zero-days or CVEs or exploits like that. I’m talking about misconfigurations because that’s what’s been working in all these assessments,” Manning says.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and more.

Article source: https://www.darkreading.com/vulnerabilities---threats/kubernetes-shows-built-in-weakness/d/d-id/1336956?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter Suspends Fake Accounts Abusing Feature that Matches Phone Numbers and Users

The company believes state-sponsored actors may also be involved.

Twitter has disclosed a security incident in which third parties exploited its API to match phone numbers with user accounts. The company has identified and suspended a large network of fake accounts related to the incident and believes state-sponsored actors may also be involved.

The problem came to Twitter’s attention on Dec. 24, 2019, when it learned someone was using a network of fake accounts to match usernames with phone numbers through contact upload – a legitimate feature that, if enabled, helps users find each other on the platform. A security researcher had been able to exploit a flaw in Twitter’s Android app to match 17 million phone numbers with user accounts.

Following this report, Twitter launched an investigation and discovered more accounts beyond the researcher’s findings that may have been exploiting the same official API endpoint beyond its intended function. The company identified accounts “located in a wide range of countries,” with a high volume of requests coming from individual IP addresses in Iran, Israel, and Malaysia.

“It is possible that some of these IP addresses may have ties to state-sponsored actors,” Twitter said in a statement. “We are disclosing this out of an abundance of caution and as a matter of principle.” Changes were made to the endpoint so it no longer returns specific account names in response to queries. Accounts believed to have been exploiting the endpoint are suspended.

Twitter account holders who disabled the option for “Let people who have your phone number find you on Twitter” are not exposed to the vulnerability; neither are those who don’t have a phone number linked to their account.




Article source: https://www.darkreading.com/threat-intelligence/twitter-suspends-fake-accounts-abusing-feature-that-matches-phone-numbers-and-users/d/d-id/1336958?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

7 Ways SMBs Can Secure Their Websites

Here’s what small and midsize businesses should consider when they decide it’s time to up their website security.

Image Source: Adobe Stock — Monster Ztudio

Too often, small and midsize businesses (SMBs) run websites that aren’t secure and lack even the basics, such as SSL encryption technology or a Web application firewall.

It’s understandable: SMB owners are typically very busy and wear many hats. Few have an IT person on staff, let alone a dedicated security professional, and fewer still can do security on their own.

What’s an SMB to do? Turning to the site’s Web hosting provider to find out what security features it offers is a good start. Getting recommendations for and then interviewing at least two or three other specialty security providers would be the next steps for an SMB to determine whether a security specialist makes sense.

Working with a provider for basic website security doesn’t have to break the bank, says Monique Becenti, a product and channel marketing specialist at SiteLock. Depending on what the site does and how much e-commerce traffic the business handles, it’s possible to have a base level of security for roughly $1,000 a year.

Pricing will vary based on how many features are required and how much real business is done on the site. The advice on the following seven slides provides an excellent game plan for when SMBs decide it’s time to up their website security.


Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/threat-intelligence/7-ways-smbs-can-secure-their-websites/d/d-id/1336959?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google’s Super Bowl ad will make you cry. Or wince.

“How to not forget” is typed into a Google search bar.

That’s the simple way that Google started its Super Bowl ad, which featured an elderly man’s voice as he asked Google Assistant to help him remember details about his late wife.

The narrator laughs as the ad goes on to show a photo of a younger, moustachioed version of himself with “Loretta.”

“Remember, Loretta hated my moustache,” he says in a way that makes the viewer think that the man is sitting around with his friends or family, sweetly reminiscing.

But while you or I may be moved by the idea that Loretta hated his facial hair, we know that it’s not romance that’s moving the conveyor belt in this nostalgia factory. In spite of the sentimental fog in which the tender musical soundtrack tries to swaddle us, we know that Google Assistant’s algorithms aren’t murmuring in sympathy. Rather, they’re churning away, recognizing that “Remember” voice command so they can add keywords to their repertoire and thereby wind the marketing behemoth’s tentacles around us ever more firmly.

The marketing pitch worked. Emotions were evoked. Viewers’ heartstrings were plucked.

Google’s Super Bowl spot featured no celebrities. Like its previous commercials, this one was inspired by real people. In 2009, that meant an American finding love in Paris. Thanks, Google and all your products, for telling him the difference between truffles and Truffaut and advising about long-distance relationships and finding jobs in Paris.

Similarly, the narrator’s voice in Sunday’s ad was that of a Google employee’s 85-year-old grandfather.

Last week, Google’s chief marketing officer, Lorraine Twohill, wrote in a blog post that Google’s goal is to “build products that help people in their daily lives, in both big and small ways.”

Sometimes that’s finding a location, sometimes it’s playing a favorite movie, and sometimes it’s using the Google Assistant to remember meaningful details.

You have to have a heart of steel not to get feelies for the reality-inspired stories. Of course, we’re not idiots. Our waterworks were tempered with well-earned cynicism. The ad was done, after all, in the service of profit, as some noted:

Ads for products such as music/DVDs/streaming services/Broadway tickets:

Loretta used to hum showtunes.

…or flower delivery or tulip bulbs or Life Tulips x-ray photography canvas print Wall Art or Tulip Soft Fabric Paints 1oz 10-pkg-assorted (4.5 stars on Google Shopping):

Loretta’s favorite flowers were tulips.

You have to wonder: with all that we now know about Loretta, we’re lacking one important detail: Did Loretta enjoy having her every character trait and preference used for targeted marketing?

Do you? If not, perhaps you could ask Google Assistant to “remember that I hate being tracked.”

Fat lot of good it may do you, of course. Remember, Google Assistant, what we learned about a year ago? That thousands of Android apps bypass advertising ID to track users, and that as of 2018, Google could track the location of anyone using some of its apps on Android or iPhone even when they’ve told it not to.

Remember? Yes, Google, we do remember, and we’ll keep remembering, even as your ads masterfully stir our hearts.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/a63Ljbmk14I/

Twitter hands over student’s account to his college

No, we do not police the social media activity of our students, a New York university said last week, and yes, we have a sense of humor – remember the banana we taped to the wall in the student union and then posted on Instagram?

That was part of a Twitter stream posted by the State University of New York (SUNY) College at Geneseo, defending itself after a student’s parody account of the college – originally called @SUNYGenseeo, switched to NOT SUNY Geneseo, and now renamed geneseo’s #1 fan – was hijacked.

The account’s rightful owner is 20-year-old SUNY student Isaiah Kelly. As first reported by Business Insider, Kelly last week had to use his personal Twitter account to vent about having been shut out of the parody account, which he uses to poke fun at the school’s social media presence, news and messages to students.

But it was neither the school nor hackers who took over the account and forced through an unrequested change to the associated email address, thus locking Kelly out. It was, in fact, Twitter, having royally screwed up when enforcing its own policy about impersonation accounts.

Twitter’s policy says that it may suspend an account that…

…portray[s] another entity in a misleading or deceptive manner.

According to the policy, Twitter doesn’t remove accounts that clearly state that they’re not affiliated with or connected to any similarly-named individuals or brands. Nor does it remove parody, newsfeed, commentary, or fan accounts.

You can see how the @SUNYGenseeo account may have looked, at first glance, like an impersonation account. True, if you took the time to read the messages, it would be pretty clear that a state college likely wouldn’t tweet about keeping its asbestos-contaminated library open while it handed out surgical masks for students or that, following a blackout, it would joke about having forgotten to pay the power bill.

But according to SUNY, some of the account’s tweets were, in fact, being confused with official communications from the college. While the college didn’t take down the account…

…it did mess with it once Twitter suspended the account and turned it over to a college administrator. Specifically, the school changed the account’s profile images to grey and removed tweets that were being confused with official communications. SUNY Geneseo’s official communications team said that the account crossed the line between parody and impersonation in a number of ways:

  1. It used the college’s actual name and trademark design without alteration.
  2. It added but later removed “NOT” from the account name.
  3. It changed its appearance several times to mimic changes the College made to the real Geneseo Twitter account in attempts to differentiate the real one from the parody.

Did Twitter break the law?

What’s got everybody in an uproar over the incident isn’t whether or not the account crossed the line, however. It’s that Twitter’s own policy says an account will be removed – not that it will be taken over, its content adulterated, and control handed to somebody else.

Doing so is, apparently, unprecedented. It’s also got observers suggesting that Twitter violated the Computer Fraud and Abuse Act (CFAA), which criminalizes unauthorized access to a computer.

Kelly got his account back on Thursday. Twitter has apologized, telling media outlets that it made a mistake in turning over access to Kelly’s parody account to college officials, and that it should instead have suspended the parody account for impersonation (under Twitter’s policies).

But as of Monday morning, the internet was not appeased. It was still seething, and at least one politician was demanding that Twitter answer some questions.

I asked Twitter what it had to say about the allegation that it violated the CFAA and will update the story if I hear back. Here’s just a guess about that: Kelly used his school-issued email account to open the parody account. I’m no lawyer, but this could mean that SUNY would be the one who’d need to make a CFAA complaint – an unlikely prospect.

Twitter hasn’t said how it got wind of the account. Nor has SUNY claimed responsibility for reporting it.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/THaPVg5m35Y/

NIST tests methods of recovering data from smashed smartphones

Smash it, submerge it in water, and perhaps shoot it for good measure – just three of the methods criminals use to permanently erase digital evidence from smartphones.

And yet, as many criminals have found out to their cost, reducing a device to a pile of smashed plastic and glass means nothing if the internal memory chips remain in working order.

The forensic engineers who help police gather evidence understand this, even if it hasn’t always been clear which methods are most effective at extracting data accurately enough for it to meet standards of evidence.

With more and more evidence now sitting on smartphones, a better understanding of what works and what doesn’t has suddenly turned into an urgent issue.

To examine the issue, the US National Institute of Standards and Technology (NIST) says it recently conducted tests using 10 popular Android smartphones carefully loaded with a mix of data accumulated during simulated use.

This wasn’t as easy as it sounds and required the testers to load each device with photos, social media and app data, GPS traces and the like.

Engineers from NIST and its forensic partners then attempted to extract the data from the internal chips using different methods to compare with the original data set.

At a physical level this involved hooking up to the test smartphone’s circuit board via ‘JTAG’ test connectors or by carefully extracting the chips and connecting to them directly. NIST writes:

The comparison showed that both JTAG and chip-off extracted the data without altering it, but that some of the software tools were better at interpreting the data than others, especially for data from social media apps.
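The “without altering it” claim is the part that lends itself to automation. As a rough illustration (not NIST’s actual tooling), comparing an extraction against the reference data set can start with a simple hash-based diff; the directory names below are hypothetical, and a hash match says nothing about how well a tool interprets app data, which is where NIST found the real differences.

```python
# Sketch: hash-compare a forensic extraction against the originally loaded
# reference data. Directory names are hypothetical placeholders.
import hashlib
from pathlib import Path

def file_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

original = file_hashes(Path("reference_dataset"))
extracted = file_hashes(Path("chip_off_extraction"))

missing = original.keys() - extracted.keys()
altered = {name for name in original.keys() & extracted.keys()
           if original[name] != extracted[name]}
print(f"missing files: {len(missing)}, altered files: {len(altered)}")
```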

It’s a big challenge. Neither technique is easy, especially extracting data using JTAG, and that’s before factoring in the shortage of trained forensics people, the subtle differences between data extraction software packages, and the diversity of smartphones.

Said NIST forensic expert Rick Ayers:

Many labs have an overwhelming workload, and some of these tools are very expensive. To be able to look at a report and say, this tool will work better than that one for a particular case can be a big advantage.

Anyone who’s interested in their findings can read the first set of results on the Department of Homeland Security (DHS) website. So far, the researchers have only managed to test two software products against the physical methods, which underlines the scale of the testing challenge ahead of them.

Encryption barrier

These techniques allow forensics teams to retrieve data but of course have no bearing on their ability to bypass any encryption that has been applied to it.

Despite reports that specific tools can do this already, as with any data extraction it remains a skilled and time-consuming undertaking. That’s why the US Government keeps returning to the issue, in October 2019 even publicly asking Facebook to delay its end-to-end encryption rollout until it can be shown that this doesn’t get in the way of investigators in a hurry.

Meanwhile, the long-running battle with Apple goes on despite the company cooperating by providing iCloud backups connected to the shooting in Pensacola in December.

But even supposing a bypass for encryption were to hand, the reality is that criminals still often damage their devices and delete backups.

If politicians underestimate the problems this poses, NIST doesn’t. But it won’t deliver quick answers. Smartphone forensics faces a long road ahead.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DhhD2UVxfZU/

Google’s OpenSK lets you BYOSK – burn your own security key

OpenSK, a new open-source project from Google, lets folk make their own security key for less than £10.

You flash the OpenSK firmware on a Nordic dongle – and voila. The USB dongle includes the nRF52840 SoC (32-bit Arm Cortex-M4), supports Bluetooth Low Energy and NFC (Near Field Communication), as well as a user-programmable button. If you have a 3D printer to hand, you can also print a suitable enclosure.

The Nordic dongle with a 3D-printed case

Google offers its own Titan security key for two-factor authentication (2FA) with FIDO U2F, and using this or an alternative device goes a long way towards protecting an account from unauthorised access or takeover. The same keys can be used on other internet sites, including AWS and GitHub – but probably not at your banking site.

OpenSK is coded in Rust and runs on TockOS, an embedded operating system designed for “mutually distrustful applications” and also written in Rust. Google’s Elie Bursztein, security anti-abuse research lead, and Jean-Michel Picod, software engineer, said: “Rust’s strong memory safety and zero-cost abstractions makes the code less vulnerable to logical attacks.”

The purpose of OpenSK is not to enable geeks to get DIY security keys but rather to encourage use “by researchers, security key manufacturers, and enthusiasts to help develop innovative features and accelerate security key adoption”. There is also a caution that “this release should be considered as an experimental research project to be used for testing and research purposes”.

Any form of 2FA is much better than nothing, but dedicated security keys have advantages over alternatives like text messages, since phone numbers can be hijacked. Sometimes the phone number can also be used for account recovery, making it a weak link despite its popularity.

You can find the code for OpenSK here. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/burn_your_own_security_key_google_releases_opensk/

Twitter says a certain someone tried to discover the phone numbers used by potentially millions of twits

Twitter has admitted a flaw in its backend systems was exploited to discover the cellphone numbers of potentially millions of twits en masse, which could lead to their de-anonymization.

In an advisory on Monday, the social network said it “became aware that someone was using a large network of fake accounts to exploit our API and match usernames to phone numbers” on December 24.

That is the same day that security researcher Ibrahim Balic revealed he had managed to match 17 million phone numbers to Twitter accounts by uploading a list of two billion automatically generated phone numbers to Twitter’s contact upload feature, and match them to usernames.

The feature is supposed to be used by tweeters seeking their friends on Twitter, by uploading their phone’s address book. But Twitter seemingly did not fully limit requests to its API, deciding that blocking uploads of sequential numbers was sufficiently secure.
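To see why a sequential-numbers check isn’t enough, consider the sketch below: an abuser can generate validly formatted numbers in random order and feed them in as address-book-sized batches. The upload_contacts() call mentioned in the comments is a hypothetical stand-in, not a real Twitter endpoint.

```python
# Sketch: why a "no sequential numbers" check fails. Randomly ordered,
# validly formatted numbers sail straight past it.
import random

def random_us_numbers(n):
    """Generate n plausibly formatted, non-sequential US phone numbers."""
    return [f"+1{random.randint(2_000_000_000, 9_999_999_999)}"
            for _ in range(n)]

BATCH = 500  # address-book-sized chunks look like an ordinary contact sync
numbers = random_us_numbers(100_000)
batches = [numbers[i:i + BATCH] for i in range(0, len(numbers), BATCH)]
# Each batch would then go to the (hypothetical) upload_contacts(batch),
# whose response maps any matched numbers back to usernames.
print(f"{len(batches)} innocuous-looking batches, none of them sequential")
```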

It wasn’t, and Twitter now says that, as well as Balic’s probing, it “observed a particularly high volume of requests coming from individual IP addresses located within Iran, Israel, and Malaysia,” adding that “it is possible that some of these IP addresses may have ties to state-sponsored actors.”

Being able to connect a specific phone number to a Twitter account is potentially enormously valuable to a hacker, fraudster, or spy: not only can you link the identity attached to that number to the identity attached to the username, potentially fully de-anonymizing someone; you also now know which high-value numbers to hijack, via SIM-swap attacks, for example, to gain control of accounts secured by SMS or voice-call two-factor authentication.

In other words, this Twitter security hole was a giant intelligence-gathering opportunity.


Twitter says that it initially only saw one person “using a large network of fake accounts to exploit our API and match usernames to phone numbers,” and suspended the accounts. But it soon realized the problem was more widespread: “During our investigation, we discovered additional accounts that we believe may have been exploiting this same API endpoint beyond its intended use case.”

For what it’s worth, Twitter apologized for its self-imposed security cock-up: “We’re very sorry this happened. We recognize and appreciate the trust you place in us, and are committed to earning that trust every day.”

It’s worth noting that users who did not add their phone number to their Twitter account, or did not allow it to be discovered via the API, were not affected. Which points to a painfully obvious lesson: don’t trust any company with more personal information than it needs to have. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/twitter_phone_numbers/

Bad Certificate Knocks Teams Offline

Microsoft allowed a certificate to expire, knocking the Office 365 version of Teams offline for almost an entire day.

Microsoft Teams on Office 365, a service that passed the 20 million user mark late last year, was knocked down this morning by a certificate issue. Specifically, an authentication certificate was allowed to expire, keeping millions from logging in to the service.

Microsoft acknowledged the service interruption at approximately 9:15 a.m. ET, and by roughly 10:30 a.m. ET it had confirmed that the problem was certificate-based. An hour later, the company tweeted that it had begun the remediation process, with a message at nearly 5:00 p.m. ET that the fix had been applied.
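The defence against this class of outage is unglamorous: watch your certificates’ expiry dates. As a minimal illustration (the hostname is just an example, not the certificate that actually lapsed), a check like the sketch below, run on a schedule, flags a certificate weeks before it expires.

```python
# Sketch: alert when a server's TLS certificate is close to expiry.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host, port=443):
    """Return how many days remain on host's TLS certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like: 'Feb  4 12:00:00 2021 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.days

remaining = days_until_expiry("teams.microsoft.com")  # example host only
if remaining < 30:
    print(f"renew soon: {remaining} days left")
else:
    print(f"certificate OK: {remaining} days left")
```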




Article source: https://www.darkreading.com/operations/bad-certificate-knocks-teams-offline/d/d-id/1336951?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Your mobile network broke the law by selling location data and may be fined millions… or maybe not, shrugs FCC

It’s been nearly two years since it was first revealed that US cellular networks were selling real-time location data with inadequate safeguards. Late last week, after months of political pressure, the regulator in charge, the FCC, finally revealed the results of an investigation.

“I wish to inform you that the FCC’s Enforcement Bureau has completed its extensive investigation,” FCC chairman Ajit Pai informed lawmakers who had demanded to know where the report was three months earlier. “It has concluded that one or more wireless carriers apparently violated federal law.”

Pai’s statement went on: “Accordingly, in the coming days, I intend to circulate to my fellow Commissioners for their consideration one or more Notice(s) of Apparent Liability for Forfeiture in connection with the apparent violation(s). We are unable to provide additional information about any pending enforcement action(s) beyond what is stated in the letter.”

If that seems unusually vague – that “one or more” mobile operators “apparently violated” the law by selling location data – you’re not the only one to think so.

The sale of location data would, in any other era, have provoked outrage and determined federal action. But the FCC’s response to revelations that bounty hunters were buying people’s real-time locations for $100 through third parties, contracted through other third parties with little or no oversight, has been almost complete silence.

That inaction has only added to fears that FCC boss, and former Verizon executive, Pai is not only hesitant to take on the powerful companies that his office is supposed to oversee, but actively defends and supports the industry from behind the scenes.

Third time lucky

The mobile operators were caught not once, not twice, but three times over the course of eight months selling location data without adequate privacy safeguards, despite promising each time to take corrective action.

When caught for the third time, all four operators – AT&T, Sprint, T-Mobile US and Verizon – promised to stop providing location data to third parties altogether. But, notably, the promise is not binding and can be lifted at any time.

The issue and the failure by both the mobile companies and the FCC to take it seriously is one factor behind a broader push for federal privacy legislation.

But concerted pressure from lawmakers – who have sent repeated letters to the mobile operators and demanded answers in a series of Congressional hearings – has finally brought a promise of action from the federal regulator. It’s still not clear what that response will be, however, and those pushing the FCC to investigate remain frustrated.

The chair of the House Energy and Commerce Committee – which oversees the FCC – Frank Pallone (D-NJ) issued a statement: “Following our longstanding calls to take action, the FCC finally informed the Committee today that one or more wireless carriers apparently violated federal privacy protections by turning a blind eye to the widespread disclosure of consumers’ real-time location data. This is certainly a step in the right direction, but I’ll be watching to make sure the FCC doesn’t just let these lawbreakers off the hook with a slap on the wrist.”

For her part, Commissioner Jessica Rosenworcel put out a statement saying: “For more than a year, the FCC was silent after news reports alerted us that for just a few hundred dollars, shady middlemen could sell your location within a few hundred meters based on your wireless phone data.”

“It’s chilling to consider what a black market could do with this data. It puts the safety and privacy of every American with a wireless phone at risk. Today this agency finally announced that this was a violation of the law. Millions and millions of Americans use a wireless device every day and didn’t sign up for or consent to this surveillance. It’s a shame that it took so long for the FCC to reach a conclusion that was so obvious.”

In the dark

It’s still not clear what the FCC will do. We spoke to both Pallone and Rosenworcel’s offices and they both told The Register they have no details beyond the statement made on Friday. As for the FCC itself, it has continued with its entirely unhelpful approach.

We asked the FCC:

  • Why it feels unable to say the number of mobile operators that have broken the law
  • What steps remain for the sale of location data to be deemed an actual violation – as opposed to an “apparent violation” – and who makes that determination
  • Whether the FCC investigation has been completed or whether it is waiting on feedback from the mobile operators
  • What the precedents are for similar violations
  • Whether there will be a fine and how it will be calculated
  • Whether other measures will be considered against the mobile operators

And in response the FCC gave us… nothing. “We are unable to provide additional information about any pending enforcement action(s) beyond what is stated in the letter,” it told us in a statement.


In truth, the FCC letter was likely only sent – on the last day of January – because FCC chair Pai had promised Congress to send some kind of response by January at the latest.

The FCC sometimes keeps the names of those it is taking enforcement action against private until the action is formally voted on by the five commissioners, though not always. It’s not clear why it has refused to say how many of the main four carriers (soon to be three, thanks to another controversial decision by the FCC) are affected.

It’s also not clear why the FCC won’t outline what measures it expects to take, what precedent it will use to assess any fines, what process it will follow from this point, or whether the mobile operators still have a say in the process.

In short, the FCC has been dragged to the point where it has been obliged to enforce its own rules and protect the privacy rights of users over the profit incentive of the mobile industry. And it isn’t happy about it, so it’ll be damned if it’s going to tell anyone what it has been forced into doing.

Yes, this is a federal regulator. And yes things really have got this petty. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/04/fcc_location_data/