Year after being blasted for dodgy security, GPS kid tracker biz takes heat again for leaving families’ private info lying around for crims

A manufacturer of child-tracking smartwatches was under fire this week following the discovery of a second major security lapse in its technology in as many years.

Back in late 2017, Gator-branded wearables were among various kid-monitoring gizmos raked over the coals by Norwegian researchers who found the devices were trivial to remotely hijack. These gadgets are essentially cellular-connected smartwatches youngsters wear so that parents can watch over their offspring from afar, tracking their whereabouts, listening in on built-in microphones, and contacting them.

Fast forward roughly a year, and Brit infosec outfit Pen Test Partners decided to take a look at the security of these gadgets to see if defenses had been shored up. The team found that the web portal used by families to monitor their tykes’ Gator watches had a pretty bad exploitable bug.

Logged-in parents could specify their access level via a user-controlled parameter, allowing them to upgrade their accounts to administrator level. That could be exploited by stalkers, crims and other miscreants to snoop on as many as 30,000 customers, obtain their contact details, and identify and track the location of children.

“This means that an attacker could get full access to all account information and all watch information,” explained Pen Test Partners’ Vangelis Stykas earlier this week.

“They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.”

He explained: “The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control.”
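
To make that class of bug concrete, here is a minimal, hypothetical Flask sketch – not Gator's actual code; all endpoint, parameter and account names are invented – contrasting a backend that trusts a client-supplied access level with one that checks the role server-side:

```python
# Hypothetical illustration only (not TechSixtyFour's code): the bug class is
# trusting a client-supplied "user level" instead of looking the role up server-side.
from flask import Flask, abort, jsonify, request, session

app = Flask(__name__)
app.secret_key = "demo-only-secret"

# Server-side record of each account's real role
USERS = {
    "parent@example.com": {"role": "parent"},
    "admin@example.com": {"role": "admin"},
}


@app.route("/api/accounts")
def list_accounts_vulnerable():
    # VULNERABLE: the backend believes whatever level the client sends,
    # so any logged-in parent can request ?user_level=2 and see admin data.
    if request.args.get("user_level") == "2":
        return jsonify(sorted(USERS))
    abort(403)


@app.route("/api/accounts-fixed")
def list_accounts_fixed():
    # FIXED: the role comes from server-side state tied to the session,
    # never from a request parameter the user controls.
    account = USERS.get(session.get("email", ""))
    if account is None or account["role"] != "admin":
        abort(403)
    return jsonify(sorted(USERS))


if __name__ == "__main__":
    app.run(debug=False)
```

The fix TechSixtyFour shipped presumably amounts to the second pattern: validating the caller's permissions against server-side state on every request.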

The attacker would also be able to change the email address and password associated with a given watch, locking victims out of their devices. The researcher noted that other child-monitoring watch brands likely share the same web backend as Gator, meaning other gizmos would also be prone to the same attack.

While TechSixtyFour – which owns the Gator brand and built the vulnerable web portal – patched the flaw a few days after Pen Test Partners reported the programming cock-up in January, Stykas was critical of the biz and what he described as a “train wreck” situation.

Pen Test Partners alerted UK-based TechSixtyFour on January 11, and gave them a month to fix it due to the wide-open nature of the hole. TechSixtyFour asked for two months, but the request was denied. At first, the manufacturer tried to address the vulnerability and wound up blocking the researchers’ accounts with HTTP 502 errors, according to Stykas. In the end, it was patched by January 16.

TechSixtyFour founder Colleen Wong defended her company’s handling of security issues, noting that the gizmo maker maintains a full vulnerability disclosure policy, and since 2017 has undergone yearly penetration tests.

“We appreciate Ken Munro of Pen Test Partners disclosing this vulnerability to us, and our team have taken this seriously as our fix was completed within 48 hours. An internal investigation of the logs did not show that anybody had exploited this flaw for malicious purposes,” Wong said in a statement to The Register on Friday.

She added that TechSixtyFour’s engineers “implemented a partial fix within 12 hours. They then identified the root cause and deployed a full fix within 48 hours of the notification.”

TechSixtyFour is not alone in catching heat for its shoddy GPS watch security. In November of last year, Pen Test Partners discovered multiple vendors were using insecure transmission methods, and Stykas doesn’t expect any of the smartwatch makers to improve any time soon.

“On a wider scale the GPS watch market needs to ensure that their products are adequately tested. The problem is that the price point of these devices is so low that there is little available revenue to cover the cost of security,” Stykas said.

“Our advice is to avoid watches with this sort of functionality like the plague. They don’t decrease your risk, they actively increase it.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/01/gps_kid_tracker_website_vulnerable/

Three UK customer details exposed in homepage blunder

Exclusive Mobile operator Three UK’s website was showing visitors other customers’ names, postal addresses, phone numbers, email addresses and more – all without asking for a login.

Alarmed Reg reader Chris immediately tweeted at Three to ask what on Earth was going on, querying why Three’s site was displaying different people’s data to him every time he changed page.

[Screenshot: Three UK's website displaying another customer's account details]

The site was showing him as logged in even though he’d only gone to the mobile operator’s homepage.

“When you load their site over your mobile internet connection, it recognises you and automatically logs you in,” Chris told us. “I was doing this on my home Wi-Fi (which isn’t Three), so it should’ve required me to log in manually when I first went to their site. I guessed it might’ve either redirected me to a session for a valid user who was accessing at the same time, or some blip which didn’t recognise me and just assigned another user’s ID instead.”

“I wasn’t able to view any payment details – card or direct debit, and I wasn’t able to load any detailed bills to view itemised activity,” added Chris. Three claims to have around 10 million registered subscribers.

While our reader waited for a response from Three (it replied to him on Twitter an hour and a half after his initial tweet), he tipped off El Reg. As we investigated, we noticed the company website went down for a little while with the standard “under maintenance” page displayed – and came back up again after about an hour. Chris said other people’s data was no longer visible once the site returned.

The nature of the breach suggests the entire customer database, along with some of the personal data held on file, may have been exposed.

Despite repeated contact with Three’s PR representatives, none of The Register‘s questions about the potential size or scale of the breach have been answered.

Judging by the URLs visible in some of the other screenshots Chris sent us, which included the letters /new, the company’s techies may have accidentally deployed an under-construction revamp of the site to the mobe firm’s production servers. This is merely speculation and Three has not responded to questions on this.

The Information Commissioner’s Office was unable to say, at the time of publication, if Three had reported the breach. ®

Updated to add at 1628 UTC:

An ICO spokesperson told us: “Three has made us aware of an incident and we will be making enquiries.”

A Three UK spokesperson told us: “A small number of customer[s] have reported an issue to us regarding my3. We have blocked access to my3 while we investigate the issue.”

Updated to add at 1825 UTC:

Three UK wanted to make it known that only four people had complained about being able to view any random Three customer’s personal data by simply visiting its website and not even needing to log in. El Reg is very happy to make this clear.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/01/three_uk_data_breach_no_authentication_blunder/

How Hackers Could Hit Super Bowl LIII

Security threats and concerns abound for the year’s biggest football game. What officials and fans can do about it.

Super Bowl LIII will draw the attention of millions of people around the world – and cybercriminals hoping to exploit attendees and fans before and during the big game.

Major sporting events are hot targets for cyberattacks. Consider the 2018 Winter Olympics, when attackers impersonating a North Korean nation-state group targeted the Games, and more than 300 associated organizations were hit with a phishing attack. Or the World Cup, when the Wallchart phishing campaign delivered malware under the guise of a game-related email.

The massive audience captivated by major sports games, concerts, political events, and similar large-scale gatherings gives attackers a perfect opportunity to strike. If they’re looking to launch a phishing campaign, they have a wealth of potential targets who will click links related to the event. If they want to cause disruption, millions of eyes will be watching when they do.

Unlike the Olympics or World Cup, the Super Bowl is a one-day spectacle, which narrows attackers’ window. “I think the primary threat with an event like this is something disruptive in nature – it’s a pretty common trend nowadays,” says Tom Hegel, director of threat research and analysis for ProtectWise, which runs a network detection and response service often integrated into pop-up SOCs, and which has worked with events similar to the Super Bowl in scale. There is a greater chance of hacktivism during these events, for example, Hegel adds.

In professional leagues, there is precedent of hackers targeting specific teams and their critical data, says Tom Kellermann, chief cybersecurity officer at Carbon Black. Television networks and online gambling sites, especially during the pregame and halftime show, are targets. However, he is most concerned with watering hole attacks, malicious SMS, and destructive attacks on American companies.

“The Super Bowl is a global affair but it represents all that is American,” Kellermann says. “Given the heightened state of geopolitical tension and given that most Americans, including cybersecurity professionals, will be watching, the game represents an opportune time to target businesses and consumers throughout the US.”

As with most cyberattacks, there is a financial motivation to target the Super Bowl. “There’s a huge amount of transactions going on there at the same time,” Hegel points out.

Ticket forgery and fake bar codes are also common concerns with these events, adds David Gold, ProtectWise vice president of solutions architecture. People may try to steal press credentials, or those who have credentials may post pictures online showing the bar code.

The Super Bowl brings a long list of security challenges. The stadium’s network is overwhelmed with an unusually high number of fans, many of whom may bring infected or poorly secured devices, putting themselves and others at risk. The security team must understand and monitor the network, identify suspicious devices, and detect threats in a chaotic environment.

“The sheer amount of people who come to these events is staggering,” says Gold. “Separating the noise from the things you actually care about is very challenging for an event of this scale.”

The NFL, which was contacted for this article, declined to discuss Super Bowl cybersecurity issues.

Security: More Than A Metal Detector

Planning and implementing security measures at the Super Bowl is a “big, coordinated effort,” Gold emphasizes. The National Football League (NFL), the network security team, and law enforcement are only three of many players involved in ensuring the Super Bowl is secure. Often, organizations like the NFL hire external vendors or academics to help with security; in the past, Gold says, high-profile university programs have gotten involved with the game.

Kickoff is at Atlanta’s Mercedes-Benz Stadium, which has a whopping 1,800 wireless access points in the seating bowl and concourse. John Clay, director of global threat communications for Trend Micro, predicts scammers will be nearby to launch fraudulent Wi-Fi networks. “The more technology in these places, the bigger the attack surface becomes,” he says.

Threat monitoring is no small feat. “Coordination can be a huge challenge with scanning this stuff,” Gold notes. “Getting everything deployed is the biggest challenge. There are a lot of factors, a lot of different groups involved.”

The average security operations center uses 50 to 70 different tools – the Super Bowl's security team doesn't have the time or resources to install all of those for a one-day event. It needs tech that can be spun up quickly and doesn't require many people to operate. Cloud deployment is helpful here because it lets on-site teams expand to include remote experts, according to Gold.

To tackle security, organizations running major events typically have a SOC on-site with their own analysts and response teams available in case of an incident. Pop-up SOCs ProtectWise has worked with have threat hunters on the ground to triage and respond to alerts. Because its service is cloud-based, there are additional experts on the backend to offer support, help customers respond to unknown activity, provide context on incidents, and generate telemetry reports if needed.

But what are they tracking? Pretty much everything, says Gold. The pop-up SOC monitors endpoints, data, servers, websites, video streaming, rogue access points, point-of-sale systems, and the networks for different groups: teams, media, attendees. Externally, they're watching threat actor groups, the Dark Web, and social media platforms.

“You have to think of every single attack vector, and what the risk is of that impacting the event or the game,” says Gold. Other potential risks at the game could include card skimmers and keyloggers at stadium ATMs, and malicious USBs installed in device charging stations.

Fans as Targets

The NFL isn’t the only one on alert this Super Bowl Sunday – people attending the game, watching online, researching articles, and shopping for merchandise should be wary as well.

“It’s not just a game,” says Jessica Ortega, website security research analyst with SiteLock. “That’s something a lot of fans don’t realize – it’s a whole tourist attraction, basically, for the week and days leading up to the Super Bowl.”

Clay warns fans to use caution when reading websites and emails related to the game in the days prior. Spam campaigns, phishing attacks, and fraudulent sites may be designed to look like the Super Bowl homepage, ticket sales page, or another related website. Malvertisements on legitimate sites may redirect fans to malicious pages or trick them into downloading content.

“In the last few years, we tend to not see the huge spray-and-pray types of campaigns,” he adds. “[Attackers] tend to be more targeted in their approach now.” Some may purchase lists of names and email addresses for people interested in sporting events; others will do some OSINT gathering and scan social media looking for team fans they can hit with targeted attacks.

For those fans buying merchandise online, check to make sure the site is legitimate and only purchase from official sellers, says Ortega. There's a lot of SEO spam being injected into websites, and ecommerce sites selling sports memorabilia are being compromised, she notes. To her point, ZeroFox recently discovered nearly 500 advertisements on marketplaces for Super Bowl-related merchandise, many providing minimal information about where the goods came from – a sign they're counterfeit.

“Be aware of what you’re looking at, what you’re downloading, what you’re getting on your phones and all devices,” says Clay. “When you’re looking at news and want information on the event, be cautious of what you’re clicking on or downloading from a website or email message.”

Super Bowl attendees planning to pay using their phones at the event should download a VPN to protect their transactions, Ortega notes, and use cash to pay if possible. Fans should also safeguard their tickets, both online and physical, to protect the bar codes from being stolen and resist the urge to post any photos of tickets or game credentials on social media.

Article source: https://www.darkreading.com/vulnerabilities---threats/how-hackers-could-hit-super-bowl-liii-/d/d-id/1333777?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nest Hack Leaves Homeowner Sleepless in Chicago

A Chicago-area family’s smart home controls were compromised in a hack that has left them feeling vulnerable in their own home.

A deep voice in a baby’s room and a thermostat set to a tropical temperature were the first signs a Chicago-area homeowner had that there were problems with the home’s IoT devices. When the voice followed the family into the living room, “… my blood truthfully ran cold…” the man reported.

The family’s two Nest thermostats and 16 Nest cameras had been hacked by unknown threat actors. According to Google, the breach occurred because of duplicated passwords stolen from other online sites. The homeowner told a local television station that the family wasn’t aware that two-factor authentication was a possibility for a Nest account. He says that the family hasn’t slept well in the days since the breach and now wants to return the smart thermostats and cameras, and be reimbursed.

According to a Google statement provided to the television station reporting the breach, “We take security in the home extremely seriously, and we’re actively introducing features that will reject compromised passwords, allow customers to monitor access to their accounts and track external entities that abuse credentials.”

For more, read here.

Article source: https://www.darkreading.com/attacks-breaches/nest-hack-leaves-homeowner-sleepless-in-chicago/d/d-id/1333779?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google says sorry for pulling a Facebook with monitoring program

On Wednesday, TechCrunch reported that Facebook isn't alone in inflicting an uber-snoopy app on users via what's supposed to be a way for companies to distribute custom-made apps to their employees outside the App Store.

Google’s also been sneaking in through the back door, using the system to run an app called Screenwise Meter that sounds a lot like Facebook’s Research virtual private networking (VPN) app.

Fast on the heels of Apple kicking Facebook Research out of the program on Tuesday, so too did Screenwise Meter get the heave-ho.

This time, however, it sounds like Apple didn’t need its bouncers to show Google to the door. Rather, after being contacted about whether its app likewise violated Apple’s policy, Google apologized and showed itself out, disabling the app on iOS devices:

The Screenwise Meter iOS app should not have operated under Apple’s developer enterprise program – this was a mistake, and we apologize. We have disabled this app on iOS devices. This app is completely voluntary and always has been. We’ve been upfront with users about the way we use their data in this app, we have no access to encrypted data in apps and on devices, and users can opt out of the program at any time.

The reference to encrypted data is meant to differentiate Google’s app from Facebook Research, which Facebook said could collect data in some instances “even where the app uses encryption, or from within secure browser sessions”.

Before it blinked out of existence, Google was inviting users aged 18 and up (or 13, if part of a participating family) to download Screenwise Meter by using a special code and registration process that depended on an Apple Enterprise Certificate.

That’s the same thing that got Facebook in trouble with its Research app. When Facebook got kicked out of the Enterprise Developer Program, Apple noted that it had designed the program “solely for the internal distribution of apps within an organization,” not to distribute data-collecting apps to consumers: what Apple called “a clear breach” of its licensing terms.

Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.

Free tracking router, anyone?

Compared with Facebook, Google is far more upfront about how Screenwise Meter and its other market research programs work and what data they collect. And, unlike Facebook – which stripped its name off Research VPN following the similarly snoopy Onavo VPN getting pushed out of the App Store – Google’s clear about its involvement.

On the Screenwise Meter Play Store listing, for example, Google clearly states that data is collected for market research purposes and provides a link to its research panel membership page (you need to be on the panel to download the app).

According to TechCrunch, Google launched Screenwise in 2012. Users earn gift cards for sideloading an Enterprise Certificate-based VPN app that allows Google to monitor and analyze their traffic and data, tracking what they watch and what devices they watch it on. Google has since rebranded the program as part of its Cross Media Panel and Google Opinion Rewards programs, which reward users for installing tracking systems on their mobile phone, PC web browser, router and TV. The company even sends participants a special router that it can monitor.

Google also offers the ability to hit pause when participants want a break from monitoring or when someone younger than 13 is using the device.

Apple hadn’t responded to inquiries as of Thursday morning, but as TechCrunch’s Zack Whittaker hypothesizes, Google’s alacrity in responding to the issue – and, probably, its full awareness of how much bad press Facebook got over first the Onavo VPN and then the Research VPN – is probably enough to keep it from being baked into another deep-dish Apple privacy-protecting pie.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/X3JjSHxUZCE/

Microsoft Azure data deleted because of DNS outage

Users of Microsoft’s Azure cloud lost database records during a mass outage on Tuesday. A combination of DNS problems and automated scripts was to blame, according to reports.

Microsoft deleted several Transparent Data Encryption (TDE) databases in Azure that held live customer information. TDE databases dynamically encrypt the information they store, decrypting it when customers access it. Keeping the data encrypted at rest stops an intruder with access to the database from reading the information.

While there are different approaches to encrypting these tables, many Azure users store their own encryption keys in Microsoft’s Key Vault encryption key management system, in a process called Bring Your Own Key (BYOK).

The deletions were automated, triggered by a script that drops TDE database tables when their corresponding keys can no longer be accessed in the Key Vault, explained Microsoft in a letter reportedly sent to customers.
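
For illustration, here is a hedged Python sketch of the kind of guard such a cleanup script could apply: only treat a key as deleted if Key Vault positively says so, and stand down when the vault is merely unreachable, as it was during the DNS outage. It uses the azure-identity and azure-keyvault-keys packages; the vault URL and key name are placeholders, and this is not Microsoft's internal tooling.

```python
# Hypothetical guard for an automated TDE cleanup job -- not Microsoft's actual script.
# Requires: pip install azure-identity azure-keyvault-keys
from azure.core.exceptions import ResourceNotFoundError, ServiceRequestError
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

VAULT_URL = "https://example-vault.vault.azure.net"   # placeholder
TDE_PROTECTOR_KEY = "tde-protector"                    # placeholder


def key_definitely_deleted(vault_url: str, key_name: str) -> bool:
    client = KeyClient(vault_url=vault_url, credential=DefaultAzureCredential())
    try:
        client.get_key(key_name)
        return False                 # key still exists: leave the database alone
    except ResourceNotFoundError:
        return True                  # the vault answered, and the key really is gone
    except ServiceRequestError:
        # Network/DNS failure: status is unknown, so do nothing destructive
        # and escalate to a human instead.
        return False


if not key_definitely_deleted(VAULT_URL, TDE_PROTECTOR_KEY):
    print("Key present or Key Vault unreachable -- skipping automated drop.")
```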

The company quickly restored the tables from a five-minute snapshot backup, but that meant any transactions that customers had processed within five minutes of the table drop would have to be dealt with manually. In this case, customers would have to raise a support ticket and ask for the database copy to be renamed to the original.

Why were the systems accessing the TDE tables unable to access the Key Vault? The answer stems from a far bigger issue for Microsoft and its Azure customers this week. An outage struck the cloud service worldwide on Tuesday, causing a range of problems, including intermittent access to Office 365 that left users with roughly a 50/50 chance of logging in. Broader Azure cloud resources were also down.

This problem was, in turn, down to a DNS outage, according to Microsoft’s Azure status page:

Preliminary root cause: Engineers identified a DNS issue with an external DNS provider.

Mitigation: DNS services were failed over to an alternative DNS provider which mitigated the issue.

Reports suggested that this DNS outage came from CenturyLink, which provides DNS services to Microsoft. The company said in a statement that it had suffered a software defect.

This shows what can go wrong when cloud-based systems are interconnected and automated enough to allow cascading failures. A software defect at a DNS provider indirectly led to the deletion of live customer information thanks to a lack of human intervention.

CenturyLink seems to be experiencing serial DNS problems lately. The company, which completed its $34bn acquisition of large network operator Level 3 in late 2017, also suffered a DNS outage in December that reportedly affected emergency services, sparking an FCC investigation.

Azure users can at least take comfort in the fact that Microsoft is offering multiple months of free Azure service for affected parties.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nSW5a0K2vRA/

Hacker talks to baby through Nest security cam, jacks up thermostat

If the internet’s army of creeps isn’t busy blasting bogus warnings about fake nuclear warhead missiles through people’s Nest security cameras, they’re trying to parboil kids by jacking up the Nest thermostat.

A smart-home aficionado in the US state of Illinois told NBC News that he and his wife haven’t slept well in days, after a stranger accessed his Nest home security cameras and thermostats.

Arjun Sud – whom NBC described as an “avid” user of smart-home technology – told the station that shortly after he and his wife put their 7-month-old baby boy to bed on 10 January, they heard a strange noise coming out of the room. When Sud went to investigate, he said, he heard a deep, male voice coming from a Nest security camera that was installed in the nursery – one of 16 he owns, in addition to a security system and two Nest thermostats.

In addition, Sud found that somebody with a) too much time on their hands and b) the password to his Nest gadgets had remotely tinkered with the thermostat, jacking up the temperature to a balmy 90 degrees Fahrenheit (32 degrees Celsius).

Google, which owns Nest, told NBC that it’s aware of similar reports about customers using compromised passwords that were exposed on breaches on other websites.

The advice from Google, and from cyber security experts – including, of course, from us here at Naked Security – is to use unique passwords and two-factor authentication (2FA) to keep cyber intruders from breaking into smart-home devices, be they smart thermostats, baby monitors or other internet-enabled webcams.

Sud isn’t happy with that answer. He told NBC that he didn’t know that 2FA was an option. He wants to return $4,000 worth of Nest products, he wants his money back, and he wants Google and Nest to accept responsibility for not alerting him that 2FA is an option and giving him a heads-up when somebody else accesses his account.

Sud:

It was simply a blame game where they blamed me, and they walked away from it.

Sud’s wrath is understandable. It’s frightening to realize that an intruder could have been eavesdropping on what should be his family’s intimate, private conversations or spying on their child.

Still, we have to ask…

Who’s to blame, here?

Nest didn’t acquire a 2FA option until March 2017. Better late than never, it said at the time – after all, plenty of internet of things (IoT) gadgets still didn’t have it.

2FA involves authenticating yourself via not just a password, but also by a secondary code. Sometimes that code is sent via SMS – though, given phishing attacks that can nab one-time passcodes sent via text, that’s not the most secure option.

Secondary codes can also be accessed through a code-generating app such as Google Authenticator, Authy, or Sophos Authenticator (also included in our free Sophos Mobile Security for Android and iOS). Another option is a hardware 2FA key, such as Yubico or Google’s Titan.
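
For the curious, here is a small sketch of what those code-generating apps do under the hood: a time-based one-time password (TOTP) derived from a shared secret and the current clock. It uses the third-party pyotp library purely for illustration, and the service and account names are made up.

```python
# Illustrative TOTP demo using the pyotp library (pip install pyotp).
import pyotp

secret = pyotp.random_base32()          # enrolled once, usually via a QR code
totp = pyotp.TOTP(secret)               # defaults: SHA-1, 6 digits, 30-second step

# The URI a service would encode in its enrolment QR code (names are made up)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

code = totp.now()                       # what the authenticator app displays
print("Current code:", code)
print("Server-side check:", totp.verify(code))   # True while the code is still valid
```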

No question, 2FA adds a security layer to authentication. But is it Google’s responsibility to make sure that Sud and other Nest users know about 2FA? And how do they know what users don’t know?

Sud may not realize it, but he also suggested that it’s Google’s responsibility to inform users when their reused passwords have been breached… even if the breach was from another, non-Google site or online service.

It’s not. People need to take responsibility for their online safety. We should all know better by now than to reuse passwords and leave ourselves liable to dirtworms taking our credentials from one breach and stuffing them in to other online services until they gain entry, be it to our online bank accounts, our social media accounts, our smart-home gadgets, or the plethora of other places and things we want to keep locked up.

This is a well-known attack called credential stuffing. Unfortunately, it’s successful far too often, given how many people have the bad habit of reusing the same passwords in several places. This doesn’t amount to “hacking.” It’s more like somebody found a key on the sidewalk. Lo and behold, it’s the only key used to secure every house on the block. Jackpot!

Credit where credit’s due

But while it’s not really Google’s responsibility, that hasn’t stopped the company from being proactive in protecting password-reusing users from themselves. To give credit where credit’s due, in May 2018, Google’s Nest division sent alerts to some users, telling them to change their passwords after it learned that their credentials had been involved in a data breach.

Google’s not alone. Facebook and Netflix, among many other big sites, also prowl the internet looking for your username/password combos to show up in troves of leaked credentials.

Sometimes they use gentle recommendations to change your password. Sometimes they lock users in a closet, as Facebook did when it found its users’ credentials had also been used on Adobe.

Don’t get locked in the closet, and don’t trust that such companies are always going to watch your back when you reuse passwords. Sometimes they will. Sometimes they won’t. Sometimes they don’t have enough time: the creeps go about credential stuffing too fast.

Instead, we should all make sure to have a unique set of credentials – one unique, strong set for every site, every service. That goes for all of us, whether or not we’re Nest users. Even if you’re sin-free, make sure your family, your friends, your colleagues and anybody else you can think of are choosing strong passwords, at least 12 characters long, that mix letters, numbers and special characters.

Better yet, think about using a password manager. Granted, they’re not perfect, but they’re pretty good: they’ll not only create tough, unique passwords, but they’ll also store them for you so you don’t have to remember a set of tangled-spaghetti passwords.
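
As a rough sketch of the "create tough, unique passwords" part, here are a few lines of Python using the standard library's secrets module; any real password manager does something along these lines, plus secure storage on top.

```python
# Generate a strong random password: letters, digits and special characters,
# drawn from a cryptographically secure source. Sketch only; length is adjustable.
import secrets
import string


def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that actually mix character classes,
        # matching the "letters, numbers and special characters" advice above.
        if (any(c.islower() for c in candidate)
                and any(c.isupper() for c in candidate)
                and any(c.isdigit() for c in candidate)
                and any(c in string.punctuation for c in candidate)):
            return candidate


print(generate_password())   # different every run
```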

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tlMUXaQWtqs/

Credential dump contains another 2.2 billion pwned accounts

How many user credentials have fallen into the hands of criminals during a decade of data breaches?

Earlier this month, the Have I Been Pwned? (HIBP) website offered a partial answer to that question by uploading something called Collection #1, a database of 773 million unique email addresses discovered circulating on a criminal forum.

Now researchers at Germany’s Hasso-Plattner Institute (HPI) have reportedly analysed a second cache that was part of the same discovery. This cache consists of four collections named, unsurprisingly, Collections #2-5, which they think contain a total of 2.2 billion unique pairs of email addresses and passwords.

Collection #1 consists of 87GB of data cobbled together from more than 2,000 individual data breaches going back years.

Collections #2-5, for comparison, is said to be 845GB covering 25 billion records.

It’s a dizzying volume of data, which, despite the hundreds of millions or more people it must represent, is still small enough to fit on the hard drive of a recent Windows computer.

The obvious measure of these breaches is how much new data they represent – data that has not already been added to databases such as those amassed by HIBP or HPI.

Have I Been Pwned? estimated the unique data in Collection #1 at around 140 million email addresses and at least 11 million unique passwords.

HPI, meanwhile, estimates the number of new credentials at 750 million (it isn’t yet clear how many new passwords this includes).

The re-use deluge

When faced with these sorts of numbers, it’s tempting to shrug one’s shoulders and move on – most of these data breaches are old, so what harm might they be doing now?

Initially, breached credentials are probably traded to give attackers access to the account on the service from which they were stolen.

After that, they are quickly traded again to use as fuel for the epidemic of credential stuffing attacks. Credential stuffing thrives on our habit of reusing passwords – credentials for one service will often give a criminal access to other websites too.

Remember that while plaintext passwords are pay-dirt for criminals, usernames and email addresses are also valuable because they give them something to aim at when trying a brute-force attack.

But perhaps the real significance isn’t the volume of data so much as the fact it shows how criminals are able to build databases from lots of smaller breaches.

That’s where all the stolen credentials go – into larger databases where they can be more easily exploited.

Why have Collections #1-5 only come to light now?

Either because the data has already been exploited and is now so old that it no longer has much commercial value (Collection #1 was offered for sale at $45), or because so many criminals have access to it that it’s effectively become an open-source resource.

What to do?

It’s possible to check your email address and password against HIBP, although the site doesn’t appear to have uploaded Collections #2-5 yet. You can also check your email address against the HPI data.
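
For passwords specifically, HIBP's Pwned Passwords endpoint can be queried without sending the password itself, thanks to its k-anonymity scheme: you send only the first five characters of the password's SHA-1 hash and compare the suffixes locally. A minimal Python sketch using the requests library looks like this; the email-address searches on HIBP and HPI are done through their web forms instead.

```python
# Check a password against HIBP's Pwned Passwords range API (k-anonymity model).
# Only the first five hex characters of the SHA-1 hash ever leave your machine.
import hashlib
import requests


def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)          # number of breach records containing this password
    return 0                           # not found in the corpus


print(pwned_count("password123"))      # enormous -- never use it
```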

No organisation is immune to the possibility of a breach. That’s why individuals must do more to secure themselves rather than trusting others to do it for them.

Start with simple principles:

  • Use a password manager, not only to store passwords but to choose strong ones in the first place.
  • These should be unique – use a different random password for every site.
  • Where possible, turn on two-factor authentication (2FA). Some versions of authentication are superior to others, but any version is much better than nothing.
  • If you think you might have reused any credentials in the past, change those ASAP or delete the account if it’s no longer important.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jxhQIfMMTdA/

Linux user? Check those patches! Public exploit published for systemd security holes…

A pair of recent bugs in a very widely used Linux system tool called systemd have just been “weaponised” by a US cybersecurity company called Capsule8.

The systemd project is a large, complex and popular – but also controversial – toolkit used by many mainstream Linux distros to handle system startup and logging.

The -d at the end of the name denotes that it’s a daemon, the Unix/Linux version of what Windows calls a service.

In other words, it loads like a regular app – in fact, a daemon is a regular app – but then eases itself into the background where it keeps running even when no one is logged in.

Because systemd is typically the “mother of all processes” on a Linux computer – in technical terms, it has the process ID 1 – many of its components run as root so that they have the administrative authority needed to take charge of the system.

If your Linux system were a game of chess, the kernel would be your King, and systemd would be your Queen.

To put it mildly, bugs in systemd typically matter quite a lot.

On a stripped-down server, system startup is often surprisingly simple, not least because less is more when it comes to security.

After all, features such as Bluetooth connectivity, touch pad operation, graphics acceleration, multiscreen support, sound, battery life and hibernation are unimportant on most servers.

But for desktop and laptop users, all these things matter enormously – fast bootup and super-quick ‘wake from sleep’ are highly desirable these days.

Tackling the convoluted startup complexities of a modern Linux desktop was part of the inspiration for systemd, and is one of the things that helped to make it popular.

At the same time, this led to convoluted complexities in systemd itself, which has made it controversial with a vocal minority in the Linux community.

After all, greater complexity means more to go wrong, something that’s a really serious issue for servers, where the do-it-all-and-do-it-fast feature set aimed at Linux desktop users is, ironically, less relevant.

Anyway, a trio of long-standing bugs in systemd were uncovered and reported by security researchers at Qualys at the end of 2018.

The bugs were documented in some detail early in January 2019, and should be patched by now in any mainstream Linux distro that incorporates the systemd code.

No BSD-derived operating system, including macOS, uses systemd. Many Linux-based IoT devices avoid it on account of its complexity, and various Linux distros, primarily those that are focused on simplicity or security, use alternative startup systems instead. Check your distro for details. Well-known Linuxes that don’t have systemd, at least by default, include Alpine, Gentoo and Slackware.

We shan’t go into the details of the vulnerabilities and exactly why and how they are exploitable, but we will use this story as a reminder of how security bugs can often be exploited in pairs.

Smashing the stack

CVE-2018-16865, one of the bugs in this trio, is triggered by code in systemd‘s logging software that allocates temporary memory without first checking that the request is of a sensible size.

If you’re writing an entry to the system log, it’s reasonable to send a line of text, or even a few kilobytes of diagnostic data, but this bug means you can throw gigabytes of data at the logging process…

…and it will try to allocate a dangerously vast amount of temporary storage on the stack.

As you can imagine, an overflow on the stack, where critical data such as software return addresses and system memory pointers are stored, is always going to mean trouble.

Crash on demand

Qualys quickly figured out how to use jumbo-sized log messages to redirect program execution where it didn’t belong.

However, in modern Linux distros, the widespread use of ASLR, short for Address Space Layout Randomisation, means that system software is automatically loaded at different locations in memory every time you reboot.

Many attacks require not only that you sneak unauthorised data into memory, but also that you guess in advance where it will end up, so you know where to mis-direct the flow of execution.

As a result of ASLR, therefore, it’s difficult to use a bug like CVE-2018-16865 on its own to take control of someone else’s server.

Smashing the stack often makes it easy to divert program execution, but if you can’t control where your diversion ends up, you generally crash the program you’re attacking instead of taking it over.

You can think of stack smashing on a system protected by ASLR as being like blindly and suddenly turning left (by which we mean right in the USA) while you’re driving through town.

Maybe, just maybe, you’ll end up turning safely into an actual side-street, but even if you do, you won’t have any idea where you’re going next – and you’ll probably rear-end a parked car anyway.

Much more likely is that if you make a blind turn, you’ll instantly end up in a ditch, T-bone a bus stop, plunge into someone’s front garden, or smash through a window into an unfortunate shop.

Controlling the crash

But there’s another bug in the trio, CVE-2018-16866, whereby you can send sneakily-formatted text to the system log that causes systemd to write out a message containing data from parts of memory that you aren’t supposed to see – a data leakage flaw often referred to as an information disclosure vulnerability.

By choosing your booby-trapped message carefully you can trick the logger into printing a message that gives away the memory address where various attack-friendly pieces of system code can be found.

In other words, by triggering CVE-2018-16866 first, you can find out the details of memory addresses that are supposed to be hidden due to the randomisation in ASLR.

Once you’ve done that, you can trigger CVE-2018-16865 with enough advance information to control the crash and achieve what’s known as remote code execution.

What to do?

With basic exploit code now openly published, even hackers who aren’t very technical have a good place to start.

So, if your Linux distro uses systemd, check that your distro has these holes patched…

…and that you’ve installed the relevant updates!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9VcYAqygmvQ/