DHS Issues Emergency Directive on DNS Security

All government domain owners are instructed to take immediate steps to strengthen the security of their DNS servers following a successful hacking campaign.

On Jan. 22, US-CERT issued notice of a CISA emergency directive on DNS infrastructure tampering. The notice itself was typically brief, but it linked to an emergency directive at cyber.dhs.gov calling on anyone managing .gov or other agency-managed domains to take a series of remedial steps — and to take them very quickly.

“The fact that they put out the warning means that there’s been some sort of successful breach against a government site that they’re recovering from,” says John Todd, executive director at Quad9. “This type of warning means that there’s been some damage.”

Marc Rogers, executive director of cybersecurity at Okta, agrees. “CERT puts out notifications on a regular basis, but I haven’t seen one with such a strong sense of urgency before, which tells me that DHS is acting on actual knowledge of an ongoing attack,” he says.

In the emergency directive, DHS said “attackers have redirected and intercepted web and mail traffic, and could do so for other networked services.” The attacks began when someone stole, obtained, or compromised user credentials for an account able to make changes to the DNS records, the directive points out.

Most experts think the events alluded to in the emergency directive are related to a campaign of DNS attacks described by FireEye in a blog post dated Jan. 9. In that post, researchers said that attackers, most likely employed or sponsored by agencies in Iran, used a variety of techniques to gain access to and control over DNS servers. Once in control, they could compromise a wide range of data and networked services.

FireEye wrote that the attacks appear to have begun as long ago as 2017, and prominently feature a technique first described by researchers at Cisco Talos in which the DNS “A” records are modified. The technique lets the attacker harvest a user’s username, password, and domain credentials without producing any activity that would alert the user to a problem.

One of the ways in which attackers hide their activity is through the use of a counterfeit encryption certificate. “The attack described is heavily using ‘Let’s Encrypt,’ which allows someone to easily get a certificate for a domain they control. The attackers went in, modified the records, then immediately got a certificate from Let’s Encrypt, so people coming in from other domains won’t get an error message,” says Adnan Baykal, global technical adviser at the Global Cyber Alliance.

While the duration of the overall attack makes it highly unlikely that it was timed to take advantage of the current partial government shutdown, aspects of the shutdown have made it easier for the attack to succeed. “When you see that there are close to 100 certificates in federal domains that have expired during the shutdown, each one represents a serious risk for users who go to the site. This pushes up the risk of DNS hijacking,” Rogers says.
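
The expired-certificate risk Rogers describes can be checked mechanically against a certificate inventory. Here is a minimal sketch; the domain names, dates, and helper function are invented for illustration:

```python
from datetime import datetime, timezone

def expired_certs(cert_expiries, now=None):
    """Return the domains whose certificate notAfter date has passed."""
    now = now or datetime.now(timezone.utc)
    return [domain for domain, not_after in cert_expiries.items() if not_after < now]

# Hypothetical inventory of domains and their certificate expiry dates.
inventory = {
    "payments.example.gov": datetime(2019, 1, 5, tzinfo=timezone.utc),
    "portal.example.gov": datetime(2020, 6, 1, tzinfo=timezone.utc),
}
check_time = datetime(2019, 1, 23, tzinfo=timezone.utc)
print(expired_certs(inventory, check_time))  # ['payments.example.gov']
```

Each flagged domain is one where visitors will see browser errors, the condition Baykal notes makes a spoofed certificate hard to distinguish from an expired one.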

Baykal agrees. “Visitors are getting browser errors, and people have no good way to tell whether the error is from an expired certificate or a spoofed certificate,” he says.

These statements amplify the point that there’s little a site’s visitors can do about possible DNS hijacking. “You need to use or have access to a validating recursive DNSSEC resolver,” Todd says. “You can use a service that tries to give you an accurate answer, and if it’s not accurate, it fails the request.” He notes, though, that most users rely on their ISPs’ DNS servers, few of which perform DNSSEC validation.

As for the emergency directive’s mandates, they include auditing DNS records, changing passwords for accounts that have DNS administration privileges, and putting two-factor authentication into service — and doing it all within 10 days. “All of the remediation makes perfect sense based on the FireEye report. You’d hope that they would have done so earlier, but that horse has left the barn,” says Cricket Liu, executive vice president of engineering and chief DNS architect at InfoBlox.
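
The directive’s record-audit mandate boils down to comparing the records currently being served with a known-good baseline. A minimal sketch; the zone data and function name are hypothetical, not taken from the directive:

```python
def audit_dns_records(baseline, observed):
    """Return (name, expected, actual) tuples for every record set that
    is missing or differs from the known-good baseline."""
    findings = []
    for name, expected in baseline.items():
        actual = observed.get(name)
        if actual != expected:
            findings.append((name, expected, actual))
    return findings

baseline = {
    "www.example.gov.": {"A": ["93.184.216.34"]},
    "mail.example.gov.": {"MX": ["10 mx1.example.gov."]},
}
observed = {
    "www.example.gov.": {"A": ["203.0.113.66"]},  # tampered A record
    "mail.example.gov.": {"MX": ["10 mx1.example.gov."]},
}

for name, expected, actual in audit_dns_records(baseline, observed):
    print(f"ALERT {name}: expected {expected}, saw {actual}")
```

In practice the observed side would come from live queries against the authoritative servers; any finding would then trigger the directive’s password-change and multifactor steps.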

And the mandates shouldn’t be ignored by those who aren’t bound by the government directives. “This is a wakeup call for anyone who owns a domain. Although the US government is issuing the order, anyone anywhere in the world should be paying a lot of attention,” Todd says.

Liu agrees. “The things they’re recommending are a good idea for anyone, whether you’re part of the federal government or not,” he says. “All of these are a good idea, regardless.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities-and-threats/dhs-issues-emergency-directive-on-dns-security/d/d-id/1333716?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

RF Hacking Research Exposes Danger to Construction Sites

Trend Micro team unearthed 17 vulnerabilities among seven vendors’ remote controller devices.

A global team of researchers recently took industrial system hacking to a whole new — and visual — level by exploiting flaws they discovered in radio frequency (RF) controllers that move cranes and other large machinery at construction sites and in factories.

The Trend Micro team first tested out the vulnerabilities in their lab with a miniaturized crane, and later on a live construction site in Europe, where, with permission, two members of the team hacked the crane’s controller and were able to move the massive arm from side to side. Two other members of the team, who shared details of their RF hack at last week’s S4x19 conference in Miami, said the two-year-long research project included reverse engineering some remote-controller devices’ proprietary RF protocols, and using a software-defined radio (SDR) as well as a homegrown RF analyzing tool, to gain control of the RF devices.

In another twist to the hack, Trend Micro researcher Stephen Hilt built a digital watch to control the crane operation communications. The watch, based on the so-called GoodWatch created by renowned hardware hacker Travis Goodspeed, provided a stealthier way to attack the controllers. “I was thinking to myself, I wonder if I could control a crane with this watch?” Hilt said. “So I actually built a watch to control the crane.”

The Trend Micro research team discovered and reported some 17 vulnerabilities across seven popular controller products from Saga, Circuit Design, Juuko, Autec, Hetronic, Elca, and Telecrane; most of those vendors have since issued patches. But as with any industrial system, there’s no guarantee users will apply the security updates, given the age of their products as well as concerns over disrupting industrial operations.

This isn’t the first time RF technology’s security weaknesses have been exposed, but the Trend Micro work focused on cranes, which haven’t been closely studied previously, the researchers said. “There’s been a lot of research in the RF space, but none has actually applied to this type of industrial controllers,” Hilt said.

Radio Free of Security
The Trend Micro team found that the products lack so-called “rolling” or “hopping” code, which prevents attackers from recording and replaying their RF communications to control the equipment. Nor do the controllers include encryption: The data sent between the transmitter and receiver is merely obfuscated, so it can be intercepted and reverse-engineered. And the software for uploading firmware to the transmitter isn’t secured, leaving it open for an attacker to tamper with.

Using an SDR, the researchers were able to record and then replay the RF signals used by each controller. Because the devices simply accepted the replayed commands, this attack lets an intruder take control of the equipment. “There’s absolutely no security on these protocols,” Hilt said.
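
The “rolling code” defense the researchers found missing can be sketched on the receiver side. This toy model only checks a monotonically increasing counter; real schemes, which the researchers do not specify, also protect the counter cryptographically:

```python
class RollingCodeReceiver:
    """Accept a command frame only if its counter is newer than the last
    accepted one, within a small look-ahead window for lost frames.
    Replayed (stale) frames are rejected."""

    def __init__(self, window=16):
        self.last_counter = -1
        self.window = window

    def accept(self, counter):
        if self.last_counter < counter <= self.last_counter + self.window:
            self.last_counter = counter
            return True
        return False

rx = RollingCodeReceiver()
print(rx.accept(1))  # True: fresh frame
print(rx.accept(2))  # True: next frame
print(rx.accept(2))  # False: a replayed frame is rejected
```

With a scheme like this, the SDR capture-and-replay attack fails because every recorded frame carries a counter the receiver has already consumed.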

“They don’t have the security eyes that Bluetooth and Wi-Fi have,” said Trend Micro’s Jonathan Andersson, who reverse-engineered the RF protocols. Many of the vendors have been using the same radio protocol for a decade or longer, he noted.

The RF protocol flaws allowed them to override the emergency stop (e-stop) mode of their model crane. E-stop is a built-in physical safety feature that stops a crane from moving when, for example, RF communication between the device and the crane fails or drops.

Dale Peterson, CEO of Digital Bond and the head of the S4 ICS SCADA conference, said Trend Micro’s RF research demonstrated just how pervasive this vulnerable RF communications technology is: “Very little attention has been paid” to these types of industrial operations, he said.

“Clients with these mobile fleets, the people responsible for them are different from those [who are for] ICS. They are in their own zones and not protected in the same way,” Peterson said.

While most sites have humans on hand handling remote control operations, such as moving a crane in case of an emergency, the risk of an attack via RF grows more ominous as these operations become more automated, according to Peterson. “In the next [few] years when the human goes away, it will be an even bigger deal” for risk, he said.

Trend Micro’s Hilt said automation indeed could be the catalyst for better security of these RF-based industrial control devices. “If [vendors] want to be on the forefront of their automation push, they need to be secure,” he said.

The researchers also published a detailed technical report on their research.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/rf-hacking-research-exposes-danger-to-construction-sites/d/d-id/1333717?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google faces another GDPR probe – this time in the land of meatballs and flat-pack furniture

Google’s slurping of people’s location data and web browsing histories is being probed by Swedish privacy watchdog.

The Swedish Data Protection Authority (Datainspektionen) announced the investigation earlier this week, just as the search engine giant was handed a €50m penalty from the French data watchdog.

The probe is the result of a complaint submitted in November by the Swedish Consumer Association (Sveriges Konsumenter) that is based on a report from the Norwegian Consumer Council (Forbrukerrådet) about Google’s use of dark patterns – which are user interface design choices that attempt to trick users into doing things they may not want to do.

That report criticised the “overwhelming amount of granular choices” on Google’s privacy dashboard, and the pop-ups trying to dissuade users from turning off (or, as Google says, “pausing”) location data.

Google is already facing lawsuits in the US over admissions that when users “paused” location history, it still gathered up that information – unless they had also turned off “web and app activity”.

The complaint to the Swedish authority said Google used “deceptive design, misleading information and repeated pushing to manipulate users into allowing constant tracking of their movements”, the Datainspektionen said.

“In essence, the complainant holds that the processing of location data in this way is unlawful and that Google is in violation of Articles 5, 6, 7, 12, 13 and 25 of the GDPR.”

In order to assess this, the Datainspektionen fired off a series of questions to the search giant, and asked it to provide information and documentation by 1 February.

The questions include: the purpose and legal basis Google is relying on to process location data; when and what information data subjects were given on the processing; and whether any of the data processed is special category data, which is granted greater protections under GDPR.

The authority also asked whether the complaint’s description of the “design patterns” Google allegedly used to obtain a legal basis for location data processing is accurate for Swedish data subjects.

Google must also outline how many Swedish data subjects it obtained location data on between 25 May and 27 November 2018, and how many data points are gathered, on average, on an individual, broken down on an hour-by-hour basis for a 24-hour period.

Perhaps foreseeing the mess of data it could end up being served, the authority notes that this must be “presented in a structured and clear manner”.

Other documentation Google is told to hand over includes the privacy policies that were in place at the time, records of data processing activities for location data collected on Swedes, and relevant data protection impact assessments.

We’ve asked Google for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/01/23/google_gdpr_probe_sweden/

Think Twice Before Paying a Ransom

Why stockpiling cryptocurrency or paying cybercriminals is not the best response.

Imagine a scenario in which a financial services firm is hit with a ransomware attack that hijacks its corporate network, rendering systems unavailable to users and effectively grinding business to a halt. Even after officials at the company pay the offending cyber extortionists hundreds of thousands of dollars in ransom, the systems remain unavailable for days.

In such a case, the damages would include not only the ransomware payment itself but the enormous losses related to downtime. That includes uncompleted transactions, lost employee productivity, and unhappy customers — to name a few.

This type of situation unfortunately happens more often than we’d like to think. And it shows why the common practice of stockpiling cryptocurrency for just such an event is often a misguided strategy.

The Problem with Stockpiling
We’ve known for years that organizations are quite willing to pay ransoms to cybercriminals who take their data hostage through ransomware. This year, my company conducted a survey of 1,700 business, security, and IT executives to find out how widespread the trend really is.

Alarmingly, nearly three-quarters of the security executives and 60% of CEOs admitted to stockpiling cryptocurrency to pay cybercriminals in case of a ransomware attack or data breach. And about eight in 10 of the security executives whose companies have stockpiled cryptocurrency have made payments to cybercriminals in the past year.

There are many reasons we discourage the practice of stockpiling cryptocurrency to pay cyber ransoms. Buying cryptocurrency in the first place is risky, if only because of its wildly fluctuating values. Furthermore, paying attackers does not guarantee that they will decrypt the affected files and systems.

It’s also important to remember that cryptocurrency transactions can’t be reversed. Once the payment has been made, it’s gone for good.

Restore Your Data — and Your Peace of Mind
While prevention technologies definitely play a role in helping organizations mitigate the effects of ransomware, security plans that also include data loss protection strategies give companies a fuller defense. When the lens shifts from prevention to protection, enterprises retain access to every file in the event of an attack, which gives them options other than paying ransoms.

Even though the number of ransomware attacks has declined 30% since 2017, according to research from cybersecurity and antivirus provider Kaspersky Lab, the attacks remain particularly lucrative for criminals: they’re inexpensive to execute and easy to pull off. That explains the recent surge in the popularity of “ransomware as a service.”

MIT Technology Review reported last April that in 2015 alone, enterprises infected by ransomware paid millions of dollars in bitcoin, which was also the cryptocurrency of choice in 2017’s string of WannaCry attacks. WannaCry attacked more than 250,000 systems in 150 countries across private and public sector organizations, including FedEx, Hitachi, Nissan, the Russian interior ministry, and thousands of enterprises in Spain and India.

Perhaps the most notorious attack crippled the UK’s National Health Service (NHS) in May 2017 by bringing its data systems to a halt. This is significant because human lives are on the line when healthcare organizations cannot access medical record data immediately to provide the right patient care. Hospitals and clinics often become prime targets for attackers because it is so crucial that they restore systems and access to medical records as quickly as possible and, as a result, often pay ransoms.

Heed the Warnings
These episodes, combined with analytical and empirical evidence, demonstrate that many organizations still have much work to do in order to better protect themselves against all types of cyberattacks, including ransomware.

Here are some suggested measures:

● Perform regular system updates and patches, so that vulnerable systems are not used to run ransomware exploits.

● Conduct regular external system data backups. This allows you to restore information from prior to the time of the ransomware attack.

● Make sure all users are aware of and educated about the tactics used in ransomware and other attacks. This will make users less likely to click on suspicious links and infect their companies with ransomware.
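
The backup measure above only pays off if the backed-up files are known to be intact. A minimal sketch of verifying files against a hash manifest recorded at backup time; the file names and helper function are illustrative:

```python
import hashlib

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

def verify_backup(manifest, files):
    """Return the names of files that are missing or whose contents no
    longer match the SHA-256 digests recorded when the backup was taken
    (e.g. because ransomware encrypted them)."""
    damaged = []
    for name, digest in manifest.items():
        data = files.get(name)
        if data is None or sha256_hex(data) != digest:
            damaged.append(name)
    return damaged

files = {"ledger.db": b"Q4 transactions", "notes.txt": b"meeting notes"}
manifest = {name: sha256_hex(data) for name, data in files.items()}
files["ledger.db"] = b"\x00ENCRYPTED\x00"  # simulated ransomware damage
print(verify_backup(manifest, files))  # ['ledger.db']
```

A clean verification run tells the organization it can restore from the backup instead of paying, which is the option this article argues for.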

Organizations need to have full visibility over all of their data. This includes having the ability to search and investigate files across endpoints and cloud services in minutes, rather than over the days and weeks it usually takes following an attack.

By taking these initiatives, organizations can be much better prepared for ransomware attacks. It’s a far more sensible approach than saving up lots of cryptocurrency that organizations might end up throwing away.

Jadee Hanson, CISSP, CISA, is the Chief Information Security Officer and Vice President of Information Systems at Code42. Jadee’s passion for security started gathering steam with her first role as a security adviser at Deloitte. After five years and a lot of travel, Jadee … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/think-twice-before-paying-a-ransom/a/d-id/1333677?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybercriminals Home in on Ultra-High Net Worth Individuals

Research shows that better corporate security has resulted in some hackers shifting their sights to the estates and businesses of wealthy families.

Threat intelligence experts and research groups have seen a shift of cybercriminals increasingly targeting ultra-high net worth (UHNW) individuals and their family businesses.

Lewis Henderson, vice president of threat intelligence at UK-based Glasswall Solutions, says some attackers find it increasingly challenging to get into large corporations, and are putting more of their efforts into attacking the super-rich and their estates and businesses.

“We’ve found that they are using similar tactics and techniques, such as using email and attachments and ransomware,” Henderson says. 

The conclusions drawn by Glasswall mirror research conducted by UK-based Campden Wealth, which found that 28% of UHNW families reported having been the victim of one or more cyberattacks. While UHNW families have an estimated net worth of at least $30 million, Campden Wealth recommends that those setting up single-family offices have wealth of $150 million or more. Many of the families that open single-family offices have far in excess of $150 million, with their average net worth standing at $1.2 billion, according to the Campden Wealth/UBS Global Family Office Report.

Dr. Rebecca Gooch, Campden Wealth’s director of research, says phishing was the most common type of attack, followed by ransomware, malware infections, and social engineering. She says UHNW individuals are targeted in a variety of ways including via their operating businesses, family offices, or through the family members themselves.

More than half the attacks were viewed as malicious, and nearly one-third came from an insider threat, such as an employee intentionally leaking confidential information. Around one in ten were deemed accidental.

“The results of these attacks were notable,” adds Gooch. “More than a quarter of family offices and family businesses we surveyed lost revenue, one-fifth had their private or confidential information lost or exposed, and 15% suffered either a blackmail or ransom situation, or had a loss or delay in their company’s activity.” 

Defense 

Glasswall Solutions’ Henderson says there are at least four steps ultra-high net worth individuals can take to protect themselves from cyberattacks:

  • Hire a cybersecurity specialist. Henderson says whether it’s a consultant or a permanent position with the company, a cybersecurity expert can fully brief the family on security trends.
  • Define policies and procedures. The consultant’s first job should be writing specific policies and procedures for classifying sensitive data. Typically, security experts have various templates they can follow, most notably guidance published by national law enforcement agencies.
  • Have the security specialist explain the technology. Once an expert is hired and security policies are established, UHNW individuals need the expert to explain how no single technology will protect them. Henderson says they are typically more than willing to pay for protection, but the expert must explain the elements of defense-in-depth – from antivirus and antimalware software to firewalls, intrusion prevention, and data loss prevention tools.
  • Make provisions for the right kind of cyber insurance. UHNW individuals are more than willing to pay for cyber insurance, but it’s up to the security expert to explain the need. It’s important to obtain a policy with fraud protection in the event of a social engineering attack, because not all cyber insurance policies explicitly cover such attacks.

Campden Wealth’s Gooch adds that wealthy families should not consider cybersecurity planning merely an IT problem: the company’s board or top person also should be involved. Proper cybersecurity awareness training, such as teaching people how to notice suspicious emails, can also prevent breaches.  

Families also need to stay up-to-date on what information has been made public about them and their companies, Gooch says. The more an attacker can learn about a family or a business, the more he or she can organize an attack. Finally, Gooch says adequate incident response plans can control the extent of the damage. Families need to define roles and know who to call in the event of an attack. 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/cybercriminals-home-in-on-ultra-high-net-worth-individuals-/d/d-id/1333706?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Creates Online Phishing Quiz

Google Alphabet incubator Jigsaw says that knowing how to spot a phish, combined with two-factor authentication, is the best defense against falling for a phishing email.

Jigsaw, Google Alphabet’s incubator subsidiary, has launched a free online phishing quiz so users can test how well they can spot a malicious email message.

Justin Henck, Jigsaw product manager, said in a blog post today that the quiz was created from security training the company has held with 10,000 journalists, activists, and political leaders worldwide. “We’ve studied the latest techniques attackers use, and designed the quiz to teach people how to spot them,” he said in the post.

Google considers two-factor authentication the best way to protect against phishing, he said. “When you have two-factor authentication enabled, even if an attacker successfully steals your password they won’t be able to access your account,” Henck said. “We also offer a Chrome extension called Password Alert that protects you from entering your Google password in a fake login page.”
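
The second factor Henck refers to is often a time-based one-time password (TOTP, RFC 6238), which can be implemented with the Python standard library alone. This sketch shows generic TOTP, not Google’s specific implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second intervals
    since the Unix epoch, dynamically truncated to the requested digits."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59s yields 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and depends on a shared secret, a phished password alone is not enough to log in, which is the property Henck describes.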

Take the Google Phishing Quiz here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/google-creates-online-phishing-quiz/d/d-id/1333709?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Evolution of SIEM

Expectations for these security information and event management systems have grown over the years, in ways that just aren’t realistic.

The concept of security information and event management (SIEM) has its origins in the need for a unified view of security events across multiple technologies. From antivirus software detecting malware on an endpoint to a firewall blocking suspicious traffic on an unauthorized port, SIEM gives security operations teams a central dashboard from which they can assess the “security posture” of their organizations. The dashboard serves as a kind of one-stop shop for finding and acting on security-related information, hence the name.

Over the years, expectations from these systems have grown, with marketers beginning to claim that their SIEM platforms can even be used to correlate seemingly isolated events and identify threats that individual security products would otherwise miss. Given the rising importance of cybersecurity, this technology quickly catapulted to the top of every CISO’s wish list.

What followed was a rush fueled by paranoia and efficient marketing to set up SIEM-powered security operations centers. SIEM platforms evolved into multifaceted tools for monitoring, reporting, and forensic investigation. Suddenly, SIEM had become such a crucial component of cybersecurity strategy that regulations began requiring the use of such platforms in banking institutions.

SIEM, however, is hardly a panacea for all security problems. Issues like complex implementation, lack of flexibility, slow response times, and sky-high costs became seemingly necessary evils. Moreover, the torrent of alerts kept security teams under constant pressure to “close issues.” It was only natural for users to notice the elephant in the room: The reality of SIEM systems had fallen far short of all the hype.

In hindsight, it seems obvious that SIEM systems were unable to deliver on their marketers’ promises. They were built on a back-end technology — relational databases — that wasn’t designed for this kind of use. The volume and variety of data generated in today’s environments mean that traditional RDBMS-based SIEM systems cannot find the proverbial needle in the haystack. SOC analysts cannot manually investigate the volume of alerts generated, and correlation-based rules alone cannot cover the ever-increasing number of scenarios in which a cybersecurity incident can penetrate an organization.

Advancements in technologies managing unstructured data (i.e., big data), toolkits leveraging data science in machine-assisted decision-making (i.e., machine learning), and frameworks that allow multiple isolated security applications to work together (i.e., security orchestration) have given us the firepower needed to scale up the capabilities of the next generation of SIEM systems.

Security analytics and orchestration solutions available today are capable of processing large volumes of data at high speeds in a scale-as-you-grow architecture. This processing capability allows users to build models using data science. These models can identify both patterns of normal activity and outliers or potentially malicious events without writing specific rules. The systems can be integrated with other applications to exchange information and investigate the outliers identified. Orchestration can also be paired with automated threat response mechanisms, reducing human involvement.
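
The outlier identification described above can be reduced to a toy example: score each entity’s current event count against its own historical baseline, with no hand-written correlation rule. A z-score sketch with invented data; production systems use far richer features and models:

```python
from statistics import mean, stdev

def find_outliers(history, current, threshold=3.0):
    """Flag entities whose current event count deviates from their own
    historical baseline by more than `threshold` standard deviations."""
    outliers = []
    for entity, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # no variation on record; cannot score
        z = (current.get(entity, 0) - mu) / sigma
        if abs(z) > threshold:
            outliers.append(entity)
    return outliers

history = {
    "alice": [40, 42, 39, 41, 40, 43],      # daily login counts
    "svc-backup": [5, 6, 5, 4, 5, 6],
}
today = {"alice": 41, "svc-backup": 60}     # the service account spikes
print(find_outliers(history, today))  # ['svc-backup']
```

No rule says “60 logins is bad”; the model flags svc-backup because that count is abnormal for that account, which is the shift away from pure correlation rules.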

Now, marketers have begun raising the bar on what technology alone can deliver, fueling competition among vendors to prove technical superiority. Today’s automated black-box solutions claim to leverage advanced artificial intelligence and orchestration to provide security at the click of a button. There is a sense of déjà vu in banking regulators prescribing big data-based analytics solutions to manage security operations.

What is needed is an understanding of what capabilities users should expect of the technologies they use, and how to correctly leverage them to improve security. Every IT environment is unique, so expecting software to automagically secure them all with artificial intelligence is still beyond the realm of what technology can deliver.

Technology plays a crucial role in processing large volumes of data, performing repeated tasks, making disconnected systems work together, and using complex math to identify the potentially malicious. This is already the bulk of the heavy lifting in ensuring security; what is needed further is human intelligence for correct navigation.

No doubt, this calls for a comprehensive skill set, which is evidently a scarce resource. While that might seem difficult to pull off, being swayed by marketers boasting about one-size-fits-all solutions is certainly not the best way to start.

Chetan Mundhada drives the Sales team at NETMONASTERY with a focus on growing the company’s enterprise and channel. He brings to the position a successful track record of over 14 years in the IT infrastructure and security domain. He has a keen interest in data analytics and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/the-evolution-of-siem/a/d-id/1333675?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Aging PCs Running Out-of-Date Software Bring Security Worries

Age is an issue with application languages and frameworks, too.

More than half of the software running on PCs around the world is outdated, with millions of users still logging into computers running Windows Vista and XP. That’s just some of the information to come from a new report on PC software and the risks posed to security.

The “Avast 2019 PC Trends Report” is based on anonymized data from 163 million computers running Avast and AVG security software. It presents information on both the hardware and software running the world’s business and personal applications — and the picture it paints is of an infrastructure growing older with each passing year.

In fact, the average age of a desktop PC is six years, up from 5.5 years in 2017. That compares with an average life span of less than three years for a smartphone. Age continues to be an issue with application languages and frameworks, too. According to the report’s authors, “Our report shows that the number of installed tools and frameworks is higher than ‘real’ apps, such as Office or Skype. In some cases, these aren’t being kept up-to-date by the user or the vendor.”

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities-and-threats/aging-pcs-running-out-of-date-software-bring-security-worries/d/d-id/1333713?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hijacked Nest cam broadcasts bogus warning about incoming missiles

A hacker took over a Nest security camera to broadcast a fake warning about three incoming intercontinental ballistic missiles (ICBMs) launched from North Korea, sending a family into “five minutes of sheer terror.”

Laura Lyons, of Orinda, California, told the Mercury News that she was preparing food in her kitchen on Sunday when a “loud squawking – similar to the beginning of an emergency broadcast alert” blasted from the living room, followed by a detailed warning about missiles headed to Los Angeles, Chicago and Ohio.

The newspaper quotes her:

It warned that the United States had retaliated against Pyongyang and that people in the affected areas had three hours to evacuate. It sounded completely legit, and it was loud and got our attention right off the bat… It was five minutes of sheer terror and another 30 minutes trying to figure out what was going on.

Her frightened 8-year-old son crawled under the rug while Lyons and her husband looked at the TV in confusion: why was the station airing the NFC Championship football game, instead of an emergency broadcast?

The couple eventually realized that the warning was coming from their Nest security camera, perched on top of the TV. After multiple calls to 911 – the US emergency number – and to Nest, they eventually figured out that they’d been the victims of a prank. A Nest supervisor told them on Sunday that they’d likely been victims of a “third-party data breach” that gave the webcam hijacker access to the Nest camera and its speakers.

The Lyons family went from terror to anger after they learned that Nest knew about a number of such incidents – though this was the first that involved a nuclear strike – but hadn’t alerted customers. Here’s what Laura Lyons told the Mercury News:

They have a responsibility to let customers know if that is happening. I want to let other people know this can happen to them.

And what she posted on a local family Facebook group:

My son heard it and crawled under our living room rug. I am so sad and ANGRY, but also insanely grateful that it was a hoax!!

At any given time, it’s safe to assume that there’s a slew of Internet of Things (IoT) devices getting hacked, and not because their manufacturers suffered any kind of breach.

Like, say, last month, when an e-intruder used a baby monitor linked to a Nest camera to broadcast “sexual expletives” and threats to kidnap the baby.

Or, say, the creep who hacked into a Nest camera in October to ask a 5-year-old if he’d taken the school bus home, what toys he was playing with, and to shut up when the boy called for his mommy.

Another recent hack came in from a guy who described himself as a white-hat hacker. Don’t mind me, I just want to let you know that your camera is a sitting duck, he told a surprised but grateful Nest owner…

Shields UP! to keep out fake missiles

…who, hopefully, did exactly what that benevolent camera hijacker advised: change his password and set up two-step verification (2SV), also known as multiple- or two-factor authentication (MFA or 2FA).

Whether you call it MFA, 2FA or 2SV, it’s an increasingly common security procedure that aims to protect your online accounts against password-stealing cybercrooks.

For their part, in the wake of the missile prank, the Lyons family disabled the speakers and microphone on their Nest camera, changed the passwords and added 2FA.

Nest parent company Google responded to the white-hat incident, which happened last month, by saying that yes, it’s aware that passwords exposed in other breaches may be used to access its cameras (or your Facebook account, or your credit card account, or your eBay, PayPal or Netflix accounts, or, well, any of your accounts where you reuse the same login). Nest cameras can’t be controlled wirelessly without a username and password created by the device owner.

If Nest owners use that same username/password combination for multiple online services, then miscreants can grab them whenever any of those other services gets breached. Then it’s just a matter of trying to log in all over the place to see what other accounts they can get into with the reused login. Online banking? Email accounts? Social media accounts? All of the above?

It’s known as credential stuffing. Because a lot of users have the bad habit of reusing the same passwords across several websites, the tactic is successful far too often.
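The mechanics of credential stuffing are simple enough to sketch in a few lines. The snippet below is purely conceptual: the services and accounts are made-up in-memory dictionaries, and nothing is actually contacted. It just shows why one leaked password can unlock several unrelated accounts when it’s been reused.

```python
# Conceptual sketch of credential stuffing: one leaked credential pair is
# replayed against several unrelated services. All services and accounts
# here are hypothetical mock data; nothing real is contacted.

LEAKED = ("alice@example.com", "hunter2")  # pair exposed in a single breach

# Mock account databases for three unrelated services. The user reused
# the same password on two of them.
SERVICES = {
    "webmail":  {"alice@example.com": "hunter2"},
    "banking":  {"alice@example.com": "hunter2"},
    "shopping": {"alice@example.com": "different-password"},
}

def try_login(service_db, user, password):
    """Return True if this credential pair works against the mock service."""
    return service_db.get(user) == password

def stuff_credentials(leaked_user, leaked_password):
    """Replay one leaked pair everywhere; report which logins succeed."""
    return [name for name, db in SERVICES.items()
            if try_login(db, leaked_user, leaked_password)]

print(stuff_credentials(*LEAKED))  # the reused password opens two more accounts
```

Real attacks do this at scale with automated tooling against millions of leaked pairs, which is why a password that appears in any breach should be considered burned everywhere it was used.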

On the plus side, Nest devices don’t come with default logins that users need to change to ward off hijackers. It’s up to users to come up with unique, difficult-to-crack username/password combos that they don’t use anywhere else. If you have a hard time creating and remembering unique, properly convoluted passcodes, you might consider using a password manager (see caveats regarding mobile versions) to create, and store, them for you.
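What a password manager does at generation time is nothing exotic: it draws characters uniformly at random from a large alphabet using a cryptographically secure random source. A minimal sketch using Python’s standard-library `secrets` module (the function name and default length are our own choices, not any particular product’s):

```python
import secrets
import string

# Roughly 94 printable ASCII characters: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Return a random password using a cryptographically secure source.

    secrets.choice() is backed by the OS CSPRNG, unlike random.choice(),
    which is predictable and must never be used for secrets.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. a 20-character string, different every run
```

A 20-character password from a 94-symbol alphabet has on the order of 130 bits of entropy, far beyond practical brute force; the hard part is not generating such passwords but storing them, which is exactly the job a password manager takes off your hands.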

The advice not to reuse passwords isn’t meant just to keep your camera from getting hijacked and used to threaten your kids or broadcast bogus nuclear war alerts. It’s to keep your everything from getting broken into, hijacked, used to scare the bejeezus out of you and your family, and/or vacuumed clean of funds, as the case may be.

Of course, a password can get cracked even if it’s unique. That’s what makes 2FA such a good failsafe: even if creeps guess your password, they still have to have that second factor to get into your stuff. At Naked Security, we’re very pro-2FA, and we hope that all Nest owners flip that switch to keep out the camera hijackers.

Use 2FA whenever it’s available, for that matter. But do keep in mind that it’s not infallible. Last month, we saw a sneaky phishing campaign that beat 2FA. The most secure option is to use a FIDO U2F (or the more recent FIDO2) hardware token such as the YubiKey because they will refuse to log you in to an imposter site.
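For the curious, the rolling six-digit codes that authenticator apps produce come from the TOTP algorithm (RFC 6238, built on RFC 4226 HOTP): an HMAC over the current 30-second time counter, dynamically truncated to a few digits. A minimal sketch of the standard SHA-1 variant, verified against the RFC’s published test vector (this illustrates the algorithm generically, not Nest’s or any vendor’s specific implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((now if now is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
# yields the 8-digit code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, digits=8, now=59))  # 94287082
```

Because the code depends on a shared secret and the current time, a stolen password alone is useless without the enrolled device, which is exactly the failsafe described above.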

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/70euV9jYS-0/

Google fined $57m for data protection violations

In a landmark ruling, France’s data protection commissioner has fined Google €50m (around $57m) for violating Europe’s General Data Protection Regulation (GDPR). The fines penalize the search and advertising giant for not giving information to users or obtaining valid consent when gathering data to personalize advertisements.

The fines are the result of an investigation into Google lasting almost eight months. It began when advocacy group None Of Your Business (NOYB) filed a complaint against Google with data protection regulators in Austria, Belgium, Germany and France last May, shortly after GDPR came into force. The French regulator also received a similar complaint from French digital rights advocacy group La Quadrature du Net (LQDN).

France’s regulator, the Commission Nationale de l’Informatique et des Libertés (CNIL), announced on Monday that it agreed with the complaints, finding that Google “excessively” spread privacy information across several places during the Google account creation process.

This information includes what the data would be used for, how long it would be stored, and the types of personal data used to personalize ads. This made it hard for users to discover this information, the CNIL ruling says:

The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions. For instance, this is the case when a user wants to have a complete information on his or her data collected for the personalization purposes or for the geo-tracking service.

Even when users do find that information, it is often vague, the CNIL adds. There are so many services collecting so much data that it is difficult for users to understand everything that their data will be used for.

The extent of Google’s data processing across all these services also invalidates Google’s claims that it obtains consent from users to personalize ads, the organization said:

For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations… and therefore of the amount of data processed and combined.

The company also stumbles on consent gathering by failing to make it “specific and unambiguous”, says the CNIL, failing two key tests under the GDPR. First, it pre-ticks consent boxes during account creation that allow for ads personalization, which counts as an opt-out approach to consent rather than an opt-in one.

Moreover, Google doesn’t gather consent separately to address each use of the user’s data:

The user gives his or her consent in full, for all the processing operations purposes carried out by GOOGLE based on this consent (ads personalization, speech recognition, etc.). However, the GDPR provides that the consent is “specific” only if it is given distinctly for each purpose.

Why did France take the lead?

Normally the lead investigator is the data protection authority in the country where a company has its European headquarters. In this case, the CNIL says that it took the lead because the Data Protection Authority in Ireland, where Google has its European headquarters, didn’t consider itself to have jurisdiction over Google’s account creation processes.

The fines were so high because Google has become such an important and widely used service in France, says the CNIL, with thousands of French people creating Google accounts on Android phones every day. The implications of Google’s data gathering on French citizens are also broad, it warns:

The infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services and almost unlimited possible combinations.

Google isn’t the only company facing GDPR fines over its stewardship of user data. In October, the Irish Data Protection Commission announced an investigation into Facebook following the social giant’s announcement of a data breach affecting 50 million user accounts. NOYB also filed complaints against Facebook and its Instagram and WhatsApp companies along with the Google complaint.

Based on its Q3 2018 revenues, Google earns enough to pay the CNIL fine in under four hours.

NOYB is headed by Max Schrems, the Austrian lawyer whose complaints against Facebook eventually forced the EU to invalidate its Safe Harbor rules and changed the law regarding sharing of European citizens’ data with companies overseas.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2SLLNIZmSpo/