Company plans to integrate Light Point Security’s technology into the McAfee Secure Web Gateway and its Mvision UCE platform.
RSA CONFERENCE 2020 – San Francisco – McAfee has confirmed plans to acquire Light Point Security, with plans to integrate Light Point’s remote browser isolation technology into its Secure Web Gateway and Mvision UCE security offerings.
Baltimore-based Light Point, founded in 2010 by two former NSA employees, was created to change the way businesses protect critical data and applications from Web-based cyberattacks. Its platform isolates browser sessions in a remote virtual environment outside of the corporate network to protect against common threats like ransomware and credential phishing attacks.
McAfee’s acquisition arrives at a time when adversaries are taking aim at browsers. It plans to build Light Point Security’s technology into its Secure Web Gateway and its new Mvision Unified Cloud Edge (UCE) platform, which contains the McAfee Secure Web Gateway, Data Loss Prevention, and Mvision Cloud (CASB). This isn’t the first acquisition McAfee has made to strengthen its Mvision platform; in August 2019, it bought NanoSec to boost Mvision’s container security capabilities.
The investment in this technology will help McAfee deliver a complete implementation of the secure access service edge (SASE) architecture, for which Gartner lists browser isolation as a recommended capability. In a statement, McAfee officials said this implementation will help customers apply a consistent threat protection policy across their networks and software-as-a-service applications, such as Office 365 and other collaboration tools.
Terms of the deal were not disclosed.
There are far more ways to be helpful than adding to the noise of what a company probably did wrong.
It’s natural to become angry and indignant when we see a major breach story in the news. Many of these breaches potentially affect us and those we know, and often some concern about a potential vulnerability is left unaddressed by the company in question.
However, as cybersecurity professionals, we also understand (but sometimes lose sight of) a few key facts that the general populace may not know.
We know, for example, that it is virtually impossible to plug every gap, address every vulnerability, and enforce every security procedure. We know that companies must balance cybersecurity spending against their other business priorities. While cybersecurity may be our primary focus, core business functions consume the majority of an organization’s resources.
We also understand that organizations that deploy strategic security programs do so by willingly assuming an agreed-on level of risk. The goal, of course, is to only accept lower-level risks to the business while mitigating higher-level, core-business-impacting cyber-risks.
Yet even this equation is getting harder to achieve — and we get that. The enterprise attack surface is skyrocketing alongside exponentially growing IT complexity. Organizations are struggling with an ever-expanding security perimeter — it is now every employee with a device — as well as hybrid and multicloud environments, legacy assets, migration initiatives, third-party risk, a patchwork regulatory environment, and the complexity brought by rapid expansion and M&As. The cloud security challenge alone is compounded by an increasingly complex shared-responsibility model. And the human factor will always be a frailty in the enterprise armor that can never be fully mitigated.
Finally, we realize that despite IDC’s prediction that $133.7 billion will be spent on cybersecurity in 2022, up 45% since 2018, threat actors will continue to find a way in. Forrester predicts this year will see “more attackers with more sophisticated tools aimed at a larger attack surface,” and that those attackers will leverage ransomware, artificial intelligence, machine learning, and deep fakes to make enterprises pay (in addition to other common methodologies we see in our business every day). Indeed, ransomware actors take advantage of the very fact that companies must prioritize their core business functions over security — because that is the heart of this malicious tactic.
Given how much we know, why do so many of us continue blaming organizations when they fall victim to a breach? It’s time for us to stop and to advocate more boldly against pointing fingers at cyber victims.
Certainly, every breach means some doorway may have been left open. But in many breaches, it can be difficult to understand the root cause. We can ask whether the victim was properly protecting the data, spending enough on cybersecurity, properly emphasizing the importance of protecting data, ensuring proper configurations, and deploying the right technologies, processes, and policies. Even if a company can’t answer “yes” to each of these questions, we must still ask whether that particular gap contributed to the breach in question. More problematic still, even a company that can answer “yes” to every one of them is not immune to a data breach. Now, who do we blame?
I propose we shift the narrative and our approach. Rather than adding to the noise of what a company probably did wrong, we can offer helpful suggestions for what others can do today. We can assume the role of educators — offering best-practice advice through published content and partnerships, as well as helping organizations sort through the alarmist FUD factor (fear, uncertainty, and doubt) and get to the practical nuts and bolts. We can help companies determine where to prioritize their dollars to reduce the chances of more significant attacks (or reduce response times should one occur), acknowledging they aren’t going to purchase every tool or service available.
We once had a client who said his company’s approach had been to spend virtually any amount of money on security to help improve its security posture. If there was a new tool that looked useful, the company would buy it, even if a similar tool was already deployed. However, rather than helping its security posture, this approach made it extremely difficult to sort actual anomalies in the environment from false alarms. Many vendors would happily continue to sell him every tool in their arsenal — cybersecurity companies have revenue targets, too. A better approach we can all take is to be a strategic partner, helping to reduce complexity and building a base of longer-term trust.
We also need to ensure organizations are realistic about what their security investments can and cannot achieve and that they are planning for the worst-case scenario. They should plan for a data breach and know what should happen and how. Testing incident response and recovery plans can minimize the impact of a significant event and help increase the likelihood of a speedy response and recovery.
Yes, organizations make mistakes, and breaches occur. But the balancing act that company leaders face isn’t easy. Security professionals can assume a more helpful, understanding, and empathetic role, rather than pointing fingers — particularly since we know the complexity of the challenge better than anyone.
Ahead of her keynote at the RSA Conference, Cisco’s head of advisory CISOs outlines to Dark Reading a unique paradigm that asks security teams to stop fighting their users — and start sharing control with them.
RSA CONFERENCE 2020 — San Francisco — End users choosing their own security measures. Kindergarteners using phones without parental controls. Dogs and cats, living together; mass hysteria. Is it anarchy? Or is it simply a better paradigm for enterprise information security that is easier for everyone, less expensive, and actually results in more effective security?
This concept of “democratizing” cybersecurity will be the subject of a keynote session here today by Wendy Nather, head of advisory CISOs at Cisco (formerly Duo).
In an interview with Dark Reading, Nather said she was pondering the questions that the security industry relentlessly asks itself, like, “Why do people keep clicking things that I tell them not to click on?” And also the questions the industry should be asking itself but isn’t, like “Should we just stop telling them not to click on things?”
In rethinking some of these sacred cows, she revisited the idea of democratization — a term she first became familiar with when working with Duo co-founder Dug Song — and again had a question.
“What would democratizing security really look like?” Nather says. “We talk about this, but what could we do concretely?”
Nather breaks it down into three main categories: a move from a control model to a collaboration model; simpler, more usable design; and a more open security culture.
From Control to Collaboration
“We’ve always been thinking very authoritatively, from the very beginning, about security,” Nather says. “You know: ‘We’re the experts. We make the policy. You follow the policy. We control everything. Control the means and computing.’ But, as we know over the last decade-and-a-half or so, users have been taking away that control. They’ve been taking it over.”
The idea then is for security departments to collaborate with the people who need to be secured — and also more closely with the creators of the products that need to be secured.
“What if security were not a control organization but a service organization?” Nather says. “And how would that change how we interact with the people that we serve? And also, what would that look like concretely in architecture?”
If organizations can answer that question, they might also find cost savings because, Nather says, control equals cost.
“Everything that you still need to control is gonna cost you because you have to set policies for it. You have to monitor for compliance. You have to manage exceptions. You have to enforce compliance. All of this costs time and people and money,” she says. “So if you think about it in terms of control equals cost, what would we decide together with a business that it’s not so important for us to control?”
Design for Usability
“What if they could design security to be as easy as a spoon?” Nather says. “We don’t need annual spoon awareness training.”
Simpler design could create less friction for users and make security less frustrating, easier to achieve, and even desirable.
“Really beautiful design encourages security adoption,” Nather says. “As Dug [Song] says, as part of democratizing security, we should be designing for adoption, not engineering to enforce security.”
Open Culture
The infosec field tries to force its culture onto everyone else, Nather says, whether or not the rules and norms of the infosec community make sense in other populations. She gives the example of making kindergartners use passwords before they even know their numbers and letters.
However, Nather says, if the infosec community makes security less mysterious and less controlling, it might prevent sad security history from repeating itself again and again.
“Web came along, and we made a lot of mistakes. And then mobile came along, and we saw the same mistakes we made over and over again that we made with Web. Now, with IoT, we’re seeing them again and the question is, well, why?” Nather says. “And the answer is because it’s a different population developing [these technologies] every time, and they haven’t learned from our mistakes — because these are new people.
“So we have to spread out the security knowledge so that no matter what comes along in the future, anybody can secure it. Not this elite group of people — wizards in the security industry that have all the knowledge but are not sharing it or not adapting it to how everybody else wants to use it. You know, we have to upend that entire model.”
From Helicopter to Free Range
Put all of this together, and a helpful analogy may be this: If the current state of cybersecurity management is akin to “helicopter parenting,” then democratized security is more like “free-range” parenting.
And that analogy can actually be taken quite literally.
Order out of chaos? The saga of Chronicle continues with new security features for the Google Cloud Platform.
RSA CONFERENCE 2020 – San Francisco – Chronicle — the once spun-off, now reabsorbed cybersecurity division of Google Cloud — launched a handful of new features at the RSA Conference, which kicked off in San Francisco this week.
The company plans to demonstrate how its cloud-based threat intelligence service, Backstory, can detect threats and analyze them as part of a timeline, show the tool’s integrated fraud prevention services, and reveal new integrations with partners’ products, such as Palo Alto Networks’ Cortex XSOAR. As part of its rollout at RSA this year, Google Cloud will show how the platform can be used to investigate alerts and detect threats using YARA-L, a language for describing the behaviors and characteristics of cybersecurity threats that is focused on log files.
“Chronicle launched its security analytics platform in 2019 to help change the way any business could quickly, efficiently, and affordably investigate alerts and threats in their organization,” said Sunil Potti, vice president of Google Cloud Security, in a blog post announcing the new features. “This advanced threat detection provides massively scalable, real-time and retroactive rule execution.”
The latest additions to the software come as the company is under scrutiny for what many critics see as a failed spinoff.
In January 2018, Alphabet — Google’s parent company — created Chronicle, with startup CEO Stephen Gillett calling the cybersecurity spinoff an “independent business” in a blog post announcing the launch of the firm. The company included VirusTotal, a virus detection service that submits a file to a variety of malware scanners, and a new cybersecurity analytics platform. At last year’s RSA, 14 months after the company came out of stealth, it launched Backstory, a cloud-based service to bring varied threat intelligence together to give security teams context regarding threats facing their business.
“The missing piece is a powerful investigation, analytics, and hunting system to tie together a customer’s internal network activity, external threat intelligence, and curated internal threat signals,” the company stated at the time. “Such a system would give analysts the context they need to protect their organizations… [that is,] the backstory.”
Last summer, however, Google reabsorbed Chronicle back into its Google Cloud business, potentially reducing the independent threat platform into a service offering on the Google Cloud Platform (GCP). “Chronicle’s products and engineering team complement what Google Cloud offers,” the company said at the time.
Yet the company continues to add partners and new features.
Palo Alto Networks, for example, posed two scenarios that would benefit from Chronicle’s Backstory. First, companies can automate the combining of threat intelligence and network data to identify threats and then respond through a security orchestration, automation, and response (SOAR) platform. Second, the Backstory platform can be used to support interactive, real-time investigations.
Such platforms help reduce the amount of time analysts spend swapping between different dashboards on different products, the company said in a statement.
YARA-L is a modified version of YARA, a rule-based approach to describing malware and threats originally developed by a founder of VirusTotal, an antimalware engine survey platform that was later bought by Google.
“The two capabilities work together so that customers can create powerful detection rules against intelligent, auto-enriched and structured telemetry,” Chronicle stated in a second blog post. “We make this available in our UI as well as through a new API that other security vendors can use to enhance their own products.”
Take a comprehensive approach to better protect your organization. Security hygiene is a must, but also look at your risk posture through a data protection lens.
People have been talking about making the transition to the cloud for more than a decade. The day that happens is no longer in the future: It’s here now. More businesses than ever use multicloud environments to handle an increasing number of workloads as well as software-as-a-service applications for their core business processes. This shift brings cloud security to the forefront — and it’s time for a fresh look at securing business in the cloud.
From a security standpoint, the cloud adds an extra layer of complexity on top of managing an increasingly mobile and demanding workforce. The old ways of building a rigid, static wall around our on-premises IT assets don’t work for the cloud. Securing it requires a much more comprehensive approach.
Some IT professionals make the mistake of thinking basic cloud security is good enough. But there’s a reason cloud data leaks continue to make headlines. One example: In 2019, a hacker gained access to around 100 million Capital One customers’ accounts and credit card applications via a misconfigured Web application firewall. The fact that such a breach could occur the way it did means there’s still something even the largest companies are missing.
But what is it? After speaking with CISOs, CTOs, and chief data officers, it’s clear to me that we’re still not speaking the language of cloud security in a way everybody can easily understand.
In addition, clarity is missing on who owns what part of cloud security and who is doing the necessary work to maintain it throughout the organization. It doesn’t help that there is hidden or abstracted complexity in cloud platforms that’s difficult to account for at all times. Combined with the rapid pace of technology change in general and the pressure that comes from the “push to production” in typical organizations, these factors all add to the overall risk we face in the cloud.
Here are three steps to consider to make sure your cloud security is as modern as your business:
1. Converge Your People
Align your IT decision-makers and your IT organization around an end-to-end view of infrastructure security and information protection that includes cloud environments. Start by standardizing the language used to discuss cloud and data security. It’s the first step in furthering your organization’s collective understanding.
Ongoing training also plays a role. Properly educated employees and users form a foundational piece of your security “stack,” since they often serve as a first or potentially last line of defense. Instilling the idea that everyone plays a role in a security protection chain is vital because any disconnect within or between an organization’s IT teams or employees can create exploitable gaps.
2. Converge Your Services and Tools
There’s an interesting phenomenon regarding tools. CISOs are rightfully trying to reduce the number of diverse tools they have to manage. But on the cloud development side, there has been an explosion of tools and services being built for specific purposes. While many of these are consumed as managed services, they introduce a new demand on resources and potentially can increase risk. This is likely to continue until we see simplification and convergence.
As such, most CISOs are always looking to replace point solutions with integrated and converged security platform solutions. Seek platforms that provide security in the following areas:
Data loss prevention (DLP)
Endpoint protection
Network security (firewall-as-a-service and Secure SD-WAN)
Cloud security (cloud access security broker, or CASB, and cloud security posture management, or CSPM, if you deploy your own code)
A converged platform allows deployment of consistent policies across all levels and locations of your organization, and, importantly, it simplifies security management and gives data-flow-level visibility across your organization (from endpoint to cloud via the network). It also allows for real-time updates and responsiveness to changes in the organization and regulatory environment. GDPR and the California Consumer Privacy Act are just the beginning. As additional privacy regulations roll out, new policies will need to follow. Preparing for these is a must for most CISOs and data protection officers.
Modern cloud security incorporates solutions for a growing list of requirements: DLP, Web security, CASB, next-gen firewalls, elements of trust, etc. These should be complemented by behavioral analytics in order to apply the right level of user access controls across changing and disparate systems. Indicators of behavior, or IOBs, are the modern way to look at how users interact with company data, systems, and apps.
That’s why today’s cloud security requires converged services. Leveraging them is key to reducing the number of tools in your security arsenal, maximizing effectiveness, and easing operational burden.
3. Plan for a Data-Fluid Future
There’s no way to put the genie back in the bottle. Cloud adoption will continue to accelerate because of the benefits it provides: cost and efficiency gains for businesses while offering employees flexibility to get work done wherever they are.
But adopting the cloud means your organization’s data moves between users, apps, and cloud environments in more dynamic ways. All of this requires a data-centric approach to security protocols. Deciding to migrate workloads or adopt new cloud applications is the easy part; maintaining the right level of corresponding permissions and policies won’t happen without a clear cloud security and information protection strategy.
Consider making changes to how you test your security framework. For example, in the past, penetration testing once a month might have made sense, but rapid alterations to cloud-based apps usually require more frequent checks to ensure new vulnerabilities or attack surfaces are addressed.
There’s no magic bullet for getting cloud security right. Take a comprehensive approach to better protect your organization. Security hygiene is still a must, but also look at your risk posture through a data protection lens and implement DLP and behavioral analytics. Endeavor to give everybody who touches data and the cloud a common language of cloud security they can all understand. And stay on your toes — the future is only getting cloudier.
Barely noticed by web users, the life expectancy of SSL/TLS leaf certificates has dropped dramatically over the last decade.
SSL/TLS certificates form the foundation of HTTPS authentication, and just over a decade ago domain registrars were selling certificates that were valid for between 8 and 10 years.
In 2011, the Certification Authority Browser Forum (CA/Browser Forum), an industry body that includes all the big browser makers, decided this was too long and imposed a limit of five years.
Then, in 2015 the time limit was dropped to three years, followed by a further drop in 2018 to only two years.
How low could this go?
This week, we learned that the latest answer is one year, or 398 days including the renewal grace period, a change that will apply from 1 September 2020.
What makes this new limit noteworthy, however, is that it was reportedly announced at a CA/Browser Forum meeting by a single member, Apple, in relation to one browser, Safari.
Although not yet officially confirmed, it’s a bold move that presumably prefigures similar announcements by other big browser makers, especially Google, which has assiduously promoted the idea of a one-year limit in recent CA/Browser Forum ballots.
That browser makers were voted down might explain why Apple has decided to enforce the change unilaterally, apparently against the wishes of the Certificate Authorities (CAs) which issue certificates as a business.
The browser makers are adamant that reducing validity is good for security because it reduces the time period in which compromised or bogus certificates can be exploited.
In theory, it also makes it less likely that in future, certificates using retired encryption (certificates based on SHA-1 being a prime example) will be able to soldier on when everyone knows they are vulnerable.
Hassle factor
In the real world, it’s a lot more complicated. CAs fear their customers will struggle to cope with the practical difficulties of renewing certificates – and changing the private keys used to authenticate them – more often.
Renewals can be done using automated tools, but it seems that many organisations still manage the process manually. Considering that some will have thousands of certificates to look after, doubling the frequency of renewals may well create problems that the CAs will need to take care of.
What, in practical terms, does all this mean for certificate admins and browser users?
For current certificates, not much. These will still be valid until their stated expiry date, even if that’s after 1 September 2020. After that, assuming CAs don’t stop selling the old two-year certificates, Safari users (plus users of other browsers adopting the same policy) visiting a site with a two-year certificate issued after the deadline will see off-putting ‘website not secure’ warning messages.
That isn’t going to happen, of course, because the CAs know perfectly well that browser makers, the web’s gatekeepers, hold all the cards.
More likely, they’ll start offering automation of their own, multi-year plans, and discounts for organisations that sign up for longer time periods. A solution will be found that lightens the burden and stops alarming messages appearing for otherwise genuine certificates.
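In the meantime, certificate admins who want to know what they currently have deployed can read a site’s leaf certificate programmatically. Here’s a minimal Python sketch using only the standard library; example.com is a placeholder, and the connection assumes the server presents a certificate your system already trusts:

```python
# Report how many days a site's leaf certificate was valid for when issued.
import socket
import ssl

def cert_lifetime_days(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated certificate, as a dict
    issued = ssl.cert_time_to_seconds(cert["notBefore"])
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - issued) / 86400  # seconds per day

if __name__ == "__main__":
    print(round(cert_lifetime_days("example.com")), "days")
```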
The question is where things go from here. If certificates are a security risk, why not move to even shorter renewal time periods that reduce the window of opportunity?
With increasing automation and adjusted business models that reduce the financial burden, it’s possible that even one year might one day sound like a long time for a certificate to remain valid. Watch that padlock.
New Mexico Attorney General Hector Balderas is suing Google over its alleged slurping of students’ data off of the free Chromebooks it passes out to needy schools and from its free G Suite for Education products, including Gmail, Calendar, Drive, Docs, Sheets, and other apps.
According to the complaint, which was filed in the US District Court for the District of New Mexico on Thursday, Google has marketed its suite – formerly known as Google Education – to schools, parents and children as a “free and purely educational tool”, but in actuality, it comes “at a very real cost that Google purposefully obscures.”
Balderas said in a statement that Google has secretly collected information including students’ geolocation information, internet history, terms that students have searched for on Google, videos they’ve watched on YouTube, personal contact lists, saved passwords, voice recordings, and more, in violation of federal law. The Children’s Online Privacy Protection Act (COPPA) requires that companies such as Google obtain verifiable parental consent before collecting personal data from children under the age of 13.
The AG is also accusing Google of violating one of the state’s consumer protection laws, the Unfair Practices Act.
Balderas also released a copy of a letter that he sent to Google CEO Sundar Pichai on Wednesday. In that letter, Balderas said that his office had conducted an investigation that concluded that Google’s alleged data siphoning appears to be active and ongoing.
My investigation has revealed that Google tracks children across the internet, across devices, in their homes, and well outside the educational sphere, all without obtaining verifiable parental consent.
Google has used its access to collect “massive” amounts of data from young children, Balderas said, not to benefit the schools with which Google has contracted, but to profit off it. It’s not just that Google’s sucked it all up into its own gaping maw, the AG said, but that the data can spread “across the globe” in ways both legitimate and otherwise.
He called on Google to immediately cease and desist its alleged data collection, bringing up the specters of having children’s data used to market to them or having it wind up for sale on the dark web, “hosted in countries well beyond the reach of law enforcement.”
Well, it’s not as if it hasn’t happened in the past. In September 2019, the Federal Trade Commission (FTC) fined Google $170 million for illegally sucking up kids’ data so it could target them with ads.
In response, Google’s YouTube subsidiary decided to sit out the thorny task of verifying age, instead passing the burden on to content creators, leaving them liable for being sued over COPPA violations, even if the creators themselves think that their content is meant for viewers over the age of 13.
According to the New Mexico lawsuit, Google Education is now used by more than 80 million educators and students in the US, including more than 25 million who use Chromebooks in school. To drive up adoption in schools, Google has publicly promised that it takes students’ privacy seriously and that it will never mine student data for its own commercial purposes, the lawsuit says.
It’s broken those promises, the lawsuit says, pointing to Google’s response to a Congressional inquiry into the privacy practices associated with Google Education, in which it admitted to using students’ data – extracted and stored in profiles – for “product improvement and product development.”
The New Mexico Office of the Attorney General filed a similar lawsuit against Google and several other tech companies in September 2018, alleging illegal data collection from child-directed mobile apps. The companies have denied wrongdoing, and the case is awaiting a decision by a federal judge in Albuquerque.
Balderas has let New Mexico schools know that nobody’s going to yank Chromebooks out of students’ hands – at least, not in the short term: there’s “no immediate harm to the continued use of these products,” according to the statement from the AG’s office, and “this lawsuit should not interrupt daily instruction in our schools.”
Google’s response: Horsefeathers!
This is all nonsense, Google says: in a statement, the company called the lawsuit’s claims “factually wrong.” From the statement that spokesperson Jose Castaneda sent to media outlets:
G Suite for Education allows schools to control account access and requires that schools obtain parental consent when necessary. We do not use personal information from users in primary and secondary schools to target ads.
According to Google’s G Suite for Education information page, it contractually requires that schools get parental consent in order to use the services – consent that’s required by COPPA.
That smart home speaker isn’t listening to everything you say, according to new research – but it is listening a lot more than it should. Researchers have found some speakers activating by mistake up to 19 times each day.
Virtual assistants like Siri and Alexa are programmed not to listen to your conversation constantly. Instead, they listen for a ‘wake phrase’. When they hear it, it’s their cue to listen to what you subsequently say, which could be an instruction or a request. Google Assistant responds to “OK Google”, Apple’s Siri perks up when you say “Hey Siri” and Microsoft’s Cortana pricks up its digital ears when you say “Hey Cortana”.
The problem is that just like humans, virtual assistants often mishear things. Siri might think that “Seriously” sounds enough like its wake word to start listening to what you’re saying, but that’s just one of a range of sounds that might trigger it. That’s why it’s been reported recording everything from sex to criminal deals.
Until now, we haven’t known just how (in)accurate these voice assistants are at listening for wake phrases. Thanks to research by academics at Northeastern University and Imperial College London, now we do. It turns out they’re not that accurate at all.
The researchers wanted to simulate real-world conditions, so they set up a variety of smart speakers with embedded virtual assistants and played them 125 hours of audio from various Netflix shows ranging from The Office to The Big Bang Theory and Narcos. They tested the first-generation Google Home Mini, Apple’s first-generation HomePod, Amazon’s second- and third-generation Echo Dot, and the Harman Kardon Invoke, which has Microsoft’s Cortana embedded.
The researchers detected when speakers were recording by capturing video feeds to determine whether their lights activated, and by monitoring the network to spot any traffic that they were sending back to the cloud. They also checked their cloud accounts to watch for any self-reported recordings.
They found that devices would activate up to 19 times each day on average. The HomePod device was the worst, with an over-enthusiastic Siri switching on for lots of phrases. Speech that triggered it started with “Hi” or “Hey” followed by something starting with something sounding like an “S” and a vowel, or something that sounds like “ri”. Examples of speech that set it off included “He clearly”, “Hey sorry” or “I’m sorry”, and “Okay, yeah”, so watch who you’re apologising to or agreeing with. Even “historians” would set it off.
When the devices did wake up, they’d often do so for relatively long periods. The HomePod and the Echos would wake up for at least six seconds more than half the time. The second-generation Echo Dot and the Harman Kardon speaker had the longest activations, earwigging for between 20 and 43 seconds.
Amazon’s Echo Dot 3 mistakenly woke up the fewest times, and has by far the widest range of wake-up phrases. You have to set the chosen wake word in advance, so we can assume the researchers ran the test using each wake word – “Alexa”, “Amazon”, “Echo”, or “Computer”.
… we found activations with words that contain “k” and sound similar to “Alexa,” such as “exclamation”, “kevin’s car”, “congresswoman”
An “Amazon”-enabled Dot did apparently wake up when it heard “My pants on” which could be potentially, um, embarrassing, depending on the context.
Every show caused at least one device to wake up, and most shows woke up multiple devices. However, the results were mostly inconsistent. The team experimented with each device 12 times (other than the Harman Kardon speaker, which only got four tests). Only 8.44% of the activations occurred consistently across 75% of the tests. The researchers said:
This could be due to some randomness in the way smart speakers detect wake words, or the smart speakers may learn from previous mistakes and change the way they detect wake words.
That inconsistency compounds a known problem with AI-driven devices; they’re opaque. AI algorithms can’t explain what they do. They’re black boxes that produce results based on statistical models. There isn’t a procedural set of instructions that you can follow to predict their results. It’s a problem that distances us from the tech, putting it outside our complete control.
There were some upsides, though. Despite some past incidents, they found no evidence that these devices were always recording people’s conversations in their tests.
The good news is that you can turn off active listening on many of these devices, although doing so might leave you with a relatively expensive Bluetooth speaker unless your hardware has an alternative tap-to-talk option. In the meantime, be careful what you say – particularly immediately after mentioning Radiohead’s ground-breaking third studio album “OK Computer”.
SophosLabs has just published a detailed report about a malware attack dubbed Cloud Snooper.
The reason for the name is not so much that the attack is cloud-specific (the technique could be used against pretty much any server, wherever it’s hosted), but that it’s a sneaky way for cybercrooks to open up your server to the cloud, in ways you very definitely don’t want, “from the inside out”.
The Cloud Snooper report covers a whole raft of related malware samples that our researchers found deployed in combination.
It’s a fascinating and highly recommended read if you’re responsible for running servers that are supposed to be both secure and yet accessible from the outside world – for example, websites, blogs, community forums, upload sites, file repositories, mail servers, jump hosts and so forth.
In this article, we’re going to focus on just one of the components in the Cloud Snooper menagerie, because it’s an excellent reminder of how devious crooks can be, and how sneakily they can stay hidden, once they’re inside your network in the first place.
If you’ve already downloaded the report, or have it open in another window, the component we’re going to be talking about here is the file called snd_floppy.
That’s a Linux kernel driver used by the Cloud Snooper crooks so that they can send command-and-control instructions right into your network, but hidden in plain sight.
If you’ve heard of steganography, which is where you hide snippets of data in otherwise innocent-looking files such as videos or images where a few “noise” pixels won’t attract any attention, then this is a similar sort of thing, but for network traffic.
As we say in the steganography video that we linked to in the previous paragraph:
You don’t try and scramble the message so nobody can read it, so much as deliver a message in a way that no one even realises you’ve sent a message in the first place.
In-band signalling
The jargon term for the trick that the snd_floppy driver uses is in-band signalling, where you use unexceptionable but unusual data patterns in regular network traffic to denote something special.
Readers whose IT careers date back to the modem era will remember – probably unfondly – that many modems would “helpfully” interpret three plus signs (+++) at any point in the incoming data as a signal to switch into command mode, so that the characters that came next would be sent to the modem itself, not to the user.
So if you were downloading a text file with the characters HELLO+HOWDY in it, you’d receive all those characters, as expected.
But if the joker at the other end deliberately sent HELLO+++ATH0 instead, you would receive the text HELLO, but the modem would receive the text ATH0, which is the command to hang up the phone – and so HELLO would be the last thing you’d see before the line went dead.
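For flavour, here’s a toy re-creation of that in-band escape in Python. It’s illustrative only – real Hayes-style modems also required a guard time of silence around the “+++” before honouring it as an escape:

```python
# Split an incoming byte stream the way a naive modem would: everything
# before "+++" goes to the user, everything after goes to the modem.
def demodem(stream: bytes):
    data, _, command = stream.partition(b"+++")
    return data, command  # a command of b"ATH0" would hang up the line

print(demodem(b"HELLO+HOWDY"))   # (b'HELLO+HOWDY', b'')
print(demodem(b"HELLO+++ATH0"))  # (b'HELLO', b'ATH0')
```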
This malware uses a similar, but undocumented and unexpected, approach to embedding control information in regular-looking data.
The crooks can therefore hide commands where you simply wouldn’t think to watch for them – or know what to watch for anyway.
A sneaky name
In case you’re wondering, there isn’t a legitimate Linux driver called snd_floppy, but it’s a sneakily chosen name, because there are plenty of audio drivers called snd_somethingorother on a typical Linux system, as you can see by listing them yourself.
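Here’s a minimal Python sketch for producing such a list, assuming the standard module layout under /lib/modules (module filenames use dashes where the loaded module names use underscores):

```python
# List the snd_* sound drivers shipped for the running kernel.
import os
import pathlib

moddir = pathlib.Path("/lib/modules") / os.uname().release / "kernel/sound"

for path in sorted(moddir.rglob("snd*.ko*")):         # .ko, .ko.xz, .ko.zst ...
    print(path.name.split(".")[0].replace("-", "_"))  # snd-hda-intel -> snd_hda_intel
```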
In real life, the bogus snd_floppy driver has nothing to do with floppy disks, emulated or real, and nothing to do with sound or audio support.
What snd_floppy does is monitor innocent-looking network traffic, looking for “in-band” characteristics that act as secret signals.
There are lots of things that “sniffer-triggered” malware like this could look out for – slightly weird HTTP headers, for instance, or web requests of a very specific or unusual size, or emails with an unlikely but not-too-weird name in the MAIL FROM: line.
But snd_floppy has a much simpler and lower-level trick than that: it uses what’s called the network source port for its sneaky in-band signals.
You’re probably familiar with TCP destination ports – they’re effectively service identifiers that you use along with an IP address to denote the specific program you want to connect to on the server of your choice.
When you make an HTTP connection, for example, it’s usually sent to port 80, or 443 if it’s HTTPS, on the server you’re reaching out to, denoted in full as http://example.com:80 or https://example.com:443. (The numbers are typically omitted whenever the standard port is used.)
Because TCP supports multiple port numbers on every server, you can run multiple services at the same time on the same server – the IP number alone is like a street name, with the port number denoting the specific house you want to visit.
But every TCP packet also has a source port, which is set by the other end when it sends the packet, so that traffic coming back can be tracked and routed correctly, too.
Now, the destination port is almost always chosen to select a well-known service, which means that everyone sticks to a standard set of numbers: 80 for HTTP and 443 for HTTPS, as mentioned above, or 22 for SSH, 25 for email, and so on.
But TCP source ports only need to be unique for each outbound connection, so most programmers simply let the operating system choose a port number for them, known in the jargon as an ephemeral port.
Ports are 16-bit numbers, so they can vary from 1 to 65535; ephemeral ports are usually chosen (randomly or in sequence, wrapping around back to the start after the end of their range) from the set 49152 to 65535.
Windows and the BSD-based operating systems use this range; Linux does it slightly differently, usually starting at 32768 instead – you can check the range used on your Linux system as shown below.
On our Linux system, for example, ephemeral (also known as dynamic) ports vary between 32768 and 60999.
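Here’s a minimal way to read that range yourself; the path below is the standard Linux procfs location, so this sketch assumes a Linux host:

```python
# Linux keeps the ephemeral ("dynamic") source-port range in procfs.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())

print(f"ephemeral ports: {low}-{high}")  # commonly: ephemeral ports: 32768-60999
```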
But there are no rules to say you can’t choose numbers outside the ephemeral range, and most firewalls and computers will accept any legal source port on incoming traffic – because it is, after all, legal traffic.
You can see where this is going.
Secret source port signals
The devious driver snd_floppy uses the usually unimportant numeric value of the TCP source port to recognise “secret signals” that have come in from outside the firewall.
The source port – just 16 pesky bits in the entire packet – is what sneaks the message in through the firewall, whereupon snd_floppy will perform one of its secret functions based on the port number, including:
Extract and launch a malware program. The malware program is packaged up as data inside the driver and is only extracted and run when this command arrives. This means the malware program itself isn’t visible when it’s not in active use. (Source port=6060.)
Redirect this packet to the malware. This means that packets unexceptionably aimed at, say, a web server – traffic that the firewall will typically accept – can be sneakily diverted once inside to act as malware command-and-control signals. (Source port=7070.)
Terminate and remove the running malware. This not only kills the malware process but also gets rid of its program file when it is no longer active. You won’t find the malware file because it will no longer be there. (Source port=9999.)
Divert this packet to the internal SSH server. If SSH (typically used for remote logins) is blocked from the outside, the crooks can now sneak their SSH traffic in via, say, the web server’s TCP port and then have it diverted once it’s through. (Source port=1010.)
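To make the mechanism concrete, here’s a minimal detection-side sketch in Python that watches inbound IPv4 TCP traffic and flags packets whose source port matches one of the command values above. It’s illustrative only, not the malware driver itself: it needs root, uses Linux-only AF_PACKET sockets, and the interface name eth0 is a placeholder:

```python
# Flag TCP packets whose source port matches a Cloud Snooper command value.
import socket
import struct

COMMAND_PORTS = {
    6060: "extract and launch payload",
    7070: "redirect packet to malware",
    9999: "terminate and remove malware",
    1010: "divert packet to internal SSH server",
}

ETH_P_ALL = 0x0003  # capture every protocol on the wire

def watch(interface: str = "eth0") -> None:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
    s.bind((interface, 0))
    while True:
        frame, _ = s.recvfrom(65535)
        if struct.unpack("!H", frame[12:14])[0] != 0x0800:  # IPv4 only
            continue
        ip = frame[14:]              # skip the 14-byte Ethernet header
        if ip[9] != 6:               # IP protocol number 6 = TCP
            continue
        ihl = (ip[0] & 0x0F) * 4     # IP header length in bytes
        sport = struct.unpack("!H", ip[ihl:ihl + 2])[0]
        if sport in COMMAND_PORTS:
            print(f"suspicious source port {sport}: {COMMAND_PORTS[sport]}")

if __name__ == "__main__":
    watch()
```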
Sure, the crooks are taking a small risk that traffic that wasn’t specially crafted by them might accidentally trigger one of their secret functions, which could get in the way of their attack.
But most of the time it won’t, because the crooks use source port numbers below 10000, while conventional software and most modern operating systems stick to source port numbers of 32768 and above.
What to do?
If you’re worried about this particular malware, you could try setting special rules in your firewall to block the control packets specific to Cloud Snooper.
For details of the port numbers used and what they are for, please see the full Cloud Snooper report.
As suggested above, there is a small chance that source port filtering of this sort might block some legitimate traffic, because it’s not illegal, merely unusual, to use source port numbers below 32768.
Also, the crooks could easily change the “secret numbers” in future variants of the malware, so this would be a temporary measure only.
There are five TCP source port numbers that the driver watches out for, and one UDP source port number. Ironically, leaving just TCP source port 9999 unblocked would allow any “kill payload” commands to get through, thus allowing the crooks to stop the malware but not to start it up again.
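As a rough illustration, a few lines of Python can emit firewall rules for the four TCP command ports named in this article (the full report lists the remaining TCP port and the UDP port; iptables matches source ports with --sport):

```python
# Print iptables rules dropping inbound TCP packets whose source port is a
# Cloud Snooper command value. Note the irony described above: omitting
# 9999 from this list would still let "kill payload" commands through.
KNOWN_TCP_COMMAND_PORTS = [1010, 6060, 7070, 9999]

for port in KNOWN_TCP_COMMAND_PORTS:
    print(f"iptables -A INPUT -p tcp --sport {port} -j DROP")
```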
If you aren’t already, consider using a Linux anti-virus that can detect and prevent malware files from launching.
This will help you to spot and stop dangerous files of many types, including rogue kernel drivers, unwanted userland programs, and malicious scripts.
Revisit your own remote access portals – pick proper passwords, and use 2FA.
Crooks need administrator-level access to your network to load their own kernel drivers, which means that by the time you are vulnerable to an attack like Cloud Snooper, the crooks are potentially in control of everything anyway.
Many network-level attacks where criminals need root or admin powers are made possible because the crooks find their way in through a legitimate remote access portal that wasn’t properly secured.
Review your system logs regularly.
Yes, crooks who already have root powers can tamper with your logging configuration, and even with the logs themselves, making it harder to spot malicious activity.
But it’s rare that crooks are able to take over your servers without leaving some trace of their actions – such as log entries showing unauthorised or unexpected kernel drivers being activated.
The only thing worse than being hacked is realising after you’ve been hacked that you could have spotted the attack before it unfolded – if only you’d taken the time to look.
Hey, Linux fans! Microsoft has got your back over fileless threats. Assuming you’ve bought into the whole Azure Security Center thing.
Hot on the heels of a similar release for Windows (if by “hot” you mean “nearly 18 months after”) comes a preview aimed at detecting that breed of malware that inserts itself into memory before attempting to hide its tracks.
A fileless attack tends to hit via a software vulnerability, inject a stinky payload into an otherwise fragrant system process and then lurk in memory. The malware also attempts to remove any trace of itself on disk, which makes disk-based detection tricky.
Since the malware hides in RAM, a reboot generally gets rid of the thing. However, Linux servers tend not to be rebooted as frequently as certain other operating systems, so, once infected, the malware can linger in memory, performing its nefarious activities.
An example of such an infection would be an attacker spotting a vulnerable service on an exposed port, copying a malware package and executing it. A few hops, skips and jumps later, and the malware could be listening for TCP instructions, having ensured any trace of itself in the file system has been removed.
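Incidentally, one crude way to hunt for that kind of residue on a Linux box: a process whose binary was deleted after launch keeps running from memory, and its /proc/&lt;pid&gt;/exe link then ends in “(deleted)”. A minimal heuristic sketch follows – this is emphatically not how Microsoft’s scanner works internally:

```python
# Find running processes whose on-disk executable has been deleted.
import os

def deleted_binaries():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            target = os.readlink(f"/proc/{pid}/exe")
        except (FileNotFoundError, PermissionError, ProcessLookupError):
            continue  # kernel threads, vanished or protected processes
        if target.endswith(" (deleted)"):
            yield pid, target

if __name__ == "__main__":
    for pid, target in deleted_binaries():
        print(f"PID {pid} is running from a deleted file: {target}")
```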
A properly locked-down server would, of course, also mitigate things somewhat.
Only security-relevant metadata
Microsoft’s detection feature scans the memory of all processes for the tell-tale footprint of a fileless toolkit, shrieking a warning in the Azure Security Center along with some details of the nasty. An admin can then decide what action to take (and what further investigation is needed).
The scan, according to the Windows giant, is not invasive, and the “vast majority” take less than five seconds to run. More importantly for those fearful of slurpage, memory analysis is performed on the host itself and the results only contain “security-relevant metadata and details of suspicious payloads”.
Unsurprisingly, once signed up for the preview, you’ll need the Log Analytics Agent for Linux installed, along with a supported distribution (the usual suspects: Red Hat Enterprise Linux, SUSE, Ubuntu and Debian are all included in the list). You will also need to be in the Standard or Standard Trial pricing tier to play.
Microsoft isn’t the only outfit squaring up to fileless threats. Kaspersky has been quick to trumpet its effectiveness and Trend Micro points to some alarming statistics concerning the surge in threats as criminals seek different means to compromise systems.
However, as its love-in with Linux continues (heck, a large chunk of Azure is running the OS), Microsoft has decided that maybe, just maybe, the lessons learned monitoring its proprietary OS could be extended elsewhere. ®