
Lessons from the NSA: Know Your Assets

Chris Kubic worked at the National Security Agency for the past 32 years, finishing his tenure as CISO. He talks about lessons learned during his time there and what they mean for the private sector.

Over the past decade, Chris Kubic has worked to secure the US intelligence community’s information systems and environment, architected the National Security Agency’s cloud infrastructure, and, for his final three years, led the agency’s network-security efforts as chief information security officer.

His lesson for the private sector? Focus on your assets. Too few companies know what is in their environments — the first step to knowing what to protect, he says.

“Securing the NSA is not that much different to securing a large, complex environment in the commercial sector,” he says. “It comes down to this: You just need to be good with the basic blocking and tackling of cybersecurity. What I mean by that is you really need to start out by gaining a very firm understanding of the environment that you are trying to secure.”

Kubic, who joined Fidelis Cybersecurity in October, spent 32 years at the NSA, working his way up from security project engineer to CISO. During that time, the National Security Agency has had many successes but also quite a few high-profile information-security failures, from the massive leak of operational data and tools by contractor Edward Snowden to the overreach of a surveillance program that members of Congress are now calling to end. In July, NSA director General Paul Nakasone argued for the agency to refocus on cybersecurity, following years of reportedly failing to prioritize the discipline.

While Kubic would not discuss the details of his time at the NSA, he did discuss his thoughts on cybersecurity strategy that could be applied to the private sector. A key component of that is situational awareness, he says.

“You need to understand all your assets that are part of the enterprise that you are securing,” Kubic says. “You need to understand how those assets are configured, where they are deployed, how they are connected, how they are patched and updated — all those kinds of details are important when you are trying to secure your domain.”

With nation-states continuing to develop their capabilities, assistance from the government will become increasingly important, he says. The government has a lot to offer companies, he adds.

“Coming from the government, not surprisingly, I think it is a good idea to have more government involvement to defend nations’ networks,” Kubic says. “It gets a little dicey when you are talking about private industry, and it’s really hard to mandate government involvement in that, but I think the government has a lot to offer, and certainly can offer, to be that independent broker that helps by connecting the dots across industries.”

Looking toward 2020, Kubic sees the application of machine learning and artificial intelligence to security as inexorable. With a shortage of knowledgeable workers, using analytics to sift through the massive amounts of data produced by companies and detect threats is critical, he says.

“On the cybersecurity side, the continued use of analytics, machine learning, and artificial intelligence will be important in securing our environments, but on the negative side, I think we will see a lot more adversary use of such technology to disguise their attacks, making it even harder for folks to defend their network,” he says.

The majority of cybersecurity workers see machine learning and artificial intelligence as a benefit. A July 2018 report by the Ponemon Institute found 60% of respondents believed AI would help them enrich security information and simplify the detection of, and response to, security threats. Only a third of respondents, however, thought the technology would reduce the workload for security teams.

For companies overwhelmed with data, Kubic recommends the security team identify the truly critical data and systems and focus on making sure those are secure. 

“Not all assets are created equal,” he says, “so you really need to sit down and understand what are those high-value assets that need to be protected better than the rest of the enterprise. That is either where you have your critical data stored or assets that are supporting your business’ critical functions. It is not easy to map out those workflows, but without that it is hard to prioritize where you need to make investments in cybersecurity.”

Most of all, companies need to do the basics right, he says. 

“In 2020, people need to stay focused on the basics of cyber hygiene,” Kubic says. “It is the unpatched systems and unmitigated vulnerabilities that are the easy button for the cyber adversary. You cannot take your eye off the prize there.”

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “The Next Security Silicon Valley: Coming to a City Near You?”

 

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/careers-and-people/lessons-from-the-nsa-know-your-assets/d/d-id/1336596?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

You had one job, Cupertino: Apple’s Intelligent Tracking Protection actually gets tracking protection

Apple on Tuesday updated its Intelligent Tracking Protection (ITP) system in its WebKit browser engine because the anti-tracking mechanism could itself be used to track people.

While ITP has been somewhat effective, making Safari users more opaque and less valuable in the behavioral ad targeting ecosystem than cookie-laden Chrome users, it still has gaps. Recently, Google security researchers found a way to use ITP for the very thing it was created to stop and passed their findings on to Apple, to the potential detriment of their future ad revenue.

The iPhone biz acknowledged the tip in a WebKit blog post on Tuesday, somewhat masked by a slew of security updates. The software updates to Apple’s various operating systems and browser address three WebKit vulnerabilities (CVE-2019-8835, CVE-2019-8844, and CVE-2019-8846) that permit malicious web content to execute arbitrary code, but these appear to be unrelated. While the blog post thanks Google for the responsible disclosure, it doesn’t go into details about the ITP issue identified.

The blog post, by WebKit security and privacy engineer John Wilander, says only that Google researchers provided Apple with a report that explores “both the ability to detect when web content is treated differently by tracking prevention and the bad things that are possible with such detection.”

Wilander said that the report led to several ITP changes and promised to credit Google’s researchers in future security release notes.

Though not specific about the Google disclosure, Wilander’s post explains that WebKit’s tracking prevention system could itself be used as a mechanism for tracking. Hence the title of the post, “Preventing Tracking Prevention Tracking.”

“Any kind of tracking prevention or content blocking that treats web content differently based on its origin or URL risks being abused itself for tracking purposes if the set of origins or URLs provide some uniqueness to the browser and webpages can detect the differing treatment,” Wilander explains.

That makes it sound as if ITP could function as a browser fingerprinting vector, conveying information that could be used to follow users around the web despite the ostensible tracking protection. To fix this, the WebKit team implemented three changes.

First, ITP now trims all cross-site Referer headers down to the page origin, omitting additional path information.

When a browser user clicks on a link, the browser generally adds an HTTP header labelled “Referer” (rather than the properly spelled “referrer”) that contains the URL of the originating web page in the request sent to the destination web page. When that link points to a different domain, that’s a cross-site request.

For example, clicking on a devclass.com link from a theregister.co.uk webpage would send a cross-site request header, the URL of the referring Register page, which would go in the Referer header field.
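In code terms, the new behavior amounts to stripping everything after the origin. A rough sketch in Python (the function name is ours; Safari’s actual implementation lives inside WebKit):

```python
from urllib.parse import urlsplit, urlunsplit

def trim_referer(referer_url: str) -> str:
    """Reduce a full Referer URL to its page origin (scheme + host),
    dropping path, query, and fragment, as ITP now does for
    cross-site requests."""
    parts = urlsplit(referer_url)
    return urlunsplit((parts.scheme, parts.netloc, "/", "", ""))

print(trim_referer("https://theregister.co.uk/example.html?utm=rss#s1"))
```

So a referring page at theregister.co.uk/example.html would be reported to the destination site only as https://theregister.co.uk/.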


Henceforth, a Safari user on theregister.co.uk/example.html will send only theregister.co.uk and not the /example.html portion of the origin URL.

Second, ITP now denies all third-party requests access to their cookies unless the associated first-party website has already established user interaction.

Finally, WebKit has revised the way it handles calls to document.hasStorageAccess(), an API put in place last year that allows embedded cross-site content to authenticate users (request access to first-party cookies) who are already logged in to first-party services while still limiting tracking. The API now weighs Safari’s cookie policy when invoked, so it may deny access in certain situations.

The Register asked Apple to provide more details about Google’s disclosure. You can imagine how that went. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/apple_intelligent_tracking_protection/

LightAnchors array: LEDs in routers, power strips, and more, can covertly ship data to this smartphone app

Video A pentad of bit boffins have devised a way to integrate electronic objects into augmented reality applications using their existing visible light sources, like power lights and signal strength indicators, to transmit data.

In a recent research paper, “LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces,” Carnegie Mellon computer scientists Karan Ahuja, Sujeath Pareddy, Robert Xiao, Mayank Goel, and Chris Harrison describe a technique for fetching data from device LEDs and then using those lights as anchor points for overlaid augmented reality graphics.

As depicted in a video published earlier this week on YouTube, LightAnchors allow an augmented reality scene, displayed on a mobile phone, to incorporate data derived from an LED embedded in the real-world object being shown on screen.

Unlike various visual tagging schemes that have been employed for this purpose, like using stickers or QR codes to hold information, LightAnchors rely on existing object features (device LEDs) and can be dynamic, reading live information from LED modulations.

The reason to do so is that device LEDs can serve not only as a point to affix AR interface elements, but also as an output port for the binary data being translated into human-readable form in the on-screen UI.

“Many devices such as routers, thermostats, security cameras already have LEDs that are addressable,” Karan Ahuja, a doctoral student at the Human-Computer Interaction Institute in Carnegie Mellon University’s School of Computer Science, told The Register.

“For devices such as glue guns and power strips, their LED can be co-opted with a very cheap micro-controller (less than US$1) to blink it at high frame rates.”

The system relies on an algorithm that creates an image pyramid of five layers, each scaled by half, to ensure that at least one version captures each LED in the scene within a single pixel. The algorithm then searches for possible light anchor points by looking for bright pixels surrounded by dark ones.
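A toy version of that search, with numpy standing in for the real computer-vision pipeline (the function names, the box-filter downscaling, and the brightness threshold are ours, not the paper’s):

```python
import numpy as np

def image_pyramid(gray: np.ndarray, levels: int = 5):
    """Build a pyramid of successively half-scaled images so that a
    small LED lands within a single pixel at some level. A sketch of
    the search described in the paper; the real pipeline differs."""
    pyramid = [gray.astype(float)]
    for _ in range(levels - 1):
        g = pyramid[-1]
        h, w = (g.shape[0] // 2) * 2, (g.shape[1] // 2) * 2
        # 2x2 box filter then subsample = half-scale
        half = (g[:h:2, :w:2] + g[1:h:2, :w:2] +
                g[:h:2, 1:w:2] + g[1:h:2, 1:w:2]) / 4.0
        pyramid.append(half)
    return pyramid

def candidate_anchors(gray: np.ndarray, thresh: float = 128.0):
    """Return (row, col) pixels that are bright and surrounded by
    darker neighbours: a crude stand-in for the paper's detector."""
    anchors = []
    for r in range(1, gray.shape[0] - 1):
        for c in range(1, gray.shape[1] - 1):
            patch = gray[r-1:r+2, c-1:c+2]
            if gray[r, c] > thresh and gray[r, c] == patch.max() \
               and np.sum(patch > thresh) == 1:
                anchors.append((r, c))
    return anchors
```

Each candidate returned this way would then be watched over successive frames for the blink pattern described below.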

Candidate anchors are then scanned to see if they display the preamble binary blink pattern. When found, the rest of the signal can then be decoded.


Some of the example applications that have been contemplated include a smoke alarm that transmits real-time battery and alarm status through its LED, a power strip that transmits its power usage, and a Wi-Fi router that presents its SSID and guest password when viewed through a mobile AR app.

Ahuja said the scheme works across different lighting conditions, though he allows that in bright outdoor lighting, device LEDs may get missed. “But usually the LED appears to be the brightest point,” he said.

The initial version of the specification (v0.1) has been published on the LightAnchors.org website. It requires a camera capable of capturing video at 120Hz, to read light sources blinking at 60Hz. The data transmission protocol consists of a fixed 6-bit preamble, an 8-bit payload, 4-bit parity, and a fixed 8-bit postamble. Mobile devices that support faster video frame rates and contain faster processors could allow faster data transmission.
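The framing can be sketched as follows. The actual preamble and postamble bit patterns and the parity scheme are not spelled out in the v0.1 summary above, so the values and the nibble-XOR parity used here are placeholders:

```python
PREAMBLE  = [1, 0, 1, 0, 1, 1]            # hypothetical 6-bit sync pattern
POSTAMBLE = [0, 1, 1, 0, 0, 1, 0, 1]      # hypothetical 8-bit end marker

def nibble_parity(payload: int) -> int:
    # Assumed scheme: XOR of the payload's two nibbles -> 4 parity bits.
    return ((payload >> 4) ^ (payload & 0xF)) & 0xF

def encode(payload: int):
    """Build one 26-bit frame: preamble, 8-bit payload (MSB first),
    4-bit parity, postamble."""
    bits = PREAMBLE[:]
    bits += [(payload >> i) & 1 for i in range(7, -1, -1)]
    par = nibble_parity(payload)
    bits += [(par >> i) & 1 for i in range(3, -1, -1)]
    bits += POSTAMBLE
    return bits

def decode(bits):
    """Scan a bitstream for the preamble, then check parity and
    postamble before accepting the 8-bit payload."""
    n = len(PREAMBLE)
    for i in range(len(bits) - (n + 8 + 4 + 8) + 1):
        if bits[i:i+n] != PREAMBLE:
            continue
        body = bits[i+n:]
        payload = int("".join(map(str, body[:8])), 2)
        par = int("".join(map(str, body[8:12])), 2)
        if par == nibble_parity(payload) and body[12:20] == POSTAMBLE:
            return payload
    return None
```

At 60Hz, one such frame takes under half a second to blink out, which matches the low-rate status payloads (battery level, power draw) the researchers describe.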

Future versions of the specification may incorporate security measures against potential malicious use, such as a temporary token to ensure that users have line-of-sight to devices. Sample demo code for Linux, macOS, and Windows laptops, along with Arduino devices, can be found on the project website.

The boffins have also created a sample iOS app and interested developers can sign up on the website to receive an invitation to try it out through Apple’s Testflight service. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/12/augmented_reality_glowing_diodes/

Intel Issues Fix for ‘Plundervolt’ SGX Flaw

Researchers were able to extract an AES encryption key using the processors’ voltage-tuning function.

Intel this week urged customers to apply a new firmware update that thwarts a new class of attack techniques exploiting the voltage adjustment feature in several families of its microprocessors.

Three different academic research teams separately found and reported to Intel a vulnerability in its Software Guard Extensions (SGX) security feature that could be abused by an attacker to inject malware and even steal encryption keys. SGX, which is baked into modern Intel microprocessors, places sensitive computations such as memory encryption and authentication in protected “enclaves” so attackers can’t modify or access them. The processors also allow frequency and voltage to be tuned to manage heat and power consumption.

One group of researchers was able to lower the voltage on SGX-based systems – “undervolting” them – to force an error that let them recover an AES encryption key within a few minutes.

The INTEL-SA-00289 vulnerability lies in the Intel 6th, 7th, 8th, 9th, and 10th Generation Core Processors, as well as the Xeon Processor E3 v5 and v6 and the Xeon Processor E-2100 and E-2200 lines. 

Intel’s security update disables the voltage-tuning function in SGX, basically locking down voltage to the default settings. The company advises applying the patch ASAP: “We are not aware of any of these issues being used in the wild, but as always, we recommend installing security updates as soon as possible,” said Jerry Bryant, director of communications for Intel, in a blog post yesterday, pointing to a list of computer manufacturer support sites for update details.

‘Plundervolt’
Researchers from the University of Birmingham’s School of Computer Science, imec-DistriNet, and Graz University of Technology teamed up to study how the voltage feature could be exploited against SGX in a project they dubbed “Plundervolt,” which they plan to present next month at the IEEE Security & Privacy conference. They were the first to alert Intel to the vulnerability, in June 2019.

The team consists of the University of Birmingham’s Kit Murdock, David Oswald, and Flavio Garcia; imec-DistriNet’s Jo Van Bulck and Frank Piessens; and Graz University’s Daniel Gruss.

In August 2019, researchers from Technische Universität Darmstadt and University of California gave Intel a proof-of-concept of the vuln, and University of Maryland and Tsinghua University researchers disclosed the issue to Intel as well that month.

David Oswald, senior lecturer in Computer Security at the University of Birmingham and a member of the Plundervolt team, says the concept of “undervolting” had been known for some time, but it previously had only been executed via hardware, attaching an external power supply unit, for instance.

What’s unique about Plundervolt and similar attacks is that they are mounted from software, Oswald says. “So we simply need to execute code on a target machine so it can do the undervolting” via the software interface, he says.

Even so, you need to gain administrative privileges to manipulate the voltage feature.

In a nutshell, here’s how Plundervolt works: The researchers reduced the supply of voltage to the CPU in short bursts to avoid crashing the computer, which allowed them to flip a bit in some critical computations, such as AES encryption.
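On Linux, the software interface in question is a model-specific register (MSR 0x150) whose layout the Plundervolt researchers documented in public write-ups. A sketch of the value encoding, for illustration only (actually writing such values to /dev/cpu/*/msr requires root and can crash or damage a machine):

```python
def undervolt_msr_value(plane: int, offset_mv: float) -> int:
    """Encode a voltage-offset write for MSR 0x150 as described in
    public Plundervolt write-ups: bit 63 set, voltage plane index in
    bits 42:40, bit 36 = write, and an 11-bit signed offset in units
    of 1/1024 V in bits 31:21. Illustrative sketch only."""
    steps = round(offset_mv * 1.024)          # mV -> 1/1024 V units
    return (1 << 63) | (plane << 40) | (1 << 36) \
           | ((steps & 0x7FF) << 21)
```

The attack pulses such negative offsets (on the order of tens to a couple of hundred millivolts) just before a target computation, then restores the default voltage before the machine falls over.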

“You can flip a bit here and there to carry out an attack,” he says. “There are tools on GitHub which you can use to carry out some mathematical analysis … and then you can recover the [AES] key in minutes.”

The researchers also were able to flip a bit in some computations to inject malicious code into the enclave, such as a buffer overflow exploit. 

The underlying vuln Plundervolt exploits is the ability for an admin to tune the voltage.

“It looks like it was an oversight. Probably one [Intel] group developed SGX and another the power management features like undervolting,” Oswald says. “You have a very complex process developed by a lot of people. And you have a very big attack surface.”

The undervolting attacks come on the heels of a wave of speculative execution attack research on Intel chips, such as Spectre and Foreshadow. The latter reads data from an SGX enclave’s memory, while Plundervolt and others alter the values in the memory.

The researchers offer video clips and details, as well as their research paper, on a Plundervolt website they established.


Oswald’s team next hopes to explore other instructions it can alter in SGX and to test other hardware platforms, possibly some smartphones, for similar weaknesses. They also want to investigate other ways to defend against Plundervolt-style attacks rather than simply disabling the feature, as Intel has done.

“Maybe there’s a more elegant way of defending against this without simply disabling undervolting,” he says. “It has a good use,” such as energy savings.

Even so, most end users don’t employ SGX on their machines, he notes. While it comes in many laptop processors, for example, for the most part “it’s not actively used” in those environments.

Don’t Panic
Oswald believes undervolting attacks aren’t an imminent danger, but as operating systems become more secure, attackers will migrate more to hardware hacks.

“I think the researchers now are mainly ahead of the attackers,” he says. “For nation-states, [for example], it’s easier to buy a classic buffer overflow or something [else] than to do hardware-based attacks.”

Richard Bejtlich, principal security strategist at Corelight, says Plundervolt demonstrates how academic researchers have found a real niche in CPU hacking. While academia often gets criticized for obscure or “out-of-touch” security research, he says, this type of hardware research resonates.

“I think when they focus on this hardware-level analysis, there’s a really deep computer [science],” he says. “This seems to be something they are really good at.”  


Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/intel-issues-fix-for-plundervolt-sgx-flaw/d/d-id/1336589?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Trickbot Operators Now Selling Attack Tools to APT Actors

North Korea’s Lazarus Group – of Sony breach and WannaCry fame – is among the first customers.

The operators of the prolific Trickbot banking botnet have begun offering advanced persistent threat actors access to a sophisticated new attack toolset called Anchor for exploiting the networks of high-value targets that the malware previously has compromised.

Researchers at security vendor SentinelOne’s newly established SentinelLabs recently spotted North Korea’s notorious state-backed Lazarus Group using the toolset to deploy one of its own malware samples on the network of an Anchor victim.

The discovery is significant because financially motivated crimeware operations like Trickbot have so far mostly operated completely separately from APT campaigns — especially state-backed ones — which are typically more focused on data theft, surveillance, and other long-term activities.

“The maturity of the crimeware models and convergence of threats force us to rethink our defenses,” says Vitali Kremez, lead cybersecurity researcher at SentinelLabs.

“Criminals and the nation-state are hunting for high-value targets and [collaborating] on their breach accesses,” he says. Organizations now have to be concerned not just about criminal groups, but also about crimeware threats that might mature into APT activity, Kremez notes.

Trickbot’s operators, who started in 2016 by using the malware to steal money from online banking accounts, have over the years morphed into a massive crimeware-as-a-service operation. Trickbot itself has evolved from a tool for stealing bank account login information to a tool that can perform a variety of malicious functions — including delivering ransomware, banking Trojans, and cryptominers.

The operators of Trickbot have built a database of information on networks that they have compromised, which other attackers can access and use for a fee to deliver ransomware and carry out attacks of their own.

So far, Trickbot’s crimeware-as-a-service offering has targeted mainly other financially motivated affiliates. But with the Anchor project, Trickbot’s business model appears to have expanded, according to SentinelLabs.

“It was a separate hidden project and/or fork from the main Trickbot malware codebase,” Kremez says. It appears to have been developed for high-value targets and intrusions, and multiple APT groups are currently using it, he says.

The Anchor attack framework includes tools ranging from a sophisticated malware installer to a clean-up tool for wiping clean all evidence of an attack. It includes mechanisms that allow attackers to load legitimate frameworks such as Metasploit, Cobalt Strike, and PowerShell Empire and use them for post-compromise exploitation, SentinelLabs said.

“Anchor presents as an all-in-one attack framework designed to compromise enterprise environments using both custom and existing toolage,” the vendor noted. It gives APT actors a way to do targeted data-extraction and to remain undetected on compromised networks for a long time.

Mutually Beneficial

For an operation like the Lazarus Group, Trickbot’s Anchor project is especially useful. The group, best known for its attacks on Sony as well as its abuse of the SWIFT financial network to steal tens of millions of dollars from Bangladesh Bank, is a somewhat rare APT threat actor. As an arm of the North Korean regime, the Lazarus Group is not just focused on data theft, but also on financially motivated attacks in support of the cash-starved government.

Some see the WannaCry ransomware attacks and the attacks via the SWIFT network as examples of the group’s efforts to raise money for the North Korean government.

For the Lazarus Group, the primary benefit of the Trickbot Anchor tie-up “is access to compromised high-value targets for further post-exploitation and monetization without the need to run their own campaign,” Kremez says.

And the use of third-party tools such as those from Trickbot can also help make attribution harder for investigators.

SentinelLabs’ research suggests a working relationship between Lazarus Group members and some of the criminals behind Trickbot Anchor, which allows them to have a mutually beneficial financial relationship, Kremez says. “We believe it might be a partnership agreement given our knowledge of how the groups operate in a very private protective manner,” and only with the most trusted partners, he says.

APT groups are not the only focus, however. According to SentinelLabs, the Anchor attack toolset is also being used in large-scale cyber heists and attacks on point-of-sale systems.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Security 101: What Is a Man-in-the-Middle Attack?”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/trickbot-operators-now-selling-attack-tools-to-apt-actors/d/d-id/1336590?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Chrome will check for leaked credentials every time you sign in anywhere

A new feature in Google’s Chrome browser will warn you if your username and password match a known combination from a security breach, every time you type credentials into any website.

This credential check is “gradually rolling out for everyone signed into Chrome” as part of the Safe Browsing option, according to the announcement.

The potential worry here is that sending your credentials to Google for checking could itself be a security risk. The technology used was announced nine months ago, when the Password Checkup extension was introduced. At the time it was described as an “early experiment”. The way it works is as follows:

  1. Google maintains a database of breached usernames and passwords, hashed and encrypted. In other words, the username/password combinations are not stored, only the encrypted hash.
  2. When you type in your credentials, the browser sends a hashed and encrypted copy of the credentials to Google, where the key used for encryption is private to the user. In addition, it sends a “hash prefix” of the account details, not the full details.
  3. Google searches the breach database for all credentials matching the hash prefix and sends the results back to the browser. These are encrypted with a key known only to Google. In addition, Google encrypts your credentials with this same key – so it is now doubly encrypted.
  4. The final check is local. Chrome decrypts the credentials using your private key, yielding a copy encrypted only with Google’s key. This is then compared to the values in the database. If a match is found, an alert is raised.

The process by which Google checks credentials against a database of breached usernames and passwords

The idea is that your credentials are never sent to Google in a form it can read, and that details of other people’s breached credentials are never sent to you in a form you can read. The procedure, we are told, “reflects the work of a large group of Google engineers and research scientists”.
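Stripped of the double-encryption layer, the hash-prefix exchange at the heart of steps 1 to 3 looks roughly like this (a toy sketch; Google’s real protocol adds the per-user blinding described above, and the names here are ours):

```python
import hashlib

# Toy breach "database": the server stores hashes of leaked credentials.
BREACHED = {hashlib.sha256(c.encode()).hexdigest()
            for c in ["alice:hunter2", "bob:password1"]}

def server_lookup(prefix: str):
    """Server side: return every breached hash sharing the short
    prefix, so the server never learns which exact hash was asked
    about (k-anonymity)."""
    return [h for h in BREACHED if h.startswith(prefix)]

def is_breached(username: str, password: str) -> bool:
    """Client side: send only a hash prefix, compare full hashes
    locally. This omits the blinding/double-encryption step Google
    layers on top, so it shows the k-anonymity idea only."""
    full = hashlib.sha256(f"{username}:{password}".encode()).hexdigest()
    return full in server_lookup(full[:4])
```

The final comparison happens on the client, which is why neither side learns anything it did not already know.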

Even though users may still feel uncomfortable enabling this kind of check, the risks are likely lower than the risk of being unaware that your credentials have been stolen. The bigger snag, perhaps, is that you have to sign into Chrome, with all that implies in terms of giving the data-grabbing giant more information about your digital life.

In addition, Google says it is improving its phishing site protection, with 30 per cent more cases being spotted. A further protection is that if you use Chrome’s password manager, you will be alerted if you enter credentials stored there into a suspected phishing site.

What if someone else signs into Chrome on a shared computer and you inadvertently save your password into their profile? If this can happen, you probably already have some security issues to worry about, but Google is trying to make it less likely with a more prominent indication of the current profile. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/11/google_chrome_credential_check/

Bad news: KeyWe Smart Lock is easily bypassed and can’t be fixed

File this one under “not everything needs a computer in it”. Finnish security house F-Secure today revealed a vulnerability in the KeyWe Smart Lock that could let a sticky-fingered miscreant easily bypass it.

To add insult to injury, the device’s firmware cannot be upgraded either locally or remotely. This means the only way to conclusively remediate this problem is to rip the damned things from your door and replace them with a bog-standard lock.

The KeyWe Smart Lock is primarily used in private dwellings, and retails for circa $155 on Amazon. It allows users to unlock their doors through a traditional metal key, via a mobile app, or with Amazon Alexa.

Its Achilles’ heel is what F-Secure describes as “improperly designed communications protocols”. These allowed the firm to intercept the secret passphrase as it was transmitted from the smartphone to the lock, using just a $10 BLE sniffer and Wireshark.

The KeyWe Smart Lock uses AES-128 to communicate with the mobile app. However, the encrypted channel is derived from only two inputs: a common key and a separate key-calculation process. Both are trivial to overcome.

Speaking to The Register, F-Secure’s Krzysztof Marciniak said: “The KeyWe Smart Lock uses Bluetooth Low Energy, which is based on the concept of advertisements. These contain information about device capabilities, the device name, and the device [MAC] address. It’s from this address the common key is generated.”
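F-Secure has not published the exact derivation, but the structural flaw is easy to sketch: any key computed deterministically from a value the device broadcasts in the clear can be recomputed by anyone within radio range. The hash-based derivation below is entirely hypothetical:

```python
import hashlib

def common_key_from_mac(mac: str) -> bytes:
    """Hypothetical stand-in for KeyWe's key derivation: because the
    input (the advertised BLE MAC address) is public, whatever
    function maps it to the AES-128 key can be run by an
    eavesdropper too."""
    return hashlib.sha256(mac.replace(":", "").encode()).digest()[:16]

# The attacker sniffs the advertisement and derives the same key:
device_key   = common_key_from_mac("AA:BB:CC:DD:EE:FF")
attacker_key = common_key_from_mac("AA:BB:CC:DD:EE:FF")
assert device_key == attacker_key  # secrecy rests on a public value
```

This is why no firmware cleverness can save a design whose only secrets are derived from broadcast data.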

F-Secure also figured out how to yank the key-calculation process from the mobile application, rendering the second factor redundant.

With the KeyWe’s encryption rendered null and void, an attacker would merely have to identify a property using the lock, then wait for someone to come and unlock the door. They would then be able to intercept the passcode in transit and use it to break into the property.

We have asked KeyWe for comment.

Arguably, the biggest issue here isn’t that the KeyWe had a glaring design flaw, but rather that it’s impossible to remediate. As with any tech product, one can assume that eventually someone will identify a security issue that needs fixing. Having no means to actually do so is… well… rather bad. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/11/f_secure_keywe/

Only Half of Malware Caught by Signature AV

Machine learning and behavioral detection are necessary to catch threats, WatchGuard says in a new report. Meanwhile, network attacks have risen, especially against older vulnerabilities, such as those in Apache Struts.

For years, signature-based antivirus has caught about two-thirds of threats at the network edge — in the last quarter, that success rate has plummeted to only 50%, according to WatchGuard Technologies’ latest quarterly report, published on December 11.

The network security firm found that the percentage of malware that successfully bypassed signature-based antivirus scanners at companies’ network gateways has increased significantly. Attackers evade signature matching either by scrambling code with basic encryption techniques — known as “packing” — or by automatically generating code variants. In the past quarter, the share of malware using these obfuscation techniques jumped to 50% of malicious programs detected at the edge of the network, the company found.
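The evasion is mechanical: a byte-pattern signature matches one specific encoding of a payload, so re-encrypting the same code with a fresh key produces a file no existing signature recognizes. A minimal illustration (a toy XOR "packer"; real packers also prepend a stub that unpacks the payload at run time):

```python
import hashlib, os

def pack(payload: bytes, key: bytes) -> bytes:
    """Trivial packer: XOR the payload with a repeating key.
    Applying it twice with the same key recovers the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

payload = b"...malicious code..."
sig = hashlib.sha256(pack(payload, b"k1")).hexdigest()

# Repacking with a new key: identical behavior once unpacked,
# but a byte stream no stored signature will match.
repacked = pack(payload, os.urandom(8))
print(hashlib.sha256(repacked).hexdigest() != sig)
```

Behavioral and machine-learning detectors sidestep this because they judge what the code does after unpacking, not what its bytes look like on the wire.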

Dubbed “zero-day malware,” these attacks demonstrate how attackers have adapted to the decades-old signature-based antivirus scanning technology, says Corey Nachreiner, chief technology officer at WatchGuard Technologies.

“The big change is that more and more malware is becoming evasive, so that signature-based protection is no longer sufficient,” he says. “There is nothing wrong with having it, because it will catch 50% to two-thirds of the traffic, but you definitely need something more.” 

In the first quarter of 2019, the company saw signature antivirus catch 64% of malware. In the second quarter, that dropped only slightly to 62%. In 2017, antivirus firm Malwarebytes found that using two signature-based antivirus engines still only caught about 60% of threats.

While the statistic applies only to the BitDefender antivirus engine used in WatchGuard’s product, Nachreiner argues that the scanner is better than average — based on VirusTotal detections — suggesting that malware is even more successful at getting past other companies’ products.

“The reason that we feel that we can extrapolate from a single engine is that we use VirusTotal all the time, and BitDefender is always one of the first to detect threats,” he says. “We feel that extrapolation, while not exact, will be very representative, even conservatively, of the capabilities of signature-based engines.”

Zero-day malware — not to be confused with zero-day exploits — needs to be caught by technologies other than signature-based antivirus, he says. WatchGuard, for example, incorporates three different anti-malware services into its product, including machine learning-based pattern detection and a sandbox service that catches threats based on their execution behavior.

The rise in evasive malware is the most significant trend in WatchGuard’s “Internet Threat Report: Q3 2019,” but the company also saw a general rise in network attacks — attempts to actively compromise a network — of about 8% from the previous quarter.

Attacks using SQL injection, cross-site scripting, and brute-force credential stuffing topped the list of attacks the company detected in the third quarter of 2019, but the top 10 network-based attacks also included exploits aimed at two older vulnerabilities in the Apache Struts web application framework — the security issues that led to the massive breach of data-collection firm Equifax. The company missed patching key servers, which were then compromised by attackers, leading to the leak of information on about 148 million Americans. The breach resulted in a $700 million settlement and, because of his stock trading prior to public notification of the breach, the conviction of a former company executive on insider trading charges.

“With a 10 of 10 for severity in the National Vulnerability Database and the national attention the Equifax breach got from this vulnerability, we hope web admins have already upgraded their servers,” WatchGuard stated in the report. “If you’ve patched, this attack won’t work … [but] vulnerable servers won’t last long while connected to the Internet.”

The increase in attacks on older vulnerabilities makes it even more important for companies to look to their patching processes and make sure that they are not missing any servers, Nachreiner says.

“After Equifax, you would have hoped that everyone had patched immediately, but the fact that the attackers are ramping up attacks could mean that they have seen some success,” he says. “So, you need to ask, have you really patched the Apache Struts vulnerability? Check your environment to make sure that you are not vulnerable.”

The WatchGuard report gathers data from users that have opted into its data-collection program, about 37,000 devices in the latest quarterly report. 

 


 


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/only-half-of-malware-caught-by-signature-av/d/d-id/1336577?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Nation-State Attackers May Have Co-opted Vega Ransomware

The tactics used by the latest version of the Vega cryptolocker program indicate the code may have been stolen from its authors and is now being used for destructive attacks, a new report suggests.

Significant changes in the tactics of a new variant of the Vega ransomware may indicate that the code for the software is now in the hands of a nation-state actor, security firm BlackBerry Cylance stated on December 9.

The new ransomware variant, dubbed Zeppelin by BlackBerry Cylance, started spreading in early November and avoids infecting systems in Russia, Ukraine, Belarus, and Kazakhstan, instead focusing on US and European technology and healthcare companies, according to the company’s researchers. While the malware framework is modular and can easily be configured for different tasks, Zeppelin focuses on destructive attacks, says Josh Lemos, vice president of research and intelligence at BlackBerry Cylance. (Lemos is not related to this reporter.)

“Our speculation is that it is a state actor using the generalized codebase used in Buran, Vega, and prior campaigns, as a way to somewhat obfuscate their intentions, especially given that its targeting is so narrow,” he says, adding: “We believe that they had access to source and have modified the code materially.”

Ransomware continues to be a popular attack for cybercriminals. Where attackers once focused the malware on consumers, businesses are now the preferred targets, because a single compromise can net tens of thousands of dollars for the attackers. In 2019, the overall number of ransomware attacks has declined, but attacks are increasingly targeted, according to an October report covering the first nine months of the year.

Yet apart from North Korean attempts to generate cash from ransomware, most nation-state attackers have used the main ransoming tactic of encrypting data to prevent access as a way to disrupt the operations of rival nations’ government agencies and companies. The change in the goals of the latest Vega variant suggests that a nation-state has gained access to the code, BlackBerry Cylance stated in its analysis.

“The major shift in targeting from Russian-speaking to Western countries, as well as differences in victim selection and malware deployment methods, suggest that this new variant of Vega ransomware ended up in the hands of different threat actors — either used by them as a service, or redeveloped from bought (or) stolen (or) leaked sources,” according to the analysis. 

While the latest variant is more modular, the primary reason that the company’s researchers attribute the latest malware to a nation-state is the change in targeting and the focus on destructive, rather than commercial, goals. The malware does leave a ransom demand behind but, unlike most ransomware, does not specify an amount or a bitcoin wallet to be paid.

“This seems intentionally there to disrupt or cause commercial harm to the targets, rather than yield a bunch of cash, and that is not really the M.O. of your run-of-the-mill cybercriminals,” Lemos says. “Given this skill set and care that was taken in the campaign, my personal assessment is that they probably have the skill to go out and steal it from whoever they want to.”

To date, the company has collected five samples of the hard-to-find malware. The attack likely affects “tens, not hundreds” of companies, he says.

Tactics-wise, the ransomware program does not break any new ground.

Zeppelin uses a basic method of obfuscating the various keys in the code that could be used to easily identify the software by security scanners that look for recognizable text. In addition, Zeppelin uses code of varying sizes and random APIs to evade detection, as well as delays in execution to foil sandbox analysis by outlasting the time that such analysis software spends waiting for signs of malicious execution. 
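The delay tactic in particular is easy to model. The sketch below is conceptual, not Zeppelin's actual code; the timing values are illustrative assumptions. It shows why a sandbox that watches a sample for only a fixed window will classify a sufficiently patient one as benign.

```python
def sandbox_verdict(startup_delay, observation_window):
    """Model of delay-based sandbox evasion: if the sample stays idle
    longer than the sandbox is willing to watch (both in seconds), the
    sandbox never sees the malicious behavior and reports it clean."""
    if startup_delay > observation_window:
        return "benign"    # sandbox gave up before anything happened
    return "malicious"     # payload ran while the sandbox was watching

# A five-minute sleep outlasts a hypothetical two-minute analysis window:
assert sandbox_verdict(startup_delay=300, observation_window=120) == "benign"
```

The defensive counter is to extend or randomize the observation window, or to patch out common sleep calls during analysis.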

While BlackBerry Cylance did not attribute the ransomware to any specific nation, the software avoids executing in Russia, Ukraine, Belarus, and Kazakhstan by checking the machine’s default language and IP address, which suggests the operators reside in one of those nations.

“In a stark opposition to the Vega campaign, all Zeppelin binaries — as well as some newer Buran samples — are designed to quit if running on machines that are based in Russia and some other ex-USSR countries,” the company’s analysis states.
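The kill-switch described in the analysis can be sketched as a simple locale check. The function name and language codes below are illustrative assumptions, not Zeppelin's actual implementation, which also inspects the machine's external IP address.

```python
# Language codes for Russian, Ukrainian, Belarusian, and Kazakh.
EXCLUDED_LANGS = {"ru", "uk", "be", "kk"}

def should_quit(default_locale):
    """Illustrative kill-switch: given a locale string such as 'ru_RU',
    return True if the machine's default language is on the exclusion
    list, in which case the binary exits without doing anything."""
    if not default_locale:
        return False
    return default_locale.split("_")[0].lower() in EXCLUDED_LANGS

assert should_quit("ru_RU") and not should_quit("en_US")
```

Checks like this are a common, if weak, attribution signal: they tend to mark the jurisdictions whose prosecution the operators are trying to avoid.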

Many questions still remain, Lemos acknowledges. Because the company has collected only a handful of samples of the malware, the picture of the group’s operations is less clear than BlackBerry Cylance researchers would like, he says.

“There is not a lot of information on the TTPs [tactics, techniques and procedures],” he says. “It is more about what is not there. With it being so modular and highly configurable, the fact that this was just used in this disruptive capacity could mean that it is just a cheap throwaway for them or that this is part of a larger campaign that we are not privy to.”



Article source: https://www.darkreading.com/nation-state-attackers-may-have-co-opted-vega-ransomware/d/d-id/1336556?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Younger Generations Drive Bulk of 2FA Adoption

Use of two-factor authentication has nearly doubled in the past two years, pointing to a new wave of acceptance.

Adoption of two-factor authentication (2FA) is rapidly increasing, particularly among people aged 18-34, as consumers grow concerned with protecting online accounts from data breaches.

When Duo Labs researchers set out to learn more about adoption and perception of 2FA in 2017, they learned 56% of Americans polled had never heard of the technology. Only 28% used 2FA on at least one website or app. Over the past two years, that adoption has increased – reaching 51%.

In their new State of the Auth report released this week, Duo analysts shared the results of a second census-representative survey designed to measure 2FA usage in the United States. By 2019, the researchers found, 77% of people surveyed had heard of 2FA, up from 44% two years prior.

Sean Frazier, Advisory CISO with Duo, says there are several reasons why consumers have become more familiar with 2FA. “Now, people are starting to see these things in their personal lives,” he explains. Banks and tech companies are encouraging customers to use 2FA for accounts and devices. Newscasters reporting on data breaches now mention 2FA as an option for stronger consumer security.

“You’re seeing it come up in basic conversations about security hygiene,” Frazier continues. Indeed, much has changed in the years since Duo’s initial poll: Apple released Face ID in its iPhone X, GDPR was fully enforced throughout the EU, and WebAuthn was published as a W3C recommendation. Businesses have increasingly implemented 2FA requirements for employees, broadening adoption.

These shifts arrive at a time when a growing amount of data points to the unreliability of password-based authentication. Only 32% of US respondents to the Duo survey reported using strong, complex passwords. Password reuse, rampant among consumers and security pros alike, is at the core of incidents like the compromise of developer repositories on Bitbucket, GitHub, and GitLab.

SMS-based authentication remains the most popular 2FA factor at 72% usage, followed by email (57%), authenticator app (36%), and phone callback (30%). SMS was also the most popular answer when respondents were asked about their preferred authentication factor, indicating their familiarity with this option makes it more convenient and appealing.
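The authenticator-app factor mentioned above is, in most implementations, standard RFC 6238 TOTP: an HMAC-SHA1 over a 30-second time counter, dynamically truncated to a short numeric code. A minimal, self-contained version:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """RFC 6238 TOTP, the algorithm behind most authenticator apps:
    HMAC-SHA1 over the number of 30-second steps since the Unix epoch,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector (ASCII secret "12345678901234567890", T=59):
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59) == "287082"
```

Unlike SMS codes, the shared secret never transits the phone network, which is why security experts prefer app-based factors despite SMS's familiarity advantage.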

“We want to make sure that even though we have to add friction to the user experience, we want it to be as minimal as possible,” Frazier explains. While security experts have discovered flaws with SMS authentication, it’s “better than nothing” if it’s your only option, he continues.

Researchers asked participants which three online accounts they were most concerned with securing. Banking and financial accounts came out on top (85%), followed by communications and social media (32%), health (28%), retail (25%), and utilities (24%). Lack of concern for email accounts was “counterintuitive,” they note in the report. In the absence of 2FA, email is usually the source of identity for financial accounts. If an attacker can successfully compromise an email account, they can likely change credentials for other accounts without raising red flags.

“If you’re compromised once, you’re compromised everywhere,” Frazier points out.

Gaming Pays Off

A closer look at the demographic data shows younger users are at the forefront of 2FA adoption: 69% of users aged 18-24 use 2FA, as do 68% of those aged 25-34. The percentage drops for users among ages 35-44 (58%), 45-54 (49%), 55-64 (49%), and older than 65 (33%).

Part of the reason for this could be younger users’ stronger familiarity with technology. As Frazier points out, gamers have been using 2FA for several years. As schools include tech and security in curricula, kids will continue to develop a greater understanding of basic security.

Frazier emphasizes the importance of encouraging awareness at the building block level with kids in schools, and continuing as they grow up and enter the workplace. “Apple, Google, and Microsoft have done a pretty good job of building this into their ecosystems,” he continues. “As they are pervasive, it will make people intrinsically understand how these things work.”

He anticipates by this time next year, consumers will report greater familiarity with, and adoption of, other authentication methods like security keys. In this year’s survey, security keys were considered by those who used them to be efficient, secure, and easy to use.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/application-security/younger-generations-drive-bulk-of-2fa-adoption/d/d-id/1336581?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple