STE WILLIAMS

Nvidia patches eight security flaws in graphics products

Chip maker Nvidia has released its first security update for 2019 (ID 4772), fixing eight CVE flaws in its Windows and Linux graphics display drivers. Users are advised to patch as soon as possible.

The company scores the flaws using the Common Vulnerability Scoring System (CVSS) v3, under which five of them rate 8.8, equating to ‘high’ severity rather than ‘critical’.

That’s because none of them can be exploited remotely; all require local access, for example by executing malware on the target system.

Depending on the flaw, an exploit could lead to denial of service, code execution, information disclosure or, in six of the vulnerabilities, potentially worst of all, an escalation of privileges.

Affected products include the hugely popular GeForce, Quadro, and NVS, as well as the specialist Tesla graphics cards.

The full list in bulletin 4772 is: CVE-2019-5665, CVE-2019-5666, CVE-2019-5667, CVE-2019-5668, CVE-2019-5669, CVE-2019-5670, CVE-2019-5671, and CVE-2018-6260.

Despite being a 2.2 (low) on CVSSv3, the last of these is perhaps the most interesting because the fix emerged from research published last November into side-channel attacks on GPUs. Nvidia describes it as a…

Vulnerability that may allow access to application data processed on the GPU through a side channel exposed by the GPU performance counters.

This affects all GPU makers, including AMD and Intel as well as Nvidia, and patching it requires several manual Nvidia control panel steps in addition to applying the driver update (instructions here).

Applying the latest drivers on Windows should bring users to version 419.17 (Linux versions vary depending on the distro).

Which brings us to the issue of how to update. Most users will have to do this manually via the vendor’s website, although Nvidia offers a utility, GeForce Experience, which will helpfully alert users as and when new security updates become available.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mPbMGCpp6Mk/

Ep.021 – Leaked calls, a social media virus and passwords exposed [PODCAST]

The Naked Security podcast investigates a massive medical data blunder, tells you how NOT to do vulnerability disclosure, and finds out whether password managers do more harm than good.

With Anna Brading, Paul Ducklin, Mark Stockley and Matt Boddy.

If you enjoy the podcast, please share it with other people interested in cybersecurity, and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xk4oAHTGYAE/

Running Elasticsearch 1.4.2 or earlier? There’s targeted malware going for your boxen

Cisco’s security limb has spotted nefarious people targeting Elasticsearch clusters using relatively ancient vulns to plant malware, cryptocurrency miners and worse – though it does root out some other cybercrims’ dodgy wares, cuckoo-style.

“These attackers are targeting clusters using versions 1.4.2 and lower,” said the networking giant’s infosec arm, Talos, in a post summarising what its honeypot setup had caught for examination.

The seemingly China-based attackers used two known vulnerabilities in Elasticsearch – listed as CVEs in 2014 and 2015 respectively – to pass scripts to search queries, Talos said, allowing them further access to the old machines to drop a payload of their choice. Elasticsearch version 1.4.2 was first released in December 2014.

“These attacks leverage CVE-2014-3120 and CVE-2015-1427,” said the security research outfit. The 2014 vuln lets attackers execute arbitrary MVEL expressions and Java code, while the 2015 flaw, which is specific to Elasticsearch’s Groovy scripting engine, “allows remote attackers to bypass the sandbox protection mechanism and execute arbitrary shell commands via a crafted script”.
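Neither Talos’s honeypot captures nor the attackers’ exact requests are reproduced here, but the dynamic-scripting vector behind CVE-2014-3120 can be sketched: a `_search` body whose `script_fields` entry carries an attacker-chosen MVEL expression, which Elasticsearch 1.x evaluates server-side when dynamic scripting is enabled. The harmless arithmetic expression below is a stand-in for the Java code real attacks substituted:

```python
import json

def build_scripted_search(expression):
    """Sketch of a CVE-2014-3120-style request body.

    On a vulnerable Elasticsearch 1.x cluster with dynamic scripting enabled,
    the string under "script" is evaluated server-side as an MVEL expression.
    Real attacks substituted Java code invoking Runtime.getRuntime().exec().
    """
    body = {
        "size": 1,
        "query": {"match_all": {}},
        "script_fields": {
            # evaluated by the scripting engine for each matching document
            "probe": {"script": expression},
        },
    }
    return json.dumps(body)

# Would be POSTed to http://victim:9200/_search on a vulnerable cluster;
# shown here only to illustrate the request shape, not sent anywhere.
payload = build_scripted_search("1 + 1")
print(payload)
```

Disabling dynamic scripting, or simply upgrading past the 1.x line, closes this request shape off entirely.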

The infosec unit continued: “The first payload invokes wget to download a bash script, while the second payload uses obfuscated Java to invoke bash and download the same bash script with wget. This is likely an attempt to make the exploit work on a broader variety of platforms. The bash script utilized by the attacker follows a commonly observed pattern of disabling security protections and killing a variety of other malicious processes (primarily other mining malware), before placing its RSA key in the authorized_keys file.”

The nasties seen by Talos achieve persistence by installing shell scripts as cron jobs.

Cisco Talos’ Martin Lee told The Register: “In terms of the payloads we’ve been able to characterise, we’ve seen denial of service attack payloads, compromised systems being routed into a DoS or a botnet being used for DoS. We also see cryptomining.”

He added that some of the payloads Cisco had seen on its honeypot Elasticsearch boxen were “being used as a point of ingress into an environment to then look for other machines which can subsequently be compromised,” and that Talos had seen “six separate threat actors” exploiting the vulnerabilities.

Although businesses should not be running software suites that are five years out of date, Lee pointed out that organisations ought to make themselves more aware of “older unpatched machines in an environment which are faithfully doing what they’re supposed to do and nobody wants to alter them that much”.

While Talos stopped short of explicitly attributing the observed attacks to a China-based person or persons, its blog post goes into more detail about an account on QQ, the Chinese social network, whose numeric handle was seen in a command executed by one of the payloads, concluding:

“We briefly reviewed the public account activity of 952135763 and found several posts related to cybersecurity and exploitation, but nothing specific to this activity. While this information could potentially shed more light on the attacker, there is insufficient information currently to draw any firm conclusions.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/27/elasticsearch_malware_cisco_talos/

Embracing DevSecOps: 5 Processes to Improve DevOps Security

In the cyber threat climate of the 21st century, sticking with DevOps is no longer an option.

In 2016, about eight years after the birth of DevOps as the new software delivery paradigm, Hewlett Packard Enterprise released a survey of professionals working in the field. The goal of the report was to gauge application security sentiment, and it found that nearly 100% of respondents agreed DevOps offers opportunities to improve overall software security.

The HPE report also revealed a false sense of security among developers: only 20% of them actually conducted security testing during the DevOps process, and 17% admitted to not using any security strategies before the application delivery stage.

Another worrisome finding in the HPE report was that the ratio of security specialists to software developers in the DevOps world was 1:80. As might be expected, this low ratio had an impact on clients that rely on DevOps, because security issues were being detected only during the configuration and monitoring stages, calling into question the efficiency of DevOps as a methodology.

This 1:80 ratio has been considerably improved since the HPE report thanks to sharp observations by the likes of John Meakin, former chief security officer at Burberry, who pointed out that a commitment to DevOps security was required from the upper echelons of organizations down to the managers who are in charge of hiring DevOps professionals.

How the DevSecOps Model Is Supposed to Work
There was a time when IT security and compliance were business processes that could be managed separately, but this is no longer reasonable or sustainable. According to a recent Deloitte Insights report related to DevOps, most enterprise organizations have no choice but to adopt DevSecOps models because failure to do so has a high potential of turning into major headaches.

Imagine a major retailer such as Burberry sticking with DevOps instead of DevSecOps. We are talking about a company that is constantly upgrading its point-of-sale systems for the purpose of keeping up with payment technologies such as near-field communication (NFC) contactless payments. Let’s say the new Burberry POS is coded, built, tested, packaged, released, and configured without checking if NFC transactions are being conducted with General Data Protection Regulation (GDPR) compliance in mind.

The last thing the legal department would want to learn is that thousands of point-of-sale transactions ran afoul of GDPR on the eve of Brexit. Aside from the headache of reporting the issue to the Information Commissioner’s Office, the DevOps team would have to check how far back into the process it needs to go in order to correct the issue.


Add DevSecOps and Stir
DevOps is all about automation and agility, but ignoring security can be costly. How costly? According to Microsoft, hacks result in a global cumulative recovery expense of $500 billion, and a data breach or hack costs the average company $3.8 million. That adds a big chunk to the cost of doing business for affected organizations, especially when you consider that 43% of cyberattacks target small and medium-sized businesses, more than half of which have zero security budget.

Where should DevOps teams start? First and foremost, by following basic security procedures such as using enterprise firewalls, regularly auditing server logs, and mandating employee VPN usage. Surprisingly, only 30% of global users use a VPN for work on a daily basis. This means that in most cases, private company data is transmitted across public networks unencrypted and available to enterprising hackers.

One example of a company that suffered an infamous data breach due to improper VPN use is Ashley Madison. Hackers said in a statement, “Nobody was watching. No security. Only thing was segmented network. You could use Pass1234 from the Internet to VPN to root on all servers.” A VPN encrypts your private data in transit, but if a hacking group can access the VPN with a password anyone can guess, it’s pretty useless.

VPN usage notwithstanding, what happens when DevOps teams, as a safety precaution, enable traffic-logging during the testing stage and forget to disable it before release? If a VPN service keeps log files against its own terms of service, it puts user data at risk and could invite class-action lawsuits or reputational damage.

In essence, the DevSecOps model brings security and compliance experts into the team through the following five processes:

  1. Holistic security approach: This may not be easy to implement, but it is worth every effort. A DevOps team should bring in compliance and security personnel at the beginning and end of every step. The first interaction is to brief developers and the second is to check the work for the purpose of deeming it secure and compliant.
  2. Evaluation before automation: DevSecOps does not have to sacrifice automated processes; it only needs to audit them before they are implemented.
  3. Risk-oriented “what-if” scenarios: This is another DevSecOps process that may not be easy to introduce to an existing team of developers. Security and compliance professionals tend to operate in what-if environments that may cause friction with developers who observe actionable insights. One recommendation in this regard is to get HR involved and figure out team-building activities to break the ice and forge friendly bonds.
  4. Security-as-code: Whenever continuous delivery is sought, changes will be introduced, and this is where security-as-code comes into play. This process needs at least one security specialist who is comfortable with coding, because they will have to apply threat modeling, functional testing, simulated attacks, and incident response strategies.
  5. Bug bounty programs: Assuming that DevSecOps team members are being trained on security topics, a bug bounty program with attractive rewards can be a smart and fun way to get everyone into a security state of mind.

In the end, the cyber threat climate of the 21st century is what makes DevSecOps a necessity and not something that would be nice to have. Embracing DevSecOps makes sense. Ignoring this emerging paradigm is simply too risky.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Sam Bocetta is a freelance journalist specializing in U.S. diplomacy and national security, with emphases on technology trends in cyber warfare, cyber defense, and cryptography. Previously, Sam was a defense contractor. He worked in close partnership with architects and … View Full Bio

Article source: https://www.darkreading.com/cloud/embracing-devsecops-5-processes-to-improve-devops-security/a/d-id/1333947?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Whose Line Is It? When Voice Phishing Attacks Get Sneaky

Researchers investigate malicious apps designed to intercept calls to legitimate numbers, making voice phishing attacks harder to detect.

What if social engineers, instead of calling victims with voice phishing attacks, intercepted phone calls their victims make to legitimate phone numbers? Malicious apps let cybercriminals do just that – a tactic that puts a subtle twist on traditional voice phishing.

Min-Chang Jang, manager at Korea Financial Security Institute and Korea University, began investigating these apps in September 2017 when he received a report of an app impersonating a financial firm. Analysis revealed a phone interception feature in the app, which intrigued him.

That’s how Jang discovered a new type of voice phishing crime, which combines traditional voice phishing with malicious apps to trick unsuspecting callers into chatting with cybercriminals.

Here’s how the attacks work: an attacker must first convince a victim to download an app, perhaps by sending a link that entices the target with something like a low-interest loan and prompts them to install the app to get it. If the target takes the bait and later calls a financial company for a loan consultation, the call is intercepted and connected to the attacker.

“The victims believe that they are talking to a financial company employee, but they aren’t,” Jang says. It’s unlikely victims will know a scam is taking place, he says. Most of these attacks mimic apps from financial firms.

Unfortunately, when Jang and his research team first discovered malicious apps with the interception feature, they didn’t have access to a live malicious app distribution server because it had already been closed by the time they received victim reports. In April 2018, Jang found a live distribution server – a pivotal point for their research into malicious phishing apps.

This particular distribution server had a very short operating cycle, ranging from a few hours to two days. “I found it while monitoring community sites for the information gathering,” Jang explains. He discovered a post written to warn users about phishing sites; fortunately, it discussed the very malicious applications the team was hoping to investigate.

“I found a specific string in the Web page source code of a live malware distribution server,” he says, “and I used the string for scanning to get more malware distribution servers.” 

With access to one server, researchers could check which of its ports were open and access the Web page source code. Based on those strings of code from the first distribution server, they were able to create a real-time malicious app collection script, Jang explains. The automated system they created is able to collect malware distribution servers and apps in near real time.
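Talos and Jang’s team haven’t published the collection script itself, but the string-scanning step described above can be sketched roughly as follows; the marker pattern and host-scanning approach here are hypothetical placeholders, not the actual indicator the researchers found:

```python
import re
import urllib.request

# Hypothetical marker standing in for the specific string the researchers
# found in the distribution server's page source; the real string isn't public.
MARKER = re.compile(r"apk_download_v\d+")

def page_matches(html):
    """Return True if a page's source contains the distribution-server marker."""
    return bool(MARKER.search(html))

def scan(hosts):
    """Yield hosts whose front page carries the marker (best-effort fetch)."""
    for host in hosts:
        try:
            with urllib.request.urlopen(f"http://{host}/", timeout=5) as resp:
                if page_matches(resp.read().decode("utf-8", "replace")):
                    yield host
        except OSError:
            continue  # host down or unreachable; skip it
```

Run continuously against a candidate list, a loop like `scan()` gives the near-real-time collection behaviour the article describes.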

Using this script, researchers have been able to find malicious app distribution servers and variant malicious apps. Following their discovery of the first live distribution server, they have collected about 3,000 malicious apps from various servers. The command-and-control (C2) server address was hard-coded inside malicious apps, Jang says, and could be easily extracted.
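When the C2 address is hard-coded, extracting it can be as simple as running an endpoint regex over the decoded app’s bytes. A rough sketch of that idea, with made-up sample bytes and an illustrative pattern:

```python
import re

# Matches http(s) URLs or bare IPv4:port strings embedded in binary data.
C2_PATTERN = re.compile(
    rb"https?://[\w.\-]+(?::\d+)?|\b(?:\d{1,3}\.){3}\d{1,3}:\d+\b"
)

def extract_c2_candidates(blob):
    """Return candidate C2 endpoints found in a decompiled app's bytes."""
    return [m.decode() for m in C2_PATTERN.findall(blob)]

# Made-up sample resembling config data inside an unpacked APK.
sample = b"\x00\x01config=203.0.113.7:8080\x00ua=Mozilla\x00"
print(extract_c2_candidates(sample))  # ['203.0.113.7:8080']
```

In practice the candidates would still need triage (CDN hosts and analytics endpoints match the same pattern), but it shows why a hard-coded address is trivially recoverable.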

Their research continued to unfold. The team analyzed the C2 server, where they discovered a file containing the account data they needed to access it. This data helped the team gain the privileges of the Windows server admin of the distribution server and of the database admin of the C2 server. A Remote Desktop Protocol (RDP) connection to the server led to more information – the team confirmed this attacker was connecting to the Internet via the Point-to-Point Protocol over Ethernet (PPPoE), a sign the server’s location was in Taiwan.

In a presentation at Black Hat Asia, entitled “When Voice Phishing Met Malicious Android App,” Jang will disclose and discuss the findings of criminal traces in voice phishing analysis conducted by his research team over the past few months.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/whose-line-is-it-when-voice-phishing-attacks-get-sneaky/d/d-id/1333982?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Millions of utilities customers’ passwords stored in plain text

In September, a security researcher discovered that their power company’s website was offering to email passwords to users who lost or forgot them…

…as in, emailing in unencrypted plain text, with no salting and nary a dab of hash, to whoever might pop in a given user’s email address, instead of offering the far more secure “password reset” option.

The independent security researcher, who chose to remain anonymous, told the story to Ars Technica contributor Jim Salter, who referred to the researcher as “X” in his writeup of what ensued.

Namely, a months-long saga of trying to get the software company behind the website to realize that it was jeopardizing customers’ security and to actually do something about it… which only happened after it had refused to answer X, then finally sent X to its lawyer, who requested that X stop talking to anybody else about it and who insisted that the company’s process of handling passwords was just fine.

The company in question is SEDC: an Atlanta firm that offers “Cyber Resilience Initiative Services and Solutions” – a bit of a confusing mouthful that translates into software that handles bill payment, cybersecurity and other services for utilities providers.

After X found SEDC’s copyright in the footer of the utility company’s website, the researcher went off in search of more customer-facing sites designed by SEDC. X found plenty: in fact, the researcher found more than 80 utility company sites that all offered to email plain-text passwords.

Ars estimates that those companies service some 15 million customers, but that number could be multiple times larger: SEDC itself claims that more than 250 utility companies use its software.

X didn’t attempt to exploit any of those utilities firms’ sites. If they had, the sites’ databases would have been chock-full of credentials that weren’t obscured via encryption. Such an unlocked treasure chest could have been drained by attackers and used for credential stuffing: a well-known attack in which credentials exposed in one breach get stuffed into other websites until an attacker gets in, be it to our online bank accounts, our social media accounts, our email accounts, our smart-home gadgets, or the plethora of other places and things we want to keep locked up.

Unfortunately, people’s lamentable tendency to reuse passwords makes it an extremely common attack.

Also on the lamentable side of the ledger was the response that X got from SEDC.

From chirping crickets to a lawyerly shrug

From Salter, who says he’s done numerous PCI-DSS (Payment Card Industry Data Security Standard) audits for clients over the years:

When X informed SEDC there was a security problem, the corporate response varied from crickets chirping to cold shoulders. Eventually, SEDC’s attorney sent X an email that could reasonably be paraphrased as: ‘there’s no problem with what SEDC is doing, stop bothering SEDC, and you’re only allowed to talk to me from now on.’

The email, from Mark Cole, General Counsel for SEDC:

We were especially surprised by your accusation because SEDC and many of our customers undergo annual PCI assessments, annual PCI penetration tests, and quarterly PCI ASV scans, none of which have identified as a PCI DSS vulnerability the practice of which you have complained. After your initial calls to [redacted] Electric Cooperative, we expressly raised your concern with a certified PCI Qualified Security Assessor who confirmed that your issue does not present a PCI violation: the password attached to a non-administrator end user userid simply does not allow access to credit card information in a manner that violates PCI DSS.

[…]

Finally, I must request that you cease contacting SEDC employees, customers (other than any utility of which you may be an end-user), and third parties to repeat your erroneous assertions about this matter.

“Ridiculous” to say “talk to my lawyer” instead of “thank you”

During the attempt to get SEDC to acknowledge and fix this security lapse, X has been receiving counsel from Electronic Frontier Foundation (EFF) Senior Information Security Counsel Nate Cardozo and EFF attorney Jamie Williams about legal and ethical disclosure responsibilities. This is what Cardozo had to say to Ars:

In 2019, it’s ridiculous that vendors are replying to security researchers via general counsel, not a bug bounty program.

Cole’s final email to X again refers to there being nothing wrong with plain-text passwords as far as PCI-DSS is concerned… but that SEDC has fixed it anyway.

Sort of. Maybe.

I wanted to let you know SEDC has changed the way our software handles “forgotten password” requests for the payment portal, and we have disclosed the change to all our Customers. We also have disclosed this change and the history of your communications of which we are aware – with SEDC and our employees, with some of our Customers, and with social media generally – in detail to our Board of Directors, which is comprised of a dozen of our Customer-Members. They do not believe any further “disclosure” by SEDC is needed or appropriate.

Given that there has been no PCI violation nor any indication of third party access to anyone’s PII (in fact, the plain-text password at issue does not enable such access), it is unclear what “disclosure” you think should be made, much less under what authority you think such a disclosure would be required.

As Salter points out, SEDC isn’t saying that the passwords are now encrypted, with a strong hash, with cryptographic salt unique to each record. That means that we don’t know if these passwords are being stored securely. All we know is that SEDC’s clients’ sites are now prompting people to reset lost or forgotten passwords, instead of emailing them in plain text.

Salting and hashing

Those who shrug off the implications of storing passwords in plain text, without proper salting and hashing, might not be aware of the potential legal ramifications of skipping this industry-standard level of security.

One example is LinkedIn, which got itself sued not for skipping salting and hashing entirely, but rather for doing a half-job of it. In 2012, LinkedIn suffered a massive breach that led to the leak of millions of unsalted SHA-1 password hashes that were subsequently posted online and cracked within hours.

A salt is a random string added to a password before it’s cryptographically hashed.

The salt isn’t a secret, it’s just there to make sure that two people with the same password get different hashes. That stops hackers from using rainbow tables of pre-computed hashes to crack passwords, and from cross-checking hash frequency against password popularity. (In a database of unsalted hashes the hash that occurs most frequently is likely to be the hashed version of the notoriously popular “123456”, for example.)

Salting and hashing a password just once isn’t nearly enough, though. To stand up against a password cracking attack, a password needs to be salted and hashed over and over again, many thousands of times.
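That salt-and-iterate recipe is exactly what off-the-shelf constructions such as PBKDF2 (or bcrypt, scrypt and Argon2) provide. A minimal sketch using Python’s standard library, with an iteration count of the order commonly recommended for PBKDF2-SHA256:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # of the order currently recommended for PBKDF2-SHA256

def hash_password(password, salt=None):
    """Salt and repeatedly hash a password with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per record
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

# Two users with the same password get different hashes, thanks to the salt,
# so a leaked database reveals nothing about password frequency.
s1, d1 = hash_password("123456")
s2, d2 = hash_password("123456")
assert d1 != d2 and verify_password("123456", s1, d1)
```

Stored this way, the “forgot password” flow can only ever offer a reset: the site literally cannot email the original password back.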

Failing to do so “runs afoul of conventional data protection methods, and poses significant risks to the integrity [of] users’ sensitive data”, as a $5 million class action lawsuit against LinkedIn charged.

Better safe with unique passwords than sorry with password123

We don’t know if SEDC has seen the error of its salting/hashing-free ways, but we do know that the way for users to stay safe in this situation is to use one unique, strong set of credentials for every site and every service.

Passwords should be at least 12 characters long and should mix letters, numbers and special characters, though it’s even better if you use a password manager or a hardware-based security key, such as Yubico’s YubiKey or Google’s Titan.
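If you’d rather roll a quick generator than lean entirely on a password manager, that 12-characters-and-mixed-classes advice takes only a few lines with Python’s `secrets` module (the special-character set here is just one reasonable choice):

```python
import secrets
import string

SPECIALS = "!@#$%^&*-_"  # one reasonable choice of special characters

def generate_password(length=16):
    """Generate a password containing lower/upper case, a digit and a special."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    # Guarantee one character from each class, then fill the rest randomly.
    chars = [
        secrets.choice(string.ascii_lowercase),
        secrets.choice(string.ascii_uppercase),
        secrets.choice(string.digits),
        secrets.choice(SPECIALS),
    ]
    pool = string.ascii_letters + string.digits + SPECIALS
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)  # don't leak class positions
    return "".join(chars)
```

`secrets` draws from the operating system’s CSPRNG, which is what you want here; the `random` module is not suitable for credentials.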

That, and pray that the passwordless web comes soon!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vvEuxwODBzc/

Police bust their own radio shop manager for dodgy software updates

The manager in charge of Winnipeg’s police radios was arrested last Thursday for allegedly using fraudulent licenses to update the encrypted Motorola radios that police use to keep their conversations private, CBC News reports.

According to court documents, an employee tipped authorities off about the alleged actions of Ed Richardson, who was the manager of the radio shop for the City of Winnipeg. The radio shop is in charge of repairing and maintaining radios used by the Winnipeg Police Service and Winnipeg Fire Paramedic Service.

Richardson allegedly got his hands on millions of dollars’ worth of illegal licenses for the radios, which require frequent updates. Each of those software updates should have cost the city $94, but the informant said that Richardson didn’t like paying those fees to Motorola.

From the affidavit:

[The employee] does not believe his actions were for personal gain; he believes that Richardson likes the idea of not giving more money to Motorola.

According to what the employee told police, in 2011, Richardson gave him a device known as an iButton that was preloaded with more than 65,000 refresh keys, and told him…

You don’t want to know where these came from.

The employee said those keys “clearly” didn’t come from Motorola, according to the court document.

Police say that the bogus refresh keys would have cost the city millions if they’d been legitimately purchased. They estimate that the keys were used over 200 times, causing Motorola to lose nearly $19,000.

A ham radio enthusiast piqued the interest of US Feds

Police suspect that Richardson got the unauthorized keys from a Winnipeg ham radio enthusiast who was under investigation by the US Department of Homeland Security (DHS).

Court documents say that a DHS agent traveled to Winnipeg in 2016 to brief local police about the investigation. The agent told Winnipeg police that the man whom DHS was investigating reprogrammed Motorola radios for a roster of international clients. Such clients are of the criminal ilk, as in, people who have an interest in hiding their chats on encrypted radio. That includes drug lords. From the court documents:

[Encrypting radios] allows the criminal element to communicate without fear of interception by government or law enforcement. A significant number of these encrypted radios have been seized from the Mexican drug cartel members.

Police say that experts at Motorola checked out some of the encrypted radios seized by law enforcement and found that the techniques used to hack them were consistent with how they allege that the Winnipeg man went about it.

DHS detained the ham radio enthusiast in May 2016, when he was returning from a radio convention in Dayton, Ohio. Agents seized his electronics, including a laptop, tools used to encrypt Motorola radios, and an iButton that police believe he got from Richardson.

An iButton is a microchip similar to those used in a smart card but housed in a little, round, stainless steel button, or “can.” The iButton is incredibly tough and, among other uses, serves as a data logger for applications in harsh and demanding environments – for example, picking up temperature readings in agriculture.

iButtons ship empty: you have to program them to do whatever it is you want them to do. In this case, that would be to store a whole lot of keys for encrypting Motorola radios – keys that Motorola itself didn’t put into one of those little button cans. Police believe that Richardson gave the ham radio enthusiast the iButton found in his possession when he was detained.

Prior to 2010, anybody could eavesdrop on police by buying a police scanner. Then Winnipeg started using the fully encrypted Motorola radios, which require an encryption key to use.

The radio shop employee was motivated to come forward with information about Richardson in 2017, when the city’s agencies were in the process of launching a new emergency radio system for first responders. Richardson was leading that project, and the employee feared that his allegedly corrupt boss could compromise it, according to the affidavit:

[The employee] is concerned that Richardson’s lack of integrity may put the security of this new radio system in jeopardy.

CBC News contacted Richardson earlier this month. He was reportedly surprised to hear he was under investigation, though he said he did know that the radio enthusiast was a person of interest to police. Richardson was put on leave a few days later.

A Winnipeg police spokesperson told CBC News that its investigation is now complete and that Richardson is expected to be formally charged during a court appearance next month. He’ll be looking at charges including fraud over $5,000, unauthorized use of a computer, possession of a device to obtain unauthorized use of a computer, and possession of a device to obtain telecommunication service.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sGfoEK83ypQ/

Ready for another fright? Spectre flaws in today’s computer chips can be exploited to hide, run stealthy malware

Spectre – the security vulnerabilities in modern CPUs’ speculative execution engines that can be exploited to steal sensitive data – just won’t quietly die in the IT world.

Its unwelcome persistence isn’t merely a consequence of the long lead time required to implement mitigations in chip architecture; it’s also sustained by its ability to inspire novel attack techniques.

The latest of these appeared in a paper presented at the Network and Distributed Systems Security (NDSS) Symposium 2019 in San Diego, California, on Monday.

Co-authored by three computer science boffins from the University of Colorado, Boulder in the US – Jack Wampler, Ian Martiny, and Eric Wustrow – the paper, “ExSpectre: Hiding Malware in Speculative Execution,” describes a way to compile malicious code into a seemingly innocuous payload binary, so it can be executed through speculative execution without detection.

Speculative execution is a technique in modern processors that’s used to improve performance, alongside out-of-order execution and branch prediction. CPUs will speculate about future instructions and execute them, keeping the results and saving time if they’ve guessed the program path correctly and discarding them if not.

But last year’s Spectre flaws showed that sensitive transient data arising from these forward-looking calculations can be exfiltrated and abused. Now it turns out that this feature of chip architecture can be used to conceal malicious computation in the “speculative world.”


The Boulder-based boffins have devised a way in which a payload program and a trigger program can interact to perform concealed calculations. The payload and trigger program would be installed through commonly used attack vectors (e.g. trojan code, a remote exploit, or phishing) and need to run on the same CPU. The trigger program can also take the form of special input to the payload or a resident application that interacts with the payload program.

“When a separate trigger program runs on the same machine, it mistrains the CPU’s branch predictor, causing the payload program to speculatively execute its malicious payload, which communicates speculative results back to the rest of the payload program to change its real-world behavior,” the paper explains.

The result is stealth malware. It defies detection by current reverse-engineering techniques because it executes in a transient environment that is inaccessible to the static and dynamic analysis used by most security engines. Even if the trigger program is detected and removed, the payload code will remain in operation.

There are limits to this technique, however. Among other constraints, the malicious code can only consist of somewhere between one hundred and two hundred instructions. And the rate at which data can be obtained isn’t particularly speedy: the researchers devised a speculative primitive that could decrypt 1KB of data and exfiltrate it at a rate of 5.38 Kbps, assuming 20 redundant iterations to ensure data correctness.
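Those throughput figures are easy to sanity-check, and the "20 redundant iterations" hint at how the noisy covert channel is made reliable. The sketch below is our own back-of-the-envelope arithmetic, assuming 1 KB = 1,024 bytes and 5.38 Kbps = 5,380 bits per second, plus a majority-vote decoder of the kind commonly used to clean up repeated covert-channel reads (the paper's exact decoding scheme is not detailed here):

```python
# Back-of-the-envelope check of the exfiltration numbers quoted above.
# Assumptions (ours): 1 KB = 1024 bytes, 5.38 Kbps = 5380 bits/second.

payload_bits = 1024 * 8              # 8192 bits in the 1 KB payload
rate_bps = 5.38 * 1000               # effective covert-channel rate
seconds = payload_bits / rate_bps
print(round(seconds, 2))             # ~1.52 s to leak the kilobyte

# Reading each bit 20 times and taking a majority vote tolerates a
# noisy channel: up to 9 of 20 reads can be wrong per bit.
def majority_vote(samples):
    return sum(samples) > len(samples) // 2

noisy_reads = [1] * 14 + [0] * 6     # 14 of 20 reads saw the bit as 1
assert majority_vote(noisy_reads) is True
```

So even a fairly error-prone speculative channel can deliver correct data, at the cost of the modest bandwidth the researchers report.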

To accommodate these constraints and craft efficient malware, the boffins devised a custom emulator and 6-bit instruction set called SPASM (Speculative Assembly).

“Using SPASM, developers can write programs, assemble and encrypt them into a payload program,” the paper explains. “When the associated trigger program runs, the payload will decrypt SPASM instructions in the speculative world, and execute them one at a time.”
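To make the fetch-decode-execute loop concrete, here is a purely hypothetical sketch of a tiny 6-bit interpreter. The real SPASM encoding is not described in this article, so the opcode values and mnemonics below are invented for illustration only; the point is the shape of the loop, decoding one 6-bit word at a time:

```python
# Hypothetical sketch of a 6-bit instruction interpreter, in the
# spirit of SPASM. Opcodes and mnemonics below are invented for
# illustration; the real SPASM encoding is not public in this article.

OPCODES = {
    0b000001: "LOAD",
    0b000010: "ADD",
    0b000011: "STORE",
    0b111111: "HALT",
}

def run(bitstream):
    """Decode and 'execute' one 6-bit instruction at a time."""
    executed = []
    for i in range(0, len(bitstream), 6):
        word = int(bitstream[i:i + 6], 2)
        op = OPCODES.get(word, "NOP")  # unknown words act as no-ops
        executed.append(op)
        if op == "HALT":
            break
    return executed

# Example: LOAD, ADD, HALT encoded as three 6-bit words.
program = "000001" + "000010" + "111111"
print(run(program))  # ['LOAD', 'ADD', 'HALT']
```

In the actual attack, the equivalent of this loop runs entirely within speculative execution, which is what keeps the decoded instructions invisible to analysis tools.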

The Colorado compu-boffins come to the same conclusion as other security researchers who have explored Spectre-related issues: these flaws need to be addressed in silicon and microarchitecture patches.

“Until then, attackers may iterate and find new variants of ExSpectre-like malware,” they say. “In the meantime, new detection techniques and software-level mitigations are desperately needed.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/27/spectre_malware_invisible/

Protect you and your biz by learning the tricks of cyber criminals’ trade at SANS London in March

Promo However sophisticated computer systems become, skilled and determined cyber criminals manage to find endless new and more ingenious ways of breaking in to steal data or hold organisations to ransom.

As a security professional, are you confident you have the skills and knowledge to keep all potential attackers at bay and fend off anything they throw at you?

Whatever your particular area of interest, you will be able to fill any gaps in your knowledge at the upcoming training event staged by leading IT security training and certification specialist SANS Institute in London, England, from 11 to 16 March.

Ten intensive courses are on offer, designed to suit all levels from security novice to seasoned expert. All provide the opportunity to gain valuable GIAC certification and promise to arm attendees with defensive weapons they can put to use as soon as they return to work.

Choose between the following courses:

  • Introduction to cyber security

    Jump-start your security education with this basic five-day course covering terminology, networks, security policies, incident response, passwords and cryptography.
  • Security essentials bootcamp style

    Would you be able to find compromised systems on your network? Learn how to set up proper security metrics and communicate them to your executives.
  • Hacker tools, techniques, exploits and incident handling

    Follow a step-by-step response to computer incidents that illustrates issues such as employee monitoring, working with law enforcement and handling evidence.
  • Continuous monitoring and security operations

    If attackers can find a way past perimeter security they will be able to achieve what they came for. Learn to detect dangerous anomalies and nip intrusions in the bud.
  • Mobile device security and ethical hacking

    Mobile devices can be an organisation’s biggest security headache. Explore the strengths and weaknesses in Apple iOS and Android devices.
  • Advanced Web app penetration testing, ethical hacking and exploitation techniques

    As applications continue to evolve, catch up with new frameworks and backends, delve into practical cryptography and examine new protocols such as HTTP/2 and WebSockets.
  • Advanced penetration testing, exploit writing and ethical hacking

    A course for experienced penetration testers. Walk through dozens of real-world attacks and sharpen your skills in hands-on lab sessions.
  • Advanced Incident Response, Threat Hunting, and Digital Forensics

    It’s important to catch any intrusions in progress rather than after attackers have done their worst. Study the signs of criminal behaviour to identify data breaches.
  • Advanced memory forensics and threat detection

    This course on Windows memory forensics for incident investigators uses freeware and open-source tools to examine RAM content that shows what happened on a system.
  • Secure DevOps and cloud application security

How to build and deliver secure software using DevOps and Amazon Web Services, with popular open-source tools such as GitLab, Puppet, Jenkins, Vault, Grafana and Docker.

More information and registration details are right here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/27/square_up_to_the_cybercriminals_at_sans_london_march_2019/

Thunder, thunder, thunder… Thunderclap: Feel the magic, hear the roar, macOS, Windows pwnage tools are loose

Computers have enough trouble defending sensitive data in memory from prying eyes that you might think it would be unwise to provide connected peripherals with direct memory access (DMA).

Nonetheless, device makers have embraced DMA because allowing peripherals to read and write memory without oversight from the operating system improves performance. It’s become common among network cards and GPUs, where efficient data transfer is necessary.

To prevent abuse, vendors have implemented input-output memory management units (IOMMUs), which attempt to limit the CPU memory regions available to attached devices.
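The IOMMU's job can be modelled in a few lines. The toy model below is our own illustration, with an invented API: each device gets an allowlist of memory windows, and every DMA request is checked against them. Real IOMMUs do this with page tables in hardware, but the authorization logic is conceptually the same:

```python
# Toy model of an IOMMU's authorization role: each device may only
# DMA into memory windows explicitly mapped for it. The class and
# region layout are invented for illustration.

class ToyIOMMU:
    def __init__(self):
        self.windows = {}   # device id -> list of (start, end) ranges

    def map_window(self, device, start, length):
        self.windows.setdefault(device, []).append((start, start + length))

    def dma_allowed(self, device, addr, length):
        # A transfer is allowed only if it fits entirely inside
        # one of the device's mapped windows.
        return any(start <= addr and addr + length <= end
                   for start, end in self.windows.get(device, []))

iommu = ToyIOMMU()
iommu.map_window("nic0", 0x1000, 0x1000)   # NIC may touch 0x1000-0x1FFF

assert iommu.dma_allowed("nic0", 0x1800, 0x100) is True
assert iommu.dma_allowed("nic0", 0x3000, 0x100) is False  # outside window
print("checks pass")
```

The Thunderclap work shows that the weakness is usually not this address check itself, but what the OS chooses to map into those windows and how drivers handle what comes back through them.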

Unfortunately, as with CPU architecture capabilities designed to deliver speed, like speculative execution, device makers turn out to be overconfident in their defenses. A wide variety of laptop and desktop computers can be compromised by malicious peripherals, allowing the extraction of secrets from memory or root shell access, despite supposed protections.

Proof that peripherals can pwn you

A paper presented today at the Network and Distributed System Security Symposium (NDSS) in San Diego, California, describes a set of vulnerabilities in macOS, FreeBSD and Linux, “which notionally utilize IOMMUs to protect against DMA attackers.”

“Notionally” here serves as polite academese for “fail to.” As the paper’s authors put it, “We investigate the state-of-the-art in IOMMU protection across OSes using a novel I/O-security research platform, and find that current protections fall short when faced with a functional network peripheral that uses its complex interactions with the OS for ill intent.”

The aforementioned research platform, dubbed Thunderclap, and the associated paper represent the work of assorted academic and think tank boffins: A. Theodore Markettos, Colin Rothwell, Allison Pearce, Simon W. Moore and Robert N. M. Watson (University of Cambridge), Brett F. Gutstein (Rice University) and Peter G. Neumann (SRI International).

Thunderclap is an FPGA-based peripheral emulation platform. The researchers claim that it can be used to interact with a computer’s operating system and device drivers, bypassing IOMMU protections. Connect it to a vulnerable machine and, seconds later, the machine is compromised.

“The results are catastrophic, revealing endemic vulnerability in the presence of a more sophisticated attacker despite explicit use of the IOMMU to limit I/O attacks,” the paper explains. “We are able to achieve IOMMU bypass within seconds of connecting on vulnerable macOS, FreeBSD, and Linux systems across a range of hardware vendors.”

Malicious peripherals may not be as alarming as remote code execution vulnerabilities because local access to a target device is necessary and physical security precautions can be effective. But DMA attack scenarios shouldn’t be brushed aside too lightly.

“In the most accessible version of our story, you obtain a VGA/Ethernet dongle, power adapter, or USB-C storage device from a malicious person/organization and your device is immediately compromised,” explained Robert N. M. Watson, senior lecturer in systems, security, and architecture at the University of Cambridge Computer Laboratory, in an email to The Register.

“But it’s worth thinking a bit further: we can consider a range of supply-chain and remote device attacks, such as attacks against Thunderbolt or PCI-e devices themselves that allow them to then be used against an end user.”

Think supply and demand

As examples, Watson cites supply chain attacks originating in a factory, in firmware development or as a result of a vulnerability in Ethernet dongle firmware or Wi-Fi firmware that could be triggered via malicious network traffic. He also suggests the possibility of a supply chain attack involving malicious firmware on public USB charging stations.

Devices that include a Thunderbolt port (Apple laptops and desktops since 2011, some Linux and Windows laptops and desktops since 2016) or support for Thunderbolt 3 (USB-C) or older versions of Thunderbolt (Mini DisplayPort connectors) are affected by this research. So too are devices that support PCI-e peripherals, via plug-in cards or chips on the motherboard.

Apple, Microsoft, and Intel have issued patches that partially fix the revealed vulnerabilities, but additional mitigation will be required to address the issues identified by the researchers. Windows, which makes limited use of the IOMMU, remains vulnerable.

For example, the paper says, macOS 10.12.4 implements a code-pointer blinding feature, which limits the injection of kernel pointers, but fails to secure other data fields, including data pointers, an omission that may leave systems vulnerable.

Microsoft released Kernel DMA Protection to provide IOMMU support in devices shipped with Windows 10 1803 (updates don’t count), but hasn’t yet provided documentation for device-driver makers to implement such defenses.

The Linux security team considers peripheral security within its threat model but considers the problem difficult to address due to the variety of device drivers. An Intel patch in kernel 4.21 enables the IOMMU for Thunderbolt ports and disables ATS. The FreeBSD Project doesn’t consider malicious peripherals part of its threat model but asked for a copy of the paper for review.

Protect yourself

“For systems where it’s under admin control (Linux and FreeBSD), we recommend enabling the IOMMU at boot,” said Theodore Markettos, senior research associate in the University of Cambridge Computer Laboratory, in an email to The Register.

“This will likely have a performance implication. More deeply, we are highlighting that the interface between peripherals capable of DMA and the kernel is much richer and more nuanced than previously thought.”
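As a hedged sketch of that recommendation on Linux (the exact steps depend on the distribution, bootloader and CPU vendor; the commands below assume an Intel VT-d system using GRUB):

```shell
# Hedged example, not distro-specific advice: enabling the IOMMU at
# boot on a Linux system with Intel VT-d and GRUB.

# 1. Append intel_iommu=on to the kernel command line in
#    /etc/default/grub, e.g.:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"

# 2. Regenerate the GRUB config and reboot
#    (update-grub on Debian/Ubuntu; grub2-mkconfig elsewhere):
sudo update-grub && sudo reboot

# 3. After reboot, confirm the IOMMU came up:
dmesg | grep -i -e DMAR -e IOMMU
```

On AMD systems the equivalent parameter is typically `amd_iommu=on`; FreeBSD admins should consult their platform's DMAR/IOMMU documentation.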

Markettos argues that operating system kernels and device drivers should treat interactions with peripherals with the same wariness that operating systems and applications treat data from the internet.


“The system call interface between processes and the kernel has received substantial scrutiny and hardening, and the same process should be applied to the interface between peripherals and the kernel,” he said.

The researchers have been exploring IOMMU issues since 2015 and working with vendors since 2016. They have now released Thunderclap as an open source project to assist with the identification and remediation of DMA attacks.

“We began our research into this problem in early 2015 using OS tracing techniques to investigate how IOMMUs were managed by various operating systems – the results were not encouraging,” said Watson.

“This led us to a far more detailed multi-year vulnerability analysis, hardware prototyping, and close conversations with multiple vendors to help them understand the implications of the work on their current and future products. We hope very much that our open-source research platform will now be used by vendors to develop and test their I/O security protections going forwards.”

And it appears there’s more work to do. Markettos said DMA in peripherals has become popular due to increasing performance requirements. He and his colleagues have yet to poke around NVMe storage on phones; other phone peripherals including Wi-Fi, GPU, audio, mobile baseband, and cameras; SD card spec v7 (which supports PCIe/NVMe); NVMe over Ethernet and other fabrics; and DMA in embedded systems.

“We’ve been advising vendors to be cautious about adding new devices that support DMA before they understand the security model,” said Markettos. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/26/thunderclap_hacking_devices/