
6 Steps CISOs Should Take to Secure Their OT Systems

The first question each new CISO must answer is, “What should I do on Monday morning?” My suggestion: Go back to basics. And these steps will help.

The inevitable digitalization of an industry can create strife within companies, especially between colleagues tasked with blending often old and idiosyncratic business-critical operational technology (OT) with information technology (IT).

One crucial source of confusion: Who is responsible for the all-important cybersecurity risk mitigation of OT systems as they become part of the Industrial Internet of Things? There’s no universal answer yet. Some chief information security officers (CISOs) are drawn from OT, and some from IT.  

Either way, the first question each new CISO must answer is, “What should I do on Monday morning?” My suggestion: Go back to basics.

What I’ve noticed working with industrial companies around the world is confusion among CISOs distracted by thousands of companies — new and old — offering shiny new tools to prevent and detect threats in exciting ways. As a result, there’s a good chance new CISOs could overlook the basic, fundamental steps needed to build the broadest, strongest risk mitigation.

Here are the six steps all new CISOs should take to begin protecting their OT environments in the most effective way possible:

• Step 1: Asset inventory. A company’s OT systems are its crown jewels, and the CISO’s primary role is to protect them. First step: Explore, discover, and inventory every OT element in the organization to learn exactly what you’re protecting — data, software, systems, etc. Without a complete and accurate asset inventory, the succeeding steps will fall short in minimizing cybersecurity risk.  
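Inventory records can take many forms; here is a minimal Python sketch of what one might capture (the field names, vendor, and values are illustrative, not drawn from any particular inventory standard):

```python
from dataclasses import dataclass, field

# Hypothetical record for a single OT asset; field names are
# illustrative, not from any particular inventory standard.
@dataclass
class OTAsset:
    asset_id: str
    vendor: str
    model: str
    firmware_version: str
    location: str
    criticality: str                 # e.g. "high" for production-critical gear
    software: list = field(default_factory=list)  # installed packages

# Discovery would populate this list with every OT element -- data,
# software, systems -- so later steps know exactly what is protected.
inventory = [
    OTAsset("plc-007", "ExampleVendor", "PLC-2000", "3.1.4",
            "line 2, cell 4", "high", ["ladder-runtime 3.1"]),
]

# With a complete inventory, questions like "which assets are most
# critical?" become simple queries.
high_risk = [a.asset_id for a in inventory if a.criticality == "high"]
```

The software list recorded here is exactly what feeds the vulnerability analysis in step 3.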

• Step 2: Backup/test restore. The most effective way to protect OT systems from ransomware attacks (to cite just one risk, and one that ranges from expensive to ruinous) is to back up OT data and perform a test restore to make certain the backups are sound. Backing up systems is crucial for multiple reasons, security among them.

(Tip: In case of ransomware attacks, don’t forget the European police agency Europol’s public/private No More Ransom site, which offers proven, valuable anti-ransomware tools free of charge.)

Yes, test restore can be challenging, but OT backups are only as good as the test restore process that proves they can actually be recovered, which is what protects the network from data loss.

As we’ll see in step 5, it’s important to identify pertinent data for test restore on a continuous basis — often by asking users in the organization which data is most important for their work — but for the first backup/test restore, do it as widely and deeply as possible now to avoid data loss and other problems down the road.
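The backup/test-restore loop can be sketched in a few lines of Python; a temporary file stands in for OT data (filenames invented), and a checksum comparison proves the restored copy matches the original. This is a simplification of any real OT backup process, but the verify-by-restore principle is the same:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum used to prove a restored copy matches the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_test_restore(source: Path, backup_dir: Path, restore_dir: Path) -> bool:
    backup = backup_dir / source.name
    shutil.copy2(source, backup)             # 1. back up
    restored = restore_dir / source.name
    shutil.copy2(backup, restored)           # 2. restore -- never over the live copy
    return sha256_of(source) == sha256_of(restored)  # 3. verify byte-for-byte

# Demo with a temporary file standing in for OT data.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for d in ("src", "bak", "rst"):
        (root / d).mkdir()
    data_file = root / "src" / "historian.db"
    data_file.write_bytes(b"process history data")
    ok = backup_and_test_restore(data_file, root / "bak", root / "rst")
```

A backup that has never been restored is an assumption, not a safeguard; the comparison in step 3 of the sketch is what turns it into evidence.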

• Step 3: Software vulnerability analysis. Step 1’s asset inventory will reveal all the software in the organization’s OT systems. The CISO must know the state of every software asset. Every piece of software must be subjected to vulnerability analysis. What version of the software do you have? Is it up to date? Are there more recent versions — safer and more effective — the OT system will accept and continue to thrive with?

A crucial question about the software: Does it need patching? If so, here’s a critical warning: Don’t do the automatic IT thing of reflexively patching everything, because OT patching is a complex and challenging process that rates an entire step unto itself.
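The version questions above boil down to matching the step 1 inventory against advisory data. A minimal sketch, with invented package names and a made-up advisory entry standing in for a real CVE/NVD feed or vendor bulletin:

```python
# Hypothetical advisory data: in practice this would come from a CVE/NVD
# feed or from vendor security bulletins. The entry below is made up.
ADVISORIES = {
    ("ladder-runtime", "3.1"): ["hypothetical advisory: buffer overflow in comms module"],
}

def vulnerabilities_for(package: str, version: str) -> list:
    """Return known advisories for an exact package/version pair."""
    return ADVISORIES.get((package, version), [])

# Installed software as revealed by the step 1 inventory (names invented).
installed = [("ladder-runtime", "3.1"), ("hmi-panel", "7.2")]

findings = {
    f"{pkg} {ver}": advisories
    for pkg, ver in installed
    if (advisories := vulnerabilities_for(pkg, ver))
}
```

The output of this matching is a worklist, not a patch order: as step 4 explains, each finding still needs an OT-specific decision about whether patching is even safe.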

• Step 4: Patching. Though automatic in IT, patching in OT is the proverbial briar patch. Sometimes patching OT software can make things worse. The soft underbelly of digitalizing the industrial economy is old OT machines and systems. Some absolutely vital systems have been on factory floors for 15 to 25 years or more, and they can’t be taken down and patched. And even if appropriate (and safe) patches are available, old OT may not have enough memory or CPU bandwidth to accept them.

Finally, many OT systems are highly orchestrated combinations of software and hardware that develop “personalities,” and when they’re patched, they come back up with unpredictable results.

What to do? I suggest a threat analysis approach that can identify vulnerabilities and minimize risk short of patching.

• Step 5: Backup/test restore again. Backup/test restore must become an ingrained habit whenever anything in the OT or IT system changes – after updates, for example. The test restore process should include a plan that specifies testing frequency and the mode of testing. It is also important to make certain the operating system version matches the version of the software being used, as well as the structure of the database.

Important advice: Repeat steps 3 to 5 regularly, forever. New vulnerabilities are often found in old software.

• Step 6: Enable centralized logging. CISOs must know not just how something is working or failing, but why it’s failing — and for that, centralized logging is a must. Centralized logging consolidates, manages, and analyzes logs to empower CISO teams to understand their environments, identify threats as early as possible, and optimize defenses.

In my experience, many OT systems have never been monitored. Given how much goes on in OT systems, consistent centralized logging is a must-have: It enables CISOs to confidently identify alarming security signals amid the potentially deafening routine noise.  
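As a sketch of the idea, the snippet below funnels several named OT loggers into one handler; in production that handler would be something like `logging.handlers.SysLogHandler` pointed at a central collector, and the subsystem names here are invented:

```python
import logging

# In production this handler would be logging.handlers.SysLogHandler
# (or similar) shipping records to a central log server; an in-memory
# handler stands in here so the sketch is self-contained.
class CollectorHandler(logging.Handler):
    def __init__(self):
        super().__init__()
        self.records = []  # the "central" store

    def emit(self, record):
        self.records.append(self.format(record))

collector = CollectorHandler()
collector.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

# Every OT subsystem logs through its own named logger (names invented)...
for subsystem in ("plc.line2", "hmi.cell4"):
    log = logging.getLogger(subsystem)
    log.setLevel(logging.INFO)
    log.propagate = False
    log.addHandler(collector)  # ...but every record lands in one place.

logging.getLogger("plc.line2").warning("unexpected controller restart")
logging.getLogger("hmi.cell4").info("operator login")

# One consolidated stream makes it possible to pick alarming signals
# out of the routine noise.
alarms = [r for r in collector.records if "WARNING" in r]
```

The final filter is the payoff: with every subsystem writing to one stream, separating signal from noise becomes a query rather than a scavenger hunt.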

If new CISOs take these six basic but essential steps — and habitually repeat those that need repeating — they can go home Monday night confident they’ve done a solid job minimizing risk for their organization’s OT.


Satish joined San Jose-based ABB in February 2017 as chief security officer and Group VP, architecture and analytics, ABB Ability™, responsible for the security of all products, services and cybersecurity services. Satish brings to this position a background in computer …

Article source: https://www.darkreading.com/risk/6-steps-cisos-should-take-to-secure-their-ot-systems--/a/d-id/1337236?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Poll: Strengthening Security … by Easing Security?

If security measures were made easier for end users, would your organization be more secure?

The Edge is Dark Reading’s home for features, threat data and in-depth perspectives on cybersecurity.

Article source: https://www.darkreading.com/edge/theedge/poll-strengthening-security--by-easing-security/b/d-id/1337248?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Zynga faces class action suit over massive Words With Friends hack

Zynga – maker of addictive (and crook-tempting) online social games such as FarmVille, Mafia Wars, Café World and Zynga Poker – is facing a potential class action lawsuit over the September 2019 breach in which hackers got access to more than 218 million Words with Friends accounts.

Zynga’s Draw Something was also targeted in the September breach.

The threat actor known as GnosticPlayers went on to claim responsibility for the breach – yet another cache to add to the nearly one billion user records they’d already claimed to have stolen from nearly 45 popular online services earlier in 2019.

Zynga admitted to the breach at the time, saying that hackers got their hands on “certain player account information” but that, at least during the early stages of its investigation, it didn’t think any financial information was accessed.

The game maker didn’t disclose how many accounts were affected, saying only that they’d contact players with affected accounts. Have I Been Pwned confirmed in December 2019 that more than 173 million accounts were hit.

Hacker News, which scrutinized a sample sent over by GnosticPlayers, said that the breached data included names, emails, login IDs, hashed passwords (“SHA1 with salt”), password reset tokens, Zynga account IDs, and connections to Facebook and other social media services.

We don’t know exactly what “SHA1 with salt” means, but we do know that it isn’t bcrypt, scrypt, PBKDF2 or any of the other recognized password hashing functions you’d hope and expect to have been used.
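The distinction is easy to demonstrate with Python’s standard library: a salted SHA-1 is one cheap hash per password guess, while a recognized construction such as PBKDF2 (serving the same purpose as bcrypt or scrypt) forces many iterations per guess:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# "SHA1 with salt": one cheap hash per guess. Commodity GPUs compute
# billions of SHA-1 hashes per second, so salting alone barely slows
# an offline cracking run.
weak = hashlib.sha1(salt + password).hexdigest()

# A recognized password-hashing construction (PBKDF2 here) forces many
# iterations per guess -- 100,000 below -- making every cracking
# attempt vastly more expensive for the same stolen database.
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000).hex()
```

The salt defeats precomputed rainbow tables either way; it is the deliberate slowness of the second construction that actually protects users whose hashes are stolen.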

At any rate, GnosticPlayers also claimed to have drained data from other Zynga-developed games, including Draw Something and the discontinued OMGPOP game, which allegedly exposed clear text passwords for more than seven million users.

The complaint (PDF), which is seeking a jury trial and class status, was filed on Tuesday in the US District Court for California. The plaintiffs’ lawyers say that Zynga allegedly failed “to reasonably safeguard” player information, referring to Zynga’s “substandard password security.”

The complaint also maintains that Zynga failed to notify users in a timely manner. It’s charging Zynga with being responsible for the plaintiffs’ personally identifiable information (PII) being…

…accessed, acquired, and stolen for the purpose of misusing the Plaintiffs’ data and causing further irreparable harm to Plaintiffs’ personal, financial, reputational, and future well-being.

After the theft of Plaintiffs’ PII from Zynga’s platform, it was distributed to and among hacker forums and other identity and financial thieves for the purpose of illegally misusing, reselling, and stealing Plaintiffs’ PII and identity.

Plaintiffs have been damaged as a result, their lawyers said in the complaint.

The suit was brought on behalf of two affected users, one of whom is a parent of an affected user who’s underage, and one of whom had a Zynga account herself.

The Plaintiffs’ lawyers suggest that Zynga “unconscionably” deceived users regarding the safety and protection of their user information. They also maintain that a large number of minor children were implicated in the breach, pointing to a study that estimates that 8% of all mobile gamers are between the ages of 13 and 17.

As the lawyers noted, the Federal Trade Commission (FTC) has said that when children are victims of a data breach, “it might be years before you or your child realizes there’s a problem.”

The lawsuit lists 14 counts of action and claims for relief, ranging from negligence and violation of state data breach statutes to unjust enrichment.

It also claims that while Zynga posted a warning on its website, it has yet to notify users to warn them of the breach, with the class arguing the company “effectively hid the fact that it suffered a data breach” and instead spent the time “shoring up its legal defenses.”

From the complaint:

Only those users who happened to visit Zynga’s website on their own volition, read about the breach in the news, or had signed up to receive email data breach notifications from independent third parties that monitor data breaches were made aware of the breach.

The plaintiffs, along with others affected by the breach, are at risk of fraud, identity theft, and criminal misuse of their personal information “for years to come,” the lawsuit argues.

As of Wednesday afternoon, Zynga hadn’t responded to media requests for comment.


Latest Naked Security podcast

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7xSHa_e71Tg/

Google launches FuzzBench service to benchmark fuzzing tools

First came ‘fuzzing’, a long-established technique for spotting bugs such as security flaws in real applications using automated tools.

More recently, security fuzzing tools have expanded in number, and today there are hundreds of specialised open-source tools and online services designed to probe specific types of software.

But which security fuzzing tools, techniques and algorithms work the best when assessing real programs for bugs?

That’s been harder to know without fuzzing the fuzzers. But doing this presents a problem – traditional assessments often use too few benchmarks and don’t run over long enough periods because testers lack the resources to do anything more ambitious.

So fuzzing users base their enthusiasm for specific tools on incomplete data or their own experience, which creates some uncertainty.

Now Google, which delivered its own open-source testing tool OSS-Fuzz in 2017, has announced FuzzBench, a free service “for painlessly evaluating fuzzers in a reproducible way.”

Reading Google’s description, it almost sounds too good to be true. Researchers integrate the fuzzer they want to test using an easy API and 50 lines of code. FuzzBench then throws real-world benchmarks and many trials at the tool until, after 24 hours, the results appear:

Based on data from this experiment, FuzzBench will produce a report comparing the performance of the fuzzer to others and give insights into the strengths and weaknesses of each fuzzer.

It would be flippant to call this fuzzing by numbers, but the hope is that by giving fuzzers more data on what works they can spend more time making fuzzing tools better.

Fuzzing future

Improving fuzzing matters because being able to do it quickly, cheaply, and easily should, in theory, be one of the best ways to reduce the number of security flaws in software when used under what is politely called real-world conditions.

Fuzzing software involves throwing large numbers of random, tweaked and permuted (fuzzed) input files at an application in the hope of triggering unexpected or hard to find bugs, thereby highlighting security vulnerabilities.

Essentially, it tries to reproduce how a program might be used and the security vulnerabilities this might give rise to in everyday use, ones that are difficult to detect using manual code review.

Because it requires no access to source code, fuzzing is what is called a “black box” technique – the same principle used by hackers when trying to find flaws worth exploiting.
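The core idea is small enough to sketch in Python: mutate a seed input at random and throw it at the target until something crashes. The target below is a toy parser with a deliberately planted bug, purely for illustration; real fuzzers like those FuzzBench evaluates add coverage feedback and far smarter mutation strategies:

```python
import random

def mutate(data: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes -- the core 'fuzz' operation."""
    buf = bytearray(data)
    for _ in range(n_flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def target(data: bytes) -> None:
    """Stand-in for the program under test: a toy parser with a
    deliberately planted bug (hypothetical, for illustration only)."""
    if data and data[0] == 0xFF:
        raise RuntimeError("parser crash")

def fuzz(seed: bytes, trials: int = 50_000) -> int:
    """Throw mutated inputs at the target and count the crashes, each
    of which a real campaign would triage as a potential bug."""
    random.seed(1)  # fixed seed so the demo is reproducible
    crashes = 0
    for _ in range(trials):
        try:
            target(mutate(seed))
        except RuntimeError:
            crashes += 1
    return crashes

crashes = fuzz(b"\x00" * 8)
```

Even this naive random mutation eventually stumbles into the buggy branch; the difference between fuzzers lies in how efficiently they get there, which is exactly what FuzzBench measures.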

Google scale

Developers submit the fuzzer they want to test to the FuzzBench platform, which generates the report by running 20 trials of 24 benchmarks over a 24-hour period using 2,000 CPUs. The service also runs ten other popular fuzzers (including AFL, LibFuzzer, Honggfuzz, QSYM and Eclipser) to provide a comparison.

Statistical tests are part of the suite to estimate how much of the difference between one fuzzer and another is down to chance, and the raw data is provided so developers or pen-testers can make their own assessment. Crashes aren’t included as a metric but will be in the future.

Google has offered a sample report to give some idea of how the data is presented at the end of the fuzz.

As with all fuzzing tools, the benchmark of FuzzBench’s success will be how many researchers and developers use it. If it works as advertised, the ultimate beneficiaries will be the billions of people who depend on reliable software.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rwfS-KbJmi4/

Ethical hackers swarm Pentagon websites

Hackers are crawling all over the US Department of Defense’s websites. Don’t worry, though: they’re white hats, and DoD officials are quite happy about the whole thing.

Four years after it first invited white hat hackers to start hacking its systems, the Pentagon continues asking them to do their worst – and a report released this week says that they’re submitting more vulnerability reports than ever.

The DoD’s Department of Defense Cyber Crime Center (DC3) handles cybersecurity for the DoD, and is responsible for tasks including cyber technical training and vulnerability sharing. It also runs the DoD’s Vulnerability Disclosure Program (VDP).

The VDP emerged from the Hack the Pentagon bug bounty program that the military ran in 2016. That initiative was so successful that it continues to invite hackers to play with its systems. Last year, the Air Force even brought an F-15 to Defcon for hackers to tinker with. Next year, it plans a satellite.

These high-profile events punctuate a more modest but ongoing program that invites hackers to submit security vulnerability reports focusing on DoD websites and web applications. The DoD engaged its DC3 unit to run the continuous program and keep the ethical hacks rolling in.

DC3 just published its first annual report on the program, revealing that it processed 4,013 vulnerability reports from 1,460 white hat researchers. It validated 2,836 of them for mitigation, it said, adding:

These vulnerabilities were previously unknown to the DoD and not found by automated network scanning software, red teams, manual configuration checks, or cyber inspections. Without DoD VDP there is a good chance those vulnerabilities would persist to this date, or worse, be active conduits for exploitation by our adversaries.

2019 was the busiest year for bug reports, the report said, representing a 21.7% increase over 2017 and bringing the total number of bug reports to 12,489.

Information exposure bugs were the most common type reported during the year, followed by violation of secure design principles, cross-site scripting flaws, business logic errors, and open redirects (which are a way to mount phishing attacks).

In the future, DC3 wants to expand the scope of the program beyond DoD websites to cover any DoD information system. It also wants to partner with the Defense Counterintelligence and Security Agency (DCSA) to create what it calls a defense industrial base (DIB) VDP program to secure the DoD’s supply chain. That’s notable, given the past controversy over potential vulnerabilities in third-party drones and cameras sourced by the DoD.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/94RrQb16rIQ/

Facebook: No, we are not killing Libra

Facebook denies that it’s cringing away from its virtual currency plans due to the fact that regulators loathe it, saying that it “remains fully committed to the project.”

On Tuesday, multiple reports suggested that Facebook has decided not to support its Libra virtual currency in its own products and will instead offer users the ability to make payments with government-issued currencies, or that the platform and its partners are weighing whether they should recast it as mostly a payments network that could operate with multiple coins.

According to a report from The Information that cited three sources, Facebook has been mulling offering digital versions of currencies such as the US dollar and the euro, in addition to its proposed Libra token. The Information also reported that Facebook will still launch a digital wallet to enable users to make purchases and send and receive money, but that the rollout would be delayed by several months.

On Tuesday, Facebook moved to quash both The Information’s claims and another report from Bloomberg about Libra turning into a payments network that could operate with multiple coins.

A Facebook spokesperson sent this statement:

Reporting that Facebook does not intend to offer the Libra currency in its Calibra wallet is entirely incorrect. Facebook remains fully committed to the project.

Facebook didn’t address many of the two media outlets’ claims, including the reported delay in Calibra’s rollout, whether it plans to create digital versions of government currencies, or whether Libra might be reimagined as mostly a payments network that deals in multiple coins.

Dante Disparte, head of policy and communications for the Libra Association, sent out this statement:

The Libra Association has not altered its goal of building a regulatory compliant global payment network, and the basic design principles that support that goal have not been changed.

What happened?

Things have been rocky since June 2019, when Facebook announced, along with 27 other companies, that it would launch Libra in 2020. The virtual currency was supposed to be a way to connect the globe, circumvent the financial system, and shrink the cost of sending money, particularly for those populations without ready access to banks. The Libra project’s members included the financial heavyweights: Visa, Mastercard, and other large companies that would partner with Facebook to govern the system.

Within months came a chorus of “Hell, no” responses from multiple governments, followed by the cryptocurrency project being pelted by rapid-fire, major body blows from founding members of the Libra Association.

First came PayPal’s terse “On second thoughts, how about ‘No’?” …followed by Mastercard, Visa, eBay, and the payments firms Stripe and Mercado Pago all jumping ship. Then, in October, it was hit again with news of a not particularly optimistic report about the virtual currency from the G7 group.

At that point, Libra had lost all but one payment company.

The G7 report outlined nine major risks posed by digital currencies like Libra. Even if Libra’s backers were to address concerns, it said, the project still might not be approved by regulators. The report came from a G7 taskforce made up of senior officials from central banks, the International Monetary Fund (IMF) and the Financial Stability Board (FSB), which coordinates rules for the G20 economies.

Also in October, the FSB published a separate report addressing the regulatory dangers of “global stablecoins” in general. Stablecoins are a type of cryptocurrency that, unlike a currency such as Bitcoin, are pegged to established currencies such as the dollar and euro.

The G7 report echoed concerns already put out by the US Congress in July 2019, when it asked Facebook to halt the cryptocurrency for the time being, and of France, which in September rejected Libra as being too dangerous.

That’s the bureaucratic version of what’s happened to Libra in these months of planning. Bloomberg gave this more colorful rendition of its history:

What happened to Libra since its June 2019 unveiling is a story of hubris, wary lawmakers, protective regulators and partners fearful of the risks involved.

According to Bloomberg’s sources, representatives of Facebook and the Libra Association have continued to meet with US regulators to work on their concerns. One of the media outlet’s sources said that treasury officials, in particular, are still interested in determining how Libra will ensure that the payments network isn’t used for money laundering – one of the concerns raised by multiple regulators.

Jennifer Campbell, the founder of Tagomi, one of the new members of the consortium, told Bloomberg that the cryptocurrency is already being tested. It’s regulatory approval that’s the biggest hurdle, given that Facebook has said that it won’t launch Libra without the US government’s OK.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3whZvaWvWGo/

Coronavirus warning spreads computer virus

Earlier this month, we reported on a phishing scam in which the lure was “safety measures” against the Coronavirus (Covid-19).

In that attack, the crooks took you to a facsimile of the website of the World Health Organization (WHO), where the information was originally published.

On the ripped-off copy of the site, however, the crooks had added the devious extra step of popping up an email password box on the main page.

Of course, the WHO website wouldn’t ask for your email password – it’s a public information website, after all, not a webmail service, so it has no need for your email details.

The crooks were hoping that because their website looked exactly like the real thing – in fact, it contained the real website, running in a background browser frame with the illicit popup on top – you might just put in your email details out of habit.

Well, here’s another way that the crooks are using concerns over the Coronavirus outbreak, combined with the WHO’s name, to trick you into clicking buttons and opening files you’d usually ignore:

SophosLabs tracked this particular spam campaign in Italy, where the crooks have made it believable and clickworthy by:

  • Writing the message in Italian.
  • Pretending to quote an Italian official from the WHO.
  • Referencing known virus infections in Italy.
  • Urging Italians in particular to read the document.

In other words, the crooks haven’t just pushed out a blanket message trying to capitalise on global fears, but have given their scam email a regionalised flavour, and therefore a specific reason to act:

coronavirus: informazioni importanti su precauzioni

A causa del fatto che nello suo zona sono documentati casi di infezione […] [l]e consigliamo vivamente di leggere il documento allegato a questo messaggio!

coronavirus: important information on precautions

Because there are documented infections in your area […] we strongly recommend that you read the document attached to this message!

This time, there isn’t a link to a fraudulent website, but an attachment you are urged to read instead.

By now you ought to be suspicious, given that Word documents can contain so-called macros – embedded software modules that are often used to spread malware, and that are an obvious risk to accept from outside your company.

Indeed, Word macros – often used legitimately inside companies for managing internal business workflow – are sufficiently risky when they arrive from outside that Microsoft has, for many years, blocked them by default.

As you probably know, however, the crooks have learned how to turn Microsoft’s security warnings into “features”, as you see here:

The actual document – the part that isn’t dangerous, and doesn’t harbour the macro code – is the text with the blue background you see above, and it has been deliberately created by the crooks to look like a message from Microsoft Office itself:

Your application activated

This document was created in an earlier version of Microsoft Office Word. To view full content, please click “Enable Editing” and then click “Enable Content”

© Microsoft 2020

As reasonable as this sounds, DON’T ENABLE CONTENT!

The “content” you will activate by clicking the [Enable Content] button is not the document itself – you’re already looking at the document part, after all – but the macros hidden in it.

And the macros in this document aren’t anything to do with your company’s workflow – they make up the malicious software code that the crooks want to run.

SophosLabs has published a technical report on what happens if you run this macro malware, which involves a series of stages that ultimately result in infection by a well-known strain of Windows malware called Trickbot.

We recommend you read the Labs report to learn how a modern malware infestation unfolds, with each step downloading or unscrambling the next part, usually in the hope of breaking the attack into a series of operations that are less suspicious, one-at-a-time, than running the final malware right away.

Where have I heard that name before?

If you’re wondering where you’ve heard the name Trickbot before, it might very well have been on the Naked Security Podcast, where our resident Threat Response expert Peter Mackenzie has mentioned it more than once. (In the episode below, Peter’s section about malware attacks starts at 19’10”.)

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.

Trickbot is dangerous in its own right – it started life as a so-called banking Trojan, a type of malware that tries to hijack access to your bank account.

These days, Trickbot is also very commonly a precursor to a full-blown ransomware attack.

By implanting Trickbot on your computer, the crooks get a foothold inside your network where they can harvest passwords and data and much more, as well as mapping out what resources you have.

Once they’ve squeezed all the criminal value they can out of the Trickbot part, the crooks often use the bot as a launch pad for their final act: a ransomware attack.

One ransomware family that commonly follows unchecked Trickbot infections is the malware strain known as Ryuk, whose criminal operators are notorious for asking for six- and even seven-figure ransom payments.

What to do?

  • Don’t be taken in by authority figures mentioned in an email. This scam claims to be from an Italian WHO official, but anyone can sign off an email with an impressive name.
  • Never feel pressured into opening attachments in an email. Most importantly, don’t act on advice you didn’t ask for and weren’t expecting. If you are genuinely seeking advice about the coronavirus, do your own research and make your own choice about where to look.
  • Never click [Enable Content] in a document because the document tells you to. Microsoft blocked the automatic execution of so-called “active content”, such as macros, precisely because they are so often used to implant malware on your computer.
  • Educate your users. Products like Sophos Phish Threat can demonstrate the sort of tricks that phishers and scammers use, but in safety so that if anyone does fall for it, no real harm is done. Sophos also has a free anti-phishing toolkit which includes posters, examples of phishing emails, top tips to spot email scams, and more.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aO-3E9fbiao/

Sadly, the web has brought a whole new meaning to the phrase ‘nothing is true; everything is permitted’

Column “Hey there,” the message begins. Out of the blue over Skype, someone I hadn’t communicated with in nearly a year reaches out.

“…thanks for replying…I had my laptop AND phone stolen from a hotel lobby and am locked out of all my Apple iCloud everything because of 2 factor authorisation. I saw you and a couple of other people on Skype which I can get to from the hotel computer…anyway…I’m sorry to bother you and ask…”

Oh god, here it comes. First the hook, then the line.

“…could I borrow $60 (US) via PayPal…”

Here we go. And because this is so obviously a scam, the only thing to do is to point to it and dismiss it, with “…and this isn’t some stupid scam…”

Because of all the people you know in the universe, you’re contacting someone you’ve never met in the flesh and only spoke to once on Skype when you’re in an extreme pickle.

Ugh. The worst part isn’t my reply of, “Goodness I’m afraid I cannot help,” with the horrible feeling of guilt that accompanies my reply – a feeling the scammer relies upon, necessary for their hacking of the social bond. The worst part is recognising my utter inability to discern where the truth lies.

For those with a slight touch of paranoia (that’s pretty much anyone who’s spent any time doing infosec work), you probably have a set of questions or conversation topics you’d discuss with a friend – things only the two of you would know. With another friend I have an agreed-upon “safe word”. These things uniquely identify us to one another, in whatever medium we choose to communicate.

I had none of this history to fall back upon here. When you can’t tell what’s real and what’s not, the safest thing is to do nothing at all. Safe, but corrosive. Because there are lots of situations we can imagine where we might need help and may not have the network of friends at hand to prove our authenticity.

Our ability to discern the truth at a distance has never been great: “Believe only half of what you see and none of what you hear.” That’s only grown more difficult in an era of deepfakes, voice synthesis and Russian propaganda bots. It’s possible that if I’d challenged this message, they’d have produced the voice or even the moving image of this person. And how could I know – truly know – whether I’d been scammed? When it was too late.

It feels as though we’ve crossed a line, where the evidence of our senses has become suddenly and comprehensively insufficient to the tasks we need to master if we want to make our way in a well-connected and altogether-too-crafty world. Everything that comes to us at a distance – mediated by technology – could be assumed to be fake. That’s not paranoia, but even so, it generates a kind of hysterical blindness. If we cannot trust all of this connectivity, we’ve made a very uneasy bed of lies for ourselves. Now we have to lie down in it, for a very nervous sleep.

Can we dream up a world where we change our emphasis – from connection to authenticity? Where our focus remains fundamentally upon the proofs that form the basis for trust? It would mean losing our illusions, and all the comforting lies we tell ourselves and let others tell us, but what we’d gain would allow us to know something like the truth at a distance.

Technology can’t be the entire answer to that, but as part of the problem, it also has to be part of the solution. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/05/the_whole_truth/

Enable that MF-ing MFA: 1.2 million Azure Active Directory accounts compromised every month, reckons Microsoft

Microsoft reckons 0.5 per cent of Azure Active Directory accounts as used by Office 365 are compromised every month.

The Windows giant’s director of identity security, Alex Weinert, and IT identity and access program manager Lee Walker revealed the figures at the RSA conference last month in San Francisco.

“About a half of a per cent of the enterprise accounts on our system will be compromised every month, which is a really high number. If you have an organisation of 10,000 users, 50 will be compromised each month,” said Weinert.

It is an astonishing and disturbing figure. Account compromise means that a malicious actor or script has some access to internal resources, though the degree of compromise is not stated. The goal could be as simple as sending out spam or, more seriously, stealing secrets and trying to escalate access.
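Weinert’s arithmetic compounds quickly. A back-of-the-envelope sketch (ours, not Microsoft’s methodology) of what a steady 0.5 per cent monthly rate means over a year, assuming each month is independent:

```python
# Back-of-the-envelope view of a 0.5% monthly account compromise rate.
MONTHLY_RATE = 0.005
users = 10_000

# Expected compromises in one month (matches Weinert's "50 per month" figure).
per_month = users * MONTHLY_RATE
print(f"Expected compromises per month: {per_month:.0f}")  # 50

# Chance a single account is hit at least once in a year,
# assuming independent months at the same rate.
survives_year = (1 - MONTHLY_RATE) ** 12
print(f"P(compromised at least once in a year): {1 - survives_year:.1%}")
```

On those assumptions, roughly one account in 17 gets compromised at some point during the year.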

Password spray attacks account for 40% of compromised accounts

How do these attacks happen? About 40 per cent are what Microsoft calls password spray attacks. Attackers use a database of usernames and try logging in with statistically probable passwords, such as “123” or “p@ssw0rd”. Most fail but some succeed. A further 40 per cent are password replay attacks, where attackers mine data breaches on the assumption that many people reuse passwords, including their enterprise passwords, in non-enterprise environments. That leaves 20 per cent for other kinds of attacks like phishing.
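The spray technique works because a few likely passwords are tried against many accounts, rather than many passwords against one account (which would trip lockout policies). A toy illustration, with invented account data:

```python
# Toy model of a password spray: a handful of statistically likely passwords
# checked against many accounts. All account data here is invented.
COMMON_PASSWORDS = {"123456", "p@ssw0rd", "Summer2020!"}

accounts = {
    "alice": "correct horse battery staple",
    "bob": "p@ssw0rd",            # reused weak password
    "carol": "Summer2020!",       # seasonal pattern
    "dave": "x9$kQ!v2#mW",
}

# One or two guesses per account is enough to stay under lockout thresholds
# while still harvesting every account using a popular password.
compromised = [user for user, pw in accounts.items() if pw in COMMON_PASSWORDS]
print(compromised)  # ['bob', 'carol']
```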

The key point, though, is that if an account is compromised, said Weinert, “there’s a 99.9 per cent chance that it did not have MFA [Multi Factor Authentication]”. MFA is where at least one additional identifier is required when logging in, such as a code on an authenticator application or a text message to a mobile phone. It is also possible (and preferable) to use FIDO2 security keys, a feature now in preview for Azure AD. Even just disabling legacy authentication helps, with a 67 per cent reduction in the likelihood of compromise.
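The codes produced by authenticator apps are time-based one-time passwords (TOTP, RFC 6238): an HMAC over a moving 30-second counter, truncated to six or eight digits. A minimal stdlib-only sketch, checked against the RFC’s published test vector:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HMAC-SHA1 over a time counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59s, 8 digits.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, 59, digits=8))  # 94287082
```

Even this simple scheme defeats spray and replay attacks: a stolen password alone no longer completes the login.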

MFA is only possible with what Microsoft calls modern authentication such as OAuth 2.0. Legacy authentication asks only for username and password. Even when the credentials are sent over an encrypted connection, it is more vulnerable thanks to techniques such as those described above.
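The practical difference is easy to see at the HTTP header level. Legacy (basic) authentication ships the raw credential with every request, while modern auth presents a scoped, expiring token obtained via OAuth 2.0. A sketch with an invented username and a placeholder token:

```python
import base64

# Legacy (basic) authentication: the password travels with every request.
username, password = "alice@example.com", "hunter2"
legacy_header = "Basic " + base64.b64encode(
    f"{username}:{password}".encode()).decode()

# Modern authentication: credentials (plus MFA) are exchanged once at the
# identity provider for a token; only the token is presented thereafter.
access_token = "eyJhbGciOi...truncated"   # placeholder, not a real JWT
modern_header = f"Bearer {access_token}"

# Basic auth is trivially reversible -- the password is right there:
decoded = base64.b64decode(legacy_header.split()[1]).decode()
print(decoded)  # alice@example.com:hunter2
```

Since a bearer token can be short-lived and bound to an MFA-verified session, intercepting or replaying it is far less valuable than capturing a reusable password.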

SMTP-enabled users have the highest chance of being compromised

Microsoft was able to correlate account compromises with the protocols for which a user has legacy authentication enabled. If SMTP is enabled, the chance of being compromised rises to 7 per cent, RSA attendees were told.

How many users have MFA enabled? Weinert and Walker said the global adoption rate is currently around 11 per cent, accounting for the high rate of account compromise.

Disable legacy authentication, break stuff

The solution seems simple: disable legacy authentication for all users. Microsoft itself set out to do this for its own employees in September 2018. A test with a small number of users was successful so for the next phase of the rollout it disabled legacy authentication for its entire sales team, around 60,000 users. “In the middle of the night we started getting calls,” the speakers said.

The problem turned out to be a telesales application which had a backend component using a single account. The login for this component used legacy authentication. The result was to break the application for everyone, causing serious business disruption, defined within the company as a “severity 1” meltdown. The new policy was rolled back.

The team started to keep a 90-day sign-in history to identify legacy authentication logins. They discovered an array of tools and utilities in use. Even the tools used to build Windows and Office depended on legacy authentication. They began the slow process of identifying the owners of these tools and working with them to update the authentication. By March 2019 they had turned off legacy authentication for 94 per cent of users, and the figure is nearer 100 per cent today. According to Weinert and Walker, who showed live monitoring graphs, Microsoft receives 1.5 million attempted legacy authentication logins every day, which are now blocked.
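The triage Microsoft describes boils down to scanning sign-in history and grouping legacy-authentication logins by application, so the owners can be found and migrated. A sketch of that idea, with an invented record layout and app names:

```python
from collections import Counter

# Group legacy-authentication sign-ins by application to find owners to chase.
# The record layout and application names below are invented for illustration.
LEGACY_CLIENTS = {"SMTP", "IMAP", "POP3", "Exchange ActiveSync", "Other clients"}

signins = [
    {"app": "Telesales backend", "client": "SMTP"},
    {"app": "Telesales backend", "client": "SMTP"},
    {"app": "Outlook",           "client": "Mobile Apps and Desktop clients"},
    {"app": "Build tooling",     "client": "Other clients"},
]

legacy_by_app = Counter(
    s["app"] for s in signins if s["client"] in LEGACY_CLIENTS
)
for app, count in legacy_by_app.most_common():
    print(f"{app}: {count} legacy logins")
```

Running this sort of report for 90 days before flipping the switch is exactly what would have caught the telesales backend before the middle-of-the-night calls.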

What’s next for the rest of us?

The statistics are compelling. Disabling legacy authentication and enforcing MFA looks like a wise move for any organisation that cares about security. It is hard, though, as Microsoft’s own experience shows. Fixing applications is problematic, particularly since you may not have the code. It is also a little more complex for developers, requiring token exchange in place of simply submitting username and password. Note too that the most common attacks can be prevented simply by using long, unique and unguessable passwords.

At RSA, Microsoft showed tools for disabling legacy authentication and enforcing MFA in Azure AD. The key settings are in the Conditional Access section of Azure AD, where you can set policies. A new feature in preview is to set a policy to report-only. This means that the policy is not enforced, but you get a log of sign-ins that would have failed, so you can fix them without business disruption.
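The report-only idea is simple: evaluate the policy on every sign-in, log what would have been blocked, but let the sign-in through. A toy model (field names invented; real Conditional Access policies are far richer):

```python
# Toy model of report-only policy evaluation: violations are logged,
# not enforced, so they can be fixed without business disruption.
def evaluate(signin: dict, enforce: bool = False) -> str:
    violates = signin["legacy_auth"] or not signin["mfa"]
    if violates and enforce:
        return "blocked"
    if violates:
        print(f"REPORT: {signin['user']} would have been blocked")
    return "allowed"

result = evaluate({"user": "svc-telesales", "legacy_auth": True, "mfa": False})
print(result)  # allowed -- logged but not enforced
```

Once the report log runs clean, the same policy can be flipped to enforced with some confidence that nothing business-critical will break.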

Basic security defaults can be set without Azure AD Premium

There is a snag. Conditional Access Policies are a feature of Azure AD Premium at extra cost. Many organisations therefore cannot use them. For them, Microsoft offers a feature called “security defaults”, which is in the Properties section of the Azure AD dashboard. When enabled, this enforces use of the Microsoft authenticator app for iOS or Android and disables legacy authentication. This is enabled by default in Office 365 tenants created after October 22, 2019. It is all or nothing, however, and if you upgrade to using conditional access policies instead, you have to disable security defaults.

From October 2020, Microsoft is disabling legacy authentication in Exchange, which will also break some applications, but may also give organisations a nudge towards MFA.

The bottom line is that any organisation tolerating an account compromise rate of 0.5 per cent a month or more is a long way from where it should be regarding security. Disabling legacy authentication helps and enforcing MFA helps even more. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/05/microsoft_12_million_enterprise_accounts_are_compromised_every_month/

‘Unfixable’ boot ROM security flaw in millions of Intel chips could spell ‘utter chaos’ for DRM, file encryption, etc

A slit in Intel’s security – a tiny window of opportunity – has been discovered, and it’s claimed the momentary weakness could be one day exploited to wreak “utter chaos.”

It is a fascinating vulnerability, though non-trivial to abuse in a practical sense. It is claimed the flaw cannot be fixed without replacing the silicon, only mitigated: the design flaw is baked into millions of Intel processor chipsets. The problem revolves around cryptographic keys that, if obtained, can be used to break the root of trust in a system.

Buried deep inside modern Intel chipsets is what’s called the Management Engine, or these days, the Converged Security and Manageability Engine (CSME). We’ve written about this a lot: it’s a miniature computer within your computer. It has its own CPU, its own RAM, its own code in a boot ROM, and access to the rest of the machine.

In recent chipsets, the CSME’s CPU core is 486-based, and its software is derived from the free microkernel operating system MINIX. You can find a deep dive into the technology behind it all, sometimes known as the Minute IA System Agent, here [PDF] by Peter Bosch.

Like a digital janitor, the CSME works behind the scenes, below the operating system, hypervisor, and firmware, performing lots of crucial low-level tasks, such as bringing up the computer, controlling power levels, starting the main processor chips, verifying and booting the motherboard firmware, and providing cryptographic functions. The engine is the first thing to run when a machine is switched on.

The exploit

One of the first things it does is set up memory protections on its own built-in RAM so that other hardware and software can’t interfere with it. However, these protections are disabled by default, so there is a tiny timing gap between a system powering on and the CSME executing the boot ROM code that installs those protections, which take the form of input-output memory-management unit (IOMMU) data structures called page tables.

During that timing gap, other hardware able to fire off a DMA transfer into the CSME’s private RAM may do so, overwriting variables and pointers and hijacking its execution. At that point, the CSME can be commandeered for malicious purposes, all out of view of the software running above it.

It’s like a sniper taking a shot at a sliver of a target as it darts past small cracks in a wall. The DMA write operation can be attempted when the machine is switched on, or wakes up from sleep, or otherwise when the CSME goes through a reset, which resets the IOMMU protections. You’ll need local, if not physical, access to a box to exploit this.
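The bug class here is a classic time-of-check race: the protection exists, but anything that can write during the window before it is armed wins. A deterministic toy model of that pattern (an illustration of the class of flaw, not of the actual CSME internals):

```python
# Deterministic toy model of the boot-time race: protections are off by
# default, and a write that lands before they are armed hijacks state.
# This illustrates the class of bug only, not real CSME behaviour.
class TinyEngine:
    def __init__(self):
        self.iommu_enabled = False        # disabled by default -- the flaw
        self.page_tables = "trusted"

    def dma_write(self, value: str) -> bool:
        """Models an external device firing a DMA transfer at private RAM."""
        if not self.iommu_enabled:
            self.page_tables = value      # write lands: no protection yet
            return True
        return False                      # blocked once the IOMMU is up

    def boot(self, attacker=None):
        if attacker:                      # attacker races the boot ROM...
            attacker(self)
        self.iommu_enabled = True         # ...which arms the IOMMU too late

engine = TinyEngine()
engine.boot(attacker=lambda e: e.dma_write("hijacked"))
print(engine.page_tables)  # hijacked -- execution flow can now be seized
```

Note the fix is ordering, not strength: had the IOMMU been enabled before any other hardware could issue DMA, the same write would simply bounce off.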

Who found it?

The weakness was spotted and reported to Intel by Positive Technologies, an infosec outfit that has previously prodded and poked Chipzilla’s Management Engine. Although Positive announced its findings today, it is withholding the full technical details until a whitepaper about it all is ready. In a summary advisory, seen by The Register earlier this week, the team described the issue thus:

1. The vulnerability is present in both hardware and the firmware of the boot ROM. Most of the IOMMU mechanisms of MISA (Minute IA System Agent) providing access to SRAM (static memory) of Intel CSME for external DMA agents are disabled by default. We discovered this mistake by simply reading the documentation, as unimpressive as that may sound.

2. Intel CSME firmware in the boot ROM first initializes the page directory and starts page translation. IOMMU activates only later. Therefore, there is a period when SRAM is susceptible to external DMA writes (from DMA to CSME, not to the processor main memory), and initialized page tables for Intel CSME are already in the SRAM.

3. MISA IOMMU parameters are reset when Intel CSME is reset. After Intel CSME is reset, it again starts execution with the boot ROM.

Therefore, any platform device capable of performing DMA to Intel CSME static memory and resetting Intel CSME (or simply waiting for Intel CSME to come out of sleep mode) can modify system tables for Intel CSME pages, thereby seizing execution flow.

Intel attempted to mitigate the hole, designated CVE-2019-0090, last year with a software patch that prevented the chipset’s Integrated Sensor Hub from attacking the CSME, though Positive today reckons there are other ways in. The team also said all Intel chip families available today, prior to tenth-generation Ice Point parts, are vulnerable.

What’s the impact?

The CSME provides, among other things, something called Enhanced Privacy ID, or EPID. This is used for things like providing anti-piracy DRM protections, and Internet-of-Things attestation. The engine also provides TPM functions, which allow applications and operating system software to securely store and manage digital keys for things like file-system encryption. At the heart of this cryptography is a Chipset Key that is encrypted by another key baked into the silicon, and you can’t do too much damage, it seems, until you can decrypt the Chipset Key.

If someone manages to extract that hardware key, though, they can unlock the Chipset Key, and start to undo Intel’s root of trust on large swathes of products at once, we’re told.

“To fully compromise EPID, hackers would need to extract the hardware key used to encrypt the Chipset Key, which resides in Secure Key Storage (SKS),” explained Positive.

“However, this key is not platform-specific. A single key is used for an entire generation of Intel chipsets. And since the ROM vulnerability allows seizing control of code execution before the hardware key generation mechanism in the SKS is locked, and the ROM vulnerability cannot be fixed, we believe that extracting this key is only a matter of time.

“When this happens, utter chaos will reign. Hardware IDs will be forged, digital content will be extracted, and data from encrypted hard disks will be decrypted.”
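Why one leaked key is so catastrophic comes down to the shape of the hierarchy: a single generation-wide hardware key wraps every device’s Chipset Key. A toy model of that structure (the "cipher" is a throwaway XOR keystream for illustration only; real silicon uses proper block ciphers, and all key material below is invented):

```python
import hashlib

# Toy model of a key hierarchy: one generation-wide hardware key wraps each
# device's Chipset Key. XOR-with-a-hash is used purely for illustration.
def wrap(key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

unwrap = wrap  # an XOR keystream is its own inverse

hardware_key = b"one-key-per-chip-generation!"   # shared across a generation
devices = {
    "laptop-A": wrap(hardware_key, b"chipset-key-A"),
    "laptop-B": wrap(hardware_key, b"chipset-key-B"),
}

# Extract the hardware key once, and every device's Chipset Key falls:
for name, wrapped in devices.items():
    print(name, unwrap(hardware_key, wrapped))
```

Had each device carried a unique wrapping key, extracting one would compromise one machine; sharing the key across a generation turns a single extraction into a break of the whole fleet.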

Intel says folks should install the firmware-level mitigations, “maintain physical possession of their platform,” and “adopt best security practices by installing updates as soon as they become available and being continually vigilant to detect and prevent intrusions and exploitations.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/05/unfixable_intel_csme_flaw/