STE WILLIAMS

Fleeceware is back in Google Play – massive fees for not much at all

Last September, we wrote about “fleeceware”, a term we coined to describe apps that charge huge amounts but give you very little in return.

Technically, the apps themselves aren’t malware, because the code in the app doesn’t do anything illegal, dangerous, sneaky, snoopy, subversive or surreptitious.

The treachery lies in the payment model – the fleeceware we identified back in September 2019 didn’t charge a fee for the app, but instead sold you a subscription to go along with the app.

And what subscriptions they were!

How about a QR code reader, much like the one already built into your mobile phone’s camera app, that was free for a three day trial…

…but then suddenly cost you a massive €104.99 even if you uninstalled the app straight after trying it and never used it again.

The app’s free, don’t forget; it’s the subscription that you’re being charged for, and Google permits app developers to ask that sort of money.

As SophosLabs researcher Jagadeesh Chandraiah wrote last year:

Because the apps themselves aren’t engaging in any kind of traditionally malicious activity, they skirt the rules that would otherwise make it easy for Google to justify removing them from the Play Market.

In a free market, it’s up to you to decide if the product and its associated service really is worth it.

So even apps that do very little can charge a lot, and hope that you forget to cancel them before your brief trial expires.

Well, we’re now in 2020, and it seems that it’s a case of plus ça change, plus c’est la même chose.

Jagadeesh has revisited the Play Store and found that new fleeceware apps seem to appear whenever old ones get removed, so there are still plenty of “moneytrap apps” waiting to catch out trusting or unsuspecting users.

The numbers beggar belief

Jagadeesh has written an update to his earlier paper and we suggest you read it because some of the facts and figures almost beggar belief.

Would you pay €104.99 to use an emoji keyboard? Would you pay €64.99 to use a camera app?

We suspect not, and yet these are examples of two apps that show up with more than 100 million installs each – as Jagadeesh points out, that’s twice as many installs as the staggeringly popular game Call of Duty: Mobile.

Many of these apps also sport a surprisingly high number of 5-star reviews, often with just one or two words such as ‘Perfect’, ‘Great’, ‘Love it’ and ‘Like it’.

For a list of apps, sample screenshots, the charges they’re asking, and some good advice on how not to get tricked, please read Jagadeesh’s article Fleeceware apps persist on the Play Store.

As we said last time, perhaps this is simply an extreme case of caveat emptor (buyer beware).

But on the app store of the world’s largest mobile operating system maker, we’d like to think that users would never find themselves being charged hundreds of euros for an unremarkable app.

What to do?

Remember:

  • Always read the small print.
  • Subscriptions outlive the app.
  • Subscriptions can’t be ended simply by uninstalling the app.

If in doubt, leave it out!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hTaSy5s-eoE/

Windows 7 computers will no longer be patched after today

Do you know what you were doing 3736 days ago?

We do! (To be clear, lest that sound creepy, we know what we were doing, not what you were doing.)

Admittedly, we didn’t remember all on our own – we needed the inexorable memory of the internet to help us recall what happened on 22 October 2009.

That was the official release date of Windows 7, so we armed ourselves with a fresh-out-of-the-box copy (remember boxed software?) and tried a bunch of new viruses against it.

Simply put, we took the next 10 Windows malware samples that showed up for analysis at SophosLabs, checked that they ran on the previous versions of Windows and then threw them at the all-new Windows 7.

The good news is that three of the 10 samples didn’t work on Windows 7; the bad news is that seven did.

You can’t really blame Microsoft for that, as much as you might like to, given that everyone expected existing software to work “out of the box” with the new version, despite numerous security improvements.

That was a decade ago – 10 years and nearly 3 months, to be precise.

Today marks the other end of the Windows 7 story – the very end of the other end, in fact.

It’s the first Patch Tuesday of 2020 and once today’s Windows 7 updates are shipped…

…that’s that.

“So long, and thanks for all the fish.”

There won’t be any more routine Windows 7 updates, as there haven’t been for Windows XP since Tuesday, 08 April 2014.

The problem is that “new” malware samples, together with new vulnerabilities and exploits, are likely to work on old Windows 7 systems in much the same way, back in 2009, that most “old” malware worked just fine on new Windows 7 systems.

Even if the crooks stop looking for new vulnerabilities in Windows 7 and focus only on Windows 10, there’s a fair chance that any bugs they find won’t be truly new, and will have been inherited in code that was originally written for older versions of Windows.

Bugs aren’t always found quickly, and may lie low for years without being spotted – even in open source software that anyone can download and inspect at their leisure.

Those latent bugs may eventually be discovered, “weaponised” (to use one of the security industry’s less appealing jargon terms) and exploited by crooks, to everyone’s unfortunate surprise.

The infamous Heartbleed flaw in OpenSSL was there for about two years before it became front-page news. In 2012, the Unix security utility sudo fixed a privilege escalation bug that had been introduced in 2007. OpenSSH patched a bug in 2018 that had sat undiscovered in the code since about 2000.

Who’s at fault?

Windows 10 is significantly more secure against exploitation by hackers than Windows 7 ever was, and retrofitting those new security features into Windows 7 is just not practicable.

For example, there are numerous “breaking changes” in Windows 10 that deliberately alter the way things worked in Windows 7 (or remove components entirely) because they’re no longer considered secure enough.

For that reason, going forwards by upgrading can be considered both a necessary and a desirable step.

At the same time, not going forwards will leave you more and more exposed to security holes – because any vulnerabilities that get uncovered will be publicly known, yet unpatched forever.

For better or for worse, the modern process of bug hunting and disclosure generally involves responsibly reporting flaws, ideally including a “proof of concept” that shows the vendor how the bug works in real life as a way of confirming its importance.

Then, once patches are out, it’s now considered not only reasonable but also important to publish a detailed exposé of the flaw and how to exploit it.

As crazy as that sounds, the idea is that we’re more likely to write secure software in future if we can readily learn from the mistakes of the past, on the grounds that those who cannot remember history are condemned to repeat it.

The downside of the full disclosure of exploits, however, is that those disclosures are sometimes “attack instructions in perpetuity” against systems whose owners haven’t patched, can’t patch, or won’t patch.

What to do?

  1. Identify systems on your network that simply can’t be updated. You then need to decide whether you absolutely need to keep them, for example because they are irreplaceable and specialised devices that you simply can’t do without, or to get rid of them forever. If you have to keep them, put them on a separate network and limit their exposure as much as you can.
  2. Identify systems where other vendors’ software is holding you back. Keeping on with a now-insecure version of Windows just to carry on running a now-also-insecure software stack from another vendor is just making a bad thing worse. GOTO 1.
  3. Set yourself some hard deadlines for finally making the move to Windows 10. Technically, you’ve already left it too late, so don’t delay any more. The longer you put it off, the more exposed you will be, and the greater the number of cybercrooks who will be able to penetrate your network if they decide to try. GOTO 1.
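Step 1’s inventory work can start from whatever asset list you already have. Here’s a minimal sketch that flags end-of-life hosts, assuming a hypothetical CSV export with `hostname` and `os` columns (adjust the column names and OS strings to match your own tooling):

```python
import csv
import io

def flag_unsupported(inventory_csv, unsupported=("Windows 7", "Windows Server 2008")):
    """Return hostnames running an OS that no longer receives security updates."""
    reader = csv.DictReader(io.StringIO(inventory_csv))
    return [row["hostname"] for row in reader
            if any(row["os"].startswith(name) for name in unsupported)]

# Hypothetical inventory export from an asset-management tool:
inventory = """hostname,os
pc-accounts-01,Windows 7 Professional
pc-reception,Windows 10 Pro
srv-files,Windows Server 2008 R2
"""
print(flag_unsupported(inventory))  # ['pc-accounts-01', 'srv-files']
```

Anything this turns up goes straight onto the step 1 or step 3 list above.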

Depending on whom you ask, you’ll see figures suggesting that somewhere between 25% and 33% (that’s one-fourth to one-third) of desktop computers are still running Windows 7.

So, please… don’t delay – do it today!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5GCHHFgc5Rs/

US hands UK ‘dossier’ on Huawei: Really! Still using their kit? That’s just… one… step… beyond

It would be “nothing short of madness” to use Huawei gear in Britain’s 5G mobile networks, an American national security adviser has reportedly told UK Prime Minister Boris Johnson.

As reported this morning, a US delegation consisting of deputy national security advisor Matt Pottinger, junior foreign minister Chris Ford, special envoy Robert Blair and three others flew into London yesterday to hand unspecified “intelligence” to British officials.

The delegation refused to clarify publicly what was so compelling about this intelligence that it would convince the UK to shut out Huawei.

One of the delegates did tell the Guardian that “Donald Trump is watching closely”, while the officials are also reported to have threatened to reduce intelligence-sharing with the UK if Blighty chooses the Chinese firm for 5G – flatly contradicting domestic spy chief Sir Andrew Parker, who yesterday shrugged his shoulders about the risks.

Those known risks are twofold: Huawei’s coding practices are pisspoor, as Britain’s Huawei Cyber Security Evaluation Centre (HCSEC) found last year; and there is the ever-present fear that Huawei, or people within Huawei, could be forced to abuse their product knowledge to serve the Chinese regime, perhaps through espionage conducted on UK comms networks or helping with denial-of-service attacks.

Although the US has been claiming for years that Huawei poses a threat to communication security, given the well-documented activities of American spy agencies over the last couple of decades this seems like a hollow concern. It’s not implausible, even, that American spies are concerned their level of covert access to the world’s conversations will also become available to Chinese eavesdroppers, presenting yet another threat to US dominance.

With Huawei offering a cut-price alternative to US enterprise tech brands such as Cisco, as well as arguably better technology to Western 5G network products, it’s little surprise the American government is furiously lobbying on its industry’s behalf.

Despite US unease, none of the technical threats said to be posed by Huawei have made it into the public domain. In the absence of evidence such as that gathered by HCSEC, remaining US objections could appear to the onlooker to be mostly political.

Huawei’s UK veep, Victor Zhang, said in a canned statement: “We are confident that the UK government will make a decision based upon evidence, as opposed to unsubstantiated allegations. Two UK parliamentary committees concluded there is no technical reason to ban us from supplying 5G equipment, and this week the head of MI5 said there is ‘no reason to think’ the UK’s intelligence-sharing relationship with the US would be harmed if Britain continued to use Huawei technology.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/14/us_huawei_uk_lobbying_trip/

How to Keep Security on Life Support After Software End-of-Life

It’s the end of support this week for Windows 7 and Server 2008. But what if you truly can’t migrate off software, even after security updates stop coming?


Support for Windows 7 and Server 2008 will officially end after today. That means shops running anything on these OSes will no longer receive routine security updates and patches going forward. That’s significant, because the 10-year-old Win7 operating system is still in style; according to Statcounter, Windows 7 is still deployed on one out of every four Windows machines.

Without the appropriate security support going forward, threat actors will be keeping an eye out for targets running older OSes like this. (Over 1,000 vulnerabilities were found in Win 7 last year alone.) Businesses face a very real security risk by using products after support runs out.

But some organizations simply cannot migrate away from unsupported software immediately – and may find they are using unsupported software for many months, years – or even decades to come.

For example, budget constraints may hold back SMBs’ migration plans. According to Kaspersky research, 40 percent of very small businesses and 48 percent of small and medium-sized businesses still rely on unsupported or approaching-end-of-support operating systems.

It is even more common in industrial control system environments to find older or outdated software.

As Jason Christopher, principal cyber risk advisor at Dragos notes, that’s because migration is not as simple as merely upgrading an operating system or buying a new laptop. Industrial systems are designed for reliability and physical safety — and it takes a considerable amount of time and engineering to upgrade the equipment.

“While traditional IT environments occasionally manage unsupported technology with outdated software, the problem is exponentially more difficult in industrial control systems — as is the potential impact. These devices not only ensure reliability for things like water, power, and manufacturing—but they are also in the field for decades, not years,” says Christopher. “Securing these systems, where 24×7 operations is necessary and safety is paramount, becomes more difficult as technology reaches end of life and is no longer supported. This means if a new vulnerability is discovered, you may need to take extra precautions to protect critical systems—without vendor support, in many cases.”

If your business counts among the unlucky who are trapped using an end-of-life or end-of-support OS, what can you do in the meantime to protect your environment? 

Buy Extended Support
This is probably the least attractive option for companies that are already resource-pinched, but certainly the most secure. Enterprise customers have the option to pay Microsoft for extended support through January 2023. However, it’s far from cheap.

“Keep in mind that Microsoft usually offers extended support for EOL products beyond the stated public, ‘free,’ support,” says Roger A. Grimes, data-driven defense evangelist with KnowBe4. “But it’s usually very expensive. Like ‘don’t call unless you are ready to spend a million dollars’ expensive.”

And the price goes up for every year you pay for the support.

“These updates will need to be paid for and will increase in price each year, leading to some hefty bills for businesses that fail to migrate. In fact, when Microsoft ended support for Windows XP, the cost of extended support for an organization with 10,000-plus machines levelled out at just under $2,000,000 a year!” adds Jon O’Connor, Solutions Architect at Kollective.

Isolate It From the Network
If you can’t afford extended support, the next best option, experts say, is to isolate any outdated systems from the rest of your network. 

“If it can work as a stand-alone machine not attached to the network, do that,” says Grimes. “If it must be on [an internal network], don’t let the [public] Internet reach it. … If it must be on the internal network, lock down what other devices and ports can be used to reach it. Create a separate VLAN, use a firewall, whatever you can do to isolate it the best, do.”

“If your organization is operating with old, decommissioned and non-supported operating systems or software that can no longer be patched, you have to isolate those systems on a separate network and control all inbound and outbound traffic via firewall rules to limit the surface layer of attack,” adds George Gerchow, chief security officer with Sumo Logic.

Limit User Access
Is it possible to give access to an outdated system to just a handful of users who really need it? That’s the next course of action.

“Do a complete audit of your users and determine who still needs access to that software,” says Richard Henderson, head of global threat intel at Lastline.

“Perhaps you go so far as to have each user provide justification for continuing to have it,” he says. “Uninstall the software from those devices that no longer have a need for the software. If use of the product can be limited to a small subset of users, I would seriously consider providing those users with two computers — one that only has the out-of-date tools, which are segmented off the network from the rest of the infrastructure, and a newer machine that they will use for the rest of their day-to-day tasks.”

Watch for ‘Out of Band’ Fixes
Despite being outdated, some vendors may still issue critical fixes, says Dragos’ Christopher. So, stay on top of vendor news in case you may be in need of an unexpected, after-EOL patch.

“When Windows XP hit end of life in 2014, people thought, ‘Well, that’s it. It’s over.’ Yet Microsoft has released two critical patches since then for XP,” he says. “While patching is more difficult in industrial control systems, an update like that should make organizations reexamine their security controls for older systems.”  


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online. View Full Bio

Article source: https://www.darkreading.com/theedge/how-to-keep-security-on-life-support-after-software-end-of-life/b/d-id/1336790?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Industrial Control System Features at Risk

How some ICS product functions can be weaponized by altering their configurations.

A new analysis of industrial control systems (ICS) running in the networks of oil and gas, power generation, refining and chemicals, pulp and paper, and mining industries sheds light on how some legitimate and deeply rooted product features and functions can actually threaten their security.

In many cases, these purpose-built functions embedded in systems can be exploited by malicious attackers by merely changing easily adjusted configurations and settings; malware isn’t necessarily required to do so, according to Mark Carrigan, chief operating officer of PAS Global. Carrigan’s firm studied some 10,000 of its customers’ ICS systems to pinpoint features that he describes as “delicate by design” with easily abused configuration capabilities.

It’s a spin on the well-known “insecure by design” issue associated with these systems: Security vulnerabilities as well as the lack of security controls in the design of many OT products have been the subject of research for some time. But Carrigan points out that some product features, too, can easily be reconfigured and weaponized.

Not only do many of these systems predate today’s cyberthreats, but some of the security-challenged features were built into the systems to make engineers’ jobs easier and so they could perform their tasks quickly, according to Carrigan. They can’t easily be updated, and even removing some of these weaknesses can, in many cases, cause a ripple effect of disruption in the system and production.

While sophisticated or potentially damaging attacks on ICS networks to date have been relatively rare and difficult to pull off without knowledge of a plant’s processes as well as the ICS systems within it, security experts say nation-state attackers, via cyber espionage campaigns, are increasingly gaining knowledge to sabotage a plant’s operation via a cyberattack. Stuxnet was the first wakeup call for the industrial sector. Most recently was Triton/Trisis, which in 2017 shut down the safety instrumentation system at a petrochemical plant in Saudi Arabia. 

Understanding the processes beneath the control systems could be a deadly combination for a malicious hacker able to change the configuration of the OT assets, Carrigan says. “That’s the next landscape for attackers,” he says.

‘Tip of the Iceberg’
Carrigan won’t name the affected vendors PAS found in its analysis, but some of the exposed features span multiple brands of OT equipment. Of the 10,000 industrial systems, there were some 380,000 total known vulnerabilities with published CVEs, most of which were Microsoft Windows-related. He will present his findings next week at the S4x20 ICS security conference in Miami.

“This is just the tip of the iceberg. There are tons of these out there,” Carrigan says.

Dale Peterson, CEO of Digital Bond, a control systems research and consulting firm that runs S4, says Carrigan’s study provides another level of insecure-by-design issues. Specifically, some of them include features that are required for components to communicate. To actually fix these issues would entail simultaneously upgrading a large number of components and weeks of production outage, he explains.

“Also, many of these ‘features’ are hidden or known only to the team deploying the ICS,” Peterson notes.

Many of the security-challenged features PAS is highlighting are well-known by the vendors in question, according to Carrigan. Most of the fallout he has seen with those features so far has been with mistakes and inadvertent misconfigurations by the operators themselves, not malicious hackers.

One feature that spans all vendors’ control systems products is a parameter or setting generally known as the output characteristic. This control function parameter is a binary setting that determines, for example, the flow rate of air or gas in a valve. If the flow controller is set for 100 and the current flow is 80, it will open the valve to reach 100, for example, he says. “If I just reverse that setting — just switch it … it’s a simple thing to do — that valve opening is going to close. I’ve [then] reversed the action of the output characteristic,” he explains.
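As a toy illustration of why that one binary setting matters, here is a simplified proportional controller (not any vendor’s actual implementation; the names and gain are invented) showing how flipping a hypothetical output-characteristic flag reverses the valve’s response:

```python
def control_step(setpoint, measured, valve_position, gain=0.5, reverse=False):
    """One step of a toy proportional flow controller.

    With direct action, flow below the setpoint opens the valve further.
    Flipping the (hypothetical) output-characteristic flag reverses that.
    """
    error = setpoint - measured
    if reverse:
        error = -error
    # Clamp the valve position to 0..100%.
    return max(0.0, min(100.0, valve_position + gain * error))

# Setpoint 100, current flow 80, valve at 50% open:
print(control_step(100, 80, 50))                # direct action: opens toward 60.0
print(control_step(100, 80, 50, reverse=True))  # reversed: closes toward 40.0
```

Same inputs, opposite physical action: exactly the effect Carrigan describes.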

Another feature specific to one vendor’s control system product — which it recently patched — is a command in the human machine interface’s (HMI) graphics files that could allow an attacker to gain administrative control of the entire DCS network, Carrigan says. This feature is for engineers or designers to create a command for an operator, but if an attacker got hold of it, he or she could land admin privileges. It even bypasses any set Windows admin privileges, he says.

An additional feature at risk is an HTML weakness in most HMIs built in the past 10 years, according to PAS’s findings. These HMIs use a free format of HTML for graphics design, according to Carrigan. “If you have HTML, you can inject code and do almost anything you want — change flow control settings, no problem. Do SQL injections in the configuration database, no issue there,” he says.
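The design-side fix is the standard one for HTML injection: treat any operator- or attacker-reachable text as data, not markup. A minimal sketch using Python’s standard-library escaping (the `render_label` function and `setFlow` payload are invented for illustration, not taken from any HMI product):

```python
import html

def render_label(operator_text):
    """Embed operator-supplied text in an HMI graphic label, escaped so
    the browser-style renderer cannot interpret it as markup or script."""
    return "<div class='label'>{}</div>".format(html.escape(operator_text))

payload = "<script>setFlow(0)</script>"
print(render_label(payload))
# <div class='label'>&lt;script&gt;setFlow(0)&lt;/script&gt;</div>
```

With escaping in place the injected “code” renders as inert text instead of executing.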

PAS also found a setting in one vendor’s control system for balancing computing loads for a flow-control indicator process. By “mixing up” the calculations of the flow indicator and the flow controller on the CPU, an attacker could wreak havoc on the operation. “There’s nothing in the system to prevent you from setting those things wrong. So at a minimum I can cause production problems by changing a bunch of these settings,” or even overload the CPU, he says.

Another “delicate” feature they found: one vendor’s ICS system that uses a single, hard-coded system engineer username and password that is stored in its configuration database. The password is hashed, but Carrigan says it’s a very simple and guessable hash. “Once I’ve [the attacker] figured it out, I know for every single system, I can access that hash and access that system,” he explains. The password can’t be changed because it’s “designed into” the product.
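PAS doesn’t say which hash function the vendor used. As an illustration of why a “very simple and guessable” unsalted hash offers little protection, here is a toy dictionary attack (assuming unsalted MD5 and a hypothetical hard-coded password, purely for demonstration):

```python
import hashlib

def crack(target_hash, wordlist):
    """Hash each candidate and compare; an unsalted hash of a weak
    password falls to a simple dictionary attack like this one."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# Hypothetical stored hash for a hard-coded password, "engineer":
stored = hashlib.md5(b"engineer").hexdigest()
print(crack(stored, ["admin", "password", "engineer", "123456"]))  # engineer
```

Since the password can’t be changed, recovering it once means access to every deployed system.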

Damon Small, technical director of the NCC Group, says the threat model has changed significantly for the ICS sector, and attacks are becoming more common. But the real-world impact of making security changes like patching or other moves that take operations offline even for a short period of time often deters any major security changes by operators. Small says that once, when he recommended that an operator patch monthly for Windows flaws, the operator told him that taking those systems offline for just four hours would cost his organization $350,000 in outage time.

“Operators know very well the inefficiencies introduced when you start messing with control systems,” Small explains.

What to Do
How do OT operators protect their plants from their own systems, especially if patching may be off the table in some cases?

Adopting configuration management, especially for the most critical systems and assets, is one way to thwart an attack via “delicate” features, according to Carrigan. That way only the changes you allow can take place, for instance, he says.
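At its simplest, configuration management of this kind is a baseline-versus-current diff. A toy sketch (the tag names and settings are invented; real products work against the vendor’s configuration database):

```python
def detect_drift(baseline, current):
    """Report any setting that differs from the approved baseline."""
    return {key: (baseline[key], current.get(key))
            for key in baseline if current.get(key) != baseline[key]}

baseline = {"FIC-101.output_characteristic": "direct",
            "FIC-101.setpoint_high_limit": 100}
current  = {"FIC-101.output_characteristic": "reverse",  # unauthorized change
            "FIC-101.setpoint_high_limit": 100}
print(detect_drift(baseline, current))
# {'FIC-101.output_characteristic': ('direct', 'reverse')}
```

Run on a schedule, a check like this catches both malicious reconfiguration and the operator mistakes Carrigan says account for most of the fallout so far.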

Passive network monitoring can also catch anomalous traffic and behavior, notes NCC Group’s Small, who is also a founding member of the Operational Technology Cyber Security Alliance. “If you see something other than what you were expecting, that could be an indicator of something having gone wrong. But this assumes you understand what your normal traffic looks like — what your baseline is,” he notes, which requires analysis.
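The baseline idea can be sketched in a few lines. This is a toy model in which a flow is an allowed (source, destination, port) tuple; real monitoring builds the baseline from captured traffic, not hand-written sets:

```python
def flag_anomalies(baseline_flows, observed_flows):
    """Flag (src, dst, port) tuples never seen during baselining."""
    return [flow for flow in observed_flows if flow not in baseline_flows]

baseline = {("hmi-01", "plc-01", 502),       # Modbus/TCP, expected
            ("historian", "plc-01", 502)}
observed = [("hmi-01", "plc-01", 502),
            ("hmi-01", "plc-01", 22)]        # unexpected SSH session to a PLC
print(flag_anomalies(baseline, observed))    # [('hmi-01', 'plc-01', 22)]
```

The hard part, as Small notes, is the analysis needed to build a trustworthy baseline in the first place.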

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “6 Unique InfoSec Metrics CISOs Should Track in 2020.”

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/risk/industrial-control-system-features-at-risk/d/d-id/1336796?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Processor Vulnerabilities Put Virtual Workloads at Risk

Meltdown, Spectre exploits will likely lead to customers making tradeoffs between performance and security of applications, especially virtual and cloud-based apps.

Back in January 2018, a consortium of security researchers from organizations including Google, Cyberus Technology and several universities disclosed two ominously named vulnerabilities found in nearly all modern computer processors. These vulnerabilities broke open the floodgates for research into flaws in some of the most fundamental security protections found in computer processors. Meltdown, Spectre, and the other related vulnerabilities are significantly more dangerous and useful to an attacker in a virtual environment versus a non-virtual server or desktop. In response, I expect to see Intel and AMD eventually create separate processor lines to protect cloud applications from this threat.

The Processor Speed Race
Modern processors handle dozens if not hundreds of applications simultaneously. Billions of transistors packed into multiple cores allow them to seamlessly and automatically switch between execution threads as needed. They typically enforce a set of rules on this dance of applications, including one very big one: The processor should prevent applications from accessing data from other running applications. Meltdown and Spectre allow malicious applications to break this rule.

Processing power continues to increase each year, but no longer at the same rates that we used to see when Moore’s Law still held true. Processor manufacturers have to use clever “cheats” to squeeze more performance from their devices as they run into limits of transistor technology. One of these cheats is an optimization technique called speculative execution.

Speculative Execution: Faster but Flawed
In a nutshell, application execution paths often contain many forks, or branches, where they may go down one of multiple code paths depending on the result of a calculation. The processor doesn’t know what branch the application will follow until it completes the calculation, but it can save time by guessing the outcome and continuing execution down that path while it waits for the calculation result. If it guessed correctly, it already has a head start and saves a few microseconds. If it guessed incorrectly, it simply discards the work it started and continues down the correct path.
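The guess-and-discard logic can be sketched as a toy model. Real speculation happens in hardware, invisibly to software; this only illustrates the control flow described above, with invented function names:

```python
def run_branch(condition, predictor_guess, work_if_taken, work_if_not):
    """Toy model of speculative execution: start work down the predicted
    path; if the prediction was wrong, discard it and redo the other path."""
    speculative = work_if_taken() if predictor_guess else work_if_not()
    if predictor_guess == condition:
        return speculative, "speculation kept (time saved)"
    # Misprediction: the speculative result is discarded and the correct
    # path executes from scratch.
    correct = work_if_taken() if condition else work_if_not()
    return correct, "speculation discarded (work wasted)"

result, note = run_branch(condition=True, predictor_guess=True,
                          work_if_taken=lambda: "path A",
                          work_if_not=lambda: "path B")
print(result, "-", note)  # path A - speculation kept (time saved)
```

The catch, as the next section explains, is that the discarded work still leaves measurable traces in the processor’s cache.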

Meltdown and Spectre both abuse speculative execution, though in slightly different ways. While the technical explanation could take a full article in itself, the short story is that they use speculative execution to load restricted memory into the processor’s memory cache and then use a few tricks to accurately identify the contents of that memory even after the process recognizes they shouldn’t be able to read it directly. The restricted memory could include anything from an administrative password to sensitive cryptographic keys on a Web server.
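Those “few tricks” are cache side channels. Here is a heavily simplified toy model of a flush+reload-style channel: real attacks measure memory access latency in nanoseconds to tell cached from uncached lines, whereas this sketch simulates the cache as a set:

```python
# Toy model of a flush+reload-style cache side channel.
CACHE_LINES = 256          # one probe line per possible byte value
cache = set()

def speculative_victim(secret_byte):
    # During mis-speculation, a cache line indexed by the secret byte is
    # pulled into the cache before the processor rolls the access back.
    cache.add(secret_byte)

def attacker_recover():
    # Probe every line; the one that is "fast" (cached) reveals the byte.
    for line in range(CACHE_LINES):
        if line in cache:  # stands in for a timing measurement
            return line
    return None

speculative_victim(secret_byte=0x42)
print(hex(attacker_recover()))  # 0x42
```

The rollback restores the architectural state, but not the cache state, and that is the leak.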

Spectre and Meltdown in the Cloud
While expanding the potential impact of malware on a desktop or non-virtualized server is never good, Meltdown and Spectre become much more dangerous in the cloud and virtual environments. An attacker with code execution on a physical desktop or server usually has much easier ways to elevate their privileges and access sensitive data from other applications. Using Meltdown or Spectre would be excessive.

But in a virtual environment, a single piece of hardware (for example, a physical server hosting EC2 instances in an AWS data center) can house multiple different tenants, each of which expects their applications and services to be completely isolated from the other tenants with which they share the resources. Usually, the hypervisor (the management software that handles virtualizing a single piece of hardware into multiple virtual servers) has strict security controls to enforce tenant isolation.

But Spectre and Meltdown completely bypass these software protections by targeting the hardware itself. An attacker with access to one application on a cloud server could steal data from all the other applications using a shared resource on the same physical hardware, no matter how good the security of those other applications is!

Since Meltdown and Spectre’s disclosure, researchers have found several variants and other vulnerabilities that abuse speculative execution to access restricted memory. Intel and AMD, the two largest processor manufacturers, have been playing a cat-and-mouse game of patching these flaws, usually at the cost of processor performance. The performance loss has been up to 30% in extreme cases. This has led many desktop users, who are less impacted by Spectre, Meltdown, and the like, to disable the security options to retain more processing power. 

How to Solve the Problem
Mitigating this type of vulnerability in a cloud environment where security is paramount ranges from difficult to impossible. Patching these vulnerabilities requires difficult microcode updates to the processor itself. Because of these challenges, we’re likely heading towards a future where Intel and AMD manufacture different classes of processors that focus on either security or speed.

Cyber security is all about risk trade-offs. Desktop computers and non-virtualized servers have less to lose from an attacker successfully exploiting a Meltdown-like vulnerability than virtual environments, where an exploit could be a disaster. Since their risk is substantially lower, they could benefit from remaining vulnerable in return for significantly better processor performance. Processors used in virtual environments would likely swing the other way: prioritize security over speed by removing speculative execution entirely (or possibly something slightly less drastic). This could lead to different processor lines, one focused on security with slightly degraded performance and another focused on pure execution speed that risks falling victim to speculative execution attacks.

Researchers have already opened Pandora’s box for processor security vulnerabilities, and the days are clearly numbered for speculative execution in its current form. Since the original Meltdown and Spectre disclosures, researchers have discovered additional serious flaws nearly every other month. At this rate, something will have to change to keep cloud applications safe. Whether that will be a fundamental re-architecture of all processors or a split into separate security-focused and performance-focused lines remains to be seen.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “In App Development, Does No-Code Mean No Security?”

Marc Laliberte is a senior security analyst at WatchGuard Technologies. Specializing in networking security protocols and Internet of Things technologies, Marc’s day-to-day responsibilities include researching and reporting on the latest information security threats and … View Full Bio

Article source: https://www.darkreading.com/cloud/processor-vulnerabilites-put-virtual-workloads-at-risk/a/d-id/1336735?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Dustman Attack Underscores Iran’s Cyber Capabilities

For nearly six months, an attack group linked to Iran reportedly had access to the network of Bahrain’s national oil company, Bapco, before it executed a destructive payload.

On December 29, a group of attackers used a data-deleting program known as a “wiper” to attempt to destroy data on systems at Bahrain’s national oil company, overwriting data with a string of characters including the phrases “Down With Bin Salman” and “Down With Saudi Kingdom,” according to multiple analyses.

While the destructive malware, dubbed “Dustman” by the Saudi National Cyber Security Centre (NCSC), differs from previous wiper attacks, many of its techniques link the code to Shamoon and ZeroCleare, two data-destroying programs used by Iranian-linked groups to target firms in the Middle East. In addition, while the group behind Dustman had access to the victim’s network since July 2019, they only executed the wiper code on December 29, the same day that the United States retaliated for the death of an American contractor by bombing Iranian-linked targets in Syria and Iraq.

The attack deleted the data on most of the victim’s computers, according to the NCSC analysis.

“Just because it is anti-Saudi does not make it necessarily Iranian,” says Dmitriy Ayrapetov, vice president of platform architecture at SonicWall. “But because it is so related in techniques and modules that it uses [when compared] to the previous two attacks that have been attributed to Iran, we can — with fairly clear confidence — say this is a continuation of the campaigns of Iranian hacking groups.”

The attack demonstrates both the technical capabilities of the group behind Dustman and the level of access that it has to networks in the Middle East.

The attackers gained access by using a vulnerability in the company’s virtual private networking software, used the antivirus management server to distribute the malware, manually deleted data on the company’s storage servers, and then deleted the VPN access logs to hide their tracks. However, the attack missed some machines on the network because they had been in sleep mode.

“Based on analyzed evidence and artifacts found on machines in a victim’s network that were not wiped by the malware, NCSC assess that the threat actor behind the attack had some kind of urgency on executing the files on the date of the attack due to multiple OPSEC [operational security] failures observed on the infected network,” NCSC stated in its analysis.

Iranian-linked groups — the two major actors known as APT33 and APT34 — have been active for some time in the Middle East and against US targets. A 2-year-old vulnerability in Microsoft Outlook, for example, has been used to attack companies because of the complexity of patching the issue correctly.

The NCSC report did not name the target, but both press reports and security firms’ analyses indicated that the victim was the Kingdom of Bahrain’s national oil company.

While Iranian espionage and hacking groups may be best known for their destructive attacks, the groups are also quite adept at stealing data and other intelligence operations, says Adam Meyers, vice president of intelligence at CrowdStrike.

“Dustman is one of the destructive [and] disruptive tools that we associate with Iranian government-affiliated threat actors, though we have not associated it directly to any of the groups CrowdStrike tracks at this time with any degree of confidence,” Meyers says, adding “Iran has deployed destructive wipers several times over the years. They are more commonly engaged in intelligence collection intrusions, but they have been known to use wipers.”

The NCSC report stated that the initial infiltration occurred in July 2019 using a vulnerability in a virtual private network (VPN) application. A critical vulnerability in Pulse Secure’s VPN software has been used in several attacks — most recently, it was purportedly used in the breach of travel-service provider Travelex — but none of the analyses linked that specific vulnerability to the Dustman incident.

The attack also used legitimate, signed drivers with known vulnerabilities to bypass some Windows security features, says SonicWall’s Ayrapetov. The attackers first load a signed driver for the virtual machine software VirtualBox, then exploit a vulnerability in it to load a different, untrusted driver that overwrites data, SonicWall stated in its analysis.

“They load an old signed driver that is vulnerable, and then they exploit that vulnerability and load the modules from a legitimate piece of software to do the wiping attack,” he says. “They are hijacking legitimate functionality to bypass some of the Windows security controls.”

The use of the antivirus management console should also be noted by security teams, Yaron Kassner, chief technology officer of cybersecurity firm Silverfort, said in a statement.

“Highly privileged service accounts are a top target for hackers because once compromised, they can be exploited to reach sensitive systems and gain control over them,” he said. “These accounts can pose significant risk to corporate networks. Therefore it is important to monitor and restrict access of such service accounts.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/advanced-threats/dustman-attack-underscores-irans-cyber-capabilities/d/d-id/1336797?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Consumer Reports Calls for IoT Manufacturers to Raise Security Standards

A letter to 25 companies says Consumer Reports will change ratings to reflect stronger security and privacy standards.

Consumer Reports has issued a letter to 25 connected camera manufacturers, urging them to adopt stronger security and privacy measures for cameras, doorbells, and security systems.

The letter is directed to companies including ADT/LifeShield, Guardzilla, Honeywell Home, Google/Nest, Ring, SimpliSafe, TP-Link, and Samsung SmartThings. In it, Consumer Reports’ Policy Counsel Katie McInnis requests clarifications on the steps manufacturers are taking to prevent hacks and unauthorized access to devices and systems following a series of recent incidents in which connected cameras were used to harass people in their homes.

“Connected devices such as cameras are increasingly being used in the private sphere of the home and collect highly sensitive information including voice and visual recordings of the home and the area immediately around a private residence,” she writes. As consumers learn of attacks on home systems, she adds, they have grown more concerned with privacy than cost.

Consumer Reports’ product ratings will continue to change to reflect the security and privacy standards it believes are necessary to protect users. Among the stronger measures the letter urges companies to adopt: automatic software/firmware updates enabled by default; protection against credential stuffing and credential reuse; mandatory multifactor authentication and more secure passwords; and a visible indicator showing when cameras are active.

Device makers are asked to report back by January 27 on which security practices they have already implemented, which additional measures they plan to adopt, and by what date.

Read the full letter here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/consumer-reports-calls-for-iot-manufacturers-to-raise-security-standards/d/d-id/1336798?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft now reviewing Skype audio in ‘secure’ places (not China)

Following reports about text transcriptions of live Skype calls being vetted by humans, meaning that sensitive conversations could have been bugged, Microsoft says it’s moved its human grading of Cortana and Skype recordings into “secure facilities”, none of which are in China.

On Friday, The Guardian published a report after talking to a former Microsoft contractor who lived in Beijing and transcribed thousands of audio recordings from Skype and the company’s Cortana voice assistant – all with little cybersecurity protection, either from hackers or from potential interception by the government.

The former contractor said that he spent two years reviewing potentially sensitive recordings for Microsoft, with “no security measures”, often working from home on his personal laptop. He told the Guardian that Microsoft workers accessed the clips through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet.

They received no help to protect the recordings from eavesdroppers, be they Chinese government, disgruntled workers, or non-state hackers, and were even told to work off new Microsoft accounts that all shared the same password – for “ease of management.”

The Guardian quoted the former contractor:

There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details.

Being British, he was put to work listening to people whose Microsoft devices were set to British English. After a while, he was allowed to work from home in Beijing, where he used a simple username and password to access the clips – a set of login credentials that he said were emailed to new contractors in plaintext. The password was the same for every employee who joined in any given year, he said.

Humans reviewing audio has been par for the course at all the tech companies that are refining their voice assistants’ voice recognition technologies: since April, the “our contractors are listening to your audio clips” club has grown to include Facebook (with some Messenger voice chats), Google, Apple, Microsoft (via Xbox as well as Skype and Cortana) and Amazon.

But Microsoft has taken it a step beyond mere voice assistant clips. In August 2019, Motherboard reported that Microsoft was also using human graders to listen in on calls made using its Skype phone service. As Motherboard reported, Microsoft had been vetting an experimental feature enabling live text translation of Skype calls, meaning that users could have engaged in sensitive conversations without realizing that they were bugged.

Unlike its recalcitrant tech brethren, Microsoft didn’t apologize for any of it, pointing out that its terms of service said that it might “analyze audio” of calls. It did, however, update its privacy policy to be more explicit about playing those calls to human contractors.

As with previous “humans are listening” stories, the story of Microsoft’s Skype/Cortana voice analysis involves workers overhearing deeply personal conversations, including what the former Beijing contractor said sounded potentially like domestic violence. The privacy implications aren’t what’s new, however – rather, what’s new is the lack of cybersecurity safeguards, in a country where the government intercepts just about everything that happens online. Here’s the former Microsoft contractor:

Living in China, working in China, you’re already compromised with nearly everything. I never really thought about it.

They just give me a login over email [in order to access] Cortana recordings. I could then hypothetically share this login with anyone. […] It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.

Microsoft issued a statement saying that, since Motherboard’s reporting over the summer, it’s ended its grading programs for Skype and Cortana for Xbox and moved the rest of its human grading into “secure facilities”. None of them are in China.

We review short snippets of de-identified voice data from a small percentage of customers to help improve voice-enabled features, and we sometimes engage partner companies in this work. Review snippets are typically fewer than ten seconds long and no one reviewing these snippets would have access to longer conversations. We’ve always disclosed this to customers and operate to the highest privacy standards set out in laws like Europe’s GDPR.

This past summer we carefully reviewed both the process we use and the communications with customers. As a result we updated our privacy statement to be even more clear about this work, and since then we’ve moved these reviews to secure facilities in a small number of countries. We will continue to take steps to give customers greater transparency and control over how we manage their data.

Microsoft didn’t detail what those “steps” may entail.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MmxOcairoTY/

Lottery hacker gets 9 months for his £5 cut of the loot

Back in November 2016, 26,500 accounts for the UK’s National Lottery got credential-stuffed like they were a bunch of Thanksgiving turkeys.

And last week, 29-year-old Anwar Batson from London, who supplied his criminal buddies with the brute-force, automated password-guessing, Dark Web-delivered tool behind the credential-stuffing attack – a hacking tool called Sentry MBA – was sentenced to up to nine months in jail.

All this, for what? The shrinky-dinky sum of £5 (USD $6.50), that’s what. As The Register reports, that was his agreed-upon cut of whatever ill-gotten goods the thieves managed to pry out of accounts.

On Friday, Crown Prosecutor Suki Dhadda told the court that Batson had downloaded Sentry MBA and joined a chat group discussing the software and swapping the configuration files necessary to use it. Batson, the father of one, “counseled others on how to hack” and “enabled them to successfully use Sentry MBA to hack others’ accounts,” Dhadda said.

At least as far back as May 2016, Sentry MBA was considered the most popular tool for these kinds of attacks, which involve taking sets of breached credentials, combining them with configuration files specific to a targeted site or service, and using a hacking tool like Sentry MBA to automatically plug in the credentials and see which ones will get a crook into a live account.

If account holders have reused passcodes across sites/services, there’s much more of a chance that their credentials will get a crook into a targeted site/service. Which is why it is really, truly a bad idea to use the same password on different sites!
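The mechanics are easy to sketch: given a dump of credentials leaked from one site, an attacker simply replays them against another site’s login and keeps whatever works. A minimal illustration (every account and password here is invented, and the dictionary lookup is a stand-in for a real service’s login check):

```python
# Toy credential-stuffing illustration: a breach at one site compromises
# every account that reused the same password elsewhere.
breach_dump = [                     # leaked from "site A"
    ("alice@example.com", "hunter2"),
    ("bob@example.com",   "correct horse"),
]
site_b_accounts = {                 # credentials at an unrelated "site B"
    "alice@example.com": "hunter2",        # reused -> stuffable
    "bob@example.com":   "Tr0ub4dor&3",    # unique -> safe
}

def stuff(dump, accounts):
    """Replay leaked credentials against another service's login check."""
    return [user for user, pw in dump if accounts.get(user) == pw]

print(stuff(breach_dump, site_b_accounts))  # -> ['alice@example.com']
```

Tools like Sentry MBA industrialise exactly this loop, with the per-site configuration files handling login forms, CAPTCHAs and rate limits; unique passwords per site reduce a breach elsewhere to a non-event.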

Credential dumps are easy to find and buy – they’re as common as dirt on the Dark Web. The configuration files, though, are another thing.

As JUMPSEC managing director Sam Temple has told Infosecurity, the true value is in the config files, which tell the hacking tool how to attack a specific website: for example, config files tell the tool how a website handles login requests, which CAPTCHA is running, and how many requests per proxy should be attempted before the site or service detects a brute-force attempt and locks accounts.

Batson, using the chat handle “Rosegold,” discussed “config-file” this and “how do we use Sentry MBA to hack the National Lottery website” that with others online, including Idris Akinwunmi and Daniel Thompson: two hackers who were jailed in July 2018 for the cyberattack.

During the 2016 attack, Akinwunmi – an Aston University student – transferred just £13 into his own account. That was the entire contents of the account of Dr. Ian Bentley, a National Lottery player, and everything the crooks managed to steal.

Police traced one of the IP addresses used in the attack back to Akinwunmi. When police interrogated him, he said that he’d learned how to use Sentry MBA from Rosegold. Chat logs also showed up on his computers when police examined them.

The crooks had agreed that in exchange for sending them Sentry MBA, Batson would get a cut of the loot. Batson’s attorney, Daniel Kersh, had this to say in defense of his client:

They made an arrangement. Mr Batson would send [Akinwunmi] the Sentry MBA and that whatever Mr Akinwunmi did with it, he would get a cut. That in essence was his involvement.

When he was arrested on 10 May 2017, Batson denied having anything to do with the National Lottery hacks. His claim: It was somebody else! He got hacked! He was “the victim of online trolling”! His devices “had been trolled or hacked and other people had access to his laptop”!

His devices, however, sang a different tune. On them, investigators found a copy of the same chat that they’d discovered on Akinwunmi’s machines, as well as evidence that Batson had accessed Dr. Bentley’s account using Sentry MBA.

Nine months for a lousy £5?

Who cares how little the crooks made? Not Camelot, the outfit that runs the lottery. In a statement read to the court, CISO David Boda said the company spent £230,000 responding to the attack. The fallout included 250 customers closing their accounts as a result of the bad publicity that followed. Another cost: £40,000 for a staff training event that had to be postponed as the company worked to fend off the hacks.

In passing sentence, Judge Jeffrey Pegden QC said that Batson’s jail time didn’t hinge on how much he made off with. Besides, he’s been forced to pay back that £5 to his victim, Dr. Bentley, on top of £250 for court costs.

No, the jail time has more to do with the fact that you went after an organization that does charity work, the judge said:

In my view, the gravity of your offending does not lie in the loss occasioned by the hacking and by the fraud. That indeed was low. But it does lie in the fact that you targeted a large charitable organization, namely the National Lottery, which gives something like £30m per week to chosen charities.

Batson pleaded guilty to four counts under the Computer Misuse Act 1990, as well as one count of fraud.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MQxKTz5FL8E/