STE WILLIAMS

Google to Mark All HTTP Websites ‘Not Secure’

Google will push websites to adopt HTTPS encryption by marking all HTTP sites as ‘not secure’ starting in July 2018.

Google plans to mark all HTTP websites “not secure” with the release of Chrome 68. Starting in July 2018, users will see “not secure” in the omnibox of their Chrome browser when they visit sites not protected with HTTPS encryption.

Within the past year, Google has gradually flagged more HTTP websites as “not secure” in an effort to help users differentiate secure sites. It reports over 68% of Chrome traffic on Android and Windows is now protected, and over 78% of Chrome traffic on Chrome OS and Mac is protected. Further, 81 of the top 100 websites use HTTPS by default.

Google offers developers mixed content audits to help move their sites to HTTPS in the latest Node CLI version of Lighthouse, a tool for improving Web pages. It helps developers determine which resources a site loads using HTTP, and which of those can be migrated to HTTPS.
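In outline, a mixed-content audit simply walks a page's subresource URLs and flags any that are fetched over plain HTTP. Below is a minimal Python sketch of that idea (an illustration only, not Lighthouse itself; the HTML snippet and hostnames are invented):

```python
# Minimal sketch of a mixed-content check: parse an HTML page and flag
# subresources (scripts, images, stylesheets) pulled in over plain HTTP.
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []  # (tag, url) pairs fetched over plain HTTP

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        # src pulls in a subresource on any tag; href only does so on <link>
        url = d.get("src") or (d.get("href") if tag == "link" else None)
        if url and url.startswith("http://"):
            self.insecure.append((tag, url))

def find_mixed_content(html):
    scanner = MixedContentScanner()
    scanner.feed(html)
    return scanner.insecure

page = ('<img src="http://cdn.example.com/a.png">'
        '<script src="https://ok.example.com/b.js"></script>')
print(find_mixed_content(page))  # [('img', 'http://cdn.example.com/a.png')]
```

Each flagged URL is a candidate for migration to HTTPS; anything left on plain HTTP will trigger mixed-content warnings once the parent page is served securely.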

Read more details here.

 

Black Hat Asia returns to Singapore with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, top-tier solutions and service providers in the Business Hall. Click for information on the conference and to register.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/google-to-mark-all-http-websites-not-secure/d/d-id/1331037?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft Adds Windows Defender ATP Support to Windows 7, 8.1

Microsoft brings Windows Defender ATP down-level support to older versions of Windows for businesses transitioning to Windows 10.

Microsoft is adding Windows Defender Advanced Threat Protection (ATP) down-level support for Windows 7 and Windows 8.1. This will provide additional security to businesses gradually updating their devices to Windows 10, ahead of the end-of-life for Windows 7 in January 2020.

Windows Defender ATP is an endpoint security platform built to prevent breaches in Windows 10. Until now, it has been exclusive to Microsoft’s latest OS. The company is adding support to older versions of Windows because businesses in the process of upgrading likely have a mix of devices running Windows 10, Windows 7, and Windows 8.1 in their environments.

Starting this summer, businesses still transitioning to Windows 10 can add Windows Defender ATP Endpoint Detection Response (EDR) functionality to Windows 7 and Windows 8.1. The support will give them a broader view of security across endpoints running older systems.

Detections and events will appear in the Windows Defender Security Center, the cloud-based console for Defender ATP, so admins can view and respond to malware detections. Microsoft reports the tool can run alongside third-party antivirus tools as well as its own Windows Defender Antivirus, known as System Center Endpoint Protection (SCEP) on down-level systems.

Read more details here.


Article source: https://www.darkreading.com/endpoint/microsoft-adds-windows-defender-atp-support-to-windows-7-81/d/d-id/1331039?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Better Security Analytics? Clean Up the Data First!

Even the best analytics algorithms using incomplete and unclean data won’t yield useful results.

Our industry is losing the cybersecurity war. Not a week goes by in which we don’t hear about a new data breach. Overwhelmed security operations center (SOC) personnel, who already were in short supply, are leaving the profession because of sheer exhaustion. The rapid rate of change brought on by DevOps and cloud computing has completely overwhelmed our traditional, rules-based perimeter defense. Sophisticated hacking syndicates and nation-states are coming at us with machines, and we’re responding with humans.

The industry’s current response to this has been to offer practitioners a dizzying array of shiny, new artificial intelligence (AI)-enabled analytics regimes, each of which claims to have better algorithms than everybody else. Nowhere has this been more pronounced than in standalone user and behavior analytics regimes, but it’s undeniable that there has been a rush to add fancy new analytical features to existing, siloed security information and event management tools, intrusion-detection systems, threat feeds, network monitoring, cloud access security brokers, common vulnerabilities and exposures lists, configuration management databases, log tools, and more.

Here’s the problem with that approach: even the best analytics algorithms operating against incomplete and unclean data aren’t going to yield useful results.

Economists, behavioral scientists, mathematicians, and ethicists often refer to the concept of “imperfect information,” in which parties involved in the decision-making process (be it a market, game-theory scenario, or ethical question) do not have equal access to all the information required to make a decision. The concept is important because it is both theoretically and empirically demonstrable that imperfect information leads to bad outcomes; for example: markets don’t function as well, game theory doesn’t accurately predict what will happen, or one party in a transaction takes advantage of another. The drive toward transparency in many areas of business and life is a direct reflection of the fact that imperfect information is undesirable. Even though truly perfect information may be unachievable, most transactional and behavioral scenarios certainly benefit from the availability of less-imperfect information (or in other words, closer-to-perfect information).

The environment already presents a huge amount of data to the SOC. We have security events, user activity, intrusion detection, threat intelligence, network activity, cloud access, known exploits and vulnerabilities, configuration and IT activity metrics, security and operational logs, identity, and many other sources of data. Each of these sources tends to both emanate from and land in separate data silos. Traditionally, we expected our human SOC operators to be able to work across all of these silos, process all of this data, and turn it into actionable information. That didn’t work. SOCs were overwhelmed, exhausted people naturally missed things, we didn’t have all the information we needed, and we landed pretty much where we are today.

Now we are expecting that bolted-on, AI-enabled regimes will solve all of our problems. It’s true that machines don’t get tired and can analyze more data at scale than humans. That’s good. But machines can only analyze the data with which they are presented. That means if we apply AI to, say, our user activity data silo, but that data is separated from our configuration information silo, our topology-mapping silo, or our network monitoring silo (you get the idea…), we’re back to the imperfect information problem. Fancy analytics against imperfect information still yields decisions you can’t entirely trust. If you can’t trust the decisions, how can you automate the remediation based on them?

Time for a Fresh Approach
The hard truth is that we need to rethink our data tier. A data tier that perpetuates unconnected silos of data and expects an AI-enabled analytic regime to somehow normalize across them will yield the same “analysis paralysis” that faces human operators: too much uncertainty and too many gray areas to draw a definitive conclusion (and therefore to take action). The common phrase for this is “garbage in, garbage out.” The reason that truth is hard is that most SOCs already have substantial investment in those siloed data tiers, and there is natural inertia against replacing them.

A better data tier will allow the ingest and normalization of the full operational and security data set as a single data lake that can then be optimized for AI-enabled analysis. I say “operational and security data set” because they are closely related. For example, user activity drives optimization for performance (operations) and hardening (security). Configuration information is critical to resolving performance issues (operations) and vulnerabilities (security). Derived topology and dependency mapping is equally useful for troubleshooting performance problems as it is for data-loss prevention and attack detection.
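As a toy illustration of that normalization step, here is a hedged Python sketch that maps records from two invented silos (an IDS alert feed and an identity log; every field name below is made up) onto one shared event schema, so an analytics layer sees a single, consistent data set:

```python
# Hedged sketch: normalise records from separate silos into one common
# event schema. Field names are invented for illustration.
def normalize(source, record):
    """Map a silo-specific record onto a shared schema."""
    if source == "ids":            # intrusion-detection alert
        return {
            "ts": record["alert_time"],
            "host": record["dst_ip"],
            "user": None,
            "kind": "ids_alert",
            "detail": record["signature"],
        }
    if source == "auth":           # identity / login event
        return {
            "ts": record["when"],
            "host": record["workstation"],
            "user": record["username"],
            "kind": "auth",
            "detail": record["result"],
        }
    raise ValueError(f"unknown silo: {source}")

events = [
    normalize("ids", {"alert_time": "2018-02-12T10:00:00Z", "dst_ip": "10.0.0.5",
                      "signature": "suspected beacon"}),
    normalize("auth", {"when": "2018-02-12T10:00:04Z", "workstation": "10.0.0.5",
                       "username": "alice", "result": "failure"}),
]
# With one schema, a cross-silo question becomes a trivial query:
print([e["kind"] for e in events if e["host"] == "10.0.0.5"])  # ['ids_alert', 'auth']
```

The point is not the mapping itself but that, once everything lands in one schema, a correlation that previously required a human to pivot between tools becomes a single query over the lake.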

Better data tiers exist, but they aren’t bolt-ons to existing silos; they are replacements for them. While that may be hard to swallow, we need to adapt to the new reality, and a bolt-on approach won’t get us there. Armed with better and cleaner data, an AI-based analytics regime is more able to derive better conclusions, and those conclusions can be used to directly interface with automated remediation, yielding a highly automated cyber-defense regime that is more appropriate for today’s threat environment.

Think radically. Your attackers are, I assure you.

 

Dan Koloski is a software industry expert with broad experience as both a technologist working on the IT side and as a management executive on the vendor side. Dan is a Vice President in Oracle’s Systems Management and Security products group, which produces the Oracle … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/better-security-analytics-clean-up-the-data-first!/a/d-id/1331012?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook is not testing a dislike button, except for the one it’s testing

You know all those flimsy rumors about Facebook creating a dislike button?

Just when you’re getting ready to crack a joke about really needing a bang-your-head-on-the-desk button to be deployed in cases of Facebook hoaxes (like this recent one), one spotting of a dislike button turns out to be real.

Kind of. Sort of. Well, not really, said Facebook, responding to questions about fleeting glimpses of something spotted in the hinterlands of the US (as in, on about 5% of Androids, according to Business Insider) that’s somewhat like a dislike button.

A Facebook spokesperson told Business Insider that it’s a limited test, and it’s not a dislike button:

We are not testing a dislike button. We are exploring a feature for people to give us feedback about comments on public page posts. This is running for a small set of people in the US only.

Yup, it is most certainly not an upside-down thumb, as Facebook said. It’s a “downvote” button.

It was first spotted by Taylor Lorenz of The Daily Beast. She reports that the downvote option appeared for several users on Thursday in the comment section of posts within Facebook groups and on old Facebook memories content.

This shouldn’t be much of a shocker. Facebook founder Mark Zuckerberg said back in 2015 that Facebook was working on offering more options than “Like.” And indeed, since 2015, we’ve had weepies, love hearts, laughing, Wow! and steamy red angry faces: all emoji reactions that are less of an emotional hammer than thumbs-down.

Of course, that’s what Facebook wants: it wants to be our happy place. With the experimental downvote option – which appeared underneath comments on posts, next to the “like” and “reply” buttons, on select Android users’ accounts – it’s enlisting users to help Facebook keep it sunny.

A Facebook spokesperson told Business Insider that downvoting a comment won’t affect how it’s displayed.

According to reports on Twitter, when users downvote a comment, they’ll be prompted to select from a list of options explaining the decision. Was the comment offensive? Misleading?

Well, that’s one way to fight the scourge of fake news, I guess. Unless, of course, people use it to gang-downvote a comment just to be cyberbullies and/or to tinker in elections.

That remains to be seen; for now, Facebook says it’s not planning to expand the test.

After the debacle with its fake news flag – it admitted last month that it was making things worse, like waving a red flag in a bull’s face – let’s hope it hits on a solution to fixing its woes soon.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rVSDE3IVJ7w/

You have five months to switch your website to HTTPS

As far as Google is concerned, unencrypted HTTP web connections should be nearing the end of the road.

In 2014 at the I/O conference, it declared “HTTPS everywhere” as a security priority for all web traffic, followed in 2015 by the decision to downrank plain HTTP URLs in search results in favour of ones using HTTPS (where the latter was available).

A year ago, it started labelling sites offering logins or collecting credit cards without HTTPS as ‘not secure’.

In a symbolic moment, it has now confirmed that with the release of Chrome 68 in July, this label will be applied to all websites not using HTTPS.

It’s a small change that streamlines the slightly confusing way Chrome denotes the presence or absence of HTTPS in address bars. From July, the ambiguous grey ‘i’ icon used to tag many non-HTTPS sites today will disappear, replaced by a simpler ‘not secure’ label.

Other browsers (Firefox, Edge, Opera) rely on green or grey padlock symbols to denote HTTPS sites, dropping back to more than one type of grey icon for non-secure HTTP.

But Google’s Chrome is the only one to use words, and not simply symbols and colours, to denote the use of HTTPS. Google explains:

Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default.

Getting there?

A look at Google’s figures suggests this strategy of coaxing website owners and users to see HTTPS as important is working, with 68% of Chrome traffic on Android and Windows connecting to HTTPS sites. Eighty-one of the top 100 web destinations use it by default.

Some surprisingly big sites, such as the BBC, apply it inconsistently, using HTTPS for the homepage but dropping back to HTTP for individual content pages (compared with, say, the New York Times, which uses HTTPS for everything).
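Spotting that kind of inconsistency is straightforward once you have a list of a site's URLs. A hedged Python sketch (the hostnames and paths below are invented):

```python
# Hedged sketch: given the URLs a crawl of one site turned up, report
# whether HTTPS is applied consistently across them.
from urllib.parse import urlsplit

def https_coverage(urls):
    """Return (fraction served over HTTPS, list of plain-HTTP URLs)."""
    insecure = [u for u in urls if urlsplit(u).scheme == "http"]
    secure = len(urls) - len(insecure)
    return secure / len(urls), insecure

urls = [
    "https://news.example.org/",            # homepage upgraded...
    "http://news.example.org/story/1234",   # ...but articles still plain HTTP
    "http://news.example.org/story/5678",
]
share, holdouts = https_coverage(urls)
print(f"{share:.0%} HTTPS; {len(holdouts)} pages to migrate")  # 33% HTTPS; 2 pages to migrate
```

From Chrome 68 onward, every URL in the holdout list would carry the ‘not secure’ label, which is exactly the pressure Google is counting on.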

But as more and more sites adopt HTTPS, history suggests getting the last few percent of holdouts to sign up might take a while.

Google’s other problem is the old adage about being careful what you wish for: criminals have been seen to exploit HTTPS to gain the trust of users.

No matter how worthy Google’s dream of HTTPS everywhere, there’s still a lot of work ahead.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZdK1Ehv5pMo/

Google-Nest merger reawakens privacy worries

Four years ago, Google paid $3.2 billion for Nest, a fancy smart-home thermostat and smoke alarm maker.

Privacy advocates found this a daunting marriage, but Google wound up running the business at arm’s length, over in its Alphabet division.

Nest co-founder and former CEO Tony Fadell told the BBC at the time of the acquisition that consumers could relax. Nest data wouldn’t be mixed with all the other information Google gathers:

When you work with Nest and use Nest products, that data does not go into the greater Google or any of [its] other business units. We have a certain set of terms and policies and things that are governed. So, just when you say we may be owned by Google, it doesn’t mean that the data is open to everyone inside the company or even any other business group – and vice versa. We have to be very clear on that.

Whew! What a relief, eh?

After all, on the one hand, we had Google, with its already vast knowledge of us. On the other hand, there was Nest, maker of Internet of Things (IoT) thermostats that learn, tracking customers’ daily usage to automatically set heating and cooling temperatures, and of smoke alarms that communicate via Wi-Fi with the company’s other devices or with your smartphone or tablet to send smoke or carbon monoxide alarms.

Put them together, and what do you get? Google’s hardware entrance into the IoT. Such a merger could have meant that Big Google Brother would be able to know even more intimate things about us than it already did at the time, such as whether we were home or not. Then, it easily could have connected that information with our mobile phone data to form ever-more-deep portraits of us for ever-more-targeted advertising or other profit-rich ventures.

Well, it turns out that Fadell’s “let’s be clear on that” promises on data privacy have gotten a bit muddy.

After two years of lukewarm profits at the thermostat company, an attempt by Google to sell it in 2016, and a “meh!” fourth-quarter earnings report from Alphabet earlier this month, Nest and Alphabet last week announced that the Nest and Google Hardware teams would be smushed back together.

The goal is to “supercharge Nest’s mission,” Nest CEO Marwan Fawaz said. That mission is to “create a more thoughtful home, one that takes care of the people inside it and the world around it.”

By working together, we’ll continue to combine hardware, software and services to create a home that’s safer, friendlier to the environment, smarter and even helps you save money – built with Google’s artificial intelligence and the Assistant at the core.

Yes, Google wants your home to be thoughtful, as in, your home will be thinking about you, and it will have artificial intelligence (AI) to power all that thinky-think data crunching.

Why is that worrisome from a privacy perspective?

The BBC talked to Silkie Carlo, director of the Big Brother Watch campaign group, who said that the merger will expand “Google’s monopoly on personal data.”

Google already harvests an incredible amount of detailed information about millions of internet users around the globe. Now, Google is becoming embedded in the home, through ‘smart’ soft surveillance products.

Adding data from Nest’s home sensors and security cameras will significantly expand Google’s monopoly on personal data. Many customers will be justifiably anxious about Google’s growing, centralized trove, especially given that its business model relies on data exploitation.

At the time of the acquisition, privacy advocates worried about what would happen to Nest’s user data afterwards. Pre-acquisition, it was handled by Amazon Web Services – would Google move the data onto its Compute Engine public cloud to do heaven knows what with?

And since then, Nest has added yet more products, which means the sources for its data have increased. It’s moved beyond its initial products – smart thermostats and smoke detectors that use motion detection to know when owners are at home – and added security webcams for inside and outside the home, as well as a camera-equipped doorbell. On top of all that, Nest’s app can be set to gather data from other IoT products, including lights, appliances, fitness trackers, cars, and even sensor-equipped beds, to help “save energy, get comfortable and stay safe”.

Are you comfortable with Google knowing how you sleep? How many steps you take? When the BBC asked Google if it intended to honor Fadell’s stated commitment to keeping Nest data out of Google’s maw, the company provided this statement:

Nest users’ data will continue to be used for the limited purposes described in our privacy statement like providing, developing, and improving Nest services and products. As we develop future plans and future product integrations, we will be transparent with users about the benefits of those integrations, any changes to the handling of data, and the choices available to consumers in connection with those changes.

Nest’s current privacy statement asserts that it will provide notice of any changes on its website or by contacting customers directly.

On earnings calls, Google lists Nest in its “Other Bets” division as one of the few bets that generate considerable revenue, alongside healthcare company Verily and internet service Fiber.

But now, it’s no longer an “Other Bet.” It’s just another piece of Google.

Unfortunately, it’s a part of Google that has a history of security vulnerabilities. Last March, security researcher Jason Doyle found a vulnerability in Google’s Nest Cam, Dropcam and Dropcam Pro that could be exploited by a burglar within Bluetooth range of your house.

Let’s hope that Google’s merger with Nest means fewer security holes in IoT products. Can we get there without customer data privacy being lost?

It would be nice to think so, but c’mon – we’re talking about Google, the data gobbler. What do you think will happen?

The BBC quoted Ben Wood from the CCS Insight consultancy:

It would be naive to expect that as Nest is folded into the bigger Google entity, that there aren’t efforts to bring its platforms and all of the intelligence together. It will be positioned as enhancing the products, but for some customers that may be something that they feel uncomfortable about.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cmRts7rcaTo/

Cryakl ransomware antidote released after servers seized

Free decryption keys for the Cryakl ransomware were released last Friday – the fruit of an ongoing cybercrime investigation.

The keys were obtained during an ongoing investigation by the Belgian Federal Police and shared with the No More Ransom project, an industry-led effort to combat the growing scourge of file-encrypting malware.

The decryption utility was developed by security experts after the Belgian Federal Computer Crime unit located and seized a command-and-control server, allowing the recovery of decryption keys. Kaspersky Lab provided technical expertise to the Belgian authorities.

The decryption tool can recover files scrambled by most – but not all – versions of Cryakl. White hat group MalwareHunterTeam told The Register that all infection versions newer than CL 1.4.0 resist this antidote.

Nonetheless, the release of the tool will offer welcome relief to many of those organisations hit by Cryakl, which will now have the ability to recover encrypted files without paying crooks a ransom.

Since the launch of the NoMoreRansom scheme more than a year ago – in July 2016 – more than 35,000 people have managed to retrieve their files for free, thus preventing miscreants from pocketing over €10m, according to a statement by European policing agency Europol.

There are now 52 free decryption tools on www.nomoreransom.org, which can be used to decrypt 84 ransomware families. CryptXXX, CrySIS and Dharma are the most detected infections.

Ransomware has eclipsed most other cyber threats over recent years, with global campaigns now indiscriminately affecting organisations across multiple industries in both the public and private sectors, as well as consumers. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/12/cryakl_ransomware_antidote/

If you haven’t already killed Lotus Notes, IBM just gave you the perfect reason to do it now, fast

IBM has warned that bugs in its Notes auto-updater mean the service can be tricked into running malicious code.

In its advisory, IBM says the Notes Smart Updater service, which sees upgrades of Notes sent to users’ desktops, “can be misguided into running malicious code from a DLL masquerading as a windows DLL in the temp directory.”

Compromising an auto-updater is serious business: users trust them to bring in safe code, in this case new versions of Notes. Flaws in such a service are therefore extraordinarily dangerous.

The bug, CVE-2017-1711, affects versions in the Notes 8.5 and 9.0 branches.

It’s one of two turned up by Danish infosec company Improsec, which has made its disclosures here (you’ll need Google Translate).

Author Lasse Trolle Borup explains “the service simply copies itself to the TEMP directory and executes the copy, probably for when the update service must update its own executable. The problem here is, that though normal users are not allowed to list the contents of TEMP, they can still write files there.

“By executing a file from an uncontrolled location, the service is exposing itself to DLL Search Order Hijacking”, Borup continued.

All that’s needed to reproduce the bug, Borup wrote, is to compile his proof-of-concept code and give it a static link as MSIMG32.dll; copy that file to C:\windows\temp; and run sc control lnsusvc 136 at the command line.
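The precondition for this whole class of bug is simply that an unprivileged user can plant a file in a directory a privileged service later executes from. A hedged, defensive Python sketch of that check (nothing here is specific to Notes; the directory to probe is just a parameter):

```python
# Hedged sketch: can the current (ideally unprivileged) user create a
# file in a given directory? If a privileged service copies itself to,
# and runs from, such a directory, DLL search-order hijacking is on
# the table.
import os
import tempfile

def can_plant_file(directory):
    """Return True if we can create (and remove) a file in `directory`."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
    except OSError:
        return False  # no write access, or the directory doesn't exist
    os.close(fd)
    os.remove(path)
    return True

# Your own temp directory is normally writable, so this prints True;
# run it against a service's working directory to test the precondition.
print(can_plant_file(tempfile.gettempdir()))
```

Note that, as Borup's write-up highlights, write access can exist even where directory listing is denied, so checking ACLs by eye is not enough; actually attempting the write, as above, is the reliable test.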

IBM made a second disclosure about the same bug here, since it also affects IBM Client Application Access.

Spectre and Meltdown POWERed down, and an AIX fix

Big Blue had a busy week last week, and on Saturday also updated security folk about its Meltdown/Spectre status here.

It has now issued firmware patches for its POWER7 through to POWER9 platforms here (older chips are out-of-service), IBM i operating system patches are here, and AIX patches here.

POWER-series users running a Linux will get their patches from the distribution they use.

In a separate issue, AIX and VIOS also needed patching against CVE-2018-1383, which the company describes as “An unspecified vulnerability in AIX [which] could allow a user with root privileges on one system, to obtain root access on another machine.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/12/notes_dll_impersonation_bug/

Winter Olympics network outages blamed on unexplained cyberhack

The Mail Online has a URL that explicitly states, Russian-cyber-crooks-hacked-Winter-Olympics.html.

The article it links to isn’t quite so explicit, instead demanding to know, “Did Russian cybercriminals hack the Winter Olympics opening ceremony?”

The headline then answers its own question by adding, “[O]fficials don’t know who was behind it.”

Rival UK tabloid The Sun isn’t sure either, but that didn’t stop it shouting, “Cyber crooks HACKED the Winter Olympics opening ceremony”, before wondering, “[B]ut who is responsible?”

In comparison, Mashable is conciliatory, leading with, “Olympic organizers hit with hack during opening ceremony.”

(Even though a hack during the opening ceremony is not at all the same as a hack of the opening ceremony, Mashable couldn’t resist putting the slug olympic-opening-ceremony-hack in its URL.)

So, what actually happened?

As far as we can tell, some systems went down around the time of the opening ceremony, though the true side-effects of any outages still aren’t terribly clear.

Reuters is saying, “The Games’ systems, including the internet and television services, were affected by the hack”; Mashable has it somewhat more modestly as, “the TVs at the main press center are said to have malfunctioned.”

Apparently, the main Pyeongchang 2018 website went offline, though whether this was down to overload as visitors tried to print their tickets, as a direct outcome of hacking, or due to a precautionary disconnect by the organisers isn’t clear either.

Reuters has quoted International Olympic Committee spokesperson Mark Adams, when asked if he knew who was responsible, as saying:

I certainly don’t know. But best international practice says that you don’t talk about an attack.

We’ll give him the benefit of the doubt and assume that the reason he doesn’t know is that he hasn’t been told either way, because of the “no talking” rule, rather than that he just accidentally talked enough to admit that there was an attack but that it is still a mystery.

At any rate, the Games seem to have gone ahead smoothly anyway…

…except perhaps for German third-time gold medal hopeful Felix Loch, whose apparently certain first-place finish in the men’s luge was scotched by a bumpy final run and a dramatic last-moment skid that saw him finish forlornly outside the medals.

So this hack, such as it was, doesn’t seem to have had much effect.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/A5BkkxHLsZM/

Cryptomining script poisons government websites – What to do

Reports surfaced over the weekend claiming that a whole raft of government websites were “infected with malware”.

The full story seems to be more nuanced than that, which is just as well, because the list of infected sites stretches across the Anglophone world, with web pages affected in at least the US, the UK, and Australia.

The malware involved – you’d probably have guessed what it was going to be even if we hadn’t mentioned it in the headline – was a cryptomining script.

Cryptomining malware is software that crooks covertly install on your computer to do the calculations needed to generate cryptocurrency, such as Bitcoin, Monero or Ethereum. The crooks use your electricity and processing power, but keep any cryptocoin proceeds for themselves.

The infection source in this case seems to have been browsealoud DOT com, a service run by a company called Texthelp Limited.

The browsealoud site serves up JavaScript that can convert pages on your website to speech, in order to help out visitors who aren’t fluent in English, or who aren’t good at reading.

As you can imagine, government websites are meant to serve everyone, even those who aren’t literate, and numerous regulations exist that cover how accessible the public sector needs to make its web pages.

Indeed, Texthelp lists some of these regulations on its website, including: EU – Convention on Human Rights, UK – Accessible Information Standard, IRE – Disability Act 2005, US – Americans with Disabilities Act (ADA), CA – Canadian Charter on Rights and Freedom, AUS – Disability Discrimination Act, and more.

Server hacked by crooks

Unfortunately, however, the browsealoud script server was hacked by crooks, and its usual JavaScript content – the very script that was meant to help government sites to serve their users better – was augmented with an obfuscated chunk of JavaScript that started mining for cryptocurrency.

Fortunately, thanks to pressure from the security community, notably a researcher named Scott Helme, the offending site was taken down, so the rogue cryptomining has stopped, at least for now.

Surprisingly, there’s no notification or clarification yet [2018-02-11T23:45Z] on Texthelp’s website, but as far as we can tell from existing reports:

  • No malicious activity other than rogue cryptomining was reported.
  • The browsealoud DOT com site is currently [2018-02-11T23:45Z] offline.
  • The known malicious scripts all relied on cryptomining software downloaded from coinhive DOT com, a site that is blocked by many security companies, including Sophos.

Interestingly, the rogue script that was injected into the browsealoud server includes code that tries to limit the amount of processing power that the cryptomining will steal, presumably in the hope of staying unnoticed for longer.

On my dual-core hyperthreaded Mac running Firefox, for example, the cryptomining code limits itself to a single mining process running at 60% of the maximum possible rate.
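The limiter code itself isn't reproduced here, but a CPU cap of this sort usually amounts to a duty-cycle loop: be busy for a fraction of each time slice and sleep for the rest. A generic Python sketch of the pattern (the numbers are illustrative, not taken from the rogue script):

```python
# Generic duty-cycle throttle: run `work()` flat out for `budget` of
# each time slice, then sleep for the remainder, capping average CPU
# use at roughly budget * 100%.
import time

def run_throttled(work, budget=0.6, slice_s=0.05, duration_s=0.2):
    """Call `work()` repeatedly, busy for `budget` of each slice."""
    calls = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        slice_end = time.monotonic() + slice_s * budget
        while time.monotonic() < slice_end:   # busy part of the slice
            work()
            calls += 1
        time.sleep(slice_s * (1 - budget))    # idle part of the slice
    return calls

n = run_throttled(lambda: sum(range(100)))
print(n > 0)  # True
```

Capping usage this way keeps fans quiet and task managers unremarkable, which is presumably the point: a miner pinned at 100% CPU gets noticed and removed much sooner.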

Who was hit?

We don’t have an exhaustive list, but Scott Helme’s twitter thread shows screenshots of affected pages on services as diverse as US Courts (US), the General Medical Council (UK), the National Health Service (UK), Manchester City Council (UK), the Queensland Government (AU), and – in an irony we might as well laugh at now – the Information Commissioner’s Office (UK).

The victims seem mainly to have been public sector websites, but Texthelp sells its services into the private sector, too, and Helme has listed at least one private sector victim.

What to do?

As far as we can see, simply shutting down your browser is enough to kill off any cryptomining scripts that may have been left behind by this attack.

As far as we know, no rogue code other than cryptomining scripts was reported on the browsealoud DOT com site.

Additionally, the script that we examined from this attack simply:

  • Downloaded the coinhive DOT com cryptominer.
  • Tried to limit CPU usage to stay unnoticed as long as possible.
  • Started mining.

We have therefore formed the opinion that the rogue script in this case: didn’t try to launch any other attacks, didn’t make itself persistent (in other words, won’t survive after you exit your browser), didn’t steal any data, and didn’t try to change any browser settings.

(If we have reason to change this opinion based on what emerges as this attack is investigated more thoroughly, we’ll let you know.)

If you run a website that uses the services of browsealoud DOT com we recommend that you stop your own pages from even trying to load content from that site (no matter that it is offline) until you receive a credible explanation and an all-clear from Texthelp.
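Longer term, one defence against a tampered third-party script host is Subresource Integrity (SRI): you pin a hash of the script you vetted into the script tag, and the browser refuses to run anything that doesn’t match. A Python sketch of generating the integrity value (the script bytes below are a stand-in for a real, vetted file):

```python
# Sketch: compute a Subresource Integrity value for a third-party
# script you have reviewed. SRI commonly uses SHA-384, base64-encoded.
import base64
import hashlib

def sri_hash(script_bytes):
    """Return an SRI integrity value for the given script contents."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

js = b"console.log('hello');"  # stand-in for the real, vetted script
print(sri_hash(js))
# Embed the result as:
#   <script src="https://third-party.example/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
```

Had the affected sites pinned the browsealoud script this way, browsers would have refused to run the modified copy the moment the crooks altered it.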

PS. Worried about malware that might be left behind on your computer for whatever reason? Try our Virus Removal Tool. It’s free, and you don’t have to uninstall your existing anti-virus software first. (Windows only.)


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qFVIolwkTwo/