STE WILLIAMS

Malware and HTTPS – a growing love affair

If you’re a regular Naked Security reader, you’ll know that we’ve been fans of HTTPS for years.

In fact, it’s nearly nine years since we published an open letter to Facebook urging the social networking giant to adopt HTTPS everywhere.

HTTPS is short for HTTP-with-Security, and it means that your browser, which uses HTTP (hypertext transfer protocol) for fetching web pages, doesn’t simply hook up directly to a web server to exchange data.

Instead, the HTTP information that flows between your browser and the server is wrapped inside a data stream that is encrypted using TLS, which stands for Transport Layer Security.

In other words, your browser first sets up a secure connection to-and-from the server, and only then starts sending requests and receiving replies inside this secure data tunnel.

As a result, anyone in a position to snoop on your connection – another user in the coffee shop, for example, or the Wi-Fi router in the coffee shop, or the ISP that the coffee shop is connected to, or indeed almost anyone in the network path between you and the other end – just sees shredded cabbage instead of the information you’re sending and receiving.

HTML source code of simple web page.
The HTML source above, rendered in a browser.
Web page ‘on the wire’ without TLS – raw HTTP data can be snooped.
Blue: HTTP ‘200’ reply. Red: HTTP headers. Green: page content.
Web page fetched using HTTPS via a TLS connection – encrypted content can’t be snooped.
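The “on the wire” difference above is easy to reproduce from first principles. A plain-HTTP request is nothing but readable ASCII, so every hop between you and the server sees it exactly as your browser sent it; with HTTPS, those same bytes are written into a TLS-wrapped socket and only ciphertext appears on the network. Here’s a minimal sketch (the hostname is illustrative):

```python
# Sketch: what a browser actually puts on the wire for a plain-HTTP fetch.
# Over HTTPS the identical bytes would be written to a TLS-wrapped socket
# instead, so an eavesdropper would see only ciphertext.

def build_http_request(host: str, path: str = "/") -> bytes:
    """Build a raw HTTP/1.1 GET request - plain, snoopable ASCII."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

request = build_http_request("example.com")
print(request.decode("ascii"))  # every header is readable in transit
```

Without TLS, everything printed here – the URL path, the Host header, and whatever cookies or form data follow – is visible to anyone along the network path.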

Why everywhere?

But why HTTPS everywhere?

Nine years ago, Facebook was already using HTTPS at the point where you logged in, thus keeping your username and password unsnoopable, and so were many other online services.

The theory was that it would be too slow to encrypt everything, because HTTPS adds a layer of encryption and decryption at each end, and therefore just encrypting the “important” stuff would be good enough.

We disagreed.

Even if you didn’t have an account on the service you were visiting, and therefore never needed to log in, eavesdroppers could track what you looked at, and when.

As a result, they’d end up knowing an awful lot about you – just the sort of stuff, in fact, that makes phishing attacks more convincing and identity theft easier.

Even worse, without any encryption, eavesdroppers can not only see what you’re looking at, but also tamper with some or all of your traffic, both outbound and inbound.

If you were downloading a new app, for example, they could sneakily modify the download in transit, and thereby infect you with malware.

Anyway, all those years ago, we were pleasantly surprised to find that many of the giant cloud companies of the day – including Facebook, and others such as Google – seemed to agree with our disagreement.

The big players ended up switching all their web traffic from HTTP to HTTPS, even when you were uploading content that you intended to publish for the whole world to see anyway.

Fast forward to 2020, and you’ll hardly see any HTTP websites left at all.

Search engines now rate unencrypted sites lower than encrypted equivalents, and browsers do their best to warn you away from sites that won’t talk HTTPS.

Left: Safari on iOS warning about a non-HTTPS web page.
Right: Firefox notification for the same page.

Even the modest costs associated with acquiring the cryptographic certificates needed to convert your webserver from HTTP to HTTPS have dwindled to nothing.

These days, many hosting providers will set up encryption at no extra charge, and services such as Let’s Encrypt will issue web certificates for $0 for web servers you’ve set up yourself.

HTTP is no longer a good look, even for simple websites that don’t have user accounts, logins, passwords or any important secrets to keep.

Of course, HTTPS only applies to the network traffic – it doesn’t provide any sort of warranty for the truth, accuracy or correctness of what you ultimately see or download.

An HTTPS server with malware on it, or with phishing pages, won’t be prevented from committing cybercrimes by the presence of HTTPS.

Nevertheless, we urge you to avoid websites that don’t do HTTPS, if only to reduce the number of danger-points between the server and you.

In an HTTP world, any and all downloads could be poisoned after they leave an otherwise safe site, a risk that HTTPS helps to minimise.

Goose and gander

Sadly, what’s good for the goose is good for the gander.

As you can probably imagine, the crooks are following where Google and Facebook led, by adopting HTTPS for their cybercriminality, too.

In fact, SophosLabs set out to measure just how much the crooks are adopting it, and over the past six months have kept track of the extent to which malware uses HTTPS.

Well, the results are out, and it makes for interesting – and useful! – reading.

In the report, we didn’t look at how many download sites or phishing pages are now using HTTPS, but instead at how widely malware itself is using HTTPS encryption.

Ironically, perhaps, as fewer and fewer legitimate sites are left talking plain old HTTP (usually done on TCP port 80), that traffic starts to look more and more suspicious.

Indeed, the time might not be far off where blocking HTTP entirely at your firewall will be a reliable and unexceptionable way of improving cybersecurity.

The good news is that by comparing malware traffic via port 80 (usually allowed through firewalls and almost entirely used for HTTP connections) and port 443 (the TCP port that’s commonly used for HTTPS traffic), SophosLabs found that the crooks are still behind the curve when it comes to HTTPS adoption…

…but the bad news is they’re already using HTTPS for nearly one-fourth of their malware-related traffic.
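As a rough sketch of the kind of tally that comparison involves (the flow records below are invented for illustration, not SophosLabs data), you can estimate HTTPS adoption by bucketing malware callbacks by destination port:

```python
# Illustrative only: tally malware-related flows by destination TCP port
# to estimate the share of traffic hidden inside HTTPS (port 443).

def https_share(flows):
    """flows: iterable of (host, dst_port) tuples; returns fraction on 443."""
    port80 = sum(1 for _, port in flows if port == 80)
    port443 = sum(1 for _, port in flows if port == 443)
    total = port80 + port443
    return port443 / total if total else 0.0

# Synthetic sample shaped like the report's headline figure:
sample = [("c2.example", 443)] * 23 + [("dl.example", 80)] * 77
print(f"{https_share(sample):.0%}")  # -> 23%
```

Real measurement is messier, of course – HTTPS can run on non-standard ports, and port 443 carries plenty of non-HTTP traffic too.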

Malware often uses standard-looking web connections for many reasons, including:

  • Downloading additional or updated malware versions. Many, if not most, malware samples include some sort of auto-updating feature, often used by the crooks to sell access to infected computers onwards to the next wave of criminals by “upgrading” to a new malware infection.
  • Fetching command-and-control (C&C or C2) instructions. Many, if not most, modern malware “calls home” in order to find out what to do next. Crooks may have thousands, tens of thousands or more computers all waiting for commands from the same source, giving the criminals a powerful “zombie army”, known as a botnet (short for robot network), of devices that can be harnessed for evil simultaneously.
  • Uploading stolen data. Data stealing is known in the jargon as exfiltration, and by hiding uploads in encrypted network connections, crooks can not only make it look like routine web browsing, but also make it much harder for you to scan and verify the data before it leaves your network.
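The call-home behaviour in the second item above is often detectable even when the payload is encrypted, because C2 beacons tend to fire at suspiciously regular intervals while humans browse in bursts. A simplified sketch (the jitter threshold is an arbitrary illustration, not a tuned value):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag connection times (seconds) whose gaps are near-constant.

    Metronomic intervals suggest automated call-home traffic rather than
    a human browsing; max_jitter is the allowed stdev/mean ratio.
    """
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    return avg > 0 and pstdev(gaps) / avg <= max_jitter

print(looks_like_beacon([0, 60, 120, 180, 240]))  # every 60s -> True
print(looks_like_beacon([0, 5, 200, 210, 900]))   # bursty, human-ish -> False
```

This is one reason timing analysis survives the move to HTTPS: encryption hides what is said, not when or how often.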

What to do?

  • Read the report. You will learn how various contemporary malware strains are using HTTPS, along with other tricks, to look more like legitimate traffic.
  • Use layered protection. Stopping malware before it gets in at all should be your top-level goal.
  • Consider HTTPS filtering at your network gateway. A lot of sysadmins avoid HTTPS filtering for a mixture of privacy and performance reasons. But with a nuanced web filtering product you don’t need to peek inside all the encrypted traffic on your network – you can leave online banking connections alone, for example – and you won’t bring your network to its knees due to the overhead of decrypting network packets.

Latest Naked Security podcast

LISTEN NOW

Click-and-drag on the soundwaves below to skip to any point in the podcast. You can also listen directly on Soundcloud.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2I1AdG1EeE4/

8 Things Users Do That Make Security Pros Miserable

When a user interacts with an enterprise system, the result can be productivity or disaster. Here are eight opportunities for the disaster side to win out over the productive side.

IT security would be so much easier were it not for users. To be specific, it would be easier if users didn’t insist on doing things with their computers and devices. Unfortunately for security teams, it’s hard to have a productive workforce if all they do is sit and stare at their lovely, perfectly safe computers, so security professionals have to constantly take into account users and their risk behaviors.

Not all user interactions are risky, fortunately, and not all risky interactions are equally risky. So which of the unfortunate interactions are most likely to send security professionals diving for their quart-sized bottle of bright pink antacid beverage?

This list springs from a conversation with Corey Nachreiner, CTO at WatchGuard. As with many of these conversations, it began with a short list that grew with, “Oh, and another one is … ” repeated a couple of times. After that conversation, Dark Reading had the same chat with other security professionals and found an unsurprising level of agreement that these are bad, bad things.

It’s important to note that not all of these bad interactions are the fault of users. While some undeniably do fall squarely at the feet of the individual behind the keyboard, some are the result of design or implementation decisions by enterprise IT — decisions that users have no real control over. In every case, though, regardless of who is responsible, there are steps enterprise security can take to reduce the impact of these bad interactions. Let’s take a look at the list of bad things, the good options for dealing with them, and how your security team can work to have more secure interactions — and fewer hits off the big pink bottle.

“Many employees will perform some risky behavior within organizations; however, it really comes to what the risk is exposing and what data it is meant to be protecting,” says Joseph Carson, chief security scientist at Thycotic. How is your organization dealing with these behaviors? And do you think we left some critical interactions off our list? Let us know in the Comments section, below — the conversation there should be a very good interaction, indeed.

(Image: Benzoix VIA Adobe Stock)

 

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/theedge/8-things-users-do-that-make-security-pros-miserable/b/d-id/1337047?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Staircase to the Cloud: Dark Reading Caption Contest Winners

A humorous nod to the lack of gender equity in cybersecurity hiring was our judges’ unanimous choice. And the winners are …

First Prize (a $25 Amazon gift card) goes to Robin Bader, IT operations manager at the Attorney Registration and Disciplinary Commission (ARDC) in Chicago, Ill. The winning caption:

Second prize, a $10 Amazon gift card, to Ahmed Elezaby for: 

Well, he was saying something about exploiting a privilege escalation vuln, and then poof.

The two captions bested 44 other entries, all of which made our panel of judges (John Klossner, Tim Wilson, Kelly Jackson Higgins, Sara Peters, Kelly Sheridan, Curtis Franklin, Jim Donahue, Gayle Kesten, and yours truly) smile for days.

Thanks also to everyone who entered the contest and to our loyal readers who cheered the contestants on. If you haven’t had a chance to read all the entries, be sure to check them out today.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Name that Toon: Private Button Eye.”

Marilyn has been covering technology for business, government, and consumer audiences for over 20 years. Prior to joining UBM, Marilyn worked for nine years as editorial director at TechTarget Inc., where she launched six Websites for IT managers and administrators supporting … View Full Bio

Article source: https://www.darkreading.com/operations/staircase-to-the-cloud-dark-reading-caption-contest-winners/a/d-id/1337005?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Roads to Riches

You could be making millions in just two years!

Source: Don McMillan

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/the-road(s)-to-riches/b/d-id/1337070?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Severe vuln in WordPress plugin Profile Builder would happily hand anyone the keys to your kingdom

A vulnerability in a popular WordPress user role plugin lets any random person create an admin-level account on targeted websites.

The bug in Profile Builder was given a CVSS score of 10.0 by WordPress security biz Wordfence, though precise details of the bug are not yet available on the usual CVE-tracking websites.

According to Wordfence: “A bug in the form handler made it possible for a malicious user to submit input on form fields that didn’t exist in the actual form. Specifically, if the site’s administrator didn’t add the User Role field to the form, an attacker could still inject a user role value into their form submission.”

Profile Builder is a form-building plugin used mainly for blogs and websites with comment sections. Going by the description on the WordPress.org plugin repository, it automates the user registration process and adds a nice-looking frontend menu for users to do things like request password resets and so on.

Wordfence reckoned in a detailed blog post that if, during initial setup of Profile Builder versions up to and including version 3.1.0, a site admin did not set a default user role field for newly registered users, a malicious person could simply submit a new user registration along with their own chosen user role, such as admin.

If no user role was defined by the site admin during initial setup of the plugin, the form field defining the user role was not present for new users registrations – yet the plugin would happily act on a form field if one was received. An unauthenticated attacker could therefore remotely create an admin-level account and cause chaos.

Version 3.1.1 of Profile Builder was released a week ago. WordPress.org’s counter tracks 50,000 installs of the plugin.

Vulnerabilities in WordPress plugins are not uncommon. Just a few weeks ago a similar authentication vuln was plugged in two popular plugins running on around 320,000 WordPress-powered websites. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/17/wordpress_profile_builder_v3_1_0_vuln/

Tutanota cries ‘censorship!’ after secure email biz blocked – for real this time – in Russia

Fresh from last week’s controversy with a US telco, German secure email biz Tutanota has declared today that the Russian authorities have pulled the plug on its services.

Russia’s move appears to be a continuation of domestic policy aimed at shutting out foreign-owned services that it cannot control or influence.

In a statement announcing the block, Tutanota co-founder Matthias Pfau lamented the spread of “censorship” online.

“We condemn the blocking of Tutanota. It is a form of censorship of Russian citizens who are now deprived of yet another secure communication channel online. At Tutanota we fight for our users’ right to privacy online, also, and particularly, in authoritarian countries such as Russia and Egypt.”

Tutanota reckoned that VPNs and Tor still worked as a means of accessing the site locally, it said on its corporate blog.

The block comes just days after American telco AT&T was accused of blocking the email provider under murky circumstances, though an AT&T spokesman denied there was “any blocking” and put it down to a technical glitch.

Tutanota joins fellow Western email provider ProtonMail in Vladimir Putin’s internet naughty corner. Last year ProtonMail found itself inaccessible by Russians, though it took unspecified technical measures to ensure this was a short-lived block.

This is not the first time that Tutanota has been blocked: US telco Comcast was accused by Pfau of blocking his business for one night in March 2018, apparently triggering “hundreds” of complaints.

“Tutanota is also being blocked in Egypt since October 2019,” lamented Pfau in today’s statement.

Russia has long been trying to wall itself off from the wider internet, going as far as to mandate the inclusion of Russian software in domestically sold smart devices. Censorship is, as we in the West know, a feature of life under communist or post-communist regimes. ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/17/tutanota_blocked_russia/

Police bust alleged operator of Bitcoin mixing service Helix

The guy who allegedly wanted to be the Dark Net’s “go-to” money launderer by acting as a “Bitcoin mixer” – soliciting cryptocurrency from crooks, slicing and dicing the coins, and then remixing them in an ultimately futile attempt to obscure their source – has been busted.

The US Department of Justice (DOJ) announced on Thursday that Larry Harmon, 36, of Akron, Ohio, has been indicted on three counts of allegedly running a Bitcoin mixer service called Helix from 2014 to 2017.

These services are also called Bitcoin tumblers, which is how Harmon allegedly referred to Helix in his sales pitch to the underworld. This is how the indictment summarizes Harmon’s alleged first post about his service in June 2014 – a pitch to convince criminals to pay him to hide their transactions from law enforcement:

Before launching Helix, HARMON posted online that Helix was designed to be a ‘bitcoin tumbler’ that ‘cleans’ bitcoins by providing customers with new bitcoins ‘which have never been to the darknet before.’

Harmon allegedly went on to promise that there was no way that law enforcement could tell which addresses are Helix addresses, given that the service uses new addresses for each transaction. His alleged “I’ll-scare-you-crooks-into-paying” followup advertising spiel:

No one has ever been arrested just through bitcoin taint, but it is possible and do you want to be the first? …Most markets use ‘Hot Wallets’, they put all their fees in these wallets. [Law enforcement] just needs to check the taints on these wallets to find all the addresses a market uses.

In short, “taints” are the trail left by bitcoins as they travel from wallet to wallet. Here’s a discussion about traceability from Stack Exchange.
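Taint-following can be sketched as a reachability problem over the public ledger: any address that receives coins from a tainted address becomes tainted itself. The toy transaction graph below is invented for illustration – real blockchain analysis must also cope with mixing, change addresses and multi-input transactions:

```python
def propagate_taint(transactions, seed_addresses):
    """Mark every address reachable from a tainted seed via coin flows.

    transactions: iterable of (sender, receiver) address pairs.
    Returns the full set of tainted addresses.
    """
    tainted = set(seed_addresses)
    changed = True
    while changed:  # iterate to a fixed point over the flow graph
        changed = False
        for sender, receiver in transactions:
            if sender in tainted and receiver not in tainted:
                tainted.add(receiver)
                changed = True
    return tainted

ledger = [
    ("market_hot_wallet", "wallet_a"),
    ("wallet_a", "wallet_b"),
    ("wallet_b", "exchange_deposit"),
    ("unrelated_user", "wallet_c"),
]
print(propagate_taint(ledger, {"market_hot_wallet"}))
```

The point Harmon was allegedly selling against is visible even in this toy: because the ledger is public, the trail from a market’s hot wallet to an exchange deposit is a mechanical graph walk.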

Harmon’s Helix bitcoin mixer allegedly moved at least 354,468 bitcoin on behalf of customers: a sum that was valued at over $300 million at the time of the transactions and which is now worth about $3.6 billion. Most of those customers came in from Dark Net markets. Helix had partnered with AlphaBay – one of the largest Dark Net markets before law enforcement seized it in July 2017 – to provide bitcoin laundering for AlphaBay’s customers.

Harmon’s also been linked to “Grams,” a Dark Net search engine. Other Dark Net marketplaces that funneled funds to Helix included Agora, Market, Nucleus, and Dream Market, according to the indictment.

One of those bitcoin transfers led to Harmon’s bust. In November 2016, an FBI agent working undercover transferred 0.16 bitcoin from an AlphaBay bitcoin wallet to Helix. The tumbler mixed it up and exchanged it for an equivalent amount of “clean” coins, minus a fee of 2.5%. Those new coins weren’t directly traceable to AlphaBay, but that hasn’t stopped law enforcement in the past.

In January 2018, for example, researchers figured out how to unmask dark web markets’ buyers and sellers by forensically connecting them to Bitcoin transactions.

They didn’t unmask many, and, granted, those they did manage to identify made mistakes that were more common in the early, less careful days of dealing in cryptocurrency: they didn’t hide transactions using Bitcoin laundering services, for example, while some were none too scrupulous about using fake online identities that couldn’t be traced to personally identifiable information (PII).

Helix isn’t the first mixing service to go down in non-anonymous flames. A string of mixing services have eventually figured out that Bitcoin transactions aren’t fully anonymous. That’s why, in July 2017, the biggest mixer of the day – BitMixer – abruptly shut down.

Although BitMixer’s operator denied the connection at the time, its closure came fast on the heels of the shutdowns of the AlphaBay and Hansa dark markets. The reason why BitMixer closed up shop: it may have taken its operators a few years, but they finally realized that Bitcoin doesn’t have additional protections for PII – by design, not by omission.

BitMixer’s epiphany and shutdown came a week after Google and blockchain analysis firm Chainalysis – which markets a tool called “Reactor” to track and analyze the movement of Bitcoin – announced that they had managed to track ransomware payments, paid in Bitcoin, from end to end.

Some of those ransomware payments had been moved through Bitcoin mixers. That doesn’t mean that all bitcoin mixer payments are done for illicit purposes, mind you. In fact, Chainalysis said in August 2019 that most mix transactions are done for additional privacy.

In spite of that, 2017 was not a good time for mixers – many of which, like Helix, made their sales pitches directly to the “solidly illicit” crowd. In fact, BitMixer shut down three days after the DOJ shut down BTC-e: a fraudulent Russian cryptocurrency exchange that was handling 95% of all ransomware payments at the time and which itself relied on mixing services.

They can find you

Criminals seem to think that the Dark Net and cryptocurrency have a lot more opaque nooks and crannies than they do, the FBI suggests. Here’s a statement from Special Agent in Charge Timothy M. Dunham of the Criminal Division of the FBI Washington Field Office:

The perceived anonymity of cryptocurrency and the Darknet may appeal to criminals as a refuge to hide their illicit activity. However, as [Harmon’s arrest] demonstrates, the FBI and our law enforcement partners are committed to bringing the illegal practices of money launderers and other financial criminals to light and to justice, regardless of whether they are using new technological means to carry out their schemes.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wxaX5m7CMuw/

Senator calls for dedicated US data protection agency

The US needs a data protection agency of its own, and Kirsten Gillibrand wants to be the one that makes it happen.

Gillibrand, the US senator for New York, released the call to action last week. She announced draft legislation known as the Data Protection Act on Thursday 13 February, a day after explaining her reasoning in a post on Medium. We need to do this to catch up, she said:

The United States is vastly behind other countries on this. Virtually every other advanced economy has established an independent agency to address data protection challenges, and many other challenges of the digital age.

At the moment, the US doesn’t have a single body dedicated to enforcing privacy rules. It’s a side-mission at the Federal Trade Commission (FTC), which is limited in its approach.

Under Section 5 of the FTC Act, it can’t issue fines for privacy violations immediately. Instead, it has to issue a consent decree (the violator has to agree that it won’t be naughty again) and it can only fine a company if it violates that decree. That’s why it didn’t fine Facebook for privacy infractions in 2011 but did levy a $5bn fine last year.

In any case, the FTC doesn’t just focus on privacy. Gillibrand wants a federal data agency dedicated to the task with three core missions.

The first would give Americans control over their own data by enforcing data protection rules. The key word here is ‘enforcing’ – it would be able to not just conduct investigations and share its findings, but to impose civil penalties. These would be capped at $1m for each day that an organisation knowingly violates the Act. This money would go into a relief fund that the Agency would use to help compensate victims of data privacy violations.

The second mission would be to promote privacy innovations, including technologies that minimise the collection of personal data or eliminate it altogether. Under this mission, Gillibrand would also come down hard on service contracts that gave customers no choice but to give up their privacy. She also says that she’d protect against “pay for privacy” provisions in service contracts.

Finally, the third mission would be to “prepare the American government for the digital age”. It would advise Congress on emerging privacy and technology issues like deepfakes and encryption, and represent the US at international privacy forums.

The law defines personal data very broadly, as the California Consumer Privacy Act (CCPA) does, including online identifiers and IP addresses alongside names and addresses as identifying information. Bank account numbers also count.

The law would apply to any company with revenues over $25m, or which handles the personal data of 50,000 or more people. The clause that would seem to throw many companies outside the scope of the act is that a covered organisation would have to derive at least half its annual revenues from the sale of personal data.
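Read together, those clauses amount to a simple predicate: a size test combined with a revenue-mix test. The sketch below encodes the thresholds as described above – a plain-language reading of a draft bill, not legal analysis:

```python
def covered_by_draft_act(annual_revenue, records_held, data_sale_revenue):
    """Rough reading of the draft's scope: a size test AND a revenue-mix test."""
    # Size test: revenues over $25m, or data on 50,000 or more people.
    meets_size = annual_revenue > 25_000_000 or records_held >= 50_000
    if annual_revenue <= 0:
        return False
    # Revenue-mix test: at least half of revenue from selling personal data.
    sells_mostly_data = data_sale_revenue / annual_revenue >= 0.5
    return meets_size and sells_mostly_data

# A data broker earning 60% of $30m from selling personal data: in scope.
print(covered_by_draft_act(30_000_000, 10_000, 18_000_000))    # True
# A large firm that monetises data internally rather than selling it: out.
print(covered_by_draft_act(70_000_000_000, 2_000_000_000, 0))  # False
```

The second example shows why the revenue-mix clause narrows the net so much: plenty of data-hungry companies never sell the data itself.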

It isn’t clear that a company such as Facebook would fall under those conditions, as it doesn’t actually sell personal data – it collects and uses it internally to target ads for its clients. Still, this is only a draft for discussion.

In any case, this law wouldn’t pre-empt state laws. A company that violated California’s CCPA privacy law would still be liable for state prosecution under that law too.

This isn’t the only attempt at reform being considered on the Hill. The Consumer Online Privacy Rights Act (COPRA) would outline strict privacy rules and establish a dedicated office within the FTC to enforce them. The Brookings Institution, which researches policy in Washington DC, has said that the FTC is up to the job of regulating privacy, but hasn’t been doing an especially good job lately, taking on too few cases and focusing on issuing consent decrees rather than litigating. It would need some significant reforms to be ready – including a clear Congressional mandate.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mOUsx7W00QI/

Google forced to reveal anonymous reviewer’s details

It’s a small business’s worst nightmare: someone leaves a review on a popular site trashing your company, and they do it anonymously. That’s what happened to Mark Kabbabe, who runs a tooth whitening business in Melbourne, Australia. Last week, a court forced Google to reveal the details of an anonymous poster who published a bad review of his business.

According to the court judgement, the anonymous poster used the pseudonym CBsm 23 to publish a review on Google about a procedure they had undergone at Kabbabe’s clinic. The review said that the dentist made the whole experience “extremely awkward and uncomfortable”, claiming that the procedure was a “complete waste of time” and was not “done properly”. It seemed like Kabbabe “had never done this before”, said the review, adding that other patients had “been warned!” and should “STAY AWAY”. Ouch.

Kabbabe contacted Google in November 2019, according to the court order, asking it to take down the review, but Google refused. He mailed again on 5 February, asking for information about the poster, but Google replied that:

We do not have any means to investigate where and when the ID was created.

This was enough for Justice Murphy, presiding over the case, who has ordered that Google hand over the anonymous poster’s details. In his court ruling, he said:

Dr Kabbabe is not required to make inquiries that will be fruitless and in my view he has done enough.

He added:

…notwithstanding Google’s response, I consider that Google is likely to have or have had control of a document or thing that would help ascertain that description of the prospective respondent CBsm 23…

Google could possibly surface the offending poster’s subscriber information and related IP addresses and phone numbers, along with location metadata, the judge said. It could also probably provide any other Google accounts, including their full name, email address and identifying details, originating from the same IP address around the same time that CBsm 23 posted their negative review.

Things seem to have progressed while Kabbabe pursued his case against Google. First, the search and advertising giant has removed the link on the business’s maps page that reveals all 28 reviews for that business. Now, only a review summary is available showing three reviews, all of which are positive.

Second, Kabbabe’s lawyer, Mark Stanarevic of Matrix Legal, has launched a class action lawsuit against Google, aimed at helping companies suffering from anonymous bad reviews. It offers plaintiffs the chance to “fast track the process of finding out identifiable information”.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d4sEoG9Kawk/

Google pulls 500 malicious Chrome extensions after researcher tip-off

Google has abruptly pulled over 500 Chrome extensions from its Web Store after researchers discovered they were stealing browsing data and executing click fraud and malvertising once installed on the computers of millions of users.

Depending on which way you look at it, that’s either a good result because they’re no longer free to infect users, or an example of how easy it is for malicious extensions to sneak on the Web Store and stay there for years without Google noticing.

That they were noticed at all is thanks to researcher Jamila Kaya who used Duo Security’s CRXcavator tool (also available at CRXcavator.io) to spot a handful of extensions that seemed suspicious, mostly themed around marketing and advertising.

Spotting dodgy extensions was only the start – she still had to connect them to one another to uncover recurring patterns that might highlight other offenders.

The first giveaway was that the extension code often looked like copycats of one another despite small changes to the names of internal functions designed to obscure this.
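That copycat pattern is mechanically detectable: if you replace every identifier with a placeholder, two extensions that differ only in function and variable names collapse to the same string. A crude sketch of the idea (real tooling such as CRXcavator is far more sophisticated; the JavaScript snippets are made up):

```python
import re

def normalize(source: str) -> str:
    """Replace every identifier (keywords included - it's a blunt tool)
    with a placeholder so renamed copies of the same code compare equal."""
    return re.sub(r"[A-Za-z_$][\w$]*", "ID", source)

# Two 'different' extensions whose code differs only in chosen names:
ext_a = "function trackClicks(u){ sendBeacon(u); }"
ext_b = "function logVisits(x){ postData(x); }"

print(normalize(ext_a) == normalize(ext_b))  # True: same shape, renamed
```

Once suspicious samples cluster this way, each cluster can be examined as a family rather than as hundreds of unrelated extensions.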

Another troubling similarity was the number of permissions requested – enough to allow them to access browsing data and run when visiting websites using HTTPS.

Working with Duo Security, Kaya eventually identified 70 extensions that seemed to be related to one another. All contacted similar command-and-control networks and seemed to have been designed to detect and counteract sandbox analysis.

Ad fraud was the biggest activity – contacting domains without the user being aware – as well as redirecting users to malware and phishing domains.

Could it get worse?

Many of the extensions had been active for nearly a year, with evidence some had been around for much longer.

Google carried out its own fingerprinting based on the research and the number of dubious extensions ballooned to over 500. Google later said:

We do regular sweeps to find extensions using similar techniques, code, and behaviors, and take down those extensions if they violate our policies.

Except, an infected user might point out, not often or effectively enough to stop 500 malicious extensions from finding a home inside the Chrome Web Store.

The extensions discovered by Duo Security and Kaya had been installed a total of 1.7 million times.

Google’s Chrome Web Store has around 190,000 extensions, which puts the loss of 500 dubious ones into perspective. That said, a report by Extension Monitor last August estimated that three-quarters of these have between zero and a handful of installs.

Perhaps the sheer number is part of the problem. Malicious extensions have a large population of unused software in which to hide.

Mozilla’s Firefox has experienced the same issue on a smaller scale to the extent that it recently banned 197 risky extensions and reminded everyone that it no longer tolerates extensions that execute remote code.

Anyone using one of the now-suspended 500 extensions will find they’ve automatically been deactivated in their browser, with warnings that mark them as malicious. Uninstalling them, however, must be done by the user.

The lesson is not to assume that because an extension is hosted from an official web store that means it is safe to use. The best advice:

    • Install as few extensions as possible and, despite the above, only from official web stores.
    • Check the reviews and feedback from others who have installed the extension.
    • Pay attention to the developer’s reputation and how responsive they are to questions and how frequently they post version updates.
    • Study the permissions they ask for (in Chrome, Settings > Extensions > Details) and check they’re in line with the features of the extension. And if these permissions change, be suspicious.
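The permission check in that last step can be partly automated. The sketch below inspects an extension manifest, represented here as a Python dict; the risky-permission list is an illustrative subset chosen for this example, not an official Google classification:

```python
# Flag broad extension permissions worth a second look before installing.
RISKY_PERMISSIONS = {
    "<all_urls>",  # run on every site you visit
    "webRequest",  # observe web traffic as it happens
    "history",     # read your full browsing history
    "cookies",     # read cookies, including logged-in session cookies
    "tabs",        # see the URL and title of every open tab
}

def flag_permissions(manifest: dict) -> list:
    """Return the requested permissions that appear on the risky list."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & RISKY_PERMISSIONS)

sample_manifest = {
    "name": "Totally Innocent Coupon Finder",
    "permissions": ["storage", "webRequest", "cookies"],
    "host_permissions": ["<all_urls>"],
}
print(flag_permissions(sample_manifest))
```

A non-empty result isn’t proof of malice – an ad blocker legitimately needs broad access – but it tells you which permissions to weigh against what the extension claims to do.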

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/izP6lapUPOU/