STE WILLIAMS

Supra smart TVs aren’t so super smart: Hole lets hackers go all Max Headroom on e-tellies

Owners of Supra Smart Cloud TVs are in danger of getting some unwanted programming: it’s possible for miscreants or malware on your Wi-Fi network to switch whatever you’re watching for video of their or its choosing.

Bug-hunter Dhiraj Mishra laid claim to CVE-2019-12477, a remote file inclusion zero-day vulnerability that allows anyone with local network access to specify their own video to display on the TV, overriding whatever is being shown, with no password necessary. As such, it’s more likely to be used by mischievous family members than hackers.

Mishra told The Register the issue is due to a complete lack of any authentication or session management in the software controlling the Wi-Fi-connected telly. By crafting a malicious HTTP GET request, and sending it to the set over the network, an attacker would be able to provide whatever video URL they desired to the target, and have the stream played on the TV without any sort of security check.

In practice, this bug would be exploited by someone on the local network, either because they already know the wireless password or because the network is unsecured, who would then send the request to the TV with a link to their own video. If the television is somehow facing the public internet, it could of course be commandeered from afar.
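The attack described above boils down to a single unauthenticated HTTP GET request. The sketch below shows the general shape of such a request; the port, path, and `openLiveURL` parameter name are assumptions for illustration, so check Mishra’s published proof-of-concept for the exact values.

```python
# Sketch of the kind of unauthenticated request described above.
# The endpoint path, port, and "openLiveURL" parameter name are
# assumptions for illustration -- they may not match the real PoC.
from urllib.parse import urlencode

def build_hijack_url(tv_ip, video_url, port=9080):
    """Build the GET request URL that would swap the TV's stream."""
    query = urlencode({"openLiveURL": video_url})
    return f"http://{tv_ip}:{port}/apps/SmartCenter?{query}"

# An attacker on the same Wi-Fi network would simply issue this GET
# request -- the set performs no password or session check:
#   requests.get(build_hijack_url("192.168.1.155",
#                                 "http://evil.example/fake.m3u8"))
```

The point is less the specific URL than the total absence of authentication: any device that can reach the TV over the network can issue the request.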

While this would usually just be a harmless prank, Mishra noted that a particularly malicious user could try to stir up panic by displaying a phony emergency alert.

“A legit user is watching some action movie and attackers trigger the remote file inclusion vulnerability at the same time, so the attacker would have full control over the TV and he can broadcast anything,” Mishra told El Reg.

“The attacker can broadcast any fake emergency message, or the worst case could be broadcasting a purge message.”

Mishra published a YouTube video demonstrating how the telly could be compromised.

Mishra said he tried to get in touch with Supra, which is listed as a Japanese business, but was unable to find any contact information. The Register was similarly unable to get hold of the manufacturer, so the flaw remains unpatched. The security researcher said he has not found any other brands to be vulnerable thus far.

Those of a certain age will no doubt be reminded of the infamous Max Headroom Incident. Back in 1987, hackers in the Chicago area of the United States were able to hijack the signal of two local television stations and, for a brief period of time, serve viewers a bizarre clip of an unknown person ranting in a rubber mask of the computer-generated Coke spokesdroid.

Owners of Supra TVs worried about experiencing their own Max Headroom moment would be well advised to make sure their Wi-Fi networks are secured and that only trusted users have local access. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/04/supra_cloud_tv_flaw/

Baltimore Ransomware Attacker Was Behind Now-Suspended Twitter Account

Researchers at Armor were able to confirm the person or persons behind a Twitter account that appeared to be leaking confidential files was the actual ransomware attacker that hit the city.

A now-suspended Twitter account that taunted and warned the mayor of Baltimore to pay the ransom for the city’s hijacked servers has been confirmed to be that of the actual attacker who launched the May 7 ransomware campaign on the city.

Researchers at security firm Armor who have been investigating the documents leaked via the now-defunct account — which was suspended by Twitter this afternoon after posting a tweet riddled with obscenities — earlier told Dark Reading they had suspected the account was run by the actual attacker. 

Now they say they can confirm it was, indeed, that of the attacker after he or she posted the attack panel interface used to communicate with the city in the wake of the attack, which locked down Baltimore’s servers with the so-called Robbinhood ransomware.

Eric Sifford, Armor security researcher, and Joe Stewart, an independent security researcher working on behalf of Armor said in a statement of their tying the attacker to the Twitter account: “We believe that when the Baltimore hacker posted, verbatim, the last two tweets from the Robbinhood Twitter profile into the ransomware panel (which is specific only to the city of Baltimore) that the attacker(s) had totally lost their patience and was fed up with anyone questioning their validity and capability to decrypt the city’s data.”

The attacker today via the Twitter account also warned that the city had until June 7 to pay the ransom of $17,600 in bitcoin per system — a total of about $76,280 — even though the original ransom note said the data would no longer be recoverable after 10 days.

The city had vowed not to pay the ransom, although Mayor Bernard C. Jack Young hinted last week that paying was not out of the question. The attack is estimated to have cost the city around $18.2 million, according to the city budget office.

Efforts to reach the mayor’s office have been unsuccessful. 

The Robbinhood attacker’s Twitter account first appeared on May 12, posting what it claimed was a screenshot of sensitive documents and user credentials from the city of Baltimore. The city still has not fully recovered from the ransomware attack, which disrupted everything from real estate transactions awaiting deeds to bill payments for residents and services such as email and telecommunications. Email remains down for most city operations.

Armor initially said the account could either have been the real attacker, a city employee, someone with access to the documents — or a hoax. 

The same ransomware recently hit the city of Greenville, N.C., as well as several power companies in India last month, according to Armor.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/threat-intelligence/baltimore-ransomware-attacker-was-behind-now-suspended-twitter-account-/d/d-id/1334860?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft Urges Businesses to Patch ‘BlueKeep’ Flaw

Fearing another worm of WannaCry severity, Microsoft warns vulnerable users to apply the software update for CVE-2019-0708.

Microsoft’s Security Response Team (MSRC) is warning organizations to patch BlueKeep (CVE-2019-0708), a critical remote code execution vulnerability it fixed earlier this month.

The flaw is in Remote Desktop Services (RDS), formerly known as Terminal Services, and affects some older versions of Windows. It’s pre-authentication and requires no user interaction; future malware that successfully exploits the bug could spread across vulnerable machines. Fearing this, Microsoft took the unusual step of issuing fixes for out-of-support systems Windows 2003 and XP, and still-supported Windows 7, Server 2008, and Server 2008 R2.

When it released the patch on May 14, Microsoft had not seen BlueKeep exploited in the wild but said it was “highly likely” cybercriminals would write an exploit and build it into malware.

Now, Microsoft is “confident” an exploit exists for this vulnerability, security officials said in a blog post published late last week. Nearly one million internet-connected machines remain vulnerable to CVE-2019-0708, they note, citing research from Errata Security.

“It only takes one vulnerable computer connected to the internet to provide a potential gateway into these corporate networks, where advanced malware could spread, infecting computers across the enterprise,” the MSRC post said. The scenario is even more dangerous for those who neglected to update internal systems, they continue, as future malware could try to exploit vulnerabilities that have already been patched.

At the time of publication, officials hadn’t seen signs of a worm – but this doesn’t mean we’re out of the woods, they said, pointing to the WannaCry timeline: Microsoft issued security fixes for a set of SMBv1 vulnerabilities on March 14, 2017. One month later, the Shadow Brokers publicly released a set of exploits, including a wormable one dubbed EternalBlue, leveraging the same SMBv1 vulnerabilities. Less than a month later, EternalBlue was used in WannaCry.

“Despite having nearly 60 days to patch their systems, many customers had not,” they wrote.

Researchers at Kenna Security have been monitoring for activity around CVE-2019-0708 since its patch was released and can confirm attempts are being made to reverse the patch and build an exploit. Like Microsoft, they believe there’s a high likelihood BlueKeep will be exploited.

“We have seen a number of public attempts to create reliable exploits, with work still ongoing,” says Jonathan Cran, Kenna Security’s head of research. “Given this activity, it’s reasonable to expect that there [is] an even larger number of folks working on it privately.”

Based on what researchers have seen, he says, most organizations are still running a “significant number” of vulnerable systems, particularly those still in support contracts: Windows 7 and Server 2008.

These systems are often configured with Remote Desktop Protocol (RDP), he says, which Microsoft says is not itself vulnerable but is part of the attack chain used to exploit RDS. Unauthenticated attackers could exploit BlueKeep by connecting to a target system via RDP and sending specially crafted requests; if successful, they could execute code on the target. Even if Windows 7 and Server 2008 machines are not exposed to the internet, they’re susceptible to exploitation via a multi-pronged attack, similar to what we’ve seen with NotPetya using EternalBlue, he explains.
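A first practical step for defenders is simply finding machines that still expose RDP. The sketch below is a minimal reachability check on TCP 3389, the default RDP port; it only inventories exposure and does not test for the vulnerability itself.

```python
# A minimal sketch of how a defender might inventory hosts that still
# expose RDP (TCP 3389) -- the first step in finding machines that may
# need the CVE-2019-0708 patch. This only checks reachability; it does
# not probe for the vulnerability.
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(hosts):
    """Return the subset of hosts with RDP reachable."""
    return [h for h in hosts if rdp_port_open(h)]
```

In practice a dedicated scanner (the Errata Security research cited above used one) is faster and more accurate, but even a crude sweep like this surfaces forgotten legacy systems.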

Could BlueKeep serve as a gateway to the next WannaCry? Cran points out that while one million is a significant number, there are fewer systems exposing RDP to the Internet compared with the number exposing SMB ahead of WannaCry. Further, RDP has long been considered “somewhat safe” to expose to the Internet, as there has never been a wormable vulnerability in the protocol.

“That’s now changed, and security teams must now deal with this new reality,” Cran continues. Following WannaCry and Petya/NotPetya, security awareness has improved among employees and consumers. While it’s unlikely BlueKeep could cause as much damage as quickly as the other attacks did, that doesn’t mean it won’t be seen in the wild.

Jérôme Segura, head of threat intelligence at Malwarebytes, predicts attackers will “waste no time” in weaponizing a proof of concept should it land in their hands. Comparing BlueKeep to WannaCry serves as a reminder of the costly consequences of a worm attack, he adds.

“Attacks leveraging BlueKeep could range from crashing computers for fun to loading malicious code onto them,” says Segura. “In either case, the impact on businesses that use legacy systems and aren’t able to patch could be costly.” Of course, one of the many challenges in patching is system visibility: Many organizations may not be aware their networks still run legacy systems.

Still, even if they know about legacy systems, patching is a challenge. Security teams typically apply security updates on a monthly or quarterly basis, says Cran. Something like BlueKeep forces them to consider an out-of-cycle process, driving the potential for downtime.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/threat-intelligence/microsoft-urges-businesses-to-patch-bluekeep-flaw/d/d-id/1334862?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Zebrocy APT Group Expands Malware Arsenal with New Backdoor Family

Group’s constant experimentation and malware changes are complicating efforts for defenders, Kaspersky Lab says.

Zebrocy, a Russian-speaking advanced persistent threat (APT) actor associated with numerous attacks on government, military, and foreign affairs-related targets since at least 2015, is back at it again.

Researchers from Kaspersky Lab say they have observed the group using a new downloader to deploy a recently developed backdoor family on organizations in multiple countries, including Germany, the United Kingdom, Iran, Ukraine, and Afghanistan.

The backdoor, written in the Nim programming language, is designed to profile systems, steal credentials, and help the attackers maintain persistence on a compromised computer over an extended period of time. As with its previous campaigns, Zebrocy is using spear-phishing emails to distribute the new malware to targeted organizations.

It is the latest addition to Zebrocy’s continually expanding malware set and demonstrates the group’s long-term commitment to gaining access to targeted networks, Kaspersky Lab said in a report Monday. “We will see more from Zebrocy into 2019 on government and military related organizations,” the security vendor noted.

Zebrocy and its eponymously named malware first surfaced in 2015. Kaspersky Lab and other security vendors have linked Zebrocy to Fancy Bear/APT 28/Sofacy, a Russian-speaking APT group associated with attacks on numerous organizations, including, most notoriously, the US Democratic National Committee in the run-up to the 2016 US presidential election.

Some security firms, such as ESET, have described Zebrocy as Fancy Bear’s attack toolset rather than a separate group. Earlier this month, ESET published a new report noting numerous improvements to the toolset that give attackers better control over compromised systems.

Kaspersky Lab itself considers the team using Zebrocy as a sort of separate subgroup that shares its lineage with Sofacy/Fancy Bear and the BlackEnergy/Sandworm APT group that is believed to be behind a series of disruptive attacks on Ukraine’s power grid in 2015.

“For our part, we have referred to Zebrocy as a subset of Sofacy for the past several years,” says Kurt Baumgartner, principal security researcher with Kaspersky Lab’s Global Research and Analysis Team. Initially, at least, the group behind Zebrocy shared limited infrastructure, targeting, and code with Fancy Bear and the BlackEnergy group. “However, research over time shows that this malware set, activity, and infrastructure is unique,” he says.

What makes Zebrocy somewhat different from other APTs is its tendency to build malware using a wide set of programming languages and technologies, including AutoIT, Delphi, C#, Go, PowerShell, and most recently Nim, Baumgartner notes.

Despite its links to two extremely sophisticated threat groups, Zebrocy has so far not exploited any zero-day security vulnerabilities in its campaigns. It also has shown a tendency to build its malware by copying and pasting legitimate and malicious code from sites such as Pastebin and GitHub. Often Zebrocy has recoded its malware for specific campaigns using bits and pieces of code obtained from external sources.

“It’s very unusual to observe an APT with lineage in two groups known for zero-day exploit use and highly agile technical capabilities experiment and recode their malware in so many languages,” Baumgartner says.

That fact, along with Zebrocy’s tendency to rip source code from public forums and code-sharing sites, suggests the group’s sophistication lies in experimentation, and that the selection of the group’s malware implementations is partially guided by machine learning algorithms, he notes.

“Network defenders already have a complicated enough time mitigating and clustering all the varieties of attacks thrown at them,” Baumgartner says. “Zebrocy’s constant experimenting and changing malware only compounds those problems.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/zebrocy-apt-group-expands-malware-arsenal-with-new-backdoor-family/d/d-id/1334863?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Majority of C-Level Executives Expect a Cyber Breach

Survey of executives in the US and UK shows that worries abound — about cyberattacks and the lack of resources to defend against them.

Nine out of ten business leaders in the US and UK say their organization lacks at least one critical resource necessary for defense against a cyberattack, and three-quarters of those leaders say they believe a cybersecurity breach is inevitable.

These findings, based on research conducted by Vanson Bourne and Osterman Research on behalf of security firm Nominet, show confusion about the role and responsibility of CISOs remains a factor in many organizations, with 35% of executives saying that the CEO, not the CISO, is responsible for responding to a data breach. That’s more than the 32% who say that cyber-breach response is the CISO’s job.

The report suggests that this discrepancy might be one of the reasons only half of CISOs feel valued by the executive team, with 18% saying that the executive team is indifferent (or worse) to the security team.

That indifference is having an impact on CISOs, with 27% reporting that the job has an impact on their physical or mental health and 23% saying that it has hit their personal relationships. All of this despite 76% of C-level executives saying that the CISO is a “must have” position.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/majority-of-c-level-executives-expect-a-cyber-breach/d/d-id/1334859?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

G Suite users will have ‘confidential’ Gmail mode set to ON by default

Google announced on Wednesday that on 25 June 2019, its Gmail confidential mode will be switched on by default as the feature becomes generally available.

The feature gives G Suite users who use Gmail the option to send emails with expiration dates or to revoke previously sent messages. It also prevents recipients from forwarding, copying, printing, or downloading messages. Since confidential mode will be switched on by default, admins will have to switch it off if they so choose – for example, if they’re in industries that face regulatory requirements to retain emails.

Google introduced confidential mode for personal Gmail accounts last year and made the beta available in March 2019.

The screenshot/photo caveats still apply

As with other ephemeral-messaging services, including Snapchat and ProtonMail, there’s nothing stopping recipients from doing a screen grab of a message or simply taking a photo of it.

And as we noted in April 2018, when Google first gave admins a heads-up about confidential mode, there’s a reason why the company called it “confidential” rather than “private.”

For one thing, an email sent in confidential mode isn’t encrypted end-to-end, unlike ProtonMail, the end-to-end encrypted, self-destructing email service.

Into the Vault with you

For another thing, confidential emails are going to live on Google’s servers.

As Google explains on its help center, its confidential mode works with Vault, a web-based Google storage spot where organizations can retain, hold, search, and export data to support their archiving and eDiscovery needs.

When somebody sends a message in confidential mode, Gmail strips out the message body and any attachments from the recipient’s copy of the message and replaces them with a link to the content. Gmail clients make the linked content appear as if it’s part of the message, while third-party mail clients display a link in place of the content.

Vault can hold, retain, search, and export all confidential mode messages sent by users in your domain, Google says. Vault has no visibility into confidential messages’ content when it comes to messages sent to your organization from external parties, though.

To support Vault’s requirement to access confidential mode messages, Gmail attaches a copy of the confidential mode content to the recipient’s message, Google says. There are a few things to be aware of when it comes to that copy, namely:

  • It’s attached only when the message sender and recipient are in the same organization.
  • It’s only available to Vault.
  • Senders and recipients cannot access the copy from Gmail.
  • Third-party mail archiving tools cannot access the copy.
  • To delete all copies of a confidential mode message, you must delete it from the sender account and all recipients’ accounts.
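The mechanics Google describes can be summed up in a toy model: the recipient gets a link in place of the body, and a Vault-readable copy is attached only when sender and recipient share a domain. All class and field names below are illustrative, not Gmail’s actual internals, and the link is a placeholder.

```python
# A toy model of the behavior described above. Everything here is an
# illustrative assumption -- these are not Gmail's real data structures.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeliveredMessage:
    # Gmail clients render the linked content inline; third-party
    # clients just show this link.
    link_to_content: str
    # Attached only for same-organization mail, readable only by Vault.
    vault_copy: Optional[str] = None

def deliver_confidential(body: str, sender: str, recipient: str) -> DeliveredMessage:
    """Strip the body from the recipient's copy, leaving a link; keep a
    Vault-only copy when both parties are in the same domain."""
    same_org = sender.split("@")[1] == recipient.split("@")[1]
    return DeliveredMessage(
        link_to_content="https://mail.google.com/linked-content/...",
        vault_copy=body if same_org else None,
    )
```

The model makes the compliance consequence obvious: mail arriving from external senders carries no Vault copy, so its content is invisible to your archive.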

How to use confidential mode

Confidential mode can be used on a desktop or through the mobile Gmail app.

Sending a confidential email

To switch it on:

  1. On your computer, go to Gmail, or on a mobile go to the Gmail app.
  2. Click Compose.
  3. In the bottom right of the window, click Turn on confidential mode.
    Tip: If you’ve already turned on confidential mode for an email, go to the bottom of the email, then click Edit.
  4. Set an expiration date and passcode. These settings impact both the message text and any attachments.
    • If you choose No SMS passcode, recipients using the Gmail app will be able to open it directly. Recipients who don’t use Gmail will get emailed a passcode.
    • If you choose SMS passcode, recipients will get a passcode by text message. Make sure you enter the recipient’s phone number, not your own.
  5. Click Save.

Revoke access to a sent email

You can also remove access early to stop a recipient from viewing the email before the expiration date. Here’s how:

  1. On your computer, open Gmail.
  2. On the left, click Sent.
  3. Open the confidential email.
  4. Click Remove access.

Receiving a confidential email

If you’re the recipient of an email sent in confidential mode:

  • You can view the message and attachments until it expires or the sender revokes access.
  • You can’t copy, paste, download, print, or forward the message or attachments.
  • You might need to enter a passcode to open the email.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/j2EUo0wJ38c/

Fake news writer: If people are stupid enough to believe this stuff…

In 2017, Facebook banned several fake news sites. One of them was the one that “Tamara” (not her real name) was writing for.

Poof! went her livelihood. Poof! went her boss’s Facebook Messenger account. When she finally got through to him, he sounded “shook up,” said Tamara, a Macedonian fake-news writer who recently described to the BBC what it’s like to manufacture mental sludge.

She didn’t hear from him again until last summer, when “Marco” – an awkward young man who seemed to be embarrassed about being younger than his employee – called to see if Tamara wanted to write for another website. She declined.

It’s not that she was overwhelmed with guilt over a job that consisted of copying and pasting obviously made-up stories from other sites after searching for strings such as “Muslim attacks,” then mashing up fact and fiction and searching Google for images to attach to the articles she published.

My take was that if people are stupid enough to believe these stories, maybe they deserve this. If they think this is the truth, then maybe they deserve this as a way of punishment.

And it’s not that she agreed with the content she was writing. Tamara says she’s a liberal, and she was “horrified” by the content she had to rewrite. She told the BBC that she basically turned off her brain and became a set of hands at a keyboard as she rewrote US articles to hide them from being flagged as plagiarized content.

I try to split myself and my own beliefs from the stuff I was writing. So I tried to stay as out of it as I can. I just saw it as writing words. I tried not to think about writing propaganda.

So why did she stop?

Tamara didn’t really say why she turned down Marco’s invitation to write for his new site, but it sounds like the splitting of her beliefs and the garbage she was posting became untenable.

I would usually shorten these kinds of articles and just skip the parts I don’t want to write. Or maybe put something that I want to be there.

For example, if they are attacking, let’s say, Muslims all the time, I would get so furious about all their attacking that I would cut all of the bulls**t and maybe put something nice at the end that the boss wouldn’t notice because he wouldn’t read all the articles all the time. It would ease the pain, my pain.

… something like ‘and at the end of the day, everybody is equal’. Something like this in the context of the article.

No puppetmaster, just profit

Besides obscuring the plagiarism, Tamara’s job was also to make the articles more compact and more likely to be shared.

It’s propaganda, Tamara said, but there’s no Russian puppetmaster pulling the strings on fake-news writers like her. There’s nobody in the Kremlin who’s telling them how to influence the US elections.

Rather, it’s all about the clicks. It’s all about the ad revenue. It doesn’t matter how preposterous the content is: what matters is that somebody – or many somebodies – open the articles and generate ad impressions.

The can-do fake-news town

This is a thriving industry in the Macedonian city of Veles.

In 2017, CNN published a report about the hundreds of producers of fake US news that have established a profitable fake-news industry based in the small city. CNN’s reporter talked to one producer who claimed to be making as much as $2,000 to $2,500 per day off of his fake news site, which had around a million Facebook likes at the time. To get around account shutdowns, the producer said, he’d buy Facebook accounts off of kids for two euros. That’s more than those kids ever had, he said.

Tamara told the BBC that her boss, Marco, ran two sites, which, combined, had more than 2 million Facebook followers.

Political meddling is only one slice of online influence

The 2016 US presidential election was a circus, but Tamara’s story shows that it wasn’t just because of Russia and its infamous troll factory.

That factory, which also goes by the name of the Internet Research Agency (IRA), created an army of trolls that flamed Hillary Clinton, waged a pro-Trump propaganda war, and worked to turn Americans against their own government during the election. The IRA also set loose left-wing trolls, leaving most Americans to presume that the goal wasn’t as clear-cut as simply being pro-Trump; rather, there were elements of flame-fanning on both sides of the political divide.

But as both the 2017 CNN report on the Veles fake-news industry and the BBC interview with Tamara make clear, the term “fake news” doesn’t come close to describing the nuances of this type of online influence. After all, Tamara didn’t create fake news: she created something far more pernicious. From the BBC:

Much of what she produced was misinformation based on real events, written in a way to provoke fear and anger among its readers. In the aggregate, the stories gave a false, skewed view of the world, playing to people’s prejudices.

And here’s Tamara:

That thing happened, the people were there, the place was there. So it was never [100% fabricated] fake stories. It was propaganda and brainwashing in the way of telling the story.

It would be so much simpler if the stories were complete rubbish, but they’re not. Not entirely. Is it any wonder that Facebook last year ditched the “fake news” flag, admitting that it was only making things worse? It was akin to waving a red flag in front of people, who went right ahead and zealously clicked and shared away.

So many types of fakery farms

When it comes to propaganda as a path to profit, so-called content farms do something similar by tracking search terms that are popular enough in the moment to be worth something, yet unusual enough to have no obvious high-ranking pages or videos.

Such farms hire people to write articles or even to film videos that are just about good enough to pass muster with the search engines as non-fake, but which are also ill-researched, rushed, inaccurate or simply made up. They get the content up, they get it visible, and in come the clicks and the profits.

The content enjoys a brief run as the top search result, meaning that people will click on it. There’s a chance to sell ads and make money on what is suddenly a top-ranked and thus implicitly trustworthy page – even though it’s inauthentic and possibly even downright dangerous drivel that was written not to be informative or correct but to be mistaken for something that has value.
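The term-selection game described above, high current search volume with little strong competition, can be sketched as a toy scoring function. The formula and threshold here are invented for illustration; real content farms presumably use messier signals.

```python
# A toy version of the term-selection heuristic described above: content
# farms hunt for queries that are popular right now but thinly covered.
# The scoring formula and threshold are invented for illustration.
def opportunity_score(search_volume: int, strong_results: int) -> float:
    """Higher when a term is popular but has few strong existing results."""
    return search_volume / (1 + strong_results)

def pick_targets(terms, threshold=100.0):
    """terms: {term: (search_volume, strong_results)}.
    Return worthwhile terms, best opportunity first."""
    return sorted(
        (t for t, (vol, comp) in terms.items()
         if opportunity_score(vol, comp) >= threshold),
        key=lambda t: -opportunity_score(*terms[t]),
    )
```

Feeding in queries like the ones mentioned below (“fix electrical outlet”, “emergency bicycle brake repair”) shows why practical how-to searches are such attractive targets: decent volume, almost no authoritative competition.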

You might argue that political fake news is worse because it might influence your overall opinion or your world view; but content farms are more similar than different, peddling apparently trustworthy fake information without any moral concern over how readers might use it.

You can imagine the harm that a credulous reader might experience from trusting an official and legitimate-looking piece that was dashed off in five minutes to match search terms such as “fix electrical outlet”, “refill camping gas bottle” or “emergency bicycle brake repair”.

Let’s not be as stupid as Tamara thinks we are

We’re always warning kids that mystifyingly lonely, purportedly famous people who reach out to strike up random romances with people they’ve stumbled across online have no credibility whatsoever.

Likewise, we’ve all learned that if we argue with election-related trolls spouting outrageous statements online, we’re possibly arguing with cardboard cutouts created by nation states such as Russia or Iran.

We should also bear in mind that in spite of Congress’s interest in stopping election meddling in 2020, the incendiary things we come across online might well be pushing our buttons for a more banal reason than to carry out an adversarial nation’s agenda.

In other words, sometimes it’s just about making money, no matter how politically inflammatory a given post may be.

Be careful out there, everybody. Let’s not swallow the sludge that Tamara used to create and thinks we deserve if we’re stupid enough to believe it.

Closing thoughts from Naked Security’s Paul Ducklin:

Whether it’s political propaganda that seduces you into believing made-up nonsense, or ad-grabbing ‘self help’ information that persuades you to trust advice that could put you and your family in serious danger, there’s no shortage of online sleazebags speedily churning out content that’s cunningly tuned so that your favorite search engines will see it, trust it and promote it.

Use search engines and ‘news’ feeds wisely: their artificial intelligence is good at finding stuff, but it’s your human intelligence that has to make the final call on whether to accept it. To rework an old aphorism: ‘Make sure your brain is in gear before opening your mind.’ Never forget that the primary goal of most search engines is to make money out of what you click, not to improve the world by making you smarter. That part you have to work on yourself.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LXa2VjPF_CA/

Going to Infosec Europe this week? Want a free T-shirt?

Are you making your way to Olympia, London for Infosec Europe this week?

If the answer is yes, make sure to come to stand F160 to say hello, and stick around for talks from Sophos.

You can also meet some of the Naked Security team – Duck, Anna, Alice, Matt and Ben will all be around and would love to meet you.

We’ll be presenting on a range of topics:

  • Fiendish malware – and how to unravel it – Paul Ducklin
  • It hurts when RDP – Matt Boddy and Ben Jones
  • An insight into Emotet malware – from humble banking malware to sophisticated botnet – Fraser Howard
  • Reinventing Cloud Security using AI and Automation – Richard Beckett
  • Adapt or Die – What animals can teach us about security – Jon Hope
  • LOL PWNED – A nerd’s reflections upon modern threats and cyberattacks – Greg Iddon

More from Duck

On Wednesday at 10.40am, our very own Paul Ducklin will be presenting at the Technology Showcase Theatre on Cryptography and malware: How crooks hide and how to spot them anyway. You can expect to learn about:

  • Cryptography and malware
  • Understanding your cyberenemy
  • How to spot so-called “hidden” attacks
  • Proactive techniques to keep the crooks out

What’s that about T-shirts?

Say “SECURITY IS THE BEST POLICY” to any Sophos staff member on the stand and we’ll give you a coveted Sophos T-shirt. (While stocks last, so be quick!)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OQQK12IdOwc/

New controversy erupts over Chrome ad blocking plans

Google Chrome extension developers were fuming last week over a new approach to the way the browser will handle extensions: it will limit the way that extensions can block content, unless you’re an enterprise user.

In November 2018, Google proposed an update to the Manifest system, which restricts what extensions can do in Chrome. In its forthcoming Manifest v3, it wants to change the way that browser extensions intercept and modify network requests from the browser.

The proposed change would limit the functions of a specific application programming interface (API). APIs define how a piece of software can be spoken to by other bits of software.

Today, extensions running on the Chromium browser use the webRequest API to intercept network requests. They can use it to analyze and block requests from online domains like advertising networks.

Chromium’s developers want to limit the blocking form of webRequest, instead allowing only a neutered version that simply observes network requests. If developers want to block a site, they’d need to use another API called declarativeNetRequest.

The move would improve both performance and user privacy, said Chromium’s developers. When using webRequest, Chrome gives each network request to the extension and waits for its decision. Under declarativeNetRequest, the extension tells Chromium its rules up front and lets the browser use those to handle the decisions itself.
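The shift between the two models can be sketched as follows. This is an illustrative sketch, not working extension code: the hostnames and rule IDs are invented, the matcher function is a toy stand-in for Chromium’s internal engine, and in a real extension the listener would be registered via chrome.webRequest.onBeforeRequest.addListener with the "blocking" option.

```javascript
// Imperative model (webRequest): the extension sees every request, and its
// return value decides the outcome. The blocked hostnames are illustrative.
const blockedHosts = ["ads.example.com", "tracker.example.net"];

function onBeforeRequest(details) {
  const host = new URL(details.url).hostname;
  return { cancel: blockedHosts.includes(host) }; // {cancel: true} blocks it
}

// Declarative model (declarativeNetRequest): the extension declares a rule
// list up front and never sees the requests themselves.
const rules = [
  {
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: { urlFilter: "||ads.example.com^", resourceTypes: ["script"] },
  },
];

// Toy stand-in for the browser's rule matcher, just to show where the
// decision moves; real matching happens inside Chromium, out of the
// extension's sight.
function browserWouldBlock(url) {
  const host = new URL(url).hostname;
  return rules.some(
    (r) =>
      r.action.type === "block" &&
      host === r.condition.urlFilter.replace(/^\|\||\^$/g, "")
  );
}
```

The key difference for ad blockers: in the first model the extension’s own code runs on every request, while in the second the browser enforces a fixed, size-limited rule list on the extension’s behalf.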

Google hinted at the changes as far back as 1 October 2018, when it announced Manifest v3 in a blog post, promising:

More narrowly-scoped and declarative APIs, to decrease the need for overly-broad access and enable more performant implementation by the browser, while preserving important functionality

Nevertheless, several developers aren’t happy, and they have been voicing concerns since at least January 2019. One of them is Raymond Hill, developer of the ad-blocking extension uBlock Origin. He complained that declarativeNetRequest caps extensions at 30,000 rules, which isn’t nearly enough to support the filter lists his products use. In a complaint posted to the Chromium bugs mailing list, Hill added:

If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin (“uBO”) and uMatrix, can no longer exist.

Ad blocking isn’t the only activity at risk, according to extension developers. Claudio Guarnieri, senior technologist at Amnesty International, is concerned:

We, at Amnesty International, have also been working on browser extension to support at-risk communities in protecting from targeted attacks.

Another extension developer, Kristof Kovacs, added that the move would effectively kill the product he is working on in Google’s browser:

… We are the developer of a child-protection add-on, which strives to make the internet safer for minors. This change would cripple our efforts on Chrome.

Then Cliqz, creator of privacy extension Ghostery, analyzed the performance impact of the most popular ad blockers and found that the extensions were fast and efficient. The study showed that:

The manifest v3 performance claim does not hold based on our measurements.

Shortly afterwards, in a post on 15 February 2019, Google software engineer Devlin Cronin said:

It is not, nor has it ever been, our goal to prevent or break content blocking.

He also announced that webRequest wouldn’t be entirely removed, and committed to increasing the ruleset limits in declarativeNetRequest.

This didn’t appease developers much. For example, the Electronic Frontier Foundation (which makes the Privacy Badger Chrome extension) published a document in April asking Google to reconsider messing with the blocking capabilities of the webRequest API.

Simeon Vincent, developer advocate for Chrome Extensions at Google, tried to clear things up a bit with a post about the issue on Google Groups in late May, but caused a stir with this sentence:

Chrome is deprecating the blocking capabilities of the webRequest API in Manifest V3, not the entire webRequest API (though blocking will still be available to enterprise deployments).

So the current proposal limits blocking capabilities unless you’re an organization that uses an enterprise version of Chrome.

Developers were incensed. One said:

It looks like they don’t have the guts to force this unjustified, user-hostile change on enterprises.

Others criticized the company for not addressing the Cliqz performance study.

Raymond Hill posted an angry retort on Github last week, accusing Google of ceding control to ad blockers only for as long as it took to build market share:

The deprecation of the blocking ability of the webRequest API is to gain back this control, and to further now instrument and report how web pages are filtered since now the exact filters which are applied to web page is information which will be collectable by Google Chrome.

He also pointed to Google’s 10-K filing which states that ad blocking technologies could affect its profits.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/yESGBGlUvyA/

Your phone’s sensors could be used as a cookie you can’t delete

Researchers keep finding new ways that advertisers can track users across websites and apps by ‘fingerprinting’ the unique characteristics of their devices.

Some of these identifiers are well known, including phone and IMEI numbers, or the Wi-Fi and Bluetooth MAC addresses, which is why access to this data is controlled using permissions.

But iOS and Android devices have a lot of other hardware that could, in theory, be used to achieve the same end.

In the study SensorID: Sensor Calibration Fingerprinting for Smartphones, Cambridge University researchers give some insight into the latest candidate – sensor calibration fingerprinting.

If sensors don’t sound like a big deal, remember that today’s smartphones are stuffed with them in the form of accelerometers, magnetometers, gyroscopes, GPS, cameras, microphones, ambient light sensors, barometers, proximity sensors, and many others.

Researchers have spent some time investigating whether these sensors could be used to identify devices with machine-learning algorithms, without much success, but the Cambridge researchers finally cracked the problem with a novel proof of concept for iOS devices that use M-series motion co-processors.

And there’s a good reason why sensors represent an attractive target, say the researchers:

Access to these sensors does not require any special permissions, and the data can be accessed via both a native app installed on a device and also by JavaScript when visiting a website on an iOS and Android device.

In other words, unlike traditional fingerprinting, nobody is going to stop them, ask for permission to do what they’re doing, or even notice it’s happening at all, rendering the whole exercise invisible.

For advertisers, that’s the perfect form of device fingerprinting – one nobody notices.
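As a rough illustration, a web page could collect gyroscope readings with nothing more than an event listener. The synthetic event and sample collector below are our own invention; the rotationRate fields follow the standard DeviceMotionEvent interface.

```javascript
// A page script can harvest motion-sensor readings with no permission prompt
// (the pre-fix behaviour described in the paper).
const samples = [];

function recordMotion(event) {
  // rotationRate carries the gyroscope output in degrees per second
  const r = event.rotationRate;
  if (r) samples.push([r.alpha, r.beta, r.gamma]);
}

// In a browser this would be wired up with:
//   window.addEventListener("devicemotion", recordMotion);
// Here we feed it a synthetic event just to show the shape of the data:
recordMotion({ rotationRate: { alpha: 0.12, beta: -0.03, gamma: 0.07 } });
```

No API in that flow asks the user for anything, which is exactly the problem the researchers highlight.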

Calibration fingerprinting attack

It turns out that MEMS (Micro-Electro-Mechanical Systems) sensors are inaccurate in tiny ways that can be used to identify one from the other:

Natural variation during the manufacture of embedded sensors means that the output of each sensor is unique and therefore they may be exploited to create a device fingerprint.

For high-end devices (all Apple devices and Google’s Pixel 2 and 3 smartphones), manufacturers try to compensate for this using a calibration process applied to each device.

This means an attacker can recover the identifying inaccuracy by inferring, from the sensor’s output, the level of compensation applied during this process.
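A toy simulation, with invented numbers, shows why a tiny per-device bias works as a fingerprint: averaging enough noisy readings converges on the device’s unique manufacturing bias, which is stable across sessions.

```javascript
// Toy simulation: each "device's" gyroscope carries a small fixed bias left
// over from manufacturing. Readings are that bias plus random noise.
function readings(bias, noiseAmplitude, count) {
  const out = [];
  for (let i = 0; i < count; i++) {
    out.push(bias + (Math.random() - 0.5) * noiseAmplitude);
  }
  return out;
}

// Averaging many readings from a stationary device recovers its bias.
function estimateBias(samples) {
  return samples.reduce((sum, s) => sum + s, 0) / samples.length;
}

// Two devices with different biases yield distinguishable, repeatable
// estimates even though individual readings are dominated by noise:
const deviceA = estimateBias(readings(0.0173, 0.01, 10000));
const deviceB = estimateBias(readings(-0.0041, 0.01, 10000));
```

The real attack is more involved (it exploits how the calibration compensation quantises the output), but the principle is the same: a small, device-unique constant hiding inside supposedly generic sensor data.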

The good news is that when the researchers reported their findings to Apple last August, the company fixed the flaw, tracked as CVE-2019-8541, in iOS 12.2, released in March 2019.

Apple adopted the researchers’ suggestion of adding random noise to the analogue-to-digital converter output and removing default access to motion sensors in Safari.

That’s just as well because, ironically, Apple iOS devices are far more susceptible to calibration fingerprinting than the bulk of Android devices, where the complicated calibration stage is often skipped for cost reasons.

However, higher-end Android devices that use calibration could still be affected. Google was told about the issue in December 2018 but, unlike Apple, has yet to address it with a fix.

Research like this serves to emphasise a familiar theme. If advertisers and websites want to track devices, they have plenty to aim at.

It seems highly unlikely that any would jump through the complex hoops needed to crunch sensor data when there are many simpler ways of achieving the same end.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lW3EenUad2Y/