
Ironpie robot vacuum can suck up your privacy

According to its maker, Trifo, the Ironpie home surveillance robot vacuum isn’t just your dust bunnies’ worst nightmare. It also chases intruders away with its “advanced vision system”.

I am always alert and never sleep on the job.

Funny thing about that always-alert claim. It’s true: the artificial intelligence (AI)-enhanced internet of things (IoT) robot vacuum can indeed be connected to the internet via Wi-Fi, can be controlled remotely for vacuuming, and can remotely stream out video showing its surroundings, given that – like other IoT gadgets – it comes equipped with a video camera.

But it’s also similar to other IoT devices in having dusty, fusty security, as Checkmarx discovered when its researchers decided to poke at the Ironpie’s security and privacy. Due to multiple security issues – a nonstandard update procedure that doesn’t go through the Google Play Store, insecure encryption in the Android app, and a lack of proper authentication on the server end – that “always alert” video stream is also always up for grabs to remote attackers who know how to take a few simple steps.

Checkmarx detailed the vulnerabilities and their repercussions at the RSA Conference in San Francisco last week. Its report is also available on its blog and in a YouTube video.

In the worst of the robot vacuum’s flaws, Checkmarx found that a remote attacker can intercept users’ video streams by accessing Trifo’s servers. Another flaw would let hackers send a fake software update to the vacuum’s app, tricking users into downloading malware.

Checkmarx found that a remote attacker could get at information via MQTT – a machine-to-machine (M2M), IoT connectivity protocol that bridges Trifo’s vacuum, its backend servers, and Trifo’s Home app.

Via MQTT, attackers could get at data such as the SSID of the Wi-Fi network to which the vacuum is connected, plus the vacuum’s IP address, MAC address and more. With those pieces of data, an attacker can obtain a key that grants remote access to the video feed of every connected, working Ironpie vacuum, regardless of where it is.
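Checkmarx withheld the exploit specifics, but the general shape of the problem is easy to illustrate: if an MQTT broker accepts connections without authentication, anyone on the internet can subscribe to its topics with a few lines of code. Here is a minimal sketch using the open-source paho-mqtt library and an entirely hypothetical broker address – Trifo’s real endpoint and topic names are not public:

```python
# pip install paho-mqtt
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # '#' is the MQTT wildcard: subscribe to every topic the broker allows
    client.subscribe("#")

def on_message(client, userdata, msg):
    # Print whatever telemetry the broker is handing out
    print(msg.topic, msg.payload[:80])

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883, 60)  # hypothetical broker
client.loop_forever()
```

A broker that demands per-device credentials over TLS would stop this cold; accepting anonymous subscribers is what turns a telemetry channel into a surveillance feed.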

That means, for example, that attackers could remotely access an Ironpie robot vacuum in a corporate office, record confidential conversations, and use its AI capabilities to make a map of the room, Checkmarx says. Using the same simple techniques, an attacker could also easily invade the privacy of a homeowner by spying on a room, its inhabitants and their conversations.

Checkmarx says its researchers reached out to Trifo with details about the vulnerabilities on 16 December 2019, but as of Wednesday, Trifo hadn’t responded. Given that the vulnerabilities still exist, Checkmarx held back details that could lead to exploits.

What to do

Erez Yalon, who contributed to the research, told CNET that users can protect their privacy by disconnecting their Ironpies from Wi-Fi, which will keep the app from working. Another option is to cover up the camera, Yalon said.

There’s no end to the stories about webcams being so-not-secured with default passwords, or falling prey to hijacking because their owners reuse passwords… which then get stolen in breaches and subsequently used in credential-stuffing attacks. That, in fact, is what recently motivated first Google, with its Nest gadgets, and then Amazon, with its Ring video doorbells, to force users to use two-factor authentication (2FA).

If a website gives you the option to turn on 2FA, take them up on it. It’s one way to help keep robot vacuums hard at work sucking up crumbs, not your privacy. As more and more IoT devices lacking good security come into people’s homes, we need all the help we can get to stop those nifty gadgets turning into little hacked prowlers.
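Most authenticator apps implement the same open algorithm, TOTP (RFC 6238): a shared secret plus the current time yields a short code that changes every 30 seconds, so a stolen password alone is no longer enough. A minimal sketch using only Python’s standard library – the Base32 secret here is the well-known demo value, not a real one:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # time step number
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints a fresh 6-digit code
```

Because the code depends on the current 30-second window, a credential stolen in a breach goes stale almost immediately – which is exactly why 2FA blunts credential stuffing.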


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/kGtvD8tF-C4/

Let’s Encrypt issues one billionth free certificate

Last week was a big one for non-profit digital certificate project Let’s Encrypt – it issued its billionth certificate. It’s a symbolic milestone that shows how important this free certificate service has become to web users.

Publicly announced in November 2014, Let’s Encrypt offers TLS certificates for free. These certificates are integral to the encryption used by HTTPS websites.

HTTPS is HTTP that uses the Transport Layer Security (TLS) protocol for privacy and authentication. Your browser uses it to be confident that you’re not visiting an evil website that’s impersonating your real destination using a DNS spoofing attack. It also encrypts the information passing between your browser and the web server so that someone who can snoop on your traffic still can’t tell what you’re doing.
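Both properties are visible from a few lines of code: Python’s default TLS context refuses to complete the handshake unless the server presents a certificate that chains to a trusted authority and matches the hostname. A small sketch, with example.com standing in for any HTTPS site:

```python
import socket, ssl

HOST = "example.com"  # placeholder host

# The default context verifies the certificate chain and the hostname;
# a mismatch or untrusted cert raises ssl.SSLCertVerificationError.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.version())                    # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])     # who the CA says this is
```

If an impostor answers instead of the real site, the handshake fails before any application data is sent – that’s the authentication half; the encrypted channel is the privacy half.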

Netscape created HTTPS in 1994, but by 2014 only a minority of websites used it. That’s because it could be technically difficult to implement, was time-consuming and cost money. There was too much friction. That’s what Let’s Encrypt set out to change.

The project is a non-profit effort from the Internet Security Research Group (ISRG), an organisation sponsored by a mixture of privacy advocates and those who benefit from making the online ecosystem healthier. The Electronic Frontier Foundation (EFF) is a sponsor, along with Cisco, Facebook, Google, the Internet Society (which houses the Internet Engineering Task Force or IETF), Mozilla, and French cloud service provider OVH.

The project issues free certificates, keeping them valid for 90 days before forcing people to renew. It isn’t just the free nature of these certificates that has helped them flood the internet. The other key to the puzzle is automation. Let’s Encrypt created a protocol called Automated Certificate Management Environment (ACME). This is a challenge-response system that automates enrolment with the certificate authority and validation of the domain.
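The HTTP-01 variant of the challenge works roughly like this: the CA hands the client a random token, and the client proves it controls the domain by serving a matching key authorisation at a well-known URL, which the CA then fetches. Real ACME clients such as certbot automate every step; the sketch below only illustrates the responder side, with made-up token values:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token/key-authorisation pair; real values are issued by the
# CA during an ACME session and would come from your ACME client.
CHALLENGES = {"example-token": "example-token.example-key-thumbprint"}

class AcmeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        prefix = "/.well-known/acme-challenge/"
        token = self.path[len(prefix):] if self.path.startswith(prefix) else None
        if token in CHALLENGES:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(CHALLENGES[token].encode())
        else:
            self.send_error(404)

# HTTP-01 validation happens on port 80, which needs elevated privileges.
HTTPServer(("", 80), AcmeHandler).serve_forever()
```

Once the CA fetches the token and sees the expected response, it knows the requester controls the domain and issues the certificate – no forms, no payment, no human in the loop.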

Version two of ACME became a proposed internet standard in May 2019 (did we mention that the IETF’s parent organization is a sponsor?) giving it more credence still. There are various ACME clients, and some have been baked directly into default Linux server distributions, enabling Apache and nginx web servers to run automatic scripts to handle the whole process.

Let’s Encrypt’s approach isn’t perfect. For one thing, it only offers domain validation that checks a person is in control of a domain, rather than extended validation certificates that go the extra mile to validate the legal name of the owner. This has led to some problems, such as Let’s Encrypt’s automatic validation of PayPal phishing sites.

This isn’t a mistake – it’s simply that the organization’s goal is to encrypt as many websites as possible rather than investigate their content, which it prefers to leave to others like Google. Eagle-eyed readers of today’s other stories will spot that the certificate issued for the Stripe phishing scam domain was also from Let’s Encrypt.

Thanks to this flood of free certificates, the web is a lot more encrypted than it was a few years ago. In June 2017, 58% of webpage loads were delivered over HTTPS, the project stated, adding that the number has grown to 81% today. That’s due in large part to free and automated certificate provisioning, but also to a firmer hand by web browser developers. Mozilla now shames any web pages that don’t use HTTPS, while Google removes the ‘secure’ label for HTTP-only sites and gives them a lower search ranking than HTTPS ones.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/m_p5aVh50cI/

Siri and Google Assistant hacked in new ultrasonic attack

Unsettling news for anyone who relies on smartphone voice assistants: researchers have demonstrated how these can be secretly activated to make phone calls, take photos, and even read back text messages without ever physically touching the device.

Dubbed SurfingAttack by a US-Chinese university team, this is no parlor trick and is based on the ability to remotely control voice assistants using inaudible ultrasonic waves.

Voice assistants – the demo targeted Siri, Google Assistant, and Bixby – are designed to respond when they detect a trigger phrase, such as ‘OK Google’, spoken in the owner’s voice.

Ultimately, commands are just sound waves, and other researchers have already shown that these can be emulated using ultrasonic waves that humans can’t hear – provided the attacker has line of sight to the device and the distance is short.
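The underlying signal processing is ordinary amplitude modulation: the audible command becomes the envelope of an ultrasonic carrier, and non-linearity in the phone’s microphone hardware demodulates it back into the audible band. A purely illustrative numpy sketch, using a synthetic tone as a stand-in for a voice command – this is not the researchers’ code, the parameter values are assumptions, and reproducing the attack needs specialised transducer hardware:

```python
import numpy as np
from scipy.io import wavfile

FS = 192_000          # sample rate high enough to represent an ultrasonic carrier
CARRIER_HZ = 25_000   # above human hearing; mic non-linearity demodulates it

t = np.arange(FS * 2) / FS                  # two seconds of samples
baseband = np.sin(2 * np.pi * 1_000 * t)    # synthetic stand-in for a command
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)

# Classic AM: the command rides as the envelope of the inaudible carrier.
am = (1 + 0.8 * baseband) * carrier
wavfile.write("ultrasonic_am.wav", FS, (am / am.max() * 32767).astype(np.int16))
```

Ordinary speakers can’t reproduce a 25kHz carrier faithfully, which is why the researchers needed a piezoelectric transducer – and why the tabletop delivery described next matters.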

What SurfingAttack adds is the ability to send the ultrasonic commands through the solid glass or wood table on which the smartphone is sitting, using a circular piezoelectric disc attached to the table’s underside.

Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.

As explained in a video showcasing the method, a remote laptop generates voice commands using a text-to-speech (TTS) module, then transmits them to the disc over Wi-Fi or Bluetooth.

The researchers tested the method on 17 different smartphone models from Apple, Google, Samsung, Motorola, Xiaomi, and Huawei, successfully deploying SurfingAttack against 15 of them.

The researchers were able to activate the voice assistants, commanding them to unlock devices, take repeated selfies, make fraudulent calls and even get the phone to read out a user’s text messages, including SMS verification codes.

Responses were recorded using a concealed microphone after the device’s volume had been turned down, so that the exchange wouldn’t be heard by a nearby user in an office setting.

DolphinAttack rides again

In theory, voice assistants should respond only to the owner’s voice, but voices can now be cloned using machine learning software such as Lyrebird, as was the case in this test. Voice matching is a defence of sorts – the attacker first needs to capture and clone the victim’s voice.

A bigger obstacle might simply be the design of individual smartphones – the team believes the two that did not succumb to SurfingAttack, Huawei’s Mate 9 and Samsung’s Galaxy Note 10, resisted because the materials they are built from dampened the ultrasonic waves. According to the researchers, putting the smartphone on a tablecloth was better still.

SurfingAttack was inspired by the 2017 DolphinAttack proof-of-concept, which showed how voice assistants could be hijacked by ultrasonic commands.

Elsewhere, sound has also proved interesting to researchers looking to jump air gaps and exfiltrate data via computer fan noise.

While hacking voice assistants remains a lab activity with no known real-world attacks to speak of, there’s always a risk that this could change. At some point, smartphone makers will surely have to come up with better countermeasures.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/d4GKhM4slSc/

New Trickbot Delivery Method Focuses on Windows 10

Researchers discover attackers abusing the latest version of the remote desktop ActiveX control class introduced for Windows 10.

Researchers have identified the use of Windows 10 functionality to automatically execute the OSTAP JavaScript downloader on victim machines. In their investigation, they found other attack groups abusing the same control, and earlier controls, with a slightly different technique.

The functionality being exploited is the latest version of the remote desktop ActiveX control class introduced for Windows 10, Morphisec Labs analysts explain in a blog post. Over the past few weeks, they have identified “a couple dozen documents” that execute the OSTAP JavaScript downloader.

Attackers use the ActiveX control to automatically execute a malicious macro after a victim enables content in a document. Most of the documents contain an image designed to convince people to enable that content; doing so executes the malicious macro, and the image also conceals an ActiveX control beneath it. The OSTAP downloader itself is hidden in white-on-white text, invisible to people but readable by machines. Researchers report this technique works only on Windows 10 devices.
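Because the macro only fires after a user enables content, defenders get a chance to triage suspect documents before that happens. As one hedged, defensive sketch: the open-source oletools library can extract any VBA macros from an Office file for inspection (the filename below is hypothetical):

```python
# pip install oletools
from oletools.olevba import VBA_Parser

def triage(path: str) -> None:
    """Print any VBA macros found in an Office document for manual review."""
    vp = VBA_Parser(path)
    if vp.detect_vba_macros():
        for _, _, macro_name, macro_code in vp.extract_macros():
            print(f"--- macro: {macro_name} ---")
            print(macro_code[:400])  # first few hundred chars is enough to triage
    else:
        print("no VBA macros found")
    vp.close()

triage("suspicious.docm")  # hypothetical sample
```

A document that pairs a macro with an embedded ActiveX control and a “please enable content” lure image is a strong candidate for quarantine.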

“As newer features are introduced to a constantly updating OS, so too the detection vendors need to update their techniques to protect the system,” according to the blog post. “This often creates very exhaustive and time-consuming work, which in turn can lead to the opposite effect of pushing defenders even farther behind the attacker.” Trickbot attackers are taking advantage of this.



Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/new-trickbot-delivery-method-focuses-on-windows-10/d/d-id/1337207?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple


Reducing Risk with Data Minimization

Putting your company on a data diet that reduces the amount of the sensitive data you store or use is a smart way to achieve compliance with GDPR and CCPA.

As we continue to navigate an ever-changing regulatory landscape for privacy issues, organizations need to start thinking about privacy-related policies, procedures, and oversight as a must-have, not a nice-to-have.

A solid first step in reducing the risk your sensitive data presents is to minimize the amount of sensitive data being stored and used. Data minimization is the concept that only sensitive data necessary for business is kept — all other sensitive information (irrelevant or outdated) is securely disposed of. Minimizing the amount of sensitive data reduces an organization’s risk of improper disclosure and reduces the cost of storage. Data minimization is a requirement of General Data Protection Regulation (GDPR) Article 5(1)(c) and a recurring theme throughout the California Consumer Privacy Act (CCPA), which requires disclosure of what information is being held and how it is being used.

One popular strategy we’ve observed is for companies to go on a “data diet.” Equating data minimization to going on a diet gives employees a reference they can relate to. When on a diet, food is viewed through a “need” lens: Do I need this food? Will this food provide what I need?

When working to minimize sensitive data, the question is: Do we need this sensitive data to conduct business right now? Will we need this sensitive data to conduct business in the future?

If the answer is no, the sensitive data should be securely disposed of.

If the answer to either question is yes, the next step is to reduce the visibility of the sensitive data. Every time sensitive information is viewed, additional risk is generated. Using sensitive information in a way that allows the business to function but shows the sensitive information to the least number of people helps to manage risk. Both GDPR and CCPA have stringent guidelines on breach notification and the security of sensitive data. A logical step in securing sensitive data and reducing the risk of improper disclosure (breach) is to reduce the number of individuals who can view the data.

One way to accomplish this is through data masking. Data masking allows an organization to use an effective substitute (maintaining structural integrity) for the actual data when the real data is not required. Data masking is generally categorized as static, dynamic, or on-the-fly.
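All three variants, described below, rely on the same core operation: substituting characters while preserving the data’s length and format, so downstream systems that validate formats still work. A minimal illustrative sketch — the masking rules here are an assumption for demonstration, not a standard:

```python
def mask_preserving_structure(value: str, visible_suffix: int = 4) -> str:
    """Replace alphanumerics with '*' but keep punctuation, length, and the
    last few characters, so format validators downstream still pass."""
    keep_from = len(value) - visible_suffix
    return "".join(
        ch if not ch.isalnum() or i >= keep_from else "*"
        for i, ch in enumerate(value)
    )

print(mask_preserving_structure("4111-1111-1111-1234"))  # ****-****-****-1234
print(mask_preserving_structure("123-45-6789"))          # ***-**-6789
```

Where in the pipeline this substitution happens — at rest in a copy, at query time, or in application memory — is what distinguishes the three categories.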

Static Data Masking: Data is masked in the original database and then copied to another database, such as a test environment. Only users with access to the original database can see the actual data. For example, as Diana tests a new software application, she is using a database copy that had the sensitive data replaced (sensitive data altered at rest in the copied database). This provides Diana with high-quality, representative data without disclosing sensitive information.

Dynamic Data Masking: Data is masked in real time, so query results never show the actual data to unauthorized users of the original database. Because of the difficulty of preventing masked data from being written back to the database, dynamic data masking is best for read-only situations. Only authorized users of the original database can see the actual data. For example, Mark, a customer service representative, submits a SQL query to a database containing sensitive information that Mark does not need to do his job. The database proxy identifies Mark and modifies the SQL query before it is applied to the database, so that masked data is returned to Mark.
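The proxy behaviour in Mark’s example can be sketched in a few lines — a toy illustration of the idea, not a real database proxy, with hypothetical column names:

```python
SENSITIVE_COLUMNS = {"ssn", "card_number"}  # assumed schema for illustration

def mask(value: str) -> str:
    # Same idea as the masking sketch above: keep shape, hide content.
    return "".join("*" if c.isalnum() else c for c in value[:-4]) + value[-4:]

def fetch_row(row: dict, role: str) -> dict:
    """Return the row untouched for authorized users, masked for everyone else.
    The stored data is never modified; only the result on the way out is."""
    if role == "authorized":
        return row
    return {k: mask(v) if k in SENSITIVE_COLUMNS else v for k, v in row.items()}

row = {"name": "Acme Corp contact", "ssn": "123-45-6789"}
print(fetch_row(row, role="support"))     # {'name': ..., 'ssn': '***-**-6789'}
print(fetch_row(row, role="authorized"))  # original values
```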

On-the-Fly Data Masking: Data is masked in real time, similar to dynamic data masking, except that masking occurs in the memory of a database application instead of in the database. Thus, the sensitive information is never in the application. Only users with authorized access can see the actual data. For example, Whitney is using an audit application to conduct an internal audit in HR. The audit application has access to the needed database, but the script is written so that any sensitive information requested by the application is masked before delivery.

A similar avenue to data masking is data scrambling, where sensitive data is obscured or removed. This is a permanent process: the original data cannot be derived from the scrambled data, so scrambling is only suitable when the data is being duplicated — into a test environment, for example. Unlike data masking, scrambled data does not always retain structural integrity.

By taking steps to minimize the sensitive data that is stored or used, and by limiting visibility of that data to as few individuals and systems as possible, your organization is well on its way to reducing risk and achieving compliance with privacy regulations.


Bethany is a Senior Information Technology Auditor out of The Mako Group’s Fort Wayne, Ind., office, where she brings a solid background in auditing and consulting on risk and controls. In her years of audit experience, she has worked with a Fortune 500 publicly traded …

Article source: https://www.darkreading.com/reducing-risk-with-data-minimization-/a/d-id/1337113?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Exploitation, Phishing Top Worries for Mobile Users

Reports find that mobile malware appears on the decline, but the exploitation of vulnerabilities along with phishing has led to a rise in compromises, experts say.

RSA Conference — San Francisco — Mobile malware appears to be declining as a favored tactic of cybercriminals, but the mobile ecosystem is far from risk-free as phishing and vulnerability exploitation become more significant threats, security experts said this week at the RSA Conference.

In 2019, the worldwide mobile ecosystem continued to expand, growing by 8.9 million new apps, or 18%, while the number of malicious apps declined, especially on the premium app stores run by Apple and Google, according to the “2019 Mobile App Threat Landscape Report,” published by RiskIQ. At the same time, companies saw mobile- and Internet of Things-related compromises grow, with 39% of firms suffering such a security incident, up from 33% in 2018, according to Verizon’s “Mobile Security Index 2020.”

The current threat landscape is best exemplified by the vulnerabilities in the WhatsApp chat application last year, says Michael Covington, vice president of product at Wandera, a provider of mobile cloud security. In April and May, nation-state attackers used serious vulnerabilities, including a remote exploit for a vulnerability in the video player on WhatsApp, to compromise targeted users.

“These are apps that have already gone through the app store vetting process, and they are installed on the device,” Covington says. “And when a vulnerability comes out, many companies cannot do anything, because they have no visibility into what apps are on their employees’ devices.”

The two trends — less mobile malware, but more mobile-related compromises — highlight that attackers are finding ways to compromise devices that do not rely on convincing a user to download malicious software.

The impact of the attackers’ tactics is significant. In 2019, two-thirds of companies suffering a breach from mobile malware considered the impact significant, while more than a third also considered the effects of the breach to be lasting, according to Verizon’s report. The majority of companies suffered downtime or loss of data in a breach, but many also found that other devices were compromised following a mobile breach and they had to deal with reputational damage and regulatory fines.

“When most people think of cybersecurity compromises, it’s the loss or exposure of data that springs to mind,” Verizon stated in its report. “But it’s much more than a company’s sensitive information that’s at risk. A mobile security compromise can have a range of other consequences, including downtime, supply chain delays, lost business, damage to reputation, and regulatory fines.”

The major mobile app stores have forced attackers to change, with the brand-name stores seeing fewer malicious apps submitted to their vetting process, according to threat intelligence firm RiskIQ’s report. The number of blacklisted mobile apps fell by 20% overall in 2019, while the Google Play store blacklisted fewer than a quarter of the apps it blacklisted in 2018, the company found. Rather than an indication that app stores are easing up on security, RiskIQ argues that the ecosystem is doing a better job of keeping malware developers from publishing apps to the stores at all.

In addition, malicious apps in app stores often remain easy to spot, says Jordan Herman, a threat researcher at RiskIQ.

“One potential giveaway is excessive permissions, where an app requests permissions that go beyond those required for its stated functionality,” he says. “Another is a suspicious developer name, especially if it does not match the developer name associated with other apps from the same organization. User reviews and number of downloads, where present, also help to give some level of reassurance that the app is legitimate.”

Because of the shift in attackers’ tactics, companies need to worry about more than just mobile malware. In August, Google revealed that at least five exploit chains for iOS — attacks strung together to gain access to a device — were found on websites in the wild. The attacks could compromise many models of iPhone and iPad.

“[S]imply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” Ian Beer, a researcher with Google’s Project Zero, stated in an analysis of the attacks. “We estimate that these sites receive thousands of visitors per week.”

In many cases, even the legitimate functionality of legitimate apps can pose a risk for businesses, says Wandera’s Covington.

“It is not just malware that defines a malicious app for them,” he says. “Other behavior is considered risk for many companies. Manufacturing firms don’t want apps that can use the camera, for example.”

Companies should learn to improve their security before they get breached. In 2019, 43% of companies that had a compromise ended up spending more on security. Only 15% of companies that did not suffer a breach spent more on protection, according to Verizon’s “Mobile Security Index” report.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/mobile/exploitation-phishing-top-worries-for-mobile-users/d/d-id/1337201?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Truths About Disinformation Campaigns

Disinformation goes far beyond just influencing election outcomes. Here’s what security pros need to know.

Image Source: Chumakov Oleg via Shutterstock

Exploding social media use and the growing availability of software bots and tools for manipulating video and other online content have made it easier than ever for bad actors to conduct broad disinformation campaigns.

While many tend to think of these campaigns as being mostly aimed at influencing election outcomes, the reality is that disinformation impacts a lot more than just politics and political leaders.

Recently, governments, hacktivists, and other threat actors have begun using disinformation and propaganda to push various partisan agendas, including those tied to health emergencies like the coronavirus outbreak, religious beliefs, and financial markets. Security experts expect those with malicious intent to increasingly use disinformation campaigns to try to harm companies’ brands and reputations, spread rumors about business leaders, and hurt organizations financially.

“Disinformation is as old as communication. It just happens to take a new form,” says Chris Morales, head of security analytics at Vectra. “Fighting disinformation is hard and comes down to what people will and will not believe.”

Following are six things to know about disinformation campaigns.

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/risk/6-truths-about-disinformation-campaigns-/d/d-id/1337199?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google has right to censor conservative nonprofit on YouTube

Just because YouTube is everywhere doesn’t make it the town square, a Seattle appeals court said on Wednesday. It’s neither a public forum nor a “state actor”, and it can’t be held to First Amendment court oversight as if it were a government body.

Thus did the 9th Circuit Court of Appeals dismiss a top right-wing content creator’s allegation that Google had violated its First Amendment rights by tagging dozens of its videos on abortion, gun rights, Islam and terrorism with its Restricted Mode and demonetizing them, leaving the nonprofit unable to make money from advertising.

The suit was originally brought in 2017 by radio talk show host Dennis Prager, who runs the conservative, nonprofit educational company Prager University (PragerU). PragerU isn’t an actual university, and it doesn’t award certificates or degrees. It’s best known for its many 5-minute videos, some of which, starting in 2016, Google dubbed Restricted, including videos about the 10 Commandments, whether police were racist, and Israel’s legal founding.

The suit claimed that Google, with its outsize power to moderate user content on YouTube, was using that power to censor conservative viewpoints. Google’s content filters apply the Restricted Mode to material seen as unfit for minors, including videos that include alcohol abuse, sexual situations, violence, and other mature matters.

None of that applied to PragerU’s content, but dozens of its videos were still flagged for “objectionable content” by Google’s algorithm. After being flagged, the videos were reviewed by humans, who often upheld the restriction and also demonetized the videos, making it tough for PragerU to make money from advertising on the platform.

Prager’s suit argued that Google’s opposition to conservative political views led to its content being flagged, in violation of First Amendment protection of free speech. That argument doesn’t fly, the appeals court said on Wednesday, given that YouTube isn’t a public forum:

PragerU’s claim that YouTube censored PragerU’s speech faces a formidable threshold hurdle: YouTube is a private entity. The Free Speech Clause of the First Amendment prohibits the government – not a private party – from abridging speech.

The appeals court also rejected PragerU’s claim that Google’s “braggadocio” about free speech constituted false advertising. Nope, that’s just opinion, the court said on Wednesday. Or, to be more precise, it’s marketing puff-speak:

Lofty but vague statements like ‘everyone deserves to have a voice’, and that the world is a better place when we listen, share and build community through our stories or that YouTube believes that ‘people should be able to speak freely, share opinions, foster open dialogue, and that creative freedom leads to new voices, formats and possibilities’ are classic, non-actionable opinions or puffery.

Farshad Shadloo, a Google spokesman, told Reuters that the company’s products “are not politically biased,” and the decision “vindicates important legal principles that allow us to provide different choices and settings to users.”

Donald Verrilli, a US solicitor general under President Barack Obama, wrote on behalf of the Computer Communications Industry Association in support of Google and YouTube, saying in a legal brief that courts have consistently found private companies such as Google, YouTube and Facebook don’t qualify as state actors for First Amendment purposes.

Interpreting them otherwise would “change the internet” by threatening to make websites “chock-full of sexually explicit content, violent imagery, hate speech, and expression aimed at demeaning, disturbing, and distressing others”, he wrote.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OVLHTQuoTUc/

Firefox rolling out DNS-over-HTTPS privacy by default in the US

Mozilla has said it plans to make a privacy technology called DNS-over-HTTPS (DoH) the default setting for US users of Firefox within weeks.

As our previous coverage explains, DoH encrypts Domain Name System (DNS) queries, which browsers use to resolve website addresses to their underlying numeric IP addresses.

Normally, these requests are sent in the clear, which means that ISPs and governments can see which web domains someone is visiting, which is where the privacy concerns begin.

In the US, ISPs have been accused of selling this data to advertisers. Although not a perfect shield against DNS snooping, DoH makes that a lot harder.
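The mechanics are simple to demonstrate. Cloudflare’s public resolver, for example, answers DNS questions over an ordinary HTTPS request via its documented JSON API, so an on-path observer sees only encrypted traffic to the resolver rather than the names being looked up. A short sketch:

```python
import requests

# Ask Cloudflare's DoH resolver for an A record. The DNS question and
# answer travel inside HTTPS, invisible to network snoopers.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "nakedsecurity.sophos.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```

Firefox uses the binary wire format rather than this JSON API, but the privacy property is the same: the query is just another HTTPS request.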

The technology’s been inside Firefox since mid-2018 although until now users had to enable it manually. In September 2019, Mozilla started testing DoH-by-default in the US – with that completed, from next month DoH will become a setting that users have to consciously opt to turn off.

Users can do this via Options > General > Network Settings (scroll to the bottom of the page), then clicking Settings. The ‘Enable DNS over HTTPS’ tick box is the last one on the page.

Notice how buried this setting is? Having backed DoH development since its earliest days in 2017, Mozilla doesn’t want to make it easy to turn off something it thinks is against the user’s interests.

Just below the tick box, there’s a second setting that allows users to choose which trusted DNS resolver to use. Cloudflare, Mozilla’s long-time DoH collaborator, is the default but recently users gained the ability to choose a second, NextDNS.
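For those who manage Firefox deployments, the same toggles live behind a pair of preferences that can be set in a user.js file. A short sketch of the relevant settings:

```
// user.js – Firefox preferences behind the DoH UI toggles
// network.trr.mode: 0 = default/off, 2 = DoH first (fall back to normal DNS),
//                   3 = DoH only, 5 = explicitly disabled by the user
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```

Pointing network.trr.uri at a different resolver’s DoH endpoint is how the choice of trusted provider, discussed next, takes effect.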

Trusting DoH providers

This aspect has bothered some critics – using companies such as Cloudflare effectively centralises DNS resolution for the tens of millions of people who use Firefox.

It’s a weak argument. People already set alternative DNS resolvers for performance reasons (Google’s 8.8.8.8, for instance) so the idea of using one service provider is hardly new. And is the alternative of routing DNS queries through an ISP’s servers any less centralised?

From an internet topology perspective, perhaps. But browser users don’t care about that. What bothers them more is: Who is recording the websites they go to?

As Mozilla reminds us, currently in the US, 80% of traffic travels through the DNS servers of only five broadband providers. All that using Cloudflare or NextDNS requires is that users trust these companies’ promises to protect privacy in the same way they do for any service provider. It’s a personal choice.

What to do

There are currently no plans to turn on DoH by default outside the US, most likely to defuse criticism by government agencies that it will, in the short term, make it harder to keep tabs on illegal activity by citizens. Google, which also backs DoH, is experimenting more cautiously than Mozilla.

Similar arguments were once made about the risk posed by HTTPS security and, in the 1990s, the spread of encryption more generally. But anyone who is serious about evading web surveillance can already do that in several ways that are more effective than using DoH, for example using Tor or firing up a VPN.

For non-US users, DoH can be turned on using the same settings mentioned above.

The technology can also be configured, with slightly more difficulty, in rival browsers such as Chrome, Edge, Brave and Opera, although not, so far, in Apple’s Safari. The technology is coming to Windows 10 at some point.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/U2NXSe69Ao0/