STE WILLIAMS

Cryptographers Panel Tackles Espionage, Elections & Blockchain

Encryption experts gave insights into the Crypto AG revelations, delved into complexities of the “right to be forgotten,” and more at RSA Conference.

RSA CONFERENCE 2020 — San Francisco — In a typically wide-ranging conversation, the Cryptographers’ Panel at the RSA Conference here Tuesday showed how cryptography is wending its way into more and more parts of society. The discussion spanned election security, blockchain use cases, SIM swapping, the right to be forgotten, encryption backdoors, “quantum-proofing,” new revelations about the CIA’s secret ownership of Crypto AG, and more. 

This year Adi Shamir — Borman professor of computer science at the Weizmann Institute in Israel and the “S” in “RSA” — returned to the panel after missing last year because of a widely reported visa issue. The panel, led by RSA CTO Zulfikar Ramzan, included a trio of crypto panel regulars: Ron Rivest, MIT professor and the “R” in RSA; cryptographer and security expert Whitfield Diffie; and Tal Rabin, head of research for the Algorand Foundation. Princeton University associate professor Arvind Narayanan also joined the conversation. 

Crypto AG
Diffie shared nuanced insights relating to the joint report released two weeks ago by the Washington Post and German public broadcaster ZDF. According to a Dark Reading article, “Crypto AG, a Switzerland-based communications encryption firm, was secretly owned by the CIA [US Central Intelligence Agency] in a classified partnership with West German intelligence. For years, it sold rigged devices to foreign governments with the intent of spying on messages its users believed to be encrypted.” 

Diffie says he’s “enthusiastic” about intelligence gathering — that it actually increases global stability when nations know more about each other. Nevertheless, the CIA’s successes and excesses with Crypto AG offer new lessons for the cryptography community. 

“I think the first thing we learned is it’s easy to get the illusion working in academic cryptography that there’s some playing fair. And intelligence is not about playing fair — it’s about succeeding,” said Diffie. “And there’s no reason [for an intelligence agency] to be sitting waiting for [another nation or adversary] to make up cryptographic algorithms that maybe you can break and maybe you can’t if instead you could push one [algorithm] on them that you can. And that is what this did with amazing success for 20, 30, 40 years.”

However, Diffie says, if cryptographic algorithms were all made public — as many cryptographers have long preached — then customers would not have to rely on an encryption company’s word that the communications are indeed secure. The sort of espionage carried out by Crypto AG would not have occurred if the algorithms were public, Diffie says.

Also, cryptography is hard, he says. And it isn’t something that everyone should go do themselves. Nevertheless, if more nations had endeavored to create their own algorithms, the code-breakers and eavesdroppers at intelligence agencies would face a far greater challenge. Instead, many countries rely on the same technology, which might be compromised right out of the box. 

“So these lessons are very relevant today,” he said, “where we’re accusing Kaspersky in Russia or Huawei in China of building compromises into their equipment or haven’t been buying them for that reason. And I think perhaps we should be and perhaps they should.”

Right to Be Forgotten
The panelists discussed the operational and societal challenges of protecting European citizens’ privacy under the European Union’s “right to be forgotten” regulations, as well as its limitations. 

“The ‘right to be forgotten’ can’t be anything other than something that keeps the little people in line,” said Diffie. “But it’s not a right to be forgotten by the secret police. It’s not going to be effective for anybody who can keep their own records. It just affects small researchers, nosy busybodies, and employers.”

Narayanan countered that while that may indeed be the case, these uses alone can be powerful. For example, Narayanan cited how a common cause of recidivism is that people with a criminal history have a difficult time getting a job after they’ve served their sentences because the first search result about them may be about their incarceration.

For those individuals, the right to “delist” that information — not necessarily to “forget” it — could make a big difference.  

“I think that in the context of the right to be forgotten we can discuss about it in various ways,” said Rabin. “But I think we do need technologies to eliminate data from the Internet. Of course there are things that we as a society, not just as an individual, want removed.”

Rabin cited the example of child pornography and the need to protect children who appeared in these published videos.

Just because we cannot satisfy the right to be forgotten, or maybe because we think something should not be forgotten, does not mean we shouldn’t “work on these types of technology that enable deletion of information,” she said.

Shamir, however, noted a challenge with this. “Clearly, global trade is all about making the past immutable,” he said. “So any legislation that will require that people will be able to undo past actions is going to lead to the idea of the blockchain — where after some amount of blocks have been accumulated there is no way to patch the past.”

Election Security & Blockchain
Shamir said he has “major reservations” about blockchain. “Not because it doesn’t work, but because in most cases it is overhyped, and there are much simpler ways to achieve the same goal,” he said. 

Blockchain proponents continue to hunt for the killer app or breakthrough use case that will move the technology mainstream. Some have proposed that the next promising frontier for blockchain is at the voting booth. 

But Rivest disagreed. “Blockchain is the wrong security technology for voting,” he said. 

“Many things we do in society — like flying an airplane — you need high tech,” said Rivest. “Voting is a place where you don’t really need high tech to make it work. You can get by just fine with paper ballots.” Rivest described and recommended election practices that use a voter-verified paper trail with regular audits of those paper records to validate the tabulations of voting machine software.

The risk of running elections without the verified paper trail is that to trust the results, you must trust the software. “That’s a dangerous path to go down if you don’t need to. And with voting we don’t need to,” he says. “Blockchains provide us certain things — ‘garbage in, garbage stored forever,'” but if an adversary does change or manipulate a vote, “it goes on the blockchain and never gets changed again. So blockchain is just a mismatch for voting.” 

The Future
Looking forward, Rivest said wryly that while preparing or “future-proofing” for quantum-powered attacks on encryption is good, “I hope that the people who are building quantum computers, uh, fail.”

Rabin said that the future for the crypto profession is bright. The power and beauty of the field, she says, is partly in the fact that there are innovations and technologies that “maybe today we don’t even know 100% what to do with them, but maybe in 20, 40 years we will. … I see a future for everybody here for a long time.”

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “Wendy Nather on How to Make Security ‘Democratization’ a Reality.”

Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that, she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad … View Full Bio

Article source: https://www.darkreading.com/endpoint/cryptographers-panel-tackles-espionage-elections-and-blockchain/d/d-id/1337142?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Emotet Resurfaces to Drive 145% of Threats in Q4 2019

Analysis of 92 billion rejected emails reveals a range of simple and complex attack techniques for the last quarter of 2019.

RSA Conference 2020 – San Francisco – Emotet is back with a bang, driving a 145% spike in threat activity throughout the fourth quarter of 2019, researchers report in a new analysis of 202 billion emails from the past quarter. Of the messages analyzed, 92 billion were rejected. 

The “Mimecast Threat Intelligence Report: RSA Conference Edition” covers data from October to December 2019 and reveals a mix of attacks ranging from low-effort, low-cost threats to advanced and targeted campaigns. Attackers are trying to take enterprise victims by surprise; this is shown by the resurgence of Emotet malware, now growing increasingly aggressive.

This isn’t the first indicator Emotet is ramping up: Earlier this year, Cisco Talos researchers detected some activity against US military domains, as well as domains belonging to state and federal governments. The malware’s operators reportedly compromised accounts belonging to at least one federal government employee and sent malicious spam to the victim’s contact list. At the same time, Cofense analysts found an Emotet campaign targeting the United Nations.

Now its resurgence is a focal point of Mimecast’s findings, which report 61 significant attack campaigns in the last quarter of 2019 — a 145% increase over the past year. Emotet was a key driver in this increase, researchers say, as it was a component in nearly every attack identified. The subscription-based malware-as-a-service model makes it accessible to a wider audience.

Researchers noticed Emotet’s operators shift from fileless malware when it first reemerged to attachment-based attacks later on. “They’re trying to hone their skills in the process,” says Josh Douglas, vice president of threat intelligence with Mimecast. “They’re changing the dynamics of how they’re targeting.”

Many of the significant campaigns that used Emotet have included ransomware detections. This finding, researchers say, indicates a high likelihood that attackers are focusing on ransomware delivery, especially given the attacks using ransomware in previous quarters.

File compression was the preferred attack format for the last quarter of 2019; however, researchers saw an increase in Emotet activity via .doc and .docx file formats. Compressed files allow for a more complex and potentially multimalware payload, they explain, but they also provide an easy way to hide the actual file name of any items in the container.

Emotet’s activity coincided with an increase in spam, one of the four primary threat categories analyzed in this report, along with impersonation attacks, opportunistic attacks, and targeted attacks. Spam is a significant and high-volume means to distribute malware; it was especially popular in attacks against the legal, software/software-as-a-service, and banking industries. Researchers anticipate it will continue as a popular vector given the likelihood someone will fall for it.

Impersonation attacks also remain effective as attackers spoof domains, subdomains, landing pages, websites, mobile apps, and social media profiles to manipulate their victims into sharing credentials and personal data. Management/consulting, legal, and banking are the most common industries for impersonation attacks, for which detections were down 5% during the past quarter. Attackers are using more nuanced techniques such as voicemail phishing to succeed.

Researchers also noticed a difference in the more significant attacks of the last quarter. Threats targeted a wider range of companies across different sectors, and for shorter periods of time than seen in previous quarters. Specific campaigns have only spanned one- or two-day periods, as opposed to multiday campaigns detected in the past. These attacks show an uptick in the use of short-lived, high-volume, targeted and hybridized attacks against victims across sectors.

That said, the overwhelming majority of attacks are less complex and more high-volume. This is “almost certainly” an indicator of broader access to online kits that less-skilled attackers can use to deploy attack campaigns, researchers explain in their report.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “SSRF 101: How Server-Side Request Forgery Sneaks Past Your Web Apps.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/emotet-resurfaces-to-drive-145--of-threats-in-q4-2019/d/d-id/1337147?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mystery zero-day in Chrome – update now!

Google has issued an update for its widespread Chrome browser to fix three security holes.

Unfortunately, one of those holes is what’s known as a zero-day: a bug that was already being exploited by cybercrooks before Google tracked it down and fixed it.

Google, which is often vociferous about bugs and how they work, especially those found by its own Project Zero and Threat Analysis teams, is playing its cards close to its chest in this case.

As the company’s update notification says:

Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

We’re guessing that Google is worried that giving away too much at this stage might encourage additional attackers – ones who haven’t figured this bug out yet – to try to get in on the act.

If those crooks know other Bad Guys have already figured out how to exploit this vulnerability for active attacks, then they know that there’s more than just a theoretical chance of abusing the bug if they happen to rediscover it themselves.

So far, then, Google has only offered this comment about the vulnerability:

CVE-2020-6418: Type confusion in V8. Reported by Clement Lecigne of Google’s Threat Analysis Group on 2020-02-18

Google is aware of reports that an exploit for CVE-2020-6418 exists in the wild.

Two researchers at a business called Exodus Intelligence have already published a proof-of-concept exploit, which they devised by studying recent changes in the V8 source code. Fortunately, their example requires you to visit a web page using Chrome with its so-called sandbox protection turned off. In regular use, however, Chrome runs with its protective sandbox enabled, so even if this proof-of-concept exploit were to trigger the bug, it couldn’t then grab control from the browser to run malware code of an attacker’s choosing. We assume that Google’s statement about an exploit “in the wild” refers to an attack that works even if Chrome is run in the usual way.

To explain.

A type confusion bug is where you are able to trick a program into saving data for one purpose (data type A) but then using it later for a different purpose (data type B).

Imagine that a program is very careful about what values it allows you to store into memory when you are treating it as type B.

For example, if a ‘type B’ memory location keeps track of a memory address (a pointer, to use the jargon word), then the program will probably go to great lengths to stop you modifying it however you like.

Otherwise, you might end up with the power to read secret data from memory locations you aren’t supposed to access, or to execute unknown and untrusted program code such as malware.

On the other hand, a memory location that’s used to store something such as a colour you just chose from a menu might happily accept any value you like, such as 0x00000000 (meaning completely transparent) all the way to 0xFFFFFFFF (meaning bright white and totally opaque).

So if you can get the program to let you write to memory under the low-risk assumption that it is storing a colour, but later to use that “colour” as what it thinks is a trusted memory address in order to transfer program execution into your malware code…

…you just used type confusion to bypass the security checks that should have been applied to the memory pointer.

(For performance reasons, a lot of software verifies the safety of data when its value is modified, not every time it is used, on the grounds that if the data was safe when it was saved, it should remain safe until the next time it is modified.)

What’s V8?

V8, in case you are wondering, is the JavaScript “engine” that is built into the Chrome browser.

Numerous other projects use V8, notably the node.js software development system, widely used these days for server programming, and Microsoft’s new-but-not-quite-official-yet variant of its Edge browser, which is based on Google’s V8 engine rather than Microsoft’s own ChakraCore JavaScript system.

We’re assuming that if other V8-based applications do turn out to share this bug, they will soon be patched too – but as far as we know now [2020-02-25T18:50Z], the in-the-wild exploit only applies to V8 as used in Chrome itself.

What to do?

As Google reports:

The [regular release version] has been updated to 80.0.3987.122 for Windows, Mac, and Linux, which will roll out over the coming days/weeks.

However, given what seems to be a clear and present danger in this case, we suggest that you don’t wait for your Chrome to get round to updating by itself – go and check for yourself if you’re up-to-date.

And remember, patch early, patch often, especially if the crooks are already ahead of you!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xrcwT2eXQRY/

Mind the gap: Google patches holes in Chrome – exploit already out there for one of them after duo spot code fix

Google has updated Chrome for Linux, Mac, and Windows to address three security vulnerabilities – and exploit code for one of them is already public, so get patching.

In a release note on Monday, Krishna Govind, a test engineer at Google, said Chrome version 80.0.3987.122 addresses three flaws identified by various researchers. Each is rated high severity.

One, reported by André Bargull, is an integer-overflow bug in International Components for Unicode (ICU), a set of libraries for C/C++ and Java that handle Unicode and globalization support. This bug earned a $5,000 bounty from Google for Bargull, and no CVE has been issued.

The second flaw, reported by Sergei Glazunov of Google’s Project Zero team, is an out-of-bounds memory access in the streams component of the Chromium browser. It’s designated CVE-2020-6407.

The third, reported by Clement Lecigne of Google’s Threat Analysis Group, is a type-confusion bug in the TurboFan compiler for V8, the open-source Chromium JavaScript engine.

This particular remote-code execution vulnerability, CVE-2020-6418, was disclosed by Lecigne to the Chromium team on February 18, and quietly fixed a day later.


Interestingly enough, at the time, this public source-code tweak was spotted and studied by Exodus Intelligence researchers István Kurucsai and Vignesh Rao, who hoped to see whether it’s still practical to identify security bug fixes among code changes in the Chromium source tree and develop an exploit before the patch sees an official release, a practice known as patch-gapping.

As such, Kurucsai and Rao developed proof-of-concept exploit code for CVE-2020-6418 after spotting the fix buried in the source tree, and before Google could emit an official binary release. The duo have now shared their exploit code [ZIP] which can be used by white and black hats to target those slow to patch.

The bug arises from a side-effect of the JSCreate operation and the way it handles JavaScript objects; this can be abused by a malicious webpage to execute arbitrary code within the browser sandbox. This involves modifying the length of an array to an arbitrary value to get access to the V8 memory heap. A hacker would need to break out of the sandbox to hijack a device or PC, we note.

In their write-up, Kurucsai and Rao observe that it took three days to analyze the flaw and develop exploit code. “Considering that a potential attacker would try to couple this with a sandbox escape and also work it into their own framework, it seems safe to say that one-day vulnerabilities are impractical to exploit on a weekly or bi-weekly release cycle,” they said.

According to Govind, Google is keeping the discussion of the V8 bug private until the update, usually distributed automatically, reaches the majority of Chrome users. The Googler noted the web giant “is aware of reports that an exploit for CVE-2020-6418 exists in the wild.”

Google’s most recent Chrome zero-day fix arrived last November, when the Chocolate Factory repaired a use-after-free vulnerability (CVE-2019-13720). ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/25/google_chrome_security_bugs/

KnowBe4 At BlackHat


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/knowbe4-at-blackhat/d/d-id/1337135?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Report: Shadow IoT Emerging as New Enterprise Security Problem

Much of the traffic egressing enterprise networks are from poorly protected Internet-connected consumer devices, a Zscaler study finds.

When it comes to protecting against Internet of Things (IoT)-based threats, many organizations seem to have a lot more to deal with than just the officially sanctioned Internet-connected devices on their networks.

A new analysis by Zscaler of IoT traffic exiting enterprise networks showed a high volume associated with consumer IoT products, including TV set-top boxes, IP cameras, smart watches, smart refrigerators, connected furniture, and automotive multimedia systems.

In some cases, the traffic was generated by employees at work, for instance checking their nanny cams or accessing media devices or their home security systems over the corporate network. In other instances, consumer-grade IoT devices installed in work facilities, such as smart TVs, generated a lot of the IoT traffic.

Though all the IoT devices Zscaler observed — authorized and unauthorized — used at least some level of encryption, a startling 83% of IoT transactions were happening over plain-text channels, making them vulnerable to eavesdropping, sniffing, and man-in-the-middle attacks.

“We are noticing a big increase in IoT device traffic egressing the enterprise network,” says Deepen Desai, vice president of security research at Zscaler.

As recently as last May, the volume of IoT traffic generated by Zscaler’s enterprise customer base was in the range of 56 million transactions per month. Currently it is around 33 million transactions a day, or roughly 1 billion transactions per month. As a proportion of all Internet transactions that Zscaler processes, the volume of IoT-related traffic is still relatively small but is growing very fast, Zscaler said.

While the traffic increase itself is in keeping with previous predictions about IoT growth, the concern is the number of unauthorized, consumer-oriented shadow-IoT devices that are showing up on enterprise networks, Desai says. Many of these devices have insecure configurations, use default passwords, and present relatively easy targets for attackers. New exploits that target IoT devices are constantly surfacing, and attackers are actively looking to exploit vulnerabilities in connected cameras, DVRs, and home routers, he says.

Zscaler’s analysis of some 500 million transactions from more than 2,000 organizations over a two-week period uncovered traffic from a total of 553 unique devices across 21 categories from 212 manufacturers. TV set-top boxes accounted for nearly 30% of the IoT devices that Zscaler discovered across the organizations in its study. Three of the other IoT devices among the top five were consumer products as well — smart TVs, smart watches, and media players.

The top authorized devices that Zscaler discovered in its study — including wireless barcode readers, digital signage media players, medical systems, industrial control devices, and payment terminals — were significantly smaller in number compared to the unauthorized IoT devices. However, and somewhat unsurprisingly, these devices were the ones that generated most of the IoT traffic on the networks.

The situation highlights the need for enterprises to enable greater visibility into IoT traffic on their networks, Desai says. Without knowing what’s on their networks, administrators are going to find it very hard to manage the problem.

“Organizations need to understand the risk,” Desai says. They need to be able to identify and separate the authorized IoT traffic on the network from the traffic generated by vulnerable and poorly secured consumer devices. “If your MRI [machine] is talking back to the Internet, there could be many other devices [doing it] as well,” he says.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “SSRF 101: How Server-Side Request Forgery Sneaks Past Your Web Apps.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/iot/report-shadow-iot-emerging-as-new-enterprise-security-problem/d/d-id/1337144?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google stops indexing WhatsApp chats; other search engines still at it

On Friday, Jordan Wildon – a journalist for the German media outlet Deutsche Welle – warned the world that their WhatsApp groups “may not be as secure as you think they are.”

A simple Google search could lead people to invite codes that would let them find and join private WhatsApp group chats, given that the pages were indexed by Google…

…and somebody who filed a bug report about it revealed that both WhatsApp parent company Facebook and Google have known about this for months.

Wildon said that any group link shared outside of secure, private messaging could be found relatively easily and joined. This is past tense, at least for Google search: as of Saturday, WhatsApp tweaked the glitch out of existence, though the search was still working on other major search engines as of today. Worse still, the links could have been found through Google search even if they hadn’t been shared, he said:

[…] it’s possible, but difficult, to run a kind of brute-force method to get access to a URL that corresponds to an active group chat.

Stop me if you’ve heard this one before: it’s a feature, not a bug. That’s what Facebook told Twitter user @hackrzvijay when the platform turned down their bug report in September 2019:

This was an “intentional product decision,” Facebook said. It’s not our fault that group admins haven’t invalidated the links that people can find with a simple search. Heck, we’re surprised that Google’s even indexing them.

Well, Facebook shouldn’t be surprised. The invite codes are just URLs with specific strings of text that Google uses to index pages across the internet. Google’s public search liaison, Danny Sullivan, responded by noting that search engines simply list pages that are publicly available on the open web.

Regardless of what Facebook says, its hands likely aren’t tied in this matter. It’s simple enough for WhatsApp to plunk a line of code that tells search crawlers not to index the information on private group pages. Later on Friday, one hacker who reverse-engineers apps – Jane Manchun Wong – confirmed that it was WhatsApp’s fault. It wasn’t inevitable that those group pages got indexed, Wong said. All it would have taken to keep them from being indexed was the insertion of a simple text string:
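For reference, the standard mechanism here is a robots directive. A sketch of the kind of fix involved (the generic directive, not WhatsApp’s actual markup) is a meta tag in the page’s HTML head:

```html
<!-- In the page's head: ask compliant crawlers not to index this page -->
<meta name="robots" content="noindex">
```

An `X-Robots-Tag: noindex` HTTP response header achieves the same thing, including for responses that aren’t HTML.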

… which apparently also occurred to somebody at WhatsApp – eventually, after the media storm had been raging for a while. By Saturday, the app had picked up a noindex code on the chat invitation URLs and the listings had been removed from Google.

… thus, fittingly enough, rendering the private chats unfindable and, hence, more private… at least on Google, that is. As of this morning, you could still find the strings when using other major search engines, as Forbes reported and which I confirmed by searching on one of the strings using DuckDuckGo.

As of this morning, WhatsApp hadn’t gotten back to any news outlets with a mea culpa, at least not that I could find. Facebook didn’t reply to my request for a comment. But WhatsApp did take the time to blame the privacy breach on users.

The Facebook subsidiary told Vice’s Motherboard that the problem was users’ fault for posting invite codes on public sites:

Group admins in WhatsApp groups are able to invite any WhatsApp user to join that group by sharing a link that they have generated. Like all content that is shared in searchable, public channels, invite links that are posted publicly on the internet can be found by other WhatsApp users. Links that users wish to share privately with people they know and trust should not be posted on a publicly accessible website.

Vice’s Joseph Cox had also found that many of the publicly available URLs for private chats led to groups for sharing porn. But others appear to be for groups that share other sensitive material: one URL that Vice checked out was leading to a group chat that describes itself as being for NGOs accredited by the United Nations.

Vice joined and was able to view a list of all 48 participants and their phone numbers.

Just the latest

WhatsApp may well be encrypted end-to-end, but it’s certainly had its share of security pratfalls. Earlier this month, for example, PerimeterX researcher Gal Weizman uncovered a clutch of vulnerabilities that led him to a cross-site scripting (XSS) flaw affecting WhatsApp desktop for Windows and macOS when paired with WhatsApp for iPhone – a flaw that gave attackers access to local files.

Other recent incidents have included an MP4 flaw that could have led to remote code execution (RCE), and another involving malicious GIFs that did the same thing on Android.

Last May, a severe WhatsApp zero-day was being exploited by a nation state group to attempt to install spyware on targets simply by phoning them. In 2018, Google researchers revealed a flaw that could have compromised a device, again via a simple call.

Facebook is now in the process of stitching together the technical infrastructure of all its messaging products – Instagram, Facebook Messenger and WhatsApp – so that users of each app can talk to each other more easily.

Whatever happened behind the scenes with this glitch getting semi-fixed, WhatsApp and Facebook don’t seem to want to reach out to all the major search engines to get group chat links de-indexed. Or, at least, if they’re in the process of doing that, it’s certainly not something that’s being owned up to publicly.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/De9SWjTWYs0/

Android 11 to clamp down on background location access

Is Android finally about to get on top of apps that quietly suck up location data?

Google’s been wrestling with the issue for years but each control it has introduced has turned out to have exceptions that app makers are happy to exploit.

Some apps need location access to work (SatNav, weather apps, carrier services), some need it some of the time (shopping apps, social media), while most of the rest rarely or never need it at all.

And yet, with Android 11 in the works, Google finds itself having to refine location access once again, this time by announcing tighter controls on how apps access location even when they already hold a general location permission.

The problem is apps that continue to track device location even when they are not being used, otherwise known as background access – something users only acquired some granular control over in Android 10 last year.

Now, to batten down the hatches for good, the search giant has upped the ante – by forcing developers to submit apps to Google for checking to make sure their location access design is legit.

A quick recap…

In Android 6, 7 and 8, app location access was handled by a toggle between on and off, with a separate option to do so using ‘high accuracy’ mode (i.e. using Wi-Fi, Bluetooth, mobile networks in addition to GPS).

Android 9 continued the on/off toggle for app access but turned high accuracy mode into the global default when location access was turned on (turning it ‘off’ disabled everything bar GPS).

In all cases, apps that wanted to access a device’s location had to ask permission, in effect turning the slider to ‘on’ on behalf of the user.

But once turned on, that permission proved tempting to abuse by apps looking to siphon off as much commercially valuable data about device users as possible, whether that was necessary for the app to function as advertised or not.

Frankly, this was a bit of a mess. A 2019 study discovered that apps with location permission could use covert and side channels to quietly share location data with other apps that had been denied the same access, all handily enabled by third-party SDKs. Another 2019 study showed just how much can be inferred about people’s lives from simple location data.

In theory, Android 10 tamed this by allowing apps to access location only when they’re being used rather than all the time, curtailing background access.

According to Google, more than half of Android 10 users selected the ‘while app is in use’ option, but it seems some apps have continued to abuse the feature by requesting background access anyway – that is, by persuading users to agree to the ‘allow all the time’ option.
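The distinction between ‘while app is in use’ and ‘allow all the time’ maps directly onto Android’s manifest permissions. As a rough sketch (the permission names are real Android identifiers; the package name is made up):

```
<!-- AndroidManifest.xml fragment -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.locationdemo">
    <!-- Foreground ("while app is in use") location access -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <!-- Background ("allow all the time") access: a separate permission
         since Android 10, and the one Play review will now ask
         developers to justify -->
    <uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
</manifest>
```

On Android 11, background access also has to be requested in a separate step after foreground access has been granted, rather than in one combined prompt.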

Says Google’s developer blog:

As we took a closer look at background location usage, we found that many of the apps that requested background location didn’t actually need it.

Google’s new approach in Android 11 is to assess background access as part of the app approval process.

This policy will apply for all new apps from 3 August 2020, with the same control applied to existing apps on the Play store by 2 November, Google said. In short, apps wanting background access will have to justify it.

The probable downside of this new policy is simply that users will find themselves coping with more permission requests.

With an Android 11 device, users will be able to grant temporary, one-time location permissions that apply only while the app is in use and must be re-granted later. Normally, that should only happen when the app is closed and then re-opened, but under some conditions it might occur more often.

Apple’s iOS 13 has had this ability for six months – another example of how, at least in relation to user privacy, Google is still running to catch up.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pgha-yLJwm8/

Samsung cops to data leak after unsolicited ‘1/1’ Find my Mobile push notification

Updated Samsung has admitted that what it calls a “small number” of users could indeed read other people’s personal data following last week’s unexplained Find my Mobile notification.

Several Register readers wrote in to tell us that, after last Thursday’s mystery push notification, they found strangers’ personal data displayed to them.

Many readers, assuming Samsung had been hacked, logged into its website to change their passwords. Now the company has admitted that a data security breach did occur.

A spokeswoman told The Register: “A technical error resulted in a small number of users being able to access the details of another user. As soon as we became aware of the incident, we removed the ability to log in to the store on our website until the issue was fixed.”

She added: “We will be contacting those affected by the issue with further details.”

Given the not-insignificant number of emails El Reg received about the website snafu, it remains to be seen whether Samsung’s definition of “small number” matches that of the rest of the world.

Of potentially greater concern is the mystery 1/1 push notification from Find my Mobile, a baked-in app on stock Samsung Android distributions. Although the firm brushed off the worldwide notification as something to do with unspecified internal testing, many of those who wrote to El Reg said they had disabled the app. Stock apps cannot be uninstalled without effectively wiping the phone and installing a new operating system – unlocking the bootloader and flashing a third-party, customised ROM.

Samsung did not answer our questions as to how a “disabled” app was able to receive and display push notifications. Nor did it say what other functions this “disabled” app was capable of executing. ®

Updated to add

A Samsung rep later told us: “Less than 150 customers were affected, and we are contacting them directly.”


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/24/samsung_data_breach_find_my_mobile/

Apple tries to have VirnetX VPN patent ruling overturned again, US Supremes say no… again

The United States Supreme Court has kicked out Apple’s attempt to overturn a judgement in one of the cases in its 10-year patent fight with VirnetX.

The Supremes rejected Apple’s petition for judicial review, a bid to overturn the 2016 decision of a lower court that awarded VirnetX $302m – a sum that later rose to $439.8m in damages, fees and interest – for Apple’s use of its patents.

Apple had argued (PDF) earlier this month that the “Federal Circuit has created a gaping loophole that facilitates massive damages in patent cases where the damages claims are based on prior licenses” – in essence saying that VirnetX had overvalued the inventions to the court.

Kendall Larsen, VirnetX CEO and president, said in a statement: “It has taken us 10 long years, 4 successful jury trials, 2 successful Appellate Court rulings and a favorable Supreme Court decision to get here.”

Larsen stressed that his firm is not just a patent troll farm or a beanfeast for IP lawyers.

He said: “We are a small company with valuable security technology. The inventors of that technology have senior level positions at VirnetX. It has always been our objective to create our own products with our proprietary technology. Unfortunately, when other companies are using your technology without permission, you must take action to protect that company asset. We have always believed that we were in the right with our court actions against Apple. Four juries and countless judges agree.

“We believe that our technology provides an important security feature in some Apple products especially the iPhone. The jury award we received and confirmed by Federal judges, is less than a quarter of one per cent of the cost of an iPhone. We believe this amount is more than fair considering the importance of Internet security.”

There are actually two separate cases brought against Apple by the firm. The first resulted in a $503m verdict for VirnetX, which Apple is currently fighting in the appeals court. There’s a breakdown of the specific technologies in question here.

The dispute over the patents has seen Apple go to extreme lengths – including closing retail stores in the eastern district of Texas.

It also re-engineered its video conferencing software FaceTime and switched it off for anyone who declined the update – which led to a class-action case.

It stopped FaceTime working as a peer-to-peer product, instead routing calls via a relay run by Akamai in order to avoid the relevant patents.

Apple first lost against VirnetX in eastern Texas in 2012, when the company was awarded $386m.

In 2013 it lost an appeal of that decision – which it had claimed was excessive – and saw the award increased.

Shares of Apple took a gut punch earlier this week as it copped to possible impacts from the novel coronavirus on its revenues.

Apple did not respond to our emails asking for comment on the verdict. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/25/supreme_court_apple_virnetx_patent_appeal/