STE WILLIAMS

Ransomware Attack Via MSP Locks Customers Out of Systems

Vulnerable plugin for a remote management tool gave attackers a way to encrypt systems belonging to all customers of a US-based MSP.

An attacker this week simultaneously encrypted endpoint systems and servers belonging to all customers of a US-based managed service provider by exploiting a vulnerable plugin for a remote monitoring and management tool used by the MSP.

The attack resulted in some 1,500 to 2,000 systems belonging to the MSP’s clients getting cryptolocked and the MSP itself facing a $2.6 million ransom demand.

Discussions this week on an MSP forum on Reddit over what appears to be the same — or at least similar — incident suggest considerable anxiety within the community over such attacks, with a few describing them as a nightmare scenario.

“From the MSP’s standpoint, the tool they use to manage everything was just used against them” to inflict damage on customers, says Chris Bisnett, chief architect at Huntress Labs. “Everyone is looking at the attack and saying, ‘This could have been me.'”

Huntress provides managed detection and response services to the MSP that was attacked and to numerous others like it. Bisnett says one of the company’s MSP clients reported the ransomware attack on Monday. After an initial investigation showed that the MSP’s own systems had not been compromised, researchers from Huntress did some further digging and eventually linked the attack to a vulnerable plugin for a remote management tool from Kaseya.

Many MSPs use Kaseya’s VSA RMM tool to remotely monitor and manage client systems and servers. The plugin exploited in this attack, however, came from ConnectWise and is used to manage support tickets raised in Kaseya, Bisnett says.

The vulnerability basically gave the attackers a way to run remote commands that allowed them complete access to the Kaseya VSA database. “They were able to task the RMM tool as if they were an administrator at the MSP,” Bisnett says. “They said, ‘Take this executable and put it out on every system the MSP is managing.'”

In this case, the executable was Gandcrab, a widely distributed ransomware tool that has been used in numerous previous attacks. All customer systems that the MSP was managing via the Kaseya RMM tool were encrypted simultaneously, locking users out of them.
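Neither vendor has published the full exploit chain, but the flaw class described above — an endpoint that accepts commands without any authentication check and runs them against the tool’s backing database — can be sketched in miniature. This is an illustrative Python/sqlite3 reconstruction of the pattern, not the actual plugin code; the table, column, and function names are invented:

```python
import sqlite3

# Toy stand-in for an RMM database: one table of managed agents.
def make_db():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE agents (id INTEGER, hostname TEXT, next_task TEXT)")
    db.execute("INSERT INTO agents VALUES (1, 'client-pc-01', NULL)")
    db.execute("INSERT INTO agents VALUES (2, 'client-srv-01', NULL)")
    return db

# The vulnerable pattern: a "sync" endpoint that executes whatever SQL
# the caller supplies, with no authentication check at all.
def vulnerable_sync_endpoint(db, sql):
    db.executescript(sql)   # attacker-controlled SQL runs as the service

db = make_db()
# With database access, an attacker can task every managed endpoint
# exactly as an MSP administrator would:
vulnerable_sync_endpoint(
    db, "UPDATE agents SET next_task = 'run ransomware.exe';")
tasks = [row[0] for row in db.execute("SELECT next_task FROM agents")]
print(tasks)
```

Once the task queue is writable, the RMM machinery itself does the distribution — which is why a single database-level flaw translated into every managed system being hit at once.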

A poster on Reddit on Tuesday described a similar incident impacting a local MSP in which all client systems were encrypted. It’s unclear, however, whether the incident mentioned in the Reddit report is the same one reported by the Huntress MSP customer.

Previously, attackers have installed cryptomining tools on business systems and stolen data from organizations in various sectors by gaining access to their networks via MSP connections. There have also been incidents where MSPs have reported one or two clients getting hit with ransomware. “But this was extra alarming because all customer systems were encrypted at the same time,” Bisnett notes.

Rising Concerns
Attacks on MSPs are a growing concern. Recently, threat actors, some sponsored by nation states, have begun targeting MSPs in an attempt to get to the networks of their clients. APT10, a threat group believed to be working for the Chinese Ministry of State Security’s Tianjin State Security Bureau, is one of the best-known operations targeting MSPs. For the past few years, the group has been conducting a broad cyberespionage operation called Cloud Hopper to steal data from organizations in banking, manufacturing, consumer electronics, and numerous other sectors by attacking their MSPs.

In fact, concerns over such attacks are so high that the Cybersecurity and Infrastructure Security Agency of the Department of Homeland Security is scheduled to brief MSPs on Chinese malicious activity later this month.

The vulnerability that the threat actor exploited in the latest attack exists in ManagedITSync, a ConnectWise plugin for Kaseya VSA. A security researcher from Australia first reported the vulnerability in November 2017 and posted details, along with proof of concept code, on GitHub.

ConnectWise issued an update addressing the issue sometime later, but for some reason the bug and the update patching it appear to have received little attention until now, Bisnett says. The bug was assigned a formal CVE number only this week after Huntress Labs informed MITRE about the issue, he says. The CVE was backdated to 2017 to reflect the fact it was first reported at that time.

In a note that appears to have been posted six days ago and updated yesterday, Kaseya urged customers using the ConnectWise plugin for VSA to upgrade to the patched version immediately or, alternatively, to remove the plugin altogether.

“This only impacts ConnectWise users who have the plugin installed on their on-premises VSA,” the company said, adding that only a very small number of customers appear vulnerable to the threat.

“We are lucky enough not to be directly in the path of this particular storm,” says Joshua Liberman, president of Net Sciences, a New Mexico-based MSP. “The only way we’ll survive this as an industry, short of stopping the threat at its source, which is well beyond our scope, is to tighten our own defenses, share information with each other, and create an ‘offensive defense posture,'” he says.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/ransomware-attack-via-msp-locks-customers-out-of-systems/d/d-id/1333825?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mumsnet data leak: Moaning parents could see other users’ privates after cloud migration

Parent gabfest platform Mumsnet has reported a data security breach that it claimed happened amid a “software change” en route to migrating services to the cloud.

Justine Roberts, founder and CEO at Mumsnet, today told users: “We’re very sorry to say that we’ve become aware of a data breach which affected some Mumsnet user accounts.”

A user sounded the alarm yesterday evening that they were able to log into and view details of another user’s account. This security screw-up, likely some kind of caching blunder, happened between 2pm GMT on 5 February and 9am GMT on 7 February.

“During this time, it appears that a user logging into their account at the same time as another user logged in, could have had their account info switched,” Roberts added.

“We believe that a software change, as part of moving our services to the cloud, that was put in place on Tuesday PM (5 February) was the cause of this issue. We reversed the change this morning. Since then there have been no further incidents.”
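Mumsnet has not published the technical root cause, but the symptom — two users logged in at the same time having their account info switched — is the classic signature of a response cache that ignores session identity. A minimal sketch of that bug class (the cache, handler, and user names here are invented for illustration, not taken from Mumsnet’s stack):

```python
# A naive response cache keyed only by URL, ignoring the session cookie.
cache = {}

ACCOUNTS = {"alice-session": "alice@example.com",
            "bob-session": "bob@example.com"}

def render_account_page(session_id):
    return f"Logged in as {ACCOUNTS[session_id]}"

def handle_request(url, session_id):
    # BUG: the cache key omits the session, so the first user's page
    # is served to everyone who requests the same URL afterwards.
    if url not in cache:
        cache[url] = render_account_page(session_id)
    return cache[url]

first = handle_request("/account", "alice-session")
second = handle_request("/account", "bob-session")
print(second)  # Bob is shown Alice's account page
# The fix: key the cache on (url, session_id), or mark authenticated
# responses uncacheable (Cache-Control: private) so shared caches
# never store them.
```

A misconfiguration like this is easy to introduce when a caching layer is added or changed during a cloud migration, which matches the timeline Mumsnet describes.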

By logging into someone else’s account, data on show could have included a user’s email address, account details, posting history, and personal messages. Passwords were encrypted, the CEO said.

“We’ve reversed the software change… and this morning we forced a log out, requiring users to log in again before they can post. This ensures that anyone who had inadvertently logged in as someone else will no longer be logged in to the wrong account.”

Roberts said it is not yet certain how many Mumsnet members were caught up in this mess but is “investigating the logs” and “hope to know definitively very soon”.

“We do know that approximately 4,000 user accounts were logged into in the period in question but we don’t as yet know which of those were actually breached (i.e. also affected by mismatched login), although we know for sure it wasn’t every account.”

She said users reported 14 “incidents” and Mumsnet is trying to ascertain if there were more.

“You’ve every right to expect your Mumsnet account to be secure and private. We are working urgently to discover exactly how this breach happened and to learn and improve our processes,” Roberts added.

The breach has been reported to the Information Commissioner’s Office.

This isn’t the first time the platform for snarky parents has suffered a security wobble: it was hit by the Heartbleed OpenSSL vulnerability in 2014; and it was hacked in 2015. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/07/mumsnet_breach/

Apple puts bullet through ‘Do Not Track’, FaceTime snooping bug and other vulnerabilities

Apple on Wednesday removed the vestigial “Do Not Track” (DNT) privacy technology from Preview Release 75 of its macOS Safari browser, and buried the corpse without ceremony. DNT is also missing from mobile Safari 12.1 in the soon-to-be released iOS 12.2.

The shiny device biz did so, it says, to protect privacy – the presence of the setting could be used as a data point in a browser fingerprint.

No tears were shed because DNT does not work: it presents a request to websites to show restraint and forgo ad tracking. But compliance is voluntary and – surprise – websites have shown little interest in forgoing potentially valuable data. Facebook, Google, and Twitter – ad businesses all – ignore DNT, which takes the form of some text in the header of an HTTP request.
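On the wire, DNT really was just one header. A minimal sketch of what a DNT-enabled browser sent and what a (rare) cooperating server had to do with it — the server-side logic here is hypothetical, since honoring the header was always voluntary:

```python
# What a DNT-enabled browser added to every HTTP request:
request_headers = {
    "Host": "example.com",
    "User-Agent": "Mozilla/5.0",
    "DNT": "1",   # "1" = user requests not to be tracked
}

# A cooperating server had to check the header and voluntarily skip
# tracking -- nothing enforced this, which is why DNT failed.
def should_track(headers):
    return headers.get("DNT") != "1"

print(should_track(request_headers))           # polite server skips tracking
print(should_track({"Host": "example.com"}))   # no DNT header: tracking proceeds
```

The flip side, and Apple’s stated reason for removing it, is that the mere presence of the header is itself a signal that distinguishes one browser configuration from another.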

Apple’s browser surgery follows a decision last month by web standards group W3C to close the DNT working group because the technology hasn’t received wide enough support to justify continued development.

Microsoft was the first to announce support for DNT, in late 2010. Mozilla, Apple, Opera, and eventually Google were all on board by the end of 2012. That was not long after America’s trade watchdog, the Federal Trade Commission, voiced support for the technology – anything to avoid actually stepping in and regulating.


Although research firm Forrester last year found that almost a quarter of American adults have enabled “Do Not Track” in their web browsers, and privacy-focused search biz DuckDuckGo this week published similar numbers, enthusiasm for DNT among browser makers has waned.

Left to themselves to defend against ad tracking, many internet users have opted for ad and content blocking, though with Google looking to limit how browser extensions can intercept and alter incoming web traffic, existing filtering tools, at least in the dominant Chromium ecosystem (Chrome, Edge, Opera, and many others) may need to be rewritten or may no longer be possible.

Apple, which relies on a different rendering engine (WebKit) than Chromium-based browsers (Blink), is focusing on other web privacy mechanisms, namely Intelligent Tracking Prevention. Its Safari browser, however, accounts for only about 5 per cent of desktop browser use globally and about 20 per cent of the mobile browser market, according to StatCounter.

Mozilla, which makes the Firefox browser, has also pursued a separate path on privacy. Last summer, it said it would begin blocking tracking tech by default. And it implemented those changes with the release of Firefox 65 late last month.

While US lawmakers dither, European data rules have begun to change the ad tracking landscape abroad and made the value of tracking and ads to publishers visible: The Washington Post charges EU residents $90 for a yearly subscription without ads or tracking, or $60 annually for those who surrender GDPR protections and submit to surveillance capitalism. ®

Security updates

Today, Apple also emitted security fixes for iOS 12.1.4. This fixes the FaceTime eavesdropping bug (CVE-2019-6223) found by 14-year-old Grant Thompson of Catalina Foothills High School and Daven Morris of Arlington, Texas. We understand the teen and his family will get some compensation from Apple, which will also pay toward his education.

The OS update also fixes two elevation-of-privilege holes (CVE-2019-7286 in Foundation, CVE-2019-7287 in IOKit), and a vague problem with Live Photos in FaceTime (CVE-2019-7288).

Meanwhile, FaceTime has been fixed in macOS, too.

According to Google Project Zero’s Tavis Ormandy, “Three out of the four vulnerabilities in the latest iOS advisory were exploited in the wild, yikes.” The team discovered two of them: the elevation of privilege bugs. Get patching!

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/07/apple_puts_bullet_through_do_not_track/

Security Bugs in Video Chat Tools Enable Remote Attackers

Lifesize is issuing a hotfix to address vulnerabilities in its enterprise collaboration devices, which could give hackers a gateway into target organizations.

Newly discovered security bugs in Lifesize videoconferencing products can be remotely exploited, giving attackers the ability to spy on a target organization or attack other devices.

Trustwave SpiderLabs security researcher Simon Kenin found the remote OS command injection vulnerabilities, which affect Lifesize Team, Lifesize Room, Lifesize Passport, and Lifesize Networker. Lifesize has a range of major clients – eBay, PayPal, and Netflix among them.

Exploitation of these flaws can give adversaries access to the products’ firmware. Kenin called the bug “trivial,” but it requires some hard-to-get information: Remote hackers will need the firmware code for their target devices, which can only be downloaded from the Lifesize website with a valid serial number for the specific product in mind. But firmware code isn’t necessary for attackers with physical device access, says Trustwave threat intelligence manager Karl Sigler.

These bugs affect the Lifesize support page, where users can troubleshoot issues and send log files for their devices. Attackers must log in to the support interface, which often isn’t difficult because many owners fail to change the default credentials that ship with Lifesize products.

“The vulnerability itself is in how they implement PHP in the Web interface to the devices,” Sigler explains in an interview with Dark Reading. “Unfortunately, the PHP code is pretty poor in how it’s implemented … you can basically execute any command you want on the device using that interface.”

It’s a “classic programming error,” Kenin wrote in a blog post on the findings. User input is passed without any sanitization to the PHP shell_exec function, which executes system commands as the Web server user. With no limit on the type of code that can be passed, attackers who know how to pass arguments to a PHP page can launch any commands they want.
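The same classic error is easy to reproduce outside PHP. Below is a hedged Python analogue of the pattern Kenin describes — user input concatenated into a shell command — alongside the standard fix. The filenames and function names are invented for illustration; this is not Lifesize’s code:

```python
import subprocess

# Vulnerable pattern: user input is interpolated into a shell string,
# exactly like passing unsanitized input to PHP's shell_exec().
def fetch_log_vulnerable(filename):
    return subprocess.run("cat /var/log/" + filename,
                          shell=True, capture_output=True, text=True)

# An attacker-controlled "filename" smuggles in a second command;
# everything after the ';' runs as the web server user.
payload = "app.log; id"

# Safe pattern: pass arguments as a list so no shell ever parses them,
# and reject anything that doesn't look like a plain filename.
def fetch_log_safe(filename):
    if "/" in filename or ";" in filename or filename.startswith("-"):
        raise ValueError("bad filename")
    return subprocess.run(["cat", "/var/log/" + filename],
                          capture_output=True, text=True)
```

Running `fetch_log_vulnerable(payload)` executes the injected `id` command; the safe version rejects the payload outright and, even for accepted names, never hands the string to a shell.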

With this vulnerability alone, intruders could gain a foothold on the network and execute commands on the target device to probe other machines on the network. But they also could achieve full persistence on the device with an unpatched privilege escalation bug, which was discovered in 2016 and affects the same pool of devices, Sigler says. Together, the two bugs would give someone full control of the appliance, access to media, and access to other devices.

A Patch Is En Route
Trustwave contacted Lifesize in November to begin the disclosure process, did not receive a response, and then re-established contact last month. Lifesize initially said it would not be releasing a patch because the affected devices were legacy and had end-of-life and end-of-sale dates.

It has since changed its position and will be offering a patch. In the meantime, customers using Lifesize 220 systems should contact support for a hotfix. There is no evidence the bugs have been exploited in the wild, says Sigler, and Trustwave promptly reached out to Lifesize so it could create a patch before someone takes advantage of the flaw.

“If we can find it, criminals can, too,” he notes.

For companies that decide to abandon support for their legacy systems, Sigler urges making customers aware at least one year ahead of time so they can pursue upgrade or replacement options. They should also make upgrade options available so users understand the risk they’re taking on by continuing to use legacy products. 

Trustwave is holding off on releasing its proof-of-concept for these vulnerabilities so users can apply the hotfix. Researchers plan to publish the PoC on Feb. 21, 2019. “At that time, we will release the PoC code to provide users, administrators, and network security professionals with the technical details and tools to validate whether they are still vulnerable,” Sigler says.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/security-bugs-in-video-chat-tools-enable-remote-attackers/d/d-id/1333819?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

4 Payment Security Trends for 2019

Visa’s chief risk officer anticipates some positive changes ahead.

Change that leads to improvement is usually good, in my opinion, and in my role at Visa, I anticipate some healthy changes ahead for the payment industry. Of course, no one can perfectly predict what is to come, but here is my take on four notable payment security trends for 2019.

Trend 1: Continued growth in E-Commerce and M-Commerce will drive the need for secure digital payments.
The volume of digital payments will likely continue to increase, driven, in part, by the growing comfort and habit among consumers with making purchases on their smartphones, tablets, computers, and IoT devices. Industry analysts predict that there could be more than 20 billion IoT devices by 2020. While chip technology has significantly reduced fraud in stores, we need a similar security defense for the digital channel. Tokens can be that solution.

Tokens replace the transmission of actual payment card numbers, so if a point-of-sale (POS) system, mobile device, mobile application, or network connection is compromised, payment card numbers are safe since they are not exposed. Tokens also include a dynamic value that changes with each transaction, similar to chip technology for in-person transactions.

With tokenization, merchants no longer have to store sensitive data, like primary account numbers, greatly reducing risk for people who store their card information on mobile devices, in mobile apps, or online with e-commerce merchants. Instead, merchants will be able to mask their customers’ primary account number with a token, which is protected by restrictions that render it useless to fraudsters if it were ever to be compromised.
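At its simplest, tokenization is a lookup table held by the token service provider: the merchant stores only a random surrogate, and only the vault can map it back to the real primary account number. A toy sketch of the idea — the class and method names are invented, and real token services additionally bind tokens to a device or merchant and add the per-transaction dynamic value described above:

```python
import secrets

class TokenVault:
    """Toy token service: maps random tokens to real card numbers (PANs)."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, pan):
        token = secrets.token_hex(8)   # random surrogate, no relation to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        # Only the vault (the payment-network / issuer side) can do this;
        # a merchant database full of tokens is useless to a thief.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# The merchant stores and transmits only the token:
assert token != "4111111111111111"
assert vault.detokenize(token) == "4111111111111111"
```

Because the token is random rather than derived from the PAN, a breach of the merchant’s systems exposes nothing the attacker can spend elsewhere.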

Trend 2: Password insecurity and consumer frustration will lead to increased adoption of biometrics.
Cardholder verification methods have evolved, including the optional removal of signatures in 2018. Many people would probably also agree that remembering passwords and PINs as a way to verify identity can be difficult and insecure. The use of biometrics for authentication for in-person and online shopping causes less friction for consumers and offers stronger identity verification for issuers and merchants.

A survey commissioned by Visa showed that 86% of consumers are interested in using biometrics to verify identity or to make payment, and more than 65% are already familiar with biometrics.

Last year, issuers piloted on-card biometrics programs in which a fingerprint scanner was built directly into a payment card because consumers still prefer the plastic card form factor to other available options. I expect more pilot programs to emerge in the year ahead.

Trend 3: Sharing of cyber threat intelligence will continue to chip away at attempted fraud.
Cybercriminals are increasingly organized and well-funded, backed by criminal organizations with deep pockets. The black market for cybercrime has also evolved to enable individuals of all skillsets to participate as long as they have the desire. This democratization means more attempts at exploiting known vulnerabilities will take place, so organizations have to be vigilant.

Although collaboration already exists among partners in the payment industry and law enforcement, I believe you will see more collaboration in the coming year because it yields results. Most notably, three senior members of the Fin7 cybercrime group – one of the largest known cybercrime organizations, responsible for stealing roughly $1 billion over the years from some well-recognized retail and hospitality companies – were arrested last year because of a public-private partnership between payment networks (including Visa), financial institutions, merchants, and law enforcement.

Trend 4: Advanced technology in risk-based decision-making will help reduce CNP payment fraud.
According to the latest figures from eMarketer, e-commerce was on track to represent only 11.9% of total global retail sales in 2018, with brick and mortar still the dominant retail channel. This means there is still much room for growth for e-commerce sales. However, we know cybercriminals follow the money, so what can we do to protect card-not-present (CNP) transactions?

This year the payment industry will be introducing advanced, risk-based decision-making for e-commerce to reduce CNP fraud using updated standards from EMV 3-D Secure. This will enable financial institutions to better assess whether a transaction is legitimate or fraudulent by examining 10 times more risk factors than before, including browser type, device type, and location of a transaction, among other factors to help decide whether step-up authentication is required. In addition, companies that facilitate digital payments will likely layer 3-D Secure with other advanced analytics technologies like artificial intelligence, to help analyze for fraud.
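The decision logic this enables can be caricatured as a weighted score over transaction signals, with step-up authentication triggered only above a threshold. The signals, weights, and threshold below are invented purely for illustration; real issuer risk models are proprietary and far richer:

```python
# Hypothetical risk signals an issuer might weigh for a CNP transaction.
WEIGHTS = {
    "new_device": 30,        # first purchase seen from this device
    "geo_mismatch": 25,      # billing country differs from IP country
    "unusual_amount": 20,    # amount far above the cardholder's norm
    "odd_hour": 10,          # transaction at an atypical time of day
}
STEP_UP_THRESHOLD = 40       # above this, ask for extra authentication

def risk_score(signals):
    return sum(WEIGHTS[name] for name, fired in signals.items() if fired)

def needs_step_up(signals):
    return risk_score(signals) > STEP_UP_THRESHOLD

# A returning customer on a known device sails through frictionlessly:
low = {"new_device": False, "geo_mismatch": False,
       "unusual_amount": True, "odd_hour": False}
# A new device in another country triggers step-up authentication:
high = {"new_device": True, "geo_mismatch": True,
        "unusual_amount": False, "odd_hour": False}
print(needs_step_up(low), needs_step_up(high))  # False True
```

The point of examining more factors is exactly this: most legitimate transactions score low and pass without friction, while the minority that look anomalous get challenged.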

In 1965, Gordon Moore of Intel predicted that the increase in computing power and the decrease in relative cost would occur at an exponential pace. The pace of digital innovation over the years has been fast, but so has the evolution of payment security and risk management. I’m optimistic about the future.


Ellen Richey joined Visa in 2007 and serves as vice chairman and chief risk officer. She leads risk management, including enterprise risk, settlement risk, and risks to the integrity of the payments ecosystem. She coordinates the company’s strategic policy initiatives, leads … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/4-payment-security-trends-for-2019/a/d-id/1333796?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Unlimited cryptocurrency? Zcash fixes counterfeiting flaw

Privacy-focused cryptocurrency Zcash fixed a flaw last year that could have allowed an attacker to produce counterfeit currency. The flaw, only revealed this week, existed for two years until the project’s technical team fixed it in October 2018.

Zcash is one of the cryptocurrency world’s most security-conscious projects. It is built on the Bitcoin Core code base, but adds new features for privacy, including shielded transactions. These transactions obscure both the payments and the sender and recipient Zcash addresses, and they also allow for the inclusion of encrypted notes. Participants in a shielded transaction can also disclose specific details to a third party for compliance purposes without revealing everything.

How can transactions be included on a blockchain and agreed upon by everyone if they are entirely secret? The answer lies in zero-knowledge proofs, or in Zcash’s case, zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs).

These use a common code known by both the person who wants to prove that a transaction exists, and the person verifying it. You can think of this as a public/private key pair, but the private key is effectively thrown away. It was vital that no one ever find that private key, because it could be used to generate counterfeit ZEC (the name for units of the Zcash cryptocurrency).
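The danger of a surviving setup secret can be seen in miniature with a Pedersen commitment — a far simpler primitive than a zk-SNARK, but one with the same failure mode: the scheme only binds you to a value if nobody knows the discrete log linking its two public generators. The toy sketch below (tiny parameters chosen for readability, not Zcash’s actual construction) shows how keeping that “toxic waste” lets you open one commitment as two different values, the small-scale analogue of proving a transaction that never happened:

```python
# Toy group: the subgroup of order q = 11 inside Z_23*.
p, q = 23, 11
g = 2                    # generator of the order-11 subgroup
x = 5                    # the setup secret: supposed to be destroyed
h = pow(g, x, p)         # public generator h = g^x; verifiers see only g, h

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Honest commitment to the value 7 with randomness 3:
C = commit(7, 3)

# Anyone who kept x can reopen C to a *different* value, here 4:
m_old, m_new, r_old = 7, 4, 3
r_new = (r_old + (m_old - m_new) * pow(x, -1, q)) % q

assert commit(m_new, r_new) == C   # same commitment, two "valid" openings
# Without x, finding a second opening is as hard as discrete log --
# which is why destroying the setup secret matters so much.
```

Zcash’s setup is vastly more elaborate, but the moral is identical: the verifying key is safe to publish only because the corresponding secret is presumed gone.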

To make the generation of this public/private key more secure, six trusted individuals computed it in multiple parts under highly secure conditions in October 2016. Each of these pieces forms part of the whole public key used when verifying a transaction.

The Zcash team took security very seriously during this computation process, even removing wireless and network chips from brand-new computers before using them for the calculations. They wrote the final parts of the public key to DVDs, discarding the private key parts, and then destroyed the computers used for the calculation.

Unfortunately, while the ceremony may have been secured, there was a vulnerability in the cryptographic algorithm itself. Zcash cryptographer Ariel Gabizon discovered the flaw on 1 March 2018 while attending the Financial Cryptography 2018 conference.

The algorithm created extra elements that were not needed, and were included by mistake. They enable someone to make a zero knowledge proof of one transaction look as though it is proving another. Because these parameters were included in the public transcript of the MPC ceremony, anyone with that transcript would have been able to create false proofs and therefore counterfeit any amount of Zcash.

Thus began the long process of fixing the problem, beginning with the removal of the public MPC transcript, which had been published online. The Zcash team didn’t believe that many people would have downloaded it, and exploiting the flaw would have taken some serious cryptography expertise. Nevertheless, the means to do so were in the public domain, so Zcash needed to be stealthy while it fixed the problem.

Rather than create an emergency hard fork (effectively halting the blockchain and starting a new one to solve the problem), the company decided to fix the issue in a forthcoming upgrade. The vulnerable version of the Zcash network that used the original public key was called Sprout. Zcash was already planning a new version of its network, called Sapling, with a new public key that would be generated by a new MPC ceremony. In a blog post about the whole affair, the team said:

The Zcash Company adopted and maintained a cover story that the transcript was missing due to accidental deletion. The transcript was later reconstructed from DVDs collected from the participants of the original ceremony and posted following the Sapling activation.

Sapling was launched on 28 October 2018, effectively fixing the problem. However, there were still issues for other projects. It said:

While Zcash is no longer affected, any project that depends on the MPC ceremony used by the original Sprout system that was distributed in the initial launch of Zcash is vulnerable.

There were two major projects affected: Horizen (formerly ZenCash) and Komodo. Zcash revealed the vulnerability to those teams without disclosing full details, and believes that they have both fixed the problem.

Of note is the fact that the four people initially aware of the problem — Gabizon, Zcash cryptographer Sean Bowe, CEO Zooko Wilcox and CTO Nathan Wilcox — didn’t even brief their own director of security until after the October upgrade, according to the timeline of the vulnerability, published this week. They subsequently informed the VP of marketing and business development, and then Horizen and Komodo. Only after that did they tell their own COO, followed by the five founding scientists behind the original Zcash papers. Then came the internal cryptography team and other employees.

This stealthy approach avoided a hard fork and according to Zcash prevented anyone else from exploiting the flaw, based on monitoring the blockchain for changes in the amount of Zcash held in its shielded pool, and looking for unusual patterns in the Zcash blockchain.

The whole thing was a narrow escape, Andrew Miller, board member of the Zcash Foundation, explained in a tweet.

All of which goes to show that when it comes to keeping cryptocurrency secrets, operational security is only one part of the puzzle. The other lies in using cryptography correctly, which is famously difficult to do.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FnNxNG-uog0/

Chrome extension warns users their login credentials have been breached

Google chose Safer Internet Day to announce Password Checkup, a Chrome extension designed to warn users when they enter a username and password the company has detected in a data breach.

As with Mozilla’s recently launched and very similar Firefox Monitor, Password Checkup offers a simple surface for the user to interact with, built on top of more complicated inner workings that take some explaining.

The simple bit

After downloading from the web store, Password Checkup installs like any other extension as an icon in Chrome’s address bar.

Every time the user logs into a website, the extension checks a hashed version of the password and username used against a database of four billion possibilities amassed by Google from real data breaches, warning if it finds a match.

This presents a choice: log into the website and change the password to something unique, after which Password Checkup will stop issuing warnings for that site, or unwisely ignore the warning by clicking ‘close’.

The warning will keep popping up for that site unless the user also clicks ‘ignore for this site’ after which re-enabling warnings requires the user to click on the address bar icon, select advanced settings, and hit ‘clear extension data’.

To avoid alert fatigue, it won’t warn people if it detects trivial passwords (‘123456’) and only activates when both the username and password are in its database.

Before diving into how Google does the credential checking, a blog accompanying the announcement by the company’s senior product manager Kurt Thomas made this interesting admission.

We already automatically reset the password on your Google Account if it may have been exposed in a third-party data breach – a security measure that reduces the risk of your account getting hacked by a factor of ten.

In other words, the company has been conducting password breach checking for Google and G Suite accounts for some time.

Google alluded to this as far back as 2014, which tells us that today’s expanded Password Checkup hasn’t come out of nowhere.

Since then, it’s been quietly building its own database of breached credentials as it finds them on the internet, which potentially overlaps with, but is not identical to, the Have I Been Pwned (HIBP) database used by Firefox Monitor.

The complicated bit

As with Firefox Monitor, an important issue is how Google checks the passwords and usernames entered by the user against its database without that data being leaked either to it or to anyone else hypothetically intercepting the query. Nor does it want to leak its database back to the user. Writes Google:

Password Checkup addresses all of these requirements by using multiple rounds of hashing, k-anonymity, private information retrieval, and a technique called blinding.

Indeed, Google says it was so concerned about user privacy, it consulted with cryptography engineers at Stanford University to help it make the system secure.

The result of that collaboration is that Google hashes new entries into its central database of breached passwords and usernames using the Argon2 hashing algorithm, encrypting the output using elliptic curve encryption. A 2-byte prefix of the hash is left unencrypted for indexing.

When a user logs into a website with the extension running, it performs an identical but local process on the user’s credentials, this time encrypting them using a secret key at the user’s end – all that is sent to Google is the anonymous 2-byte part of the hash.

Google then returns to the user an encrypted database of every username and password that shares the same prefix. The final search for a match between the user’s credentials and the database is done locally while keeping the user’s account details and the database secure.
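To make the prefix trick concrete, here’s a heavily simplified sketch in Python. This is not Google’s implementation: we use scrypt (from the standard library) as a stand-in for Argon2, skip the elliptic-curve blinding step entirely, and return the server’s bucket unencrypted – only the prefix-bucket idea survives the simplification.

```python
import hashlib

def credential_hash(username: str, password: str) -> bytes:
    # Stand-in for Google's Argon2 hash of the username/password pair;
    # Python's hashlib has no Argon2, so scrypt is used for illustration.
    return hashlib.scrypt((username + ":" + password).encode(),
                          salt=b"demo", n=2**14, r=8, p=1)

# --- Server side: breach database indexed by a 2-byte hash prefix ---
breached = [("alice", "hunter2"), ("bob", "letmein")]
db = {}
for user, pw in breached:
    h = credential_hash(user, pw)
    db.setdefault(h[:2], []).append(h)

def server_lookup(prefix: bytes):
    # The server only ever learns 2 bytes -- one of 65,536 buckets.
    return db.get(prefix, [])

# --- Client side: the final comparison happens locally ---
def check(username: str, password: str) -> bool:
    h = credential_hash(username, password)
    return h in server_lookup(h[:2])

print(check("alice", "hunter2"))  # True: these credentials were breached
print(check("alice", "s3cret!"))  # False
```

In the real protocol the bucket comes back encrypted and blinded, so neither side learns anything beyond the final yes/no answer.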

It’s like a more developed form of the k-anonymity principle Firefox Monitor uses to query the HIBP database hosted by Cloudflare.

Of course, this being Google, Password Checkup is bound to arouse some suspicion. It appears Google has gone to some lengths to allay these fears, but our advice to anyone who feels strongly is simply not to use it!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rlqON8_6ETo/

Anyone want to lay claim to the USB drive found in seal poo?

It’s hard to imagine anything good coming out of littering the landscape with USB thumb drives.

If you’re a crook, it can lead to getting busted for whatever incriminating stuff is on there.

If the drive is unencrypted, whatever’s on there is up for grabs.

If you’re a border agent at a US port of entry, you risk getting taken to task by the Office of Inspector General for fumbling the data you’ve copied when searching travelers’ seized devices.

Then again, it could become a snack

If you’re a seal, you could eat it and poop it out.

That can’t have been comfortable for whatever leopard seal passed a still-functioning USB stick, which was found in a lump of scat collected in New Zealand. However it felt for the seal, the photos on the drive got through their journey just fine, according to the National Institute of Water and Atmospheric Research (NIWA), which tweeted out one of their videos as it searches for the drive’s owner:

As NIWA tells it, the frozen slab of seal poo has been sitting in a freezer at the Crown Research Institute since November 2017.

The lump is about the size of two bread rolls. It was collected by one of a network of volunteers who search the country for leopard seal scat that they then send on to NIWA marine biologist Dr Krista Hupman and her team at LeopardSeals.org. It’s “as good as gold” to researchers, NIWA said, yielding up data about what the Antarctic predators eat, a little bit about their health, and how long they may have been in New Zealand waters.

This particular thumb-drive-enriched sample was collected by a local vet who was out checking on the health of a skinny leopard seal that was resting on Oreti Beach, Invercargill.

It’s a dirty job, but…

It stayed in the freezer for over a year. Then, in January, it was removed and defrosted by volunteers Jodie Warren and Melanie Magnan, whose jobs include pulling apart seal globs. NIWA quoted Warren:

[After defrosting a sample] we basically have to sift it. You put it under the cold tap, get all the gross stuff off, smoosh it around a bit and separate the bones, feathers, seaweed and other stuff.

In this particular case, “other stuff” included the intact USB drive – yet another of an increasing number of instances of plastics found to have been ingested by marine animals. Warren:

It is very worrying that these amazing Antarctic animals have plastic like this inside them.

The volunteers left it out to dry for a few weeks. Then, they plugged it in to see what it held.

That, of course, is exactly the opposite of what we advise here at Naked Security. It’s not safe to plug in random USB drives, be they “conveniently” scattered throughout a parking lot by who knows who or handed out as prizes at – SEVERE IRONY ALERT! – a cybersecurity expo.

But plug it in they did, and this is what they found: photos of sea lions and a video of a mother and baby sea lion frolicking in the shallows. Their only clue to the drive’s owner: the nose of a blue kayak seen in the foreground, as depicted in that video they tweeted.

NIWA is open to reuniting the thumb drive with its rightful owner, but there’s a cost involved: namely, they say they’ll swap it for more poo.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YtEbc8zD0K8/

KeySteal could allow someone to steal your Apple Keychain passwords

An 18-year-old German researcher has discovered and published a proof of concept he’s calling KeySteal: what he claims is a zero-day bug that could be exploited by attackers using a malicious app to drain passwords out of Apple’s Keychain password manager.

No fix is expected anytime soon. The researcher, Linus Henze, says he’s not sharing details with Apple – and yes, the company asked – in protest of the company’s invite-only/iOS-only bounties.

I won’t release this. The reason is simple: Apple still has no bug bounty program (for macOS), so blame them.

The bug affects even the most recent version of macOS, Mojave.

On Sunday, Henze posted this proof of concept video to YouTube:

It demonstrates extraction of all local Keychain passwords on macOS Mojave, and Henze says it works on earlier versions of the OS. It works without root or administrator privileges and without password prompts, he says.

A bit about Keychain

Apple’s Keychain is a password manager that’s built in to macOS and turned on by default, hopefully making it hassle-free for users to switch to using a manager to store the gazillion passwords people tend to have nowadays.

As Naked Security’s Maria Varmazis has explained, Keychain captures passwords that you enter on one device or website, stores them in an encrypted form in the cloud, and then automatically fills in your credentials the next time you need them. That way, you don’t have to remember your passwords or glue them to your monitor on a sticky note.

This isn’t the first time

Of course, having all your credentials in one, convenient place makes it crucial that the one place is as secure as possible. But, unfortunately, this isn’t the first time we’ve seen a password stealer prey on Keychain. In 2016, password-stealing malware was uploaded to the popular BitTorrent client Transmission not once, but twice.

And in 2017, security researcher Patrick Wardle demonstrated keychainStealer. That one got into Keychain passwords via an unsigned Mac app, then dumped the credentials into a plain text file.

It sounds like KeySteal is similar to keychainStealer in that it, too, could be exploited via malicious apps. In his video, Henze opens Keychain Access, where he’s stored fake versions of his passwords, such as those for his Facebook and Twitter accounts. Whatever app he created – again, he’s not sharing details – was able to read the contents of the keychain without a victim’s explicit permission and without any admin-level permissions. Henze:

Running a simple app is all that’s required.

How would the bad app get onto a Mac in the first place? Henze suggested that an attacker could tuck it into a legitimate app, or that it could be downloaded from a boobytrapped website.

Henze told Forbes that the attack can also grab tokens for accessing iCloud. Thus, an attacker could potentially also take over an Apple ID and download the keychain from the company’s servers.

Yet another bug-collecting kid

This is the second time in two weeks that a teenager has discovered a bug in Apple’s products. Just last week, we found out about a FaceTime eavesdropping bug that was reportedly discovered by a 14-year-old (and his mom).

Apple is working on a fix for the FaceTime bug and will reportedly give the 14-year-old, Grant Thompson, an as-yet-undetermined bounty via its iOS bug bounty program.

That program, which Apple set up in 2016, offers rewards up to $200,000 for vulnerabilities found in its software.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DBIlNIqBz6U/

Serious Security: Post-Quantum Cryptography (and why we’re getting it)

Traditional computers work with binary digits, or bits as they are called for short, that are either zero or one.

Typically, zero and one are represented by some traditional physical property – a hole punched in a tape, or no hole; a metal disc tilted left or right by an electric current; an electronic capacitor that holds a charge or not; a magnetic field pointing north or south; and so on.

Quantum computers aren’t quite like that – they work with qubits, which can essentially represent zero or one at the same time.

In theory, that makes it possible to perform calculations in parallel that would normally require a loop to do them one at a time.

The qubits represent what quantum physicists would call a superposition of all possible answers, tangled together through the mystery of quantum mechanics.

The idea, loosely speaking, is that for some types of mathematical calculation, a quantum computer can calculate formulas in N units of time that would otherwise take 2^N units of time to work out.

In other words, some problems that are conventionally considered to be exponential time algorithms would turn into polynomial time algorithms.

Multiplication versus exponentiation

To explain.

Exponents involve “raising something to the power of X”, and exponential functions grow enormously quickly.

Polynomials involve “multiplying X by something”, and even though polynomial functions can grow very fast, they’re much more manageable than exponentials.

Here’s a thought experiment: lay 50 sheets of office paper on top of each other to create a pile 50 times thicker than one sheet – about 5mm in total.

Now imagine taking the top sheet and folding it in half 50 times.

That many folds are impossible in practice, of course, but if you could do it, you’d end up with a piece of paper more than 100 million kilometres thick.

Two more folds and you’d be further out than the sun.
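The arithmetic behind the thought experiment is easy to verify (assuming a typical sheet thickness of about 0.1mm):

```python
SHEET_MM = 0.1   # approximate thickness of one sheet of office paper

# Stacking 50 sheets: thickness grows linearly (polynomial in the count).
pile_mm = 50 * SHEET_MM                 # 5 mm

# Folding 50 times: each fold doubles the thickness -- exponential growth.
folded_mm = SHEET_MM * 2**50

print(f"pile of 50 sheets: {pile_mm} mm")
print(f"after 50 folds: {folded_mm / 1e6:,.0f} km")  # over 100 million km
```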

As a result, many people are worried that quantum computers, if they really work as claimed and can be scaled up to have a lot more processing power and qubit memory than they do today, could successfully take on problems that we currently regard as “computationally unfeasible” to solve.

The most obvious example is cracking encryption.

If your security depends on the fact that a crook would need months or years to figure out your decryption keys, by which time he’d be too late, then you’re in trouble if someone finds a way to do it in seconds or minutes.

Code cracking made polynomial

Here’s the difference between exponential time and polynomial time in measuring the cost of cracking codes.

Imagine that you have a cryptographic problem that takes 1,000,000 loops to solve today if you have a 20-bit key, but by doubling the key to 40 bits you square the effort needed, so that it now takes 1,000,000,000,000 loops. (Actually, 2^40, which is approximately a million million, or one trillion.)

Imagine that you can do 1000 loops a second: multiplying the key size by 2 just boosted the cracking time of your cryptosystem one million-fold, from 1000 seconds (under 20 minutes) to a billion seconds (more than 30 years).

Now imagine that a quantum computer’s cracking time doubled along with the key length, instead of squaring – your added safety margin of 30 years just dropped back to an extra 20 minutes, so a key that you thought would keep your secrets for decades wouldn’t even last an hour.
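Here is that worked example in a few lines of Python. The “quantum” cost model is purely hypothetical – we simply scale the effort linearly with the key length so that both attacks agree at 20 bits:

```python
LOOPS_PER_SEC = 1000

def classical_secs(bits: int) -> float:
    # Brute force: 2^bits loops, i.e. exponential in the key length.
    return 2**bits / LOOPS_PER_SEC

def quantum_secs(bits: int) -> float:
    # Hypothetical attack where doubling the key merely doubles the work.
    return classical_secs(20) * (bits / 20)

for bits in (20, 40):
    c, q = classical_secs(bits), quantum_secs(bits)
    print(f"{bits}-bit key: classical {c / 3600:>9.1f} h, "
          f"hypothetical quantum {q / 3600:.1f} h")
```

For the 40-bit key that’s roughly 35 years classically, but barely half an hour in the hypothetical quantum model – exactly the collapse in safety margin described above.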

In other words, if reliable quantum computers with a reasonable amount of memory ever become a reality – we don’t know whether that’s actually likely, or even possible, but some experts think it is – then anything encrypted with today’s strongest algorithms might suddenly become easy to crack.

TEOTWAWKI?

Is this the end of the world as we know it, at least for cryptography?

Fortunately, the answer is, “No,” because there’s a catch.

If you loop through 256 possible solutions to a problem using a conventional algorithm and 16 of them are correct, you end up with a list of all 16 possibilities, thus reliably ruling out 240 of them.

From there, you can go on to dig further into the problem, knowing that you will eventually solve it because you’ll end up trying every valid path to the answer.

But with quantum computers, even though you can do a whole load of calculations in parallel because the qubits are in multiple quantum states at the same time, you can only read out one of the valid answers, at which point all the other answers vanish in a puff of quantum collapse.

You can calculate and “store” multiple answers concurrently, but you can’t enumerate all the valid answers afterwards.

If you’ve heard of Erwin Schrödinger’s Cat, you’ll recognise this problem.

Schrödinger’s Cat is a thought experiment in which a “quantum cat” hidden out of sight inside a box is simultaneously both alive and dead, because quantum cats can be in both states at the same time, provided you aren’t looking at them. But as soon as you open the box to see this amazing creature in real life, it immediately adopts one of the possibilities – so you actually have a 50% chance that opening the box will kill the cat instantly. You can’t figure out in advance if it’s safe – safe for the cat, that is – to open the box.

So if your quantum computer can do, say, 256 computations in parallel, you have to make sure that there’s only one correct answer that can emerge before you go on to the next stage of the algorithm, or you might have discarded the path that leads to the right answer later on.

In other words, you might be able to “solve” each stage of a problem much faster than before, yet hardly ever get the correct answer, meaning that you’re stuck with repeating your “fast” calculations over and over again until you get lucky all the way through and end up at the genuine solution.
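A toy simulation (our own construction, not a real quantum algorithm) shows how quickly those retries pile up. Each stage produces several equally valid intermediate answers, a “measurement” collapses to one of them at random, and only one choice per stage leads on to the true solution:

```python
import random

def run_pipeline(stages: int, valid_per_stage: int, rng) -> bool:
    # Measuring each stage collapses the superposition to one valid
    # answer at random; answer 0 is the only one that leads onward.
    return all(rng.randrange(valid_per_stage) == 0 for _ in range(stages))

def attempts_until_success(stages: int, valid_per_stage: int, rng) -> int:
    n = 1
    while not run_pipeline(stages, valid_per_stage, rng):
        n += 1
    return n

rng = random.Random(42)
trials = [attempts_until_success(stages=4, valid_per_stage=4, rng=rng)
          for _ in range(2000)]

# Four valid answers at each of four stages: only a (1/4)^4 = 1/256 chance
# of getting lucky end-to-end, so roughly 256 restarts on average.
print(sum(trials) / len(trials))
```

Each individual pipeline run is “fast”, but the expected number of restarts grows exponentially with the number of ambiguous stages – which is exactly the stumbling block described above.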

As a result of this stumbling block, not all encryption algorithms will be vulnerable to quantum cracking, even if a viable quantum computer is ever built.

Which algorithms are at risk?

Unfortunately, quantum computer calculations based on a process known as Shor’s algorithm just happen to provide super-quick solutions to various mathematical problems that we currently rely on heavily in modern cryptography.

Algorithms such as SHA-256 (used in hashing, for example to store passwords securely) and AES (used to encrypt files and hard disks securely) can’t be cracked with Shor’s algorithm.

But the algorithms that are widely used today for public key cryptography – the way we set up secure, authenticated web connections, for example – can be attacked quickly with a quantum computer.

When we encrypt data over a secure web connection, we usually use a non-quantum-crackable algorithm such as AES to keep the data secret, after agreeing on a random AES key first.

So far, so good, except that we use public key algorithms, such as RSA and elliptic curve cryptography (ECC), to do our initial AES key agreement, and those public-key algorithms can be attacked using Shor’s algorithm.

In other words, quantum computing can’t crack the AES encryption, but it doesn’t have to, because it can crack the AES key exchange instead, and then decrypt the AES data directly.
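To see why cracking the key exchange is enough, here’s a deliberately toy sketch: a tiny “RSA” modulus that any laptop can factor instantly stands in for what Shor’s algorithm would do to real key sizes, and a hash-based XOR keystream stands in for AES. Nothing here is cryptographically sound – it only illustrates the structure of the attack.

```python
import hashlib, secrets

# --- Toy "RSA" with laughably small primes (illustration only!) ---
p, q = 61, 53
n, e = p * q, 17                        # public key: (n, e) = (3233, 17)
d = pow(e, -1, (p - 1) * (q - 1))       # private key (Python 3.8+)

def keystream_cipher(key: int, data: bytes) -> bytes:
    # Stand-in for AES: XOR with a keystream derived from the session key.
    stream = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

# Sender: pick a random session key and wrap it with the public key.
session_key = secrets.randbelow(n - 2) + 2
wrapped_key = pow(session_key, e, n)
ciphertext = keystream_cipher(session_key, b"attack at dawn")

# Attacker: Shor's algorithm factors n efficiently; for a toy modulus,
# trial division does the same job.
pf = next(f for f in range(2, n) if n % f == 0)
d_cracked = pow(e, -1, (pf - 1) * (n // pf - 1))
key_cracked = pow(wrapped_key, d_cracked, n)

# The symmetric cipher is never attacked directly -- no need.
print(keystream_cipher(key_cracked, ciphertext))  # b'attack at dawn'
```

The XOR keystream and tiny modulus are stand-ins, of course; the point is that recovering the wrapped session key makes the strength of the symmetric cipher irrelevant.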

What to do?

Some experts doubt that quantum computers can ever be made powerful enough to run Shor’s algorithm on real-world cryptographic keys.

They suggest that there’s an operational limit on quantum computers, baked into physics, that will eternally cap the maximum number of answers they can reliably calculate at the same time – and this quantum limit on their parallel-processing capacity means they’ll only ever be any use for solving toy problems.

Others say, “It’s only a matter of time and money.”

Rather than simply bet that the first group are right, US standards body NIST is currently running a competition to design, analyse and choose a set of new algorithms for public key cryptography that are considered uncrackable even if a quantum supercomputer does get built.

The vast majority of people have never experienced a direct lightning strike on the building they’re in, and never will. Yet many countries have building codes that require protection against lightning – it’s easy to build in, so there’s little reason not to have it.

The project is very much like previous crypto competitions that NIST has run, with a similar motivation.

In the 1990s, NIST ran a contest to select AES, needed to replace the no-longer-quite-safe-enough DES algorithm.

In the 2000s, the competitive target was SHA-3, a cryptographic hashing algorithm that was standardised just in case someone finds a way to crack SHA-256, and we need a trustworthy replacement in a hurry.

This latest contest is known as the PQC Standardization Challenge, where PQC stands for Post-Quantum-Cryptography.

It’s been running since April 2016, when NIST started accepting proposals, and entered its first evaluation stage in November 2017, when NIST stopped accepting new algorithms for consideration.

On 30 January 2019, the project went into Round 2, with NIST announcing that 26 out of the original 69 submissions were through to what it calls the ‘semifinals’.

NIST expects the next stage of evaluation to take 12 to 18 months, after which there may be a third round, and then official standard algorithms will be picked.

Why so long?

Cryptanalysis is hard.

Peer review, unbiased assessment and a transparent process to choose open standards all take time, not least because deciding that a cryptographic algorithm doesn’t have holes is effectively proving a negative.

If you find a hole, then your search is over and your work is done; if you don’t, assuming you haven’t come up with a formal mathematical proof of security, then there’s always the chance that with a bit more effort you might find something you missed before.

Additionally, rushing the process would inevitably end up creating concerns that NIST, which is a US government organisation, was keen to approve something it knew it could crack but figured other countries couldn’t.

Lastly, NIST is trying to cover a lot of bases with its new standards, as NIST mathematician Dustin Moody explained:

“We want to look at how these algorithms work not only in big computers and smartphones, but also in devices that have limited processor power. Smart cards, tiny devices for use in the Internet of Things, and individual microchips all need protection too. We want quantum-resistant algorithms that can perform this sort of lightweight cryptography.”

In addition to considering the multitude of potential device types that could use the algorithms, the NIST team is focusing on a variety of approaches to protection. Because no one knows for sure what a working quantum computer’s capabilities will be, Moody said, the 26 candidates are a diverse bunch.

Who will win?

The new algorithms have a wide range of names, including some really funky ones…

…but we’re sure that the names will not have any influence on the outcome.

The 17 semifinalist algorithms for public-key encryption and key agreement are:

    BIKE
    Classic McEliece
    CRYSTALS-KYBER
    FrodoKEM
    HQC
    LAC
    LEDAcrypt
    NewHope
    NTRU
    NTRU Prime
    NTS-KEM
    ROLLO 
    Round5 
    RQC
    SABER
    SIKE
    Three Bears

The nine semifinalist algorithms for digital signatures are:

    CRYSTALS-DILITHIUM
    FALCON
    GeMSS
    LUOV
    MQDSS
    Picnic
    qTESLA
    Rainbow
    SPHINCS+

As to who will win – only time will tell.

Some of the algorithms proposed have been around for years, but never caught on because they just weren’t as convenient as RSA or ECC.

The McEliece algorithm, for example, was invented by US mathematician Robert McEliece back in 1978, but took a back seat to RSA, and more recently to ECC, because it requires cryptographic keys that are several megabits long.

RSA keys are typically a few thousand bits, and ECC keys just a few hundred, making the use of McEliece over a network connection much more cumbersome than the conventional alternatives.

But by the time an RSA-cracking quantum supercomputer is built, we’ll probably regard a few megabits of bandwidth as insignificant…

…and so we might as well get ready now.

Just in case.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qzG2T3bIVtY/