Password re-use is dangerous, right? So what about stopping it with password-sharing?

Two comp-sci boffins have proposed that websites cooperate to block password re-use, even though they predict the idea will generate “contempt” among many end users.

Their expectation is founded on experience: Troy Hunt’s HaveIBeenPwned is useful because so many people reuse passwords, and it currently claims to record more than five billion breached accounts.

As the University of North Carolina’s Ke Coby Wang and Michael Reiter write in their paper at arXiv, password re-use doesn’t just expose users: “preventing, detecting, and cleaning up compromised accounts and the value thus stolen is a significant cost for service providers as well”.

While single sign-on schemes like OAuth are moderately popular among users, the paper points out two issues holding them back.

First, if a user’s OAuth credentials are compromised (and they don’t run extra protections such as two-factor authentication), the attacker has access to all of the associated accounts.

Second, the paper says, “the identity provider in these schemes typically learns the relying parties visited by the user” – something recent privacy scandals cast in a poor light.

Even if users are hostile to being asked to live by a “one password per site” rule, the pair believe that hostility is worth braving to stop password re-use. The question is: how?

At the outline level it’s easy: a server where the user is registering a new account – the requester – asks other sites (responders) whether that individual has used the same password with them.

However, they write, that has to be done in a way that protects those passwords (the sites can only say “yes” or “no”, without handing around a password); the sites also have to identify the right user; and the scheme would have to avoid imposing excessive overheads on authentication servers.

Since any kind of Internet-transported directory lookup demands the directory be protected, the Wang/Reiter protocol proposes homomorphic encryption (the ElGamal scheme), meaning lookups can obtain their “hit/miss” decision from the database without ever decrypting it.
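
To make that concrete, here is a minimal sketch of one standard construction for such a lookup, using ElGamal’s multiplicative homomorphism: the responder compares its stored password hash against an encrypted query and returns a blinded ciphertext that decrypts to 1 on a match and to a random group element otherwise. This illustrates the general technique rather than Wang and Reiter’s exact protocol, and the group parameters are deliberately toy-sized.

```python
# Sketch of a hit/miss password-equality check via ElGamal's multiplicative
# homomorphism. Illustrative only: NOT the published Wang/Reiter protocol,
# and the group below is toy-sized (a real deployment needs a large group).
import hashlib
import secrets

p, q, g = 467, 233, 4        # toy safe-prime group: p = 2q + 1, g of order q

def hash_to_group(password: bytes) -> int:
    """Map a password into the order-q subgroup via an exponent."""
    e = int.from_bytes(hashlib.sha256(password).digest(), "big") % (q - 1)
    return pow(g, e + 1, p)

x = secrets.randbelow(q - 1) + 1     # requester's private key
y = pow(g, x, p)                     # requester's public key

def encrypt(m: int) -> tuple[int, int]:
    k = secrets.randbelow(q - 1) + 1
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(c1: int, c2: int) -> int:
    return (c2 * pow(c1, p - 1 - x, p)) % p

# Requester: encrypt the hash of the candidate password and send it over.
c1, c2 = encrypt(hash_to_group(b"hunter2"))

# Responder: homomorphically divide by its own stored hash, then raise the
# ciphertext to a secret random power r, blinding everything but equality.
stored = hash_to_group(b"hunter2")   # what this site has on record
r = secrets.randbelow(q - 1) + 1
c1, c2 = pow(c1, r, p), pow((c2 * pow(stored, p - 2, p)) % p, r, p)

# Requester: the plaintext is (H(pw)/H(stored))^r -- exactly 1 on a match,
# an unpredictable group element otherwise. A pure hit/miss answer.
print("password re-used" if decrypt(c1, c2) == 1 else "no match")
```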

Wang and Reiter believe that if a scheme like theirs were adopted by even a relatively small subset of major websites (say, YouTube, Facebook, WhatsApp, Gmail, Instagram, Tumblr, iCloud), users would end up with more passwords than they could recall. That would achieve the most important aim of the proposal: forcing punters to use password managers that get in their faces and firmly insist on complex, fresh passwords for every online service. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/07/blocking_password_reuse/

5 Ways to Better Use Data in Security

Use these five tips to get your security shop thinking more strategically about data.

Image Source: sdecoret via Shutterstock

The current silo-style organization, in which threat researchers review logs in one place, threat hunters work in another, and data scientists develop algorithms in yet a third, just doesn’t cut it anymore against today’s security threats.

Security teams need to get smarter about how they use and manage all types of data. That’s because the lines between pure infosec data (Web logs, threat intelligence) and other business data have become increasingly blurred. A piece of Web log data, for example, could just as easily be used to identify attackers as to optimize the customer experience. The same holds true for business data.

They need data science tools to detect threats, and the data scientists coming up with the algorithms have to work much more closely with threat hunters and threat researchers, experts say.

“I think security pros are becoming more like data scientists,” says John Omernik, distinguished technologist at MapR. “But we can’t have data science for data science’s sake: We have to apply these new algorithms to our everyday business problems. My hope is that infosec pros realize that to advance their careers and for the good of the industry they will have to learn more advanced data management and data science skills.

“I want to break down the walls that infosec pros put up and the onus is on the security practitioners to learn these new skills,” he says.

Joshua Saxe, chief data scientist at Sophos, says many infosec pros are using Coursera to learn data science. Saxe says while infosec pros need to understand data science, it’s unlikely that most of them will get to the point where they are actually data scientists. 

“Becoming a data scientist does take a lot of foundation and it’s hard to learn by yourself,” Saxe says. “I think people in infosec need to think more like scientists versus hackers, and while people who are data scientists are more apt to come from top universities, there’s always going to be a need for people who are not data scientists. Before you just had threat researchers; moving forward we’ll have the data scientists working with the threat researchers.”

Here are five ways experts say enterprise security teams can get smarter about how they use all types of data in their jobs. 

 

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/threat-intelligence/5-ways-to-better-use-data-in-security/d/d-id/1331687?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Serious Security: The GLitch “row hammering” attack

Remember row hammering?

It’s an old and well-known problem with computer memory – the sort of memory known as dynamic RAM, or DRAM for short.

DRAM is constructed as a silicon chip consisting of a tightly packed grid of minuscule storage capacitors arranged in electrically connected rows and columns.

Greatly simplified, row hammering means reading the same DRAM memory addresses over and over again, concentrating electronic activity in one tiny part of the chip for sufficiently long to interfere with nearby memory cells.

From time to time, some of those nearby cells may change their electrical charge, flipping them from 0 to 1 or from 1 to 0.

Concerns over row hammering have led to a series of recent changes and patches in most contemporary operating systems and commonly used apps, notably browsers.

These changes have made it harder and harder to cause bit-flips at all, let alone to provoke them at will in an exploitable way.

Well, sort of.

Dutch researchers at the Vrije Universiteit in Amsterdam noted that most of the mitigations against row hammering had focused on the interaction between your device’s CPU and its RAM.

But modern devices don’t just have a CPU, they typically have a range of auxiliary processors, too, notably including one or more GPUs, or graphics processing units.

GPUs are devoted to accelerating the sort of mathematical and bit-twiddling operations that graphics-intensive apps demand.

A journey of many steps

The researchers decided to see if they could use code running on the GPU in an Android device to pull off row hammering tricks that wouldn’t be possible via traditional programming techniques.

To make the problem even more specific and interesting, they also wanted to see if they could do all of this without requiring a rooted Android, and without relying on an already-installed malware app.

Their aim: pull off a row hammering attack in the browser, using nothing more than JavaScript served up in a web page.

They gave their research the trendy name GLitch, where the letters GL come from WebGL, short for Web Graphics Library.

WebGL is a programming library built into modern browsers that allows JavaScript code to work very closely with the GPU in order to improve performance in graphics-intensive online apps.

The GLitchers assumed that WebGL’s added features would bring added risks, so that’s what they went looking for.

They succeeded…

…but it was a journey of many steps.

First, bypass the cache

Reading from memory in JavaScript is easy – pretty much every time you access a JavaScript variable, it happens automatically.

For row hammering, however, you need precise control, where “read really means read”, forcing your program code to access the the actual silicon in the DRAM chip itself.

That’s harder than it sounds, because modern computers try to speed things up by sidestepping actual DRAM reads as often as possible by storing commonly-used values in special fast storage locations called cache memory.

In contrast, for row hammering to work, you need to create plenty of electronic load on the DRAM circuitry, which means reading the same physical memory area over and over again as fast as you can – without the cache trying to “improve” your performance.

On ARM chipsets, commonly used in mobile devices, it’s possible to empty the cache in order to remove its behaviour from the equation, but regular apps can’t do this – you have to be the Android kernel, or have a rooted phone.

The Vrije Universiteit team, however, figured out that the memory caching algorithm in the chipset they used in their research was easy to predict.

By accessing memory in a well-defined pattern, they could effectively clog the cache so that it no longer got in the way.
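
A toy model shows why that works. In a set-associative cache, touching enough extra addresses that map to the same cache set as your target pushes the target’s line out, so the next read of it must go all the way to DRAM. The geometry and LRU policy below are assumptions for illustration, not the real (undocumented) GPU cache.

```python
# Toy set-associative cache with LRU replacement, to illustrate eviction.
# Geometry is invented; the cache the researchers reverse-engineered
# differs, but the clogging principle is the same.
from collections import OrderedDict

WAYS, SETS, LINE = 4, 16, 64              # assumed cache geometry

class Cache:
    def __init__(self):
        self.sets = [OrderedDict() for _ in range(SETS)]

    def access(self, addr: int) -> str:
        s = (addr // LINE) % SETS         # which cache set this address uses
        tag = addr // (LINE * SETS)
        ways = self.sets[s]
        if tag in ways:
            ways.move_to_end(tag)         # refresh LRU position
            return "hit (served from cache, fast)"
        if len(ways) == WAYS:
            ways.popitem(last=False)      # evict the least-recently-used line
        ways[tag] = True
        return "miss (served from DRAM, slow)"

cache = Cache()
target = 0x1000
cache.access(target)                      # target is now cached
stride = LINE * SETS                      # addresses landing in the same set
for i in range(1, WAYS + 1):              # the "well-defined pattern": clog
    cache.access(target + i * stride)     # the target's set with WAYS lines
print(cache.access(target))               # miss -> the read hits DRAM again
```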

Second, keep track of time

To do row hammering, you need to figure out which memory addresses live where in the silicon, because you’re relying on concentrating your memory reads on one tiny part of the chip in the hope of interfering with the capacitor circuitry right next to it.

DRAM reads happen in bursts of adjacent bits, rather than one bit at a time, so you can tell when you’ve just read from two addresses that are physically close by, because the two reads can be completed in one burst.

That makes the reads happen a tiny bit faster than when you access two addresses that are far apart on the chip.

But to map out memory this way, you need to be able to keep track of time with astonishing precision – we’re talking about measurements down to nanoseconds, not just microseconds.

To picture how a nanosecond matches up to modern computer speeds, remember that 1GHz is shorthand for “one billion of whateveritis per second”, which means “one billionth of a second each”, and that one billionth of a second is a nanosecond (10⁻⁹ seconds). Even though a microsecond is one millionth of a second (10⁻⁶ seconds), thousands of machine code instructions can run in that time.
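
A quick back-of-the-envelope run through those numbers (the clock speed and latencies below are illustrative assumptions, not measurements from the paper):

```python
# Why row hammering needs nanosecond-resolution timers. Figures are
# illustrative assumptions, not measurements from the GLitch paper.
clock_hz = 2e9                               # a nominal 2 GHz processor
print(f"one cycle = {1 / clock_hz * 1e9:.1f} ns")         # 0.5 ns

near_ns, far_ns = 90, 110       # hypothetical DRAM latencies: same-burst
diff = far_ns - near_ns         # reads vs far-apart reads
print(f"difference to resolve: {diff} ns")                # 20 ns
print(f"at microsecond granularity: {diff / 1000} us")    # 0.02 -> invisible
```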

Measuring memory access speeds precisely is hard in today’s browsers because many of JavaScript’s official timer functions have purposely been degraded by browser makers, precisely to defend against row hammering.

These purposely inexact timers are implemented so that they are accurate enough for general use, but not precise enough for row hammering trickery.

But our intrepid researchers found a pair of timing functions specific to WebGL that hadn’t yet had their accuracy “smudged” for security purposes.

Thanks to the GLitch paper, browser makers are now deliberately reducing the accuracy of those timers, too (TIME_ELAPSED_EXT and TIMESTAMP_EXT), but the researchers also found other ways to write WebGL code that they claim provided the precision that they needed without using any special timer functions.

Third, map out the DRAM chip

If you can bypass the cache to perform “real” memory accesses, and you can time those accesses with sufficient precision, you’re in a position to map out the DRAM chip.

You don’t need to construct a detailed layout of the whole memory space – it’s sufficient to figure out when you have three physically adjacent rows of DRAM capacitors.

With access to three contiguous rows of capacitors in the chip, you can repeatedly and rapidly read data out of the outer two rows, creating sufficient electrical activity to give you a good chance of flipping one or more bits in the row of capacitors sitting in the middle.

This is called “double-sided row hammering”, for obvious reasons.
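
In miniature, the access pattern looks like the toy model below: activating a row occasionally disturbs its two physical neighbours. The flip probability is invented purely for illustration; real flip rates depend on the chip.

```python
# Toy model of double-sided row hammering. Python cannot trigger real DRAM
# disturbance errors; this only simulates the access pattern and its effect.
import random

ROWS, COLS = 8, 64
dram = [[0] * COLS for _ in range(ROWS)]
FLIP_PROB = 1e-4          # chance one activation flips a bit in a neighbour

def activate(row: int) -> None:
    """Model one row activation disturbing the two adjacent rows."""
    for victim in (row - 1, row + 1):
        if 0 <= victim < ROWS and random.random() < FLIP_PROB:
            dram[victim][random.randrange(COLS)] ^= 1   # disturbance error

# Victim row 4 sits between aggressor rows 3 and 5, which are activated
# alternately, as fast as possible, hundreds of thousands of times.
for _ in range(200_000):
    activate(3)
    activate(5)

print("bits flipped in victim row 4:", sum(dram[4]))
```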

Fourth, figure out the memory allocator

Getting the operating system to dish out memory corresponding to three adjacent DRAM rows isn’t as simple as asking for three identically sized memory blocks, one after the other.

In fact, with the Android memory allocator used to support the GPU that the researchers were targeting, three memory allocations in a row didn’t produce adjacent memory blocks at all.

But by studying the allocator, the researchers figured out how to construct a mixture of allocations and deallocations so that they reliably ended up with memory dished out from adjacent rows of capacitors inside the DRAM itself.
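
The real allocator dance was more involved, but the choreography can be sketched with a toy free-list: exhaust the pool so its behaviour becomes predictable, free a chosen physically contiguous triple, then reallocate to receive exactly those rows back.

```python
# Toy model of massaging an allocator into handing back three physically
# adjacent DRAM rows. The real Android GPU allocator is far more complex;
# only the alloc/free choreography is illustrated here.
import random

pool = list(range(16))                  # free physical rows, scrambled order
random.shuffle(pool)
alloc = pool.pop                        # allocator serves a LIFO free list

held = [alloc() for _ in range(16)]     # 1. exhaust the allocator entirely
held.sort()
a, v, b = held[4], held[5], held[6]     # 2. pick three adjacent rows we hold
for row in (b, v, a):                   # 3. free them in a known order
    pool.append(row)
triple = [alloc() for _ in range(3)]    # 4. reallocate: we get them back
print(triple)                           # -> [a, v, b]: aggressor, victim,
                                        #    aggressor, physically adjacent
```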

Once they had three adjacent rows of DRAM real estate allocated, plus high-speed direct read access to that physical memory, they had a “hammerable row” lined up that they could subject to an electronic pummelling in the hope of deliberately corrupting it.

Still not enough…

The power to corrupt memory at will, even if it’s only a single bit in a quasi-random location, can always be considered an exploit – at the very least, you could force an app or even the whole device to crash, thus causing a denial of service attack (DoS).

But the GLitchers went further than just a browser-driven DoS attack.

They figured out how to align their “hammerable row” with a JavaScript array in such a way that random bit flips in the array might, with a bit of luck, give them read and write access to memory in ways that JavaScript is supposed to prevent.

That means not only data leakage by reading from memory that’s supposed to be private, but also the possibility of remote code execution (RCE) by poking machine code into protected memory and then running it.

What to do?

Previous row hammering attacks were often considered irrelevant on mobile devices.

Either you needed to install an app that was already authorised to pull off the very sort of attack that row hammering might help you to achieve, or you needed to wait ages to have any hope of success, during which time other activity on the device would probably disrupt the attack and send you back to square one.

But a “Glitch” attack, claim the researchers, can be triggered in a reasonable time using only JavaScript code running in a browser – no need to have a rooted device or a malicious app already installed.

Does that mean Android is broken and you should stop using it?

No.

So far, the researchers have a limited set of attacks that work under controlled circumstances on an outdated device of their own choosing, running an old version of Android.

Nevertheless, GLitch reminds us that when you add features and performance – whether that’s building GPUs into mobile phone chips, or adding fancy graphics programming libraries into browsers – you run the risk of reducing security at the same time.

If that happens, it’s OK to back off a bit, deliberately reducing performance to raise security back to acceptable levels.


Chip image courtesy of https://zeptobars.com/en/contacts.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rSgfjZliLMg/

Cookie code compromise caper caught and crumbled

NPM, the biz responsible for the Node Package Manager for JavaScript and Node.js, on Wednesday caught a miscreant trying to tamper with web cookie modules, and managed to exile the individual and the associated code before significant harm was done.

It’s a good sign for the code registry, which over the past few years has had to clean up several security snafus tied to its increasingly popular collection of libraries.

In January, NPM mistakenly removed a developer’s account owing to a failure to review sanctions suggested by an automated anti-spam system. And in August last year, NPM missed a typosquatting attack that went on for two weeks.

This time, the process was a bit more surgical.

“Early May 2nd, the NPM security team received and responded to reports of a package that masqueraded as a cookie parsing library but contained a malicious backdoor,” security engineer Adam Baldwin disclosed in a blog post. “The result of the investigation concluded with three packages and three versions of a fourth package being unpublished from the NPM registry.”

Javascript photo via Shutterstock

Unlucky Linux boxes trampled by NPM code update, patch zapped

READ MORE

The backdoored package was called getcookies. Two other packages were involved, express-cookies, which included getcookies, and http-fetch-cookies, which included express-cookies (and therefore getcookies). The fourth package, mailparser, incorporated http-fetch-cookies in three sequential versions.

The backdoor was designed to scan HTTP request headers, looking for a command string. Were someone to set up a web application using the Express.js framework and one of the compromised modules, an attacker could submit a remote command as a web request and potentially execute arbitrary code under the same privilege level as the application.
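
For illustration, the pattern looks something like the sketch below, written in Python rather than the original JavaScript for consistency with the other examples here. The header marker is invented and the dangerous eval is defanged to a print; the point is how a module that honestly parses cookies can also hide a command channel in ordinary-looking headers.

```python
# Defanged sketch of the getcookies-style backdoor pattern: a "cookie
# parsing" helper that also scans request headers for an attacker's command
# string. Header name, marker, and payload handling are all invented.
MARKER = "gCoMmAnD"     # hypothetical magic prefix only the attacker knows

def parse_cookies(headers: dict[str, str]) -> dict[str, str]:
    # The advertised, legitimate-looking functionality...
    jar = {}
    for part in headers.get("Cookie", "").split(";"):
        if "=" in part:
            name, value = part.split("=", 1)
            jar[name.strip()] = value.strip()
    # ...plus the hidden trigger: any header value carrying the marker is
    # treated as a command to run with the application's privileges.
    for value in headers.values():
        if value.startswith(MARKER):
            payload = value[len(MARKER):]
            print("backdoor would execute:", payload)   # real thing: eval()
    return jar

parse_cookies({"Cookie": "session=abc123", "X-Data": "gCoMmAnDid"})
```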

As a result of its investigation, NPM removed the account of dustin87, associated with the malicious code, and unpublished getcookies, express-cookies and http-fetch-cookies. It also removed three versions of mailparser (2.2.3, 2.2.2, and 2.2.1) that incorporated the unsafe http-fetch-cookies module and reset the NPM tokens tied to mailparser to prevent the appearance of more unauthorized variants.

The mailparser module, said Baldwin, has been deprecated – meaning it’s no longer recommended and should be removed from production code when possible – but it still gets downloaded 64,000 times a week.

Playing the long game

In a phone call with The Register, Baldwin said he believed the attack represented an effort to inflate the download counts of express-cookies and http-fetch-cookies, to make them appear popular enough that developers would choose to use one in conjunction with Express.js, a popular JavaScript framework for making web applications.

The scheme involved including http-fetch-cookies in mailparser but not actually using it, in order to inflate its apparent popularity and boost its legitimacy.

Baldwin speculates that the attacker somehow obtained credentials for mailparser and used those to publish versions with the compromised code.

Baldwin claims no packages published to the NPM registry incorporated the malicious modules in a way that would have allowed the backdoor to function.

However, if a developer created an Express.js application and included one of the malicious modules, that application could be accessible through the backdoor.

“We believe that the attacker likely would have used another application to create payloads to be used with this backdoor,” said Baldwin in an email to The Register.

“The goal of these backdoored modules was to look legitimate enough to be included in Express-based applications; once deployed, the attacker then could have remotely executed commands on those systems through this backdoor.”

Aware that it attracts troublemakers, NPM has been hardening its security posture. Last month, it acquired ^Lift Security, the group that developed the Node Security Platform and included Baldwin. Last week, it rolled out npm@6, which includes security features like alerts when attempting to install vulnerable modules and an “audit” command.

Baldwin explained that NPM’s registry now has almost 700,000 packages and almost 10 million users, making it a magnet for those seeking to distribute malware.

“We’ll continue to see people attempt to publish software like this,” he said. “The thing to remember here is that anybody can publish some piece of code to the NPM registry, but this is not a guarantee that others will use it – or, even if they use it, that they will use it in a way that leads to a malicious outcome.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/04/cookie_compromise_caper_caught_and_crumbled/

Parisa Tabriz to Keynote Black Hat USA 2018

Parisa Tabriz, Director of Engineering and Project Zero manager at Google, will keynote Black Hat USA 2018. On Wednesday, August 8, Tabriz will present “Optimistic Dissatisfaction with the Status Quo: Steps We Must Take to Improve Security in Complex Landscapes”, detailing ways to mitigate evolving security threats and to put security improvements into practice at scale. Read the complete abstract here.
 

Article source: https://www.darkreading.com/black-hat/parisa-tabriz-to-keynote-black-hat-usa-2018/d/d-id/1331725?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Spectre Returns with 8 New Variants

Researchers have discovered eight new variants of the processor vulnerability.

It’s hard to keep a bad vulnerability down. In this case, Spectre is back in eight new varieties that promise to keep alive the conversation on the best way to defend against a vulnerability that exists at the most basic level of a computing system — and how to close a hole that is an integral part of modern computing’s high performance.

German security website Heise.de reported yesterday that unnamed researchers have found a series of new vulnerabilities that take advantage of the same issues reported in the original Spectre and Meltdown incidents. According to the site, each of the vulnerabilities will have its own number in the CVE directory, as part of a block of entries Intel has reserved for just such a possibility.

Of the eight, four have been designated “high risk” and four “medium risk,” with all apparently having results similar to the original vulnerabilities — all, that is, except one.

The one exception would allow an exploit to go much farther in its boundary crossing than the original. In the new version, a malicious process launched in one virtual machine could read data from the cache of another virtual machine or from the hypervisor. This behavior significantly increases the potential impact of a breach.

“The basic problem is that, as part of the operating system, we’ve taken great pains to isolate the memory space of process 1 from the memory space of process 2. This security domain is destroyed by the time you get into the cache,” says Satya Gupta, CTO and co-founder of Virsec. That domain destruction is already in process by the time Spectre exploit code executes, though, because of the way that Spectre operates.
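
Gupta’s point can be sketched in miniature. The toy model below simulates the Spectre mechanism rather than performing it: the “speculative” out-of-bounds read is an ordinary indexing operation and the cache is just a set, but it shows the shape of the leak, in which secret-dependent data leaves a footprint in the cache that survives after the mis-speculated result is discarded.

```python
# Toy simulation of the Spectre idea: not a real exploit, just the shape of
# the leak. A bounds check is "speculatively" ignored, the out-of-bounds
# byte steers one memory access, and that access's cache footprint remains.
secret = b"K"
public = b"AB"
cache = set()                    # which of the 256 probe lines got cached

def victim(i: int) -> None:
    # Architecturally, i >= len(public) should return nothing. During
    # speculation, the CPU may run the body before the check resolves:
    data = (public + secret)[i]  # out-of-bounds read of the secret byte
    cache.add(data)              # models probe_array[data * 64] being loaded

victim(len(public))              # mis-speculated call; result is discarded
# Attacker times reads of all 256 probe lines; the fast one is the secret.
for byte in range(256):
    if byte in cache:
        print("leaked byte:", chr(byte))
```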

“Spectre and Meltdown are components of something else,” says Mike Murray, vice president of security intelligence at Lookout. “If you give me an account on your laptop you should worry about Spectre,” he says, adding, “but if you don’t, and you’re not going to any sketchy Web pages that happen to be exploiting it or things like that, then the odds of me being able to use it are pretty small.”

“It’s a local privilege escalation more than anything else,” Murray says, though that may do little to soothe fears of a vulnerability so deeply embedded in the system.

According to Heise.de, the website reporting these new variants, Intel has patches in process and will release the patches in two waves, the first in May and the second in August. These patches will be accompanied by patches from Microsoft and other operating system publishers.

Gupta says the ultimate fix to the problem involves a change to one of the processor’s core components. “The smallest possible part to change is the instruction cache,” he says. “It’s agnostic now and it loses the linkage between process and instruction. The processors need to have an idea of which process is executing — memory isolation is really important.”

In some ways, the issue may be even more basic than the silicon. “Complexity breeds opportunity for vulnerability. And we just keep making the systems more complex,” says Murray.

Contacted separately, Gupta and Murray were each asked whether they thought that there would be more Spectre-like vulnerabilities announced in the future. Each began their answer with a laugh before continuing, “Oh, yes.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/vulnerabilities---threats/spectre-returns-with-8-new-variants/d/d-id/1331723?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Report: China’s Intelligence Apparatus Linked to Previously Unconnected Threat Groups

Multiple groups operating under the China state-sponsored Winnti umbrella have been targeting organizations in the US, Japan, and elsewhere, says ProtectWise.

Multiple previously unconnected Chinese threat actors behind numerous cyber campaigns aimed at organizations in the United States, Japan, and other countries over the past several years are actually operating under the control of the country’s state intelligence apparatus.

An investigation by security vendor ProtectWise has shown that the groups operating under the so-called Winnti umbrella since at least 2009 share a common goal, common infrastructure, and often the same tactics, techniques, and procedures.

Many of the Winnti umbrella’s initial attack targets have been software and gaming companies. Winnti threat groups also have shown a proclivity to attack smaller organizations with the intent of finding and stealing code-signing certificates, which they have then used to sign malware directed against higher-value targets.

Like almost every other threat actor, members of the Winnti umbrella typically have tended to use phishing lures to gain initial access to a target organization’s networks, says ProtectWise. The groups have then used publicly available tools like Metasploit and Cobalt Strike or custom malware to expand their access and maintain a presence on the compromised network.

ProtectWise’s report is based on its review of data from active compromises at multiple organizations, its analysis of external infrastructure used in attacks, and other telemetry.

The data set shows that the Winnti umbrella is a loosely organized collection of China-based threat actors that are currently being actively supported by intelligence agencies in the nation. Over the years, threat groups under the Winnti umbrella have been referred to by names such as BARIUM, GREF, PassCV, and Wicked Panda. Another member of the Winnti umbrella, with the alias LEAD, has for some time been associated with attacks on online gaming, telecom, and high-tech organizations.

“While inside knowledge of their operations is quite limited from any external research such as this, we can still assess with confidence that the various groups are functioning in a singular direction for a greater overall mission,” says Tom Hegel, senior threat researcher at ProtectWise. Evidence suggests that Chinese intelligence agencies are supplying all the necessary resources to members of the Winnti umbrella, including finances and human skills.

Though each group within the Winnti umbrella has operated individually, the lines between them are often blurred because of the manner in which they have shared infrastructure, tactics, and tools. Winnti itself is a name that Kaspersky Lab created in a 2013 report on the group and its targeting of organizations in the gaming industry to steal code-signing certificates, source code, technical documentation, and digital currencies.

In 2014, Novetta published a report on the group — which the vendor calls Axiom — and its links to China’s intelligence organizations. The report cited Axiom’s potential connections to Operation Aurora, the China-attributed campaign disclosed in 2010 that targeted major US tech firms, including Google and Yahoo. Other entities that have reported at various times about Winnti’s operations include Trend Micro, Citizen Lab, and Cylance.

ProtectWise says its report is the first to make public the previously unreported links that exist between the multiple Chinese state intelligence operations and the fact that they were all operating under the aegis of the Winnti umbrella.

“The various operations conducted by the Winnti umbrella and the associated entities vary depending on the target and their importance,” Hegel says. The earlier-stage attacks against gaming and software companies seek internal tooling and code-signing certificates.

Based on ProtectWise’s research and from other public research, the early-stage attacks appear to be a preparation for later attacks on more valuable targets.

“Attacks against high-value targets tend to be seeking information beneficial to the Chinese government, such as attacks on journalists, which present a threat to the Chinese government,” he says.

Many of the group’s targets have included high-tech companies, almost certainly because of the valuable data such firms can possess.

The Winnti umbrella’s long-term goals appear to be political in nature. Some of its campaigns, for instance, have involved mimicking various Chinese-language news websites that normally are unavailable from within the country because of their content, Hegel says. A recent campaign involved sending phishing lures with the theme of strengthening sanctions against North Korea to unknown targets.

Attacks against some high-value technology companies have involved a political agenda as well, but ProtectWise is not at liberty to share specific details, Hegel says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/report-chinas-intelligence-apparatus-linked-to-previously-unconnected-threat-groups/d/d-id/1331724?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Breakthrough pushes Quantum Key Distribution beyond 500km

Cambridge physicists have come up with a new way to build a secure Quantum Key Distribution (QKD) network that could extend the technology’s range beyond 500km for the first time.

Key distribution is the process of sharing cryptographic keys between people who want to communicate securely. QKD is key distribution that attempts to use the fundamental properties of quantum mechanics to make eavesdropping impossible.

Currently, the distance a point-to-point QKD network can communicate is a few hundred kilometres at best, at which point data rates plunge towards the ‘emptying a swimming pool with a straw’ rate of (and you’re not misreading this) 1.16 bits per hour at 400km.

Hypothetically, a quantum repeater could be used to boost the signal, but these are not yet technically feasible thanks to their complex physics.

Another option, demonstrated recently by the Chinese Micius programme, is to send photons through the air from satellites to a network of ground stations.

The team at the Cambridge Research Laboratory of Toshiba Research Europe (CRL) has come up with a more down-to-earth alternative called Twin-Field QKD (TF-QKD) designed to dovetail with conventional communications networks in use today.

Instead of sending photons between two points a long way apart, each station sends photons to a closer central location, boosting bitrates (and therefore secure key rates) to around 100 bits per second for the same channel loss.
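
Putting the two rates side by side makes the improvement vivid. The arithmetic below simply combines the figures quoted above, using a 256-bit key as a representative symmetric key size:

```python
# Time to agree one 256-bit key at the two secure-key rates quoted above.
key_bits = 256
ptp_rate = 1.16 / 3600       # point-to-point QKD at 400 km: 1.16 bits/hour
tf_rate = 100.0              # TF-QKD: roughly 100 bits/second
print(f"point-to-point: {key_bits / ptp_rate / 86400:.1f} days")  # ~9.2 days
print(f"TF-QKD:         {key_bits / tf_rate:.1f} seconds")        # ~2.6 s
```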

TF-QKD allows this to be done securely without the complexity of a repeater, while guaranteeing the security of the channel. The equipment needed for the intermediary would be simple, the team said.

Dr Andrew Shields of Toshiba’s Cambridge Research Lab and co-author of the paper on TF-QKD, told Naked Security:

It doesn’t measure their bit value, it just tells us if they’re the same or different. The station then reports it to Alice and Bob [the communicating parties]. This intermediate point doesn’t have to be in a special location and can even be operated by an adversary.

But why does QKD matter anyway?

At some point, a future quantum computer running Shor’s famous algorithm could pose a threat to the public key encryption that is central to today’s internet.

According to NIST, that could happen by 2029 in the worst case, which would give us a decade to come up with alternatives.

This could conceivably drive security back to symmetric encryption ciphers not based on integer factorisation, such as AES, which can be made more resistant to quantum computers by increasing their key length and boosting hashing output length.
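
The reasoning behind “increase the key length”: Grover’s algorithm searches an n-bit keyspace in roughly 2^(n/2) quantum operations rather than the classical 2^n, so doubling a symmetric key’s length restores the original security margin.

```python
# Grover's algorithm turns exhaustive search over 2**n keys into roughly
# 2**(n/2) quantum operations -- the standard argument for moving from
# AES-128 to AES-256 in a post-quantum world.
for n in (128, 256):
    print(f"AES-{n}: classical ~2^{n} ops, Grover ~2^{n // 2} ops")
```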

In such a world, the job of QKD would be to distribute these keys securely backed by an absolute guarantee that should the keys be intercepted – i.e. the photons read – that will become known.

Unfortunately, decades of slow development mean that QKD has plenty of sceptics – ‘it’s the future of secure communication and always will be’ to paraphrase this view.

Its point-to-point protocols are also seen as unsuitable to serious use on the internet, not to mention the possibility that it might be expensive to implement.

A couple of years ago, Britain’s NCSC put out a glum document pointing out how far QKD has to go before it can be used in anger.

Reckoned the NCSC:

Post-quantum public key cryptography appears to offer much more effective mitigations for real-world communications systems from the threat of future quantum computers.

Easier said than done of course – which is why TF-QKD could be helpful insurance come the day when a quantum computer makes life more complicated for everyone.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iA9ri5sbdWs/

We’re Doing Security Wrong!

When you simply heap technology onto a system, you limit your hiring pool and spread your employees too thin. Focus on your people instead.

If your company produces gadgets that improve cybersecurity, brace yourself. No matter how much we spend on your next great solution, it won’t be good enough. There’s still one thing we must give more attention, funding, and resources: humans. An organization can implement firewalls, intrusion-detection and intrusion-prevention systems, and artificial intelligence defenses, but they still won’t conquer the human factor, the most vulnerable aspect of a cybersecurity plan.

The technological revolution has introduced a plethora of advanced solutions to help identify and stop intrusions. However, data leaks and breaches persist. Shouldn’t all this technology stop attackers from gaining access to our most sensitive data?

Historically, Stuxnet, WannaCry, and the Equifax breach are examples of weaknesses in the flesh-and-bone portion of a security plan. These attacks could have been prevented had it not been for human mistakes.

Stuxnet is the infamous worm (allegedly) authored by a joint US-Israeli coalition, designed to slow the enrichment of uranium by Iran’s nuclear program. The worm exploited multiple zero-day flaws in industrial control systems, damaging enrichment centrifuges. So, how could this have been prevented?

The Natanz nuclear facility, which Stuxnet infiltrated, was air-gapped. Somebody had to physically plant the worm. That requires extensive coordination, but personnel at Natanz should have been more alert. Also, Stuxnet was discovered on systems outside of Natanz, and outside of Iran. Somebody likely connected a personal device to the network, then connected that device to the public Internet. While Stuxnet went from inside to outside, the inverse could easily have happened by connecting devices to internal and external networks.

WannaCry and its variants are recent larger-scale examples. Microsoft had issued patches for the SMBv1 vulnerability, eventually removing the protocol version from Windows. Still, some 200,000 computer systems were infected in over 150 countries worldwide, to the tune of an estimated $4 billion in ransoms and damages. If human beings had updated their systems, we might never have added “WannaCry” to our security lexicon. At least we can use it as an example of the costs of negligence in cybersecurity curricula, right?

The infamous Equifax breach resulted in the compromise of the personal data of millions of people. The culprit? Equifax blamed a vulnerability in Apache Struts that had already been patched by the Apache Software Foundation. The attack was described as “relatively easy”, meaning it would not have required a highly skilled technician to execute it and compromise the personal data of some 145 million people. (PATCH!)

The lesson here? We care too much about gadgets and logical control systems, and not enough about our personnel. Increasing investment in people rather than in more technology aids retention. The IT industry sees a lot of turnover. These decisions are not always about money, but salary is a consideration. Also, if companies were more willing to train their own employees, they would benefit from new skill sets without hiring new personnel. Employee retention enhances familiarity with systems. Experienced personnel can quickly address issues. Time is money; downtime is loss of money.

Shallow End of the Hiring Pool
In every conversation I’ve had with hiring managers, and at every cybersecurity conference I’ve attended, there has been a common theme concerning the state of IT/IS: there is not enough talent in the hiring pool. But I’d argue that their organizations haven’t shown enough willingness to train their own, provide their employees with the opportunity to learn and grow, and hire people they can teach. Too often, job boards are littered with postings chasing unicorns — mythical IT experts with a mix of experience that couldn’t exist. If organizations would invest in their own people, they could mold someone into that magical creature rather than banging their head against the job board walls in search of a candidate that doesn’t exist.

Invest in training and awareness for your day-to-day operational employees. Give them incentive to pay attention to the sender, content, and links of an email. Give them a sense of ownership of security, so they challenge an unfamiliar face in the hallway. Teach them techniques that attackers will use to socially engineer them, how they can smell a rat, and how they can thwart the attackers’ efforts.

I’m not saying you should do away with all technical control systems, of course. However, when you continue to heap technology onto a system, you limit your hiring pool, and you’re spreading your employees too thin. Don’t create Jacks-and-Jills-of-all-trades. Create masters of yours.

Gary Freas is a cybersecurity professional with 12 years of industry experience in information, installation, and personnel security; cyber threat intelligence; cybersecurity systems engineering; and risk management. After 10 years of naval service as a fire controlman …

Article source: https://www.darkreading.com/careers-and-people/were-doing-security-wrong!/a/d-id/1331639?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Half a million pacemakers need a security patch

The US Food and Drug Administration (FDA) last month approved a firmware patch for pacemakers made by Abbott (formerly St Jude Medical) that are vulnerable to cybersecurity attacks and are at risk of sudden battery loss.

Some 465,000 patients are affected. The FDA is recommending that all eligible patients get the firmware update “at their next regularly scheduled visit or when appropriate depending on the preferences of the patient and physician.”

Pacemakers are small devices used to help treat irregular heartbeats. The cybersecurity vulnerabilities were found in Abbott’s radio frequency- (RF-) enabled implantable cardioverter defibrillators (ICDs) and cardiac resynchronization therapy defibrillators (CRT-Ds).

The issues with St Jude Medical’s devices have been playing out for a while. In September 2016, the company sued Internet of Things (IoT) security firm MedSec for defamation after it published what St Jude said was bogus information about bugs in its equipment.

In January 2017, five months after the FDA and the Department of Homeland Security (DHS) launched probes into claims that St Jude Medical’s pacemakers and cardiac monitoring technology were vulnerable to potentially life-threatening hacks, security consultants at Bishop Fox confirmed the validity of MedSec’s findings. The company begrudgingly stopped fighting and litigating and issued security fixes.

The January updates were for the Merlin remote monitoring system, which is used with implantable pacemakers and defibrillator devices.

At the time, cryptographic expert Matthew Green, an assistant professor at Johns Hopkins University, described the pacemaker vulnerability scenario as the fuel of nightmares.

He put out a series of tweets on the matter, including these messages:

The summary of the problem is that critical commands: shocks, device firmware updates etc. should only come from hospital programmer 5/

Unfortunately SJM didn’t use strong authentication. Result: any device that knows the protocol (including home devices) can send these 6/

And worse, they can send these (potentially dangerous) commands via RF from a distance. Leaving no trace. 7/

Specifically, the devices use 24-bit RSA authentication, he said: “No, that’s not a typo.” Beyond the weak authentication, St Jude also included a hard-coded 3-byte fixed override code, Green said.

I’m crying now.
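
The arithmetic behind Green’s alarm is stark: a 24-bit credential space is small enough to enumerate outright, and a hard-coded 3-byte override code is a single guess. The guess rate below is hypothetical, purely to show the scale:

```python
# How weak is 24-bit authentication? The whole keyspace can be enumerated.
# The guess rate is a hypothetical figure, just to show the scale.
keyspace = 2 ** 24
print(f"{keyspace:,} possible credentials")            # 16,777,216
rate = 1_000_000                                       # guesses/sec (assumed)
print(f"exhausted in {keyspace / rate:.0f} seconds")   # ~17 seconds
```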

To date, there have been no known reports of patients being harmed due to security vulnerabilities, either in the Merlin systems or in the ICDs and CRT-Ds covered in the most recent security advisory. Here’s the list of those devices:

  • Current
  • Promote
  • Fortify
  • Fortify Assura
  • Quadra Assura
  • Quadra Assura MP
  • Unify
  • Unify Assura
  • Unify Quadra
  • Promote Quadra
  • Ellipse

Fortunately, the update doesn’t entail open-heart surgery, though it does require an in-person trip to a healthcare provider’s office. It can’t be done from home via Merlin.net. The firmware update takes three minutes, during which the pacemaker will operate in backup mode, pacing at 67 beats per minute.

Abbott said that with any firmware update, there’s always a “very low” risk of an update glitch. Based on the company’s previous firmware update experience from an August 2017 pacemaker firmware release and the similarities in the update process, Abbott said that installing the updated firmware on the ICDs and CRT-Ds could potentially result in the following malfunctions:

  • Discomfort due to backup VVI pacing settings
  • Reloading of the previous firmware version due to an incomplete update
  • Inability to treat ventricular tachycardia/fibrillation while in back-up mode
  • Device remaining in back-up mode due to an unsuccessful update
  • Loss of currently programmed device settings or diagnostic data

The FDA said that nothing bad happened to patients in that August 2017 firmware update. About 0.62% of the devices experienced an incomplete update and remained in the back-up pacing mode, but in all of those cases, the devices were restored to the prior firmware version or received the update successfully after Technical Services intervened.

The FDA says that an update to the programmer should reduce the frequency of these minor update issues. Also, a small percentage (0.14%) of patients complained of diaphragmatic or pocket stimulation, or general discomfort for the time that the device was in the back-up pacing mode. There haven’t been any cases reported to Abbott where the device remained in back-up mode following an attempted firmware update.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JKJnLFCGhkM/