
Blockchain’s New Role In The Internet of Things

With next gen ‘distributed consensus’ algorithms that combine both security and performance, organizations can defend against DDoS attacks, even those that leverage IoT devices

On October 21st, a botnet built with a new strain of malware called Mirai took down a huge portion of the Internet by launching a DDoS attack on Dyn, a company that controls much of the Internet’s domain name system (DNS) infrastructure. Affected sites included Twitter, the Guardian, Netflix, Reddit, CNN and many others in Europe and the US.


The Mirai botnet is unique because it is largely made up of Internet of Things devices such as digital cameras and DVR players. Because it has so many Internet-connected devices to choose from, attacks from Mirai are much larger than previous DDoS attacks. Dyn estimated the attack involved “100,000 malicious endpoints” at a strength of 1.2Tbps. For comparison, that makes the October 21st attack roughly twice as powerful as any similar attack on record.

Since then, source code for Mirai has been published as open source in hacker forums, and the techniques have been adapted in other malware projects, making it more likely that we will see these attacks increase in frequency and size as other threat actors learn how to harness Mirai-like IoT botnets. While the Mirai botnet was used in this case to attack the DNS system, this form of attack will certainly be used against company servers directly, and traditional approaches to DDoS defense are simply inadequate for this emerging threat. 

It is very difficult to protect a single target against an army of attackers. Instead, we must find a way to divide and conquer. If we have multiple targets, then an attacker must divide their forces, with each group being less powerful than the whole. Distributed consensus technology replaces a central server with a community of peers. A would-be attacker can no longer target a single server, but must instead successfully attack at least one third of all peers in the network.

Distributed consensus algorithms (such as blockchain and hashgraph) enable communities of people – strangers who are unknown and untrusted – to securely collaborate with each other over the Internet without the need for a trusted third party. In other words, they enable the development of multi-participant, general-purpose applications that execute without the need for a central server. Each member of the community runs a local copy of the application. The consensus algorithm ensures that all instances of the application accurately reflect changes made by all members of the community, while ensuring no single member can cheat.
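
To make the one-third threshold concrete, here is a toy sketch in Python. It is illustrative arithmetic only, not Swirlds’ or any other vendor’s actual algorithm: classic Byzantine fault-tolerant consensus survives f compromised peers out of n only when n ≥ 3f + 1, which is where the “one third of all peers” figure comes from.

# Toy illustration of the Byzantine fault-tolerance bound n >= 3f + 1.
# Not any vendor's real consensus algorithm; just the arithmetic behind
# "an attacker must compromise at least one third of all peers".

def max_tolerable_faults(n_peers: int) -> int:
    """Largest number of compromised peers a BFT network can survive."""
    return (n_peers - 1) // 3

def network_survives(n_peers: int, n_compromised: int) -> bool:
    """True if the honest peers can still reach agreement."""
    return n_compromised <= max_tolerable_faults(n_peers)

for n in (4, 10, 100):
    f = max_tolerable_faults(n)
    print(f"{n} peers: tolerates {f} compromised ({f/n:.0%} of the network)")

# A single central server is the degenerate case n = 1:
print(network_survives(1, 1))   # False: one successful attack is fatal

In a network of 100 peers an attacker must subvert 34 of them before agreement can be disrupted; against a single central server, one success is enough.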

Until recently there have been two categories of consensus technology from which to choose:

1) Public networks, like Bitcoin and Ethereum, that have poor performance and are grossly inefficient (requiring Proof of Work), and

2) Private (Permissioned) solutions such as Hyperledger Fabric, and non-Proof of Work Bitcoin or Ethereum (in which case the nodes take turns publishing a block of transactions).

Public networks have better security but poor performance in terms of transaction throughput and consensus latency, which is the time it takes for members of the community to come to an agreement on the order of transactions in the application. These performance constraints dramatically limit the number of applications that can practically use the technology. For example, Bitcoin blockchain can process only 7 transactions per second, and it takes the community an hour to agree on the order of those transactions. There aren’t many applications that can use a database with those performance characteristics.

Some users have opted to relax the security requirements of the distributed consensus algorithm, and restrict its use to private networks of known and trusted participants. This improves performance (achieving hundreds or low thousands of transactions per second, with consensus latency measured in seconds), but at the expense of security. If even a single member of the network is compromised, then the attacker can disrupt the flow of transactions for the entire network (i.e., launch a DoS attack).

A new generation of distributed consensus products in the pipeline from a variety of vendors (including Swirlds) provides a third category from which to choose: algorithms with both high security and high performance. For many applications, this combination of security and performance enables a new defense against DDoS attacks, even those that leverage IoT devices.

To demonstrate the point, let’s consider a popular online game, World of Warcraft (WoW).  The current system has a central server that ensures all players have a common view of the world and can’t cheat. However, a DDoS attack on the server can disrupt the game for everybody.  Also, the integrity and availability of the game can be compromised by a malicious insider or a remote attacker. 

A distributed version of WoW would provide a layer of defense against those types of attacks. In distributed WoW, each player is a node in a network, and the consensus technology ensures a common view of the world and prevents cheating. There is no central server to attack. A DDoS attack might be able to disrupt one (or even a few) players, but the game continues to be available for the rest of the community.    

Bitcoin blockchain introduced us to the modern era of distributed consensus, but it only provides a taste of what’s possible. The emerging, next generation of distributed consensus technology offers a unique combination of performance and security. This enables a new category of DDoS defense. Eventually every industry will have networked, distributed applications, and widespread adoption will fundamentally change the security of the Internet.


Mance Harmon is an experienced technology executive and entrepreneur with more than 20 years of strategic leadership experience in multi-national corporations, government agencies and high-tech startups, and is co-founder and CEO of Swirlds. Prior experience includes serving …

Article source: http://www.darkreading.com/iot/blockchains-new-role-in-the-internet-of-things/a/d-id/1328239?_mc=RSS_DR_EDT

Survey: Most Attackers Need Less Than 12 Hours To Break In

A Nuix study of DEFCON pen testers shows that the usual security controls are of little use against a determined intruder

If the methods used by penetration testers to break into a network are any indication, a majority of malicious attackers require less than 12 hours to compromise a target. Four in ten can do it in barely six hours.

Those are the just-released findings from a survey of 70 penetration testers that Nuix North America conducted at the DEFCON Conference last year.

Nuix asked the pen testers about their attack methodologies, their favorite exploits, the security controls that deter them the most and the ones that are easiest to bypass.

The results showed that most pen testers find it almost trivially easy to break into any network that they take a crack at, with nearly 75% able to do it in less than 12 hours. Seventeen percent of the respondents in the Nuix survey claimed to need just two hours to find a way through.

Troubling as those numbers are likely to be for enterprises, what is sure to be even more challenging are the claims by survey respondents about how quickly they can find and siphon out target data. More than one in five said they needed just two hours, about 30% said they could get the job done in between two and six hours, while almost the same number said they needed between six and 12 hours.

About one-third of the pen testers claimed that they have never been caught so far while breaking into a client network and accessing the target data, while about 36% said they were spotted in one out of three tries.

The survey results show that organizations face a more formidable challenge keeping attackers at bay than generally surmised, says Chris Pogue, chief information security officer at Nuix.

“You are squared off against a dynamic enemy whose technical capabilities are likely far beyond that of your security staff, and whose tool development has far outpaced your own,” he says.

Some of the results in the Nuix survey are similar to those discussed by Rapid7 in a recent report summarizing its experience conducting penetration tests for clients. According to Rapid7, in two-thirds of the engagements, clients did not discover the company’s penetration tests at all. An organization’s inability to detect a penetration test, which often is noisy, rapid-fire, and of short duration, makes it highly unlikely it would detect an actual attack, Rapid7 noted at the time.

The experience of the pen testers in the Nuix survey suggests that malicious attackers like to use freely available open source tools and custom tools more than exploit kits or other malware tools purchased in the Dark Web. Barely 10% of the survey respondents said they used commercial tools like Cobalt Strike or the Core IMPACT framework to break into a client network, while an even smaller 5% said they used exploit kits.

The methods employed by pen testers are representative of the tactics, techniques and procedures used by criminal attackers, so enterprise security managers would do well to pay attention to the results, says Pogue. “The only real difference is motivation,” he notes.

Often the main variance between a pen tester and someone that attacks a network with malicious intent is a piece of paper representing a contract with a client. Consequently, the methods employed by pen testers are a reliable indicator of the methods that criminals are likely to use as well, he says. “The way I see it, this is the only way to truly understand the efficacy of your security countermeasures and detection capabilities,” Pogue says.

Significantly, more than one in five of the attackers claimed that no security controls could stop them. Among those controls that the remaining pen testers found the most effective were endpoint security tools and intrusion detection and prevention systems. Just 10% found firewalls to be a problem.

Also interesting was the fact that the survey respondents claimed they used different attack methodologies for almost every new attack, meaning that countermeasures focused on indicators of compromise have only limited effect. “Attackers are as creative as they need to be,” Pogue says. “When specific attack patterns start to get detected or blocked, then they switch things up slightly, and use that methodology until it gets detected or blocked.”

The message for defenders is that threats are not static and they need to be prepared for and able to detect the different methods criminals can employ to break in, he says.

“If an organization cannot detect a multitude of attack patterns, some of which they have likely never seen before, they are already lagging several paces behind their adversaries.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/threat-intelligence/survey-most-attackers-need-less-than-12-hours-to-break-in/d/d-id/1328256?_mc=RSS_DR_EDT

Russia Top Source Of Nefarious Internet Traffic

Honeypot research from F-Secure shows majority of illicit online activity coming from IP addresses in Russia – also where ransomware is a hot commodity.

A global research honeypot tracked what appeared to be a large amount of reconnaissance traffic coming from Russian IP addresses in the second half of last year: some 60% of the overall volume of traffic came from Russia.

The second-closest region was the Netherlands, with 11% of the overall traffic, followed by the US (9%); Germany (4%); and China (4%), according to data culled from F-Secure’s global honeypot network, which provides a snapshot of just where attack attempts, recon, and other nefarious activity is originating – and targeting.

F-Secure found that close to half of the traffic was searching for exposed HTTP and HTTPS ports, most likely seeking out vulnerable software to exploit in order to spread malware or compromise the targeted device. These systems can then be used as proxies for other attacks, for instance. Simple Mail Transfer Protocol (SMTP) ports were also high on the recon radar screen.
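
Mechanically, this kind of honeypot is simple: listen on commonly probed ports and record who connects. Below is a minimal Python sketch of that idea – an illustration only, not F-Secure’s actual setup – where the logged source addresses are the raw material behind traffic-origin statistics like those above (binding ports below 1024 normally requires elevated privileges).

# Minimal honeypot sketch: log connection attempts on commonly probed
# ports. Illustrative only; a real honeypot emulates the services and
# feeds the logs to a central collection pipeline.

import socket
import threading
from datetime import datetime, timezone

PORTS = [80, 443, 25]   # HTTP, HTTPS and SMTP: the ports named above

def listen(port: int) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))   # ports below 1024 need root
    srv.listen(5)
    while True:
        conn, (ip, _) = srv.accept()
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} port {port} probed from {ip}")
        conn.close()   # just log and drop the connection

for port in PORTS:
    threading.Thread(target=listen, args=(port,), daemon=True).start()

input("Listening; press Enter to stop.\n")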

“With Russia being the largest source of this traffic, it’s no surprise that most countries in the world were targeted by Russian IPs, including Russia,” F-Secure said in its newly published annual threat report. “The US was the most frequent target of both global and Russian traffic.”

Most ransomware activity comes out of Russia as well, noted Mikko Hypponen, chief research officer for F-Secure in a press briefing during the RSA Conference last week in San Francisco. There are more than 100 ransomware gangs, he said, and some operate out of Ukraine.

Russian-speaking cybercrime gangs and individuals account for 80% of ransomware families seen in the last 12 months, Kaspersky Lab data shows. The ransomware attackers range from skilled developers to script kiddies, all cashing in on the ease and relative anonymity of cyber-extortion attacks that now come in easy-to-use kits. Some are making tens of thousands of dollars a day via ransomware attacks, according to Kaspersky Lab.

Hypponen expects ransomware incidents to get worse. “One of the things making it worse is that it’s becoming so decentralized. There are so many different gangs making money on ransomware, and they are competing,” he said.

They have sophisticated application interfaces that help them track their campaigns and how successful they are; some even provide customer support to help the victim get bitcoin for the ransom payment. He showed one campaign’s interface indicating a conversion rate of 16%.

Other security experts last week echoed Hypponen’s prediction that ransomware would escalate, and get uglier: not only are the attackers getting more aggressive and strict about payment deadlines, but some attack a victim multiple times, even after he or she pays up. “Traditional blackmailers know if someone pays once, they are probably going to pay again,” said James Lyne, global head of security research at Sophos Labs.

Look for ransomware attacks that also steal, damage, or wipe data, so even if a victim pays the ransom, his or her data is still at risk or lost forever.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: http://www.darkreading.com/threat-intelligence/russia-top-source-of-nefarious-internet-traffic-/d/d-id/1328255?_mc=RSS_DR_EDT

Healthcare data breaches ‘mostly caused by insiders’

Targeting healthcare organizations remains about as easy as shooting fish in a barrel. The industry has one of the lowest rates of data encryption and the security culture is severely lacking. Employee education remains poor, leading to a lot of costly mistakes in how patient data is handled.

Naked Security has written about the problem at length, and Sophos has done polling that makes the issues described above all too clear.

The latest evidence comes in the form of two reports: one from Big Data analytics firm Protenus, the other from IBM Managed Security Services (MSS).

Both reports show that the number of privacy violations in healthcare organizations remains high, and that clueless or malicious insiders are a huge problem if left unchecked.

The insider problem

Protenus said insiders committed 59.2% of patient health record privacy violations in January 2017, and that the figure stayed well above 43% for all of 2016. From the report:

With 2016 averaging at least one health data breach per day, 2017 is off to a similar start with 31 breach incidents, averaging one data breach for every day of the month. There were slightly fewer incidents disclosed in January than in December (36 incidents), and dramatically fewer affected patient records (1,431,449 vs 388,307).

Protenus’ analysis is based on incidents either reported to HHS or disclosed in the media or other sources last month. Information was available for 26 of those incidents. The largest single incident involved 220,000 patient records, a result of a third-party breach involving insider wrongdoing, the company said.

The majority (59.2%) of breached patient records – 230,044 records – were attributable to insider incidents. Five of nine insider incidents were the result of insider wrongdoing.  For the four insider-wrongdoing incidents for which we have numbers, 226,798 patient records were affected. Four other insider incidents were the result of insider error, affecting 3,246 patient records.

Meanwhile, a healthcare data security report from IBM Managed Security Services (MSS) said insiders were responsible for 68% of all network attacks targeting healthcare data in 2016. Almost two thirds of those attacks were the result of people using misconfigured servers and falling victim to phishing scams.

Why do attackers continue to sharpen their focus on healthcare? IBM MSS explained in the report:

It’s because the exploitable information in an electronic health record (EHR) brings a high price on the black market. In the past, malicious vendors have touted an EHR as being worth $50,300 but IBM X-Force researchers have found that these days, with health records often combined for sale in the underground markets with other personal/financial data, the price may be even higher.

Jonathan Lee, Sophos’s UK healthcare sector manager, said too many breaches are still caused by the inadvertent actions of users:

Therefore it is vitally important that users are educated about the cyber-risks they face and the safeguards in place to protect them.

They should also understand their individual cyber security responsibilities, be aware of the consequences of negligent or malicious actions, and work with other stakeholders to identify ways to work in a safe and secure manner, he said.

Five tips to turn the tide

Late last year, Lee wrote a post in the Sophos Blog outlining five things healthcare organizations can do to better protect patient data. The tips, which focused heavily on National Health Services organizations in the UK, cover the insider threat head on. Here’s a summary of his recommendations:

1. Know your risk

The first thing to do is carry out a thorough risk assessment so that you know what threats you face, understand your vulnerabilities and assess the likelihood of being attacked. It’s only when that is complete that you can go on to the next stage of creating an integrated cybersecurity plan.

2. Follow best practice

Health organizations – and others, too – all too often spend money on cybersecurity solutions but then fail to properly deploy them. Make sure you’re following the recommendations for best practice when deploying your defenses.

3. Have a tried and tested incident response plan

Work on the assumption that an attack will happen and ensure you have a tried and tested incident response plan that can be implemented immediately to reduce the impact of the attack.

4. Identify and safeguard your sensitive data

It’s almost impossible to protect all your data all of the time, so identify the information you keep that would harm your organization if it were stolen or unlawfully accessed and implement suitable data security procedures to ensure it is appropriately protected.

5. Educate employees

With so many breaches being the result of something an employee has done – inadvertently or otherwise – part of your cybersecurity plan must be to make sure all your staff know the risks they face and their responsibilities. Educating them is your job, and should be part of your plan.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QsL9CjXilzs/

News in brief: San Diego plans data-gathering smart city upgrade; Amazon says no; judge says no

Your daily round-up of some of the other stories in the news

San Diego’s ‘smart city’ upgrade adds cameras to streetlights

We’re pretty much used to CCTV cameras in our cities, but San Diego in California is going a step further with plans to install cameras, microphones and sensors on more than 3,000 streetlights.

These devices will, according to the Times of San Diego, start rolling out in July and provide data to help “locate gunshots, estimate crowd sizes, check vehicle speeds and other tasks”.

It’s part of a $30m “smart city” scheme to upgrade the city’s lighting, which, rather surprisingly, was approved without any discussion about potential privacy issues. According to Jen Lebron of the mayor’s office, “it’s anonymous data with no personal identifiers”. Apparently the video won’t be detailed enough to identify individuals.

Privacy groups may yet beg to differ, especially as the city says it plans to make the data it gathers in this way available to businesses.

Amazon resists Echo search warrants

Amazon is resisting efforts by police to get the company to hand over audio recordings from an Echo device that investigators think might be able to offer evidence in a murder case.

James Andrew Bates has pleaded not guilty to the November 2015 murder of Victor Collins, who was found dead in a hot tub in Arkansas. Amazon was issued with two search warrants seeking to find out if the device had recorded any audio around the time of the victim’s death.

In its response this week, Amazon argues that prosecutors have failed to establish it was necessary for the company to hand over the data, saying it has to weigh its customers’ privacy against such a request.

Judge denies swoop on fingerprints

A judge in Chicago has refused to allow a provision in a search warrant that would have required people working in a building to provide their fingerprints “on to the Touch ID sensor of any Apple iPhone, iPad, or other Apple brand device in order to gain access to the contents of any such device”.

The details of the prosecution are sealed, says Ars Technica, but the documents do reveal that the warrant is part of an investigation into child sex abuse images.

Refusing the application, Judge David Weisman notes that this isn’t a case like that of the San Bernardino iPhone, with just one device the focus of the investigation. Instead, notes the judge, this is a new strategy of “forced fingerprinting”, which the government says is “the language we are making standard in all of our search warrants”.

The judge adds: “Essentially, the government seeks an order from this Court that would allow agents executing this warrant to force persons at the subject premises to apply their thumbprints and fingerprints to any Apple electronic device recovered at the premises.”



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/70t6M-rV1QA/

Drones can steal data from infected PCs by spying on blinking LEDs

Imagine you’re sitting in an office building at night, the only light coming from the blinking of your hard drive’s LED. Imagine a drone, hovering outside your window, peering in.

Are you being snooped on by a peeping Tom?

Maybe. Or, as researchers have demonstrated, you might not be of interest at all to whoever’s operating the quadcopter. Rather, they could be reading the blinking LED lights as if they made up a form of optical Morse code, intercepting strings of data that malware might have caused the system to encode and transmit.

Such data can stream at fast enough rates to include encryption keys, keystroke logging, or text and binary files, the researchers say.

Researchers at Ben-Gurion University’s Negev Cyber Security Research Center this month demonstrated this type of espionage technique: one that can defeat an air gap. An air gap is a network security measure in which highly sensitive computers are physically isolated, kept away from both the public internet or from unsecured local area networks and the hackers who could get at their data.

The researchers also released a video demonstrating the technique.

Granted, for such an attack to work, the hackers would first need to infect a targeted system with malware. As the researchers describe in their paper (PDF), such malware could be used to control the LED on a system’s hard disk drive, turning it off and on at a rate of up to 5,800 blinks per second: faster than human eyes can detect.
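
To get a feel for the transmitting side, here is a simplified Python sketch of on-off keying, the sort of modulation such a covert channel can use (the paper describes its own encodings; this is an illustration, not the researchers’ code). It computes a blink schedule at the reported maximum rate rather than driving real hardware.

# Illustrative sketch only: encode bytes as an on-off keying (OOK) blink
# schedule. The researchers' malware drives a real HDD LED; here we just
# compute the schedule at the reported maximum of 5,800 blinks/second.

SYMBOL_SECONDS = 1 / 5800   # one bit per blink interval

def bits(data: bytes):
    """Yield the bits of `data`, most significant bit first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def blink_schedule(data: bytes):
    """Return (led_on, seconds) pairs: LED on = 1, LED off = 0."""
    return [(bit == 1, SYMBOL_SECONDS) for bit in bits(data)]

secret = b"key0"
schedule = blink_schedule(secret)
print(f"{len(secret)} bytes -> {len(schedule)} blink intervals, "
      f"{len(schedule) * SYMBOL_SECONDS * 1000:.2f} ms of blinking")

At that rate, a 128-bit encryption key maps to 128 blink intervals – roughly 22 milliseconds of flickering – which is why keys are a realistic payload even for a slow or distant receiver such as a drone-mounted camera.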

For air-gapped systems, that dirty work would have to be carried out by an insider: somebody who could infect a system with a USB or SD card, for example (I can’t help wondering if an attacker with that much access would need to resort to these kinds of elaborate exfiltration tricks).

After the machine’s infected, there are a number of ways an attacker could pick up on the encoded LED blinks. Hiding a camera internally would work, as would a camera carried by a malicious insider – as long as the receiving camera has a line of sight to the front panel of the transmitting, infected computer.

The drone approach works, too, as the researchers showed. A camera installed on a drone that’s flown to a spot where it has line of sight with the front panel of the transmitting computer – such as near the window – can pick up data, though they said that this type of receiver is relevant for leaking a small amount of data, including encryption keys.

This type of attack is called a side-channel attack. Such attacks exploit a system’s physical parts – be they fans, LED lights, stray sounds, or WiFi emissions – as opposed to targeting a system through weaknesses in its algorithms or by brute force.

In other words, you don’t directly try to eavesdrop on the actual process or procedure that’s your target in a side-channel attack. Instead, you listen in to the side effects it causes and figure out what’s going on indirectly.

We’ve written about these attacks quite a lot.

How to fend off the peeping drones

Fortunately, some of the countermeasures against the blinking-LED attacks are not only cheap; they’re also low-tech. You could just disconnect a computer’s LED light, for one thing, or just cover it with black tape. You could also pick up window film that shields computers from electronic eavesdropping.

Then again, you could always just move the air-gapped PC away from the windows, or to a room that doesn’t have windows at all.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lJ26P5ozapw/

Bang! SHA-1 collides at 38762cf7f55934b34d179ae6a4c80cadccbb7f0a

Back in the old days, “going online” meant calling up with your modem at 300 bits per second and interacting slowly with a basic command prompt (sometimes BASIC in the literal sense).

Noise on the line and other electrical interference could easily turn zeros into ones and vice versa, causing corruption in your session, such as BANANA turned into BANAMA, MANGO into MaNGO, or even ONLINE into +++ATZ.

A common way to spot obvious errors automatically was by using a checksum, quite literally calculated by checking the sum of all the numeric values of every byte in the message.

Checksums were used as far back as the 1960s and 1970s because they were quick to calculate: even the most underpowered processors usually had an ADD or ACCUMULATE instruction that could efficiently maintain a running total of this sort.

But checksums were error prone.

If you swap round any of the bytes in a message, the checksum remains unchanged because A+B = B+A.

Likewise, two errors can easily cancel out, for example if BANANA gets corrupted into CANAMA, because (A+1) + (B-1) = A+B.
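
Both failure modes are easy to reproduce with a few lines of Python (an illustrative sketch of the literal byte-sum checksum described above):

# The literal "add up all the bytes" checksum, truncated to one byte as
# early protocols typically did.

def checksum(msg: bytes) -> int:
    return sum(msg) & 0xFF

print(checksum(b"BANANA"))                          # 161
print(checksum(b"NABANA"))                          # 161: swaps go unnoticed
print(checksum(b"BANANA") == checksum(b"CANAMA"))   # True: +1 and -1 cancel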

Enter the CRC

CRCs, short for cyclic redundancy checks, were a better solution, using a comparatively simple series of bit-shifts and XOR operations to maintain a bigger accumulator that wasn’t so easily fooled by double errors or swapped bytes.

CRC-32 produces a 32-bit (4-byte) checksum – today, the term checksum is used metaphorically, not literally to mean that the bytes were added together – designed to do a much better job of detecting accidental errors such as those caused by mistyping or by modem corruption.
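
Python’s standard zlib module happens to include a CRC-32 implementation, so it is easy to confirm that the corruptions which fooled the additive checksum above no longer slip through (a quick illustrative check):

import zlib

# The same three messages from the checksum example: CRC-32 tells
# them apart where the byte-sum could not.
for msg in (b"BANANA", b"NABANA", b"CANAMA"):
    print(msg.decode(), f"{zlib.crc32(msg):08x}")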

But CRCs aren’t any good against deliberate errors.

That’s because CRCs are based on a process involving a form of long division, making the algorithm predictable, so the output can be tweaked to be whatever you like by tweaking the input slightly in a way that can be calculated automatically.

That makes it trivial to create a message with any checksum you like, for example so that its checksum matches an earlier message in order to create confusion, commit fraud, or worse.

Note that there are only 4 billion (2³²) different possible CRC-32 values, so that at modern computer speeds you could forge a CRC-32 without any arithmetical trickery by brute force, simply by making billions of random modifications to the message until you hit paydirt.

But even if you extend your CRC to 64 bits, 128 bits or even more to make accidental duplicates as good as impossible, it’s still easy to calculate forgeries very quickly, with no need to rely on brute force.

Moving up to cryptographic hashes

For security-related purposes, you need what’s commonly referred to as a cryptographic checksum or cryptographic hash.

This sort of algorithm is designed not only to detect accidental errors, but also to be “untrickable” enough to prevent deliberate errors.

In particular, a cryptographic hash, denoted here as a mathematical function H(), should have at least these characteristics:

  1. If you deliberately create two messages M1 and M2 (any two messages; you get to choose both of them) such that H(M1) = H(M2), you have a collision, so that H has failed as a digital fingerprint. Therefore you should not be able to construct a collision, other than by trying over and over with different inputs until you hit the jackpot by chance.
  2. If you know that H(M) = X, but you don’t know my message M, then you should not be able to “go backwards” from X to M, other than by trying different messages over and over until you hit the jackpot by chance.
  3. If I choose M and tell you what it is, so you can compute H(M) = X for yourself, you should not be able to come up with a different message M’ that also has H(M’) = X, other than by guesswork. (This is much tougher than case 1 because you don’t get to choose any matching pair of hashes from a giant pool of messages. You have to match my hash, not any hash, which squares the effort needed.)
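
A small experiment makes the gap between those promises tangible. The Python sketch below (illustrative only) truncates SHA-256 to 24 bits to stand in for a weak hash: a birthday-style search finds a colliding pair, breaking promise one, after only a few thousand attempts, whereas reversing one given value, as in promise two, would take about 2²⁴ tries on average.

import hashlib

def h24(msg: bytes) -> bytes:
    """SHA-256 truncated to 24 bits: a deliberately weak stand-in."""
    return hashlib.sha256(msg).digest()[:3]

seen = {}       # digest -> first message that produced it
attempt = 0
while True:
    msg = attempt.to_bytes(8, "big")
    digest = h24(msg)
    if digest in seen:
        print(f"collision after {attempt + 1} attempts: "
              f"{seen[digest].hex()} and {msg.hex()} -> {digest.hex()}")
        break
    seen[digest] = msg
    attempt += 1

That square-root saving – roughly 2¹² tries instead of 2²⁴ – is the birthday paradox at work, and it is why collision resistance is always the first promise to fall.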

For many years, an algorithm called MD5 was widely used because it claimed to provide these three protections against abuse, but it is now forbidden in the cryptographic world because it is known to fail on Promise One above.

Once a hashing algorithm fails in respect of Promise One, it’s prudent to assume it won’t meet its other design goals either, even if it seems to be holding out on the other two promises.

MD5 collisions are easy to generate on purpose, so the algorithm can no longer be trusted.

SHA-1 replaces MD5

SHA-1 was the next-most-complex hash after MD5, and was widely used as a replacement when MD5 fell out of favour.

Greatly oversimplified, the SHA-1 algorithm consumes its input in blocks of sixteen 32-bit words (512 bits, or 64 bytes), mixing each block into a cumulative hash of five 32-bit words (160 bits, or 20 bytes).

for block in blocks() do
   for i = 17, 80 do
      -- each step here extends the original 16-word input
      -- block to 80 words by adding one word made by mixing
      -- together four of the previous sixteen words.
      block[i] = minimixtogether(block, i)
   end

   for i = 1, 80 do
      -- each step here mixes one of the words from the 80-word
      -- "extended block" into the five-word hash accumulator
      hash = giantmixtogether(hash, block, i)
   end
end

The giantmixtogether() function that scrambles the extended input into the hash uses a range of different operations, including NOT, AND, OR, XOR, ADD and ROL (rotate left); the minimixtogether() function used to massage the input data uses XOR and ROL.

The algorithm certainly looks complicated, and at first glance you would assume that it mixes-minces-shreds-and-liquidises its input more than well enough to be “untrickable”.

Indeed, the complexity of SHA-1 was considered sufficient to immunise it against the weaknesses in the similar but simpler MD5 algorithm.

At the same time, SHA-1 was not so much more complicated than MD5 that it would run too slowly to be a convenient replacement.

SHA-1 considered harmful

For years, however, experts have been telling everyone to stop using SHA-1, and to use more complex hash algorithms such as SHA-2 and SHA-3 instead, predicting that the first actual real-world, in-your-face chink in SHA-1’s armour would turn up soon.

Google’s Chrome browser, for example, stopped accepting web site certificates with SHA-1 hashes at the start of 2017, considering them no longer safe enough.

The Mozilla Firefox browser will soon follow suit.

The reason is simple: as soon as someone actually turns theory into reality, and produces a hash collision, you can no longer rely on saying, “She’ll be right for a while yet,” because your “while yet” period just expired.

So it’s a good idea to get ahead of the game and to abandon creaky cryptographic code before it goes “Bang!”

Even if a collision takes an enormous amount of work – imagine that you’d need 110 top-end graphics cards running flat out for a whole year, for example – the first actual collision would be what you might call the digital disproof of the pudding.

The digital disproof

So, to cut what has become a long story short, you need to know that researchers from Google and the CWI Institute in Amsterdam…

…have just disproved the pudding.

Bang!

A hash collision that in theory should have taken them thousands of years to stumble upon by chance has been created on purpose within all of our lifetimes – and that should simply never have happened.

Apparently, they did indeed need 110 top-end graphics cards running for a whole year, but that is still 100,000 times faster than the design goals (and the theoretical strength) of SHA-1, making SHA-1 a risky proposition for evermore.

TL;DR: SHA-1 really is broken, so use a stronger hash from now on, because cryptographic attacks only ever get faster.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZB1GPLgqHAk/

Ex-employees sued for £15m over data slurpage ordered to pay up just £2

The High Court in London, UK, has agreed that a company’s former employees who took thousands of confidential files away on USB sticks when they quit the firm were indeed naughty – and ordered them to pay damages of just £1 each.

Marathon Asset Management, based in London, sued James Seddon and Luke Bridgeman for £15m after business information was copied from company servers when the two left.

The two quit with the aim of setting up a rival firm called Seculum.

The information was not actually used after they left – and Marathon claimed that they should pay damages for the potential value of what they had taken, rather than any actual gains made.

Marathon claimed that 40,000 files were taken by Bridgeman, who shared them with Seddon. Mr Justice Leggatt, sitting in the Commercial Court, noted that 37,000 of these were the entire contents of Bridgeman’s email account. Of the rest, Bridgeman only accessed 52 files – and of those 52, just 11 contained any client information. In turn, 7 of those 11 files were PowerPoint presentations made to clients.

“There is no evidence to suggest that Mr Bridgeman (or Seculum) derived any material benefit from looking at these documents,” said the judge. “In circumstances where the misuse of confidential information by the defendants has neither caused Marathon to suffer any financial loss nor resulted in the defendants [making] any financial gain, it is hard to see how Marathon could be entitled to any remedy other than an award of nominal damages.”

Although Bridgeman admitted liability, and Seddon was found to have breached Marathon’s confidence and his contract of employment, Mr Justice Leggatt said “Marathon has missed the jackpot” as he awarded nominal damages of £1 each in favour of the company.

The judge also noted that metadata about when Bridgeman accessed the stolen files on one of his laptops had been erased after he installed Windows 10. The delay in taking a copy of its contents for forensic purposes was not explained. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/23/data_removal_usb_sticks/

‘First ever’ SHA-1 hash collision calculated. All it took were five clever brains… and 6,610 years of processor time

Google researchers and academics have today demonstrated it is possible – albeit with a lot of computing power – to produce two different documents that have the same SHA-1 hash signature.

This is bad news because SHA-1 is used across the internet, from Git repositories to file deduplication systems to HTTPS certificates used to protect online banking and other websites. SHA-1 signatures are used to prove that blobs of data – which could be software source code, emails, PDFs, website certificates, etc – have not been tampered with by miscreants, or altered in any other way.

Now researchers at CWI Amsterdam and bods at Google have been able to alter a PDF without changing its SHA-1 hash value. That makes it a lot easier to pass off the meddled-with version as the legit copy. You could alter the contents of, say, a contract, and make its hash match that of the original. Now you can trick someone into thinking the tampered copy is the original. The hashes are completely the same.

SHA-1 is supposed to be deprecated but too many applications still support it, including the widely used source-code management tool Git. It is possible to create two Git repositories with the same head commit SHA-1 hash and yet the contents between the two repos differ: one with a backdoor stealthily added, for example, and you wouldn’t know this from the hash. The hashes would be completely the same.
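
Git’s exposure follows from how it names content: an object’s identifier is the SHA-1 of a short header plus the bytes themselves, so two different blobs with colliding digests are indistinguishable to Git. Here is a quick Python sketch of that naming scheme, easy to verify against git hash-object:

import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the ID Git assigns a file's contents (a 'blob' object)."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo hello | git hash-object --stdin`
print(git_blob_sha1(b"hello\n"))   # ce013625030ba8dba906f756967f9e9ca394464a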

Specifically, the team has successfully crafted what they say is a practical technique to generate a SHA-1 hash collision. As a hash function, SHA-1 takes a block of information and produces a short 40-character summary. It’s this summary that is compared from file to file to see if anything has changed. If any part of the data is altered, the hash value should be different. Now, in the wake of the research revealed today, security mechanisms and defenses still relying on the algorithm have been effectively kneecapped.

Abandon SHA-1p! Google’s illustration of how changes made to a file can sneak under the radar by not changing the hash value

The gang spent two years developing the technique. It took 9,223,372,036,854,775,808 SHA-1 computations, 6,500 years of CPU time, and 110 years of GPU time, to get to this point. The team is made up of Marc Stevens (CWI Amsterdam), Elie Bursztein (Google), Pierre Karpman (CWI Amsterdam), Ange Albertini (Google), and Yarik Markov (Google), and their paper on their work can be found here [PDF]. Its title is: “The first collision for full SHA-1.”

For the full gory details, and the tech specs of the Intel CPU and Nvidia GPU number-crunchers used, you should check out their research paper. On a basic level, the collision-finding technique involves breaking the data down into small chunks so that changes, or disturbances, in one set of chunks are countered by twiddling bits in other chunks. A disturbance vector [PDF] is used to find and flip the right bits.

The tech world is slowly moving from SHA-1 to newer and stronger algorithms such as SHA-256. We’ve known for a few years that SHA-1 is looking weak, and now its weakness is on full display. This latest research underlines the importance of accelerating the transition to SHA-256 or stronger hashing routines.

It’s utterly unlikely anyone will create a rogue SHA-1 hash any time soon from the team’s work, due to the amount of computation power required. However, it is not beyond the reach of a Western intelligence agency to forge a TLS certificate, a Git repo, or a document, if it really, really wanted to – and this process will only get easier over time as computers get faster.

“Today, 10 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision,” the research team said today.

“This represents the culmination of two years of research that sprung from a collaboration between the CWI Institute in Amsterdam and Google … For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out using SHA-1. We hope our practical attack on SHA-1 will cement that the protocol should no longer be considered secure.”

David Chismon, senior security consultant at MWR InfoSecurity, told The Register: “The SHA-1 algorithm has been known to be weak for some years and it has been deprecated by NCSC, NIST, and many vendors. However, until today no real-world attacks have been conducted. Google’s proof of concept, and the promise of a public release of tools may turn this from a hypothetical issue to a real, albeit expensive one.

“The attack still requires a large amount of computing on both CPUs and GPUs but is expected to be within the realm of ability for nation states or people who can afford the cloud computing time to mount a collision attack.”

Google has tried other tactics to encourage SHA-1 sunsetting, such as having its Chrome browser mark sites “insecure” if they have SHA-1-signed certificates. The research into collisions might be taken as a further shot across the bows of those ploughing on regardless, relying on an obsolete cryptographic algorithm.

Chismon added: “Hopefully these new efforts of Google of making a real-world attack possible will lead to vendors and infrastructure managers quickly removing SHA-1 from their products and configuration as, despite it being a deprecated algorithm, some vendors still sell products that do not support more modern hashing algorithms or charge an extra cost to do so.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/23/google_first_sha1_collision/

US ‘security’ biz trio Sentinel Labs, Vir2us, SpyChatter accused of lying about certification

Three US companies have settled with the FTC after they were accused of lying about the security safeguards on their customer information.

Sentinel Labs, SpyChatter, and Vir2us have all agreed to adhere to the US trade regulator’s settlement terms after they were formally charged with falsely claiming certification with the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) standard.

Sentinel Labs produces anti-malware software, while Vir2us makes the Xeropass password manager, and SpyChatter offers a private messaging app.

The CBPR rules [PDF] outline how companies within the APEC nations secure and transfer customer data, as well as how they handle requests to disclose what personal data they have collected.

Among the requirements for certification is an audit performed by an outside “accountability agent” who reviews the business for compliance and then recommends whether to award the certification.

This, the FTC said, was where the three companies fell short. None of them had that review, and thus were not formally certified and had no legal right to claim their products were compliant with the APEC-CBPR. The commission further charged that one of the companies, Sentinel Labs, also falsely claimed it was certified under the TRUSTe program.

“Cross-border commerce is an important driver of economic growth, and our cross-border privacy commitments help enable US companies to compete around the world,” FTC chairman Maureen Ohlhausen said of the deal.

“Companies, however, must live up to the promises they make to protect consumer data.”

The settlement itself doesn’t carry any fine, but does put the companies under a looming threat of stiff penalties should they – at any point in the next 20 years – be found to be lying about their security or privacy certifications (or lack thereof). Each violation would put the offender on the hook for FTC fines of up to $40,654. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/02/23/us_companies_take_deal_fake_apec_certification/