GitHub users with weak passwords – you have been warned!

GitHub, one of the world’s biggest online repositories of software source code, is warning users to jolly well shape up when it comes to login security.

Of course, GitHub didn’t put it quite like that (it was a bit more polite to its customers), but we did.

GitHub was more diplomatic, saying:

This is a great opportunity for you to review your account, ensure that you have a strong password and enable two-factor authentication.

Concise, true and important.

After all, you can see why cybercrooks might want to take aim – even in a general, “let’s see whom we can hack today” attack – at a site like GitHub.

With millions of users and millions of software projects, small and large, free and proprietary, getting hold of other people’s GitHub logins gives cybercriminals the chance to take their attack straight to the source – literally and figuratively.

If you can compromise a software project before it even ships, by modifying its own source code repository, then you don’t need to waste time trying to find a vulnerability to exploit later on.

That’s exactly what happened to open source advertising server product OpenX earlier this year, with just a few characters of obfuscated, malicious PHP source code buried inside a JavaScript file buried inside a plugin that was part of the official software distribution.

So, what did GitHub do wrong?

Nothing, as far as we can tell – quite the opposite, in fact:

GitHub offers two-factor authentication to improve login security.

→ By sending one-time passcodes to your mobile phone each time you log in (or something similar using an app running on your mobile device), GitHub makes sure that a stolen or guessed password is not enough on its own for a crook to hijack your account.

GitHub uses safe techniques to store its users’ passwords.

→ GitHub has followed all the advice, and more, in our recent article “Serious Security: How to store your users’ passwords safely.” We advised you to choose one of PBKDF2, bcrypt or scrypt as your password hashing mechanism; GitHub uses bcrypt. That means that passwords are never stored in plaintext; two users with the same password get a different hash; and brute force attacks are slowed down tremendously.

GitHub uses rate limiting to take the edge off password guessing attacks.

→ If someone (or something) tries to log in much more often than you might expect, you can wait longer and longer before responding to each login request, as sketched in the example below.
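By way of illustration, here is a minimal sketch of those last two defences in Python, using the widely available bcrypt module – the function names, delay schedule and bcrypt cost factor are our own choices for the example, not GitHub’s actual code:

    import time
    import bcrypt  # third-party module: pip install bcrypt

    # Failed-login counters per source address; a real service would keep
    # these in shared, persistent storage rather than an in-process dict.
    failed_attempts = {}

    def store_password(password):
        # bcrypt generates its own random salt and embeds it in the hash,
        # and its work factor makes every single guess deliberately slow.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

    def check_login(source_ip, password, stored_hash):
        # Rate limiting: each failure from this source doubles the wait,
        # capped at one minute, before we even look at the password.
        failures = failed_attempts.get(source_ip, 0)
        if failures:
            time.sleep(min(2 ** failures, 60))

        if bcrypt.checkpw(password.encode("utf-8"), stored_hash):
            failed_attempts.pop(source_ip, None)
            return True

        failed_attempts[source_ip] = failures + 1
        return False

The exact numbers don’t matter – GitHub hasn’t published its own parameters – but the principle does: checking a password is deliberately slow, and repeated failures make it slower still.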

According to GitHub, nearly 40,000 different computers have participated in this attack, presumably because the crooks involved have rented time on (or themselves own and operate) a large botnet.

A botnet is a loosely-coupled collection of co-opted computers infected with malware that regularly “calls home” to download instructions on what criminal act to commit next.

That can include keylogging for passwords, taking screenshots, stealing files, sending spam – or simply trying to log in to GitHub with some supplied credentials, and reporting back if it worked.

With bcrypt slowing down each login attempt, and rate limiting making sure that no individual computer tries to log in too often, you might think that the crooks were wasting their time.

But a botnet of 40,000 devices nevertheless makes a realistic attack possible.

After all, if each computer tries to log in just once every fifteen minutes, which would be a pretty feeble attack all on its own, the crooks can try close to 4,000,000 logins per day.

If they keep their heads down well enough to keep up this sort of work for a month, they get more than 100 million tries at GitHub users’ front doors.

And some of those front doors, we now know, are as good as open already.

GitHub isn’t saying explicitly, but it mentions that the crooks have been trying “passwords used on multiple sites.”

So it’s a good bet that the attackers are using data derived from Adobe’s recent megabreach.

Even though most of the passwords in the 150,000,000 records stolen from Adobe have not been worked out, millions of passwords have.

That’s thanks to users who chose unwisely, so that their passwords could be guessed straight from Adobe’s data. (One chap even wrote a password hint saying, “The password is password.”)

So please take GitHub’s advice, on GitHub and elsewhere: review your account, ensure that you have a strong password and enable two-factor authentication.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aBvyTtiy1z4/

Cryptolocker infects cop PC: Massachusetts plod fork out Bitcoin ransom


Massachusetts cops have admitted paying a ransom to get their data back on an official police computer infected with the devilish Cryptolocker ransomware.

Cryptolocker is a rather unpleasant strain of malware, first spotted in August, that encrypts documents on the infiltrated Windows PC and will throw away the decryption key unless a ransom is paid before a time limit. The sophisticated software, which uses virtually unbreakable 256-bit AES and 2048-bit RSA encryption, even offers a payment plan for victims who have trouble forking out the two Bitcoins (right now $1,200) required to recover the obfuscated data.


On November 6, a police computer in the town of Swansea, Massachusetts, was infected by the malware, and the cops called in the FBI to investigate. However, in order to regain access to the system, the baffled coppers decided it would be easier to pay the ransom of 2 BTC, then worth around $750; they received the private key to unlock the computer’s data on November 10.

“It was an education for [those who] had to deal with it,” Swansea police lieutenant Gregory Ryan told the Herald News. “The virus is so complicated and successful that you have to buy these Bitcoins, which we had never heard of.”

Ryan said that essential police systems weren’t affected by the infection, and federal agents are still investigating the infection, hopefully to find clues that’ll lead the Feds to the malware’s writer. The software nasty is thought to have been the work of Eastern European criminal gangs, but no one knows for sure.

“The virus is not here anymore,” Ryan said. “We’ve upgraded our antivirus software. We’re going to try to tighten the belt, and have experts come in, but as all computer experts say, there is no foolproof way to lock your system down.”

Apart from not being a fool, that is. Cryptolocker primarily spreads via email attachments, typically a PDF that claims to be from a government department or delivery service. As ever, experts advise not to open attachments unless you are sure of their contents and their source. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/21/police_pay_cryptolocker_crooks_to_get_their_computers_back/

Healthcare.gov Security Hiccups

Senior penetration testers used to say that if you wanted to practice on a live Internet website and not get into any trouble, then pick a porn site. Even if you were caught by the site owners, they’d never prosecute and, if they did, the court of public opinion would be on your side.

Today it looks like there’s a new candidate for honing your hacking skills: the website of President Obama’s flagship Affordable Care Act.

Healthcare.gov has been subject to a barrage of attacks, both online and in the media, ever since it appeared online. This week, the site has a portion of the media hot under the collar due to a few client-side flaws and an expectation that recorded attack attempts should have been scrubbed from some prefill search results.

Many members of the public reading these stories are bound to think that someone’s limb has been torn off by a pack of rabid wolves, with surgeons desperately trying to sew the victim back together. In reality, it’s more like dealing with a hangover from the night before, where the cure is a good old aspirin.

Essentially, two vulnerabilities are being talked about this week. The most visible is merely a reflection of how people have been trying to hack the website, and how the contextual prefill of the search box lists the most common attack strings folks have been throwing at it. It’s amusing, really.

The site developers appear to have done a good job sanitizing the input (i.e., replacing potentially malicious characters with their safe HTML counterparts), but they could probably have saved themselves the present grief had they simply stopped certain strings from making it to the prefill candidate list. They appear to have applied some prefill filtering in the past to prevent common swear words from appearing, and have now (since this media frenzy started) added many of the strings more commonly associated with SQL injection to that filter, so those strings no longer appear if you type a semi-colon.
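That kind of sanitization boils down to encoding any character with special meaning in HTML before it gets echoed back to the browser. Here’s a minimal sketch using Python’s standard library (the search string is invented for illustration):

    import html

    user_input = "'; DROP TABLE users; --<script>alert(1)</script>"

    # Replace characters with special meaning in HTML by their safe entities
    # before the string is ever reflected back into a page.
    print(html.escape(user_input, quote=True))
    # &#x27;; DROP TABLE users; --&lt;script&gt;alert(1)&lt;/script&gt;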

The second vulnerability has to do with the way the client-side script components of the website handle HTML characters as they’re typed into the search box (and, no doubt, other areas of the site if you were to go hunting for them). In essence, the client-side scripts get a little confused. While potentially annoying for people having a poke at the website in their bug-hunting quest, it’s nothing to be concerned about by those actually intent on using the site for what it’s supposed to do.

I’ve heard a few people point out that the combination of these two bugs could potentially be exploited in a cross-site scripting attack, but you have better odds of being hit by a meteorite.

For all of the faults in the site that have been pointed out in the past month, this latest batch only merits a one-shoulder shrug.

— Gunter Ollmann, CTO IOActive Inc.

Article source: http://www.darkreading.com/vulnerability/healthcaregov-security-hiccups/240164133

McAfee Labs Sees New Threats Subverting Digital Signature Validation

November 20, 2013 12:00 PM Eastern Standard Time

SANTA CLARA, Calif.–(BUSINESS WIRE)–McAfee Labs today released the McAfee Labs Threats Report: Third Quarter 2013, which found new efforts to circumvent digital signature app validation on both PCs and Android-based devices. The McAfee Labs team identified a new family of mobile malware that allows an attacker to bypass the digital signature validation of apps on Android devices, which contributed to a 30% increase in Android-based malware. At the same time, traditional malware signed with digital signatures grew by 50% to more than 1.5 million samples. Less surprising but no less daunting was a 125% increase in spam.


“The efforts to bypass code validation on mobile devices, and commandeer it altogether on PCs, both represent attempts to circumvent trust mechanisms upon which our digital ecosystems rely,” said Vincent Weafer, senior vice president of McAfee Labs. “The industry must work harder to ensure the integrity of these technologies given they are becoming more pervasive in every aspect of our daily lives.”

The third quarter also saw notable events in the use of Bitcoin for illicit activities such as the purchase of drugs, weapons, and other illegal goods on websites such as Silk Road. The growing presence of Bitcoin-mining malware reinforced the increasing popularity of the currency.

Weafer continued: “As these currencies become further integrated into our global financial system, their stability and safety will require both financial monetary controls and oversight, and the security measures our industry provides.”

Leveraging data from the McAfee Global Threat Intelligence (GTI) network, the McAfee Labs team identified the following trends in Q3 2013:

Digitally signed malware. Digitally signed malware samples increased 50%, to more than 1.5 million new samples. McAfee Labs also revealed the top 50 certificates used to sign malicious payloads. This growing threat calls into question the validity of digital certificates as a trust mechanism.

New mobile malware families. McAfee Labs researchers identified one entirely new family of Android malware, Exploit/MasterKey.A, which allows an attacker to bypass the digital signature validation of apps, a key component of the Android security process. McAfee Labs researchers also found a new class of Android malware that once installed downloads a second-stage payload without the user’s knowledge.

Virtual currencies. Use of new digital currencies by cybercriminals to both execute illegal transactions and launder profits is enabling new and previously unseen levels of criminal activity. These transactions can be executed anonymously, drawing the interest of the cybercriminal community and allowing them to offer illicit goods and services for sale in transactions that would normally be transparent to law enforcement. McAfee Labs also saw cybercriminals develop Bitcoin-mining malware to infect systems, mine their processing power, and produce Bitcoins for commercial transactions. For more information, please read the McAfee Labs report “Virtual Laundry: An Analysis of Online Currencies, and Their Use in Cybercrime.”

Android malware. Nearly 700,000 new Android malware samples appeared during the third quarter, as attacks on the mobile operating system increased by more than 30%. Despite responsible new security measures by Google, McAfee Labs believes the largest mobile platform will continue to draw the most attention from hackers given it possesses the largest base of potential victims.

Spike in spam. Global spam volume increased 125% in the third quarter of 2013. McAfee Labs researchers believe much of this spike was driven by legitimate “affiliate” marketing firms purchasing and using mailing lists sourced from less than reputable sources.

Each quarter, the McAfee Labs team of 500 multidisciplinary researchers in 30 countries follows the complete range of threats in real time, identifying application vulnerabilities, analyzing and correlating risks, and enabling instant remediation to protect enterprises and the public. To read the full McAfee Labs Threats Report: Third Quarter 2013, please visit: http://mcaf.ee/s4xfb.

About McAfee Labs

McAfee Labs is the world’s leading source for threat research, threat intelligence, and cyber security thought leadership. The McAfee Labs team of 500 threat researchers correlates real-world data collected from millions of sensors across key threat vectors–file, web, message, and network–and delivers threat intelligence in real-time to increase protection and reduce risk.

About McAfee

McAfee, a wholly owned subsidiary of Intel Corporation (NASDAQ:INTC), empowers businesses, the public sector, and home users to safely experience the benefits of the Internet. The company delivers proactive and proven security solutions and services for systems, networks, and mobile devices around the world. With its Security Connected strategy, innovative approach to hardware-enhanced security, and unique Global Threat Intelligence network, McAfee is relentlessly focused on keeping its customers safe. http://www.mcafee.com

Note: McAfee is a trademark or registered trademark of McAfee, Inc. in the United States and other countries. Other names and brands may be claimed as the property of others.

Article source: http://www.darkreading.com/vulnerability/mcafee-labs-sees-new-threats-subverting/240164159

Only 52% Of Healthcare IT Pros Use Formal Risk Assessments

PORTLAND, OREGON — November 20, 2013 — Tripwire, Inc., a leading global provider of risk-based security and compliance management solutions, today announced the results of research on risk-based security management in the healthcare and pharmaceutical industries.

The survey, conducted in April 2013 with the Ponemon Institute, evaluates the attitudes of 1,320 respondents from IT security, IT operations, IT risk management, business operations, compliance/internal audit and enterprise risk management. One hundred seventeen health and pharmaceutical sector respondents from the U.S. and U.K. participated in the healthcare portion of the survey.

The health and pharmaceutical industries have undergone significant information security changes in 2013, and Health Insurance Portability and Accountability Act (HIPAA) fines have grown in both size and frequency. In August, Affinity Health Plan was fined more than $1.2 million for HIPAA violations and insurer WellPoint agreed to pay a $1.7 million penalty in July. As the final omnibus rule goes into effect, new state healthcare exchanges place additional security and privacy pressures on healthcare organizations. Despite these regulatory pressures, Tripwire’s survey indicates that the healthcare industry lags behind other industries in the implementation of critical security controls.

Key findings include:

70% say communicating the state of security risk to senior executives is not effective because communications are contained in one department or line of business.

Only 52% use formal risk assessments to identify security threats.

Only 58% have fully or partially deployed change control and security configuration management.

“It is true that healthcare organizations rank better than average in some areas of this survey, but there is still a lot of room for improvement,” said Dwayne Melancon, chief technology officer for Tripwire. “About half of healthcare and pharmaceutical organizations are not using any kind of formal risk assessments, and they are also far less open to challenging current assumptions. Both of these factors could cause them to be blindsided by the increasing number of cybersecurity threats to their businesses.”

For more information about this survey, please visit http://www.tripwire.com/ponemon/2013/.

About the Ponemon Institute

The Ponemon Institute is dedicated to advancing responsible information and privacy management practices in business and government. To achieve this objective, the Institute conducts independent research, educates leaders from the private and public sectors, and verifies the privacy and data protection practices of organizations in a variety of industries.

About Tripwire

Tripwire is a leading global provider of risk-based security and compliance management solutions, enabling enterprises, government agencies and service providers to effectively connect security to their business. Tripwire provides the broadest set of foundational security controls including security configuration management, vulnerability management, file integrity monitoring, log and event management. Tripwire solutions deliver unprecedented visibility, business context and security business intelligence allowing extended enterprises to protect sensitive data from breaches, vulnerabilities, and threats. Learn more at www.tripwire.com, get security news, trends and insights at http://www.tripwire.com/state-of-security/ or follow us on Twitter @TripwireInc.

Article source: http://www.darkreading.com/government-vertical/only-52-of-healthcare-it-pros-use-formal/240164160

PasswordBox Acquires Legacy Locker

SAN FRANCISCO, Nov. 20, 2013 /PRNewswire/ — PasswordBox today announced its acquisition of Legacy Locker, the first-to-market digital legacy solution. The acquisition follows the recent announcement that PasswordBox has raised $6 million, led by OMERS Ventures and several Silicon Valley angel investors, including two Facebook executives.

(Logo: http://photos.prnewswire.com/prnh/20131120/LA20665LOGO)

The acquisition of Legacy Locker gives PasswordBox a deeper reach into providing its digital life management services to estate planners and existing Legacy Locker customers. According to a recent McAfee survey, online consumers have $55,000 on average in digital assets, including photos, projects, hobbies, personal records, work info, entertainment, social media and email. PasswordBox is the only free service to manage your online accounts during life and after.

“Digital death is a growing pain point most of us don’t want to think about, yet we are storing more of our lives online each day,” said Dan Robichaud, PasswordBox CEO. “We came to market four months ago with a product that not only remembers your passwords for you, but one that offers secure one-click login and the ability to collaborate with co-workers and friends without divulging passwords. Our biggest differentiator is our ability to name a digital heir so your digital assets are always protected if anything happens. This acquisition allows us to gain market share and cements our position as the most comprehensive digital management solution on the market.”

Even during a legacy transfer, the user’s data is always encrypted and never accessible even by PasswordBox employees. PasswordBox has a patent-pending end-to-end encryption sharing system with a trigger release process, so the only time a user can view a readable version of their data is on their device after they’ve logged in to their PasswordBox.

“Over the past four years, we have taken the market by storm with our online safety deposit box for internet passwords,” said Jeremy Toeman, Founder of Legacy Locker. “Since our entrance into the market, the adoption of smartphones, iPads, online banking and social media has expedited the accumulation of digital assets. We helped fill a growing unmet need in the market then. Now our customers will benefit from the next wave of technology that will allow them to manage their current digital life, while ensuring their digital afterlife is still being protected too.”

PasswordBox allows you to securely store, retrieve, create and share passwords with ease and efficiency on any device. The PasswordBox mobile app includes one-tap log-in capabilities to quickly access websites and apps without having to memorize or enter your passwords. With complete end-to-end encryption, people can safely collaborate with friends, family or co-workers and share accounts from any device. You can also create secured notes, while keeping track of passport, credit card or other sensitive data in a digital wallet anytime, from anywhere without having to worry about identity theft.

“We hear a lot about identity theft and cyber-crime, but we are just scratching the surface with the issue of the digital afterlife,” said Legacy expert Richard Bruno. “While no one really wants to face their mortality, it’s worse for our families when we don’t pre-plan. The beauty of PasswordBox is it takes care of securing your passwords now and later.”

About Legacy Locker

Legacy Locker (www.legacylocker.com) is the safe, secure way to pass online accounts to friends and loved ones. Founded in 2008 by Jeremy Toeman and Adam Burg, Legacy Locker Inc. is a privately held company based in San Francisco, CA. In April, 2009, the company publicly launched to consumers and professional estate planners to include digital assets as an effective part of estate planning.

About PasswordBox

With more than one million active users, PasswordBox is the only free password manager that helps people securely store, retrieve, create and share passwords anytime, anywhere, and on any device. Keep your online identity safe and say goodbye to multiple user names and passwords that are impossible to remember. Your PasswordBox master password is the only one you’ll ever need. PasswordBox includes secure one-click sharing, a strong password generator and a legacy feature to entrust your digital life to a loved one after you pass. For more info, go to www.passwordbox.com.

Article source: http://www.darkreading.com/management/passwordbox-acquires-legacy-locker/240164143

Serious Security: How to store your users’ passwords safely

You probably didn’t miss the news – and the fallout that followed – about Adobe’s October 2013 data breach.

Not only was it one of the largest breaches of username databases ever, with 150,000,000 records exposed, it was also one of the most embarrassing.

The leaked data revealed that Adobe had been storing its users’ passwords ineptly – something that was surprising, because storing passwords much more safely would have been no more difficult.

Following our popular article explaining what Adobe did wrong, a number of readers asked us, “Why not publish an article showing the rest of us how to do it right?”

Here you are!

Just to clarify: this article isn’t a programming tutorial with example code you can copy to use on your own server.

Firstly, we don’t know whether you’re using PHP, MySQL, C#, Java, Perl, Python or whatever, and secondly, there are lots of articles already available that tell you what to do with passwords.

We thought that we’d explain, instead.

Attempt One – store the passwords unencrypted

On the grounds that you intend – and, indeed, you ought – to prevent your users’ passwords from being stolen in the first place, it’s tempting just to keep your user database in directly usable form, with each username stored alongside its password in plain text.

If you are running a small network, with just a few users whom you know well, and whom you support in person, you might even consider it an advantage to store passwords unencrypted.

That way, if someone forgets their password, you can just look it up and tell them what it is.

Don’t do this, for the simple reason that anyone who gets to peek at the file immediately knows how to log in as any user.

Worse still, they get a glimpse into the sort of password that each user seems to favour, which could help them guess their way into other accounts belonging to that user.

Alfred, for example, went for his name followed by a short sequence number; David used a date that probably has some personal significance; Eric Cleese followed a Monty Python theme; while Charlie and Duck didn’t seem to care at all.

The point is that neither you, nor any of your fellow system administrators, should be able to look up a user’s password.

It’s not about trust, it’s about definition: a password ought to be like a PIN, treated as a personal identification detail that is no-one else’s business.

Attempt Two – encrypt the passwords in the database

Encrypting the passwords sounds much better.

You could even arrange to have the decryption key for the database stored on another server, get your password verification server to retrieve it only when needed, and only ever keep it in memory.

That way, users’ passwords never need to be written to disk in unencrypted form; you can’t accidentally view them in the database; and if the password data should get stolen, it would just be shredded cabbage to the crooks.

This is the approach Adobe took, encrypting every password in its user database with a secret key.

→ For the sample data above we chose the key DESPAIR and encrypted each of the passwords with straight DES. Using DES for anything in the real world is a bad idea, because it only uses 56-bit keys, or seven characters’ worth. Even though 56 bits gives close to 100,000 million million possible keys, modern cracking tools can get through that many DES keys within a day.

You might consider this sort of symmetric encryption an advantage because you can automatically re-encrypt every password in the database if ever you decide to change the key (you may even have policies that require that), or to shift to a more secure algorithm to keep ahead of cracking tools.

But don’t encrypt your password databases reversibly like this.

You haven’t solved the problem we mentioned in Attempt One, namely that neither you, nor any of your fellow system administrators, should be able to recover a user’s password.

Worse still, if crooks manage to steal your database and to acquire the decryption key at the same time, for example by logging into your server remotely, then Attempt Two just turns into Attempt One.

By the way, the password data above has yet another problem, namely that we used DES in such a way that the same password produces the same data every time.

We can therefore tell automatically that Charlie and Duck have the same password, even without the decryption key, which is a needless information leak – as is the fact that the length of the encrypted data gives us a clue about the length of the unencrypted password.

We will therefore insist on the following requirements:

  1. Users’ passwords should not be recoverable from the database.
  2. Identical, or even similar, passwords should have different hashes.
  3. The database should give no hints as to password lengths.

Attempt Three – hash the passwords

Requirement One above specifies that “users’ passwords should not be recoverable from the database.”

At first glance, this seems to demand some sort of “non-reversible” encryption, which sounds somewhere between impossible and pointless.

But it can be done with what’s known as a cryptographic hash, which takes an input of arbitrary length, and mixes up the input bits into a sort of digital soup.

As it runs, the algorithm strains off a fixed amount of random-looking output data, finishing up with a hash that acts as a digital fingerprint for its input.

Mathematically, a hash is a one-way function: you can work out the hash of any message, but you can’t go backwards from the final hash to the input data.

A cryptographic hash is carefully designed to resist even deliberate attempts to subvert it, by mixing, mincing, shredding and liquidising its input so thoroughly that, at least in theory:

  • You can’t create a file that comes out with a predefined hash by any method better than chance.
  • You can’t find two files that “collide”, i.e. have the same hash (whatever it might be), by any method better than chance.
  • You can’t work out anything about the structure of the input, including its length, from the hash alone.

Well-known and commonly-used hashing algorithms are MD5, SHA-1 and SHA-256.

Of these, MD5 has been found not to have enough “mix-mince-shred-and-liquidise” in its algorithm, with the result that you can find two files with the same hash very much faster than by chance.

This means it does not meet its original cryptographic promise – so do not use it in any new project.

SHA-1 is computationally quite similar to MD5, and many experts consider that it might soon be found to have similar problems to MD5 – so you may as well avoid it.

We’ll use SHA-256, applied directly to our sample passwords.
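As a minimal sketch in Python (hashlib is part of the standard library; the sample passwords here are invented for illustration):

    import hashlib

    # Invented sample passwords, purely for illustration.
    for pw in ["password", "password", "letmein99", "S3cret!"]:
        digest = hashlib.sha256(pw.encode("utf-8")).hexdigest()
        print(pw.ljust(12), digest)

    # Every digest is 64 hex characters (256 bits), whatever the input length,
    # and identical passwords always produce identical digests.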

The hashes are all the same length, so we aren’t leaking any data about the size of the password.

Also, because we can predict in advance how much password data we will need to store for each password, there is now no excuse for needlessly limiting the length of a user’s password. (All SHA-256 values have 256 bits, or 32 bytes.)

→ It’s OK to set a high upper bound on password length, e.g. 128 or 256 characters, to prevent malcontents from burdening your server with pointlessly large chunks of password data. But limits such as “no more than 16 characters” are overly restrictive and should be avoided.

To verify a user’s password at login, we keep the user’s submitted password in memory – so it never needs to touch the disk – and compute its hash.

If the computed hash matches the stored hash, the user has fronted up with the right password, and we can let him log in.

But Attempt Three still isn’t good enough, because Charlie and Duck still have the same hash, leaking that they chose the same password.

Indeed, the text “password” will always come out as 5E884898DA28..EF721D1542D8, whenever anyone chooses it.

That means the crooks can pre-calculate a table of hashes for popular passwords – or even, given enough disk space, of all passwords up to a certain length – and thus crack any password already on their list with a single database lookup.

Attempt Four – salt and hash

We can adapt the hash that comes out for each password by mixing in some additional data known as a salt, so called because it “seasons” the hash output.

A salt is also known as a nonce, which is short for “number used once.”

Simply put, we generate a random string of bytes that we include in our hash calculation along with the actual password.

The easiest way is to put the salt in front of the password and hash the combined text string.

The salt is not an encryption key, so it can be stored in the password database along with the username – it serves merely to prevent two users with the same password getting the same hash.

For that to happen they would need the same password and the same salt, so if we use 16 bytes or more of salt, the chance of that happening is small enough to be ignored.

Our database now stores three fields for each user: the username, a 16-byte random salt, and the password hash.

The hash is calculated by creating a text string consisting of the salt followed by the password, and computing its SHA-256 hash – so Charlie and Duck now get completely different password data.

Make sure you choose random salts – never use a counter such as 000001, 000002, and so forth, and don’t use a low-quality random number generator like C’s random().

If you do, your salts may match those in other password databases you keep, and could in any case be predicted by an attacker.

By using sufficiently many bytes from a decent source of random numbers – if you can, use CryptoAPI on Windows or /dev/urandom on Unix-like systems – you as good as guarantee that each salt is unique, and thus that it really is a “number used once.”
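A minimal sketch of salting and hashing along these lines, again in Python (os.urandom reads random bytes from the operating system’s cryptographic random number generator):

    import hashlib
    import os

    def salted_hash(password):
        salt = os.urandom(16)  # 16 random bytes from the OS's CSPRNG
        # Hash the salt followed by the password, as described above.
        digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
        return salt.hex(), digest

    # Two users with the same password now get different hashes,
    # because each of them gets a different random salt.
    print(salted_hash("password"))
    print(salted_hash("password"))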

Are we there yet?

Nearly, but not quite.

Although we have satisfied our three requirements (non-reversibility, no repeated hashes, and no hint of password length), the hash we have chosen – a single SHA-256 of salt+password – can be calculated very rapidly.

In fact, modern hash-cracking servers costing under $20,000 can compute 100,000,000,000 or more SHA-256 hashes each second.

We need to slow things down a bit to stymie the crackers.

Attempt Five – hash stretching

The nature of a cryptographic hash means that attackers can’t go backwards, but with a bit of luck – and some poor password choices – they can often achieve the same result simply by trying to go forwards over and over again.

Indeed, if the crooks manage to steal your password database and can work offline, there is no limit other than CPU power to how fast they can guess passwords and see how they hash.

By this, we mean that they can try combining every word in a dictionary (or every password from AA..AA to ZZ..ZZ) with every salt in your database, calculating the hashes and seeing if they get any hits.

And password dictionaries, or algorithms to generate passwords for cracking, tend to be organised so that the most commonly-chosen passwords come out as early as possible.

That means that users who have chosen uninventively will tend to get cracked sooner.

→ Note that even at one million million password hash tests per second, a well-chosen password will stay out of reach pretty much indefinitely. There are more than one thousand million million million 12-character passwords based on the character set A-Za-z0-9.
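That figure is easy to check for yourself:

    # 62 possible characters (A-Z, a-z, 0-9) in each of 12 positions:
    print(62 ** 12)             # 3226266762397899821056, about 3.2 x 10^21
    print(62 ** 12 > 10 ** 21)  # True: more than a thousand million million million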

It therefore makes sense to slow down offline attacks by running our password hashing algorithm as a loop that requires thousands of individual hash calculations.

That won’t make it so slow to check an individual user’s password during login that the user will complain, or even notice.

But it will reduce the rate at which a crook can carry out an offline attack, in direct proportion to the number of iterations you choose.

However, don’t try to invent your own algorithm for repeated hashing.

Choose one of these three well-known ones: PBKDF2, bcrypt or scrypt.

We’ll recommend PBKDF2 here because it is based on hashing primitives that satisfy many national and international standards.

We’ll recommend using it with the HMAC-SHA-256 hashing algorithm, repeated 10,000 times or more.

HMAC-SHA-256 is a special way of using the SHA-256 algorithm that isn’t just a straight hash, but allows the hash to be combined comprehensively with a key or salt:

  • Take a random key or salt K, and flip some bits, giving K1.
  • Compute the SHA-256 hash of K1 plus your data, giving H1.
  • Flip a different set of bits in K, giving K2.
  • Compute the SHA-256 hash of K2 plus H1, giving the final hash, H2.

In short, you hash a key plus your message, and then rehash a permuted version of the key plus the first hash.
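Python’s standard library exposes this construction directly, so a one-off HMAC-SHA-256 calculation is a short sketch (the key and message below are invented):

    import hashlib
    import hmac

    key = b"16-bytes-of-salt"         # illustrative key/salt
    message = b"the user's password"  # illustrative message

    mac = hmac.new(key, message, hashlib.sha256).hexdigest()
    print(mac)  # 64 hex characters, i.e. a 256-bit HMAC-SHA-256 value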

In PBKDF2 with 10,000 iterations, we feed the user’s password and our salt into HMAC-SHA-256 and make the first of the 10,000 loops.

Then we feed the password and the previously-computed HMAC hash back into HMAC-SHA-256 for the remaining 9999 times round the loop.

Every time round the loop, the latest output is XORed with the previous one to keep a running “hash accumulator”; when we are done, the accumulator becomes the final PBKDF2 hash.

Now we need to add the iteration count, the salt and the final PBKDF2 hash to our password database.
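Python’s hashlib implements this scheme as pbkdf2_hmac, so a minimal sketch of building such a database record might look like this (the colon-separated record layout is our own choice for the example, not a standard):

    import hashlib
    import os

    iterations = 10000  # increase this over time as hardware gets faster
    salt = os.urandom(16)

    pwdhash = hashlib.pbkdf2_hmac(
        "sha256",                 # HMAC-SHA-256 as the core hash
        b"the user's password",   # invented password, for illustration
        salt,
        iterations,
        dklen=32,                 # 32 bytes = 256 bits of output
    )

    # One record per user: iteration count, salt and hash (hex-encoded here).
    record = "{}:{}:{}".format(iterations, salt.hex(), pwdhash.hex())
    print(record)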

As the computing power available to attackers increases, you can increase the number of iterations you use – for example, by doubling the count every year.

When users with old-style hashes log in successfully, you simply regenerate and update their hashes using the new iteration count. (A successful login is the only time you can tell what a user’s password actually is.)

For users who haven’t logged in for some time, and whose old hashes you now consider insecure, you can disable the accounts and force the users through a password reset procedure if ever they do log on again.

The last word

In summary, here is our minimum recommendation for safe storage of your users’ passwords:

  • Use a strong random number generator to create a salt of 16 bytes or longer.
  • Feed the salt and the password into the PBKDF2 algorithm.
  • Use HMAC-SHA-256 as the core hash inside PBKDF2.
  • Perform 10,000 iterations or more (as of November 2013).
  • Take 32 bytes (256 bits) of output from PBKDF2 as the final password hash.
  • Store the iteration count, the salt and the final hash in your password database.
  • Increase your iteration count regularly to keep up with faster cracking tools.
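Putting the checklist together, a minimal store-and-verify sketch in Python might look like the following – it reuses the colon-separated record layout from the sketch above, and the iteration counts are only placeholders:

    import hashlib
    import hmac
    import os

    CURRENT_ITERATIONS = 10000  # raise this regularly; 10,000 is a 2013-era floor

    def hash_password(password, iterations=CURRENT_ITERATIONS):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode("utf-8"), salt, iterations, dklen=32)
        return "{}:{}:{}".format(iterations, salt.hex(), digest.hex())

    def verify_password(password, record):
        iterations, salt_hex, stored_hex = record.split(":")
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode("utf-8"),
            bytes.fromhex(salt_hex), int(iterations), dklen=32)

        # Constant-time comparison avoids leaking information via timing.
        ok = hmac.compare_digest(digest.hex(), stored_hex)

        # A successful login is the only time we see the plaintext password,
        # so it is the moment to re-hash with a higher iteration count.
        if ok and int(iterations) < CURRENT_ITERATIONS:
            record = hash_password(password)
        return ok, record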

Whatever you do, don’t try to knit your own password storage algorithm.

It didn’t end well for Adobe, and it is unlikely to end well for you.

Image of magnifying glass outline courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2EQbUnlArVU/

LG Smart TVs phone home with viewing habits and USB file names

Last year we wrote about a security hole in Samsung TVs which could have allowed hackers to get into your television, watch you, change channels and plant malware.

Now, a UK blogger, known only as ‘DoctorBeet’, has apparently discovered that his LG Smart TV has actually been sending data about his family’s viewing habits back to the South Korean manufacturer.

After some investigation he found that his Smart TV would send data back to LG, even after he disabled an option in the system settings menu called “Collection of watching info.”

collection of watching info

He said that his LG set, model number LG 42LN575V, connects to a non-functional URL with details of the times and channels being watched.

Worse still, he also discovered that the filenames of some media on a USB device connected to the TV were also transmitted, saying that:

My wife was shocked to see our children’s names being transmitted in the name of a Christmas video file that we had watched from USB.

This discovery prompted DoctorBeet to create a mock video file which he transferred to a USB stick. He deliberately chose a filename – Midget_Porn_2013.avi – that couldn’t possibly be confused with the TV set’s firmware. After connecting the USB drive to his TV he later found that the filename had been transmitted in an unencrypted format to GB.smartshare.lgtvsdp.com.

Strangely, not all filenames belonging to media on USB devices were transmitted:

Sometimes the names of the contents of an entire folder was posted, other times nothing was sent. I couldn’t determine what rules controlled this.

He did stress, however, that the URL to which the data is being sent returned HTTP 404 errors which could mean that LG’s servers may not have logged any personal information. Although that isn’t necessarily the case, as one commentator on DoctorBeet’s blog posting pointed out:

Note in particular that it means *nothing* that the script returns a 404: The information may still be in their logs – collecting information this way without actually having anything at the endpoint is an old practice, and more efficient on server resources than making the web server execute anything.

DoctorBeet himself said that the current 404 status of the URL could mean very little:

However, despite being missing at the moment, this collection URL could be implemented by LG on their server tomorrow, enabling them to start transparently collecting detailed information on what media files you have stored.

It would easily be possible to infer the presence of adult content or files that had been downloaded from file sharing sites.

According to DoctorBeet, LG was somewhat dismissive of his concerns when he brought them up in a letter.

In an emailed reply the company simply said that, as he had accepted the Terms and Conditions on his TV, it wasn’t really its problem. LG suggested that he take up the issue with the retailer who sold him the set.

LG spoke to the BBC, saying that the company is looking into the complaint:

Customer privacy is a top priority at LG Electronics and as such, we take this issue very seriously

We are looking into reports that certain viewing information on LG Smart TVs was shared without consent.

LG offers many unique Smart TV models which differ in features and functions from one market to another, so we ask for your patience and understanding as we look into this matter.

As for why this particular LG Smart TV is collecting data in the first place, DoctorBeet cites a corporate video aimed at potential advertising partners. The lengthy clip includes claims such as:

LG Smart Ad analyses users’ favourite programs, online behaviour, search keywords and other information to offer relevant ads to target audiences. For example, LG Smart Ad can feature sharp suits to men, or alluring cosmetics and fragrances to women.

This kind of data collection and serving of targeted ads is reminiscent of Tesco’s recent decision to use facial recognition for a similar purpose in its petrol forecourts.

Short of boycotting the UK’s most successful supermarket or wearing a balaclava, there isn’t much consumers can do about that scheme.

Fortunately, owners of LG smart TVs can do something to protect their privacy: at the end of his post DoctorBeet identifies 7 domains that he blocked via his router in order to prevent the collection of data and presentation of ads by his too-smart-by-far TV set.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Yhzq-hluG2Y/

Anti-Bullying Week 2013: Understanding cyber-bullying

This week is Anti-Bullying Week in the UK, so we spoke to Luke Roberts, National Coordinator of the Anti-Bullying Alliance and Anti-Bullying Week. He’s been talking to us about the rise and impact of cyber-bullying and how we should all help tackle it.

Tell us about the Anti-Bullying Alliance and Anti-Bullying Week

The Anti-Bullying Alliance has been running for 11 years and we started Anti-Bullying Week in 2007. As the National Coordinator for both, I’m delighted to see how much it’s grown, with schools and the media really getting behind the idea of generating awareness and coming up with strategies to tackle the problem.

When does teasing become cyber-bullying?

It’s important to draw distinctions between the two. It’s pretty much a fact of life that all children and young people are going to get into conflicts.

Teasing happens. But if someone says stop, the person doing the teasing usually does so. If we say that it’s a natural part of growing up, then we need to give those young people the skills to deal with it. Teasing and banter is often about context. What’s an acceptable joke made by a friend has a very different effect if made by someone else.

The other thing to remember is that bullying is intentional – it’s the deliberate targeting of an individual or a group. It’s also repetitive behaviour, and that behaviour doesn’t always have to be the same. A negative comment on Facebook, then MSN, then a nasty Vine video are all different behaviours, but the harmful act is repetitive. We call it the “chain of harm”.

How do our lifestyles facilitate cyber-bullying?

Now that bullying goes on outside the school gates, and continues all day long it’s much harder for children to get away from their bullies. Everything is connected.

Smartphones and other devices allow youngsters to put something up online so quickly, wherever they are, and they may not think about the implications. It’s very important to educate kids about how to use them thoughtfully.

Your digital history is like a tattoo. Children and young people need to understand what personal information they are putting online – it’s hard to take it back, and it ends up being the digital identity you create for yourself. What’s online is there for life.

Young people don’t necessarily recognise the role they play in colluding with bullying behaviour. It’s common for these ‘cyber-bystanders’ to forward something negative that perpetuates the bullying, without realising they’re part of the problem.

It’s important for parents to recognise the power of the technology they allow their children to use, including age-inappropriate games. Someone might be ok with their son or daughter playing Grand Theft Auto on the Xbox or Playstation, but they might not realise it’s a connected device and enables them to access different communities which may not be appropriate.

I think the risks associated with social networking are more of a problem for girls; for boys, cyber-bullying can happen in real time over Xbox and Playstation microphones.

In preventing or stopping their children from being bullied, parents are often tempted to take away their technology. But the youngsters we speak to say they would rather be connected and bullied than disconnected from their friends and social networks.

If children fear their technology will be taken away if something bad happens then they are far less likely to tell their parents if they have a problem with something online.

What’s being done to tackle cyber-bullying?

Good, cohesive advice is very important so parents, schools and organisations can work together and know how to tackle bullying properly. We need strong advice from industry, government and the voluntary sector to help keep children safe online.

The industry also needs to play a part to make it easy for people to find information about reporting abusive behaviour on social networking sites. It takes huge courage for kids to tell someone they’re being bullied, so if we don’t make it easy for them to do that then we are failing them.

The Anti-Bullying Alliance is calling for a National Debate for children and young people. We need everyone to bring their piece of the puzzle so we can better educate, prevent and report on cyber-bullying. Youngsters who have become victims of bullying need coping strategies. Different approaches are needed in the connected 21st century world, and we need to bring ourselves up to date on this issue.

Where can kids, parents and teachers go to get ongoing support?

The Anti-Bullying Alliance website should be the first step, acting as a gateway to further information. We have a broad membership, so there are resources for children and young people and for their parents. For professionals there are video clips featuring experts giving advice.

Part 2: What can parents do to help their children?

While the Anti-Bullying Alliance works with schools and clubs to help create strong relationships, put preventative measures in place and provide education and advice, parents are the ones best placed to support and educate their children on cyber-bullying.

Tomorrow, in part 2 of this article, Luke will give parents important advice for helping to prevent – and cope with – cyber-bullying.

Image of teenager courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JcWVLvdxT3Y/