
Tech companies resist government hacking back and backdoors

The US government is coming after cybersecurity with a multi-pronged pitchfork. Again.

The tines include new proposals to break encryption by engineering backdoors into devices and services, according to Reform Government Surveillance (RGS), a coalition of tech companies.

Another tine: the state of Georgia is poised to decriminalize hacking back while, as the legislation is now (vaguely) written, potentially criminalizing the well-intentioned poking around that security researchers do: finding bugs, disclosing them responsibly, or working for bug bounties.

First, the news from Georgia:

Hackles up over hacking-back

The governor is poised to either sign or veto a bill overwhelmingly passed by the General Assembly – SB 315 – meaning, some tech giants say, that Georgia is on the brink of turning itself into a petri dish for offensive hacking.

The bill would decriminalize what it calls “cybersecurity active defense measures that are designed to prevent or detect unauthorized computer access,” or what’s known more succinctly as hacking-back.

In mid-April, Google and Microsoft sent a letter to Georgia Governor Nathan Deal, urging him to veto the bill.

The companies noted that hacking back is “highly controversial within cybersecurity circles” and that Georgia lawmakers need “a much more thorough understanding” of the potential ramifications of legalizing the hacking of other networks and systems. After all, the companies said, what’s to stop this sudden policy shift from inadvertently rubber-stamping so-called “protective” hacking that’s actually abusive and/or anti-competitive?

We believe that SB 315 will make Georgia a laboratory for offensive cybersecurity practices that may have unintended consequences and that have not been authorized in other jurisdictions.

The bill would also expand the state’s current computer law to create what it calls the “new” crime of unauthorized computer access. It would include penalties for accessing a system without permission even if no information was taken or damaged. In fact, accessing a computer or network would only be permitted when done “for a legitimate business activity.”

That’s pretty vague, say security researchers. Critics of the legislation believe it will ice Georgia’s cybersecurity industry, criminalizing security researchers for the sort of non-malicious poking around and bug-finding/bug-reporting that they do.

The same day that Microsoft and Google sent their letter, threat detection and remediation firm Tripwire sent its own, in which it argued that the bill wouldn’t help – rather, it would further weaken security:

It is our firm belief that an explicit exception is required to exclude prosecution when the party in question is acting in good-faith to protect a business or their customers from attack. Without this exclusion, S.B. 315 will discourage good actors from reporting vulnerabilities and ultimately increase the likelihood that adversaries will find and exploit the underlying weaknesses.

Tripwire suggested that the legislation could enable “frivolous lawsuits” brought by the type of vendor that ignores potential vulnerabilities reported by researchers and would rather hide their products’ security defects than address the flaws.

Tripwire might well up and move its employees out of the state, CTO David Meltzer said, rather than risk having them face prosecution under SB 315 for their efforts to get through to such recalcitrant vendors.

When all reasonable attempts to inform a vendor have been exhausted or the vendor demonstrates an unwillingness to act on the information, it is sometimes appropriate to publicly disclose limited details of the security threat so that affected individuals and organizations can take appropriate steps to protect themselves. The vague definitions of S.B. 315 could enable frivolous lawsuits by vendors looking to hide their security defects.

Many critics of the bill think that it was born out of the attorney general’s office getting caught with its pants down, embarrassed by a data breach at Kennesaw State University, whose Election Center was handling some functions for elections in the state. The breach was big news, and it was messy: it spawned a lawsuit over destruction of election data, for one.

The thing about that breach was that it had been responsibly disclosed by a security researcher who wasn’t even targeting the university’s elections systems. Rather, he simply stumbled upon personal information via a Google search, then tried to get authorities to remove it. In other words, he poked around.

The FBI wound up investigating that researcher but couldn’t come up with anything, so it walked away without a case to prosecute.

Equifax is another case in point: As the Electronic Frontier Foundation suggested in a letter criticizing the legislation, fear of prosecution under a bill like SB 315 could have dissuaded an independent researcher from disclosing vulnerabilities in the credit bureau’s system: vulnerabilities that Equifax ignored when the researcher responsibly disclosed them to the company. Those vulnerabilities led to the leak of sensitive data belonging to some 145 million Americans and 15 million Brits.

The Georgia General Assembly passed SB 315 on 29 March and sent it over to Deal on 5 April. After it landed on his desk, a window of 40 days opened for Deal to either sign or veto it.

But why wait for his decision? The Augusta Chronicle reported on Friday that a hacking group has prematurely retaliated before the fate of SB 315 has even been decided… As if malicious attacks are going to somehow convince Governor Deal that he shouldn’t sign new legislation that – at least on the face of it, to politicians who don’t work in the cybersecurity industry – seems to be taking a proactive move to stem malicious hacking.

It makes no sense. But the hacking, overall, doesn’t seem particularly well thought out. The targets: a website for the Calvary Baptist Church of Augusta, and possibly the city of Augusta itself. The Augusta Chronicle reports that the hacker(s) posted a link on the church’s site, labeled:

This vulnerability could not be ethically reported due to S.B. 315.

The statement purports to be from the EFF and decries the bill as an overreach. On Friday, the EFF said it had nothing to do with it.

And now for the government’s actions on the national level:

Backdoors are back again

Actually, the government’s lust for busting encryption’s kneecaps never went away. The US has clearly been positioning itself for an(other) assault on encryption.

In October, Deputy Attorney General Rod Rosenstein gave a couple of speeches focusing on encryption that sounded like they could have been written by former FBI director James Comey. The same arguments against unbreakable encryption – or what the government likes to refer to as “responsible” encryption – resurfaced. He defined it as the kind of encryption that can be defeated for any law enforcement agency bearing a warrant, but is otherwise bulletproof against anyone but the user.

Or, as the #nobackdoors proponents (including Sophos) put it, weakened security which the bad guys will inevitably figure out how to exploit.

Then, in November, Rosenstein urged prosecutors to challenge encryption in court. The FBI would be “receptive” to pro-backdoor litigation, he said:

I want our prosecutors to know that, if there’s a case where they believe they have an appropriate need for information and there is a legal avenue to get it, they should not be reluctant to pursue it. I wouldn’t say we’re searching for a case. I’d say we’re receptive, if a case arises, that we would litigate.

In March, there was more of the same. This time, the FBI slipped a velvet glove over the iron fist that’s been banging on the encryption door ever since Apple made it a default on the iPhone in September 2014.

We’re not looking for a backdoor, said director Christopher Wray. We just want you to break encryption, he said, though the actual words were more along the lines of “a secure means to access evidence on devices once they’ve shown probable cause and have a warrant.” How that gets done is up to you smart people in technology, Wray said, the “brightest minds doing and creating fantastic things.”

All of which brings us up to more recent anti-encryption efforts, including the Department of Justice’s (DOJ’s) push for a legal mandate to unlock phones.

In March, the New York Times reported that FBI and DOJ officials had been meeting with security researchers, working out how to get around encryption during criminal investigations.

As far as the security researchers go, think big names: Ray Ozzie, a former chief software architect at Microsoft; Stefan Savage, a computer science professor at the University of California, San Diego; and Ernie Brickell, a former chief security officer at Intel, all three of whom are working on techniques to help police get around encryption during investigations.

After 18 months of research, the National Academy of Sciences released a report in February on encryption and exceptional access that shifted the question from whether the government should mandate exceptional access to the contents of encrypted communications to how it could be done without compromising user security. Presentations from Ozzie, Savage and Brickell were included in that report.

Then, international think tank EastWest Institute published a report (PDF) that proposed “two balanced, risk-informed, middle-ground encryption policy regimes in support of more constructive dialogue.”

Most recently, Wired published a story focusing on Ozzie and his attempt to find an exceptional access model for phones that can supposedly satisfy “both law enforcement and privacy purists.”

He calls his solution Clear. Basically, it’s passcode escrow. It involves generating a public and private key pair that can encrypt and decrypt a secret PIN that each user’s device automatically generates on activation. It’s like an extra password, stored on the device and protected by encrypting it with the vendor’s public key. After that, only the vendor can unlock the phone, using the private key, which it would store in a highly secured vault that only “highly trusted employees” could get at in response to law enforcement bearing court authorization.

Ozzie posted this slide show explaining Clear.
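To make the general escrow idea concrete, here’s a minimal sketch in Python using the third-party cryptography package. It’s a toy illustration of PIN escrow under a vendor key, not Ozzie’s actual design; the key size, PIN value and names are assumptions made up for the example.

```
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Vendor generates a key pair; the private half lives in the "highly secured vault".
vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public = vendor_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# On activation, the device generates a secret PIN and keeps only the escrowed copy.
device_pin = b"483920"                                 # hypothetical per-device PIN
escrowed_pin = vendor_public.encrypt(device_pin, oaep)

# Later, with court authorization, the vendor uses the vaulted private key to recover it.
recovered_pin = vendor_private.decrypt(escrowed_pin, oaep)
assert recovered_pin == device_pin
```

The vault holding vendor_private in this sketch is exactly the kind of centralised target that Green describes below.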

Well, as long as they’re highly trusted, what could possibly go wrong? We don’t need to hypothesise, we can just look at recent examples, such as the theft of exploits and hacking tools from the NSA and their use by criminals in global ransomware outbreaks.

And of course there’s the threat of rogue employees too – people like Edward Snowden, or the Facebook engineers and National Security Agency (NSA) agents who abused their access to sensitive data.

Experts including Matt Green, Steve Bellovin, Matt Blaze and Rob Graham have not been slow to point this out.

For example, here’s just one of Green’s criticisms:

Does this vault sound like it might become a target for organized criminals and well-funded foreign intelligence agencies? If it sounds that way to you, then you’ve hit on one of the most challenging problems with deploying key escrow systems at this scale. Centralized key repositories – that can decrypt every phone in the world – are basically a magnet for the sort of attackers you absolutely don’t want to be forced to defend yourself against.

In a nutshell, backdoor proponents such as Wray might publicly propose the notion that there’s some sort of middle ground that still allows for strong encryption – which, they claim, has created a world in which people can commit crimes without fear of detection – and which accommodates “exceptional access” for law enforcement.

There is no such middle ground, the EFF said on Wednesday. From the statement, written by EFF writer David Ruiz:

The terminology might have changed, but the essential question has not: should technology companies be forced to develop a system that inherently harms their users? The answer hasn’t changed either: no.

But try telling that to the DOJ and the FBI. They clearly don’t believe it, and they show no signs of giving up the quest.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aWIoC70XZ2E/

Penetrate the mind of the cyber criminal at SANS London July 2018

Promo As the security landscape constantly changes, keeping your data and systems safe from a growing variety of attacks becomes more challenging than ever.

Reports of prominent organisations being hacked and suffering irreparable damage are increasingly common, and that means security-savvy employees who can detect and prevent intrusions are in great demand.

SANS London July 2018 offers the opportunity to deepen your security knowledge and gain respected GIAC specialist certification in your chosen area.

Staged by the leading security training provider, the event runs 2-7 July at the Grand Connaught Rooms in London. An intensive programme of courses on cutting-edge aspects of cyber crime and security combines lectures by leading experts with hands-on lab workshops.

Attendees are assured that they will be able to bring their newfound skills into play as soon as they return to work.

Course topics include:

Intrusion detection in depth: The security focus is changing from perimeter protection to always-on and exposed mobile systems. Learn how to examine network traffic for signs of intrusion.

Web app penetration testing and ethical hacking: Application flaws play a major role in security breaches. Discover the advanced techniques required to test web apps and next-generation technologies.

Windows forensic analysis: Government agencies increasingly require media exploitation specialists to recover vital intelligence from Windows systems. The course trains forensic analysts through a series of new lab exercises that incorporate evidence found on the latest Microsoft technologies.

Find more information and register here.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/04/penetrate_the_mind_of_the_cyber_criminal_at_sans_london_july_2018/

Fresh fright of data-spilling Spectre CPU design flaws haunt Intel

Researchers have unearthed a fresh set of ways attackers could potentially exploit the data-leaking Spectre CPU vulnerabilities in Intel chips.

German publication Heise reported that eggheads are preparing to disclose at least eight new CVE-listed vulnerability reports describing side-channel attack flaws in Chipzilla’s processors.

“So far we only have concrete information on Intel’s processors and their plans for patches. However, there is initial evidence that at least some ARM CPUs are also vulnerable,” Jürgen Schmidt reported.


“Further research is already underway on whether the closely related AMD processor architecture is also susceptible to the individual Spectre-NG gaps, and to what extent.”

The report notes that Intel has been alerted as to the exploit methods, though Chipzilla isn’t saying much on the matter right now.

“Protecting our customers’ data and ensuring the security of our products are critical priorities for us. We routinely work closely with customers, partners, other chipmakers and researchers to understand and mitigate any issues that are identified, and part of this process involves reserving blocks of CVE numbers,” executive VP and general manager of product assurance and security Leslie Culbertson said in a statement to The Register.

“We believe strongly in the value of coordinated disclosure and will share additional details on any potential issues as we finalize mitigations.”

The disclosure of new ways to leverage Spectre – which can be exploited by malicious software on a device or PC to extract passwords and other secrets from memory it shouldn’t be allowed to access – should hardly come as a shock, given the nature of the deep design flaw and how difficult it is for chip designers to fully address. Seemingly every few weeks, brainiacs have found and written up new variants and points of entry related to the bug, and new variations will likely continue to be found until chipmakers can get redesigned processors to market later this year. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/just_your_monthly_reminder_that_the_spectre_bug_is_still_out_there/

Twitter admits to password storage blunder – change your password now!

We just received a warning from Twitter admitting that the company had made a serious security blunder: it had been storing unencrypted copies of passwords.

You read that correctly: plaintext passwords, saved to disk.

What an announcement to have to make on World Password Day!

Any regular reader of Naked Security will know that plaintext passwords are an enormous no-no.

It’s OK to have passwords in memory temporarily when verifying them at login time, but that’s all you should ever do with raw passwords.

You shouldn’t write passwords to a temporary file that you intend to delete later – your program could crash before it cleans up properly, or the disk where you wrote the data could be unmounted before you’ve finished.

You shouldn’t even keep passwords in virtual memory that the operating system might page out into the system swapfile, lest the passwords get flipped out to disk when the system is heavily loaded.

On Windows, you can use the VirtualLock() system function to keep memory blocks “locked into” physical RAM, thus preventing them from getting paged out to the swapfile.
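As a rough illustration only: a Python program on Windows could reach VirtualLock() through ctypes to pin a password buffer in RAM while it is being checked. The buffer size, example password and scrubbing step are assumptions for this sketch; on Linux or macOS the equivalent call is mlock().

```
import ctypes

BUF_LEN = 64
buf = ctypes.create_string_buffer(BUF_LEN)      # will briefly hold the raw password

kernel32 = ctypes.windll.kernel32               # Windows-only entry point
if not kernel32.VirtualLock(ctypes.addressof(buf), ctypes.sizeof(buf)):
    raise ctypes.WinError()                     # lock failed: don't put the secret here
try:
    buf.value = b"hunter2"                      # verify the password while it's locked in RAM
    # ... compare it against the stored password hash here ...
finally:
    ctypes.memset(ctypes.addressof(buf), 0, ctypes.sizeof(buf))   # scrub before unlocking
    kernel32.VirtualUnlock(ctypes.addressof(buf), ctypes.sizeof(buf))
```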

You really, certainly, absolutely shouldn’t let unencrypted passwords get saved anywhere that’s supposed to be permanent…

…and that most definitely means taking great care that you DON’T WRITE PASSWORDS INTO LOGFILES BY MISTAKE.

Unfortunately, that’s what Twitter realised it’s been doing, thus its mea culpa warning today.

The good news is that the password databases actively used by Twitter to patrol logins were implemented securely.

Twitter uses bcrypt, an algorithm that performs what’s known as salt-hash-stretch to turn passwords into cryptographic checksums that you can later use to verify that a password held temporarily in memory was supplied correctly.

That’s because a correctly implemented password hash such as bcrypt lets you work forwards from a supplied password to compute the stored password hash, but not to work backwards from the hash to recover the original password.
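Here’s what that looks like in practice: a minimal sketch using the Python bcrypt package, with the password and work factor as example values only.

```
import bcrypt

password = b"correct horse battery staple"      # example only; held briefly in memory at login

# Salt-hash-stretch: gensalt() picks a random salt and a work factor (12 rounds here).
stored_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification works forwards from the supplied password; the hash can't be reversed.
assert bcrypt.checkpw(password, stored_hash)
assert not bcrypt.checkpw(b"wrong guess", stored_hash)
```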

Even if crooks hack in and steal the password file, they can’t just read the passwords out of the purloined database.

Instead, they have to try every possible password (or try a “dictionary” of likely passwords) against each user’s hash and see which passwords they are lucky enough to guess correctly.

The bad news here, of course, is that the safety provided by using bcrypt for password verification was undone by writing plaintext passwords to the system logs.

If the crooks went for the logs instead of the password database, they could extract the logged passwords directly, with no cracking or “dictionary attacks” required.

What to do?

Twitter claims that it has now “fixed the bug” and that its investigation “shows no indication of breach or misuse by anyone”.

Twitter therefore suggests merely that you “consider changing your password”.

We’ll go one step further and urge you to change your password – after all, Twitter isn’t saying how long it’s been logging passwords, or how many it collected by mistake along the way, so there’s no way to judge how far and wide any saved passwords might have been replicated by now.

We also suggest that you start using Twitter’s two-factor authentication system, also known as login verification, if you aren’t already.

This means you need to supply a single-use code that’s sent to, or calculated by, your mobile phone when you login – your password isn’t enough on its own.
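The “calculated by your phone” flavour is usually TOTP. Below is a minimal sketch of the standard algorithm (RFC 6238) in Python; the base32 secret is a made-up example, and Twitter’s own login verification naturally handles the server side for you.

```
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, interval: int = 30) -> str:
    """Compute the current time-based one-time code for a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                    # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; phone and server derive the same code
```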

Should you stop using Twitter altogether on account of this carelessness?

That’s up to you, of course, but we suggest that changing your password promptly ought to be enough.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/A-FmVsbLcos/

Hurry up patching those Oracle bugs: Attackers aren’t waiting

Security experts are advising administrators to hurry up and install Oracle patches after finding that attackers are quick to exploit newly disclosed vulnerabilities in Oracle products.

The Sans Institute issued a warning after one of its honeypot systems was targeted by exploits of the CVE-2018-2628 remote code execution flaw in WebLogic just hours after the test server was put live.

According to Sans, the flaw has been aggressively targeted since it was first disclosed by Oracle on April 18. The security training company says it took all of three hours after the patch was released for the first compromised servers to be detected.


Since then, Sans says, attacks have become so prevalent that new systems will be hit with exploit attempts almost immediately after coming online. To underscore this, Sans researchers set a vulnerable server live earlier this week and monitored attempts to exploit the flaw.

Within three hours of going live, that honeypot system had been targeted for attack with an attempt to install and execute crypto-mining malware.

“It seems that the time window between vulnerability disclosure and opportunistic exploitation is shrinking more and more,” writes researcher Renato Marinho.

“From this episode, we can learn that, those who don’t have time to patch fast, will have to find much more time to recover properly from the coming incidents.”

With the vulnerabilities being so quickly weaponized, researchers are advising administrators to be sure they keep an eye out for patches from Oracle and other enterprise software vendors so they can test and deploy updates as soon as possible.

In this case, however, simply patching may not be enough. Marinho notes that for the Oracle bug in question, researchers have shown it may be possible to circumvent the patch and exploit the vulnerability even on updated servers.

As such, Marinho advises companies to restrict access to the TCP/7001 port on WebLogic servers as much as possible in the short term. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/slow_to_patch_oracle_bugs_dont_be_attackers_jump_all_over_them/

Twitter: No big deal, but everyone needs to change their password

Twitter is ringing in World Password Day by notifying its users, all 330 million of them, that their login credentials were left unencrypted in an internal log file and should be changed.

Chief technology officer Parag Agrawal broke the news on Wednesday that its internal team had found that, while passwords are usually stored scrambled by encryption, something had caused at least one log to record them in plaintext.


“We mask passwords through a process called hashing using a function known as bcrypt, which replaces the actual password with a random set of numbers and letters that are stored in Twitter’s system. This allows our systems to validate your account credentials without revealing your password. This is an industry standard,” Agrawal said of the non-functioning security feature.

“Due to a bug, passwords were written to an internal log before completing the hashing process.”

Twitter is stressing that the issue was found in-house by its own engineers, and that so far there are no indications of anyone outside the company being able to even view the file, let alone harvest the passwords.

Still, Twitter is advising everyone who has an account to change their password and do the same with any other site where the password was re-used (as a best practice you shouldn’t be reusing passwords anyway).

“We are very sorry this happened,” Agrawal told users. “We recognize and appreciate the trust you place in us, and are committed to earning that trust every day.”

The timing of the disclosure is particularly bad for Twitter, as much of the internet is today observing World Password Day by raising awareness of good password management practices and safe storage.

Certainly this was not the type of exposure Twitter was seeking, particularly as it tries to beef up its protection of user data in the wake of the Cambridge Analytica data-harvesting scandal.

Git stub

Meanwhile, GitHub suffered a similar blunder: it also dumped plaintext passwords into its log files.

“During the course of regular auditing, GitHub discovered that a recently introduced bug exposed a small number of users’ passwords to our internal logging system, including yours,” GitHub wrote in an email to its users.

“We have corrected this, but you’ll need to reset your password to regain access to your account.

“GitHub stores user passwords with secure cryptographic hashes (bcrypt). However, this recently introduced bug resulted in our secure internal logs recording plaintext user passwords when users initiated a password reset. Rest assured, these passwords were not accessible to the public or other GitHub users at any time.

“Additionally, they were not accessible to the majority of GitHub staff and we have determined that it is very unlikely that any GitHub staff accessed these logs. GitHub does not intentionally store passwords in plaintext format. Instead, we use modern cryptographic methods to ensure passwords are stored securely in production. To note, GitHub has not been hacked or compromised in any way.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/twitter_no_big_deal_but_everyone_needs_to_change_their_password_right_now/

European Space Agency wants in on quantum comms satellites

The European Space Agency is looking to build a communications satellite to send data securely using quantum key distribution.

On Thursday, it signed a contract with SES Techcom S.A., a satellite communications company based in Luxembourg, to develop QUARTZ (Quantum Cryptography Telecommunication System).

Quantum entanglement is a booming area of research. The strange phenomenon of pairs of particles being coupled together in such a way that the quantum state of each particle cannot be described independently of the state of the other is used to probe quantum mechanics, build quantum computers, and form cryptographic keys.


Satellites like QUARTZ act as a mediator between two ground stations that are trying to keep their communications secure. Information describing a string of random numbers is encoded in entangled photons, which are beamed from one ground station up to the satellite and then relayed down to the other Earth-based receiver. The shared key can then be used to encrypt and decrypt information.

If an adversary tries to tamper with the keys, the quantum state of the entangled photons changes, so the senders know someone has tried to intercept their communication.

Quantum entanglement is tricky to preserve over long distances, and most quantum key distribution is carried over optical fibers that extend only a few hundred kilometers. Sending photons between satellites and ground stations allows secret keys to be exchanged by parties that are much further apart.

“QUARTZ is the first commercial step in this direction, aiming to provide a reliable, globally-available system for carrying and dispensing the keys,” the European Space Agency said.

“Under QUARTZ and with the help of ESA, SES plans to develop the platform to be a robust, scalable and commercially-viable satellite-based QKD system for use in geographically-dispersed networks.”

QUARTZ won’t be the first quantum communications satellite to make it into space. The QUESS (Quantum Experiments at Space Scale) project, led by researchers from the Chinese Academy of Sciences, launched the Micius satellite in 2016. In 2017, it was reported that the team had managed to keep photons entangled across a distance of 1,200 kilometers. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/the_european_space_agency_wants_in_on_quantum_communications_satellites/

It’s World (Terrible) Password (Advice) Day!

It’s World Password Day! And you know what that means: all the effort you’ve put into trying to persuade people to rethink how they do passwords turns to mush because some company sees a PR opportunity and floods social media with terrible advice.

This year’s award for Terrible Password Advice goes to the wireless industry’s lobbying organization, the CTIA. It has even set up a dedicated webpage that it joyfully tweeted out to people this morning.

“It’s World #PasswordDay! A reminder to change your pins/passwords frequently,” it advised anyone following the hashtag “PasswordDay”. But this, as lots of people quickly pointed out, is terrible advice.

But hang on a second: isn’t that the correct advice? Weren’t all sysadmins basically forced to change their systems to make people reset passwords every few months because it was better for security?

Yes, but that was way back in 2014. Starting late 2015, there was a big push from government departments across the world – ranging from UK spy agency GCHQ to US standard-setting National Institute of Standards and Technology (NIST) and consumer agency the Federal Trade Commission (FTC) – to not do that.

That said, the past few years have been virtually defined by the loss of billions of usernames and passwords from corporations, ranging from your email provider to your credit agency, home improvement store, retail store and, yes, even government departments.

In that case, does it not in fact make sense to get people to periodically change their passwords? Well, yes. And no.

Yes, because the information would age and so become irrelevant faster. No, because constant resets eat up resources, tend to nudge people toward using simpler passwords, and don’t really make it harder for some miscreant using a brute force attack to guess the password.


But we wouldn’t be at all surprised to find that in 2019, following a shift in hacking patterns, everyone advises regular password changes, and the 2021 World Password Day sees some organization lambasted for offering 2018’s advice.

There is no shortage of organizations and individuals that are willing to tell you what to do about passwords: NCSC, CESG, NIST, FTC, Google, Microsoft, Mozilla, Edward Snowden, to name just a few.

All of which suggests to us that it may be time to go meta and look at the different aspects of passwords and the often conflicting advice that comes with each. And then to provide you, dear readers, with the best possible password advice – which we can all mock in two years’ time.

Strap in, here we go.

Random or pronounceable?

Everyone agrees that using the word “password” for a password is pretty much the dumbest thing you can do. But so many people still do it that designers have been forced to hardcode a ban on the word into most password systems.

But from there – where do you go? How much better is “password1”? Is it sufficiently better? What about switching letters to other things, like “p@ssw0rd”? Yes, objectively, that is better. But the point is that there are much better ways. And that comes down to basically two choices: random or pronounceable.

The best random password is one that really is random i.e. not a weird spelling that you quickly forget but a combination of letters, numbers and symbols like “4bqJv8dZrXgp” that you would simply never be able to remember.

But here’s the thing – the reason that particular password is better is largely because in order to use and generate such passwords, you would likely use a password manager. And password managers are great things that we’ll deal with later.

The other thing to bear in mind: if someone is trying to crack your password randomly, they are likely to be using automated software that simply fires thousands of possible passwords at a system until it hits the right one.

In that scenario, it is not the gibberish that is important but the length of the password that matters. Computers don’t care if a password is made up of English words – or words of any language. But the longer it is, the more guesses will be needed to get it right.

As our dear truthsayer XKCD points out: “Through 20 years of effort we’ve successfully trained everyone to use passwords that are hard for humans to remember, but easy for computers to guess.”

Of course, a big part of that assumption is that there will be lots of people that will introduce numbers and symbols and uppercase letters into their password. Without them, password-cracking software would limit itself to lowercase letters and so find the correct login much faster.

Because length can be a critical factor, and because typing random letters and numbers is much more taxing for people, there are lots of people and organizations that argue that people should come up with passwords that comprise several random words that you can actually remember. XKCD used “correct horse battery staple” as an example.

There is merit to this argument and Google has been pushing the approach for a number of years. So which is better?

The answer is: both and neither. The pronounceable words approach is better if you want to remember the password and type it in. But it would be undermined if huge numbers of people weren’t also using numbers and symbols in their passwords.

Plus of course there is the reality that many organizations institute strict password policies when you sign up with them that often require you to have an uppercase letter, a number and/or a symbol. In these cases, your pronounceable-words password won’t actually work.

The random password can be much more effective overall because it typically forces users to approach passwords differently – often using a different password for each different use. Now that is better security because it stops other accounts that use the same password from being compromised. And, if you are already using random passwords that you can’t possibly remember, then why not use longer versions? What do you care?

Conclusion: use pronounceable if you want to remember the password; random otherwise. But make sure it’s not too short (less than, say, 10 characters).
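A minimal sketch of both approaches using Python’s standard secrets module; the word list path is a hypothetical stand-in for something like the EFF diceware list.

```
import math
import secrets
import string

# Random style: 16 characters drawn from letters, digits and a few symbols.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
random_pw = "".join(secrets.choice(alphabet) for _ in range(16))

# Pronounceable style: four words picked from a local word list (one word per line).
with open("wordlist.txt") as f:                 # hypothetical path to a diceware-style list
    words = [line.strip() for line in f if line.strip()]
passphrase = " ".join(secrets.choice(words) for _ in range(4))

# Rough strength estimate: entropy grows with length and with the size of the pool.
print(random_pw, "~", round(16 * math.log2(len(alphabet))), "bits")
print(passphrase, "~", round(4 * math.log2(len(words))), "bits")
```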

That leads to:

Password manager or brain?

There are really, really good reasons to use a password manager. For one, if you can make it a habit to use one for every login you have, you are immediately increasing your overall security because every login will be different.

Plus, since you are using software to save and paste in passwords, why not up that password length? This combination is a really good way to be secure online and is the best kind of security you can have within the context of the inherently insecure system of a single username/password to gain access to confidential information.

But there are downsides, and using your good old-fashioned brain has some distinct advantages.

The biggest of course is that it is in your brain and not stored on a hackable database somewhere. As great as password managers are, they are still software and so are susceptible to security holes.

Of course these companies go to extra lengths to protect security. But commercial imperatives drive less secure solutions. For example, one of the best password managers, 1Password, shifted its accounts to an “online vault” where all your passwords are stored and then accessed from your phone/computer etc rather than those passwords being stored on your device itself.

There are good reasons for this. For example, you no longer have to sync between devices to make sure everything is up to date. And, from the company’s perspective, it makes it much, much easier to charge a monthly fee rather than receive a one-off purchase – good news for the bottom line.

Unfortunately, that approach also makes the company a target for every hacker in the world: if you can crack its system, you have access to everything. Plus, of course, there is the uncomfortable fact that governments the world over have ways of forcing companies to provide them access to confidential information, sometimes complete with gag orders.

And there is the issue of usability: opening an app or a piece of software every time you want to get into a website can be a pain.

On the flipside, you carry your brain around with you all the time and it is, largely, open and unlocked. Plus your brain doesn’t come with an annual renewal fee. An unlocked, unhackable database? Amazing. Right there in your head.

Conclusion: use and get used to a password manager. Unless you are working on something extremely confidential. But if you are, then you shouldn’t be accessing it solely through a username/password interface anyway.

Frequently change your password?

As discussed above, there are good reasons to do so and not to do so.

Frequent changes mean that the old password is useless. At least in theory. The reality, as multiple researchers have discovered, is that since we are humans and not computers this approach brings with it a whole range of other issues.

For one, if people have to keep changing their passwords, they will tend to use shorter and less secure versions. They put less store in a password’s inherent security because it’s going to change again soon. That is obviously completely irrational but, let’s be honest, it also makes sense because most of us don’t really believe we are going to be hacked.

Frequent changes also eat up an enormous amount of resources: systems have to be constantly updated and people have to be constantly urged to make changes. And, of course, they keep forgetting the “new” password, leading to more changes and more time with tech support.

It’s a question of balance: do the benefits of periodically changing passwords outweigh the downsides? And in most cases, they will not.

In scenarios where people have good reason to suspect they will be actively targeted by hackers, it could make sense. But then people in those positions should already be acutely aware of the need for operational security, including using long, complex passwords that they periodically change. Having some guy from IT jump in every couple of months to tell them to do what they are already doing is just unnecessary and annoying.

So what we are really talking about is people who are hopeless at security but in important positions: so, basically, C-suite execs and politicians. The answer for them: get their staff to do it.

The most common forced-password changes will likely come from companies like Twitter (we kid you not – Twitter did exactly this after we wrote this line) that get hacked and then impose a password change on everyone. But at a corporate level, leave it alone as a policy.

Conclusion: don’t force periodic password changes, despite their appeal.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/03/world_password_day/


Encryption is Necessary, Tools and Tips Make It Easier

At the InteropITX conference, a speaker provided tips, tools, and incentives for moving to pervasive encryption in the enterprise.

InteropITX 2018 — Las Vegas — Encryption is critical and encryption can be complicated. Fortunately, there are tips, techniques, and information tools that can help reduce the difficulty and make proper encryption more achievable and affordable for most organizations. That was the message Ali Pabrai was preaching from the platform in an early morning session on Thursday.

Pabrai, CEO of security consulting firm ecfirst, began the session by running through some of the critical security incidents that have forced companies to the conclusion that encryption is not merely an option for enterprise security — it’s a necessity.

October 21, 2016 was one of the dates Pabrai recalled because it was the day on which the Mirai botnet activated and went on the offensive against DNS provider Dyn. It was, at the time, the most powerful DDoS attack on record, and Pabrai said that Mirai will be the model for more powerful attacks to come. Avoiding having company computers and IoT devices become part of an attacker’s weaponized network is just one of the reasons he gave for encrypting as much data (and data traffic) as possible. Many of the other reasons involve regulations.

May 25, 2018 is the day that GDPR begins to be enforced. It has a lot to say about protecting data and notifying customers if unprotected data is breached. It’s also not the first regulation with those requirements. “HITECH has been with us since 2010,” said Pabrai, noting that the Health Information Technology for Economic and Clinical Health (HITECH) Act defines “unprotected data” and specifies actions to be taken when it is breached.

Beyond the various regulations regarding encryption and data, there are frameworks and tools that help organizations understand how to go about protecting data both at rest and in transit.

“You can’t be in cybersecurity without being literate in the NIST framework,” Pabrai said, noting that, on April 16, NIST introduced the Cybersecurity Framework 1.1. The framework is quite prescriptive regarding best practices for building encryption into an enterprise process. That makes it and PCI DSS — another highly prescriptive standard — especially valuable resources for enterprise data security professionals.

In addition to the large frameworks and regulations, Pabrai argued in favor of a low-tech tool and basic strategy for dealing with encryption.

The low-tech tool is a spreadsheet listing every possible enterprise attack surface (databases, mobile devices, IoT, etc.) and whether the state of encryption is yes, no, or unknown. He said that this tool allows him to understand the basic security stance of an organization in a matter of minutes during the initial encounter.
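A spreadsheet works fine, but the same inventory is easy to keep as a tiny script, too. This Python sketch (with asset names invented for the example) writes the yes/no/unknown state to a CSV you can open in any spreadsheet program.

```
import csv

# Hypothetical attack surfaces and whether data there is encrypted: yes, no, or unknown.
inventory = [
    {"asset": "customer database", "encrypted": "yes"},
    {"asset": "employee laptops",  "encrypted": "no"},
    {"asset": "IoT sensor fleet",  "encrypted": "unknown"},
    {"asset": "backup tapes",      "encrypted": "unknown"},
]

with open("encryption_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["asset", "encrypted"])
    writer.writeheader()
    writer.writerows(inventory)
```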

The basic strategy is taking the argument for widespread encryption — that it dramatically reduces risk exposure — to the executive committee with questions about why they would want to take this critical step.

There’s no doubt that encryption requires investment, Pabrai said, but compared to the financial risk of unprotected data it is an investment that makes sense for practically every organization.



Article source: https://www.darkreading.com/endpoint/privacy/encryption-is-necessary-tools-and-tips-make-it-easier/d/d-id/1331714?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple