
How to make people sit up and use 2-factor auth: Show ’em a vid reusing a toothbrush to scrub a toilet – then compare it to password reuse

RSA Despite multi-factor authentication being on hand to protect online accounts and other logins from hijackings by miscreants for more than a decade now, people still aren’t using it. Today, a pair of academics revealed potential reasons why there is limited uptake.

Spoiler alert: it’s because, apparently, there isn’t enough focus on clearly explaining the actual need for this extra layer of account security.

In a presentation at this year’s RSA Conference, taking place in San Francisco this week, Dr L Jean Camp, a professor at Indiana University Bloomington in the US, and her doctoral candidate Sanchari Das, detailed their research into why people aren’t using Yubico security keys or Google’s hardware tokens for multi-factor authentication (MFA).

For those who don’t know: typically, you use these gadgets to provide an extra layer of security when logging into systems. You enter your username and password as usual, then plug the USB-based key into your computer and tap a button to activate it. The thing you’re trying to log into checks the username and password are correct, and that the physical key is valid and tied to your account, before letting you in.

That means a crook has to know your username and password, and have your physical key to log in as you. We highly recommend you investigate activating MFA on your online accounts, particularly important ones such as your webmail.
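For the curious, the possession check at the heart of this flow can be sketched in a few lines. The toy below uses an HMAC over a fresh random challenge purely for brevity – real FIDO/U2F keys use public-key signatures and a browser-mediated protocol – and every name in it is illustrative, not any vendor's API.

```python
# A toy sketch of the possession check a hardware key performs. Real
# FIDO/U2F keys use public-key signatures and a browser-mediated protocol;
# this stand-in uses an HMAC over a fresh random challenge purely to show
# the shape of the flow. Every name here is illustrative.
import hashlib
import hmac
import os

KEY_SECRET = os.urandom(32)  # provisioned into the token at enrollment

def token_respond(challenge: bytes) -> bytes:
    """What the USB key computes when you tap its button."""
    return hmac.new(KEY_SECRET, challenge, hashlib.sha256).digest()

def server_login(password_ok: bool) -> bool:
    challenge = os.urandom(32)           # fresh every login, so replays fail
    response = token_respond(challenge)  # requires the physical key
    expected = hmac.new(KEY_SECRET, challenge, hashlib.sha256).digest()
    return password_ok and hmac.compare_digest(response, expected)

print(server_login(password_ok=True))  # True only with password AND key
```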

Findings

What the pair found during their research dispels the assumption that the lack of MFA uptake is down to people being stupid, or unable to use the technology. What it actually comes down to is education and communicating risk.

The duo carried out a two-phase test in which users were told about the technology and shown instructions on how to use it. Feedback from this phase, which revealed where folks were getting stuck during MFA enrollment, was passed along to the manufacturers of the security keys, who, we're told, changed their instructions and prioritized ease of use as a result. However, it still wasn't enough.

“Even after the [training] sessions, they still didn’t use it,” Dr Camp said. “It wasn’t cost, because they got the hardware for free, and it wasn’t usability, because we changed the instructions to make those easier. In the end, risk communication was key.”


Actually getting this message across needs a variety of techniques. Millennials, they noted, were much less concerned with the loss of personal information, with many saying they put all that info online in public anyway. But when shown how their bank accounts could get pillaged, they sat up and paid attention.

The most effective way to get the security message across appears to be video. Dr Camp said a video likening password reuse to reusing a toothbrush to clean a toilet got the message across more effectively than a printed warning: not only should you not be doing it, but password security matters, and MFA is part of that. Single-factor, password-only security is flimsy and weak compared with MFA protections.

That said, longer videos work best for older folks, while shorter videos were better at convincing da yoof.

There are also privacy fears. Das noted that biometric two-factor systems – think fingerprints and face scans – were the most popular with users by a long chalk. But their adoption has been hurt by concerns that, if data like an iris print is stolen, you can't change your eyes.

With less than 10 per cent of Gmail users logging in with two-step authentication, last time we checked, there’s clearly a long way to go. But with a little more encouragement, adoption rates can be increased, the two academics concluded. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/06/password_two_factor_auth_security/

Level up Mac security, and say game over to malware? System alerts plus Apple game engine equals antivirus package

RSA Infosec guru Patrick Wardle has found a novel way to attempt to detect and stop malware and vulnerability exploits on Macs – using Apple’s own game engine.

The boss of Objective-See, a maker of in-our-opinion must-have macOS security tools, explained at this year’s RSA Conference, held this week in San Francisco, how he and his colleagues developed a series of rules to potentially identify malicious software and network intruders, then plugged them into Apple’s macOS games development toolkit to create a capable Mac security suite.

The idea, said Wardle, was to develop a package that would address what he saw as serious deficiencies in the Mac security space, both technically and culturally, from insecure Safari browser code to Apple fans convinced their computers can’t fall victim to software nasties.

“If you look at the market for zero-days, Safari vulnerabilities are cheaper than Windows browsers, and it’s not because of supply and demand,” Wardle mused. “Macs are softer targets, they’re easier to attack, and Mac users are overconfident.”

To address these issues, Wardle and his team took a two-phase approach. First, they developed MonitorKit, open-source software due to appear on GitHub, which ties into multiple macOS components to fire off alerts whenever suspicious activity – keylogging, downloads, simulated clicks, file encryption, and so on – occurs. The idea here was to create a system that could collect the tell-tale signs of a potential malware infection, ransomware attack, or even an attempt at a zero-day exploit.

“It ingests all these events using a variety of low-level subsystems and generates an output of standard events,” Wardle explained.
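MonitorKit itself wasn't public at press time, so the sketch below is only a guess at that pipeline's shape: raw, subsystem-specific records go in, and events in one standard schema come out. Field names and event types are invented for illustration.

```python
# A rough sketch of "many subsystems in, one standard event stream out".
# Field names and event types below are invented; they are not
# MonitorKit's actual API.
import time
from dataclasses import dataclass

@dataclass
class StandardEvent:
    source: str   # which macOS subsystem fired
    kind: str     # normalized event type, e.g. "keylog" or "file_write"
    subject: str  # process or file involved
    ts: float     # when it happened

def normalize(raw: dict) -> StandardEvent:
    """Map a raw subsystem record onto the common schema."""
    kind_map = {"kCGEventKeyDown": "keylog", "open_wr": "file_write"}
    return StandardEvent(
        source=raw["subsystem"],
        kind=kind_map.get(raw["type"], raw["type"]),
        subject=raw.get("process", "?"),
        ts=raw.get("time", time.time()),
    )

print(normalize({"subsystem": "quartz", "type": "kCGEventKeyDown",
                 "process": "evil.app"}))
```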

Golden rules

The second phase was to create an engine that could sort through those events using rules and event triggers to filter potentially malicious actions from everyday activities, the ultimate goal being to detect and thwart bad actors or at least warn the user that shenanigans are afoot. This part is where Wardle turned to video games.

He realized that the basic function of a computer game engine – to receive events, apply rules to them, and generate an outcome based on that information – was a perfect fit for his new security system, and Apple's GameplayKit was a particularly easy framework to work with.

Using GameplayKit's GKRuleSystem as a logic controller, Wardle had a way to apply sets of rules and triggers to MonitorKit's signals and alert the user to suspicious events – such as attempts to disable system-monitoring processes (as would happen during an infection attempt), mass encryption of files (the main step of a ransomware attack), or the appearance of applications emblazoned with a vendor's name but without a matching certificate (such as a fake Adobe Flash installer). These should quickly add up to alarm bells going off.
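GKRuleSystem itself is a Swift/Objective-C API, but the pattern Wardle describes is easy to mimic: each rule pairs a predicate with a fact to assert, events are fed through every rule, and asserted facts ring the alarm. The thresholds and rule logic in this Python toy are ours, for illustration only.

```python
# A Python mimic of the GKRuleSystem pattern: rules match events and
# assert facts; asserted facts trip an alert. Thresholds are illustrative.
class RuleSystem:
    def __init__(self, rules):
        self.rules = rules
        self.facts = set()

    def evaluate(self, event: dict) -> None:
        for predicate, fact in self.rules:
            if predicate(event):
                self.facts.add(fact)

rules = [
    # mass file encryption: the hallmark of ransomware
    (lambda e: e["kind"] == "file_encrypt" and e.get("count", 0) > 100,
     "possible_ransomware"),
    # something trying to kill a monitoring process: infection behavior
    (lambda e: e["kind"] == "kill_process" and "monitor" in e["subject"],
     "tamper_attempt"),
]

rs = RuleSystem(rules)
rs.evaluate({"kind": "file_encrypt", "count": 500, "subject": "evil.app"})
if rs.facts:
    print("ALERT:", ", ".join(sorted(rs.facts)))
```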

Taking things a step further, Wardle said, administrators could also apply their own rules to detect and block dangerous user behavior, rogue employees, and other insider threats. The tool can be set up to flag misuse of superuser privileges, the insertion of USB drives, after-hours log-ins, and so on.

What's more, Wardle thinks the approach could be applied to any number of platforms. He told The Reg that any games engine with a decent API, not just Apple's, could in theory be linked up to a set of system calls and alerts to create a similarly powerful security suite on other devices and operating systems. The tight developer constraints of iOS mean MonitorKit is pretty much a non-starter there, though. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/06/macos_wardle_apple_game_security/

Did you know?! Ghidra, the NSA’s open-sourced decompiler toolkit, is ancient Norse for ‘No backdoors, we swear!’

RSA The NSA has released its home-grown open-source reverse-engineering suite Ghidra that folks can use to poke around inside applications to hunt down security holes and other bugs.

Spoiler alert: it’s Apache 2.0-licensed, available for download here, and requires a Java runtime – and the agency swears it hasn’t backdoored the suite.

Speaking at this year’s RSA Conference in San Francisco on Tuesday, Rob Joyce, famed Christmas light hacker and cyber security adviser to the NSA director, unveiled the code-analysis software to a packed house. The agency hopes its open-source code will spark a renaissance in secure software research, he said, and reassured attendees that no dirty tricks are involved.

“There is no backdoor in Ghidra,” he announced. “This is the last community you want to release something out to with a backdoor installed, to people who hunt for this stuff to tear apart.”

Matthew “HackerFantastic” Hickey, cofounder of British security shop Hacker House, however, told The Register there was something a little odd within the program. When you run it in debug mode, it opens port 18001 to your local network, accepting and executing remote commands from any machine that can connect in. Debug mode is not activated by default, though it is an offered feature.

Don’t lose your mind. It’s just something to be aware of if you intend to improve or bugfix the thing, and start it up with debugging enabled. This issue is, therefore, more of a bugdoor than a backdoor, and can be neutered by changing the launcher shell script so that the software listens for debug connections only from the local host, rather than from any machine on the network.

An NSA spokesperson on the agency’s stand on the RSA event floor told us the open port was to allow teams to collaborate and share information and notes with each other at the same time over the network. Hickey, however, said that feature is provided by another network port.

“The shared project uses a different port, 13100, so, no, it’s not the same function. They made an error and put * instead of localhost when enabling debug mode for Ghidra,” Hickey told The Reg. We have asked the NSA for further comment.
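The whole bugdoor comes down to which address the debug listener binds. A wildcard bind exposes the port to every machine on the network; a loopback bind keeps it host-only. The generic Python sketch below illustrates the difference – it is not Ghidra's actual launcher code, and only the port number comes from the report.

```python
# Wildcard vs loopback binds: the one-line difference behind the bugdoor.
# Generic illustration only; not Ghidra's launcher.
import socket

def open_debug_port(host: str, port: int = 18001) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))  # "" (or "0.0.0.0") means every network interface
    s.listen(1)
    return s

exposed = open_debug_port("")        # reachable from any machine: the bug
exposed.close()
safe = open_debug_port("127.0.0.1")  # loopback only: the one-line fix
safe.close()
```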

The nitty gritty

In his talk, Joyce said that Ghidra was developed internally by the NSA for tearing down software, including malware, and finding out what exactly was lurking within executable binaries. It’s the sort of tool the spies use to find security weaknesses in products and projects to exploit to pwn intelligence targets.

Dude, suite … Ghidra examining an executable on a Windows desktop (screenshot)

The program’s 1.2 million lines of code are designed to reverse the compiler process, disassembling executable code into assembly listings and decompiling it into approximate C code. It also helps graph control flows through functions, inspect symbols and references, identify variables and data, and more. It’ll all be very familiar to you if you’re used to similar reverse-engineering tools, such as IDA, Hopper, Radare, Capstone, Snowman, and so on.

The platform is processor independent, capable of analyzing code targeting x86, Arm, PowerPC, MIPS, Sparc 32/64, and a host of other processors, and can run on Windows, macOS, and Linux. While built using Java, the software can also handle Python-based plugins as well as Java-written ones because, Joyce said, one NSA analyst didn’t like Java and so added Python support.
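Those Python plugins run under Jython inside Ghidra, with globals such as currentProgram injected for you. By way of example, the hello-world of Ghidra scripting – listing every function the analyzer found – looks something like this when run from the Script Manager:

```python
# The hello-world of Ghidra scripting: list every function the analyzer
# found. Run it from Ghidra's Script Manager, where the bundled Jython
# runtime injects globals such as currentProgram; it won't run standalone.
fm = currentProgram.getFunctionManager()
for func in fm.getFunctions(True):  # True = iterate forward through memory
    print("%s  %s" % (func.getEntryPoint(), func.getName()))
```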

You can watch Marcus “MalwareTechBlog” Hutchins play around with the software in this Twitch stream right here.


You can use it with or without a graphical user interface, and it is scriptable. As mentioned above, not only can you annotate code with your own comments, you can also bring in notes from other team players via its network-based collaboration functions.

For new users, extensive help files are provided, and Joyce said he hoped the community would add more functions and scripts and share them, because the NSA wants to make this a decent, widely used tool.

“Ghidra is out but this is not the end,” he promised. “This is a healthy ongoing development in the NSA, it’s our intent to have a GitHub repository out there. The buildable environment will come and we’ll accept contributions.”

Further down the line, Joyce promised, the NSA will release an integrated debugger, a powerful emulator, and improved analysis tools. It’s US taxpayer dollars at work, he said, and it may also bring future NSA recruits up to speed faster on the agency’s internal tools.

One other question on our minds: why on Earth give this away for free to everyone on the planet? Perhaps the NSA’s enemies are assumed to have similar or better tools, and perhaps the agency has moved on internally to more sophisticated suites, leaving Ghidra fit for public release. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/06/nsa_ghidra_joyce/

Consumers Care About Privacy, but Not Enough to Act on It

People claim to value data privacy and don’t trust businesses to protect them – but most fail to protect themselves.

RSA CONFERENCE 2019 – San Francisco – When it comes to data privacy, users’ practices fail to align with their values. Most claim to value privacy and don’t fully trust businesses to protect their information, yet they aren’t taking the necessary steps to put their own privacy safeguards in place.

The data comes from a new Malwarebytes survey, entitled “The Blinding Effect of Security Hubris on Data Privacy,” released here this week at the RSA Conference. Between Jan. 14 and Feb. 15, 2019, researchers polled nearly 4,000 people to learn about their confidence in their own privacy and security practices, as well as their trust in organizations to protect data.

As it turns out, participants do care about security – but only enough to do the bare minimum. Their confidence in their own privacy practices outstrips reality, researchers report.

Most (96%) people across generations, and more than 93% of Millennials, say they care about privacy. Nearly all take steps to secure their information online. Most (93%) use security software, nearly 90% say they regularly update software, and about 85% verify websites are secure before purchasing. Ninety-four percent avoid sharing personal data on social media.

People largely distrust social media platforms with their data. Researchers asked participants to rate, on a scale of 1-5, how much they trusted social media to protect their data. The average response: 0.6. Baby Boomers are most distrustful of social media (96%), followed by Gen X (94%), Gen Z (93%), and Millennials (92%). In total, 95% say they distrust social media platforms.

Search engines are considered more trustworthy. When asked to rank their trust of search engines on a 1-5 scale, the average response was a little over 2. Gen Z (75%) is the most distrustful of search engines, followed by Gen X (65%), Millennials (64%), and Baby Boomers (57%).

“One of the things that caught me by surprise was how much you trust social media versus search engines,” said Marcin Kleczynski, CEO of Malwarebytes, in an interview with Dark Reading. “From a social media perspective, you’re already giving up the data pretty willingly.”

It’s no surprise, given Facebook’s privacy scandals and tech giants’ advertising practices, that users are skeptical about sharing information. “How much you’re willing to share with Facebook is also how much you’re willing to lose in terms of privacy,” Kleczynski pointed out.

Eighty-seven percent of respondents aren’t confident in sharing personally identifiable information (PII) online. Those who are willing to share are most likely to hand over contact information, payment card details, and banking and health-related data.

Despite their distrust of tech giants and confidence in their own privacy practices, people aren’t likely to go out of their way to safeguard their information: one-third of respondents claim to read end-user license agreements (66% either skim them or ignore them entirely), 47% know which permissions their applications have, and about 53% use password managers. Twenty-nine percent reuse the same passwords across websites; for Millennials, that number rises to 37%.

“This kind of behavior is what criminals want users to do,” experts say in the report. The practice makes it easy for attackers to steal credentials from one place and use them elsewhere – a practice easily prevented with password managers, they continue.

“These are pretty concerning trends,” Kleczynski noted, adding that using a password manager is “the biggest thing you can do as a citizen online.” The common thread among the neglected practices is that they’re hard to do correctly. License agreements are long and packed with technical and legal jargon, for example, and many users don’t care about app permissions.
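The habit a password manager automates is mundane: a unique random secret per site, so one leaked credential unlocks nothing else. A sketch of the core idea in standard-library Python – not a substitute for a real manager – follows.

```python
# One unique random secret per site, so a breach of any one site reveals
# nothing about the others. A sketch of the idea, not a real manager.
import secrets

def new_password(length: int = 20) -> str:
    return secrets.token_urlsafe(length)[:length]

vault = {site: new_password() for site in ("webmail", "bank", "shop")}
print(vault)
```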

What can businesses take away from this data? Identity is key, Kleczynski said. Password managers and single sign-on services are critical to protect the credentials that grant access to data. Security software and patching are the next most important factors in protecting people in the enterprise.


Article source: https://www.darkreading.com/threat-intelligence/consumers-care-about-privacy-but-not-enough-to-act-on-it/d/d-id/1334071?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook criticised for misuse of phone numbers provided for security

Facebook’s under fire – again. This time, it’s for using phone numbers provided for security purposes for other things.

Users are once again accusing Facebook of playing fast and loose with their privacy – this time by letting anyone look up their profiles using the phone number they thought they were providing only for 2FA (two-factor authentication). What’s more, there’s no getting out of it, since Facebook has no opt-out for the “look me up by my phone number” setting.

This latest scandal blew up on Friday, when Emojipedia founder Jeremy Burge publicly criticized Facebook’s information-slurping operation.

In a string of follow-up tweets, Burge said he noticed that in September Facebook had slipped in an understated “and more” appended to the original phone-number prompt. The “and more” linked to a page explaining that the number would be used for purposes other than securing your account.

Burge also noted that getting users to put in their phone number to sign up for services has been “the single greatest coup” for the social media and advertising industries: it’s “one unique ID that is used to link your identity across every platform on the internet,” he said.

When is a search not a search?

In April 2018, Facebook CTO Mike Schroepfer announced new data access restrictions: one of a string of attempts the company made to try to appease lawmakers and regulatory bodies and to try to keep users from torching their accounts in the Cambridge Analytica fallout.

Facebook said at the time that “most people on Facebook” may have had their public profile information scraped by “malicious actors.” The scraping was done with account recovery and search tools that let users look up people by their phone numbers and email addresses, then take information from their profiles.

From Schroepfer’s post:

Until today, people could enter another person’s phone number or email address into Facebook search to help find them. So we have now disabled this feature.

Burge tweeted today that while the ability to “search” for people using their phone number was turned off last year, it’s still possible to “look up” profiles using phone numbers stored in your phone.

“This isn’t a mistake”

Facebook’s former chief security officer, Alex Stamos, said that Facebook once had plans to segregate phone numbers provided for 2FA from those users handed over for other purposes. So much for that – it’s now clear, he said, that Facebook made an intentional choice not to do so.

Facebook never did replace Stamos. Too bad: as Stamos pointed out in another tweet, this is a clear example of why companies need somebody devoted to advocating for security.

The privacy and safety repercussions

These are the privacy repercussions: if someone you know has used her phone number to turn on Facebook 2FA, and if you’ve allowed the Facebook app to access the contacts on your phone, it will see your friend’s phone number and offer to connect the two of you – despite your friend never having agreed to make her number available for lookups.

This doesn’t just lead to potentially awkward situations, such as when you’re not real-life friends with the person Facebook suggests you link up with. As security expert and academic Zeynep Tufekci pointed out, it can prove dangerous for people who need to stay hidden.

What to do?

If you choose to remove your phone number from your account, you can’t use it to recover the account or use SMS-based 2FA.

The good news is that in May 2018, Facebook made it easier to use third-party authentication apps for 2FA – such as, for example, Google Authenticator, Authy, Duo Security, or Sophos Authenticator (here are the links for the iOS and the Android version).

That doesn’t necessarily mean that profiles aren’t findable by phone number search, though. As Burge pointed out, phone numbers have been used throughout Facebook’s other apps, including WhatsApp and Instagram. And even if you don’t give Facebook your number, a friend who shares their address book with one of Facebook’s apps might do it for you.

You can at least mitigate the fallout by limiting who can look you up by using your phone number.

Go to Settings > Privacy > How people can find and contact you. Set the drop-down next to Who can look you up using the phone number you provided? to “Friends” rather than “Everyone” or “Friends of friends.” As it is, Facebook sets this to “Everyone” by default.

If you’re concerned about which privacy and security settings to focus on in Facebook, you might be interested in our guide to protecting your account.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/boK9_Nl_7I8/

Adi Shamir visa snub: US govt slammed after the S in RSA blocked from his own RSA conf

RSA Adi Shamir, the S in the renowned RSA encryption system, didn’t take his usual place on the Cryptographers’ Panel at this year’s RSA Conference in San Francisco – because he couldn’t get a visa from the US government. And he’s not alone.

Shamir – the 2002 Turing Award co-winner and a member of the US, French, and Israeli national science academies and of Britain’s Royal Society – lives in Israel, and applied for a US visa two months ago to attend the information-security conference, the largest of its type in the world, which is being held this week in California. Shamir, along with Ron Rivest and Leonard Adleman, invented the widely used RSA cryptosystem, and cofounded RSA Security, which has been running the RSA Conference since 1991.

Shamir usually attends the USA event each year to speak on a panel with fellow cryptographers. However, he heard nothing back about his visa application this time around, and so is stuck at home.

“I’ve been left in total limbo,” he told the conference via video link this morning. “If someone like me can’t get a visa to get a keynote, perhaps it’s time we rethink where we organize our conferences.”

He and other panelists pointed out that he was far from alone in this. Over the past few years, security researchers have found it increasingly difficult to enter America legally for conferences, though this year, possibly as a result of President Trump’s partial federal government shutdown, the problem is much, much worse, with countless visa applications stuck in a backlog.

“Adi is not the only one; there are other researchers who couldn’t get a visa,” warned Shafi Goldwasser, professor of computing at the University of California, Berkeley. “There are other researchers who haven’t got a visa and there is no word of any progress. It’s unclear who is in charge.”


Rivest, Shamir’s long-time colleague and the R in RSA, was visibly fuming over the blocking of his friend. Rivest said he would be writing to his senator and congressperson to protest, and he urged the thousands of conference attendees to do the same.

“It’s embarrassing to be a Yankee some days,” sighed the crypto-boffin.

The US government wasn’t the only organization taking flak from the experts in cryptography: the Australian government also received a verbal flaying for its new law requiring communications service providers to install surveillance backdoors in their software to allow cops and spies to secretly listen in on private chatter. Oz political leaders had insisted the laws of Down Under overrode the mathematical laws of cryptography, suggesting it was therefore possible to somehow introduce peepholes into strong encryption for government agents without completely undermining the crypto.

“Australia has given us the lovely quote that the laws of mathematics are all well and good, but the laws of Australia apply in Australia,” crypto-guru Whitfield Diffie said, with a grin. “If you extend this to physics they could ban fission and ensure they are safe from nuclear weapons, or ban certain chemical reactions and solve their global warming problem. It’s a step that isn’t going to be productive.”

He said corporations will probably comply with Australia’s new rules and introduce wiretapping points in their products, though smaller folks rolling their own end-to-end encrypted systems – like, say, terrorists – probably won’t. Ultimately, he said, he’d like to see personal privacy left untouched by legislation.

Two hundred years ago, he said, people had more privacy than anyone does today. But with computer-brain interfaces looming, he believed our personal thoughts could be up for grabs to anyone with a court order.

Overall, there was optimism that the world is heading in the right direction on some topics. Election systems are getting hardened, crypto systems are getting stronger, and vendors are getting the message that security is vital. It’s clear governments aren’t helping, in more ways than one. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/rsa_cofounder_us_visa_row/

FBI boss: Never mind Russia and social media, China ransacks US biz for blueprints, secrets at ‘surprisingly’ huge scale

RSA While Russian hackers, Kremlin-backed or otherwise, grab the headlines, China remains the biggest cyber-security threat to America, FBI director Christopher Wray warned today.

Speaking at the RSA Conference in San Francisco this morning, Wray said the scale of Beijing’s government-orchestrated online espionage is greater than that of any other nation state. The Middle Kingdom’s spies are attacking US corporate computer networks, and plundering systems for blueprints and other top-secret intellectual property, at an unprecedented rate, the FBI boss claimed.

“Of all the things that surprised me since taking on the directorship, it was the breadth and depth and scale of the Chinese counterintelligence threat,” Wray said.

“We’re investigating espionage and criminal investigations in nearly all 56 FBI field offices, almost all of which lead back to China. For too long, the US has been focused on the threat China poses. There’s nothing like it.”


When asked if there was a political element to such claims, in light of the ongoing trade row between the White House and Beijing, Wray was dismissive. The FBI is an independent agency, he said, adding, “If we find someone doing crimes, we will go after them, and I don’t really care what a foreign government has to say about it,” to warm applause from the conference crowd.

Russia’s online meddling is still an issue, Wray said, but it mainly targets social media and other internet forums to “sow divisiveness and discord, and undermine faith in democracy.” These efforts were curtailed somewhat by Uncle Sam’s cyber-warriors during the midterm elections, and the FBI is bracing for more Kremlin-masterminded interference and mischief when the 2020 elections roll around.

The Feds have had a lot more help from tech giants in countering Russia’s social-media shenanigans than they had in the past, Wray noted, and overall the public-private partnership is working well. He recalled the time a list of British and US military personnel was stolen and passed on to Daesh-bags in Syria as a kill list – a case Wray said would not have been cracked without the help of private industry.

Now about encryption backdoors

Possibly in light of his audience, Wray insisted the FBI doesn’t want to weaken encryption nor demand surveillance backdoors in software and other products – his predecessor, on the other hand, wanted the front-door keys to the world’s crypto-systems. However, Wray still expects his agents to somehow be let in to inspect private encrypted chatter and data when investigating terror threats and foreign snoops, without undermining people’s personal security.

“I’m well aware this is a provocative subject, and my first approach is that we aren’t combative,” the FBI boss said. “It’s a public safety issue. We are a very strong believer in strong crypto, but we are duty bound to protect American people. This can’t be a sustainable end state, an unfettered space for terrorists and spies to hide their communications.”

In the last year, Wray said, he had seen increasing signs that the technology community and law enforcement are talking more reasonably about this. There may well be a way to combine strong encryption and lawful intercepts, he said, if people are willing to put their heads together.

Wray finished his keynote with a plea for more recruits to join the FBI and serve their country.

“The grass is browner on this side of the fence but there’s no greater challenge,” he said.

“We are more selective than some Ivy League colleges, and had a 0.5 per cent staff attrition rate last year – very few organisations have that. Getting up in the morning and going to work to defend the Constitution keeps people motivated, and I would stack our crew against anyone else in the world.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/fbi_china_warning/

You. Shall. Not. Pass… word: Soon, you may be logging into websites using your phone, face, fingerprint or token

RSA At 2004’s RSA Conference, then Microsoft chairman Bill Gates predicted the death of the password because passwords have problems and people are bad at managing them. And fifteen years on, as RSA USA 2019 gets underway in San Francisco this week, we still have passwords.

But the possibility that internet users may be able to log into websites without typing a password or prompting a password management app to fill in the blanks has become a bit more plausible, with the standardization of the Web Authentication specification.

Known as WebAuthn for those who find six syllables a bit taxing to say aloud, the newly blessed specification is already supported in Android, Apple Safari (preview), Google Chrome, Microsoft Edge, Mozilla Firefox, and Windows 10.

The spec will allow people to authenticate themselves and log into internet accounts using a preferred device, through cryptographic keys derived from biometrics, mobile hardware, and/or FIDO2-compliant security keys.

“Now is the time for web services and businesses to adopt WebAuthn to move beyond vulnerable passwords and help web users improve the security of their online experiences,” said Jeff Jaffe, CEO of web standards group W3C, in a statement on Monday.

WebAuthn doesn’t really get rid of passwords. Rather, it eliminates the risks that come with typed credentials and with storing even hashed user passwords on servers – phishing, password theft, and replay attacks – and shifts the focus to hardware-based cryptographic login credentials plus some form of authentication gesture or code.

Looking ahead, you’ll get to worry about losing your physical hardware key rather than losing the secrecy protecting your passwords through a poorly secured server.


The technology should allow websites to support low-friction authentication from visitors who have FIDO2 credentials associated with their desktop or mobile device.

In such a scenario, a user with a laptop or desktop computer and a Bluetooth-paired mobile phone might navigate a website’s sign-in page and receive a prompt to authenticate via phone. The user would then take some authorization action like pressing the phone’s fingerprint reader, if available, or entering a PIN to be logged in on the applicable computer.

In another scenario, a user with a laptop or desktop computer may rely on a dedicated FIDO2 fob in lieu of a phone-based authenticator. But the authentication process will probably still require pressing a button on the fob or entering a PIN. That’s because automatic authentication could go wrong – you wouldn’t want a USB stick to provide access to your bank account without some challenge.
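Strip away the wire format – the CBOR encoding, attestation, and origin binding – and the cryptographic core of a WebAuthn login is a fresh challenge signed by a key that never leaves the authenticator. A minimal sketch of that core, using the third-party cryptography package rather than any real WebAuthn stack:

```python
# The cryptographic heart of a WebAuthn login, minus the wire format:
# the site keeps only a public key and verifies a signature over a fresh
# challenge. Uses the third-party `cryptography` package; a sketch only.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()    # never leaves the authenticator
stored_public_key = device_key.public_key()  # all the server ever stores

challenge = os.urandom(32)              # fresh per login attempt: no replays
signature = device_key.sign(challenge)  # happens after your tap, PIN, or scan

try:
    stored_public_key.verify(signature, challenge)
    print("login ok - and no password sits on the server to be phished")
except InvalidSignature:
    print("reject")
```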

At Dropbox, which implemented WebAuthn last year, the technology provides two-step verification rather than one-step access. The company said it kept passwords as part of the authentication process because there are a variety of security and usability factors that make it premature to get rid of them entirely.

Microsoft meanwhile has done its best to fulfill its co-founder’s password death wish, adding support for FIDO2 hardware authentication in its Windows 10 October 2018 update last year. The company now allows those using Windows 10 with Microsoft Edge to log in to their Microsoft account without entering a password. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/web_authentication/

NSA may kill off mass phone spying program Snowden exposed, says Congressional staffer

Special report The NSA may kill off a controversial mass surveillance program of Americans that was exposed by Edward Snowden, according to a Congressional staffer.

Luke Murry is national security advisor to House minority leader Kevin McCarthy (R-CA), and over the weekend told the Lawfare podcast (5 minutes in) that the US spying agency hasn’t been using its system for blanket collection of US citizens’ telephone metadata for the past six months “because of problems with the way in which that information was collected, and possibly collecting on US citizens.”

Murry then suggested the White House may simply drop the program, especially since it requires Congress to reauthorize it this December. “I’m not actually certain that the administration will want to start that back up given where they’ve been in the last six months,” he said.

That comment was picked up by reporters, and has led to lots of speculation that the NSA may be ending one of its most disliked spying programs: one that has been repeatedly criticized as unconstitutional by the law courts, privacy advocates, and legislators, because it indiscriminately snoops on America’s own citizens.

The truth, unfortunately, is very different.

Murry muddied the issue by conflating the bulk collection of phone metadata with the broader Section 215 of the USA PATRIOT Act, which g-men previously used to obtain various bits of intelligence on pretty much anyone on US soil. Section 215 expired at the end of May 2015, and was, in a rather roundabout and messy way, reenabled through to the end of 2019 via the USA Freedom Act that passed the following month. Now it’s 2019, and the section is up for renewal again.

When Murry was asked about national security topics coming up this year, he said: “One which may be must-pass, may actually not be must-pass, is Section 215 of USA Freedom Act, where you have this bulk collection of, basically metadata on telephone conversations — not the actual content of the conversations but we’re talking about length of call, time of call, who’s calling — and that expires at the end of this year.”

Now, Section 215 is in fact much broader than phone metadata collection. In 2014, for example, 180 orders were authorized by the US government’s special FISA Court under Section 215, but only five of them related to metadata. As for what the rest cover – well, the truth is we don’t know, because it remains secret.

Tangible

The best guess is that the remaining 97 percent of the Section 215 orders cover things like emails and instant messages, search engine searches, and video uploads. The actual wording of the law is that the NSA can collect “tangible things” – which is likely the broadest possible language that the NSA and FBI could imagine when the law was written.

The reason that telephone metadata has been so closely associated with Section 215 is because that was the part of the blanket surveillance program of Americans whistleblower Edward Snowden first exposed, and it created a huge stink. It has stuck in people’s heads.


Telephone metadata collection has also repeatedly featured in battles over spying powers and in public announcements by the NSA. Why? In large part because it has become part of the larger public awareness in a way that all the other things Uncle Sam carries out under Section 215 have not.

In 2015, one month after the program was reauthorized, the Office of the Director of National Intelligence (ODNI) issued a rare statement that seemed like good news: the NSA would stop analyzing old bulk telephony metadata and start deleting it.

After Congress had looked into Section 215, a new agreement was reached that the NSA would turn away from bulk collection of data and instead focus on “new targeted production.” The message was that the security services had learned their lesson and Congressional oversight has worked: the spying program had been scaled back.

But had it? Well, no, is the answer. Because despite the new “targeted” approach, three years later in June 2018, another statement came out of the blue in which the ODNI said it had begun deleting all “call detail records” – CDRs – that it had acquired since that 2015 change in approach.

Why? And why did it make the decision public?

Irregular

We don’t know. But the ODNI’s explanation was “because several months ago NSA analysts noted technical irregularities in some data received from telecommunications service providers.” Those “irregularities” resulted in the NSA receiving information it was not authorized to receive, it said.

For some reason, prior to going public, the spy nerve-center felt the need to share its findings with the Department of Justice and Office of the Director of National Intelligence, and then they all decided the best course of action was to delete everything.

And once those entities had been pulled in, the NSA decided it needed to inform Congressional oversight committees, and the Privacy and Civil Liberties Oversight Board (PCLOB), as well as let the DoJ know it should inform the Foreign Intelligence Surveillance Court (FISC). At that point so many people knew that the ODNI presumably thought it was only a matter of time before the information leaked and so it put out a public statement.

But, the NSA assured everyone, “the root cause of the problem has since been addressed for future CDR acquisitions, and NSA has reviewed and revalidated its intelligence reporting to ensure that the reports were based on properly received CDRs.”

Except, now, according to McCarthy’s national security advisor Luke Murry, the NSA never started the program up again. And it is prepared, at least according to Murry’s reading of the situation, to let the whole metadata program drop.

Why? It almost certainly has something to do with two US senators: Ron Wyden (D-OR) and Rand Paul (R-KY). Both have been battling with the NSA for several years over its spying programs.

Wyden is a member of the Senate intelligence committee, and so is given classified briefings about what the NSA really gets up to. Paul is a libertarian, and philosophically opposed to state surveillance.

Together, with others, they put up an almighty fight last year to push greater controls on another controversial spying program – Section 702 of the Foreign Intelligence Surveillance Act (FISA) – as it went through reauthorization. Ultimately they failed, but they got very close to winning, and that worried the NSA immensely.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/nsa_spying_program/

Incident Response: Having a Plan Isn’t Enough

Data shows organizations neglect to review and update breach response plans as employees and processes change, putting data at risk.

Businesses are slowly improving their data breach plans, but lack of executive involvement, failure to review and update plans, and regulatory and compliance challenges prevent them from being able to respond to security incidents with increasingly severe consequences.

A new study – entitled “Is Your Company Ready for a Big Data Breach?” – conducted by the Ponemon Institute and commissioned by Experian, polled 643 professionals in IT and IT security on their organizations’ data breach response practices.

They learned 52% of respondents rate their response plans as “very effective,” slightly up from 49% one year prior and 42% in 2016. Still, only 36% feel sufficiently prepared to respond to incidents involving business confidential information and intellectual property.

It’s slow progress at a time when more businesses are disclosing breaches and realizing their far-reaching effects. Nearly 60% of respondents reported a data breach in 2018; of those, 73% reported multiple breaches. Incidents are causing greater financial damage: a 2018 Ponemon study put the average consolidated cost of a breach at $3.86 million. Fear of reputational damage is also top-of-mind for the 27% of respondents who believe a breach would tarnish their brands.

Most (92%) companies have a data breach notification plan in place. The problem is, most companies with a breach response plan fail to adapt it to change. Forty-two percent of respondents have “no set time period” for reviewing and updating their response plans, and 23% haven’t reviewed or updated their plans since they were put in place – “which may be years at a time,” says Michael Bruemmer, vice president of data breach resolution at Experian.

“Where we see simple mistakes being made is, the plan is set on the shelf and done once, then employees and processes change and they don’t update the plan,” he explains. “In data breach response, timing and accuracy of information is really important.”

It’s one thing to have a good response, but there’s a steep penalty if your company suffers multiple security incidents and doesn’t alter its plan to reflect what was learned from them. Companies should regularly follow up, update the plan, and practice the incident response process, researchers note in the report.

Unpacking Response Plans
Which incidents do companies plan for? Most (87%) plans include guidance on how to handle a distributed denial-of-service (DDoS) attack that could cause system outage, 80% address loss or theft of personally identifiable information (PII), and 79% address loss or theft of data on customer associations that could lead to brand damage. About three-quarters include guidance on loss or theft of payment data; 73% address loss or theft of intellectual property or confidential business data.

“Many companies are among those that recognize the sensitive PII in their possession and know they are an attractive target,” Bruemmer says. They know they need to have a plan regardless of whether they’ve already been hit. Still, “a vast number of businesses only learn that their company needs to have a plan in place once the security incident occurs,” he adds.

Bruemmer advises organizations to form a data privacy program or job-specific security or privacy training program for employees who have access to PII and other sensitive information. Twenty-seven percent of businesses don’t have this type of program, he adds, and people who have admin access and handle PII should be trained on how to avoid cyberattacks.

“The blanket approach that everyone takes the same training … that used to be the norm five years ago. That can’t be the norm now,” he explains.

Breach Response’s Biggest Burdens
Cloud complicates breach response, researchers report. Sixty-three percent say lack of visibility into end users’ data access is their biggest barrier in improving breach response. Sixty percent say the proliferation of cloud services is another major challenge, and 43% are concerned about the lack of security process for third parties that handle their corporate data.

Lack of expertise may have fallen to fourth place, cited by only 37% as a barrier to breach response, but more people have flagged it as an obstacle over the years: less than one-third worried about lack of expertise in the 2017 survey, which in turn was up from 29% the year prior.

Some types of security incidents pose a greater challenge than others. Only 21% of respondents expressed confidence in their ability to handle ransomware attacks, and 24% said the same for spear-phishing, researchers found. Less than half (47%) educate employees on spear-phishing.

Organizations also face compliance and regulatory challenges, Bruemmer points out. The EU’s General Data Protection Regulation (GDPR) went into effect in May 2018; since then, 59% of respondents report their organizations’ plans now include processes to handle an international data breach, up from 51% in 2016. However, GDPR rules are tough to comply with, and only 36% of companies say they have a high ability to comply with the data breach notification rules.

It’s Time for Execs to Chip In
Senior leadership’s involvement in breach response is “mostly reactive.” C-suite and board members mostly want to know whether a material breach took place and generally don’t know about the specific security threats to their organizations. Only 22% of respondents say the C-suite regularly participates in response plan reviews; 10% say the same for board members.

About half (49%) of respondents say executives don’t know about response plans, and 81% think their response plans would be more effective with executive involvement. They also cite a need for more drills to practice incident response and for more skilled infosec employees.


Article source: https://www.darkreading.com/threat-intelligence/incident-response-having-a-plan-isnt-enough-/d/d-id/1334056?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple