New Threat Group Using Old Technique to Run Custom Malware

Whitefly has been exploiting DLL hijacking with considerable success against organizations since at least 2017, Symantec says.

Whitefly, a previously unknown threat group targeting organizations in Singapore, is the latest to demonstrate just how effective some long-standing attack techniques and tools continue to be for breaking into and maintaining persistence on enterprise networks.

In a report Wednesday, Symantec identified Whitefly as the group responsible for an attack on Singapore healthcare organization SingHealth last July that resulted in the theft of 1.5 million patient records. The attack is one of several that Whitefly has carried out in Singapore since at least 2017.

Whitefly’s targets have included organizations in the telecommunications, healthcare, engineering, and media sectors. Most of the victims have been Singapore-based companies, but a handful of multinational firms with operations in the country have been affected as well.

Like many threat groups, Whitefly has been using a combination of custom malware, open source tools, and living-off-the-land tactics in its attacks. One of those tactics is a well-documented technique known as search-order hijacking, or the DLL load-order attack.

Whitefly has been consistently using the approach to run a custom malware tool called Vcrodat on compromised systems. Vcrodat is designed to decrypt, load, and launch files to run in memory on victim systems, according to Symantec.

Search-order hijacking is a well-known technique that other attackers have used for quite some time, says Jon DiMaggio, senior threat intelligence analyst at Symantec.

The technique exploits the predictable manner in which Windows loads dynamic link libraries (DLLs) when an application itself does not explicitly specify the path. Attackers can abuse the process to get Windows to load a malicious DLL instead of the legitimate one.

“If the import name of the DLL matches the name of an authorized library, the OS will map the DLL to the process in memory of the victim system,” DiMaggio says. With Vcrodat, for instance, what Whitefly frequently has been doing is using DLLs with the same name as DLLs belonging to legitimate security software. “Defeating search order hijacking on its own can be difficult since it is not a recognized vulnerability but instead a legitimate OS component being misused,” DiMaggio says.
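One practical way for defenders to hunt for this technique is to look for DLLs sitting in application directories that share a name with DLLs in the Windows system directory, since the application's own directory is searched first. The Python sketch below is a hypothetical illustration of that heuristic — it is not Symantec's detection logic, the application path is a placeholder, and a real check would also validate digital signatures.

```python
# Hypothetical hunting sketch: flag DLLs in an application directory whose
# names shadow DLLs that also live in System32. Windows searches the
# application's directory before the system directory, so a same-named DLL
# planted next to an EXE wins the search order.
import os
from pathlib import Path

SYSTEM32 = Path(os.environ.get("SystemRoot", r"C:\Windows")) / "System32"

def find_shadowing_dlls(app_dir: str) -> list[Path]:
    """Return DLLs under app_dir that share a name with a System32 DLL."""
    system_dlls = {p.name.lower() for p in SYSTEM32.glob("*.dll")}
    return [p for p in Path(app_dir).rglob("*.dll")
            if p.name.lower() in system_dlls]

if __name__ == "__main__":
    # "ExampleApp" is a placeholder path for illustration only.
    for suspect in find_shadowing_dlls(r"C:\Program Files\ExampleApp"):
        print(f"Possible search-order hijack candidate: {suspect}")
```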

But security and anti-malware tools exist that can prevent malicious DLLs from running. And keeping apps and operating systems properly patched can mitigate the risk too, he says.

In addition to DLL hijacking, Whitefly has been using other commonly known tools in its attacks as well. For instance, once the group compromises an initial computer, it maps the network and tries to infect other computers. The group does this using the open source Mimikatz credential-gathering tool and another open source tool that exploits a previously known Windows privilege escalation vulnerability (CVE-2016-0051). “If the victim had patched against this vulnerability, the attack would be unsuccessful and the attacker would be forced to find another infection vector,” DiMaggio says.

Whitefly has also been using a combination of legitimate tools such as PowerShell and other publicly available hacking tools — such as those used for penetration testing — to remain undetected on compromised networks for as long as possible.

By living off the land and using tools already in the environment, Whitefly has been blending its malicious activity with traffic and tool use associated with legitimate administrative activity. “Since anyone can download these tools, it’s almost impossible to use them for attribution,” DiMaggio notes.

Whitefly currently appears to be focused only on organizations in Singapore. But its tactics, techniques, and procedures are similar to those used by numerous other groups, including low-level cybercrime gangs that increasingly have been borrowing ideas from persistent threat actors and state-sponsored players.

Importantly, some of the tools that the group has developed — including Vcrodat and a multipurpose command tool — have been used in attacks outside Singapore. While it is possible that Whitefly was responsible for these attacks, it is more likely that other attackers have access to the same tools, Symantec said in its report.

“Attackers continue to use creative ways to infect targets,” DiMaggio says. “Whitefly is persistent and has been successful at compromising targets and maintaining an undetected presence on the victim network for months at a time.” For enterprise organizations, such campaigns highlight the need to monitor for both malicious and legitimate activity, he says.


Article source: https://www.darkreading.com/attacks-breaches/new-threat-group-using-old-technique-to-run-custom-malware/d/d-id/1334089?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Serious Chrome zero-day – Google says update “right this minute”

Chrome users, make sure you’ve got the very latest version.

Or, as Justin Schuh, one of Chrome’s well-known security researchers, put it:

[L]ike, seriously, update your Chrome installs… like right this minute.

We’re not big Chrome fans – we’ve always thought that Firefox is better in both form and function, to be honest – but we have Chrome installed at the moment and can tell you that the version you want is 72.0.3626.121, released at the start of March 2019.

To check that you’re up-to-date, go to the About Google Chrome… window, accessible from the address bar using the special URL chrome://settings/help.

This will not only show the current version but also do an update check at the same time, just in case any recent auto-updates have failed or your computer hasn’t called home yet.

The reason that even the Chrome team are wading in with you’d-better-update warnings is the recent appearance of a zero-day security vulnerability, dubbed CVE-2019-5786, for which Google says it is “aware of reports that an exploit […] exists in the wild.”

To clarify.

A vulnerability, or vuln for short, is a bug that makes software go wrong in a way that reduces computer security.

An exploit is a way of deliberately triggering a vulnerability to sneak past a security control.

Exploitable or not?

To be clear, all vulnerabilities represent a risk, by definition, even if the worst you can do with the bug is to crash a program or produce a sea of unexpected error messages.

But in the same sort of way that all thumbs are fingers, while not all fingers are thumbs…

…all exploits arise from vulnerabilities, while not all vulnerabilities can be turned into exploits.

Nevertheless, some vulnerabilities, when analysed, examined, probed and attacked with sufficient ingenuity, can be tricked into doing much more than just provoking an unwanted error or bombing out an app.

For example, attackers may be able to make a program crash in a cunning way that leaves the software alive but with the attackers in direct control of its execution, rather than killing off the program entirely and leaving the attackers staring at an apologetic operating system error message.

You can see why this sort of attack, relying as it does on a specific and treacherous abuse of a vulnerability, ended up with the nickname exploit.

And a zero-day, very loosely speaking, is a vulnerability that the Bad Guys figured out how to exploit before the Good Guys were able to find and patch it themselves.

In other words, a zero-day, often written 0-day for short, is an attack against which even the best-informed sysadmins had zero days during which they could have patched proactively.

The name zero-day is a little curious, given that most 0-days are only noticed several days – or perhaps even weeks or months – after the crooks started using them. Obviously, the longer the crooks can keep an 0-day away from security researchers, the longer it can be abused.

The term comes from the old days of piracy and game cracking, where hackers rushed for the bragging rights of being first to produce a cracked version. The ultimate crack was known as a zero-day – one that came out on the very same day as the legitimate product, meaning that the pirates had zero days to wait before they could leech the game for free.

Precise information about the Chrome CVE-2019-5786 zero-day is hard to come by at the moment – as Google says:

Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

According to the official release notes, this vulnerability involves a memory mismanagement bug in a part of Chrome called FileReader.

That’s a programming tool that makes it easy for web developers to read the contents of local files you’ve selected, for example when you pick a file to upload or an attachment to add to your webmail.

When we heard that the vulnerability was connected to FileReader, we assumed that the bug would involve reading from files you weren’t supposed to.

Ironically, however, it looks as though attackers can take much more general control, allowing them to pull off what’s called Remote Code Execution, or RCE.

RCE almost always means crooks can implant malware without any warnings, dialogs or popups.

Just tricking you into looking at a booby-trapped web page might be enough for crooks to take over your computer remotely.

What to do?

There doesn’t seem to be a workaround, but if you make sure you’re up to date, you don’t need one because the bug will be squashed.

Without a vulnerability to exploit, there is – rather obviously – no exploit, so patching is the ultimate fix for this one.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x-UxEiYwx10/

UK Ministry of Justice: Surprise! We tested out biometric tech in prisons and ‘visitors’ with drugs up their bums ran away

The UK Ministry of Justice is mooting a rollout of biometric technology in prisons to cut down on visitors bringing in contraband, reporting that a “successful” recent trial had a deterrent effect.

However, campaigners said news of the trial had come as a “total shock” and watchdogs warned the tech must only be used if there is evidence that it is necessary and proportionate.

According to the MoJ, the new tech – from Tascent, FaceWatch and IDScan – was tested at three prisons in December and January: iris scanners at HMP Lindholme; facial recognition kit at HMP Humber; and identity document verification at HMP Hull.

The aim is to crack down on the amount of contraband brought into prisons. Last year more than 23,000 seizures of drugs and phones were made by prison staff, which was up 4,000 on the previous year.

Most prisons use paper-based verification for documents like driving licences, which the MoJ said were susceptible to fraud, with traffickers using fake documents to gain entry.

The idea is that using biometrics to check the identities of visitors would be more accurate, as well as speeding up the process and reducing the number of people needed – crucial for resource-stricken prisons.

In a canned statement, the MoJ branded the trials “successful”.

In particular, it pointed to the deterrent effect of the kit, saying one prison had reported a higher number than usual of “no-shows” after visitors found the software was in use.

However, it isn’t clear whether this took into account the fact that some innocent visitors might choose not to attend if they thought they would face biometric tests – such concerns are well-documented in the police’s use of facial recognition in live environments.

An MoJ spokeswoman emphasised this deterrent effect was “anecdotal”, and that the trial was mainly designed to see whether the tech could perform the tasks prison staff currently do – effectively, checking visitors’ IDs.

Nonetheless, the MoJ said it was looking at rolling it out elsewhere, and the justice secretary David Gauke issued a gushing statement saying that the kit “has the potential to significantly aid our efforts” to fight prison gangs, as part of a “multimillion-pound investment” in security.

Campaigners in ‘total shock’

The move comes amid wider concerns about the government’s increasing use of intrusive biometric technologies on the public, largely due to the fact it is being used without a legal or oversight framework. More policy details were expected in the Home Office’s long-awaited biometrics strategy, but this failed to deliver.

There are, however, various regulators and advisory bodies in the field that the MoJ could have consulted ahead of the trial – but it appears that did not happen.

“The use of facial recognition in prisons comes as a total shock to everyone, including the Commissioners who have been tasked to oversee this new surveillance technology,” said Griff Ferris, policy officer at campaign group Big Brother Watch.

Indeed, surveillance camera commissioner Tony Porter confirmed to The Register that he hadn’t been notified in advance, and was therefore “unsighted” on the technology used, the standards applied, or if any of the 12 principles in the Surveillance Camera Code of Practice were used.

Porter added that he was surprised not to have been consulted given that he has been active in advising police and local authorities in their use of artificial intelligence, and cameras connected to such technology.

But he did stress that he recognised the “very real issues” the Prison Service faces in ensuring safety for inmates and staff, and said he would be writing to the MoJ to offer advice and guidance on future trials.

The Information Commissioner’s Office – which launched a formal probe on facial recognition tech last year – didn’t say whether it had been consulted. But it did warn that organisations needed to ensure they properly assess the risks of using new and intrusive technologies, particularly involving biometric data.

“The use of this technology in prisons must be carefully considered, with clear evidence to demonstrate that it is necessary and proportionate,” the ICO said in a statement to El Reg.

This echoes recent advice from a government biometrics ethics advisory group that said facial recognition tech should only be used if it is proven to be effective and is the only method available.

The MoJ moved to head off concerns by saying that visitors weren’t cross-checked against any databases; that data collected was deleted at the end of each day of the trial; and that it had undertaken privacy impact and data protection assessments before the trial.

It added that it would “of course, consult with the statutory and ethical bodies responsible for information and biometrics” before proceeding with any further trials.

But the fact that news of the initial study came via a press release is indicative of observers’ concerns about the effect poorly communicated trials could have.

The ICO said the “risk of eroding public trust is great if there is a lack of transparency” about the use of new and emerging technologies.

And Ferris said: “Government seeking public approval for facial recognition cameras in low-rights environments such as prisons is a staggering move, since we know it’s also trying to introduce them as a general public surveillance tool.”

He added the group was surprised that the government was “continuing to take such an experimental approach to human rights” in the face of legal challenges on its use of the tech by police.

We’ve contacted the biometrics commissioner Paul Wiles to confirm if he was consulted and will update this article if we receive a response. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/06/moj_facial_recogntion_technology_trials/

Fighting Alert Fatigue with Actionable Intelligence

By fine-tuning security system algorithms, analysts can make alerts intelligent and useful, not merely generators of noise.

To address the barrage of advanced cyberattacks, organizations are turning to an ever-increasing number of security products. Research by Enterprise Strategy Group found 40% of organizations use 10 to 25 different security solutions, while 30% use 26 to 50. That translates — depending on the size of the organization — to tens of thousands of alerts daily.

While alerts are crucial in helping analysts identify and mitigate damages from cyberattacks, they also can be too much of a good thing. A 2017 Ponemon Institute study found that more than half of security alerts are false positives and that companies waste an average of 425 hours per week responding to them. The pressure to quickly differentiate critical alerts from the “noise” becomes overwhelming for understaffed security teams, leading to alert fatigue, frustration, burnout, and desensitization.

Worse, when analysts are unable to open and respond to alerts, many may be left unread, or marked as “read” or “closed” without being addressed. Analysts may also become weary of chasing down red herrings and start to ignore them altogether. When analysts are desensitized to false positives or overburdened by a sea of alerts, the organization is at risk of missing a harmful threat that slips through their defense. The 2014 Target breach and 2015 Sony breach are just two prominent examples of this problem.

Context Is Key
Preventing alert fatigue requires proactive fine-tuning of system alert algorithms by security analysts, who can leverage their skill sets, experience, and knowledge of their environment to add context so that alerts are intelligent and useful, not merely generators of noise.

For example, since valid users occasionally make mistakes logging in, you may not want to receive an alert for every failed login. Instead, you can add intelligence to the algorithm triggering alerts for multiple failed logins, first by grouping them by a specific source, such as a specific IP address logging in and failing multiple times. Then you can add more conditional logic, such as a time-based parameter set to notice something like 10 failed logins for a specific user within a five-minute period. This added context triggers alerts to the more suspicious activity that warrants investigation, saving time, preventing desensitization, and shortening the gap in response time to a valid attack.
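As a rough illustration of that kind of conditional logic, the Python sketch below alerts only when a single source IP accumulates 10 failed logins for the same user inside a five-minute window. The threshold, window, and grouping key are illustrative assumptions, not a prescription for any particular SIEM.

```python
# Minimal sketch of the rule described above: alert only when one source IP
# racks up THRESHOLD failed logins for the same user inside a sliding
# five-minute window, instead of alerting on every individual failure.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10

# Events are grouped by (source IP, user), per the article's example.
failures: dict[tuple[str, str], deque] = defaultdict(deque)

def record_failed_login(src_ip: str, user: str, when: datetime) -> bool:
    """Record one failed login; return True if it should trigger an alert."""
    events = failures[(src_ip, user)]
    events.append(when)
    # Drop events that have aged out of the window.
    while events and when - events[0] > WINDOW:
        events.popleft()
    return len(events) >= THRESHOLD
```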

As another example, consider a company with a business justification for its users accessing IP addresses in China. A security team might be inclined to add a rule that displays alerts for any foreign traffic. But the better approach in this specific case might be to modify the system to check such IP addresses against a threat intelligence source and to trigger an alert only when there’s a match. Again, this approach would limit alerts to only those that are worth investigating.
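That rule is equally simple to express in code. The sketch below assumes a plain-text threat feed with one IP per line – a stand-in for whatever threat intelligence source a team actually subscribes to.

```python
# Sketch of the second rule: rather than alerting on all foreign traffic,
# alert only when a destination IP appears in a threat intelligence feed.
# The feed file and its one-IP-per-line format are assumptions for the example.
def load_threat_feed(path: str) -> set[str]:
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

def should_alert(dst_ip: str, bad_ips: set[str]) -> bool:
    return dst_ip in bad_ips

bad_ips = load_threat_feed("threat_feed.txt")
if should_alert("203.0.113.42", bad_ips):    # 203.0.113.0/24 is a test range
    print("Destination matches threat intelligence - investigate")
```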

New network, endpoint, and SIEM technologies incorporate machine learning and behavioral analysis as a means of automating the addition of context to alerting. Proactively, artificial intelligence can scrape up metadata and may even be able to suggest rules for alerts that analysts may be missing, but analysts still need to vet those rules to ensure the alerts are relevant and actionable. The strongest approach uses both artificial and human intelligence, in concert, to generate intelligent alerts so that analysts only spend time investigating and mitigating critical events and not all the noise.

Blue Team Efficiencies
Intelligent alerting also greatly impacts the structure of a blue team. It allows such teams to operate more effectively and efficiently, enabling lean security organizations to make the most of limited personnel resources — a huge benefit, considering the growing talent shortage. Organizations save on costs and are able to deploy their more experienced analysts to perform other important tasks, like threat hunting and response. And, freed up from chasing down so many false positives, analysts have more time to educate themselves on emerging cybercrime trends, as well as to sharpen existing skills and learn new ones — all measures that ultimately improve an organization’s security posture.

Organizations must recognize that alert fatigue has serious security implications and take the steps necessary to empower analysts to make informed decisions and take quick action through intelligent alerting. Security teams will save time and energy by responding to fewer false positives and focusing more on investigating the alerts that matter.


Article source: https://www.darkreading.com/threat-intelligence/fighting-alert-fatigue-with-actionable-intelligence/a/d-id/1334007?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Photos disables sharing on Android TV

Imagine you’re setting up your Android TV to display pictures of your cat, or your kids, or your main squeeze, in Backdrop/Ambient Mode.

But instead of photos of your trip to Belize, you see a parade of strangers: as in, Google accounts belonging to people you don’t know, including their profile pictures, all showing up as linked accounts.

That’s what happened to Twitter user Prashanth, who on Saturday posted a 44-second long clip of the accounts that streamed by when he was trying to access his Vu Android TV through the @Google Home app on his phone:

Fortunately, the strangers’ photos stayed tucked away, given that access to the photos themselves was blocked. In fact, Google Photos functionality didn’t seem to be working.

Prashanth told Android Police that he first spotted the bug on his home TV, a 55-inch Vu LED TV (model number: 55SU134) with built-in Android TV functionality, while setting up Backdrop/Ambient Mode through his Pixel 2XL phone.

He said that he double-checked the glitch by signing onto the TV with his wife’s Google account. It again showed the list, except this time Prashanth also spotted his own name and profile picture.

He couldn’t replicate the bug on his other Android TV, a Xiaomi Mi Box 3 running Android 8.0, Oreo. His Vu TV was running an older operating system: it was on Android 7.0 and hadn’t received any security updates since December 2017, though Prashanth had manually checked for them. According to Android Police, the website where he bought the TV says that its current operating system is Oreo, which suggests that the over-the-air update never arrived.

The old kick-the-TV trick

Google initially responded to Prashanth by suggesting he contact the TV manufacturer:

… A suggestion that drew the “nope, it’s not the TV” response of another user, who confirmed the same bug, this time on an Android TV-equipped set by TCL-subsidiary iFFalcon (model number 32F2A).

No more pics on Android TVs until this bug is dissected

Google thanked Aarjith Nandakumar for the additional details and, this time, said that it’s looking into the possible privacy breach. In the meantime, it’s disabled the ability to remotely cast via the Google Assistant or to view photos from Google Photos on Android TVs:

Vu Technologies sent this statement to Android Police:

We were recently notified that there was a malfunction of Google Home App in some of the Android TVs. After verifying the incident we have informed our customers that it was not an issue of Vu Television but it was software malfunction of the Google Home App. We take your privacy very seriously. Vu has a long-standing commitment to protecting the privacy of the personal information that our customers entrusts to us.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0x95DNmBFN4/

Leaky ski helmet speakers expose conversations and data

On the face of it, Outdoor Tech’s Chips 2.0 speakers seem like the perfect accessory for any on-trend snow sports enthusiast.

The $130 Bluetooth helmet speakers attach to your audio-equipped ski helmet, giving you 10 hours of wireless audio with the ability to talk to your friends. There’s just one problem, said a security researcher this week: Everyone else can listen in too, and do a lot more besides.

Alan Monie, a researcher at cybersecurity consulting company Pen Test Partners, discovered the flaws after poking around in the walkie-talkie app that comes with the Bluetooth headphones.

Rather than connecting directly with other users on the slopes via Bluetooth, the app connects your Chips 2.0 speakers to the internet via your smartphone, meaning that all communications pass through Outdoor Tech’s servers.

The app allows you to form groups of other skiers or snowboarders, all of whom can then talk to each other via the app. Monie tried it out by creating a group and typing in his own name. That’s when the problems started, he says:

I began setting up a group and noticed that I could see all users. I started searching for my own name and found that I could retrieve every user with the same name in their account.

He dug a little deeper, typing ‘A’ into Outdoor Tech’s application programming interface (API), which is the software interface that the app uses to communicate with the back-end server. It showed 19,000 users.

Names were not the only piece of personally identifiable information that the app revealed. The API returned all the other users’ email addresses too, and he was also able to retrieve their phone numbers. He could extract their real-time GPS position, and listen to real-time walkie-talkie chats. He could also retrieve any user’s password hash along with their reset code in plain text.

Monie suggested that returning lists of users based on the entry of an initial letter is intended functionality, adding:

Obviously, I only pulled data that was mine or my friends with their permission. Anyone with less ethical intentions could do much worse. I also wonder how many users had re-used passwords from elsewhere?

The culprit here is an Insecure Direct Object Reference (IDOR). This exposes an object, such as a file, directory, or database key, without verifying that the requester is authorised to access it. That makes it possible for an attacker to manipulate the object reference, which could be as simple as a number attached to the end of a URL query string.

IDOR showed up on the Open Web Application Security Project (OWASP) top 10 vulnerability list as far back as 2007. In the most recent version, 2017, the organization merged it along with ‘missing function level access control’ to create ‘broken access control’. In other words, it is still alive and well, and people keep falling afoul of it, as Outdoor Tech has shown us.
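To make the broken pattern concrete, here is a hypothetical Flask sketch – not Outdoor Tech's actual code – showing the classic IDOR shape and its fix: the handler must check that the requested record belongs to the authenticated caller before returning it.

```python
# Hypothetical sketch of an IDOR-prone endpoint and its fix. A vulnerable
# version would return USERS[user_id] to any caller; the fix below checks
# ownership before serving the record.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)
USERS = {1: {"email": "alice@example.com"}, 2: {"email": "bob@example.com"}}

@app.route("/api/v1/users/<int:user_id>")
def get_user(user_id: int):
    record = USERS.get(user_id)
    if record is None:
        abort(404)
    # The IDOR fix: refuse to serve records the caller doesn't own.
    # g.current_user_id is assumed to be set by an authentication layer.
    if user_id != g.current_user_id:
        abort(403)
    return jsonify(record)
```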

Pen Test Partners contacted the manufacturer on 6 February 2019 to explain what it had found, and got a mail back from its marketing manager on 11 February. It sent more emails on 13 and 20 February, but Outdoor Tech refused to acknowledge the vulnerability or propose any fixes, Monie explained.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PRkeKjIOlmU/

Google reveals BuggyCow macOS security flaw

Google’s Project Zero researchers have revealed a “high severity” macOS security flaw nicknamed ‘BuggyCow’ that Apple appears to be in no rush to patch.

The vulnerability is in the way macOS implements a memory optimisation and protection routine used by all OS file systems called copy-on-write (COW).

The principle behind COW is that it provides a way for different processes to efficiently and securely share the same data object in memory until one of them needs to modify it – at that point, the writing process must make its own private copy of the data rather than altering the shared original.

Writes Google’s Jann Horn:

It is important that the copied memory is protected against later modifications by the source process; otherwise, the source process might be able to exploit double-reads in the destination process.
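The COW contract itself is easy to demonstrate from user space. The Python sketch below – unrelated to the BuggyCow exploit, just an illustration of the semantics – maps a file copy-on-write, modifies the mapping, and shows that the file on disk is untouched.

```python
# Demonstrating copy-on-write semantics with Python's mmap module:
# ACCESS_COPY gives this process a private, writable view of the file;
# writes change the in-memory copy but are never flushed back to disk.
import mmap

with open("demo.bin", "wb") as fh:
    fh.write(b"original data")

with open("demo.bin", "rb") as fh:
    mem = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_COPY)
    mem[:8] = b"MODIFIED"      # modifies only our private copy
    print(mem[:13])            # b'MODIFIED data'
    mem.close()

with open("demo.bin", "rb") as fh:
    print(fh.read())           # still b'original data'
```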

Using BuggyCow, malware already running on a Mac might be able to tamper with the copy of the data written to the disk in a way that is invisible to the file system:

This means that if an attacker can mutate an on-disk file without informing the virtual management subsystem, this is a security bug.

If the data related to a privileged process, that might be a route to a privilege escalation capable of interfering with sensitive data.

The specific mechanism used in the researchers’ proof-of-exploit involves unmounting and remounting the file system, which apparently generates no warning via the memory management layer.

The obvious objection is that a Mac that has malware on it capable of launching this kind of attack is already in deep trouble even without this somewhat involved technique being in the public domain.

But perhaps that’s to miss the most intriguing aspect of this story – the way Apple has reacted (or not) to Google telling it about the problem.

Deadline missed

Project Zero told Apple about the vulnerability on 30 November 2018, which means that Project Zero’s 90-day deadline for the company to address the issue expired on 28 February.

Doubtless, Apple has something in the works but either has other things to fix first or doesn’t want to be rushed despite the Google team rating its severity as “high”. Writes Horn, rather hopefully:

We’ve been in contact with Apple regarding this issue, and at this point no fix is available. Apple is intending to resolve this issue in a future release, and we’re working together to assess the options for a patch. We’ll update this issue tracker entry once we have more details.

Apple has yet to comment on the flaw but if you’re a macOS user, there’s no need to panic. It’s on the to-do list.

It’s not the first time COW has been in the news. In 2016, a flaw in the Linux kernel dubbed DirtyCOW (CVE-2016-5195) emerged that could allow root access – another version of the same privilege escalation weakness.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/K2WQZbTDRLo/

How to keep your flock of users secure: Let them know exactly who and where the wolves are

RSA When it comes to getting your users up to speed with cyber-security, the best approach is to give it to them straight. Practicalities over jargon. Specific examples of threats are far more persuasive than simply insisting people enable a firewall and malware scanner, check regularly for updates, and avoid clicking on any suspicious attachments and links.

And rather than baffle folks with dire warnings of remote-code execution bugs, privilege escalations, and authentication bypasses, instead tell them clearly and calmly what’s at stake, who’s going to attack them, how it could happen, and what would happen next. Fraudsters emptying online bank accounts or stealing personal information for identity theft. Criminals copying company secrets via booby-trapped email attachments. That sort of thing. It’s much more likely to motivate people into taking computer security seriously.

So argues Dr Emilee Rader, an associate professor at the department of media and information at Michigan State University in the US, who has extensively studied the popular myths prevalent among ordinary folks with regards to online security and privacy.

Now, it sounds like obvious advice, but consider whether the last security advice you gave out, or overheard, whether at home or at work, was delivered in useful, understandable plain language, or was talk of generic threats laced with industry jargon.

On Tuesday, Dr Rader told this year’s RSA Conference in San Francisco there is a disconnect between the advice experts give to users, information given by the media, and the things people themselves look for when they research online security.

“Regular users are more concerned about who may be trying to hurt them,” Dr Rader said. “Meanwhile, experts are describing mechanisms for protection, but they are missing an opportunity to connect with the end users,” presumably by skipping over specific examples.

Dr Rader pointed to antivirus packages as one example. She and her team found that users who felt they personally could be targeted by an attacker were more likely to use anti-malware tools than those who thought infections were random chance. Folks who believed malware had immediate and visible effects were more likely to use antivirus compared to those who didn’t understand or know how an infection might play out.

“People whose folk theories [of the internet] that involve risk or visible harm were more likely to protect themselves,” Dr Rader said. “But people who thought they could get a virus from browsing the web, and there was nothing they could do about it, were less likely to say they protect themselves.”

Finally, said Dr Rader, developers and administrators should be more transparent with their users when it comes to data security and privacy. Letting netizens know how their data can be collected, and by whom, will make them more engaged and more likely to take the appropriate steps to secure themselves.

“Hiding security and privacy from users is not the best choice if you want them to learn,” Dr Rader concluded. “Design feedback systems that allow users to learn from experiences.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/06/secure_your_users/

Ep. 022 – Plaintext passwords, cryptocoin criminality and the Momo monstrosity [PODCAST]

The Naked Security podcast explains why storing plaintext passwords is an unnecessary evil, investigates a cryptocurrency spat between a software maker and a disgruntled user, and tells you some earnest but sometimes unpopular truths about how to keep your children safe online.

With Anna Brading, Paul Ducklin, Mark Stockley and Matt Boddy.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uV0xYRPk_Ho/

You. Shall. Not. Pass… word: Soon, you may be logging into websites using just your phone, face, fingerprint or token

RSA At 2004’s RSA Conference, then Microsoft chairman Bill Gates predicted the death of the password because passwords have problems and people are bad at managing them. And fifteen years on, as RSA USA 2019 gets underway in San Francisco this week, we still have passwords.

But the possibility that internet users may be able to log into websites without typing a password or prompting a password management app to fill in the blanks has become a bit more plausible, with the standardization of the Web Authentication specification.

Known as WebAuthn for those who find six syllables a bit taxing to say aloud, the newly blessed specification is already supported in Android, Apple Safari (preview), Google Chrome, Microsoft Edge, Mozilla Firefox, and Windows 10.

The spec will allow people to authenticate themselves and log into internet accounts using a preferred device, through cryptographic keys derived from biometrics, mobile hardware, and/or FIDO2-compliant security keys.

“Now is the time for web services and businesses to adopt WebAuthn to move beyond vulnerable passwords and help web users improve the security of their online experiences,” said Jeff Jaffe, CEO of web standards group W3C, in a statement on Monday.

WebAuthn doesn’t really get rid of passwords. Rather, it eliminates the need to store even hashed user passwords on servers – and with it the risk of phishing, password theft, and replay attacks – shifting the focus from typed credentials to hardware-based cryptographic login credentials and some form of authentication gesture or code.

Looking ahead, you’ll get to worry about losing your physical hardware key rather than losing the secrecy protecting your passwords through a poorly secured server.
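Under the hood, a relying party checks that the authenticator signed the server's fresh challenge with the private key enrolled for that account. The Python sketch below is a much-simplified illustration of that core check, assuming an ECDSA P-256 credential and the cryptography package; a real deployment should use a maintained FIDO2/WebAuthn library, which also validates the origin, RP ID hash, flags, and signature counter – all omitted here.

```python
# Much-simplified sketch of the core of a WebAuthn assertion check.
# Assumes an ECDSA P-256 credential; a production relying party must also
# verify the challenge, origin, RP ID hash, flags and signature counter.
import hashlib
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def new_challenge() -> bytes:
    """Random challenge the server stores for each login attempt."""
    return secrets.token_bytes(32)

def verify_assertion(public_key: ec.EllipticCurvePublicKey,
                     authenticator_data: bytes,
                     client_data_json: bytes,
                     signature: bytes) -> bool:
    # Per the spec, the authenticator signs authenticatorData
    # concatenated with SHA-256(clientDataJSON).
    signed = authenticator_data + hashlib.sha256(client_data_json).digest()
    try:
        public_key.verify(signature, signed, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```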

The technology should allow websites to support low-friction authentication from visitors who have FIDO2 credentials associated with their desktop or mobile device.

In such a scenario, a user with a laptop or desktop computer and a Bluetooth-paired mobile phone might navigate a website’s sign-in page and receive a prompt to authenticate via phone. The user would then take some authorization action like pressing the phone’s fingerprint reader, if available, or entering a PIN to be logged in on the applicable computer.

In another scenario, a user with a laptop or desktop computer may rely on a dedicated FIDO2 fob in lieu of a phone-based authenticator. But the authentication process will probably still require pressing a button on the fob or entering a PIN. That’s because automatic authentication could go wrong – you wouldn’t want a USB stick to provide access to your bank account without some challenge.

At Dropbox, which implemented WebAuthn last year, the technology provides two-step verification rather than one-step access. The company said it kept passwords as part of the authentication process because there are a variety of security and usability factors that make it premature to get rid of them entirely.

Microsoft meanwhile has done its best to fulfill its co-founder’s password death wish, adding support for FIDO2 hardware authentication in its Windows 10 October 2018 update last year. The company now allows those using Windows 10 with Microsoft Edge to log in to their Microsoft account without entering a password. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/05/web_authentication/