
WatchGuard Buys DNS-Filtering Company Percipient Networks

Percipient’s ‘Strongarm’ to become part of WatchGuard’s SMB security services.

WatchGuard Technologies today announced that it has purchased SMB security vendor Percipient Networks, best known for its Domain Name System (DNS)-filtering service.

The acquisition expands WatchGuard’s cloud-based security offerings for small- to midsized businesses, the company said. Financial details of the deal were not disclosed.

“Based on years of research and development, the Percipient Networks team has developed a simple, enterprise-grade solution. We are excited to add the Strongarm solution to our platform and to welcome the teams behind developing and launching it to WatchGuard’s ecosystem of rapidly growing partners, customers, and employees,” Prakash Panjwani, CEO of WatchGuard, said in a statement.

Read more about the acquisition here

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/watchguard-buys-dns-filtering-company-percipient-networks/d/d-id/1330844?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Former Santander bank manager pleads guilty to computer misuse crimes

A former Santander bank manager has pleaded guilty to £15,000 worth of computer misuse crimes after her boyfriend talked her into giving him illicitly obtained customer information.

This morning at the City of London Magistrates’ Court in England, Abiola Ajibade, 24, of Martock Court, Consort Road, Southwark, pleaded guilty to “causing a computer to perform a function to secure unauthorised access to a program or data” contrary to section 1 of the Computer Misuse Act 1990.

Her crimes took place over the course of a year, starting in August 2015 when she was aged 22. The court heard that the total value of the fraudulent transactions enabled by Ajibade was £15,000.

Crown prosecutor Janaka Siriwardena, of Temple Gate Chambers, told the court: “Fraud investigators at Santander reported to police that Miss Ajibade may have had some kind of suspicious activity… Her unique staff ID was linked to a disproportionate number of Santander customer accounts which suffered fraud.”

Following a report to the City of London Police’s fraud investigation branch, police arrested Ajibade, seizing her phone and MacBook. She provided them with the passwords for both, allowing police to see that she had sent messages to a man named in court as Melwin Williams, who, at the time, was her boyfriend.

“It was found that certain information, that was pertinent to those customers who had fraud committed against them, was sent to this Mr Williams,” Siriwardena said, adding that Williams was also investigated by police but was “NFA’d” – his case was formally marked No Further Action.

District Judge Nina Tempia said: “Her boyfriend asked her to access information on a Santander computer. She doesn’t ask why and she gives him the information on four separate occasions through the year.”

Although the charges were originally brought under section 2 of the Computer Misuse Act 1990, Ajibade pleaded guilty under section 1. The section 1 offence, it was explained, does not require mens rea – criminal intent – to be proven.

Ajibade’s defence counsel told the court: “There’s elements of being taken advantage of… [Ajibade] is now studying HR management at the Greenwich School of Management. She’s been doing that part-time since 2015 but has since gone full-time… She’s extremely remorseful, no intention at all, should have thought it through. I would ask that you consider her age and relative level of immaturity, these do go back some time and she was much younger then.”

The court also heard that Ajibade and Williams have since ended their relationship, and she was also sacked from Santander.

Tempia, who will be known to Register readers as the judge who initially ruled that accused hacker Lauri Love should be extradited to America to stand trial, ordered the hearing adjourned for probation reports on Ajibade to be prepared before sentencing.

The judge granted unconditional bail until the afternoon, warning her: “All options are on the table including committal” [to Crown court for a harsher sentence to be imposed].

Black-suited Ajibade pressed her hands together and bowed her head as the suggestion of a prison sentence was made. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/santander_manager_guilty_computer_misuse/

Researchers Offer a ‘VirusTotal for ICS’

Free online sandbox, honeypot tool simulates a real-world industrial network environment.

S4x18 CONFERENCE – Miami – A team of researchers plans to release an open source online tool for capturing and vetting industrial control system (ICS) malware samples that operates as a sandbox with honeypot features.

David Atch, vice president of research for CyberX, here today outlined details of the free, Web-based sandbox tool he and his team initially developed for research purposes. “It’s like a VirusTotal for ICS,” he explains in an interview.

VirusTotal is the wildly popular online tool that uses multiple antivirus and scan engines to analyze suspicious files and URLs for malware.

The goal was to create a sandbox that simulates real-world industrial networks. The sandbox tool allows ICS malware to execute and unpack, and then detects telltale malicious activities such as OPC (Open Platform Communications) scanning or overwriting programmable logic controller (PLC) configuration files, and provides quick offline detection, according to CyberX, which plans to roll out the tool in the next couple of months.
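The detection step described here can be pictured as a rule-based triage pass over the events a sandbox run emits. The sketch below is illustrative only – the event names and rule table are invented for this example, not CyberX's actual telemetry schema:

```python
# Illustrative only: the event names and rules below are invented for this
# sketch, not CyberX's actual telemetry schema.

SUSPICIOUS_RULES = {
    "opc_enumerate": "OPC scanning: sample enumerated OPC server tags",
    "plc_config_write": "overwrote a PLC configuration file",
    "modbus_func_scan": "swept Modbus function codes across unit IDs",
}

def triage(events):
    """Return rule hits for one sandbox run.

    `events` is an iterable of (event_name, detail) tuples logged by the
    instrumented sandbox while the sample executed and unpacked.
    """
    return [(SUSPICIOUS_RULES[name], detail)
            for name, detail in events
            if name in SUSPICIOUS_RULES]

run = [
    ("file_read", r"C:\Windows\system32\kernel32.dll"),
    ("opc_enumerate", "opcda://10.0.0.5/Matrikon.OPC.Simulation"),
    ("plc_config_write", "block OB1 on PLC at 10.0.0.9"),
]
verdicts = triage(run)  # two hits: the OPC scan and the PLC config write
```

The honeypot element would feed the same event stream from simulated PLC traffic, letting the malware reveal itself against decoy devices.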

Atch says existing network sandbox technology for non-ICS, or IT environments, often misses ICS-specific malware because it doesn’t account for OT protocols and devices, for example, and doesn’t simulate OT components. “There are not enough tools for the ICS community,” Atch says. And VirusTotal isn’t ideal for ICS-specific malware, either, he says.

Take Stuxnet. The first Stuxnet variant was sent to VirusTotal in 2007, notes Ralph Langner, founder and CEO of Langner Communications, but Stuxnet wasn’t detected until 2012, he says. “I strongly support the idea” of a VirusTotal for ICS malware, he says.

Langner, a top Stuxnet expert, says ICS malware analysis is time-consuming. “It took me three years to analyze Stuxnet,” he says.

The ICS malware sandbox tool is aimed at more efficiently spotting ICS-specific malware, and can simulate the types of traffic to and from a PLC, for example, as its honeypot function. That allows the malware to execute in a safe space while unpacking and uncovering its functions and matching them with other known variants. The tool includes OT software, virtualized ICS processes and files, and a low-interaction ICS network (the honeypot element).

The concept of an ICS sandbox isn’t new: researchers at Trend Micro in 2013 stood up two honeypot-based architectures that posed as a typical ICS/SCADA environment at a water utility, including one that included a Web-based application for a water pressure station. There were 39 attacks from 14 different nations over a 28-day period. Most attacks on ICS/SCADA systems appeared to come from China (35%), followed by the US (19%) and Laos (12%).

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/threat-intelligence/researchers-offer-a-virustotal-for-ics/d/d-id/1330833?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How AI Would Have Caught the Forever 21 Breach

Companies must realize that the days of the desktop/server model are over and focus on “nontraditional” devices.

After discovering that multiple point-of-sale (POS) devices were breached nationwide, retailer Forever 21 joined the list of big-name corporations that suffered a cyberattack in 2017. And because the investigation is still ongoing, it is likely that we won’t know the full impact of the incident — including how many people are affected — for months.

However, as the initial details of the breach emerge, the headlines tell a familiar story. Many of the breaches of the past few years share a common theme: abnormal activity occurred on the network, went unnoticed by the organization, and bypassed all of its security tools. How can we proactively identify and tackle these threats as we move into 2018?

As a first step, we must recognize that the days of the desktop/server model are over. In the case of Forever 21, the POS devices served as ground zero — not a laptop, a server, or even a corporate printer. In the age of the Internet of Things, we increasingly rely on “nontraditional” devices to optimize efficiency and boost productivity. But what constitutes a nontraditional device, and how do we look for it? Is it a device without a monitor? A device without a keyboard?

Today a nontraditional device could be anything from heating and cooling systems to Internet-connected coffee machines to a rogue Raspberry Pi hidden underneath the floorboards. Protecting registered corporate devices is not enough — criminals will look for the weakest link. As our businesses grow in digital complexity, we have to monitor the entire infrastructure, including the physical network, virtual and cloud environments, and nontraditional IT, to ensure we can spot irregularities as they emerge.

A subtle irregularity in device behavior is almost always the first sign of an emerging cyber attack — but these early indicators are consistently missed by tools that are rigidly programmed to spot known vulnerabilities and malicious behaviors.

With Forever 21, the encryption technology on the POS devices had failed, but only on some devices. Artificial intelligence (AI) would spot this type of anomaly, even if it had never seen it before, because it learns what normal behavior is over time, using this understanding to recognize suspicious shifts in activity when they arise. In contrast, tools that scan known devices, looking for known viruses or published indicators of compromise, would have missed it.
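The "learn normal, then flag deviations" loop described here can be sketched as a per-device statistical baseline. A real product would use far richer models; the traffic figures below are invented for illustration:

```python
import statistics

def learn_baseline(history):
    """Learn a crude notion of 'normal' for one device metric."""
    return statistics.fmean(history), statistics.pstdev(history)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Invented example: bytes per hour sent by one POS terminal.
history = [1200, 1150, 1300, 1250, 1180, 1220, 1275, 1190]
mean, stdev = learn_baseline(history)

flag_normal = is_anomalous(1260, mean, stdev)   # ordinary traffic
flag_spike = is_anomalous(60000, mean, stdev)   # exfiltration-sized burst
```

The point of the anomaly-based approach is that the spike is caught without any signature for the specific malware involved.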

No matter how large our team is, as security professionals we all face the challenge of finding the evasive needle in an ever-expanding haystack. AI’s promise is to make subtle connections and correlations behind the scenes, constantly building up an understanding of our digital environments – an understanding that sharpens over time.

Furthermore, an AI system today can be up and running in minutes, meaning that it can very quickly deliver results. This doesn’t just mean catching new anomalous activity but also understanding if a threatening presence is already in operation in your network. How is a cluster of POS devices behaving in comparison with what the AI has learned to be normal for similar devices?

Shifting our teams away from alert-chasing and perimeter protection and toward a workflow focusing on the anomalies found by AI might help us bring a gun to the knife fight. Had Forever 21 been equipped with such technology, it would have had a very good chance of both identifying and remediating the situation before any of its data was compromised.

Indeed, the gap between the breach happening and its disclosure points to a woeful inadequacy in our ability to see and detect emerging problems. Transferring the analytic burden to machines will give human security teams the time to improve their skills and add new ones – focusing on investigating and remediating genuine threats, while also having time to dedicate to strategic initiatives. As things stand, security teams are often caught in a vicious circle: high-level changes need to be made to prevent low-level problems, but teams are so busy fighting fires that they don’t have the time to make the changes necessary to break this cycle. AI would give both large and small security teams the ability to break out of it.

Protecting against the threats we know of in advance is no longer sufficient. AI offers the best chance to catch breaches like the one that affected Forever 21, because it looks at all activity, irrespective of whether it pertains to a cash register or a data server, and isn’t biased to find threats that it knows already. AI is forever learning — something Forever 21 should bear in mind as it revises its security strategy. 

Justin Fier is the Director for Cyber Intelligence Analytics at Darktrace, based in Washington, DC. With over 10 years of experience in cyber defense, Fier has supported various elements in the US intelligence community, holding mission-critical security roles with Lockheed …

Article source: https://www.darkreading.com/threat-intelligence/how-ai-would-have-caught-the-forever-21-breach/a/d-id/1330803?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

One Identity Acquires Balabit

Union expands One Identity’s privileged access management and analytics offerings.

Identity and access management firm One Identity, a Quest Software business, today announced the acquisition of Balabit Corporation, a provider of privileged access management (PAM), privileged account analytics, privileged session management, and log management technology.

Terms of the deal were not disclosed.  

The two companies had already been OEM partners, but the acquisition – One Identity’s first since becoming an independent company in December 2016 – will allow One Identity to further expand its set of PAM tools. 

“The addition of privileged account analytics is a perfect complement to the identity analytics capabilities in our recently released One Identity Starling IARI solution,” said Jackson Shaw, senior director of product management at One Identity, in a statement. “The pairing of our technologies will enable our customers to know what entitlements their employees and privileged users have, and what they are doing with those entitlements.”

See here for more. 

 


Article source: https://www.darkreading.com/endpoint/one-identity-acquires-balabit-/d/d-id/1330836?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Man charged with selling billions of breached records on LeakedSource

A year ago, LeakedSource – a site that sold access to credentials stolen in data breaches – suddenly blinked out of sight, reportedly after the FBI raided it and seized its servers.

On Monday, the Royal Canadian Mounted Police (RCMP) announced that a man who was allegedly the site’s sole operator appeared in a Toronto court that day.

27-year-old Jordan Evan Bloom, of Thornhill, Ontario, was arrested on 22 December 2017 and charged on Monday with selling people’s data for a “small fee,” according to the RCMP. Those small fees must have added up: Bloom allegedly raked in approximately $247,000 from administering the site, which allegedly trafficked approximately three billion stolen personal identity records.

LeakedSource sold subscriptions to any and all comers. That allowed breach-as-a-service customers to browse through troves of data breach files. Buyers could also easily search for a victim’s name, username and email address so as to access other information, including their cleartext passwords.

The investigation into LeakedSource – an investigation Canadian authorities dubbed Project “Adoration” – began in 2016. That’s when the RCMP learned that LeakedSource was being hosted on Quebec servers. The Dutch National Police and the FBI helped out with the investigation.

LeakedSource was initially set up in 2015 and shut down in early 2017 – a lifespan during which it collected and sold those three billion personal identity records and their associated passwords from a string of major breaches. According to the International Business Times, the breaches included those at LinkedIn, MySpace, Dropbox and AdultFriendFinder.

Bloom is facing charges of trafficking in ID information, unauthorized computer use, mischief to data, and possession of property obtained by crime.

Reuters talked to Toronto cybersecurity lawyer Imran Ahmad, who said that the charges against Bloom carry maximum sentences of between five and 10 years in prison.

Although the RCMP described Bloom as the sole proprietor, Ahmad told Reuters in an email that the sole proprietor notion isn’t a likely scenario. Rather, he said, Bloom was likely working with others to run the site, and the money police say he collected was probably only a slice of the overall profits. From his email:

Cybercriminals typically have an underground network of collaborators and given the size of the database and scope of the endeavor, I suspect others were likely involved.

LeakedSource is one of two breach-as-a-service criminal outfits that disappeared last year. Last month, LeakBase.pw also went dark. It had begun to redirect to Troy Hunt’s Have I Been Pwned? site after sending out this toodle-oo tweet:

This project has been discontinued, thank you for your support over the past year and a half.
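Troy Hunt's service, for contrast, shows how breach data can be queried legitimately: its Pwned Passwords range API uses k-anonymity, so a client sends only the first five hex characters of the password's SHA-1 hash and matches the returned suffixes locally. A sketch of the client side – the response body here is fabricated in place of a live HTTP call, and the count is made up:

```python
import hashlib

def hibp_parts(password):
    """Split a password's uppercase SHA-1 hex digest into the 5-char
    prefix sent to the range API and the 35-char suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_count(suffix, response_body):
    """Scan a range-query response (lines of 'SUFFIX:COUNT') for our suffix."""
    for line in response_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_parts("password")
# A real client would now GET https://api.pwnedpasswords.com/range/<prefix>.
# This response body is fabricated for the example.
fake_response = "0018A45C4D1DEF81644B54AB7F969B88D65:4\n" + suffix + ":123456"
seen = suffix_count(suffix, fake_response)
```

The server never learns the full hash, let alone the password – the opposite of the LeakedSource model of selling cleartext credentials outright.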

Is this it? Is breach-as-a-service done? Can we stick a fork in it? If it is, would that be a good thing or a bad thing?

As Naked Security’s John E. Dunn said back when LeakBase went bye-bye, these sites were, for better or worse, uncovering breaches. Unfortunately, their business model was to sell access to breached data to crooks for them to exploit. How sound is a business model that’s based on the crime of handling breached data? How likely is it that such a business will escape police attention for long?

Not too likely, I’d say, given the track record of these two short-lived outfits.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PiBcY8rbO20/

Twitter denies claims that it snoops on your private messages

Twitter has pushed back after the release of undercover videos in which Twitter employees – primarily senior network security engineer Clay Haynes – are depicted as saying that they “view everything” users post on their servers, including private messages and sexual photos, and that employees are more than happy to participate in a Department of Justice investigation into Donald Trump.

The videos were posted by Project Veritas, an independent media outlet known for doctored clips it promotes as exposés on mostly liberal organizations.

The videos look like they were recorded via hidden camera while Haynes shared drinks with members of Project Veritas. The outlet claims to have met with him multiple times.

In one video, Haynes said Twitter is…

More than happy to help the Department of Justice in their little investigation [by providing them with] every single tweet that [Trump] has posted, even the ones he’s deleted. Any direct messages, any mentions.

In another meeting, Haynes says that Twitter has the ability to disclose…

Every single message, every single tweet, whatever you log into, what profile pictures you upload.

That second meeting was attended by Project Veritas founder and Donald Trump ally James O’Keefe, disguised in a wig and glasses. According to the New York Times, Trump has been supporting O’Keefe’s work for years, having donated $10,000 from his foundation to O’Keefe’s group.

During the meeting – a video of which O’Keefe posted here on Twitter – O’Keefe suggests that Haynes peek into direct messages in the accounts of both Donald Trump Senior and Junior. Haynes responds by emphasizing that such access is only permissible as part of the “subpoena process.”

It’s within the context of the subpoena process that Haynes says that Twitter can look at “every single message, every single tweet, whatever you log into, what profile pictures you upload.”

Last Wednesday, in a statement to media outlets, Twitter pushed back hard against the notion that its employees monitor private user data – including direct messages – outside of when instructed to do so under subpoena or other valid legal requests:

We do not proactively review DMs. Period.

Twitter said that a “limited number of employees” have access to such information, for “legitimate work purposes,” and that it enforces “strict access protocols for those employees.”

Twitter only responds to valid legal requests and does not share any user information with law enforcement without such a request…

There’s nothing new, shocking or revelatory about any of this. Twitter’s privacy policies and terms of service clearly outline how it holds and stores the information that users choose to share, including direct messages. It’s had access to the content of DMs for years. That is, after all, how it’s been able to reach in to messages and shorten URLs.

From its privacy policy:

When you privately communicate with others through our Services, such as by sending and receiving Direct Messages, we will store and process your communications, and information related to them.

And from its terms and conditions:

We also reserve the right to access, read, preserve, and disclose any information as we reasonably believe is necessary to (i) satisfy any applicable law, regulation, legal process or governmental request.

As far as Haynes goes, he’s not having much fun at all. The only thing he’s apparently said publicly has been this tweet, which has been mercilessly trolled by Trump supporters:

As far as public statements about Haynes and the other employees featured in the Project Veritas videos go, Twitter’s holding them at arm’s length. In its statement, Twitter said those employees shown in the video “were speaking in a personal capacity and do not represent or speak for Twitter.”

Twitter added that it finds Project Veritas’ methods despicable:

We deplore the deceptive and underhanded tactics by which this footage was obtained and selectively edited to fit a pre-determined narrative. Twitter is committed to enforcing our rules without bias and empowering every voice on our platform, in accordance with the Twitter Rules.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pBGKpqLuN7Y/

Firefox locks down its future with HTTPS ‘secure contexts’

Mozilla’s embrace of HTTPS, the secure form of HTTP, has ratcheted up a notch with the news that Firefox developers must start using a web security design called ‘secure contexts’ “effective immediately.”

This isn’t a surprise – Mozilla made the security-sensitive geolocation API require a secure context last March – but the signal is still significant.

Announced Mozilla:

All the building blocks are now in place to quicken the adoption of HTTPS and secure contexts, and follow through on our intent to deprecate non-secure HTTP.

Everyone involved in standards development is strongly encouraged to advocate requiring secure contexts for all new features on behalf of Mozilla.

The odd thing is that while secure contexts (also called ‘secure origins’) matter a lot to end user security, almost nobody beyond web devs has ever heard of the mechanism or pondered why it might be a big deal.

This could be about to change thanks to the publicity generated by the much better-known campaign by Google and others to migrate websites from insecure HTTP connections to encrypted HTTPS.

The principle of secure contexts is an incredibly simple one – that certain powerful web capabilities and APIs (whose risks users are often barely aware of) should be forced to work over HTTPS.

These mostly hidden functions currently include:

  • Geolocation
  • Bluetooth
  • HTTP/2
  • Web notifications API
  • Webcam and microphone access
  • Google’s Brotli web compression algorithm
  • Google’s Accelerated Mobile Pages (AMP)
  • Encrypted Media Extensions (EME)
  • The Payment Request API
  • Service Workers used for background sync and notification

(Another three – the AppCache API, Device motion/orientation, and Fullscreen – will follow in time.)

These could all work over HTTP, of course, but that would represent a security risk that attackers could exploit to steal credentials, track users, and intercept data using man-in-the-middle ruses.

Wouldn’t it be simpler to make all sites use HTTPS and be done with it?

Although HTTPS secures the browser’s connection to a website, a non-HTTPS function could still be opened in a separate window without that insecurity being obvious to the user.
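Under the W3C draft, a context counts as secure when its origin is "potentially trustworthy". A simplified sketch of that check – the real algorithm has more cases, such as the full 127.0.0.0/8 range and browser-configured allowlists:

```python
from urllib.parse import urlsplit

# Schemes treated as inherently secure by the spec.
SECURE_SCHEMES = {"https", "wss", "file"}

def is_potentially_trustworthy(url):
    """Simplified take on the W3C 'potentially trustworthy origin' check:
    secure schemes always pass; plain http/ws pass only for local hosts."""
    parts = urlsplit(url)
    if parts.scheme in SECURE_SCHEMES:
        return True
    if parts.scheme in {"http", "ws"}:
        host = parts.hostname or ""
        return host in {"localhost", "127.0.0.1", "::1"} or host.endswith(".localhost")
    return False
```

The localhost carve-out is what lets developers test secure-context-only APIs without provisioning certificates.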

Realising all this was becoming an issue as the web got more complicated, Google kicked off the secure contexts initiative in 2014, gradually adding these requirements to Chrome. Mozilla has busied itself doing the same for Firefox.

Since then, the whole thing has turned into a W3C draft proposal, another cog in the multi-dimensional drive to make all traffic between web users and websites encrypted, including the possibility of DNS queries in the future.

Mozilla is set to start using secure contexts for existing features too, on a “case-by-case basis.” The catch is that turning off support for HTTP in web technologies won’t necessarily be quick or without complication.

As Mozilla noted in 2015:

Removing features from the non-secure web will likely cause some sites to break.  So we will have to monitor the degree of breakage and balance it with the security benefit.

Google-centrism might be another factor, although given that Microsoft’s IE and Edge are the only major browsers that don’t yet support the idea, this is probably of minor importance.

The drag might be simply that the HTTPS movement has turned into a big undertaking, assertively pushing HTTPS by the front door and a fragmented series of secure contexts by the back.

Mozilla’s announcement reminds us that while the momentum is with it, this one has a way to roll yet.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JJgqe7kucgU/

Another round of click-fraud extensions pulled from Chrome Store

A security researcher has claimed that half a million Chrome users have been hit by four malicious browser extensions pushing click and SEO fraud.

Icebrg’s Justin Warner and Mario De Tore spotted the extensions while investigating a spike in outbound traffic from a workstation in a customer’s network. The company claims the four extensions had more than 500,000 downloads in all.

The extensions were Change HTTP Request Header (which has a legitimate use: hiding the browser type from trackers) and three apparently related to it: Nyoogle – Custom Logo for Google, Lite Bookmarks, and Stickies – Chrome’s Post-it Notes.

Change HTTP Request Header didn’t itself contain malicious code, Icebrg’s post stated. Rather, it downloaded “a JSON blob from ‘change-request[.]info’”, and that blob pushed a configuration update, after which obfuscated JavaScript was fetched from the control domain.

“Once injected, the malicious JavaScript establishes a WebSocket tunnel with ‘change-request[.]info’. The extension then utilises this WebSocket to proxy browsing traffic via the victim’s browser”, the post said, and that was how the click-fraud was launched.

A possible second use of the proxy would be to browse a company’s internal network, for information that could be sent back to the control domain.

The three related extensions used similar techniques to inject unsafe JavaScript, Icebrg’s analysts believe. The “Stickies” app went one step further, trying “to obfuscate its ability to retrieve external JavaScript for injection by modifying its included jQuery library”.

Google has removed the extensions from the Chrome Store. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/another_round_of_clickfraud_extensions_spiked_from_chrome_store/

Wanna motivate staff to be more secure? Don’t bother bribing ’em

Usenix Enigma It’s frustrating getting users to keep information and systems secure on a daily basis. However, don’t try any smart gimmicks – particularly offering wedges of cash or other prizes for good behavior.

It doesn’t work. Quite the opposite, it can make things worse.

Paying out a bonus to those who make few or zero security mistakes ultimately demotivates staff, Masha Sedova, cofounder of Elevate Security, told Usenix’s Enigma 2018 security conference in California on Tuesday.

This is, in part, because once an incentive – especially a financial one – is dangled as a carrot, it’s usually never substantial enough to warrant the extra effort required to follow security best practices. Thus, most people don’t bother at all to meet the standard, reducing overall security.

Play nice

Another, er, motivational technique – naming and shaming of employees by the BOFH – doesn’t work either. Sedova said this massively demotivates staff. Instead, IT security teams need to be more positive with users. And by positive, she meant that workers should be praised for good behavior, and be given better tools to tackle threats to the network.

Sedova said that research, and her experience, shows that around 20 per cent of the workforce are very motivated to secure their systems. Around 70 per cent are ambivalent about it and will use security if it’s easy enough, but 10 per cent won’t touch security at all – and in the latter case, naming and shaming may be the only option.

During the Q&A after Sedova’s talk, a Facebook engineer in the audience said that the web giant had, as a trial, deployed a button in its internal email system that staffers could press to easily report any message thought to be phishing or packed with malware.

On one level, it worked: the security team saw a 350 per cent increase in dodgy email reports. The problem is most of them were false positives, creating more work for the network defenders. That’s an issue, Sedova said, but the alternative is that too little gets reported.

Ultimately, the right tools and the right level of encouragement and praise are needed to boost corporate computer security, not public reprimands, she said.

Five-factor authentication

Two-factor authentication is seen by some as a bit of a gimmick, a faff, or a tool for the paranoid, even though it’s pretty good at stopping unauthorized access to work and personal accounts.

With the help of Facebook, Sauvik Das, an assistant professor of interactive computing at Georgia Tech in the USA, and his team polled a random sample of 1.5 million FB users to find how these netizens preferred to secure their profiles on the social network.

Das presented his findings in a separate Enigma conference session, and revealed that a popular alternative to traditional two-factor authentication is Facebook’s trusted contacts functionality. This allows peeps to choose three to five close friends who can green-light stuff like password reset requests. For instance, if someone can’t log in to the website – whether because they forgot their password or their account is being hijacked – their trusted mates can get together and confirm the user is the legit account owner.
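Mechanically, trusted contacts is a k-of-n human approval threshold. A toy sketch of the server-side check – the names and threshold here are invented, not Facebook's implementation:

```python
def recovery_approved(trusted_contacts, approvals, threshold=3):
    """Grant an account-recovery request once `threshold` distinct
    trusted contacts have vouched for it; strangers don't count."""
    vouched = set(approvals) & set(trusted_contacts)
    return len(vouched) >= threshold

contacts = ["amy", "ben", "carol", "dev", "ed"]   # the 3-5 close friends

ok = recovery_approved(contacts, ["amy", "carol", "ed"])            # 3 of 5
blocked = recovery_approved(contacts, ["amy", "mallory", "mallory"])
```

Deduplicating via sets matters: one attacker (or one enthusiastic friend) vouching repeatedly still counts only once toward the threshold.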

The research also revealed that Facebook users are more concerned about the security of their friends and family than they are about their own accounts. This means it should be possible to make security awareness spread in a viral way. “Reminding family about security techniques can be very effective in changing behavior,” Das said. “But it has limitations – warn people too often and you’re seen as a nag.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/17/staff_security_training/