EU raises eyebrows at possible US encryption ban

The growing battle over end-to-end encryption took another turn last week, when EU officials warned that they may not take kindly to a US encryption ban or insertion of crypto backdoor technology.

In June 2019, senior US government officials met to discuss whether they could legislate tech companies into not using unbreakable encryption. According to Politico, the National Security Council pondered whether to ask Congress to outlaw end-to-end encryption, which is a technology used by companies to keep your data safe and secure.

To recap briefly, US law enforcement worries about targets such as criminals and terrorists “going dark” by using this technology to shield their communications; banning it outright would make it easier for government agencies to access those messages and documents. Encryption advocates counter that making encryption breakable would also allow malicious actors, such as foreign governments, to steal domestic secrets, and they worry about unlawful access to information by their own governments.

US officials didn’t reach a decision on the issue, but news of the conversation spooked MEP Moritz Körner enough to ask the European Commission some formal questions picked up by Glyn Moody over at Techdirt. Körner asked whether the Commission would consider a similar ban on encryption in the EU. He also asked what a US ban would mean for existing data exchange agreements between the EU and the US:

Would a ban on encryption in the USA render data transfers to the US illegal in light of the requirement of the EU GDPR for built-in data protection?

Currently, the two regions enjoy an agreement known as the EU-US Privacy Shield, which they introduced after the European Court of Justice invalidated a previous agreement called the International Safe Harbor Privacy Principles.

The Privacy Shield is a voluntary certification scheme for US businesses. By certifying under the scheme, US companies demonstrate that they offer adequate protection when transferring and processing data on EU citizens. It shows that they have made some effort to follow Europe’s strict privacy principles in the absence of any cohesive federal privacy law in the US.

On 20 November, European Commission officials gave their answers, confirming that they would not consider a ban on encryption in the region and pointing out that the General Data Protection Regulation (GDPR) explicitly refers to encryption as a privacy protection measure.

The next answer was a bit more contentious:

Should the U.S. enact new legislation in this area, the Commission will carefully assess its impact on the adequacy finding for the EU-U.S. Privacy Shield, a framework which the Commission has found to provide a level of data protection that is essentially equivalent to the level of the protection in EU, thus allowing for the transfer of personal data from the EU to participating companies in the U.S. without any further restrictions.

In short, the jury is still out on how the EU would treat transatlantic data transfers if the US mandated crypto backdoors.

Ashley Winton, partner at McDermott Will &amp; Emery UK LLP and a specialist in data privacy law, explained that a split between the two territories on data exchange could have serious consequences. He told us:

We know that under the GDPR personal data must be held securely, and so legislating against strong encryption or introducing legal back doors, is not going to be good for the safe passage of European Personal Data – howsoever it gets there.

Unlike the annual review of Privacy Shield, if the European Court rules that the transfer of Personal Data to the US is not safe, all affected transfers will be stopped immediately and a world of data protection compliance pain will ensue.

The EU’s reservations about an encryption ban sit in stark contrast to the UK’s approach.

The Investigatory Powers Act 2016 compels communication providers to let the government know in advance of any new encryption products and services, allowing it to request technical assistance in overcoming them. Last month, the UK and the US signed an agreement under the March 2018 CLOUD Act allowing each other to demand electronic data directly from tech companies based in the other country, without legal barriers.

Winton said that another soon-to-be-decided case will once again bring the issue of data transfer from the EU to the US into the spotlight. On 12 December 2019, the European Court of Justice (ECJ) will decide on a case known as Schrems 2, a legal challenge against Facebook in Ireland brought by Austrian lawyer and privacy advocate Max Schrems.

Schrems was responsible for bringing down the original Safe Harbor agreement. Concerned by Facebook’s cooperation with the US intelligence services as revealed by Edward Snowden, he complained to the Irish Data Protection Commissioner that the transfer of his personal data to Facebook US violated his rights. The ECJ ruled in his favour.

Schrems 2 focuses on another mechanism used to transfer data from the EU to the US: standard contractual clauses (SCCs). These are bilateral agreements between EU and US organizations based on standard templates, and they’re often used by companies in countries that don’t have an adequacy agreement.

SCCs are a big deal: they are the go-to mechanism for extraterritorial data transfers among 88% of respondents, according to a report by the International Association of Privacy Professionals.

We will stay tuned.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tEJpvShW5zc/

Splunk customers should update now to dodge Y2K-style bug

If you’re a Splunk admin, the company has issued a critical warning regarding a showstopping Y2K-style date bug in one of the platform’s configuration files that needs urgent attention.

According to this week’s advisory, from 1 January 2020 (00:00 UTC) unpatched instances of Splunk will be unable to extract and recognise timestamps submitted to it in a two-digit date format.

In effect, it will parse two-digit years correctly up to 31 December 2019, but as soon as the calendar rolls over to 1 January 2020 it will treat the new year as invalid, either defaulting back to a 2019 date or substituting its own misinterpreted date.

In addition, beginning on 13 September 2020 at 12:26:39 PM UTC, unpatched Splunk instances will no longer be able to recognise timestamps for events with dates based on Unix time (which began at 00:00 UTC on 1 January 1970).
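
As a quick sanity check (a minimal Python sketch, not something from the advisory itself), that second cut-off lines up with the Unix epoch counter reaching 1,600,000,000 seconds:

    # Convert the last pre-cut-off epoch value to a human-readable UTC time.
    from datetime import datetime, timezone

    cutoff = 1_599_999_999  # one second before the counter hits 1,600,000,000
    print(datetime.fromtimestamp(cutoff, tz=timezone.utc))
    # prints: 2020-09-13 12:26:39+00:00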

Left unpatched, the effect on customers could be far-reaching.

What platforms like Splunk do is one of the internet’s best-kept secrets – turning screeds of machine-generated log data (from applications, websites, sensors, Internet of Things devices, etc) into something humans can make sense of.

There was probably a time when sysadmins could do this job by hand, but there are now so many devices spewing so much data that automated systems have become a must.

This big data must also be stored somewhere, hence the arrival of cloud platforms designed to do the whole job, including generating alerts when something’s going awry or simply to analyse how well everything’s humming along.

Bad timing

As with any computing system, however, Splunk depends on events having accurate time and date stamps. Without that, it has no way of ordering events, or of dealing meaningfully with the world in real time.

According to Splunk, in addition to inaccurate event timestamping this could result in:

  • Incorrect rollover of data buckets due to the incorrect timestamping
  • Incorrect retention of data overall
  • Incorrect search results due to data ingested with incorrect timestamps
  • Incorrect timestamping of incoming data

It gets worse:

There is no method to correct the timestamps after the Splunk platform has ingested the data. If you ingest data with an un-patched Splunk platform instance, you must patch the instance and re-ingest the data for timestamps to be correct.

In short, there’s no quick way to back out of a problem which will only grow with every passing hour, day and week that it’s allowed to continue.

The problem lies with a file called datetime.xml, which Splunk uses to extract incoming timestamps via regular expressions. Its patterns recognise two-digit years up to and including 19, but not 20 onwards.
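
To see the failure mode in miniature (a simplified sketch, emphatically not Splunk’s actual pattern), consider a regular expression that hard-codes the leading digit of a two-digit year; it stops matching the moment that digit changes:

    import re

    # Hypothetical, simplified pattern: accepts two-digit years 00-19 only.
    two_digit_year = re.compile(r"^(?:0\d|1\d)$")

    for yy in ("18", "19", "20", "21"):
        status = "recognised" if two_digit_year.match(yy) else "invalid"
        print(yy, "->", status)
    # 18 -> recognised
    # 19 -> recognised
    # 20 -> invalid
    # 21 -> invalid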

What to do

Leaving aside Splunk Cloud customers, who should receive the update automatically, there are three ways to patch the bug on all operating systems, the company said.

  • Download an updated version of datetime.xml and apply it to each of your Splunk platform instances
  • Make manual modifications to existing datetime.xml on your Splunk platform instances
  • Upgrade Splunk platform instances to a version with an updated version of datetime.xml

The complication is that applying the new file, or editing it manually, requires customers to stop and restart Splunk – a disruptive process when applied to more than one Splunk instance. Editing datetime.xml should also be done with great care.

Although reminiscent of the famous Y2K millennium bug predicted to affect computer systems on 1 January 2000, this class of bug has popped up on other occasions since then.

A recent example is the GPS date issue that hit older satellite navigation systems earlier this year.

A variation on the same date/GPS problem affected Apple iPhone 5 and iPhone 4s in October, which meant that owners had to update their devices by 3 November 2019 or suffer app synchronisation problems.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wscOPxOjeto/

Facebook, Twitter profiles slurped by mobile apps using malicious SDKs

On Monday, Twitter and Facebook both claimed that bad apples in the app stores had been slurping hundreds of users’ profile data without permission.

After getting tipped off by security researchers, the platforms blamed a “malicious” pair of software development kits (SDKs) – from marketing outfits OneAudience and MobiBurn – used by third-party iOS and Android apps to display ads. Neither Twitter nor Facebook has named the offending apps, nor said how many it has found.

Twitter said that this wasn’t enabled by any bug on its platform. Rather, after getting a heads-up from security researchers, its own security team found that the malicious SDK from OneAudience could potentially slip into the “mobile ecosystem” to exploit a vulnerability.

That vulnerability – which is to do with a lack of isolation between SDKs within an app –  could enable the malicious SDK to slurp personal information, including email, username, and last tweet. Twitter hasn’t found any evidence that any accounts got hijacked due to the malicious SDKs, mind you, but that’s what the vulnerability could have led to.

While Twitter hasn’t found any account takeovers, it has found evidence of slurping. The unauthorized data grab affected only Android users’ profiles, via unspecified Android apps:

We have evidence that this SDK was used to access people’s personal data for at least some Twitter account holders using Android, however, we have no evidence that the iOS version of this malicious SDK targeted people who use Twitter for iOS.

Facebook, however, said in a statement that it was suffering at the hands of both bad SDKs, and that it has told both companies to cease and desist:

Security researchers recently notified us about two bad actors, One Audience and Mobiburn, who were paying developers to use malicious software developer kits (SDKs) in a number of apps available in popular app stores. After investigating, we removed the apps from our platform for violating our platform policies and issued cease and desist letters against One Audience and Mobiburn.

Facebook plans to notify the people whose personal data – including name, email and gender – was likely swiped after they gave permission for apps to access their profile information. Twitter says it’s informed Google and Apple about the malicious SDK, so they can take further action if needed, as well as other industry partners.

Facebook’s cautionary words regarding grabby apps:

We encourage people to be cautious when choosing which third-party apps are granted access to their social media accounts.

Well, Facebook should know about grabby apps. Post-Cambridge Analytica data-slurping-pocalypse, as of September 2019, its roster of apps castigated over getting handsy with users’ data (or simply not bothering to respond to Facebook’s audit) was in the tens of thousands.

OneAudience has declined to respond to media questions.

On Monday, MobiBurn posted a statement saying hey, we’re not abusive data suckers. We’re just a matchmaker who hooks you up to app developers who may be data suckers:

No data from Facebook is collected, shared or monetised by MobiBurn.

MobiBurn primarily acts as an intermediary in the data business with its bundle, i.e., a collection of SDKs developed by third-party data monetisation companies. MobiBurn has no access to any data collected by mobile application developers nor does MobiBurn process or store such data. MobiBurn only facilitates the process by introducing mobile application developers to the data monetisation companies.

Nevertheless, the company says it has suspended its activities while it investigates those third-party app developers.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TEWLcroc9-8/

‘Ethical’ hackers say: It’s just hacker. To be one is no longer a bad thing

Ethical hacking is a “redundant term” but to be a “hacker” is no longer a bad thing, according to proponents of the cybersecurity art form known as “penetration testing”.

One-time LulzSec hacker Jake Davis and his deportation-proof former partner-in-crime Lauri Love appeared at a talk organised by pentesting biz Redscan in which the great, the good and The Register chewed the fat about hacking for positive purposes.

“It’s got a negative stigma because of very malicious hacking which is always dumped into the same category of cyber attack,” said Davis, dismissing the term “ethical hacking” while embracing ye olde-fashioned word: “We should just say hacker. To be a hacker is no longer a bad thing.”

Agreeing with Davis, Love opined that “ethical” hacking could be defined as breaking into a computer system but in a setting where “you are in a symbiotic relationship with the thing you are hacking”.

“Whether that’s an intelligence-led penetration test, as you said, the utility depends on how that relationship works; what parameters [are] set as you gain access; trust that has to be built; what responsibilities [you take on] to ensure you’re not causing damage; finding what access you can create,” he added.

As far as the practical utility of pentesting goes, Ian Glover, president of British pentesting ‘n’ accreditation biz Crest, took a pragmatic line.

“We can’t find everything,” he said, “but to do the best job we can, we need to give advice and guidance to organisations on where vulnerabilities are and how to address them.” And how better to achieve that than through testing their defences, right?

First Group’s CISO, Giles Ashton-Roberts, was rather more concerned with legal compliance worries around the practice. Detailing the regulations his multinational transport firm must comply with (including the EU’s Network and Information Security (NIS) Directive, the GDPR and, on the online payments side, PCI DSS, among others), he said: “Where we engage with penetration testers, we have a series of penetration tests continuing through the year… Next time we go for that round of penetration testing it’s ensuring we’ve closed the gap on the vulnerabilities that were found.”

Backing this up, Anthony Lee, a tech and IP lawyer from Rosenblatt, pointedly observed: “Organisations are less concerned about personal data and more concerned about people coming in and making mischief with their systems… The problem you have is you can open the safe but you can’t look at the contents.”

Davis was rather taken with this idea. A pentester since the end of his LulzSec days years ago, he commented: “You’ll open the safe, look at the contents and then pretend you’ve never seen the contents, sign a lot of documents – or, in certain cases, sign a lot of documents saying you can’t replicate those documents [inside the safe] because they’re deadly cyber weapons.”

Following on from this, Love, who now works for an Australian infosec firm, summed up the tension between pentesters and industry:

[It’s] because you’re trying to reconcile two completely different things. You want someone coming in who can be as trustworthy as the legitimate employees allowed to access those systems. You want to know that an organisation like Crest has accredited them to act diligently, responsibly… At the same time you want people close to this to be able to simulate a real-world hacker.

All these views are great provided you have suitably skilled and experienced pentesters to hand, and – as ever – it is the gap between newly qualified folk and experienced personnel which can cause headaches.

Redscan’s director of cybersecurity, Mark Nicholls, said: “I’m seeing, or have seen, a quality issue in terms of people coming on board. Working with certifications and fresh out of university, the practical experience is not there. Maybe there’s not that level of [in-depth] exposure at university – which would go a long way to making better hackers.”

Industry likes pentesting provided its fears are soothed in advance; pentesters (or ethical hackers, call them what you will) are happy to do the work; all they need are enough skilled people. And they’re easy to find. Right? ®

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/27/hacker_ethical_pentester_roundtable/

On the Border Warns of Data Breach

Malware on a payment system could have stolen credit card info from customers in 28 states, according to the company.

On the Border, a border-style Mexican food chain known for “chips as big as your head,” has given notice of a data breach in a payment-processing system serving restaurants in 28 states. The company says that some customer credit card information could have been compromised on visits between April 10 and August 10, 2019.

According to On the Border, malware was installed on the payment-processing system, which then harvested information including names, credit card numbers, credit card expiration dates, and credit card verification codes. Customers who ordered through delivery services or bought catering from the company were not affected. Investigation of the incident is ongoing, and On the Border reports that it is cooperating with law enforcement.

Article source: https://www.darkreading.com/attacks-breaches/on-the-border-warns-of-data-breach/d/d-id/1336467?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Dexphot’: A Sophisticated, Everyday Threat

Though the cryptominer has received little attention, it exemplifies the complexity of modern malware, Microsoft says.

Malware threats don’t have to have a high profile to be extremely dangerous. Sometimes, even the more common strains can pose big problems.

A case in point is “Dexphot,” a cryptomining tool that Microsoft has been tracking for the past year and which the company says exemplifies the complexity and fast-evolving nature of even the more everyday threats that organizations now face.

Dexphot first surfaced in October 2018 and has since then infected tens of thousands of systems but has received little of the attention that some malware threats receive. Microsoft researchers initially observed the malware attempting to deploy files that changed literally every 20 to 30 minutes on thousands of devices.

The company’s subsequent analysis of the polymorphic malware showed it employs multiple layers of obfuscation, encryption, and randomized file names to evade detection.

Like many other modern malware tools, Dexphot was designed to run entirely in memory. It also hijacked legitimate processes so defenders couldn’t easily detect its malicious activity. When Dexphot finally did get installed on a system, it used monitoring services and a list of scheduled tasks to reinfect systems when defenders tried to remove the malware.

The authors of Dexphot have kept upgrading and tweaking the malware in the year since it was first detected, according to Microsoft. Most of the changes have been designed to help the malware evade detection.

What makes Dexphot especially troublesome for defenders is the malware’s use of legitimate processes and services for carrying out its activity. In fact, except for the installer that is used to drop the malware on a system, all other processes that Dexphot uses are legitimate system processes, according to a Microsoft blog post.

Among them are a process for running programs in DLL files (rundll32[.]exe), another for extracting files from ZIP archives (unzip[.]exe), one for scheduling tasks (schtasks[.]exe), and PowerShell for task automation.

Dexphot also employs “process hollowing,” a tactic in which the malware is hidden inside a legitimate process such as svchost[.]exe, tracert[.]exe, and setup[.]exe. Malware hidden in this manner can be hard to find, which is why threat actors have increasingly begun using it, Microsoft says. “This method has the additional benefit of being fileless,” according to the blog post. “Not only is it harder to detect the malicious code while it’s running, it’s harder to find useful forensics after the process has stopped.”

Malware employing such living-off-the-land tactics has become a big and growing problem for enterprise organizations. A recent report from Rapid7 identified several legitimate processes that attackers are increasingly using to hide malicious activity. Rapid7 found that PowerShell is easily the most abused executable. Other popular processes include cmd[.]exe, ADExplorer[.]exe, procdump64[.]exe, rundll32[.]exe, and schtasks[.]exe.

“The continued focus on using built-in Windows functions allow the attackers to persist mostly unnoticed after their initial bypass of security controls,” Rapid7 notes in its report. Since few security tools are designed to look for threats in administrative tools and legitimate processes, the vendor explains, organizations need to monitor for known usage patterns for Windows utilities used by attackers.
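
As a concrete illustration of that sort of monitoring (a minimal sketch assuming a Windows host and the third-party psutil package; the watchlist below is illustrative, not a vetted detection rule), a script can enumerate running instances of commonly abused utilities and surface their parents and command lines for review:

    import psutil

    # Commonly abused living-off-the-land binaries to review (illustrative).
    WATCHLIST = {"powershell.exe", "rundll32.exe", "schtasks.exe", "tracert.exe"}

    for proc in psutil.process_iter(["name", "ppid", "cmdline"]):
        name = (proc.info["name"] or "").lower()
        if name in WATCHLIST:
            try:
                parent = psutil.Process(proc.info["ppid"]).name()
            except psutil.Error:
                parent = "<exited>"
            cmd = " ".join(proc.info["cmdline"] or [])
            print(f"{name} (pid {proc.pid}) parent={parent} cmd={cmd}")

Flagging on name alone generates plenty of noise, of course; the point is that defenders have to baseline normal usage of these utilities and alert on deviations, rather than wait for a signature match.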

Article source: https://www.darkreading.com/vulnerabilities---threats/dexphot-a-sophisticated-everyday-threat-/d/d-id/1336469?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

An Alarming Number of Software Teams Are Missing Cybersecurity Expertise

The overwhelming majority of developers worry about security and consider it important, yet many lack a dedicated cybersecurity leader.

Despite concerns over software security, many companies have not assigned a cybersecurity leader to help secure their applications — a problem that will only worsen as the worldwide shortage of technical security experts deepens.

In data published on Nov. 21, software security firm WhiteHat Security found that three-quarters of developers worry about the security of their applications and about seven out of eight consider security an important part of development, yet only about half of their teams have a dedicated cybersecurity expert. The “Developer Security Sentiment Study,” which produced the data, found that about 49% of development teams lack a dedicated cybersecurity leader and that 43% prioritize deadlines over secure coding.

“While developers’ concerns about securing their code are on an upward trajectory, it’s clear the industry has a long way to go,” said Joseph Feiman, chief strategy officer for WhiteHat Security, in a statement. “Developers are on the front lines when it comes to protecting their organizations from cyberattacks, and they need the right tools and training to handle this burden.”

Holes in software security reflect the impact of companies’ shift toward more agile programming methodologies. In the past, most IT dollars were spent by the actual IT organizations, and while that’s still true, the budgets of non-IT groups, such as DevOps teams, are growing, says Greg Young, vice president of cybersecurity at security firm Trend Micro.

In 2020, businesses will be either a “have” or a “have-not” when it comes to security, he says.

“AppSec, cloud security, and securing DevOps are very doable, but they take new models, not just new tools,” Young says. “The ‘haves’ will manage AppSec well, such as building security into DevOps by providing container and workload security automatically and managing cloud security postures even when they are in cloud spaces the company didn’t know they owned. The ‘have-nots’ will continue to try and force DevOps into older security models, rather than adapting themselves, and miss out on innovation opportunities while getting hacked.”

Adding to the pressures on companies and their ability to incorporate security into their development and operations is the general shortage of knowledgeable cybersecurity workers. Organizations that integrate security into their development life cycles generally have better security outcomes, but the shortage in workers means they have to pay a high price to do so, says Anthony Bettini, chief technology officer for WhiteHat Security.

“Companies that are able to pay for experienced AppSec people do,” he says. “Companies whose budgets do not permit this either assign the role to someone internally or hire more junior folks from outside. The best approach likely depends on the organization based on their budget and time scale for the outcomes they desire to achieve.”

Unsurprisingly, more than half of security professionals — 52% — have burned out at their job, according to the WhiteHat report.

Companies also have to worry about newer threats that affect software development, such as locking down their application programming interfaces (APIs) from abuse and security threats. More than a quarter of companies have detected reconnaissance attempts on their API servers, which make data and services available to Web and mobile applications, according to a survey of 100 attendees conducted by CloudVector at the Cyber Security and Cloud Expo. Another 16% do not know whether they have been attacked.

“The reality is likely [that the number of attacks is] much higher given that most organizations lack the capability to detect these threats,” said Ravi Balupari, vice president of engineering and threat research at CloudVector, in a blog post. “The lack of visibility into API payloads is a major blind spot.”

Developing in-house expertise in these cybersecurity threats does not seem to be a priority either. Only 30% of developers have received some sort of security certification in their current or previous jobs, according to the WhiteHat survey.

There is good news, however. The vast majority of development teams — 82% — said they scan their software at least monthly, the survey found.

Article source: https://www.darkreading.com/application-security/an-alarming-number-of-software-teams-are-missing-cybersecurity-expertise/d/d-id/1336470?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Implications of Last Week’s Exposure of 1.2B Records

Large sums of organized data, whether public or private, are worth their weight in gold to cybercriminals.

Late last week the security industry learned that a trove of public data had been exposed on an unsecured server: 4 terabytes of information, or about 1.2 billion records.

The leak was discovered by security researcher Vinny Troia and first reported by Wired. All of the data exposed was publicly available: profiles of hundreds of millions of people included home and mobile phone numbers, related social media profiles (Facebook, Twitter, LinkedIn, GitHub), and work histories seemingly pulled from LinkedIn profiles. Troia found nearly 50 million unique phone numbers and 622 million unique email addresses, all easily accessible online.

Dave Farrow, senior director of information security at Barracuda Networks, was dismayed when he first heard mention of a security incident last week. A data breach would mean his team would have to mobilize, he says, and at the time he knew little about the incident. Farrow initially guessed the news would be another exposed Elasticsearch cluster or Amazon S3 bucket, a fairly common occurrence he describes as “a prominent and easy mistake to make.”

When he learned more about the data leak, Farrow was relieved. The employees and customers he’s tasked with protecting “weren’t any more exposed that day than they were the day before,” he says. While data enrichment makes him “a little bit uneasy,” he adds, there wasn’t much of a change in security posture for anyone whose data was exposed in the leak.

A security incident that generates conversation typically involves a breach or exposure of comparatively more sensitive data. But this unprotected server didn’t store personally identifiable information like Social Security numbers, nor did it contain passwords or payment card data. So why did the exposure of publicly accessible data have people talking?

The amount and type of information exposed, and the way it was organized, could give cybercriminals the tools they need to assume other identities or launch spear-phishing attacks. As Wade Woolwine, Rapid7’s principal threat intelligence researcher, puts it: “Data in aggregate is always worth something to someone … large sums of data are worth their weight in gold.”

You Are for Sale
The server seemed to contain four separate data sets. Three were labeled to indicate they held data from San Francisco-based People Data Labs, which claims to sell contact, resume, social, and demographic data for more than 1.5 billion people. Its website advertises more than 1 billion personal email addresses, 420 million LinkedIn URLs, and 1 billion Facebook URLs and IDs.

The fourth dataset was labeled “OXY,” which is believed to contain information from data broker Oxydata, Troia told Wired. Its website claims to sell information on more than 380 million business professionals, including contact info, social profiles, industry, and education.

Data enrichment is a legal but controversial practice. “The industry exists for the purpose of influencing people and giving you access to people you want to influence,” says Farrow, who says he has heard both sides of the argument. On one hand, employees often use this data to ensure they’re not sending mailers to or cold-calling the wrong people. They could get the same information themselves on Facebook or LinkedIn; data aggregators speed up the process.

At the same time, it “feels like an intrusion on our privacy,” he says. Cybercriminals can use this leaked data to influence victims to their advantage. A leak like this gives attackers access to organized and meaningful information, as opposed to a broad data dump. It forces those affected to think twice about who they trust — about whether a message is legitimate or malicious.

Further, there is a difference between this data leak and other security breaches in which credit card numbers or passwords are stolen. “In those types of breaches, there is a clear call to action,” Farrow says. “The people whose data is leaked actually need to go out and do something.” Stolen passwords can be changed, and stolen credit cards replaced. When people give up so much personal data to tech platforms, there isn’t much they can do about how it’s used.

Responsibility to Protect Data
There are steps that can be taken to protect this data. The question is, whose job is it?

“Any company that’s holding data that might even be remotely valuable is potentially at risk of having it stolen,” Woolwine says. “It gets progressively worse as sensitivity of the data goes up.”

There are certain businesses in which the customer’s security posture has a direct impact on the organization, and this is certainly true for data aggregators, Farrow adds. Sean Thorne, co-founder of People Data Labs, told Wired that customers are responsible for securing data on their servers. While the company does free security audits and consultations, it isn’t accountable.

Woolwine suggests it may be time for the government to get involved, helping the industry move forward by imposing penalties on those who fail to secure information. “I don’t think that without some kind of authoritative oversight, the smaller players are going to get their act together to secure that data,” he explains.

In the meantime, security practitioners should chat with their business colleagues about the best practices involved with data handling. Most business professionals know sensitive data must be encrypted, he says, but the process for handling less-critical information – like that exposed in this leak – should also be evaluated. A scenario like this could prove damaging to a company’s reputation if customers learn their data is mishandled or left exposed on the Web.

“An event like this should be a wake-up call to everybody that handles sensitive data to make sure they coordinate with their security teams to ensure the controls most teams are tasked with putting in place are actually applied so an accident like this doesn’t happen,” Farrow says. “There are all good reasons to protect it, and no good reasons to expose it.”

Article source: https://www.darkreading.com/application-security/the-implications-of-last-weeks-exposure-of-12b-records/d/d-id/1336471?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Gamification is Adding a Spoonful of Sugar to Security Training

Gamification is becoming popular as companies look for new ways to keep employees from being their largest vulnerability.

In 1964 the world learned that a spoonful of sugar helps the medicine go down. It was not the first time a key principle of gamification was said out loud, but it might well be the catchiest. In 2019 tidying up has changed hands from Mary Poppins to Marie Kondo, but the idea that making a task enjoyable makes it more likely to be done has been embraced by the business world — and cybersecurity training.

Merriam-Webster defines gamification as “The process of adding games or gamelike elements to something (such as a task) so as to encourage participation.” And for many responsible for turning new hires from security vulnerabilities into security assets, it’s a key strategy in keeping them focused on their training.

“There are numerous studies that show that gamification not only increases engagement but it increases learning retention,” says Hewlett Packard Enterprise (HPE) cybersecurity awareness manager Laurel Chesky. She says that HPE has increased the degree to which it uses gamification in cybersecurity training because it has seen positive results with the technique.

Within HPE, Chesky says, there is mandatory basic cybersecurity training but much more training is available on an optional basis. “We want them to come and engage with us and consume the common-sense information,” she says. “If we aren’t doing that in a fun and engaging way they simply won’t come back to us. So we have to do that through gamification.”

How to Keep the Fun Factor Up

Moving training to a gamified basis can be effective but, like anything, it can become rote and routine if done poorly, say some. “Gamification is great, but you need variety,” says Colin Bastable, CEO of Lucy Security. “Variety is the spice of life. So I think that gamification is very valuable as part of a broader strategy.”

“I think our training metrics definitely reflect the larger engagement,” Chesky says. “We started off in a very grassroots, DIY type of gaming, with a web-based trivia game that we created. It’s very simple. It’s set up like Jeopardy and we can go online and pick a question for 200, 400, 800, or a thousand points. It’s very, very simple to create and we did it in-house,” she explains.

Joanne O’Connor, HPE cybersecurity training manager, created a different game called “Phish or No Phish” that uses the Yammer collaboration system as a platform. She will post an image on a channel and ask participants whether or not it’s from a phishing email intercepted by the company’s cybersecurity team. Employees providing the correct answer are able to win recognition points exchangeable for various prizes.

These games address the kind of training that Bastable believes is most suitable for gamification. “I would say that it works better for the short, sharp, pointed awareness training as opposed to a long and detailed course,” he explains. “Generally, I would say that what you want to do is create an environment that engages rapidly and that engages people where another format might not.”

O’Connor says that many of their games are designed to be completed within about 20 minutes — experiences that allow the employee to engage deeply to learn a single facet of cybersecurity.

The Science of Fun

Some academic research, like that of Michael Sailer, Jan Ulrich Hense, Sarah Katharina Mayr, and Heinz Mandl, explores the reasons that gamification can be effective in training. They point out that self-determination theory says that three psychological needs must be met: the need for competence, the need for autonomy, and the need for social relatedness.

The researchers found that “…the effect of game design elements on psychological need satisfaction seems also to depend on the aesthetics and quality of the design implementations. In other words, the whole process of implementing gamification plays a crucial role.”

Bastable says that there’s a common assumption that gamification is something that is effective for younger employees and less so for older workers. He says the reality is that it can be effective for all employees, though different individuals may respond to different types of game mechanics (the way the game looks and is played).

O’Connor agrees. “It’s something that we think about a lot with our new employees being, of course, younger folks and we need to reach them. But really we think it reaches everybody,” she says.

Chesky believes that the tide has turned toward gamification in all types of enterprise training. “I think you see it now in a lot of corporations on an industry level,” she says. “I think you’ve definitely seen most corporations and of course the industry moving towards that for all different kind of mandated company training because it works,” Chesky says. “It’s all about engagement.”

Article source: https://www.darkreading.com/edge/theedge/gamification-is-adding-a-spoonful-of-sugar-to-security-training/b/d-id/1336472?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Naked Security needs an intern! Here’s how to apply

We are looking for a student to join our team for a 12-month internship at our Abingdon, UK, headquarters.

If you’re currently studying marketing, business or another relevant field, and have strong written, project management and organisational skills, we want you!

As part of the Content Marketing internship, you’ll work on the Naked Security team where you’ll get to:

  • Engage with people on the Naked Security Twitter, Facebook and Instagram accounts.
  • Create new social media content, build social tiles and edit videos.
  • Monitor our social media accounts and track engagement.
  • Take part in the production of our weekly podcast.
  • Identify new social media opportunities, trends and platforms.
  • Bring your unique perspective to content brainstorming meetings.

But hurry! Sophos is hosting a Marketing Recruitment Day on Thursday 5 December. That’s next week!

Find out more about the role and submit your CV to apply.

We chatted to this year’s intern, Harry McMullin, about his experiences working with the team and what you should do if you’re thinking about applying:

(Watch directly on YouTube if the video won’t play here.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/T1Acg96DhTs/