STE WILLIAMS

Google caught a Russian state hacker crew uploading badness to the Play Store

Google has said it fired off 12,000 warnings to unlucky users of its Gmail, Drive and YouTube services, telling them that they’re being phished by state-backed hackers.

The ad tech firm’s Threat Analysis Group (TAG) said in a blog post that between July and September it told people in 149 countries around the world that they were being “targeted by government-backed attackers”, adding that the volume was consistent with the number of warnings sent during the same period in 2017 and 2018.

“Over 90 percent of these users were targeted via ‘credential phishing emails’,” wrote Google’s Shane Huntley, who gave an example of one such phishing email having been sent from “Goolge”.

TAG went on to highlight a Russian state-sponsored hacking crew named Sandworm*, which in 2017 started deploying Android-based malware to the Google Play store and later evolved to phishing and compromising legit devs before deploying malicious updates to previously trusted apps. Google’s TAG, naturally, said it detected this and stopped Sandworm from doing these bad things.

Kevin Bocek, threat intelligence veep from Venafi, said:

“The most troubling of [Google TAG’s] examples was that [Sandworm] was able to compromise code signing keys from a legitimate app developer, via a phishing email, and add its own backdoor into an app… This just shows the power of code signing, it’s like a god that machines trust blindly. As more and more hackers see the potential, and ease, for misusing keys and certificates we’ll see more of these attacks. We must ensure in the software build process code signing and machine identities are protected.”

Sandworm previously used a Windows zero-day in 2014 to spy on NATO and the EU, among other targets.

Piers Wilson, product management head of Huntsman Security, opined that all this means companies must be “constantly vigilant”, saying: “Google’s announcement highlights that anyone could be a target of nation state attacks. You might assume you’re not of interest to government-backed attackers, but even someone only tangentially related to people or organisations in power could be a way into that target and so a valid target themselves.”

Cesar Cerrudo, chief techie of IOActive, advised folks to “avoid clicking on links unless you are sure they are safe and install strong protections on your endpoint devices.” Sound advice – provided you also take care while thumbing through emails on your phone or tablet. ®

*Nomenclature notes

Sandworm has also been named (deep breath): TEMP.Noble; Electrum; Telebots; Quedagh Group; BE2 APT; Black Energy; and Iridium, not to be confused with the element or the satcom company.

The wildly unchecked proliferation of different names for hacking crews is intended mainly as a marketing gimmick to make threat intel companies appear to be first with the latest news about FancyAPT007PandaSeaTeamCalc!heeheeCr3wBlurt and to drown out the fact that there’s a score of competing firms all tracking the same threats. This is incredibly frustrating for anyone trying to figure out whether this week’s Big Scary Thing is actually the same one from last week but under a different name.

A common problem, it has driven sensible people to build public spreadsheets resolving and deconflicting the various company-specific hacker crew names. El Reg wholeheartedly endorses this approach to making infosec comprehensible again.

Sponsored:
Your Guide to Becoming Truly Data-Driven with Unrivalled Data Analytics Performance

Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/28/google_12000_warnings_phishing_sandworm/

Twitter says it won’t delete tweets from those who have died

Twitter has paused its plans to delete accounts that haven’t been logged into for six months, after an outcry over protecting the accounts of those who have died.

The story began earlier this week, when Twitter announced it was going to rub out accounts that haven’t been logged into within six months.

Twitter emailed owners of those inactive accounts, warning them that they have to sign in by 11 December, lest those accounts be history and their usernames be offered up to others.

The email, sent with a subject line saying “Don’t lose access to @(username),” went on to say that you’ve got to log in and agree to the platform’s current terms of use, if you want to keep your account:

Hello,

To continue using Twitter, you’ll need to agree to the current Terms, Privacy Policy, and Cookie Use. This not only lets you make the best decisions about the information that you share with us, it also allows you to keep using your Twitter account. But first, you need to log in and follow the on-screen prompts before Dec. 11, 2019, otherwise your account will be removed from Twitter.

Twitter told The Verge that the purge was part of its efforts to scrub itself clean of a thick layer of sludge: what one assumes is the machine learning-based optimization of messaging and micro-targeting, the fake and misleading news, the deepfakes, the bots – all of which it pointed to when it recently banned political ads.

This, from a Twitter spokesperson:

As part of our commitment to serve the public conversation, we’re working to clean up inactive accounts to present more accurate, credible information people can trust across Twitter. Part of this effort is encouraging people to actively log-in and use Twitter when they register an account, as stated in our inactive accounts policy.

We have begun proactive outreach to many accounts who have not logged into Twitter in over six months to inform them that their accounts may be permanently removed due to prolonged inactivity.

That landed a deep blow to those who treasure the tweets of those who have died. Unlike Facebook, Twitter lacks a way to memorialize someone’s account after their death, and people have been mourning the impending loss.

So unless someone had the login of those users who have died, and could accept the new terms on behalf of their deceased loved one, the account would be deleted and the tweets lost forever.

Drew Olanoff, now the VP of Communications for venture equity fund Scaleworks, wrote an understandably panicked account for TechCrunch about what his loss would be if Twitter deleted his father’s account – an account for which Olanoff doesn’t have the login.

My heart sank. And I cried. You see, I didn’t think about this. It is a big deal.

My father’s Twitter account isn’t active. He passed away over four years ago. My Dad was a casual tweeter at best. He mostly used it because I, well, overused it. And it was charming. Once in a while he’d chime in with a zinger of a tweet and I’d share it humbly with the folks who kindly follow me.

Twitter tweeted yesterday that it’s heard the public, and would be thinking of ways to memorialize accounts before moving forward with the deletion.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/71G_S3qaB6o/

HPE warns of impending SSD disk doom

Techies are used to worrying about the longevity of their data storage. Hard drive heads used to have a nasty habit of crashing before laptops introduced software to protect them from drops and power surges. ‘Data rot’ can damage your DVD storage, and magnetic tape can suffer as its substrates and binders degrade.

But what about the firmware, which contains the instructions for reading and writing from the media in the first place? That’s now an issue too, thanks to HPE. It had to recall some of its solid-state drives (SSDs) last week after it found that they were inadvertently programmed to fail.

The company released a critical firmware patch for its serial-attached SCSI (SAS) SSDs, after revealing that they would permanently fail by default after 32,768 hours of operation. That’s right: assuming they’re left on all the time, three years, 270 days, and eight hours after you write your first bit to one of these drives, your records and the disk itself will become unrecoverable.
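The Register doesn’t say so, but that failure threshold is suspicious: 32,768 is exactly 2^15. A short Python sketch, purely speculative about the mechanism (HPE’s advisory doesn’t confirm the cause), shows how a signed 16-bit power-on-hours counter would wrap at exactly that point, and double-checks the article’s uptime arithmetic:

```python
# Speculative illustration: 32,768 is 2**15, the overflow point of a signed
# 16-bit counter, so a plausible (unconfirmed) culprit is a power-on-hours
# field going negative. HPE's advisory does not name the cause.

def poh_as_int16(hours):
    """Wrap an hour count the way a signed 16-bit integer would, C-style."""
    h = hours & 0xFFFF
    return h - 0x10000 if h & 0x8000 else h

assert poh_as_int16(32767) == 32767    # last good hour
assert poh_as_int16(32768) == -32768   # counter wraps negative

# Sanity-check the article's arithmetic for 32,768 hours of continuous uptime:
days, hours = divmod(32768, 24)        # 1365 days, 8 hours
years, days = divmod(days, 365)        # 3 years, 270 days
print(years, days, hours)              # 3 270 8
```

The arithmetic matches the article: three years, 270 days, and eight hours of continuous operation.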

The company explained the problem in an advisory, adding that an unnamed SSD vendor tipped it off about the issue. These drives crop up in a range of HPE products. If you’re an HPE ProLiant, Synergy, Apollo, JBOD D3xxx, D6xxx, D8xxx, MSA, StoreVirtual 4335, or StoreVirtual 3200 user and you’re using a version of the HPE firmware before HPD8, you’re affected.

You might hope that a RAID configuration would save you. RAID implementations (other than RAID 0, which focuses on speed) store data redundantly, via mirroring or parity, meaning that you can recover your data if a disk in your system goes down. However, as HPE points out in its advisory:

SSDs which were put into service at the same time will likely fail nearly simultaneously.

Unless you replaced some SSDs in your RAID box, they’ve probably all been operating for the same amount of time. RAID doesn’t help you if all your disks die at once.

This bug affects 20 SSD model numbers, and to date, HPE has only patched eight of them. The remaining 12 won’t get patched until the week beginning 9 December 2019. So if you bought those disks a few years ago and haven’t got around to backing them up yet, you might want to get on that.

HPE explains that you can also use its Smart Storage Administrator to calculate your total drive power-on hours and find out how close to data doomsday your drive is. Here’s a PDF telling you how to do that.

Unfortunately, HPE didn’t include the same kind of warning that Mission: Impossible protagonist Jim Phelps got at the beginning of every episode: “This tape will self-destruct in five seconds”.

But then, 117,964,800 seconds is a little harder to scan. In any case, your mission, should you choose to accept it, is to back those records up.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zce3qXAYQ1A/

Ransomware attack freezes health records access at 110 nursing homes

Happy Thanksgiving: your elder loved one’s life may be at risk.

About 110 nursing homes and acute-care facilities have been crippled by a ransomware attack on their IT provider, Virtual Care Provider Inc. (VCPI), which is based in the US state of Wisconsin and which serves up data hosting, security and access management to nursing homes across the country.

The attack was still ongoing on Monday, when cybersecurity writer Brian Krebs first reported the assault.

Krebs says it involves a ransomware strain called Ryuk, known for being used by a hacking group that calculates how much ransom victimized organizations can pay based on their size and perceived value.

Whoever it was who launched the attack, they got it wrong in this case. VCPI chief executive and owner Karen Christianson told Krebs that her company can’t afford to pay the roughly $14m ransom that the attackers are demanding in Bitcoin. Employees have been asking when they’ll get paid, but the top priority is to wrestle back access to electronic medical records.

The attack affected virtually all of the firm’s core offerings: internet service, email, access to patient records, client billing and phone systems, and even the internal payroll operations that VCPI uses to pay its workforce of nearly 150. Regaining access to electronic health records (EHR) is the top priority because without that access, the lives of the seniors and others who reside in critical-care facilities are at stake.

This is dire, Christianson said:

We’ve got some facilities where the nurses can’t get the drugs updated and the order put in so the drugs can arrive on time. In another case, we have this one small assisted living place that is just a single unit that connects to billing. And if they don’t get their billing into Medicaid by December 5, they close their doors. Seniors that don’t have family to go to are then done. We have a lot of [clients] right now who are like, ‘Just give me my data,’ but we can’t.

As Krebs notes, recent research suggests that death rates from heart attacks spike in the months and years following data breaches or ransomware attacks at healthcare facilities. A report from Vanderbilt University Owen Graduate School of Management posits that it’s not the attacks themselves that lead to the death rate rise, but rather the corrective actions taken by the victimized facilities, which might include penalties, new IT systems, staff training, and revision of policies and procedures.

Ironically, those corrective measures introduce a long, slow learning curve. From the report:

Corrective actions are intended to remedy the deficiencies in privacy and security of protected health information. However, enhanced security measures may introduce usability – which we define as the ease of use – problems. New security procedures typically alter how clinicians access and use clinical information in health information systems and may disrupt the provision of care as providers require additional time to learn and use the new or modified systems.

Ryuk strikes again

The ransomware flavor used against the nursing homes was Ryuk: an especially pernicious variant used not only to prey on our elders, but also on kitties and doggies. This week, we found out that Ryuk was used in a ransomware attack that affected hundreds of veterinary hospitals.

Ryuk has also been used in ransomware attacks against organizations including the city of New Bedford in Massachusetts, the Chicago Tribune, and cloud hosting provider DataResolution.net.

How long has the attack been going on?

Krebs reports that Ryuk was unleashed inside VCPI’s networks around 1:30 a.m. CT on 17 November. It could have been lying in wait for months or years, however, as the intruders mapped out the internal networks and compromised resources and data backup systems in preparation for the ultimate attack.

Christianson said that VCPI will publicly document the attack – “when (and if)” it’s brought under control. For now, it’s focusing on rebuilding systems and informing clients, even though the data kidnappers at one point seized control of the firm’s phone systems as it tried to sidestep the damage:

We’re going to make it part of our strategy to share everything we’re going through. But we’re still under attack, and as soon as we can open, we’re going to document everything.

How to protect yourself from ransomware

  • Pick strong passwords. And don’t re-use passwords, ever.
  • Make regular backups. They could be your last line of defense against a six-figure ransom demand. Be sure to keep them offsite where attackers can’t find them.
  • Patch early, patch often. Ransomware like WannaCry and NotPetya relied on unpatched vulnerabilities to spread around the globe.
  • Lock down RDP. Criminal gangs exploit weak RDP credentials to launch targeted ransomware attacks. Turn off Remote Desktop Protocol (RDP) if you don’t need it, and use rate limiting, two-factor authentication (2FA) or a virtual private network (VPN) if you do.
  • Use anti-ransomware protection. Sophos Intercept X and XG Firewall are designed to work hand in hand to combat ransomware and its effects. Individuals can protect themselves with Sophos Home.

For more advice, please check out our END OF RANSOMWARE page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tUJXZ9IXE28/

Kids’ smartwatch security tracker can be hacked by anyone

For researchers at testing outfit AV-Test, the SMA M2 kids’ smartwatch is just the tip of an iceberg of terrible security.

Superficially, it’s not hard to understand why the model M2, on sale for around three years, might appeal to anxious parents or carers.

Costing only $32, it pairs with a smartphone so that adults can track the real-time location of kids via GPS, GSM or Wi-Fi using a simple mapping app and online account. Add a SIM and it can be used to make voice calls and there’s even an SOS button children can press in the event of an emergency.

The colour screen, cartoon icons, and baby-blue or pink colour scheme are almost guaranteed to appeal to younger children.

The punchline?

AV-Test’s investigations reveal that the M2 also happens to be an unmitigated security disaster.

Naked Security has covered numerous security screw-ups over the years but it’s hard to imagine a more face-palming charge sheet than that levelled at the makers of the M2 by AV-Test.

To illustrate the point, the testers use the example of a girl called Anna who lives in Dortmund, Germany.

She vacations with her grandparents in a coastal town called Norderney, where she regularly visits the local harbour around 2 o’clock to spot seals for an hour.

The company knows all of this because Anna is wearing an M2 smartwatch, which has been leaking this information – along with that of another 5,000 children – via a public system whose security is effectively non-existent against any competent hacker.

AV-Test was able to find the names and addresses of these children, their age, images of what they looked like, as well as voice messages transmitted from the watch.

In a development that would be ironic if it weren’t so serious, they were able to discover children’s current locations. Warns AV-Test’s Maik Morgenstern:

We picked out Anna as much as we could have picked Ahmet from London or Pawel from Lublin in Poland.

Authentication fail

The epic fail starts with the fact that communication with the online system is unencrypted and its authentication is weak.

Although an authentication token is generated and sent with requests to the web API to prevent unauthorized access, this token is never checked on the server side and is therefore useless.

Perhaps worse, the smartphone app’s poorly secured web API makes it possible to borrow any user’s account ID and log into that account.

An attacker could not only track and contact a child but lock legitimate adults out of the account.
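AV-Test describes the flaw (a token that is issued but never verified) without publishing the fix. The sketch below is a generic illustration of what server-side checking looks like, not SMA’s actual code; the secret, function names and account IDs are invented. The point is simply that a token the server never re-validates might as well not exist.

```python
import hashlib
import hmac

# Hypothetical sketch of doing it right: sign the account ID on issue, and
# refuse any token whose signature the server can't reproduce.
SERVER_SECRET = b"example-server-secret"  # would live only on the server

def issue_token(account_id):
    """Sign the account ID so later requests can be verified server-side."""
    sig = hmac.new(SERVER_SECRET, account_id.encode(), hashlib.sha256).hexdigest()
    return account_id + "." + sig

def verify_token(token):
    """Return the account ID only if the signature checks out; else None."""
    account_id, _, sig = token.rpartition(".")
    expected = hmac.new(SERVER_SECRET, account_id.encode(), hashlib.sha256).hexdigest()
    return account_id if hmac.compare_digest(sig, expected) else None

token = issue_token("anna-1234")
assert verify_token(token) == "anna-1234"
assert verify_token("anna-1234.forged-signature") is None  # tampering rejected
```

With a check like this in place, simply borrowing another user’s account ID would no longer be enough to log into their account.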

Remember, this is a device that is supposed to be a security tracker for carers that turns out to do the same job for anyone.

This is surely worse than no security tracker at all, because at least using nothing wouldn’t lull users into a false sense of security.

What to do

If you own one of these watches, our advice would be to stop using it immediately.

It’s not clear how many children might be wearing one – AV-Test detected users in Turkey, Poland, Mexico, Belgium, Hong Kong, Spain, the UK, The Netherlands, and China – but it’s likely to be a lot more than the 5,000 the researchers identified.

The maker, SMA, has been told of the flaws while the product’s German distributor has removed it from sale.

The troubling part of this story is that AV-Test has been looking at this type of children’s smartwatch for some years, and this is only the latest and worst example in a sector that seems to have treated security as little more than a tick box – if it looks secure then it probably is.

Indeed, Naked Security has covered security problems with this class of device many times before. In 2017, Germany even reportedly banned the devices over spying worries. Then there’s this week’s case of the baby monitor hacked by a stranger.

Until IoT products like this can demonstrate better security, it’s wise to shop with great caution.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OslMbiNGOKg/

S2 Ep18: Missing cryptoqueen, festive phishing and can the web be saved? – Naked Security Podcast

This week we discuss the large scale crypto-scam which tricked people into investing $400m, Tim Berners-Lee’s proposed principles to save the web from a ‘digital dystopia’, and how to stay safe online during the festive season.

Producer Alice Duckett hosts the show with Sophos experts Paul Ducklin and Peter Mackenzie.

Listen wherever you get your podcasts – just search for Naked Security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BqKN0QM5Rtw/

Cloudy biz Datrix locks down phishing attack in 15 mins after fat thumb triggers email badness

Cloud-‘n’-comms biz Datrix has suffered a phishing attack that resulted in some customers’ contact details being compromised – though the company reckons it contained the attack within 15 minutes.

The London-based firm sent an email to its customers earlier this week, seen by El Reg, confirming it had been “the target of a sophisticated cyber security attack, designed to defraud the company and appropriate company funds”.

Company chairman Rob Wirszycz told us of the attackers: “They’re incredibly clever, these guys.”

He explained that someone within the company had been thumbing through emails on their mobile phone and accidentally tapped a link sent from a compromised supplier of Datrix’s. In turn, that compromised the person’s inbox, allowing the attackers to “access a bunch of internal emails, read them and send them to our finance department”.

Those emails, sent to tempt finance bods into paying fake invoices, linked to a lookalike domain: datrlx.co.uk, with a lowercase L in place of the i in datrix.co.uk.
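That one-character swap is classic typosquatting, and it’s cheap to screen for. Here’s a hypothetical sketch (the confusable pairs and function names are ours, not anything Datrix uses) that flags domains sitting one lookalike character away from a domain you trust:

```python
# Character pairs that render near-identically in many fonts.
CONFUSABLES = {("i", "l"), ("l", "1"), ("o", "0")}

def looks_like(candidate, trusted):
    """True if candidate differs from trusted by one confusable character."""
    if candidate == trusted or len(candidate) != len(trusted):
        return False  # identical, or a swap this simple sketch can't handle
    diffs = [(c, t) for c, t in zip(candidate, trusted) if c != t]
    return len(diffs) == 1 and (
        diffs[0] in CONFUSABLES or diffs[0][::-1] in CONFUSABLES
    )

assert looks_like("datrlx.co.uk", "datrix.co.uk")      # the Datrix lure
assert not looks_like("datrix.co.uk", "datrix.co.uk")  # the real thing
```

A production filter would also need to handle insertions, deletions and Unicode homoglyphs, but even this crude check would have caught the domain used here.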

On top of that, around 300 emails were sent to customers whose details were in emails sent to the hapless Datrix worker. Wirszycz said the company shut off the compromised email account within 15 minutes, preventing the sending of “several thousand” emails.

“We encourage all our customers and suppliers to permanently delete any suspicious emails that have been received from Datrix team members during the past week with the words ‘new project’ in the subject line and to be wary of any suspicious online activity involving our company,” Datrix told its customers in a fresh email alerting them to the incident. Company reps also phoned all of those who had been emailed by the phishers to ensure the warning got through, Wirszycz told us.

“This was clearly the work of someone almost factory-like,” he lamented. “As chairman I’m pleased the business did respond this way, that the guys took the steps we did. Everything seemed to work well.”

Bad things do happen

As we reported last year, phishing crooks need to compromise just the one account to cause havoc, as Datrix’s experience vividly demonstrates.

It’s not just businesses that fall victim to this ever-increasing form of crime: fraudsters have begun targeting charities because those types of organisation tend to be more trusting. Before one starts sneering, however, it’s worth remembering that developers are also a useful target for phish-happy crooks, as Guy Podjarny of security biz Snyk reminded attendees of a conference last year. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/28/datrix_phishing_attack/

This week, we give thanks to Fortinet for reminding us what awful crypto with hardcoded keys looks like

Roundup Here’s a summary of recent infosec news beyond what we’ve already covered – earlier than usual because some of us have Thanksgiving to get through in the US. By the way, watch out for hackers taking advantage of IT teams suffering turkey comas.

Fortinet fsck up: Some Fortinet networking equipment was caught sending customers’ sensitive information over the internet to its servers using weak encryption – XOR with a hardcoded static key. The weakness is present in FortiGate and FortiClient products that have the FortiGuard Web Filter, FortiGuard AntiSpam and FortiGuard AntiVirus features.

Said information potentially includes, depending on your setup, the serial number of the device, full HTTP URLs visited by users (collected for web filtering), email data (for message filtering) and other info.

The security blunder, uncovered by the team at security biz SEC Consult, would allow network eavesdroppers to potentially snoop on web browsing and manipulate some messages – for example, cancelling out malware detection alerts.
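To see why XOR with a hardcoded static key barely counts as encryption, consider this illustrative sketch. The key and traffic below are invented for the example, not Fortinet’s actual key, but the attack shape is the same for any static-key XOR scheme: one good guess at a few plaintext bytes hands an eavesdropper the key, and because the key is hardcoded, every device falls with it.

```python
def xor_cipher(data, key):
    """'Encrypt' (or decrypt -- XOR is its own inverse) with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"FGuard"  # hypothetical hardcoded key baked into every device
intercepted = xor_cipher(b"GET http://example.com/page HTTP/1.1", key)

# An eavesdropper who guesses the first few plaintext bytes ("GET http" is a
# safe bet for web-filter traffic) XORs them against the ciphertext, and the
# key falls straight out.
known_prefix = b"GET http"
leaked = bytes(c ^ p for c, p in zip(intercepted, known_prefix))
assert leaked[:len(key)] == key

# Armed with the key, the attacker reads everything:
assert xor_cipher(intercepted, key) == b"GET http://example.com/page HTTP/1.1"
```

The same property lets an active attacker flip bits in transit, which is how an alert like a malware detection could be cancelled out.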

SEC Consult reported the flaws in May 2018, and only in the past week has a fix been released to shut the holes. Admins should upgrade to FortiOS 6.2.0, FortiClientWindows 6.2.0, and FortiClientMac 6.2.2 as soon as possible.

Coding flaws to avoid: The US government has drawn up a fresh list of the most dangerous software security bugs, with out-of-bounds memory buffer accesses topping the roster. This time, it seems officials used some math and CVE vulnerability figures to produce the table rather than rely on subjective interviews with industry professionals.

“We shifted to a data-driven approach because it enables a more consistent and repeatable analysis that reflects the issues we are seeing in the real world,” said CWE project leader Chris Levendis. “We will continue to mature the methodology as we move forward.”

US also cracks the whip on vulnerability disclosure: While we’re on the topic of Uncle Sam, its Cybersecurity and Infrastructure Security Agency has issued a directive mandating that all government departments must have a setup in place to allow researchers to privately disclose any discovered code vulnerabilities.

“Most federal agencies lack a formal mechanism to receive information from third parties about potential security vulnerabilities on their systems,” it said. “Many agencies have no defined strategy for handling reports about such issues shared by outside parties. Only a few agencies have clearly stated that those who disclose vulnerabilities in good faith are authorized.”

The directive will apply to all federal executive branch departments and agencies, which must devise and publish a vulnerability disclosure program so that flaw-hunters can report holes safely and without fear of being dragged into court. Those submitting vulnerabilities must be able to do so anonymously from anywhere in the world.

Departments will have 270 days to get systems in place before the cybersecurity agency starts enforcing the directive.

Google publishes state hacking stats: When it comes to protecting netizens from state-sponsored hacking, Google has been bragging about its work in the area. Over the past 12 months, the Chocolate Factory’s Threat Analysis Group says it has tracked 270 government-backed hacking crews from more than 50 countries, and issued more than 12,000 alerts to folks in 149 countries warning them that they were being phished by government spies. It also took down disinformation campaigns in Africa and Papua New Guinea.

Splunk has a Y2020 problem: Data analytics outfit Splunk is warning users that they need to upgrade due to a serious timing issue.

According to an advisory, on January 1, 2020 all of its unpatched products will struggle to handle event timestamps that use a two-digit year. On September 13, they will also fail to deal with timestamps that are based on Unix time.

NYPD blue over ransomware: New York’s finest has admitted to getting hit by a ransomware attack that took down its LiveScan fingerprint database.

According to the New York Post, a contractor was setting up a display last October and plugged in a mini-PC that was infected with ransomware. It spread to 23 machines, knocking out the fingerprint checking service. In the end, 200 machines were reformatted to make sure the spread of the malware was arrested. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/28/security_roundup_271119/

A Cause You Care About Needs Your Cybersecurity Help

By donating their security expertise, infosec professionals are supporting non-profits, advocacy groups, and communities in need.

(image source: lidiia, via Adobe Stock)

Victims of abusive relationships are all too familiar with stalkerware — spyware sometimes used by abusers to track their victims’ conversations and locations. Eva Galperin, who heads the Threat Lab at the Electronic Frontier Foundation (EFF), has been pressing antivirus companies to treat stalkerware as a serious problem for some time.

Now she’s finally seeing progress. Last week, EFF and nine other organizations united to launch the new Coalition Against Stalkerware, which aims to spread awareness and help affected victims.

“Our goal is to have a definition, standards for detection, and to get AV companies to change the norms of how this software is treated,” says Galperin.

This is just one of the ways Galperin has used her security knowledge to assist vulnerable populations. She is an outspoken advocate for using security for altruistic purposes. To put it simply: hacking for the greater good. 

“Hacking is curiosity,” she says. “It is the act of taking things apart and seeing how they work. Ideally this is followed by putting something back together so it can work better. [That] can apply to a product – but it can also apply to societal issues. It does not need to be confined to an office.”

Security professionals are needed, and should feel called on to use their experience to help others and to influence larger societal issues, especially now, she says, given the ubiquity of technology in nearly every aspect of our lives.

“These are particularly interesting political times,” says Galperin. “Everyone reads the paper and gets upset about some kind of news involving technology. Digital technology is at the center of our lives. Almost every issue now has some sort of information component.”

Galperin has been giving regular presentations on the topic of security for the greater good at events like Black Hat with security luminary Bruce Schneier, who describes himself as a “public-interest technologist, working at the intersection of security, technology, and people.” Their goal is to spread a message on the need for more involvement from technology and security professionals in charitable work, as well as more influence on policy development.

Policy Development, Not Just Product Development

Schneier cites stalkerware as an example of this need. Currently, product design in a vacuum does not consider broader implications that can ultimately lead to harm.

“If your software developers are all white men, you might not get a product that reflects the rest of the population,” he says. “It goes very deep. They are just building tech toys, not systems with social implications.”

As Schneier pointed out in a recent essay, technologists and policymakers largely inhabit two separate worlds, and bridging that gap is essential for the future as almost everything is now based on technology in some way.

“You can no longer separate technology from policy,” says Schneier. “You can no longer work on food security or climate change without understanding technology. You get the technology wrong, you get the policy wrong.”

And the stakes are high if technologists and security professionals fail to involve themselves at the policy level, argues Schneier. Take artificial intelligence (AI): AI has the potential to offer productivity gains to organizations, yet it can also, as he wrote, “entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable.” It is an example of a technology that requires development with an intricate understanding of “both the policy tools available to modern society and the technologies of AI,” according to Schneier.

But despite the rallying cry, Schneier also notes that actually putting technologists to work on policy and charitable causes is easier said than done.

Overcoming Obstacles 

When he speaks at public events about the issue, Schneier says he is often approached by attendees who want to do more to help, but there aren’t any clear paths forward. “They say ‘OK, you’ve convinced me.’ But I’ve got nothing for them,” says Schneier. “There aren’t enough [relevant] positions at federal agencies, at NGOs. That’s the immediate problem.”

Plus, when salaries may be substantially higher at an enterprise or technology company, trained technology professionals might be less likely to take a full-time position at a non-profit.

But in other fields, such as law, Schneier notes there is an active movement to encourage practitioners to spend a block of time on volunteering or sabbatical. This is not yet a common part of computer science careers, and that needs to change, he says.

“Find a cause you believe in and a group that does it and get involved,” he says. “You can take a sabbatical. Or teach. But you have to find your own path.”

Galperin suggests starting local.

“Everyone has a population that they care about,” she says. “It could be a school or church. Increasingly those groups are under threat or face risks they don’t understand. This is a great opportunity for trained professionals to come in and do what they do. To spread better security hygiene in a way that works for them.”


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.

Article source: https://www.darkreading.com/edge/theedge/a-cause-you-care-about-needs-your-cybersecurity-help/b/d-id/1336482?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Gamification Is Adding a Spoonful of Sugar to Security Training

Gamification is becoming popular as companies look for new ways to keep employees from being their largest vulnerability.

(Image: kitthanes via Adobe Stock)

In 1964 the world learned that a spoonful of sugar helps the medicine go down. It wasn’t the first time a key principle of gamification was said out loud, but it might well be the catchiest.

In 2019 tidying up changed hands from Mary Poppins to Marie Kondo, but the idea that making a task enjoyable makes it more likely to be done has been embraced by the business world — and cybersecurity training.

Merriam-Webster defines gamification as “the process of adding games or gamelike elements to something (such as a task) so as to encourage participation.” And for many responsible for turning new hires from security vulnerabilities into security assets, it’s a key strategy in keeping them focused on their training.

“There are numerous studies that show that gamification not only increases engagement, but it increases learning retention,” says Hewlett Packard Enterprise (HPE) cybersecurity awareness manager Laurel Chesky. She says HPE has increased the degree to which it uses gamification in cybersecurity training because it has seen positive results with the technique.

Within HPE, Chesky says, there is mandatory basic cybersecurity training, but much more training is available on an optional basis. “We want them to come and engage with us and consume the common-sense information,” she says. “If we aren’t doing that in a fun and engaging way, they simply won’t come back to us. So we have to do that through gamification.”

How to Keep the Fun Factor Up
Moving training to a gamified basis can be effective, but, like anything, it can become rote and routine if done poorly, some say. “Gamification is great, but you need variety,” says Colin Bastable, CEO of Lucy Security. “Variety is the spice of life. So I think that gamification is very valuable as part of a broader strategy.”

HPE’s training metrics reflect that, Chesky says. “We started off in a very grassroots, DIY-type of gaming, with a Web-based trivia game that we created,” she explains. “It’s very simple. It’s set up like Jeopardy, and we can go online and pick a question for 200, 400, 800, or 1,000 points. It’s very, very simple to create, and we did it in-house.”

Joanne O’Connor, HPE cybersecurity training manager, created a different game called “Phish or No Phish” that uses the Yammer collaboration system as a platform. She will post an image on a channel and ask participants whether it’s from a phishing email intercepted by the company’s cybersecurity team. Employees who provide the correct answer win recognition points exchangeable for various prizes.
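The mechanics O’Connor describes — show an artifact, collect a yes/no guess, award recognition points for correct answers — can be sketched in a few lines. Everything below (the question bank, the point value, the function name) is a hypothetical illustration, not code from HPE’s Yammer-based game:

```python
# Minimal sketch of a "Phish or No Phish"-style quiz. The question bank
# pairs a description of a posted screenshot with whether it was a phish;
# both entries here are invented for illustration.
QUESTIONS = [
    ("Email from 'Goolge' asking you to reset your password", True),
    ("Calendar invite from your own team's shared mailbox", False),
]

POINTS_PER_CORRECT = 200  # hypothetical scoring tier


def score_answers(guesses):
    """Return total recognition points for a list of True/False guesses,
    one guess per question, in order."""
    total = 0
    for (_, is_phish), guess in zip(QUESTIONS, guesses):
        if guess == is_phish:
            total += POINTS_PER_CORRECT
    return total
```

A correct run of both questions (`score_answers([True, False])`) would earn the full 400 points; in the real game, O’Connor says, points are exchangeable for prizes.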

These games address the kind of training Lucy Security’s Bastable believes is most suitable for gamification. “I would say that it works better for the short, sharp, pointed awareness training as opposed to a long and detailed course,” he says. “Generally, I would say that what you want to do is create an environment that engages rapidly and that engages people where another format might not.”

Many of HPE’s games are designed to be completed within about 20 minutes — experiences that allow the employee to engage deeply to learn a single facet of cybersecurity, O’Connor says.

The Science of Fun
Some academic research, like that of Michael Sailer, Jan Ulrich Hense, Sarah Katharina Mayr, and Heinz Mandl, explores the reasons gamification can be effective in training. They point to self-determination theory, which states that three psychological needs must be met: the need for competence, the need for autonomy, and the need for social relatedness.

In their research, they found “…the effect of game design elements on psychological need satisfaction seems also to depend on the aesthetics and quality of the design implementations. In other words, the whole process of implementing gamification plays a crucial role.”

Bastable says there’s a common assumption that gamification is more effective for younger employees and less so for older workers. But the reality is it can be effective for all employees, though different individuals may respond to different types of game mechanics (the way the game looks and is played).

O’Connor agrees. “It’s something that we think about a lot with our new employees being, of course, younger folks, and we need to reach them. But, really, we think it reaches everybody,” she says.

Chesky believes the tide has turned toward gamification in all types of enterprise training. “I think you see it now in a lot of corporations on an industry level,” she says. “I think you’ve definitely seen most corporations and, of course, the industry moving toward that for all different kind of mandated company training because it works. It’s all about engagement.”

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Home Safe: 20 Cybersecurity Tips for Your Remote Workers.”

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/gamification-is-adding-a-spoonful-of-sugar-to-security-training/b/d-id/1336472?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple