STE WILLIAMS

Why Cybersecurity Needs a Human in the Loop

It’s no longer comparable to Kasparov versus Deep Blue. When security teams use AI, it’s like Kasparov consulting with Deep Blue before deciding on his next move.

A typical cybersecurity analyst is never short of work, a lot of which turns out to be futile. According to a 2015 Ponemon Institute study, the average security operations center spends around 20,000 hours a year chasing alerts that prove to be false alarms. Traditional security systems generate a lot of noise that has to be waded through, which creates even more work. At the same time, a vast pool of security information is published across multiple media in natural language that these systems can’t quickly process and leverage.

Cognitive security, or artificial intelligence, can “understand” natural language and is the logical next step for taking advantage of this increasingly massive corpus of intelligence. These solutions, which have recently come to market from a number of vendors including IBM Resilient, can be effective across all functions of cybersecurity, but perhaps nowhere more so than in the response phase, where the key metric is how quickly your team can mitigate the threat and get back to normal operations. Pairing humans with cognitive security solutions helps make sense of all this data with speed and precision, accomplishing response in a fraction of the time.

But using cognitive solutions is not about man vs. machine. To borrow from an earlier era of artificial intelligence, it’s not so much Kasparov vs. Deep Blue as it is Kasparov consulting with Deep Blue before deciding on his next move against an unknown opponent. Defense works best when people and machines work together.

There are three fundamental reasons why this is true, especially when responding to a cyber incident:

  1. Level playing field: Cyber attacks and the breaches they cause aren’t executed by technology; they’re the work of human beings. It therefore makes good business sense to level the playing field by putting real humans on the defending side as well. It’s even been referred to as “hand-to-hand combat.” This symbiosis between cognitive technology and human beings is crucial and will ensure your organization is best equipped to respond.
  2. Information curation: While cognitive solutions can process information in nanoseconds and make key suggestions, not all of that information is relevant. Systems need to accept input from the analyst to set the broader context of an incident. They also need to be able to describe and document their findings and remediation steps, and to rank the information, Spotify-style, separating what was relevant from the red herrings (a minimal sketch of that feedback loop follows this list). This all helps to inform the next suggested response.
  3. Risk of false positives: The cost of a cyber attack is well researched, but the cost of a false positive is more elusive. Consider a penetration test: an automated incident response system may see what looks like an attack on the database and shut it down. This kind of decision is a high-stakes scenario that needs a human in the loop.
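
The curation loop described in point 2 above is, at bottom, a relevance-feedback problem: the system surfaces intelligence, the analyst records what actually mattered, and those verdicts reweight what gets surfaced next time. The sketch below is purely illustrative and is not any vendor’s implementation; it assumes Python, and the source names and scoring scheme are hypothetical.

```python
from collections import defaultdict

# Illustrative relevance-feedback loop: every intelligence source starts
# with a neutral weight, and analyst verdicts nudge that weight up or down.
weights = defaultdict(lambda: 1.0)

def record_feedback(source: str, was_relevant: bool, step: float = 0.2) -> None:
    """Analyst marks a finding from `source` as relevant or a red herring."""
    weights[source] += step if was_relevant else -step
    weights[source] = max(0.1, weights[source])  # never silence a source entirely

def rank(findings: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (source, finding) pairs by the learned source weights."""
    return sorted(findings, key=lambda f: weights[f[0]], reverse=True)

record_feedback("external-blog-feed", was_relevant=False)
record_feedback("internal-ids", was_relevant=True)
print(rank([("external-blog-feed", "possible C2 domain"),
            ("internal-ids", "lateral movement alert")]))
```

Real cognitive platforms obviously do far more than this, but the principle is the same: analyst judgment is an input to the system, not an afterthought.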

AI-Assisted Incident Response and the Skills Shortage
Another key benefit: artificial intelligence will help address the talent management issue of “infosec burnout.” One analyst who documented how long it takes to fill open senior-level security positions theorizes that people bail early in their security careers after getting a taste of what the job is all about. Stress in this job is real, but it can be reduced if analysts work at a more strategic level by curating, not just reacting, and by consulting with a cognitive system that can share what others have done.

In the face of an increasingly hostile environment, keeping humans in the loop and backing them up with a data-rich cognitive system is what will give businesses their best shot.


John Bruce is a seasoned executive with a successful track record of building companies that deliver innovative customer solutions, particularly in security products and services. Previously chairman and CEO of Quickcomm, an Inc. 500 international company headquartered in New … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/why-cybersecurity-needs-a-human-in-the-loop/a/d-id/1329505?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DoJ Launches Framework for Vulnerability Disclosure Programs

The Department of Justice releases a set of guidelines to help businesses create vulnerability disclosure programs.

The US Department of Justice has released a framework to help businesses develop formal vulnerability disclosure programs. More businesses are adopting vulnerability disclosure programs to better detect security problems that could lead to data compromise and disruption.

Some businesses informally accept vulnerability reports with no structured process; others have formal programs with policies that dictate how they accept reports and share the information with those affected. These policies may also include authorized methods for finding flaws in a business’ systems, services, and products.

The framework, created by the Criminal Division’s Cybersecurity Unit, provides a process for designing and administering a program, as well as a set of considerations that could help inform vulnerability disclosure policies. It doesn’t specify the goals and structure for these programs as every business has different goals and priorities.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/doj-launches-framework-for-vulnerability-disclosure-programs/d/d-id/1329514?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

WannaCry Ransom Bitcoins Withdrawn from Online Wallets

More than $140,000 in digital currency paid by WannaCry victims has been removed from online wallets.

The owners of Bitcoin wallets associated with this year’s WannaCry attack have withdrawn their funds, the BBC reports. More than $140,000 in Bitcoin has been removed from the online wallets, leaving their collective balance at zero.

WannaCry victims were asked to pay between $300 and $600 in Bitcoin to restore their data when the ransomware outbreak hit earlier this year. Despite experts’ warnings that payment wouldn’t guarantee recovery, many victims decided to pay anyway. Part of the funds were withdrawn in late July; the rest were removed this week.

Experts don’t expect the collectors to try to turn the digital currency into real money, a move that could help investigators determine who is behind the attack. Many expect the Bitcoin will be put into a “mixer” and blended into a larger series of payments so it’s harder to track.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/wannacry-ransom-bitcoins-withdrawn-from-online-wallets/d/d-id/1329538?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Making Infosec Meetings More Inclusive

Diversity and inclusion experts explain how to avoid meeting pitfalls that silence the voices of underrepresented members of the team.

Meetings at the office can be a battlefield for women and minorities in the cybersecurity field. 

“Many women lack confidence, but you want your ideas to be heard by the team you are with,” says Telle Whitney, CEO and co-founder of the Anita Borg Institute. Programmers or team members want to be seen in their professional role, not as a woman programmer or a black programmer, she says.

“Every woman that we work with – that is their goal,” she says.

There are strategies for making meetings more inclusive so that all members of the team get to contribute:

Encourage all voices and ideas.

Research has shown that women are interrupted more than men in meetings and that their ideas are valued less by team members, but there are ways to reduce that behavior, inclusion experts say.

Managers and team leaders should set ground rules before a meeting, says Mary Chaney, vice president at the International Consortium of Minority Cybersecurity Professionals (ICMCP).

“As the leader, I set the tone for the meeting,” says Chaney, who previously ran the security operation for Johnson & Johnson. “My rules were no interrupting other people or talking over others. As a leader, you want to make sure all voices are heard. You want a complete view.”

In some situations, team members may try to dominate the conversation. A leader can step in, interrupt a dominant speaker, and say something along the lines of “I understand how you feel, but I want to see what other people think as well,” Chaney advises.

Co-workers can also play a role in providing space, or an entry point, for colleagues to participate in discussions, says Aubrey Blanche, global head of diversity and inclusion for software company Atlassian, which released a 2017 State of Diversity Report in March. When she was in college and served on the board of directors at a nonprofit, she faced a male-dominated board and was uncomfortable with its communication style. “It was nothing malicious on their part, but I did not feel comfortable,” Blanche recalls.

She approached a fellow board member with whom she had a good working relationship and asked if he could help her participate in meeting discussions. During the meetings, he would turn to Blanche and ask for her thoughts or opinion, she recalls. After being called on several times to solicit her opinions, Blanche says she eventually felt comfortable providing them unprompted in the meeting.

When women and minorities get interrupted while speaking, they should try some basic communications strategies, say inclusion experts. “Interrupters often don’t realize they are interrupting,” Blanche says. “You can say their name to get their attention and then let them know you would like to ‘finish up’ your thought.”

When in a meeting with peers or those in higher roles, Chaney will put her finger up in the air and say, “please let me finish my thought” when interrupted, she says. “I figure they took the same leadership training as I did and should know better, so I will call them on it.”

Quash idea-stealing.

Another common issue women and minorities face in meetings is having their ideas pilfered, say inclusion experts. When women present ideas in a meeting, they are generally more collaborative and humble, speak in general terms and a tentative manner, and phrase their ideas as questions, says Caroline Turner, principal with DifferenceWorks. As a result, the idea tends to lie there quietly until a male colleague picks it up, claims it as his own and receives kudos from co-workers, Turner says.

Men tend to communicate in a “hey, look at this” fashion, which is more apt to gain attention in a meeting, Turner explains.

Turner herself faced this issue when working as a high-level executive at a company. She mentioned the concern on a Friday to her CEO, who dismissed the notion, but the following Monday he witnessed it firsthand during a meeting. Turner presented an idea and it failed to generate an immediate response. Moments later, a co-worker picked up Turner’s idea and presented it as his own, she recalls. Turner shot a knowing look at the CEO, who appeared surprised and said to the co-worker, “I see you agree with Caroline’s idea,” she recalls.

Although the conversation in the meeting continued as though nothing was amiss, Turner says, “I felt empowered. I was validated. I was no longer invisible. The power of that endorsement made me feel on par with everyone in the meeting.”

Women in President Obama’s administration used an “amplification” strategy in which they would support each other by recognizing and commenting on any great ideas that fellow female co-workers presented in meetings, Blanche explains.

The amplification comments would go along the lines of “great point, and I would like to add to that,” Blanche says as an example.

Women and minorities should hold open discussions with managers and executives when their ideas are hijacked, Turner says, citing her own former CEO as an example.

“Once I pointed it out to him and he saw it happen, he will never forget it,” Turner says, noting the CEO went on to validate other women’s ideas when similar situations arose.

Inclusion experts also note women and minorities can advocate for themselves in a light-hearted way while in meetings by saying such things as, “I’m glad you liked my idea and wanted to build on it.”

Eradicate dismissive behavior and attitudes.

In Silicon Valley, there is a belief that the best people have a particular style of coding, and some men are dismissive of female colleagues who don’t employ the same coding style, says Whitney.

And in other cases, companies have fostered a confrontational culture, she says. “Intel used to have a culture of constructive confrontation, with the idea that by taking serious criticism a company could only get better,” Whitney says. “But Intel has since changed this culture to one where it wants it to be collaborative.”

Managers can tackle this head-on. “As a manager, when you see dismissive behavior you can take [the culprit] into a private conversation and tell them the behavior does not contribute to the job being done,” Whitney advises.

She adds that managers can also give women and minorities who are dismissed by their male counterparts an opportunity to demonstrate and discuss their work when the project they were working on is reviewed.

Seeking allies is one way to fend off a particularly rude co-worker, inclusion experts say. Any co-worker can approach an offending team member and recount the contributions made by a female or minority infosec member, Whitney says.

However, she notes, some co-workers who are insecure about the contributions they have made to the team may not be willing to stand up in another co-worker’s defense.

So women and minorities should seek out allies to fend off obnoxiously rude co-workers, Whitney advises. “One of the most successful strategies in this situation and others is to find allies,” she says. “This is one of the reasons it is so important to have more women in the workforce.”


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/making-infosec-meetings-more-inclusive/d/d-id/1329539?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Fired employee caught by keylogger wins case

In a lawsuit between a web developer and his former employer, a media agency, a judge has ruled that using keylogger spyware to monitor one’s employees is against the law. The specific identities of all of the involved parties are private for now.

There are keylogger devices that can be plugged in between a keyboard and a PC, but most keyloggers today are purely software and come with additional features, such as viewing a target’s monitor output and transmitting screenshots of it. That’s what the malware the employer used was found doing.

The web developer sued his former employer for wrongful dismissal. In April 2015, the employer had sent out a group email announcing that internet traffic and other work-computer use would be permanently logged and saved. The email didn’t explain how, but company policy forbade personal use of the company’s computers and networking equipment.

Shortly after the announcement, the former employee was accused of working on a computer game for another company. He was soon fired.

The former employee claims that he was doing work for his father’s company, but only during his breaks, for only ten minutes per day.

This case highlights that spyware isn’t the preserve of foreign militaries and script kiddies; it can also come from people you interact with in person, such as jealous partners or employers.

Is using keyloggers and other forms of spyware on employees still legal in the United States? The spyware industry certainly hopes so.

Controversial spyware developer Flexispy describes a “legislative gap” that “does not reach Keylogger technology”. In a nutshell, its advice to potential customers is that it is usually legal for employers to use keyloggers on their employees in the United States, but the regulatory specifics vary from state to state.

I’m Canadian, so what about Canada?

Spyware developer Gecko Monitor suggests that people are free to spy on others using its keyloggers “as long as the person who installed the keylogger program is the owner of the computer or device that the software is being installed on”.

Those companies operate legally but have a clear interest in making the legal path look as smooth as possible. If your conscience is OK with keylogging, you’d be well advised to seek independent legal advice before you go ahead.

Just because it’s legal, does that make it ethical? Tell us what you think in the comments below.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tgy7RMjePbY/

What do the words ‘Tor’ and ‘Dark Web’ mean to you? [VIDEO]

Enjoy our latest Facebook Live video featuring Sophos experts James Burchell and Greg Iddon.

In this episode, our dynamic duo reveal how well they did on our #SysAdminDay “Are you a sysadmin?” quiz, before tackling the controversial issue of Tor.

Tor, of course, is short for The Onion Router, software that’s supposed to keep you safer and more anonymous online, but that’s also used to access the infamous Dark Web, a popular haven for cybercrooks, ransomware collectors, peddlers of illegal drugs, and more.

Does Tor really make you safer, or pick you out as a target for surveillance? Are there legitimate reasons to use it? Even at work? How does Tor compare to a VPN? And if you aren’t ready to go all the way to Tor, what simple steps can you take to protect your privacy online?


If you’d like to hear more from James and Greg (or from other Sophos experts, for that matter) in the relaxed format of Facebook Live…

…please let us know in the comments below!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3LH5I-AY9Os/

Can US senators secure the Internet of Things?

For once, this isn’t an Internet of Things (IoT) story about an egregious security blunder in a webcam, or a printer, or a light bulb, or a talking doll, or a home router.

Quite the opposite, in fact.

It’s a story about a bill introduced in the US Congress: the Internet of Things (IoT) Cybersecurity Improvement Act of 2017.

In an intriguing choice of words, the bill aims to specify what the regulators are calling “minimal cybersecurity operational standards” for IoT devices.

We’re not sure if American English uses the words minimal standards where British English would prefer minimum standards (meaning the standards below which you may not go, even if those standards are quite high)…

…or if the US legislators are quite literally admitting that we are living in such an insecure IoT world that mandating even the most modest security standards would be an effective start.

We suspect that both these meanings apply.

We need minimum standards (i.e. ones that everyone is required to meet), but we might as well start with a minimal minimum (i.e. one that, although unimpressive, is unarguably achievable by everyone).

This is an interesting contrast to our law-makers’ story from yesterday in which we reported that UK Home Secretary Amber Rudd wanted to attack encryption in the other direction.

Rudd as good as said that she wants the UK to legislate for minimal maximum standards for cryptographic products (i.e. to weaken them on purpose).

Rudd argued that “real people” don’t care much about security, so it would be acceptable to regulate it away in order to fight terrorism and hate crime.

The US IoT Improvement Act, fortunately, as good as states that whether “real people” care about security or not, the vendors who sell them internet devices jolly well ought to care on their behalf.

Among the proposals in the US bill:

  • Fix firmware vulnerabilities in a reasonable time.
  • Provide a mechanism for authenticated firmware upgrades, so that fixes can actually be deployed (a sketch of such a verification check follows this list).
  • Or, if the firmware can’t be updated, send your customers a replacement device with the new firmware burned in.
  • Don’t use hardcoded passwords or credentials that can’t be changed.
  • Stick to trusted and approved encryption – no outdated or home-made algorithms.
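
The authenticated-upgrade requirement is the one that implies real cryptographic plumbing in the device: ship the vendor’s public key with the hardware and refuse any firmware image whose signature fails to verify against it. The bill doesn’t prescribe a mechanism, so the following is only a sketch of the general idea, assuming Python, the third-party cryptography package, and an RSA vendor key; the file names are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def firmware_is_authentic(image: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Return True only if `signature` over `image` verifies against the
    vendor public key that was burned into the device at manufacture."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Hypothetical update flow: reject the image before it ever touches flash.
with open("vendor_pubkey.pem", "rb") as f:
    pubkey = f.read()
with open("update.bin", "rb") as f:
    image = f.read()
with open("update.sig", "rb") as f:
    sig = f.read()

if firmware_is_authentic(image, sig, pubkey):
    print("Signature OK, applying update")
else:
    print("Rejecting unsigned or tampered firmware")
```

On a real device this check would live in the bootloader or update agent rather than in a script, but the refusal logic is the point: unsigned or tampered firmware never gets applied.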

Will it work?

Even if you are generally an opponent of government intervention in IT and the internet, on the grounds that the more you meddle, the muddier it all gets, and therefore the less innovation there will be…

…it’s hard to oppose a minimal minimum law of this sort.

After all, we already have billions of IoT devices in use and on sale, and security seems to take second place, tenth place, or even no place at all in many of them.

Sure, vendors with strong technical ability and decent business ethics are already at or above these proposed minimal minima, but an awful lot of vendors aren’t, and don’t have any incentive to change their approach.

If you want to stop a race to the bottom, a good way is to make the ocean shallower, and to put a bunch of spikes on the sea bed to prevent laggards from settling there in comfort.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sha_dH-svgI/

WannaCrypt victims paid out over $140k in Bitcoin to get files unscrambled

More than $140,000 (£105,000) in Bitcoin has been paid out by victims of the global WannaCrypt ransomware outbreak from May.

The money was removed from the online wallets at 4am UTC on Thursday.

The Bitcoin activity was noticed by a Twitter bot set up by Quartz journalist Keith Collins.

The attack swept across at least 74 countries, and the UK’s NHS was forced to turn away patients as a result of the lockdown. FedEx, rail stations, universities and a Spanish telco were also clobbered by the attack. Victims were asked to cough up between $300 and $600 to have documents restored.

NHS Digital stopped short of advising health organisations in England not to pay the ransom because it couldn’t be certain that all hospitals had backed up patient records.

Quartz speculated that the WannaCrypt bitcoins will be put through a “mixer”, with the currency transferred and mixed into a larger series of payments to obscure where it ends up.

“The general consensus among security experts and government agencies is that North Korea was behind the WannaCrypt attack, and that the operation was more political than money-driven,” it reported. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/03/140k_paid_out_in_bitcoin_by_wannacrypt_victims/

Wait? What? The IBM cloud’s APIs use insecure TLS1 crypto?

An e-mail has gone out from IBM about its Bluemix cloud: after next Tuesday, the SoftLayer APIs will no longer accept connections encrypted with the ancient TLS 1.0.

It’s not quite a surprise that the 1990s-era protocol was still accepted: a great many services are still midway through their deprecation plans.

To give just one example, Salesforce began its phase-out of TLS 1.0 in production instances on July 22, 2017.

And the PCI Council, which had originally wanted TLS 1.0 gone last year, had to extend its deprecation date to 30 June 2018 (and it’s still blogging early warnings for members, in case they’re still failing to catch up).

In the Bluemix e-mail, IBM notes: “There should be no impact to customers using a modern web client. This notification is intended to be informative only.”

The two services affected by the deprecation are api.softlayer.com and api.service.softlayer.com – so there’s another community that’s got to pay attention, namely developers who wrote to the APIs and used TLS 1.0 to secure their API access.

TLS 1.0 has long been known to be insecure; it was bitten by the BEAST exploit as far back as 2011. ®
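
For developers in that second camp, one quick sanity check is to force the client side to TLS 1.2 or better and confirm the connection still succeeds. Here is a minimal sketch assuming a Python client and the standard library’s ssl module; the hostname is the one named in IBM’s notice, and the same approach works for any endpoint.

```python
import socket
import ssl

# Refuse TLS 1.0/1.1 on the client side, then report what the server
# actually negotiates; a handshake failure here means your stack is the
# one still speaking the old protocol.
host = "api.softlayer.com"  # endpoint named in IBM's notice
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(f"Connected to {host} using {tls.version()}")
```

If the handshake fails once TLS 1.0 and 1.1 are refused, it’s the client stack that needs updating before the cut-off, not the server.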


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/03/wait_what_the_ibm_clouds_apis_use_tsl1/

This typosquatting attack on npm went undetected for 2 weeks

A two-week-old campaign to steal developers’ credentials using malicious code distributed through npm, the Node.js package management registry, has been halted with the removal of 39 malicious npm packages.

Developers regularly add these bundles of JavaScript code to Node.js applications to implement common functions, so they don’t have to write the code themselves.

In a blog post published on Wednesday, CJ Silverio, CTO at npm, said that between July 19 and July 31, an account named hacktask conducted a typosquatting attack by publishing a series of packages with names that are similar to popular existing npm packages.

“In the past, it’s been mostly accidental,” Silverio said. “In a few cases we’ve seen deliberate typo-squatting by authors of libraries that compete with existing packages. This time, the package naming was both deliberate and malicious – the intent was to collect useful data from tricked users.”

Oscar Bolmsten, a developer based in Sweden, spotted the malicious code in a package named crossenv, designed to dupe people searching for cross-env, a popular script for setting environmental variables.

“Because environment variables are such a common way to hand credentials to software, it’s a pretty good thing to go after,” said Silverio in a phone interview with The Register.

Environmental variables are used for, among other things, storing the account names, passwords, tokens, and keys that provide access to applications, cloud services, and APIs.

In this case, the malicious code attempts to copy any environmental variables set on the victim’s machine and to transmit them to an attacker-controlled server at npm.hacktask.net.

The JSON configuration file used by crossenv runs a script named package-setup.js that converts existing environmental variables into a string and then sends the data via POST request.

According to Silverio, “about 40” packages submitted by hacktask have been removed from npm. Lift Security, she said, scanned every npm package for the abusive package setup code but found no other instances of it.

Silverio expressed doubt that the attack worked very well. “Typo-squatting turns out not to be a very effective way to get malware into the registries,” she said, noting that people tend to rely on searches or copying and pasting published code.

Among the 39 packages that npm has linked to hacktask, most had about 40 downloads each since mid-July, excluding the surge in curiosity-driven downloads once word of the malware got out. The malicious crossenv package had the most downloads, at 700. But most of these are believed to be automated downloads triggered by npm mirror servers.

Silverio estimates that only about 50 people downloaded the bad crossenv package during the exposure period. She said she’s not aware of any developers who have reported account compromises as a result of this incident.

A search of GitHub repos turns up a handful of references to the malicious crossenv package.

The hacktask account has been banned but the person or persons behind the account have not been publicly identified.

Asked whether npm has put measures in place to prevent someone else from conducting a similar attack under a different account name, Silverio acknowledged that the attack probably wouldn’t be caught immediately.

“Even if we can’t catch it all at the moment of publication, we have a system that works pretty well,” she said, praising the vigilance of the npm community.

Nonetheless, in her blog post, Silverio said npm is looking into ways to identify name similarities in packages, in order to prevent future typo-squatting. The company is also working with security firm Smyte to detect spam published to the registry – apparently, spammers publish packages hoping the README files will be indexed by search engines, in order to boost website search rankings.
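
That kind of name-similarity check is easy to prototype on the consuming side, too: before adding a dependency, compare its name against packages you already trust and flag anything that is close but not identical. The sketch below is a rough illustration, assuming Python; the allow-list is a hypothetical stand-in for whatever set of known-good names you maintain.

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; in practice this would be a much larger set of
# well-known package names, or your organisation's approved dependencies.
POPULAR = {"cross-env", "lodash", "express", "react", "babel-cli"}

def lookalikes(candidate: str, threshold: float = 0.85) -> list[str]:
    """Return popular package names that `candidate` closely resembles
    without matching exactly: a crude typosquatting signal."""
    if candidate in POPULAR:
        return []  # exact match: it's the real package
    return [name for name in POPULAR
            if SequenceMatcher(None, candidate, name).ratio() >= threshold]

print(lookalikes("crossenv"))   # ['cross-env'], the package from this attack
print(lookalikes("cross-env"))  # [], exact match, nothing to flag
```

A similarity threshold this crude will throw false positives on genuinely similar names, but as the crossenv incident shows, a near-miss on a popular package name is exactly the signal worth pausing on.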

In a 2016 presentation at Kiwicon about Node.js security, developer Jeff Andrews posed this question to himself: “I use Node.js/npm. How do I stay safe?” His answer: “You can’t.” ®

PS: Nikolai Tschacher wrote a thesis on typosquatting and programming language package managers in 2016.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/02/typosquatting_npm/