STE WILLIAMS

VTech to Pay $650,000 in FTC Settlement

VTech’s Kid Connect app and its Planet VTech platform collected personal information on 760,000 children without parental permission, the FTC alleges.

VTech Electronics agreed to a $650,000 settlement payment and sanctions by the Federal Trade Commission (FTC) to resolve charges it violated the Children’s Online Privacy Protection Act (COPPA), the FTC announced today.

Under COPPA, companies are required to directly notify parents and obtain verifiable parental consent when collecting personal information on their children, and to take reasonable measures to safeguard that information. The FTC alleges VTech’s Kid Connect app, which had 638,000 children’s accounts, and its now-defunct Planet VTech gaming and chat platform, which had 130,000 children’s accounts, violated COPPA requirements.

The FTC also alleges VTech made false claims in its privacy policy by stating it would encrypt information submitted to Planet VTech and also Learning Lodge, which houses the Kid Connect app. However, VTech did not encrypt any of the information, the FTC said.

The FTC settlement agreement also requires VTech to implement a comprehensive data security program, which will undergo independent audits for the next 20 years. As part of the agreement, VTech is also permanently prohibited from violating COPPA and from misrepresenting its security and privacy practices.

Read more about VTech’s settlement here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/threat-intelligence/vtech-to-pay-$650000-in-ftc-settlement-/d/d-id/1330770?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Cryptocurrency Mining Malware Has Links to North Korea

A malware tool for stealthily installing software that mines the Monero virtual currency looks like the handiwork of North Korean threat actors, AlienVault says.

A security vendor has found another clue that North Korea may be turning to illegal cryptocurrency mining as a way to bring cash into the nation’s economy amid tightening international sanctions.

AlienVault on Monday said it had recently discovered malware that is designed to stealthily install a miner for Monero, a Bitcoin-like cryptocurrency, on end-user systems and to send any mined coins to the Kim Il Sung University (KSU) in Pyongyang.

The malicious installer appears to have been created just before Christmas 2017 and is designed to install xmrig, an open source miner for Monero.

The link to the university itself doesn’t appear to be working, however, meaning the software cannot send any mined coins back to its authors. The malware itself appears pretty basic, and the inclusion of the KSU server in the code could simply be a false flag to trick security researchers. Even so, the malware is consistent with previous similar campaigns tied to North Korea, AlienVault said.

“Cryptocurrencies could provide a financial lifeline to a country hit hard by sanctions,” the vendor said. “Therefore it’s not surprising that universities in North Korea have shown a clear interest in cryptocurrencies.”

A cryptocurrency mining tool like xmrig is basically designed to harness the processing power of a computer in order to verify transactions in a blockchain. Users who put their computers to work mining virtual currencies such as Bitcoin and Monero typically receive small monetary rewards for allowing their hardware to be used for the purpose.
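At its core, proof-of-work mining is a brute-force search for a nonce that makes a block hash meet a difficulty target. The toy sketch below uses plain SHA-256 purely for illustration; Monero’s actual mining uses the memory-hard CryptoNight algorithm (which is exactly why it runs well on ordinary CPUs), and real miners like xmrig are far more sophisticated:

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    """Search for a nonce whose hash has `difficulty` leading zero hex digits.
    A toy stand-in for proof-of-work; real Monero mining uses CryptoNight,
    not plain SHA-256."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example-block", difficulty=2)
print(nonce)
```

Raising the difficulty makes the search exponentially harder, which is why attackers prefer stealing many ordinary CPUs to buying hardware of their own.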

Crypto mining is a legitimate activity. Some companies, like Coinhive, even distribute miners to website operators so that visitors can run them in their browsers in exchange for an ad-free experience. In recent years, though, cybercriminals have increasingly begun hijacking computers to mine cryptocurrency for illegal profit.

In a report last September, IBM said that between January and July 2017 it had seen a six-fold increase in CPU mining attacks against its customers involving malware that installs virtual currency mining tools. The tools typically were embedded in fake image files hosted on compromised servers running WordPress or Joomla. Most of the attacks IBM analyzed targeted virtual currencies such as Monero, whose CryptoNight algorithm can run on ordinary PCs and servers, unlike the specialized hardware required for Bitcoin mining.

Last September, Kaspersky Lab reported finding two relatively large botnets composed of computers infected with malware that installs legitimate cryptocurrency miners. The security vendor estimated that a 4,000-computer botnet used for cryptocurrency mining was netting its operators up to $30,000 a month, while a bigger 5,000-computer botnet was garnering some $200,000 a month.
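Some back-of-envelope arithmetic on Kaspersky’s figures shows why miner botnets scale so attractively for their operators:

```python
def monthly_revenue_per_bot(bots: int, monthly_total: float) -> float:
    """Average revenue each infected machine contributes per month."""
    return monthly_total / bots

# Kaspersky Lab's reported figures for the two botnets
print(monthly_revenue_per_bot(4_000, 30_000))    # 7.5 dollars per bot
print(monthly_revenue_per_bot(5_000, 200_000))   # 40.0 dollars per bot
```

Each additional infection costs the operator essentially nothing, so even a few dollars per machine per month adds up quickly at botnet scale.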

“As the price of crypto-currencies increase, so do the incentives to infect people with mining malware,” says Chris Doman, security researcher at AlienVault. “Monero is becoming a popular choice as it is both more anonymous and more profitable to mine with malware.”

Security researchers have found plenty of clues in recent months suggesting that North Korea-linked threat actors like the Lazarus Group are actively engaged in cryptocurrency mining. Earlier this month, Bloomberg reported an incident in which a North Korean threat group called Andariel hijacked a server belonging to a South Korean organization and used it to mine about 70 Monero coins.

The Lazarus group has been caught doing Monero mining on compromised networks and attacking Bitcoin exchanges, Doman says. There have also been several public reports of North Korean universities looking into mining cryptocurrencies, Doman says. So while it is hard to say with complete certainty if the malware that AlienVault discovered is the work of North Korean actors, chances are high it is, he notes.

“The main takeaway for me is that this fits into the larger picture of North Korea and cryptocurrencies.”

Related Content:

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/new-cryptocurrency-mining-malware-has-links-to-north-korea/d/d-id/1330773?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ex-NSA hacker builds AI tool to hunt hate groups’ symbols online

Emily Crose, ex-hacker for the National Security Agency (NSA), ex-Reddit moderator and current network threat hunter at a cybersecurity startup, wanted to be in Charlottesville, Virginia, to join in the protest against white supremacists in August.

Three people died in that protest. One of Crose’s friends was attacked and hurt by a neo-Nazi.

As Motherboard’s Lorenzo Franceschi-Bicchierai tells it, Crose was horrified by the violence of the event. But she was also inspired by her friend’s courage.

Her response has been to create and train an Artificial Intelligence (AI) tool to unmask hate groups online, be they on Twitter, Reddit, or Facebook, by using object recognition to automatically spot the symbols used by white nationalists.

The images her tool automatically seeks out are so-called dog whistles, be they the Black Sun (also known as the “Schwarze Sonne,” an image based on an ancient sun-wheel artifact created by pagan Germanic and Norse tribes, later adopted by the Nazi SS and since incorporated into neo-Nazi logos) or alt-right doctored Pepe the Frog memes.

Crose dubbed the AI tool NEMESIS. She says the name is that of the Greek goddess of retribution against those who succumb to arrogance against the gods:

Take that to mean whatever you will, but you have to admit that it sounds pretty cool.

Crose says it’s just a proof of concept at this point …

… and has agreed with detractors who say that the technology is “riddled with surveillance and privacy issues.”

She posted a clip to Twitter showing NEMESIS in action, picking out the Black Sun and other symbols carried by white supremacist protesters.

Crose said that from the beginning, the tool has been designed to identify symbols, not to identify faces. It would of course be easy for her to create a convolutional neural network (CNN) for facial recognition that could associate symbolism with faces, she said, but “that’s not my goal.”
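The building block behind that kind of symbol spotting is convolution: sliding a small filter over an image and measuring how strongly each region responds. NEMESIS’s internals haven’t been published, so this is only a minimal, dependency-free sketch of the idea, with a hand-made “X” template standing in for a learned kernel:

```python
def cross_correlate(image, kernel):
    """Slide `kernel` over `image` and record the response at each offset --
    the core operation a convolutional layer applies with learned kernels."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A crude "X" template responds most strongly where a matching pattern sits
image = [
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
]
template = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
response = cross_correlate(image, template)
print(response)  # -> [[5, 0], [0, 2]]: peak at the top-left, where the "X" is
```

A real CNN stacks many such filters, learned from thousands of labeled examples, which is why Crose notes the training burden below.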

She pointed to how Google is using CNNs to navigate automated cars.

Should we trust CNNs with people’s personal privacy? Not if they’re in the wrong hands, she said: just go ask the Electronic Frontier Foundation (EFF) about the issues that arise.

In September, when it addressed the House of Lords Select Committee on Artificial Intelligence, the EFF brought up issues of bias that can arise from the use of AI, be it CNNs or other deep-learning techniques.

Such systems must be auditable, if not transparent, the EFF said, giving these examples:

  • AI systems used for government purposes (e.g., to advise judicial decisions, to help decide what public benefits people do or do not receive, and especially any AI systems used for law enforcement purposes).
  • AI systems used by companies to decide which individuals to do business with and how much to charge them (e.g., systems that assign credit scores or other financial risk scores or financial profiles to people, systems that advise insurance companies about the risk associated with a potential customer, and systems that adjust pricing on a per-customer basis based on the traits or behavior of that customer).
  • AI systems used by companies to analyze potential employees.
  • AI systems used by large corporations to decide what information to display to users (e.g., search engines, AI systems used to decide what news articles or other items of interest to show someone online – if they make those decisions based on individual user characteristics – and AI systems used to decide what online ads to show someone).

NEMESIS is clearly generating a lot of controversy – controversy that Crose apparently welcomes, given that it “tells me I’m doing something right.”

But as she told Motherboard, NEMESIS hasn’t evolved into an autonomous, privacy-invading AI, by any means. In fact, it’s kind of dumb at this point, she said: humans are still in the loop. It requires human intervention to curate the pictures of the symbols in an inference graph and to make sure they’re being used in a white supremacist context, rather than inadvertently flagging users who post Hindu swastikas, for example.

In other words, NEMESIS still needs to be taught context, Crose said:

It takes thousands and thousands of images to get it to work just right.


Image courtesy of Emily Crose / Twitter

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RAzxf4bp4IQ/

Star Wars: The Last Jedi – the security review

Last week I went to “go see a Star War,” and the Naked Security team asked me to write about it…

Trekkie though I am, I’ll try to put my franchise allegiance to one side for this piece and take an objective look at the security angles in Star Wars: The Last Jedi. And yes, there actually is something to discuss here. It’s all at a very generalized level of course – I don’t think we’ll ever see the day when we’ll watch Kylo Ren loading up Kali Linux – so take this with many grains of salt.

Akin to my Mr. Robot reviews, I’m not going to review the whole movie, just the security bits (you’re on NakedSecurity, after all) – and yes, there will be spoilers!

WARNING: SPOILERS AHEAD – SCROLL DOWN TO READ ON

 

 

Opening scene red teaming (just never mind how it ends)

When I sat down to watch this movie, I wasn’t sure if there’d be anything for me to write about. Security? In Star Wars? Finding an apparently-put-there-on-purpose vulnerability in the giant Death Star and exploiting it with lasers, okay sure. But that’s been done… a long time ago in a galaxy far far away. (Sorry.) Thankfully the very first scene of The Last Jedi is, in a weird way, such a great advertisement for red teaming that I fully expect to see it included in future job descriptions. Never mind that it has sad, catastrophic consequences! That’s a big thing to disregard, I know, but bear with me.

We have Poe, being a hotshot, distracting the First Order by being as conspicuous as possible while trying to do something much more underhanded. Reminiscent of every VoIP conference call ever, the communication line cuts out and nobody knows what anyone’s actually saying. What’s the harm, right? He’s just one guy, after all. Of course, the bad guys eventually realize they’ve been had and that they should have shot Poe down five minutes ago. Hijinks and plot developments ensue.

This whole scene reminded me so much of the war stories I’ve heard exchanged by pen testers over the years. These are red team professionals that are hired by companies to expose their weaknesses, and we’re not just talking software. They use a vast arsenal of social engineering methods to gain entry into offices or get employees to compromise their company’s security, sometimes by pretending to be someone they’re not, sometimes by creating confusion and taking advantage of the chaos. Usually by the time a pen tester is discovered, they’ve already got the information or hit the target they needed to complete their engagement.

Of course, the massive difference between what we see in the movie’s opening scene and what pen testers do professionally is that pen testers are hired by the organization they’re infiltrating so the company can find out where their weaknesses are and work to address them. The First Order did not hire Poe… as far as we know anyway – now that would be a massive plot twist. But from the outside looking in, when a pen tester is trying to infiltrate their target and not trying to be particularly subtle about it, the interaction might look just a little bit like this scene.

Infosec didn’t invent this kind of thing, of course; subterfuge has been going on as long as there have been spies and soldiers, which is to say, since forever. Never mind that the end result of this particular “engagement” is disastrous for the Resistance, and not so great for Poe either, really – but you can’t win them all. Still, taken out of context, if I were looking for a quick and easy allegory to show what it looks like when a pen tester is at work, this wouldn’t be a terrible clip to call on.

DJ, the Greyhat

At one point in the film, there’s a whole side plot introduced about the need to crack the encryption of something-or-other, requiring the services of one master codebreaker named DJ. The encryption of the something-or-other doesn’t really matter here (arguably it’s a completely unnecessary plotline anyway), but DJ is worth a mention.

Firstly, let’s take a look at how codebreaker DJ is introduced. We find out he likes to gamble at casinos, and can be frequently found in, and I quote here: “A terrible place filled with the worst people in the galaxy.” Basically, it’s space-Vegas. If they had shown DJ at a hacker conference and not merely in a casino, I’d be writing about Defcon Star Wars. (Rose nailed it when she said “I wish I could put my fist through this whole lousy beautiful town,” I think she speaks for many of us who make the trek to Vegas every year for “hacker summer camp.”)

As we get to know DJ through his actions, we see that he’s amazingly resourceful – of course he knows how to lockpick! – and he knows how to use seemingly innocent things for unusual purposes, like using Rose’s pendant as a conductor.

Like any good hacker, he has a considerable skillset that can be used for good or evil, and DJ has no qualms about working for either “side” depending on who’s paying. In modern parlance, you could call DJ a greyhat: he’s up for working with “good guys,” but in the end his motivation is cash, not some moral high ground. (This does become a bit of a semantic argument about how you define blackhat hacking: you could certainly argue that if you’re not explicitly working for “good,” there’s no grey there and you’re a blackhat. But only Siths deal in absolutes, right?)

When DJ, Rose and Finn get caught by the First Order, DJ doesn’t hesitate to cut a deal in return for clemency – not unlike criminal hackers who get caught by law enforcement and then make a career out of educating the feds. This phenomenon happens enough in the computer security world that there are even memes about it:

Computer Security Career Paths (from r/hacking)

I’m just glad we didn’t see DJ in a black hoodie, otherwise I’d be getting Mr. Robot flashbacks mixed up in my Star Wars and I’m a confused enough Trekkie as it is.

One of the best lines in the movie, predictably, came from Yoda: “The greatest teacher, failure is.” I’ll be damned if I don’t see that on a slide deck at a conference within the next year.

What did you think? Did The Last Jedi live up to the hype for you? And are there any other security angles I may have missed? Let me know in the comments below.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JdD1p-pSiyA/

Facebook needs fixing, says Mark Zuckerberg

Mark Zuckerberg, the wizard who pulls the levers behind the Facebook curtain, has set himself a doozy of a challenge for 2018: to fix Facebook.

The most pressing problems, he said in a post on Thursday, are protecting the Facebook community from abuse and hate, stopping nation states from using Facebook like a hacky-sack in other countries’ elections, and making sure that all of us dopamine-addicted users spend our time on the platform productively (instead of turning into passive, miserable, Facebook-fixated couch potatoes).

The Facebook CEO has done these personal challenges since 2009, when he decided to dress like a grown-up and wear a tie every day:

That first year the economy was in a deep recession and Facebook was not yet profitable. We needed to get serious about making sure Facebook had a sustainable business model. It was a serious year, and I wore a tie every day as a reminder.

His list after 2009:

  • 2010: Learn Mandarin
  • 2011: Only eat meat he had killed himself
  • 2013: Meet one person a day outside Facebook
  • 2015: Read a book every other week
  • 2016: Build a simple AI to run his home

He says the current moment feels dire in much the same way as that first wear-a-tie year, when Facebook was unprofitable and the economy mired in recession:

The world feels anxious and divided, and Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent.

My personal challenge for 2018 is to focus on fixing these important issues. We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory.

Commenters on his post include people who found Zuckerberg’s goal admirable, as well as those who grumbled about the downsides of the platform.

Facebook itself recently confronted the existential question of whether social media can be bad for us. Last month, Facebook publicly recognized some of its platform’s detrimental effects but suggested the cure is to engage with the platform more: more messages, more comments and more posts. The idea, it said at the time, was to actively engage with friends, relatives, classmates and colleagues, rather than passively consuming content.

A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness. The positive effects were even stronger when people talked with their close friends online. Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network.

Research has shown positives coming out of social media, including the self-affirmation that comes from reminiscing on past meaningful interactions – for example, seeing photos users have been tagged in and comments left by friends. But social media also has a darker side: social media-enabled trolling that can lead to problems as severe as suicide.

Many studies have looked at this dark side of Facebook. Five themes emerged from one such study: managing inappropriate or annoying content, being tethered to Facebook, a perceived lack of privacy and control, social comparison and jealousy, and relationship tension.

Facebook’s acknowledgement that social media can be bad for us came after months of soul-searching, and a good deal of regret, from the very people who built Facebook. For example, former Facebook vice-president of user growth Chamath Palihapitiya last month gave a scathing speech about the corporation, saying that he regrets his part in building tools that destroy “the social fabric of how society works.”

The month before, Facebook ex-president Sean Parker admitted that Facebook creators were from the start well aware that they were exploiting a “vulnerability in human psychology” to get people addicted to the “little dopamine hit” when someone likes or comments on your page.

Other ex-Facebookers who’ve lately stepped back to question the repercussions of what they’ve created include Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman, who have both implemented measures to curb their social media dependence.

It’s easy enough to point out where Facebook needs fixing. It’s tougher to come up with ways to fix the vast problems Zuckerberg has outlined – something he noted himself in his post:

These issues touch on questions of history, civics, political philosophy, media, government, and of course technology. I’m looking forward to bringing groups of experts together to discuss and help work through these topics.

He pointed to one example: the centralization of power in technology, the opposite of what many set out to achieve when building the internet we now have:

A lot of us got into technology because we believe it can be a decentralizing force that puts more power in people’s hands. (The first four words of Facebook’s mission have always been “give people the power”.) Back in the 1990s and 2000s, most people believed technology would be a decentralizing force.

But today, many people have lost faith in that promise. With the rise of a small number of big tech companies – and governments using technology to watch their citizens – many people now believe technology only centralizes power rather than decentralizes it.

Pushing against such trends are the rise of encryption and cryptocurrency, Zuckerberg says. Such technologies take power away from centralized systems and “put it back into people’s hands,” he said.

But without regulation, those hands can prove to be buttery, he said. Cryptocurrencies can go up in a puff of smoke, for example, leaving little recourse to those with emptied wallets.

Zuckerberg said fine: let’s “go deeper” and figure out how to make these things work for us:

[Decentralized technologies] come with the risk of being harder to control. I’m interested to go deeper and study the positive and negative aspects of these technologies, and how best to use them in our services.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zPRl1n8pU8g/

First shots at South Korea could herald malware campaign of Olympic proportions

A malware campaign has been unleashed against organisations involved with next month’s Pyeongchang Winter Olympics.

An email[1] with a malicious Microsoft Word document attached was sent to a number of groups associated with the event, most of them ice hockey organisations.

“The attackers originally embedded an implant into the malicious document as a hypertext application (HTA) file, and then quickly moved to hide it in an image on a remote server and used obfuscated Visual Basic macros to launch the decoder script,” security firm McAfee reported. “They also wrote custom PowerShell code to decode the hidden image and reveal the implant.”
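McAfee didn’t publish the decoder itself, but hiding an implant “in an image” is commonly done by appending an encoded payload after the image’s terminating bytes, which viewers ignore. The Python sketch below illustrates that general technique only; the marker handling and fake file are illustrative, not taken from the actual Olympics sample:

```python
import base64

MARKER = b"IEND"  # PNG end-of-image chunk type; bytes after the chunk are ignored by viewers

def extract_appended_payload(image_bytes: bytes) -> bytes:
    """Return whatever trails the final image data -- a simple way malware
    hides an implant 'inside' an image without breaking how it renders."""
    idx = image_bytes.rfind(MARKER)
    if idx == -1:
        raise ValueError("no PNG IEND chunk found")
    # skip the 4-byte chunk type and the 4-byte CRC that follows it
    return image_bytes[idx + len(MARKER) + 4:]

# Demo: a fake "image" with a base64-encoded script appended after IEND + CRC
fake_png = b"\x89PNG...imagedata...IEND\x00\x00\x00\x00" + base64.b64encode(b"implant")
payload = base64.b64decode(extract_appended_payload(fake_png))
print(payload)  # b'implant'
```

In the real campaign the heavy lifting was done by obfuscated VBA macros and custom PowerShell, but the underlying trick of smuggling code inside a benign-looking image is the same.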

The attackers appear to be casting a wide net, with several South Korean organisations included in the spam run. The majority of these had some link to the Olympics, either by providing infrastructure or in a supporting role.

Global gatherings such as the Olympics – where world leaders, businesses and governmental organisations converge on one location – are naturally attractive targets for cyberspies. Travelling VIPs can be easier to target abroad, using a variety of techniques.

Threat intel firm Anomali warned that the malware incident is just a taste of what might be in store. South Korea is a frequent target of hacks, and North Korea, Russia and China might all look to exploit vulnerabilities while the world’s focus is on the nation.

Using hotel Wi-Fi to spy on executives and people of interest is a likely scenario. DarkHotel and the Russian APT28 have both reportedly engaged in such shenanigans and similar activity was associated with the Sochi Olympics in Russia four years ago.

Phishing lure techniques, such as links promising live streaming of Olympic events, could form the basis of attacks by regular cybercrooks slinging ransomware and other crud as well as spies.

Recent activity from the Fancy Bears’ Hack Team and other hacktivist groups might lead to campaigns directed against the International Olympic Committee (IOC) and the Olympics in general. This may be because of the decision to ban Russian athletes from participating under the national flag, something already attributed as the motive behind attacks against the World Anti-Doping Agency.

Last but not least, animal welfare groups could stage a protest and/or boycott over South Korea’s dog and cat meat trade. Twitter chatter on this topic is already taking place and may be a harbinger of things to come, Anomali cautioned.

Many sponsors and partners of the games have already experienced hacks and this is another area of potential concern:

  • Huawei products propagated the “Satori botnet” (variant of Mirai IoT malware)
  • Hanjin Group file exposures in 2014 and 2016
  • KORAIL government officials’ smartphones were infected in 2016 and used to launch a larger attack
  • A Hyundai app software vulnerability left vehicles potentially susceptible to theft for three months

Some of the attacks have been attributed to Kimsuky (North Korea), RGB (North Korea), APT3 (China), and Nexus Zeta, Anomali said.

Bootnote

[1] The original file name was 농식품부, 평창 동계올림픽 대비 축산악취 방지대책 관련기관 회의 개최.doc (“Organized by Ministry of Agriculture and Forestry and Pyeongchang Winter Olympics”).


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/08/winter_olympics_malware_fears/

More stuff broken amid Microsoft’s efforts to fix Meltdown/Spectre vulns

More examples have emerged of security fixes for the Meltdown vulnerability breaking things.

Patching against CVE-2017-5753 and CVE-2017-5715 (Spectre) and CVE-2017-5754 (Meltdown) borks both the PulseSecure VPN client and Sandboxie, the sandbox-based isolation program developed by Sophos.


PulseSecure has come up with a workaround for affected platforms, which include Windows 10 and Windows 8.1 but not Windows 7.

Sandboxie has released an updated client to solve compatibility issues with an emergency fix from Microsoft, as explained here. We’ve asked Sophos for comment.

The same set of Microsoft fixes, released last Wednesday (January 3), also freezes some PCs with AMD chips, as previously reported.

These sorts of issues leave sysadmins (and to a lesser extent consumers) between a rock and a hard place. The critical Meltdown and Spectre vulnerabilities recently found in Intel and other CPUs represent a significant security risk. Because the flaws are in the underlying system architecture, they will be exceptionally long-lived.

Remediation work is necessary but complicated because anti-malware packages need to be tweaked before Microsoft’s patches can be applied, as previously reported.

Unless the antivirus compatibility registry key is set, Windows Update will not deliver January’s security updates, or any future ones. Anti-malware software requires low-level access to the machine it runs on, so tweaks need to be made to accommodate the changes in memory handling that come with the Meltdown and Spectre fixes, or else crashes can occur, Microsoft warned.

A Redmond support article clarifies that “customers will not receive the January 2018 security updates (or any subsequent security updates) and will not be protected from security vulnerabilities unless their antivirus software vendor sets [a particular] registry key”.
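For reference, the compatibility flag Microsoft documents is a QualityCompat registry value that antivirus vendors are expected to set once their products are verified; in .reg form:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat]
"cadca5fe-87d3-4b96-b7fb-a231484277cc"=dword:00000000
```

Setting it by hand asserts a compatibility that an untested AV product may not actually have, which is why Microsoft leaves the job to the vendors.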

Buckle up: it’s going to be a bumpy ride even though some help is available.

Cybersecurity vulnerability manager Kevin Beaumont has put together a Windows antivirus patch compatibility spreadsheet here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/08/meltdown_fix_security_problems/

Vulnerability Management: The Most Important Security Issue the CISO Doesn’t Own

Information security and IT need to team up to make patch management more efficient and effective. Here’s how and why.

This piece was co-written with Amber Record, a security engineer at F5 Networks. In her role, Amber is responsible for leading the company’s vulnerability management program.

The number of attacks like the recent one against Equifax has risen dramatically in the last few years, resulting in the exposure of hundreds of millions of private records. Almost without exception, there has been some fundamental flaw related to the configuration or patching of systems. This trend will continue without systems designed to automatically identify, patch, and close vulnerabilities in core IT systems, reducing the chance of human error. We can accomplish this with the automation typically found in large operational cloud deployments and the Continuous Delivery (CD)/Continuous Integration (CI) principles of DevOps. These principles are already being used to automatically stop active attacks within the information security community and should now extend to IT operations to improve protections and stop the bad guys from getting in at all.

Where do organizations start when they realize that a standardized, managed vulnerability management program does not exist? There are almost too many options. You could build your own vulnerability scanning solution; if you inherit a security team that already runs more than one, you could review them and select the option that best fits your needs. Alternatively, you could hire a consultant to review the existing program or build one for you. Whichever route you choose, experienced contractors can provide a generalized plan that you then tune to your company’s specific needs. In the end, the goal is a safer environment for your data and applications. Evaluation criteria must include manageability, accuracy, interpretability, and the ability to identify specific actions that server owners can perform.

Once you’ve selected an approach, then what? The next step is a peer review of your solution to find what it might be lacking. Coordinating the plan outside the organization is necessary to get the program into a fully functioning state, complete with ticketing to remediate vulnerabilities. The preferred way to reduce vulnerabilities is to implement as much automated patching as possible. This has two net effects: it provides protection earlier, and it reduces the load on application owners who would otherwise patch systems manually.

Once the technical scanning solution and ticketing support are in place, the process becomes much simpler. What to scan first should be driven by risk. One of the biggest obstacles to vulnerability management programs is application owners' fear that scanning will degrade their applications. The information security team can throttle scans, schedule them for times of day with lower application traffic, and scan applications before they reach production to catch vulnerabilities sooner and reduce load on live systems.
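Both mitigations can be expressed in a few lines. The sketch below orders scan targets by risk and gates scanning on an off-peak window; the asset fields and the 01:00–05:00 window are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Asset:
    name: str
    risk_score: int       # e.g. a CVSS-weighted exposure score, 0-100
    internet_facing: bool

def scan_order(assets):
    """Order scan targets by risk: internet-facing hosts first, then by score."""
    return sorted(assets, key=lambda a: (not a.internet_facing, -a.risk_score))

def in_scan_window(now, start=time(1, 0), end=time(5, 0)):
    """Only allow scans inside an off-peak maintenance window (here 01:00-05:00)."""
    return start <= now < end
```

A scheduler built on these two functions walks the risk-ordered list but defers any scan attempted outside the window, which directly addresses the application owners' availability concerns.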

The sad truth is that vulnerability management programs have little or no ability to actively correct the flaws they find. Most simply generate tickets tasking system maintainers to patch or reconfigure systems. A few products either inject code into connection streams or apply generic intrusion prevention signatures that mask underlying flaws in Internet-facing services, but to limited effect. Looking at the problem through the standard ITIL lens of people, process, and technology, we offer a new approach:

Today, most enterprises still rely on IT staff to manually patch operational systems, especially e-commerce and other customer-facing systems. That is the problem: even when completely accurate vulnerability scans are delivered, there aren't enough people to patch or correct the systems quickly enough to prevent attack. Continuing down this path is a black hole for head count, driving up total cost of ownership with limited return, because to be truly effective, nearly all patches and fixes must be made without exception. Instead, IT departments should hire developers who can automate the correction of infrastructure flaws as needed. Changing the makeup of the IT Ops department may create upheaval among workers worried about their jobs, but this isn't insurmountable if retraining is offered and embraced.

Switching to automation to address the vulnerability management problem doesn't just require different skills; it requires entirely new processes. Processes designed strictly for manual work in user interfaces won't do the trick. New processes that leverage automation tools are needed to cut out waste and the "wait states" that consume time without yielding benefit. Proper testing of the operational impact of patching and configuration changes should be integrated with the enterprise's existing DevOps processes to prevent outages. Success also demands engineering integration skills to interconnect products such as vulnerability scanners and infrastructure management systems. The automation should report metrics on the state of the systems under management to verify that a secure state is truly being achieved.
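A minimal sketch of the kind of metrics such automation could report, assuming open findings are tracked per host (the data shape here is an illustrative assumption):

```python
def coverage_metrics(findings_by_host):
    """Summarize remediation state across managed hosts.

    `findings_by_host` maps host name -> list of open findings; a host
    with an empty list is considered compliant.
    """
    total = len(findings_by_host)
    compliant = sum(1 for open_items in findings_by_host.values() if not open_items)
    open_count = sum(len(items) for items in findings_by_host.values())
    return {
        "hosts": total,
        "compliant_pct": round(100 * compliant / total, 1) if total else 0.0,
        "open_findings": open_count,
    }
```

Tracking the compliant percentage over time is what turns "we patched some things" into evidence that a secure state is actually being reached and held.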

Finally, new automation technologies will be required to complete a full cycle of vulnerability detection that automatically corrects flaws and verifies the fixes. Orchestration tools like Puppet and Chef should be integrated with the APIs built into vulnerability scanners and infrastructure management and patching systems to make this vision a reality. It will be challenging for enterprises to accept this level of automation and "trust the machine," but it's a far better option than today's automated intrusion prevention technologies, whose false positives can block legitimate traffic and possibly interfere with revenue-generating business systems.
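The detect-correct-verify cycle might look like the sketch below. The `scanner` and `orchestrator` objects stand in for real products (a vulnerability scanner's REST API and a Puppet/Chef control plane); the method names are assumptions for illustration, not any vendor's actual interface:

```python
def remediation_cycle(scanner, orchestrator):
    """One detect-correct-verify pass over all open findings.

    For each open finding, ask the orchestrator to apply a fix, then
    rescan to confirm. Findings that fail verification are returned
    separately so they can be escalated to a human-owned ticket.
    """
    fixed, failed = [], []
    for finding in scanner.open_findings():
        orchestrator.apply_fix(finding)
        if scanner.verify_fixed(finding):   # rescan, don't trust the fix blindly
            fixed.append(finding)
        else:
            failed.append(finding)          # escalate: automation couldn't close it
    return fixed, failed
```

The verification rescan is the step that earns the "trust the machine" argument: the loop never marks a flaw closed on the orchestrator's word alone.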


Mike Convertino has nearly 30 years of experience in providing enterprise-level information security, cloud-grade information systems solutions, and advanced cyber capability development. His professional experience spans security leadership and product development at a wide … View Full Bio

Article source: https://www.darkreading.com/application-security/vulnerability-management-the-most-important-security-issue-the-ciso-doesnt-own/a/d-id/1330734?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Gov Outlines Steps to Fight Botnets, Automated Threats

The US Departments of Commerce and Homeland Security identify the challenges of, and potential actions against, automated cyberattacks.

The US Departments of Commerce and Homeland Security have published a report focused on the challenges and steps toward fighting botnets and other automated, distributed threats, the National Institute of Standards and Technology (NIST) announced last week.

Their report is a response to Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. The EO directed the Secretaries of Commerce and Homeland Security to “lead an open and transparent process to identify and promote action by the appropriate stakeholders” in order to reduce automated and distributed cyberattacks.

In a joint effort, the two departments outlined the opportunities and challenges in reducing the threat of automated attacks. Key themes of their report: automated attacks are a global problem, effective tools exist but are not widely used, more education and awareness are needed, and market incentives are misaligned.

They also created a list of goals to reduce the threat. These include identifying a clear path toward a secure tech marketplace, promoting infrastructure innovation to adapt to evolving threats, and promoting network innovation to prevent and detect threats.

A final report will be submitted by May 11. Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/us-gov-outlines-steps-to-fight-botnets-automated-threats/d/d-id/1330763?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Monday review – the hot 15 stories of the week