
Future of the SIEM

Current SIEM systems have flaws. Here’s how the SIEM’s role will change as mobile, cloud, and IoT continue to grow.

Ask security experts about security information and event management (SIEM) systems, and many will tell you SIEMs are becoming dated and need to be revamped. 

The skepticism is understandable. How can SIEM, a multi-billion-dollar market that has been around for many years, keep up as businesses adopt new technologies like cloud systems, mobile, and IoT? When it was invented, SIEM did exactly what organizations needed. Now their needs are more complex.

Behind the curve

SIEMs collect security events in real-time from various event and data sources.

“[SIEM] was a place where you pumped in a whole bunch of data and figured out what was suspicious,” says Larry Ponemon, chairman and founder of the Ponemon Institute. “It gave you an alert, quarantined the traffic, sandboxed it.

“For the most part, SIEM made a lot of sense from a business perspective. Dealing with potential attacks and vulnerabilities, without a tool, was like finding a pin in a stack of hay. It was virtually impossible to do manually.” 

As attackers have become more sophisticated, SIEM systems have failed to keep up.

Today, those same products “barely work at all,” says Exabeam CMO Rick Caccia. Older systems aren’t built to capture credential or identity-based threats, hackers impersonating people on corporate networks, or rogue employees trying to steal data. 

A recent report by the Ponemon Institute, commissioned by Cyphort, found that 76% of SIEM users across 559 businesses view SIEM as a strategically important security tool. However, only 48% were satisfied with the actionable intelligence their SIEMs generate.

Caccia likens the current state of the SIEM market to the state of the firewall market six to seven years ago, before companies like Palo Alto Networks entered the space with a next-level product that could catch new attacks and quickly solve problems. Similarly, SIEM is struggling with stale technology, new threats, and a need for change.

Shortcomings and challenges

Many of SIEM’s current shortcomings stem from its tough mission of monitoring security and detecting threats across the business, says Gartner vice president Anton Chuvakin. It’s a hard problem to solve, no matter how security pros choose to tackle it.

“If flying to the moon is hard, you’re not going to say your rocket is crap,” he quips. “It’s just difficult.”

Complex mission aside, one key shortcoming of today’s SIEM products is their reliance on humans. “SIEM is, in that sense, more rule-based and expert-described,” says Chuvakin. “That’s a main weakness because at this point, we’re trying to get developed tools to try and think for themselves.”
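
The rule-based model Chuvakin describes is easy to picture. The sketch below is a hypothetical correlation rule, a toy illustration of how SIEM rules encode an expert's guess about what "suspicious" looks like; it is not any vendor's engine, and every name and threshold is invented:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy SIEM rule: alert when one source IP produces five or more failed
# logins within 60 seconds. Real SIEMs ship thousands of hand-written
# rules like this, and each fires only on patterns an expert anticipated,
# which is exactly the weakness Chuvakin points to.
FAILED_LOGIN_THRESHOLD = 5
WINDOW = timedelta(seconds=60)

def correlate(events):
    """events: iterable of (timestamp, source_ip, event_type) tuples."""
    recent = defaultdict(list)  # source_ip -> recent failure timestamps
    alerts = []
    for ts, ip, etype in sorted(events):
        if etype != "login_failure":
            continue
        recent[ip] = [t for t in recent[ip] if ts - t <= WINDOW] + [ts]
        if len(recent[ip]) >= FAILED_LOGIN_THRESHOLD:
            alerts.append((ts, ip, "possible brute force"))
    return alerts
```

An attacker who throttles attempts to four per minute, or rotates source IPs, slips straight past such a rule without tripping it.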

The dependence on human experts is a problem because there simply aren’t enough of them, he continues. If a business needs five SIEM experts and its entire IT team consists of five people, they don’t have the bandwidth to ensure the SIEM is effective.

Amos Stern, co-founder and CEO of Siemplify, explains there is a need for better SIEM automation and management of people and systems. Businesses often have several security tools in many silos. SIEM systems will need to connect these silos and automate processes and investigations across these tools, evolving to the point where they function as a “Salesforce for security.” 

Caccia echoes the need for greater SIEM intelligence, noting how most systems’ rules can’t keep up with attackers. For companies struggling with talent, he says, automation could help junior team members perform closer to an expert level.

SIEM implementation is another challenge. “It’s a process that sometimes costs more than the actual product,” Stern says. “Organizations wouldn’t rip and replace their SIEMs with new technology. Right now many are only at the point where their SIEM deployment is mature, or mature enough, to not create a ton of noise.”

Cloud, IoT, and the role of SIEM

SIEM challenges will continue to evolve as security managers grapple with cloud services, mobile, the Internet of Things, and other new technologies the IT department doesn’t always control.

IoT will be a huge factor as it drives up the number of endpoints vulnerable to attackers, says Ponemon. It’s getting harder for cybercriminals to infiltrate computers but still fairly easy to hack cameras, refrigerators, microwaves, Bluetooth tools, and other connected devices and use them as an attack vector.

The growth of cloud, especially for SMBs, has transformed how businesses store and handle data. Companies once intimidated by the high price of data storage now benefit from SIEM providers like ArcSight, Nitro, and others that deploy modules from the cloud, he continues.

Cloud services and IoT devices will rapidly generate increasing amounts of data, and SIEM systems will have to adapt by learning to collect and organize the influx of information.  

“The SIEM evolution is about supporting more data types, supporting more problems,” says Gartner’s Chuvakin, whose research has focused on user behavior analytics and machine learning. He anticipates these will help SIEM think on its own and relieve the need for human experts. 

Ponemon emphasizes the importance of machine learning and analytics in the next wave of SIEM, but notes companies are hesitant to explore this space. They don’t want to build products in an area where they lack the talent necessary to execute.

“A lot of companies aren’t making that investment because they feel they don’t have the internal resources to implement it properly,” he says. “They think the technology might get better; they don’t want to be early adopters.” 

While this type of evolution is “still a futuristic thing,” progress is moving quickly, Ponemon says. 

What’s up next?

The SIEM may need a face-lift, but it isn’t going anywhere.

“It’s not on the way out,” says Siemplify’s Stern. “It’s been around for quite some time.”

Caccia foresees several changes in the market shaping the growth of SIEM, including the growth of open-source big data technology and vendors focused on automated playbooks and incident response.

Chuvakin anticipates the immediate future will bring incremental improvements instead of major change: not a sharp break in the SIEM market, but small, gradual changes.

“The future of SIEM will likely be an evolution, and not a revolution,” he says. 


Kelly is an associate editor for InformationWeek. She most recently reported on financial tech for Insurance Technology, before which she was a staff writer for InformationWeek and InformationWeek Education. When she’s not catching up on the latest in tech, Kelly enjoys …

Article source: http://www.darkreading.com/threat-intelligence/future-of-the-siem-/d/d-id/1328457?_mc=RSS_DR_EDT

Phishing Your Employees for Schooling & Security

Your education program isn’t complete until you test your users with fake phishing emails.

Imagine this fictional scenario: A student, hoping to become a surgeon, attends hours of medical courses. She never misses a class, always listens, and takes copious notes. Finally, after completing the years of training necessary, the student receives her medical degree having never taken a test. Would you let this surgeon operate on you?

I sure hope not! Testing is a crucial part of any form of education, for both teachers and students.

That’s why I believe your phishing education program isn’t complete until you phish your own company’s tank. By that, I mean sending fake (but realistic) phishing emails to all your users to see if they fall for them. There are plenty of tools and services that can do this for you. To me, this is the real test of your phishing and user awareness security training.

I’m assuming those of you reading this already have a security education program that includes a phishing curriculum. Some information security experts don’t believe user education works. I’m not one of them. There’s significant evidence that the right kind of education does work. In fact, for phishing specifically, the Ponemon Institute found that user education had a staggering 50x return on investment. If you aren’t already educating your users through training, that number alone should convince you to start. So, let’s talk about how you can improve your general security education program, and why phishing your users is such a valuable piece of the puzzle.

  • Practical tests are the best measure of understanding. Most security awareness training I’ve seen ends with a basic multiple choice test. These tests are only a partial measurement of whether or not the pupil can put that knowledge to use in the real world. Take a driving test, for instance. Sure, there’s a written test, but you wouldn’t allow a teenager on the road until after he passed the practical one, too.
  • Practical assessment can reveal training gaps. By sending fake phishing emails, you can learn which ones your users fell for most often. Was there a certain type of email that contained a certain “lure” that tricked your employees? Perhaps that might be a missing piece you can add to your next phishing training, or a concept you haven’t covered in enough detail.
  • They help employees recognize their own level of understanding. Your fake phishing emails should immediately inform users when they click on a bad link. The goal isn’t to shame the user — that’s detrimental to education. Rather, the goal is to let the user know they missed something, so they realize that they have a gap in their practical understanding, and don’t overestimate their preparedness.
  • They provide another training opportunity. The best training involves repetition. Besides informing a student they’ve made a mistake, fake phishing emails allow you to immediately share training with the user that specifically addresses the mistake they just made. For instance, say a user clicked a link that obviously went to a domain having nothing to do with the email. After informing the user of their mistake, your phishing link could forward the user to a training page specifically telling them what to look for in URLs. In fact, these fake phishing exercises provide an easy way to regularly reintroduce training materials to your users (at least the ones making mistakes), without having to repeat a training course.
  • Practical tests are more likely to change behaviors. The true measure of security education is if its recipients change their bad behaviors. One reason some security pundits complain that training is ineffective is because of a certain type of user that knows the right behavior but continues to do the wrong one when it’s easier. Failing these internal phishing tests regularly should eventually get even the most stubborn users to change their behavior, simply because they know their boss might be watching.  
  • They help you measure the actual value of your training. I believe that security training is effective, but not all training is equal. Phishing your own tank measures your training’s efficacy. Send out fake phishing emails before your trainings and record the results. Then send similar emails out after the training and compare the results. Give your organization at least two cycles of training to really understand the long-term trends. (Education takes some time!) However, if you aren’t seeing a change in behavior, then perhaps you should cancel that particular training course and identify one that works better. In any case, you’re not going to be able to calculate this risk vs. efficacy vs. cost equation unless you actually measure how well your users do against phishing emails — and the only way to do that is to phish your company’s tank. 
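
The mechanics behind the points above — unique per-recipient links, immediate non-punitive feedback, redirection to training, and a measurable failure rate — can be sketched in a few lines. Everything here is hypothetical: the domains, page names, and function names are invented for illustration, not drawn from any particular phishing-simulation product:

```python
import uuid

# Hypothetical training page clickers are redirected to (placeholder URL).
TRAINING_URL = "https://training.example.com/spot-the-phish"

def build_campaign(recipients):
    """Give each recipient a one-time token so clicks can be attributed."""
    return {uuid.uuid4().hex: email for email in recipients}

def phishing_link(token):
    """The tracked link embedded in the fake phishing email."""
    return f"https://phish-test.example.com/click?t={token}"

def record_click(campaign, clicked, token):
    """Attribute a click, then send the user to training -- not to a
    shaming page, per the guidance above."""
    if token in campaign:
        clicked.add(campaign[token])
    return TRAINING_URL

def failure_rate(campaign, clicked):
    """Fraction of recipients who clicked: the before/after metric."""
    return len(clicked) / len(set(campaign.values()))
```

Running the same campaign before and after a training cycle and comparing `failure_rate` is the measurement loop the last bullet describes.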

[Learn more about using the science of habits to transform user behavior during Interop ITX, May 15-19, at the MGM Grand in Las Vegas. For more on other Interop security tracks, or to register click on the live links.]


Corey Nachreiner regularly contributes to security publications and speaks internationally at leading industry trade shows like RSA. He has written thousands of security alerts and educational articles and is the primary contributor to the WatchGuard Security Center blog, …

Article source: http://www.darkreading.com/endpoint/phishing-your-employees-for-schooling-and-security/a/d-id/1328450?_mc=RSS_DR_EDT

Malware Explained: Packer, Crypter & Protector


These three techniques can protect malware from analysis. Here’s how they work.

In this article we will try to explain the terms packer, crypter, and protector in the context of how they are used in malware. Bear in mind that no definitions for these categories are set in stone: they all overlap and there are exceptions to the rules. But this is the classification that makes sense to me.

What they all have in common is their goal
The payload, which is the actual malware that the threat actor wants to run on the victims’ computers, is protected against reverse engineering and detection by security software. This is done by adding code that is not strictly malicious, but only intended to hide the malicious code. So the goal is to hide the payload from the victim – and from researchers who get their hands on the file.

Packers
This usually is short for “runtime packers” which are also known as “self-extracting archives,” software that unpacks itself in memory when the “packed file” is executed. Sometimes this technique is also called “executable compression.” This type of compression was invented to make files smaller so users wouldn’t have to unpack them manually before they could be executed. But given the current size of portable media and internet speeds, the need for smaller files is not that urgent anymore. So when you see some packers being used nowadays, it is almost always for malicious purposes – in essence, to make reverse engineering more difficult, with the added benefit of a smaller footprint on the infected machine.
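
A common heuristic analysts use to spot runtime-packed or compressed executables is byte entropy: compressed or encrypted data looks nearly random, while plain code and text do not. A minimal sketch, with an illustrative threshold rather than any standard cutoff:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: 0 for constant data, 8 for uniform random."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(data).values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    # Packed or encrypted payloads typically sit near 8 bits/byte; plain
    # machine code and text sit far lower. The 7.2 cutoff is a heuristic
    # chosen for illustration, not a detection standard.
    return shannon_entropy(data) > threshold
```

High entropy alone doesn't prove malice (legitimate installers compress too), which is why it is only one signal among many.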

Crypters
The crudest technique used by crypters is usually called obfuscation. Obfuscation is also often used in scripts, like JavaScript and VBScript, but most of the time these are not very hard to bypass or de-obfuscate. More complex methods use actual encryption. Most crypters not only encrypt the file; the crypter software also offers the user many other options to make the hidden executable as hard as possible for security vendors to detect. The same is true for some packers. Another expression you will find in this context is FUD (fully undetectable), which sets the ultimate goal for malware authors: being able to go undetected by any security vendor is the holy grail. But if authors can go undetected for a while, and then easily change their files again once they are detected, they will settle for that.
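
The crudest end of the spectrum is easy to demonstrate. Single-byte XOR is a staple of low-end crypters: the stored payload no longer matches static signatures, yet a tiny stub can recover it at runtime because XOR is its own inverse. This is a teaching sketch, not the scheme of any particular malware family, and the payload bytes are invented:

```python
def xor_obfuscate(payload: bytes, key: int) -> bytes:
    """Hide (or reveal) a payload with single-byte XOR -- the same call
    does both, since (b ^ k) ^ k == b."""
    return bytes(b ^ key for b in payload)

plain = b"MZ\x90\x00example-payload"         # hypothetical payload bytes
hidden = xor_obfuscate(plain, 0x5A)
assert hidden != plain                       # plaintext signature misses
assert xor_obfuscate(hidden, 0x5A) == plain  # stub recovers it at runtime
```

An analyst defeats this in minutes, which is why serious crypters move on to real encryption plus the anti-detection options described above.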

Protectors
A protector in this context is software that is intended to prevent tampering with and reverse engineering of programs. The methods used can, and usually will, include both packing and encrypting. That combination, plus some added features, makes what is usually referred to as a protector: something that surrounds the payload with protective layers, making reverse engineering difficult.

A completely different approach, which also falls under the umbrella of protectors, is code virtualization, which uses a customized and different virtual instruction set every time you use it to protect your application. Professional versions of these protectors are used in the gaming industry against piracy. But the technique itself has also made its way into malware, more specifically into ransomware that doesn’t need a C&C server to communicate the encryption key. The protection is so efficient that the encryption key can be hardcoded into the ransomware. An example is Locky Bart, which uses WProtect, an open-source code-virtualization project.

Discover more at Malwarebytes Labs.

Was a Microsoft MVP in consumer security for 12 years running. Can speak four languages. Smells of rich mahogany and leather-bound books.

Article source: http://www.darkreading.com/partner-perspectives/malwarebytes/malware-explained-packer-crypter-and-protector/a/d-id/1328458?_mc=RSS_DR_EDT

New Yorkers See 60% Rise in Data Breaches in 2016

Attorney General Eric Schneiderman announced his office received nearly 1,300 data breach notifications in 2016, a 60% increase over 2015.

An analysis conducted by the New York Attorney General’s (AG) office reveals a 60% increase in data breaches in New York in 2016. This resulted in 1.6 million personal records exposed, three times the number exposed in 2015. Main causes of the 1,300 reported breaches included hacking (40%) and negligence (37%).

Around 81% of the data exposed in the 2016 breaches involved victims’ Social Security numbers and financial information, says the report, cautioning that all types of organizations are at risk. March 2016 saw the longest delays in breach notifications, and January and October each brought one of the year’s two mega breaches.

New York AG Eric T. Schneiderman says: “It’s on all of us to guard against those who try to use our personal information for harm – as these breaches too often jeopardize the financial health of New Yorkers and cost the public and private sectors billions of dollars.”

Read analysis details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: http://www.darkreading.com/attacks-breaches/new-yorkers-see-60--rise-in-data-breaches-in-2016/d/d-id/1328455?_mc=RSS_DR_EDT

Google, Jigsaw Offer Free Cyber Protection to Election Sites

The Protect Your Election package from Google and Jigsaw includes password alert and two-step verification for candidates and campaigns.

Google and sister company Jigsaw are offering free cyber protection to election organizers and civic groups amid the growth of politically motivated cyberattacks, especially during election time, Reuters reports. Protect Your Election is offered to individuals and low-budget organizations, and seen as particularly important given the upcoming polls in France, South Korea, and Germany.

This package is in addition to Project Shield from Jigsaw, which has been offered to news sites over the past year to ward off distributed denial of service (DDoS) attacks.

The tech industry has been under fire for not doing enough to provide protection, especially after the recent Twitter hack during the Dutch elections and the more high-profile hack of Democrats during the US polls last year. Attacks during election time are viewed as costly for victims.

The Protect Your Election toolkit includes Password Alert and Two-Step Verification, and is meant for candidates and campaigns. News and human rights websites, and election-related websites, can apply for Project Shield, says Jigsaw. Package details here.

Read more on Reuters.


Article source: http://www.darkreading.com/attacks-breaches/google-jigsaw-offer-free-cyber-protection-to-election-sites/d/d-id/1328456?_mc=RSS_DR_EDT

Web smut seekers take resurgent Ramnit malware from behind

Aficionados of salacious smut sites in the UK and Canada are picking up some nasty software that infects systems by using corrupted pop-under adverts.

Security researchers at Malwarebytes Labs running a malware honeypot have noticed a resurgence of the Ramnit trojan among their samples. Ramnit was a particularly pernicious piece of code that specialized in harvesting banking credentials and building a botnet. It was supposed to have been taken down after a 2015 operation by Europol.

Since then, isolated cases of infection have popped up, notably in a German nuclear power plant, but now it appears the operators are back in business and are exploiting one of mankind’s oldest urges to spread their malware.

The researchers noted that the malware was being spread through malicious adverts, posted on the ExoClick ad network, that used redirection and code injection. These advertisements pop up under the main browser window, making them unlikely to be noticed initially and allowing more time for a successful attack.

“The first stage redirection includes a link to tds.tuberl.com within two different JavaScript snippets,” said Malwarebytes intelligence analyst Jérôme Segura.

“This Traffic Distribution System mostly loads benign adult portals/offers via ExoClick. The actual malvertising incident takes place next with a 302 redirect to a malicious TDS this time, which performs some geolocation fingerprinting and checks the upper referer before loading the RIG exploit kit.”

The company has since been in contact with ExoClick, the adverts have been taken down, and new malware signatures have been distributed. But it looks as though Ramnit isn’t going away any time soon. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/22/resurgent_ramnit_malware/

UK vuln ‘fessing pilot’s great but who’s going to give a FoI?

A security researcher has welcomed the UK’s launch of a vulnerability co-ordination pilot while cautioning that a strategy for handling Freedom of Information requests needs to be developed.

The National Cyber Security Centre (NCSC) scheme will focus on handling vulnerabilities that crop up in government-run systems. The proposed framework is built around an established international standard for vulnerability disclosure, ISO/IEC 29147:2014.

In the past, conversations between security researchers and government bodies were handled through GovCert and CERT-UK. “The disclosure process has never been quite as smooth as we would have wanted,” the NCSC admitted in a blog post.

The new method aims to provide faster and more efficient triage on reports of security flaws consistent with what the NCSC describes as Active Cyber Defence. This will mean a redesigned approach, which will be tested through a pilot programme.

“Over the next few months we will be working with an invited group of UK-based security practitioners to help us to identify and resolve vulnerabilities across three publicly facing systems used in UK Public Sector. To help us get this right we are working with LutaSecurity for advice and will look to use a recognised platform for vulnerability co-ordination.”

Steve Armstrong, a pen tester and former lead of the RAF’s penetration and TEMPEST testing teams, welcomed the strategy.

“Government is recognising that more services online means more risk and more customer exposure,” Armstrong told El Reg. “One thing that government has a bad history is connecting the right people to the information.

“It will be interesting to see how many reports they get and how fast they handle them. From an Operational Security (OPSEC) viewpoint, I wonder how long it will be before they get their first FOI [freedom of information] request and it will be interesting to see how they handle them.”

Armstrong warned that FOI requests could undermine the programme by creating an environment where vulnerabilities are treated as fodder for news stories rather than flaws that ought to be swiftly and discreetly resolved.

“I can almost see the waves of ‘How many critical vulnerabilities have been reported and how many are still outstanding?'” Armstrong said. “Hopefully the legal beagles have that nailed down so we don’t leak that info to those that would see us fall.”

Receiving vulnerability reports from the external security community needs to be supplemented by penetration testing, internal security reviews and patching, according to the NCSC.

The overall aim is to achieve an “effective, mature approach” to handle the disclosure of security vulnerabilities in public sector systems and services.

Alex Rice, CTO of HackerOne, a specialist bug bounty firm, also welcomed the UK government’s vulnerability disclosure plan.

“The success of DoD’s vulnerability disclosure programme has proven that they are an essential part of the strategy in the defence of even the most mission-critical systems,” Rice said. “We applaud the UK government in adopting this security best practice.”

Independent infosec consultant Brian Honan, the founder and head of Ireland’s CERT, told El Reg that the NCSC is blazing a path he hopes “many other governments and CERTs” will follow.

“Having a formal vulnerability disclosure process is good for all involved,” Honan said. “Historically Computer Emergency Response Teams have helped co-ordinate disclosures but often these have been on best efforts. In addition, many CERTs do not have any authority or influence over private companies and if the vulnerability is not being treated with the right level of urgency the CERT and security researchers can end up frustrated with the overall process, often resulting in breakdown in relationships and the resulting negative impact on future vulnerability disclosure.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/22/uk_gov_vuln_disclosure_pilot/

Microsoft’s ‘Application Verifier’ bug-finder is easily pwnable

“Don’t create undocumented features” should be tattooed in the corner of every developer’s eye: there’s one in the Microsoft Application Verifier Provider that provides attack vectors on every version of Windows since XP.

Cybellum, which discovered the feature, has focussed on attacking anti-virus first, but says its DoubleAgent attack could also be used to inject persistent malware on a target, hijack permissions, modify process behaviours, and attack other users’ sessions.

What the researchers found is a fault in how the Microsoft Application Verifier Provider handles .DLLs.

As Microsoft explains, “Application Verifier is designed specifically to detect and help debug memory corruptions and critical security vulnerabilities”.

As part of the process, .DLLs are bound to the target processes in a Windows Registry entry – but, as Cybellum explains in its technical post, you can replace the real .DLL with a malicious .DLL. Here’s how the firm says this can work:

“Our researchers discovered an undocumented ability of Application Verifier that gives an attacker the ability to replace the standard verifier with his own custom verifier. An attacker can use this ability in order to inject a custom verifier into any application. Once the custom verifier has been injected, the attacker now has full control over the application.”

With the victim process associated with DoubleAgentDll.Dll, “it would permanently be injected by the Windows Loader into the process every time the process starts, even after reboots/updates/reinstalls/patches/etc.”
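
The persistence comes from the documented Image File Execution Options mechanism: a per-executable registry key whose VerifierDlls value names the DLL the Windows loader maps into the process at startup. The sketch below only constructs the key path and values involved; it deliberately performs no registry writes, and the executable and DLL names are placeholders:

```python
def verifier_registry_entry(exe_name: str, dll_name: str):
    """Return the IFEO key path and values that bind a verifier DLL to a
    process. Once written (which this sketch does not do), the loader
    injects dll_name into every future start of exe_name -- surviving
    reboots and reinstalls, as described above."""
    key = (r"HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion"
           r"\Image File Execution Options" + "\\" + exe_name)
    values = {
        "GlobalFlag": 0x100,       # FLG_APPLICATION_VERIFIER
        "VerifierDlls": dll_name,  # DLL(s) the loader will map in
    }
    return key, values
```

Because the binding lives in the registry rather than in the target binary, reinstalling or patching the application does not remove it.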

In their work attacking Application Verifier under antivirus products, the researchers found they were able to get the A/V to act as disk-encrypting ransomware.

The company lists A/V vendors that failed under attack as Avast (CVE-2017-5567), AVG (CVE-2017-5566), Avira (CVE-2017-6417), Bitdefender (CVE-2017-6186), Trend Micro (CVE-2017-5565), Comodo, ESET, F-Secure, Kaspersky, Malwarebytes, McAfee, Panda, Quick Heal, and Norton.

Malwarebytes, AVG, and Trend Micro have released fixes.


Cybellum notes that the simplest fix for antivirus using Application Verifier is to move to a newer architecture called Protected Processes.

The proof-of-concept is at GitHub. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/22/microsoft_application_verifier_security_problems/

Mac OS IM tool Adium lagging on library security vulnerability

A developer is warning Adium users to pick a different messaging app because of an exploitable vulnerability in its underlying libpurple version.

Developed by Pidgin, libpurple is an instant messaging library, and was patched earlier this month.

According to “Erythronium23” in this post to Full Disclosure, Adium is still using the unpatched version.

If an attacker sends invalid XML entities containing white spaces, they can crash the purple_markup_unescape_entity process and get remote code execution.

The attack string has to be sent from a malicious server, which mitigates the risk somewhat.
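
The bug class is easier to see by contrast: a robust entity decoder validates the token before acting on it, rejecting malformed input (including entities containing whitespace, the trigger here) instead of crashing. The sketch below is illustrative Python with an abbreviated entity table; it is not libpurple's C implementation:

```python
import re

# Abbreviated entity table for illustration.
ENTITIES = {"amp": "&", "lt": "<", "gt": ">", "quot": '"', "apos": "'"}
# Well-formed entities only: no whitespace anywhere in the token.
ENTITY_RE = re.compile(r"&(#\d{1,7}|#x[0-9a-fA-F]{1,6}|[A-Za-z][A-Za-z0-9]*);")

def unescape_entity(token: str):
    """Return the decoded character, or None for anything malformed."""
    m = ENTITY_RE.fullmatch(token)
    if not m:
        return None                     # reject, don't crash
    body = m.group(1)
    if body.startswith("#x"):
        cp = int(body[2:], 16)
    elif body.startswith("#"):
        cp = int(body[1:])
    else:
        return ENTITIES.get(body)       # None for unknown names
    return chr(cp) if cp <= 0x10FFFF else None
```

Validating before decoding is exactly the step the vulnerable routine skipped.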

Erythronium’s complaint is threefold:

  1. Adium’s developers are ignoring the bug report
  2. There’s no documentation about how to upgrade the library
  3. The libpurple shipping with the application is “a binary blob of unknown provenance”

Adium is a Mac OS messenger that supports connections to AIM, Google Talk, Yahoo Messenger, Jabber, ICQ, and IRC.

Adium’s developers have contacted The Register to say they’re “getting the facts ironed out before giving an official response”, and are “working on releasing an update directly.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/03/22/adium_lagging_on_library_security_vulnerability/

The True State of DevSecOps

Automation improving, but security needs to find ways to slide into DevOps workflow and toolchain.

As enterprises increasingly unite security principles and standards within DevOps practices, they’re speeding up software delivery and improving security in the process.

A new tide of research and anecdotal evidence indicates that the marriage of the two, known as DevSecOps, is helping them get better at automating security testing and improving security attributes of applications earlier in the development process.

But at the same time, the data and war stories show that there’s still a long way to go before DevSecOps work patterns fully mature within the enterprise. At most organizations, it’s still a struggle to fold security tools into the DevOps-optimized software delivery pipeline. And the majority of DevOps stakeholders still see security as an inhibitor to DevOps agility.

The most recent research came by way of Sonatype this week with its DevSecOps Community Survey. On the good news front, the study shows that in the past three years, the ratio of organizations that test applications throughout the development lifecycle compared to just in production has grown significantly, with close to 2x gains at nearly every early stage step in the development process.

Those organizations that test or analyze for security requirements throughout the entire development process increased from 15% of organizations in 2013 to 27% today. What’s more, among those organizations with high DevOps maturity practices, 42% reported that they test throughout the lifecycle. That higher rate is likely influenced by higher rates of automation: 58% of highly mature DevOps shops automate security testing compared to just 39% of other organizations.

Nevertheless, security still suffers from a perception problem. The survey showed that 59% of organizations believe security is an inhibitor to DevOps agility. A big part of the difficulty is that many security testing tools today are still too far removed from the typical developer’s workflow and tool chain.

While the DevSecOps ideal is to embed security testing directly into the software delivery process, actually doing it is “a whole other ball game,” said Adam Jacob, CTO of Chef, an IT workflow automation vendor that plays heavily in the DevOps world, in a recent podcast.

“If you can’t figure out how to manage that security posture the same way you manage the rest of what you do, it’s really difficult to then tell a software developer that it’s their responsibility to ensure that that posture is good or bad,” he explains. “You can’t really ask them to understand the posture of what it’s going to be like when it’s deployed.

“Because the distance from a software developer making a decision to a software developer talking about how that software should be in production – and what its posture ought to be – is so vast. And their ability to influence it is so low.”

Of course, security professionals struggle to bring these tools closer to the developer because the majority of security testing tools were designed for traditional waterfall development models.

“For those of us who have been involved on the front lines of traditional AppSec activities such as penetration testing, dynamic- or static-code analysis, it may be obvious that the traditional tools and techniques we use were built more for waterfall-native rather than DevOps-native environments,” says Oleg Gryb, chief security architect for Visa. “Yet for executives who came to security from infrastructure, networking, or development domains and have never run a security scan, the challenges of bringing traditional tool sets and practices into the new velocity expectations of DevSecOps may not be so obvious.”

This causes big hiccups in testing, such as in the case of compliance validation. A survey published last week from Chef reports that 64% of DevOps shops have regulatory standards to follow. Of those, 73% wait to assess compliance after development, and 59% wait all the way until code is already running in production. This results in a lot of added strain on developers and inconsistent security, to boot.
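The alternative those numbers argue for is running compliance rules against build artifacts at build time rather than auditing production after the fact. Here is a minimal sketch of that idea; the specific rules and config fields are invented examples, not any regulator's actual requirements.

```python
# Sketch of "compliance in the build": apply the same rules an auditor
# would check in production against a config artifact at build time, so
# violations surface before deployment. Rules and fields are invented.

def check_compliance(config):
    """Return a list of human-readable violations for a deploy config."""
    violations = []
    if not config.get("tls_enabled", False):
        violations.append("TLS must be enabled for all services")
    if config.get("log_retention_days", 0) < 90:
        violations.append("Logs must be retained for at least 90 days")
    if "0.0.0.0/0" in config.get("admin_cidrs", []):
        violations.append("Admin access must not be open to the internet")
    return violations

build_config = {
    "tls_enabled": True,
    "log_retention_days": 30,
    "admin_cidrs": ["10.0.0.0/8"],
}

for v in check_compliance(build_config):
    # Failing the build here replaces the weeks-later production audit.
    print("FAIL:", v)
```

Because the rules are just code, they run on every build for free, which is what makes per-build compliance feasible at DevOps release rates.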

The Chef survey showed that after a compliance violation is found, one in four organizations needs weeks or months to remediate it. In a DevOps world where dozens or even hundreds of builds a day are the delivery norm, that is a geologic age.

According to DJ Schleen, security architect at Aetna, this will require security leaders to not only look for better tools but also get creative about where and how security controls are automated into the workflow.

[Chef executives will present Defining DevOps Metrics at Interop ITX on Tuesday, May 16, at the MGM Grand in Las Vegas.]

“We need to identify ways to observe and collect data in both a passive and an active way. Passive ways of producing data could include methods to calculate defect density for a collection of code after a static code analysis has been performed, or to calculate the risk ranking for a codebase based on code smell – whereas an active way of producing data may be the action of pausing the release pipeline to perform a vulnerability scan itself,” he says.

Schleen says active collection is tricky when integrating automated tools. “When you are generating data in an active way, you can potentially be a bottleneck in your release pipeline,” he notes.
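One way to manage that tradeoff is to give the active gate a time budget: if the scan finishes in time, it can block the release; if it overruns, its findings are recorded for later review and the pipeline moves on. This is a hedged sketch of that pattern with a stubbed-out scanner; the function names and budget are assumptions, not a real tool's API.

```python
# Sketch of the active-vs-passive tradeoff: an active gate pauses the
# pipeline for a scan, but enforces a time budget so the gate itself
# does not become the bottleneck. The scanner is a stub.
import time

SCAN_BUDGET_SECONDS = 120  # budget for the active gate (assumed value)

def run_scan(artifact):
    """Stub scanner: pretend artifacts named '*-clean' have no findings."""
    return [] if artifact.endswith("-clean") else ["example-finding-001"]

def active_gate(artifact, budget=SCAN_BUDGET_SECONDS):
    """Pause the pipeline for a scan; fall back to passive mode on overrun."""
    start = time.monotonic()
    findings = run_scan(artifact)
    elapsed = time.monotonic() - start
    if elapsed > budget:
        # Over budget: record findings for later review (passive mode)
        # and let the release proceed, rather than stalling every build
        # behind a slow scan.
        return {"mode": "passive", "findings": findings, "blocked": False}
    return {"mode": "active", "findings": findings, "blocked": bool(findings)}

print(active_gate("build-42"))        # scan in budget, findings -> blocked
print(active_gate("build-43-clean"))  # scan in budget, no findings -> passes
```

The passive path still produces data (the findings are logged), so the pipeline keeps generating security signal even when it cannot afford to wait on it.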

Security folks would also do well to challenge their comfort zones with regard to tools, says Troy Marshall, DevSecOps and cloud reliability leader for Ellucian, a developer of software for higher education.

“With regard to tooling, don’t be afraid to leave your familiar, trusted tools behind. We had a significant investment in a commercial DAST tool before we started our journey towards DevSecOps and we quickly discovered that it wasn’t well suited for the level of automation we require,” he says. “We looked at a lot of other commercial tools and decided that the available open source tooling was sufficient for our minimum viable product. It hasn’t been perfect but it met our initial needs and we have learned a lot that we can apply going forward.”

Meantime, while security leaders struggle with how to right-size their testing tools to DevOps pipelines, they’re also contemplating how new development tools and methods of building software impact an organization’s security posture. For example, 88% of those surveyed by Sonatype are concerned about the security of containerization technology like Docker.


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: http://www.darkreading.com/application-security/the-true-state-of-devsecops/d/d-id/1328453?_mc=RSS_DR_EDT