STE WILLIAMS

Survey: Majority Of Energy IT Professionals Do Not Understand NERC CIP Version 5 Requirements

PORTLAND, OREGON — November 21, 2013 — Tripwire, Inc., a leading global provider of risk-based security and compliance management solutions, today announced the results of a survey on NERC CIP Compliance. The online survey was conducted from July through September 2013 and evaluated the attitudes of more than 100 IT professionals.

According to a report by the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), the energy industry faced more cyberattacks than any other industry sector from October 2012 through May 2013, and a successful attack on any of the country’s sixteen critical infrastructure sectors could have devastating results. However, Tripwire’s survey indicates that IT professionals are still unclear on the most recent version of the North American Electric Reliability Corporation’s (NERC) critical infrastructure protection (CIP) security controls.

The survey reveals that 70% of the respondents have a clear understanding of current NERC CIP compliance requirements. However, that confidence quickly evaporates in the face of the upcoming version – 62% of respondents say they do not understand the requirements of NERC CIP version 5.

“NERC CIP version 5 represents significant security and compliance changes and will affect most of North America’s power and utilities companies,” said Jeff Simon, director of service solutions for Tripwire. “Although version 5 has been submitted but not yet approved by the Federal Energy Regulatory Commission, power and utility companies still need to understand the impact of the increase in scope and the need for automation. NERC CIP version 5 should already be a key part of their 2014 initiatives.”

Additional survey findings include:

55% are currently preparing to comply with NERC CIP version 5.

83% believe CIP version 5 will enhance the security of the Bulk Electric System (BES).

63% collect the majority of evidence needed for NERC CIP compliance audits manually or with limited support from automation.

57% do not have the automation tools in place to efficiently prepare for their next NERC CIP audit.

Tripwire has helped registered entities achieve and maintain NERC compliance since 2008. With Tripwire’s NERC Solution Suite, organizations can access award-winning security configuration management and incident detection solutions, along with specialized intelligence including policy rules, correlation rules, tools, templates, customized reports and dashboards. Together with customized services from NERC-experienced consultants, the NERC Solution Suite dramatically reduces the time and resources required to pass NERC CIP audits and minimize audit findings.

For more information, please visit: http://www.tripwire.com/company/research/update-nerc-survey-data/.

About Tripwire

Tripwire is a leading global provider of risk-based security and compliance management solutions, enabling enterprises, government agencies and service providers to effectively connect security to their business. Tripwire provides the broadest set of foundational security controls including security configuration management, vulnerability management, file integrity monitoring, log and event management. Tripwire solutions deliver unprecedented visibility, business context and security business intelligence allowing extended enterprises to protect sensitive data from breaches, vulnerabilities, and threats. Learn more at www.tripwire.com, get security news, trends and insights at http://www.tripwire.com/state-of-security/ or follow us on Twitter @TripwireInc.

Article source: http://www.darkreading.com/privacy/survey-majority-of-energy-it-professiona/240164148

Technology Sector Lags In Security Effectiveness, Analysis Shows

CAMBRIDGE, MA – Nov. 20, 2013 – BitSight Technologies, the only company to measure security effectiveness using objective and evidence-based ratings, today released the first BitSight Insight report, which analyzed security ratings for over 70 Fortune 200 companies in four industries – energy, finance, retail and technology. The objective was to uncover quantifiable differences in security effectiveness and performance across industries from October 2012 through September 2013. The study revealed that, in spite of all the attacks from criminals and hacktivists over the past year, the financial industry has performed well. In contrast, the technology industry falls far behind, pointing to a need for greater focus on cyber risk management in this sector.

Leveraging big data for observed security incidents, including communication with known command and control servers, spam propagation and malware distribution, BitSight SecurityRatings provide a unique perspective on risk from the outside-in. BitSight SecurityRatings range from 250 to 900, with higher numbers equating to better security effectiveness. Factors used to determine ratings include the classification, frequency, and duration of observed security incidents.
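BitSight does not publish its rating algorithm, but the factors named above (classification, frequency and duration of incidents) can be illustrated with a toy model. Everything here is invented for illustration – the class names, the weights and the squashing formula – and only the 250-900 band comes from the article:

```python
# Toy model only: BitSight's real algorithm and weights are proprietary.
# Incident classification, frequency and duration feed a penalty that is
# squashed into the published 250-900 band (no incidents scores 900).

SEVERITY_WEIGHT = {"botnet": 3.0, "malware": 2.0, "spam": 1.0}  # invented classes/weights

def toy_security_rating(incidents):
    """incidents: list of (classification, duration_in_days) tuples."""
    penalty = sum(SEVERITY_WEIGHT.get(kind, 1.0) * days for kind, days in incidents)
    return max(250, 900 - int(penalty * 10))

print(toy_security_rating([]))                            # 900
print(toy_security_rating([("botnet", 5), ("spam", 2)]))  # 730
```

The shape matters more than the numbers: frequency enters through the number of incidents summed, duration through the per-incident multiplier, and classification through the weight table.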

“BitSight’s outside-in, data-driven approach to rating shows clear differences amongst industries,” said Stephen Boyer, BitSight co-founder and CTO. “By looking at evidence of compromise, we focus on outcomes rather than policies. Companies can have very similar policies, but their effectiveness can still vary widely. For example, the technology sector companies included in this analysis had significantly lower ratings than companies in the financial services sector. The spread is surprising.”

Key Findings

Technology Lags

Technology companies are required to comply with the regulations of the industries they serve, including HIPAA, PCI DSS and FISMA. With this in mind, it is surprising that the technology sector measures dramatically lower than retail and finance. High-profile breaches, including those at Bit9 and Adobe, may indicate why the technology industry rates as less effective. The Adobe data breach, which also impacted Dun & Bradstreet and LexisNexis, went unobserved for months before a third-party researcher uncovered the incident. Initial reports suggest these breaches occurred as early as March 2013, yet went unreported until October 2013. The persistence of attacks and time to remediate are key contributors to low SecurityRatings in this industry.

Finance Rates Highest

In spite of frequent cyber attacks on financial institutions, the finance sector rated most favorably in terms of security effectiveness. Along with the known tactics employed by cyber criminals, this sector was also hit with several politically motivated DDoS attacks. One reason for the high SecurityRatings is that the companies assessed were quicker to respond to threats than their peers in other industries. Faster response time leads to less damage and loss. Due in part to the need to meet security and privacy regulations and the desire to stay out of news headlines for losing customer data, financial institutions tend to focus more executive-level resources on IT security and risk management than other industries.

Energy and Retail

The energy sector, which includes utilities and oil and gas companies, rated highly in Q4 2012, but fell sharply in the first half of 2013 when faced with extensive malware and botnet attacks. Not only was there an increase in the number of security incidents in Q1 2013, but energy companies were slow to respond to these incidents. However, in the third quarter of 2013, the energy industry’s average effectiveness shows an upward trend, suggesting that they may be getting better at thwarting cyber attacks.

The retail sector, which excludes solely online retailers, also started out on an upward trend in Q4 2012, but then hit a rough patch in Q1 2013, showing that they continue to be an attractive target for cybercriminals seeking access to identity and financial information. The retailers included in this study faced an increase in botnet, spam, phishing and malware attacks in Q1 2013, and took longer to remediate attacks as their frequency increased. The SecurityRatings of this group have remained relatively flat in the past two quarters, leaving much room for improvement.

About the BitSight Industry SecurityRatings

Industry SecurityRatings were created from the BitSight SecurityRating averages of over 70 U.S. based organizations in the Fortune 200 from October 2012 to September 2013. Below are explanations of the industry designations.

● Energy: utilities and oil and gas

● Finance: banking, investment, securities and financial services

● Retail: primarily brick and mortar organizations

● Technology: device, hardware and software manufacturers and service providers

To download a full copy of the BitSight Insight report, visit http://info.bitsighttech.com/industry-ratings. For more information on the BitSight Partner SecurityRating service, visit www.bitsighttech.com.

About BitSight Technologies


BitSight Technologies is transforming how companies manage information security risk with objective, evidence-based security ratings. The company’s SecurityRating Platform continuously analyzes vast amounts of external data on security behaviors in order to help organizations make timely risk management decisions. Based in Cambridge, MA, BitSight is backed by Commonwealth Capital Ventures, Flybridge Capital Partners, Globespan Capital Partners, and Menlo Ventures. For more information, please visit www.bitsighttech.com or follow @BitSight on Twitter.

Article source: http://www.darkreading.com/government-vertical/technology-sector-lags-in-security-effec/240164171

Hack of online dating site Cupid Media exposes 42 million plaintext passwords

More than 42 million plaintext passwords hacked out of online dating site Cupid Media have been found on the same server holding tens of millions of records stolen from Adobe, PR Newswire and the National White Collar Crime Center (NW3C), according to a report by security journalist Brian Krebs.

Cupid Media, which describes itself as a niche online dating network that offers over 30 dating sites specialising in Asian dating, Latin dating, Filipino dating, and military dating, is based in Southport, Australia.

Krebs contacted Cupid Media on 8 November after seeing the 42 million entries – entries which, as shown in an image on the Krebsonsecurity site, include passwords stored in plain text alongside customer email addresses that the journalist has redacted.

Cupid Media subsequently confirmed that the stolen data appears to be related to a breach that occurred in January 2013.

Andrew Bolton, the company’s managing director, told Krebs that the company is currently making sure that all affected users have been notified and have had their passwords reset:

In January we detected suspicious activity on our network and based upon the information that we had available at the time, we took what we believed to be appropriate actions to notify affected customers and reset passwords for a particular group of user accounts. … We are currently in the process of double-checking that all affected accounts have had their passwords reset and have received an email notification.

Bolton downplayed the 42 million number, saying that the affected table held “a large portion” of records relating to old, inactive or deleted accounts:

The number of active members affected by this event is considerably less than the 42 million that you have previously quoted.

Cupid Media’s quibble on the size of the breached data set is reminiscent of that which Adobe exhibited with its own record-breaking breach.

Adobe, as Krebs reminds us, found it necessary to alert only 38 million active users, though the number of stolen emails and passwords reached the lofty heights of 150 million records.

More relevant than arguments about data-set size is the fact that Cupid Media claims to have learned from the breach and is now seeing the light as far as encryption, hashing and salting goes, as Bolton told Krebs:

Subsequently to the events of January we hired external consultants and implemented a range of security improvements which include hashing and salting of our passwords. We have also implemented the need for consumers to use stronger passwords and made various other improvements.
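Bolton doesn’t say which scheme the consultants implemented, but the general approach he describes – a random per-user salt plus a deliberately slow key-derivation function – can be sketched with Python’s standard library (PBKDF2 here is an illustrative choice, not a claim about Cupid Media’s actual stack):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random per-user salt."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("123456")
print(verify_password("123456", salt, digest))  # True
print(verify_password("guess", salt, digest))   # False
```

The salt defeats precomputed rainbow tables, and the 100,000 iterations make brute-forcing each individual hash expensive – exactly the properties plaintext storage lacks.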

Krebs notes that it could well be that the exposed customer records are from the January breach, and that the company no longer stores its users’ information and passwords in plain text.

Whether those email addresses and passwords are reused on other sites is another matter entirely.

Chad Greene, a member of Facebook’s security team, said in a comment on Krebs’s piece that Facebook’s now running the plain-text Cupid passwords through the same check it did for Adobe’s breached passwords – i.e., checking to see if Facebook users reuse their Cupid Media email/password combination as credentials for logging onto Facebook:

Chad
November 20, 2013 at 10:07 am
I work on the security team at Facebook and can confirm that we are checking this list of credentials for matches and will enroll all affected users into a remediation flow to change their password on Facebook.

Facebook has confirmed that it is, in fact, doing the same check this time around.

It’s worth noting, again, that Facebook doesn’t have to do anything nefarious to know what its users’ passwords are.

Given that the Cupid Media data set held email addresses and plaintext passwords, all Facebook has to do is attempt an automatic login using the identical email/password combinations.

If the security team gets account access, bingo! It’s time for a chat about password reuse.
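A sketch of that reuse check follows, with an invented in-memory user store and PBKDF2 standing in for whatever hashing Facebook actually uses – in effect the leaked credentials are run through the site’s normal password-verification path:

```python
# Hypothetical sketch of a credential-reuse check: run each leaked
# email/password pair through the site's own verification path and
# flag matching accounts for a forced password reset.

import hashlib
import os

def _hash(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Toy user store: email -> (salt, digest), as a real site would keep it.
_salt = os.urandom(16)
user_store = {"alice@example.com": (_salt, _hash("123456", _salt))}

def flag_reused_credentials(leaked_pairs):
    flagged = []
    for email, plaintext in leaked_pairs:
        record = user_store.get(email)
        if record and _hash(plaintext, record[0]) == record[1]:
            flagged.append(email)  # enroll in a password-reset flow
    return flagged

print(flag_reused_credentials([("alice@example.com", "123456"),
                               ("bob@example.com", "aaaaaa")]))
# ['alice@example.com']
```

No plaintext is ever stored on the defending site: the leaked password is hashed with each matching user’s salt and compared against the digest already on file.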

It’s an extremely safe bet to say that we can expect plenty more “we have stuck your account in a closet” messages from Facebook with regards to the Cupid Media data set, given the head-bangers that people used for passwords.

To wit: “123456” was the password for 1,902,801 Cupid Media records.

And as one commenter on Krebs’s story noted, the password “aaaaaa” was employed in 30,273 customer records.
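Tallying a leaked dump to surface head-bangers like these is nearly a one-liner with Python’s `collections.Counter` (the tiny dump below is invented; the real set had “123456” on 1,902,801 records and “aaaaaa” on 30,273):

```python
from collections import Counter

# Invented stand-in for a breach dump.
leaked_passwords = ["123456", "123456", "aaaaaa", "qwerty", "123456"]

counts = Counter(leaked_passwords)
for password, n in counts.most_common(3):
    print(password, n)  # most frequent password first
```

This is exactly the kind of frequency analysis researchers ran on the Cupid Media and Adobe dumps to produce the worst-passwords lists.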

JCitizen’s comment:

That is probably what I would also say if I discovered this breach and were a former customer! (add exclamation point) :D

Amen, citizen!

Image of broken heart courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gFqptogV8_I/

Security pros: If Healthcare.gov hasn’t been hacked already, it will be soon

Four cyber security experts have delivered to the US Congress a unanimous opinion: Americans shouldn’t use HealthCare.gov, given its security issues.

David Kennedy, CEO of information security firm TrustedSec and former CSO of Diebold, was one of those who testified on Tuesday before a House Science, Space, and Technology committee hearing on security concerns surrounding the woebegone, very large attack target that is the government’s new healthcare website.

The committee wanted answers to this question: “Is my data on HealthCare.gov secure?”

As FoxNews.com reported, Kennedy testified that the answer is No, given that the site’s pwnage is inevitable:

Hackers are definitely after it. … And if I had to guess, based on what I can see … I would say the website is either hacked already or will be soon.

Kennedy told FoxNews.com that his firm has detected a large number of SQL injection attacks against the site, which indicates “a large amount” of hacking attempts:

Based on the exposures that I identified, and many that I haven’t published due to the criticality of exposures, if a hacker wanted access to the site or sensitive information, they could get it.
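SQL injection, the attack class Kennedy says his firm detected in volume, works by splicing attacker-controlled input into the text of a query; parameterized queries close the hole by keeping data separate from SQL code. A minimal sketch using an in-memory SQLite database (table, column and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

def find_user_unsafe(name):
    # VULNERABLE: attacker input is interpolated straight into the SQL text.
    return conn.execute(f"SELECT ssn FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT ssn FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # dumps every row in the table
print(find_user_safe(payload))    # [] - the payload matches nothing
```

The classic `' OR '1'='1` payload turns the unsafe query’s WHERE clause into a tautology, which is how injection attacks exfiltrate entire tables of personal data.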

Also testifying were Fred Chang, a computer science professor at Southern Methodist University and former research director for the NSA; Avi Rubin, a computer science professor at Johns Hopkins University; and Morgan Wright, CEO of Crowd Sourced Investigations, cybersecurity analyst for Fox News and Fox Business, and a former senior law enforcement advisor to the Republican National Convention.

Three of the four testified that they believe it’s best to shut HealthCare.gov down completely.

The lone voice of dissent on that point was Rubin, who said he doesn’t have enough information to decide, but that a security review of the site is definitely in order.

From Network World’s coverage:

I would need to know whether there are inherent flaws vs. superficial problems that can be fixed. If they can be fixed, that’s better than shutting it down.

Kennedy said that given what he’s been able to suss out from public record and reconnaissance of the site, he could break into its data stores within two days and steal the personal information of people who’ve used the site.

As Network World’s Tim Greene reports, Kennedy demonstrated that he could redirect people trying to access the site to a lookalike site that could push malware that would allow attackers to hijack people’s devices.

Kennedy’s explanation, via ABC News:

We can actually enable their web cam, monitor their web cam, listen to their microphone, steal passwords. … Anything that they do on their computer we now have full access to.

CBS News reports that Henry Chao, the project manager responsible for building HealthCare.gov, gave 9 hours of closed-door testimony to the House Oversight Committee in advance of this week’s hearing.

A CBS News video clip put up by Townhall.gov shows the heavily redacted security report that Chao claims he never saw.

Chao told the House Oversight Committee that his team told him that “there were no ‘high’ findings” – “high” referring to government classification of “high risk”, which designates that a vulnerability can be expected to have severe or catastrophic adverse effects on organisational operations, assets or individuals.

Vulnerabilities rated “high risk” could lead to identity theft, unauthorized access, and misrouted data.

It was Chao who recommended it was safe to launch the site at the start of October.

When asked if he found it surprising that he hadn’t seen the memo advising about high-risk vulnerabilities on HealthCare.gov – a heavily redacted copy of which was shown on CBS News’s report – he said that yes, of course he was surprised:

Wouldn’t you be surprised, if you were me?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QFYzzPcK-pU/

SCADA flaws put world leaders at risk of TERRIBLE TRAFFIC JAM

In November 2014, leaders of the G20 group of nations will convene in Brisbane, Australia, for a few days of high-level talks aimed at ensuring global stability and amity (or, as the conspiracy-minded would have it, plotting to form a one-world government).

Queensland, the Australian state in which Brisbane is located, is leaving no preparatory stone unturned as it readies itself for the summit. For example: new laws mean it will be illegal to carry a reptile, fly a kite or use a laser pointer close to the venues used for the meeting.


The State has also conducted a review of its traffic management systems, mostly to figure out how to improve traffic flow but also with half an eye on the G20 summit and the likely online attacks and protests it will attract. That review’s report (PDF) explains how the authors attempted penetration tests against Queensland’s two operators of intelligent transport systems (ITS) and succeeded in both attempts.

“The entities audited did not actively monitor and manage information technology security risks and did not have comprehensive staff security awareness programs,” the report notes. Managers assumed the SCADA kit in use was secure, staff weren’t aware of social engineering or other attacks and it was possible to extract information from both traffic system operators with USB keys.

The possible outcomes of this negligence, the report says, are as follows:

“If the systems were specifically targeted, hackers could access the system and potentially cause traffic congestion, public inconvenience and affect emergency response times. Such attacks could also cause appreciable economic consequences in terms of lost productivity.”

Happily, both of Queensland’s ITS operators have unreservedly accepted the report’s recommendations.

Unhappily, the SCATS system used for one Queensland ITS is also deployed in 26 nations beyond Australia. Transmax, the company behind the STREAMS system used by Queensland’s other ITS operator, is seeking partners to enter the UK market. Whether either outfit has closed the holes the report identified is not known. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/21/scada_flaws_put_world_leaders_at_risk_of_terrible_traffic_jam/

Darknet: It’s not just for DRUGS. Ninja Banking Trojan uses it too

Russian-speaking virus writers have brewed up a stealthy strain of banking Trojan that communicates over peer-to-peer networks using an encrypted darknet protocol that’s arguably even stealthier than TOR: I2P.

The i2Ninja malware offers a similar set of capabilities to other major financial malware such as ZeuS and SpyEye – including HTML injection and form-grabbing for all major browsers (Internet Explorer, Firefox and Chrome), as well as an FTP grabber and a soon-to-be-released VNC (Virtual Network Computing) module, which will allow remote control of compromised desktops.


In addition, the Trojan also provides a PokerGrabber module targeting major online poker sites and an email grabber.

But what really sets the malware apart is its arcane communications technology, as a blog post by transaction security firm Trusteer explains.

The i2Ninja takes its name from the malware’s use of I2P – a networking layer that uses cryptography to allow secure communication between its peer-to-peer users. While this concept is somewhat similar to TOR and TOR services, I2P was designed to maintain a true Darknet, an Internet within the Internet where secure and anonymous messaging and use of services can be maintained. The I2P network also offers HTTP proxies to allow anonymous internet browsing.

Using the I2P network, i2Ninja can maintain secure communications between the infected devices and command and control server. Everything from delivering configuration updates to receiving stolen data and sending commands is done via the encrypted I2P channels. The i2Ninja malware also offers buyers a proxy for anonymous internet browsing, promising complete online anonymity.

Trusteer, which was recently acquired by IBM, came across the i2Ninja malware through a posting on a Russian cybercrime forum. Etay Maor, a fraud prevention manager at Trusteer, explains that around-the-clock support is on hand for potential customers of the cybercrime tool.

“Another feature of I2P by i2Ninja is an integrated help desk via a ticketing system within the malware’s command and control,” Maor explains. “A potential buyer can communicate with the authors/support team, open tickets and get answers – all while enjoying the security and anonymity provided by I2P’s encrypted messaging nature. While some malware offerings have offered an interface with a support team in the past (Citadel and Neosploit to name two), i2Ninja’s 24/7 secure help desk channel is a first.”

The post advertising i2Ninja was actually copied from a different source and shared within the forum on a thread discussing P2P Trojans, Maor adds.

“The cybercriminal who originally made the offer commented on this thread and confirmed that indeed this malware is for sale at this time. As the thread progressed that same cybercriminal requested that the thread [be] shut down as he [had] received many requests for purchasing the i2Ninja malware,” he adds.

Trusteer reckons the malware would most likely spread via the usual vectors – drive-by-download infection, fake ads, email attachments etc. The purchase or rental price of the Trojan remains undetermined. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/21/ninja_banking_malware/

Meet Stuxnet’s stealthier older sister: Super-bug turned Iran’s nuke plants into pressure cookers

Analysis Super-malware Stuxnet had an older sibling that was also designed to wreck Iran’s nuclear facilities albeit in a different way, according to new research.

The elder strain of the worm, dubbed Stuxnet Mark I, dates from 2007 – three years before Stuxnet Mark II was discovered and well documented in 2010.


Writing in Foreign Policy magazine yesterday, top computer security researcher Ralph Langner claimed that the Mark I version of the weapons-grade malware would infect the computers controlling Iran’s sensitive scientific equipment, and carefully ramp up the pressure within high-speed rotating centrifuges: these machines are vital in Iran’s uranium enrichment process as they separate out the uranium-235 isotope used in, say, nuclear power plants and atomic weapons.

Crucially, the malware did this by overriding gas valves attached to the equipment while hiding sensor readings of the abnormal activity from the plant’s engineers and scientists. The end goal was to sabotage the cascade protection system that kept thousands of 1970s-era centrifuges operational.

The 2010 version, by contrast, targeted the centrifuge drive systems: it quietly sped up and slowed down rotors connected to centrifuges until they reached breaking point, triggering an increased rate of failures as a result.

Stuxnet Mark II famously hobbled high-speed centrifuges at Iran’s uranium enrichment facility at Natanz in 2009 and 2010 after infecting computers connected to SCADA industrial control systems at the plant. This flavour of Stuxnet was allegedly developed as part of a wider US-Israeli cyber-warfare effort, codenamed Operation Olympic Games, that began under the presidency of George W Bush.

But prior to that, Stuxnet Mark I sabotaged the protection system the Iranians hacked together to keep their obsolete and unreliable IR-1 centrifuges safe, as Langner explained in detail in his 4,200-word article. Once installed on computers controlling the equipment, the subtle overpressure attack ultimately damaged the machinery beyond repair, forcing engineers to replace it. The malware took great care to closely monitor its effects, allowing its masters to carefully avoid any activity that may result in immediate, catastrophic destruction – because that would have led to a postmortem examination that could have exposed the stealthy sabotage.

Samples of the Mark I malware were submitted to online malware clearing house VirusTotal in 2007, but it was only recognised as such five years later in 2012.

The results of the overpressure attack are unknown, but whatever they were, Stuxnet Mark I’s handlers decided to try something different in 2009, deploying the Mark II variant that became famous after it accidentally escaped into the wild in 2010. Langner reckons Stuxnet Mark II was “much simpler and much less stealthy than its predecessor” – a less complex yet more elegant Stuxnet could have proved more effective and reliable than the convoluted Mark I version.

The Mark I had to be installed on a computer connected to the industrial control system to carry out its sabotage, or otherwise infect a machine from a USB drive; it was probably installed by a human, either wittingly or unwittingly.

Later, the Mark II spread over local-area networks, exploited zero-day Microsoft Windows vulnerabilities to silently install itself, and was equipped with stolen digital certificates so its driver-level code appeared to be legitimately signed software. But this made Mark II easy to recognise as malign by antivirus experts once it was discovered.

Langner, well known for his earlier Stuxnet analysis, reckons the Mark II escaped into the wild after it infected the Windows laptop of a sub-contractor who subsequently connected the PC to the wider web, contrary to the myth that the malware spread itself across the internet as the result of an internal software bug.

Having compromised industrial control systems at Iran’s nuclear centre, Stuxnet’s masters “were in a position where they could have broken the victim’s neck, but they chose continuous periodical choking instead”, according to Langner.

The Mark II’s effect on Iran

He reckoned the 2010 build of Stuxnet set back the Iranian nuclear programme by two years: it subtly reduced the centrifuges’ ability to reliably enrich uranium at volume, forcing the scientists to tear their hair out in frustration and chase a ghost in the machine. This was a far longer delay than if the software nasty triggered the sudden catastrophic destruction of all operating centrifuges, because Iran would have been able to diagnose the problem and rebuild its processing plant using spares.

The effectiveness of the whole scheme is a matter of some dispute among foreign policy and security analysts, with some even arguing it ultimately galvanised Iran’s nuclear efforts.

That issue aside, the stealth and disguise of the early version of Stuxnet came at the cost of vastly increasing the difficulty of creating the cyber-munition, according to Langner:

I estimate that well over 50 percent of Stuxnet’s development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack which represents the ultimate in disguise – at the cost of having to build a fully-functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride.

And while Stuxnet was clearly the work of a nation-state – requiring vast resources and considerable intelligence – future attacks on industrial control and other so-called “cyber-physical” systems may not be. Stuxnet was particularly costly because of the attackers’ self-imposed constraints. Damage was to be disguised as reliability problems.

And unlike the Stuxnet attackers, these adversaries are also much more likely to go after civilian critical infrastructure. Not only are these systems more accessible, but they’re standardised. In fact, all modern plants operate with standard industrial control system architectures and products from just a handful of vendors per industry, using similar or even identical configurations.

In other words, if you get control of one industrial control system, you can infiltrate dozens or even hundreds more of the same breed.

Langner’s research adds a missing chapter to the already complex story of Stuxnet which continues to interest both military strategists and security researchers because it showed that malware could be used to physically sabotage equipment even in closely guarded facilities.

“The Stuxnet revelation showed the world what cyberweapons could do in the hands of a superpower,” Langner concluded. “It also saved America from embarrassment. If another country – maybe even an adversary – had been first in demonstrating proficiency in the digital domain, it would have been nothing short of another Sputnik moment in US history. So there were plenty of good reasons not to sacrifice mission success for fear of detection.”

The publication of the article [PDF] coincides with the release of a white paper by Langner on Stuxnet, entitled To Kill a Centrifuge: A Technical Analysis of What Stuxnet’s Creators Tried to Achieve. The white paper combines results from reverse engineering the attack code with intelligence on the design of the attacked plant and background information on the attacked uranium enrichment process to provide what’s billed as the most comprehensive research on the Stuxnet malware to date. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/21/stuxnet_fearsome_predecessor/

Study: Most Application Developers Don’t Know Security, But Can Learn

NEW YORK, N.Y — AppSec USA 2013 — Most application developers still aren’t security-savvy, but training can make a difference, according to study results outlined here Wednesday.

The study, conducted by professional services firm Denim Group, tested some 600 application developers — most of whom had fewer than three days of security training — on their knowledge of secure coding practices.

Quizzed on 15 questions, fewer than a third of the respondents (27 percent) answered more than 70% of the test correctly. The average score on the quiz was 59%. Developers with more than seven years of experience fared no better than those with fewer than three years’ experience.

“Most of them understood high-level concepts, such as how to recognize a cross-site scripting vulnerability, but when we asked them how to remediate it, most of them couldn’t answer correctly,” said John Dickson, CEO of Denim Group, in a presentation of the study results.
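The gap Dickson describes is concrete: spotting reflected script in a page is easier than knowing the fix, which is contextual output encoding of untrusted input. A minimal sketch in Python — the study doesn't specify a language or framework, and `render_comment` is a hypothetical helper, not from the article:

```python
import html

def render_comment(user_input: str) -> str:
    # Remediation for reflected XSS: encode the untrusted value for the
    # HTML body context before it is concatenated into markup.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

# Injected markup is rendered as inert text rather than executed:
rendered = render_comment('<script>alert(1)</script>')
print(rendered)
# <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The same principle applies in any templating system: encode at output time, for the specific context (HTML body, attribute, JavaScript, URL) the value lands in.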

The Denim Group then gave the respondents a secure application development course and tested them again. After training, the average score on the test rose to 74%, and about two-thirds of respondents scored 70% or higher. The students reported that their security-related application vulnerabilities were reduced by 70% after training.

“What this shows is that security training makes a difference,” Dickson said. “If you do training right, you can expect to reduce vulnerabilities.”

The study also pointed out some flaws in the secure development process. For example, while most application development teams rely on quality assurance staff to catch security problems in developing code, QA staff turned in the lowest scores on the initial test, scoring lower than those who described themselves as developers or architects.

“A lot of companies put their least experienced people on the QA team, and they are actually the least knowledgeable in security,” Dickson observed. “But without any training, how are those people supposed to catch the security issues?”

Interestingly, respondents who worked in the largest companies — companies with 10,000 employees or more — scored lowest on the initial test, with a passing rate of just 19%.

“The category that does the most in-house development had the lowest scores,” Dickson said.

Experts at the AppSec 2013 conference said that until developers become more security-savvy, the incidence of vulnerabilities will remain high.

“Security is still not built into the pre-production process,” said Bala Venkat, chief marketing officer at Cenzic, an application security tool vendor. “It’s not built into the application development process, it’s not part of the education process at most universities. Until security training becomes a requirement, we will continue to have problems.”

Although software development tools are rapidly evolving and attacks are becoming more sophisticated, most companies are still wrestling with application vulnerabilities that are years old, noted Chris Eng, vice president of research at Veracode, an application security tool vendor.

“We’re still seeing SQL injection flaws that we’ve been seeing for a decade,” Eng said. “The core vulnerabilities are still the same, and that speaks to the need for better training. You can’t just give a classroom course and forget it. You have to retest and reinforce the concepts, and let your developers see them in practice.”
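The decade-old flaw Eng cites has an equally old fix: bound parameters instead of string concatenation. A minimal sketch using Python’s built-in sqlite3 module — the table and data are illustrative, not from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query itself,
# so the always-true OR clause matches every row.
vulnerable = "SELECT role FROM users WHERE name = '" + attacker_input + "'"
vulnerable_rows = conn.execute(vulnerable).fetchall()

# Safe: a bound parameter is treated strictly as data, never as SQL,
# so the literal string "nobody' OR '1'='1" matches nothing.
safe = "SELECT role FROM users WHERE name = ?"
safe_rows = conn.execute(safe, (attacker_input,)).fetchall()

print(vulnerable_rows)  # [('admin',)] — injection succeeded
print(safe_rows)        # [] — injection neutralized
```

Every mainstream database driver offers the same placeholder mechanism, which is why the persistence of the flaw speaks to training rather than tooling.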

Most organizations still don’t offer incentives for developers to write secure code, noted Jeremiah Grossman, founder and CTO of application security vendor WhiteHat Security.

“Do software developers get rewarded for writing code without vulnerabilities? No, they are usually rewarded for speed and functionality. But if you hold developers accountable for vulnerabilities and incent them to write more secure code, then it matters to them. Then you begin to see an impact.”

“New requirements for training may change the way software developers approach security,” Dickson said. The new Payment Card Industry Data Security Standard 3.0, for example, requires that developers be trained and tested in secure coding practices, he noted.

“As security becomes more important to industries and organizations, I believe we will see a real change in the way developers are trained,” said Cenzic’s Venkat. “It has to be a mandate. I’m positive it will happen.”


Article source: http://www.darkreading.com/vulnerability/study-most-application-developers-dont-k/240164162

Who’s The Boss Over Your JBoss Servers?

A widely unpatched vulnerability in JBoss Application Server (AS) discovered back in 2011 is opening up tens of thousands of enterprise data center servers to attack, with at least 500 actively compromised, according to a report out this week by Imperva. The analysis done by Imperva’s security research team suggests that enterprises are not hardening their servers adequately and as a result are putting their entire data center operations at risk.

“The attackers are looking to circumvent methods that are supposed to be hardened because they expect vendors not to do a good job hardening their administrative access or functions,” says Barry Shteiman, director of security strategy for Imperva. “Because of that, attackers are using that to inject standard or classic forms of attack — in this case, a webshell — which generally allows them full control over the server.”

In this instance, Shteiman and his team noticed the attack trend after seeing a surge of attacks in online systems that demonstrated features they hadn’t commonly seen before. Looking into it further, the team found the attacks all shared a distinct commonality: They were all suffered by JBoss servers.

“When we looked into it, we found that JBoss has a component called HTTPInvoker, and that component was found vulnerable, similarly to some other vulnerabilities we looked into recently that basically allowed an administrative function to be accessed without actually being an administrator logged in,” Shteiman says. “In this case, it’s a function that is supposed to populate new servlets or new pieces of code in the server. In a default state, JBoss allows that function to be used by anyone that wants it.”
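The exposure Shteiman describes can be checked from the outside: a hardened JBoss instance should refuse anonymous requests to the invoker servlets. A rough probe sketch, using only the standard library; the specific paths and the 200-means-exposed heuristic are assumptions for illustration, not details from the article:

```python
from urllib.request import urlopen, Request
from urllib.error import HTTPError, URLError

# Commonly cited HTTP invoker endpoints on JBoss AS (assumed paths).
INVOKER_PATHS = [
    "/invoker/JMXInvokerServlet",
    "/invoker/EJBInvokerServlet",
]

def invoker_exposed(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if any invoker servlet answers anonymously with HTTP 200.

    A hardened server should answer 401/403 or not expose the path at all;
    a plain 200 suggests unauthenticated callers can reach the invoker.
    """
    for path in INVOKER_PATHS:
        try:
            resp = urlopen(Request(base_url.rstrip("/") + path), timeout=timeout)
            if resp.status == 200:
                return True
        except (HTTPError, URLError):
            continue  # auth challenges, 404s, and timeouts all count as "not open"
    return False
```

Only run such a probe against servers you are authorized to test; the definitive fix is to update JBoss and restrict the invoker behind authentication.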

[How do you know if you’ve been breached? See Top 15 Indicators of Compromise.]

Attackers leveraged that hole to inject a webshell on vulnerable servers and achieve “full control over the data center,” says Shteiman. He says his research indicates around 500 JBoss servers are currently compromised, by anywhere between 15 and 17 flavors of webshells. Among those, the most popular are a webshell called pwn.jsp, which was demonstrated in an exploit published last month, and a slicker crimeware webshell called JspSpy.

As Shteiman explains, JBoss AS is the de facto server platform for enterprises writing applications in Java.

“A lot of trading companies and banks are using it to hold up their main banking applications,” says Shteiman. Even more frightening is its popularity among technology vendors that use it as a component for enterprise products and who could potentially be compromised before even shipping, essentially sending out products with built-in backdoors, he says.

The vulnerability in question was actually found in 2011 by Luca Carettoni, at the time a senior security consultant for Matasano Security, who then reported about 7,000 servers online susceptible to the vulnerability. Since then, rather than declining, the number of vulnerable servers has tripled, says Shteiman, who believes that part of the problem was a misclassification in the CVE database.

“It was classified as a vulnerability that affected product elements of HP ProCurve, and therefore I don’t think anyone ever understood the research to its full effect,” he says.

Nevertheless, security experts say this is something that should have been caught by more organizations, given the age of the vulnerability discovery.

“When the solution to this JBoss exploit is to simply update the affected servers, there is hardly any excuse for anyone to be affected by it, especially when the vulnerability has been discovered more than two years ago,” says Michael Yuen, security engineer at application security firm Cenzic.


Article source: http://www.darkreading.com/attacks-breaches/whos-the-boss-over-your-jboss-servers/240164144