
Kali Linux can now use cloud GPUs for password-cracking

Think passwords, people. Think long, complex passwords. Not because a breach dump’s landed, but because the security-probing-oriented Kali Linux just got better at cracking passwords.

Kali is a Debian-based Linux that packs in numerous hacking and forensics tools. It’s well-regarded among white hat hackers and investigators, who appreciate its inclusion of the tools of their trades.

The developers behind the distro this week gave it a polish, adding new images optimised for GPU-using instances in Azure and Amazon Web Services. The extra grunt the GPUs afford, Kali’s backers say, will enhance the distribution’s password-probing powers. There’s also better support for GPU cracking, hence our warning at the top of this story: anyone can use Kali and there’s no way to guarantee black hats won’t press it into service. And they can now do so on as many GPU-boosted cloud instances as they fancy paying for.
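To put the long-password advice in numbers, here is a minimal back-of-the-envelope sketch in Python. The guess rate is an illustrative assumption, not a benchmark of any particular GPU instance, hash algorithm or cracking tool:

```python
# Rough keyspace arithmetic for offline password cracking.
# The guess rate below is an illustrative assumption, not a measured
# figure for any specific GPU, hash algorithm, or cloud instance.

GUESSES_PER_SECOND = 10**10  # hypothetical rate for a rented GPU fleet

def worst_case_days(alphabet_size: int, length: int) -> float:
    """Days to exhaust the full keyspace at the assumed guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / 86_400

# 8 lowercase letters vs. 14 characters drawn from ~95 printable ASCII symbols.
print(f"8 lowercase chars : {worst_case_days(26, 8):.4f} days")
print(f"14 mixed chars    : {worst_case_days(95, 14):.2e} days")
```

Even allowing for smarter dictionary and rule-based guessing, length and character variety remain the levers users actually control, which is the point of the warning above.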

The new distribution, version 2017.1, also adds support for Realtek’s RTL8812AU wireless chipsets. The mainline Linux kernel doesn’t support that silicon, but plenty of wireless kit from mainstream makers such as D-Link, Belkin and TP-Link uses it. Adding a driver to Kali therefore makes the distro capable of probing a great many WiFi access points.

There’s also support for the OpenVAS 9 vulnerability scanner. Kali hasn’t included the tool in its default release, but has packaged it so that a quick apt-get update followed by apt install openvas will pull in a ready-to-run build.

The full changelog for the new release is here for your reading pleasure. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/28/kali_linux_adds_gpu_support/

Last year’s ICO fines would be 79 times higher under GDPR

Fines from the Information Commissioner’s Office (ICO) against Brit companies last year would have been £69m rather than £880,500 if the pending General Data Protection Regulation (GDPR) had been applied, according to analysis by NCC Group.

The 2015 penalties would also have risen drastically from £1m to £35m under the same benchmark.

As things stand, the ICO can apply fines of up to £500,000 for contraventions of the Data Protection Act 1998. Once GDPR comes into force on 25 May, 2018, there will be a two-tiered sanction regime – with lesser incidents subject to a maximum fine of either €10 million (£7.9 million) or 2 per cent of an organisation’s global turnover (whichever is greater). The most serious violations could result in fines of up to €20 million or 4 per cent of turnover (whichever is greater).
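To see how those two tiers scale with company size, here is a minimal sketch of the cap calculation described above; the turnover figures are invented for illustration and sterling conversion is ignored:

```python
def gdpr_max_fine(annual_turnover_eur, serious):
    """Maximum fine under the two-tier regime described above: the greater
    of a flat cap (EUR 10m / 20m) or a share of global turnover (2% / 4%)."""
    flat_cap, pct = (20_000_000, 0.04) if serious else (10_000_000, 0.02)
    return max(flat_cap, pct * annual_turnover_eur)

# Invented turnovers, not real company figures.
for turnover in (50_000_000, 2_000_000_000):
    print(f"turnover EUR {turnover:,}: "
          f"max fine EUR {gdpr_max_fine(turnover, serious=True):,.0f}")
```

For a small firm the flat cap dominates; for a multinational the percentage does, which is why NCC’s modelled figures dwarf the current £500,000 ceiling.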

NCC’s security consultants looked at all ICO fines from 2015 and 2016. Using the current maximum penalty as a guide, it created a model to determine what tier the fine would fall into and what a maximum post-GDPR fine would likely be.

TalkTalk’s 2016 fine of £400,000 for security failings that allowed hackers to access customer data would rocket to £59m under GDPR. Fines given to small and medium-sized enterprises could have been catastrophic. For example, Pharmacy2U’s fine of £130,000 would balloon to £4.4m – a significant proportion of its revenues and potentially enough to put it out of business.

Roger Rawlinson, managing director of NCC Group’s assurance division, said: “GDPR isn’t just about financial penalties, but this analysis is a reminder that there will be significant commercial impacts for organisations that fall foul of the regulations.

“Businesses should have already started preparations for GDPR by now. Most organisations will have to fundamentally change the way they organise, manage and protect data. A shift of this size will need buy-in from the board.”

Although the UK is leaving the European Union, compliance with the GDPR will still be mandatory for British firms that handle EU citizens’ data. The ICO has publicly said it plans to introduce something similar to the GDPR post-Brexit, so proceeding on the assumption that the UK will not introduce tougher fines for data breaches is unrealistic. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/28/ico_fines_post_gdpr_analysis/

Facebook decides fake news isn’t crazy after all. It’s now a real problem

Analysis Last November at the Techonomy Conference in Half Moon Bay, California, Facebook CEO Mark Zuckerberg dismissed the notion that disinformation had affected the US presidential election as lunacy.

“The idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way, I think, is a pretty crazy idea,” said Zuckerberg.

Five months later, after a report [PDF] from the Office of the US Director of National Intelligence provided an overview of Russia’s campaign to influence the election – via social media among other means – the social media giant has published a plan for “making Facebook safe for authentic information.”

Penned by Facebook chief security officer Alex Stamos and security colleagues Jen Weedon and William Nuland, “Information Operations and Facebook” [PDF] describes an expansion of the company’s security focus from “traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.”

This despite Zuckerberg’s insistence that “of all the content on Facebook, more than 99 per cent of what people see is authentic.”

Facebook’s paper says information operations aimed at exploiting its goldmine of personal data revolve around three activities: targeted data collection from account holders, content creation to seed stories to the press, and false amplification to spread misinformation. The paper focuses on defenses against data collection and against the distribution of misleading content.

To combat targeted data collection, Facebook says it is:

  • Promoting and providing support for security and privacy features, such as two-factor authentication.
  • Presenting notifications to specific people targeted by sophisticated attackers, with security recommendations tailored to the threat model.
  • Sending notifications to people not yet targeted but likely to be at risk based on the behavior of known threats.
  • Working with government bodies overseeing election integrity to notify and educate those at risk.

False amplification – efforts to spread misinformation to hurt a cause, sow mistrust in political institutions, or foment civil strife – is recognized in the report as a possible threat to Facebook’s continuing vitality.

“The inauthentic nature of these social interactions obscures and impairs the space Facebook and other platforms aim to create for people to connect and communicate with one another,” the report says. “In the long term, these inauthentic networks and accounts may drown out valid stories and even deter some people from engaging at all.”

As can be seen from Twitter’s half-hearted efforts to subdue trolls, sock puppets, and the like, such interaction can be toxic to social networks.

Stamos, Weedon and Nuland note that Facebook is building on its investment in fake account detection with more protections against manually created fake accounts and with additional analytic techniques involving machine learning.

Facebook’s security team might want to have a word with computer scientists from University of California Santa Cruz, Catholic University of the Sacred Heart in Italy, the Swiss Federal Institute of Technology Lausanne, and elsewhere who have made some progress in spotting disinformation.

‘Some like it hoax’

In a paper published earlier this week, “Some Like it Hoax: Automated Fake News Detection in Social Networks” [PDF], assorted code boffins report that they can identify hoaxes more than 99 per cent of the time, based on an analysis of the individuals who respond to such posts.

“Hoaxes can be identified with great accuracy on the basis of the users that interact with them,” the research paper claims.
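That claim is easier to picture with a toy example. The sketch below is not the authors’ code; it simply illustrates the underlying idea by training a logistic-regression classifier on a synthetic post-by-user “like” matrix in which hoax and non-hoax posts attract different pools of users:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_posts, n_users = 1000, 300

# Synthetic data: hoax posts (label 1) are liked mostly by one pool of
# users, non-hoax posts (label 0) mostly by another, with some overlap.
labels = rng.integers(0, 2, n_posts)
hoax_pool = rng.random(n_users) < 0.3
p_like = np.where(labels[:, None] == 1,
                  np.where(hoax_pool, 0.20, 0.02),
                  np.where(hoax_pool, 0.02, 0.20))
likes = (rng.random((n_posts, n_users)) < p_like).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    likes, labels, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```

Because the interaction patterns are so polarised, even this crude setup separates the two classes well; the paper applies the same intuition to real Facebook like data.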

Asked about Zuckerberg’s claim that only about 1 per cent of Facebook content is inauthentic, Luca de Alfaro, computer science professor at UC Santa Cruz and one of the hoax paper’s co-authors, said he had no information on the general distribution of misinformation on Facebook.

“I would trust Mark on this,” de Alfaro said in an email to The Register. “I know that on Wikipedia, on which I worked in the past, explicit vandalism is about 6 or 7 per cent (or it was some time ago).”

More significant than the percentage of fake news, de Alfaro suggested, is the impact of hoaxes on people.

“For instance, suppose I read and believe 10 run-of-the-mill pieces of news, and one outrageous hoax: which one of these 11 news [stories] will have the greatest impact on me?” he said. “Hoaxes are frequently harmful due to the particular nature of their crafted content. You can eat 99 meatballs and 1 poison pill, and you still die.”

Machine learning techniques are proving to be effective, de Alfaro suggested, but people still need to be involved in the process.

“In our work, we were able to show that we can get very good automated results even when the oversight is limited to 0.5 per cent of the news we classify: thus, human oversight on a very small portion of news helps classify most of them.”

Asked whether human oversight is always necessary for such systems, de Alfaro said that was a difficult question.

“To some level, I believe the answer is yes, because even if you use machine learning in other ways, you need to train the machine learning on data that has been, in the end, selected by some kind of human process,” he said. “We are developing in my group at UCSC, and together with the other collaborators, a series of tools and apps that will enable people to access our classifiers, and we hope this might have an impact.”

For Facebook, and the depressingly large number of people who rely on it, such tools can’t come soon enough. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/27/facebook_fake_news_security/

Republicans want IT bloke to take fall for Clinton email brouhaha

US House Republicans are demanding prosecutors bring charges against the IT chap who hosted Hillary Clinton’s private email service.

The chairman of the House of Reps’ Science, Space, and Technology Committee, Lamar Smith (R‑TX), today sent a formal letter [PDF] to Attorney General Jeff Sessions, asking that charges be filed against Platte River Networks (PRN) and its CEO Treve Suazo.

Smith claims the tech biz failed to disclose or covered up its storage of Clinton’s emails, which have been at the heart of congressional investigations over possible violations of government record-keeping laws. The letter claims the IT company illegally withheld copies of the emails from Congress and Suazo himself lied to the committee.

“To date, Mr Suazo, on behalf of PRN and through his attorney, has refused to produce documents, as directed by congressional subpoenas duces tecum and refused to allow his employees to provide testimony to the Committee.”

The letter asks Sessions to look at hauling Suazo and his company into court for allegedly refusing to comply with a subpoena for documents, making false statements to the committee, and obstructing the committee’s investigation into whether Clinton’s use of the private email service placed US intelligence and national security at risk. No charges were filed against Clinton.

“There is no legal basis for Mr Suazo’s refusal to cooperate and comply fully with the committee’s subpoenas. Instead of cooperation the committee was met with obstruction and refusal to comply with subpoenas and requests for transcribed interviews,” Smith writes.

“These actions, taken together, as well as Mr Suazo’s false statements to the committee, made through counsel, support the pattern of obstruction.”

Should PRN and Suazo be charged, the case could have a wider impact across the IT services sector, underscoring the possible legal risks smaller IT providers run when they accept government business – or in this case even private business – from those in government. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/04/27/it_provider_should_take_fall_clinton_emails/

Iranian Hackers Believed Behind Massive Attacks on Israeli Targets

OilRig aka Helix Kitten nation-state group leveraged Microsoft zero-day bug in targeted attacks.

A massive targeted cyber espionage campaign against major Israeli institutions and government officials underscores just how far an Iranian nation-state hacking machine has come.

The Israeli Cyber Defense Authority yesterday announced that it believes Iran was behind a series of targeted attacks against some 250 individuals between April 19 and 24 in government agencies, high-tech companies, medical organizations, and educational institutions including the renowned Ben-Gurion University. The attackers – whom security experts say are members of the so-called OilRig aka Helix Kitten aka NewsBeef nation-state hacking group in Iran – used stolen email accounts from Ben-Gurion to send their payload to victims.

“This is the largest and most sophisticated attack they’ve [OilRig] ever performed,” says Michael Gorelik, vice president of R&D for Morphisec, who studied the attacks and confirms that the final stage was thwarted for the most part. “It was a major information-gathering [operation],” he says.

OilRig has been rapidly maturing since it kicked off operations around 2015. The attack campaign against Israeli targets employed the just-patched Microsoft CVE-2017-0199 remote code execution vulnerability in the Windows Object Linking and Embedding (OLE) application programming interface. This flaw had been weaponized in attacks prior to the patch, including Dridex banking Trojan and botnet attacks, and in at least one other cyber espionage campaign.

This technique by OilRig is a step up from the group’s previous MO of using malicious macros to spread malware, where it employed Microsoft Excel and Word files that required the victim to enable macros to get infected. But this time around, no macros were necessary: the files carried the exploit via an embedded link that delivered an HTML executable, according to researchers at Israeli security firm Morphisec who studied the new attacks.

OilRig managed to catch the victims during the patching window between when Microsoft issues a security update and organizations actually roll out the patch, security experts say. “The most important difference is that the use of macros was exchanged with a vulnerability exploit. With their ability to set up the attack in a relatively short time, the threat actors could correctly speculate that their window of opportunity between patch release and patch rollout was still open,” according to Morphisec’s blog post today.

The hacking group also was likely behind an attack campaign in January that employed a phony Juniper Networks VPN portal as well as phony websites purporting to be the University of Oxford, from which the attackers dropped malware.

Adam Meyers, vice president of intelligence at CrowdStrike, which has named this Iranian hacker group Helix Kitten, says the group has been advanced for some time. “There’s this misconception that they weren’t sophisticated before,” he says. “This group has been active since 2015 and gone after aviation, energy, financial, and government” targets in various regions and countries, including the United Arab Emirates, Turkey, and Qatar, he says.

OilRig/Helix Kitten was not the first attack group to weaponize the Microsoft CVE-2017-0199 remote code execution vulnerability before it was patched, he notes, pointing to attacks in Ukraine, China, and the US earlier this year. “It’s unusual to see multiple threat actors pick up” a zero-day, he says, which could hint at an 0day broker selling it to multiple “customers.”

Meantime, Morphisec’s Gorelik says in the latest round of attacks, OilRig employed a customized version of the open-source Mimikatz tool, which gives hackers access to user credentials in the Windows Local Security Authority Subsystem Service.

OilRig is among the ranks of nation-state gangs using open-source hacking tools. Kurt Baumgartner, principal security researcher for Kaspersky Lab’s Global Research and Analysis Team, says OilRig, which Kaspersky calls NewsBeef, has in the past year relied heavily on open-source hacking tools: BeEF for exploiting holes in browsers, Unicorn for PowerShell-style attacks, and Pupy for planting a remote administration tool, or RAT. That’s a far cry from its earlier days, when it relied on accounts set up for social engineering to target victims. “NewsBeef is not well-resourced, so this enables them to up their game,” he says.

Politics 

Most of Iran’s targets over the past few months have been in the Middle East – namely its nemesis Saudi Arabia – but this pivot to Israel should be a red flag to other nations embroiled in geopolitical conflict with Iran, such as the US, security experts say.

Tom Kellermann, CEO of Strategic Cyber Ventures, says the attacks indeed illustrate how Iran’s nation-state hacking machine has evolved and advanced. He attributes this transformation to Russian advisors assisting Iranian hackers. Look for OilRig to go West soon, too, he says.

“Oilrig will tendril West to the USA due to the Secretary of State and President’s visceral statements on Iran over that past month. The Iranians are not alone, as the Russian Pawn Storm [nation-state hacking] campaign will dramatically ratchet up due to tensions with US and NATO per the Baltics and the French election,” he says.

Their attacks also may be more destructive, including data-wiping: “To this point these actors will be more inclined to burn the evidence and house … [the] network via destructive counter-IR [incident response] ‘integrity attacks,'” which could hamper IR efforts and investigations, he says. “I am concerned that watering-hole attacks will increase, delivering 0days and wiper malware.”


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: http://www.darkreading.com/endpoint/iranian-hackers-believed-behind-massive-attacks-on-israeli-targets/d/d-id/1328753?_mc=RSS_DR_EDT

New OWASP Top 10 Reveals Critical Weakness in Application Defenses

It’s time to move from a dependence on the flawed process of vulnerability identification and remediation to a two-pronged approach that also protects organizations from attacks.

When I wrote the first OWASP Top 10 list in 2002, the application security industry was shrouded in darkness. The insight that a few other engineers and I had gained through hand-to-hand combat with a wide variety of applications lived only within us. We recognized that for the industry to have a future, we had to make our knowledge public. 

My idea was that we needed a top 10-style guide to help organizations focus on the key risks. I was also trying to establish a “standard of care” that would potentially allow a negligence regime to take hold and move the software industry in the right direction. I thought we were establishing a floor, and that organizations would move quickly to stamp out many of these risks. 

But it hasn’t happened. Sometimes when you try to create a floor, you accidentally create a ceiling. Many organizations have an application security program that works on a small part of their application portfolio (typically just the “critical” applications) and doesn’t cover even the Top Ten. Unfortunately, this approach leads to exactly what we see in the market: huge numbers of vulnerabilities and increasingly serious breaches.

Frankly, I don’t like “top ten” lists at all. We aren’t improving at application security. If anything we’re getting worse. The days of PDF reports, gates, and development roadblocks are over. We need to take responsibility for building defenses, creating assurance, and blocking attacks. We can’t stand by and point fingers at vulnerabilities any more. There’s too much at stake.                

2017 OWASP Top 10
The Top Ten Project has grown and matured dramatically since 2002. In May of 2016, the OWASP Top Ten Project issued an open data call to gather statistics on what organizations are seeing in terms of application security risks. A variety of organizations, consultants, and vendors contributed data on over 50,000 applications. Within these applications, over 2.3 million vulnerabilities were identified. Like everything at OWASP, the project and data are free and open.

The Top Ten project is struggling to keep up with a changing software world. So in addition to data, the project always considers “future looking” concerns. Two of these items were added to the new Top Ten, and there is controversy over them in the community. Here’s a comparison of the old and new lists.

[Table: comparison of the 2013 and 2017 OWASP Top 10 lists. Source: Jeff Williams]

The New A7: “Insufficient Attack Protection”
One major addition to the new Top Ten is “Insufficient Attack Protection.” Traditional security defenses are like locks: they stop attackers, but they have no way to detect or report that they’re being attacked. And applications have no way to quickly and easily patch themselves to respond to an attack. Given how difficult it is to prevent vulnerabilities from getting into our code, we simply can’t rely solely on trying to achieve perfect code. Currently, attackers can attack with impunity, never getting detected or blocked. Eventually, that’s a recipe for disaster.

The idea that insufficient attack protection belongs in this list isn’t obvious. Is it a risk not to have a smoke alarm? A burglar alarm? A security guard? It depends on your other defenses and where you think security ought to be. Just because you try to write code without buffer overflows doesn’t mean that you don’t need ASLR. And just because you think your authentication is secure doesn’t mean you shouldn’t detect credential stuffing and implement account lockouts. Should we report the lack of these attack protections as risks? Some of these are already being reported, so the new Top Ten is really just acknowledging that the lack of attack detection, protection, and security patching actually is a risk.
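As a concrete example of what “attack protection” can mean in practice, here is a minimal, illustrative lockout counter of the sort A7 gestures at; the thresholds are arbitrary, and a real deployment would add per-IP rate limits, logging and alerting:

```python
import time
from collections import defaultdict, deque

MAX_FAILURES = 5        # arbitrary illustrative threshold
WINDOW_SECONDS = 300    # only count failures from the last 5 minutes
LOCKOUT_SECONDS = 900   # lock the account out for 15 minutes

_failures = defaultdict(deque)   # account -> recent failure timestamps
_locked_until = {}               # account -> time the lock expires

def record_failed_login(account):
    """Track a failed login and lock the account if failures pile up."""
    now = time.time()
    window = _failures[account]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_FAILURES:
        _locked_until[account] = now + LOCKOUT_SECONDS

def login_allowed(account):
    """True unless the account is currently locked out."""
    return _locked_until.get(account, 0) <= time.time()
```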

As the Top Ten Project Lead, Dave Wichers writes on the project mailing list,

“As an industry, we need to evolve to keep up with both modern development practices and the increased sophistication of attackers. As Dev and Ops come together, we need developer security and operational security to come together as well. A7 is a step towards thinking of appsec as part of both Dev and Ops. We have to do a better job of advocating security for both Dev and Ops and choosing defense strategies that span the entire playing field. I think we all agree we should know where and how we are being attacked, and we should probably do something about it if we are. Help us figure out what we can recommend to help organizations get better at this.”

This is the idea. It is controversial, but seems worth exploring. Could we move from relying on the flawed process of vulnerability identification and remediation to a strategy that relies on both eliminating vulnerabilities and attack protection?

The new A10: “Underprotected APIs”
Another major addition is “Underprotected APIs.” The use of APIs has exploded in modern software. Even browser web applications are often written in JavaScript and use APIs to get data. There is a huge variety of protocols and data formats used by these APIs, including SOAP/XML, REST/JSON, RPC, GWT, and many more. But more importantly, these APIs are often unprotected and contain numerous vulnerabilities. Both traditional security tools and manual penetration testing have struggled to analyze APIs because the protocols and frameworks are so complex, so API security is often overlooked.

This new item overlaps with many of the existing Top Ten. But it’s also a huge gap that many organizations and application security tools don’t yet cover. The Top Ten project takes a pragmatic view here, trying to encourage organizations to ensure their API coverage. In the words of the release candidate:

“NOTE: The T10 is organized around major risk areas, and they are not intended to be airtight, non-overlapping, or a strict taxonomy. Some of them are organized around the attacker, some the vulnerability, some the defense, and some the asset. Organizations should consider establishing initiatives to stamp out these issues.”

Looking Ahead
To me, the 2017 Top 10 reflects the move towards modern, high-speed software development that we’ve seen explode across the industry since the last version of the Top 10 in 2013. While many of the vulnerabilities remain the same, the addition of APIs and attack protection should focus organizations on the key issues for modern software. I absolutely think it’s worth shaking up the list to get organizations to think about these important topics.

For all the advances we’ve made at OWASP, application security isn’t part of every software project; it’s not taught regularly in university; and software projects often don’t account for it either. Simply dividing the total vulnerabilities by the number of applications yields 45.8 vulnerabilities per application. But by doing an average of averages, and eliminating the lowest and highest vendors to prevent outliers, the average is 20.5 vulnerabilities per application. That’s a stunning number that demonstrates just how widespread these vulnerabilities are. If we found 1 in 20 applications had one of the Top Ten items, it would still be concerning, but we are in “crazy risk” territory now.
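The gap between those two figures is purely a question of aggregation: a straight total-over-total ratio is dominated by the largest contributors, while a trimmed mean of per-vendor averages is not. A small sketch with invented per-vendor numbers shows the effect:

```python
# Invented per-vendor data: (applications tested, vulnerabilities found).
vendors = [(30000, 1_800_000), (15000, 450_000), (4000, 40_000), (1000, 10_000)]

apps = sum(a for a, _ in vendors)
vulns = sum(v for _, v in vendors)
print("pooled total / total:", round(vulns / apps, 1))

per_vendor = sorted(v / a for a, v in vendors)
trimmed = per_vendor[1:-1]   # drop the highest and lowest vendor averages
print("trimmed mean of averages:", round(sum(trimmed) / len(trimmed), 1))
```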

It’s unrealistic for anyone to expect a simple awareness document to change much. Still, even after 14 years the OWASP Top Ten is still a good way for organizations to start getting their head around the most critical issues in application security.

If you feel strongly about these issues, please join in and get involved. The Top Ten Project appreciates your constructive feedback and needs your help. This release candidate is being updated to address a lot of the feedback already received. Given the mixed reaction of many in the security community, we’ve scheduled an open session at the OWASP Summit. Further, we’re planning to create a task force with a series of online working meetings. If the conclusion is that these new items don’t make sense, let’s get something better figured out — together.

[Read an opposing view from Chris Eng in OWASP Top 10 Update: Is it Helping to Create More Secure Applications?]


A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control …

Article source: http://www.darkreading.com/application-security/new-owasp-top-10-reveals-critical-weakness-in-application-defenses/a/d-id/1328751?_mc=RSS_DR_EDT

OWASP Top 10 Update: Is It Helping to Create More Secure Applications?

What has not been updated in the new Top 10 list is almost more significant than what has.

The OWASP Top 10 list of the most critical web application security risks has finally been updated for the first time since 2013. This list, created by the Open Web Application Security Project (an open community dedicated to enabling organizations to create secure applications), often forms the basis of application security programs and frequently informs AppSec priorities.

The release candidate was published on April 10th, and OWASP plans to release the final version in July or August after a public comment period ending June 30th.

How Companies Actually Use the Top 10
I’ve been specializing in application security – first as a breaker and now as a defender – since long before the OWASP Top 10 list existed. When the first iterations of the lists were released, they were helpful to both me and my customers in the sense that they provided independent, vendor-agnostic advice on real-world application security risks. Later, the Top 10 was incorporated into the PCI DSS, which elevated the list’s importance in a way that never could have happened organically. Suddenly, many companies were required to invest in these very specific elements of application security – and they did. They look to this list to understand how to avoid and remediate a range of vulnerabilities.

Over the past decade, companies large and small have continued to adopt the Top 10 list as a guideline. They know it’s not the be-all and end-all of application security risks, but it’s a useful list to baseline against as they scale application security testing to hundreds – often thousands – of applications, built using development methodologies ranging from waterfall to Agile to DevOps.

Regardless of how the Top 10 list was originally intended to be used, helping to move the industry forward requires acknowledging how the list is actually used in the real world. Building and maintaining a comprehensive application security program is complex and time-consuming, so it’s important to consider the business impact of moving the goalposts.

Reading between the Lines
What has not been updated in the new Top 10 list is almost more significant than what has. It’s the first update in four years, yet there are only two significant changes, and none to the top-ranked vulnerabilities. This highlights that we are continuing to see the same (often easily remediated) vulnerabilities plaguing our code. We clearly have a long way to go in terms of getting developers to understand secure coding best practices and actually implement them.

Even A4 (Broken Access Control) is simply a combination and reframing of A4 and A7 from the 2013 Top 10 list. Broken Access Control was actually category A2 from the 2004 Top 10 list. The vulnerabilities aren’t changing; they’re just being shuffled around, demonstrating that while companies are recognizing the need for application security, not enough has changed to eliminate these common threats.

A Questionable Direction
So if nothing much is new, why is OWASP releasing an update? The only significant updates to the list are the addition of API security, and a recommendation to focus on runtime protection. But the inclusion of API security isn’t much of an update; in fact, A10 (Underprotected APIs) is redundant with other categories that already exist. For example, A1 covers Injection vulnerabilities, and A10 essentially says “injection vulnerabilities can exist in APIs too!” It’s like if you had a residential building code comprised of nine rules, and the tenth item was “all of these rules also apply to blue houses.”
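To make the overlap concrete: the same injection flaw A1 already covers looks identical whether the hostile input arrives from an HTML form or a JSON API call, and so does the fix. A minimal illustration (the schema and data are invented):

```python
import sqlite3

def find_user_unsafe(db, name):
    # Vulnerable whether `name` came from a form field or a JSON body:
    # the string is concatenated straight into the SQL statement.
    return db.execute(
        "SELECT id, name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(db, name):
    # Parameterized query: the fix is the same regardless of transport.
    return db.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user_unsafe(db, "x' OR '1'='1"))  # injection returns every row
print(find_user_safe(db, "x' OR '1'='1"))    # returns nothing
```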

The addition of A7 (Insufficient Attack Protection) is even more confusing. From the working draft: “The majority of applications and APIs lack the basic ability to detect, prevent, and respond to both manual and automated attacks. Attack protection goes far beyond basic input validation and involves automatically detecting, logging, responding, and even blocking exploit attempts. Application owners also need to be able to deploy patches quickly to protect against attacks.” With this addition, I think the list is now straying from its mission.

In fact, A7 (Insufficient Attack Protection) feels more like a move to elevate certain technologies than guidance on improving security. And I make this statement despite being part of a company that offers RASP technology, an attack protection technology that would certainly fall under the proposed A7. Runtime protection is an interesting and important technology, and with today’s rapid development cycles is becoming an increasingly critical component of application security programs. But protection is orthogonal to the purpose of this list, which is to highlight the most important security risks.

It muddies the mission of the OWASP Top 10 to stray from vulnerabilities to a focus on technologies. Why does insufficient protection belong on the list, but not insufficient testing, insufficient code coverage, insufficient threat modeling, or insufficient developer education? All of these activities occur during the application lifecycle and improve application security.

Get Back on Track
Application security goes beyond any specific technology; there is no application security silver bullet. Securing applications requires a combination of people, process, and technology – both automated and manual – throughout the software development lifecycle. The new list seems to focus on changes that are either cosmetic or misaligned from the views of many application security experts. What’s the value in releasing an updated list compared to the disruption it will create for companies measuring against it? Failing to account for impact is neither visionary nor productive. Perhaps we need a little more empathy for the developers and end users instead of being excited to shake things up.

[Read an opposing view from Jeff Williams in New OWASP Top 10 Reveals Critical Weakness in Application Defenses.]


Chris Eng (@chriseng) is vice president of research at Veracode. Throughout his career, he has led projects breaking, building, and defending software for some of the world’s largest companies. He is an unabashed supporter of the Oxford comma and hates it when you use the …

Article source: http://www.darkreading.com/application-security/owasp-top-10-update-is-it-helping-to-create-more-secure-applications/a/d-id/1328752?_mc=RSS_DR_EDT

Facebook Spam Botnet Promises ‘Likes’ for Access Tokens

Facebook users can fuel a social spam botnet by providing verified apps’ access tokens in exchange for “likes” and comments.

Cyberattackers are using access tokens for legitimate Facebook apps as vehicles to spread spam on the apps’ behalf. How do they do it? By tricking Facebook users into handing over their tokens in exchange for “likes,” comments, friends, and followers.

Researchers at Proofpoint discovered an API access token for a verified Facebook app was being used to fuel comment spam on other Facebook pages. In this case, the victim page belonged to a major media outlet and Proofpoint customer.

Comment spam both interferes with social media interactions and exposes users to phishing and malware. A lot of comment spam found on social channels is generated by botnets, as was the case here.

In this case, spammers used the official HTC Sense app to trick users by leveraging an outdated version of the app and an earlier version of the Facebook API. How it worked:

Spam comments on the company’s page referred to websites containing instructions for installing a particular version of the HTC Sense Facebook app. Users clicked the links and installed the app. They were shown how to get a developer-level API access token for the HTC Sense app.

From there, users were asked to copy the access token and paste it into a third-party website, run by bad actors, in exchange for likes, comments, etc. Once they passed along the token, the website was fully able to automate comments and other actions on behalf of the users.
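The underlying problem is that an API access token is a bearer credential: whoever holds it can act through the API with the app’s blessing. The sketch below is a rough illustration of what the spammers’ automation boils down to; treat the exact Graph API path and version as assumptions rather than a recipe:

```python
import requests

GRAPH = "https://graph.facebook.com/v2.8"   # version string is an assumption

def post_comment(access_token, post_id, message):
    """Any holder of the token can do this -- the platform cannot tell the
    legitimate app from a spam botnet that was handed the same token."""
    resp = requests.post(
        f"{GRAPH}/{post_id}/comments",
        data={"message": message, "access_token": access_token},
    )
    return resp.json()
```

That indistinguishability is why the eventual fix was to stop end users getting at developer-level tokens at all, rather than trying to filter the resulting spam.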


Ali Mesdaq, director of digital risk at Proofpoint, says this has been going on “at the very least for several months,” though he cannot say with certainty when it started. Researchers came across the spam while working to protect a large news organization.

“When we dug deeper into why they had such a rise in spam comments in the last several months, we uncovered this campaign,” he explains. “A number of the spam comments referenced how to install the app on individual Facebook accounts.”

Since this issue was discovered, Facebook has made changes to prevent end-users from accessing developer-level API tokens, and provided best practices for building secure applications. HTC has removed the problematic versions of its HTC Sense app.

It’s worth noting that several apps other than HTC Sense were targeted. This is one of many ways social media spam poses a risk to brands by interfering with user interaction and diluting corporate messaging on social channels.

“The implications are that we are starting to see mass broad sophisticated campaigns of spam on social media,” Mesdaq explains. “Social media should be considered a hot target for attackers, and we expect the volume, diversity, and intensity of attacks to greatly increase.”

The promise of more likes, comments, and friends may sound appealing, but this is a wake-up call to remind users that granting permissions to apps, and handing over access tokens, can lead to a spam overload, explains Proofpoint in a blog post. This can result in account suspension.

“Brands need robust, automated solutions for addressing spam, phishing, and malware distribution via their pages to protect customers and ensure appropriate interactions,” says Mesdaq.


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: http://www.darkreading.com/endpoint/facebook-spam-botnet-promises-likes-for-access-tokens/d/d-id/1328756?_mc=RSS_DR_EDT

Verizon DBIR Shows Attack Patterns Vary Widely By Industry

It’s not always the newest or the most sophisticated threat you need to worry about, Verizon’s breach and security incident data for 2016 shows.

Among the many key takeaways in the 2017 edition of Verizon’s annual Data Breach Investigations Report (DBIR), released Thursday, is that there are significant differences in why and how organizations across different industries are attacked.

Data that Verizon collected from security incidents and data breaches that it investigated in 2016 showed, for instance, that financial and insurance companies suffered about six times as many breaches (364) from web application attacks as organizations in the information services sector (61).

Similarly, Verizon’s dataset showed healthcare organizations suffered about 13 times as many breaches involving privilege misuse in 2016 compared to manufacturing companies—104 breaches to 8.

Point-of-sale breaches hit organizations in the accommodations and food service space disproportionately harder than retail organizations. Manufacturing companies and, somewhat interestingly, educational institutions were the biggest targets of cyber espionage campaigns.

The data provides further evidence that organizations can benefit from having a better understanding of the threats that are specific to their industries and sectors, says Gabriel Bassett, a senior information data scientist with Verizon.

“It’s the kind of thing you would assume. But it is not thought about enough in industry,” he says. “If you are a financial firm are you putting botnets on top? Or are you putting PoS? If you are in education, do you realize just how starkly espionage has gone up,” in this sector, Bassett says.

What the breach data shows is that every organization should mitigate its own risks, he said. “It’s very easy to look at the newest attacks. But if it is not one of your risks, you need to prioritize the things that are,” and apply the appropriate controls and mitigations, Bassett says.


The Verizon report highlights some other trends as well. Last year’s data for instance showed that cyber espionage has emerged as a major threat for manufacturing companies, public sector entities, and to a lesser but still significant degree, for educational institutes as well.

In total, Verizon investigated 115 incidents involving cyber espionage at manufacturing companies, 108 of which resulted in a data breach. The total number of breaches at public sector organizations and educational institutions where cyber espionage was a motive was 98 and 19 respectively. Much of the interest in these sectors stems from the proprietary research data, prototypes, and other intellectual property that such organizations typically possess, Verizon’s report noted.

Cyber espionage campaigns tend to be targeted, stealthy, and persistent since the effort is on stealing as much data as possible, says Brian Vecci, technical evangelist at Varonis Systems. “Attackers will follow the cyber kill chain once they compromise an account, which includes accessing the data they can get to, elevating their privileges to access more data, and then obfuscating their tracks,” he says.

Businesses often make it easier for such attackers, Vecci says. He pointed to a recent data risk report that Varonis released, which showed 47% of organizations had 1,000 or more files containing sensitive information open to every employee at any given time. “That’s making it pretty easy for the attacker to steal information.”

While organizations in the targeted sectors need to pay attention to the cyber espionage trend itself, the mitigations against the threat are not very different, Bassett notes.

“Espionage is one of those things where it feels like we need to do something different because it sounds like it is some super-duper elite cyber hacker somewhere that’s attacking,” he says.

In reality, the actual methods that attackers used to get at the data they were after were similar to the tactics used in attacks driven by financial and other motivations.

For example, the three most common actions used by attackers to target organizations in the manufacturing, public, and education sectors were hacking, social engineering, and malware. These were the same tactics that were most commonly used in attacks against organizations in almost every other sector in the Verizon study.

“The thing is espionage is the motive. It is the ‘why’ and it drives the ‘what’ gets stolen,” Bassett says. “But it is not the ‘how.’ The ‘how’ stays very consistent” across industries.

The Verizon report also showed that, for yet another year, phishing, malware delivered via email, and credential misuse were among the methods attackers most commonly used to try to gain access to target networks and systems. Distributed denial-of-service attacks were another major issue, especially for organizations dependent on the web, such as those in the entertainment, professional services, financial, and information sectors.

Verizon responded to a total of 11,246 denial-of-service incidents last year. However, only five of them across all sectors resulted in actual data disclosure.

Web application incidents increased last year as well compared to 2015, but the actual number of breaches resulting from these incidents was lower. A vast majority of web application attacks involved the use of botnets, most notably Dridex. Stolen credentials, SQL injection attacks and brute-force attacks were some of the other most commonly used tactics in web application attacks.

“Compared to network services, web applications tend to be much more vulnerable,” says Ilia Kolochenko, CEO of High-Tech Bridge. “Web applications are often developed in-house and accumulate dozens of vulnerabilities and weaknesses because of flawed, or simply missing, SDLC [secure development lifecycle] and insufficient security testing,” he says.

Many organizations continue to significantly underestimate the importance of web application security and perceive web apps as simply a web front-end to their organization. “However, as DBIR clearly states, the main attack vector is insecure applications.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: http://www.darkreading.com/attacks-breaches/verizon-dbir-shows-attack-patterns-vary-widely-by-industry/d/d-id/1328757?_mc=RSS_DR_EDT

Samsung Smart TV flaw leaves devices open to hackers

Your Samsung Smart TV might be pretty dumb.

Penetration testing firm Neseso has found that a 32-inch Tizen-based smart TV, first released as part of the 2015 model year and still being sold in North America, isn’t authenticating devices that connect to it via Wi-Fi Direct.

Rather than requiring a password or PIN to authenticate devices that want to connect to the TV – like, say, your smartphone when you want to use it as a remote control – it’s relying on a whitelist of devices that the user’s already authorized.

To do that, Samsung’s Smart TV uses devices’ media access control (MAC) addresses. Those are like a digital fingerprint: a MAC address is constant to a piece of hardware (though it can be spoofed, either for legitimate purposes or by a thief who wants to hide it).

Neseso says a user will be notified when a whitelisted device connects to their Smart TV, but that’s it: if the device is on the whitelist, the TV will just lay out the welcome mat without requiring any authentication.
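To see why that model is weak, consider what the TV is actually checking. The sketch below is not Samsung’s code; it is a generic illustration of an allow-list keyed on an identifier the connecting device itself reports, which is all MAC-based “authentication” amounts to once the address can be sniffed and spoofed:

```python
# Illustrative only: a generic allow-list check keyed on a client-supplied
# identifier. Nothing here is secret, so anyone who learns a whitelisted
# MAC address (for example by sniffing nearby Wi-Fi traffic) can present it.

WHITELIST = {"a4:5e:60:12:34:56"}   # made-up MAC of the owner's phone

def connection_allowed(reported_mac):
    # The "credential" is just whatever MAC the connecting device claims.
    return reported_mac.lower() in WHITELIST

print(connection_allowed("A4:5E:60:12:34:56"))  # owner's phone -> True
print(connection_allowed("a4:5e:60:12:34:56"))  # spoofed copy  -> also True
```

Contrast that with a PIN or pairing secret, where the proof of identity is something an eavesdropper cannot simply read off the air.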

It’s easy for an attacker to get a whitelisted MAC address, Neseso said. In fact, a few years ago, we saw a US cop sniffing out stolen gadgets by MAC address, wardriving in his squad car with software he rigged up to a thumb drive-sized antenna plugged into the car’s USB port, looking for MAC addresses that matched those in a database of known stolen devices.

After an attacker spoofs a known MAC address, they’d be able to access all the services on the Smart TV, such as remote control service.

An attacker would have to know, ahead of time, the MAC address of, say, your smartphone’s Wi-Fi chip. They’ll also likely have to crouch outside in your shrubbery – given that Wi-Fi Direct doesn’t work over long distances – while clutching their laptop or smartphone to spoof that MAC address and start messing with channel-changing or screen mirroring.

OK, so an attacker can change your channel. Annoying, but hardly earth-shattering, eh? Well, it doesn’t stop with the remote exploitation of channel-surfing. An attacker could use it as a springboard to gain access to whatever network the Smart TV is connected to, Neseso said.

Would an attacker be able to get at your home Wi-Fi network’s name and password? Not necessarily through this Wi-Fi Direct vulnerability. But as another security researcher revealed a few weeks ago, the operating system running on millions of Samsung products – it’s called Tizen – is what Motherboard referred to as a hacker’s dream.

Israeli researcher Amihai Neiderman:

Everything you can do wrong there, they do it. You can see that nobody with any understanding of security looked at this code or wrote it. It’s like taking an undergraduate and letting him program your software.

We’ve certainly heard of Samsung vulnerabilities before. In fact, last month, WikiLeaks published documents that purportedly showed how the CIA can monitor people through their Samsung Smart TVs.

Neseso contacted Samsung starting last month, with the Korean company eventually saying that it didn’t consider the find to be a security vulnerability. That’s why Neseso decided to publish details about it on Full Disclosure, it said.

The security outfit advised Samsung Smart TV owners to remove all their whitelisted devices and to avoid using the WiFi-Direct feature. It didn’t explain precisely how to do that, instead telling users to directly contact Samsung. You might want to poke around in the Network menu under Settings or simply disable Wi-Fi on your smart TV… though that would rob you of all those smart TV features you paid for.

Neseso didn’t test other Samsung models, but it suggested that they too might be vulnerable.

Short of disabling Wi-Fi, we’d suggest keeping an eye out for rustling shrubbery. If your TV channels start changing, call the police and then, by all means, switch off your TV’s Wi-Fi.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xn496Goidb4/