STE WILLIAMS

Multiple vulnerabilities found in radiation monitoring gateways

Every now and then, a presentation at Black Hat throws up a security vulnerability that has been missed either because it exists in equipment researchers haven’t been paying attention to, or because it is simply inherently difficult to uncover.

A prime example this year was IOActive’s research on Radiation Portal Monitors (RPMs), gateways mainly used to check for the illegal trafficking of radioactive material at ports, border crossings, airports and in and out of nuclear facilities.

Little considered they may be, but as IOActive researcher Ruben Santamarta explains, behind the scenes they matter a lot:

RPMs are a fundamental component of the policy that was implemented worldwide, especially after 9/11 in the US, to prevent the illicit trafficking of nuclear and radiological materials.

Although he was not able to test a portal in situ, Santamarta managed to track down publicly-available binaries for one of the sector’s equipment makers, Ludlum, which has sold 2,500 gateways in 20 countries.

After a spot of reverse engineering, Santamarta uncovered a slew of issues, including a backdoor password that could be used to disable the device’s alarms by someone with physical access.

More surprisingly, the gateways (which transmit their readings wirelessly or via a LAN) were found to be vulnerable to a Man-in-the-Middle (MitM) attack that could be used to alter the readings taken from vehicles passing through the detectors. Neither of these attacks would necessarily be noticeable.

Granted, the second of these attacks would require the attacker to pass back plausible readings, which would need to be fine-tuned in advance of an attack. From the description given, this would currently be a costly and complex, although not impossible, undertaking.

Separately, vulnerabilities were found in the WRM2 protocol used in radiation monitors made by Mirion and Digi, widely deployed in nuclear power plants.

Again, being able to remotely hack the radio communication would be difficult thanks to the physical shielding of the plants themselves.

Perhaps the biggest discovery was simply the mixed initial responses of the vendors to the issues raised, responses which give the impression of an industry that isn’t used to outsiders peering in too closely. According to the paper:

  • Ludlum has acknowledged the report but believes the secure facilities where its devices are housed will prevent exploitation.
  • Mirion acknowledged the vulnerabilities and contacted customers but did not want to patch for fear of breaking WRM2 interoperability.
  • Digi acknowledged the report, but initially said it would not fix the problems as it doesn’t consider them security issues.
  • Digi and Mirion have since begun work to “patch critical vulnerabilities uncovered in the research”.

As IOActive’s Santamarta points out, it seems likely that other vendors in this space will be affected by similar flaws.

The report concludes:

These issues are not currently patched, so increasing awareness of the possibility of such attacks will help to mitigate the risks.

This is reminiscent of what happened when people started uncovering flaws in SCADA (Supervisory Control And Data Acquisition) equipment in the aftermath of the Stuxnet attack.

None of the flaws uncovered by IOActive’s research would be easy meat for hackers but the importance of radiation monitoring makes it a target worth protecting.

More independent research can only be a good thing.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OQGu69cwaQo/

Should governments keep vulnerabilities secret?

The debate over how much US intelligence agencies should hoard, and how they should use, vulnerabilities found in software used by their own citizens will probably never end.

But two recent research papers, presented together at Black Hat, argue that data analysis should carry more weight than “speculation and anecdote” in setting government policy on the matter.

By now, the cases for both sides are well established:

According to one, intelligence agencies need to maintain a secret stash of vulnerabilities if they are to have any hope of penetrating the communications or cyber weapons of criminals, terrorists and hostile governments. It is a crucial element of protecting the homeland.

According to the other, the problem is that they don’t always remain secret. And if those vulnerabilities are leaked or otherwise discovered by malicious actors, they can be exploited to attack millions of innocent users, or critical systems, before they are patched. Which can be very damaging to the homeland.

Indeed, the release about a year ago by the so-called Shadow Brokers of a cache of top-secret spying capabilities, presumed to belong to the National Security Agency (NSA), intensified the complaints about government surveillance going well beyond targeting terrorists.

Jason Healey, senior research scholar in cyber conflict and risk at Columbia University’s School of International and Public Affairs and also a senior fellow at the Atlantic Council, noted in a paper last November that of the 15 zero-days in the initial release, several “were in security products produced by Cisco, Juniper, and Fortinet, each widely used to protect US companies and critical infrastructure, as well as other systems worldwide.”

More recently, starting in March, the WikiLeaks “Vault 7” document dump exposed the CIA’s efforts to exploit Microsoft and Apple technology to enable surveillance.

Without wading into the specific debate over keeping vulnerabilities secret, several researchers at Harvard Kennedy School’s Belfer Center for Science and International Affairs recently published a paper they hoped would “call for data and more rigorous analysis of what it costs the intelligence community when they disclose a vulnerability and what it costs citizens and users when they don’t.”

It reported that their analysis of a dataset of more than 4,300 vulnerabilities found that “(vulnerability) rediscovery happens more often than previously reported”.

They went on to say:

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the US government may be the source of as many as one-third of all zero-day vulnerabilities detected in use each year

Another paper, by the RAND Corporation, is aimed at establishing “some baseline metrics regarding the average lifespan of zero-day vulnerabilities, the likelihood of another party discovering a vulnerability within a given time period, and the time and costs involved in developing an exploit for a zero-day vulnerability.”

According to their findings, zero-day exploits and their underlying vulnerabilities “have a rather long average life expectancy (6.9 years). Only 25% do not survive to 1.51 years, and only 25% live more than 9.5 years.”

The combined Black Hat presentation focused in part on what both groups studied – the Vulnerability Equities Process (VEP), which helps determine if “a software vulnerability known to the US government will be disclosed or kept secret.”

The Belfer Center group said they did not intend to “relitigate” the VEP debate in their paper. But given that they and those at RAND have done some of the “rigorous analysis” they called for, why not at least weigh in on it?

Trey Herr, a postdoctoral fellow with the Belfer Center’s Cyber Security Project and a co-author of that paper, said that while government use of vulnerabilities is “necessary,” he thinks government has not “walked the line very well” between disclosure and secrecy.

He said that even former NSA director Gen. Michael Hayden has acknowledged that if the agency can’t keep its secret capabilities secure, it shouldn’t be allowed to have them.

But Herr, in a post on the Lawfare blog, said he is hopeful that the recently introduced PATCH (Protecting our Ability to Counter Hacking) Act in Congress will “codify” the VEP “to facilitate accountability and continuity between administrations.”

He said the bill is significant because, “it marks the first time that Congress will be actively involved in meaningful discussion about government disclosure of vulnerabilities.”

But Herr also contends that even if the correct balance between disclosure and secrecy is achieved, “that is only a small piece of the puzzle when it comes to software security.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PZm_NiIvEf8/

Anatomy of a privacy fail – when “Dark Data” gives away your identity

This week’s super-scary security topic is deanonymisation.

The media excitement was kindled after the BBC wrote up a short article about an intriguing paper entitled Dark Data, presented at the recent DEF CON conference in Las Vegas.

We weren’t at DEF CON, so we hoped that the many stories written about this fascinating paper would tell us something useful about what the researchers did, and what we could learn from that…

…but we were quickly disappointed, faced with little more than the same brief story over and over again, told in the same brief way.

So we decided to dig into the matter ourselves, and soon found that the Dark Data paper was the English language version of a talk the researchers presented in German last year at the 33rd Chaos Computer Club conference in Hamburg.

We were delighted to find that the German talk had a title that was itself in English, yet even cooler than the DEF CON version: Build your own NSA.

If you have the time, the video makes for interesting viewing. There are simultaneous translations of satisfactory quality into English and French if your German isn’t up to scratch. If you follow along in the DEF CON slide deck then you will have an accurate English version of the visual materials.

Digital breadcrumbs

Very greatly simplified, here is what the researchers did to collect their data, and what they were able to do with it afterwards.

First, they set up a bogus marketing consultancy – a cheery, hip-looking company based in the hipster city of Tel Aviv.

Second, they used the online “marketing grapevine” to look for web analytics companies that claimed to provide what’s known as clickstream data.

Clickstreams keep a log of the websites that you visit, the order you visit them, and precise URL details of where you went on each site each time you visited. If all you are interested in is how your own customers behave when they’re on your site, this sort of data seems innocent enough. Indeed, clickstreams are often referred to by the vague name of browsing metadata, as though there’s nothing important in there that could stand your privacy on its head.

Third, the researchers soon wangled a free web analytics trial, giving them near-real-time access to the web clickstreams of about 3,000,000 Germans for a month.

In theory, this clickstream data was supposed to be harmless, given that it had been anonymised. (That means real names were stripped out and replaced with some kind of meaningless identifier instead, for example by replacing Paul Ducklin with the randomly-generated text string 4VDP0QOI2KJAQGB.)
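As a minimal sketch of the kind of anonymisation described above (our own illustration, not the analytics company’s actual scheme), each real name might simply be mapped to a stable random identifier:

```python
import secrets
import string

# Alphabet for the random identifiers, matching the style of the
# examples in the article (uppercase letters and digits).
ALPHABET = string.ascii_uppercase + string.digits

_pseudonyms = {}

def pseudonymise(real_name):
    """Return a stable 15-character random identifier for a user.

    The same user always gets the same identifier, so their entire
    clickstream can still be linked together -- which is exactly why
    this sort of "anonymised" data remains deanonymisable.
    """
    if real_name not in _pseudonyms:
        _pseudonyms[real_name] = "".join(
            secrets.choice(ALPHABET) for _ in range(15))
    return _pseudonyms[real_name]

pid = pseudonymise("Paul Ducklin")
```

The point is that the identifier is meaningless on its own, but it stays constant across every record, so anything that leaks the user’s identity once unmasks their whole history.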

At least, that’s what the web analytics company claimed – but their anonymised data turned out to be a privacy-sapping gold mine.

Anonymisation and deanonymisation

We’ve written before in some detail about how anonymous data often isn’t anonymous at all, and why.

So you probably aren’t surprised to hear that in this case, too, the anonymisation could sometimes very easily be reversed.

Part of the problem – if you ignore whether it should be lawful to collect and monetise clickstream data at all – is that marketing companies love detail, and web analytics companies are correspondingly delighted to provide it.

It’s not enough to know that someone is visiting your website – you’re also supposed to take careful notice of how they behave after they arrive, to help you answer questions about how well your site is working.

Once they’ve done a search, do they stick around, or get frustrated and leave? If they look at jeans, do they think of buying sneakers at the same time? Do Californians spend longer on the site than New Yorkers?

The theory is that if you don’t pass on data that details precisely who did what, but merely how people behave in general, then you aren’t treading on anyone’s privacy if you sell (or buy) clickstream data of this sort.

Sure, you know that user T588Z1CN4CC6XW8G visited the recipe pages 37 times in the month, while 61XLRW0NOW3G644 browsed to 29 products but didn’t buy any of them.

But you don’t know who those randomly-named users actually are – so, what harm is done, provided that you don’t also get a list that maps the random identifiers back to usernames?

Sadly, however, the URLs in your browsing history are surprisingly revealing, and the Dark Data researchers were able to figure out 3% of the users (100,000 out of 3,000,000) directly from clues in the URLs.

For example, if you login to Twitter and go to the analytics page, the URL looks like this:

https://analytics.twitter.com/user/[TWITTERHANDLE]/tweets

So if the clickstream data looks like this…

usr=PI38H1H7JGX2HZH utc=2017-08-01T13:00Z uri=https://analytics.twitter.com/user/[TWITTERHANDLE]/tweets

…then you know who PI38H1H7JGX2HZH is right away, without doing any more detective work at all.
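The unmasking step amounts to nothing more than pattern-matching on the URL. A sketch, assuming a record format like the one shown above (the handle “duckblog” is made up for illustration):

```python
import re

# Hypothetical clickstream record in the format shown above.
record = ("usr=PI38H1H7JGX2HZH utc=2017-08-01T13:00Z "
          "uri=https://analytics.twitter.com/user/duckblog/tweets")

def handle_from_clickstream(uri):
    """Return the Twitter handle embedded in an analytics URL, or None."""
    m = re.match(r"https://analytics\.twitter\.com/user/([^/]+)/tweets", uri)
    return m.group(1) if m else None

uri = record.split("uri=", 1)[1]
handle = handle_from_clickstream(uri)  # deanonymised in one step
```

One regular expression, and the “anonymous” identifier PI38H1H7JGX2HZH is tied to a real account name for every other record in the stream.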

Public versus private

The researchers also showed how you can often deanonymise individuals simply by comparing their publicly-declared interests with the data in the clickstream.

For example, if I examine your recent tweets, I’ll be able to extract a list of all the websites that you have recommended publicly, say in the last month. (The researchers automated this process using Twitter’s programming interface.)

Let’s say you told your Twitter followers that the following websites were cool:

github.com
www.change.org
fxexperience.com
community.oracle.com
paper.li
javarevisited.blogspot.de
www.adam-bien.com
rterp.wordpress.com

It’s reasonable to assume that you browsed to all of those sites yourself before recommending them, so they’ll all show up in your clickstream.

The burning question, of course, is how many other people visited that same collection of sites. (It doesn’t matter if they visited loads of other sites as well – just that they visited at least those sites, like you did.)

The researchers found that a list of fewer than ten different domains was almost certainly enough to pin you down.

Millions of other people have probably visited two or three of your favourite sites.

Only a few will have five or six sites in common with your list.

But unless you’re a celebrity of some sort, you’re probably the only person who visited all of your own favourite sites recently, and that’s that for your anonymity.
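The matching step described above is just a subset test: find the pseudonymous users whose visited domains include every site you recommended publicly. A minimal sketch, with entirely hypothetical clickstream data:

```python
# Domains publicly recommended on Twitter by the target.
recommended = {"github.com", "www.change.org", "fxexperience.com",
               "paper.li", "rterp.wordpress.com"}

# Hypothetical clickstream: pseudonym -> set of domains visited.
clickstream = {
    "T588Z1CN4CC6XW8G": {"github.com", "paper.li", "news.example"},
    "61XLRW0NOW3G644":  recommended | {"shop.example", "mail.example"},
}

# A user matches if they visited at least every recommended domain;
# extra visits make no difference to the test.
matches = [user for user, visited in clickstream.items()
           if recommended <= visited]
```

With a handful of domains in the recommended set, the match list almost always collapses to a single pseudonym, and the anonymisation is undone.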

Getting at the details

If you’ve read this far, you are almost certainly wondering, “Where does such detail in the clickstream come from?”

Can cookie-setting JavaScript embedded in the web pages you visit explain all of this detail, for example?

Fortunately, it can’t: the researchers found that browser plugins were a significant part of the deanonymisation problem, which is something of a relief.

After all, the owner of a website decides, at the server end, whether to add JavaScript; you, on the other hand, get to decide, in your own browser, which plugins to allow.

Browser plugins are a security risk because a malicious, careless or unscrupulous plugin gets to see every link you click, as soon as you click it, and can leak or sell that data to a clickstream aggregator, who can sell it on.

And it seems that plenty of web plugins fall into one of those categories, because the researchers suggested that 95% of the data in the clickstream they “purchased” in their free trial was generated by just 10 popular web plugins.

The researchers were able to verify whether a plugin leaked data directly into the clickstream simply by experimentation: install a plugin, visit a recognisable pattern of websites with the plugin turned on, then turn it off, then on again, and so on. If the traffic pattern shows up in the clickstream whenever the plugin is on, but not when it is off, it’s a fair assumption that the plugin is directly responsible for feeding the clickstream with URL data.
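In code, the on/off experiment boils down to checking that a recognisable set of marker sites appears in the clickstream exactly when, and only when, the plugin was enabled. A sketch with hypothetical marker URLs and trial data:

```python
# Marker URLs visited during each trial period (hypothetical).
MARKERS = {"http://marker-1.example/", "http://marker-2.example/"}

def plugin_leaks(trials):
    """True if marker traffic shows up iff the plugin was enabled.

    Each trial is (plugin_enabled, urls_later_seen_in_clickstream).
    """
    return all((MARKERS <= seen) == enabled for enabled, seen in trials)

# Alternate the plugin on and off across several trial periods.
trials = [(True, MARKERS), (False, set()), (True, MARKERS), (False, set())]
verdict = plugin_leaks(trials)
```

Repeating the on/off cycle several times makes coincidence increasingly unlikely, which is why the researchers could attribute the leaked URLs to specific plugins with confidence.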

For what it’s worth, the researchers claim that the worst of the data-leaking plugins – this work was done a year ago, in August 2016 – was a product called WOT, ironically short for Web Of Trust, a plugin that advertises itself as “protect[ing] you while you browse, warning you against dangerous sites that host malware, phishing, and more.”

What to do?

Here are some things you can do to reduce your trail of digital crumbs, or at least to make them a bit less telling:

  • Get rid of browser plugins you don’t need. Some plugins genuinely help with security, for example by blocking ads, limiting tracking behaviour, and restricting the power of JavaScript in web pages you visit. But even so-called “security add-ons” – as in the WOT case – can end up reducing your security. If in doubt, take advice from someone you know and trust.
  • Use private browsing whenever you can. Browser cookies that are shared between browser tabs allow advertisers to set a cookie via a script embedded in one website and to read it back from another website. Private browsing, also known as incognito mode, keeps the data in each web tab separate.
  • Clear cookies and web data automatically on exit. This doesn’t stop you being tracked or hacked, but it does make you more of a moving target because you regularly get new tracking cookies sent to your browser, instead of showing up as the same person for weeks or months.
  • Logout from web sites when you aren’t using them. It’s handy to be logged in to sites such as Facebook and Twitter all the time, but that also makes it much easier to like, share, upload or reveal data by mistake.
  • Learn where the privacy and security settings are for all the browsers and apps you use. Clearing cookies and web data from Safari on your iPhone is completely different to doing it in Firefox on Windows. Logging out of Facebook from the mobile app is different to logging out via your browser. Learn how to do it all ways up.
  • Avoid sites that use HTTP instead of HTTPS, even if you don’t need to log in. When you visit an HTTP web page, anyone else on the network around you can sniff out the entire URL you just browsed to. On HTTPS pages, the domain names are revealed by your network lookups (so a crook can see that you just asked where to find nakedsecurity.sophos.com) but the full URLs are encrypted (so the crook can’t tell which pages you looked at or what you did there).
  • Use an anonymising browser like Tor when you can. Tor doesn’t automatically make your browsing anonymous – if you login to Facebook over Tor, for example, you still have to tell Facebook who you are. But it makes it look as though you are coming from a different city in a different country every time, which makes you harder to keep track of or to stereotype.

You might be thinking that we missed an easy tip here.

It feels as though one “obvious” solution to improving your anonymity online is to do a bunch of extra browsing, perhaps even using automated tools, thus deliberately bloating your clickstream with content that doesn’t relate to you at all, hopefully throwing deanonymisation tools off the scent.

As the researchers point out in their video, however, that doesn’t work.

For example, the trick of tracking you back via the sites that you recently recommended on Twitter depends on whether anyone else visited those sites – not on whether you visited a load of other sites as well.

When it comes to generating, collecting and using clickstream data safely, less is definitely better than more…

…and none is best of all!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bW2rAO8wWKY/

News in brief: Alexa as wiretap; Prankster fools White House; Amazon suspends Blu

Your daily round-up of some of the other stories in the news

Amazon Echo hacked

Researchers at MWR have demonstrated a new way to attack Amazon’s “smart speaker”, turning it into a wiretap hiding in plain sight.

By reverse engineering the Echo, the hackers were able to treat it like just another computer running Linux and, from there, do as they pleased. The attack requires physical access to an Echo, but having compromised a device an attacker could:

…[get] persistent remote access to the device, steal customer authentication tokens, and the ability to stream live microphone audio to remote services without altering the functionality of the device.

The authors advise that the 2017 version of the Echo is not vulnerable and that the mute button on top of the Echo continues to work on hacked devices.

Prankster fools White House

A British email prankster has made a habit of tricking White House staffers into believing he’s various key members of the White House staff and the Trump family.

The perpetrator, who goes by the name “Evil Prankster,” appears to have used little more than an Outlook account and mobile device to impersonate President Donald Trump’s sons Eric and Donald Trump Jr., as well as his son-in-law and senior advisor Jared Kushner and recently removed White House Chief of Staff Reince Priebus. He’s also had exchanges while impersonating Priebus with recently ousted White House Communications Director Anthony Scaramucci.

“Evil Prankster” has been sharing screenshots of his exchanges with government officials via Twitter, including email exchanges with former Utah governor and recently nominated US Ambassador to Russia Jon Huntsman Jr., while impersonating Eric Trump, and an exchange with Homeland Security Advisor Tom Bossert while presenting himself as Kushner.

In some of the exchanges, the prankster even convinced his targets to give up their personal email addresses.

“We take all cyber-related issues very seriously and are looking into these incidents further,” White House Press Secretary Sarah Huckabee Sanders told CNN.

Amazon’s no longer feeling Blu

Budget Android phones made by Blu have been taken off Amazon’s digital shelves following the discovery of a ‘possible security issue’.

CNET reports concerns that pre-installed spying software on the phones was collecting data and sending it to servers in China, without users’ knowledge.

Blu denied any wrongdoing, explaining:

The data that is currently being collected is standard for OTA functionality and basic informational reporting. This is in line with every other smartphone device manufacturer in the world. There is nothing out of the ordinary that is being collected, and certainly does not affect any user’s privacy or security.

However, Amazon isn’t taking chances with its customers’ privacy and security, and won’t be selling the handsets again “until the issue is resolved”.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZR4LZWNV7zQ/

‘App DDoS bombs’ that slam into expensive APIs worry Netflix

Netflix has identified a denial-of-service threat to microservices architectures that it’s labelled “application DDoS”.

Traditional DDoS attacks flood networks with bogus traffic so that infrastructure runs out of resources to serve legitimate users. Netflix characterises an application DDoS attack as one in which attackers “focus on expensive API calls, using their complex interconnected relationships to cause the system to attack itself.”

Netflix’s Scott Behrens and Bryan Payne describe a scenario in which attackers figure out which API calls create the most work inside an application, then send plenty of requests to that API.

“A single request at the edge can fan out into thousands of requests for the middle tier and backend microservices,” they write. “If an attacker can identify API calls that have this effect, then it may be possible to use this fan out architecture against the overall service. If the resulting computations are expensive enough, then certain middle tier services could stop working. Depending on the criticality of these services, this could result in an overall service outage.”

The pair say the potential for application DDoS is caused in part by web application firewalls not being aware of the potential impact of mass API calls. Traffic crafted to look legitimate, but maliciously targeting the APIs that make the most work, could therefore have very nasty consequences.
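The fan-out arithmetic is what makes these attacks cheap for the attacker. A rough back-of-the-envelope model (ours, not Netflix’s, with made-up fan-out figures) of the downstream load generated by a batch of edge requests:

```python
def backend_requests(edge_requests, fanout_per_tier):
    """Total downstream requests generated by edge traffic, assuming a
    uniform fan-out at each tier (a simplification for illustration)."""
    total = 0
    requests = edge_requests
    for fanout in fanout_per_tier:
        requests *= fanout     # each request spawns `fanout` more
        total += requests      # count the work at this tier
    return total

# 100 malicious edge calls, fanning out 20x at the middle tier and
# 15x again at the backend: 100*20 + 100*20*15 = 32,000 requests.
load = backend_requests(100, [20, 15])
```

Because the amplification happens inside the service, the edge traffic can look entirely legitimate to a web application firewall while the middle tier drowns.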

It’s not hard to see why Netflix cares about this stuff: it’s built on microservices and runs in the cloud, which raises the possibility of a loosely-configured cloud-native application falling victim to an application DDoS and consuming resources at levels its owners never budgeted for, even if the attack doesn’t degrade the customer experience.

Netflix thinks it can help those of you who have potential exposure to application DDoS: it’s built a tool called Repulsive Grizzly that helps you test applications to understand their susceptibility to this type of attack. It also advises careful inspection of apps to ensure they can fail gracefully – or just degrade – rather than falling over.

The company’s also released a new tool called “ChAP”, aka the Chaos Automation Platform, which automates chaos in microservices architectures. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/01/application_ddos/

‘Real’ people want govts to spy on them, argues UK Home Secretary

Analysis UK Home Secretary Amber Rudd kicked off a firestorm in the tech community Tuesday when she argued that “real people” don’t need or use end-to-end encryption.

In an article in the Daily Telegraph timed to coincide with Rudd’s appearance at a closed event in San Francisco, Rudd argued: “Real people often prefer ease of use and a multitude of features to perfect, unbreakable security.”

She continued: “Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family? Companies are constantly making trade-offs between security and ‘usability,’ and it is here where our experts believe opportunities may lie.”

The reference to “real people” struck a nerve with a host of security experts, sysadmins, privacy advocates and tech-savvy consumers who took to Twitter to point out that they were real people, and not ISIS sympathizers – as Rudd implied in her piece. Rudd essentially declared that people who use strong encryption are not normal, not real people, which is a rather dangerous sentiment.

More broadly, her argument is an effort to square the circle on encryption: tech companies and security experts say they cannot allow access to encrypted messages without compromising the entire system, while politicians and the security services argue that they need access to all communications for national security reasons.

Magic

The politicians’ argument has long been disparaged as “magical thinking” by the tech industry (and some federal agency representatives): simply wishing something to be true does not make it possible.

“This is not about asking the companies to break encryption or create so-called ‘back doors’,” Rudd argued, while failing to recognize that any method of breaking encryption on demand is, by definition, the introduction of a backdoor. She added:

I know some will argue that it’s impossible to have both – that if a system is end-to-end encrypted then it’s impossible ever to access the communication. That might be true in theory. But the reality is different.

“There are options. But they rely on mature conversations between the tech companies and government – and they must be confidential. The key point is that this is not about compromising wider security. It is about working together so we can find a way for our intelligence services, in very specific circumstances, to get more information on what serious criminals and terrorists are doing online.”

What Rudd appears to be arguing for is encryption on people’s devices, but with tech companies providing and storing the encryption keys so they can decrypt messages when ordered to do so by the authorities – or perhaps provide some sort of secret backdoor access so investigators can leaf through decrypted chatter remotely on suspects’ devices. The existence of these skeleton keys, or secret back passages, would undermine security and privacy for everyone.

And the reference to conversations having to be confidential – well, that was borne out by the fact that the first meeting of the “Global Internet Forum to Counter Terrorism” was kept entirely secret – with limited details only put out the day before. Even the location of the meeting was kept secret.

We asked to attend and were told: “The event isn’t open to the press at the request of some of our participants.” Some tweets from inside the event by the organizers provide a very limited window into discussions.

Remember Snowden?

What Rudd’s argument fails to acknowledge, however, is the entire reason that the encryption debate took off in the first place: mass surveillance carried out by the National Security Agency (NSA) that was revealed in confidential documents released by Edward Snowden back in 2013.

Lest anyone forget, Snowden revealed that not only were the US authorities monitoring every phone call made in the US, but they had tapped the internet’s backbone and tech giants’ data centers without letting them know.

Many of those programs have since been declared illegal, but the enormous breach of trust felt by the US tech companies that had been working with the authorities to provide legal access to communications resulted in immediate efforts to encrypt all data and so cut off the NSA’s data firehose.

The tech companies also responded to massive consumer demand for more secure systems when the extent of government spying became clear. The earliest and most high-profile shift was when Apple updated its mobile operating system to provide true end-to-end encryption, meaning that it was unable to read its own users’ messages.

That move was swiftly followed by others, including Facebook-owned WhatsApp, after competitors like Signal suddenly appeared on the market and picked up tens of thousands of new users almost overnight.

Rudd’s argument essentially boils down to asking everyone to forget about the fact that the US government illegally hoovered up and stored everyone’s personal communications, and then let them do it again. Because terrorists.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/01/amber_rudd_on_encryption/

Digital Crime-Fighting: The Evolving Role of Law Enforcement

Law enforcement, even on a local level, has a new obligation to establish an effective framework for combating online crime.

As the cybercrime landscape continues to evolve, methods of policing it must change as well. The increasing number of cyber attacks propagated by everyone from nation-state actors to average criminals is blurring lines between cybersecurity and public safety, ultimately causing a shift in the role of government and law enforcement in protecting against these threats.

Verizon’s 2017 Data Breach Investigations Report notes, “In addition to catching criminals in the act, security vendors, law enforcement agencies and organizations of all sizes are increasingly sharing threat intelligence information to help detect ransomware (and other malicious activities) before they reach systems.”

Using their own behind-the-scenes collaboration venues, threat actors have also become increasingly well armed and well informed. This can be countered by defenders through better sharing of information tied to trending campaigns, changes in attack vectors, and the emergence of new tools. Foreign enemies become domestic enemies from thousands of miles away, calling for not only a deeper investment in cybersecurity skills and technologies but a broader framework for timely dissemination of intelligence across all global industry segments, both public and private.

Hacker Best Practices 
Cybercriminal activity has seen an uptick in recent years as new tools and methods for hacking become more accessible. Frameworks and platforms sold in underground forums let low-skilled attackers, today’s petty criminals, evade defensive barriers. For example, ransomware-as-a-service has emerged as an attack vector, allowing average Joes with little-to-no cyber knowledge to target both people and businesses using DIY ransomware. Additionally, the sophistication of new technologies used by hackers, such as artificial intelligence, makes malicious advances more difficult to detect.

Traditionally, law enforcement has played a role in cybercrime only after significant damage has been done — for example, when systems are held hostage by ransomware or significant corporate or personal data is stolen. However, as attacks become more frequent and the impact increasingly devastating, law enforcement, even on a local level, has a new obligation to establish an effective framework for digital crime-fighting.

Get Your Vaccine
According to Verizon’s report, information sharing can “act like a vaccine” against cyber attacks. The report states that the spread of threat information goes beyond “just the indicators of compromise (malware hashes, YARA rules and such), but also [includes] working with law enforcement to investigate and bring the perpetrators to justice. It also requires sharing the more general context of cybersecurity incidents to inform prioritization of cybersecurity actions and law enforcement efforts to counter particularly damaging threats.”

Using timely threat intelligence, law enforcement can alert both businesses and consumers of known and suspected attacks, helping them to take proper precautions to “immunize” themselves against the spread of things like malware. This means that as hacking tools and techniques become more widely available, critical threat information that can improve defenses must also become more broadly accessible.
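In practice, the “vaccine” idea often comes down to distributing simple indicators of compromise, such as file hashes, that any recipient can check locally. A minimal sketch in Python, assuming a hypothetical shared blocklist of SHA-256 digests (the blocklist contents and function names here are illustrative, not from any particular feed):

```python
import hashlib

# Hypothetical shared blocklist of known-bad SHA-256 digests; in practice
# these would arrive via a threat-intelligence feed or ISAC bulletin.
SHARED_BLOCKLIST = set()

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes) -> bool:
    """Flag content whose digest appears in the shared blocklist."""
    return sha256_of(data) in SHARED_BLOCKLIST
```

The point of the sketch is that a hash is cheap to compute and compare, so the same indicator can “immunize” thousands of organizations as soon as it is shared.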

So, how can law enforcement begin engaging more broadly in information sharing?

  • Tools and communities: There are a number of resources immediately available, including industry initiatives like Information Sharing and Analysis Centers (ISACs) and open source threat feeds that provide relevant cyberthreat data and insights.
  • Diversifying expertise: Developing the right expertise on staff, whether that means changing an existing employee’s role or hiring an in-house threat analyst, can provide a more direct connection to the intelligence community, and help law enforcement agencies maximize the information they have.
  • Establishing the right partners: Threat intelligence partners can range from security vendors to local DHS fusion centers. These partners can provide common indicators and historical context that help prevent attacks, as well as best practices for incident response in the event a breach occurs.
  • Focus on forensic data: Leveraging in-house or external digital forensics and incident response resources as sources for key bits of data either during or after cyber attacks can yield valuable information in the fight against cybercriminals. Sharing information gathered with other law enforcement entities, organizations specializing in post-breach forensics and incident response, companies that have their own incident response resources, and government institutions can create a collective of expert knowledge that is a formidable counter to cybercriminal activity.
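As a rough illustration of the forensic-data sharing step above, the snippet below bundles locally observed indicators into a timestamped JSON report. The field names ("source", "generated", "indicators") are assumptions for illustration only; real exchanges typically use standards such as STIX carried over TAXII.

```python
import json
from datetime import datetime, timezone

def package_indicators(source, indicators):
    """Bundle a list of indicator dicts into a timestamped JSON report
    suitable for handing to a sharing partner."""
    report = {
        "source": source,                                   # who observed these
        "generated": datetime.now(timezone.utc).isoformat(),  # when packaged
        "indicators": indicators,                           # the IoCs themselves
    }
    return json.dumps(report, indent=2)
```

Even this simple structure captures the two things a partner needs: what was seen and who saw it, so the receiving side can weigh the indicator's provenance.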

Although it is impossible to prevent cybercriminals from attempting attacks, organizations that properly take advantage of threat information can detect adversaries before they can do damage. Law enforcement plays an important role in this collaboration. By following best practices for threat intelligence sharing and taking a proactive approach, law enforcement can help pass along important information quickly, and thus enable organizations across all sectors to make better judgments and stop the bad guys in their tracks.

Travis Farral is a seasoned IT security professional with extensive background in corporate security environments. Prior to his current role as Director of Security Strategy at Silicon Valley-based threat intelligence platform provider Anomali, Farral was with ExxonMobil, … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/digital-crime-fighting-the-evolving-role-of-law-enforcement/a/d-id/1329461?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Qualys to Acquire Assets of Nevis Networks

The transaction aims to bolster Qualys’ efforts in network traffic analysis and speeds up its move into the endpoint attack-mitigation and incident response market.

Qualys announced today it’s snapping up assets of Nevis Networks, as the cloud-based security and compliance solutions provider seeks to increase its network traffic analysis capabilities and expand into new markets.

As part of the deal, Qualys plans to natively integrate Nevis’ network traffic analysis tools into its Qualys Cloud Platform. Nevis’ tools are designed to help enterprises enforce network access controls to business applications, based on company policy.

Qualys, which did not disclose the terms of the cash transaction, believes the acquisition will give it deeper expertise in passive scanning technologies, which in turn will accelerate its move into the endpoint mitigation and response market. Nevis’ technology allows company employees and guests, such as contractors and customers, to share network access while providing the controls required by regulatory and compliance agencies.

Read more about the transaction here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/qualys-to-acquire-assets-of-nevis-networks/d/d-id/1329506?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Senators Propose IoT Security Legislation

A new bill aims to prohibit the production of IoT devices if they can’t be patched or have their password changed.

A group of US senators today introduced a bill in Congress that would require Internet of Things (IoT) manufacturers to produce devices that could be patched, have passwords updated, and come to the market without known security vulnerabilities, according to a Reuters report.

Should the legislation pass, the bill would also allow federal agencies to purchase non-compliant IoT devices if they get approval from the US Office of Management and Budget, according to Reuters.

The legislation also proposes expanding protections for cybersecurity researchers who attempt to hack IoT devices in search of bugs to report to the manufacturer, the report noted.

The bipartisan group of senators includes Republicans Cory Gardner and Steve Daines, and Democrats Ron Wyden and Mark Warner, according to Reuters.

Read more about the bill here.

Article source: https://www.darkreading.com/vulnerabilities---threats/us-senators-propose-iot-security-legislation-/d/d-id/1329509?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Game of Thrones data leaked in HBO hack

HBO got its welcome to the unwelcome corner of Netflix’s and Sony’s world last week – the “you’ve been hacked” corner.

For an unknown number of entertainment journalists, it probably couldn’t have been more welcome. Talk about a story – actually two stories – falling into your lap.

Falling into your inbox, actually. How about a subject header, “1.5 TB of HBO data just leaked!!!” for a tease?

Story One was that within what the hackers, whose command of English is apparently somewhat limited, called the “greatest leak of cyber space era” is the script for next week’s episode of Game of Thrones. You know, what Time magazine’s cover of 10 July 2017 called the “World’s Most Popular Show,” and said it is watched by an average of more than 23 million people in the US alone.

Reportedly, also on the site hosting the script are pending episodes of Ballers (starring Dwayne “The Rock” Johnson), Insecure, a new show called Room 104, and Barry, with more to come, according to the unidentified hackers.

The full email, according to Entertainment Weekly, reads:

Hi to all mankind. The greatest leak of cyber space era is happening. What’s its name? Oh I forget to tell. Its HBO and Game of Thrones……!!!!!! You are lucky to be the first pioneers to witness and download the leak. Enjoy it spread the words. Whoever spreads well, we will have an interview with him. HBO is falling.

Story Two is that the hack itself, if not its scope, was legitimate, confirmed by the network in a statement to Entertainment Weekly and by an email from HBO chairman and CEO Richard Plepler to employees, reported by numerous outlets:

Dear Colleagues,

As most of you have probably heard by now, there has been a cyber incident directed at the company which has resulted in some stolen proprietary information, including some of our programming. Any intrusion of this nature is obviously disruptive, unsettling, and disturbing for all of us.

I can assure you that senior leadership and our extraordinary technology team, along with outside experts, are working round the clock to protect our collective interests … The problem before us is unfortunately all too familiar in the world we now find ourselves a part of.

As has been the case with any challenge we have ever faced, I have absolutely no doubt that we will navigate our way through this successfully.

Richard

HBO declined to specify how much “proprietary information” had been taken, saying its investigation is ongoing.

The hackers’ claims have not yet been verified but if they’re true, they have stolen a vast trove of entertainment content – a single terabyte can hold an estimated 500 hours of video – including more episodes of Game of Thrones and unreleased feature films, plus internal communications and employee information.

The group behind it remains anonymous, although some reports have called them “HBO is falling,” after the last line of the announcement email. Some of the communications regarding the hack have been attributed to “Mr. Smith” – not likely to be a real name.

And while it does not appear to be connected to any kind of political retribution, it obviously recalls the 2014 hack of Sony Pictures, attributed to North Korea in apparent retaliation for the movie “The Interview,” which the hermit kingdom felt held its leader, Kim Jong Un, up to ridicule.

That hack, which leaked and distributed unreleased movies and internal emails, and exposed personal information, including taxpayer IDs of more than 47,000 current and former Sony employees and actors, was detailed by Naked Security’s Lisa Vaas and discussed in a Chet Chat podcast with Chester Wisniewski and Paul Ducklin.

Netflix was a victim as well, earlier this year, when hackers leaked some episodes of season five of “Orange Is the New Black” before the June release date.

How much damage this might do to HBO is all speculative so far. Sony estimated the financial hit from its 2011 PlayStation Network breach at $171 million, which to a $41 billion company is worth noticing but not even close to crippling, at less than half a percent of its value.

And it is unlikely that copies of the Game of Thrones script, or even videos on the cyber underground will dent the popularity of the series or of HBO. If the hackers do have internal communications or personal information on employees, depending on how salacious they are, that could be more damaging.

But, since many hackers tend to exaggerate what they’ve got, that means everybody will have to wait and see … or not see.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gaCzpkoBHZE/