The opsec blunders that landed a Russian politician’s fraudster son in the clink for 27 years

Black Hat Uncle Sam’s lawyers have revealed the catalog of operational security mistakes that led to the cuffing of one of the world’s most prolific credit-card crooks.

Last year, Roman V Seleznev, 32, was found guilty of multiple counts of fraud and hacking by a jury in Washington, USA. He was later thrown in the cooler for 27 years. Seleznev – the son of the ultra-nationalist Russian politician Valery Seleznev – also faces other charges.

This week, US Department of Justice prosecutors who worked on the case told the Black Hat security conference how the fraudster was brought down.

Seleznev first came to the American authorities’ attention in the early 2000s as a dabbler in identity theft using the screen name nCux – which sounds phonetically like the Russian word for psycho. By 2005 he had moved into the lucrative world of credit card theft.

By amassing online logs of his time on underground forums, the US Secret Service thought they had enough evidence that nCux was Seleznev and was working out of Vladivostok, Russia. They held a meeting with the Russian Federal Security Service to discuss the case and how to proceed.

Less than a month later, all activity by nCux stopped dead, and the name was never seen on forums again. The last forum post explained that nCux was going out of business permanently.

But shortly afterwards, another big-name credit card seller showed up on the Carder.SU criminal forum and the Feds immediately knew that something was up, because Track2 – who was supposed to be new to the site – had been marked by the admins as a trusted and verified credit card dealer.

Eventually the person set up the websites Track2.name and Bulba.cc, which were very similar in design. These started selling large numbers of credit card details and had user guides available telling people how to exploit them.

Deli delights

Around this time, a police computer specialist in Washington State was investigating a malware attack against a branch of the Schlotzsky's Deli chain. The store had been flagged up as having an unusually large number of credit card fraud cases, and the investigators found that sales terminals were infected with malware that was siphoning victims' personal information to Russia – in particular to the servers behind the Track2.name and Bulba.cc websites.

The investigator got a warrant to search the email accounts used to register the domains, and found lots of interesting evidence. One of the email accounts, hosted by Yahoo!, had been used to open a PayPal account for a man in Vladivostok and had also been used to order flowers for a woman identified as Seleznev’s wife.

The Yahoo! account was also used to purchase a server from HopOne Internet in McLean, Virginia. The Secret Service got a dialed number recorder (DNR) order against the server, which allowed them to monitor network connections to and from the device and told them which IP ports were used, but not the content of any communications.
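In software terms, the effect of a DNR order is easy to picture: record who talked to whom, and on which ports, while discarding everything else. Here is a minimal illustrative sketch using Python's scapy library (emphatically not the Secret Service's actual tooling):

```python
# Illustrative only: log TCP connection metadata (IPs and ports) but never
# payloads, roughly the information a DNR order permits investigators to collect.
from scapy.all import IP, TCP, sniff

def log_metadata(pkt):
    # Record who talked to whom, and on which ports; ignore packet contents.
    if IP in pkt and TCP in pkt:
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> {pkt[IP].dst}:{pkt[TCP].dport}")

# Capture only SYN packets so each connection attempt is logged once.
# Requires root privileges and a libpcap backend.
sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=log_metadata, store=False)
```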

The DNR order showed that the machine was contacting hundreds of computers in the US, almost all of them restaurants running very similar point-of-sale software. They worked out that the server was scanning for misconfigured remote desktop protocol connections, pumping malware into vulnerable sales terminals, and then harvesting data back – presumably stealing credit card numbers of paying customers.
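The scanning half of that pipeline is mundane: RDP listens on TCP port 3389, so finding candidate machines is little more than a connect sweep. A minimal sketch, suitable only for auditing hosts you own (the subnet below is a placeholder):

```python
import socket

def rdp_exposed(host: str, port: int = 3389, timeout: float = 1.0) -> bool:
    """Return True if the host accepts TCP connections on the default RDP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: sweep your own point-of-sale subnet for exposed RDP.
for last_octet in range(1, 255):
    host = f"192.168.1.{last_octet}"
    if rdp_exposed(host):
        print(f"{host} exposes RDP - check its credentials and firewall rules")
```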

That breakthrough allowed law enforcement to get a warrant to search the server, where they found over 400,000 credit card numbers. They also found evidence that Seleznev had been using the server for his personal web browsing, leaving behind a trail of identifying documentation. For example, he had booked travel tickets that had his passport details on them, and there was evidence of numerous aliases that he was using online. That was enough to file an indictment against Seleznev in March 2011.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/russian_politicians_son_gets_27yrs_fraud/

Hackers can turn web-connected car washes into horrible death traps

Black Hat Forget hijacking smart light bulbs. Researchers claim they can hack into internet-connected car wash machines from the other side of the world and potentially turn them into death traps.

In a presentation at the Black Hat conference in Las Vegas on Wednesday, Billy Rios, founder of security shop Whitescope, and Jonathan Butts, committee chair for the IFIP Working Group on Critical Infrastructure Protection, showed how easy it was to compromise a widely used car wash system: the Laserwash series manufactured by PDQ, based in Wisconsin, USA.

The pair found that Laserwash installations can be remotely monitored and controlled by their owners via a web-based user interface: the hefty gear has a built-in web server, and can be hooked up to the public internet, allowing folks to keep an eye on their equipment from afar.

A photo of the web-based interface which, oddly, bundles in a social media feed

The hardware’s control system is an embedded Windows CE computer powered by an ARM-compatible processor. Since Microsoft no longer supports WinCE, no matter how secure the control software running on it is, the equipment is at the mercy of any unpatched security bugs lingering in the underlying operating system, Rios said.

However, there was no need to find and exploit old WinCE holes to remotely break into one of these bad boys. Once the infosec duo had found a suitable car wash connected to the web, they discovered that the default password – 12345 – just worked. Once logged in from their browser, they were given full control of the system.
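Auditing your own kit for this class of blunder is straightforward. Below is a minimal, purely hypothetical sketch that tries a shortlist of factory passwords against a device's web login; the endpoint, form fields, and success check are assumptions for illustration, not PDQ's actual interface:

```python
import requests

# Hypothetical audit of your own device's web login for factory passwords.
# The endpoint, form fields, and success check are illustrative assumptions.
DEFAULT_PASSWORDS = ["12345", "admin", "password", "0000"]

def find_default_password(base_url: str, username: str = "admin"):
    for password in DEFAULT_PASSWORDS:
        resp = requests.post(f"{base_url}/login",
                             data={"user": username, "pass": password},
                             timeout=5)
        if resp.status_code == 200:  # assume HTTP 200 means a successful login
            return password
    return None

if find_default_password("http://192.0.2.10") is not None:
    print("Factory password still set - change it and firewall off the device")
```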

From there, they could prod the web app into doing things it really shouldn’t.

“Car washes are really just industrial control systems. The attitudes of ICS are still in there,” Rios said. “We’ve written an exploit to cause a car wash system to physically attack; it will strike anyone in the car wash. We think this is the first exploit that causes a connected device to attack someone.”

In their talk the pair showed how they managed to bypass the safety sensors on the car wash doors to close them on a car entering the washer. Butts told The Register that much more destructive hacks were possible.

“We controlled all the machinery inside the car wash and could shut down the safety systems,” he said. “You could set the roller arms to come down much lower and crush the top of the car, provided there were no mechanical barriers in place.”

The duo said they shared their findings with PDQ in February 2015, and kept trying to warn the biz for two years. It was only when their talk was accepted for Black Hat this year that the manufacturer replied to their emails, and then it turned out that it wasn’t possible to patch against the aforementioned exploits, we’re told.

In a statement to The Register on Thursday, PDQ spokesman Todd Klitzke said the car wash maker alerted its customers yesterday, coinciding with the conference presentation, and urged people to change their passwords from the default, or firewall off their equipment:

We are aware of the presentation at Black Hat USA 2017, and are diligently working on investigating and remediating these issues. As we have advised customers via a product security bulletin issued yesterday, all systems – especially internet-connected ones — must be configured with security in mind. This includes ensuring that the systems are behind a network firewall, and ensuring that all default passwords have been changed.

Prior to the Black Hat presentation, PDQ had been working with Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) to responsibly advise customers of mitigation measures, and PDQ continues to work with ICS-CERT on this matter. Our technical support team is standing ready to discuss these issues with any of our customers.

Yeah, maybe you should stick to hand washing your motor if you’ve upset any hackers. ®

PS: Yes, the headline is a homage to this.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/killer_car_wash/

Wait, this email isn’t for me – what’s it doing in my inbox?

For as long as email has been in the mainstream, stories have abounded about messages reaching the wrong recipient, with embarrassing or detrimental consequences. Perhaps a mis-sent shipping notification from a retailer isn’t a big deal, but a financial email containing sensitive information definitely shouldn’t land in the wrong inbox.

Recently this topic came up on Ask Slashdot via user periklisv, with the pointed question: What do you do when you get a misdirected email?

Over the past six months, some dude in Australia (I live in the EU) who happens to have the same last name as myself is using [my email address] to sign up to all sorts of services… how do you cope with such a case, especially nowadays that sites seem to ignore the email verification for signups?

The thread is full of anecdata about emails sent to the wrong recipients, often full of embarrassing or sensitive information — bank statements, loan information, lawyer correspondence.

A quick search reveals that this issue comes up in the news on a larger scale with some frequency. For example, in 2012, a company accidentally emailed an employee termination notice to all of their 1,300 global employees instead of just one. Thankfully, people quickly caught on that this email wasn’t meant to go on blast (unfortunately for the person who was still fired).

These mistakes, though rather innocuous – usually made by someone omitting a character, making a typo, or mixing up domain names or extensions (.com instead of .net, Yahoo instead of Gmail) in a rushed moment – are usually resolved by a quick “hey, you sent this to the wrong person” reply.

But what happens if a misdirected personal email lands in the inbox of someone who might not be so honest? Or what happens when a large company sends out confidential information via email to unintended recipients?

Just one example: a representative from Rocky Mountain Bank sent sensitive customer loan information to the wrong recipient via email, and the bank sued Google to try to quash the breach and keep the data from spreading any further. (Luckily for the employee, it turned out that the unintended recipient marked the email as spam and never even looked at it.)

That’s a data breach thanks to a simple typo. In theory, this should be easy enough to avoid.

But this isn’t a new problem. In fact, in 2011, several security researchers highlighted exactly how an enterprising criminal could typosquat on a number of domain names to wait for confidential information to come across from misdirected emails, like a trapdoor spider waiting for its prey. The researchers captured more than 20GB of data from 120,000 misdirected emails meant for Fortune 500 companies in the span of six months.

The difference between the legitimate email addresses and the ones used by the security researchers? A simple dot — that’s all.
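The defence is as mechanical as the attack: enumerate your own missing-dot variants, then register or monitor them before someone else does. A minimal sketch:

```python
def doppelganger_domains(fqdn: str) -> list:
    """Generate 'missing dot' variants, e.g. mail.example.com -> mailexample.com."""
    labels = fqdn.split(".")
    variants = []
    # Fuse each adjacent pair of labels except the pair containing the TLD,
    # since fusing into the TLD would not produce a registrable name.
    for i in range(len(labels) - 2):
        fused = labels[:i] + [labels[i] + labels[i + 1]] + labels[i + 2:]
        variants.append(".".join(fused))
    return variants

# Variants a company might defensively register or watch for rogue mail servers.
print(doppelganger_domains("us.mail.example.com"))
# ['usmail.example.com', 'us.mailexample.com']
```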

As with so many security issues that are ultimately rooted in habit and human error, mitigating this one is easier said than done. Many people know they shouldn’t send sensitive information via email, but inevitably some do it anyway out of (what they see as) necessity.

Of course, robust data and email policies to filter and/or block confidential information from egressing via email can certainly help. There are additional technical approaches we would also recommend:

Email verification for signup forms: People are in a hurry and make mistakes. It’s always going to happen. As identified by the Slashdot poster, the simple step of adding an email verification step to a sign-up process would do much to reduce misdirected emails (a minimal sketch of such a check follows this list).

Make it easier for employees to stop hitting the “attach” button: We follow the path of least resistance — if it’s too difficult to collaborate or share by any other method, people will stick with what they know and what’s fastest. Centralized file repositories internally or in the cloud (like Dropbox), when implemented well, can make using email attachments less appealing by comparison.

Encrypt: Another possible failsafe is to encrypt everything that’s outgoing – that way even if the email does end up in the wrong hands, there’s not much the recipient can do with it.
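On the first of those recommendations, the verification step itself is cheap to build: email the new address a signed, expiring token and activate the account only when the token comes back. A minimal sketch using Python's standard library (the secret key and expiry window are placeholders):

```python
import base64
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-long-random-secret"  # placeholder value

def make_token(email: str) -> str:
    """Issue a signed token binding the address to the time it was requested."""
    payload = f"{email}|{int(time.time())}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"|" + sig).decode()

def verify_token(token: str, max_age: int = 86400):
    """Return the email if the token is authentic and fresh, otherwise None."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, _, sig = raw.rpartition(b"|")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None
    email, _, issued = payload.decode().rpartition("|")
    if time.time() - int(issued) > max_age:
        return None
    return email

# The signup flow emails a link containing make_token(address) and only
# activates the account when verify_token() returns that same address.
```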

Are misdirected emails an issue where you work? Have you managed to make them an issue of the past? We welcome your thoughts or tips on how to mitigate this issue in the comments.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oZjdettTZuQ/

Start-up accused of undermining popular open-source tools

Kite, a San Francisco-based startup that develops tools for IDEs and text editors used by programmers, has apologized to the open-source community after its code was found to include what many considered to be ads for the company.

Sharp-eyed developers noticed that an update pushed to Kite’s Minimap for Atom GitHub page seemed to inject links to Kite’s own website into the code Minimap was displaying, and to upload scripts to Kite’s own servers. Uploading a programmer’s work to an untrusted third-party server is a security concern.

Was Kite being unethical? The open-source community certainly thought so.

Kite, founded in 2014 by CEO Adam Smith, makes tools that use machine learning to acquire data from GitHub with the stated intention of making a programmer’s work easier and better.

Open-source software has driven innovation in computer technology. At this moment, I’m using LibreOffice Writer to compose my words, Google Chromium to use the web for my research, and Kubuntu Linux as my operating system. And the internet is largely built on open technologies – HTTP, HTTPS, HTML, Apache web servers, email, among others.

Open-source software’s most enthusiastic cheerleader is Richard Stallman, who says that proprietary software cannot be properly trusted, and the key to having control over one’s computer systems is open-source software.

Proprietary software means, fundamentally, that you don’t control what it does; you can’t study the source code, or change it. It’s not surprising that clever businessmen find ways to use their control to put you at a disadvantage.

Today you can avoid being restricted by proprietary software by not using it. If you run GNU/Linux or another free operating system, and if you avoid installing proprietary applications on it, then you are in charge of what your computer does.

Kite’s code is open-source and available on GitHub for review. And reviewing it is exactly what the open-source community has been doing since the script and its functions were noticed.

Here’s what GitHub users have been saying about Minimap for Atom’s “implement Kite promotion” script:

As I understand the function of this package, it should not even be in there at all.

Time for Adblock for Atom.

This is not cool at all. Kind of crazy that anyone would think this is okay.

Definitely against company policy to upload code to external servers. This is the kind of BS that makes companies completely lockdown the software developers can use. Very disappointed.

Seems like the developer is ignoring our concerns. If there won’t be any update on the situation in the near future someone should really think about forking the project and publish it on apm. I don’t think many people are okay with the ads and everyone I know that uses Atom also uses Minimap, so I guess there is a huge demand for a fork.

Autocomplete-python is another Kite tool, which Kite took over from GitHub user Dmytro Sadovnychyi in December 2016. Other GitHub users didn’t appreciate how Kite acquired the project:

@sadovnychyi, thanks for following back. I’m not sure if you have researched this issue, but many of us feel the autocomplete-python package is being overtaken by the Kite team, and the popularity of this plugin is being used to promote their service.

It’s also worth noting that there are no regular signs of these new maintainers being introduced to the project. Having push access before your first contribution is atypical, and it is not an isolated case of one developer.

I believe we could hear more about how this collaboration came to be, and how/where did those discussions happen.

Sadovnychyi replied:

It’s hard to deny that as a result Kite did get a promotion out of this. It’s not like we have a lot of autocomplete options available for Python, so I did have some interest in Kite when it was announced. It would be awesome if Jedi (or another opensource project) would add completions based on machine learning, but it didn’t happen yet.

I’m sorry that they weren’t properly introduced into the project, but this is not a huge opensource project with strict guidelines, so I considered it would be fine.

The discussion happened over email and as I said it seemed like both parties (Kite and autocomplete-python) could improve each other based on this collaboration.

With support for a fork growing, Ryan Leckey delivered one on July 5:

I forked, renamed, and reverted the Kite nonsense. I’d appreciate it if anyone else wants to help.

Information about Leckey’s plugin can be found on Atom’s website.

So what happened? Here’s what Kite’s CEO Adam Smith had to say when he finally responded to the controversy earlier this week:

Over the last few days, Kite has been knocked around in social media (and the actual media) over several moves we made to expand our user base. It was a humbling experience, and I’d like to apologize to the entire open source community for our handling of these projects. While we had good intentions, we inadvertently angered many in the open-source community in the process – and we’ve now taken steps to address those concerns.

We’re big believers in open source – we’ve made contributions as individuals and as a company. But we messed some things up. We’re trying to fix the issues we created, and we’re eager to hear additional feedback from the open source community.

The Kite controversy illustrates the value of open-source software to cybersecurity. Kite’s actions raised understandable concerns among security-minded users, but the episode also shows how the community can spot and act on problems.

The incidents should fuel the debate about open-source software and security. Is open source good for security or not?


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/39bGIXxGX2c/

Wells Fargo apologizes for spilling trove of data on wealthy clients

In what can only be described as a true aw-shucks moment, Wells Fargo & Co finds itself offering apologies to approximately 50,000 Wells Fargo Advisors clients whose information was inappropriately shared, as part of an electronic discovery (e-discovery) response to a subpoena, by Wells Fargo’s outside counsel, Angela Turiano of Bressler, Amery & Ross, with Allen Miller, the attorney for an ex-Wells Fargo employee.

While it is possible that the 50,000 clients’ accounts were all part of the fraternal squabble and litigation between one of the Sinderbrand brothers (Gary Sinderbrand, an ex-employee, and Steven Sinderbrand, an active Wells Fargo financial advisor) and Wells Fargo over the payment of commissions and fees in support of these high-end investors, that’s a lot of data to mishandle.

To create a disk with 1.4GB of data (which equates to approximately 14,000 documents) is not an insignificant electronic litigation support task.

We learn from the New York Times that the unidentified “vendor” conducting the e-discovery process was thrown under the bus by Wells Fargo’s outside attorney, Turiano, in her email to Miller in which she addresses the error.

We went through a long process of a very large email review with an outside vendor with instructions on exclusion which was spot checked. Clearly there was some type of vendor error — which I am confirming now.

How e-discovery is supposed to work

For those unfamiliar with how the e-discovery process works within a large enterprise, let’s review. A request is made by opposing counsel – often in the form of a subpoena. The request is reviewed for appropriateness and scope, and then a determination is made as to the existence of the data. Once the data is identified, if appropriate, the data is pulled and isolated.

Once isolated, a work copy is created (preserving the original) and then the data is painstakingly reviewed for applicability specific to the outstanding request.

Then the information is compiled and shared with the requesting party in the most secure manner available.

How it worked in this instance

In this instance, accepting Turiano’s explanation, the information was identified, isolated, and compiled. Indeed, she notes the process included a laborious email review and guidance provided to their vendor on exclusions. Then the information was “spot-checked”.

The non-excluded information was then copied to the disk (1.4GB) and provided to opposing counsel. The information, according to the NYT, which reviewed the data firsthand, includes customer names, Social Security numbers, financial details, portfolios, and the fees the bank charged the clients.

Wells Fargo, damage control

Wells Fargo, once notified, went into crisis control mode, given that Miller had shared the information with the NYT and had not returned it to Turiano. Wells Fargo filed suit to compel Miller to return the information that had been mistakenly shared by its outside counsel, Turiano.

Wells Fargo then acknowledged the e-discovery error, saying:

We take the security and privacy of our customers’ information very seriously. Our goals are to ensure the data is not disseminated, that it is rapidly returned, and that we ensure the discovery process going forward in the cases is working as it should.

Does this happen often?

Thankfully it doesn’t, even though the e-discovery process is challenging for companies, regardless of size.

A high-profile case that had its genesis in an inadvertent provision of material during the discovery process involved Hilton Hotels & Resorts Worldwide and Starwood Hotels in 2009-10. Hilton’s attorneys provided information to Starwood in support of a compensation case involving a former Starwood employee who was by then a Hilton employee.

When the information arrived at Starwood, the team there discovered that the attorney had provided boxes of documents containing the entire plan for Starwood’s W-hotel concept. The ensuing corporate espionage case brought Hilton’s Denizen brand to a halt. The 100,000-plus documents stolen by the two departing Starwood employees were returned, and according to the New York Times, Starwood received $75m in compensation and $75m in hotel management fees from Hilton.

The task of performing discovery within corporations is not for the faint of heart, especially in the age of electronic data. Use of third-party vendors and outside counsel is quite normal.

The ERDM (electronic records document management) requirements in support of litigation are arduous for companies of all sizes. While it is one thing to identify the existence of items germane to a given litigation, it is another to process the information.

Companies would be well served to have in place an audit capability for both inside and outside counsel (and vendors) to ensure there is visibility into the ERDM and e-discovery process from beginning to end, with emphasis on accomplishing the process in the most secure manner possible.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/h8T8vKF2ETY/

Independent labs to probe medical devices for security flaws

Speaking from a security perspective, medical devices are a sickly lot.

They suffer from many miseries: lack of quality assurance and testing, rush to release pressures on product development teams, accidental coding errors, malicious coding, inherent bugs in product development tools, being tiny, having low computing power in internal devices, and, well, the list goes on.

Obviously, it’s no walk in the park to secure these things. A majority – 80% – of medical device manufacturers and users surveyed in a recent study said the gadgets are “very difficult” to secure. Only 25% of respondents said that security protocols or architecture built inside the devices adequately protect clinicians and patients.

…all of which is why security researchers who specialize in dissecting medical devices were encouraged when they learned, on Monday, that a new global federation of labs will test the security of medical devices.

According to the Security Ledger, the announcement was made by a consortium of healthcare industry companies, universities and technology firms called the Medical Device Innovation, Safety and Security Consortium (MDISS).

The network of labs has been dubbed the World Health Information Security Testing Labs (WHISTL). The facilities will reportedly adopt a model akin to Underwriters Laboratories, which tests electrical devices, but will focus on issues related to medical device cybersecurity and privacy.

Nice, said Billy (BK) Rios, a researcher with WhiteScope. Along with Dr Jonathan Butts, Rios recently published a study about more than 8,000 vulnerabilities in the code that runs in seven analyzed pacemakers from four manufacturers.

The Security Ledger quoted a statement from Rios:

[The WHISTL lab network is] a huge step in the right direction. Patient encounters with connected yet poorly secured medical devices are increasing exponentially, and nobody really has a handle on the risks we’re facing.

He knows of which he speaks.

Take those Hospira LifeCare patient-controlled analgesia (PCA) pumps that Rios was picking apart for security flaws a few years ago.

He found flaws, in spades. As we explained at the time, the pump used so-called “drug libraries” – data that includes dosage limits to help ensure the pumps operate safely – that could be updated … by anybody… without authentication.

Rios had, back in May 2014, recommended that Hospira analyze other models of its infusion pumps to see if they shared the same vulnerabilities with the ones he had tested, but five months later, he heard that the company was “not interested in verifying that other pumps are vulnerable”.

One day, he found himself splayed out after surgery when he realized he was hooked up to one of those pumps. Any fuzzy feel-good he might have gotten from that trickle of pethidine must have dissipated like fairy dust.

It’s unfortunate that the maker wasn’t particularly interested in checking on vulnerabilities in its other pump models, in light of the fact that those flaws got worse still.

A year later, Rios looked at more Hospira LifeCare PCA pumps and found far more serious vulnerabilities than the ones he tested in the previous year: vulnerabilities that would, in fact, allow somebody to remotely change drug doses, as well as tweak maximum permitted doses and let through a fatal overdose.

That’s just one of many stories about the lack of security in medical devices.

Benjamin G. Esslinger, a clinical engineer at Eskenazi Health, said that the resources of the WHISTL labs are what the field needs to get to best practices for medical device cybersecurity:

Ambitious initiatives like WHISTL are sorely needed, and I look forward to supporting MDISS in this undertaking. Through our over-dependence on undependable things, we have created the conditions where accidents and adversaries can have a profound impact on public safety and human life.

According to Security Ledger, WHISTL labs will be one of a rare breed: an independent, non-profit network of labs specifically designed for the needs of the medical field, including medical device designers, hospital IT, and clinical engineering professionals.

The tools it will use to assess device security include fuzzing – that’s a way of robotically bombarding software with random data in an attempt to cause the sort of unusual crashes and errors that mimic how programs behave under real-world use – static code analysis and penetration testing.
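At its simplest, fuzzing really is just that. Here is a toy sketch that feeds random bytes to a target program's standard input and flags crashes; real harnesses are far more sophisticated, and the target binary named below is hypothetical:

```python
import random
import subprocess

def fuzz(target: str, iterations: int = 1000, max_len: int = 512) -> None:
    """Feed random bytes to the target's stdin and report abnormal exits."""
    for i in range(iterations):
        data = bytes(random.randrange(256)
                     for _ in range(random.randrange(1, max_len)))
        try:
            proc = subprocess.run([target], input=data,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            print(f"iteration {i}: target hung")  # hangs are findings too
            continue
        # A negative return code means the process died on a signal (e.g. SIGSEGV).
        if proc.returncode < 0:
            with open(f"crash_{i}.bin", "wb") as f:
                f.write(data)  # keep the input that triggered the crash
            print(f"iteration {i}: crashed on signal {-proc.returncode}")

fuzz("./firmware_parser")  # hypothetical target binary
```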

WHISTL labs will identify and mitigate security flaws, reporting them directly to manufacturers. It will also educate professionals and device-makers about device security and security best practices. Flaws will also be publicly disclosed to the international medical device vulnerability database (MDVIPER), which is maintained by MDISS and the National Health Information Sharing and Analysis Center (NH-ISAC).

Ten new device testing labs are slated to open by the end of the year, in US states including New York, Indiana, Tennessee and California. Outside North America, the federation will open labs in the UK, Israel, Finland, and Singapore. Other facilities will be announced in the coming weeks.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Wzv2VLSraoY/

Strong and stable, my arse. UK wobbles when coping with ransomware

A third of businesses have suffered a ransomware attack in the last 12 months, according to a new survey sponsored by Malwarebytes.

Globally, most organisations experienced some form of attack or breach during the past year, with 35 per cent suffering a ransomware attack specifically. Ransomware demands are relatively low, with nearly three in five of the infected organisations reporting extortionate demands of $1,000 or less.

In the UK, almost 20 per cent of businesses have little or no confidence they could stop ransomware, compared to a global average of 10.7 per cent, a fear that the effects of the recent WannaCrypt attack on the NHS only partly explain. Almost half (46 per cent) of UK organisations hit by ransomware wound up losing files – the highest among the geographies surveyed.

British orgs also expressed the greatest willingness to pay ransoms, with 43.1 per cent opting to cough up compared to 15.9 per cent in France – a lamentably weak and wobbly showing by Team GB against a nation sometimes referred to as cheese-eating surrender monkeys.

Most of the 37 Brit organisations who admitted any ransomware problems said they had been hit more than five times during the past year. The UK also turned out to be the most clueless nation when it comes to identifying the source of ransomware, with 35.4 per cent not knowing where it came from compared to just 8.6 per cent of American groups who confessed to being equally ignorant.

Downtime caused by a ransomware attack could be more devastating than the fees demanded. For 15 per cent of affected organisations in the UK (or around one in seven), a ransomware infection caused 25 or more hours of downtime.

The survey involved polling firms with up to 1,000 employees in North America, France, UK, Germany, Australia, and Singapore. Osterman Research surveyed a total of 1,054 companies worldwide as part of the study, explained in more depth in Malwarebytes’ Second Annual State of Ransomware Report. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/07/27/ransomware_survey/

Downtime from Ransomware More Lethal to Small Businesses Than the Ransom

New survey of small- to midsized businesses (SMBs) shows half of SMBs infected with malware suffer 25 hours or more of business disruption.

For more than half of all small- to midsized businesses (SMBs) infected with ransomware in the past year, attackers demanded a ransom of $1,000 or less – a drop in the bucket compared to the downtime these attacks cause, a new report shows.

The survey of more than 1,000 SMBs in the US, UK, France, Germany, Australia, and Singapore found that while 65% have not been hit with a ransomware attack in the past 12 months, nearly 30% have suffered one to five such incidents; 5%, six to 10 ransomware incidents; and 1%, 11 to 20-plus such incidents.

Ransomware indeed is becoming the darling of attackers: it accounted for 70% of malware distributed in June, according to Malwarebytes’ data from a report earlier this month. And with two major ransomware attacks this year, WannaCry and Petya, spreading around the globe rapidly via worm-type exploits, SMBs appear to consider ransomware a clear and present danger.

Around seven in 10 SMBs are either “concerned” or “extremely concerned” about ransomware, the Malwarebytes SMB report shows.

“Ransomware wasn’t necessarily the most expensive aspect of a ransomware attack: downtime, revenue loss, and fallout were more expensive and far more damaging, especially when you’re talking about small businesses,” says Adam Kujawa, head of malware intelligence at Malwarebytes. He says it’s easier for larger organizations to recover from a ransomware attack because they have more resources to do so than an SMB does.

In some 22% of organizations, ransomware attacks halted business immediately, while 37% say their users, customers, and vendors were affected by the attack, and 15% say they lost revenue due to the attack.

Downtime-wise, 27% were down for one to eight hours; 23%, nine to 16 hours; 24%, 17 to 24 hours; and 15%, 25 to 100 hours.

Nearly 30% of SMBs don’t know the origin of their ransomware infection. “Most do not know where the ransomware comes from. It just shows up one day on their endpoints, and then they say ‘oh crap, what do I do,’” Kujawa says.

Higher ransom demands were less common for SMBs: more than 10% were above $10,000, and around 3% were higher than $50,000.

Most SMBs don’t believe ransomware victims should pay up, and just 28% say they paid the ransom. Even so, 32% of those that didn’t pay ransomware attackers ended up losing their files for good.

The debate over whether or not to pay ransomware attackers has caused confusion and angst in the business world. While most security experts say victims should not comply with the data kidnappers’ monetary demands, others say there are times when it’s best to pay up.

“You can avoid it if you have backups or some method of getting your files back,” Kujawa says. “If the data’s not important to you, you do not need to pay. There’s a 50% chance you’re going to get it back, anyway.”

But if the loss of the locked-down files is costly, paying up may be the best bet. “If there’s a legitimate or steep fine for not having those files, paying ransom may be an option,” he says.

That doesn’t necessarily mean paying the full amount the attackers demand. “Try the best way to communicate them and try to negotiate the price … for a few files, for example,” he says. “They’re still happy to get some money.”

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/attacks-breaches/downtime-from-ransomware-more-lethal-to-small-businesses-than-the-ransom/d/d-id/1329440?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Can Your Risk Assessment Stand Up Under Scrutiny?

Weak risk assessments have gotten a pass up until now, but that may be changing.

The foundation of what we do in InfoSec is all based on risk. How we select controls to reduce the likelihood and impact of threats and vulnerabilities stems from our risk assessments. Every compliance standard mandates that some form of risk assessment be done as part of an organization’s security program. But not all risk assessments are done the same way, nor do they produce the same results.

There are many risk assessment standards for organizations to choose from, including OCTAVE, FAIR, ISO 27005:2011, FMEA, and NIST SP 800-30. Some are quantitative, based on real numbers and data, with results often expressed in potential dollars lost, while others are qualitative, based on expert opinion and expressed in colors or high-medium-low ratings. Some risk assessments are built from hundreds of hours of observations and measurements. Others are constructed from calibrated opinion surveys of subject matter experts. More than a few risk assessments are presented with incomplete results, merely focusing on threats and vulnerabilities while ignoring impacts or likelihoods.
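As a concrete illustration of the quantitative style, the classic annualized loss expectancy (ALE) calculation prices a risk in dollars per year; the figures below are invented:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO, where SLE (single loss expectancy) is the asset's
    value multiplied by the fraction of it lost in a single incident."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Invented example: a $2m customer database, an incident that destroys 30%
# of its value, expected once every four years.
print(annualized_loss_expectancy(2_000_000, 0.30, 0.25))  # 150000.0 per year
```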

Until recently, auditors and regulators have mostly graded risk assessments on effort, not effectiveness or completeness. The common audit question, “Did you do a risk assessment?”, can be satisfactorily answered with a simple affirmative. In 2015, I presented research at the Society of Information Risk Analysts (SIRA) conference showing that only 29% of organizations assessing the security of third parties asked if there was a risk assessment process in place. Only 13% wanted to know any details of that process.

Weak risk assessments have gotten a pass up until now, but that may be changing. On April 12, 2017, the FDA issued a warning letter to Abbott Labs for a number of security failings in their medical devices. A key cause singled out was poor risk assessment. The agency noted:

“Your firm identified the hardcoded universal unlock code as a risk control measure for emergent communication. However, you failed to identify this risk control also as a hazard. Therefore, you failed to properly estimate and evaluate the risk associated with the hardcoded universal unlock code in the design of your High Voltage devices.”

As well as:

“Your firm’s updated Cybersecurity Risk Assessments, (b)(4) Cybersecurity Risk Assessment, (b)(4), Revision A, April 2, 2015 and Merlin@home Product Security Risk Assessment, (b)(4), Revision B, May 21, 2014 failed to accurately incorporate the third party report’s findings into its security risk ratings, causing your post-mitigation risk estimations to be acceptable, when, according to the report, several risks were not adequately controlled.”

What better way to diagnose a failed security program than to point at an inferior assessment of risk? If an organization omits or misjudges a critical risk, then the decisions that flow from that finding will be incorrect.

A problem with standardizing risk assessment is that the measurement of relevant risk is going to vary significantly from organization to organization, with different priorities, trade-offs, and tolerances affecting the analysis. However, the question remains: can your risk assessment withstand outside scrutiny? If you get unlucky and hacked, how is your organization’s risk assessment going to fare when regulators and lawyers scrutinize it page by page?

Your strategy should be to develop a risk assessment that appears reasonable and appropriate to the hazards, threats, and potential impacts on your systems. The FDA came down hard on Abbott Labs because it manufactures medical devices, so impacts can include loss of life and breach of medical privacy. A more thorough risk assessment would be expected in this environment than in that of an IT office equipment vendor.

A defensible risk assessment should be:

  • Standardized. The method should be as formal as possible so that given the same data, someone could reproduce the same results. The same method should be used for the same type of risk assessment.
  • Relevant. The right risk modeling technique should be chosen for your organization’s industry, possible impacts, and threat environment. It should also be current—the standard is to perform them at least once a year.
  • Explicit. Assumptions, trade-offs, estimates, and conclusions should be clearly documented. An auditor or regulator should be able to trace your line of reasoning in decisions made.

A risk assessment must also be read and used to manage the risk it identifies. A thick, beautifully detailed risk report that sits on the shelf is not only useless, but a clear indicator of negligence. Imagine a regulator asking you, “You knew about this risk, but why didn’t you do anything about it?” Acting on a risk assessment also means verifying that the risk was reduced by an active risk management process. In the same letter to Abbott Labs, the FDA also reprimanded the firm, saying:

“Your firm conducted a risk assessment and a corrective action outside of your CAPA system. Your firm did not confirm all required corrective and preventive actions were completed, including a full root cause investigation and the identification of actions to correct and prevent recurrence of potential cybersecurity vulnerabilities, as required by your CAPA procedures.”

No one wants to have these things said about their security program. I see this as a warning shot across the bow for all organizations: clean up your risk assessment processes and make sure you act on the results.

Get the latest application threat intelligence from F5 Labs.

Raymond Pompon is a Principal Threat Researcher Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with Federal law enforcement in cyber-crime investigations. He has recently written IT Security Risk Control Management: An …

Article source: https://www.darkreading.com/partner-perspectives/f5/can-your-risk-assessment-stand-up-under-scrutiny/a/d-id/1329435?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Right to Be Forgotten & the New Era of Personal Data Rights

Because of the European Union’s GDPR and other pending legislation, companies must become more transparent in how they protect their customers’ data.

On May 25, 2018, the European Union’s General Data Protection Regulation (GDPR) will go into effect in Europe to help harmonize personal privacy rights across all 28 EU member states. Although individual countries can maintain their own privacy laws and impose additional penalties, GDPR establishes a common baseline of protections for citizens and residents of the EU and for collectors and processors of personal data — a set of common obligations and potential fines (up to 4% of global revenue per company per country).

Whose Data Is It, Anyway?
One of GDPR’s innovations is the idea of institutionalizing a fundamental right to one’s data. Under GDPR, every EU citizen and resident has a right to access, port, or erase their data. Companies that collect and process consumer or employee data — i.e., controllers — are effectively obligated to return an individual’s data upon request. GDPR reorients the balance of rights and obligations between a data owner and a data processor. People never lose their right to data about them or by them, while companies in turn are transformed into data custodians with new obligations for the data they steward on behalf of the data owners.

This new principle is nowhere more famously manifest than in the idea of the right to be forgotten. Although this concept preceded GDPR in Europe and elsewhere, GDPR elevates it and removes any ambiguity around the obligation. Under GDPR, EU citizens and residents have a fundamental right to have their data deleted upon request. There is no test as to whether the data is incorrect. The data belongs to the individual, who can do with the data as he or she sees fit.

What’s the Point of Data Controllers without Data Controls?
For companies that collect and process personal information, this new right to one’s data represents a sea change in how they view and manage their data. Since the inception of databases, personal data has been viewed more as a literal commodity as reflected in the terms used to describe where you keep it: data store, data warehouse, data lake. Understanding the identity of the data owner, inasmuch as it existed, served the primary purpose of personalization and prediction. It was — and largely remains — all about “analyze in order to monetize.”

But GDPR helps put the “person” back in personal data. It reminds companies that the data belongs to an individual to whom they are accountable and for whom they must provide an accounting. Knowing a person’s data, however, has value beyond the intelligence. Data unknown isn’t invisible; it’s just vulnerable to theft, misuse, and compromise. Meeting the new GDPR requirements means companies must find and inventory data by person. This in turn creates new opportunities for data protection, compliance, and governance. The right to be forgotten ultimately ensures that every person’s data is not forgotten. Indirectly, the new personal data rights enable better safeguarding for personal data, whether it’s a Social Security number or an IP address.
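To make that concrete, here is a deliberately simplified sketch of what a per-person inventory and erasure capability might look like against a relational store; the table and column names are invented for illustration:

```python
import sqlite3

# Invented schema: every table holding personal data carries a
# 'subject_email' column identifying the data owner.
PERSONAL_DATA_TABLES = ["orders", "support_tickets", "marketing_profiles"]

def find_subject_data(conn: sqlite3.Connection, email: str) -> dict:
    """Inventory every record belonging to one data subject (access/portability)."""
    inventory = {}
    for table in PERSONAL_DATA_TABLES:
        rows = conn.execute(
            f"SELECT * FROM {table} WHERE subject_email = ?", (email,)
        ).fetchall()
        if rows:
            inventory[table] = rows
    return inventory

def erase_subject_data(conn: sqlite3.Connection, email: str) -> None:
    """Honour a right-to-be-forgotten request across all known tables."""
    for table in PERSONAL_DATA_TABLES:
        conn.execute(f"DELETE FROM {table} WHERE subject_email = ?", (email,))
    conn.commit()
```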

Data-Driven Personal Data Governance & Protection
Regulations have historically helped companies focus their attention and their budgets. In the US, regulations such as Sarbanes-Oxley, HIPAA, and PCI, to name just a few, drove companies to reset priorities and rethink approaches to dealing with data and applications. Because US regulation is industry-focused, compliance has driven the adoption of new kinds of technology automation with acronyms including SIEM, SSO, DLP, DAM, and DRM. But every innovation answers its unique problem, and so these innovations all speak to a specific pain at a specific point in time. Individuals’ rights to access, port, or erase their data speak to a new set of requirements, and therefore to a new set of data governance, protection, and compliance capabilities.

While GDPR defines a new benchmark of regulations around personal privacy, it is not alone in driving this new era around personal data governance and protection. Many countries have instituted a similar right, including China. Similarly, in the US, several states are debating bills that would enshrine new rights for personal data. For companies, this means a new kind of data governance, protection, and compliance is required that can account for a person’s data and ensure data accountability to that person. Not surprisingly, companies will need to be more accountable and transparent with the way they protect consumer data.

Dimitri Sirota is a 10+ year privacy expert and identity veteran. He is the CEO and co-founder of the first enterprise privacy management platform, BigID — a stealth security company looking to transform how businesses protect their customers’ data.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-right-to-be-forgotten-and-the-new-era-of-personal-data-rights/a/d-id/1329416?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple