STE WILLIAMS

5 ransomware as a service (RaaS) kits – SophosLabs investigates

In recent months we’ve told you about ransomware distribution kits sold on the Dark Web to anyone who can afford them. These RaaS (ransomware-as-a-service) packages allow people with little technical skill to launch attacks with relative ease.

Naked Security has reported on individual packages, and in July we released a paper on one of the slicker, more prolific campaigns: Philadelphia.

This article takes a broader look at the problem, analyzing five RaaS kits and comparing/contrasting how each is marketed and priced. The research was conducted by Dorka Palotay, a threat researcher based in SophosLabs’ Budapest, Hungary, office.

Measuring RaaS-based ransomware attacks is difficult, as the developers are good at covering their tracks. Sample counts received by SophosLabs have ranged from single digits to hundreds. That doesn’t sound like much on the surface, but the question researchers now grapple with is how far sales of these kits have contributed to global ransomware levels as a whole.

The task of fighting RaaS proliferation starts with knowing what’s out there, and that’s the point of this article. Whatever the numbers are, this phenomenon has almost certainly helped the global ransomware scourge grow worse, and the number of available kits will only increase with time.

Philadelphia

As noted above, Philadelphia is among the most sophisticated and market-savvy offerings. It gives customers more options to personalize their attacks, and for $389 buyers get a full, unlimited license.

The RaaS kit’s creators – Rainmakers Labs – run their business the same way a legitimate software company does to sell its products and services. While it sells Philadelphia on marketplaces hidden on the Dark Web, it hosts a production-quality “intro” video on YouTube, explaining the nuts and bolts of the kit and how to customize the ransomware with a range of feature options.

Customers include an Austrian teenager whom police arrested in April for infecting a local company. In that case, the alleged hacker had locked the company’s servers and production database, then demanded $400 to unlock them. The victim refused, since the company was able to retrieve the data from backups.

Stampado

This was Rainmakers Labs’ first RaaS kit, which the group started selling in the summer of 2016 for the low price of $39.

Drawing on that experience, by the end of 2016 the developers had created the much more sophisticated Philadelphia, which incorporated much of Stampado’s makeup. Its creators are confident enough in Philadelphia’s supremacy to ask for the much more substantial sum of $389.

Stampado continues to be sold despite the creation of Philadelphia. The ad below is from the developer’s website:

Frozr Locker

FileFrozr kits are offered for 0.14 bitcoin. Once a machine is infected, the victim’s files are encrypted.

Files with around 250 different extensions are targeted for encryption. The Frozr Locker page notes that buyers must acquire a license to use the builder:

Its creators even offer online support for customers to ask questions and troubleshoot any problems:

Satan

This service claims to generate a working ransomware sample and let you download it for free, allows you to set your own price and payment conditions, collects the ransom on your behalf, provides a decryption tool to victims who pay up, and pays out 70% of the proceeds via Bitcoin.

Its creators keep the remaining 30% of the income, so if a victim pays a ransom worth 1 bitcoin, the customer receives 0.7 bitcoins.
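The split described above is simple arithmetic. Here is a minimal illustrative sketch of the advertised revenue share — the function name and default are our own, not actual Satan code:

```python
# Illustrative sketch of Satan's advertised revenue split (not actual RaaS code).
# The service keeps an advertised 30% cut; the rest goes to the "customer".

def customer_payout(ransom_btc: float, service_cut: float = 0.30) -> float:
    """Return the customer's share of a paid ransom, in BTC."""
    if not 0 <= service_cut <= 1:
        raise ValueError("service_cut must be a fraction between 0 and 1")
    return ransom_btc * (1 - service_cut)

# A 1 BTC ransom at the default 30% cut leaves 0.7 BTC for the customer.
print(customer_payout(1.0))
```

As the article notes, the cut reportedly shifts with the volume of infections and payments a customer racks up, which the `service_cut` parameter would model.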

The fee moves higher or lower depending on the number of infections and payments the customer is able to accumulate.

When creating a sample of Satan to send out into the world, customers fill out the form below to concoct the payment scheme. It includes a captcha box to prove the customer is human rather than a bot.

RaasBerry

SophosLabs first spotted this one in July 2017. It was announced on the Dark Web and, like the others, allows the customer to customize their attack. Packages are pre-compiled with a Bitcoin and email address the customer provides, and the developer promises not to take a cut of the profits:

Customers can choose from five different packages, from a “Plastic” one-month command-and-control subscription to a “Bronze” three-month subscription, and so on:

Defensive measures

For now, the best way for companies and individuals to combat the problem is to follow these defensive measures against ransomware in general:

  • Back up regularly and keep a recent backup copy off-site. There are dozens of ways other than ransomware that files can suddenly vanish, such as fire, flood, theft, a dropped laptop or even an accidental delete. Encrypt your backup and you won’t have to worry about the backup device falling into the wrong hands.
  • Don’t enable macros in document attachments received via email. Microsoft deliberately turned off auto-execution of macros by default many years ago as a security measure. A lot of malware infections rely on persuading you to turn macros back on, so don’t do it!
  • Be cautious about unsolicited attachments. The crooks are relying on the dilemma that you shouldn’t open a document until you are sure it’s one you want, but you can’t tell if it’s one you want until you open it. If in doubt, leave it out.
  • Patch early, patch often. Malware that doesn’t come in via document macros often relies on security bugs in popular applications, including Office, your browser, Flash and more. The sooner you patch, the fewer open holes remain for the crooks to exploit. In the case of this attack, users want to be sure they are using the most updated versions of PDF and Word.
  • Use Sophos Intercept X if you are looking to protect an organization. Intercept X stops ransomware in its tracks by blocking the unauthorized encryption of files.
  • Try Sophos Home for Windows and Mac to help protect your family and friends. It’s free!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mqN18B4sxj0/

Massive Uber data scraping and secret servers exposed in Waymo suit

It’s old news that Uber has more legal troubles on its plate than its recently exposed attempt to cover up a 2016 breach that compromised about 57 million customer and driver records.

It was almost 10 months ago – 23 February 2017 – that Google parent company Alphabet’s self-driving-car unit Waymo sued the mega-ride-hailing company, alleging that around 14,000 pages of proprietary information downloaded by one of its former employees had made its way to Uber, which is also working on developing autonomous vehicles.

However, the plot has thickened over the past few weeks – so much so that a trial that was supposed to start last week in a California federal district court has now been delayed until February.

As news sites like Gizmodo, Ars Technica and Recode have reported, it is no longer just about 14,000 pages of intellectual property. It is about Uber having a unit, called Marketplace Analytics (MA), that allegedly spied on competitors worldwide for years, scraping millions of their records using automated collection systems and conducting physical surveillance.

It is about the company using “non-attributable” servers that couldn’t be traced to Uber to store that data. It is about non-attributable laptops, pre-paid phones and Mi-Fi wireless internet devices. It is about using “ephemeral” messaging services like Wickr and Telegram to communicate, so as not to leave the digital version of a paper trail that could damage the company in any legal proceeding.

This new information has come to light largely thanks to a 37-page letter written seven months ago by an attorney for Richard Jacobs, a former Uber security analyst who worked in the company’s global intelligence unit, but only turned over to the court last month.

As Gizmodo put it, “it’s possible Uber’s data gathering did not violate any laws – much of it occurred internationally, and the data was often collected from publicly-available websites and apps.” Indeed, scraping data in the intensely competitive ride-hailing industry is considered common – competitors reportedly do it to Uber as well.

But the letter from Jacobs, which was expected to be made public today, went beyond noting the secret servers and messaging: it accused Uber of “using its competitive intelligence teams to steal trade secrets from Waymo and other companies.” And that goes beyond what has been at the heart of Waymo’s suit – that former employee Anthony Levandowski allegedly stole 14,000 documents just days before starting his own company, Otto, which Uber acquired in 2016.

Jacobs resigned from Uber abruptly on 14 April, after he was caught forwarding company emails to his personal email. He emailed his resignation with the subject line, “Criminal and Unethical Activities in Security,” and said he had been collecting the emails to blow the whistle on the company’s actions.

The 37-page demand letter from Jacobs’s attorney, Clayton Halunen, came three weeks later.

The Jacobs letters were explosive enough, and late enough (they should have been added to the case file as soon as Uber received them) to prompt Waymo’s attorneys to move for a delay in the trial, arguing that there was no way they could review them in time for the scheduled 4 December start. Judge William Alsup agreed, granting a two-month delay.

At a 28 November hearing on the postponement, Waymo attorney Charles Verhoeven quoted from a portion of the demand letter that said:

Jacobs is aware that Uber used the MA [Marketplace Analytics] team to steal trade secrets at least from Waymo in the United States… (and that) MA exists expressly for the purpose of acquiring trade secrets, code base, and competitive intelligence.

Jacobs, who testified at the hearing, walked back some of the contents of the letter, saying he had not fully read it and that some of what it described about Uber’s intelligence-collection efforts was “hyperbolic.”

Still, Judge Alsup was not happy:

You don’t get taught how to deal with this problem in law school. In 25 years of practice and 18 years in this job I have never seen such a problem.

The problem, the judge said, was much bigger than the late inclusion of the letters. It was that even though none of the allegedly stolen 14,000 documents “hit the (Uber) server,” that was likely because they were being held on a “shadow” server – which would mean Uber was trying to withhold evidence. As he put it to an Uber attorney:

You stood up so many times and said, Judge, we searched our servers; these documents never hit a Uber server. You never told me that there was a surreptitious, parallel, nonpublic system that relied upon messages that evaporated after six seconds or after six days.

The server turns out to be for dummies, that’s where the stuff that doesn’t matter shows up. The stuff that does matter is going to be in the Wickr evaporate file. Any company that would set up such a surreptitious system is as suspicious as can be. You’re making the impression that this is a total cover up.

Uber’s deputy general counsel, Angela Padilla, testified that there was no merit to the claims in Jacobs’s letter – that he was simply trying to extort money from the company.

But, as Judge Alsup noted, Uber agreed to pay Jacobs a $4.5m settlement – with $1m of that as a consulting fee for helping with an internal investigation at Uber, plus another $3m to his lawyer. And he wasn’t persuaded that this was done simply because it was a cheaper alternative and would eliminate the “distraction” of litigation. He told Padilla:

You said it was a fantastic BS letter… and yet you paid $4.5 million. To someone like me, an ordinary mortal… that’s a lot of money. People don’t pay that kind of money for BS. That’s one point. And you certainly don’t hire them as consultants if you think everything they’ve got to contribute is BS.

Alsup also ordered Uber to produce all documents related to ephemeral messaging services dating back to December 2015.

With the trial now about two months away, about the only thing that is clear so far is that “ephemeral” communication will apparently no longer be used by Uber. The company’s current CEO, Dara Khosrowshahi, acknowledged in a tweet that the company had used Wickr and Telegram before he arrived, but that Uber employees had been directed to stop using them as of 27 September.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AFCq1if77Fw/

Netflix sparks privacy row after making fun of users on Twitter

Let’s be reasonable. You have to cut the Netflix tweet writer a teensy bit of slack.

Watching just a few minutes of the Netflix production “A Christmas Prince,” snowflakes and ball gowns and betrayal and royalty on his knees in the slush holding out a gumball-sized engagement diamond and all that, is enough to make your pancreas squirt, it’s so treacly sweet.

But publicly poking fun at the 53 people who’ve watched this holiday hokum every day for two and a half weeks?

Oh no, no, no, no, no. Some, mind you, found it funny. At the time of writing, the teasing had been retweeted 112,000 times and gotten itself 440,000 love-ya hearts.

But then there were those who thought the public flaunting of personal knowledge Netflix has on its users was way out of line, privacy-wise:

In response, a Netflix spokesman pointed out to the Telegraph that nobody was named publicly; rather, the tweet was based on “overall viewing trends,” which are apparently fair game:

The privacy of our members’ viewing is important to us. This information represents overall viewing trends, not the personal viewing information of specific, identified individuals.

Privacy aside, its members’ viewing habits are solid gold to Netflix, which 10 years ago crushed the industry of video rental stores and now dominates the streaming market. It certainly seems to want to keep people coming back, pouring money as it does into original content, from award winners such as Orange is the New Black, House of Cards, Stranger Things, 13 Reasons Why and scads of other must-see shows.

The Telegraph claims that gathering customer data is just as, if not more, lucrative for Netflix than monthly fees. When Netflix allows users to share logins, for example, the separate profiles create separate data sets that are specific to each individual viewer.

Netflix’s director of corporate communications has stressed that Netflix doesn’t sell any customer data to third-party advertisers.

But the company does use cookies and other technologies such as web beacons to “learn more about our users and their likely interests, and to deliver and tailor marketing or advertising”.

Trevor Timm, executive director of the Freedom of the Press Foundation, said this all raises a series of questions worth asking:

At any rate, “The Christmas Prince” teasing is just one of a string of similar messages from Netflix following the release of similar data in its Year in Review – or, as Netflix titled it, 2017 on Netflix: A Year in Bingeing.

Don’t feel singled out if you were one of the people who watched “A Christmas Prince” more than most. You’re joined in being publicly (though not in an individually named way) teased by a number of people in other outed categories, including one pirate-obsessed person who Netflix says it’s “still scratching our heads about”:

The person who watched Pirates of the Caribbean: The Curse of the Black Pearl 365 days in a row (streamin’ me timbers?). An impressive feat, especially as the average member watched around 60 movies on Netflix this year.

Is Netflix’s choice to arbitrarily release data like this funny? Creepy? Offputting?

I’ll take Door No. 3, said one observer:


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hWoMTDrj1Ws/

Break the Internet: a last ditch attempt to save net neutrality

What are you doing calmly reading Naked Security? Your time would be far better spent freaking out about the end of net neutrality.

Or, in the words of one of many messages put forward by Fight for the Future’s Break the Internet campaign:

The campaign urges users to post on social media and supplies ready-made messages (such as the one above) that individuals can send out with a click. Websites are also participating to “stop the FCC,” including Twitter, Facebook, Snapchat, Instagram, LinkedIn, reddit, Tumblr and YouTube. The sites are coming up with creative ways to get their audiences to contact Congress before net neutrality’s neck gets laid on lawmakers’ chopping blocks come Thursday.

“Creative,” as in… well, words fail me. SB Nation is a prime example. It’s usually a site dedicated to sports news, video, commentary and community. As of this Tuesday, it was dedicated to insanity.

It’s in keeping with this week’s “Break the Internet” protests, which are similar to the “Internet Slowdown Day” of 2014, when the spinning wheel of page-will-never-ever-ever-load death took over much of the internet, as companies reminded people what an internet without net neutrality would look like and thereby drove public comment to lawmakers.

Another such protest was the “Internet Blackout Day” of 2012, which targeted the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA). The activist group Fight for the Future organized all of these protests, and the idea has been consistent: to show what the web would look like without net neutrality, whether by changing logos, slowing page loading, or showing fake popups that scare users by asking for extra money to access the website.

The protests have worked in the past. But does the Federal Communications Commission (FCC) even care, at this late date? The proposed rollback of net neutrality regulations seems to be inevitable, in spite of more than six months of reports of fake and bot-generated comments flooding the FCC’s public comments space and, most recently, a demand from New York Attorney General Eric Schneiderman on Monday that the 14 December vote be postponed.

The FCC has finally agreed to help investigate what appear to be at least 1 million fake comments, but only after refusing nine previous requests for FCC logs to show the origin of the comments.

Nonetheless, there’s still time to fight the good fight.

That’s where you come in. Fight for the Future has a website that helps you join in the protest, offers tips, and asks you to write to your representatives. The effort will run until tomorrow, 14 December.

Fight for the Future organizers have a ton of suggestions on making your voice heard, including changing your Facebook status to married and listing your partner as “net neutrality.” You can also add a “new job” position to your LinkedIn profile called “Defending Net Neutrality at BattleForTheNet.com” – this will trigger a notification to your network and potentially persuade more people to take action before the vote.

Telling big internet companies to join the Break the Internet campaign is another way to fight. Companies that are already on board include Imgur, Mozilla, Pinterest, Reddit, GitHub, Etsy, BitTorrent, Pornhub, Patreon, Funny Or Die, Speedtest, Fiverr, Cloudflare, Opera, Trello, the Happy Wheels game, DeviantArt, AnimeNewsNetwork and BoingBoing.

Want to make some noise?

Check out the protest’s page for all the noise you could wish for, and then let your flying net neutrality monkeys FLY!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DgsMGN9p6Lo/

Barclays employee sentenced for aiding Dridex money launderers

An employee of Barclays Bank who laundered thousands of pounds on behalf of Moldovan cybercriminals was sentenced to six years and four months in prison yesterday.

According to the Crown Prosecution Service, Jinal Pethad, 29, from Edgware, set up more than a hundred false accounts to launder money and maintained almost 200 on behalf of Pavel Gincota and Ion Turcan for two years.

Pethad’s access allowed Gincota and Turcan to withdraw money from these accounts, and to move funds between them in order to launder it.

Pethad pleaded guilty at the opening of his case at the Old Bailey on Monday to conspiracy to conceal, disguise, convert, transfer or remove criminal property, contrary to section 1(1) of the Criminal Law Act 1977.

A National Crime Agency statement said that Gincota and Turcan had stolen the cash by using the Dridex malware to record the bank details of people who opened their spam email attachment. Both pleaded guilty to conspiracy to possess false identification and conspiracy to launder money in October 2016. Gincota was jailed for five years and eight months, and Turcan received a seven-year sentence.

Tom Guest, specialist prosecutor, said the CPS had presented evidence that Pethad had facilitated and benefitted from his crimes, and that he had “abused his position of trust in his job”. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/barclays_employee_sentenced_for_aiding_money_launderers/

Automation Could Be Widening the Cybersecurity Skills Gap

Sticking workers with tedious jobs that AI can’t do leads to burnout, but there is a way to achieve balance.

According to Cybersecurity Ventures, the cybersecurity skills shortage is now expected to hit 3.5 million positions by 2021 — a huge jump from current estimates of 1 million job openings.

To help compensate for the growing shortage of talent, the cybersecurity industry is embracing artificial intelligence and automation to fill the gap. But can automation actually make the skills gap even greater? Unfortunately, yes — but security can still find a balance.

The Leftover Principle of Automation
The concept of mechanizing human tasks to drive efficiency has been studied since the advent of industrial automation. The primary goal is to automate as much as possible and thus eliminate human decision making in the process because human decisions can be the most frequent source of error in a given process. Any task not assigned to machines is “left over” for humans to carry out.

The problem with this theory, especially in cybersecurity, is that only very well-understood (relatively simple) processes can be automated, meaning the tasks left for security teams are the hard tasks that can’t be automated. These difficult tasks require security professionals who have experience and deep domain knowledge. 

This is exacerbating the vicious cycle of security analyst burnout we currently face: 

  • Tasks that provide a sense of completion/satisfaction are automated.
  • Security analysts are increasingly asked to work on tedious, arduous tasks that lead to burnout.
  • Analysts leave to find greener pastures, leaving the security operations center shorthanded.
  • Company struggles to find talent to fill the gap.
  • When security management finds someone to hire, they give the new employees tedious, arduous tasks that lead to burnout.
  • Wash. Rinse. Repeat.

Lessons from the ’90s and the IT Community
This isn’t the first time this phenomenon has reared its head in the technology world. We saw a similar cycle in the IT/sysadmin world 25+ years ago. The sysadmin of the ’90s was near omnipotent when it came to domain knowledge of technology and IT systems. This was driven by need — IT professionals had to be able to fix every problem across technology infrastructure, and that infrastructure was nowhere near as reliable and interoperable as it is today.

As technology advanced and grew more reliable, the need for all-knowing IT admins lessened. That shift necessarily reduced the experience and accumulated knowledge that came from fixing systems and making them work together.

Today’s IT professionals no longer implicitly acquire deep domain expertise on IT infrastructure in the same ways; however, the analogy also ends here for two significant reasons:

  1. While admins always have to contend with users who break systems unintentionally, they are not faced with armies of users distributed around the world with the sole intention of sabotaging their systems. Simple repetitive tasks can be automated. Accurately discerning behavior and intention within environments that are difficult or impossible to accurately model in the first place is a fool’s quest.
  2. Automation of IT infrastructure (DevOps) has led to many positive outcomes, such as requiring fewer people to manage more systems. This works for knowledge domains that slowly evolve and/or are hyper-focused on a specific component of a system. In security, however, the knowledge domain is not dictated by just “security practices” (quite limited), but rather the security professional must be knowledgeable about how technologies are abused across all the legitimate technologies and architectures adopted in the enterprise, most of which evolve extremely rapidly.

Compensating for Automation
Where does this leave the security industry? Is it possible to find a balance? The offshoot of the Leftover Principle is called the Compensatory Principle. This theory says that there are tasks that humans do well that machines don’t. People and machines should focus on what they do well, compensating for each other’s shortcomings. 

Attempting to automate humans out of cybersecurity is detrimental to our industry and destined to fail, primarily because we’re not facing a tech opponent — we’re facing human adversaries who go to great lengths to find weaknesses to exploit. Because so much is automated now, security analysts simply aren’t required to go to the same depths, which is creating an even wider and more detrimental gap between attackers and defenders.

What’s an example of “leftover” work today? The work that nowadays we call hunting — the responsibility of the team to compensate for the ineffectiveness of automated systems — is one example. The inability of most teams to hunt has created a perception that work isn’t getting done because there’s no talent to do it. The reality is that automation is making matters worse in this context, because effective hunting is based on the analyst having learned the more fundamental techniques while completing more “simple” tasks.

What’s the solution? How do we embrace machine learning and automation without making our situation worse?

Organizations need to focus automation on the tedious and error-prone tasks that drive security analyst burnout — while leaving jobs needing more discernment to analysts.

For instance, automating parts of the alert investigation process can have a huge impact on security analyst productivity. Automating things such as tracking a device as it moves across the network, and identifying infected devices by their human owners and behaviors rather than by ephemeral identifiers like IP addresses (which require more human work to then identify the owner), can be enormously helpful and positive for analysts.
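The idea of resolving alerts to stable device/owner identities rather than ephemeral IPs can be sketched in a few lines. This is an illustrative toy, not any vendor’s implementation — the lease table, hostnames, and owner names are all invented:

```python
# Illustrative sketch: map an alert's IP address, at the time it fired, to the
# device and human owner that actually held that IP. All data here is made up.

from datetime import datetime

# DHCP-style lease log: which device held which IP during which time window.
LEASES = [
    {"ip": "10.0.0.5", "device": "laptop-ab12", "owner": "alice",
     "start": datetime(2017, 12, 13, 9, 0), "end": datetime(2017, 12, 13, 12, 0)},
    {"ip": "10.0.0.5", "device": "phone-cd34", "owner": "bob",
     "start": datetime(2017, 12, 13, 12, 0), "end": datetime(2017, 12, 13, 18, 0)},
]

def resolve_owner(ip: str, seen_at: datetime):
    """Return the (device, owner) pair that held `ip` at `seen_at`, or None."""
    for lease in LEASES:
        if lease["ip"] == ip and lease["start"] <= seen_at < lease["end"]:
            return lease["device"], lease["owner"]
    return None

# The same IP maps to different people depending on when the alert fired:
print(resolve_owner("10.0.0.5", datetime(2017, 12, 13, 10, 30)))  # alice's laptop
print(resolve_owner("10.0.0.5", datetime(2017, 12, 13, 15, 0)))   # bob's phone
```

Without this kind of lookup, the tedious work of cross-referencing lease logs by hand falls to the analyst — exactly the burnout-inducing "leftover" task the article describes.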

Like many of the overhyped features we’ve seen over the past couple of decades, from anomaly detection (early 2000s) to analytics (late 2000s), automation is not a cure-all for our cybersecurity woes of today. And worse, without a clear understanding and strategy for how automation will improve the work of your employees, automation might make some of your challenges worse — in a way that could be difficult to compensate for later.  

Related Content:

Gary Golomb has nearly two decades of experience in threat analysis and has led investigations and containment efforts in a number of notable cases. With this experience — and a track record of researching and teaching state-of-the art detection and response … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/automation-could-be-widening-the-cybersecurity-skills-gap/a/d-id/1330592?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Up to ‘ONE BEEELLION’ vid-stream gawpers toil in crypto-coin mines

Security experts claim four extremely popular video-streaming websites have been secretly loaded with crypto-currency-crafting code.

According to AdGuard, the massive Monero-mining operation was discovered when the ad-blocking plugin developer was fine-tuning its blockers to catch and stop sites that attempt to hijack web surfers’ spare CPU cycles for mining.

“While analyzing the first complaints, we came across several VERY popular websites that secretly use the resources of users’ devices for cryptocurrency mining,” AdGuard cofounder Andrey Meshkov explained this week.

“According to SimilarWeb, these four sites register 992 million visits monthly. And the total monthly earnings from crypto-jacking, taking into account the current Monero rate, can reach $326,124.85.”
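Taken together, those two figures imply a strikingly small yield per visit — a quick back-of-envelope check using only the numbers quoted above:

```python
# Back-of-envelope check using the figures AdGuard published (no other assumptions).
monthly_visits = 992_000_000
estimated_monthly_usd = 326_124.85

per_visit_usd = estimated_monthly_usd / monthly_visits
print(f"~${per_visit_usd:.6f} per visit")  # on the order of a thirtieth of a cent
```

The economics only work at enormous scale, which is why high-traffic streaming sites make such attractive hosts for crypto-jacking code.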

AdGuard says the sites – openload, Streamango, Rapidvideo, and OnlineVideoConverter – are often embedded in other pages as video players, extending their reach to hundreds of millions of visitors.

“We doubt that all the owners of these sites are aware that the hidden mining has been built into these players,” noted Meshkov, meaning the servers may have been hacked to inject the mining code into visitors’ browsers.

As with other mining schemes, the sites used Coin Hive’s software to silently generate Monero coins without users noticing anything other than a slowdown of their machine. AdGuard said it had updated its plugins to catch the activity. Other ad-blocking plugins and antivirus packages also kill Coin Hive’s JavaScript code on sight.
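The blocking itself can be as simple as matching script URLs against a filter list of known mining hosts. A minimal illustrative sketch — the host list is an example of the idea, not a real filter list:

```python
# Illustrative filter-list check of the kind an ad blocker might apply to
# script URLs before they load. Host entries are examples only.

from urllib.parse import urlparse

MINING_HOSTS = {"coinhive.com", "coin-hive.com"}  # example blocklist entries

def is_mining_script(url: str) -> bool:
    """Block if the script is served from a listed host or any subdomain of one."""
    host = urlparse(url).hostname or ""
    return host in MINING_HOSTS or any(host.endswith("." + h) for h in MINING_HOSTS)

print(is_mining_script("https://coinhive.com/lib/coinhive.min.js"))  # True
print(is_mining_script("https://example.com/player.js"))             # False
```

Real blockers layer heuristics (CPU-usage monitoring, script fingerprints) on top of such lists, since miners routinely rotate domains to evade simple hostname matching.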

The discovery is the latest in a huge wave of websites that are loading up (both with and without the operators’ knowledge) alt-coin mining software to co-opt the compute cycles of visitors to help generate cryptocurrency.

With the value of digital currencies skyrocketing in recent weeks – Monero, for one, leapt from $90 to $300 apiece in about a month – covert methods of generating the crypto-wonga have become ever more popular. Earlier this week, for example, researchers found that a Wi-Fi operator in Argentina was compromised to load up coin-mining code in the browsers of machines connected to the public network.

“At the moment, the only real solution is to use an ad blocker, an antivirus or one of the specialized extensions to combat crypto-jacking. Unfortunately, not all users know about the problem or want to use such software,” Meshkov said.

“The only way to completely close the issue of browser-based mining is to implement security mechanisms at the browser level.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/adguard_video_streaming_mining/

One per cent of all websites probably p0wned each year, say boffins

Researchers working on a technology to detect unannounced data breaches have found, to their dismay, that one per cent of the sites they monitored were hacked over the previous 18 months.

University of California San Diego researcher Joe DeBlasio, who conducted the study under professor Alex Snoeren, said the number was shocking: while one per cent doesn’t seem like much, it translates into “tens of millions” of breaches annually.

Moreover, he said, the research showed that “size doesn’t matter” – popular sites are just as likely to suffer breaches as obscure outfits – and, as the university’s announcement noted, a one-in-100 hack rate means “out of the top-1000 most visited sites on the Internet, ten are likely to be hacked every year.”

The research was carried out using a tool DeBlasio created called Tripwire: a bot that registered accounts on more than 2,000 sites.

Tripwire created a unique email address for each account and, deliberately following the bad practice of password reuse, used the same password for the site account and its associated email inbox. If a third party was later seen using that password to access the inbox, this was counted as an indication that the site’s account information had leaked.

Another 10,000 email accounts at the same provider were never used to register anywhere; they served as a control group to demonstrate that the leaks didn’t come from the email provider itself.

DeBlasio and Snoeren notified the security teams at the 19 sites in their sample that had suffered breaches (they said those included “a well-known American startup with more than 45 million active users”).

As a second test of a site’s security, they opened two accounts, one with an easy password, the other with a hard password. If both were breached, they reasoned, the site was storing passwords in plain text.
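The bookkeeping behind this honey-account approach is simple enough to sketch. The following is our own illustration of the idea, not the real Tripwire code; the names and structure are assumptions:

```python
# Minimal sketch of Tripwire-style honey-account bookkeeping.
# One unique mailbox per monitored site, with the password deliberately
# re-used for the mailbox itself: a third-party login to a mailbox
# therefore implicates exactly one site as the source of the leak.
import secrets

registrations = {}  # mailbox address -> site it was registered on


def register(site):
    """Create a fresh, never-reused credential pair for one site."""
    email = "%s@honeymail.example" % secrets.token_hex(8)  # hypothetical domain
    password = secrets.token_urlsafe(12)
    registrations[email] = site
    return email, password


def attribute_breach(observed_email):
    """A successful third-party login to this mailbox means the one
    site holding its credentials leaked them; returns None for
    mailboxes we never registered anywhere (the control group)."""
    return registrations.get(observed_email)
```

Because each mailbox is tied to exactly one site, a login with the leaked password needs no further correlation work to name the breached site, while unused control mailboxes let the researchers rule out the mail provider itself.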

The source code for Tripwire is published at GitHub, with a caveat that you shouldn’t try this at home. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/one_per_cent_of_all_web_sites_probably_p0wned_each_year_say_boffins/

Tenable’s response to folks upset at AWOL features: A 150-emails-a-minute spam storm

Tenable Security has given itself two problems: releasing a product its users don’t like, then adding them all to a support email group that’s sending uncomfortable volumes of messages.

The new product is Nessus Professional v7, which Tenable has declared is just fabulous thanks to new licensing, improved reports, and customization features.

However, the release also withdrew some features, leading to responses such as this:

It gets worse: as part of the effort to spread the word about Nessus Pro 7, Tenable appears to have added all Nessus customers to a support forum that spewed out email at as much as 150 messages a minute, for over an hour. The result of that effort is typified by the tweets below…

Tenable responded by saying it erred when making a new group:

The Register has been told by Tenable sources that no personally identifiable information was exposed. We have also asked Tenable why it removed Nessus Pro features that folks liked. If further comment is forthcoming, we will update this story. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/12/tenable_security_spams_clients/

I, Robot? Aiiiee, ROBOT! RSA TLS crypto attack pwns Facebook, PayPal, 27 of 100 top domains

A 19-year-old vulnerability in the TLS network security protocol has been found in the software of at least eight IT vendors and open-source projects – and the bug could allow an attacker to decrypt encrypted communications.

Identified by security researchers Hanno Böck, Juraj Somorovsky of Ruhr-Universität Bochum/Hackmanit, and Craig Young of Tripwire VERT, the flaw – specifically in RSA PKCS #1 v1.5 encryption – affects the servers of 27 of the top 100 web domains, including Facebook and PayPal.

The vulnerability, however, is overrepresented among the top 100 websites. According to Young, only 2.8 per cent of the top million websites, as measured by Alexa, are affected.

“This drastic difference is because the problems were mainly found in expensive commercial products that are often used to enforce security controls on popular websites,” Young said in an email to The Register.

The flaw dates back to 1998, when Daniel Bleichenbacher, a Swiss cryptographer who currently works for Google, identified a problem with the implementation of RSA PKCS #1 v1.5. The shortcoming could be exploited by miscreants to pose repeated queries to a server running vulnerable code in order to receive answers usable for decoding ostensibly secure communications.
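The mechanics can be illustrated with a deliberately toy sketch (our own illustration, not the researchers’ code, using an absurdly small textbook RSA key). The key property is that RSA is multiplicatively malleable, so an attacker can blind a captured ciphertext and learn something from each padding-check answer:

```python
# Toy illustration of a Bleichenbacher-style padding oracle.
# Tiny textbook RSA key -- trivially factorable, for demonstration only.
p, q = 61, 53
n = p * q                             # modulus (3233)
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def oracle(ciphertext):
    """Stand-in for a vulnerable server: reveals only whether the
    decrypted value falls in a 'padding conformant' range (a toy
    substitute for the real PKCS#1 v1.5 '0x00 0x02' prefix check)."""
    return pow(ciphertext, d, n) < n // 2

# RSA malleability: decrypting c * s^e mod n yields m * s mod n.
# The attacker picks blinding values s and submits modified ciphertexts;
# each yes/no answer narrows the interval that must contain m.
m = 42
c = pow(m, e, n)
for s in (2, 3, 5):
    blinded = (c * pow(s, e, n)) % n
    answer = oracle(blinded)   # one bit of information about m*s mod n
```

A real attack accumulates thousands to millions of such answers against the genuine PKCS#1 v1.5 check to recover the plaintext, which is why even a subtle difference in server error handling is enough to qualify as an oracle.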

The issue isn’t confined to TLS. The researchers say similar problems exist in XML Encryption, PKCS#11 interfaces, Javascript Object Signing and Encryption (JOSE), and Cryptographic Message Syntax / S/MIME.

In keeping with the current trend of naming noteworthy vulnerabilities so the technically disinterested can participate in the conversation, the researchers have dubbed the bad code ROBOT, which stands for Return Of Bleichenbacher’s Oracle Threat.

“Oracle” in this case does not refer to the litigious database giant; it’s a term used to describe vulnerable servers that act as oracles by providing answers to queries.

The bug has endured since 1998, Young explained, because the recommendations telling software developers how to prevent the attack fell short and grew more complicated with each iteration. In other words, various encryption systems in use today are still affected by the weaknesses Bleichenbacher uncovered nearly two decades ago: the mitigations drawn up at the time were insufficient, so network traffic encrypted using the weak technology can still be forcibly decrypted.

“The end result as we can see is that many software designers did not properly implement these protections,” said Young. “The real underlying problem here is that the protocol designers decided (in 1999) to make workarounds for using an insecure technology rather than replace it with a secure one as recommended by Bleichenbacher in 1998.”

As a proof-of-concept exploit, the researchers managed to sign a message with the private key of the facebook.com HTTPS certificate.

Facebook has since patched its servers. As described in a paper on the flaw that was published Tuesday, Facebook was running a custom-patched version of OpenSSL on its vulnerable hosts, and said the bug stemmed from one of the company’s own patches.

Makers of stuff affected by ROBOT, meaning information encrypted by their gear is at risk of forced decryption by eavesdroppers, include:

Other vendors with vulnerable products will be identified once they publish fixes, according to Tripwire.

Young told The Register that for the most interesting attack scenarios, the attacker must have access to the victim’s network traffic.

“This is a capability government spy agencies have, but vulnerabilities in Wi-Fi (including the recent KRACK attack) can allow nearby attackers to perform these attacks,” said Young.

Given that prerequisite, Young said the next step depends upon the configuration of the website in question.

“If the site only supports older standards, the attacker can record traffic to decrypt later and depending on how fast the attacker is able to carry out the attack, [that person] could also impersonate these sites to manipulate what shows in the victim’s web browser,” said Young. “If the site supports newer standards, the attacker needs to force a downgrade attack. This requires carrying out the attack at very high speed.”

Young says an attack of this sort has a realistic chance of success, though subject to certain requirements. A full interception or impersonation hack requires very rapid network access, which is more easily done on large sites, where the infrastructure tends to be well-provisioned, than on small sites constrained by thin pipes.

If the snooper manages to conduct an operation of this sort during the TLS handshake timeout, that person could carry out a DROWN-style man-in-the-middle attack by downgrading the connections to use RSA encryption key change modes, according to Tripwire.

“The impact of these attacks is that passwords, credit cards, and other sensitive details become visible to the attacker and in some cases the attacker can actively manipulate what the victims see on a site,” said Young.

Beyond applying patches as appropriate, the researchers advise disabling RSA encryption (ciphers that start with TLS_RSA), seen in about one per cent of connections handled by Cloudflare.
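In an OpenSSL-backed stack, one way to follow that advice is to exclude static-RSA key exchange (`kRSA` in OpenSSL cipher-string syntax) from the cipher list, leaving forward-secret (EC)DHE suites. A minimal sketch for a Python client, not a complete TLS-hardening guide:

```python
# Drop static-RSA key exchange from a client TLS context.
# "!kRSA" is OpenSSL cipher-string syntax for "no RSA key exchange";
# certificate-based RSA *authentication* (e.g. ECDHE-RSA suites) stays.
import ssl

ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:!kRSA")

enabled = [c["name"] for c in ctx.get_ciphers()]
# RSA-key-exchange suites such as AES128-SHA are now absent.
```

Note the distinction: ROBOT targets the RSA key-exchange modes, so ECDHE suites that merely use an RSA certificate for authentication remain safe to keep.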

“We believe RSA encryption modes are so risky that the only safe course of action is to disable them,” they said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/13/robot_tls_rsa_flaw/