STE WILLIAMS

Overburdened SOC Analysts Shift Priorities

Many SOC analysts are starting to shut off alerting features just to keep pace with the volume, a new study shows.

Yet another red flag that the security operations center is burning: Most SOC analysts now say their primary role is to close alert investigations, or to reduce the number of alerts, as quickly as possible rather than to analyze and remediate security threats, according to a new survey.

Last year, some 70% of SOC analysts surveyed by CriticalStart said the reverse: They saw their main role as analyzing and fixing security threats and issues. But in the managed detection and response firm’s newest survey — published this week — only 41% say that’s their priority. And most (more than 70%) handle 10 or more security alerts per day, an increase over last year, when 45% were operating at that workload.

SOCs also get hit with false-positive security alerts at a rate of 50% or higher, according to the report, and nearly 40% say that when SOC analysts can’t keep up with alerts, they either turn off some alerting functions or hire more analysts.

“Security teams are just trying to survive without the required resources and head count. They often filter alerts using email rules to send notifications to an alert folder, where they are ignored. Some teams even turn off security-system alerting so there is no record of security events not being monitored,” says Rob Davis, CEO of CriticalStart, which recently surveyed 50 SOC professionals in enterprise organizations, managed security services providers, and managed detection and response providers. “It is very common for analysts to increase the thresholds for creating security events to reduce volume.”

Turning down the volume on alerts to keep up with them puts organizations at risk of a real attack, says Chris Calvert, co-founder and vice president of product strategy at Respond Software, a security automation vendor. “If I am being attacked constantly and I have real vulnerabilities to manage but only a small team, how do I prioritize? In today’s environment, detection and remediation is just as important as prevention and we often don’t have the budget to cover everything,” he says.

This latest study is yet another in a string of recent SOC reports this year underscoring the growing problem of an overwhelming volume of security alerts to sift through for that needle in the haystack, and a lack of people to fill the seats in the SOC. CriticalStart’s report shows how these stresses are driving heavy turnover: More than 75% of respondents report SOC turnover rates above 10%, and close to half see rates of 10% to 25%.

A Ponemon Institute study last month revealed that more than half of IT and security pros consider their SOC inadequate to thwart security threats and some 65% were thinking of leaving their positions due to alert overload, long hours, and incomplete visibility into their IT infrastructures.

“Turnover continues to increase and retention is a major issue for companies,” notes CriticalStart’s Davis. “With virtually no unemployment, the best analysts are constantly recruited. Executives fear the expertise from completed security projects will walk out the door and ruin their investment in tools.”

It’s a vicious cycle: Much of the stress in the SOC comes from analysts being surrounded by too many security tools that don’t work well together, while more alerts bombard their screens every day. They simply don’t have the time, resources, or expertise to master the tools, or to stay on top of the alerts these systems pump out.

“More security sensors and log sources containing more signatures of potentially malicious activity combined with exponential IT growth — and a dramatic increase in malicious attacks,” Calvert explains. He says SOCs should measure the time and effort spent on false positives and automate the process where they can.

The noise and overload of tools and alerts can escalate quickly, according to Larry Ponemon, president of the Ponemon Institute. “A lot of research studies find the whole issue of interoperability and scalability is largely ignored and as result, the technologies don’t actually work together, and you have more [tools] than you need,” Ponemon says.

An overwhelmed SOC can result in dangerously long times to resolve and remediate an attack. Some 42% of the SOC analysts in Ponemon’s report, sponsored by Devo Technology, say it takes months or years on average to resolve a hack. Only 22% of organizations achieve that mean time to resolution, as it is called, in a matter of hours or days.

CriticalStart’s report found that most SOC analysts (78%) take 10 or more minutes to investigate each alert they see.

Security experts recommend outsourcing some or most SOC operations to managed or cloud-based providers, training up existing SOC analysts, and automating low-level tasks.

“The SOC of the future will look very different and provide the advantages of an in-house SOC with the lower cost of outsourcing,” Davis notes. “A SOC must quickly detect attacks and respond before a breach occurs. This requires resolving every alert instead of filtering or ignoring security events. If you aren’t resolving every alert, then you can’t detect every attack.”

He says that because most alerts are common across organizations, an outsourced SOC model can tackle “known good” false positives more efficiently and cheaply.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Fuzzing 101: Why Bug-Finders Still Love It After All These Years.”

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/analytics/overburdened-soc-analysts-shift-priorities/d/d-id/1335698?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Uncovers Massive iPhone Attack Campaign

A group of hacked websites has been silently compromising fully patched iPhones for at least two years, Project Zero reports.

For at least two years, a small collection of hacked websites has been attacking iPhones in a massive campaign affecting thousands of devices, researchers with Google Project Zero report.

These sites quietly infiltrated iPhones through indiscriminate “watering hole” attacks using previously unknown vulnerabilities, Project Zero’s Ian Beer reports in a disclosure published late Thursday. He estimates affected websites receive thousands of weekly visitors, underscoring the severity of a campaign that upsets long-held views on the security of Apple products.

“There was no target discrimination; simply visiting the hacked website was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” Beer explains.

Google’s Threat Analysis Group (TAG) found five exploit chains covering nearly every operating system release from iOS 10 to the latest version of iOS 12. These chains connected security flaws so attackers could bypass several layers of protection. In total, they exploited 14 vulnerabilities: seven affecting the Safari browser, five for the kernel, and two sandbox escapes.

When unsuspecting victims accessed these malicious websites, which had been live since 2017, the site would evaluate the device. If the iPhone was vulnerable, it would load monitoring malware. This was primarily used to steal files and upload users’ live location data, Beer writes.

The malware granted access to all of a victim’s database files used by apps like WhatsApp, Telegram, and iMessage so attackers could view plaintext messages sent and received. Beer demonstrates how attackers could upload private files, copy a victim’s contacts, steal photos, and track real-time location every minute. The implant also uploads the device keychain containing credentials and certificates, as well as tokens used by services like single sign-on, which people use to access several accounts.

There is no visual indicator to tell victims the implant is running, Beer points out, and the malware requests commands from a command-and-control server every 60 seconds.

“The implant has access to almost all of the personal information available on the device, which it is able to upload, unencrypted, to the attacker’s server,” he says. It does not persist on the device; if the iPhone is rebooted the implant won’t run unless the device is re-exploited. Still, given the amount of data they have, the attacker may remain persistent without the malware.

Google initially discovered this campaign in February and reported it to Apple, giving the iPhone maker one week to fix the problem. Apple patched it in iOS 12.1.4, released on February 7, 2019.

iPhones, MacBooks, and other Apple devices are widely considered safer than their competitors. Popular belief also holds that expensive zero-day attacks are reserved for specific, high-value victims. Google’s discovery dispels both of these assumptions: This attack group demonstrated how zero-days can be used to wreak havoc by hacking a larger population.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/google-uncovers-massive-iphone-attack-campaign/d/d-id/1335699?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google warns of system-controlling Chrome bug

Google is patching a serious bug in the desktop version of its Chrome browser that could let an attacker take over a computer simply by luring users to a website. A fix for the bug, which affects the desktop version of Chrome on macOS, Windows, and Linux, will be available in the coming days, the company said. The flaw doesn’t affect the iOS or Android versions of Chrome.

The bug lies in Blink, the rendering engine that underpins Chrome. A rendering engine is the part of the browser that interprets HTML and creates the visuals you see when you visit a website.

Blink is part of the open-source Chromium project on which Chrome is based. The Chromium team created Blink in 2013 as a fork of WebCore, which is a part of WebKit, the browser engine that Apple uses for its Safari browser.

An attacker could exploit this serious bug if a user visits a malicious webpage, according to an advisory from the Center for Internet Security (CIS), issued a day after Google’s blog post on the issue.

It warned:

Successful exploitation of this vulnerability could allow an attacker to execute arbitrary code in the context of the browser. Depending on the privileges associated with the application, an attacker could install programs; view, change, or delete data; or create new accounts with full user rights.

Google is keeping quiet about the specifics of the bug until it’s sure that “the majority of users are updated with a fix”. However, it has revealed that it is a use-after-free vulnerability. Use-after-free bugs are flaws in which a program tries to access memory after it has been freed.

The bug was reported by Qihoo 360 Technology Co’s Chengdu Security Response Center. Google awarded the researchers $5,500 for their efforts.

CIS ranks the bug severity as high for large and medium organizations, and medium for small ones. The risk is low for home users, it suggests, but that certainly doesn’t mean you shouldn’t patch it.

Normally, the update will happen in the background once the patch is available, but if you haven’t closed Chrome in a while you can check for pending updates. Click the ‘more’ icon (the three dots at the far right of the address bar), then Help, and About Google Chrome. The browser will check for updates while you’re on this page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HwRH9qy2X3g/

Apple apologizes for humans listening to Siri clips, changes policy

Apple has formally apologized for keeping accidentally triggered Siri voice assistant recordings and letting its human contractors listen to them.

From the statement it published on Wednesday:

We realize we haven’t been fully living up to our high ideals, and for that we apologize.

To make it up to us, Apple says it’s making these three changes to its Siri privacy policy. Apple will:

  • No longer hang on to audio recordings of users talking with their Siri voice assistants. Instead, it plans to solely rely on computer-generated transcripts to improve its speech recognition capabilities. That work includes identifying mistaken recordings. Unfortunately, as a whistleblower revealed in July, the rate of accidental Siri activations is quite high – most particularly on Apple Watch and the company’s HomePod smart speaker. Siri’s mistakenly hearing a “wake word” has led to recordings of private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters, and more.
  • Make analysis of recordings an opt-in function. Apple says it hopes that many people will choose to help Siri improve by learning from the audio samples of their requests, “knowing that Apple respects their data and has strong privacy controls in place.” Those who opt to participate can change their minds at any time, Apple says.
  • Guarantee that “only Apple employees” get to listen to audio samples of the Siri interactions. It’s unclear who else Apple was using to analyze transcriptions.

Over the past few months, news has emerged about human contractors working for the three major voice assistant vendors – Apple, Google and Amazon – and listening to us as they transcribe audio files.

We’ve also heard about Microsoft doing the same thing with Skype, Cortana and Xbox, and Facebook doing it with Messenger.

In the case of Messenger, we know that Facebook was getting help from at least one third-party transcription service until it put the work on pause. It may be that Apple was likewise using third parties and now plans to cut them out of the work. I’ve asked Apple for clarification and will update the story if I hear back.

Apple also says its team will work to delete any recording that’s determined to be made by an inadvertent trigger of Siri. Those accidentally made clips were the main source of sensitive recordings, according to The Guardian, which initially reported on the issue.

Update on the upshots

With its apology and privacy changes, Apple joins the roster of companies that are, more or less, changing their approaches to improving their voice recognition. So far, we’ve got:

  • Earlier this month, in the aftermath of media reports, Google and Apple suspended contractor access to voice recordings and the grading program through which those recordings were reviewed. “We are committed to delivering a great Siri experience while protecting user privacy,” Apple told news outlets at the time. Under its previous policy, Apple kept random recordings from Siri for up to six months, after which it would remove identifying information from a copy that it would keep for two years or more.
  • Amazon earlier this month said it will let users opt out of human review of Alexa recordings, though users have to actually go in and, periodically, delete those recordings themselves. Here’s how.
  • Facebook said that it had “paused” its voice program with Messenger. It didn’t say if or when it might resume.
  • After the reports about Skype and Cortana recordings, Microsoft updated its privacy policy to be more explicit about humans potentially listening to recordings. It’s still getting humans to review that audio, however. The company’s privacy policy now reads…

    Our processing of personal data for these purposes includes both automated and manual (human) methods of processing.

    Microsoft also has a dedicated privacy dashboard page where you can delete voice recordings.

  • As far as the Xbox listening goes, a Microsoft spokesperson told Motherboard last week that the company recently stopped listening to Xbox audio for the most part, but that the company has always been upfront about the practice in its terms of service:

    We stopped reviewing any voice content taken through Xbox for product improvement purposes a number of months ago, as we no longer felt it was necessary, and we have no plans to re-start those reviews. We occasionally review a low volume of voice recordings sent from one Xbox user to another when there are reports that a recording violated our terms of service and we need to investigate. This is done to keep the Xbox community safe and is clearly stated in our Xbox terms of service.

Apple said in its statement that it plans to resume grading Siri recordings later this autumn, after it’s included the new opt-in option.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YC7GTRUdtR8/

I just love your accent

On Call Welcome to On Call, The Register’s weekly dive into the mailbag of woe from those faced with recalcitrant users or, occasionally, an overly helpful operator.

Today’s story comes from a reader the Reg‘s patented pseudoriser has dubbed “Nick”, and could be regarded as something of a riposte to last week’s Asset Tag shenanigans.

Rather than finding a PA careful not to hand out any potentially naughty numbers, Nick found himself in quite the opposite situation.

Nick’s story is bang up to date, so there can be no “it was acceptable in the 80s”-type excuses for what follows. You have been warned.

“I was part of the infrastructure team,” Nick told us, and “I overheard a service desk call where someone called up for a password reset.” So far so good… “which was done with no checks.”

So far so not so good.

Curious, Nick asked the service desk manager what the actual process was for resetting a password since this seemed a little, er, casual. He was told: “Oh, it’s out of date. It was written when we were in one building and says we get people to come in person.”

With expansion to multiple buildings, the face-to-face reset had become a pain. However, rather than update the process to something more practical and a teensy bit more secure, the service desk team had come up with an even better wheeze, as Nick explained:

“I was told that it was not a risk really as they would recognise the voice.”

Just let that sink in for a moment.

Nick and a chum in the infosec team retired to “the pub-shaped meeting room” where Nick regaled his friend with his discovery. The infosec chap was blessed with a rich Scottish accent and decided to test Nick’s claim: “He called up… and asked for my password to be reset.”

“Which the service desk drone did.”

Nick then sauntered back to the office and headed to the Service Desk Manager. In his own Southern Counties accent he asked why he could no longer log in.

“But you called for a reset,” said the manager. “Not me!” replied Nick. This was the cue for the infosec chap to put in an appearance, replete with Scots brogue.

“The service desk manager was asked to pull up the call recording and after she listened to it, emailed her team to tell them the new process for password resets.

“I gave my notice in the next day.”

Nick had more reason than most to be sensitive about those password resets. As a prequel, he also told us about an earlier incident while he was working for a small MSP, which looked after a number of businesses.

“We got a call asking for some passwords to be reset ‘because people had left’ and for access to their email to be given to the caller.”

Something was a bit whiffy about the call, “as it looked like half the company was on the list.”

The team dutifully double-checked by attempting to get in touch with their primary contact at the company. It appeared that person had left, so the MD was put on the line.

“It turned out the guy who called was a director who was leaving and was trying to steal a load of data.”

Ouch.

Ever taken a call and refused to take action no matter how much like the Boss the caller sounded? We hope so. But if not, perhaps a swift email to On Call will ease your conscience? ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/30/on_call/

Despite billions in spending, your ‘military grade’ network will still be leaking data

Despite years of corporate awareness training, warning articles in The Reg and regular bollockings by frustrated IT admins, human error is still behind most personal data leaks, a newly released study says.

Security shop Egress studied 4,856 personal data breach reports collected from the UK Information Commissioner’s Office, and found that in 60 per cent of the incidents, someone within the affected biz was at fault.

Breaking down the human error further, the study found that 43 per cent of the data leaks were caused by incorrect disclosure, such as someone sending a file to the wrong person or the wrong file to the right person or persons. For example, 20 per cent of the exposures were caused by faxing a file to the wrong person, and 18 per cent were caused by typing the wrong address into an email field or failing to use bcc and exposing every recipient.

By comparison, just 5 per cent of the leaks were due to someone falling victim to a phishing attack. It’s just that when that happens, it tends to be much bigger and more damaging than a misdirected email.

The findings back up the argument that companies should spend less time worrying about attacks from hackers and more time planning against the inevitable human screw-ups that are far more likely to result in data loss and exposure.

In other words, the biggest threat to your company’s data security is you or a colleague. For every exotic APT operation that gets reported, there are four companies done in by someone fat-fingering a fax machine or clicking the wrong file to attach to an email.


“All too often, organizations fixate on external threats, while the biggest cause of breaches remains the fallibility of people and an inherent inability of employees to send emails to the right person,” Egress CEO Tony Pepper said of the findings.

“Not every insider breach is the result of reckless or negligent employees, but regardless, the presence of human error in breaches means organizations must invest in technology that works alongside the user in mitigating the insider threat.”

Of the 4,856 leaks, nearly one-fifth (18 per cent) came from healthcare companies, while 16 per cent originated from national or local government offices. Breaches at educational institutions were next most common, accounting for 12 per cent of leaks, followed by justice and legal at 11 per cent and financial services at 9 per cent.

None of this is to say that admins should neglect external security entirely. A quick perusal of the California Attorney General’s disclosure list shows that four of the five most recently reported data leaks, including the massive Capital One theft, were in fact down to third-party hackers or malware infections. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/30/human_error_data_leak/

S2 Ep6: Instagram phishing, jailbreaking and social media hoaxes – Naked Security Podcast

Episode 6 of the Naked Security Podcast is now live!

This week, host Anna Brading is joined by Mark Stockley and Paul Ducklin to discuss jailbreaking iPhones [2’50”], sophisticated Instagram phishing [14’02”] and the latest social media hoax [28’23”].

As always, we love answering your cybersecurity questions on the show – simply comment below or ask us on social media.

Listen now and tell us what you think!

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cspLadXBXE0/

Capital One ‘hacker’ hit with fresh charges: She burgled 30 other AWS-hosted orgs, Feds claim

The ex-Amazon engineer who allegedly stole 100 million Capital One credit applicants’ personal details from AWS cloud buckets has been formally accused of swiping data from 30 other organizations.

Paige Thompson, 33, was collared last month after cops, acting on a tip-off, raided her Seattle home and allegedly discovered a computer containing vast quantities of records purloined from Capital One’s AWS-hosted systems as well as files from 30 other organizations.

An indictment [PDF], filed on Wednesday in a US federal district court, noted that investigators have identified most of the companies and institutions allegedly hit by Thompson, and lists three as “a state agency outside the State of Washington; a telecommunications conglomerate outside the United States; and a public research university outside the State of Washington.”

According to prosecutors, Thompson wrote software that scanned for customer accounts hosted by a “cloud computing company,” which is believed to be her former employer, AWS or Amazon Web Services. It is claimed she specifically looked for accounts that suffered a common security hole – specifically, a particular web application firewall misconfiguration – and exploited this weakness to hack into the AWS accounts of some 30 organizations, and siphon their data to her personal server. She also used the hacked cloud-hosted systems to mine cryptocurrency for herself, it is alleged.

“The object of the scheme was to exploit the fact that certain customers of the cloud computing company had misconfigured web application firewalls on the servers that they rented or contracted from the cloud computing company,” the indictment reads.

It goes on: “The object was to use that misconfiguration in order to obtain credentials for accounts of those customers that had permission to view and copy data stored by the customers on their cloud computing company servers. The object then was to use those stolen credentials in order to access and copy other data stored by the customers.”

Boasting blowback

Limited technical detail is provided, and the indictment confirms what we already knew: that she allegedly used a combination of Tor and VPN provider IPredator to mask her identity while swiping the data. According to the Feds, though, she accessed things like her public GitHub account using these tools as well as the AWS servers, allowing g-men to trace the activity back to her. For one thing, her GitHub username was her full real name.


She also allegedly bragged about her hack to friends on Slack, and then later in a GitHub Gist post that contained detailed information about Capital One’s systems including access commands; her boasts quickly became a focus of the Feds’ attention, prosecutors say.

According to the authorities, a whistleblower spotted her Gist post and alerted Capital One via email. The credit giant found the boasts were real, and its customers’ details were accessible, patched the hole, and called the FBI. Ten days after it was first informed by Capital One, the FBI and police stormed Thompson’s house near Seattle airport in a military-style raid, and charged her with breaking America’s Computer Fraud and Abuse Act.

Now, the techie faces an additional computer abuse charge over the 30 other AWS-hosted organizations she allegedly hacked and stole information from, plus one count of wire fraud for having “transmitted by means of wire communication in interstate commerce, from her computer in Seattle to a computer outside the State of Washington, writings, signs, signals, pictures, and sounds,” as the indictment puts it.

If found guilty, Thompson faces up to 25 years behind bars. She was refused bail, and is next due in court in Seattle for arraignment on September 5. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/29/capital_one_fresh_hack_charges/

Google takes a little more responsibility for its Android world, will cough up bounties for mega-popular app bugs

Google is expanding its Android bug-bounty program to cover not just holes in the web giant’s apps but also vulnerabilities in third-party software – as long as they have more than 100 million installs.

We’re told that if an Android application’s maker already runs their own bug bounty program, infosec peeps can still claim those prizes from the developers – as well as rewards from the web-search king via its enlarged Google Play Security Reward Program. If an eligible popular app doesn’t have its own bug bounty, Google will cough up the cash for any holes reported, and alert the developers to the flaws in their code.

“In these scenarios, Google helps responsibly disclose identified vulnerabilities to the affected app developer,” Googlers Adam Bacchus, Sebastian Porst, and Patrick Mutchler explained in announcing the expansion. “This opens the door for security researchers to help hundreds of organizations identify and fix vulnerabilities in their apps.”

Google is having a hard time keeping Android malware out of its Play Store due to its light-touch regulation of third-party applications, at least compared with Apple’s tightly policed iOS store. This almost open-door policy has allowed malicious software to sneak in and be downloaded by millions of unlucky punters. Google’s Play Protect system, an AI-powered malware spotter, has had some success in catching software nasties, but clearly not enough, and the system has been panned by testers. There are about 2.5 billion active Android devices out there, according to the Big G.

Google claims it has helped more than 300,000 developers fix flaws in about 1,000,000 apps on Google Play, and has paid out $265,000 in previous Android app bug bounties. Those rewards have now been raised, and Google says it has paid out $75,500 in the past few months alone.

There’s money in data too

Google also says it will cough up dosh for reports of bad behavior by apps and their coders: think applications improperly collecting, selling, or otherwise misusing, user and system data.

“If data abuse is identified related to an app or Chrome extension, that app or extension will accordingly be removed from Google Play or Google Chrome Web Store. In the case of an app developer abusing access to Gmail restricted scopes, their API access will be removed,” the Googlers noted. “While no reward table or maximum reward is listed at this time, depending on impact, a single report could net as large as a $50,000 bounty.”

This Developer Data Protection Reward Program will be run through HackerOne, which is announcing its own news this week. The bug bounty broker said it has crowned its first crop of millionaire bug hunters.


Hackers Santiago Lopez, Mark Litchfield, Nathaniel Wakelam, Frans Rosén, Ron Chan, and Tommy DeVoss have each surpassed seven figures in bounty payouts via HackerOne. As a whole, HackerOne said it brokered $21m in bug bounty payouts last year, more than double the prior year’s total.

In short, it’s a good time to be breaking software.

“We predict that hackers will earn $100m by the end of 2020 and, when we reach that milestone, we may very well have 1 million ethical hackers signed up on our platform,” said HackerOne CEO Marten Mickos.

“By our estimates, we will have helped our customers find and fix over 200,000 vulnerabilities as more industries than ever are recognizing that an outsider perspective is critical to finding and fixing bugs that, in the wrong hands, could lead to a costly and embarrassing data breach.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/29/google_bounties_in_popular_apps/

Retadup Worm Squashed After Infecting 850K Machines

An operation involving French law enforcement, the FBI, and Avast forces Retadup to delete itself from victim machines.

Retadup, a malicious worm that infected more than 850,000 Windows machines, has been taken down by an international operation involving the French National Gendarmerie’s Cybercrime Fighting Center (C3N), US Federal Bureau of Investigation, and security firm Avast.

The worm was first exposed by Trend Micro back in 2017, when it was spotted targeting Israeli hospitals and stealing information. A few months later, another Retadup variant was seen targeting industries and governments in South America. Two years later, Avast analysts are sharing details of a separate campaign in which victim machines were targeted with a cryptocurrency miner.

Avast researchers began to closely monitor Retadup activity in March 2019, when malicious Monero cryptocurrency miner XMRig caught their eye with its advanced abilities to bypass detection. Further investigation into the distribution of XMRig led them to Retadup, the worm being used to deliver XMRig to machines mostly in Spanish-speaking countries in Latin America.

Retadup primarily spreads by dropping malicious LNK files onto connected drives. It iterates over all connected drives where the assigned letter is not “c,” goes through all the folders in the root folder of a selected drive, and for each one creates an LNK file to mimic the real folder and trick victims into clicking it. When executed, the malicious LNK file will run the malicious script. Neither Avast nor Trend Micro researchers have determined the infection vector for XMRig.
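The decoy pattern described above — an LNK shortcut named after a real folder sitting in the same directory — is simple enough to hunt for. Below is a minimal defensive sketch in Python, not Avast's or Trend Micro's tooling; `find_decoy_lnks` is a hypothetical helper, and real triage would also inspect each shortcut's target for a script interpreter.

```python
import os

def find_decoy_lnks(root):
    """Return paths of .lnk files whose base name shadows a real folder
    in the same directory -- the folder-mimicking trick Retadup
    reportedly uses to lure victims into clicking the shortcut."""
    suspicious = []
    for dirpath, dirnames, filenames in os.walk(root):
        folders = set(dirnames)
        for name in filenames:
            base, ext = os.path.splitext(name)
            # A shortcut named exactly like a sibling folder is a red flag.
            if ext.lower() == ".lnk" and base in folders:
                suspicious.append(os.path.join(dirpath, name))
    return suspicious
```

Run against the root of each removable drive, this flags only shortcuts that collide with a sibling folder name, so ordinary desktop shortcuts are left alone.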

XMRig doesn’t use all of a CPU’s power when it mines cryptocurrency, says Avast malware researcher Jan Vojtesek. This helps it fly under the radar: victims whose machines are running a fully powered cryptominer will notice them slow down, he explains. The malware also avoids mining while taskmgr.exe is running, making spikes in CPU usage harder to spot.

In addition to XMRig, researchers noticed instances of Retadup distributing Stop ransomware and Arkei password stealer. The ransomware seemed to be a “test trial,” Vojtesek says. “They probably were trying to figure out how much they could make from ransomware.”  

Closer analysis of Retadup showed its command-and-control (C2) communication infrastructure was “quite simple,” Vojtesek explains in a report. Analysts identified a design flaw in the C2 protocol that enabled them to remove Retadup from infected machines if they assumed control over the C2 server, he explains. By doing this, they could purge XMRig from victims’ devices without asking them to do anything. They’d simply need to connect to the server to destroy the threat.

Setting Up the Takedown
Because most of Retadup’s C2 infrastructure was located in France, Avast contacted the French National Gendarmerie to share their research and proposed disinfection strategy of abusing the flaw in the C2 server to neutralize the attack campaign.

“We spent some time analyzing the threat,” Vojtesek says. “Only after we were confident it could actually be disinfected, and we had a solid plan on how to carry out the disinfection, then we contacted them.”

While French law enforcement presented the strategy to the prosecutor, Avast continued to analyze Retadup. Researchers tested the disinfection process, discussed potential risks, and reviewed a snapshot of the C2 server’s disk obtained by the Gendarmerie that did not contain victims’ data. Because the attackers sent a great deal of data about infected machines to the C2 server, researchers could learn the exact number of infections and their geographical locations.

The teams got the go-ahead to launch their disinfection operation in July 2019 and replaced the malicious C2 server with a “disinfection server” that made connected instances of Retadup self-destruct: whenever a bot checked in, the server responded with a command that caused the worm to remove itself. So far it has disinfected more than 850,000 devices that connected to the attackers’ C2 server.

Some parts of the C2 infrastructure were located in the US, so the Gendarmerie alerted the FBI, which took those down. By July 8, Retadup’s authors no longer had control over the malware. Because infected machines received their orders from the C2 server, they got no new mining jobs and could no longer steal victims’ computing power to fuel the attackers’ monetary gain.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/risk/retadup-worm-squashed-after-infecting-850k-machines/d/d-id/1335693?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple