
Digi-dosh exchange Coinbase: Someone tried to pwn our staff via this week’s Firefox zero-day security hole

The development and release of a critical Firefox security patch this week was, in part, triggered by an attempted cyber-heist of crypto-coin exchange Coinbase.

Coinbase chief information security officer Philip Martin said on Wednesday night the digital-dosh trading site was one of the prime targets of hackers, who tried to exploit a zero-day vulnerability, CVE-2019-11707, a JavaScript type-confusion flaw in Firefox, to execute malicious code on Coinbase staff machines.

Coinbase, along with Project Zero researcher Samuel Groß, was given official credit for spotting and reporting the flaw. Mozilla has since issued a patch: users should update and restart their browsers to pick it up. The patch was also rolled out by the Tor Browser team for their users, as the Tor Browser is built from the Firefox code base.
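
For admins who want to confirm a fleet has picked up the fix, here is a minimal sketch in Python. It assumes the patched builds are Firefox 67.0.3 and ESR 60.7.1 (per Mozilla's advisory for CVE-2019-11707) and that a `firefox` binary is on the PATH; adjust for your own packaging.

```python
import re
import subprocess

# Versions that shipped the CVE-2019-11707 fix (assumption based on Mozilla's advisory).
PATCHED = {"release": (67, 0, 3), "esr": (60, 7, 1)}

def firefox_version():
    """Return the installed Firefox version as a tuple of ints, e.g. (67, 0, 3)."""
    out = subprocess.run(["firefox", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", out)
    if not match:
        raise RuntimeError("could not parse Firefox version from: %r" % out)
    return tuple(int(g or 0) for g in match.groups())

if __name__ == "__main__":
    ver = firefox_version()
    # Crude channel detection for the two branches affected by this advisory.
    channel = "esr" if ver[0] == 60 else "release"
    status = "patched" if ver >= PATCHED[channel] else "VULNERABLE - update now"
    print(f"Firefox {'.'.join(map(str, ver))} ({channel}): {status}")
```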

According to Martin, the unknown hackers went after Coinbase employees with exploit code that targeted the type confusion bug and a second flaw; if successful, the miscreants would have been able to run malware on staff PCs that would, presumably, give the crooks access to their administrator accounts on the Coinbase service.

“On Monday, Coinbase detected [and] blocked an attempt by an attacker to leverage the reported 0-day, along with a separate 0-day firefox sandbox escape, to target Coinbase employees,” Martin explained.

“We walked back the entire attack, recovered and reported the 0-day to firefox, pulled apart the malware and [infrastructure] used in the attack and are working with various orgs to continue burning down attacker infrastructure and digging into the attacker involved.”

The Coinbase security boss noted that the attack was not successful, and so far there is no evidence that any Coinbase customers were hit.

There may, however, be other exchanges that did fall victim to the attacks: Martin says his staff believe multiple exchanges were targeted with the exploits, and, so far, Coinbase is still trying to identify the perpetrators.

He is asking anyone who does believe they were targeted in the attack to contact Coinbase and share details of what happened. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/20/coinbase_firefox_zero_day/

Millions of Windows Dell PCs need patching: Give-me-admin security gremlin found lurking in bundled support tool

Dell’s troubleshooting software SupportAssist, bundled with the US tech titan’s home and business computers, has a security flaw that can be exploited by malware and rogue logged-in users to gain administrative powers.

The Texan system slinger today issued an advisory warning that its remote support tool suffers a privilege-escalation vulnerability, CVE-2019-12280, and needs patching. We’re told Dell SupportAssist for Business PCs version 2.0.1 and Dell SupportAssist for Home PCs version 3.2.2 are the builds you need to fetch and install to kill off this high-severity hole.

The IT giant includes the Windows-based troubleshooting tool with new desktops, notebooks, and tablets. Unfortunately, as eggheads at SafeBreach Labs discovered and privately reported, the software insecurely loads .dll files when run. Researcher Peleg Hadar told The Register SupportAssist, which runs with system-level privileges, will automatically pull in unsigned code libraries from user-controlled folders. That means malware or dodgy users can leave their own .dll files in a path, wait for SupportAssist to blindly load them, and execute code with admin access.

That would allow the software nasty, or rogue insider, to gain complete control over the box. It also means, say, browser exploits that drop files on the file system can quickly lead to a remote admin-level compromise. This vulnerability is present on an estimated 100 million Dell PCs.
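
The underlying issue is classic DLL search-order hijacking: a SYSTEM-level process loading unsigned libraries from folders a standard user can write to. A rough, hypothetical audit sketch follows; the directory names are illustrative, not the actual paths SafeBreach abused, and the probe-file trick is only a crude writability test.

```python
import os

# Hypothetical search path for a privileged service; the real SupportAssist/PC Doctor
# paths are not reproduced here.
SEARCH_PATH = [
    r"C:\Windows\System32",
    r"C:\ProgramData\ExampleVendor\SupportTool",   # illustrative only
] + os.environ.get("PATH", "").split(os.pathsep)

def user_writable(directory: str) -> bool:
    """Crude check: can the current (unprivileged) user create a file here?"""
    probe = os.path.join(directory, "__dll_hijack_probe.tmp")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for d in filter(os.path.isdir, SEARCH_PATH):
        if user_writable(d):
            # A SYSTEM service that loads unsigned DLLs from here can be hijacked by
            # any local user or malware dropping a rogue .dll into this folder.
            print(f"[!] user-writable directory on search path: {d}")
```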


“We can assume that all Dell PCs that run the Windows operating system without changes from the manufacturer are vulnerable, as long as the user didn’t update,” said Hadar.

The most concerning part of this story is that Hadar believes Dell is not alone in shipping PCs with this type of bug.

The reason for this is Dell doesn’t actually make SupportAssist. The software itself is written and maintained by PC Doctor, a support and diagnostics software specialist that sells its code to PC makers that then rebrand the tools and bundle them into their own computer products.

“Once we found and reported it to Dell, they reported it to PC Doctor,” explained Hadar. “They said there are several OEMs that are affected by this.”

Indeed, Dell’s brief advisory, which contains instructions on how to patch, noted: “Dell SupportAssist for Business PCs and Dell SupportAssist for Home PCs require an update to the latest versions to address a security vulnerability within the PC Doctor component.”

Unfortunately, SafeBreach did not hear back from PC Doctor to confirm the extent of the fallout from this programming blunder, and El Reg was unable to get in touch with the developer by the time of publication. Should the vulnerable code prove to have been distributed to other vendors, it is likely we will see several big names in the PC space have to issue updates similar to Dell's, and PC Doctor will have some explaining to do, both to its partners and to the general public. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/20/dell_supportassist_security_hole/

Inside the FBI’s Fight Against Cybercrime

Heavily outnumbered and outpaced by their targets, small FBI cybersquads have been quietly notching up major wins against online criminals operating at home and abroad.

Elliott Peterson struggles a bit when asked to identify the most frustrating part of his job as an FBI agent fighting cybercrime.

“Actually, most of the time our job is awesome,” he finally says. “We are often the only ones that can effect really permanent solutions in this space.”

As a special agent in the FBI’s Anchorage field office in Alaska, Peterson and his teammates are among those at the forefront of the US government’s dogged battle against criminals in cyberspace. Heavily outnumbered and outpaced by their targets, small FBI cybersquads like the one in Anchorage have been quietly notching up major wins in recent years against online criminals operating at home and abroad. At least some of the success is the result of efforts to build partnerships with private industry and of cooperation with international law enforcement agencies.

Peterson’s own team was responsible for investigating and bringing to justice the three-person operation behind the massive Mirai distributed denial-of-service (DDoS) attacks in 2016 that hit DNS provider Dyn and several others. More recently, Peterson led a major investigation that in December resulted in some 15 web domains associated with DDoS-for-hire services being seized and the operators of several being arrested. The actions resulted in a sharp — but temporary — drop-off in DDoS activity early this year.

Such victories are a long way from chilling cybercrime, which by some accounts has become bigger and more organized than even drug trafficking. But the arrests, the indictments, the seizures, and the takedowns are not going entirely unnoticed either.

“We see them talk about this stuff on forums and Discord chats,” Peterson said in an interview with Dark Reading at Akamai’s Edge World user conference in Las Vegas last week. “We’ve had a lot of wins in the areas we focus on.”

Lessons from Mirai
Peterson’s cybercrime-fighting career began as part of an FBI team that went after East European cybergroups stealing money from online accounts of US companies. The law enforcement efforts were so successful that for a brief period between 2013 and 2014, there was an enormous dip in cybertheft targeting US organizations.

“I remember thinking, ‘Oh, we figured this out. This isn’t hard,'” Peterson says wryly.

The Mirai investigation was something of an eye opener for Peterson and other members of the Anchorage cybersquad — not necessarily because of how sophisticated the malware was, but because of the sheer scale of the attacks it enabled. Mirai was the first malware tool designed to exploit weaknesses in ordinary IoT devices, such as home routers and IP cameras. It allowed attackers to quickly assemble botnets capable of launching DDoS floods bigger than anything seen up to that point. The sheer scale of the damage the malware could inflict surprised both the FBI and even the malware’s own creators — Josiah White of Washington, Pennsylvania; Paras Jha of Fanwood, New Jersey; and Dalton Norman of Metairie, Louisiana.

“These guys underestimated the scale of manufacture of [IoT] devices and how widely placed they were throughout the world,” recalls William Walton, supervisory special agent at the Anchorage FBI field office. “So when they developed the Mirai botnet, I think they inadvertently harnessed way more power than they set out to harness.”

What Mirai showed was how drastically the threat landscape had changed as a result of more devices coming online constantly. “The interconnectedness of the Internet’s architecture became readily apparent,” Walton says.

DDoS and botnet activities continue to be a core focus of the Anchorage cybersquad. But business email compromise scams and enterprise ransomware attacks are vying for attention as well.

Tapping Private Industry
As threats have evolved, so has the FBI’s understanding of how best to approach them. One area where the agency has made a lot of improvement is in scoping requests for data from service providers when carrying out investigations.

“We have gotten better at getting the right evidence from service providers,” Walton says.

Instead of hitting them with blanket requests and then having to wade through lots of data in the hope of finding something useful, the focus these days is on first gaining a technical understanding of how particular crimes are carried out.

“We try and understand the types of things we can and should be asking for,” Walton says.

Helping them in a major way is private industry. Over the past several years, the FBI has been working with researchers and engineers from within the security industry to try and understand new and emerging threats and trends. The informal interactions and relationships have been key to the FBI’s ability to hunt down and dismantle criminal networks on the Internet.

One example is the role Akamai played in the Mirai investigation. Researchers from the company reverse-engineered Mirai’s command-and-control (C2) infrastructure and built a tool that helped the FBI and others keep track of the botnet, says Tim April, principal architect at the content delivery network services provider. When the massive DDoS attacks on Dyn began, Akamai researchers were able to quickly point the FBI to the exact C2 that issued the attack command, he says. The company’s information played a big role in the FBI’s ability to definitively attribute the attacks to Jha and his pals.

“We try to keep close tabs on what’s going on, and we update [the FBI] whenever we see something new or novel” on the threat landscape, April says. The interaction is mutual, voluntary, and beneficial to both sides.

Peterson himself calls in to meetings at least once a week with security researchers from companies like Akamai. The meetings are an opportunity to hear what everybody is doing and to provide updates on cases the FBI might be investigating. He finds such exchanges to be more useful, at least from a purely investigative standpoint, than formal information-sharing groups.

“ISACs absolutely have their place. They are super-important,” he emphasizes.

But it’s the researchers and other contacts on the frontlines who usually have the information needed to move quickly on investigating new threats.

“People really move their schedules around to do them because it is so useful to hear what the government is seeing and what all these different private entities are seeing in this space,” Peterson notes. “That visibility is really not something we had a few years ago.”

The interaction with private industry has also helped the FBI prioritize investigations better. The process typically involves looking at the scope of existing damage caused by a threat or group and the potential for future damage.

“We rely on private industry partners to give us a sense of the scale of what we are facing,” Walton says.

The Anchorage office is able to prioritize some threats locally using available agents and bandwidth. Sometimes the task involves having to work with headquarters to identify where the bureau has the best resources to put up against a particular threat.

International Cooperation
The FBI’s efforts at building relationships with its international law enforcement counterparts are helping as well. Walton and Peterson often travel to other countries in pursuing cybercriminals operating out of the direct reach of US law. On some of those trips, the two agents have taken US prosecutors along with them to meet prosecutors in other countries. In other cases, they have hosted law enforcement agents from other countries on US soil.

For the Mirai case, for instance, a team from France flew to the US to observe and sit in on interviews with the suspects in an example of what Peterson describes as an almost unprecedented level of cooperation on cyber matters between the two sides. British and Polish teams have visited the US in connection with other investigations, too.

Such interactions have given the FBI a better understanding of the legal and time constraints under which law enforcement in other countries operate. Importantly, they have also enabled a better understanding internationally about how US law enforcement conducts cybercrime investigations.

“There is a growing understanding and appreciation for what matters in terms of gathering evidence and the speed at which that has to occur,” Walton says.

Even so, international investigations still take longer than ideal. The speed at which the FBI was able to pursue the Mirai operators and with which they were prosecuted was helped by the fact the attackers were based in the US. The time lag is a whole lot longer in an international setting.

“For me, the most frustrating thing is the ability to match the pace of cybercriminals as we pursue them,” Walton says. Legal process takes time, developing relationships with private industry takes time, and working internationally takes time. “All of those time constraints aren’t really a factor for cybercriminal operations,” Walton says.

At the end of the day, fighting cybercrime requires broad cooperation, Peterson says. Everybody has an interest in an Internet that is safer and more secure, so people and organizations need to find ways to work together and make that happen.

“If your company is an island, you are not contributing to all of us trying to solve the problem,” he says. “Team up. Find a way to help. That’s the only way to get ahead of this.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/inside-the-fbis-fight-against-cybercrime/d/d-id/1335014?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Machine Learning Boosts Defenses, but Security Pros Worry Over Attack Potential

As defenders increasingly use machine learning to remove spam, catch fraud, and block malware, concerns persist that attackers will find ways to use AI technology to their advantage.

Machine learning continues to be widely pursued by cybersecurity companies as a way to bolster defenses and speed response. 

Machine learning, for example, has helped companies such as security firm Malwarebytes improve their ability to detect attacks on consumer systems. In the first five months of 2019, about 5% of the 94 million malicious programs detected by Malwarebytes’ endpoint protection software came from its machine-learning powered anomaly-detection system, according to the company. 

Such systems, and artificial intelligence (AI) technologies in general, will be a significant component of all companies’ cyberdefenses, says Adam Kujawa, director of Malwarebytes’ research labs.

“The future of AI for defenses goes beyond just detecting malware, but also will be used for things like finding network intrusions or just noticing that something weird is going on in your network,” he says. “The reality is that good AI will not only identify that it’s weird, but [it] also will let you know how it fits into the bigger scheme.”

Yet, while Malwarebytes joins other cybersecurity firms as a proponent of machine learning and the promise of AI as a defensive measure, the company also warns that automated and intelligent systems can tip the balance in favor of the attacker. Initially, attackers will likely incorporate machine learning into backend systems to create more custom and widespread attacks, but they will eventually focus on ways to attack other AI systems as well.

Malwarebytes is not alone in that assessment, and it’s not the first to issue a warning, as it did in a report released on June 19. From adversarial attacks on machine-learning systems to deep fakes, a range of techniques that generally fall under the AI moniker are worrying security experts.

In 2018, IBM created a proof-of-concept attack, DeepLocker, that conceals itself and its intentions until it reaches a specific target, raising the possibility of malware that infects millions of systems without taking any action until it triggers on a set of conditions.

“The shift to machine learning and AI is the next major progression in IT,” Marc Ph. Stoecklin, principal researcher and manager for cognitive cybersecurity intelligence at IBM, wrote in a post last year. “However, cybercriminals are also studying AI to use it to their advantage — and weaponize it.”

The first problem for both attackers and defenders is creating stable AI technology. Machine-learning algorithms require good data if they are to be trained into reliable systems, and researchers and bad actors alike have found ways to pollute those data sets and thereby corrupt the resulting models.

In 2016, for example, Microsoft launched a chatbot, Tay, on Twitter that could learn from messages and tweets, saying, “the more you talk the smarter Tay gets.” Within 24 hours of going online, a coordinated effort by some users resulted in Tay responding to tweets with racist responses.

The incident “shows how you can train — or mistrain — AI to work in effective ways,” Kujawa says.

Polluting the data sets collected by cybersecurity firms could similarly create unexpected behavior and make their detection models perform poorly.
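
As a toy illustration of the label-poisoning idea (not any vendor's actual pipeline), flipping the labels on a slice of synthetic training data is enough to measurably degrade a simple scikit-learn classifier:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "malicious vs. benign" samples stand in for a real telemetry feed.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on data where a fraction of the labels has been maliciously flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), int(flip_fraction * len(y_poisoned)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # attacker flips these labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"{int(frac * 100)}% poisoned labels -> test accuracy {accuracy_with_poisoning(frac):.2f}")
```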

A number of AI researchers have already used such attacks to undermine machine-learning algorithms. A group including researchers from Pennsylvania State University, Google, the University of Wisconsin, and the US Army Research Lab used its own AI attacker to craft images that could be fed to other machine-learning systems to train the targeted systems to incorrectly identify images.

“Adversarial examples thus enable adversaries to manipulate system behaviors,” the researchers stated in the paper. “Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software.”

While Malwarebytes’ Kujawa cannot point to a current instance of malware in the wild that used machine-learning or AI techniques, he expects to see examples soon. Rather than malware that incorporates neural networks or other AI technology, initial attempts at fusing malware with AI will likely focus on the backend: the command-and-control server, he says.

“I think we are going to see a bot that is deployed on an endpoint somewhere, communicating with the command-and-control server, [which] has the AI, has the technology that is being used to identify targets, what’s going on, gives commands, and basically acts as an operator,” Kujawa says.

Companies should expect attacks to become more targeted in the future as attackers increasingly use AI techniques. Similar to the way that advertisers track potential interested users, attackers will track the population to better target their intrusions and malware, he says.

“These things could create their own victim profiles internally,” he says. “A dossier on each target can be created by an AI very quickly.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News.

Article source: https://www.darkreading.com/machine-learning-boosts-defenses-but-security-pros-worry-over-attack-potential/d/d-id/1335017?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Small Businesses May Not Be Security’s Weak Link

Organizations with 250 or fewer employees often employ a higher percentage of security pros than their larger counterparts.

Small businesses often have a bad reputation for being the gateway to supply-chain attacks on larger enterprises. But this may not be the case, as seen in a new report on small-business security.

As part of (ISC)²’s “Securing the Partner Ecosystem” study, researchers surveyed 700-plus people from small and large organizations to learn views on data-sharing risk. Half of large businesses view third-party partners of all sizes as a security risk, but only 14% have suffered a breach from working with a small partner. Meanwhile, 17% were breached as the result of working with a larger partner.

In fact, 94% of large enterprises are “confident” or “very confident” in small-business partners’ security practices, with 95% having a process for vetting security capabilities. Nearly two-thirds of large firms outsource 26% of their daily business tasks to third parties, which requires data sharing. Here, researchers found access management and vulnerability mitigation are often overlooked.

How so? For starters, 34% of large enterprises say they have been surprised by the broad level of access a third-party partner had been given to their networks and data. Nearly 40% of small businesses had been surprised by the access granted when providing services to large partners.

More than half (54%) of small businesses expressed surprise at some large clients’ insufficient security practices; 53% have notified clients of vulnerabilities found in larger networks. Fifty-five percent of small businesses said they continued to have access to a client’s network or data after a project was completed. What’s more concerning, 35% of large organizations admitted when a third party alerted them to insecure data access policies, their practices didn’t change.

Read the release here and full report here.  

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/perimeter/small-businesses-may-not-be-securitys-weak-link/d/d-id/1335019?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Democratizing’ Machine Learning for Fraud Prevention & Payments Intelligence

How fraud experts can fight cybercrime by ‘downloading’ their knowledge and experience into computer models.

Throughout the financial industry, executives are acknowledging that machine learning can quickly and successfully process the vast volume and variety of data within a bank’s operations — a task that’s nearly impossible for humans to do at the same speed and scale. However, only about half of enterprises are using machine learning. Why? A bank-wide machine learning project is a huge undertaking, requiring major investment in technology and human resources.

Moreover, past projects that have failed to deliver a return on investment (ROI) have led to internal disappointment and, in some cases, mistrust in the technology. However, with smaller, more tactical machine learning projects that can be rapidly deployed, banks can reap the benefits from Day One. One such project is fraud prevention.

The current barrier to delivering fraud protection through machine learning often lies in the solutions themselves, which require data scientists to create the initial models. A fraud expert knows that a specific correlation between transaction types in a sequence is a strong fraud indicator, but the data scientist will need many more interactions with the same data to draw the same conclusion.

Fraud Experts or Data Scientists?
A machine learning model is only as good as the instructions it is given. This can be particularly challenging when setting up fraud prevention algorithms because fraud is a relatively small percentage of successful transactions, which means the model has fewer opportunities to learn. As a result, solutions that enable fraud experts, instead of data scientists, to input the initial correlations will deliver results faster in identifying new correlations across different data sets, because those experts are more familiar with the instances where fraud is likely to occur. This allows the organization to reduce the time to ROI of its machine learning projects.

This “democratization of machine learning” empowers fraud experts to “download” their knowledge and experience into computer models. It’s particularly effective in areas where fraud has not yet reached critical mass to support fraud experts as they use their experience and instincts to investigate certain transactions or customer behaviors, even if they are not yet fraudulent or highly indicative of fraud. Feedback based on these kinds of instincts will aid the machine learning model to fine-tune itself, and improve accuracy and consistency in identifying more complex fraud indicators.
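
A minimal sketch of that idea, with entirely invented rule names and thresholds: an analyst's rules of thumb become features that seed a model before large volumes of confirmed fraud labels exist.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each transaction is a dict of raw attributes; the values here are illustrative.
transactions = [
    {"amount": 25.0,   "minutes_since_last": 1440, "foreign": 0, "card_not_present": 0, "fraud": 0},
    {"amount": 980.0,  "minutes_since_last": 2,    "foreign": 1, "card_not_present": 1, "fraud": 1},
    {"amount": 60.0,   "minutes_since_last": 300,  "foreign": 0, "card_not_present": 1, "fraud": 0},
    {"amount": 1200.0, "minutes_since_last": 1,    "foreign": 1, "card_not_present": 1, "fraud": 1},
] * 50  # repeated so the toy model has something to learn from

def expert_features(tx: dict) -> list:
    """Encode a fraud analyst's rules of thumb as model features (thresholds are invented)."""
    return [
        tx["amount"],
        tx["minutes_since_last"],
        int(tx["amount"] > 500 and tx["minutes_since_last"] < 5),  # rapid high-value spend
        int(tx["foreign"] and tx["card_not_present"]),             # risky channel combination
    ]

X = np.array([expert_features(t) for t in transactions])
y = np.array([t["fraud"] for t in transactions])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
suspect = {"amount": 900.0, "minutes_since_last": 3, "foreign": 1, "card_not_present": 1}
print("fraud probability:", model.predict_proba([expert_features(suspect)])[0][1])
```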

Teaching the Machine
Continuous involvement of fraud experts is key to developing the machine learning model over time. Bringing fraud experts closer to machine learning gives them transparent views into the models, so they can apply strategies and controls that best leverage the intelligence the models produce. As they provide input to the model, they’re able to investigate the output as the model generates intelligence and use their human expertise to confirm fraud instances. They can also combine their all-encompassing customer view with insights correctly generated by the model.

If the “human intelligence” confirms the insights, these can be fed back into the model. If the model consistently flags a correlation between data sets as potential fraud and the analysts consistently confirm this as fraud, then a strategy and control based on this information can be added to underpin the model. These can, in turn, be used to automate the decisions that fraud analysts themselves have consistently made, and, ultimately, reduce the need for automatic actions that impact the customer experience, such as freezing a credit card when a suspicious purchase is made — even though the purchase is legitimate.
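
A sketch of that feedback loop, again with invented features and a placeholder analyst: confirmed decisions are folded straight back into an incrementally trained scikit-learn model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = legitimate, 1 = fraud

# Initial fit on whatever labelled history exists (values are illustrative).
X_seed = np.array([[20.0, 0], [950.0, 1], [35.0, 0], [1100.0, 1]])
y_seed = np.array([0, 1, 0, 1])
model.partial_fit(X_seed, y_seed, classes=classes)

def analyst_review(transaction: np.ndarray) -> int:
    """Placeholder for a human fraud analyst confirming or rejecting a flag."""
    return int(transaction[0] > 500)  # stand-in decision rule

# The model flags transactions; the analyst's confirmed verdicts are fed straight back in.
for tx in (np.array([700.0, 1]), np.array([40.0, 0])):
    flagged = model.predict([tx])[0]
    confirmed = analyst_review(tx)
    model.partial_fit([tx], [confirmed])   # "download" the analyst's judgement into the model
    print(f"tx={tx.tolist()} flagged={flagged} analyst_confirmed={confirmed}")
```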

Cleber Martins is head of payments intelligence, ACI Worldwide. He has nearly two decades’ experience in fraud prevention and anti-money laundering strategies.

Article source: https://www.darkreading.com/perimeter/democratizing-machine-learning-for-fraud-prevention-and-payments-intelligence/a/d-id/1334990?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Florida Town Pays $600K to Ransomware Operators

Riviera Beach’s decision to pay ransom to criminals might get files back, but it almost guarantees greater attacks against other governments.

Paying the ransom for ransomware is rarely recommended, but that didn’t stop Riviera Beach, Florida — a town with a population of around 35,000, north of West Palm Beach — from authorizing a payment of 65 Bitcoin, worth more than $600,000, to criminals in the hope that municipal data would be unlocked.

The attack, which began on May 29 when a police department employee opened a malicious email attachment, ultimately disabled all of the city’s online systems, including email, a water utility pumping station, some phones, and the ability to accept utility payments online or by credit card.

Ilia Kolochenko, founder and CEO of ImmuniWeb, says that the payment could have far-reaching consequences. “This is very alarming news that will likely spur an unprecedented spike of ransomware attacks on the critical infrastructure of small cities that are unable to duly protect themselves.” This means that “cities, municipalities, and smaller governmental entities are a low-hanging fruit for insatiable and smart cybercriminals.”

And those criminals may have begun ramping up their activities even before Riviera Beach showed that there can be significant profit. “Cyber extortion is a growing type of attack, with a questionable effectiveness,” says Allan Liska, an intelligence analyst at Recorded Future. “While there are a lot of these attacks occurring, most of them are simply bluffs. There aren’t as many cases of a legitimate cybercriminal with legitimate access to the target organization using this technique. It is an interesting area to watch for potential growth.”

“Cybercriminals always try to get maximum profit doing the least effort,” says Cesar Cerrudo, chief technology officer of IOActive and founder of Securing Smart Cities. “That’s why targeting city technology is a good business opportunity to them as the private sector is becoming more secure and difficult to hack, while most city systems are easier to hack.

“There is a lack of cybersecurity knowledge and skilled resources in most cities around the world, while technology adoption and dependence keep increasing,” Cerrudo adds, pointing out that the combination creates an especially dangerous opportunity for criminals. And things could get worse. “So far, the consequences have been mostly financial, but soon attacks could end up putting human lives at risk,” he says.

In addition to the ransom payment, Riviera Beach moved a $900,000 purchase of new computer hardware forward by a year in order to replace infected systems. And all of the expense could have been avoided, according to some security professionals. “Bad actors are rational. They will invest time and effort into attacks that work,” says Usman Rahim, digital security and operations manager for The Media Trust. “The takeaway from this and other similar attacks is this: All businesses should back up their data and train their employees on how to avoid such cyberattacks.”

Sam McLane, chief technology services officer at Arctic Wolf Networks, gets even more specific with his recommendations for municipal governments. “First, having good backup and recovery is essential to counter ransomware. If malware slips through your defenses, you need the ability to revert to a recent backup and avoid the pain that the City of Riviera Beach is encountering,” McLane says. “Second, organizations also need to have detection technology like network monitoring via intrusion detection or endpoint detection and response. And third, organizations must monitor the entire environment to detect and respond when something slips through.”

As of press time, Riviera Beach has not reported whether it has been given the key to decrypt the locked files.

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, and INsecurity.

Article source: https://www.darkreading.com/attacks-breaches/florida-town-pays-$600k-to-ransomware-operators/d/d-id/1335021?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attackers Exploit MSP’s Tools to Distribute Ransomware

Early information suggests threat actors gained access to the managed service provider’s remote monitoring and management tools and used them to attack the firm’s clients.

For the second time in the past few months, systems belonging to customers of a managed service provider have been hit with ransomware because of what may have been a security lapse on the part of the MSP.

Details of the attack are still emerging, and neither the full scope of the incident nor even the name of the MSP is currently available. But early information suggests that attackers may have somehow gained access to two remote management tools at the MSP — one from Webroot, the other from Kaseya — and used them to distribute the ransomware.

Comments on an MSP forum on Reddit, including from security researchers claiming close knowledge of the incident, suggest the MSP is a large company and that many of its clients have been impacted.

A researcher from Huntress Labs, a firm that provides security services to MSPs, claimed on Reddit to have confirmation that the attackers used a remote management console from Webroot to execute a PowerShell-based payload that, in turn, downloaded the ransomware onto client systems. Webroot describes the console as allowing administrators to view and manage devices protected by the company’s antivirus software.

According to the Huntress Labs researcher, the payload was likely “Sodinokibi,” a ransomware tool that encrypts data on infected systems and deletes shadow copy backups, as well.

Kyle Hanslovan, CEO and co-founder of Huntress Labs, says a customer of the MSP that was attacked contacted his company Thursday and provided its Webroot management console logs for analysis. “We don’t know how the attacker gained access into the Webroot console,” Hanslovan says.

Based on the timestamps, the Webroot console was used to download payloads onto all managed systems very quickly and possibly in an automated fashion. “This affected customer had 67 computers targeted by malicious PowerShell delivered by Webroot,” Hanslovan says. “We’re not sure how many computers were successfully encrypted by the ransomware.”
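
The sort of timeline triage Hanslovan describes can be approximated with a short script. This sketch assumes a hypothetical CSV export of console events with timestamp, hostname, and command columns (not Webroot's actual log format) and flags commands that hit many endpoints within the same minute.

```python
import csv
from collections import defaultdict
from datetime import datetime

BURST_THRESHOLD = 20  # endpoints hit within one minute; tune to the environment

def find_bursts(log_path: str):
    """Group console 'execute' events by command and minute, flag suspicious mass deployment."""
    buckets = defaultdict(set)  # (command, minute) -> set of hostnames
    with open(log_path, newline="") as fh:
        # Expected columns: timestamp, hostname, command (an invented schema).
        for row in csv.DictReader(fh):
            ts = datetime.fromisoformat(row["timestamp"])
            minute = ts.replace(second=0, microsecond=0)
            buckets[(row["command"], minute)].add(row["hostname"])
    for (command, minute), hosts in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        if len(hosts) >= BURST_THRESHOLD:
            print(f"[!] {len(hosts)} endpoints ran {command!r} within {minute:%Y-%m-%d %H:%M}")

if __name__ == "__main__":
    find_bursts("console_events.csv")
```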

One Reddit poster using the handle “Jimmybgood22” claimed Thursday afternoon that almost all of its systems were down. “One of our clients getting hit with ransomware is a nightmare, but all of our clients getting hit at the same time is on another level completely,” Jimmybgood22 wrote.

Huntress Labs posted a copy of an email that Webroot purportedly sent out to customers following the incident, informing them about two-factor authentication (2FA) now being enforced on the remote management portal. The email noted that threat actors who might have been “thwarted with more consistent cyber hygiene” had impacted a small number of Webroot customers. The company immediately began working with the customers to remediate any impact.

Effective early morning June 20, Webroot also initiated an automated console logoff and implemented mandatory 2FA in the Webroot Management Console, the security vendor said. 

Meanwhile, another researcher with UBX Cloud, a firm that provides triage and consulting services to MSPs, claimed on Reddit to have knowledge that the attacker had leveraged a remote monitoring and management tool from Kaseya to deliver the ransomware.

“Kaseya was the only common touch point between the MSP’s clients, and it is obvious that the delivery method leveraged Kaseya’s automation by dropping a batch file on the target machine and executing via agent procedure or PowerShell,” the researcher claimed. As with the Webroot console, the MSP did not appear to have implemented 2FA for accessing the Kaseya console.

In emailed comments, John Durant, CTO at Kaseya, confirmed the incident. “We are aware of limited instances where customers were targeted by threat actors who leveraged compromised credentials to gain unauthorized access to privileged resources,” Durant says. “All available evidence at our disposal points to the use of compromised credentials.”

In February, attackers pulled off an almost identical attack against another US-based MSP. In that incident, between 1,500 and 2,000 computers belonging to the MSP’s customers were simultaneously encrypted with GandCrab ransomware. Then, as now, the attackers are believed to have used Kaseya’s remote monitoring and management tool to distribute the malware.

MSPs and IT administrators continue to be targets for attackers looking to gain credentials for unauthorized access, Durant says. “We continue to urge customers to employ best practices around securing their credentials, regularly rotating passwords, and strengthening their security hygiene,” he says.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/attacks-breaches/attackers-exploit-msps-tools-to-distribute-ransomware/d/d-id/1335025?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

“Deeply personal medical” records exposed online

xSocialMedia – a Facebook marketing agency that runs campaigns for medical malpractice lawsuits – has leaked the medical and other data that about 150,000 people entered into online forms to check whether they’re eligible for legal assistance.

The breach was discovered by vpnMentor‘s research team. The company, which tests virtual private networks (VPNs), said in a post that cybersecurity researchers Noam Rotem and Ran Locar discovered the vulnerabilities in multiple databases operated by xSocialMedia.

They found a lot more besides the 150,000 personal records, some of which belonged to US veterans. They also found what they called “deeply personal medical testimonies”; contact information including names, addresses, and phone numbers; and people’s medical histories. They were also able to access a list of xSocialMedia’s invoices, customer data, and exact numbers from their advertising campaigns for injury-check.com.

xSocialMedia, which says it creates Facebook ad campaigns for 230+ clients, posts those ads to a variety of injury-check.com domains, depending on specific ailments. Examples include https://ied-fund.injury-check.com, for servicemen and women who’ve been injured in improvised explosive device (IED) attacks, and https://ivcfilter-risk.injury-check.com, for people who’ve been injured by an inferior vena cava filter (a type of vascular filter implanted to prevent life-threatening pulmonary emboli).

In fact, xSocialMedia works with 10 different kinds of injury lawyers that specialize in lawsuits regarding medical injury from hernia mesh, 3M earplugs, sexual abuse, pesticides, auto accidents, and more.

After Facebook users enter one of the injury-check.com domains, they’re encouraged to fill out a form with their medical data to see if they qualify for legal assistance. vpnMentor found it could access 150,000 responses to those forms, where it found:

  • First and last name
  • Email address
  • Street address
  • Phone number
  • IP address
  • Circumstances of the injury
  • Explanation about the injury

Details of injuries leaked

vpnMentor published a redacted form of one of xSocialMedia’s “leads”: it was from a US veteran who described their combat injuries, including a below-knee amputation following an IED blast. As vpnMentor notes, this is highly sensitive information:

Employers, for example, may not know an employee is suffering from PTSD.

Another redacted record showed details of chronic pain after surgery to implant a hernia mesh. vpnMentor said that using the information exposed in xSocialMedia’s database(s) – specifically, the person’s IP address – its researchers could “easily” find the person’s social media accounts and location.

In another case of breached medical details, a veteran describes hearing loss after using military-issue hearing aids: a condition that the veteran may not wish to disclose to everyone, including, for example, to employers.

xSocialMedia leaked its own data, too

Besides leaking personal medical histories of its leads, xSocialMedia also leaked its own bank account information in invoice records the firm sent to clients. vpnMentor researchers found they could see clients’ names, addresses, phone numbers, email addresses, and the specific amount each company is paying xSocialMedia.

vpnMentor saw exposed data for more than 300 clients that are collecting data in order to build lawsuits, including data that companies don’t typically disclose. It could also easily see results per website campaign, plus how much the clients are paying for each campaign.

We can view the code for their website forms, as well as metrics for their Facebook ads. Most companies don’t disclose specific metrics per campaign.

The breach’s impact

As vpnMentor points out, this breach never should have happened, given how sensitive this data is. In the US, medical records and patient privacy are strongly protected by Health Insurance Portability and Accountability Act (HIPAA) laws that forbid disclosure of patients’ identifying information without written permission.

Healthcare providers cannot even confirm to an outside party that someone is a patient without a release. Patients may worry that if their workplace, for example, had open access to their medical records, it could be used against them. The only data allowed to be released outside of designated channels is data that does not have any identifying information attached.

And yet there was xSocialMedia’s collection of personal medical records, unprotected and paired with identifying data.

The people who filled out the forms linked in xSocialMedia’s ads were already suffering from medical problems that caused enough pain and trauma that they were looking for legal help. Discovering that their data was leaked without permission could easily add to their trauma.

vpnMentor notes that xSocialMedia might not be subject to HIPAA compliance because patients are free to disclose their health information to the parties of their choice – in this case, by inputting it into a form on one of the advertising firm’s sites. But it’s hardly likely that they would have done so if they’d known that their personal medical histories would be publicly exposed, along with data that could easily link their identities to those records.

The holes have been closed

vpnMentor says it discovered the leak on 2 June. xSocialMedia responded on 11 June and closed the database up on the same day.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2ocFpwHbINc/

Facebook’s Libra cryptocurrency is big news but will it be secure?

Unless you’ve been living under a rock, you’ll know that earlier this week Facebook announced plans for a new global cryptocurrency for absolutely everyone called Libra.

Slated to launch in 2020, Libra’s success will be decided by the interaction of three things – its financial architecture (which is complex and novel), how this affects its popularity and take up, and the consequences of how it might be used and misused.

Financial design

Regardless of what you think of the idea of a cryptocurrency invented (but not controlled) by Facebook, Libra’s coming feels like a big moment for an idea that’s been around for a decade but is still struggling to become mainstream.

Bitcoin, for instance, is a world-famous cryptocurrency almost nobody uses to do real economic work beyond consuming lots of electricity mining tokens and then speculating emptily on their value.

Libra thinks it can solve this by being more like a real fiat currency, managed by big brands (Visa, Mastercard, Spotify, PayPal, Uber, Lyft, Vodafone, and Facebook itself), backed by real assets, and regulated to avoid both volatility and the possibility of money laundering. As Libra’s 29-page white paper states:

The Libra Blockchain is a decentralized, programmable database designed to support a low-volatility cryptocurrency that will have the ability to serve as an efficient medium of exchange for billions of people around the world.

Far from trying to disrupt central control, Libra will embrace it whilst fulfilling the big economic promise of cryptocurrencies to abolish the archaically high charges levied to move currencies around or translate them from one (the dollar, say) to another (the euro or Renminbi).

Libra does employ one innovation for a cryptocurrency on this scale by splitting itself into two parts, the fiat-backed currency and a second investment token that will be offered to accredited investors and members of the Libra Association.

Instead of pegging its value to scarcity a la Bitcoin, Libra’s value and liquidity will be decided by a distributed bank of big investors (including central banks) who, we must assume, know what they’re doing.

In other words, it will behave like a usable, reliable form of digital money that just happens to function via a pseudonymous blockchain somewhere out there.
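
A toy model of how a fully reserved, low-volatility token differs from a scarcity-driven one (an illustration of the general idea, not Libra's actual reserve mechanics, which involve a basket of currencies and authorized resellers): coins are only minted when fiat enters the reserve and are burned when holders cash out.

```python
class ReserveBackedToken:
    """Toy model: every token in circulation is matched by fiat held in reserve."""

    def __init__(self):
        self.reserve_usd = 0.0   # simplification: a single-currency reserve
        self.supply = 0.0

    def mint(self, usd_in: float) -> float:
        """A user deposits fiat with an authorized reseller; new coins are created 1:1."""
        self.reserve_usd += usd_in
        self.supply += usd_in
        return usd_in

    def redeem(self, coins: float) -> float:
        """Coins handed back are destroyed and fiat is paid out of the reserve."""
        if coins > self.supply:
            raise ValueError("cannot redeem more coins than exist")
        self.reserve_usd -= coins
        self.supply -= coins
        return coins

    def backing_ratio(self) -> float:
        return self.reserve_usd / self.supply if self.supply else 1.0


token = ReserveBackedToken()
token.mint(1_000_000)      # $1m flows in, 1m coins created
token.redeem(250_000)      # holders cash out, coins are burned
print(token.supply, token.reserve_usd, token.backing_ratio())  # 750000.0 750000.0 1.0
```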

Why are big companies such as PayPal, Mastercard and Visa so keen? Because they will take a chunk out of the vast and profitable foreign exchange market they currently see very little of.

What about security?

If there’s a nervousness surrounding Libra’s effect on the real world, it’s connected to its biggest feature – Facebook also wants it to be used by billions of people to buy and sell things, and move money around at low cost, in effect creating the world’s first unofficial global currency.

You don’t have to be a pessimist to predict that this sort of prominence will attract a lot of unwanted attention; indeed, within hours of Facebook’s announcement there were already reports of sites peddling scams.

And that’s before Libra even exists. Scams promoting imaginary currency, fake exchanges, services and wallets – including phishing targeting currency accounts – could well proliferate after launch.

The bullseye for cybercriminals would be to break into Libra’s Calibra wallets held on smartphones, which is why the consortium behind Facebook’s cryptocurrency claims it will refund lost coins, including ones stolen through fraud.

That implies advanced authentication, which the official Calibra wallet app says it will manage for users so they won’t have to remember long passwords or manage private crypto keys.

But cybercriminals won’t give up on breaking wallets and are bound to look for vulnerabilities in the software (or in rival wallets offering the same service), or to develop mobile malware capable of siphoning off data.

Another way might be to attempt to take over accounts by exploiting reset procedures. Or perhaps they’ll focus more on trying to trick people into sending money to scam accounts masquerading as genuine contacts – a version of wire fraud.

Because third-party wallets are allowed, inevitably there’s a risk that developers could become a soft underbelly in terms of their security.

Can fraud be beaten?

In theory, because Libra runs on a centralised blockchain via the Byzantine fault-tolerant LibraBFT consensus protocol, fraudulent trades or losses could be reversed, although it’s not clear how that would work if the recipient has cashed out. That suggests a comprehensive scheme for controlling accounts and identifying account holders that goes beyond anything in existence today.

This raises an intriguing possibility – perhaps what Libra heralds isn’t simply a global currency but one that might be the beginnings of a basic system of secure identity, not from the blockchain itself (which is just a public-private key pair) but the authentication architecture surrounding it.

Many cybercrime problems are tied to the lack of a mechanism for knowing that someone is who they say they are. The evolution of authentication has been knocking on the door of this problem for some time and it could be that the real significance of Libra is that the systems built to ensure its integrity are about to shift identity to the next level.
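
The key-pair part is the easy bit; the hard part is binding a key to a person. Here is a minimal sketch using the Python cryptography package (Libra's white paper specifies Ed25519 signatures, but the address derivation shown here is simplified and hypothetical):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An "account" is just a key pair; the on-chain address is derived from the public key.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
address = hashlib.sha3_256(public_bytes).hexdigest()  # simplified, illustrative derivation

transaction = b"pay 10 units to <recipient-address>"
signature = private_key.sign(transaction)

# Anyone holding the public key can verify the transaction came from the key holder,
# but nothing here says who the person behind the key actually is; that gap is the
# identity and authentication layer the article speculates about.
try:
    private_key.public_key().verify(signature, transaction)
    print(f"valid transaction from account {address[:16]}...")
except InvalidSignature:
    print("signature check failed")
```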

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jWDNitJeuPw/