STE WILLIAMS

Attack Attribution Tricky, Say Some, as US Blames North Korea for WannaCry

There’s not enough evidence to conclusively tie the rogue regime to the ransomware attacks, some security experts say.

The US government’s assertion this week that North Korea is behind the WannaCry ransomware attacks of earlier this year has surfaced familiar concerns about the tricky nature of attack attribution in cyberspace.

Some security industry experts believe there’s not enough evidence – at least not enough that’s publicly available – to definitively tie the government in North Korea to the attacks. They say most of the clues cited as evidence of North Korea’s involvement in an attack that ravaged some 300,000 computers worldwide are circumstantial and could have other explanations.

“The evidence is weak,” says Ross Rustici, senior director of intelligence services at Boston-based cybersecurity firm Cybereason. “Without the initial source or any way to tie the code to an actual person we cannot do real attribution.” 

The Trump Administration’s homeland security advisor Thomas Bossert on Monday identified North Korea as being directly responsible for the WannaCry attack, which infected computers in 150 countries. In an opinion column for the Wall Street Journal, Bossert said the US government has evidence that shows a direct link between Pyongyang and the attacks.

He did not reveal what additional information the government might have obtained – other than what is already publicly known – to arrive at the conclusion. But he asserted that other governments and private companies agree with the US assessment. “The United Kingdom attributes the attack to North Korea, and Microsoft traced the attack to cyber affiliates of the North Korean government,” he said.

In a blog post Tuesday, Microsoft president and chief legal officer Brad Smith expressed support for the US government’s decision to formally name North Korea. Smith said that Microsoft had recently conducted an operation with Facebook and others to disrupt the activities of the Lazarus Group, a threat actor previously linked to North Korea.

The Lazarus Group is believed responsible for numerous worldwide attacks in recent years, most notably the one on Sony Pictures in 2014 and the more recent attacks on numerous banks via the SWIFT financial network. Microsoft investigations showed the group is also directly responsible for the WannaCry attacks, Smith said. “If the rising tide of nation-state attacks on civilians is to be stopped, governments must be prepared to call out the countries that launch them,” Smith noted.  “Today’s announcement represents an important step in government and private sector action to make the Internet safer,” he said referring to Bossert’s comments.

Symantec is another company that has definitively linked WannaCry to the Lazarus Group. But it has stopped short of saying the group is linked to the North Korean government or is being sponsored by it. In fact, it has explicitly stated that the information it has is not enough to conclusively attribute the attacks to a specific nation state.

In a report earlier this year, Symantec said it found three malware samples linked to Lazarus on the network of a WannaCry victim. One of them was a disk-wiping tool used in the Sony attacks. Symantec said that a Trojan used to spread WannaCry in March and April was a modified version of malware previously used by Lazarus. Similarly, the IP addresses used for command and control and the code obfuscation methods seen in WannaCry have Lazarus links, as do code similarities between WannaCry and a backdoor Trojan used in other attacks.

“Symantec has little doubt that WannaCry was developed by members of the Lazarus group,” says Vikram Thakur, technical director at Symantec. “Our attribution to the group is based purely on technical analysis of WannaCry’s different versions along with threats we’ve observed and researched over multiple years.”

Thakur says he has no insight into why the US government decided to go public with its accusation against North Korea at this time. But he notes that the announcement came exactly three years to the day after the government previously accused North Korea of involvement in the Sony hacks. “There is a good chance that Lazarus will get more aggressive in the short run, as a sign of disregard for public government statements,” he says.

Cybereason’s Rustici says there’s not enough evidence to attribute WannaCry to the North Korean government or its Reconnaissance General Bureau (RGB). Given the rampant code reuse among threat groups and the highly cobbled-together nature of malware used in campaigns these days, it is hard to definitively attribute an attack based just on similarities in code, Rustici says. The fact that Lazarus code was found on the networks of some WannaCry victims is only correlation and not causation, he says.

Others have also noted in the past how easy it is for attackers to plant false flags in order to throw investigators off track and make it appear that an attack was launched by someone else.

Other facets about the WannaCry attack also make North Korea an unlikely perpetrator, Rustici says. Two of the biggest victims of the attacks were Russia and China, both of which are nations that have been relatively sympathetic to North Korea, while the nation’s biggest adversaries—the US, South Korea, and Japan—were relatively untouched. North Korea’s only Internet access transits China and Russia as well, making it unlikely the nation would want to antagonize them.

Considering Pyongyang’s strategic objectives, it is unlikely they would have launched a campaign that did not inflict more direct damage on the US, Japan, and South Korea, he says.

Tim Erlin, vice president of product management at Tripwire, says that when a government attributes a cyberattack to another nation, there needs to be a way for others to independently verify the evidence.

“Clearly government security groups can’t directly share the evidence they’ve collected,” he concedes. But there has to be a way to engender trust in the analysis without endangering national security, he says. “Developing a consortium of qualified, non-governmental experts who can review the detailed data and share their level of confidence in the conclusions would go a long way to establishing trust,” Erlin says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/attack-attribution-tricky-say-some-as-us-blames-north-korea-for-wannacry-/d/d-id/1330688?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

9 Banking Trojans & Trends Costing Businesses in 2017

New Trojans appeared, old ones resurfaced, and delivery methods evolved as cybercriminals set their sights on financial data.

(Image: Muratart via Shutterstock)


Banking Trojans have been a recurring theme in security news this year as criminals find new ways to steal money and data from their victims.

“We have started to see the re-emergence of banker Trojans,” says Bogdan Botezatu, senior e-threat analyst at Bitdefender, noting that banking Trojans had their heyday between 2012 and 2013. “But we could have sworn the trend was otherwise.”

It’s interesting to see banking Trojans resurface because of the resources they need to work. Unlike comparatively simple attacks like ransomware, banking malware requires several players and is difficult to launch and monetize. Botezatu suggests the rise could be attributed to both code leaks of other banking Trojans and an oversaturation of the ransomware market.

Many of the banking Trojans we’ve seen this year are reminiscent of those we’ve seen in the past. Others are old threats being distributed in new ways, targeting new victims.

Terdot, a banking Trojan first seen in October 2016, takes its inspiration from source code of the Zeus banking Trojan following Zeus’ source code leak in 2011. IcedID, another new banking Trojan that emerged in September, shares traits with Gozi, Zeus, and Dridex.

“Overall, this is similar to other banking Trojans, but that’s also where I see the problem,” says Limor Kessem, executive security advisor for IBM Security, of IcedID. It’s rare to see banking Trojans that don’t share qualities with existing variants. Attackers are copying one another and adding new features like anti-evasion techniques to further advance the malware.

Here, we look back on the new and evolved ways banking Trojans targeted victims in 2017. Any threats we missed that should’ve made the list? Which do you think will stick around next year? Feel free to leave your thoughts in the comments and read on for more.

 

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/9-banking-trojans-and-trends-costing-businesses-in-2017/d/d-id/1330690?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

WhatsApp and Facebook told to stop sharing data

Hör auf! Stop it! Arrête ça!

That’s the order from European countries that have laid it on the line to WhatsApp: Germany told it to stop sharing German users’ data with parent company Facebook in September 2016, the UK told it in November 2016 to back off before Facebook even started, and now France has joined the “get-your-hands-off!” countries.

The order came on Monday from France’s ultra-vigilant privacy watchdog, the Chair of the National Data Protection Commission (CNIL).

CNIL gave WhatsApp a month to comply. In its public notice, it said that the messaging app will face sanctions for sharing user phone numbers and usage data for “business intelligence” purposes if it doesn’t comply.

The watchdog explained that it started looking into the matter last year, after WhatsApp announced that it was going to start sharing users’ phone numbers and other personal information with Facebook, in spite of years of promises that it would never, ever do such a thing.

The move was for ad targeting, of course, and to give businesses a way to communicate with users about other things, like letting your bank inform you about a potentially fraudulent transaction or getting a heads-up from an airline about a delayed flight. The reasons fell into three buckets: targeted advertising, security, and evaluation and improvement of services (“business intelligence”).

For a window of 30 days, WhatsApp offered users the option of opting out of data sharing for the purposes of advertising, but no way to entirely opt out of the new data sharing scheme.

The move outraged privacy advocates. After all, at the time of its $19 billion acquisition by Facebook in 2014, it had promised never to share data.

That promise goes back further still. In November 2009, WhatsApp founder Jan Koum posted to the company’s blog this promise:

So first of all, let’s set the record straight. We have not, we do not and we will not ever sell your personal information to anyone. Period. End of story. Hopefully this clears things up.

CNIL wanted an explanation of how the data was processed and transferred, and it asked WhatsApp to hold off on targeted advertising in the meantime.

In its efforts to verify that WhatsApp’s data processing was being done legally, CNIL carried out online inspections, sent a questionnaire to the company and then beckoned WhatsApp to a hearing. WhatsApp told CNIL that the data of 10 million French users had actually never been used for targeted advertising, but no matter: the CNIL says it uncovered violations of the French Data Protection Act during its investigations.

CNIL says the security purpose for the data transfer seemed to be essential for the app to function, so that part of the data transfer between WhatsApp and Facebook is legal. But not so the business intelligence – i.e., the sharing of non-essential information to improve the function of the app – given that users couldn’t opt out. From CNIL’s statement:

The only way to refuse the data transfer for ‘business intelligence’ purpose is to uninstall the application.

CNIL said that it had repeatedly asked WhatsApp to provide a sample of the French users’ data it had transferred to Facebook, but the company balked. WhatsApp explained that since it’s located in the US, it figures that it’s only subject to that country’s laws.

Umm, no, said CNIL, which says it has the power to regulate “the moment an operator processes data in France.”

WhatsApp said in a statement that it’s only collecting a smidgen of data because privacy is “incredibly important” to the company.

It’s why we collect very little data, and encrypt every message.

WhatsApp says it will continue to work with the CNIL “to ensure users understand what information we collect, as well as how it’s used,” and in spite of all these data protection authorities barking out differing orders:

We’re committed to resolving the different, and at times conflicting, concerns we’ve heard from European Data Protection Authorities with a common EU approach before the General Data Protection Regulation comes into force in May 2018.

The EU’s influential privacy body, the Article 29 Working Party (WP29), has been demanding answers from WhatsApp about its policy change since a few months after it was announced. The WP29 published an open letter that warned the chat app about sharing user data with the wider group of Facebook companies, forcing WhatsApp to pause data transfer.

In October, the WP29 once again turned its steely gaze WhatsApp-ward to step up action over user consent and privacy following Facebook’s failure to address breaches of EU law – failure that resulted in a £94m fine for “misleading” the EU over its WhatsApp takeover.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uW4fXY9EnMA/

Windows 10 password manager bug is hiding good news

Google Project Zero researcher Tavis Ormandy has spotted a flaw in the Keeper browser password manager extension, which Microsoft recently started bundling with developer builds of Windows 10.

It doesn’t sound like good news, and in one important respect it isn’t – the existence of a security flaw is never better than no security flaw.

Peer a bit harder, though, and out of the gloom you might spot a surprising good news story worth paying attention to if you’re a Windows 10 user.

More on that later, but first the vulnerability itself, which is severe enough to allow a malicious website to steal any password accessed by the Keeper browser extension version 11.3 (including for people who downloaded it independently of Windows 10), introduced on 8 December.

Ormandy said he’d encountered almost the same flaw in the (then unbundled) product in August 2016. Putting Keeper on notice of Project Zero’s 90-day disclosure-and-fix deadline, Ormandy wrote:

I think I’m being generous considering this a new issue that qualifies for a ninety day disclosure, as I literally just changed the selectors and the same attack works.

But, let’s give credit to Keeper’s developers for quickly jumping on the issue:

From the time we were notified of this issue, we resolved it and issued an automatic extension update to our customers within 24 hours.

Anyone running Keeper on Edge, Chrome or Firefox would automatically have received the updated version 11.4.4 or newer through the extension updating process.

Safari users should update manually by visiting a download page. Mobile and desktop versions are not affected by the flaw.

It would be easy to berate Microsoft over a flaw in a piece of software bundled with Windows, whether or not those downloading it were aware of its existence, but let’s dig deeper into the issues in play.

First, the software was part of a Windows 10 build downloaded from the Microsoft Developers Network (MSDN), a repository used by software professionals to test out beta Windows builds, and not Windows users at large.

They’d also have to be active users of the Keeper browser extension – just having the software wouldn’t expose anyone.

The thorny issue, then, is whether bundling security software is a good idea in the first place.

Microsoft has bundled software it deems might be helpful since the beginnings of Windows, although rarely from branded third parties. Doing so implies some kind of security check has been carried out on the program.

It’s not clear whether this was done in this case, but even if it wasn’t, its inclusion does at least signal that Microsoft is thinking about including password management with future versions of Windows 10.

If so, this is good news. While the flaw reminds us that password managers are not infallible, they are surely better than no password manager at all. They improve password strength, reduce the likelihood that passwords are reused, and integrate multi-factor authentication.

Including even a basic password manager in Windows 10 or Edge would help boost uptake, a positive step.

Ironically, this flaw might not even have been noticed in time had it not been bundled by MSDN first.

So, let’s thank Ormandy for spotting a potentially serious flaw, but also praise Microsoft, however clumsily, for broaching the important issue of how users should be securing passwords.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Z-fE0Sg2H3o/

New York to crack open its code, looking for bias

In 2016, ProPublica released a study that found that algorithms used across the US to predict future criminals – algorithms that come up with “risk assessments” by crunching answers to questions such as whether a defendant’s parents ever did jail time, how many people they know who take illegal drugs, how often they’ve missed bond hearings, or if they believe that hungry people have a right to steal – are biased against black people.

ProPublica came up with that conclusion after analyzing what it called “remarkably unreliable” risk assessments assigned to defendants:

Only 20% of the people predicted to commit violent crimes actually went on to do so.
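As a toy illustration of what that figure means (the count of 1,000 flagged people below is hypothetical; only the 20% rate comes from the report):

```python
# ProPublica's 20% figure is the precision of the "will commit a violent
# crime" label: of everyone flagged, only a fifth went on to do so.
flagged = 1000                      # hypothetical number of people flagged
precision = 0.20                    # rate reported by ProPublica

true_positives = round(flagged * precision)
false_positives = flagged - true_positives

print(true_positives, false_positives)  # 200 800
```

In other words, under these assumed numbers, four out of five people labeled future violent criminals were flagged wrongly.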

What ProPublica’s data editors couldn’t do: inspect the algorithms that are used to come up with such scores. That’s because they’re proprietary.

The algorithms that produce the risk assessment scores that are widely used throughout the country’s criminal justice systems aren’t the only ones that have been found to be discriminatory: similarly, studies have found that black faces are disproportionately targeted by facial recognition technology. The algorithms themselves have been found to be less accurate at identifying black faces – particularly those of black women.

It’s because of such research findings that New York City has passed a bill to study biases in the algorithms used by the city. According to Motherboard, it’s thought to be the first in the country to push for open sourcing of the algorithms used by courts, police and city agencies.

The bill, Intro 1696-A, would require the creation of a task force that “provides recommendations on how information on agency automated decision systems may be shared with the public and how agencies may address instances where people are harmed by agency automated decision systems.”

Passed by the City Council on 11 December, the bill could be signed into law by Mayor Bill de Blasio by month’s end.

The bill’s current form doesn’t go as far as criminal justice reformers and civil liberties groups would hope.

An earlier version introduced by council member James Vacca, of the Bronx, would have forced all agencies that base decisions on algorithms – be it for policing or public school assignments – to make those algorithms publicly available. The watered-down version only calls for a task force to study the possibility of bias in algorithms, be it discrimination based on “age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage or citizenship status.”

The idea of an “open-source” version was resisted by Tech:NYC, a high-tech industry trade group that counts among its members companies such as Facebook, Google, eBay, Airbnb and hundreds of small startups, such as Meetup.

Tech:NYC policy director Taline Sanassarian testified at an October hearing that the group was concerned that the proposal would have a chilling effect on companies that might not want to work with the city if doing so required making their proprietary algorithms public. She also suggested that open-sourcing the algorithms could lead to Equifax-like hacking:

Imposing disclosure requirements that will require the publishing of confidential and proprietary information on city websites could unintentionally provide an opportunity for bad actors to copy programs and systems. This would not only devalue the code itself, but could also open the door for those looking to compromise the security and safety of systems, potentially exposing underlying sensitive citizen data.

But most of the technologists in the room didn’t agree with her, according to Civic Hall.

Civic Hall quoted Noel Hidalgo, executive director of the civic technology organization BetaNYC, who said in written testimony that “Democracy requires transparency; copyright nor ‘trade secrets’ should ever stand in the way of an equitable, accountable municipal government.”

Another technologist who spoke in favor of the open-sourcing of the algorithms was Sumana Harihareswara, who said that open-source tools and transparency are the way to get better security, not worse.

If there are businesses in our community that are making money off of citizen data and can’t show us the recipe for the decisions they’re making, they need to step up, and they need to get better, and we need to hold them accountable.

Joshua Norkin, a lawyer with the Legal Aid Society, told Motherboard’s Roshan Abraham that it’s “totally ridiculous” to say that government has some kind of obligation to protect proprietary algorithms:

There is absolutely nothing that obligates the city to look out for the interests of a private company looking to make a buck on a proprietary algorithm.

The argument over whether open source or proprietary black-box technologies are more secure should sound familiar. In fact, the same debate is taking place now, a year before our next US election, with regards to how to secure voting systems.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5U_BIh5076M/

UK teen dodges jail time for role in DDoSes on Natwest, Amazon and more

Brit teen Jack Chappell has avoided being sent to prison after pleading guilty to helping launch DDoS attacks against NatWest, Amazon and Netflix, among others.

Chappell, 19, from Heaton Moor, Stockport, launched 2,000 DDoS attacks and aided several others as part of the vDos “booter” service. The site posed as a server stress-testing service but in fact sold its computing power to customers who wanted to launch DDoS attacks.

He was allegedly helped by the two Israelis who were arrested for running vDos, Itay Huri and Yarden Bidani, also 19. Both teens were collared in September 2016 after an FBI-assisted investigation.

His victims included web titans such as Netflix, Vodafone, the BBC and Amazon, as well as smaller sites, such as that belonging to Manchester College, where Chappell studied. He also sold his services freely and offered technical support to his customers.

Chappell was arrested in April 2016 when investigators traced his IP address after the attack on his college.

In July he pleaded guilty at Manchester’s Minshull Street Crown Court to impairing the operation of computers under the Computer Misuse Act, encouraging or assisting an offence, and money laundering. Today he was sentenced to 16 months in youth custody, suspended for two years, and ordered to undertake 20 days of “rehabilitation”.

This means he will only serve time in a young offenders’ institute if he is convicted of another crime or does not comply with the conditions imposed by the court.

According to the Manchester Evening News, Judge Maurice Greene said in his sentencing remarks: “It is a tragedy to see someone of undoubted talent before the courts… You were taken advantage of by those more criminally sophisticated than yourself.”

vDos masqueraded as a stress-testing service for servers (archived version of site available here) but was in fact a “booter”, a service which sells DDoS attacks, charging between $30 (£22.37) and $200 (£149.15) a month for different levels of service.

Security journalist Brian Krebs believed the site made around $600,000 over two years.

The identities of Huri and Bidani were revealed online after a stolen database was sent to Krebs, which contained the personal details of the two 19-year-olds.

Israeli prosecutors have charged the pair with conspiracy to commit a felony, prohibited activities, tampering with or disrupting a computer, and storing or disseminating false information. The case has not yet reached court. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/20/stockport_ddoser_given_suspended_sentence/

Security Worries? Let Policies Automate the Right Thing

By programming ‘good’ cybersecurity practices, organizations can override bad behavior, reduce risk, and improve the bottom line.

Cybersecurity and morality might seem like two entirely different universes. Yet there’s something distinctly moralistic in the narrative that surrounds the security industry. It’s a narrative that pits good against evil as starkly as any horror flick or morality play — with an emphasis on the dark side.

The security industry is engaged almost exclusively in the pursuit of the bad thing — the bad actor, the malware, the worm that turns PCs into zombies — and punishing it. All too often the remedy is to kill the bad without enforcing the good. But what if there were a different approach to security — a way to automate doing the right thing? To bring our better angels into the security narrative?

Much of the security industry adheres to this stomp-out-the-bad model, with mixed results. And with so much bad to go around, it’s no wonder the cybersecurity market is booming. By one estimate, the market will be worth over $230 billion by 2022, up from nearly $138 billion today. Yet the cost and number of breaches are increasing even faster than security spending. It’s what led VMware CEO Pat Gelsinger to tell VMworld 2017 attendees that the security industry has failed its customers — that the prevailing security model is “broken.”

In fact, most security breaches and system failures are the result of people not operating systems correctly. They forget to do something or give themselves permission to do an action, then leave that permission open so that bad actors can take advantage of it. These missteps could be avoided by a security approach that automatically directs, guides, or encourages system operators to do the right thing or blocks them from doing bad things. It is an enlightened security leader who prioritizes and budgets for this kind of security policy enforcement; without active and automated enforcement of policy, the breaches keep coming, costs keep rising, and heads keep rolling.

To draw an analogy from the parenting world, the dominant security model today is the equivalent of raising kids only by punishing them when they do bad. A more effective approach is to encourage kids when they do the right thing — thereby building a decision-making framework in their frontal cortex that will override bad behavior. Similarly, by automating good practices in the security world, the system can override bad behavior, which will lead to a safer environment.

At the risk of stating the obvious, this approach is not based on some naïve denial of the existence of the bad actor, the malware — the dark side. In fact, when recently asked what malware a policy enforcement approach would catch, I responded simply that it doesn’t; rather, assume the malware is already present and trying to do something bad. Once that assumption is accepted, you have the opportunity to turn the security model on its head into something far more powerful and resilient to zero-day attacks.

Let’s say you want to protect workloads you have running in the cloud. The cloud, of course, is one of the big drivers of the rapid increase in security spending — particularly the increased deployment of cloud-based business applications. It’s also a rich source of dark-tinged security narratives, particularly as it pertains to workloads. That’s because workloads today can span multiple cloud platforms and are vulnerable to security breaches as they move beyond the boundaries of the data center. In the words of Forrester analyst Andras Cser, manual management of cloud workloads is essentially a death wish. That’s what not to do.

But what sort of security policy would constitute doing the right thing in this context? And how could one have a policy that scales? A security policy is simply what you decide a priori is the correct behavior. You might decide to protect workloads by automating the enforcement of security policies based on contextual understanding of the people, data, and infrastructure that access and support the workload, and consistently enforce this across any cloud.

For example, consider a workload that is running in a bank’s cloud data center in Europe and the workload is migrated to a cloud data center outside the EU. The data in the workload was accessible by a bank admin before the move, but now, policy and regulatory mandates (geofencing requirements for data sovereignty or GDPR) no longer permit a third-party system admin to access an encryption key to look inside private workload data, even though the workload was successfully moved to the new location. To protect the data from prying eyes, the bank could institute a policy delineating “who can access” based on “where a workload is located.” It’s the right thing to do, can be automated, and is easily enforceable, without manual intervention.
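The bank’s geofencing rule above amounts to a simple automated check. A minimal sketch in Python, with all role names and regions hypothetical:

```python
# Hypothetical policy: a third-party admin may access a workload's
# encryption key only while the workload resides in a permitted region.
PERMITTED_REGIONS = {"eu-west-1", "eu-central-1"}  # assumed EU regions

def key_access_allowed(role: str, workload_region: str) -> bool:
    """Enforce the geofencing rule automatically, with no manual step."""
    if role == "third-party-admin":
        return workload_region in PERMITTED_REGIONS
    # Other roles would be governed by their own policies (not modeled here).
    return True

# Before the migration, access is allowed; after it, automatically denied.
print(key_access_allowed("third-party-admin", "eu-west-1"))  # True
print(key_access_allowed("third-party-admin", "us-east-1"))  # False
```

The point is that the decision follows the workload wherever it moves, rather than relying on an operator to remember to revoke access.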

That’s one way to automate good security practices — and it will certainly give our better angels a stronger voice in the security narrative. 


John De Santis has operated at the bleeding edge of innovation and business transformation for over 30 years – with international and US-based experience at venture-backed technology start-ups as well as large global public companies. Today, he leads HyTrust, whose mission is … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/security-worries-let-policies-automate-the-right-thing/a/d-id/1330680?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Five Arrested for Cerber, CTB-Locker Ransomware Spread

Authorities arrest three Romanian suspects for spreading CTB-Locker malware and two for a ransomware case linked to the United States.

Romanian authorities have arrested three suspects for spreading a form of ransomware called Curve-Tor-Bitcoin Locker (CTB-Locker) throughout Europe. Two members of the same criminal group have been arrested for distributing Cerber ransomware within the United States.

An investigation into CTB-Locker began in early 2017, when authorities were alerted to Romanian nationals sending spam messages designed to look like they came from Italy, the Netherlands, and the UK. The messages infected systems and encrypted data with CTB-Locker ransomware, which targets almost all versions of Windows including XP, Vista, 7, and 8.

Two suspects were arrested for contaminating a large number of systems in the US with Cerber ransomware. Initially the two investigations were separate, but they were combined when it was discovered that members of the same Romanian criminal group were responsible for both. The suspects did not develop the malware themselves but acquired it before launching their infection campaigns.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/five-arrested-for-cerber-ctb-locker-ransomware-spread/d/d-id/1330684?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

WordPress captcha plugin on 300,000 sites had a sneaky backdoor

The folk at WordFence are warning that the WordPress Captcha plugin, popular enough to get around 300,000 installations, should be replaced with the latest official WordPress version (4.4.5).

To help admins, WordFence has worked with the WordPress plug-in team to patch pre-4.4.5 versions of the plug-in; the author has been blocked from publishing updates without WordPress review; and WordFence now includes firewall rules to block Captcha and five other plugins from the same author.

Matt Barry explained that the group took interest in the plug-in after it changed hands in September. Three months later, Captcha version 4.3.7 landed, and that’s the version that WordFence found carried the backdoor.

The plug-in’s auto-downloader “downloads a ZIP file from https://simplywordpress[dot]net/captcha/captcha_pro_update.php”, which is how the backdoor is put onto the target install.

“This backdoor creates a session with user ID 1 (the default admin user that WordPress creates when you first install it), sets authentication cookies, and then deletes itself.”

$wptuts_plugin_remote_path = 'https://simplywordpress.net/captcha/captcha_pro_update.php';
---
$wptuts_plugin_remote_path = 'https://simplywordpress.net/captcha/captcha_free_update.php';
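The only difference between the backdoored and clean versions is the update URL. As a rough hypothetical sketch (not WordFence’s actual detection logic — the regex and function names are assumptions for illustration), a scanner for this class of backdoor could flag plugin source that fetches updates from hosts other than wordpress.org:

```python
import re

# Hypothetical heuristic: find update-style URLs in plugin source and report
# any host that is not wordpress.org.
UPDATE_URL = re.compile(r"https?://([^/'\"\s]+)/\S*update\S*", re.IGNORECASE)

def suspicious_update_hosts(plugin_source: str) -> list:
    """Return non-wordpress.org hosts referenced by update-looking URLs."""
    hosts = UPDATE_URL.findall(plugin_source)
    return [h for h in hosts if not h.endswith("wordpress.org")]

line = "$wptuts_plugin_remote_path = 'https://simplywordpress.net/captcha/captcha_pro_update.php';"
print(suspicious_update_hosts(line))  # ['simplywordpress.net']
```

Legitimate plugins pull updates through wordpress.org, so an update endpoint on a third-party domain is a reasonable signal to investigate, even if it is not proof of malice on its own.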

WordFence pointed the finger at a group of people it considers repeat offenders: domain records, the post said, link simplywordpress[dot]net with one Martin Soiza, via a domain contact e-mail belonging to Stacy Wellington.

The group’s Mark Maunder put together a backgrounder on Soiza in September 2017.

Other plug-ins from the simplywordpress site are Convert me Popup, Death To Comments, Human Captcha, Smart Recaptcha, and Social Exchange – and all of them contain the backdoor code, Barry wrote.

The point of the backdoor, the post said, is to create cloaked backlinks to various payday loan businesses, to boost their Google rankings. As well as Soiza and Stacy Wellington, Barry traced links to a number of payday loan companies, some registered to Soiza, one to Charlotte Anne Wellington. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/20/backdoor_wordpress_captcha/

Windows 10 Hello face recognition can be fooled with photos

If you’ve skipped recent Windows 10 Creators Updates, here’s a reason to change your mind: its facial recognition security feature, Hello, can be spoofed with a photograph.

The vulnerability was announced by German pentest outfit Syss at Full Disclosure.

Even if you’ve installed the fixed versions that shipped in October – builds 1703 or 1709 – facial recognition has to be set up from scratch to make it resistant to the attack.

The “simple spoofing attacks” described in the post are all variations on using a “modified printed photo of an authorised user” (a frontal photo, naturally) so an attacker can log into a locked Windows 10 system.

On vulnerable versions, the attack works against both the default config and Windows Hello with its “enhanced anti-spoofing” feature enabled, Syss claimed.

“If ‘enhanced anti-spoofing’ is enabled, depending on the targeted Windows 10 version, a slightly different modified photo with other attributes has to be used, but the additional effort for an attacker is negligible.”

The researchers tested their attack against a Dell Latitude running Windows 10 Pro, build 1703; and a Microsoft Surface Pro 4 running build 1607.

They tried to change the Surface Pro’s config to “enhanced anti-spoofing”, but claimed its “LilBit USB IR camera only supported the default configuration and could not be used with the more secure face recognition settings.”

The researchers published three proof-of-concept videos on YouTube. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/20/windows_10_hello_face_recognition_can_be_fooled_with_photos/