
New Attack Campaigns Suggest Emotet Threat Is Far From Over

Malware described by the DHS as among the worst ever continues to evolve and grow, researchers from Cisco Talos, Cofense, and Check Point Software say.

In a troubling development for organizations, security researchers are reporting a recent resurgence in activity related to Emotet — malware that the US Department of Homeland Security (DHS) has previously described as among the most destructive ever.

Cisco Talos on Thursday reported seeing increased Emotet activity targeting US military domains and domains belonging to state and federal governments.

According to the vendor, the operators of Emotet appear to have successfully compromised the accounts of one or more people working for or with the US government and sent out spam emails containing the malware to their contacts. The result was a rapid increase in the volume of messages containing Emotet directed at .mil and .gov top-level domains last month and so far this month, the vendor said.

Researchers at Cofense, meanwhile, reported on another Emotet campaign, this one targeted at some 600 staffers at the United Nations. The campaign involved an email purporting to be from the Permanent Mission of Norway with an attachment that, if opened, would eventually result in Emotet being downloaded on the system.

Jason Meurer, senior research engineer at Cofense, says there have been at least two previous Emotet-related compromises of a permanent mission to the United Nations, which may have been used to gather contact lists and emails. “We saw a few other subject lines that appear to have been scraped from stolen emails, likely indicating more victims leading up to this most recent campaign,” he says.

In late December, Check Point Software said its incident response team was seeing hundreds of Emotet attacks per day, including one on the city of Frankfurt that forced officials to take its network offline to prevent further damage. The company says it responded to some 34 attacks last year in which Emotet had been used to infect a network with Ryuk ransomware. In fact, every Ryuk ransomware incident Check Point investigated in 2019 involved Emotet.

Ripple Effects
Emotet emerged in 2014 as a banking Trojan, but over the years has morphed into one of the most sophisticated and widely used tools for distributing malware. Its operators are known for infecting systems widely and then selling access to those systems to other threat actors, most notably those behind the Trickbot banking Trojan and the Ryuk ransomware family.

Emotet spreads mainly via spam email — often customized to appear more convincing to targeted victims. The malware is typically concealed in PDF documents, malicious links, or rogue Word documents. Typical lures to get users to click on the attachments and links have included names suggesting PayPal receipts, shipping notifications, invoices for payments, and legal documents. The recent campaign targeting UN staffers involved an attachment that purported to be some kind of a signed agreement involving the Norwegian government.  

Once Emotet infects a system, it steals names and email addresses from victims’ contact lists and uses them to send phishing emails to other victims. It can also steal passwords and comes integrated with features for detecting sandboxes and other security-control mechanisms. Emotet campaigns have hit organizations around the world, but among the most heavily targeted are those based in North America, the UK, and Australia.

“Getting infected with Emotet has many different ripple effects,” says Craig Williams, director of outreach at Cisco Talos. “Infected systems are used to transmit Emotet to additional victims, sensitive information such as email data can be exfiltrated, and the infection gives attackers the ability to move laterally within networks where Emotet is present.”

What makes Emotet especially troubling is the way it uses social engineering and personal and professional relationships to spread, Cisco Talos said in its report on the recent attacks against .mil and .gov targets. Because Emotet uses a victim’s contact list to send itself to other people, a person receiving the email can be lured into believing it is safe. Sometimes the message that Emotet sends includes the contents of a previous email exchange between the victim and the recipient, further adding to its apparent authenticity, Cisco Talos said.

Remediating Emotet infections can be challenging because of how adept the malware is at spreading inside a network — from a single machine to hundreds, Williams says. “Additionally, it has been used in some large-scale ransomware campaigns that have resulted in large amounts of data loss and destruction,” he says.

Security researchers and others have been especially worried about Emotet’s use in distributing other malware and in providing access to infected networks. Back in June 2018, the DHS’s Cybersecurity and Infrastructure Security Agency (CISA) had described Emotet as “among the most costly and destructive malware” targeting government, public, and private-sector organizations. According to CISA, each Emotet infection has cost state, local, and federal government entities up to $1 million to remediate.

CISA has warned about the polymorphic nature of the malware and its ability to continuously evolve and update its functions, as well as its ability to evade typical signature-based detection systems and to maintain persistence on an infected system.

Since the 2018 CISA alert, security researchers have said the malware and methods used to distribute it have only become more devious and dangerous.

“Emotet has established itself as a king amongst malware distributors, capable of delivering infections to a large number of infected hosts,” Check Point said in a report released this week. “It is also able to act as a launching platform for precise and coordinated attacks against well-financed organizations.”  

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Comprehend the Buzz About Honeypots.”

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/new-attack-campaigns-suggest-emotet-threat-is-far-from-over/d/d-id/1336825?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

With International Tensions Flaring, Cyber Risk is Heating Up for All Businesses

Risks of nation-state attacks go beyond Iran, and the need for awareness and security don’t stop at any national border.

(image by Pablo Lagarto, via Adobe Stock)

When security issues shift from phishing and trojans to things that explode in the night, they tend to get a lot of attention. Recent military action involving the United States and Iran has led many to speculate about possible cybersecurity repercussions, but experts question whether the threat landscape has actually changed.

“In the cyber world, there’s a war going on all the time,” says Elad Ben-Meir, CEO of SCADAfence. “There are attempts of nation state-backed attacks happening all the time.”

The threat landscape

“These players — Iran, China, and others — are always engaged,” says Mark Testoni, CEO of SAP NS2. He says that threat actors are always probing and poking to see which opportunities are available and which data is visible. That constant probing in the cyber realm marks a clear difference from the situation Testoni remembers from his youth.

“When we go back to when I was growing up in the Cold War era, the battlefields were pretty defined,” Testoni says, explaining, “It was sea, land, air, and then space over time. Now, the Internet is obviously one of those battlefields.”

And for many executives and experts, businesses are on the battlefield whether they’re a direct target or not. The question is not whether businesses are at risk from threats tied to international sociopolitical affairs, but rather what sort of risks they face. What does that overall threat landscape look like to corporations?

Attacks from different directions

“Two weeks ago, I would have said probably the biggest immediate risk is by criminal organizations,” says Peter Corraro, cyber governance manager at Wärtsilä. Those criminal organizations have an ultimate goal that’s straightforward — they want to extract data or behavior from the company that can be converted to money.

Nation-state sponsored attacks, on the other hand, “…are going to be more specific, not necessarily financially focused, but looking to impact the organization they’re attacking along some other line, whether that’s to cause panic or to make a statement,” Corraro says.

Making a statement can mean attacking different targets than most criminals might have in their sights. “I think it’s well-documented that Chinese actors, among the many things they are looking for, is intellectual property, [sic]” says Testoni. Other actors, he points out, could have aims that include the large-scale economic disruption that might result from DDoS attacks against financial services institutions.

Outside traditional IT targets, “Industrial infrastructure worldwide is vulnerable to cyber attack and most industrial environments are underprepared for defending themselves. This not only applies to Iran but around the world,” says Sergio Caltagirone, vice president of threat intelligence at Dragos. These industrial targets are vulnerable — and their vulnerability could have wide-ranging impacts.

“All it takes is one or two systems that aren’t protected or that haven’t been patched and the attackers will wreak whatever type of havoc they have at their disposal,” says Jason Kent, hacker in residence at Cequence Security. The havoc could extend well beyond the shop floor, as well.

“You need to remember that every IoT device is part of your network and may be the gateway of choice of the attacker to penetrate your network,” says Natali Tshuva, CEO of Sternum Security. 


 

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/with-international-tensions-flaring-cyber-risk-is-heating-up-for-all-businesses----/b/d-id/1336824?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

College students call for ban on facial recognition on campus

US elementary and secondary schools have been experimenting with facial recognition. But so far, it hasn’t been widely adopted at colleges – and students are taking part in a nationwide effort to keep it that way.

On Tuesday, the digital rights group Fight for the Future announced that it’s teamed up with Students for Sensible Drug Policy to ban the biometric technology from university campuses. The groups have launched a website and toolkit for campuses that want to join in.

So far, student groups have been organized at George Washington in DC and DePaul in Chicago. Activists are also reaching out to 40 universities – including Stanford, Harvard, and Northwestern – to determine whether they’re using what Fight for the Future calls “this problematic technology.”

Evan Greer, Deputy Director of Fight for the Future, says that students have a right to know if their schools’ administrations plan to experiment on them with biometric surveillance:

Facial recognition surveillance spreading to college campuses would put students, faculty, and community members at risk. This type of invasive technology poses a profound threat to our basic liberties, civil rights, and academic freedom.

Schools that are already using this technology are conducting unethical experiments on their students. Students and staff have a right to know if their administrations are planning to implement biometric surveillance on campus.

Erica Darragh, board member at Students for Sensible Drug Policy, said that facial recognition doesn’t make anybody safer. Rather, it just tramples on privacy rights, particularly those of minorities.

An oft-cited study from Georgetown University’s Center for Privacy and Technology found that automated facial recognition (AFR) is an inherently racist technology. Black faces are over-represented in face databases to begin with, and FR algorithms themselves have been found to be less accurate at identifying black faces.

In another study published last year by MIT Media Lab, researchers confirmed that the popular FR technology it tested has gender and racial biases.

Among many other privacy rights groups and FR critics, the American Civil Liberties Union (ACLU) has argued that extensive video surveillance systems simply don’t work.

For example, the terrorists behind the June 2007 attempted attacks in the UK weren’t put off by the country’s enthusiastic embrace of surveillance cameras. In fact, suicide attackers may well be attracted by the television coverage that such cameras deliver, the ACLU suggests.

The attacks were fortunately botched, but not for reasons that had anything to do with surveillance cameras. In London, it was human observation and common sense that appear to have thwarted the attack. In Glasgow, it was physical security – airport barriers – that prevented the attack from succeeding.

Neither has the technology cut down on petty crime. Take Notting Hill Carnival for example. After years of high failure rates when trying to implement the technology, London’s Metropolitan Police finally threw in the towel in 2018. In 2017, the “top-of-the-line” AFR system they’d been trialling for two years couldn’t even tell the difference between a young woman and a balding man.

This is only the latest of Fight for the Future’s multiple campaigns to stop the steady march of facial recognition. In July 2019, the group called for a federal ban on FR surveillance, and in October 2019, artists and fans waged – and won – a grassroots war to stop “the Orwellian surveillance technology” from invading live music events, such as the enormous Coachella, Bonnaroo, and SXSW music festivals.

As Greer and the guitarist Tom Morello noted when writing up the victory for Buzzfeed News, the anti-AFR campaign caused concert promoters to back off of plans to use the technology. Ticketmaster, for one, had invested in military-grade facial recognition software it had planned to use to link digital tickets with concertgoers’ images so they could “just walk into the show.”

The technology belongs neither at concerts nor on campuses, Darragh says:

Students should not have to trade their right to privacy for an education, and no one should be forced to unwittingly participate in a surveillance program which will likely include problematic elements of law enforcement. This automation of racial and political profiling threatens everyone, especially students, faculty, and campus guests of color. Students have an obligation to prevent this technology going mainstream, beginning with university campuses, where we have the most power and we know how to win.

The call for a campus ban on FR is part of Fight for the Future’s broader BanFacialRecognition.com campaign, which has been endorsed by more than 30 civil rights groups. They’re calling for local, state, and federal lawmakers to ban government and law enforcement use of facial recognition. Four cities have already banned the controversial technology: San Francisco, Berkeley, and Oakland in California, and Somerville in Massachusetts.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cPl_2EJ8pSY/

Google to kill third-party Chrome cookies in two years

Google doesn’t want to block third-party cookies in Chrome right now. It has promised to make them obsolete later, though. Wait – what?

The search engine giant gave us the latest update this week on its journey towards what it says will be a more private, equitable web. It announced the initiative, known as the Privacy Sandbox, in August 2019, saying it wants to make the web more private for users.

The discussion about online ads and privacy revolves around cookies because they’re what support many predatory advertising models today. It works like this: you visit a website and it puts a small file on your hard drive. This cookie contains information about the session – when you visited, what you looked at, what IP address you came from, and so on.

Some companies use these purely to remember you when you go back so that you don’t have to sign in again. Those are first-party cookies, and they’re a great way to make the web more convenient.

Other publishers let adtech companies set their own cookies on your machine, which those companies then use to track you across different publishers. So suddenly a life insurance company knows that you’ve been searching for ways to give up smoking. Maybe you don’t care if one site knows you’ve looked at products on another.

What might creep you out is that data brokers can also gather this data, along with thousands of other data points, and end up knowing more about you than your spouse does. They can then sell that data to anyone willing to pay for it. Ewww.

Google’s messaging this week took some parsing. On the one hand, it says:

Some browsers have reacted to these concerns by blocking third-party cookies, but we believe this has unintended consequences that can negatively impact both users and the web ecosystem.

On the other hand, it says:

Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. Our intention is to do this within two years.

So it’ll slowly squish third-party cookies, but only after it’s found alternatives. What does that squishing look like, and what are those alternatives?

The company already announced that it would limit third-party cookies to HTTPS connections, which will make them more secure. It plans to start doing that next month.

It will also treat cookies that don’t use the SameSite label as first-party only. SameSite is a tag that developers can include with cookies. It sets the rules for exchanging the cookie with other sites. A bank could use it to avoid sending session cookies to another site that links to a customer’s transaction page, for example, so that a third party couldn’t harvest session information. So in future, developers have to be upfront about how third-party cookies will work, or Chrome won’t send them between sites at all.
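To make that concrete, here is a minimal sketch of the labelling using Python’s standard http.cookies module (the samesite attribute requires Python 3.8 or later); the cookie name and value are invented for illustration:

```python
from http.cookies import SimpleCookie

# A cookie that is meant to travel in cross-site requests must now say so
# explicitly; otherwise Chrome treats it as SameSite=Lax (first-party only).
cookie = SimpleCookie()
cookie["session_id"] = "abc123"            # hypothetical name and value
cookie["session_id"]["samesite"] = "None"  # allow cross-site use...
cookie["session_id"]["secure"] = True      # ...but only over HTTPS
cookie["session_id"]["httponly"] = True    # keep it away from page scripts

# Emits something like:
# Set-Cookie: session_id=abc123; HttpOnly; Secure; SameSite=None
print(cookie.output())
```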

Google’s fear is that choking off third-party cookies immediately will move tracking companies (which, remember, it owns) to use more subversive tracking methods like fingerprinting. The Electronic Frontier Foundation (EFF) scoffed at this notion back in August when Google first unveiled the Privacy Sandbox, calling it “frankly, a mess,” and reminding us that Google tracks two-thirds of the web as it is. It also pointed out that Mozilla already blocks trackers, that the company tackled fingerprinting in its browser, and that Google announced plans to do the same (it reiterated those plans this week).

What else is Google doing that would help make third-party cookies obsolete? It lists a set of explainers here that outline plans including aggregated reporting (producing summary reports of cross-site activity that don’t ID individual users), measuring conversions on websites without tracking users, and Federated Learning of Cohorts (FLoC), which lets you monitor the behaviour of a group of similar people rather than individuals.

It’s also mulling the idea of holding data about browsing habits in the browser rather than with the advertiser (PIGIN), and for applications to effectively cover their eyes so they can’t see your IP address.

It also wants to ‘shard’ identities, so that the identity that your life insurance company sees is different from the one your high-risk motorcycle hang gliding club sees.

The problem is that Google wants all this to happen while still supporting the ability for people to advertise online, on the basis that publishers who can’t serve targeted ads stand to lose over half their revenue. What it doesn’t do is mull alternatives to advertising such as paid content or patron-based schemes, because these go against its business model. However it goes about it, persuading advertisers to gather less data about people will be a tall order.

Meanwhile, Brendan Eich, a co-founder of the Mozilla project, went on to create Brave, a browser that already keeps browsing activity private while letting users opt into viewing ads in return for attention tokens. It wants to make that a currency that you can use to pay for premium content over time, and it lets people tip content creators with the token.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Wd6Jr8cQUKg/

Apps are sharing more of your data with ad industry than you may think

GDPR? The California Consumer Privacy Act (CCPA)?

HA!

Those laws aren’t doing squat to protect us from the digital marketing and adtech industry, according to a new report from the Norwegian Consumer Council (NCC).

What chance do laws stand against policing what the NCC describes as a shadowy network of companies, “virtually unknown to consumers,” with which popular apps are sharing exquisitely personal behavior/interest/activities/habits data, including our religious preference, menstruation cycle, location data, sexual orientation, political views, drug use, birthday, the unique IDs associated with our smartphones, and more?

The current situation is “completely out of control, harming consumers, societies, and businesses,” the NCC writes, as evidence continues to mount against what it calls “the commercial surveillance systems” at the heart of online advertising.

There’s little restraining the industry from bombarding us with constant, mostly unavoidable privacy invasion, the Council says:

The multitude of violations of fundamental rights are happening at a rate of billions of times per second, all in the name of profiling and targeting advertising. It is time for a serious debate about whether the surveillance-driven advertising systems that have taken over the internet, and which are economic drivers of misinformation online, is a fair trade-off for the possibility of showing slightly more relevant ads.

The comprehensive digital surveillance happening across the ad tech industry may lead to harm to both individuals, to trust in the digital economy, and to democratic institutions.

Out of control

The purpose of the in-depth report – titled “Out of Control” – was to expose how large parts of the vast digital marketing/adtech industry works. To do so, the NCC collaborated with the cybersecurity company Mnemonic, which analyzed data traffic from ten popular Android apps (which are also all available on iPhones) that they chose because the apps were likely to have access to highly personal information.

There are big names in the chosen crop of apps. Given the apps’ popularity, the NCC says it regards the findings as representative of widespread practices in the adtech industry. The apps:

  • Grindr (dating)
  • OkCupid (dating)
  • Tinder (dating)
  • Clue (period tracking)
  • MyDays (period tracking)
  • Perfect365 (virtual makeup)
  • My Talking Tom 2 (children’s game)
  • Qibla Finder (app that shows Muslims where to face while praying)
  • Happn (dating)
  • Wave Keyboard (keyboard themes)

Some of the key findings about the traffic coming from those apps:

  1. All of the tested apps share user data with multiple third parties, and all but one share data beyond the device advertising ID, including a user’s IP address and GPS position; personal attributes such as gender and age; and app activities such as GUI events. The report says that that information can often be used to infer things such as sexual orientation or religious belief.
  2. Grindr, a gay dating app, shares detailed user data with many third parties, including IP address, GPS location, age, and gender. Such sharing is tucked away where we can’t see it: by using the MoPub monetization platform (owned by Twitter) as a mediator, the data sharing is “highly opaque,” the report says, given that neither the third parties nor the information transmitted are known in advance. The investigators also found that MoPub can dynamically enrich the data shared with other parties.
  3. Perfect365 also shares user data with “a very large number” of third parties, including advertising ID, IP address, and GPS position. The report says that it’s as if the app had been built “to collect and share as much user data as possible.”
  4. MyDays shares a user’s GPS location with multiple parties, and OkCupid shares users’ detailed personal questions and answers with Braze, a mobile marketing automation and customer “engagement” platform: this kind of platform is part of the industry that creates profiles that get the “right message” to the consumer at their “most receptive” moment.

Cumulatively, the ten analyzed apps were observed transmitting user data to at least 135 different third parties involved in advertising and/or behavioral profiling. The adtech industry uses the information to track us over time and across devices, in order to stitch together comprehensive profiles about individual consumers. They use those profiles and groups to target marketing, but the NCC points out that such profiles can also be used to discriminate, manipulate and exploit people.

It goes well beyond mobile apps

The adtech industry extends across different media, including websites, smart devices and mobile apps, but the NCC chose to focus on how the industry works when it comes to mobile apps.

Beyond the apps themselves are the scores of tributaries into which the data the apps collect and share flows. These are the third parties that the report traced in its analysis of data flow from those ten apps:

Location data brokers: Fysical, Safegraph, Fluxloop, Unacast, Placer, Placed/Foursquare. Never heard of them? If not, you likely don’t work in the adtech industry. Plain old consumers aren’t even aware the system exists, let alone who the players are. They may have thousands of points of data on us, but we’re kept in the dark, walled off by lengthy, legalistic privacy policies, middleman companies, plus the fact that most of us don’t know how to perform a technical analysis of app traffic.

Behavioral personalization and targeting companies: Another group that’s below the radar: Mnemonic traced data flowing to the companies Receptiv/Verve, Neura, Braze, and LeanPlum.

Systemic oversharing: There’s systemic over-collecting and oversharing throughout the industry, the NCC says. Though not all of the data transmissions Mnemonic analyzed included excessive personal data such as GPS location, put all of the data together, and you can create detailed pictures of individuals. That’s the nature of Big Data: even purportedly “anonymized” data points can be strung together to figure out exactly who we are.

You can also fingerprint devices, given that adtech liberally shares device information and metadata, such as phone model, current battery level, screen resolution and screen metadata, and information about the consumer’s mobile carrier. Examples are the dating apps OkCupid and Grindr and the kids’ game My Talking Tom 2, which all transmitted the Android Advertising ID and various metadata to AppsFlyer, a company that claims to leverage insights from “8.4 billion of the world’s connected devices”.

AppsFlyer also picked up data from Tinder on users’ Advertising ID, GPS coordinates, birthday, gender, and “target gender” – i.e., data on sexual orientation.

Good luck opting out: even after Grindr users opted out of personalized ads, the app still sent their advertising IDs, combined with their devices’ IP addresses. OkCupid also sent AppsFlyer detailed sensor data from a device’s magnetometer, gyroscope and accelerometer.
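To illustrate that fingerprinting point, here is a minimal, hypothetical Python sketch of how a few innocuous-looking device attributes can be hashed into a stable identifier; the field names and values are invented for illustration and are not drawn from the report:

```python
import hashlib
import json

def device_fingerprint(metadata: dict) -> str:
    """Hash a set of device attributes into a single stable identifier.

    None of these fields identifies a person on its own, but combined they
    can single out a device even after the advertising ID is reset.
    """
    canonical = json.dumps(metadata, sort_keys=True)  # stable key ordering
    return hashlib.sha256(canonical.encode()).hexdigest()

print(device_fingerprint({
    "model": "Pixel 4",            # hypothetical example values
    "os_version": "10",
    "screen": "1080x2280",
    "carrier": "ExampleTel",
    "battery_bucket": "70-80%",
}))
```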

Google and Facebook: Though the industry is packed with companies that are virtually unknown to consumers, by far the biggest actors are these two household names.

Their penetration of adtech is beyond the scope of the NCC’s report, it said, but Mnemonic couldn’t help but observe the floods of data the mobile apps sent these voracious data collectors. All of the apps except Clue and Grindr were observed interacting with Google’s advertising service DoubleClick. Every app transmitted data to various parts of the Google system, and all of them had integrated various Google SDKs, including Google Ads, Google Crashlytics, and Google Firebase. Some of that data transfer may be due to the Android operating system being a Google service, but it’s tough to know “where Google as a service-provider ends and where Google as an advertising service begins,” the report said.

All of the apps except MyDays sent the Advertising ID to Facebook’s graph API, and every app except Clue had integrated a Facebook SDK. That means that Facebook can potentially track consumers through the apps, even if the consumer doesn’t have a Facebook account.

What about data privacy laws?

How are these data-sharing processes legal? Under the EU’s General Data Protection Regulation (GDPR), organizations are required to ensure that only personal data that are necessary for each specific purpose of the processing are processed, and that personal data must only be processed for specified, explicit, and legitimate purposes. In other words, data protection has to be baked in, by design and default.

How do the GDPR’s requirements jibe with the systematic, pervasive background profiling of app users the NCC’s analysis found, where, for example, some apps were found to be sharing personal data by default, requiring users to actively hunt for a tucked-away setting to try to prevent tracking and profiling?

From the report:

The extent of tracking and complexity of the ad tech industry is incomprehensible to consumers, meaning that individuals cannot make informed choices about how their personal data is collected, shared and used. Consequently, the massive commercial surveillance going on throughout the ad tech industry is systematically at odds with our fundamental rights and freedoms.

The GDPR states that where user consent is required to process personal data, it has to be informed, freely given and specific. The analyzed apps weren’t doing that, the report found:

In the cases described in this report, none of the apps or third parties appear to fulfill the legal conditions for collecting valid consent. Data subjects are not informed of how their personal data is shared and used in a clear and understandable way, and there are no granular choices regarding use of data that is not necessary for the functionality of the consumer-facing services.

The industry may well defend its practices on the basis of “legitimate interests,” but the NCC argues that app users “cannot have a reasonable expectation for the amount of data sharing and the variety of purposes their personal data is used for in these cases.”

Besides which, the report pointed out, there are other ways to do digital advertising that don’t rely on third parties getting users’ personal data, such as contextual advertising.

Even if advertising is necessary to provide services free of charge, these violations of privacy are not strictly necessary in order to provide digital ads. Consequently, it seems unlikely that the legitimate interests that these companies may claim to have can be demonstrated to override the fundamental rights and freedoms of the data subject.

Thus, the report suggests, many of the third parties that collect consumer data for things such as behavioral profiling, targeted advertising and real-time bidding may be in breach of the GDPR.

TechCrunch reached out to Ireland’s Data Protection Commission (DPC) and the UK’s Information Commissioner’s Office (ICO) for comment on the NCC’s report. The DPC didn’t reply – perhaps because it’s got a backlog of pending investigations into GDPR violations, including a probe into whether Google’s processing of personal data as part of its Ad Exchange is breaching GDPR rules.

As for the ICO, a spokeswoman sent TechCrunch the statement below, from Simon McDougall, its executive director for technology and innovation. McDougall says that the ICO is prioritizing its scrutiny of the adtech industry’s use of personal data, but as TechCrunch points out, nowhere will you find the word “enforcement.”

Still, keep your eyes out for “next steps,” to be discussed soon, the ICO says:

Over the past year we have prioritised engagement with the adtech industry on the use of personal data in programmatic advertising and real-time bidding.

Along the way we have seen increased debate and discussion, including reports like these, which factor into our approach where appropriate. We have also seen a general acknowledgment that things can’t continue as they have been.

Our 2019 update report into adtech highlights our concerns, and our revised guidance on the use of cookies gives greater clarity over what good looks like in this area.

Whilst industry has welcomed our report and recognises change is needed, there remains much more to be done to address the issues. Our engagement has substantiated many of the concerns we raised and, at the same time, we have also made some real progress.

Throughout the last year we have been clear that if change does not happen we would consider taking action. We will be saying more about our next steps soon – but as is the case with all of our powers, any future action will be proportionate and risk-based.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NPeC4ksaHAY/

Update now! Popular WordPress plugins have password bypass flaws

Researchers have discovered password bypass vulnerabilities affecting two WordPress plugins from a publisher called Revmakx.

The first vulnerable plugin is RevMakx’s InfiniteWP Client, a tool that allows admins to manage multiple WordPress sites from the same interface.

The second is WP Time Capsule, a site backup and staging tool.

The urgency stems from the number of sites using these tools – between 300,000 and 500,000 for InfiniteWP, and 20,000 or more for WP Time Capsule – so if you have either of these plugins, patch as soon as possible.

According to security company WebARX, which reported the vulnerabilities on 7 January 2020, both bugs make it possible for attackers to log in to admin accounts without a password.

The InfiniteWP bypass was found in the iwp_mmb_set_request function, used to check whether the user is attempting an authorised action.

Two such actions are readd_site and add_site, but neither implements an authorisation check, which means that an attacker can craft a malicious request:

All we need to know is the username of an administrator on the site. After the request has been sent, you will automatically be logged in as the user.

An even simpler logic error in WP Time Capsule produces a similar result – this time, all you need to do is include the text string IWP_JSON_PREFIX in the request submitted to the WordPress server.
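Neither plugin’s PHP is reproduced here, but both bugs share the same shape: a request handler that authorises most actions while leaving a few state-changing ones unchecked. The Python sketch below is a simplified, hypothetical illustration of that pattern, not the plugins’ actual code:

```python
import hmac

SECRET = b"shared-secret-between-admin-tool-and-site"  # placeholder value

UNCHECKED_ACTIONS = {"add_site", "readd_site"}  # the gap described above

def handle_request(action: str, username: str, signature: str) -> str:
    # Vulnerable path: these actions skip the signature check entirely,
    # so knowing an admin's username is enough to act as that admin.
    if action in UNCHECKED_ACTIONS:
        return f"logged in as {username}"

    # Safe path: every other action must carry a valid HMAC signature.
    expected = hmac.new(SECRET, f"{action}:{username}".encode(), "sha256").hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("invalid or missing signature")
    return f"{action} performed for {username}"
```

The fix, in both cases, amounts to routing every action through the same authorisation check.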

What to do

The good news is the developer patched the issue within a day of being told of it, although many of the sites using InfiniteWP have yet to implement the update.

For InfiniteWP, the version that fixes the flaw is v1.9.4.5, which means all versions up to and including v1.9.4.4 are vulnerable.

For WP Time Capsule, the fix is in v1.21.16, with all versions up to and including v1.21.15 being vulnerable.

Updating is most easily achieved from the Plugins tab in the WordPress dashboard. There you can see which plugins have updates available, after which it’s a matter of hitting Update now to install the new versions.

Advice for managing WordPress plugins:

  • Minimise the number of plugins you have. Always remove plugins if you aren’t using them anymore. Keep your attack surface area as small as you can.
  • Keep your plugins up to date. Blogging software such as WordPress can keep itself updated, but you need to keep track of the plugins yourself.
  • Get rid of plugins that aren’t getting any more love and attention from their developers. Don’t stick with ‘abandonware’ plugins, because they’ll never get security fixes.
  • Learn what to look for in your logs. Know where to go to look for a record of what your web server, your blogging software and your plugins have been up to. Attacks often stand out clearly and early if you know what to look for, and if you do so regularly (a simple example follows this list).
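As a rough illustration of that last point, the sketch below looks for the IWP_JSON_PREFIX marker used in the WP Time Capsule bypass described above. It assumes you log request bodies somewhere (a WAF or application-level log; standard access logs record only the request line), and the log path is a placeholder:

```python
import re

MARKER = re.compile(r"IWP_JSON_PREFIX", re.IGNORECASE)

def scan_log(path: str = "/var/log/waf/requests.log") -> list[str]:
    """Return log lines containing the WP Time Capsule bypass marker."""
    hits = []
    with open(path, errors="replace") as log:
        for line in log:
            if MARKER.search(line):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for entry in scan_log():
        print(entry)
```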

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qhJn8G3sG5M/

S2 Ep23: Snake ransomware, VPN holes and phone spying – Naked Security Podcast

This week we look at VPN vulnerabilities [11:13], dig into the Snake ransomware [23:11], and decide whether our phones are spying on us [32:09].

Mark also revisits his growing list of pet peeves and Anna tests whether getting deep fake feet to your phone via SMS is real.

Host Anna Brading is joined by Sophos experts Mark Stockley, Greg Iddon and producer Alice Duckett.

Listen now!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Acnnr8mcnHI/

Top Euro court advised: Cops, spies yelling ‘national security’ isn’t enough to force ISPs to hand over massive piles of people’s private data

Analysis In a massive win for privacy rights, the advocate general advising the European Court of Justice (ECJ) has said that national security concerns should not override citizens’ data privacy. Thus, ISPs should not be forced to hand over personal information without clear justification.

That doesn’t mean the intelligence and security services can never oblige communications companies to hand over information, especially when it comes to terrorism suspects, the opinion, handed down yesterday, proposes. But those requests will need to be made “on an exceptional and temporary basis,” as opposed to sustained blanket harvesting of information – and only when justified by “overriding considerations relating to threats to public security or national security.”

In other words, a US-style hoovering up of personal data is not legal under European law.

The legal argument being made by the AG is technically advisory – the ECJ has yet to decide – though in roughly 80 per cent of cases it does side with the preliminary opinion put forward by its Advocate General, in this case Manuel Campos Sánchez-Bordona.

If the ECJ agrees, it could also have significant implications for the UK, which has passed a law giving the security services extraordinary reach and powers – a law now in legal limbo due to the UK’s ongoing plans to leave the European Union.

If this proposed legal solution is adopted by the court, the UK will be able to retain its current laws, though it would almost certainly face legal challenges and would have a hard time reaching an agreement with Europe over data-sharing – something that could have enormous security and economic implications.

The case itself was sparked by a legal challenge from Privacy International against the UK’s Investigatory Powers Act (IPA) as well as a French data retention law.

In essence, the issue was whether national governments can oblige private parties – in this case, mostly ISPs – to hand over personal details by simply saying there were national security issues at hand.

The AG opines that no, it cannot: the European Directive on privacy and electronic communications continues to apply, and is not superseded by security claims. The directive does not apply, however, where public authorities carry out such activities themselves, without requiring the cooperation of private parties.

Key part

This is the key part of the legal argument: “The provisions of the directive will not apply to activities which are intended to safeguard national security and are undertaken by the public authorities themselves, without requiring the cooperation of private individuals and, therefore, without imposing on them obligations in the management of business” (UK Case C-623/17, paragraph 34/79).

That is explained in slightly more accessible language in an ECJ press release [PDF] today. It says: “When the cooperation of private parties, on whom certain obligations are imposed, is required, even when that is on grounds of national security, that brings those activities into an area governed by EU law: the protection of privacy enforceable against those private actors.”

Privacy International also has its own explanation of the opinion. It is, unsurprisingly, happy about things, with its legal director Caroline Wilson Palow saying that it “is a win for privacy.”

“We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” she said. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”

The newly issued opinion follows a long-running battle between the authorities, who claim that EU data privacy law doesn’t apply to national security – in large part because they want unfettered access to data sources to assist in investigations – and privacy advocates concerned about Europe creating an American-style mass surveillance system.

Privacy advocates have won the argument in this document. It’s worth noting that the ECJ has repeatedly come down on the side of individual rights over governmental assertions when it comes to digital data, so this opinion is likely to become legally binding when the full court considers it.

The upshot is that the French law – which requires phone companies and ISPs to store and provide a wealth of data on all their customers, including location – will almost certainly have to be rewritten.

Interference

The AG does acknowledge the legitimate concerns behind the law, noting that it came “against a background of serious and persistent threats to national security, in particular the terrorist threat.” But he said the data storage is “general and indiscriminate, and therefore is a particularly serious interference in the fundamental rights enshrined in the Charter.”

Advocate General Campos Sánchez-Bordona goes on: “The fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law.”

Any new law aimed at retaining location and other data will have to ensure that access is “carried out in accordance with established procedures for accessing legitimately retained personal data” and is subject to the same safeguards.


Thanks to Brexit, the UK situation is more complicated. The UK, in theory at least, will be able to make its own laws – even if those amount to state surveillance of all citizens. So while the IPA breaks European law, according to this preliminary ruling, the UK could in theory retain it.

But, as with so many other things around Brexit, the truth is that the UK cannot exist in the modern world as its own digital island and so will have to reach some kind of agreement with Europe, or face the risk of being cut off from the continent when it comes to sharing data.

Despite the entire case being largely about the controversial UK law, the issue of Brexit makes it much more complicated and so the AG concludes that the ECJ should respond “in the following terms.”

“Article 4 TEU and Article 1(3) of Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) should be interpreted as precluding national legislation which imposes an obligation on providers of electronic communications networks to provide the security and intelligence agencies of a Member State with ‘bulk communications data’ which entails the prior general and indiscriminate collection of that data.”

In other words, the law is a disgrace but, hey, you seem to want to go your own way, so have at it. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/16/ecj_privacy_prelim_ruling/

Active Directory Needs an Update: Here’s Why

AD is still the single point of authentication for most companies that use Windows. But it has some shortcomings that should be addressed.

Tried and true, Active Directory has been managing permissions and access to networked resources for decades. It’s a system that has weathered storms — cyber, organizational, and competitive — and has remained the backbone of most IT environments.

AD remains the single point of authentication and authorization for most companies that use Windows networking products or operating systems. It controls access to all critical resources, and it’s the linchpin for any major project or initiative. And that remains true even in an era when more companies are leveraging the cloud and supporting a mobile-first approach.

The Cloud, On-Premises, and the AD Identity Crisis
One of the secrets of AD’s longevity has been its ability to evolve in response to new needs and challenges. As such, the topic of “the need for Active Directory modernization” has become a major point of IT industry discussion in recent years. AD has been accused by some of having an identity crisis (pun intended), although there are almost as many opinions on how to solve that crisis as there are users of AD.

With that, there are three issues that need to be addressed for AD to serve the next generation of computing:

Issue 1: User management in multiple environments. IT systems today are made up of a combination of environments and platforms, both on-premises and cloud-based, and users access them using a variety of methods, from desktops and laptops to mobile devices and virtual desktop infrastructure (VDI). To manage authentication across environments, organizations use the Azure Active Directory (Azure AD) Connect management tool, which connects on-premises identity infrastructure to Microsoft Azure Active Directory.

However, the security controls on Azure Active Directory are different from those of on-premises AD deployments; Azure AD, for example, supports multifactor authentication (MFA), while AD does not natively support MFA. So why not just switch to Azure AD? Because, as Microsoft says, “Azure Active Directory is not designed to be the cloud version of Active Directory. It is not a domain controller or a directory in the cloud that will provide the exact same capabilities with AD.” Clearly, an update to AD is needed.          

Issue 2: Security. Azure AD has the right idea; MFA is more secure than the Kerberos-based single sign-on (SSO) authentication used by AD. AD users have the option to implement MFA — but not in hybrid environments, where SSO is in control and gives users access to online resources. With the threat landscape so vast — and increasingly lethal — today, the need for multiple authentication factors is a must both for cloud-based systems and on-premises systems.

Issue 3: Regulations. One major factor that demands an AD update is the increasing security requirements of regulatory bodies. Increasingly, regulators are requiring that online services utilize MFA. Previously, customers would ask about Active Directory modernization when they needed help with AD migration, consolidation, or restructuring. Today, with data breaches wreaking havoc, the push for AD modernization is converging with the need for strong cybersecurity.

The Drive for Digital Transformation Makes AD More Important
All these factors play into the need for AD modernization. The popularity of AD has become its own Achilles’ heel; because companies relied so strongly on it during the on-premises computing era, they built their entire IT infrastructure around it. Now, as data, services, and activity move to the cloud, there is a “disconnect” between the authentication methods used by organizations and the authentication requirements for online services, whether they’re required for the security of the service or by regulators.

Many AD infrastructures are 10 to 15 years old and have grown significantly over time. Those relying on AD have learned that these early deployments are often ill-equipped to meet the needs of today’s technologies and business demands; this is especially true for large organizations with complex infrastructures. Without proper cleanup and consolidation, organizations could face security and compliance risks once they get to the cloud.

Identity Management with Identity Crisis
The key to AD security is balancing the need to streamline user access to maximize productivity against the need to protect sensitive data and systems from both accidental and deliberate privilege abuse.

But AD authentication is limited to either passwords or smart cards, each of which has its own drawbacks. Passwords can be lost, forgotten, and, of course, hacked. [Editor’s note: The author’s company is one of a number that offer passwordless MFA.] If AD relies on a username and password for its efficient SSO that allows authenticated users access to everything, a hacker who steals, guesses, or tricks a user into giving up their credentials will be able to access systems, with AD as an active accomplice. The philosophy of AD authentication was based on simpler times — before there was a plethora of malware to steal user credentials, and before hackers were able to use social engineering techniques to extract credential information from users.

AD also allows logins using smart cards, eliminating the possibility that imposters will be able to log in to systems with compromised authentication information. But card management has its own issues; it’s more expensive than username/password authentication — the company has to buy the cards, which can be lost, meaning more costs for new cards. Presumably, employees will report immediately if they lose their cards, but since card authentication is based on trusting certificate authority certificates, which can be hacked, simply not losing one’s card doesn’t necessarily guarantee anything.

MFA for All
Cognizant of the problems and sensing a market opportunity, vendors by the dozen offer MFA solution add-ons for AD. Second factors can include one-time passwords sent via text message, biometric authentications (thumbprints, etc.), smart cards, tokens, and even voice authentication.
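As a rough, standard-library-only sketch of how one such second factor works in principle, here is a time-based one-time password (TOTP) check along the lines of RFC 6238; it is illustrative only and does not describe any particular AD or vendor integration:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret: bytes, submitted: str, step: int = 30, window: int = 1) -> bool:
    """Accept codes from the current 30-second step, plus or minus one for clock drift."""
    now = int(time.time()) // step
    return any(hmac.compare_digest(hotp(secret, now + drift), submitted)
               for drift in range(-window, window + 1))
```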

While these are certainly more secure than username/password authentication, there are no guarantees; second factors can be hackable, some more than others. And if the username/password is already compromised, we’re back where we started. For a more secure user experience, it would be best to do away with that first factor altogether, and implement more secure authentication methods. This, of course, would significantly impact AD, which is so strongly associated with credential-based SSO, speaking to the need for a major update.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Keep Security on Life Support After Software End-of-Life.”

Raz Rafaeli, CEO and Co-Founder at Secret Double Octopus, is a results-driven business executive with more than 25 years of technology and leadership experience in the software, security, semiconductor, and telecom industries. Previously, Raz was the CEO of MiniFrame and … View Full Bio

Article source: https://www.darkreading.com/risk/active-directory-needs-an-update-heres-why/a/d-id/1336747?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NY Fed Reveals Implications of Cyberattack on US Financial System

A “pre-mortem analysis” sheds light on the potential destruction of a cyberattack against major US banks.

A cyberattack compromising the integrity of US financial systems could lead to an “unprecedented” reconciliation and recuperation process, bank analysts predict in new research published this week from the Federal Reserve Bank of New York.

As part of a “pre-mortem analysis,” Thomas Eisenbach, Anna Kovner, and Michael Junho Lee analyzed the potential consequences if a cyberattack harmed banks’ ability to send payments between one another. They estimate the impairment of any of the five most active US banks could lead to “significant spillovers” to other banks and affect 38% of the network on average. These top banks account for close to half of total payments, the top 10 for more than 60%.

“A cyber attack on any of the most active U.S. banks that impairs any of those banks’ ability to send payments would likely be amplified to affect the liquidity of many other banks in the system,” the analysts write. If banks respond strategically — which is likely, if there is uncertainty surrounding the incident — the extent of amplification would be even greater, they explain.

To arrive at these findings, the analysts considered how an attack on multiple banks may interfere with payment activity in the Fedwire Funds Service, which represents the majority of wholesale payments between financial institutions in the US. They chose to analyze Fedwire given how high-value payment systems could appeal to an attacker who is eager to cause widespread economic damage.

Read more details here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “How to Keep Security on Life Support After Software End-of-Life.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/risk/ny-fed-reveals-implications-of-cyberattack-on-us-financial-system/d/d-id/1336820?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple