
Security 101: What Is a Man-in-the-Middle Attack?

A breakdown of the common ways criminals employ MitM techniques to snare victims, and tips for protecting users from these dirty tricks.


Specific numbers on man-in-the-middle (MitM) attacks are hard to pin down, but according to IBM’s X-Force Threat Intelligence Index 2018, more than one-third of incidents exploiting inadvertent weaknesses involved MitM techniques. Exactly how do these hacks play out? How do criminals get in and steal information – and how are their techniques evolving?

Here’s a closer look at the elements of a MitM attack, how they work, and how organizations can avoid becoming a victim.

What Is a Man-in-the-Middle Attack?
MitM attacks are attempts to “intercept” electronic communications – to snoop on transmissions in an attack on confidentiality or to alter them in an attack on integrity.

“At its core, digital communication isn’t all that much different from passing notes in a classroom – only there are a lot of notes,” explains Brian Vecci, field CTO at Varonis. “Users communicate with servers and other users by passing these notes. A man-in-the-middle attack involves an adversary sitting between the sender and receiver and using the notes and communication to perform a cyberattack.”

The victim, he adds, is “blissfully ignorant of the ‘man in the middle,’ often until it’s far too late and information has already been compromised.”
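Vecci’s note-passing analogy can be reduced to a toy model: a “middle” that forwards messages between two parties while silently recording, and optionally altering, everything that passes through. The classes and message text below are purely illustrative, not taken from any real attack tool:

```python
# Toy model of a man-in-the-middle relay: the sender believes it is
# talking directly to the receiver, but every message passes through
# (and is logged by) the interceptor. Illustrative only.

class Interceptor:
    def __init__(self, receiver):
        self.receiver = receiver
        self.captured = []          # confidentiality attack: snooping

    def send(self, message):
        self.captured.append(message)
        # integrity attack: the middle can silently alter the message
        tampered = message.replace("pay Alice", "pay Mallory")
        return self.receiver.send(tampered)

class Receiver:
    def __init__(self):
        self.inbox = []

    def send(self, message):
        self.inbox.append(message)
        return "ack"

receiver = Receiver()
middle = Interceptor(receiver)      # the victim thinks this IS the receiver
middle.send("please pay Alice $100")

print(middle.captured)   # the attacker saw the original message
print(receiver.inbox)    # the receiver got a tampered copy
```

The sender gets a normal acknowledgement back and has no signal that anything sat in the path, which is exactly the “blissful ignorance” Vecci describes.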

How Do MitM Attacks Work?
Lots of ways, including IP spoofing, DNS spoofing, HTTPS spoofing, SSL and email hijacking, and Wi-Fi eavesdropping. “And thanks to the Internet, the attacker can often be anywhere,” Vecci says.

One common attack involves a hacker setting up a fake public Wi-Fi hotspot for people to connect to, adds Kowsik Guruswamy, CTO with Menlo Security.

“People think they are accessing a legitimate hotspot,” he says, “but, in fact, they are connecting to a device that allows the hacker to log all their keystrokes and steal logins, passwords, and credit card numbers.”

Another popular MitM tactic is a fraudulent browser plugin installed by a user, thinking it will offer shopping discounts and coupons, Guruswamy says.

“The plugin then proceeds to watch over [the] user’s browsing traffic, stealing sensitive information like passwords [and] bank accounts, and surreptitiously sends them out-of-band,” he says.

Michael Covington, VP of product strategy at Wandera, cites two main types of MitM attacks impacting mobile users.

“The first is when the attacker has physical control of network infrastructure, such as a Wi-Fi access point, and is able to snoop on the traffic that flows through it,” he says. “The second is when the attacker tampers with the network protocol that is supposed to offer encryption, essentially exposing data that should have been protected.”

But Isn’t Encryption Supposed to Prevent MitM Attacks?
Yes. However, sophisticated spyware or surveillance “lawful intercept” software, such as Exodus and Pegasus, is occasionally finding ways to compromise the infrastructure of secure mobile messaging platforms like WhatsApp without necessarily cracking the encryption algorithm itself.
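Properly validated TLS is what makes classic interception hard: a client refuses to talk to any middlebox that can’t present a certificate chaining to a trusted CA for the expected hostname. As a minimal sketch, Python’s standard library default SSL context enables both of those checks, and disabling either (as some apps do to “fix” certificate errors) reopens the door:

```python
import ssl

# A default client context verifies the server certificate chain AND
# that the certificate matches the hostname -- the two checks a TLS
# man-in-the-middle must defeat.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must validate
print(ctx.check_hostname)                    # name must match

# Disabling the checks re-enables interception -- never do this in
# production code:
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
```

Note the order: `check_hostname` must be disabled before `verify_mode` can be lowered, a deliberate guard rail in the `ssl` module.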

How Are MitM Attacks Evolving?
With the explosion of Internet of Things (IoT) devices in daily lives, the possibilities for MitM attacks have also ramped up. Many of these technologies were developed without security in mind, and they are being deployed by users faster than security can keep pace. For example, researchers are unearthing dangerous vulnerabilities related to unsecured radio frequency (RF) communications in the embedded systems in industrial and medical devices.

Hackers continue to look for new strategies to catch users off-guard with MitM, Varonis’ Vecci says.

“Varonis’ incident response team is seeing an uptick in adversaries using a very tricky MitM attack to bypass multifactor authentication, breach Office 365 tenants, and pivot to on-prem systems,” he says.

It starts with a phishing email that “lures a victim to a fake Office 365 login page where the attacker can snoop on the credentials used to access data, even breaking through two-factor authentication,” Vecci explains. “Users might have no idea anyone’s watching, but the attacker can use the technique to get access to systems and data both in the cloud and inside the data center if they know what they’re doing.”

The end result? “The MitM can end up hijacking a user’s credentials and then use them to get access to data that’s not even being passed,” Vecci says.

Another example, highlighted last week on Dark Reading, involves an Israeli startup that lost a significant chunk of venture capital funding due to an elaborate, multistep MitM attack. The attack started with email snooping, resulted in a fraudulent wire transfer, and ended with a $1 million theft.

The attack was discovered when the Chinese venture capital firm attempting to transfer the funds to the startup was alerted by its bank to an issue with the transaction. Soon after, the Israeli startup realized it had not received seed funding it expected. Check Point became involved once the two parties realized they’d been duped.

Best Practices for Preventing MitM Attacks
User education remains the No. 1 defense for avoiding MitM attacks, Vecci says.

“Use a VPN, skip public Wi-Fi, and verify the sites you log into are legit by making sure they use secure, HTTPS connections,” he also advises. “Knowing what’s normal for users, devices, and data makes it far more likely that you’ll spot this kind of attack once it happens.”

What activities should raise a red flag?

“Maybe a user is logging in from a new location or device, or at 3 a.m. when most people are asleep. Or maybe they’re suddenly accessing data they’ve never seen, especially if it’s sensitive,” Vecci says. “Unless you’re watching your company’s critical data and can spot suspicious user activity, draw correlations between seemingly normal events, and connect the dots between users, devices, and data, you can easily miss a successful attack like this one.”
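The correlations Vecci describes boil down to simple rules over login telemetry. A hedged sketch (the thresholds, usernames, and field names below are invented for illustration) that flags logins at odd hours or from devices outside a user’s baseline:

```python
from datetime import datetime

# Known-good devices per user -- in practice this baseline would be
# learned from historical logs. All names here are illustrative.
BASELINE_DEVICES = {"alice": {"laptop-123"}, "bob": {"desktop-456"}}

def flag_login(user, device, timestamp):
    """Return a list of reasons this login looks suspicious (empty = OK)."""
    reasons = []
    hour = datetime.fromisoformat(timestamp).hour
    if hour < 5:                       # e.g. the 3 a.m. login Vecci mentions
        reasons.append("off-hours login")
    if device not in BASELINE_DEVICES.get(user, set()):
        reasons.append("unrecognized device")
    return reasons

print(flag_login("alice", "laptop-123", "2019-12-10T14:30:00"))  # []
print(flag_login("alice", "unknown-99", "2019-12-10T03:00:00"))
```

Real products layer many more signals (geolocation, data sensitivity, peer-group behavior) on top, but the shape of the logic is the same.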

Menlo Security’s Guruswamy suggests advising users to heed the following tips:

  • Avoid installing unnecessary software or plugins, especially those that offer something for free. This reduces the likelihood that you install something that can implement a MitM attack.
  • Only download software or plugins from legitimate sites. Make sure you do not download software or plugins from third-party distribution sites since these may actually be distributing malware or altered software.
  • Navigate to sites by typing in the URL instead of clicking on a link, especially sites that require you to enter personally identifiable information.
  • Use a proxy service from a trusted provider. These services allow you to create an encrypted tunnel that would be hard for MitM attacks to compromise.
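One way to harden the “verify the site is legit” advice is certificate pinning: record the SHA-256 fingerprint of a server’s certificate and refuse to connect when it changes. A minimal sketch of the fingerprint computation follows; the certificate bytes are dummy data, and in a real client you would obtain them via `ssl.SSLSocket.getpeercert(binary_form=True)`:

```python
import hashlib

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate, colon-separated."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# The pinned value would be computed once from the known-good
# certificate and shipped with the application. Dummy bytes here.
PINNED = cert_fingerprint(b"dummy certificate bytes")

def connection_allowed(der_bytes):
    return cert_fingerprint(der_bytes) == PINNED

print(connection_allowed(b"dummy certificate bytes"))   # True
print(connection_allowed(b"attacker's certificate"))    # False
```

A MitM proxy that re-signs traffic presents a different certificate, so its fingerprint fails the comparison even if the user’s OS trusts the attacker’s CA.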


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.

Article source: https://www.darkreading.com/edge/theedge/security-101-what-is-a-man-in-the-middle-attack/b/d-id/1336570?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Blink Cameras Found with Multiple Vulnerabilities

Researchers found three broad types of vulnerabilities, one of which should be particularly concerning to consumers.

Amazon’s popular Blink home security cameras come packed with more than most consumers bargain for, including a variety of attack vectors that could allow criminals to hijack cameras and Blink accounts.

Researchers at Tenable found three separate vectors of attack — one of limited practicality, one of interest primarily to researchers, and one that actually poses a risk to consumers. The first involves physical access to the device, in which case the Blink camera’s design makes it very easy to connect to the device, provide hard-coded credentials, and control the device.

The second vulnerability would allow attackers to launch a man-in-the-middle attack based on the camera’s request for software updates or network information. The third, and most serious, involves network parameters passed to the camera that are not properly “sanitized” before being executed.
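The third Blink flaw is a classic command-injection pattern: network parameters interpolated into a shell command and executed without sanitization. The general fix is to pass arguments as a list so no shell ever interprets them. A generic sketch (this is not the Blink firmware’s actual code, and `echo` stands in for a real network utility):

```python
import subprocess

ssid = "HomeWifi; touch /tmp/pwned"     # attacker-controlled parameter

# UNSAFE: splicing the parameter into a shell string lets the text
# after ';' run as a second command:
# subprocess.run(f"iwconfig wlan0 essid {ssid}", shell=True)

# SAFE: list form passes ssid as one literal argument; the shell never
# parses it, so the embedded command is inert.
result = subprocess.run(["echo", ssid], capture_output=True, text=True)
print(result.stdout.strip())   # prints the raw string, executes nothing
```

The same principle applies in C firmware: build an argv array for `execv` rather than handing concatenated strings to `system()`.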

Tenable recommends that all Blink camera users allow automatic updates so the devices are kept up to date on software patches. The researchers say that they will provide more details on how to find and recognize already compromised cameras in the near future.

For more, read here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Security 101: What Is a Man-in-the-Middle Attack?”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/blink-cameras-found-with-multiple-vulnerabilities/d/d-id/1336571?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

TikTok settles class action over child privacy one day after it’s filed

The day after US parents filed a class action lawsuit against TikTok’s parent company – ByteDance – for alleged child privacy violations, ByteDance negotiated a settlement.

The company on Wednesday reached a $1.1 million settlement with the parents: what it called an “excellent result” in the settlement document.

ByteDance disputed many of the complaint’s claims, but nonetheless confirmed that it had reached a resolution that it said would avoid “the risks of protracted litigation.”

The parents of two users who joined the music video-sharing social media network (and who were also listed as plaintiffs on the class action) had alleged that their daughters had joined when they were under the age of 13. But neither parent – Sherri LeShore nor Laura Lopez – were asked for their verifiable consent, they claimed: a violation of the Children’s Online Privacy Protection Act (COPPA), which is the nation’s strictest child privacy law.

Regarding the $1.1 million settlement, TikTok sent out a statement saying that it’s “firmly committed to safeguarding the data of its users, especially our younger users.”

Although we disagree with much of what is alleged in the complaint, we have been working with the parties involved and are pleased to have come to a resolution of the issues.

You can see why ByteDance would consider $1.1 million an “excellent” resolution. It’s a good deal cheaper than the record-setting fine of $5.7 million for violating COPPA that the Federal Trade Commission (FTC) hit TikTok with in February 2019.

That record-setting fine was followed soon after by the UK launching an investigation to see if the same issues constitute a violation of the General Data Protection Regulation (GDPR).

But the parents’ class action lawsuit is only one flavor of TikTok’s legal morass.

TikTok’s Beijing-based parent company, ByteDance, plans to put its US division at arm’s length, separating the company to hopefully mollify US politicians who think it could be a national security risk.

When it comes to potential threats to national security from using Chinese apps such as TikTok or Russian apps such as FaceApp (which the FBI last week called a “potential counterintelligence threat”), the top US Marine says that the military has to do a better job at educating troops.

On Saturday, General David Berger, the US Marine Corps commandant, said that military leadership has to address the issue through training.

Business Insider quoted Gen. Berger’s remarks, given at the Reagan National Defense Forum in Simi Valley, California:

I’d give us a ‘C-minus’ or a ‘D’ in educating the force on the threat of even technology.

That’s not their fault. That’s on us. Once they begin to understand the risks, what the impact to them is tactically … then it becomes clear. I don’t blame them for that. This is a training and education that we have to do.

In October, TikTok promised US senators that it isn’t under Beijing’s thumb, that it’s never been asked by the Chinese government to remove content, that it “would not do so if asked. Period,” and that its data is stored on servers in the US.

Having said that, TikTok admitted last week that yes, it did censor some videos: for good, anti-bullying reasons.

It reportedly used to employ a far heavier hand when it came to censorship, though: Leaked documents from a few months ago showed that TikTok previously instructed moderators to follow a series of guidelines that led them to hide videos that flouted Beijing’s doctrine.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/AJarV9pQvjA/

Facebook users were duped by Cambridge Analytica, FTC rules

Oh, what a tangled web Cambridge Analytica weaved: the US Federal Trade Commission (FTC) on Friday ruled that the infamous and now bankrupt data analytics and consulting company practiced to deceive Facebook users in order to suck up their data

…all the better to tickle your inner demons, my dears.

Cambridge Analytica is, or was, a voter-profiling company that was used during both the Trump and Brexit campaigns. In March 2018, whistleblowers – former employees and contractors, including Christopher Wylie, who worked with Cambridge University professor Aleksandr Kogan to obtain the data – said that they had used Facebook to harvest millions of people’s profiles and built models to exploit what they found out about those users in order to “target their inner demons.”

Wylie:

That was the basis the entire company was built on.

In its opinion, issued on 25 November, the FTC also found that Cambridge Analytica engaged in deceptive practices relating to its participation in the EU-US Privacy Shield: a pact that allows US technology companies to legally transfer EU citizens’ personal information across the Atlantic in compliance with EU data protection requirements.

The FTC’s complaint alleged that Cambridge Analytica let its Privacy Shield certification lapse, then didn’t bother to tell the US Department of Commerce that it would continue to apply the data pact’s protections for the personally identifiable information (PII) that it collected while it was participating.

The FTC had sued Cambridge Analytica in July 2019, alleging that it, and its then-CEO Alexander Nix and app developer Aleksandr Kogan, deceived consumers, lying to them about not collecting any PII from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.

Kogan developed a Facebook application called the GSRApp, better known as the “thisisyourdigitallife” app. It asked users to answer personality and other questions, and it collected information such as their – and their friends’ – likes of public Facebook pages.

Nix and Kogan settled. By-then-dead Cambridge Analytica didn’t respond to the complaint or to a motion submitted for summary judgment of the allegations.

Delete the data and don’t do it again

In its Final Order, the FTC prohibits Cambridge Analytica from making misrepresentations about the extent to which it protects the privacy and confidentiality of personal information, as well as its participation in the Privacy Shield pact and other similar regulatory or standard-setting organizations.

It’s also required to continue to apply Privacy Shield protections to personal information it collected while participating in the program (or to provide other protections authorized by law), or return or delete the information, and has to delete the PII that it collected through the GSRApp/thisisyourdigitallife.

But who’s left at Cambridge Analytica to carry out those data-deleting orders? The firm is currently filing for bankruptcy: a process it embarked on soon after the data debacle was first uncovered.

At the time, newspapers classified it as “one of the largest data leaks in the social network’s history” – one that allowed the data analytics firm to “exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.”

That was no breach; it was business as usual

Facebook at the time called that classification complete rot: the notion that there was a data breach was “completely false,” it said, and promptly blamed the victims for “[choosing] to sign up to [Kogan’s] app,” with “everyone involved [having given] their consent.”

People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

Well, Facebook was spot-on when it claimed that the data wasn’t filched in a “breach” given that, according to whistleblowers, a fake news inquiry in the UK and private staff emails, it basically amounted to Facebook having turned a blind eye to Cambridge Analytica and other developers scraping away its users’ data.

Facebook was wrong in blaming the victims, however, the FTC said – as in, $5b worth of wrong. In July 2019, the FTC wrist-slapped Facebook $5b over its alleged, repeated use of “deceptive disclosures and settings to undermine users’ privacy preferences in violation of its 2012 FTC order.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1_W3PpVtWZk/

EU releases its 5G conclusions

On 3 December, the Council of the European Union sent a memo to all delegations summarizing its thoughts on the “need to mitigate security risks linked to 5G”.

Yes, we recognize that security is a risk with 5G networks and we should be careful about it, the memo states, but we’re not going to talk about our attitude to specific suppliers – another adept sidestepping of US calls to ban Chinese company Huawei from EU networks.

The Council helps to approve the European Commission’s proposals. In its document, it concluded that:

5G networks will form a part of crucial infrastructure for the operation and maintenance of vital societal and economic functions.

But it also acknowledged how important it is to keep these networks secure. According to the document:

The increased security concerns related to the integrity and availability of 5G networks, in addition to confidentiality and privacy, make it necessary for the EU and the Member States to pay particular attention to promoting the cybersecurity of these networks and all services depending on electronic communications.

The potential benefits to stakeholders are clearly stated in the memo, and although a “swift rollout” is being encouraged…

EMPHASISES the need to ensure the swift demand based roll-out of the 5G networks and that 5G is a key asset for European competitiveness, sustainability and a major enabler for future digital services as well as a priority for the European Single Market.

… caution and cooperation between Member States are being emphasised to ensure a safe, swift and successful rollout:

[The Council] RECOGNISES the need to put in place robust common security standards and measures, acknowledging international standardization efforts on 5G, for all relevant manufacturers, electronic communications operators and service providers and that key components, such as components critical for national security, will only be sourced from trustworthy parties.

The memo is effectively a rubber stamp for the Commission Recommendation on Cyber Security of 5G Networks, published in March 2019 after member states called for a unified approach to securing 5G. The recommendations included several measures, including a risk assessment of 5G infrastructure by each member state before the end of June.

The European Council explained that the Transport, Telecommunications and Energy Council have now adopted its conclusions on 5G and cybersecurity as a matter of record.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xeeZjuXDL08/

Snatch ransomware pwns security using sneaky ‘safe mode’ reboot

Sophos’s Managed Threat Response (MTR) team has warned the industry of a dangerous new ransomware trick – encrypting data only after rebooting Windows PCs into ‘safe mode’.

Deployed recently by the Russian-developed ‘Snatch’ ransomware – named after the 2000 movie of the same name – it’s effective against much endpoint security software, which often doesn’t load when safe mode is in operation.

That’s despite the fact that in real-world attacks analysed by MTR, Snatch starts out like many other ransomware campaigns currently targeting business networks.

The attackers look for weakly secured Remote Desktop (RDP) ports to force their way into Azure servers, a foothold they use to move laterally to Windows domain controllers, often spending weeks conducting reconnaissance.

In one network attack, the attackers installed the ransomware on around 200 machines via command and control (C2) after utilising a grab-bag of legitimate tools (Process Hacker, IObit Uninstaller, PowerTool, PsExec, Advanced Port Scanner) plus some of their own.

The same software profile was detected in other attacks in the US, Canada and several European countries, which also exploited exposed RDP.

One trick, but a good one

But Snatch still has the same problem as any other ransomware – how to beat local software protection.

Its approach is to install a Windows service called SuperBackupMan that can’t be stopped or paused, and to add a registry key that ensures the target will boot into safe mode after its next reboot.

Only after this has happened, and the machine has entered safe mode, does it execute a routine that deletes Windows volume shadow copies, after which it encrypts all documents it detects on the target.

Using safe mode to bypass security has its pros and cons. The upside is that in many cases, it works – security software not expecting this technique is easily bypassed.

The tricky bit is that it must still execute its bogus Windows service, which relies on breaking into domain controllers to distribute it to targets from inside the network.

Rebooting in safe mode also won’t get past the Windows login, which in theory gives an alerted user a fighting chance to stop the encryption.

However, this hasn’t stopped it achieving plenty of success. A company involved in negotiating ransomware settlements, Coveware, told Sophos it had acted for companies in 12 incidents between July and October, which involved paying bitcoin ransoms between $2,000 and $35,000.

Attacks also often involve manual oversight by the criminals, as an MTR researcher discovered when his IP address was blacklisted in real time to prevent his analysis of Snatch’s C2 behaviour.

What to do

For Sophos customers, the protection is already part of the latest endpoint protection versions, although it’s important to enable the CryptoGuard feature within Intercept X.

Sophos security detects Snatch’s different components under the following signatures:

Troj/Snatch-H
Mal/Generic-R
Troj/Agent-BCYI
Troj/Agent-BCYN
HPmal/GoRnSm-A
HPmal/RansMaz-A
PUA Detected: ‘PsExec’

Unusually, Snatch’s encryption uses OpenPGP, complete with hardcoded public keys which SophosLabs has published on its GitHub page for defenders to use as Indicators of Compromise (IoCs).
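Hardcoded-key IoCs like these can be hunted with a simple sweep: look for PGP public key blocks embedded in suspicious binaries or scripts and compare them against the published indicators. A sketch follows; the “known bad” key body is a placeholder, not one of the actual SophosLabs IoCs:

```python
# Sketch of an IoC sweep for embedded PGP public keys. The KNOWN_BAD
# set would be populated from the published indicators; the value here
# is a placeholder.
KNOWN_BAD_KEY_BODIES = {"mQENBFplaceholderkeymaterial"}

BEGIN = "-----BEGIN PGP PUBLIC KEY BLOCK-----"
END = "-----END PGP PUBLIC KEY BLOCK-----"

def extract_key_bodies(text):
    """Return the body of every PGP public key block found in text."""
    bodies = []
    start = 0
    while (i := text.find(BEGIN, start)) != -1:
        j = text.find(END, i)
        if j == -1:
            break
        bodies.append(text[i + len(BEGIN):j].strip())
        start = j + len(END)
    return bodies

def matches_ioc(text):
    return any(b in KNOWN_BAD_KEY_BODIES for b in extract_key_bodies(text))

sample = f"{BEGIN}\nmQENBFplaceholderkeymaterial\n{END}"
print(matches_ioc(sample))          # True: embedded key matches an IoC
print(matches_ioc("clean file"))    # False
```

A production scanner would walk the filesystem and hash or normalize the key material, but string-level matching is often enough for a first triage pass.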

Defending against Snatch

  • RDP should either be turned off or secured using a VPN with authentication.
  • VNC and TeamViewer are other possible entry points, and there is evidence the attackers might soon start using web shells or SQL injection.
  • All admin accounts should be protected with multi-factor authentication and good passwords.
  • Unprotected devices are a big target the attackers use to gain a foothold. The defence against this is to carry out regular audits, including to detect shadow IT.
  • Ransomware attacks require having a ‘plan b’ response in place, including reinstatement from backups and forensics/mitigation of the weaknesses that allowed an attack to happen.
  • Endpoint protection tools are not alike – will yours notice Snatch or cope with its safe mode attack? This technique is likely to become more common during 2020.
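The first bullet, auditing for exposed RDP, can start with a plain TCP reachability check against your own perimeter. A sketch (a real audit would sweep your external address space; the demonstration below probes a local listener instead):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Probing your own perimeter for RDP (TCP 3389) left reachable:
# if port_open("203.0.113.10", 3389):
#     print("RDP exposed -- put it behind a VPN or turn it off")

# Demonstration against a local listener:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))  # True: something is listening
listener.close()
```

Any host where this succeeds on 3389 from the internet is exactly the foothold the Snatch operators are scanning for.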

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Yn1KNlHf434/

SIEMs like a stretch: Elastic searches for cash from IT pros with security budgets

Black Hat Europe Elastic, the biz behind open-source search engine stack Elasticsearch, has launched its own SIEM – a somewhat counterintuitive thing to do, you’d think, until you look at how many others are using Elasticsearch for lucrative security products.

For those not in the know, SIEM is short for Security Information and Event Management: a fancy term for keeping tabs on all sorts of alerts and warnings of suspicious network activity, drawing data from various sources and presenting it in a manageable form.
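At its simplest, a SIEM rule is a correlation over a stream of events. A toy sketch of the kind of logic these products encode — flag any user with five or more failed logins inside a ten-minute window (thresholds and field layout invented for illustration):

```python
from collections import defaultdict

# Toy SIEM correlation: flag users with >= THRESHOLD failed logins
# within WINDOW seconds. Event fields are illustrative.
THRESHOLD = 5
WINDOW = 600  # seconds

def correlate_failed_logins(events):
    """events: iterable of (timestamp_seconds, user, outcome) tuples."""
    alerts = set()
    recent = defaultdict(list)
    for ts, user, outcome in sorted(events):
        if outcome != "failure":
            continue
        # keep only failures still inside the sliding window
        recent[user] = [t for t in recent[user] if ts - t < WINDOW]
        recent[user].append(ts)
        if len(recent[user]) >= THRESHOLD:
            alerts.add(user)
    return alerts

events = [(i * 60, "mallory", "failure") for i in range(5)]
events += [(0, "alice", "failure"), (30, "alice", "success")]
print(correlate_failed_logins(events))  # {'mallory'}
```

Commercial SIEMs run thousands of such rules concurrently over normalized data from many sources, which is where the “manageable form” part of the definition comes in.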

Building on its recent declaration that its ECK tool is the official search function for Elasticsearch on Kubernetes, Elastic wants to recapture market ground from others profiting from its open-source tool.

They’re a bit coy about it, though. The global biz’s James Spiteri told The Register at Black Hat Europe that this was all about offering customers a better choice of integrated tools, with eating a slice of the pies being baked by others on its Elasticsearch tool as a very distant second priority. Of course.

“How many tools were built on Elasticsearch was really the main driver,” said Spiteri. Talking of various security logging, log-storing and log-trawling tools available on the market today, he added: “No longer do you have to have separate vendors for those technologies… they work fantastically together immediately as of today [on Elasticsearch’s SIEM] which is nice. Apart from that, our open-source nature gives us a lot of trust in the community.”

For comparison, Gartner lists no fewer than 44 SIEMs. This is a market that is, as an industry veteran commented to El Reg earlier this year, ripe for consolidation. Yet Elastic is pushing on.

Elastic bought out endpoint security vendor Endgame earlier this year, whose tech was (in part) built on Elasticsearch. The company is now integrating Endgame’s tech with its SIEM, which has been dangled before prospective customers since June this year.

“If you think about it,” Spiteri mused, “the security industry is a search problem… it’s about being able to have an urgent conversation about your data. We never expected to go into [the field of security products] but we just did. It was evident; if you see the amount of open-source downloads built on top of Elasticsearch, it spoke for itself.”

Around 200 people make up Elastic’s security division, most of whom were acquired along with Endgame. Spiteri said the company aims to bulk out its malware detection capabilities and start looking at fresh data sources and threat intelligence providers, though he admitted they haven’t picked any just yet.

“Over next few months we’re going to cover everything that’s missing. Threat intelligence, correlation stuff, we’re looking to rapidly add those features. We just want to make sure it’s done properly.”

Elastic does have a modest security pedigree to point at. Cisco and Palo Alto Networks have both adopted its Elastic Common Schema (“a standardised set of field names,” as Spiteri explained) for ingesting security data into Elasticsearch for later crunching. Having two industry big dogs using both the underlying tech and Elastic’s preferred method of using it certainly won’t hurt.
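The point of a common schema is that every source’s logs carry the same field names before they are indexed. A toy normalizer in the spirit of ECS — the vendor formats and mapping below are invented for illustration, though ECS itself does define shared fields such as `source.ip` and `event.action`:

```python
# Toy log normalizer in the spirit of a common schema: rename each
# vendor's field names into one shared vocabulary before indexing.
# The vendor formats below are invented for illustration.
FIELD_MAPS = {
    "vendor_a": {"src": "source.ip", "dst": "destination.ip", "act": "event.action"},
    "vendor_b": {"client_ip": "source.ip", "server_ip": "destination.ip", "verb": "event.action"},
}

def normalize(vendor, record):
    mapping = FIELD_MAPS[vendor]
    return {mapping.get(k, k): v for k, v in record.items()}

a = normalize("vendor_a", {"src": "10.0.0.1", "dst": "10.0.0.2", "act": "deny"})
b = normalize("vendor_b", {"client_ip": "10.0.0.1", "verb": "deny"})
print(a["source.ip"], b["source.ip"])   # same key regardless of vendor
```

Once both vendors’ events share one vocabulary, a single query or correlation rule covers them all — which is why schema adoption by Cisco and Palo Alto Networks matters to Elastic.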

Whether Elastic will continue picking fights with AWS’s definitely-not-a-fork of Elasticsearch remains to be seen. Indeed, whether the firm will prosper in the busy security market also remains to be seen. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/10/elastic_siem_elasticsearch/

Deliver a Deadly Counterpunch to Ransomware Attacks: 4 Steps

You can’t prevent all ransomware attacks. However, it’s possible to ensure that if a breach happens, it doesn’t spread, affect business, and become a newsworthy event.

Wayman Cummings and Salva Sinno also contributed to this column.

Nearly 1.5 million new phishing sites are created each month. And more than 850 million ransomware infections were detected in 2018 alone. These statistics illustrate the threat that ransomware poses for every IT professional and every kind of organization.

Ransomware is a specific type of malware designed to encrypt a computer’s content until the user pays to get the decryption or recovery key. This halts productivity, affecting business revenue. However, security pros can take decisive action to minimize the impact of ransomware.

The first line of defense is always a good offense. To prevent an attacker from establishing a foothold in an organization’s network, organizations should put the following in place:

  • Best practices such as strong patching policies, regular system backups, multifactor authentication, application whitelisting, and restrictions of local administrator rights and privileges
  • Awareness programs to educate users about phishing and other forms of social engineering
  • Security tools that provide spam filtering, link filtering, domain name system blocking/filtering, virus detection, and intrusion detection and prevention
  • A zero-trust framework to identify, authenticate, and monitor every connection, login, and use of resources
  • Least privilege policies to restrict users’ permissions to install and run software applications
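Of these, application whitelisting (allowlisting) lends itself to a concrete illustration: only binaries whose hashes appear on an approved list may execute. A sketch using SHA-256 — the approved set is computed from dummy byte strings, not real binaries:

```python
import hashlib

# Application allowlisting sketch: execution is permitted only when the
# binary's SHA-256 appears in the approved set. Hashes here come from
# dummy byte strings for illustration.
APPROVED = {hashlib.sha256(b"known-good-binary-v1").hexdigest()}

def may_execute(binary_bytes):
    return hashlib.sha256(binary_bytes).hexdigest() in APPROVED

print(may_execute(b"known-good-binary-v1"))   # True
print(may_execute(b"dropper.exe contents"))   # False: not on the list
```

Because a ransomware dropper’s hash is never on the list, it is blocked even if it is brand new and signature-based antivirus has never seen it.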

Minimizing ransomware’s impact is about more than just defending systems against attack. It also involves taking action to minimize the impact of breaches as they happen. This is critical, since all systems can be breached by attackers who have sufficient time and resources.

That means putting in place solid incident response (IR) programs. Planning ahead builds confidence in that IR capability. To that end, enterprises should review their IR policies and engage in tabletop exercises. And they should use operational benchmarking to improve their ability to respond before an incident occurs.

Hackers continue to evolve and become more sophisticated with their attacks. So, it is likely that a ransomware attack will breach every enterprise’s environment at some point. When that occurs, these four steps will minimize the impact and recover enterprise data:

Step 1: Isolate
Before doing anything else, ensure that the infected devices are removed from the network. If they have a physical network connection, unplug them from that connection. If they are on a wireless network, turn off the wireless hub/router. Also unplug any directly attached storage to try to save the data on those devices. The goal is to prevent the infection from spreading.

Step 2: Identify
This step is often overlooked. By spending just a few minutes figuring out what has happened, enterprises can learn important information such as what variant of ransomware infected them, what files that strain of ransomware normally encrypts, and the options for decryption. Enterprises also may learn how to defeat the ransomware without paying or restoring system(s) from scratch.
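A crude version of this identification step is matching the artifacts the ransomware leaves behind — encrypted-file extensions, ransom-note filenames — against a catalogue of known families. The catalogue below is a tiny invented sample; real responders use far larger databases or services such as ID Ransomware:

```python
# Tiny, illustrative catalogue of ransomware artifacts -> family label.
# Real identification databases are far larger and more nuanced.
KNOWN_FAMILIES = {
    ".snatch": "Snatch",
    ".wcry": "WannaCry",
    "HOW_TO_DECRYPT.txt": "generic ransom note",
}

def identify(filenames):
    """Return the set of families suggested by the observed filenames."""
    hits = set()
    for name in filenames:
        for artifact, family in KNOWN_FAMILIES.items():
            if name.endswith(artifact):
                hits.add(family)
    return hits

observed = ["report.docx.snatch", "HOW_TO_DECRYPT.txt", "notes.txt"]
print(identify(observed))
```

Knowing the family tells responders which files are typically targeted and whether a free decryptor exists before they consider any other recovery option.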

Step 3: Report
This is another step that many security professionals ignore, whether due to embarrassment or time constraints. However, by reporting the ransomware attack, enterprises may help other organizations avoid similar situations. Furthermore, they provide law enforcement agencies with a better understanding of the attacker. There are many ways to report a ransomware attack. One is by contacting a local FBI office in the US or registering a complaint with the FBI’s Internet Crime Complaint Center website. The Federal Trade Commission’s OnGuardOnline website and Scamwatch, an Australian Competition and Consumer Commission effort, also collect such data.

Step 4: Recover
In general, there are three options to recover from a ransomware attack: 

  • Pay the ransom: This is not recommended because there are no guarantees the organization will get its data back after paying. Instead, the attacker might simply demand even more money before decrypting the data.
  • Remove the ransomware: Depending on the type of ransomware involved, an enterprise might be able to remove it without requiring a full rebuild. This process, however, can be very time consuming and is therefore not a preferred option.
  • Wipe and rebuild: The easiest and safest method of recovery is to wipe the infected systems and rebuild them from a known good backup. Once rebuilt, organizations need to ensure that no traces remain of the ransomware that led to the encryption. Once an organization rebuilds its environment, the real work begins. That organization must then do a full environmental review to determine exactly how the infection began and what steps it must take to reduce the potential of another breach.

It’s simply not possible to keep all ransomware attacks at bay. However, it is possible to ensure that if a breach occurs, it does not spread, disrupt the business, or become a newsworthy event.

By fending off the majority of attacks and dealing swiftly with the bad actors that get in the door — with the help of dynamic isolation, microsegmentation, and other modern cybersecurity technologies — organizations will keep their businesses on track and on target.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Criminals Hide Fraud Behind the Green Lock Icon.”

Mathew Newfield, the Corporate Chief Information Security Officer at Unisys, leads the company’s Corporate Information Security team with responsibility for design, development, and implementation of corporate information security and risk programs across all regions and …

Article source: https://www.darkreading.com/vulnerabilities---threats/deliver-a-deadly-counterpunch-to-ransomware-attacks-4-steps/a/d-id/1336524?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Serious Security: Understanding how computers count

We recently wrote up a fascinatingly scary warning about server hard drives that might abruptly and utterly fail.

HPE warned its customers that a wide variety of its solid state disks (SSDs) needed an urgent firmware update to prevent them sailing over the edge of the earth into oblivion.

The disks weren’t badly manufactured; they weren’t going to fail for reasons of physics, electronics or electromagnetism; and the disk circuitry would remain intact afterwards.

In fact, as far as we can tell, the data actually stored in the chips on the failed devices would still be there…

…you just wouldn’t be able to access it.

The failure was, in fact, down to a bug in the firmware of the drive. (Firmware is a fancy name for software that’s permanently stored in hardware so that the hardware can boot up by itself.)

Simply put, after approximately 32,000 hours of operation – a little under four years – the device firmware would crash, and refuse to initialise again.

Not only would the crashed firmware prevent the drive starting up, it seems that the bug would also prevent the firmware from accepting an update to fix the bug.

Catch-22.

But why a sudden failure at ‘about 32,000 hours’?

To answer that question properly, and to avoid this sort of bug in future, we need to investigate how computers count.

Let’s start by looking at how we ourselves count, or, more precisely, how we account, for time.

In the West, we generally keep track of years using the Christian era, which is currently at AD 2019.

To represent the year exactly, therefore, you need at least four digits.

It’s fine to use more than four digits – after all, 02019 and 002019 both work out to be 2019 because those left-most zeros don’t contribute to the magnitude of the number.

But you have to be careful if you have fewer than four digits at your disposal, because you can no longer be precise about the year – all you can do is come up with some sort of approximation.

As it happens, we do that sort of thing rather often, using two digits to denote a specific year, and sometimes we get away with it.

When we talk about the ‘Swinging 60s’, for example, we’re probably safe to assume that the listener knows we mean the 1960s, rather than the 960s or the 1860s, or even the 2060s, which are still 40 years away.

But sometimes we risk ambiguity when we aren’t precise about the year.

If you were to mention ‘The 20s’, it might not be clear at all whether you were referring to the tumultuous decade right after the First World War, or to the who-knows-how-tumultuous-it-will-be decade that will start in just a few weeks’ time.

The ‘millennium bug’, often ironically squeezed into the acronym Y2K, short for ‘Year 2000 bug’, was an example of this sort of problem.

The Y2K problem arose because of the once-widespread programming practice of using just two characters for the year, instead of four, in order to save storage space.

For programs using two-digit years, 99 unambiguously meant AD 1999, but 99+1 didn’t unambiguously advance you to AD 2000.

That’s because 99+1 = 100, which has three digits, and therefore can’t be squeezed back into two characters.

The number overflows its storage space, so the extra ‘carry one’ digit at the left hand end gets lost, along with 100 years.

In a two-digit-year system, 99+1 actually turns back into 00, so that adding one year to AD 1999 loops you back to AD 1900, shooting you 99 years into the past instead of one year into the future.

Some software ‘solved’ this by taking, say, 50-99 to mean AD 1950 to AD 1999, and 00-49 to mean AD 2000 to 2049, but that didn’t fix the overflow problem – it merely shifted the problem year along the number line from 1999 to 2049. In fact, even with four-digit years we’ll face a similar problem again in AD 9999, because 9999+1 needs five digits, but we shall disregard the Y10K problem for now.

A latter-day Y2K bug

As far as we can tell, a very similar sort of numeric overflow – a latter-day Y2K bug, if you like – was what caused the HPE storage device flaw.

We’re making this assumption because the HPE security notification said that the failure happened when the total count of power-on hours for the device reached exactly 32,768.

Programmers will recognise that number at once, because it’s a power of two – in fact, it’s fifteen 2s multiplied together, for a total of 2^15.

In regular life, however, we usually count in tens, using what’s called base 10.

It’s called base 10 because each digit can have ten different values, needing 10 different symbols, written (in the West, at least) as 0,1,2,3,4,5,6,7,8,9.

As a result, each digit position carries 10 times more numeric weight as we move from right to left in the number.

Thus the number 2019 decomposes as 2×1000 + 0×100 + 1×10 + 9×1.

Curiously, the notation we know as Arabic numerals in the West came to us from India via the Arabic world, where they were known as Indian numerals. They’re written left-to-right because that’s how the Indian mathematicians did it, and that’s the practice the Arabian mathematicians adopted. Indeed, written Arabic uses left-to-right numbers to this day, even though Arabic text runs from right to left.

But computers almost always use base 2, also known as binary, because a counting system with just two states for each ‘digit’ is much easier to implement in electronic circuitry than a system with 10 different states.

These two-state values, denoted with the symbols 0 and 1, are known simply as ‘bits’, short for ‘binary digits’.

Additionally, almost all modern computers are designed to perform their basic operations, including arithmetical calculations, on 8 bits at a time (a memory quantity usually referred to as a byte or an octet), or on some doubling of 8 bits.

Modern Intel processors, for example, can perform calculations on 8, 16, 32 or 64 bits at a time.

So, in programming languages such as C, you can choose between various different sizes, or precisions, of integer values when you want to work with numbers.

This allows you to trade off precision (how many different values each number can represent) with memory usage (the number of bytes needed to store each number, even if it’s zero).

A 16-bit number, which uses up two bytes, can range from 0000 0000 0000 0000 in binary form (or the more compact equivalent of 0x0000 in hexadecimal notation), all the way to 1111 1111 1111 1111 (or 0xFFFF – see table above).

The 16-bit number 0xFFFF is a bit like the number 99 in the millennium bug – when all your digit positions are already maxed out, adding one to the number causes a ‘carry one’ to happen at every stage of the addition, including at the left-hand end, producing a result of 0x10000 (65,536 in decimal).

The number 0x10000 takes up 17 bits, not 16, so the left-hand ‘1’ is lost because there’s nowhere to store it.

In other words, just like the decimal sum 99+1 wraps back to zero in the Y2K bug, the binary sum 0xFFFF+1 wraps back to zero in 16-bit arithmetic, so your ‘millennium bug moment’ comes immediately after the number 65,535.

Half of 65,536 = 32,768

At this point, you’ve probably spotted that HPE’s danger value of 32,768 (2^15) is exactly half of 65,536 (2^16).

Computers don’t usually do 15-bit arithmetic, but most processors let you choose between working with unsigned and signed 16-bit values.

In the former case, the range you can represent with 65,536 different values is used to denote 0 to 65,535 (0 to 2^16-1); in the latter, you get -32,768 to 32,767 (-2^15 to 2^15-1).

So, if you’re using signed 16-bit numbers, rather than unsigned numbers, then your ‘millennium bug’ overflow point comes when you add 1 to 32,767 rather than at 65,535.

When you try to calculate 32,767+1, you wrap around from the end of the range back to the lowest number in the range, which for signed numbers is -32,768.

Not only do you jump backwards by a total distance of 65,535 instead of going forwards by 1, you end up flipping the sign of the number from positive to negative as well.

It’s therefore a good guess that HPE’s firmware bug was caused by a programmer using a signed 16-bit memory storage size (in C jargon, signed short int) – perhaps trying to be parsimonious with memory – under the misguided assumption that 16 bits would ‘be big enough’.

Getting away with it

If the code had counted the power-on time of the device in days, rather than hours, the programmer would almost certainly have got away with it, because 32,767 days is nearly 90 years.

Today’s SSDs will almost certainly have failed by then of their own accord, or be long retired on account of being too small, so the firmware would never be active long enough to trigger the bug.

On the other hand, if the code were counting in minutes or seconds, then the programmer would almost certainly have spotted the fault during testing (32,767 minutes is a little over three weeks, and 32,767 seconds is only 9 hours).

Sadly, counting in hours using a signed 16-bit number – assuming no one questioned your choice during a code review – might easily get you through initial testing and the prototype stage, and out into production…

…leaving your customers to encounter the overflow problem in real life instead.

We’re guessing that’s how the bug happened.

Exactly when each customer would run out of hours would depend on how long each device sat on the shelf unused before it was purchased, and how much of the time each server was actually powered on.

That would make the cause of at least the first few failures hard to pin down.

Why a catastrophic failure?

What still isn’t obvious is how a simple numeric overflow like this could cause such catastrophic failure.

After all, even if the programmer had used a 32-bit signed integer, which would last for 2^31-1 hours (an astonishing quarter of a million years), they wouldn’t – or shouldn’t – have written their code to fail abruptly, permanently and without notice if the hour count went haywire.

Sure, it would be confusing to see a storage device suddenly having a Back to the Future moment and reporting that it’s been turned on for a negative number of hours…

…but it’s not obvious why a counting error of that sort would not only stop the drive working, but also prevent it being reflashed with new firmware to fix the counting error.

Surely software just doesn’t fail like that?

Unfortunately, it can, and it does – when your counting goes awry inside your software, even a single out-of-range number can have a devastating outcome if you don’t take the trouble to detect and deal with weird and unexpected values.

We can’t give a definitive answer about what went wrong in the HPE case without seeing the offending source code, but we can usefully speculate.

If a negative number of power-on hours is never supposed to happen – and it shouldn’t, because time can’t run backwards – then the code may never have been tested to see how it behaves when an ‘impossible’ value comes along.

Or the code might use an ‘impossible’ value, such as a negative timestamp, as some sort of error flag that causes the program to terminate when something bad happens.

If so, then reaching 32,768 hours of trouble-free usage would accidentally trip the error path even though nothing had actually gone wrong – and the ‘error’ would persist forever, because the overflowed counter would stay negative.

Or the code might have used the count of hours as an index into a memory array to decide what sort of status message to use.

If the programmer mistakenly treated the 16-bit number as if it were unsigned, then they could divide the number of hours by, say, 4096 and assume that the only possible answers, rounded down to the nearest integer, would be 0,1,2,3…15. (65,535/4096 = 15 remainder 4095.)

That number might be used as an index into a table to retrieve one of 16 different messages, and because it’s impossible to get a number outside the range 0..15 when dividing a 16-bit unsigned integer by 4096, the programmer wouldn’t need to check whether the result was out of range.

But if the number were in fact treated as a signed integer, then as soon as the count overflowed from 32,767 to -32,768, the value of the count divided by 4096 would swing from +7 to -8. (32,767/4096 = 7 remainder 4095, while -32,768/4096 = -8 exactly.)

What the programmer assumed would be a safe memory index would suddenly read from the wrong part of memory – what’s known as a buffer overflow (or, in this case, a buffer underflow).

What to do?

The bottom line here is that even trivial-sounding faults in arithmetical calculations can have ruinous consequences on software, causing it to crash or hang unrecoverably.

  • Never assume that you will always have enough precision. Always check for overflows, even if you’re convinced they are impossible. Your software may last a lot longer than you think, or be fed with erroneous data due to a bug somewhere else.
  • When creating device firmware, build in a fail-safe that allows a secure reset, even if the current firmware turns out to have a bug that stops it booting at all.
  • Test even the most unlikely situations in your code. Writing test suites is hard, because you often need to provoke situations that would otherwise never happen in the time available for testing, such as asking “What if the hard disk runs for 10 years? 100 years? 10,000 years?”
  • Never approve code until it has been reviewed by someone else who is at least as experienced as the programmer who wrote it in the first place. You are unlikely to pick up your own errors once you’re convinced your code is correct – it’s hard to be truly objective about something you’re already close to.
  • Consider a continuous development process where you retest your code after (almost) every change. Don’t allow code changes to accumulate and then try to test everything in one big pre-release test.

Oh, and if you have any HPE storage devices, check if they’re on HPE’s list of affected products and get the firmware update right now – it’ll be too late to patch after the event!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QXKiLEwjhLM/

Advertisers want exemption from web privacy rules that, you know, enforce privacy

Amid the final rulemaking before the California Consumer Privacy Act (CCPA) is scheduled to take effect next year, five ad industry groups have asked California Attorney General Xavier Becerra to remove a requirement that businesses honor the privacy choices internet users make through browser settings, extensions, or other controls.

The wording of their request to Becerra appears to ask for a ban on browser and operating system-based privacy intervention, such as extensions that block ads and tracking scripts.

However, the ad industry groups subsequently clarified that they only want to disallow meddling with cookies that express privacy choices, such as those set by the digital ad industry’s AdChoices link.

The CCPA, which takes effect in January 2020, will provide Californians with greater legal privacy protections than anywhere else in the US (though still short of Europe’s GDPR), putting pressure on federal lawmakers who are trying to formulate consistent privacy rules for the entire country. Meanwhile, technology and ad companies have been trying to gut the CCPA and would welcome a weaker federal standard that supersedes the California law.

The privacy rules include a consumer right to know whether personal information is being collected, to request details about the categories of information collected, to know what specific information is collected, to refuse collection, and to delete collected information; they also ban any degradation of service if the user opts to retain their privacy.

Among its requirements, the law says, “If a business collects personal information from consumers online, the business shall treat user-enabled privacy controls, such as a browser plugin or privacy setting or other mechanism, that communicate or signal the consumer’s choice to opt-out of the sale of their personal information as a valid request [under the law].”

In a December 6th letter obtained by MediaPost reporter Wendy Davis and provided to The Register as a courtesy, the five ad industry groups – The American Association of Advertising Agencies (4As), the Internet Advertising Bureau (IAB), The Association of National Advertisers (ANA), the American Advertising Federation (AAF), and the Network Advertising Initiative (NAI) – complain to Becerra that such proposals would harm consumer choice.

“These intermediaries, such as browser and operating systems, can impede consumers’ ability to exercise choices via the internet that may block digital technologies (e.g. cookies, JavaScripts, and device identifiers) that consumers can rely on to communicate their opt out preferences,” the letter says.

“This result obstructs consumer control over data by inhibiting consumers’ ability to communicate preferences directly to particular businesses and express choices in the marketplace. The OAG should by regulation prohibit such intermediaries from interfering in this manner.”

According to Davis, the ad groups have since clarified that they want “to prohibit browsers and other intermediaries from blocking opt-out cookies (like the AdChoices opt-out link), and not all cookies.”

Via Twitter, NAI attorney Tony Ficarrotta said, “The intent of NAI comments to AG (public soon) is to limit carve-out to true opt-out cookies; I would personally support [a] ban on any secondary uses, and requiring [the] use of non-unique values to achieve that.”

Separately, on Friday as the rulemaking comment period closed, the IAB on its own submitted a letter to Becerra about the CCPA, hoping to shape the rules being formulated to support the law. The ad group’s letter requests “that the AG remove the requirement for businesses to honor browser plugins or settings.”

In its letter, the IAB claims that consumer choices expressed in the form of browser privacy extensions and settings are too confusing to follow. “Given that no standard technology currently exists for such browser plugins or privacy settings, it is not clear what browser plugins or privacy signals should be honored or how they should be honored,” the IAB letter says.

The Register asked the IAB for comment and a spokesperson pointed to pages 13 and 14 of its letter, which suggests Becerra adopt rules that allow information collecting businesses to ignore privacy controls “if the business includes a ‘Do Not Sell My Personal Information’ link and offers another method for consumers to opt-out of personal information sale by the business.”

In the past, the US Federal Trade Commission has not looked kindly on ignoring browser-expressed privacy choices. In 2012, Google agreed to pay $22.5m for, among other things, circumventing the privacy controls in Apple’s Safari browser.


In a statement emailed to The Register, Mozilla stressed that privacy settings should be easy to use and said it would be irresponsible and wrong to ignore the preferences users express through their browser settings.

“Of course, that is also why organizations like the Interactive Advertising Bureau find requirements like those in CCPA so threatening, because those requirements empower people to limit what data advertisers collect about them – and empower regulators to investigate and enforce if they don’t,” a Mozilla spokesperson said.

“So, the more hurdles that can be thrown in the way of setting adoptions like recognizing browser or plug-in flags, the longer such data can be traded and sold when mechanisms are limited.”

Mozilla said that in the absence of standard mechanisms to express privacy preferences, it has enabled Enhanced Tracking Protection by default to help consumers regain control over those attempting to track their browsing activity online. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/09/ad_groups_privacy_rules/