STE WILLIAMS

Mozilla faces resistance over DNS privacy test

Is Mozilla’s enthusiasm for Cloudflare’s DNS-over-HTTPS (DoH) service getting out of hand?

Cloudflare launched its 1.1.1.1 public DNS resolver on 1 April, one of the first anywhere to support DoH, an emerging technology designed to shield Domain Name System (DNS) queries from prying eyes such as governments and ISPs.

Because browsers as well as DNS resolvers must support the DoH protocol, Mozilla adopted Cloudflare as its test partner with a view to integrating the technology in Firefox 62, due in September.

But supporting DoH in a browser isn’t as simple as just enabling the protocol. Mozilla must also decide whether this support is enabled by default and, if so, which DoH server, or “Trusted Recursive Resolver” (TRR), it points to when the browser launches.

It turns out that Firefox’s DoH Shield test beta has already embedded Cloudflare as the default TRR, which hasn’t gone down well with everyone on several counts:

  • It puts a lot of trust in a company that’s already plugged into a lot of websites.
  • Using one service is an obvious single point of failure (SPOF).
  • DoH resolvers should be opt-in, not opt-out.
  • It silently overrides your existing DNS settings.

From the Ungleich blog:

When Mozilla turns this on by default, the DNS changes you configured in your network won’t have any effect anymore. At least for browsing with Firefox…

The obvious reply is that Mozilla’s developers have set Cloudflare as the default TRR as part of the testing process and are unlikely to impose this setting on users when the capability is offered to the world in Firefox 62.

As Firefox blogger Martin Brinkmann points out, the default TRR can be changed quite easily from about:config:

It is already possible to run custom DNS over HTTPS servers and Firefox’s current implementation allows custom addresses to be used.
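For the curious, the preferences involved look roughly like the sketch below, expressed as a user.js fragment. The pref names are the ones Firefox’s TRR implementation exposed in about:config at the time; verify them in your own build, and note the resolver URL here is a placeholder, not a recommendation:

```js
// user.js sketch – assumes Firefox’s TRR prefs as seen in about:config
// during the DoH Shield test; confirm the names in your Firefox version.
user_pref("network.trr.mode", 2);  // 2 = try DoH first, fall back to normal DNS; 3 = DoH only; 0 = off
user_pref("network.trr.uri", "https://doh.example.net/dns-query");  // placeholder custom resolver
```

Pointing network.trr.uri at any DoH endpoint you trust replaces the Cloudflare default for that profile.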

But even if Mozilla makes the TRR opt-in, it’s possible to spy the beginnings of a dilemma: how best to implement a technology that ideally should be on all the time, without non-expert users having to think too hard about what it is or how it works.

As Mozilla’s Lin Clark wrote in her excellent DoH explainer in May:

We’d like to turn this on as the default for all of our users. We believe that every one of our users deserves this privacy and security, no matter if they understand DNS leaks or not.

Most likely, a decision about which DoH resolver to use in Firefox is some way off, but it’s a bridge that won’t be easy to cross in a community as sensitive to privacy as Mozilla’s.

The alternative is to go back to the traditional model of letting people choose for themselves, as they do today for conventional DNS resolvers and search engines.

A reminder of why this less exciting approach might not be a bad idea after all came when Cloudflare’s DNS suddenly stopped resolving on 31 May for 17 minutes because of a configuration cock-up. The counter-argument is that this weakness applies to any DNS or DoH provider and could be countered with a backup resolver.

The dream of a private internet might look as if it’s going well, with Google’s tough stance on shaming sites that don’t use HTTPS having an effect. But there is still much hard work to do, as the IETF’s work on the related issue of Encrypted Server Name Indication (ESNI) privacy underscores.

That being said, there are always too many privacy holes that need filling at any one time. Doing something about one of the larger ones, DNS privacy, would feel like much-needed progress.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Fzmm5Z6ggyI/

iPhone chipmaker blames ransomware for factory shutdowns

After a weekend in which it had to shut down several factories making iPhone chips, Taiwan chipmaker TSMC is back up and running and pinning the blame on a network virus infection – specifically, one inflicted by a WannaCry ransomware variant.

On Sunday, the Taiwan Semiconductor Manufacturing Company put out a statement saying that it had recovered about 80% of its affected tools after the variant hit production facilities over the weekend.

According to Bloomberg, the chipmaker said on Monday that full operations had been restored.

TSMC traced the virus infection to a supplier having installed tainted software without having first scanned it. When the virus hit, it spread quickly, affecting production at semiconductor plants in Tainan, Hsinchu and Taichung.

Nehal Chokshi, an analyst with Maxim Group LLC, told Bloomberg that the incident won’t cause any major delays. It would have been much worse had the production line been affected between raw wafer and finished chips, but it wasn’t. So in this case, the only delay for Apple to get its chips will be the number of days the factories were gummed up – about three days, Chokshi said.

Whether Apple is going to feel the same way about its chip supplier is another matter. As Bloomberg put it, this is a black eye for TSMC, which isn’t giving up details about where the WannaCry variant originated, nor how it got past the company’s security protocols. Patches for the vulnerability that the original WannaCry relied upon to spread have been available for well over a year.

CEO C. C. Wei says that there was no hacker involved. Rather, the malware came from an infected production tool – or, as the company said in the statement on Sunday, “misoperation during the software installation process for a new tool, which caused a virus to spread once the tool was connected to the Company’s computer network” – and the company is overhauling its security procedures as a result.

We are surprised and shocked. We have installed tens of thousands of tools before, and this is the first time this happened.

WannaCry attacks gripped many large organizations during its first appearance last May. The reappearance of a variant shouldn’t be a surprise, though: the dreaded ransomware never actually went away.

One of the more recent appearances was when it hit Boeing in March.

We don’t know the particulars about the WannaCry variant that hit TSMC or how it may have been modified, but the global WannaCry outbreak, and the NotPetya outbreak that followed shortly after, were powerful demonstrations of what can happen when enough organisations aren’t on top of their security updates.

Patch early, patch often!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uAlUJiQWIWo/

How Bitcoin and the Dark Web hide SamSam in plain sight

For two and a half years someone has been terrorising organisations by breaking in to their networks and infecting their computers with devastating, file-encrypting malware known as SamSam.

The attacks are regular, but rarer and more sophisticated than typical ransomware attacks, and the perpetrators extort eye-watering, five-figure ransoms to undo the damage they create.

This year alone, victims have included healthcare provider Allscripts, Adams Memorial Hospital, the City of Atlanta, the Colorado Department of Transportation and Mississippi Valley State University.

By extracting high ransoms from a small number of victims who are reluctant to share news of their misfortune, the SamSam attackers have remained elusive while amassing an estimated fortune in excess of $6 million. Details about the attacks, the victims, the methods used and the nature of the malware itself have been hard to come by.

And yet, for all the mystery, some important aspects of SamSam attacks take place in plain sight.

One of the ways the man, woman or group behind SamSam gains entry to their targets is via RDP (the Remote Desktop Protocol), a technology that companies put in place so their employees can connect remotely. It’s easy to discover companies that use RDP with search engines like Shodan, and weak passwords can be exposed with publicly-available underground tools like nlbrute.
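That kind of discovery needs nothing exotic: underneath, it is a plain TCP connect test. A minimal Python sketch of the building block that tools like Shodan automate at internet scale (this is illustrative, not their actual code; port 3389 is RDP’s default, and the address in the comment is a documentation placeholder):

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connect to (host, port) succeeds,
    i.e. something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_open("203.0.113.10", 3389) would reveal an exposed RDP service
```

Finding a listening port is only step one, of course; the SamSam attackers still have to guess or brute-force the credentials behind it.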

SamSam ransom notes direct victims to a Dark Web website where the victim can exchange messages with the hacker. The website and the conversation are discreet but they aren’t secret – anyone with the Tor Browser can visit the site and watch the conversation unfold.

The ransom note also instructs victims on how to purchase bitcoins, and how to use them to pay their attacker. Like all Bitcoin transactions, the ransom payments happen in plain sight and the inflows and outflows of cash can be easily observed.

SamSam ransom collection over time

So how is it that SamSam and other cybercriminals can operate out in the open, talking to victims on public websites and exchanging money in plain sight, and yet evade capture, and is there anything that can be done about it?

Bitcoin

SamSam demands ransoms be paid in Bitcoin, the world’s favourite cryptocurrency.

The trust that people have in Bitcoin comes from its reliability, which stems from the way it stores data in public, in a database called a blockchain. Anyone can own a copy of Bitcoin’s blockchain, for free, and anyone can view the transactions stored inside it using software, or websites like blockchain.com.

On the Bitcoin blockchain, users are represented by one or more addresses – strings of letters and numbers between 26 and 35 characters long. Observers can see how much money has been sent from one address to another and when, but the Bitcoin blockchain has no record of who owns what address, or how many addresses they own.
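To see why that transparency matters, consider how little it takes to total the inflows to a known address. A toy sketch with invented data (real analyses parse the actual blockchain, but the arithmetic is the same; the addresses here are hypothetical):

```python
# Toy illustration with invented data: because the blockchain is public,
# totalling the payments received by a known address is just a sum over
# the transactions that mention it.
WATCHED = {"1ExampleSamSamAddrAAAAAAAAAAAAAAA"}  # hypothetical attacker address

transactions = [
    {"to": "1ExampleSamSamAddrAAAAAAAAAAAAAAA", "btc": 1.5},
    {"to": "1SomeUnrelatedAddressBBBBBBBBBBBB", "btc": 0.3},
    {"to": "1ExampleSamSamAddrAAAAAAAAAAAAAAA", "btc": 0.75},
]

total_received = sum(t["btc"] for t in transactions if t["to"] in WATCHED)
print(total_received)  # 2.25
```

The hard part of the analysis isn’t the sum – it’s working out which addresses belong to the attacker in the first place.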

SamSam has used Bitcoin since the malware first appeared. In the beginning, the addresses the ransoms were paid to changed regularly but, as time has passed, they’ve changed much less frequently.

There are limits to what a pocketful of bitcoins will get you though, and sooner or later they have to be traded for something such as cash, or goods and services, and that can create a link between a pseudonymous Bitcoin address and a real person. Online currency exchanges may want an ID or record an IP address, for example, and goods bought online have to be delivered to an address.

Any such link is of course of enormous interest to law enforcement.

SamSam shows an awareness of these risks in its use of so-called tumblers (a form of Bitcoin money laundering), and in the advice the ransom notes offer victims about how to purchase bitcoins anonymously:

We advice you to buy Bitcoin with Cash Deposit or WesternUnion from https://localbitcoins.com or https://coincafe.com/buybitcoinswestern.php because they don't need any verification and send your Bitcoin quickly.

Bitcoin’s transparency is its strength but it is also, increasingly, a weakness. Bitcoin’s blockchain is the very definition of “Big Data” and as any regular reader of Naked Security will tell you, large collections of anonymous data are often far more than the sum of their parts.

For its investigation into SamSam, Sophos partnered with Neutrino, a company that specialises in crunching the numbers in the Big Data that cryptocurrencies create. Neutrino was able to validate suspected SamSam transactions and identify many more SamSam payments than were previously known, leading Sophos to new victims and new insights about how attacks unfold.

As a result of Neutrino’s digging, Sophos has been able to revise the previous best guess of how much money SamSam has made – moving the estimated total up from around $1 million to just over $6 million. Neutrino has also been able to use information gathered from previously unknown victims discovered through blockchain transactions to improve the protection against ransomware it provides.

And there’s every reason to expect more insight will be possible in future. Historical transactions are entombed in the Bitcoin blockchain forever, at the mercy of researchers and unaffected by upgrades or improvements in cybercriminals’ operational security.

As an example of how far that Big Data analysis can go, researchers recently succeeded in stripping away key privacy protections from Monero, a blockchain-based cryptocurrency that’s designed to offer more anonymity than Bitcoin.

Dark Web

It’s one thing to watch the money flowing from victim to attacker in broad daylight, quite another to watch them actually talking.

SamSam victims are directed by their ransom notes to websites where they can ask for the software needed to decrypt their computers. In addition to decrypting all of their computers for the full, five-figure ransom, victims are also offered a number of alternatives:

  1. Any two files can be decrypted for free, to prove the decryption works.
  2. Any one computer can be decrypted if the attacker deems it unimportant.
  3. One computer can be decrypted for 0.8 BTC (as of June 2018).
  4. Half the computers can be decrypted for half the ransom.

The SamSam gang and its victims can navigate these options, and even resolve technical issues with the decryption process, by leaving messages for each other on the website.

SamSam hidden service
The hacker talks to a victim on the SamSam Dark Web site.

In the beginning, SamSam used the web’s equivalent of “burner” phones – single-use websites on anonyme.com or wordpress.com. Within a few months, though, the malware had moved to the relative safety of a website running on a hidden service on the Tor network or, as it’s colloquially known, the Dark Web.

Victims are told to pay the ransom, install the Tor Browser (a modified version of Firefox that allows them to navigate to hidden services), and then visit the website and ask for the decryption software.

With the Tor Browser installed, visiting the SamSam website is no different from visiting any other site, aside from its peculiar-looking hidden service address – a 16-character string of letters and numbers ending in .onion.
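That address shape is easy to pin down. A small sketch recognising the 16-character v2 format described above (Tor has since introduced longer, 56-character v3 addresses; the matching example below is, to the best of our knowledge, torproject.org’s old v2 address, used purely for illustration):

```python
import re

# v2 hidden-service addresses: 16 base32 characters (a-z, 2-7) + ".onion".
V2_ONION = re.compile(r"^[a-z2-7]{16}\.onion$")

print(bool(V2_ONION.match("expyuzz4wqqyqhjn.onion")))  # True
print(bool(V2_ONION.match("example.com")))            # False
```

The address is derived from the hidden service’s public key, which is why it looks like noise rather than a name.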

What makes the Dark Web dark, and so useful to cybercriminals, is that it uses layers of encryption and a series of intermediary computers to hide a website’s IP address.

With an IP address, law enforcement can see where in the world a website is located, which part of the internet it’s on and who the hosting company or ISP is. With that information they stand a reasonable chance of identifying who owns a site, or of shutting it down. Without an IP address, a website is unmoored from the real world and could be literally anywhere.

So is all hope lost? Not quite.

Tor, the technology used to make the web go “dark” is sophisticated and capable software, but it isn’t a cloak of invisibility and the owners of Dark Web websites are arrested fairly regularly.

For all the fuss made of it in the media, you’d be forgiven for believing that the Dark Web is enormous, but it’s not – it’s vanishingly small. While the regular web has hundreds of millions of active websites, the Dark Web has thousands.

Size is important because the smaller a network is, the easier it is to scan and monitor, and scans of the Dark Web have shown something very interesting – it is far more centralised and interconnected than you’d expect.

The size of the network also has a bearing on one of the more shadowy deanonymisation tactics that might be available to a law enforcement or intelligence agency with skilled hackers and a big budget: traffic correlation attacks.

Traffic correlation attacks attempt to match the traffic entering the Tor network with the traffic leaving it. Such attacks are hard to carry out but are a long-acknowledged potential weakness and are rumoured to have been used in 2014’s multi-national Dark Web crackdown, Operation Onymous.

Tor is very good at hiding your IP address but, while that’s important, there is more to staying anonymous online than that, and more often than not the Dark Web inhabitants who get busted are undone by human error. Whether it’s talking to an undercover cop, trusting the wrong person, forgetting to take the necessary precautions or simply not knowing what they are, there are a lot of ways to slip up.

In amassing their criminal treasure chest, the SamSam crew has made a lot of enemies.

If and when they slip up, a lot of eyes will be watching.

You can read more about the history of SamSam, how it works and how to protect against it in Sophos’s extensive new research paper, SamSam: The (Almost) Six Million Dollar Ransomware.

The investigation is ongoing – if you have information about SamSam or you are a security vendor interested in collaborating with our investigation, please contact Sophos.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mL4E3XKZ13k/

Bank on it: It’s either legal to port-scan someone without consent or it’s not, fumes researcher

Updated Halifax Bank scans the machines of surfers that land on its login page whether or not they are customers, it has emerged.

Security researcher Paul Moore has made his objection to this practice – in which the British bank is not alone – clear, even though it is done for good reasons. The researcher claimed that performing port scans on visitors without permission is a violation of the UK’s Computer Misuse Act (CMA).

Halifax has disputed this, arguing that the port scans help it pick up evidence of malware infections on customers’ systems. The scans are legal, Halifax told Moore in response to a complaint he made on the topic last month.

If security researchers operate in a similar fashion, we almost always run into the Computer Misuse Act, even if their intent isn’t malicious. The CMA should be applied fairly…

When you visit the Halifax login page, even before you’ve logged in, JavaScript on the site, running in the browser, attempts to scan for open ports on your local computer to see if remote desktop or VNC services are running, and looks for some general remote access trojans (RATs) – backdoors, in other words. Crooks are known to abuse these remote services to snoop on victims’ banking sessions.

Moore said he wouldn’t have an issue if Halifax carried out the security checks on people’s computers after they had logged on. It’s the lack of consent and the scanning of any visitor that bothers him. “If they ran the script after you’ve logged in… they’d end up with the same end result, but they wouldn’t be scanning visitors, only customers,” Moore said.

Halifax told Moore: “We have to port scan your machine for security reasons.”

Having failed to either persuade Halifax Bank to change its practices or Action Fraud to act (thus far[1]), Moore last week launched a fundraising effort to privately prosecute Halifax Bank for allegedly breaching the Computer Misuse Act. This crowdfunding effort on GoFundMe aims to gather £15,000 (so far just £50 has been raised).

Halifax Bank’s “unauthorised” port scans are a clear violation of the CMA – and amount to the kind of action security researchers are frequently criticised and/or convicted for, Moore argued. The CISO and part-time security researcher hopes his efforts in this matter might result in a clarification of the law.

“Ultimately, we can’t have it both ways,” Moore told El Reg. “It’s either legal to port scan someone without consent, or with consent but no malicious intent, or it’s illegal and Halifax need to change their deployment to only check customers, not visitors.”

The whole effort might smack of tilting at windmills, but Moore said he was acting on a point of principle.

“If security researchers operate in a similar fashion, we almost always run into the CMA, even if their intent isn’t malicious. The CMA should be applied fairly to both parties.”

Moore announced his findings, his crowdfunded litigation push and the reasons behind it on Twitter, sparking a lively debate. Security researchers are split on whether the effort is worthwhile.

The arguments for and against

The scanning happens on the customer login page and not the main Halifax Bank site, others were quick to point out. Moore acknowledged this but said it was beside the point.

Infosec pro Lee Burgess disagreed: “If they had added to the non-customer page then the issue would be different. They are only checking for open ports, nothing else, so [I] cannot really see the issue.”

Surely there needs to be intent to cause harm or recklessness for any criminal violation, neither of which is present in the case of Halifax, argued another.

UK security pro Kevin Beaumont added: “I’d question if [it was] truly illegal if [there was] not malicious intent. Half the infosec services would be illegal (Shodan, Censys etc). IRC networks check on connect, Xbox does, PlayStation does etc.”

Moore responded that two solicitors he’d spoken to agreed Halifax’s practice appeared to contravene the CMA. An IT solicitor contact of The Register, who said he’d rather not be quoted on the topic, agreed with this position. Halifax’s lawyers undoubtedly disagree.

Moore concluded: “Halifax explicitly says they’ll run software to detect malware… but that’s if you’re a customer. Halifax currently scan everyone, as soon as you land on their site.”

Enter the ThreatMetrix

Halifax Bank is part of Lloyds Banking Group, and a reference customer for ThreatMetrix, the firm whose technology is used to carry out the port scanning, via client-side JavaScripts.

The scripts run within the visitor’s browser, and are required to check if a machine is infected with malware. They test for this by trying to connect to a local port, but this is illegal without consent, according to Moore.

“Whilst their intentions are clear and understandable, the simple act of scanning and actively trying to connect to several ports, without consent, is a clear violation of the CMA,” Moore argued.

Beaumont countered: “It only connects to the port, it doesn’t send or receive any data (you can see from the code, it just checks if port is listening).”

Moore responded that even passively listening would break the CMA. “That’s sufficient to breach CMA. If I port-sweep Halifax to see what’s listening, I’d be breaching CMA too,” he said.

The same ThreatMetrix tech is used by multiple UK high street banks, according to Beaumont. “If one is forced to change, they all will,” Moore replied.

Moore went on to say that this testing – however well-intentioned – might have undesirable consequences.

“Halifax/Lloyds Banking Group are not trying to gain remote access to your device; they are merely testing to see if such a connection is possible and if the port responds. There is no immediate threat to your security or money,” he explained.

“The results of their unauthorised scan are sent back to Halifax and processed in a manner which is unclear. If you happen to allow remote desktop connections or VNC, someone (other than you) will be notified as such. If those applications have vulnerabilities of which you are unaware, you are potentially at greater risk.”

Moore expressed that his arguably quixotic actions may have beneficial effects. “Either Halifax [is] forced to correct it and pays researchers from the proceeds, or the CMA is revised to clarify that if [its] true intent isn’t malicious, [it’s] safe to continue,” he said.

We have asked ThreatMetrix for comment. ®

Updated at 1200 UTC to add

Halifax Bank has been in touch to say: “Keeping our customers safe is of paramount importance to the Group and we have a range of robust processes in place ‎to protect online banking customers.”

Bootnote

[1] Action Fraud is the UK’s cyber security reporting centre. Moore has reported the issue to it. AF’s response left Moore pessimistic about finding any relief from that quarter.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/halifax_bank_ports_scans/

Rights groups challenge UK cops over refusal to hand over info on IMSI catchers

British cops’ efforts to keep schtum about their use of IMSI grabbers to snoop on people’s mobile phones are to be challenged in court.

Five UK police forces are known to have purchased the equipment – which mimics mobile phone towers to connect with devices – but groups seeking further details have hit a brick wall, as cops simply fall back on the position that they can “neither confirm nor deny” they hold any information on them.

The fact that forces used the kit was revealed back in 2016, after a Freedom of Information request by The Bristol Cable established that the acronym CCDC – present in various forces’ accounts but until then unclear – stood for “Covert Communications Data Capture”.

However, the forces have refused every category of follow-up FoI requests made by Privacy International, on the grounds that they could neither confirm nor deny whether they held the information.

In response, the rights group, represented by Liberty, has today filed an appeal with the First-Tier tribunal challenging the forces’ refusal to hand over the information.

Scarlet Kim, legal officer at Privacy International, said the police had relied on the “knee-jerk ‘neither confirm nor deny’ reaction” to requests for too long.

“This secrecy is all the more troubling given the indiscriminate manner in which IMSI catchers operate,” Kim said. “These tools are particularly ripe for abuse when used at public gatherings, such as protests, where the government can easily collect data about all those attending.”

She also pointed out that the refusals continued despite evidence of widespread spending on IMSI catchers leaking out over time.

For instance, purchase records have shown that in 2015 the London Metropolitan Police spent more than £1m on the kit, while Avon and Somerset Police handed over £169,575 and South Yorkshire Police paid £144,000 in a similar time period.

The challenge comes after an appeal to the Information Commissioner’s Office – which oversees FoI compliance – ended with the ICO backing the police’s position.

The ICO’s decision, handed down last month, said that forces cannot rely on “neither confirm nor deny” for information related to legislation, codes of practice, brochures or other promotional materials covering the surveillance tools – but that they can use the response for other categories.

These included policies or other guidance related to the regulation of IMSI catchers, as well as contracts and other records related to the purchase of that technology.

However, Privacy International and Liberty argue this goes against the principle and purpose of the FOI Act, with Kim saying the police had “attempted to strip [the FOIA] of its very meaning”.

Liberty’s lawyer Megan Goulding added that it was “vital” for the public to be able to access information on the surveillance tools the police use.

“We hope the Tribunal acknowledges the threat to our rights and encourages a more diligent approach from the Information Commissioner’s Office,” she said. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/police_imsi_catchers/

Batten down the ports: Linux networking bug SegmentSmack could remotely crash systems

A networking flaw has been discovered in the Linux kernel that could trigger a remote denial-of-service attack.

Versions 4.9 and up are “vulnerable to denial-of-service conditions with low rates of specially crafted packets”, according to a US-CERT advisory. The bug is being tracked as SegmentSmack (CVE-2018-5390).

SegmentSmack – which sounds a bit like an American wrestler whose speciality is to close bouts just before an ad break – has prompted fixes for a wide variety of networking kit.

The flaw could be worse – there’s no remote code execution – but it’s an issue because hackers may be able to remotely tie up or crash vulnerable systems provided they are configured with an open port. Firewalls are a sufficient defence here.
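A rough way to triage whether a box even falls in the affected range is to parse the kernel release string (e.g. the output of uname -r). This sketch checks only the base version – a patched 4.9+ kernel reports the same numbers, so treat a True as “look closer”, not “vulnerable”:

```python
def possibly_affected(uname_release):
    """Rough SegmentSmack triage: kernels 4.9 and later carried the
    vulnerable TCP code. This only compares the base version string,
    so it flags candidates for closer inspection; it is NOT a
    vulnerability test, and patched kernels will still return True."""
    major, minor = (int(x) for x in uname_release.split(".")[:2])
    return (major, minor) >= (4, 9)

print(possibly_affected("4.15.0-29-generic"))  # True
print(possibly_affected("3.10.0-862.el7"))     # False
```

The second example also illustrates the point below: many enterprise distributions still shipped pre-4.9 kernels, though backported code can complicate the picture.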

Fortunately patches are already available to address the vulnerability from a long list of networking, security, storage and open-source OS vendors.

Most enterprise-grade Linux distributions do not yet use kernel 4.9 or above so aren’t immediately affected.

Chris O’Brien, director of intelligence operations at EclecticIQ, said:

“If leveraged, the flaw allows a single attacker to compromise the availability of a remote server by saturating resources. Due to the wide number of vendors affected and the difficulty in patching kernels on embedded systems, EclecticIQ anticipates a large impact should a working proof-of-concept be published in the coming days.”

UK cybersecurity pro Kevin Beaumont also noted that no proof-of-concept for the exploit is available at the moment. In a blog post, Beaumont agreed that most enterprise-grade Linux distros wouldn’t be affected, but warned that “some may have backported the netcode to older kernels” leaving these systems vulnerable as a result. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/segmentsmack/

US-CERT Warns of New Linux Kernel Vulnerability

Patches now available to prevent DoS attack on Linux systems.

Denial-of-service attacks aren’t just about external floods: A new US-CERT vulnerability note is a reminder that operating system kernel services can be used to effectively launch a DoS campaign against a system.

Vulnerability Note VU#962459 warns of a vulnerability in Linux kernel versions 4.9 and greater that can allow an attacker to overwhelm a system’s resources with low-effort calls. With the right trigger, a Linux system can be forced to make a sequence of kernel calls for every packet – kernel calls that are hugely expensive in terms of system resources. There are limitations on the conditions, but the proof of the vulnerability exists.

Patches for the vulnerability are available for immediate application.

Read here and here for more.


Article source: https://www.darkreading.com/vulnerabilities---threats/us-cert-warns-of-new-linux-kernel-vulnerability/d/d-id/1332497?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Shadow IT: Every Company’s 3 Hidden Security Risks

Companies can squash the proliferation of shadow IT if they listen to employees, create transparent guidelines, and encourage an open discussion about the balance between security and productivity.

Twelve years with the FBI and I was ready for anything: espionage, massive cyberattacks, Tom Clancy-esque zero-day exploits. I saw some of that, of course, but more often I discovered and rediscovered that it’s the simple things that most often cause catastrophic problems — simple things that plague every company.

For example, midway through my stint as a dedicated cyber agent, we responded to a data breach at a well-known company. Private information, much of it highly sensitive, had been dumped into a repository on the open Internet. Was it the result of state-sponsored actors? Sophisticated activist groups? A brute-force login attack?

No. An employee had placed sensitive data in a free cloud storage account, and run-of-the-mill data thieves had simply posted it online. Despite the fact that this storage provider had a high-profile breach only months earlier, the employee didn’t change the account password. A million-dollar problem could have been avoided with a 60-second password reset. This is a great example of the three risks I see in most companies.

  1. Who: Any employee can collect data. With a credit card and Internet access, any individual member of the staff can run critical company functions with or without permission. Most have good intentions. Some don’t.
  2. What: Employees can collect any kind of sensitive data. Customer and company data are sensitive and can be immensely valuable. Without guidance, employees can just as easily collect and store Social Security numbers as coffee preferences. But if the Social Security numbers get hacked, you could be on the hook for millions in recovery costs.
  3. Where: Data are invisible and inaccessible to company managers. In the new GDPR world, data that doesn’t live in enterprise-controlled systems is much more difficult to retrieve. Worse yet, data in private accounts follow the owner when she leaves the company. What if she’s taking a list of your 100 best clients?

It’s no surprise that as many as 80% of employees use unauthorized services. What is surprising is that companies have known about this threat for a very long time, yet they’re still failing to address it. According to Gartner, “Through 2020, 99% of vulnerabilities exploited will continue to be ones known by security and IT professionals for at least one year.”

When employees use platforms that have not been screened or authorized by a company’s technology and security team, they’re wading into what’s known as “shadow IT.” And shadow IT makes it much easier for hackers to steal your company’s data. For example, employees will always try to increase productivity in any way they can. They’ll rely on unsanctioned cloud-based file storage, survey software, or messaging apps if those apps will save them a few minutes. But this kind of behavior opens up the holes that can cost a company millions of dollars and priceless consumer trust. A 2017 study by the Ponemon Institute found that the average cost of a breach is $3.62 million. There’s nothing productive about that. 

Chat-Room Lurkers
Another true story: During an investigation into a network intrusion at a large company, the network engineering team was using a free chat tool to communicate as they fought to regain control of their network. They had not told anyone about this tool, and they had been using it for months. In fact, it became their primary channel as they chased the attackers in their network. Do you see where this is going?

These engineers hadn’t involved their infosec team in vetting the tool, and it was set up insecurely. The attackers had joined the very chat group the engineers were using to try to kick them off, and they were tracking the team’s every move. We discovered the intruders only by identifying every person in the chat group and isolating several imposters. After that, we moved quickly to a different communications channel. 

In their rush to be productive, the engineers made the problem worse with a sloppy setup of a free tool. The company spent a lot more time and money remediating the breach, and the data loss was much larger than it could have been. They had to spend millions to inform customers and to provide credit protection for those customers.

Squashing Shadow IT
How can your company avoid horror stories like these? Here are four ways to bring security priorities and employee behavior together:

  1. Policy and communication. Companies need a well-defined policy on the use of unsanctioned services and the protection of company data. But policy won’t accomplish anything if it isn’t communicated to employees. Offer regular training, including explanations of the rationale behind the policy and real-world risks.
  2. Open-minded onboarding. New employees often want to use the productivity tools that were helpful at their previous jobs, so onboarding must include the data security policy. But also use that moment to take new suggestions into account.
  3. What you don’t know can hurt you. Survey employees regularly on the resources they’re using to illuminate security risks before a breach occurs. If enough employees need a solution, IT should work to find an approved vendor that can securely fill that need.
  4. Partners, not police. Foster an ongoing conversation between employees and your company’s security/IT departments about how to balance productivity with security. If your security team reflexively says “no” whenever employees want productivity-boosting tools, employees will just stop asking and use the tools anyway.

Companies can squash shadow IT risk, but they have to be willing to listen to their employees, create transparent guidelines, and encourage an open discussion on the best ways to be both productive and secure.


Adam Marrè, CISSP, GCIA, GCIH, is a Qualtrics information security operations leader and former FBI cyber special agent. Adam has more than 12 years' experience leading large-scale computer intrusion investigations and consulting as a cybercrime …

Article source: https://www.darkreading.com/endpoint/shadow-it-every-companys-3-hidden-security-risks/a/d-id/1332454?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Top tip? Sprinkle bugs into your code to throw off robo-vuln scanners

Miscreants and researchers are using automation to help them find exploitable flaws in your code. Some boffins at New York University in the US have a solution to this, and it’s a new take on “security through obscurity”.

Here it is: add more bugs to your software to throw the automatic scanners off the scent of really scary blunders. We already know what you’re probably thinking: “On a bad day, I get software that’s more bug than code, and you want more bugs?” – but bear with us.

The researchers – Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt – only want the “right” kind of bug added to software: something that’s not exploitable, doesn’t cause crashes, but will show up if someone bug-scans the software.

And they want thousands of these bugs, if possible, so as to gum up the black-hat business model by making it expensive to work out which bugs are “real”.

The aim of what they call “chaff bugs” (metaphorically drawn from an aircraft tossing out foil chaff to confuse enemy radar) is to give what looks like a huge attack surface to a black hat, and send them on an exploit wild goose chase.

Their reasoning stems from the typical attack workflow, which their arXiv paper, “Chaff Bugs: Deterring Attackers by Making Software Buggier”, breaks into four steps: finding bugs, triaging them (that is, identifying those that offer a possible attack), developing an exploit, and deploying it.

The “chaff bugs” are designed to escalate the cost of the triage and exploit-development steps. “Rather than eliminating bugs, we propose instead to increase the number of bugs in the program by injecting large numbers of chaff bugs that can be triggered by attacker-controlled input,” the paper stated.

With the right constraints and the right automation, the researchers claimed they could put a lot of likely-looking bugs into their targets (the Nginx server, the Linux file utility, and the libFLAC codec library), as shown in the table below.

When they ran their bug-infested software through the American Fuzzy Lop fuzzer, “all of our chaff bugs were considered EXPLOITABLE or PROBABLY EXPLOITABLE”, meaning a bug-hunter would have to do a lot of manual work to eliminate false leads.

How to constrain bugs

The hard part is to introduce a bug that doesn’t offer a real exploit. The paper noted that chaff bugs have to be constrained, so they only appear in conditions that aren’t exploitable and “will only, at worst, crash the program”.

If you can achieve that, while at the same time making it hard for an attacker to quickly triage a chaff bug as non-exploitable, “we can greatly increase the amount of effort required to obtain a working exploit by simply adding so many non-exploitable bugs that any bug found by an attacker is overwhelmingly likely to be non-exploitable”.
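That argument is just arithmetic, and a toy model makes it concrete (the numbers below are illustrative, not taken from the paper):

```python
def p_real_bug(real_bugs: int, chaff_bugs: int) -> float:
    """Probability that a randomly surfaced bug is genuinely exploitable,
    assuming a scanner reports real and chaff bugs with equal likelihood."""
    return real_bugs / (real_bugs + chaff_bugs)

# Five real bugs buried under two thousand decoys: almost every bug an
# attacker triages is a dead end, and triage cost scales accordingly.
print(round(p_real_bug(5, 2000), 4))  # 0.0025
```

The defender doesn't need every chaff bug to be convincing, only for the decoys to vastly outnumber the genuine article.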

In the paper, the researchers concentrated on two classes of bug: stack buffer overflows, and heap buffer overflows – both of which are rich pickings for attackers, because they’re so often exploitable.

However, they’re also classes of bugs the researchers reckon they can control, so that the chaff bug isn’t exploitable: “Because the stack layout of a function is determined at compile time, we can control what data will be overwritten when the overflow occurs, which gives us an opportunity to ensure the overflow will not be exploitable.”

That’s because there are two important properties that the “defender” can control: the target of the overflow, and the value that’s written during the overflow.

Their target is a “variable that is unused by the program”, and the value is “constrained so that it can only take on safe values at the site of the overflow”.

If you only needed to add a few bugs to the software, you’d do it by hand, but for chaff bugs to overwhelm “real” bugs, you need lots of them – so the researchers turned to a tool called LAVA for help.

Written by team member Dolan-Gavitt, LAVA is a bug-injection system originally conceived to generate synthetic bug corpora for benchmarking bug-finding approaches such as fuzzers. For this research, LAVA was repurposed, so to speak: instead of planting bugs for tools to find, it manages how chaff bugs are added to code. ®
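As a rough illustration of that injection step (a toy sketch of the idea only; the template, names, and masking scheme here are invented, not LAVA's actual implementation):

```python
# Toy chaff injector: append decoy, non-exploitable overflow gadgets to a
# C function body. The overflow target is an unused local, and the value
# written is masked into a "safe" range, so the write is detectable by a
# scanner but useless to an exploit developer.
CHAFF_TEMPLATE = """
    /* chaff bug {n}: overflow into an unused local, with the written
       value masked to a safe range: detectable but not exploitable */
    char chaff_buf_{n}[8];
    char chaff_pad_{n}[64];            /* never read by the program */
    for (size_t i = 0; i < len && i < 64; i++)
        chaff_buf_{n}[i] = (input[i] & 0x3f) + 0x20;  /* printable only */
"""

def inject_chaff(c_function_body: str, count: int) -> str:
    """Insert `count` decoy bugs just before the function's closing brace."""
    gadgets = "".join(CHAFF_TEMPLATE.format(n=i) for i in range(count))
    head, brace, tail = c_function_body.rpartition("}")
    return head + gadgets + brace + tail

body = "void handle(const char *input, size_t len) {\n    /* ... */\n}"
print(inject_chaff(body, 2).count("chaff bug"))  # 2
```

The real system works on compiler intermediate representations rather than raw source text, but the shape of the problem is the same: pick an attacker-reachable point, add an overflow, and constrain both its target and its value.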

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/07/chaff_confuse_automated_vulnerability_scanners/

Chip flinger TSMC warns ‘WannaCry’ outbreak will sting biz for $250m

Chipmaker TSMC has warned that a previously disclosed virus infection of its Taiwanese plant may cost it up to $250m.

The malware struck on Friday, and affected a number of computer systems and fab tools over two days.

“The degree of infection varied by fab,” the firm said in an update on Sunday. “TSMC contained the problem and found a solution. As of 14:00 Taiwan time, about 80 per cent of the company’s impacted tools have been recovered, and the company expects full recovery on August 6.”

Although the malware went unnamed in TSMC's statement, execs reportedly blamed a variant of WannaCry, aka WannaCrypt, for the infection during follow-up conference calls.

TSMC warned that the incident is likely to “cause shipment delays and additional costs”.

“We estimate the impact to third quarter revenue to be about 3 per cent, and impact to gross margin to be about one percentage point,” it said. “The company is confident shipments delayed in third quarter will be recovered in the fourth quarter 2018, and maintains its forecast of high single-digit revenue growth for 2018.”

Sales hit

The chipmaker had previously forecast revenues of $8.45bn to $8.55bn for its September quarter. A 3 per cent hit would shave up to $250m off that figure, though actual losses may come in lower: execs have already revised the expected revenue impact down to no more than 2 per cent, Bloomberg reported.
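Those figures check out against TSMC's own guidance (simple arithmetic, working in millions of dollars):

```python
# Worked from TSMC's guidance, in $m: a ~3 per cent hit to forecast Q3
# revenue of $8.45bn-$8.55bn, later revised down to at most 2 per cent.
q3_low_m, q3_high_m = 8450, 8550
worst_case_m = q3_high_m * 3 // 100   # the "up to $250m" headline figure
revised_m = q3_high_m * 2 // 100      # after the revision Bloomberg reported
print(worst_case_m, revised_m)  # 256 171
```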

TSMC added that it was working with its customers to develop revised shipment schedules. TSMC – which supplies components to Apple iPhones, AMD, Nvidia, Qualcomm, Broadcom and others – said malware spread across its systems after an infected sub-component of an unspecified tool was connected to its network.


British malware reverse-engineer Marcus Hutchins famously halted the spread of WannaCry across NHS networks and elsewhere by registering a domain that turned out to act as a kill switch, preventing further spread of the malware on hosts that could reach the domain. Even so, the software nasty is still capable of causing problems in closed systems such as factories, UK infosec guru Kevin Beaumont told El Reg.

“Factory networks sometimes don’t have internet access so can’t reach a kill switch,” Beaumont said. “WannaCry is still one of the biggest infections seen in AV detections.”
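Beaumont's point follows directly from how the kill switch works. A simplified model of the worm's check (my sketch, with a placeholder domain, not WannaCry's actual code): the worm tries to resolve the kill-switch domain and only keeps spreading when the lookup fails, which is exactly what happens on an internet-isolated factory network.

```python
from typing import Callable

# Placeholder only: the real kill-switch domain is a long pseudo-random
# string that Marcus Hutchins registered and sinkholed.
KILL_SWITCH_DOMAIN = "example-killswitch-domain.invalid"

def should_propagate(resolves: Callable[[str], bool]) -> bool:
    """Simplified model of WannaCry's kill-switch check: halt if the
    domain can be reached, keep spreading if the lookup fails."""
    return not resolves(KILL_SWITCH_DOMAIN)

# Internet-connected host: the sinkholed domain resolves, so the worm halts.
print(should_propagate(lambda domain: True))   # False
# Isolated factory network: DNS fails, so the worm keeps spreading.
print(should_propagate(lambda domain: False))  # True
```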

This sort of thing is not unprecedented. Last March, around ten months after the original May 2017 outbreak, WannaCry crash-landed on the factory systems of US aerospace giant Boeing.

TSMC pointed to a silver lining in its malware-occulted short-term outlook – the security breach could have been a lot worse. “Data integrity and confidential information was not compromised,” it said. “TSMC has taken actions to close this security gap and further strengthen security measures.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/06/tsmc_malware/