
Huawei bungled router security, leaving kit open to botnets, despite alert from ISP years prior

Exclusive Huawei bungled its response to warnings from an ISP’s code review team about a security vulnerability common across its home routers – patching only a subset of the devices rather than all of its products that used the flawed firmware.

Years later, those unpatched Huawei gateways, still vulnerable and still in use by broadband subscribers around the world, were caught up in a Mirai-variant botnet that exploited the very same hole flagged up earlier by the ISP’s review team.

The Register has seen the ISP’s vulnerability assessment given to Huawei in 2013 that explained how a programming blunder in the firmware of its HG523a and HG533 broadband gateways could be exploited by hackers to hijack the devices, and recommended the remote-command execution hole be closed.

Our sources have requested anonymity.

After receiving the security assessment, which was commissioned by a well-known ISP, Huawei told the broadband provider it had fixed the vulnerability, and had rolled out a patch to HG523a and HG533 devices in 2014, our sources said. However, other Huawei gateways in the HG series, used by other internet providers, suffered from the same flaw because they used the same internal software, and remained vulnerable and at risk of attack for years because Huawei did not patch them.

One source described the bug as a “trivially exploitable remote code execution issue in the router.”

The vulnerability, located in the firmware’s UPnP handling code, was uncovered by other researchers in more Huawei routers years later, and patched by the manufacturer, suggesting the Chinese giant was tackling the security hole whack-a-mole-style, rolling out fixes only when someone new discovered and reported the bug.

One at a time, please – don’t all rush in

El Reg has studied Huawei’s home gateway firmware, and found blocks of code, particularly in the UPnP component, reused across multiple device models, as you’d expect. Unfortunately, Huawei has chosen to patch the models one by one as the UPnP bug is found and reported again and again, rather than issuing a comprehensive fix to seal the hole for good.

And found they were: “Some time between 2013 and 2017 this issue was then also rediscovered by some nefarious types who used it as part of the exploitation pack to [hijack] consumer home routers as part of the Mirai botnet,” a source told us.

Four years after the ISP’s review team privately disclosed the UPnP command-injection vulnerability to Huawei in 2013, and a year after the infamous Mirai botnet takedown of Dyn DNS in 2016, infosec consultancy Check Point independently found the same vuln that was quietly patched in the HG523a and HG533 series was still lurking in another of the Chinese goliath’s home routers: the HG532.

The Israeli outfit told us it went public with its discovery of the bug, CVE-2017-17215, on December 21, 2017, only after Huawei “had notified customers and developed a patch”, adding that it first “spotted malicious activity on 23rd November 2017”.

Huawei publicly acknowledged the security hole in the HG532 on November 30, 2017, suggesting that “customers take temporary fixes to circumvent or prevent vulnerability exploit or replace old Huawei routers with higher versions”.

Last summer, a security researcher discovered that the same model of Huawei routers in which Check Point had found the UPnP vuln, the HG532, were being used to host an 18,000-strong botnet created using a variant of the Mirai malware. A botnet that could have been avoided if Huawei had patched the broadband boxes when it quietly updated the related HG523a and HG533 devices in 2014.

British government policy holds that Huawei network equipment is not secure enough for government networks, yet officials say it is acceptable to expose the general public to the potential risks present in Huawei gear. Meanwhile, the US government has banned Huawei equipment from its federal agency networks, a move the Chinese corporation is suing to overturn.

UPnP vulnerability described

Routers affected by the UPnP vuln included Huawei’s HG523a, HG532e, HG532S, and HG533 models. For the HG533, firmware version 1.14 was reviewed by the ISP’s security assessors, and for the HG523a, version 1.12. The other two models were probed by Check Point. These were white-label products distributed by internet providers.

Check Point summarised the vulnerability, which affects all four models of router, back in 2017 as follows:

From looking into the UPnP description of the device, it can be seen that it supports a service type named ‘DeviceUpgrade’. This service is supposedly carrying out a firmware upgrade action by sending a request to ‘/ctrlt/DeviceUpgrade_1’ (referred to as controlURL) and is carried out with two elements named ‘NewStatusURL’ and ‘NewDownloadURL’.

The vulnerability allows remote administrators to execute arbitrary commands by injecting shell meta-characters “$()” in the NewStatusURL and NewDownloadURL…

The ISP’s report, dated 2013 and seen by The Register, stated:

An unauthenticated command execution vulnerability was discovered in the UPNP interface visible from the LAN on the Huawei Wireless Routers.

The UPNP schema, defined at http://192.168.1.1:37215/desc/DevUpg.xml describes the “Upgrade” action, which takes two arguments, “NewDownloadURL” and “NewStatusURL”.

The “NewStatusURL” parameter is vulnerable to command injection when commands are introduced via backticks. Injected commands are run with root privileges on the underlying operating system. No authentication credentials are required to exploit this vulnerability.

Our sources confirmed the issue Check Point found was the same one described in the internal ISP report into the HG523a and HG533 firmware.

That would mean Huawei knew of the security weakness in 2013, claimed it was fixed in 2014, yet didn’t fully address it in other routers until 2017 when Check Point got wind of it. The flawed UPnP service can be accessed from the LAN by local machines, and, depending on the default configuration, can face the public internet for anyone or any botnet to find.

The ISP-commissioned review, incidentally, documented exploiting the command-injection hole using backticks, while Check Point demonstrated exploiting the flaw using shell meta-characters. The end result is the same: shell commands can be inserted into URL parameters passed to the UPnP service running on the router by a hacker, which are executed with root-level privileges.

Specifically, the URLs are used directly on the command line to invoke a program on the device that handles security updates, without any sanity checks or sanitization; injecting commands into the URLs ensures they are executed as the updater program is invoked.
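To make the bug class concrete, here is a minimal C sketch – not Huawei's actual firmware source, which has not been published, and with invented names such as launch_upgrade_unsafe and /bin/upgrade – contrasting the pattern described above (an attacker-supplied URL spliced into a shell command) with a version that never hands the URL to a shell:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Unsafe pattern: the attacker-controlled NewStatusURL is spliced into a
   shell command line, so backticks or $(...) inside the URL run as root. */
static void launch_upgrade_unsafe(const char *status_url)
{
    char cmd[512];
    snprintf(cmd, sizeof(cmd), "/bin/upgrade -s %s", status_url);
    system(cmd);  /* anything injected into status_url executes here */
}

/* Safer pattern: validate the value, then exec the helper directly with the
   URL as a single argv element - no shell, so metacharacters are inert. */
static void launch_upgrade_safer(const char *status_url)
{
    if (strncmp(status_url, "http://", 7) != 0 &&
        strncmp(status_url, "https://", 8) != 0) {
        fprintf(stderr, "rejecting malformed status URL\n");
        return;
    }
    pid_t pid = fork();
    if (pid == 0) {
        execl("/bin/upgrade", "upgrade", "-s", status_url, (char *)NULL);
        _exit(127);  /* exec failed */
    }
}

In the first function, backtick or $() sequences embedded in NewStatusURL are expanded by the shell and run as root; in the second, the URL only ever reaches the updater as a single argument, so shell metacharacters are inert.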

Audit … Our own analysis of the flawed code in Huawei’s firmware: on the left, the dodgy function in the HG533; on the right, the HG532. The HG533 was quietly patched in 2014 following the ISP review, whereas the HG532 was fixed in 2017 after Check Point spotted it – despite both running functionally the same 32-bit MIPS code.

A technical analysis of the security screw-up can be found here.

In a statement, Huawei told The Register:

On November 27, 2017 Huawei was notified by Check Point Software Technologies Research Department of a possible remote code execution vulnerability in its HG532e and HG532S routers. The vulnerabilities highlighted within the report concerned these two routers only. Within days we issued a security notice and an update patch to rectify the vulnerability.

A 2014 report by one of our customers evaluated potential vulnerabilities in our HG523a and HG533 routers. As soon as these issues were presented to us, a patch was issued to fix them. Once made available to our customers, the HG533 and HG523a devices experienced no issues.

It added that it had published a public advisory on the Huawei PSIRT page in 2017.

The Chinese company did not address why it had patched the same code vulnerability in some products back in 2014 but not fixed the same pre-existing flaw in other routers until it was pointed out to the firm years later.

This is not the first time Huawei’s networking kit has been placed under the spotlight: the 2017 annual report by British code reviewers from the Huawei Cyber Security Evaluation Centre (HCSEC) obliquely criticised Huawei’s business practices around older elements reaching end-of-life while still embedded within Huawei products that had a longer expected lifespan.

The same team is expected to again criticise the Chinese manufacturer in this year’s report over its security practices. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/28/huawei_mirai_router_vulnerability/

Huawei savaged by Brit code review board over pisspoor dev practices

Britain’s Huawei oversight board has said the Chinese company is a threat to British national security after all – and some existing mobile network equipment will have to be ripped out and replaced to get rid of said threat.


“The work of HCSEC [Huawei Cyber Security Evaluation Centre]… reveals serious and systematic defects in Huawei’s software engineering and cyber security competence,” said the HCSEC oversight board in its annual report, published this morning.

HCSEC – aka The Cell – based in Banbury, Oxfordshire, allows UK spy crew GCHQ access to Huawei’s software code to inspect it for vulns and backdoors.

The oversight folk added: “Work has continued to identify concerning issues in Huawei’s approach to software development bringing significantly increased risk to UK operators, which requires ongoing management and mitigation.”

While the report itself does not identify any Chinese backdoors – the American tech bogeyman du jour – it highlights technical and security failures in Huawei’s development processes and attitude towards security for its mobile network equipment.

In some cases, remediation will also require hardware replacement (due to CPU and memory constraints) which may or may not be part of natural operator asset management and upgrade cycles… These findings are about basic engineering competence and cyber security hygiene that give rise to vulnerabilities that are capable of being exploited by a range of actors.

Even though Huawei has talked loudly about splurging $2bn on software development, heavily hinting that this would include security fixes, HCSEC scorned this. Describing the $2bn promise as “no more than a proposed initial budget for as yet unspecified activities”, HCSEC said it wanted to see “details of the transformation plan and evidence of its impact on products being used in UK networks before it can be confident it will drive change” before giving Huawei the green light.

The report’s findings had been telegraphed long in advance by British government officials, who have been waging war with Huawei through the medium of press briefings.

Amateurs in a world desperately needing professionals


One key problem highlighted by the HCSEC oversight board was “binary equivalence”, a problem Huawei has been relatively open about. HCSEC testers had previously flagged up that they could not tell whether the source code they were inspecting for Chinese government backdoors actually compiled into firmware equivalent to what was deployed in live production environments. Essentially, the concern is that software would behave differently when installed in the UK’s telecoms networks than it did during HCSEC’s tests.

In today’s report, the Banbury centre team said: “Work to validate them by HCSEC is still ongoing but has already exposed wider flaws in the underlying build process which need to be rectified before binary equivalence can be demonstrated at scale.”

Unless and until this is done it is not possible to be confident that the source code examined by HCSEC is precisely that used to build the binaries running in the UK networks.
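The check itself is conceptually simple – rebuild the firmware from the source HCSEC audited and compare the result, byte for byte, with the image operators actually run – and the hard part is making Huawei’s build process reproducible enough for that comparison to ever succeed. A minimal sketch of the comparison step, using OpenSSL’s SHA-256 routines and illustrative file names:

/* Sketch: rebuild the firmware from the audited source, then check whether the
   deployed image is byte-identical by comparing SHA-256 digests.
   Build: cc bin_equiv.c -lcrypto   (file names are illustrative) */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

static int sha256_of_file(const char *path, unsigned char out[SHA256_DIGEST_LENGTH])
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    SHA256_CTX ctx;
    SHA256_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);
    SHA256_Final(out, &ctx);
    return 0;
}

int main(void)
{
    unsigned char rebuilt[SHA256_DIGEST_LENGTH], deployed[SHA256_DIGEST_LENGTH];
    if (sha256_of_file("firmware_rebuilt_from_audited_source.bin", rebuilt) != 0 ||
        sha256_of_file("firmware_deployed_in_uk_network.bin", deployed) != 0) {
        perror("fopen");
        return 1;
    }
    int same = (memcmp(rebuilt, deployed, SHA256_DIGEST_LENGTH) == 0);
    printf("binary equivalence %s\n", same ? "demonstrated" : "NOT demonstrated");
    return same ? 0 : 2;
}

If the build embeds timestamps, build paths, or other non-deterministic data, the digests will never match even when the source is honest – exactly the sort of flaw in the underlying build process that HCSEC says must be rectified before binary equivalence can be demonstrated at scale.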

HCSEC also highlighted something The Register exclusively revealed precise details of this morning, saying: “It is difficult to be confident that vulnerabilities discovered in one build are remediated in another build through the normal operation of a sustained engineering process.”

It also criticised Huawei’s “configuration management improvements”, pointing out that these haven’t been “universally applied” across product and platform development groups. Huawei’s use of “an old and soon-to-be out of mainstream support version” of an unnamed real time operating system (RTOS) “supplied by a third party” was treated to some HCSEC criticism, even though Huawei bought extended support from the RTOS’s vendor.

HCSEC said: “The underlying cyber security risks brought about by the single memory space, single user context security model remain,” warning that Huawei has “no credible plan to reduce the risk in the UK of this real time operating system.”

Hygiene, that’s something you do in the shower with soap… right?

OpenSSL is used extensively by Huawei – and in HCSEC’s view perhaps too extensively:

In the first version of the software, there were 70 full copies of 4 different OpenSSL versions, ranging from 0.9.8 to 1.0.2k (including one from a vendor SDK) with partial copies of 14 versions, ranging from 0.9.7d to 1.0.2k, those partial copies numbering 304. Fragments of 10 versions, ranging from 0.9.6 to 1.0.2k, were also found across the codebase, with these normally being small sets of files that had been copied to import some particular functionality.

Even after HCSEC threw a wobbly and told Huawei to sort itself out pronto, the Chinese company still came back with software containing “code that is vulnerable to 10 publicly disclosed OpenSSL vulnerabilities, some dating back to 2006.”

Huawei also struggles to stick to its own secure coding guidelines’ rules on memory handling functions, as HCSEC lamented:

Analysis of relevant source code worryingly identified a number of pre-processor directives of the form “#define SAFE_LIBRARY_memcpy(dest, destMax, src, count) memcpy(dest, src, count)”, which redefine a safe function to an unsafe one, effectively removing any benefit of the work done to remove the unsafe functions.

“This sort of redefinition makes it harder for developers to make good security choices and the job of any code auditor exceptionally hard,” said the government reviewers.
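For contrast, here is roughly what a bounds-checked wrapper is meant to do, next to the redefinition pattern the reviewers quote. This is an illustrative sketch, not Huawei’s code:

#include <stdio.h>
#include <string.h>

/* What a bounds-checked wrapper is supposed to look like: refuse to copy
   more bytes than the destination can hold and report the failure. */
static int checked_memcpy(void *dest, size_t destMax, const void *src, size_t count)
{
    if (dest == NULL || src == NULL || count > destMax)
        return -1;            /* caller must handle the error */
    memcpy(dest, src, count);
    return 0;
}

/* The pattern the reviewers flagged: the "safe" name is mapped straight back
   to plain memcpy, so destMax is silently ignored and overflows sail through. */
#define SAFE_LIBRARY_memcpy(dest, destMax, src, count) memcpy(dest, src, count)

int main(void)
{
    char small[8];
    const char oversized[] = "far more than eight bytes of data";

    if (checked_memcpy(small, sizeof(small), oversized, sizeof(oversized)) != 0)
        puts("checked wrapper rejected the oversized copy");

    /* The redefined macro would happily overflow 'small' here:
       SAFE_LIBRARY_memcpy(small, sizeof(small), oversized, sizeof(oversized)); */
    return 0;
}

The reviewers’ point is not that the wrapper is hard to write, but that redefining it away silently undoes the hardening while leaving every call site looking safe to an auditor.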

In a statement issued this morning Huawei appeared not to be overly bothered about these and the other detailed flaws revealed by NCSC, saying that it “understands these concerns and takes them very seriously”. It added: “A high-level plan for the [software development transformation] programme has been developed and we will continue to work with UK operators and the NCSC during its implementation to meet the requirements created as cloud, digitization, and software-defined everything become more prevalent.”

Commenting on the NCSC’s vital conclusion that none of these cockups were the fault of the Chinese state’s intelligence-gathering organs, Rob Pritchard of the Cyber Security Expert told The Register: “I think this presents the UK government with an interesting dilemma – the HCSEC was set up essentially because of concerns about threats from the Chinese state to UK CNI (critical national infrastructure). Finding general issues is a good thing, but other vendors are not subject to this level of scrutiny. We have no real (at least not this in depth) assurance that products from rival vendors are more secure.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/28/hcsec_huawei_oversight_board_savaging_annual_report/

Inside Cyber Battlefields, the Newest Domain of War

In his Black Hat Asia keynote, Mikko Hypponen explored implications of “the next arms race” and why cyber will present challenges never before seen in warfare.

BLACK HAT ASIA 2019 – Singapore – The nature of war has moved across land, sea, air, and space. Now we find ourselves in the cyber domain, where a new arms race will challenge defenders as adversaries adopt new tools, technologies, and techniques.

Mikko Hypponen, chief research officer at F-Secure, today took the stage at Black Hat Asia to discuss the implications of cyber warfare and how it will present challenges not seen before. The nuclear arms race, which he noted lasted about 60 years, is behind us. Today’s conflicts unfold differently; as a result, we have different domains for different types of fighting.

“Technology has changed where wars are fought,” Hypponen explained in an interview with Dark Reading. When the Internet was first built, he continued, geographical lines didn’t seem to exist. It seemed a kind of borderless utopia where cross-country collaboration might be possible. Now, as we know, times have changed, and wars are now fought online.

Just as the domain of war has changed, so, too, have the tools used in battle. We’re no longer as worried about nuclear weapons as we were 20 years ago, Hypponen said. Nuclear weapons, used only twice in human history, are built on the power of deterrence: you know who has them and avoid conflict with them because of that power. The number of traditional weapons – fighter jets, bombers, and aircraft carriers – in each country can be learned via Google.

“We know exactly how many tanks the Russians have. We know exactly how many aircraft carriers the US has,” Hypponen explained, pointing to a screenshot of this information found online.

Digital weapons are poor at creating deterrence because nobody knows who has which tools. They are effective, affordable, and deniable – a dangerous combination of traits. “There are very few weapons that have deniability,” Hypponen emphasized. “Cyber weapons have that.”

It’s one of many qualities that make digital weapons particularly nefarious. Like guns and cannons of the past, cyber weapons also rot over time. The problem is, there’s no way of knowing when their expiration dates will arrive. Offensive toolkits used in the military include exploits targeting vulnerabilities that security researchers are constantly hunting and patching.

Because they don’t know how long their tools will remain viable, militaries have no guarantee their investment in digital weapons will yield an ROI. This creates a scenario in which those weapons are likely to end up being used, if only to justify the cost of building them, Hypponen added.

Nation-States vs. Cybercriminals: Defensive Tactics
Today’s government cyberattacks are predominantly for spying and espionage, and Hypponen noted the importance of distinguishing between spying and warfare. Most cybercriminals are after money. If a cybercriminal targets your organization, chances are they’re not particularly interested in the business itself. They’re looking for quick, easy cash.

Businesses don’t need advanced defenses to keep cybercrime at bay, Hypponen explained. If someone is seeking money and their target makes it difficult or expensive, they’ll move on to a victim with weaker defenses. “The Internet is a garden of low-hanging fruit,” Hypponen added.

Nation-states are different. They won’t change their mindset or swap targets. They’re following orders to break into a specific organization and steal data. They’ll keep at it until they succeed.

There are ways of fighting back, he continued. When an attacker creates unique Trojans or backdoors, for example, you can use those to detect them by reputation. Hypponen also advises companies to avoid building defenses like a fortress. High walls won’t prevent attackers from getting in – and the larger a network is, the more likely it will be breached.

Knowing your outside defenses will fail should change security experts’ mindset. Instead of focusing on the perimeter, focus on what’s inside the network. You’re more likely to spot intruders early, which will help your ability to detect attacks and respond faster.

What Comes After Cyber?
“I believe we are in the very beginning of the cyber arms race,” Hypponen said. Still, he added, “it’s important to remember this isn’t where it ends; there will be new domains.” While it’s hard to imagine what comes after cyber, mankind will never stop fighting. New conflicts will emerge.

Robotics and drones come to mind, he continued. Both already exist; however, ethics pose a challenge in development. We don’t like the idea of machines deciding who is killed, Hypponen explained, but different forces are driving us to war in a world where machines will kill on their own. Artificial intelligence (AI) and machine learning, both modern buzzwords in cyberspace, have potential to drive this.

We still have to define what we mean by AI and machine learning, he continued. We also have to be very, very careful about where technology companies draw the line as they race to build genuine AI. That rush to AI development is what concerns Hypponen.

“When you’re in a race, what you don’t do is stop and look around and make sure you’re doing everything carefully,” he pointed out.

Hypponen said he anticipates we’ll see machine learning in real-world cyberattacks as the barrier to entry lowers. Today, you have to be a computer science graduate to deploy a machine learning system. But in 10 years, or five, these systems will be so easy to deploy that anyone could do it – and they will. The lack of skill protects us now; it won’t protect us much longer.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/inside-cyber-battlefields-the-newest-domain-of-war/d/d-id/1334272?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Tidying Expert Marie Kondo: Cybersecurity Guru?

The “KonMari” method of decluttering can be a huge step toward greater security, according to a growing number of executives.

Marie Kondo is a cultural phenomenon. Her philosophy of “joy through tidying up,” which she shares on the popular Netflix series “Tidying Up With Marie Kondo,” has spawned countless houses minimally occupied by carefully rolled sweaters and perfectly folded linens. She’s the decluttering guru for millions.

Could she also be the cybersecurity guru you’ve been looking for?

“The more time I spend in the cybersecurity world, the more I see people just keep data — not insights — but just keep data for a rainy day,” says Grant Wernick, co-founder and CEO of Insight Engine. “Most of the time, nothing ever comes of any of this stuff.”

From a security perspective, that “stuff” can be a significant vulnerability. “If you don’t have the data to lose in the first place, you can’t lose it,” Wernick says. But what about all of the value that can come from big-data techniques applied to bottomless lakes of retained data?

“It’s always been the recommendation that if you don’t need the data, you shouldn’t have the data. And that removes the entire risk of losing the data,” says Chris Morales, head of security analytics at Vectra. And yet the availability of inexpensive storage has led to a “what if” mentality in many organizations, hoping that someday the techniques will exist to transmute mountains of currently meaningless data into security, marketing, or operational gold.

That sounds very much like the attitude Kondo has built an empire disrupting. Just as she advises individuals to look at each item and ask whether it brings joy (the “KonMari” method), organizations should look at data and ask whether it brings value in excess of its cost. Many organizations lack the formal process to look at data in a rational way.

“Holding on to data too long can be a liability, and getting rid of it too quickly can be a liability,” says Terence Jackson, CISO at Thycotic. The problem is that holding on to unneeded data can be very expensive — and dealing with it in order to make decisions can be expensive, too.

“Security teams are understaffed and overtasked,” Jackson says. “Adding another mandate to look at all the data a company has and building more committees sounds good, but in practice it can be difficult.”

Starting a process to figure out which data to keep can be hard, too — even without the voices that say, in spite of everything, keeping it all is the right answer.

On Twitter, Kris Lahiri, co-founder and CISO of Egnyte, took the expansive view of data retention while arguing in favor of classifying and categorizing information.

He was joined by Twitter user @dak3, who counseled keeping it all because you never know what might be useful in the future.

Vectra’s Morales says that even the prospect of someday being able to analyze data shouldn’t keep an organization from digitally tidying up on a regular basis. The most important question around keeping data, he says, is, “Why?”

“Just because you can doesn’t mean you should,” he explains. “We’re looking for threats now in security. I think that there is a time limit on the data because it’s retrospective at some point,” he says. “If I was running a department right now, I would want to keep at least 90 days of data. I think that’s reasonable.”

The enterprise analogy of joy is simple, Insight Engine’s Wernick says. “So many people look at things from, ‘Well, what data sources do I have? I’ll start there,'” he explains. “Instead, they should be starting from, ‘What use cases [do] I have [and] what [do] I want to achieve?'”

These tidying-up conversations are beginning to happen, but enterprise security professionals should pursue them with the zeal of KonMari converts. “I have conversations in business and my personal life about cleaning up the data trail because you just never know with some of the companies what their data hygiene is,” Thycotic’s Jackson says. “We should be keepers of our own data. We should understand who’s collecting, what they’re collecting, and why.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/analytics/tidying-expert-marie-kondo-cybersecurity-guru/d/d-id/1334271?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Everything I Needed to Know About Third-Party Risk Management, I Learned from Meet the Parents

How much do you trust your vendors? You don’t have to hook them up to a polygraph machine because there are better ways to establish trust.

Companies are increasingly dependent upon third parties to support key factors of their operations — from accounting or HR functions to building maintenance and landscaping. However, these relationships can also expose companies to cybersecurity risks based on the cybersecurity posture of the third parties. According to the Ponemon Institute, 56% of organizations have experienced a data breach caused by a third-party vendor, and 42% have suffered a data breach caused by an attack on one of their third parties.

In thinking about third-party risk management, I realized that one popular movie series — the Meet the Parents series, starring Robert De Niro and Ben Stiller — teaches us some valuable lessons.

Establish Your “Circle of Trust”
While in Meet the Parents the Circle of Trust referred to specific people, a company’s Circle of Trust should actually be constructed of multiple factors — and potentially multiple circles. This goes far beyond simply signing contracts with cybersecurity language; it involves continuous steps to ensure your partner is actually doing what they say they are (more about that below).

Specific focus areas for establishing your Third-Party Circle of Trust include: identifying the data/systems to which specific third parties will need access, establishing acceptable levels of cyber-risk that your company is willing to accept, determining the partners’ cybersecurity practices/enforcement, and setting a baseline for continuous partner monitoring.

Trust in Processes, but Verify Continuously
In the first movie, De Niro’s character, Jack Byrnes, subjects his daughter’s fiancé, Greg Focker (played by Stiller), to an over-the-top polygraph test. The funny scene ultimately shows the counterproductive reliance on one-time audits or assessments of third-party partners: summoning partners to periodic questionnaires, interviews, audits, or other scrutiny might look intimidating, but the movie shows us that, for all its good intentions, you can’t rely on these traditional methods to fully mitigate cyber-risks (even if your interview questions are much less awkward!).

We’re seeing an encouraging shift within contract negotiations that is bringing cybersecurity into the discussion earlier and adding lengthy, security-focused addendums to these contracts. While adding cybersecurity to the contract is a good step, it is critical to follow through on these contracts and verify that the partner is complying with the agreed-upon cybersecurity requirements.

I’m Watching You
After determining that a third-party vendor has acceptable-or-better cybersecurity policies and practices and establishing a relationship, it is incumbent upon you to reinforce protection through continuous monitoring. While you do not need to be quite as invasive as De Niro’s Byrnes, you should have eyes on your partners 24/7/365 with technologies sending real-time alerts if something is amiss.

Even (Over)protective Security Pros Seldom Make the Final Decision
The humor of the Meet the Parents franchise is that when two people meet and fall in love, it’s the integrity, compassion, and relationship between them that matters most — yet parents, friends, and other “advisers” tend to offer a lot of advice. This is well intended (we all love to have people we can trust to look out on our behalf or confide in), but again, it can be counterproductive when advice is subjective and poorly reasoned and, frankly, concerns a decision that is ultimately outside their purview.

The nature of business partnerships is different from personal relationships, but both hinge on trust, transparency, and embracing an opportunity for both parties, together. No one can ever seriously promise that bad things will not happen, but if the integrity and shared stakes truly matter, all sides do their part to realize the benefits. This is where security pros need to play the role of the “grounded friend” or “loving parent” we all trust.

Lessons Learned
As cyber-risk managers, we should anticipate the factors framing a prospective business relationship, respectfully speak up about the risks that exist, be available for in-depth conversations, and do our duty to make sure the right questions and variables are being asked and weighed, respectively — and then accept that a decision is going to be made whether we agree or not.

No one needs a “Jack Byrnes” flying around the world to polygraph suppliers. A better strategy is to embed cyber-risk conversations deeper in every part of the third-party partner life cycle, so that security pros feel empowered enough not to overreach — and executive “suitors” can be armed with the facts and leeway necessary to manage their relationships.


Brandon Dobrec has dedicated his career to cybersecurity, particularly to delivering the comprehensive threat data, intelligence, and tools required for organizations to minimize their business risk. Since joining LookingGlass in September 2016, Brandon has served as an … View Full Bio

Article source: https://www.darkreading.com/risk/everything-i-needed-to-know-about-third-party-risk-management-i-learned-from-meet-the-parents/a/d-id/1334269?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Office Depot, OfficeMax, Support.com cough up $35m after charging folks millions in ‘fake’ malware cleanup fees

Office Depot and Support.com have coughed up $35m after they were accused of lying to people that their PCs were infected with malware in order to charge them cleanup fees.

On Wednesday, the pair of businesses settled a lawsuit brought against them by the US Federal Trade Commission, which alleged staff at the tech duo falsely claimed software nasties were lingering on customers’ computers to make a fast buck.

The lawsuit, filed in southern Florida, claimed the two companies, including Office Depot subsidiary OfficeMax, from 2009 until November 2016 misrepresented the state of consumers’ computers by using a sales tool designed to convince people to pay for diagnostic and repair services.

“In numerous instances throughout this time period, Defendants used the PC Health Check Program to report to Office Depot Companies customers that the scan had found or identified ‘Malware Symptoms’ when it had not done so,” the complaint stated. “Additionally, in numerous instances, the PC Health Check Program falsely reported to consumers that the program had found ‘infections’ on the consumer’s computer.”

According to the watchdog’s complaint, the PC Health Check Program was incapable of finding malware. Support.com allegedly programmed the software so that whenever an Office Depot Company employee checked any one of four checkboxes describing a generic concern, like slowness, before the scan started, the scan would automatically report the detection of malware symptoms, and for a time, infections.

Hardcoded

But the results, it’s alleged, were predetermined. “Despite the statements in the PC Health Check Program’s Detailed Report that the scan ‘found infections’ or ‘found’ or ‘identified’ malware symptoms, the PC Health Check Program’s detection of malware symptoms was entirely dependent on whether any of the Initial Checkbox Statements was checked and not on the actual state of the computer,” the FTC court filing explained.
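In other words, per the FTC’s description, the “detection” was a function of the intake checkboxes alone. The following hypothetical reconstruction in C – the real program’s source has not been published, and every name here is invented – shows how such logic could report malware symptoms without consulting any scan at all:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical reconstruction of the logic described in the FTC complaint:
   the reported "malware symptoms" depend only on which intake checkboxes
   were ticked, never on anything found on the machine itself. */
struct intake_form {
    bool slow;           /* "my computer is slow"   */
    bool popups;         /* "I get pop-up ads"      */
    bool crashes;        /* "it crashes or freezes" */
    bool virus_warning;  /* "I saw a virus warning" */
};

static bool pc_health_check(const struct intake_form *boxes)
{
    /* No scan result is consulted at any point. */
    return boxes->slow || boxes->popups || boxes->crashes || boxes->virus_warning;
}

int main(void)
{
    struct intake_form customer = { .slow = true };  /* one generic complaint */
    if (pc_health_check(&customer))
        puts("Malware symptoms found - repair service recommended");
    else
        puts("No issues detected");
    return 0;
}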

The cost for PC Health Check could exceed $300, the complaint stated. The defendants, according to the FTC, bilked customers out of tens of millions of dollars. To settle the charges, Office Depot has agreed to pay $25m and Support.com will pay $10m. The money will be refunded to affected customers, the FTC says.

The alleged fraud appears to have been first reported in 2016 by Seattle TV station KIRO-TV, tipped off by a whistleblower.

Support.com did not immediately respond to requests for comment.

In a statement emailed to The Register, a spokesperson for Office Depot said, “Office Depot’s settlement with the Federal Trade Commission (FTC) resolves an investigation relating to a computer diagnostic service that was offered to Office Depot and OfficeMax customers prior to December 2016… While Office Depot does not admit to any wrongdoing regarding the FTC’s allegations, the company believes that the settlement is in its best interest in order to avoid protracted litigation.”

The FTC claims both companies “have been aware of concerns and complaints about the PC Health Check program since at least 2012” but Office Depot, it’s claimed, nonetheless continued to push the service until 2016.

So, no wrongdoing. But $35m in penalties. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/27/office_depot_support_com_fine_ftc/

New Shodan Tool Warns Organizations of Their Internet-Exposed Devices

Shodan Monitor is free to members of the popular Internet search engine.

Famed Internet search engine Shodan this week rolled out a service that helps solve the underlying problem its tool exposes: The new Shodan Monitor alerts organizations about their devices left exposed on the public Internet.

Security researchers long have employed the Shodan search tool to identify computers, databases, industrial control systems and devices, and consumer Internet of Things (IoT) products sitting wide open to attackers via open Internet ports or other misconfigurations. Most recently, a researcher discovered a MongoDB data instance with 150 gigabytes of data, including some 763 million email addresses, sitting on the public Net and in plain text. 

“Every other week there’s an exposed database leaking information or a consumer device that was misconfigured and is now exposing private data. The number of industrial control systems directly connected to the Internet without any authentication has been increasing at a rate of about 10% every year,” says John Matherly, creator and founder of Shodan. The wave of consumer IoT devices also is increasing, he says.

“Knowing what you have exposed to the Internet is required before any further security work can be done,” he explains. “It shouldn’t be rocket science to know what you have exposed to the Internet.”

Shodan Monitor represents a new brand of tool for Shodan, an online continuous monitoring service. Renowned security expert HD Moore – a pioneer in rooting out exposed and vulnerable devices and systems on the Internet, such as embedded devices, home routers, servers, corporate videoconferencing systems, and Web servers – says many “outside-in” scanning firms such as Shodan are expanding into continuous monitoring. They include Assetnote, BinaryEdge, Bit Discovery, Expanse.co, Hardenize, RiskRecon, and SecurityScorecard.

“I think monitoring is the way to make this technology most effective; bulk data and searching is nice, but it is much more useful when someone else does the difficult attribution work for you and tells you what changed,” says Moore, vice president of research and development for consultancy Atredis Partners. “It has been a fun few years watching the ‘scan the Internet’ firms turn their platforms into actual businesses.”

Shodan’s Matherly says Monitor was built to be simple and inexpensive, and a tool for organizations with less technical know-how and resources. “From a strategic perspective, this is our first foray into creating services that don’t require advanced technical knowledge. In the past, much of our focus was on the Shodan platform, which has been capable of doing this for a long time, but it required usage of our API, which means there was a technical barrier to entry,” he says. “After a decade of building out the platform, it’s time to make it more accessible to nontechnical users.”
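For a sense of that technical barrier, the older workflow meant calling Shodan’s REST API yourself – for example, its host-lookup endpoint – and sifting through the JSON it returns. A rough libcurl sketch, with a placeholder IP address (TEST-NET-3) and an API key taken from the environment:

/* Sketch of the pre-Monitor workflow: fetch what Shodan has indexed about one
   of your own IP addresses via its REST API. Requires libcurl.
   Build: cc shodan_check.c -lcurl */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *key = getenv("SHODAN_API_KEY");
    const char *ip  = "203.0.113.10";   /* placeholder: an address you own */
    if (key == NULL) {
        fprintf(stderr, "set SHODAN_API_KEY first\n");
        return 1;
    }

    char url[512];
    snprintf(url, sizeof(url),
             "https://api.shodan.io/shodan/host/%s?key=%s", ip, key);

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    /* The JSON response (open ports, banners, tags) is written to stdout. */
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}

Shodan Monitor wraps this sort of lookup, plus scheduled re-checks and email alerts, behind a web form – which is the accessibility gap Matherly says the new service is meant to close.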

Matherly says setting up Shodan Monitor – which is free to all paying Shodan members – takes less than a minute, and Shodan sends an email when it finds an exposed device. It monitors up to 16 IPs for Shodan members (who pay $49 to join) and 300,000 IPs for Shodan Corporate API members. He says many of the existing services and products that offer this type of monitoring are pricey and overly complex, with an overload of dashboard data and confusing alerts.

“We’re hoping that this will put a dent in the number of exposed devices and prevent recurring issues like we see with MongoDB and industrial control systems,” he says.

Stephen Cobb, senior security researcher at ESET, notes that it’s become more difficult for organizations to get a handle on their networks. “Today’s rapidly expanding universe of sensors, cloud storage, remote access, and IoT devices has created levels of complexity that are impossible to secure without constant monitoring, both within and without,” he says. He sees Shodan Monitor as a tool for organizations that don’t have the technical expertise or resources.

“Since its inception, Shodan has played a valuable role in monitoring efforts while at the same time revealing the need for such monitoring,” Cobb says.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/cloud/new-shodan-tool-warns-organizations-of-their-internet-exposed-devices/d/d-id/1334268?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

6 Things To Know About the Ransomware That Hit Norsk Hydro

In just one week, ‘LockerGoga’ has cost the Norwegian aluminum maker $40 million as it struggles to recover operations across Europe and North America.

LockerGoga – the malware that recently disrupted operations at Norwegian aluminum company Norsk Hydro – is the latest example of the rapidly changing nature of ransomware attacks.

The March 19 attack impacted critical operations in several of Hydro’s business areas across Europe and North America. The attack forced the aluminum maker to resort to manual operations at multiple plants. It crippled production systems belonging to Hydro’s Extruded Solutions group in particular, resulting in temporary plant closures and operational slowdowns from which the company is still recovering.

In two updates this week, Norsk Hydro put the cost of the attack so far at about $40 million.

The attack comes amid an overall decline in ransomware campaigns and highlights what security experts say is a shift to more narrowly focused, targeted ransomware intrusions. “Ransomware as a generic threat family is absolutely on the decline,” says Rik Ferguson, vice president security research at Trend Micro.

Ransomware-related events have declined 91% year over year, and the number of new ransomware families in the marketplace has declined 32%, he says. “[But] those players still in the game are the more talented ones still seeking to innovate on this technique, to find new victim populations, to gain greater leverage, and to sow greater disruption and reap consequentially larger rewards.”

Some examples of groups using ransomware in this manner include Pinchy Spider, the group behind the GandCrab ransomware family; Boss Spider, the authors of SamSam; Indrik Spider, the threat actor using BitPaymer; and Grim Spider, the operators of Ryuk. In most cases the newer attacks are notable not necessarily because of how sophisticated the ransomware tools are, but because of how they are being used.

Here’s a look at the most notable features and capabilities of LockerGoga:

1. LockerGoga changes passwords.

Security researchers are still not sure how the attackers are initially infecting systems with LockerGoga, though several believe that spear-phishing is the most likely scenario.

Once LockerGoga infects a system, it changes all the local user account passwords to ‘[email protected]’ before attempting to boot local and remote users out of the system, Ferguson says. The password change complicates local intervention processes. It also “affects any system services using local accounts running on servers, sending availability ripples throughout the targeted organization,” Ferguson says.

F-Secure, however, described LockerGoga as only changing administrator account passwords to ‘[email protected]’.

2. It forcibly logs victims out of infected systems.

Early versions of LockerGoga merely encrypted files and other data on infected systems and presented victims with a note demanding a ransom in exchange for the decryption keys. Newer versions of the malware have included a capability to forcibly log the victim out of an infected system and remove their ability to log back in as well.

“The consequence is that in many cases, the victim may not even be able to view the ransom note, let alone attempt to comply with any ransom demands,” Cisco Talos noted in a blog. This capability makes newer versions of LockerGoga destructive in nature, the vendor said.

3. It has no use for the network.

Unlike some other ransomware families, LockerGoga does not rely on the network for command and communications, nor to generate encryption keys. “In fact, LockerGoga disdains the network to such an extent that it also attempts to locally disable all network interfaces,” Ferguson says. The goal is “to further isolate the affected computer and to complicate recovery, necessitating direct local intervention.”

4. It doesn’t self-propagate (yet).

LockerGoga has no obvious worm-like capabilities for self-propagation since it does not rely on the network. Security researchers from Palo Alto Networks’ Unit 42 group said they have observed LockerGoga moving around a compromised network via the Server Message Block (SMB) protocol. That “indicates the actors simply manually copy files from computer to computer,” the vendor said in a blog Tuesday.

However, recent additions and updates to the malware since it first surfaced in January suggest that the authors may be enabling a network capability. As an example, the security vendor pointed to the addition of WS2_32.dll processes for handling network connections and the use of undocumented Windows API calls.

The additions suggest “the developers are building in [a] network capability for the ransomware which could be used for Command and Control, or network self-propagation capabilities,” says Ryan Olson, vice president of threat intelligence at Unit 42 at Palo Alto Networks.

The use of the undocumented Windows APIs demonstrates a relatively high degree of technical sophistication and familiarity with Windows internals, he says. “The capabilities that we see for possible C2 or network self-propagation could make this a more dangerous kind of ransomware in the future,” Olson notes.

5. It appears designed for targeted attacks.

With no self-propagation and no use of the network, LockerGoga appears to be built for targeted attacks.

The code—at least initially—was digitally signed with valid certificates from at least three organizations. Those certificates have since been revoked, says Trend Micro’s Ferguson.

The ransomware also incorporates techniques that have been designed to evade sandboxing and machine learning based detection mechanisms, he says.

“The main process thread for some of LockerGoga’s variants, for example, sleeps over 100 times before it executes,” Trend Micro said in a blog analyzing the malware.

One scenario for which the ransomware appears designed is for when attackers have already gained some level of access within an organization, Ferguson says. An example is where an attacker might have access to the Active Directory infrastructure “and are able to deploy the ransomware in advance, across the affected estate, before triggering the encryption routine,” he says.

6. The authors have been trying to pass off LockerGoga as CryptoLocker.

Christopher Elisan, director of intelligence at Flashpoint, says the authors of LockerGoga appear to have gone to some lengths to pass off the malware as a version of the notorious CryptoLocker ransomware. LockerGoga uses Crypto++, an open source crypto library, and newer versions even use “crypto-locker” as the project folder name.

There is also some research showing that LockerGoga contains bugs in its code, Elisan adds. “If this is the case, it makes [LockerGoga] more dangerous for victimized organizations because any attempt to decrypt the files even after payment of ransom might not be successful due to buggy encryption.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/6-things-to-know-about-the-ransomware-that-hit-norsk-hydro/d/d-id/1334270?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ep. 025 – Business Email Compromise and IoT surprises [PODCAST]

We explain how to avoid losing money to the cybercrime known as BEC, short for Business Email Compromise, and our experts give you some great tips on what to look out for when you plug new devices into your network.

With Paul Ducklin, Matthew Boddy and Benedict Jones.

This week’s links:

To get your own copy of the free Sophos XG Firewall Home Edition mentioned in the podcast:

If you enjoy the podcast, please share it with other people interested in cybersecurity, and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...


Thanks to Purple Planet for the opening and closing music.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3qSFHge57UQ/

87% of Cloud Pros Say Lack of Visibility Masks Security Threats

The majority of cloud IT professionals find a direct link between network visibility and business value, new data shows.

Most (84%) businesses increased their cloud-based workloads in 2018, but lack of visibility into those workloads could compromise security and business value, cloud experts report. Only 13% of companies surveyed reported the same level of public cloud usage as the previous year.

These findings come from “The State of Cloud Monitoring,” a new report released today by Keysight, which polled 300-plus IT professionals who handle public and private cloud deployments in global organizations across 15 countries. Nearly seven out of 10 respondents said public cloud monitoring is more difficult than monitoring data centers and private cloud environments, and less than 20% said their organizations can properly monitor public cloud environments.

The lack of visibility is masking security threats, according to 87% of respondents. It also leads to a variety of application and network performance issues, including the inability to deliver against service agreements. Most (95%) pros said visibility problems led to an application or network performance issue, and 99% reported they notice a direct link between network visibility and business value.

What problems does this lack of visibility cause? Delays with troubleshooting application performance issues (48%) were most common, followed by delays in troubleshooting network performance issues (40%), an application outage (38%), and the inability to monitor performance of workloads in the cloud (31%), which tied with network outage (31%).

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/analytics/87--of-cloud-pros-say-lack-of-visibility-masks-security/d/d-id/1334236?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple