Pen testers find mystery black box connected to ship’s engines

If an attacker wanted to sneak a monitoring device into a target network, how might they go about it?

As Naked Security reported last week, they could try soldering a tiny chip on to the circuit board of something like a firewall on the assumption that it will never be noticed.

But there might be a much simpler approach – hide the device in plain sight, safe in the knowledge that its very conspicuousness means its legitimacy will probably never be questioned.

This was the initial suspicion of a team from UK-based outfit Pen Test Partners when they noticed an unlabelled, “potentially toxic box” connected to the onboard LAN of a ship that the team was performing a security assessment on.

Ship networks feature a lot of specialised equipment, of course, but every box should have a purpose. And yet, after enquiring about its origins, the message came back:

Fleet management told us that shoreside had no invoice, record, or inventory listing for it. They were blissfully unaware of its existence.

It had an Ethernet connection to the ship LAN but was also connected to a Windows console on the bridge which was so bright at night that the crew covered it up. The assumption had been that it was meant to be there.

“Suspicious”

The box had a second Ethernet connection which, on analysis, the pen testers discovered was carrying NMEA data encapsulated in UDP. NMEA is a standard format that provides a universal interface for different GPS systems, which suggested the box had something to do with the onboard Electronic Chart Display and Information System (ECDIS).
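For readers unfamiliar with the format, the minimal sketch below shows roughly what inspecting that kind of traffic looks like: it listens on a UDP port, verifies each NMEA sentence's checksum, and prints the talker and sentence type. The port number and usage are assumptions for illustration, not details from the assessment.

```python
import socket

def nmea_checksum_ok(sentence: str) -> bool:
    """Validate an NMEA 0183 sentence such as '$GPGGA,...*47'."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, declared = sentence[1:].partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)          # checksum is the XOR of the chars between '$' and '*'
    return f"{calc:02X}" == declared.strip().upper()[:2]

def listen(port: int = 10110) -> None:
    # 10110 is a commonly used port for NMEA-over-UDP, but that is an assumption;
    # the device described in the article could have used any port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, addr = sock.recvfrom(4096)
        for line in data.decode("ascii", errors="replace").splitlines():
            if nmea_checksum_ok(line):
                print(f"{addr[0]} -> {line[1:6]}: {line}")   # e.g. GPGGA = GPS fix data

if __name__ == "__main__":
    listen()
```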

It also had an RS232 serial converter connected to it, leading to a cable that disappeared into the deck. The traffic running across this was Modbus, an ancient master-slave protocol still used by industrial control systems (ICS).

After probing to see whether the device at the far end would answer when fed Modbus data, the testers traced the link 11 decks down to the ship’s engine, adjacent to the safety systems designed to slow or shut the engine down.
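To give a sense of how simple Modbus is – and why a stray Windows box speaking it to the engine room is alarming – here is a minimal sketch that builds a "Read Holding Registers" request frame of the kind a master would send over such a serial link. The slave address and register range are made up for illustration; the pen testers' actual probing method isn't described beyond "feeding it data".

```python
def crc16_modbus(frame: bytes) -> bytes:
    """Modbus RTU CRC-16 (polynomial 0xA001), returned low byte first as the wire expects."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    """Build a Modbus RTU 'Read Holding Registers' (function 0x03) request frame."""
    pdu = bytes([slave, 0x03]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
    return pdu + crc16_modbus(pdu)

# Hypothetical example: ask slave 1 for 2 registers starting at address 0.
# A listening slave answers with its own CRC-protected frame; silence or an
# exception code tells you whether anything is really on the other end.
print(read_holding_registers(slave=1, start=0, count=2).hex(" "))
```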

We’d found a Windows machine that was connected to main engine controls, which no one knew about.

It was obviously alarming that an unknown device was connected to a system involved in ship safety. Comically, the Windows console was running a long-unpatched version of TeamViewer.

The culprit

It turned out that the box had been installed legitimately some years before by a third party to monitor fuel and engine efficiency, then forgotten about and left running even though the arrangement had long since ended.

A vulnerable box that no-one knew about with a direct, remote connection to the main engine.

One observation from this is that engineers and crew simply assumed it had a right to be there even though nobody knew what it was doing.

Which raises the question: how many more mystery boxes might be quietly sitting connected to numerous other networks?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JHJliLzwFeQ/

S2 Ep 13: Weird Android zero day and other tech fails – Naked Security podcast

Episode 13 of the Naked Security podcast is now available.

This week I step in to host the show with Sophos experts Mark Stockley and Greg Iddon.

We discuss Twitter’s two-factor authentication faux pas [10’51”], the risks of copying and pasting code from Stack Overflow [22’20”] and an Android zero-day with a difference [35’50”].

This week we’re recording an additional episode about the cultures of social media in honour of National Cybersecurity Awareness Month, so come back later this week to listen.

Listen wherever you get your podcasts – just search for Naked Security.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3JL9rtTl8no/

Hundreds charged in internet’s biggest child-abuse swap-shop site bust: IP addy leak led cops to sys-op’s home

US prosecutors say a South Korean man was behind the largest child-abuse image-swapping operation yet found on the internet.

Uncle Sam’s legal eagles today claimed Jong Woo Son, 23, was the administrator of the now-defunct “Welcome to Video” dark-web site, which was frequented by hundreds if not thousands of perverts in the US and internationally and advertised itself as having served more than a million image and video downloads.

Son, already serving a lengthy prison sentence in South Korea for masterminding the site, was indicted in the US on nine counts related to the creation and distribution of child sexual abuse images as well as money laundering.

(Beware: the indictment, filed last year and unsealed this week, is disturbing and graphic.)

Son was charged with operating Welcome to Video, a massive underage-sex-abuse-image-sharing Tor-hidden site that at its peak held roughly eight terabytes of media and 200,000 video files; 337 other defendants, alleged users of the site, were charged alongside him. Users either paid 0.03 Bitcoin for download rights or earned credits by uploading new child-abuse material.

The site, dedicated from its creation in 2015 until its takedown in March 2018 to peddling sex abuse imagery – it even instructed fiends not to upload any legit adult pornography – was accessed through Tor, and told netizens to pay for files with Bitcoin sent to a unique address assigned to each account.

Follow the money

It is estimated that roughly one million Bitcoin wallets were affiliated with the site. Those accounts proved useful: by tracing the flow of Bitcoin from the site to various exchanges and wallets – with the help of blockchain analytics outfit Chainalysis – government agents and police were able to track down the individual users behind the spread of this horrific content.

In addition, investigators got lucky when the Welcome to Video server was briefly misconfigured and revealed a couple of public IP addresses. South Korean telcos traced these back to a system hosted at Son’s own home.

In addition to dismantling the site and charging alleged users, prosecutors said they were able to rescue 23 children in the US, UK, and Spain from abusers.

“Through the sophisticated tracing of bitcoin transactions, IRS-Criminal Investigation special agents were able to determine the location of the dark-net server, identify the administrator of the website and ultimately track down the website server’s physical location in South Korea,” said IRS-Criminal Investigation Chief Don Fort. Yes, that’s IRS as in the US tax collection authority.


“This large-scale criminal enterprise that endangered the safety of children around the world is no more. Regardless of the illicit scheme, and whether the proceeds are virtual or tangible, we will continue to work with our federal and international partners to track down these disgusting organizations and bring them to justice.”

Along with the criminal charges, prosecutors are looking to recover any money Son and 23 other conspirators allegedly made through the operation. Between June 2015 and March last year, the site is thought to have earned over $370,000 in Bitcoin.

In announcing the takedown, the US government made a point not only of going after the people who traded in the abuse images and videos, but also of slamming the Tor and Bitcoin services that were used to support the site.

“Operators of anonymization services like Tor must ask themselves whether they are doing their part to protect children and make their platform inhospitable to criminals,” said deputy US attorney general Richard Downing.

“Society must decide whether it will accept these lawless online spaces, whether American taxpayers should fund them, or whether we will instead demand that providers act to prioritize protecting children from online predators.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/16/korea_child_abuse_bust/

Schadenfreude Is a Bad Look & Other Observations About Recent Disclosures

The debate about whether Android or iOS is the more inherently secure platform misses the larger issues that both platforms are valuable targets and security today is no guarantee of security tomorrow.

It always feels a little unsavory when tech giants make public spectacles of security issues affecting competitors, especially against the backdrop of their pitched battle for primacy in the sphere of modern computing and the Internet. But it is hardly uncommon, whether it’s Apple revoking Facebook and Google developer certificates due to perceived abuse or, more recently, when Google Project Zero published an extensive write-up detailing a series of Apple iOS vulnerabilities and their exploitation “in the wild.”

The revelation of these exploits is significant primarily because it contradicts the prevailing wisdom that mobile OS zero-days are narrowly targeted at individuals. Unlike previous zero-days, these exploits appear to have been used in a long-running watering-hole attack aimed at ethnic groups rather than specific individuals, though the delivery mechanism meant that anyone visiting the compromised websites would be the object of attack.

The vulnerability disclosures — coupled with the subsequent increase in payouts for Android exploit chains — reinvigorated the discussion about the relative security of Android versus iOS and open versus closed source software more generally. Some researchers credit the open source roots of Android for increased security, and the reasoning is clear: Linus’ Law famously says “given enough eyeballs, all bugs are shallow,” a statement that should be equally true regardless of whether the bugs in question affect the function or the security of software.

Unsurprisingly, the reality is more nuanced. One claim in the debate is that the closed source nature of iOS makes it harder for white-hat researchers to identify vulnerabilities. This implies that intent is a necessary factor in vulnerability discovery and exploitation, and it ignores the fact that iOS vulnerabilities are discovered and exploited with some regularity (even if those exploits exist only to demonstrate severity and never progress past the proof-of-concept stage). Indeed, the work of the Project Zero researchers itself contradicts that notion: they have been reporting iOS vulnerabilities since 2014.

They also separately discovered one of the same vulnerabilities in use by the attackers, though the intersection of those independent discoveries may be the exception rather than the rule. According to a Rand Corporation report, only 5.7% of vulnerabilities discovered by one party were independently discovered by another party within 12 months (the report does not, unfortunately, compare and contrast open and closed source software). If such statistics don’t cast doubt on the idea of enough eyeballs making bugs shallow, then they at least raise questions about whether we’ve reached the critical mass of eyeballs and whether or not those eyeballs interpret what they’re seeing the same way.

Though this set of exploits is alarming due to its capabilities, scale, and longevity, it is by no means the first instance of an extremely powerful and long-lived iOS exploit. In August 2016, Citizen Lab and Lookout uncovered the use of the so-called Trident vulnerabilities and Pegasus malware. Then, as now, there were proclamations about the relative security of Android and iOS. In the early days, many “high-value” targets were iOS users. Unsurprisingly, many exploit developers focused their efforts on iOS with varying degrees of success. It is important to remember, however, that absence of proof is not proof of absence, and a little less than a year after Pegasus, Chrysaor — the Android equivalent of Pegasus — was uncovered.

This parallel highlights an important fact: While threat actors might initially focus on a particular platform, it is unlikely that their objectives can be met by focusing exclusively on that platform. Increasing the number of targets is, by definition, a change in requirements. And it should go without saying — even if one accepts the premise that one platform is more difficult to exploit than another — difficult does not mean impossible. Like any “software” project, combining a change in requirements with a more difficult technical implementation typically increases costs. Rather than viewing the higher Android exploit prices as an indirect endorsement of platform security (though they are), it may be more useful to take them at face value: a bigger incentive to find exploitable vulnerabilities that will drive focus accordingly. As security researcher The Grugq recently reminded the Twitter-verse, “The people that buy those exploits? A million dollars isn’t even a rounding error. … Money is not a scarce resource for a serious threat actor.”

Lastly, there is the issue of the long tail. The difference between Android and iOS exploit acquisition costs may reflect something unexpected: a potentially longer shelf life. While current versions of Android may be more difficult to exploit, nearly 54% of Android devices are running a version that is not guaranteed to receive security updates (that is, Android 7.0/Nougat and older; only Android 7.1 and newer receive security updates) compared with 12% of iOS devices. A typical iOS device will receive major OS and security updates for one to two years more than the best-case equivalent for Android.

Ultimately, though, the issue isn’t which platform is more secure. As Project Zero researcher Ian Beer said in his preface describing these vulnerabilities and exploits, “Real users make risk decisions based on the public perception of the security of these devices,” devices that are a critical part of the lives of nearly one-third of the world’s population. Hopefully, platform developers, enterprises, and end users alike are heeding the advice Alex Stamos offers in his reworked version of the Apple response to the Project Zero blog posts by “staying vigilant in looking for attacks.” If there is a silver lining to more widespread use of exploits, it is that it should attract more eyeballs; those additional eyeballs may not make the bugs shallow, but they will hopefully make them obvious.


James Plouffe is a Lead Architect with MobileIron and a Technical Consultant for the hit series Mr. Robot. In his role as a member of the MobileIron Product and Ecosystem team, he is responsible for driving integrations with new technology partners, enhancing existing …

Article source: https://www.darkreading.com/endpoint/schadenfreude-is-a-bad-look-and-other-observations-about-recent-disclosures/a/d-id/1336060?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

SailPoint Buys Orkus and OverWatchID to Strengthen Cloud Access Governance

The $37.5 million acquisitions will boost SailPoint’s portfolio across all cloud platforms.

SailPoint, an identity governance company, has purchased startups Orkus and OverWatchID for a total of $37.5 million. Integration of the two companies’ technologies into the SailPoint Predictive Identity Platform is expected to be completed in the first half of 2020.

Orkus technology uses artificial intelligence and machine learning to continuously monitor patterns of access relationships for each enterprise cloud resource. OverWatchID uses activity information to beef up access controls, especially in the cloud. Both companies’ technologies will be completely integrated into SailPoint.




Article source: https://www.darkreading.com/cloud/sailpoint-buys-orkus-and-overwatchid-to-strengthen-cloud-access-governance/d/d-id/1336108?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybersecurity Advice From Betty White

Among the beloved entertainer’s advice: “Double bag those passwords.” Thanks, Betty.

Source: Warlock Of WiFi

What security-related videos have made you laugh? Let us know! Send them to [email protected].

Beyond the Edge content is curated by Dark Reading editors and created by external sources, credited for their work.

Article source: https://www.darkreading.com/edge/theedge/cybersecurity-advice-from-betty-white/b/d-id/1336100?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Build a Rock-Solid Cybersecurity Culture

In part one of this two-part series, we start with the basics – getting everyone to understand what’s at stake – and then look at lessons from the trenches.

Jerry Gamblin has a question he wants security managers to ask employees throughout their organizations: Why is our data worth protecting?

“Go ahead — survey a few co-workers with this question,” says Gamblin, a principal security engineer with Kenna Security, as he plants his tongue firmly in his cheek. “Were you satisfied with the answers? Did they understand clearly what data your organization collects and why it’s important to protect it?”

Ay, and there’s the rub. The lament of so many CISOs and security managers around the globe is this: While the organization may claim to care about security, do those within it understand why? Do they know what is truly at stake should a breach or security incident occur?

October is National Cybersecurity Awareness Month. While any security leader worth his or her salt will say awareness should be a year-round, continuous effort, every October gives security departments their time in the sun to crow about the importance of employee awareness and education. This, in and of itself, reveals that security is now a priority for business.

“When I first started talking about security many years ago, employees looked at it as a ‘work’ thing – something that was done by the IT people who wanted to make sure the company’s systems didn’t break,” says Roland Cloutier, corporate vice president and chief security officer at ADP. “Training was almost nonexistent, and security was seen as optional.”

But not anymore. Gone are the days when security was seen as an IT function. Employees typically know security is much more. But in order to instill a true understanding of security throughout, CISOs need to focus on building a security culture. Veteran CISOs will tell you there are effective ways to get this done and, um, less-than-effective ways, too (think: shaming). But more on that later.

Back to Basics
According to Gamblin, step one of building a security culture is to start with a basic PowerPoint about corporate data policies and procedures. At least then people will understand what they’re protecting – and why.

This presentation, he says, should include “why you collect the data, why it’s important that it’s collected, and what could happen if the data were stolen. Once everyone in the organization has a clear understanding of this, the security culture will grow organically,” he says.

We know from plenty of studies that end-user error leads to the majority of security incidents in organizations. We also know that despite best intentions, people make mistakes. Training is important in helping them recognize threats and understand that recognizing risk is everyone’s responsibility.

“Require employees to take engaging and continuous security training that both educates them on your company’s security policies but also sets a tone that security is important to the company and its leadership,” Cloutier says. “This can be done by email, blogs, intranet postings – anywhere employees are looking. The tone should be light, engaging, and focused on storytelling.”

The training you offer should be about more than why it is important to protect corporate data and intellectual property. Jill Knesek, chief security officer at Cheetah Digital and formerly the CISO at Mattel, says the information needs to be relevant to personal lives in order to get employees to take notice.

“Once your employees start thinking about security at work and home, it will become second nature and will increase the possibility that your employee makes the right decision when they get a malicious phishing email,” she says.

Of course, sometimes when employees do things that cause a breach, they aren’t mistakes. Malicious insiders are a massive problem for security. Verizon’s Insider Threat report finds the top motivations of malicious insiders are financial gain, fun, and espionage. But most employees want to do the right thing, and ADP’s Cloutier advises giving them the tools to take security into their own hands if needed and report suspicious activity.

“Implement an easy, user-friendly way for employees to report incidents to the security organization without fear of repercussion,” he says. “If employees feel that the security organization is there to help them and protect them, they will be more likely to report incidents as they arise. You can do this by setting up a website, offering a toll-free phone number, or even having an app that is right on their mobile device.”

Championing the Cause of Security
An emerging concept in companies that are serious about security is the “security champion” program. In some companies, a security champion serves as the voice of the developer on security issues. But many organizations are also designating champions throughout the business. The champion takes on the responsibility of acting as the primary advocate for security within a team, acting as a first line of defense for security issues.

Lena Smart, CISO with MongoDB, is building a security champion program with interested employees eager to help with security posture.

“We have volunteers from many teams, globally, who are willing to become the ‘security champion’ for their group,” Smart says. “This includes the opportunity to meet directly with security leadership on best practices and to incorporate those security practices within their particular business unit.”

The volunteers already have an interest in security, and their outside perspective helps diversify the security organization, Smart says. They act as a conduit between internal teams to help break down silos, while shifting security to a shared goal. This fosters the attitude that security is an organization-wide responsibility.

Create a No-Shame Zone
As noted earlier, employees are going to make mistakes. Negative reactions will get you nowhere.

“Trying to force change by lecturing and shaming people on their security or lack thereof will rarely elicit the changes you want,” Smart says. “Instead, make security a shared focus by inviting all departments into the security organization.”

Cloutier agrees, and in his long career has come to understand how ineffective shaming is for building a security culture.

“The biggest lesson I’ve learned is that pushing policies and punishing people for making mistakes never works,” he says.

Keep the education engaging, interesting – and fun, he advises.

Yes, fun. The security department can have fun sometimes. In part two of our look at building security culture, we will talk to some CISOs who believe the key to building security culture is through soft skills – and by making the security department “lovable.” Who knew security could be warm and cuddly?



Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.

Article source: https://www.darkreading.com/edge/theedge/how-to-build-a-rock-solid-cybersecurity-culture/b/d-id/1336109?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Typosquatting Websites Proliferate in Run-up to US Elections

People who mistype the URL for their political candidate or party’s website could end up on an opposing party or candidate’s website, Digital Shadows’ research shows.

Another sign that the Internet has become the newest venue for political battles is the sheer number of websites that have sprung up apparently designed to confuse or take advantage of people following the 2020 US presidential elections.

Researchers from Digital Shadows recently looked at how many domains they could find that appear to be targeting users who accidentally mistype the website address for a US political candidate or an election-related domain (for example berniesander.com instead of berniesanders.com).
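The kind of look-alike domains the researchers hunted for can be enumerated mechanically. The sketch below is a minimal illustration (not Digital Shadows’ methodology) that generates a few common typo classes – dropped letters, swapped neighbours, and doubled characters – for a given domain.

```python
def typo_variants(domain: str) -> set:
    """Generate simple typosquat candidates for a domain like 'berniesanders.com'."""
    name, dot, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])          # omission, e.g. berniesander
        variants.add(name[:i] + name[i] + name[i:])    # duplication, e.g. berniessanders
        if i < len(name) - 1:
            swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
            variants.add(swapped)                      # transposition, e.g. berniesnaders
    variants.discard(name)
    return {v + dot + tld for v in variants if v}

# Example: candidates a user might mistype instead of berniesanders.com
for candidate in sorted(typo_variants("berniesanders.com"))[:10]:
    print(candidate)
```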

They wanted to find out where exactly on the Internet individuals who made such typos would end up and discovered over 550 phony websites for the 19 Democratic and 4 Republican candidates and 11 other election-related domains as of Sept. 20, 2019.

About one quarter of the domains (24%) appeared harmless and simply ‘parked’ with no content. Another 8% appeared to be the result of the URLs not being correctly configured when initially created: many of these sites hosted nothing but an index page.

The remaining 68% of typosquatting sites, however, actively redirected people to entirely new sites – some of which could eventually end up being used for nefarious purposes. For example, Digital Shadows found that an individual who inadvertently typed elizibethwarren.com would get redirected to a donaldjtrump.com page. Similarly, someone entering donaldtrump.digital would instead end up on hillaryclinton.com.

Users wanting to donate to specific Republican candidates by going to the WinRed site get redirected to the ActBlue fundraising site for Democrats if they accidentally submit winrde.com instead of winred.com. Digital Shadows researchers found similar redirects for sites associated with several other candidates including Tulsi Gabbard, Bernie Sanders, and Joe Biden.

Six of the typosquatting domains studied by Digital Shadows redirected users to various “secure browsing” and “file converter” Google Chrome extensions. While none of the extensions appeared overtly malicious, the permissions they required appeared “unreasonably high,” Digital Shadows said in a report this week that summarized its research. Three of the extensions had access to cookies in the user’s browser.
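A rough way to gauge whether an extension asks for more than it needs is simply to inspect the permissions declared in its manifest. The sketch below is a toy illustration, not the tool Digital Shadows used; the list of “broad” permissions is an assumption based on common guidance.

```python
import json
from pathlib import Path

# Permissions that grant wide visibility into browsing activity (illustrative list).
BROAD_PERMISSIONS = {"cookies", "tabs", "history", "webRequest", "management", "<all_urls>"}

def flag_broad_permissions(manifest_path: str) -> list:
    """Return the broad permissions requested in a Chrome extension's manifest.json."""
    manifest = json.loads(Path(manifest_path).read_text(encoding="utf-8"))
    requested = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    return sorted(requested & BROAD_PERMISSIONS)

# Hypothetical usage against an unpacked extension directory:
# print(flag_broad_permissions("extension/manifest.json"))
```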

“Without calling out one candidate or one party over another for these typosquats, it’s clear that the political battles are not taking place just on the debate stage or in the media but expanding to the cyber realm, as well,” the security vendor said.

Harrison Van Riper, strategy and research analyst at Digital Shadows, says the typosquatting sites to which users are being directed don’t appear malicious in the way scammers’ sites typically are – hosting malware or directly spoofing a legitimate site. That said, redirection can also be used to initiate a drive-by download or a watering-hole attack, although there is no sign of such activity on the election-related sites so far, he says.

“So it’s challenging to determine precisely how harmful they are,” Van Riper notes. “It’s hard to quantify the negative impact any one specific candidate could receive from typosquats like this though it could potentially be measured in dollars lost from fundraising,” or from frustrating voters trying to get more information about a candidate.

Digital Shadows’ research uncovered 66 domains with political-sounding names hosted on a single IP address by an entity with an address in Panama. The domains were all registered in the last 40 days and include those with names like cleareconomy.info; brinkofrecession.com; kamalaharriss.info; and polociprotest.info. Among the domains is one called dailytravelposh.com that previously hosted typosquatting pages for several technology companies.

All 66 domains presently contain no content, but it is possible that they will begin hosting typosquatted content sometime in the future, Digital Shadows said.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/typosquatting-websites-proliferate-in-run-up-to-us-elections/d/d-id/1336110?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Federal CIOs Zero In on Zero Trust

Here’s how federal CIOs can begin utilizing the security concept and avoid predictable obstacles.

Now more than ever, the US government has focused on proactive cybersecurity measures. Under President Donald Trump’s proposed budget for fiscal year 2020, the federal cybersecurity budget would increase to $17.4 billion, up from an estimated $16.6 billion this year.

The budget increase shouldn’t come as a surprise, as major data breaches continue to cripple organizations of all sizes and sectors worldwide while malicious nation-state adversaries continue to apply pressure, especially on government organizations. With cybercrime continuing its steep trajectory, it’s projected to cost the world $6 trillion annually in damages by 2021, according to Cybersecurity Ventures.

Within cybersecurity spending, one of the areas federal CIOs are eyeing is the concept of zero trust, due in part to recent reports from the Defense Innovation Board and the American Council for Technology-Industry Advisory Council. Zero trust is now front and center for federal CIOs, but where exactly should they begin?

Beyond Jargon: What Is Zero Trust?
Zero trust fundamentally focuses on establishing new perimeters around sensitive and critical data. These perimeters include traditional prevention technology such as network firewalls and network access controls, as well as authentication, logging, and controls at the identity, application, and data layers.

While the concept sounds simple – especially as information security vendors claim to make the road to zero trust easy with their products – the reality is much more complex. Zero-trust architectures (ZTAs) require extensive foundational investments and capabilities, along with logging and control layers that live largely in the traditional IT stack; they are not a plug-and-play security technology.

Getting Started
Federal IT environments are complicated, and as CIOs take a closer look, they will see that in many cases they’re already notionally on a path to zero trust. A number of foundational requirements that are not unique to zero trust map back to the DHS Continuous Diagnostics and Mitigation (CDM) program.

To get started on the road to zero trust, government organizations should begin with CDM Phase 1 requirements that focus on understanding what’s on the network. The CDM Phase 1 requirements include:

  • Automation of hardware asset management
  • Automation of software asset management
  • Automation of configuration settings
  • Common vulnerability management capabilities

By following these requirements, federal CIOs can begin to gain a true understanding of the sheer amount and sensitivity of the data they hold.
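As a concrete (if toy) illustration of what “automation of asset management” means in practice, the sketch below collects a single host’s basic hardware and software inventory record; a real CDM-style deployment would ship records like this to a central inventory service on a schedule. The fields chosen and the transport are assumptions for illustration, not CDM specification details.

```python
import json
import platform
import socket
from importlib.metadata import distributions

def inventory_record() -> dict:
    """Collect a minimal hardware/software inventory record for one host."""
    return {
        "hostname": socket.gethostname(),
        "os": f"{platform.system()} {platform.release()}",
        "machine": platform.machine(),
        "python_packages": sorted(
            f"{d.metadata['Name']}=={d.version}" for d in distributions()
        ),
    }

if __name__ == "__main__":
    # Printing the record is just for illustration; a real agent would POST it
    # to a central dashboard so the fleet can be inventoried continuously.
    print(json.dumps(inventory_record(), indent=2))
```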

The Obstacles in the Road
ZTA generally assumes that an enterprise has fully embraced concepts such as DevOps and has limited legacy data and applications. Federal networks are different because they have been around longer and have more legacy technology than most enterprises. They also leverage secure facilities for access to sensitive data and are already under constant attack from nation-state adversaries.

Beyond CDM Phase 1 requirements, federal CIOs should shift focus to identifying critical data in their networks and building secure applications and identity management systems around that critical data. Once sensitive data has been identified, network and application logs should be used to determine who accesses the data on a regular basis; this information can be entered into traditional network-layer and application-layer controls, such as firewalls and role-based access to applications and data.

Today, one of the biggest decisions federal CIOs must make is how to shift their development requirements for current, next-generation, and legacy applications. With the advent of ZTA, CIOs will likely require all applications to use a centralized identity, credential, and access management solution. But for current applications there is a significant cost to retrofitting access controls (adding firewalls and application gateways), and it’s unclear who will foot the bill: security, IT, or application development teams.

The final challenge will be around legacy applications such as mainframe applications, which are common in data-intensive government lines of business applications. Without a straightforward way to add layers of protection and monitoring to these systems, CIOs will either spend money to completely redesign these systems or accept that a true ZTA is still beyond their reach or the reach of their budgets.


William Peteroy is the Chief Technology Officer, Security, at Gigamon, where he leads security strategy and innovation efforts. William joined Gigamon through the acquisition of ICEBRG, where he was CEO and co-founder. Before Gigamon, William worked in a number of business …

Article source: https://www.darkreading.com/endpoint/federal-cios-zero-in-on-zero-trust/a/d-id/1336038?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cryptojacking Worm Targets and Infects 2,000 Docker Hosts

Basic and ‘inept’ worm managed to compromise Docker hosts by exploiting misconfigurations.

Some 2,000 Docker hosts have been attacked and infected by a relatively basic worm that exploits misconfigured permissions to download and run cryptojacking software as malicious containers.

Network security firm Palo Alto Networks in a report today said that despite its “inept” programming, the so-called Graboid worm has been successful: it searches for unsecured Docker daemons, uses that access to the Docker host to install malicious images from the Docker Hub, and then runs scripts downloaded from a command-and-control (C2) server. Among the scripts is a cryptomining program that “mines” — or attempts to generate — the Monero cryptocurrency. Each miner is active about 63% of the time, according to Palo Alto.

The worm is not exploiting a vulnerability, but a lack of proper security settings, says Jen Miller-Osborn, deputy director of Palo Alto Networks’ Unit 42 threat intelligence group.

“The issue is a lack of updating any of the initial security settings,” she says. “The initial point of entry into the Docker host is there because none of the settings were changed. The front door is basically open on these systems.”

Cryptomining and cryptojacking have become favored tactics of online attackers as a way of easily monetizing compromised systems, and they continue to be used even as cryptocurrency prices have declined from their highs of December 2017 and January 2018. Cryptomining through malware and cryptojacking, where resources are used without the owner’s authorization, are often a way to generate a small amount of money from systems that would otherwise not be valuable to attackers.

Initially, cryptojacking often involved running JavaScript inside the browsers of visitors to compromised sites. Using co-opted cloud resources is a technique that took off in 2018. One report found that cryptojacking of cloud resources affected a quarter of all companies in early 2018. While the Graboid worm is rudimentary, Palo Alto Networks warns that it could easily be updated through its connection with command-and-control servers.

“While this cryptojacking worm doesn’t involve sophisticated tactics, techniques, or procedures, the worm can periodically pull new scripts from the C2s, so it can easily repurpose itself to ransomware or any malware to fully compromise the hosts down the line and shouldn’t be ignored,” the company stated in the report.

The worm spreads by identifying vulnerable Docker hosts and then sending a command to download a malicious Docker image. The image is instantiated as a container, connects to the command-and-control (C2) server, and downloads four scripts that fetch a list of IP addresses of more than 2,000 vulnerable hosts, determine the number of CPUs, and continue the spread.

More than half of the vulnerable hosts are based in China, while about 14% are based in the United States and Ireland accounts for 4%, Palo Alto stated in its analysis.

Palo Alto conducted a simulation of the worm and, assuming that 70% of hosts are available at any given time, found that it would take less than an hour to infect 1,400 vulnerable hosts. A cluster of 1,400 compromised hosts would give the group approximately 900 CPUs of processing power for cryptomining activities.
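The shape of that spread can be reproduced with a very rough back-of-the-envelope model. The sketch below is a toy simulation of my own, not Palo Alto’s methodology: each infected host picks one random target from the vulnerable-host list every interval, and a target is reachable 70% of the time. The interval length and starting conditions are assumptions chosen purely for illustration.

```python
import random

def simulate_spread(total_hosts=2000, availability=0.7, pick_interval_min=2.0,
                    target=1400, seed=1):
    """Toy model of worm spread across a fixed list of vulnerable Docker hosts."""
    rng = random.Random(seed)
    infected = {0}                      # start from a single compromised host
    minutes = 0.0
    while len(infected) < target:
        minutes += pick_interval_min
        for _ in range(len(infected)):  # each infected host attempts one target per interval
            victim = rng.randrange(total_hosts)
            if victim not in infected and rng.random() < availability:
                infected.add(victim)
    return minutes

print(f"~{simulate_spread():.0f} minutes to reach 1,400 infected hosts")
```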

Locking Down Containers

Companies need to make sure that they lock down their container-based infrastructure, and know what parts of the infrastructure are their responsibility to secure, Miller-Osborn says. 

“With everyone wanting to shift into the cloud, there is a lack of understanding on both the consumer and the provider side of who’s responsible for which part of the infrastructure,” she says. “These containers that [the worm is] getting into are those that are left open to the entire Internet. It’s not something that most people would do on purpose, but whenever they set that up, they fail to implement best practices to secure the images.” 

The Docker team worked with Palo Alto Networks to remove the malicious images from the Docker Hub.

Palo Alto urged companies to lock down their containers and Docker hosts. Users should connect to the docker daemon with SSH, never use Docker images from unknown repositories or maintainers, and use firewall rules to limit the IP addresses that can access the Docker host.

“Never expose a docker daemon to the internet without a proper authentication mechanism,” the company said in the report. “Note that by default the Docker Engine (CE) is NOT exposed to the internet.”
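A quick way to check whether one of your own hosts is exposing exactly the kind of open front door Graboid looks for is to see whether the Docker Engine API answers unauthenticated over plain HTTP. The sketch below is a minimal check under the assumption that an exposed daemon listens on the conventional plaintext port 2375; it uses the documented /version endpoint.

```python
import http.client
import json

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 3.0) -> bool:
    """Return True if an unauthenticated Docker Engine API responds on host:port."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/version")          # documented Docker Engine API endpoint
        resp = conn.getresponse()
        if resp.status != 200:
            return False
        info = json.loads(resp.read())
        return "ApiVersion" in info or "Version" in info
    except (OSError, ValueError, http.client.HTTPException):
        return False

# Hypothetical usage, against your own infrastructure only:
# print(docker_api_exposed("10.0.0.5"))
```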


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/cloud/cryptojacking-worm-targets-and-infects-2000-docker-hosts/d/d-id/1336104?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple