
44% of Security Threats Start in the Cloud

Amazon Web Services is a top source of cyberattacks, responsible for 94% of all Web attacks originating in the public cloud.

Cloud-enabled cyberattacks are ramping up, as indicated in a new Netskope study that found 44% of security threats use cloud services in various stages of the kill chain. Attackers are targeting popular cloud apps and services to exploit the growing trust in commonly used enterprise platforms.

Microsoft Office 365 for Business, Box, Google Drive, Microsoft Azure, and GitHub are the most-targeted cloud apps, researchers discovered in the February 2020 Netskope Cloud and Threat Report. Most (89%) enterprise users operate in the cloud, and 33% of them work remotely.

The average business uses 2,145 cloud services and apps, and the most popular app categories are cloud storage, collaboration, Web mail, consumer, and social media, according to the study. The most popular apps overall are Google Drive, YouTube, Microsoft Office 365 for Business, Facebook, Gmail, SharePoint, Outlook, Twitter, Amazon S3, and LinkedIn. Netskope researchers noted the migration of private apps and data to the cloud, more remote work, and increased use of public cloud apps.

Amazon Web Services (AWS) is a hot target for cloud storage apps, Imperva researchers found in their Cyber Threat Index, also released today. Web attacks originating from the public cloud saw a 16% spike from November to December 2019. AWS was a top source for these attacks, responsible for 94% of all Web attacks starting in the public cloud. “This suggests that public cloud companies should be auditing malicious behavior on their platforms,” researchers said.

At least 20% of enterprise users move data laterally between cloud apps, Netskope found, and more than half of data privacy violations come from cloud storage, collaboration, and Web mail.

Data moves between cloud app suites, managed apps, unmanaged apps, app categories, and app risk levels. Nearly 40% of the data that moves between cloud apps is sensitive, and lateral movement spans 2,481 cloud services and apps. The most common movement is between cloud storage services, storage and collaboration tools, cloud storage and Web mail, and cloud storage and customer relationship management systems.

Read more of Imperva’s data here and the full Netskope report here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “8 Things Users Do That Make Security Pros Miserable.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/cloud/44--of-security-threats-start-in-the-cloud/d/d-id/1337088?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Zero-Factor Authentication: Owning Our Data

Are you asking the right questions to determine how well your vendors will protect your data? Probably not.

Let’s say you own a small business, and you want to get a payroll service to help with withholding taxes and automatic deposits into your employees’ accounts. That’s a very useful, powerful service: You’re giving a third party the right to withdraw funds from your bank account and send them to others. 

Being switched on to security, you’d look for a payroll company that supports multifactor authentication (MFA) based on a time-based one-time password (TOTP) application, knowing that SMS-based two-step login is effectively (in the words of Allison Nixon and Mark D. Rasch at Unit 221B Research) zero-factor authentication.
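TOTP itself is simple enough to fit in a few lines, which is part of why it is hard to excuse its absence. A minimal sketch of the RFC 6238 math using only the Python standard library (a real deployment would use a vetted library; the secret shown is the RFC's published test key, not anything sensitive):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated (RFC 4226) to a short decimal code."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)   # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: at T=59s the 8-digit SHA-1 code is 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # → 94287082
```

Because the code is derived on-device from a shared secret and the clock, there is no phone network for an attacker to subvert, which is exactly what SMS-based two-step login cannot claim.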

The trouble is, as of about three weeks ago, none of the major online payroll companies offered this feature. If you ask those companies, they’ll say they offer SMS-based two-step login and then assure you they take security seriously. 

I found one firm that does support application-based MFA: I’ll call it Payroll Company B. PCB isn’t a payroll company as much as a professional employer organization, but still, it does payroll — for twice the price of the others I just mentioned. 

Anyway, you sign up. And after you go through the rigamarole to get the TOTP application working, if you’re attentive, you may discover a seedy backdoor: If you forgo the Web front end, call PCB’s toll-free support number, and tell the company you need to make an account change, the entire authentication regime falls apart with these dreaded words:

“For security purposes, please tell me your full name and the last four digits of your Social Security number.”

Yes, it verifies your identity by asking you for public information. Once provided, no further authentication is required, and you can request a password change, or the removal of TOTP-based MFA, or, presumably, to send Bob’s paycheck to Alice. You’re in.

And you’re root because it has verified your identity. After all, who else could possibly know your full name and last four digits of your Social Security number?  

Who indeed?

Without a proper, secure, multifactor, telephone-voice-based authentication capability, these companies are left to improvise, hacking together a security story to offer security-conscious customers. After I discovered its glaring password reset vulnerability, I spoke with a helpful PCB supervisor and asked him to disable phone support. He cheerfully (and genuinely) promised to do so, saying he put a note in my account. I waited two weeks, phoned back, authenticated with a different rep using just my name and the last four digits of my SSN, then asked the rep to close my account. By failing to fix the problem, the company made liars of its dedicated and creative support staff.

Forget Password Policy. What’s Your Password Reset Policy? 
This vulnerability is so mind-thwackingly obvious that I cannot believe I need to say this, but it also raises an important issue that is relatively unaddressed by my colleagues in the financial services world: When we do vendor onboarding and qualify the vendor’s security policies, are we asking the right questions? 

Or are we sending them a 120-question spreadsheet containing lots of questions about firewall rules and antivirus? As a friend who is a very high-ranking financial services security leader said to me the other day, “Oh, that doesn’t happen. I’ve never sent a spreadsheet like that in the last week … “

This is not a theoretical issue. Recently, there was an attack that worked like this: The attackers had an in at a national mobile carrier and SIM-swapped the phones of some people in a targeted industry. They then used the pirated mobile numbers to call a firm that specializes in outsourced services to that industry, claimed to be the SIM-swapped employees, and requested — verbally — password resets. That worked, as it would have worked at PCB.

This was an attack against a third party that for many firms would have bypassed entirely the security monitoring they have in place to defend their assets. The phone was swapped at the carrier, and the password reset was done at a third party, which also set up the fraudulent transactions when the crooks logged in to that service. The firms that didn’t fall victim to this last phase were those that did transaction anomaly detection fast enough to understand the transaction was weird. 

Would your firm have caught it? More importantly, would your vendor procurement process and onboarding have asked the question, “Do you allow password resets via voice call?” 

Many companies don’t ask the question. I spoke with colleagues at household names in the financial services space, and many firms are struggling to catch up.

What is clear is that we are all trusting cloud-based companies more often, if not exclusively, to handle those parts of the business we seek to outsource. Looking at the standard questionnaires, I see a lot of question-types missing. 

For example, rather than asking lots of questions about endpoint antivirus or whether the vendor’s facility is in a location with little to no risk of natural disaster, terrorism, or civil unrest, it might be good to ask whether the vendor has separate production and nonproduction environments, or how their admins and developers access the environments, or how customer password resets are done.

In other words, we need to ask questions designed to understand the ways someone could subvert the vendor’s authentication and access control regime. 

I’ll be speaking about some of these things at the RSA Conference 2020 in San Francisco on February 26. I hope you will leave comments here and chat with me there. 


Nick Selby is the Chief Security Officer for Paxos Trust Company, which creates contemporary infrastructure to support global institutional financial transaction settlement. Prior to Paxos, Nick served as Director of Cyber Intelligence and Investigations … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/zero-factor-authentication-owning-our-data/a/d-id/1337068?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

OpenSSH eases admin hassles with FIDO U2F token support

OpenSSH version 8.2 is out and the big news is that the world’s most popular remote management software now supports authentication using any FIDO (Fast Identity Online) U2F hardware token.

SSH offers a range of advanced security features but it is still vulnerable to brute force attacks that try large numbers of passphrases until they hit upon the right one.

One way to counter this is passwordless login using cryptographic keys, but these are normally stored on a local drive or in the cloud. That makes them vulnerable to misuse and creates some management overhead.

A more secure alternative is to put them on a USB or NFC hardware token such as a YubiKey that ties a generated private key to that device. This means that authentication can’t happen without the token being present as well as requiring a physical finger tap by an admin.

However, it seems that getting U2F tokens to work with SSH has required support for the Personal Identity Verification (PIV) card interface, which only the most recent and expensive tokens offer.

Adding support inside OpenSSH simply means that any U2F token can now be used, including older FIDO1 and more recent FIDO2 hardware. Specifically, as version 8.2 documentation says:

In OpenSSH FIDO devices are supported by new public key types ‘ecdsa-sk’ and ‘ed25519-sk’, along with corresponding certificate types.

But why is FIDO U2F such a big deal when hardware tokens have been around for decades?

The simple answer is that FIDO U2F is an open rather than proprietary specification, which means that third parties can sell USB tokens that comply with it. That has not only lowered costs but also meant that the same U2F token can be used across multiple applications and services.

In short, the life of OpenSSH admins just got a lot easier.

Goodbye SHA-1

The OpenSSH maintainers also announced their intention to get rid of the weak, ancient SHA-1 hashing algorithm:

It is now possible to perform chosen-prefix attacks against the SHA-1 hash algorithm for less than USD$50K. For this reason, we will be disabling the ‘ssh-rsa’ public key signature algorithm that depends on SHA-1 by default in a near-future release.

This is a reference to a recently published paper, SHA-1 is a Shambles, which demonstrated that a successful collision attack could now be carried out for $45,000 or thereabouts. That was a drop from a previous and somewhat harder proof-of-concept attack carried out by Google that put the cost at more than double that sum.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sDQy3vGdNuw/

WordPress plugin hole could have allowed attackers to wipe websites

A WordPress plugin with over 100,000 active installations had a hole which could have allowed unauthorised attackers to wipe its users’ blogs clean, it emerged this week.

ThemeGrill is a WordPress theme developer that publishes its own Demo Importer plugin. As the name suggests, it imports demo content, widgets, and theme settings. By importing this data with a single button click, it makes demo content easy for non-technical users to import, giving them fully configured themes populated with example posts. Unfortunately, it also makes it possible for unauthenticated users to wipe a WordPress site’s entire database to its default state and then log in as admin, according to a post from web application security vendor WebARX.

The vulnerability has existed for roughly three years in versions 1.3.4 through 1.6.1, said the security company, and affects sites using the plugin that also have a ThemeGrill theme installed and activated.

The problem lies with an authentication bug in code introduced by class-demo-importer.php, a PHP file that loads a lot of the Demo Importer functionality. That file adds a code hook into admin_init, which is code that runs on any admin page.

The hook added into admin_init enables someone who isn’t logged into the site to trigger a database reset, dropping all the tables. All that’s needed to trigger the wipe is the inclusion of a do_reset_wordpress parameter in the URL on any admin-based WordPress page.

Unfortunately for site admins, one of those admin-based WordPress pages is /wp-admin/admin-ajax.php. This page, which loads the WordPress Core, doesn’t need a user to be authenticated when it loads, WebARX explains.

Loading this page with the offending parameter will drop the tables. Even more damaging, if there is a user with the name admin, it will log the attacker in using that account so that they can wreak even more havoc.
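The attack surface here is nothing more than a URL. A hypothetical sketch of the trigger's shape, based on WebARX's description (the endpoint and parameter names come from the report; the parameter value is arbitrary and shown as "1" only for illustration; obviously, never point this at a site you don't own):

```python
from urllib.parse import urlencode

def demo_importer_reset_url(site):
    """Build the request that, on vulnerable Demo Importer versions
    (1.3.4 through 1.6.1), would drop every table in the WordPress
    database. No cookie, session, or nonce is required: that is the bug."""
    query = urlencode({"do_reset_wordpress": "1"})
    return f"{site.rstrip('/')}/wp-admin/admin-ajax.php?{query}"

print(demo_importer_reset_url("https://example.com"))
# → https://example.com/wp-admin/admin-ajax.php?do_reset_wordpress=1
```

That an unauthenticated GET parameter could reach a database reset is the core failure; everything else in the story follows from it.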

WebARX explained that it discovered the issue on 6 February 2020 and reported the bad news to ThemeGrill three times, through last Friday, 14 February. The developer published a patch – version 1.6.2 – on Saturday 15 February saying that it had fixed the issue and thanking WebARX.

Beware, though – there’s another update. On Tuesday, ThemeGrill user mauldincultural posted on the company’s WordPress support page, explaining that their site had been hacked. They updated the Demo Importer to 1.6.2, but:

…this morning our site was down again. Our host was able to retrieve it again, but confirmed it was still an issue with our theme

ThemeGrill support explained that they’d need to upgrade to another version, 1.6.3, released yesterday, Tuesday 18 February. This contained the change: “Enhancement – secure reset button with nonce check.”
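A nonce, in WordPress parlance, is a short-lived token tied to a specific user and action; a destructive request only proceeds if the token round-trips intact. Conceptually it works something like the following Python sketch (this is an illustration of the pattern, not ThemeGrill's actual PHP, and the key and lifetime constants are made up for the example):

```python
import hashlib
import hmac
import time

SECRET = b"per-site secret key, never sent to the client"  # illustrative value
TICK = 12 * 3600  # WordPress nonces live roughly 12 to 24 hours

def create_nonce(user_id, action, now=None):
    """Tie a token to the user, the action, and a coarse time window."""
    t = now if now is not None else time.time()
    msg = f"{int(t // TICK)}|{action}|{user_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:10]

def verify_nonce(token, user_id, action, now=None):
    """Accept the current or the previous window, as WordPress does."""
    t = now if now is not None else time.time()
    return any(
        hmac.compare_digest(token, create_nonce(user_id, action, t - i * TICK))
        for i in (0, 1)
    )

tok = create_nonce(7, "reset_wordpress")
print(verify_nonce(tok, 7, "reset_wordpress"))  # → True
print(verify_nonce(tok, 7, "delete_plugin"))    # → False
```

With a check like this in front of the reset, an attacker who merely knows the URL parameter can no longer trigger the wipe, because they cannot forge a valid token without the server-side secret.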

In the meantime, the plugin’s usage statistics are a little worrying. The active installs have dipped around 2% since early February as news of the vulnerability spread. Downloads spiked with the release of the new version, which is a positive sign because it shows that people are updating. However, only six in ten installations are using version 1.6. The rest are using 1.5 or earlier. So we may well see a heap of poorly maintained or abandoned sites getting wiped.

As ThemeGrill has pointed out in response to another pwned user, once you’ve used the plugin to load your demo content you don’t actually need it, so the best option is to disable or deactivate it altogether.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GCOxf6ngpvk/

Facebook asks to be regulated kinda like a newspaper, kinda like telco

The EU has been itching to regulate the internet, and that’s where Facebook has been this week: in Germany, asking to be regulated, but in a new, bespoke manner.

In fact, CEO Mark Zuckerberg is in Brussels right on time for the European Commission’s release of its manifesto on regulating AI – a manifesto due to be published on Wednesday that’s likely to include risk-based rules for AI.

Don’t regulate us like the telco-as-dumb-pipe model, Zuckerberg proposed on Saturday, even though that’s how he once wanted us all to view the platform: as just a technology platform that dished up trash without actually being responsible for creating it.

No, not like a telco, but not like the newspaper model, either, he said.

Nobody ever really swallowed what Facebook once offered as a magic pill to try to ward off culpability for what it publishes – as in, that “we’re just a technology platform” mantra. Facebook gave up trying to hide behind that one long ago, somewhere amongst the outrage sparked by extremist content, fake news and misleading political advertising.

So now, Facebook has taken a different tack. During a Q&A session at the Munich Security Conference on Saturday, Zuckerberg admitted that Facebook isn’t the passive set of telco pipes he once insisted it was, but nor is it like a regular media outlet that produces news. Rather, it’s a hybrid, he said, and should be treated as such.

Reuters quoted Zuckerberg’s remarks as he spoke to global leaders and security chiefs, suggesting that regulators treat Facebook like something between a newspaper and a telco:

I do think that there should be regulation on harmful content … there’s a question about which framework you use for this.

Right now there are two frameworks that I think people have for existing industries – there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.

I actually think where we should be is somewhere in between.

Zuckerberg says that following the 2016 US presidential election tampering, Facebook has gotten “pretty successful” at sniffing out not just hacking, but coordinated information campaigns that are increasingly going to be a part of the landscape. One piece of that is building AI that can identify fake accounts and network accounts that aren’t behaving in the way that people would, he said.

In the past year, Facebook took down around 50 coordinated information operations, including in the last couple of weeks, he said. In October 2019, it pulled fake news networks linked to Russia and Iran.

The CEO said that Facebook is now taking down more than one million fake accounts a day before they have a chance to sign up – including not just accounts devoted to disinformation, but also those of spammers.

As the internet giants – Facebook, Twitter and Google – come under increasing pressure to get better at keeping groups and governments from using their platforms to spread disinformation, Zuckerberg claims that Facebook is strenuously tackling the problem, having employed a veritable army of 35,000 people to review online content and implement security measures.

Nearly a year ago, Facebook put out a call for new internet regulation in four areas: harmful content, election integrity, privacy and data portability. What Zuckerberg said then:

It’s impossible to remove all harmful content from the internet, but when people use dozens of different sharing services – all with their own policies and processes – we need a more standardized approach.

What he called for on Tuesday, in an op-ed published by the Financial Times: “rules for the internet,” and more regulation for his platform. On Monday, Facebook published a whitepaper describing its recommendations for future regulation, including more accountability from companies that do content moderation, which, it argues, will be a strong incentive for firms to be more responsible.

Facebook suggests that regulations should “respect the global scale of the internet and the value of cross-border communications” and encourage coordination between different international regulators, as well as look to protect freedom of expression.

Facebook is also calling on regulators to allow tech firms to keep innovating, rather than issuing blanket bans on certain processes or tools. It also wants regulators to take into account the “severity and prevalence” of harmful content in question, its status in law, and efforts already underway to address the content.

We support the need for new regulation even though it’s going to initially hurt our profits, Zuckerberg said in the op-ed:

I believe good regulation may hurt Facebook’s business in the near term but it will be better for everyone, including us, over the long term.

These are problems that need to be fixed and that affect our industry as a whole. If we don’t create standards that people feel are legitimate, they won’t trust institutions or technology.

To be clear, this isn’t about passing off responsibility. Facebook is not waiting for regulation; we’re continuing to make progress on these issues ourselves.

But I believe clearer rules would be better for everyone. The internet is a powerful force for social and economic empowerment. Regulation that protects people and supports innovation can ensure it stays that way.

Monika Bickert, Facebook’s vice president of content policy, said that we can do regulation the right way, or we can do it the wrong way:

If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vD36QJcf5Ls/

Private photos leaked by PhotoSquared’s unsecured cloud storage

Recognize anybody you know?

(Anonymized) photos leaked from PhotoSquared’s unsecured S3 bucket IMAGE: vpnMentor

No, likely not. No thanks to the leaky photo app they dribbled out of for that, though. After coming across thousands of photos seeping out of an unsecured S3 storage bucket belonging to a photo app called PhotoSquared, security researchers at vpnMentor blurred a few.

They also blurred a sample from a host of other personally identifiable information (PII) they came across during their ongoing web mapping project, which has led to the discovery of a steady stream of databases that have lacked even the most basic of security measures.

In this case, as they wrote up in a report published this week, the researchers came across photos uploaded to the app for editing and printing; PDF orders and receipts; US Postal Service shipping labels for delivery of printed photos; and users’ full names, home/delivery addresses and the order value in USD.

PhotoSquared, a US-based app available on iOS and Android, is small but popular: it has over 100,000 customer entries just in the database that the researchers stumbled upon.

Customer impact and legal ramifications

vpnMentor suggested that PhotoSquared might find itself in legal hot water over this breach. vpnMentor’s Noam Rotem and Ran Locar note that PhotoSquared’s failure to lock down its cloud storage has put customers at risk of identity theft, financial or credit card fraud, malware attacks, and phishing campaigns in which the leaked USPS or PhotoSquared postage data arms phishers with the PII they need to sound that much more convincing.

A breach of this kind of data could also lead to burglary, they said:

By combining a customer’s home address with insights into their personal lives and wealth gleaned from the photos uploaded, anyone could use this information to plan robberies of PhotoSquared users’ homes.

Meanwhile, PhotoSquared customers could also be targeted for online theft and fraud. Hackers and thieves could use their photos and home addresses to identify them on social media and find their email addresses, or any more Personally Identifiable Information (PII) to use fraudulently.

The legal hot water that may await could be found in California, vpnMentor suggests, given its newly enacted California Consumer Privacy Act (CCPA), with the law’s new, strict rules about corporate data leaks.

Securing an open S3 bucket

PhotoSquared, for its part, could have secured its servers, say Rotem and Locar, implemented proper access rules, and avoided leaving a system that doesn’t require authentication open to the internet.

As it was, the database was set up with no password and no encryption.

From vpnMentor’s report:

Our team was able to access this bucket because it was completely unsecured and unencrypted.

The leaky PhotoSquared app is just the most recent story (one in a long chain) about misconfigured cloud storage buckets. Last week, it was JailCore, a cloud-based app meant to manage correctional facilities that turned out to be spilling PII about inmates and jail staff.

The Who’s Who list of organizations that have misconfigured their Amazon S3 buckets and thereby inadvertently regurgitated their private data across the world just keeps getting longer. Besides JailCore last week and PhotoSquared this week, that list contains Dow Jones; a bipartisan duo including the Democratic National Committee (DNC) and the Republican National Committee (RNC); and Time Warner Cable – to name just a few.

Plug those buckets!

Your organization doesn’t have to wind up on that Who’s Who list. There’s help out there for organizations that can take a deep breath, step away from their servers, and plunge in to learn how to better secure them: Amazon has an FAQ that advises customers how to secure S3 buckets and keep them private.

In the case of PhotoSquared, vpnMentor suggested that the quickest way to patch its pockmarked bucket is to:

  • Make it private and add authentication protocols.
  • Follow AWS access and authentication best practices.
  • Add more layers of protection to the S3 bucket to further restrict who can access it from every point of entry.
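Auditing for that kind of exposure is mechanical enough to automate. A self-contained sketch of the check below uses a grant structure shaped like an S3 GetBucketAcl response; in practice you would fetch the real ACL with boto3 and, better still, simply enable S3 Block Public Access on the account:

```python
# Group URIs S3 uses to mean "everyone" and "any AWS account holder"
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the grants in a GetBucketAcl-style response that expose the
    bucket to the world (or to any authenticated AWS user, which in
    practice is nearly as bad)."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]

# Example: the kind of ACL that lands a company on the Who's Who list
leaky_acl = {
    "Grants": [
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(len(public_grants(leaky_acl)))  # → 1
```

A nightly job that flags any nonempty result, across every bucket in the account, would have caught both the PhotoSquared and JailCore exposures before researchers did.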

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/v6L-VwD68-Y/

What does a Lenovo touch pad, an HP camera and Dell Wi-Fi have in common? They’ll swallow any old firmware, legit or saddled with malware

Some of the biggest names in the technology world still ship hardware that can potentially be hijacked by well-placed miscreants, thanks to poor or non-existent checks for firmware updates.

Eclypsium said on Monday that, despite years of warnings from experts – and examples of rare in-the-wild attacks, such as the NSA’s hard drive implant – devices continue to accept unsigned firmware. The team highlighted the TouchPad and TrackPoint components in Lenovo laptops, HP Wide Vision FHD computer cameras, and the Wi-Fi adapter in Dell XPS notebooks.

The infosec biz said a miscreant able to alter the firmware on a system – such as by intercepting or vandalizing firmware downloads, or meddling with a device using malware or as a rogue user – can do so to insert backdoors and spyware undetected, due to the lack of cryptographic checks and validations of the low-level software. And, while the vulnerable devices themselves may not be particularly valuable to a hacker, they can serve as a foothold for getting into other systems on the network.

That’s a lot of caveats, we know. And while exploits of these weaknesses are few and far between, limited to highly targeted attacks, it’s still annoying to see in this day and age.

“Eclypsium found unsigned firmware in Wi-Fi adapters, USB hubs, trackpads, and cameras used in computers from Lenovo, Dell, HP and other major manufacturers,” the firm explained. “We then demonstrated a successful attack on a server via a network interface card with unsigned firmware used by each of the big three server manufacturers.”
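The missing control is conceptually tiny: before flashing an image, the device verifies it against a key baked in at manufacture, and refuses anything that doesn't check out. The sketch below shows the accept/reject gate, using HMAC-SHA256 as a stand-in for the asymmetric signature (RSA or ECDSA) a real device would need, since a vendor cannot ship a shared secret to every unit:

```python
import hashlib
import hmac

# Stand-in for the vendor verification key burned into the device.
# Real firmware signing uses a public key here, with the private half
# kept at the vendor; HMAC is used below only to keep the sketch short.
VENDOR_KEY = b"not-a-real-key"

def sign_firmware(image):
    """Vendor side: produce the tag shipped alongside the image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def accept_update(image, tag):
    """Device side: flash only if the tag verifies. The flaw Eclypsium
    describes is peripherals that skip this check entirely."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

fw = b"\x7fELF...firmware-blob"
tag = sign_firmware(fw)
print(accept_update(fw, tag))                # → True
print(accept_update(fw + b"backdoor", tag))  # → False
```

The manufacturers' objection is that even this much verification strains the storage and compute budget of cheap peripheral controllers, which is why the buck keeps getting passed between chipset maker, OS vendor, and OEM.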

Perhaps most frustrating is that these sort of shortcomings have been known of for years, and have yet to be cleaned up. The Eclypsium team contacted Qualcomm and Microsoft regarding the Dell adapter – Qualcomm makes the chipset, Microsoft’s operating system provides signature checks – and encountered a certain amount of buck-passing.

“Qualcomm responded that their chipset is subordinate to the processor, and that the software running on the CPU is expected to take responsibility for validating firmware,” Eclypsium reports.


“They [Qualcomm] stated that there was no plan to add signature verification for these chips. However, Microsoft responded that it was up to the device vendor to verify firmware that is loaded into the device.”

Meanwhile, manufacturers complain doing signature verification of firmware code is tricky in embedded systems and other low-end or resource-constrained gadgets. While PCs and servers have plenty of room to check updates, fitting that cryptographic tech onto normal gear is not so simple, it is claimed.

“The report addresses a well-known, industry-wide challenge stemming from most peripheral devices having limited storage and/or computational capabilities,” Lenovo said in a statement to The Register.

“Lenovo devices perform on-peripheral device firmware signature validation where technically possible. Lenovo is actively encouraging its suppliers to implement the same approach and is working closely with them to help address the issue.”

Dell says it was aware of the report and was “working with our suppliers to understand impact and will communicate any necessary security updates or mitigations as they become available.”

HP added: “HP constantly monitors the security landscape and we value the work of Eclypsium and others to help identify new potential threats. We have published recommended mitigations for their latest report here. We advise customers to only install firmware updates from hp.com and the Microsoft Windows Update service, and to always avoid untrusted sources.” ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/19/unsigned_peripheral_firmware/

Dell Sells RSA to Private Equity Firm for $2.1B

Deal with private equity entity Symphony Technology Group revealed one week before the security industry’s RSA Conference in San Francisco.

Nearly five years after Dell Technologies scooped up RSA’s security business via its $67 billion buy of RSA’s then-parent firm EMC Corp., the technology company now plans to sell RSA to a private equity firm for $2.075 billion in cash.

Dell Technologies announced today that a consortium led by Symphony Technology Group (STG), Ontario Teachers’ Pension Plan Board, and AlpInvest will acquire RSA’s Archer, NetWitness Platform, SecurID, and Fraud and Risk Intelligence lines, as well as the security industry’s massive RSA Conference. The company did not disclose further terms of the agreement.

The decision by Dell to sell RSA was not completely unexpected among industry insiders given Dell’s move to go public more than a year ago. The deal with a private equity firm basically gives both Dell and RSA more room to reset and evolve their security strategies, industry experts say. Private equity firms increasingly are setting their sights on the security industry due to its rapid and steady growth.

“I don’t think RSA was well-aligned with Dell’s go-forward strategy for investment,” says Amit Yoran, who served as president of RSA from 2014 to 2016 and is now chairman and CEO of Tenable. “We’ll see what private equity ownership means for RSA and what their plans are for the future.”

Chenxi Wang, founder and general partner with Rain Capital, says Dell likely was under pressure as a publicly traded firm to trim down debt.

“Dell’s vision is to be a provider of a broad swath of technology and services, from PCs, to storage, to software, to information security. RSA would have been a nice slice in that strategy, except that the RSA division has not been a fast-growing business for the Dell empire,” she says. “The Dell board is probably asking the executives to concentrate its efforts and to shed low-performing businesses.”

Dell Technologies still holds a solid stake in the security sector with its managed security services arm Secureworks, as well as its VMware operation’s recent acquisition of endpoint protection firm Carbon Black.

Dell Technologies COO and vice chairman Jeff Clarke called the deal “the right long-term strategy” for both Dell and RSA, as well as their customers and partners.

“The transaction will further simplify our business and product portfolio. It also allows Dell Technologies to focus on our strategy to build automated and intelligent security into infrastructure, platforms and devices to keep data safe, protected and resilient,” Clarke said in a statement.

Rohit Ghai, president of RSA, lauded the STG deal as providing the company “with a more independent configuration” to evolve.

“In determining the best way to support our customers’ digital journeys, we sought a partner that was enthusiastic about RSA’s mission, committed to our customer and partner base, and interested in unleashing the power of our talent, experience, and tremendous growth potential. Symphony Technology Group (STG) fully supports our vision,” he said.

The announcement of the deal, which is scheduled to close within the next six to nine months, comes a week before the 2020 RSA Conference opens in San Francisco.

Dell’s shedding of RSA basically closes a chapter in the EMC saga as well.

“This acquisition represents the end of an era. At one time it was thought that data center vendors should have a security arm. That concept appeared with Symantec acquiring Veritas and EMC acquiring RSA+SecurID, which it used to form the security division of EMC,” says Richard Stiennon, chief research analyst at IT-Harvest. “Most of us expected Dell to spin RSA off with the acquisition of EMC. Apparently it just took longer for Dell to realize that the fast pace of innovation in the security space does not jibe with the slow and steady pace of data center and hardware vendors.”

Private Equity for the Win
Private equity firms such as Thoma Bravo, Vista, Insight Partners, and TPG all have acquired security companies in the past few years, a trend driven by the high growth in that market as well as the ballooning size of the deals, notes Rain Capital’s Wang. Private equity firms typically hold a company for 18 months, during which time they double down on efficiencies and then bundle a few companies together and spin them off in an even more lucrative deal, she explains.

“Cyber seems to be a great market for that – many small firms provide niche capabilities, and combining them into a broad offering is what the market wants,” Wang says. “As M&A deals get larger, it’s becoming difficult for broad tech firms to continue to execute the type of deals that are expected by the market.”

Enter the private equity investors, a trend Wang expects to continue in security.

Private equity firms provide an alternative way for retooling and reinvigorating firms, notes Brian Reed, senior research director at Gartner. “Private equity firms come up with interesting ways to increase a product life cycle and help reduce the amount of time it takes companies to recover from [things like] product roadmap stagnation,” Reed says. The deal gives RSA more freedom and breathing room to work on its product roadmap, he says.

“There are bigger things going on with Dell at the moment from a macro standpoint and outside of security,” Reed says. “This is one of those instances where private equity coming in will be a good thing in the medium and long term and allows Dell to get a reset.”

Related Content:

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/risk/dell-sells-rsa-to-private-equity-firm-for-$21b/d/d-id/1337078?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Trouble with Free and Open Source Software

Insecure developer accounts, legacy software, and nonstandard naming schemes are major problems, Linux Foundation and Harvard study concludes.

A wide-ranging study by researchers at the Linux Foundation and the Laboratory for Innovation Science at Harvard has yielded vital new information on the most widely used free and open source software (FOSS) within enterprises — and potential security risks related to that use.

The researchers found that a lack of a standardized naming scheme for FOSS components has made it hard for organizations and other stakeholders to quickly and precisely identify questionable or vulnerable components.

They also discovered that accounts belonging to developers contributing most actively to some of the most widely deployed open source software need to be secured much better. A third finding was that legacy packages within the open source space are becoming riskier by the day, just like any other older hardware or software technology.

“FOSS components underpin nearly all other software out there — both open and proprietary — but we know so little about which ones might be the most widely used and most vulnerable,” says Frank Nagle, professor at Harvard Business School and co-author of the report. “Given the estimated economic impact of FOSS, far too little attention is paid to systematic efforts to support and maintain this core infrastructure,” he says.

For the study, the researchers from the Linux Foundation and Harvard analyzed enterprise software usage data provided by, among others, software composition analysis firms and application security companies such as Snyk and the Synopsys Cybersecurity Research Center. In trying to identify the most widely used open source software, the researchers considered all of the dependencies that might exist between a FOSS package or component and other enterprise applications and systems.

The goal was to identify and measure how widely used FOSS is within enterprise environments and to understand the security and other implications of using the software. FOSS components constitute between 80% and 90% of almost any application in use currently within enterprises. While many FOSS projects have received considerable security scrutiny, many others have not.

Vulnerabilities in widely used projects with smaller contributor bases, like OpenSSL, can often slip by unnoticed, the researchers said in a report released this week. The heavy and growing reliance on FOSS has prompted efforts by governments, researchers, and organizations to better understand the provenance and security of open source software via audits, bug bounty programs, hackathons and conferences. “The first step is to truly understand the FOSS components upon which organizations depend — whether it be through regular security scans and code audits or by adopting a software bill of materials for all of its digital products,” Nagle says.
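The "first step" Nagle describes – knowing which FOSS components an organization actually depends on – boils down to building an inventory and cross-referencing it against advisory data. As a minimal sketch (the manifest content and advisory entries below are illustrative, not taken from the study; a real deployment would use a software composition analysis tool and a database such as OSV or the GitHub Advisory Database):

```python
import json

# Hypothetical manifest content; real projects would parse package.json,
# requirements.txt, pom.xml, etc.
MANIFEST = '{"dependencies": {"lodash": "4.17.11", "minimist": "1.2.0", "qs": "6.7.0"}}'

# Illustrative advisory set -- placeholder entries, not real vulnerability lookups.
KNOWN_VULNERABLE = {
    ("lodash", "4.17.11"),
    ("minimist", "1.2.0"),
}

def inventory(manifest_text):
    """Return the (name, version) pairs a project directly depends on."""
    deps = json.loads(manifest_text).get("dependencies", {})
    return sorted(deps.items())

def flag_vulnerable(pairs):
    """Cross-reference the inventory against the advisory set."""
    return [p for p in pairs if p in KNOWN_VULNERABLE]

if __name__ == "__main__":
    for name, version in flag_vulnerable(inventory(MANIFEST)):
        print(f"review needed: {name}@{version}")
```

A software bill of materials is essentially this inventory made exhaustive: direct dependencies plus every transitive dependency they pull in.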

Top Projects, Top Risks
The joint research by the Linux Foundation and the Laboratory for Innovation Science at Harvard showed that the 10 most-used FOSS packages within enterprises are async, inherits, isarray, kind-of, lodash, minimist, natives, qs, readable-stream, and string_decoder. The researchers also identified the most-used non-JavaScript packages, which include com.fasterxml.jackson.core:jackson-core, com.fasterxml.jackson.core:jackson-databind, com.google.guava:guava, and commons-codec.

After identifying the top projects, the researchers set about trying to find out who the most active contributors to these projects were, and identified company affiliations for about 75% of them. During the study, the researchers found that seven of the most-used open source software projects were hosted on individual developer accounts with fewer protections than organizational accounts. “Changes to code under the control of these individual developer accounts are significantly easier to make, and to make without detection,” the report warned.

According to the researchers, attacks on individual developer accounts are increasing, and there’s a growing risk of account takeovers and of backdoors and other malicious code being installed on them that can later be used to access the code. “One option is for such individual accounts to implement two-factor authentication if their repository supports it,” Nagle says.

Another risk in having widely used FOSS sitting on individual accounts is developers who might decide to delete their accounts or remove code over disputes and disagreements. “A broader and longer-term solution would be for such projects to move to organizational accounts, rather than individual accounts, to help enhance the accountability and future availability of the projects,” Nagle notes.

The research showed the need for better naming conventions for FOSS components. Because FOSS can be freely modified and copied, it can exist in multiple versions, forks, and similarly named repositories, Nagle says. To ensure better security, it is important to have a common understanding of which instance of a FOSS component is being used and how well it is being supported and maintained.
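The naming-scheme finding is, in effect, a call for canonical component identifiers. One existing convention that addresses it – not something the study itself prescribes – is the package URL ("purl") format, which encodes ecosystem, namespace, name, and version into a single unambiguous string. A minimal sketch (a production implementation should use a dedicated library such as packageurl-python and handle percent-encoding):

```python
def make_purl(ecosystem, name, version, namespace=None):
    """Build a package-URL-style identifier: pkg:type/namespace/name@version."""
    parts = ["pkg:", ecosystem, "/"]
    if namespace:
        parts.append(namespace + "/")
    parts.append(f"{name}@{version}")
    return "".join(parts)

# The same scheme names components unambiguously across ecosystems:
print(make_purl("npm", "lodash", "4.17.21"))
# pkg:npm/lodash@4.17.21
print(make_purl("maven", "jackson-databind", "2.9.10",
                namespace="com.fasterxml.jackson.core"))
# pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.9.10
```

With identifiers like these, two organizations comparing notes on a vulnerable component can be certain they are talking about the same package from the same ecosystem, rather than a fork or a similarly named repository.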

Another discovery the researchers made was that older, legacy open source components present the same risks as older, non-supported versions of any software or hardware. As one example, Nagle pointed to version 0.70 of the frequently used PuTTY SSH software, which was released in July 2017. No updates for the software were released until version 0.71 was released nearly two years later, in March 2019. “The infrequency of updates and examination [of] such highly used software can lead to security issues existing in the code base for more than 20 years,” he says.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “8 Things Users Do That Make Security Pros Miserable.”

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/vulnerabilities---threats/the-trouble-with-free-and-open-source-software/d/d-id/1337082?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Shipping is so insecure we could have driven off in an oil rig, says Pen Test Partners

Penetration testers looking at commercial shipping and oil rigs discovered a litany of security blunders and vulnerabilities – including one set that would have let them take full control of a rig at sea.

Pen Test Partners (PTP), an infosec consulting outfit that specialises in doing what its name says, reckoned that on the whole, not many maritime companies understand the importance of good infosec practices at sea. The most eye-catching finding from PTP’s year of maritime pentesting was that its researchers could have gained a “full compromise” of a deep sea drilling rig, as used for oil exploration.

PTP’s Ken Munro explained, when The Register asked the obvious question, that this meant “stop engine, fire up thrusters (dynamic positioning system), change rudder position, mess around with navigation, brick systems, switch them off, you name it.”

The firm’s Nigel Hearne explained that many maritime tech vendors have a “variable” approach to security.

Making heavy use of the word “poor” to summarise what he had seen over the past year, Hearne wrote that he and his colleagues had examined everything from the aforementioned deep water exploration and drilling rig to a brand new cruise ship and a Panamax container vessel, with a few others in between.

Munro also published a related blog post this week.


Among other things the team found were clandestine Wi-Fi access points in non-Wi-Fi areas of ships (“they want to stream tunes/video in a work area that they can’t get crew Wi-Fi in,” said Munro), and crews bridging designed gaps between ships’ engineering control systems and human interface systems.

Why were seafarers doing something that seems so obviously silly to an infosec-minded person? Munro told us: “Someone needs to administrate or monitor systems from somewhere else in the vessel, saving a long walk. Ships are big!”

Another potential explanation proffered by Munro could apply to cruise ship crews, where Wi-Fi is generally a paid-for, metered commodity: “Their personal satellite data allowance has been used up, so they put a rogue Wi-Fi AP on to the ship’s business network where there are no limits.”

A Panamax vessel (the largest size of ship that can pass through the Panama Canal, the vital central American shipping artery between the Atlantic and Pacific) can be up to 294 metres (PDF, page 8 gives the measurements) from stem to stern. A crew member needing to move from, say, bow thruster to main machinery control room in the aft part of the ship and back again will spend significant amounts of time doing so. It’s far easier to jury-rig remote access than do all that walking.

PTP also found that old infosec chestnut, default and easy-to-guess passwords – along with a smattering of stickers on PCs with passwords in plaintext.

Default passwords aboard ships. Pic: Pen Test Partners

“One of the biggest surprises (not that I should have been at all surprised in hindsight) is the number of installations we still find running default credentials – think admin/admin or blank/blank – even on public facing systems,” sighed Hearne, detailing all the systems he found that were using default creds – including an onboard CCTV system.
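Checks for the defaults Hearne describes are straightforward to automate against a system inventory. A hedged sketch – the hostnames and credentials below are invented for illustration, not taken from the PTP engagement:

```python
# Hypothetical shipboard system inventory mapping system name to (user, password).
SYSTEMS = {
    "bridge-cctv": ("admin", "admin"),
    "engine-hmi": ("operator", "S3cure!Pass"),
    "satcom-mgmt": ("", ""),
}

# The "old chestnuts": common vendor defaults and blank credentials.
DEFAULT_CREDS = {("admin", "admin"), ("admin", "password"), ("root", "root"), ("", "")}

def audit_defaults(systems):
    """Flag systems still running default or blank credentials."""
    return sorted(name for name, creds in systems.items() if creds in DEFAULT_CREDS)

if __name__ == "__main__":
    for name in audit_defaults(SYSTEMS):
        print(f"default credentials in use: {name}")
```

Even a simple audit like this, run against a maintained inventory, would have caught the public-facing admin/admin and blank/blank installations the pentesters kept finding.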

The pentesters also found “hard coded credentials” embedded in critical items including a ship’s satcom (satellite comms mast) unit, potentially allowing anyone aboard the ship to log in and piggyback off the owners’ paid-for internet connection – or to cut it off. ®

PS: Pen Test Partners consultant Andrew Tierney credited two pseudonymous bods for helping find the aforementioned security holes.


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/02/18/shipping_cybersecurity_rather_poor/