
Serverless Computing from the Inside Out

The biggest ‘serverless’ risks don’t stem from the technology itself. They occur when organizations respond to its adoption from the outside in.

Serverless computing, or function-as-a-service (FaaS), is becoming a hot new trend in the developer world. And it’s easy to understand why: It brings the cloud one step closer to true “utility computing.” With FaaS, developers can deploy code for individual functions on a FaaS platform such as AWS Lambda or Microsoft Azure Functions. This is far faster and more efficient than deploying entire applications, and it also enables true utility computing because organizations pay for only the resources used when functions are executed rather than paying for (and managing) an “always-on” underlying infrastructure, as when deploying applications on traditional cloud platforms.
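To make the deployment unit concrete: a serverless function can be as small as a single handler. The sketch below shows the general shape of an AWS Lambda-style Python handler; the event fields and response format here are illustrative, not tied to any particular deployment:

```python
import json

def handler(event, context):
    # A single function is the unit of deployment: the platform invokes it
    # on demand and bills only for the execution time it consumes.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server to provision or patch between invocations, which is where both the cost savings and the reduced security overhead come from.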

The benefits of serverless computing (such as cost savings, reduced security overhead, and quicker time-to-release) have caused a dramatic rise in its adoption — which is expected to accelerate in the coming years. The security industry is responding to this new paradigm as it has with all other new paradigms since the beginning of Internet computing: by enumerating the various vulnerabilities and threats made possible by serverless computing, and then proposing a list of technologies to combat those vulnerabilities and threats.

This is what I call an “outside-in” approach to security: where organizations allow external threats and compliance requirements to dictate security strategy and spending. They continually switch their focus to the latest threat “flavor of the week,” and throw money at the problem with new technology.

The problems with this approach are well documented. Today’s bloated and unmanageable technology infrastructures are the direct result of outside-in thinking. Along with the cybersecurity skills shortage and budget limitations, these infrastructures cause gaps due to misconfigurations and mismanagement, which opens the door to security incidents.

Against this backdrop, the biggest risks do not come from serverless computing itself; they come from how organizations respond to serverless adoption. Will they inflame the cost and complexity problem by, yet again, taking an outside-in approach to security? Or will they break this cycle with a different approach?

Security from the Inside Out
Fundamentally, cybersecurity isn’t about threats and vulnerabilities. It’s about business risk. The interesting thing about business risk is that it sits at the core of the organization. It is the risk that results from company operations — whether that risk be legal, regulatory, competitive, or operational. This is why the outside-in approach to cybersecurity has been less than successful: Risk lives at the core of the organization, but cybersecurity strategy and spending have been dictated by factors outside of the organization with little, if any, business risk context. This is why we see organizations devoting too many resources to defend against threats that really aren’t major business risks, and too few to those that are.

To break the cycle of outside-in futility, security organizations need to change their approach, so they align with other enterprise risk management functions. And that approach is to turn outside-in on its head, and take an inside-out approach to cybersecurity.

Inside-out security is not based on the external threat landscape; it’s based on an enterprise risk model that defines and prioritizes the relative business risk presented by organizations’ digital operations and initiatives. This model maps to the enterprise business model and enables security professionals to build security strategy and spend aimed at enabling the business rather than protecting against threats.

With this kind of model in place, the adoption of new platforms, such as serverless computing, does not become a life-altering experience. It’s just a function of extending the enterprise risk model to encompass serverless initiatives, so security professionals can understand the potential business risk those initiatives might represent. Typical questions to answer during this analysis include:

  • Do the code functions in the cloud represent a source of business risk? Could they lead to business disruption or compliance violations?
  • Can the code be compromised, and if so, what is the maximum damage that could result?
  • Can you approximate the monetary value of each code function deployed in the cloud? If so, how do those values compare with the costs associated with trying to implement security at the code level?
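One lightweight way to operationalize questions like these is a simple expected-loss ranking of deployed functions. The sketch below uses entirely hypothetical function names and numbers, purely to show the shape of the exercise:

```python
functions = [
    # (name, estimated annual compromise likelihood, estimated damage in $)
    ("payment-webhook", 0.05, 250_000),
    ("image-thumbnailer", 0.20, 1_000),
    ("report-exporter", 0.10, 40_000),
]

def expected_loss(likelihood, damage):
    # Expected annual loss = likelihood of compromise x damage if compromised.
    return likelihood * damage

# Rank functions from highest to lowest expected loss to decide
# where to concentrate scarce security resources.
ranked = sorted(functions, key=lambda f: expected_loss(f[1], f[2]), reverse=True)
```

Note how the frequently compromised but low-value thumbnailer ranks last: likelihood alone is a poor guide to where the real business risk sits.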

As with all digital initiatives, there will be business risks ranging from severe to low level, which will dictate where security organizations need to concentrate their resources. This prevents organizations from investing scarce time and money in protecting against the typical list of potential threats coming from the security industry marketing machine, and instead lets them focus on managing enterprise risk.

Understanding Risk from the Inside Out
By adopting an inside-out strategy to cybersecurity, organizations can readily adopt new technologies and platforms without introducing undue risk or imposing outsized burdens on the security organization. Think about it like your house — if you live in a high-crime area, you’d be wise to invest in locks and alarm systems. If you live in rural America, you’re probably better served worrying more about termites than you are burglars.

Yes, it’s possible a random burglar might decide to steal your TV, but it’s not worth investing thousands of dollars in alarm systems to prevent what is realistically unlikely to happen, or low-risk. This may be a simplistic example, but it is accurate in relation to today’s cybersecurity best practices. You should invest in a risk-based, programmatic approach and embed that strategy with orchestration and automation so that you are managing security risk in a way that is constantly evolving with technology, people, and process. Just as you should “shift left” on your security priorities and spend based on the neighborhood in which you live, you should do the same based on the risk appetite for your organization. It’s all a matter of understanding your risk — from the inside out.

Joe Vadakkan brings more than 18 years of global infrastructure architecture and security experience, focusing on all aspects of cyber and data security to his role of global practice leader, cloud security, for Optiv. Vadakkan’s expertise in information security and IT … View Full Bio

Article source: https://www.darkreading.com/cloud/serverless-computing-from-the-inside-out-/a/d-id/1334970?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Verizon Media, Uber, PayPal Top List of Companies Paying Bug Bounties

A new report from HackerOne lists the top five companies running bug-hunting programs on the ethical hacking platform.

Companies have found that they can expand the effective size of their security teams by recruiting white-hat hackers to find vulnerabilities in their applications and networks. Many of these companies pay a bounty for legitimate discoveries, and now there’s a list of the most successful bounty programs run on the HackerOne platform.

According to the report, Verizon Media, Uber, Shopify, PayPal, and Twitter are the top five bounty programs, with Verizon Media leading for all-time bounties paid (more than $4 million) and most hackers thanked (1,124).

PayPal took top honors for largest bounty paid ($30,000), while all five companies placed in the top five across various categories of activity.

According to HackerOne, its community of more than 300,000 hackers from around the world has earned more than $42 million in bounties as of the end of 2018.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/verizon-media-uber-paypal-top-list-of-companies-paying-bug-bounties/d/d-id/1335009?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Critical Firefox Vuln Used in Targeted Attacks

Mozilla has released patches for the bug reported by Coinbase.

Mozilla has patched a critical vulnerability under active exploit in the Firefox browser. 

Digital currency exchange Coinbase reported the vulnerability to Mozilla after discovering it in use for targeted attacks. According to the Mozilla advisory, the type confusion vulnerability (CVE-2019-11707) “can occur when manipulating JavaScript objects due to issues in Array.pop. This can allow for an exploitable crash.” 

The researcher who discovered the flaw – Samuel Groß of Google Project Zero and Coinbase Security – stated on Twitter: “The bug can be exploited for RCE but would then need a separate sandbox escape. However, most likely it can also be exploited for UXSS which might be enough depending on the attacker’s goals.”

The vulnerability has been fixed in Firefox 67.0.3 and Firefox ESR 60.7.1. Read more here and here.


Article source: https://www.darkreading.com/attacks-breaches/critical-firefox-vuln-used-in-targeted-attacks/d/d-id/1335011?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

With GDPR’s ‘Right of Access,’ Who Really Has Access?

How a security researcher learned organizations willingly hand over sensitive data with little to no identity verification.

The European Union’s General Data Protection Regulation (GDPR) has a provision called “Right of Access,” which states individuals have a right to access their personal data. What happens when companies holding this data don’t properly verify identities before handing it over?

This became the crux of a case study by James Pavur, DPhil student at Oxford University, who sought to determine how organizations handle requests for highly sensitive information under the Right of Access. To do this, he used GDPR Subject Access Requests to obtain as much data as possible about his fiancée – with her permission, of course – from more than 150 companies.

Shortly after GDPR went into effect last May, Pavur became curious about how social engineers might be able to exploit the Right of Access. “It seemed companies were in a panic over how to implement GDPR,” he explains. Out of curiosity he sent a few Subject Access Requests, which individuals can make verbally or in writing to ask for access to their information under GDPR.

In these early requests, Pavur only asked for his own data from about 20 companies. He found many didn’t ask for sufficient ID before giving it away. Many asked for extensions – which GDPR permits beyond its standard one-month response window – before sending it, because they didn’t have processes in place to handle requests. The initial survey took place in fall of 2018, when GDPR was just getting into full swing, Pavur says.

Phase two came in January, when he decided to do a broader experiment requesting his fiancée’s information. Over three to four months, Pavur submitted requests to businesses across different sizes and industries to obtain a range of sensitive data, from typical sensitive information like addresses and credit card numbers, to more esoteric data like travel itineraries.

He went into the experiment with three types of data: his fiancée’s full name, an old phone number of hers he found online, and a generic email address ([email protected]). All of these, he notes, are things social engineers could easily find. “The threshold for starting the attack was very low,” he says. “Every success increases the credibility of your results in the future.” Pavur requested her personal data using these initial pieces of information; as companies responded with things he asked for, he could tailor future requests to be better.

“I tried to pretend like I didn’t know much about my fiancée,” he continues. “I tried to make it as realistic as possible … tried to not allow my knowledge about her to bias me.”

Compared to the early stages of his experiment, Pavur found when he requested his fiancée’s information, businesses were better at handling the process. Still, the responses were varied, and there wasn’t a consistent way of responding to Subject Access Requests, he says.

“I sort of expected that companies would try to verify the identity by using something they already know,” he says. For example, he thought they might only accept an email address linked to a registered account. “I thought that was the best mechanism for verifying accounts.”

More than 20 of the 150 companies revealed some sort of sensitive information, he found. Pavur was able to get biographical information, a passport number, and a history of hotels she had stayed at; he was also able to verify whether she had accounts with certain businesses, he notes. The means of verifying his fiancée’s identity varied by industry: retail companies asked what her last purchase was; travel companies and airlines asked for passport information.

Interestingly, some companies started out strong with requests for identity verification, then caved when Pavur said he didn’t feel comfortable providing it. One company asked for a passport number to verify identity; when he refused, they accepted a postmarked envelope. Some businesses improved their verification over time, he adds, but mistakes are still being made: a handful of organizations accidentally deleted his fiancée’s account when asked for data. He points to a need for businesses to feel comfortable denying suspicious GDPR requests.

Pavur will present the details of his case study this August at Black Hat USA in a presentation titled “GDPArrrrr: Using Privacy Laws to Steal Identities.”

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/with-gdprs-right-of-access-who-really-has-access/d/d-id/1335013?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hospitals are being suffocated by robocalls

Medical staff are being overwhelmed by a new type of health crisis: “a wave of thousands of robocalls that spread like a virus… from one phone line to the next, disrupting communications for hours,” the Washington Post reports.

This is nothing new. According to the spam-call blocker company YouMail, there were an estimated 4.7 billion robocalls placed in the month of May alone.

But it’s reaching a fever pitch at the organizations for which it’s far more than an annoyance – rather, as hospital cybersecurity chiefs tell it, it’s a question of life and death. Spearphishers are placing spam calls to patients – using numbers spoofed to look like they’re coming from legitimate healthcare organizations and pretending to be hospital representatives – and trying to get insurance or other payment information out of their targets.

Spam callers are also spoofing hospital phone numbers to place calls to hospitals that look for all the world like the calls were placed internally. Answering those calls takes precious time out of the day that should be dedicated to saving people’s lives and to medical research.

A third type of nuisance call is coming from spearphishers who pose as employees at government agencies and demand to speak to a specific, named physician as they try to finagle confidential information out of the doctors, such as medical license numbers and Drug Enforcement Administration (DEA) numbers – information with which fraudsters can illegally procure drugs to then sell on the black market.

Dave Summitt, the CISO of one such besieged hospital, the H. Lee Moffitt Cancer Center and Research Institute in Tampa, Florida, testified in April 2019 in front of the House of Representatives about how overwhelmed healthcare organizations have become by the scourge.

90 days, 6,600 spoofed calls, 65 wasted hours

Summitt said that over the course of the 90 days that led up to his testimony, over 6,600 calls spoofed to look like internal numbers were answered by staff at Moffitt, which is the third busiest stand-alone cancer hospital in the US.

During a 30-day period, hospital staff answered more than 300 calls that looked like they were coming from the Washington DC area, with half claiming to be from a federal agency. Caller ID identified some of them as coming from the US Department of Justice (DOJ). When Moffitt staff answered, the callers said they were DOJ employees… and then demanded to speak with a specific, named physician about an urgent problem affecting his or her medical license number and DEA number.

Those malicious and/or fraudulent calls tied up hospital staff for 65 hours, Summitt said.

You probably – and I certainly – complain about robocalls and spam calls, and about how the US government has failed to pass a single law on robocalls. Summitt said that he’s in the same boat: on his personal cell phone, he has entered 45 blocked numbers in just the last 90 days.

Not to minimize the frustration that entails for all of us, but the problem rises to a much higher level than mere annoyance when we’re talking about healthcare organizations, he said.

These attempts occurred over several weeks and involved numerous care providers. These calls can be quite disturbing and disruptive, and we, along with other organizations have to manage them on a daily basis.

The Washington Post mentioned another hospital, Boston’s Tufts Medical Center, where more than 4,500 nuisance calls came in between about 9:30 and 11:30 a.m. on one single day, 30 April 2018, according to CISO Taylor Lehmann.

Many of the messages seemed to be the same: Speaking in Mandarin, an unknown voice threatened deportation unless the person who picked up the phone provided their personal information. Lehmann said that while scams trying to scare foreigners into giving up their private data are a known phenomenon, this attack was particularly disturbing given that it targeted Tufts – a hospital located in Boston’s Chinatown.

Are carriers dropping the ball?

What are carriers doing to help save the hospitals? Not much, if anecdotal evidence is any guide. Lehmann said that Tufts’ telecom carrier, Windstream, told them that “There’s nothing we [can] do.”

For its part, Windstream blames Tufts’ outdated phone technology. The Washington Post quoted Thomas Whitehead, the company’s VP of federal government affairs:

We do have a call-blocking solution we offer. We just couldn’t offer it on their system.

The Post reports that one year later, Windstream said it was still “following up” with Tufts.

Similarly, the Moffitt Cancer Center has experienced what Summitt finds a baffling lack of response from its own telecom carrier, which the Washington Post identified as CenturyLink. During the incident with the spoofed DOJ calls, Summitt said that the carrier told him that the hospital would need to get more robocalls to file a complaint. The targeted organization needs to receive between 20 and 25 calls within a 72-hour window to make that happen, he was told.

When Moffitt tried to find out who was behind the spoofed calls that were using the hospital’s own number, the carrier wouldn’t give out the source of the calls – not without a subpoena, according to Summitt.

CenturyLink said that it’s not so: a spokeswoman told the Washington Post that it’s “not our policy and must have been a miscommunication” that someone told Moffitt that it couldn’t block certain numbers unless it had received more calls:

Our fraud management team worked closely with Moffitt to identify illegal robocalls, trace them back to their source and ultimately block them. We will continue to do our part to fight unlawful calls.

Are the robocallers being protected more than the hospitals?

Something’s wrong when hospitals are beholden to obey laws about protecting patient privacy, while those who make forged calls have their own privacy shielded, Summitt told Congress:

I am rather astonished that others can use our owned phone number range, fraudulently represent our organization, and we have no recourse other than court order. There should be provisions made that when a company is actively investigating a suspected fraud or information security breach, they should have cooperation from the carrier. Our health care regulations require us to protect patient privacy and safety, yet it seems bad actors are more easily protected from privacy than those already covered under regulatory requirements.

How do we get robocalls to die, die, die?!

In May 2019, the US Senate passed an anti-robocalling bill. It’s still waiting for the House to take it up, which the House might not do, given that it’s working on its own version, the Stopping Bad Robocalls Act (HR 946). That House bill was introduced by Rep. Frank Pallone Jr., the chairman of the Energy and Commerce Committee.

Whichever bill – if either – gets passed and signed into law by the president, it will still take months to implement the technology that’s supposed to fix this problem.

What will also take months: fixes from the top telecoms that would label a call if it’s likely to be spam.

Meanwhile, the Federal Communications Commission (FCC) has been stepping up efforts to track down, and fine, scammers.

The FCC is fully aware of how the healthcare industry is being negatively affected by these calls. When it issued a $120 million fine against Adrian Abramovich – a Florida man known as the “robocall kingpin” – it cited millions of calls Abramovich robo-placed that drowned out operations of an emergency medical paging provider:

By overloading this paging network, Mr. Abramovich could have delayed vital medical care, making the difference between a patient’s life and death.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OnR0ORfXSzs/

Pass the salt! Popular CMSs aren’t securing passwords properly

A group of researchers has discovered that many of the web’s most popular content management systems are using insecure algorithms to protect their users’ passwords.

Three researchers from the Department of Digital Systems at the University of Piraeus in Greece tested several CMS products [article behind paywall] to see how well they hashed user passwords.

Hashing is a mathematical function that encodes a secret. It takes an alphanumeric string such as a password and uses it to produce another string, called a digest.

A hashing function is a one-way street. You can calculate the digest easily using the password, but you can’t calculate the password using the digest.

That makes it great for storing passwords securely. When a user logs in using their password, the web application can quickly hash it. If the digest matches the one on file, the user gains access. Yet if anyone steals the password database, they can’t read it. (Although hashing is fundamental to good password security, there’s more to it than that – for a detailed primer see how to store your users’ passwords safely.)
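The store-and-verify flow described above can be sketched in a few lines of Python. This uses SHA-256 purely to illustrate the one-way digest idea; as the article goes on to explain, a real system also needs salt and iterations:

```python
import hashlib

def digest(password: str) -> str:
    # One-way: easy to compute the digest from the password,
    # computationally infeasible to recover the password from the digest.
    return hashlib.sha256(password.encode()).hexdigest()

# Only the digest is stored, never the plaintext password.
stored = digest("correct horse battery staple")

def login(attempt: str) -> bool:
    # Hash the login attempt and compare digests; the plaintext
    # password is never written to the database.
    return digest(attempt) == stored
```

If an attacker steals `stored`, they hold only the digest, not the password – which is exactly why weakening the hash function matters so much.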

Unfortunately, CMS software often doesn’t use hashing properly, the researchers warned, stating:

We have discovered that many CMS use outdated hash functions.

What does this mean?

Not all hashing functions are equal. MD5 (invented by Ron Rivest, the ‘R’ in RSA) has been compromised. A hashing function should produce a unique digest for every different input: no two inputs should produce the same digest (a situation referred to as a collision). The first successful collision attack against MD5’s compression function was demonstrated in 1996, and generating full MD5 collisions is now considered easy.

Another popular hashing function, SHA-1, was widely used as a replacement when MD5 fell out of favour. That too is now considered obsolete.

The University of Piraeus researchers looked at 49 content management systems and 47 web application frameworks and reported that 26.5% of them used MD5. These included osCommerce, SuiteCRM, WordPress, X3cms, SugarCRM, CMS Made Simple, MantisBT, Simple Machines, miniBB, Phorum, MyBB, Observium, and Composr.

A further 12.2% of them use SHA-1. The culprits there are GetSimple CMS, Redmine, Collabtive, PunBB, Pligg, and Omeka.

The danger here isn’t just that these hashing functions are vulnerable to collision attacks. They’re also highly susceptible to cracking with graphics processing units (GPUs), which can divide the work of testing candidate passwords across their many processor cores.

Some of these sites had even worse problems. The researchers cited…

an arbitrary number of hash iterations, while there is a lack of password policies and salt

Hashing alone won’t defeat an attacker. In modern password hashing algorithms passwords are combined with a salt (a random string of data) so that identical passwords produce different hashes. Typically, the output of the hashing function is itself then mixed with the salt and hashed again, and again and again, perhaps thousands of times, to make the operation computationally expensive.

Each pass through the salt and hash routine is called an iteration. The higher the number of iterations, the harder it is for password cracking computers to generate password matches quickly.
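Both ideas – a per-user salt and repeated iterations – can be illustrated with PBKDF2 from Python’s standard library. This is a sketch; the iteration count is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    # A fresh random salt per user means identical passwords
    # still produce different stored digests.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify(password: str, salt: bytes, iterations: int, dk: bytes) -> bool:
    # Re-run the same salted, iterated hash and compare in constant time
    # to avoid leaking information through timing differences.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, dk)
```

The high iteration count is what makes each guess expensive for a GPU-equipped cracker, while the random salt defeats precomputed lookup tables.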

Some of these systems using MD5 or SHA-1 endangered users further by not using salt or iterations. X3cms 0.5.3, GetSimple, MiniBB 3.2.2, and Phorum were on that naughty list.

The most secure CMS systems from a hashing perspective used bcrypt, a password hashing function which is resistant to GPU-based parallel computing cracks. On the nice list are Joomla, Zurmo, OrangeHRM, SilverStripe, Elgg, XOOPS, e107, NodeBB, Concrete5, phpBB, Vanilla Forums, Ushahidi, Lime Survey, Mahara, Mibew, vBulletin, OpenCart, PrestaShop, and Moodle.

It should be noted that the weakness affects how quickly an attacker can guess the contents of a password database they have stolen from a breached website. It doesn’t affect their ability to breach the website in the first place or to guess passwords at the login prompt.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/0OIjenI9dhw/

Netflix researcher spots TCP SACK flaws in Linux and FreeBSD

Three vulnerabilities have been discovered in the FreeBSD and Linux kernels through which attackers could induce a denial-of-service by clogging networking I/O on affected systems.

All three were uncovered by Netflix Information Security’s Jonathan Looney (yes, Netflix has a cybersecurity division). We’ll start with the most critical, dubbed ‘SACK Panic’, also identified as CVE-2019-11477.

It affects all Linux distro kernels from 2.6.29 onwards (March 2009); Looney describes it as:

A sequence of SACKs [that] may be crafted such that one can trigger an integer overflow, leading to a kernel panic.

SACK stands for Selective Acknowledgment, a feature introduced nearly two decades ago to help TCP performance when retransmitting packets.

‘Kernel panic’, meanwhile, is the Linux equivalent of what Windows users know as the Blue Screen of Death – in other words, an unrecoverable system crash.

The small morsel of good news is that only systems utilising TCP SACK should be vulnerable, which limits the problem to things like [*gasp*] web servers.

The second, identified as CVE-2019-11478, is a related problem whereby an attacker might craft a sequence of SACKs that would cause excess resource usage in the TCP retransmission queue on all Linux versions.

On kernels earlier than 4.15, the same flaw could be exploited to cause ‘SACK Slowness’ delays, in effect amplifying the denial of service.

A variation of this, CVE-2019-5599, is identical to SACK Slowness but affects only FreeBSD 12 installations using the RACK TCP Stack.

The third and final bug, CVE-2019-11479, is about inducing increased bandwidth consumption through which…

An attacker can force the Linux kernel to segment its responses into multiple TCP segments, each of which contains only 8 bytes of data. This drastically increases the bandwidth required to deliver the same amount of data.
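The amplification is easy to quantify. Assuming roughly 40 bytes of TCP/IP header per segment (real sizes vary with options), a back-of-envelope calculation shows how 8-byte payloads inflate on-the-wire traffic:

```python
def wire_bytes(payload: int, mss: int, header: int = 40) -> int:
    # Total bytes on the wire for `payload` bytes of data split into
    # segments of at most `mss` payload bytes, plus per-segment headers.
    segments = -(-payload // mss)  # ceiling division
    return payload + segments * header

normal = wire_bytes(14_600, 1460)  # ten full-size segments
tiny = wire_bytes(14_600, 8)       # forced 8-byte segments
```

For 14,600 bytes of data, ten 1,460-byte segments cost about 15,000 bytes on the wire, while 1,825 eight-byte segments cost roughly 87,600 – nearly a sixfold inflation, before even counting the extra ACK traffic.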

Patch v mitigate

The SACK Panic flaw can be fixed by applying PATCH_net_1_4.patch or, alternatively, mitigated by disabling SACK processing (/proc/sys/net/ipv4/tcp_sack set to 0) whilst noting the possibility that this might break connections requiring a low Maximum Segment Size (MSS).
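The current mitigation status can be checked by reading that same sysctl. A small Python sketch (Linux-only; it returns None where the file doesn’t exist, e.g. on other platforms):

```python
from pathlib import Path

def sack_enabled(procfile: str = "/proc/sys/net/ipv4/tcp_sack"):
    # True if SACK processing is on, False if it has been disabled
    # as a mitigation, None if the sysctl isn't present (non-Linux).
    p = Path(procfile)
    if not p.exists():
        return None
    return p.read_text().strip() == "1"
```

The same value can be flipped with `sysctl -w net.ipv4.tcp_sack=0`, at the performance cost of losing selective acknowledgment on lossy links.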

Additionally, kernel versions up to and including 4.14 require a second patch: PATCH_net_1a.patch.

For excess resource usage / SACK Slowness (CVE-2019-11478) the required fix is PATCH_net_2_4.patch, or by applying workaround mitigations such as disabling SACK processing.

For SACK Slowness (CVE-2019-5599) affecting FreeBSD, the mitigation is either to apply the split_limit.patch or temporarily disable the RACK TCP stack.

Finally, for excess bandwidth consumption due to low MSS (CVE-2019-11479), the patches are PATCH_net_3_4.patch and PATCH_net_4_4.patch, which add a sysctl enforcing a minimum MSS.

Instructions for affected distributions can be found on the support sites for Amazon AWS, Red Hat, Debian, SUSE, and Ubuntu.

Apple’s macOS (which traces its lineage back to the Darwin FreeBSD port from NeXTSTEP days) isn’t mentioned anywhere in connection with the alert despite supporting SACKs. If we receive an update on that either way, we’ll publish further details.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7CXtjIop95c/

NASA’s JPL may be able to reprogram a probe at the arse end of the solar system, but its security practices are a bit crap

NASA’s Jet Propulsion Lab still has “multiple IT security control weaknesses” that expose “systems and data to exploitation by cyber criminals”, despite cautions earlier this year.

Following up on a strongly worded letter sent in March warning that NASA as a whole was suffering cybersecurity problems, the NASA Office of the Inspector General (OIG) has now released a detailed report (PDF).

Its findings aren’t great. The JPL’s internal inventory database is “incomplete and inaccurate”, reducing its ability to “monitor, report and respond to security incidents” thanks to “reduced visibility into devices connected to its networks”.


One sysadmin told inspectors he maintained his own parallel spreadsheet alongside the agency’s official IT Tech Security Database system “because the database’s updating function sometimes does not work”.

An April 2018 cyberattack exploited precisely this weakness when an unauthorised Raspberry Pi was targeted by an external attacker.

A key network gateway between the JPL and a shared IT environment used by partner agencies “had not been properly segmented to limit users only to those systems and applications for which they had approved access”. On top of that, even when JPL staff opened tickets with the security helpdesk, some were taking up to six months to be resolved – potentially leaving in place “outdated compensating security controls that expose the JPL network to exploitation by cyberattacks”.

No fewer than 666 tickets with the maximum severity score of 10 were open at the time of the visit, the report revealed. More than 5,000 in total were open.

Indeed, such a cyberattack struck the whole of NASA back in December. Sensitive personal details of staff who worked for the American space agency between 2006 and 2018 were exfiltrated from the programme’s servers – and it took NASA two months to tell the affected people.

Even worse, the JPL doesn’t have an active threat-hunting process, despite its obvious attractiveness to state-level adversaries, and its incident response drills “deviate from NASA and recommended industry practices”. The JPL itself appears to operate as a silo within NASA, with the OIG stating: “NASA officials [did not] have access to JPL’s incident management system.”

Perhaps this report will be the wakeup call that NASA in general, and the JPL in particular, needs to tighten up its act. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/06/19/nasa_jpl_oig_report_cybersecurity/

Insecure Home IoT Devices a Clear and Present Danger to Corporate Security

Avast-sponsored study shows wide prevalence of IoT devices, many with weak credentials and other security vulnerabilities.

Nearly three years after the Mirai distributed denial-of-service (DDoS) attacks, the danger to corporate networks from insecure consumer Internet of Things (IoT) devices appears to have grown.

Researchers from Avast Software, in collaboration with researchers from University of Illinois Urbana-Champaign and Stanford University, recently analyzed data from 83 million Internet-connected devices in some 16 million homes globally to better understand how they are deployed, as well as how secure they are. Devices scanned included home routers, game consoles, printers, scanners, home IP cameras, and home automation devices, such as smart thermostats. Computers and phones were excluded from the IoT classification in the study.

The research highlights not only the prevalence of IoT devices, but also their inherent vulnerabilities, says Rajarshi Gupta, vice president and head of AI at Avast. 

According to the study, one-third of homes worldwide have at least one IoT device. In North America, the proportion is double that, at 66%. One in four homes in North America has three or more IoT devices, and 9% have six or more.

Media devices, such as smart TVs and streaming devices, are by far the most common IoT devices in a majority of geographies. However, beyond that, the types of IoT devices installed in home networks tend to vary widely by region.

For example, Internet-connected home surveillance equipment is the most common IoT device across several parts of Asia; work appliances, like printers, are more prevalent in Africa; and voice and home assistant devices, such as those from Amazon and Google, are substantially more common in North America than anywhere else.

Security Concerns
Disturbingly, millions of the devices in the Avast study have security weaknesses, such as open services, weak default credentials, and vulnerabilities to known attacks. Millions of devices, for instance, are still using obsolete protocols, such as FTP and Telnet, Gupta says. In some parts of Africa, the Middle East, and Southeast Asia, as many as 50% of IoT devices still support FTP, and nearly 40% of home routers in Central Asia use Telnet.
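Checking whether a host on a network you administer still answers on these legacy ports takes only a plain TCP connect. A minimal sketch (the port list and timeout are illustrative, and it should only ever be pointed at hosts you own):

```python
import socket

# Legacy plaintext protocols that have no place on a 2019 network.
LEGACY_PORTS = {21: "ftp", 23: "telnet"}

def legacy_services(host, timeout=1.0):
    """Return which legacy plaintext services a host accepts TCP connections on."""
    found = []
    for port, name in sorted(LEGACY_PORTS.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(name)
        except OSError:
            continue  # closed, filtered, or unreachable
    return found
```

A non-empty result for a home router or camera is exactly the exposure the study measures: credentials and data crossing the wire unencrypted.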

Open services and weak HTTP credentials are another major concern across a significant proportion of the routers that Avast and the other researchers analyzed. Only a small number of home routers in the study host publicly accessible services, but of those that do, more than half (51.2%) also carry a recently exploited vulnerability.

“Millions of IoT devices today still use obsolete protocols like Telnet and FTP, both of which are known to transfer data in plain text,” Gupta notes. “The security implications of this cannot be overstated, and I’d argue that there is absolutely no reason to be using these protocols in 2019.”

The Mirai malware of 2016, for instance, exploited such weaknesses in IoT products to enable attackers to quickly assemble botnets for launching DDoS attacks. There are other concerns, too. Many IoT products that people use at home are found in work environments as well, especially printers, cameras and TVs, Gupta says.

“If a gadget at home is compromised and that employee unknowingly uses their work laptop on the same Wi-Fi, a cyberattacker can infiltrate the computer, too,” he says.

The Avast-sponsored study shows that despite a large number of branded IoT products around the world, the number of manufacturers is surprisingly small.

“There’s a long tail of more than 14,000 IoT manufacturers globally,” he says. “Yet an overwhelming majority of all devices — 94% — are made by the same 100. Half are made by the same 10.”

This market dominance means the onus for building strong privacy and security postures for IoT products rests with a handful of companies.

“Device manufacturers — at the very least, the top 100 — need to incorporate stronger security principles into their software development process,” Gupta says. Consumers, meanwhile, should consider security controls that can observe traffic at the router-level, identify irregular device behavior, and quarantine malicious network flows or infected devices.
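The router-level monitoring Gupta recommends boils down to baselining each device’s traffic and flagging sharp deviations. A minimal sketch of one such check (the z-score threshold and byte counts are illustrative assumptions, not Avast’s method):

```python
from statistics import mean, pstdev

def is_anomalous(daily_bytes_history, todays_bytes, z_threshold=3.0):
    """Flag a device whose traffic today deviates sharply from its own baseline."""
    mu = mean(daily_bytes_history)
    sigma = pstdev(daily_bytes_history)
    if sigma == 0:
        # Perfectly flat baseline: any change at all is worth a look.
        return todays_bytes != mu
    return abs(todays_bytes - mu) / sigma > z_threshold
```

A smart camera that normally moves ~100 KB a day and suddenly pushes megabytes of outbound traffic – the signature of a device conscripted into a Mirai-style botnet – would trip this check and could be quarantined at the router.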

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.

Article source: https://www.darkreading.com/iot/insecure-home-iot-devices-a-clear-and-present-danger-to-corporate-security/d/d-id/1335002?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Come to Black Hat USA for the Latest Hardware Hacks

Cars. Vending machines. Hotel suites. Security experts will share the tools and techniques they’ve used to break into all these things and more at Black Hat USA this August.

There’s no better place to see cutting-edge technology exploited than Black Hat USA in Las Vegas this August, where an entire track of talks is dedicated to the (in)security of hardware, firmware, and embedded devices.

Moving from Hacking IoT Gadgets to Breaking into One of Europe’s Highest Hotel Suites is a great example; it’s a 50-minute Briefing in which you’ll be shown and taught the tools and methods security researchers used to break the Bluetooth LE-based mobile phone key system of a major hotel chain.

You’ll get a full walkthrough of how the researchers worked out how to wirelessly sniff someone entering their room (or just unlocking the elevator) and then reconstruct the needed data to open the door with any BTLE-enabled PC — or even a Raspberry Pi.

BMW fans and car security enthusiasts are advised to check out 0-days Mitigations: Roadways to Exploit and Secure Connected BMW Cars, another 50-minute Briefing in which speakers from BMW Group and Tencent’s Keen Security Lab will share details of how multiple BMW car models were exploited through physical access and a remote approach — without any owner interaction.

This is a great opportunity to be introduced to the system architecture and external attack surfaces of connected cars, and you’ll also learn details about the vulnerabilities (including multiple 0-days) which existed in two vehicle components: the Infotainment System (a.k.a. Head Unit) and the Telematics Control Unit.

All the 4G Modules Could be Hacked is, as the title suggests, a promising Briefing from the Baidu Security Lab about how the 4G modules built into things like vending machines, cars, and advertising screens can be hacked. Now that the “Internet of Things” is in vogue, this Briefing is a great way to get a comprehensive overview of the hardware structure of IoT modules. You’ll also learn their most common vulnerabilities and see how researchers used them to do things like take control of vehicles by attacking their built-in entertainment systems. Don’t miss it!

More details on these exciting Briefings and many more are available now on the Black Hat USA Briefings page, which is regularly updated with new content as we get closer to the event!

Black Hat USA returns to the Mandalay Bay in Las Vegas August 3-8, 2019. For more information on what’s happening at the event and how to register, check out the Black Hat website.

Article source: https://www.darkreading.com/black-hat/come-to-black-hat-usa-for-the-latest-hardware-hacks/d/d-id/1334999?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple