STE WILLIAMS

Cathay Pacific Hit with Fine for Long-Lasting Breach

The breach, which was active for four years, resulted in the theft of personal information on more than 9 million people.

The UK’s Information Commissioner’s Office (ICO) has fined airline Cathay Pacific £500,000 — with a 20% discount to £400,000 if the penalty is paid by March 12 — for basic security inadequacies in a four-year data breach that lasted from 2014 until 2018.

As a result of the breach, the personal data of 9.4 million people was stolen. The stolen information included names, nationalities, dates of birth, phone numbers, email addresses, mailing addresses, passport details, frequent flier numbers, and travel histories.

Among the criticisms levied against Cathay Pacific is that it took months after the breach was found for the airline to notify regulators, a delay the company blamed on the need to fully understand the breach. Other “security inadequacies” noted in the order for the fine include failure to encrypt database backups containing personal data, failure to patch an Internet-facing server against a 10-year-old vulnerability, and using past-end-of-life operating systems on servers. 

For more, read here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s featured story: “With New SOL4Ce Lab, Purdue U. and DoE Set Sights on National Security.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/cathay-pacific-hit-with-fine-for-long-lasting-breach/d/d-id/1337232?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Perfect Travel Security Policy for a Globe-Trotting Laptop

There are many challenges to safely carrying data and equipment on international travels, but the right policy can make navigating the challenges easier and more successful.

(Image by Rawf8, via Adobe Stock)

RSA Conference 2020 – San Francisco – It was an impressive claim. “Implementing the Perfect Travel Laptop Program” was on the sign at the door of the conference room at RSAC, and the attendees at the morning session were buzzing with anticipation. Then Brian Warshawsky, JD CCEP, manager of export control compliance at the University of California Office of the President, took the stage.

“There’s really no such thing as a perfect travel program,” he said.

Well alrighty, then.

There is such a thing, he said, as a very good travel program. And the key to that very good program is balance. With that, Warshawsky began laying out the factors that must be balanced in the creation of a travel laptop program.

First, he said, “Business travelers must understand they have no inherent right to privacy while traveling, and that most network operators conduct at least superficial surveillance.” That awareness means that security professionals within an organization should perform triage on the data and systems that employees want to carry, especially when the destination is international.

Data triage

Warshawsky said that governments’ willingness to inspect and copy data as it crosses their borders on electronic devices means that organizations need to ask themselves a series of questions about the data.

  • Is the data and information contained on the device worth more than the device itself?
  • What are the local laws in the country being entered?
  • What is the result to both the individual and the organization if all data on the device were compromised or released?
  • What is the effect of device encryption?

He pointed out that these are the foundational questions, and they must be asked not only about the countries of origin and destination, but about every country that will be a transit point on the trip. Warshawsky cited London’s Heathrow Airport as an infamous mid-point in international travel: many connections, he said, require changing terminals, which means passing through a security checkpoint, at which point officials can demand access to files on devices.

Encryption weakness

Many organizations think, Warshawsky said, that full-device encryption will be enough to protect all on-device information from prying eyes. It’s important to remember, he reminded the audience, that on-device encryption is only as strong as the individual carrying the device. When local authorities threaten to imprison an employee until they supply the device password — or until the authorities can crack the device — it may not take long before the device is unlocked, decrypted, and completely duplicated onto local servers.

In addition to potential human weakness, Warshawsky said that organizations must be aware that very strong encryption might be illegal to carry into certain nations. Part of the compliance review for a travel program must include answering the question of whether the information on the device, and the technology used to protect it, can legally be carried out of the country. The penalties for getting this wrong, he pointed out, can be severe for both employee and organization.

The risk-based approach

To properly assess the risk of a trip, there are five questions that must be asked in the process:

  • What is on the device?
  • Who owns it?
  • How is it being used and secured?
  • Why is it needed overseas?
  • Where will it be located and for how long?

The question of what is on the device is especially critical when an employee is going to give a presentation at an international conference: while the presentation itself will likely have been vetted and approved by both management and corporate legal, supporting documents brought along for follow-up conversations might easily be outside organizational guidelines, national law, or both.
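The five risk questions above lend themselves to a simple checklist. Here is an illustrative sketch of how an organization might encode them; the field names, decision rules, and recommendation strings are all invented for illustration and are not part of Warshawsky's program:

```python
from dataclasses import dataclass


@dataclass
class DeviceTriage:
    """Answers to the five pre-travel questions, reduced to flags.
    All fields and the decision rule below are illustrative only."""
    contains_sensitive_data: bool     # What is on the device?
    personally_owned: bool            # Who owns it?
    full_disk_encrypted: bool         # How is it being used and secured?
    needed_for_trip: bool             # Why is it needed overseas?
    transits_high_risk_country: bool  # Where will it be, and for how long?


def recommend(triage: DeviceTriage) -> str:
    """Map the triage answers to a hypothetical travel-program decision."""
    if not triage.needed_for_trip:
        return "leave device at home"
    if triage.contains_sensitive_data and triage.transits_high_risk_country:
        return "issue a clean loaner device"
    if triage.personally_owned or not triage.full_disk_encrypted:
        return "review with security before travel"
    return "approve with standard briefing"
```

The point of a sketch like this is only that each of the five questions changes the outcome; a real program would weigh destination law and data classification far more carefully.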

Ask the questions

Before travel begins, Warshawsky said that there should be a formal, documented series of steps the traveler must take.

  • Pre-travel briefings
  • Pre-travel surveys
  • Guides
  • Net forms
  • Signed acknowledgement forms
  • Travel letters
  • Data and hardware classification

The surveys, he said, are especially important for answering questions around what information is absolutely required for the trip, whether there are workable alternatives to carrying the information on a device, and how to use or transfer the information in any nation that might outlaw VPN use.

Ultimately, he said, travelers should only carry data that they (and the organization) are willing to see compromised. Travelers must be fully briefed on limitations on their rights at international crossings and on the laws applying to data in every country they will visit or transit. The point of all this is to enable and support international travel, but to do so in a way that is legally compliant at every step of the trip.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/the-perfect-travel-security-policy-for-a-globe-trotting-laptop/b/d-id/1337227?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

EternalBlue Longevity Underscores Patching Problem

Three years after the Shadow Brokers published zero-day exploits stolen from the National Security Agency, the SMB compromise continues to be a popular Internet attack.

EternalBlue, the exploit publicly leaked three years ago next month, continues to threaten unpatched Windows servers connected to the Internet, with more than 100 different sources using it to attack systems on a daily basis, according to a new report by cybersecurity firm Rapid7.

The number of Internet-connected servers vulnerable to EternalBlue has declined steeply since the WannaCry ransomware attack used the exploit to infect hundreds of thousands of systems in May 2017, destroying data and disrupting operations. Still, more than 600,000 servers continue to allow server message block (SMB) connections on the public Internet, according to Rapid7’s Internet monitoring system.
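Rapid7's figure comes from Internet-wide scanning, but checking whether a single host of your own exposes SMB (TCP port 445) takes only a socket probe. A minimal sketch (probe only hosts you are authorized to test; an open port here means the service answers, not that it is vulnerable):

```python
import socket


def smb_exposed(host: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Return True if `host` accepts TCP connections on the SMB port."""
    try:
        # create_connection resolves the host and attempts a TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, filtered, or timed out: not reachable.
        return False
```

A host where this returns True is at minimum presenting SMB to whoever scans it, which is exactly the attack surface the report describes.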

While some businesses need to keep the SMB port open to support critical legacy applications, for the most part companies are failing to detect and secure their attack surface, says Bob Rudis, chief data scientist for Rapid7.

“At this point, it is a well-known, super-versatile piece of code that unfortunately still works way too well — maybe not on Internet-facing servers, but certainly once an attacker gets inside a network,” he says.

While the number of unpatched servers has declined significantly, the attack is still finding success, he says. “Vulnerable systems are not in the millions but the sub-millions, but there are still enough hosts out there for bad actors to do what they need to do,” he adds.

Rapid7 is not the only company to see EternalBlue as a continuing threat. Remote exploitation of the vulnerability continues to be the top network threat detected by McAfee today, said Steve Grobman, chief technology officer of the security firm, during his RSA Security Conference keynote.

The SMB vulnerability, designated MS17-010 by Microsoft and assigned three different CVEs, has joined other vulnerabilities, such as the remote procedure call (RPC) issue — MS08-067 — that allowed the Conficker worm to propagate. And BlueKeep (CVE-2019-0708), the vulnerability in the remote desktop protocol (RDP) announced last June, still affects 60% of servers, totaling hundreds of thousands of systems, Grobman said.

Patching for these significant issues remains a problem, he said.

“Significant populations of machines are still not patched,” Grobman said. “We recognize the criticality of patching, but the data suggests we are collectively not moving fast enough to patch known vulnerabilities, including those that have significant impacts.”

Three years ago this month, a group of hackers calling themselves the Shadow Brokers, a nom de guerre taken from the sci-fi video game “Mass Effect,” released files leaked from the National Security Agency, the United States’ intelligence service, that included a number of significant exploits. EternalBlue became popular because it is easy to exploit and reliable, says Rapid7’s Rudis.

While most ISPs are blocking SMB on residential networks, the fact that more than 600,000 computers and servers continue to expose the service to the Internet is a danger, even if the service is patched, he says.

“We do know that there are, well, I wouldn’t say legitimate, but there are people, organizations that are deliberately sticking SMB on the Internet,” Rudis says. “They know that it is problematic, they know that they are going to have to keep reimaging their servers, and they lament the requirement to keep them out there, but they at least know enough that these things shouldn’t be connected to anything real.”

Overall, the danger posed by still-vulnerable SMB servers should not be discounted, as they remain a platform from which to launch attacks, says Adam Meyers, vice president of intelligence for cybersecurity services firm CrowdStrike. 

“It is hard to say what is an infected host, what is a research system — there is just so much bad stuff on the Internet continuously occurring,” he says. “And that is just the annoyance-level activity, not even targeted attacks where someone is going after you.”

In the end, companies should be re-evaluating whether difficult-to-protect protocols, such as SMB, are worth having exposed to the Internet. For Rapid7’s Rudis, the answer is a firm “no.”

“If you are running SMB on the Internet, it’s either a honeypot or you’re an idiot — it really comes down to those two things,” Rudis says. “You can’t secure it.” 


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/vulnerabilities---threats/eternalblue-longevity-underscores-patching-problem/d/d-id/1337233?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

CISOs Who Want a Seat at the DevOps Table Better Bring Value

Here are four ways to make inroads with the DevOps team — before it’s too late.

Throughout a series of recent conversations that I’ve had with CISOs, a common question has emerged: “How do I get a seat at the DevOps table?”

It’s an understandable challenge that many security leaders are grappling with today. Too often, security teams are unaware of what’s in their organization’s application pipeline until after it’s pushed to production. Only then can they assess any potential risk introduced to the business, while simultaneously scrambling to take appropriate action.

To be in the room where it happens, to get that coveted invitation to the DevOps table — and keep your seat for the long run — you must contribute real value. It’s not enough to simply be there. It requires a balanced give-and-take.

Adding value happens in different ways. From a business perspective, security teams provide essential risk management and mitigation services to organizations, but from the DevOps perspective, more is needed. Security teams need to design programs and introduce solutions that can keep pace with the DevOps workflow. In a word, security needs to bring speed. Here are four ways to make that happen.

1. Forge Relationships
Sure, you can sit in on a few development conversations. It’s a great way to initiate efforts around effectively securing applications and infrastructure. You’ll learn a lot and maybe even share a few best practices on secure coding. But cultivating collaborative, mutually beneficial relationships requires much more. You have to make the time and effort to get to know your development counterparts. Get smart on DevOps fundamentals, read what they’re reading, participate in regular demos, and understand what keeps them up at night and what excites them most about their work. These personal relationships and bits of insider knowledge will help you develop strong security strategies and implement the right solutions to help DevOps teams maintain velocity.

2. Champion Innovation
CISOs and security leaders, it’s up to you to reverse the long-held perception of security as a barrier to innovation and growth. In fact, a recent Harvard Business Review Analytic Services study found 73% of respondents believe a CISO’s ability to recognize and nurture innovation is “very important.” By building relationships with the DevOps team, CISOs can begin to proactively anticipate their evolving needs, get involved in new DevOps initiatives at the start (instead of coming on board after issues are discovered) and even spearhead efforts to adopt new approaches that help drive innovation and speed processes — safely.

3. Speak Their Language
According to Gartner, “CISOs must apply rigor and perspective to the business orientation, cost and value of risk management and cybersecurity.” Much has been written on the importance of CISOs “speaking the language of business” by communicating risk in terms of dollars and cents to executive teams and boards. But it can’t stop there. Today’s CISOs must also speak the language of DevOps. Risk must be communicated in terms of speed. Consider this line as an example: “If we wait to address vulnerabilities after they’re uncovered late in the software development life cycle, you’ll need to go back, reopen the code that you wrote, refresh your memory on the logic you built, and pinpoint the specific module that’s causing a problem. This unnecessary backtracking is going to waste time and slow things down.”

4. Deliver Solutions with Value
The primary purpose of the DevOps approach is to speed the development and release of software. It’s a comprehensive, continuous process, and increasing speed demands orchestration and automation. To deliver real value to DevOps teams, security must adopt similarly agile methodologies. This means integrating application risk management seamlessly into the entire DevOps process, instead of emerging at inopportune times to fix software and infrastructure vulnerabilities as they surface. It means embracing tools that are fully transparent to developers but also allow them to maintain existing workflows. Such tools should be able to orchestrate and automate the discovery and prioritization of vulnerabilities, speed remediation efforts, and provide a single, consolidated view of risk. 
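As a concrete illustration of what "orchestrate and automate the discovery and prioritization of vulnerabilities" can mean, here is a toy prioritizer that ranks findings by CVSS score weighted by asset exposure. The field names and weighting are invented for illustration; real tools use much richer risk models:

```python
def prioritize(findings):
    """Rank findings so Internet-facing, high-CVSS issues surface first.

    Each finding is a dict with 'id', 'cvss' (0-10), and
    'internet_facing' (bool). The 1.5x exposure multiplier is an
    illustrative choice, not an industry standard.
    """
    def risk(finding):
        exposure_multiplier = 1.5 if finding["internet_facing"] else 1.0
        return finding["cvss"] * exposure_multiplier

    return sorted(findings, key=risk, reverse=True)


findings = [
    {"id": "CVE-A", "cvss": 9.8, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "internet_facing": True},
    {"id": "CVE-C", "cvss": 9.0, "internet_facing": True},
]
```

Note how the exposure weighting pushes an Internet-facing 7.5 above an internal 9.8 — the kind of context-aware ordering that raw scanner output lacks.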

Finding ways to empower DevOps at the speed of business is key to bridging the gap between security and development teams. By providing a security overlay to the pipeline platforms developers already use — from GitHub and GitLab to Azure DevOps and BitBucket — and sharing risk and remediation advice in these platforms’ native forms, developers can focus on what matters. That is, the rapid development of high-quality software that drives competitive business and promises a safer, more productive society.


John Worrall has more than 25 years of leadership, strategy, and operational experience across early stage and established cybersecurity brands. In his current role as CEO at ZeroNorth, he leads the company’s efforts to help customers bolster security across the software life …

Article source: https://www.darkreading.com/operations/cisos-who-want-a-seat-at-the-devops-table-better-bring-value/a/d-id/1337154?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook purges hundreds of fake accounts from state actors, marketers

In the first of what’s going to be monthly reports on its efforts to battle coordinated inauthentic behavior (CIB) leading up to the 2020 US elections and beyond, Facebook said that it removed five networks of accounts, Pages and Groups engaged in foreign or government interference in February.

The platform is always battling inauthentic behavior, including fake engagement, spam and artificial amplification, but it doesn’t bother to make announcements about those quotidian takedowns, most of which are financially motivated. The five February takedowns are different: they have to do with countering foreign interference or domestic influence operations, Facebook said in a post on Monday.

Facebook says that it views influence operations – also referred to as influence ops (IO) – as “coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation,” be they carried out by domestic, non-state campaigns (CIB) or CIB done on behalf of a foreign or government actor (FGI).

CIB is against Facebook policy. The platform has been going after perpetrators who use Facebook to meddle with public discourse as was done in the 2016 US presidential election, when Russia targeted all 50 states, and as is still happening today, as we’ve been warned.

In total, last month, Facebook took down 467 Facebook accounts, 1,245 Instagram accounts, 248 Pages, 49 Groups, and $1.2 million worth of advertising.

Ben Nimmo, Nonresident Senior Fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab), pointed out that the CIB isn’t just about elections, and the groups behind it aren’t all state actors.

This is a Whack-A-Mole situation, said DFRLab, which worked with Facebook to analyze the networks the platform took down. In an analysis of the “deleted assets,” DFRLab says that it found that some bore “familiar hallmarks” of previous campaigns orchestrated by marketing companies NewWaves and Newave, registered in Egypt and the United Arab Emirates, respectively.

Facebook had already taken this network down once before, in August 2019. Here’s DFRLab’s investigation into the network, shortly after that first takedown.

The networks used their Instagram and Facebook accounts to spread both uplifting and humorous content with politically charged narratives, DFRLab said, “presumably to garner a wide following before pivoting into regional politics.”

It’s worth noting that this is happening beyond Facebook: as DFRLab notes, the networks post slightly tweaked memes and other CIB across all the major social media platforms, “using similar and sometimes identical accounts and content across Facebook, Instagram, and Twitter.”

These are the networks that Facebook tore up last month:

  1. India: Facebook removed a network of 37 Facebook accounts, 32 Pages, 11 Groups and 42 Instagram accounts whose activity originated in India and which focused on the Gulf region, US, UK and Canada. Facebook says that the people behind the network tried to conceal their identities and coordination, but an investigation found links to aRep Global, a digital marketing firm in India.
  2. Egypt: Facebook removed a network of 333 Facebook accounts, 195 Pages, 9 Groups and 1194 Instagram accounts. Its activity originated in Egypt, and it focused on countries across the Middle East and North Africa. Whoever’s running the network also tried to disguise their identities and the fact that they were coordinating their behavior, but Facebook’s investigation found links to two marketing firms in Egypt: New Waves and Flexell. In September, the New York Times reported that New Waves is owned by former Egyptian military officer Amr Hussein: a vocal supporter of Egypt’s authoritarian leader, President Abdel Fattah el-Sisi. The obscure digital marketing company is behind a network of fake Facebook accounts that praised Sudan’s military days after the military massacred pro-democracy demonstrators in Khartoum, according to the Times. Both companies have repeatedly violated Facebook’s Inauthentic Behavior policy and are now banned.
  3. Russia: Facebook removed a network of 78 Facebook accounts, 11 Pages, 29 Groups and four Instagram accounts. The network activity originated in Russia and focused primarily on Ukraine and neighboring countries, the company said. Facebook’s investigation found that the network has links to Russian military intelligence services.
  4. Iran: Facebook booted a small network of 6 Facebook accounts and 5 Instagram accounts that originated in Iran and focused primarily on the US.
  5. Myanmar, Vietnam: It removed 13 Facebook accounts and 10 Pages operated from Myanmar and Vietnam that focused on Myanmar, and found links to two regional telecom providers.

For more information, check out Facebook’s full report.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/_ord76BB-Vg/

Tech support scammers hacked back by vigilante

A UK cybercrime vigilante was so incensed by tech support scammers that he reverse-hacked a call centre in India, revealing CCTV footage of perpetrators as they ripped off their victims in real-life calls.

Publicised by a BBC documentary, the hack was the work of ‘Jim Browning’ (not his real name), who has acquired a following on his YouTube channel for his campaigns to expose how these crimes work and the individuals behind them.

During 2019, Browning said he was able to identify dozens of call centres in India where many of tech support scams targeting English speakers originate.

Tech support scams typically involve phoning people in the UK or US claiming to represent a large company such as Microsoft and tricking them into allowing remote access to their computer after claiming it is infected with malware (scams also use malware pop-ups or poisoned search engine results containing fake support numbers).

If victims are reluctant, scammers will often up the ante by claiming that child abuse imagery has been detected which they must clean up or will have to report to the police.

The sums charged for bogus recovery can range from $80 to $1,000 or more. Hundreds of thousands of people fall for these scams every year, netting the individuals behind the frauds huge sums.

It’s a cheap crime to pull off and, until recently, the chances of being caught were close to zero because investigating scammers thousands of miles away can be difficult.

It’s into this space that digital vigilantes have stepped, using a variety of techniques to bait, torment and, in the case of Browning, directly hack and expose the identities of the people carrying them out.

Don’t try this at home

Browning told the BBC his technique is to allow scammers to connect to his computer, which has been set up to attack the scammer’s computer back using the same remote desktop connection.

He doesn’t say how he does this – that might depend on the software being used – but the use of a virtualised operating system to isolate the scammer’s activity, some form of reverse RDP attack, and the use of common hacking tools, seems likely.

In what he described as his most successful hack back yet, Browning was able to remotely access the CCTV webcams inside and outside the call centre used in one scam campaign, accessing recordings of 70,000 calls.

Footage captured included staff entering and leaving the building in Kolkata, milling around in its communal kitchen, and sitting at their desks, headsets on, making scam calls.

To the untrained eye, it just looks like well-dressed young people working in an office and yet some of the images clearly show the crimes being committed on-screen.

Browning was even able to record the fraudsters live as they sat at their desks trying to convince him to pay a fee to clean his own computer.

When one scammer claimed he was based in San Jose, the watching Browning decided to have fun:

Can you name me one restaurant in San Jose?

The scammer quickly turned to Google to locate a name, to which Browning quipped:

Without looking at Google.

Interestingly, the scammers nabbed by Browning were trying the classic Windows support scam, whose popularity shows no sign of waning despite attempts by Microsoft to shutter them.

Hacking back

The BBC traced some of the victims of the hacked call centre, locating call exchanges in which they were defrauded out of hundreds of pounds each.

Browning’s work sounds like just deserts, but he acknowledges the techniques he uses are illegal under UK and US law, hence his reluctance to identify himself. Browning told the BBC:

I do not try and gain access to someone’s computer unless they’re trying to scam me.

Although the evidence gathered by the latest hack back should be interesting to police – named individuals are easily identified in the act of committing crimes – police never endorse digital vigilantism. Evidence must be gathered and documented carefully to be passed to the Indian authorities so prosecutions can take place.

Hacking back is a contentious topic in the US where there have been several attempts to legalize it, in the face of strong objections from some in the computer security industry.

Although few scam victims in the UK and the US see their money again, there is evidence that the Indian call centre operators have recently come under more pressure. In 2018, 16 call centres were raided by police, with a second bust netting another 28 centres in late 2019.

But there are hundreds that remain in operation. The business is simply too profitable to give up on.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4-8vdrhWxX0/

Google fixes MediaTek bug in Android March patches

Google published patches for over 70 software vulnerabilities in its Android security bulletin this month, finally fixing a security exploit for MediaTek chipsets said to have been in the wild for months, affecting millions of devices.

The vulnerability, CVE-2020-0069, still hadn’t been updated on the MITRE CVE database at the time of writing, but Android software modification site XDA-Developers said that details have been openly available on its forums since early 2019. The flaw began as a workaround for rooting Amazon Fire tablets, it said this week.

Google classifies CVE-2020-0069 as an elevation of privilege bug in MediaTek’s command queue driver, and only gives it a high severity ranking in its bulletin. However, XDA-Developers classes it as a critical vulnerability with a score of 9.3, labelling it ‘MediaTek-su’.

The bug allows an attacker to get root access to an Android device without unlocking the bootloader, XDA-Developers said, by copying a script to their device and executing it in a shell. It warns that any app on an affected phone could copy the script to its private directory and execute it to root the system.

The forum also quotes MediaTek saying that it had patched the issue in May 2019. The problem is that many manufacturers haven’t applied those patches, XDA-Developers warned:

MediaTek chips power hundreds of budget and mid-range smartphone models, cheap tablets, and off-brand set-top boxes, most of which are sold without the expectation of timely updates from the manufacturer. Many devices still affected by MediaTek-su are thus unlikely to get a fix for weeks or months after today’s disclosure, if they get one at all.
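Because Android reports its monthly patch level as a date string (readable on a connected device via `adb shell getprop ro.build.version.security_patch`), checking whether a given handset carries this month's fix reduces to a date comparison. A minimal sketch, assuming that `YYYY-MM-DD` format:

```python
from datetime import date


def has_bulletin(patch_level: str, bulletin: str = "2020-03-01") -> bool:
    """Return True if a device's security patch level (YYYY-MM-DD, as
    reported by ro.build.version.security_patch) includes the given
    monthly bulletin."""
    return date.fromisoformat(patch_level) >= date.fromisoformat(bulletin)
```

A device reporting `2020-03-01` or `2020-03-05` includes the MediaTek fix; one stuck on `2019-12-01` does not — which is precisely the situation XDA-Developers describes for unmaintained budget hardware.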

By turning to Google for help, MediaTek can take advantage of the Android creator’s muscle, according to XDA-Developers, because Google can force OEMs to update their devices using licence agreements.

Google did not respond to a request for comment yesterday. We will update this story if it does so.

This may be the most controversial bug in the bunch, but it is far from the only severe one. The bulletin features two sets of patches (2020-03-01 and 2020-03-05), grouped so that Android OEMs can patch them more easily.

Media framework

The most serious bug in the first set exists in the media framework. This framework, which is part of Android system services, handles services like the camera and playing audio and video files. This remote code execution (RCE) bug (CVE-2020-0032) affects the operating system’s media codecs. Android versions 8, 8.1, 9, and 10 are susceptible.

The company also published details of two other bugs in the media framework, both ranked with high severity. One, CVE-2020-0033, was an elevation of privilege flaw, while the other was an information disclosure bug.

There were seven bugs in the system framework, which handles services like search and activity management. All of these were high-severity bugs, comprising two elevation of privilege flaws and five information disclosure vulnerabilities.

Criticals

The only critical flaws in the 2020-03-05 patch group were in closed source components from chip vendor Qualcomm, which accounted for 48 of the bugs in the Android bulletin overall. These 16 critical bugs included several buffer overflow errors. Several of these critical bugs were remotely exploitable flaws in WLAN firmware (CVE-2019-10546, 14031, 14083, 14086, 14097, and 14098), while another five (CVE-2019-10586, 10587, 10593, 10594, and 2317) affected the Qualcomm chip’s data modem.

An untrusted pointer dereference bug (CVE-2019-10612) in the kernel was also remotely exploitable, while a buffer overflow also cropped up in Qualcomm’s core power software. This bug, CVE-2019-14030, was only exploitable locally, though, as was CVE-2019-14071, a critical flaw in the company’s system debug component. A buffer overflow bug in Qualcomm’s video processing (CVE-2019-14045) is remotely exploitable, as is a Bluetooth bug (CVE-2019-14095).



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x_fTQFj-kGA/

NCSC: Secure your webcams now

The UK’s £1.9 billion, five-year National Cyber Security Strategy has a lofty goal: it aims to make the UK “the safest place to live and work online.”

Well, it sure as shinola can’t do that if people are slathering internet-enabled cameras all over their homes without first changing default passwords.

That’s why the National Cyber Security Centre (NCSC) – a part of GCHQ – has published some tips on how to safely use all those easily hijacked internet-enabled cameras, be they tucked into your robot vacuum, smoke alarms, water bottles, USB power plugs, lightbulbs, thermostats, alarm clocks, wall clocks, clothes hooks, teddy bears, air fresheners, picture frames, wall outlets, baby monitors, home surveillance systems, smart doorbells, or, say, decorative bird statues glued to the bed’s footboard for purposes one assumes aren’t always quite wholesome.

It’s woefully easy to hack something with an unchanged default password – passwords that voyeurs and other creeps can find online and then use to hijack video streams and eavesdrop on us. It’s particularly alarming when those passwords are supposed to secure video streams of your life, your front door, your bedroom, your child, your belongings, or any other manner of footage streamed out from your most intimate moments.

Fortunately, excruciating bit by long-time-coming bit, the Internet of Things (IoT) is becoming more secure. Google recently announced that it would soon begin forcing users of its Nest gadgets to use two-factor authentication (2FA), for one. It was welcome news, as was Amazon’s move a week later to do the same with its Ring video doorbells.

“Change your webcam and baby monitor’s default passwords” was actually our Advent Tip No. 5 for December 2015, and it makes sense that it’s still on the list when it comes to securing webcams, given that they’re still getting hijacked 4+ years later.

Caroline Normand – Director of Advocacy for the UK consumer advocacy group Which? – said that following the NCSC’s guidance on securing webcams is particularly important, given a) all the security flaws that keep popping up in cameras and children’s toys, and b) the fact that we’re still waiting for laws that will ensure that smart devices are safe:

Until new laws are in place, it is vital that consumers research smart device purchases carefully, and follow guidance to ensure their devices are protected by strong passwords and receiving regular security updates to reduce the risk of hackers exploiting vulnerabilities.

Digital Infrastructure Minister Matt Warman has introduced laws that will address this mess in the future. They’ll require that:

  • Device passwords must be unique and not resettable to any universal factory setting;
  • Manufacturers must provide a public point of contact so anyone can report a vulnerability; and
  • Manufacturers and retailers must state the minimum length of time for which the device will receive security updates.

Here are the three tips from the NCSC, with a sprinkling of our own advice:

1. Change your webcam’s default password to a secure one

It’s easy to do with the app you use to manage the device. The NCSC recommends stringing together three random words that are easy for you to remember and using the blob as a password.
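For illustration, the three-random-words approach is trivial to sketch in code. This toy generator is an assumption-laden example rather than anything the NCSC publishes – the word list here is purely illustrative, and a real generator would draw from a dictionary of thousands of words – but it shows the idea, using Python’s cryptographically suitable `secrets` module:

```python
import secrets

# Illustrative word list only; a real generator would use a large dictionary.
WORDS = ["otter", "raincoat", "lantern", "biscuit", "harbour",
         "maple", "pebble", "violin", "compass", "thistle"]

def three_word_passphrase(words=WORDS, n=3):
    """String together n randomly chosen words into one memorable blob."""
    return "".join(secrets.choice(words) for _ in range(n))

passphrase = three_word_passphrase()
print(passphrase)
```

Even with a modest 10,000-word dictionary, three words give 10^12 combinations – far harder to guess than a default password, yet far easier to remember than line noise.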

And if a website gives you the option to turn on two-factor authentication (2FA or MFA), do that too.

2. Regularly update your security software

Not only does this keep your devices secure, but it often adds new features and other improvements, the NCSC says. On Safer Internet Day a few weeks ago, Paul Ducklin had this to say about the importance of updating your software:

Most software patches these days aren’t just cosmetic – they typically close security holes that could let crooks sneak in without you even realizing. So if you don’t patch, you’re much more likely to encounter a crook, because lots of attacks will succeed against you when they’ll fail against everyone who has patched.

So why leave yourself in the at-risk group if you don’t need to?

Remember, however, it’s not just your laptop that needs patches these days – you also need to keep your eye out for updates for your apps, your phone, your home router, and any of those cool “connected devices” you might have, such as internet doorbells, webcams and home assistants.

3. Turn off your webcam’s internet-enabled remote access if you don’t use it

There’s a three-letter word that says it all when it comes to how dangerous remote access can be: RAT, short for Remote Access Trojan – malware that makes it possible for a crook to turn on your webcam remotely.

Indeed, in a high-profile criminal case back in 2014, Jared James Abrahams, a college student in California who was studying computer science, was sentenced to 18 months in federal prison for spying on women via their webcams. Abrahams pleaded guilty to hacking and extortion charges relating to 150 women, including Miss Teen USA, Cassidy Wolf, who went public about the threats made against her.

(By the way, Wolf also said that she had a risky habit of using the same password everywhere, which may well have been how she got attacked and infected in the first place. So if the previous security tips didn’t already convince you to beef up your passwords and stop reusing them, now’s a good time to change yours and make all of your passwords unique!)



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/W_0x4h4cza0/

Fancy that: Hacking airliner systems doesn’t make them magically fall out of the sky

Airline pilots faced with hacked or spoofed safety systems tend to ignore them – but doing so could cost their airlines big sums of money, an infosec study has found.

An Oxford University research team put 30 Airbus A320-rated pilots in front of a desktop flight simulator before manipulating three safety systems: the Instrument Landing System (ILS), the Ground Proximity Warning System (GPWS) and the Traffic Collision Avoidance System (TCAS).

The team, who presented their paper at the NDSS infosec symposium, found that while their attacks against these systems “created significant control impact and disruption through missed approaches”, all pilots in the study were able to cope and land their simulated aircraft safely.

Pilots in the study were exposed to false warnings from each of the systems to see what their reactions were. Most of them carried out missed approaches at first and tended to ignore or distrust the “hacked” system while going around to carry out a safe landing.

A go-around is expensive, with airlines racking up bills for extra landings, fuel and delay penalties.

Commenting on their findings, the researchers said in their paper: “Pilots are extensively trained to deal with the many faults which can emerge when flying an aircraft, and this was reflected in the results. However, the attacks generated situations which shared some features with faults but largely were different; they lacked indication of failure.”

They added:

Whilst alarms force action they are quickly turned off or ignored if considered spurious.

Lead researcher Matt Smith, explaining the reasoning behind the study, told The Register: “We know these attacks exist but we don’t know what would happen if they occurred,” adding that there is existing research demonstrating attacks against aeroplanes but little analysing their potential effects in this way.

Terrain ahead. Pull up!

Each of the 30 pilots in the study was put in front of a desktop simulation of an Airbus A330, which Smith explained was because there weren’t any good enough representations of the A320 available for the X-Plane simulator used in the experiments. After a familiarisation flight, helped by the fact the A330 is very similar to its short-haul sister aircraft, the experiments began with three simulated flights onto runway 33 at the UK’s Birmingham Airport.

For the GPWS phase, Smith’s team simulated a false aural alarm, where the system plays the message “Terrain, pull up!” over the cockpit loudspeakers. Pilots are trained to react to the warning so they don’t fly into the ground.

On the first approach, two-thirds of the pilots went around; on the second try, just over half of those who hadn’t landed the first time disabled GPWS before trying again, this time successfully. Those who went around largely did so between 20 and 30 seconds after the false alarm.

Traffic, traffic! Climb now!

Next was the TCAS attack. TCAS works by sensing the location of nearby aircraft fitted with TCAS gear and ordering pilots to climb or descend if algorithms calculate that the two aeroplanes will come too close for safety. Critically, TCAS can cause pilots to ignore air traffic control (ATC) instructions: pilots can bust an ATC-imposed altitude restriction (for example, “maintain 3,000ft”) if their TCAS equipment orders them to do so.
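As a much-simplified illustration of the kind of logic TCAS embodies – the real system computes time to closest approach and coordinates advisories between the two aircraft, rather than using the fixed distance thresholds assumed here – a toy Resolution Advisory check might look like this:

```python
from dataclasses import dataclass

@dataclass
class Intruder:
    altitude_ft: float    # intruder's altitude
    distance_nm: float    # horizontal separation from own aircraft

def resolution_advisory(own_altitude_ft, intruder,
                        horiz_threshold_nm=3.0, vert_threshold_ft=600.0):
    """Toy RA logic: if an intruder is inside both thresholds,
    order a manoeuvre away from it; otherwise stay quiet."""
    vertical_sep = own_altitude_ft - intruder.altitude_ft
    too_close = (intruder.distance_nm < horiz_threshold_nm
                 and abs(vertical_sep) < vert_threshold_ft)
    if not too_close:
        return None
    # Manoeuvre away from the intruder: climb if above it, descend if below.
    return "CLIMB" if vertical_sep >= 0 else "DESCEND"

print(resolution_advisory(5000, Intruder(altitude_ft=4800, distance_nm=2.0)))   # CLIMB
print(resolution_advisory(5000, Intruder(altitude_ft=4800, distance_nm=10.0)))  # None
```

This also makes the attack in the study easy to picture: an adversary who can inject a phantom intruder inside the thresholds triggers an RA with no real traffic anywhere nearby.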

On the A320, TCAS has three pilot-selected modes: TA/RA, in which the system gives a visual and audio warning before telling the pilot to “climb now” or “descend now”; TA only, which gives the audio warnings without the RA (Resolution Advisory) part, meaning the system does not order pilots to climb or descend; and standby (off). Due to limitations of the simulator, Smith’s team were not able to simulate the visual warning on the airliner’s cockpit screens.

By triggering a false TCAS RA, the researchers looked to see what the pilots would do, with the experiment including a “descend” RA shortly after takeoff among other activations, which Smith said was not unheard of in some crowded airspace such as the departure routes from Heathrow.

All but one of the pilots obeyed the false TCAS orders at first. On average, pilots “complied with over four RAs before reducing sensitivity”, something Smith’s team said “shows that there is no straightforward response.”

Most of the pilots switched from TA/RA to TA only after false activations, with some turning it off altogether over worries about the “additional workload” and distraction caused by false alarms. Two also diverted their flights back to the origin airport.

Glideslope. Pull up!

For the ILS scenario, Smith’s research team moved the position of the glideslope, the radio beam that guides aeroplanes down to safe landings. An ILS system consists of a glideslope, an angled beam that controls how far along the runway the aircraft touches down, and a localiser, which tells it where the middle of the runway is. All experiments were carried out in simulated good weather so pilots could use other visual references to double-check the ILS.
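The geometry shows why moving the glideslope is so consequential: on a glidepath, the aircraft’s correct height is fixed by its distance from the touchdown point, so shifting that point shifts the entire approach path. A rough sketch, assuming the standard 3-degree glide angle (real ILS guidance is an angular radio beam the aircraft tracks, not a height the pilot computes):

```python
import math

NM_TO_FT = 6076.12  # feet per nautical mile

def glideslope_height_ft(distance_to_touchdown_nm, glide_angle_deg=3.0):
    """Height above the touchdown point on the glidepath at a given distance out."""
    distance_ft = distance_to_touchdown_nm * NM_TO_FT
    return distance_ft * math.tan(math.radians(glide_angle_deg))

# On a 3-degree slope, an aircraft 3 NM from touchdown should be roughly 955 ft up.
print(round(glideslope_height_ft(3.0)))
```

Shift the touchdown point down the runway, as the researchers did, and an aircraft faithfully tracking the beam touches down correspondingly late – exactly the overrun risk most of the pilots in the study recognised and avoided.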

Four of the 30 pilots in the study chose to continue with their landing anyway despite the simulated glideslope having been moved to a point several thousand metres down the runway. A landing too far along the runway would risk the airliner running off the far end into the grass, potentially causing the runway to be closed.

Of the rest, 30 per cent fell back to using the aeroplane’s internal GPS system to carry out an area navigation (RNAV) approach, using onboard systems to calculate a glideslope and localiser path without needing the external radio beams. A fifth of the pilots went for a visual approach, landing by looking out of the window and flying accordingly, while a quarter used the localiser beam but judged the touchdown point visually. Two pilots asked for a Surveillance Radar Approach, where ATC does all the hard work of lining the aeroplane up with the runway by looking at the radar screen and giving the pilot course corrections. This depends solely on the airport’s own radar and radio equipment being available.

The pilots in the study ranged from captains with more than two decades’ flying experience to newly qualified first officers with two or fewer years in their logbooks, giving a reasonably wide cross-section of aviation experience.

Smith mused to El Reg: “If industry engaged with penetration testing on these systems and tried to fully map out what the attacks might be, what they presented to the pilots as, they should at least be able to give a list of situations that might come about as a result of an attack.” He added that this could be used to develop situation-specific checklists, much as pilots already have standardised checklist responses for instrument failures.

The full study is on the NDSS website for free download. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/04/aviation_infosec_study_a320_systems_hack/

UK data watchdog slaps a £500,000 fine on Cathay Pacific for 2018 9.4m customer data leak

The Information Commissioner’s Office has fined Cathay Pacific Airways £500,000 for leaky security that exposed the personal data of 9.4 million passengers – 111,578 of whom were from the UK.

The breach, which occurred between October 2014 and May 2018, exposed passengers’ names, passport and identity details, dates of birth, postal and email addresses, phone numbers, and travel history, as well as 430 credit card numbers, 27 of which were active.

The unauthorised access was first suspected in March 2018, when Cathay’s database suffered a brute force attack, and confirmed in May. A Cathay Pacific spokesman said at the time that the combination of data accessed varied for each affected passenger.

The ICO investigation found that Cathay’s systems were affected by malware that harvested the data. Several errors with Cathay’s security were found along the way, including backup files that were not password protected, unpatched web-facing servers, an out-of-support OS, and inadequate antivirus protection.

“This breach was particularly concerning given the number of basic security inadequacies across Cathay Pacific’s system, which gave easy access to the hackers. The multiple serious deficiencies we found fell well below the standard expected,” said Steve Eckersley, the ICO’s director of investigations.

“People rightly expect when they provide their personal details to a company, that those details will be kept secure to ensure they are protected from any potential harm or fraud. That simply was not the case here,” he added.

In response, Cathay Pacific said in a statement: “We have co-operated closely with the ICO and other relevant authorities in their investigations. Our investigation reveals that there is no evidence of any personal data being misused to date.”

The company added that it had already made improvements to its IT security.

The £500,000 fine is the maximum penalty that can be applied under the Data Protection Act 1998. That legislation has since been replaced by the GDPR, but because of the timing of the Cathay breach, it was investigated under the old rules. The new law, which applies to any incident after 25 May 2018, gives the commissioner the power to fine companies up to €20m or 4 per cent of their global turnover, whichever is higher.

The news comes on the back of the ICO’s record £183m fine on British Airways last month, after a breach in 2018 exposed roughly 500,000 passengers’ details. The fine process – since delayed – was the ICO’s first major penalty under GDPR. It accounts for 1.5 per cent of BA’s 2017 turnover, meaning it was not the maximum penalty. BA said it would challenge the fine. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/03/04/ico_fines_cathay_pacific_500000/