STE WILLIAMS

VMware patches critical vulnerabilities

VMware released patches last week for several critical security vulnerabilities, just days after two of them were unveiled at a popular Canadian cybersecurity conference.

The company’s updates addressed five critical vulnerabilities in all, covering its vSphere ESXi, VMware Workstation Pro/Player, and VMware Fusion Pro/Fusion products.

A team calling itself Fluoroacetate exploited the first two flaws during the Pwn2Own contest at the CanSecWest cybersecurity conference, which took place in Vancouver from 20-22 March, earning them a $70,000 reward.

According to VMware’s security advisory, issued on 28 March, the first two patches addressed an out-of-bounds read/write vulnerability and a time-of-check time-of-use (TOCTOU) vulnerability in the virtual universal host controller interface (UHCI) used by ESXi, Workstation and Fusion. An attacker must have access to a virtual machine with a virtual USB controller present, the advisory said, adding that exploitation could allow a guest VM to execute code on the host system.
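
The TOCTOU half of that pair is worth unpacking: it’s a race between the moment a property is checked and the moment it is used. This generic Python sketch (a file-size check, nothing to do with VMware’s USB code) shows how the checked state can go stale inside the race window:

```python
import os
import tempfile

# Create a file that passes the check.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"ok")
tmp.close()

# Time of check: the file looks harmless (2 bytes).
size_at_check = os.stat(tmp.name).st_size

# Race window: before the file is used, another process
# (simulated in-line here) changes what was checked.
with open(tmp.name, "ab") as f:
    f.write(b"X" * 5000)

# Time of use: the earlier check no longer describes reality.
data = open(tmp.name, "rb").read()
print(size_at_check, len(data))
os.unlink(tmp.name)
```

In a real exploit the attacker races a privileged component rather than appending to a file in the same thread, but the principle is identical: re-validate at the point of use, or make check-and-use a single atomic operation.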

The third and fourth vulnerabilities addressed out-of-bounds write issues in VMware Workstation’s and Fusion’s e1000 and e1000e virtual network adapters. Both of them could allow code execution on the host from a guest, but the latter was more likely to result in a denial of service attack on the guest virtual machine, VMware said.

Finally, VMware said that its Fusion product contained a security vulnerability stemming from an unauthenticated application programming interface (API) that allowed access to an application menu through a web socket.

This could allow an attacker to trick the host user into running malicious JavaScript. The JavaScript can, in turn, manipulate the guest virtual machine via the VMware Tools utility, which allows for enhanced communication between the host and the guest. From there, an attacker could run various commands on the guest machine, the software vendor said, thanking independent researchers CodeColorist (@CodeColorist) and Csaba Fitzl (@theevilbit) for flagging the problem.

CodeColorist originally discovered the basis for this flaw, and Fitzl built on it. In a post detailing the flaw, Fitzl elaborated:

You can fully control all the VMs (also create/delete snapshots, whatever you want) through this websocket interface, including launching apps

CodeColorist explained that you normally see exploits breaking out from the guest virtual machine to the host, but this is a rarer exploit that goes the other way:

https://twitter.com/CodeColorist/status/1112518745887391745

These vulnerabilities have been assigned the following CVE numbers, in order, but at the time of writing the details for all entries had not yet been uploaded:

CVE-2019-5514
CVE-2019-5515
CVE-2019-5518
CVE-2019-5519
CVE-2019-5524

What to do?

VMware advises customers to review the patch/release notes for their product and version. Details about patches for the various products can be found in the security advisory.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KbTwK3bfniM/

Are there viable alternatives to Facebook and Twitter?

The thinking goes that the reason so many of us who hate social networks are still stuck using them is that they’re simply where everyone else is (which is certainly the case with me).

If only everyone would make a mass migration to some other kind of service altogether, then perhaps we could finally regain some control over our data without stepping out of our social lives. But are there actual alternatives available?

Spoiler alert: Indeed there are, so let’s take a look at them and what kind of benefits they might offer over the usual suspects. Do these alternatives protect user privacy and data? And are they user-friendly enough for everyone to use, or just techy pipe dreams?

Decentralized social network – what does that mean?

There is growing interest in social networks that prioritize putting control back in the hands of users. Two of the more popular “alternative” social platforms are Mastodon and Diaspora – platforms that run a constellation of decentralized, or federated, communities.

Instead of going to a central site like Twitter.com or Facebook.com, users join separate “instances” (Mastodon) or “pods” (Diaspora) to make connections to other like-minded members.

This means members can join a smaller local community that sets its own rules and moderates its own membership to make their social village feel like an online home, but they can also still interact with members of other instances or pods if they choose to.

In other words, a Mastodon or Diaspora user has a smaller home base where they’re likely spending most of their time, but they’re not fenced in if they want to wander elsewhere into the bigger world.

When someone runs their own instance of Mastodon or Diaspora, it becomes like their own clubhouse, where they can set their own themes for the group and rules. Though most of these spaces are very open for anyone to join, these decentralized social networks often center around a similar identity, interest, or cause specifically to filter their membership to like-minded folks.

Someone seeking to join a Mastodon or Diaspora instance only needs to set up an account on the instance they want to join, no specialized tech knowledge needed. And both platforms are designed with ease of use in mind, so people joining will likely find the interfaces familiar enough to adopt quickly – the default Mastodon styles are similar to Twitter, for example.

If you remember life online before the days of Friendster, Facebook, and Myspace, this might feel familiar. Everything old is new again: before the mega social networks, social groups would gather and collaborate in semi-private spaces that they owned, like chat rooms or forums. Over time, many of these social spaces petered out because people migrated to bigger networks like Facebook, simply because they were free to use and often easier too. Hosting and running a forum, on the other hand, takes both money and time that few people are interested in spending long-term.

Privacy concerns?

These decentralized networks run on open-source software, which means anyone can contribute to the software to make it better, or download the code and modify it for their own instance. The software being open source doesn’t guarantee that the code itself is any more or less secure than the proprietary software that runs private social networks, but one of the main benefits of an open source platform is that anyone who has the technical knowledge can look “under the hood” and see exactly how Mastodon or Diaspora works.

It also means anyone who wants to be the sole owner of the place where their data resides can run their own instance, so that data stays on their own servers.

Both Mastodon and Diaspora emphasize that they have no interest in user data and do not advertise or sell user data in any way.

It doesn’t mean that user data is completely private though – it’s important to not think of either platform as an extension of encrypted messaging, as that’s not their purpose.

There’s no end-to-end encryption on either platform’s private messaging at the moment, and to be fair there isn’t for Twitter or Facebook either (though Facebook is rumored to be looking into it for Messenger). Both Mastodon and Diaspora are built on the idea that the conversations happening are meant to be public, so the privacy emphasis is on keeping user data in user ownership and out of advertisers’ hands, not keeping conversations out of the public eye.

Will these new platforms catch on?

Some subcultures and communities have already found comfortable homes in these decentralized instances via Mastodon or Diaspora platforms, but the adoption question remains: Will these platforms become sufficiently mainstream to make them the easier, preferred alternative to social networks that already exist?

There’s always a risk that a network turns into a home solely for niche sites, alienating folks who are simply looking for a new home – the experiment that was (and still is) SecondLife springs to mind. Most detractors of decentralized networks say that they’re too niche, and maybe a bit too nerdy, to ever catch on enough to supplant something like Facebook. Really though, only time will tell.

Big questions

There are bigger questions here of course. Is everyone served well by being on a social network? There might be something to be said for being harder to find and a bit exclusive in who you spend your online time with.

The fragmentation that easily happens in decentralized networks can be a blessing, especially for groups that form around beliefs or identities where it can be hard to meet people safely. For instance, those belonging to marginalized groups, or folks with more fringe or misunderstood interests tend to appreciate the in-group feeling they get from a federated social network.

Perhaps with decentralized platforms, we’ll see a rise in people being picky about who they divulge their personal data to, and who they make “friends” with online once again.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/gOVXw7BmdKQ/

TP-Link router zero-day that offers your network up to hackers

Just last week, we talked in the Naked Security podcast about what you can do if you’re stuck with a router with security holes that you can’t easily fix.

One way this can happen is if your ISP won’t let you connect at your end unless you use a router provided by them.

These “forced routers” are typically locked down so you can’t update them yourself, and may even have remote access permanently enabled so that your ISP can wander in at will.

Our recommendation, when you’re faced with someone else’s router in your own home, is simply to treat it as if it were miles away at the other end of your phone line or cable connection, back in the ISP’s data centre or the phone company’s local exchange where you can’t see it.

Buy a second router (or get yourself the free Sophos XG Firewall Home Edition), plug the ISP’s router LAN (internal) port into the WAN (external) port of the device you look after yourself, and pretend the ISP’s equipment doesn’t exist.

Don’t bother with the Wi-Fi and firewall parts of the ISP’s router – just treat it as a straight-up modem that interconnects your home ethernet network with the phone, cable or fibre network used by your ISP.

Unfortunately, this router-inside-a-router approach isn’t always a viable one.

Here’s an example of when that solution breaks down: a Google security developer just released details of a bug in a type of home router that’s specifically designed to integrate with your home automation kit, and is therefore supposed to be your innermost router anyway.

The bug was found in the TP-Link SR20 router, a device that would be pointless to own if you chained it through another router to shield your own network from it.

In other words, your main reason for having an SR20 would be to set it up at the core of your home network, where it could work with your TP-Link “smart home” kit such as light bulbs, plugs and cameras, as well as interact with TP-Link’s special mobile app, Kasa.

Kasa is a pun: casa is the Latin word for what Germanic languages such as English, Dutch and Swedish call a house/huis/hus, and it is still used to mean house in many Romance languages, including Portuguese, Spanish, Italian and Romanian.

Fortunately, this TP-Link vulnerability isn’t remotely exploitable – by default, at least – so SR20 routers aren’t automatically exposed to attack from anyone out there on the internet.

But the bug itself is nevertheless serious, and is a handy reminder of what can go wrong when developers allow themselves to get stuck in (or at least, stuck with) the past, supporting old and insecure code alongside the latest, more secure version.

How did the bug come about?

The SR20 supports a proprietary protocol used by TP-Link for debugging.

Debug interfaces on hardware devices are often problematic, because they usually exist so that developers or support staff can get a detailed look into the guts of a unit, and extract (and sometimes modify) information and settings that would normally be protected from tampering.

Interestingly, one of the earliest internet viruses, the so-called Morris Worm of 1988, was able to spread thanks to a debug feature in the popular mail server sendmail.

This feature wasn’t supposed to be enabled on production servers, but many administrators simply forgot to turn it off.

The sendmail debug setting instructed the server to treat specially constructed emails not as messages but as a list of system commands to run automatically.

You could, quite literally, email a remote server a message that ordered it to infect itself, whereupon it would try to mail itself to the next guy, and so on, all without a password.

Well, for TP-Link SR20 owners, it’s 1988 all over again, although an attacker does need a foothold inside your network already.

Living with the past

Simply put, this bug involves what’s known in the jargon as a downgrade attack.

Apparently, early TP-Link routers would happily carry out debug commands for anyone who could send network packets to the LAN side of the router – you could access the debugging functions without ever supplying or even knowing a password.

That weakness was apparently patched by introducing a new set of “version 2” debug commands that only work if you know and provide the administrator password – a reasonable precaution that limits admin-level activity to admin-level users.

According to Google researcher Matthew Garrett, however, you can still persuade the router to accept some dangerous version 1 commands without needing a password at all.

You just ask the router nicely to fall back to the old days, before passwords were deemed necessary for debugging.
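
The general shape of such a downgrade attack can be sketched in a few lines of Python. To be clear, this is a hypothetical dispatcher, not TP-Link’s actual code; the names, message format and password check are all invented for illustration:

```python
# Hypothetical debug-command dispatcher illustrating a downgrade attack.
# A "version 2" path that checks credentials is worthless if the
# legacy "version 1" path is still reachable.

def check_password(msg):
    return msg.get("password") == "s3cret"   # stand-in admin check

def handle_debug_command(msg):
    if msg["version"] == 2:
        if not check_password(msg):
            return "DENIED"
        return "ran %s (authenticated)" % msg["command"]
    # The legacy v1 path was never removed: no credentials required.
    return "ran %s (UNAUTHENTICATED)" % msg["command"]

# An attacker simply asks for the old protocol version:
print(handle_debug_command({"version": 1, "command": "config_test"}))
```

The fix is equally simple in shape: refuse (or require authentication on) every code path, not just the newest one.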

The buggy command in this case is number 49, known rather benignly as CMD_FTEST_CONFIG.

You’d hope that a command that did any sort of configuration test would, at worst, report whether the test passed or failed – but this command is much more general than that.

How does the bug work?

Garrett decompiled the relevant part of the debug server in TP-Link’s firmware and found that CMD_FTEST_CONFIG works roughly like this:

  • You send the server a file_name and some arbitrary additional_data.
  • The server connects back to your computer using TFTP (trivial file transfer protocol) and downloads the file called file_name.
  • The server assumes that the data it just downloaded is a Lua program that includes a function called config_test().
  • The server loads the supplied Lua program, and calls the function config_test() with your additional_data as an argument.
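
Step 2 relies on TFTP, a protocol with no authentication of its own. As a rough illustration of how little is involved, here’s a Python sketch of the read-request (RRQ) packet defined in RFC 1350 that a client sends to fetch a file; the filename payload.lua is made up for the example (in the real attack, it’s the router that connects back to the attacker’s TFTP server):

```python
import struct

def tftp_read_request(filename, mode="octet"):
    """Build a TFTP RRQ packet (RFC 1350): a 2-byte opcode (1 = read),
    then the NUL-terminated filename and transfer mode."""
    return (struct.pack("!H", 1)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

pkt = tftp_read_request("payload.lua")
print(pkt)  # b'\x00\x01payload.lua\x00octet\x00'
```

There is no password field anywhere in the protocol, which is why chaining an unauthenticated file fetch into a root-privileged script loader is so dangerous.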

Lua is a free, open source, lightweight scripting language from Brazil that is used in many popular security products including Wireshark and Nmap, and widely found in routers on account of its tiny footprint.

The TP-Link debug server runs as root – the Linux equivalent of the Windows SYSTEM and Administrator accounts combined – so its Lua scripting engine is running as root, so your supplied program is running as root, so your config_test() function runs as root, so you pretty much get to do whatever you like to the router.

The bad news here is that even if an attacker doesn’t know much about Lua, the language itself includes a built-in function called os.execute() that does exactly what its name suggests…

…and runs an operating system command of your choice.

Lua also has the handy function io.popen() that runs a system command, collects the output and returns it back to the program so that you can figure out what to do next.

-- Example Lua code to run a system command 
-- and retrieve the answer line by line...

uidcmd = io.popen('id -u')
getuid = uidcmd:lines()
if getuid() == '0' then
  print 'You are root!'
end

-- or simply...

if io.popen('id -u'):lines()() == '0' then print 'You are root!' end

In short, anyone who can send network traffic to the LAN ports on your router can pretty much control your network.

If your Wi-Fi is password protected, an attacker will need to know the Wi-Fi key (the one that’s typically written on the wall of the coffee shop), but that’s all – they won’t need the router password as well.

What happened next?

The bad part of this story is that Garrett says he never received any feedback from TP-Link after contacting the company via what seems to be its official Security Advisory page.

According to TP-Link:

Security engineers and other technical experts can [use this form] to submit feedback about our security features. Your information will be handled by our network security engineers. You will receive a reply in 1-3 working days.

But Garrett says that a reply never arrived; he tried to follow up via a message to TP-Link’s main Twitter account, but still received no response.

Because his original report was made in December 2018, Garrett has now gone public with his findings, following Google’s policy that 90 days ought to be enough time for a vendor to deal with a security issue of this sort.

You might think it’s a bit casual to use Twitter as a medium for a formal notification that says, “Dear Vendor, the official bug disclosure clock is ticking and you have 90 days or else,” but, as Garrett found, TP-Link’s official security feedback page offers no way to stay in touch other than to give the company your own email address and wait for a reply.

What to do?

  • If you’re part of the TP-Link security team, you probably want to acknowledge this issue and announce a date by which you can fix it, or at least provide a workaround. It feels as though simply turning off (or providing an option to turn off) unauthenticated access to the debugging server would be a quick fix.
  • If you’re a programmer, don’t run servers as root. Code that accepts data packets from anywhere on the network shouldn’t be processing those packets as root, just in case something goes wrong. If crooks find a vulnerability in your network code and figure out an exploit, why hand them root-level powers at the same time?
  • If you own an affected router, be aware that anyone you allow onto your Wi-Fi network can probably take it over rather easily using Garrett’s proof-of-concept code. In particular, if you run a coffee shop or other shared space, avoid using an SR20 for your free Wi-Fi access point.
  • Whichever brand of router you have, go into the administration interface and check your Remote access setting. At home, you almost never need or want to let outsiders see the inside of your network, so make sure that remote access is off unless you are certain that you need it.
  • If you are an IT vendor with an official channel for receiving bug reports, take care not to let any of them slip between the cracks. Provide a clear channel for future communications, such as a dedicated email address, where researchers can follow up if necessary.
  • If you’re in the marketing department and you see a technical message in your Twitter feed, find the right person to talk to inside the company and pass the message on. Don’t make it hard for people who are trying to help you.
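
On the “don’t run servers as root” point: a common Unix pattern is to do the privileged setup first (binding a low port, say) and then irreversibly drop root before touching any network data. Here’s a minimal, Unix-only Python sketch; the injectable getuid parameter is there purely so the sketch can be exercised without root:

```python
import os
import pwd

def drop_privileges(username="nobody", getuid=os.getuid):
    """Permanently give up root after privileged setup (e.g. binding
    port 80), continuing as an unprivileged user."""
    if getuid() != 0:
        return False                  # nothing to drop; already unprivileged
    pw = pwd.getpwnam(username)
    os.setgroups([])                  # shed supplementary groups first
    os.setgid(pw.pw_gid)              # group before user: after setuid(),
    os.setuid(pw.pw_uid)              # setgid() would be refused; setuid is one-way
    return True

# A server would call this right after its privileged setup:
print(drop_privileges(getuid=lambda: 1000))  # False: already unprivileged
```

With that in place, a bug like the one in the TP-Link debug server would hand an attacker an unprivileged shell rather than the whole router.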

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HaRWjJsBaeg/

Government spyware hidden in Google Play store apps

We’ve seen malicious government cyberweapons leaked out of the National Security Agency (NSA) and injected via ransomware, but security researchers recently found government spyware squatting in plain sight, pretending to be harmless vanilla apps on Google’s Play store.

This time around, the malware doesn’t come from the NSA. Rather, it allegedly comes from the Italian government, which apparently purchased it from a company that sells surveillance cameras.

According to Motherboard, this is the first time that security researchers have seen malware produced by the surveillance company, known as eSurv.

It was discovered in a joint investigation carried out by Motherboard and researchers from Security Without Borders – a non-profit that often investigates threats against dissidents and human rights defenders.

Security Without Borders published a technical report of their findings on Friday:

We identified previously unknown spyware apps being successfully uploaded on Google Play Store multiple times over the course of over two years. These apps would remain available on the Play Store for months and would eventually be re-uploaded.

They’re calling the malware Exodus, after the name of the command and control servers the apps connected to.

The connection with Italy was apparently made due to snippets of Italian text in the code, such as mundizza, a dialect word from Calabria that means trash or garbage, and RINO GATTUSO, a famous retired footballer from Calabria, the region where eSurv is based.

Exodus’s two-part whammy

Exodus works in two stages: Exodus One and Exodus Two.

The first stage works as a decoy: the malware poses as harmless apps that do things like receive promotions and marketing offers from local Italian cellphone providers or that claim to improve the device’s performance.

But the first stage also loads and executes a payload of secondary programs – Exodus Two – that handle data collection and exfiltration.

There’s a laundry list of data that Exodus Two snorts up and sends back to its command-and-control servers, apparently including: installed apps, browsing history, contact lists from numerous apps, text messages, location data, plus app and Wi-Fi passwords.

The report also says that Exodus Two can activate the camera and the microphone to capture both audio and video, as well as take screenshots of apps as they’re used.

Apparently, Exodus includes a function called CheckValidTarget that supposedly exists to “validate” the target of a new infection, but the researchers suggest that not much “validation” is going on, given that the malware activated immediately on the burner phone they used, and stayed active throughout their tests.

Worse still, the Exodus code isn’t any good at security itself.

The spyware apparently opens up a remote command shell on infected phones, but doesn’t use any sort of encryption or authentication, so that anyone on the same Wi-Fi network as an infected device can wander in and hack it:

Binding a shell on all available interfaces will obviously make it accessible to anyone who [is on the same network as] an infected device. For example, if an infected device is connected to a public Wi-Fi network any other host will be able to obtain a terminal on the device without any form of authentication or verification by simply connecting to the port.

So not only does the spyware snoop on data, it also leaves that data open to tampering.
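
The “binding a shell on all available interfaces” problem comes down to the address a server binds to. A minimal Python sketch of the distinction (the Exodus shell itself, of course, was native code on an Android device):

```python
import socket

# Bound to the loopback interface: reachable only from this device.
local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
local_only.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
local_addr = local_only.getsockname()[0]

# Bound to all interfaces: reachable from any network the device joins.
# This is what the Exodus shell did, with no authentication on top.
wide_open = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wide_open.bind(("0.0.0.0", 0))
wide_addr = wide_open.getsockname()[0]

print(local_addr, wide_addr)
local_only.close()
wide_open.close()
```

Binding to 0.0.0.0 is the difference between a debugging convenience and an open door for everyone on the coffee-shop Wi-Fi.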

What good does it do law enforcement to retrieve possibly adulterated data? One of Motherboard’s sources – a police agent who’s used spyware during investigations – was particularly critical:

This, from the point of view of legal surveillance, is insane. Opening up security holes and leaving them available to anyone is crazy and senseless, even before being illegal.

A brief history of bad app-ery

Unfortunately, this is just the latest in a long string of rotten apples spoiling the Google Play store barrel.

The forcefield keeping Google Play Store pure and pollutant-free has had holes poked in it before.

For example, a few months ago, research found that 18,000 Play Store apps, many with hundreds of millions of installs, appeared to be sidestepping the Advertising ID system by quietly collecting additional identifiers from users’ smartphones in ways that couldn’t be blocked or reset.

In May 2018, SophosLabs found photo editor apps hiding malware on Google Play.

In February 2018, Google announced that just in the previous year alone, it had removed 700,000 bad apps and stopped 100,000 bad-app developers from sharing their nastyware on the Google Play store.

This recent Italian spyware case shows yet again that you don’t have to be much of an evil genius of an app developer to get past Google’s filters. As Motherboard reports, more than 20 malicious apps in the Exodus family went unnoticed by Google over the course of roughly two years.

Google confirmed to Security Without Borders that it’s removed all of the Exodus apps. Google said that most of the apps collected a few dozen installations each, though one of them reached over 350.

All of the downloads happened in Italy.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Lma6IrqOxAI/

Hackers don’t just want to pwn networks, they literally want to OWN your network – and no one knows they’re there

Network intruders are staying longer and going after wider swathes of machines with their attacks.

This is according to the latest quarterly report (PDF) from security company Carbon Black, which analysed various incident reports from about 40 of its enterprise customers. It found that attackers are doing more to cover their tracks in hopes of staying on the victim’s network for longer periods of time.

In the last three months alone, Carbon Black said it logged a five-percentage-point jump (10 points over the last six months) in reports of hackers using measures to hit back at security tools and administrators – from 46 per cent in Q3 2018 to 56 per cent in Q1 2019. This includes deleting logs, disabling antivirus, hijacking legitimate processes, and turning off firewalls.

So, hackers hack, and do hacker things. What’s so noteworthy about that?

For starters, this additional attention being paid to making sure they’re undetected is part of a larger strategy by attackers to stay in the networks they infiltrate for longer. With that extra time, the hackers are looking to get more out of the systems they compromise.

“They’ve moved away from smash-and-grab to home invasion,” Carbon Black chief cybersecurity officer Tom Kellermann told The Register. “Hackers truly want to own that system; they want to own that infrastructure.”

Part of the cause is a skyrocketing rate of attackers targeting intellectual property. As companies (and governments) in China and Russia increasingly look to lift tech and documents from their competitors, IP theft was cited as the motivation for 22 per cent of attacks the security outfit observed, up from 5 per cent the previous quarter.

The second major trend was toward “island hopping” – a favourite term Carbon Black uses to describe attackers working their way from one compromised network to that of another company further up the supply chain.

The report noted that a full 50 per cent of the attacks examined in the quarter were carried out as part of an “island hopping” operation that originated at a supply chain member or other partner company.

While the technique itself is not new, the frequency of such attacks and the reason behind them is unprecedented. Kellermann said hackers will now not simply look to compromise a large business, but also to steal its identity to an extent. A bad guy might, for example, take over a network and then commandeer an email server to perform a “reverse” email compromise and spear-phishing attacks. “Once the adversaries have hopped into the island, they use the brand of the victim,” he explained.

“The true crown jewel is the brand of that organization.”

This, again, gives the bad guy motivation to cover their tracks, hoping to use a single breached system or network as a foothold to pull more valuable intellectual property and get at additional companies.

While the trends themselves are indicative of larger issues (such as politics and foreign policy) that won’t be easy to solve, there are some simple technical steps and behaviors that Kellermann recommends.

First off, the Carbon Black exec noted, admins and security professionals should take a more nuanced approach when looking at an incident. For example, he said, don’t assume the attacker has gone, but rather try as quietly as possible to collect evidence and be wary an intruder might try countermeasures.

Vendors also have a role, in particular Microsoft. Kellermann said Redmond needs to step up its game and lock down its remote administration tools in order to better protect its enterprise customers.

“I can’t believe WMI and Powershell is still being misused in such a dramatic fashion,” he said. “It is time Microsoft got their act together.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/02/network_busting_hackers_getting_harder_to_be_rid_of/

Rapid7 Buys Network Monitoring Firm NetFort

New technology will be integrated into Rapid7’s cloud-based security analytics platform.

Security analytics and automation vendor Rapid7 today announced its acquisition of NetFort, a network monitoring company.

Rapid7, which over the past few years has evolved from a vulnerability scanning and management vendor to a provider of security analytics, orchestration, and automation for prevention and detection of attacks, plans to fold NetFort’s network monitoring and analytics technology into the Rapid7 Insight service.

“We were immediately impressed by NetFort’s technology and the deep network protocol expertise inherent across the team,” said Lee Weiner, chief product officer at Rapid7. “By bringing NetFort’s network data and analytics to our own platform, we enhance security analysts’ capability to unearth risk, detect attacks, and investigate incidents more effectively.”

Financial details of the deal were not disclosed.


Article source: https://www.darkreading.com/cloud/rapid7-buys-network-monitoring-firm-netfort/d/d-id/1334313?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Airports & Operational Technology: 4 Attack Scenarios

As OT systems increasingly fall into the crosshairs of cyberattackers, aviation-industry CISOs have become hyper-focused on securing them.

Finding and fixing vulnerabilities across airport operational technology networks may not be sexy, but the damage and confusion a successful attack can cause is nothing short of sensational. These critical airport systems include baggage control, runway lights, air conditioning, and power, and they’re managed by means of network-connected digital controllers. They are much less organized than conventional IT networks, are rarely monitored as closely, and are often left untouched for years.

It’s an emerging threat that has caught the attention of dozens of airport CISOs we speak with regularly. Their concerns run the gamut from the mundane to straight out of the movies. Here are four risk vectors we hear about often:

Threat 1: Baggage Handling
Baggage-handling systems consist of an intricate latticework of automatic conveyor belts that ensure that both person and luggage arrive together at the same destination. Because they are the most customer-facing OT system found in airports, they’re a common target. For a variety of reasons, checked bags are regularly tagged for extra security checks. A malicious actor can easily hack into the baggage-handling system to either redirect a bag to another flight or prevent it from being subject to a secondary security check in order to smuggle something illicit or dangerous onto the plane. 

These systems are extremely attractive targets because attacks can be executed remotely; the attacker wouldn’t even need to board the plane. All that’s required is for a single person to fall for a simple phishing email, and an attacker can introduce OT-specific malware into the airport network. This malware will find its way to the baggage-handling system to execute the attack.

Threat 2: Aircraft Tugs
Most planes can’t reverse or maneuver safely or efficiently on the ground without aircraft tugs (the airplane equivalent of tugboats). Tugs are usually vehicles that latch onto the wheel bar or axle and are essential for the kind of maneuvering needed to back a plane into the gate to connect the jet bridge and other deplaning equipment. Many modern tugs are wireless, and there’s a huge push to make all next-generation tugs wireless, driverless, and connected to both OT and IT networks.

Attackers could potentially hijack a tug’s weight sensors and back a large jet into a gate at the velocity used for a small plane, causing it to crash through the wall of the airport. Creative attackers could also hack these systems for other purposes beyond physical damage, which is likely why CISOs frequently mention this risk vector. 

Threat 3: De-icing Systems
De-icing is a routine maintenance function that is performed on the ground. Planes need to be de-iced because at typical cruising altitudes, around 35,000 feet, temperatures dip as low as minus 60 degrees Fahrenheit. To prevent ice from forming on the wings, body, and other critical mechanical structures, a special chemical treatment is applied to the outside of the plane.

The liquid chemicals used for de-icing are stored at on-site facilities, which use OT devices to regulate and maintain the composition of the de-icing chemicals. If those systems were attacked and the composition of the solution altered, ice could easily form on the body of a plane. Even a single millimeter of ice can dramatically affect a plane’s aerodynamics and ability to maneuver. Tampering with a plane’s aerodynamics by hacking into de-icing systems is one way to cause a crash without loading explosives aboard, which is likely why, obscure as this risk vector is, de-icing systems are often among the first OT systems airports monitor.

Threat 4: Fuel Pumps
Planes are refueled at airports either by fuel trucks or by hydrants that pump fuel from storage tanks in the ground. These storage tanks, known as “fuel farms,” are connected via a sprawling network of underground pipes that use OT systems to regulate the valves, controls, and equipment used to store, transfer, and dispense the various types of jet fuel used by commercial aircraft.

An attacker could, for example, hack into a fuel farm, causing the wrong type or mixture of fuel to be pumped into a plane, resulting in anything from engine problems to an explosion.

These are not theoretical risks — chances are an airport you frequent is susceptible to one or more of the above attacks. However, especially in light of the recent Boeing 737 plane crashes, it’s important that we don’t lapse into fearmongering. These networks are not exposed because airport cybersecurity teams are asleep at the wheel. In fact, the only reason we even know about them is because they’re making it a priority to address them in what we observe to be a thoughtful, responsible manner. And that’s a good thing.

Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Edy Almer leads Cyberbit’s product strategy. Prior to joining Cyberbit, Almer served as vice president of product for Algosec. During this period the company’s sales grew by over four times in five years. Before Algosec, Almer served as vice president of marketing and … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/airports-and-operational-technology-4-attack-scenarios-/a/d-id/1334282?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sentence Handed Down in $4.2 Million BEC Scheme

Maryland man conspired in a business email compromise scheme that stole from at least 13 separate victims over the course of a year.

The US Department of Justice this week announced that a Maryland man was sentenced to 87 months in federal prison for his role in a large business email compromise (BEC) scheme that netted more than $4.2 million for the conspirators.

Nkeng Amin, also known as “Rapone” or “Arnold,” was sentenced as a result of a plea agreement in which he pled guilty to setting up bank accounts for use as “drop accounts” where victims would place money, and as disbursement accounts for sharing money between conspirators.

The scheme, which ran between February 2016 and July 2017, defrauded at least 13 individuals and businesses. The criminals would pose as companies with which the victims had business relationships, provide instructions for wiring funds, and then empty the accounts.

In addition to Amin, the co-conspirators who also pleaded guilty include:

  • Aldrin Fon Fomukong, also known as “Albanky” or “A.L.,” age 24, of Greenbelt, Maryland;
  • Carlson Cho, also known as “Uncle Tiga2,” age 23, of Braintree, Massachusetts;
  • Izou Ere Digifa, also known as “Lzuo Digifa” or “Mimi VA,” age 22, of Lynchburg, Virginia;
  • Yanick Eyong, age 26, of Bowie, Maryland;
  • Ishmail Ganda, also known as “Banker TD,” age 31, of College Park, Maryland.

While Fomukong and Digifa await sentencing, the others have been sentenced to prison terms ranging from 90 days to 57 months.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/sentence-handed-down-in-$42-million-bec-scheme/d/d-id/1334316?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How do you sing ‘We’re jamming and we hope you like jamming, too’ in Russian? Kremlin’s sat-nav spoofing revealed

Misinformation coming from Russia isn’t merely an internet phenomenon; it also affects navigation systems.

In a report [PDF] issued last week, the Center for Advanced Defense (C4ADS), a data-analysis nonprofit, documents a series of attacks, attributed to Russia, designed to block or interfere with signals from Global Navigation Satellite Systems (GNSS).

GNSS constellations include the US Global Positioning System (GPS), Russia’s GLONASS, the EU’s Galileo, China’s BeiDou, Japan’s QZSS, and India’s NavIC.

The report, titled “Above Us Only Stars,” argues that GNSS interference represents a viable, emerging strategic threat.

“Using publicly available data and commercial technologies, we detect and analyze patterns of GNSS spoofing in the Russian Federation, Crimea, and Syria that demonstrate the Russian Federation is growing a comparative advantage in the targeted use and development of GNSS spoofing capabilities to achieve tactical and strategic objectives at home and abroad,” the report says.

Such concerns took more concrete form last November, when the Norwegian Defense Ministry reported that Russian forces in the Arctic had interfered with GPS signals during a NATO drill in Norway the month before. Both Norway and Finland have protested the meddling. The US Department of Transportation last year issued a warning about GPS disruption in the Eastern Mediterranean.

Russia is far from the only nation exploring GNSS jamming and manipulation. In 2016, South Korea reported that North Korea had interfered frequently with its satellite navigation system. The US military has also conducted training exercises involving GPS jamming.

Not new, but much more disruptive now

The potential impact of GPS spoofing was demonstrated in 2013 by University of Texas researchers who used about $2,000 in hardware to steer an $80 million yacht off course via signal manipulation.


Concern about threats to US satellite navigation systems goes back further still, but lawmakers have been slow to address the issue. In a March 2017 House hearing, “Threats to Space Assets and Implications for Homeland Security,” retired Gen. William Shelton, the former head of Air Force Space Command, warned that civilian and military systems now rely heavily on GPS and that GPS jammers have become available and affordable.

“Widespread and well-conceived jamming during conflict would impact both civilian and military users of GPS,” he said, noting that in the ten years since China shot down a satellite in 2007, there have been many studies but not much in the way of satellite system defenses.

The Russian Federation meanwhile has been busy testing its ability to disrupt satellite nav systems. The C4ADS report identifies at least 12 GNSS disruptions in Russia and Syria since 2016.

The report says Russia uses these capabilities not only to protect VIPs and strategically important infrastructure but also “to promote its ventures at frontiers in Syria and Russia’s European borders.”

The C4ADS report concludes that improved public awareness of the threat to satellite nav systems is needed to formulate proportional responses by the private sector to attacks and also to foster discussion about threat mitigation. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/02/gps_spoofing_russia/

The curious case of a WordPress plugin, a rival site spammed with traffic, a war of words, and legal threats

A British web-dev outfit has denied allegations it deliberately hid code inside its WordPress plugins that, among other things, spammed a rival’s website with junk traffic.

Pipdig, which specializes in designing themes and templates for sites running the popular WordPress publishing system, was accused late last week of including code within its plugins that fired duff requests to the dot-com of a competing maker of themes. It was also accused of slipping in code that allowed it to remotely wipe its users’ databases, modify URLs in links, change site admin passwords, and disable other third-party plugins.

These plugins are installed server-side by webmasters to enhance their WordPress installations, and they include backend and frontend code executed as visitors land on pages. Pipdig has denied any wrongdoing.

The accusations were made by Jem Turner, a web developer who questioned the purpose of several subroutines within the Pipdig Power Pack (P3), a set of plugins bundled with Pipdig’s themes.

“An unnamed client approached me this week complaining that her website, which was running a theme she’d purchased from a WordPress theme provider, was behaving oddly. Amongst other things, it was getting slower for no obvious reason,” Turner claimed on Friday. “As speed is an important ranking factor for search engines (not to mention crucial for retaining visitors), I said I’d do some digging. What I discovered absolutely blew me away; I’ve never seen anything like it.”

Turner claimed she’d found that, among other things, Pipdig’s plugins fired off traffic to a stranger’s website: web servers hosting the P3 PHP code would routinely send HTTP GET requests to a rival’s site – kotrynabassdesign.com – flooding it with connections from all over the world, it was claimed.

The P3 tools also, it was alleged, manipulated links in customers’ pages to direct visitors away from certain websites, collected data from customer sites, could change admin passwords, disabled other plugins, and implemented a remotely activated kill-switch mechanism allowing Pipdig to drop all database tables on a customer’s site. Again, this is according to an analysis of the P3 source code.

At the same time, Wordfence, a security vendor specializing in services for WordPress sites, says it fielded a similar complaint about the P3 code from one of its users, and also found the same subroutines Turner described.

“The user, who wishes to remain anonymous, reached out to us with concerns that the plugin’s developer can grant themselves administrative access to sites using the plugin, or even delete affected sites’ database content remotely,” Wordfence explained. “We have since confirmed that the plugin, Pipdig Power Pack (or P3), contains code which has been obfuscated with misleading variable names, function names, and comments in order to hide these capabilities.”

Don’t look at me, I didn’t do it

The reports prompted a strong denial from Pipdig, which argued the claims were unfounded. In its response on Sunday, the Pipdig team denied its software deliberately lobbed web traffic at other sites. What was happening, according to Pipdig, was that the P3 code would, once an hour, fetch the contents of…

https://pipdigz.co.uk/p3/id39dqm3c0_license_h.txt

…which, strangely, contained…

https://kotrynabassdesign.com/wp-admin/admin-ajax.php

…causing the P3 code to then fetch that page, which is on another server. That’s how the dot-com came to be flooded with requests from systems around the world running Pipdig’s code. The biz said it is trying to figure out how the external site’s URL ended up in its license text file, which has since been cleared of any text to prevent further unnecessary fetching.
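The fetch-then-follow behavior described above can be sketched in a few lines. This is a hypothetical Python reconstruction for illustration only: the real P3 code is PHP, the function name and injectable fetcher are invented here, and only the URLs come from the reporting.

```python
import urllib.request

# URL reported in the article; the plugin polled it on a schedule.
LICENSE_URL = "https://pipdigz.co.uk/p3/id39dqm3c0_license_h.txt"

def hourly_license_check(fetch=urllib.request.urlopen):
    """Fetch the license file; if its body looks like a URL, fetch that too.

    Hypothetical sketch of the behavior described in the article: every
    site running the plugin repeats this check, so whatever URL sits in
    the license file receives a request from each installation.
    """
    body = fetch(LICENSE_URL).read().decode().strip()
    if body.startswith("http"):
        # Blind follow-up request: whoever controls the license file's
        # contents controls where this extra traffic lands.
        fetch(body)
    return body
```

Because every site running the plugin repeats this check on a schedule, the operator of the license file effectively decides where thousands of servers send an extra request, which is why a third-party URL appearing in that file mattered.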

“We’re now looking into why this function is returning this URL,” Pipdig said in its response. “However it seems to suggest that some of the ‘Author URLs’ have been set to ‘kotrynabassdesign.com’. We don’t currently know why this is the case, or whether the site owner has intentionally changed this.

“The response should hit our site’s wp-admin/admin-ajax.php file under normal circumstances. On the surface it could mean that some pipdig themes have been renamed to other authors. We will be looking further into this issue and provide more information as it comes up. We can confirm that it won’t cause any issues for sites using pipdig themes, even if the author name/URL has been changed.”

Meanwhile, the ability to drop database tables on customer sites exists to allow installations to be reset to their defaults, and is not a remote kill switch, Pipdig claimed.

“The function is in place to reset a site back to defaults, however it is only activated after being in touch with the site owner,” the small business explained.

As for changing URLs, Pipdig chalked that up to anti-piracy measures to ensure links to sites hosting counterfeit copies of its themes are changed over to its domain. Additionally, Pipdig said third-party plugins were disabled during the installation process to prevent any conflicts over functionality, and that it does not change admin passwords, and that the only information it collects from users’ installations is the site URL, license key, WordPress version, and plugin or theme version.

According to Wordfence, Pipdig has removed some of the aforementioned code from its software in a newly released version, 4.8.0, which people are urged to update to. “We reached out to the Pipdig team with questions about these issues, and within hours a new version of P3 was released with much of the suspicious code removed,” Wordfence reported.

In an email to The Register on Monday, Pipdig creative director Phil Clothier acknowledged the changes, but maintained his company has done nothing wrong. “Wordfence have agreed that latest version of the plugin is safe, however we also stand by that older versions were safe too,” Clothier said. “We always recommend that people keep all plugins updated to the latest version either way.”

Turner, meanwhile, stood behind her findings and conclusions on the matter. “I am aware that Pipdig have released a statement claiming that I am lying,” Turner wrote in an update post. “Firstly, this statement only serves to attempt to attack my character rather than dispute any of my accusations. Secondly, it addresses only my post, and none of the accusations made by Wordfence or other developers.”

Pipdig said it was seeking legal advice on the matter, though Turner told The Register she has not yet heard anything from the company.

“We will be seeking legal advice for the untrue statements and misinformation which has no doubt damaged our good name,” the Pipdig team added. “Anyone which has worked with us knows how much we care about this community and every single blogger we work with. We’re hugely upset, but we can hopefully re-earn any trust that has been lost due to this.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/02/pippip_attack_claims/