STE WILLIAMS

Facebook ‘regrets’ balloons and confetti triggered by earthquake posts

Does your stomach churn a little when your Facebook post triggers saccharine animations of popping hearts or confetti and balloons?

That’s nothing. The let’s-festoon-everything-with-glee impulse got Facebook into trouble this week: it pulled the animated confetti-and-balloons shtick on posts from people reporting that they had survived a magnitude-6.9 earthquake that killed at least 259 people and left some 150,000 homeless on the Indonesian island of Lombok on Sunday.

The death toll will rise. The BBC reports that as of Thursday, rescue workers were still digging people out of the rubble.

Facebook has apologized for survivors’ “I’m safe” messages triggering the celebratory animations. The misstep comes out of a bungled translation of the word “selamat,” which in Indonesian can mean “to survive” or “congratulations.”

Herman Saksono, an Indonesian computer science PhD student at Northeastern University in Boston, noticed the inappropriate Facebook action over the weekend and tweeted out a screen capture that shows the word highlighted in red as it triggers the inappropriately gleeful animation:

Facebook issued an apology, saying that the “Congratulations” animation feature is “widely available on Facebook globally.” However, it never should have been used in “this unfortunate context.”

The platform has since turned the feature off locally. The animations were being inserted into posts using Facebook’s Safety Check feature, which the platform launched four years ago to help users notify friends and family that they’re safe after terrorist attacks or disasters.

Besides this unfortunate hiccup, it’s a good tool. Safety Check grew out of Japan’s experience with the devastating 2011 magnitude-9 earthquake and tsunami – a disaster that affected more than 12.5 million people and caused the evacuation of 400,000, according to the Japanese Red Cross.

In the aftermath, Facebook engineers in Japan built the Disaster Message Board to make it easier for people to communicate.

Facebook kept working on that message board – work that culminated in Safety Check, available on Facebook’s desktop and mobile platforms.

The tool serves an important function: it can corral what can otherwise be a scattershot way to get updates from a range of social media about your loved ones when something bad happens.

Let’s hope and pray that in spite of the language misstep, the feature helps those in Lombok to stay in touch with their family and friends – connectivity that will be crucial to help them all get through the horrific disaster they’re dealing with.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/lny9eM25rZo/

Say what you will about self-driving cars – the security is looking ‘OK’

Black Hat Car hacking wizards Charlie Miller and Chris Valasek have turned their attention to autonomous vehicles – and reckon the security is surprisingly good.

The duo, who work for General Motors’ robo-automaker offshoot Cruise, told this year’s Black Hat USA conference on Thursday that while self-driving vehicles are much less hackable than you may think, there are still serious issues that need to be shored up. Given this is an emerging and fledgling market, it’s in every manufacturer’s interest to get security right, to avoid one PR nightmare crashing them all.

“This is everyone’s problem,” Valasek told the crowd. “We want a competitive advantage but we also want everything in the sector to be secure. An incident with our competitors will hurt us too.”

First, the good news. Because these vehicles are still being developed, and virtually no one is using them yet, there are lots of opportunities to get things right. That means building encryption and cryptographic code signing into a car’s system, minimizing the attack surface hackers can abuse, and tightly locking down communications with the outside world.

Junk in the trunk

More than any other kind of vehicle on the road, autonomous cars are going to be “data centers on wheels,” Valasek said. The two showed off the trunk, or boot, of Audi’s forthcoming computer-controlled motor, and it’s packed with multiple GPUs and processors, cooling systems, and sensor arrays.

Good L’Audi … Not much room for a suitcase

The vast increase in the amount of data being processed means that the usual internal controller area networks (CANs) can’t cope, so instead manufacturers are installing Ethernet to spread the load. Devices on these fatter networks still have to eventually communicate with the sensor and control CANs, and the gateways between the CANs and the general-purpose network could be points of weakness.

The Ethernet itself is also an issue, since it’s so basic: at Layer two in the OSI stack, encryption is not built in. That has to be improved if these cars are going to be secure, preventing one subsection from screwing around with another, they said.

The other serious weak point is external communications. Autonomous vehicles are going to be updating their code, neural network models, and other datasets daily, chatting to their backend servers frequently for new information to improve their driving. Ideally, the robo-cars should not accept incoming connections, and should verify everything they pick up from HQ.
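That last rule – accept nothing inbound, verify everything pulled from HQ – can be sketched in a few lines. This is a hedged illustration, not any manufacturer’s actual scheme: real fleets would use asymmetric signatures (for example Ed25519) so the vehicle never holds a signing key; HMAC with a shared secret stands in here to keep the sketch standard-library-only, and all names below are invented.

```python
import hashlib
import hmac

# Simplified stand-in for signed over-the-air updates. A real deployment
# would verify an asymmetric signature; a shared-secret HMAC keeps this
# sketch self-contained. UPDATE_KEY is a hypothetical provisioning secret.
UPDATE_KEY = b"demo-shared-secret"

def sign_update(payload, key=UPDATE_KEY):
    """Backend side: tag an update blob before pushing it to the fleet."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def accept_update(payload, tag, key=UPDATE_KEY):
    """Vehicle side: install only payloads whose tag verifies."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)  # constant-time comparison

blob = b"lidar-map-update-v42"
tag = sign_update(blob)
assert accept_update(blob, tag)              # genuine update is accepted
assert not accept_update(blob + b"!", tag)   # tampered blob is rejected
```

In a scheme like this, a daily map or model download that fails verification would simply be discarded before it touched the driving stack.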

Get lost

One area of hacking that is a complete non-starter is hijacking one of these cars via GPS tricks, such as spoofing signals to get the machines lost. Why? Because autonomous cars barely use the satellite-based positioning system. The resolution of GPS isn’t high enough, so instead they rely on maps and LIDAR sensors, which are accurate within an inch or so.

“Self driving cars do have GPS but don’t rely on it as it’s not accurate enough,” Miller explained. “For now, they take detailed maps of where they are going to drive so they know where every tree and curb and stop signs are. It then takes readings from its LIDAR sensors so it can find out where it is.”

You don’t own it

The other big advantage of autonomous vehicles is that, initially at least, only the rich will own one.

Miller and Valasek envisage autonomous autos being introduced mainly as taxi services, such as Waymo is rolling out in Arizona; the cars won’t be sitting in driveways and public lots, but will return to base when not in use. This sharply limits the ability for hackers to get physical access to the computer, and install malware or add electronics.

In terms of design, manufacturers also have the opportunity to remove physical ports to the vehicle’s data systems – since why would you let someone patch into a taxi? – and to cut down on features. For example, in a taxi, there’s no need to include a Bluetooth link to the car’s entertainment system: the passenger will only be on board for a short time.

The robo-rides will also be in constant contact with the biz operating them. This means at the first sign that things are awry, the car can be remotely ordered to pull to the side of the road, shut down, and await pickup.

The two researchers made one final point. Most existing autonomous vehicles are made by strapping sensor pods to a non-autonomous vehicle. One thing the manufacturers will need to check is that the base vehicle doesn’t have a simpler vulnerability that could be leveraged to take over the bolted-on self-driving technology. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/autonomous_car_hacking/

Can we talk about the little backdoors in data center servers, please?

Black Hat Data centers are vital in this cloudy world – yet little-understood management chips potentially give hackers easy access to their servers in ways sysadmins may not have imagined.

The components in question are known as baseboard management controllers (BMCs). They are discrete microcontrollers popped into boxes by the likes of Dell, HPE, and Lenovo to allow data-center managers to control machines without having to brave the chilly confines of a server farm. They can be accessed in various ways, from dedicated wired networks to Ethernet LANs.

BMCs can be used to remotely monitor system temperature, voltage and power consumption, operating system health, and so on, and power cycle the box if it runs into trouble, tweak configurations, and even, depending on the setup, reinstall the OS – all from the comfort of an operations center, as opposed to having to find an errant server in the middle of a data center to physically wrangle. They also provide the foundations for IPMI.

“They are basically a machine inside a machine – even if the server is down, as long as it has power, the BMCs will work,” said Nico Waisman, VP of security shop Immunity, in a talk at this year’s Black Hat USA hacking conference on Thursday.

“They have a full network stack, KVM, serial console, and power management. It’s kind of like the perfect backdoor: you can remotely connect, reboot a device, and manage keyboard and mouse.”

It’s a situation not unlike Intel’s Active Management Technology, a remote management component that sits under the OS or hypervisor, has total control over a system, and has been exploited more than once over the years.

Waisman and his colleague Matias Soler, a senior security researcher at Immunity, examined these BMC systems, and claimed the results weren’t good. They even tried some old-school hacking techniques from the 1990s against the equipment they could get hold of, and found them to be very successful. With HP’s BMC-based remote management technology iLO4, for example, the built-in web server could be tricked into thinking a remote attacker was local, and so didn’t need to authenticate them.

“We decided to take a look at these devices and what we found was even worse than what we could have imagined,” the pair said. “Vulnerabilities that bring back memories from the 1990s, remote code execution that is 100 per cent reliable, and the possibility of moving bidirectionally between the server and the BMC, making not only an amazing lateral movement angle, but the perfect backdoor too.”

The fear is that once an intruder gets into a data center network, insecure BMC firmware could be used to turn a drama into a crisis: vulnerabilities in the technology could be exploited to hijack more systems, install malware that persists across reboots and reinstalls, or simply hide from administrators.

Sadly, the security of the BMCs is lax – and that’s perhaps because manufacturers made the assumption that once a miscreant gets access to a server rack’s baseboard controllers, it’s game over completely anyway. Here’s the stinging conclusion of their study:

From an offensive perspective, even though the various BMC platforms may require significant research investments, the results are worth the endeavour. A culture of empirically proven low-quality vendor software make BMCs a prime target. BMCs can facilitate long term persistence as well as cross-network movement that bypasses most network security design.

It is very hard, if not impossible, for any sufficiently sized company to move away from BMCs. As such, it is time for BMC vendors to revisit 2002, read [Bill Gates’] famous Trustworthy Computing memo, and realize that in 2018 sprintf() based stack overflows really should be a thing of the past in any platform that supports mission critical infrastructure.

The BMCs, by the way, use fruity hardware. Take HP’s Integrated Lights-Out (iLO) system, which is embedded in the ProLiant server range. The older version, iLO2, uses an antiquated NEC CPU core that was popular in optical drives back in the day, while iLO4 has a more modern Arm-compatible core. Dell’s version is the Integrated Dell Remote Access Controller (iDRAC), which uses Linux running on a variant of the SuperH chips once used in some gaming consoles.

Most BMC chips run their own web server, typically based on the popular Appweb code. This can reveal the exact operating system and hardware setup of the chip if pinged correctly. Waisman and Soler also found a list, published by Rapid7, of the default passwords for most BMC systems.
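The fingerprint-then-try-defaults approach the researchers describe can be sketched as follows. This is a toy illustration: the banner string in the example is invented, the helper names are made up, and the credentials shown are commonly cited factory defaults for these platforms, not a guaranteed-current or exhaustive list.

```python
# Sketch of fingerprinting a BMC from its embedded web server banner and
# looking up published factory credentials (Rapid7 maintains such a list).
# Vendor strings and passwords below are well-known examples only.
DEFAULT_CREDS = {
    "idrac": ("root", "calvin"),     # Dell iDRAC factory default
    "imm": ("USERID", "PASSW0RD"),   # IBM/Lenovo IMM (zero, not letter O)
    "supermicro": ("ADMIN", "ADMIN"),# Supermicro IPMI default
}

def fingerprint(server_header):
    """Guess the BMC platform from an HTTP Server header string."""
    banner = server_header.lower()
    for platform in DEFAULT_CREDS:
        if platform in banner:
            return platform
    return None

def default_login(server_header):
    """Return the factory credentials to try, if the platform is known."""
    platform = fingerprint(server_header)
    return DEFAULT_CREDS.get(platform) if platform else None

# Illustrative banner, not a real capture: many BMCs embed Appweb.
assert default_login("Appweb/3.0 (iDRAC)") == ("root", "calvin")
assert default_login("Apache/2.4 (Ubuntu)") is None
```

Which is exactly why changing factory credentials and hiding the BMC web interface from general networks matter so much.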

The duo probed whatever kit they could get hold of – mainly older equipment – and it could be that modern stuff is a lot better in terms of security with firmware that follows secure coding best practices. On the other hand, what Waisman and Soler have found and documented doesn’t inspire a terrible amount of confidence in newer gear.

Their full findings can be found here, and their slides here.

Of course, data center managers aren’t stupid, and BMC services are typically kept behind firewalls, segmented on the network, or only accessible via dedicated serial lines – and certainly shouldn’t be facing the public internet. However, Waisman and Soler said they found plenty exposed to the web.

The bottom line is that IT admins need to assess the routes to their BMC services, make sure none are internet facing, and harden up access. Once an attacker establishes persistence with a BMC, you’ll really wish you’d taken their advice. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/data_center_hacking/

Spec-exec CPU bugs sweep hacking Oscars – and John McAfee’s in there like a bullet

Black Hat The whizz kids who uncovered the Spectre and Meltdown data-leaking flaws in modern processors have scooped two Pwnie Awards – often referred to as the information security industry’s Oscars.

Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, Mike Hamburg, Jann Horn, and Anders Fogh were members of three teams that independently discovered the speculative-execution engine design blunders, and reported them to semiconductor makers and operating system developers.

This week, amid Black Hat USA 2018, they won a gong for the best privilege escalation bug, and also the award for the most innovative research, although when popping up to the stage to pick up their glammed up My Little Pony-style trophies, they said they honestly didn’t think that they had done the best research of the year.

The full list of 2018 Pwnie Awards winners is here.

Double winners are rare at the Pwnies, but at least the gang was there to pick up their prize. You couldn’t say that for the winner of the Lamest Vendor Response gong, which this year went to the security industry’s batshit old uncle John McAfee and his paymasters at bungling cryptocurrency wallet maker Bitfi.

One of the judges, Luta Security’s Katie Moussouris, pointed out that Bitfi was not initially nominated for the award. However, in the final days before the awards night, the Pwnie website was hit by thousands of people nominating Bitfi and McAfee for their failed PR campaign: they had claimed their wallet was unhackable. It was very hackable.

“The internet has demanded it,” she said, adding that McAfee had “killed the competition.” This brought a laugh from the crowd: McAfee was named by police as a suspect in the murder of his Belizean neighbor in 2012. McAfee denies any wrongdoing.

McAfee wasn’t there to pick up his award, so it got taken anyway by Ryan Castellucci, principal security researcher at White Ops. Castellucci was one of the researchers who detailed the lousy security of Bitfi’s hardware.

“McAfee has written at least two hit pieces about me recently, so I’m taking his Pwnie,” Castellucci said on stage. He’s planning a further demo of the parlous state of Bitfi’s security at the DEF CON hacking conference later this week.

The Lifetime Achievement award, the most prestigious Pwnie, this year went to Michał Zalewski, who goes by the handle lcamtuf. All the judges acknowledged the massive impact this Polish-born hacker has had on the industry: in his work at Google, as a published author, as the developer of the state-of-the-art fuzzer American Fuzzy Lop and other tools, and as a mentor for young talent. He received a special gothic Pwnie for his work. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/pwnie_awards/

Encryption doesn’t stop him or her or you… from working out what Thing 1 is up to

You don’t need to sniff clear-text Internet of Things traffic to comprehensively compromise a gadget-fan’s home privacy: mere traffic profiles will do the job nicely, a group of researchers has found.

Encrypted streams can be surprisingly revealing, after all: just ask Cisco, which learned how to identify malware crossing the network boundary, without having to decrypt the data.

In this paper at pre-press site arXiv, nine researchers from Florida International University, Italy’s University of Padua, and the Technical University of Darmstadt in Germany gathered data from household Internet of Things gadgets.

What they found is that even with encrypted payloads, light bulbs, power switches, door locks, speakers and the like reveal their activity in how, rather than what, they communicate: the duration of a traffic spike, the length of packets in a communication, packets’ inter-arrival time, deviations in packet lengths, and whether the user is contacting the device locally or over the internet.

Once the researchers had sampled traffic from a selection of 22 devices (shown in the table below), the combination of unencrypted headers (MAC address and other information that helps identify manufacturers) with traffic patterns revealed not just what a device is doing (the light went on or off), but enough to infer what the user is doing (walking between rooms, sleeping, cooking and so on).

The more the merrier: devices tested in “Peek-a-Boo: I see your smart home activities, even encrypted!” (from arXiv)

“Our results show that an adversary passively sniffing the network traffic can achieve very high accuracy (above 90 per cent) in identifying the state and actions of targeted smart home devices and their users”, the paper said.

The four goals in the research were to: identify devices in a smart home setup; infer the user’s daily routine from device traffic; infer the state of a specific device; and infer what the user is doing, based on the activity of multiple devices.

As a very simple example, the researchers wrote, a user walking through a smart home will activate lamps and motion sensors in a sequence that tells the researcher where they went, even without decrypting payloads.

Walking through a home activates devices in sequence. Image: “Peek-a-Boo: I see your smart home activities, even encrypted!” at arXiv

That kind of information let the researchers craft what they called a “multi-stage privacy attack”: first, gather traffic and classify it by protocol (for later analysis to find out which devices are using that protocol); second, infer device state, for example by identifying on or off transitions; third, use those labelled states to train a classifier; and finally, feed the results into a Hidden Markov Model that turns the states of different devices into an inference about what the user is doing (sleeping, cooking, watching TV and so on).
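The middle stages of that pipeline can be sketched with a toy classifier. All numbers, thresholds, and names here are invented for illustration; the real attack trains its models on captured traces rather than a hard-coded cutoff.

```python
from statistics import mean

# Toy version of stages two and three: infer a device's state purely from
# traffic metadata (mean packet length per window), then extract state
# transitions - the raw material the Hidden Markov Model consumes.
IDLE_THRESHOLD = 100  # hypothetical mean packet length separating idle/active

def infer_states(windows):
    """Map each window of packet lengths to 'idle' or 'active'."""
    return ["active" if mean(w) > IDLE_THRESHOLD else "idle" for w in windows]

def transitions(states):
    """Extract state-change events (e.g. a smart plug switching on)."""
    return [(a, b) for a, b in zip(states, states[1:]) if a != b]

# A smart plug waking up: small keepalive packets, then a burst of big ones.
trace = [[60, 62, 58], [64, 60, 61], [400, 380, 420], [410, 395, 405]]
states = infer_states(trace)
assert states == ["idle", "idle", "active", "active"]
assert transitions(states) == [("idle", "active")]
```

Note that nothing in the sketch ever looks at payload bytes: packet lengths alone are enough to recover the on/off event.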

If you don’t want this kind of sniffing happening on your network, the researchers noted, you could use Tor or a VPN to encrypt everything (bar what an attacker can eavesdrop over the air if they can get close enough).

If IoT vendors want to protect their users (hint: they rarely do), it would be easy to pollute traffic streams with random false data so that device state is harder to infer. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/internet_of_things_encryption_snooping/

It’s 2018 and I can still hack into sat-comms gear, sighs infosec dude

Black Hat Four years ago, IOActive security researcher Ruben Santamarta came to Black Hat USA to warn about insecurities in aircraft satellite-communication (SATCOM) systems. Now he’s back with more doom and gloom.

During a presentation at this year’s hacking conference in Las Vegas this week, he claimed he had found a host of flaws in aircraft, shipping, and military satellite comms and antenna-control boxes that can be exploited to snoop on transmissions, disrupt transportation, infiltrate computers on military bases, and more – including possibly directing radio-transmission electronics to bathe fleshy humans in unhealthy amounts of electromagnetic radiation.

“It’s pretty much the same principle as a microwave oven,” he told The Register. “The flaws allow us to ramp up the frequency.”

The vulnerabilities stem from a variety of blunders made by SATCOM hardware manufacturers. Some build backdoors into their products for remote maintenance, which can be found and exploited, while other equipment has been found to be misconfigured or using hardcoded credentials, opening them up to access by miscreants. These holes can be abused by a canny hacker to take control of an installation’s antenna, and monitor the information the data streams contain.

“Some of the largest airlines in the US and Europe had their entire fleets accessible from the internet, exposing hundreds of in-flight aircraft,” according to Santamarta. “Sensitive NATO military bases in conflict zones were discovered through vulnerable SATCOM infrastructure. Vessels around the world are at risk as attackers can use their own SATCOM antennas to expose the crew to radio-frequency radiation.”

Essentially, think of these vulnerable machines as internet-facing or network-connected computers, complete with exploitable remote-code-execution vulnerabilities. Once you’ve been able to get control of them – and there are hundreds exposed to the internet, apparently – you can disrupt or snoop on or meddle with their communications, possibly point antennas at people, and attack other devices on the same network.

This is all particularly worrying for military antennas. Very often these are linked to GPS units, and an intruder could use this data to divine the location of military units, as well as siphon off classified information from the field. Similar SATCOM systems are often used by journalists in trouble spots; unwelcome press interest could be targeted, perhaps terminally.

In satellite-communications units for the shipping industry, Santamarta said he found flaws that could be used to identify where a particular vessel was, and also damage installations by overdriving the hardware. Malicious firmware could be installed to interfere with positioning equipment, and lead ships astray, it was claimed.

Santamarta also postulated crews and passengers on container and cruise ships could be harmed by directing antennas at them. There are safeguards to make sure antennas can’t point at people, but those could be overridden, he claimed.

To make matters worse, some of these software flaws remain unpatched while manufacturers continue to develop updates; others, privately disclosed to vendors, have already been fixed.

He also claimed it is possible to take over an aircraft’s satellite-communications system, depending on the model, and then potentially not only commandeer the in-flight Wi-Fi access point but also menace devices of individual passengers. The in-flight wireless systems could also be hacked while onboard the airplane, we’re told, if you’d rather not go the SATCOM route.

It would not be possible for him to hijack the aircraft’s core control systems, though, as these are kept strictly separate and locked down. The aircraft SATCOM holes have since been fixed, he told the conference. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/satellite_hacking/

Kaspersky VPN blabbed domain names of visited websites – and gave me a $0 reward, says chap

Kaspersky’s Android VPN app whispered the names of websites its 1,000,000-plus users visited along with their public IP addresses to the world’s DNS servers, it is claimed.

The antivirus giant duly fixed up the blunder when a researcher reported it via the biz’s bug bounty program – for which he received zero dollars and zero cents as a reward, we’re told.

Version 1.4.0.216 and earlier of the “Kaspersky VPN – Secure Connection” tool did not route DNS lookups through the secure tunnel. That means if you were using the VPN software to hide your public IP address, and thus clues to your identity, from the public internet, well, it let you down, it is claimed.

Whenever you would visit a website, say, supercyberotters.org, from your device with Kaspersky’s VPN enabled, and your browser needed to look up the IP address of the domain name, it would reach out to a DNS server, such as one provided by your Wi-Fi, your cellular network, or yourself. That lookup would not be run through the VPN tunnel, it was reported, so the DNS service would be able to log the domain names of the sites you were visiting, and when, against your public IP address.

Really, the DNS lookups should have been routed through the secure VPN tunnel, along with your connections to the websites – and the app was fixed in 1.4.0.453 and later, released in late May, according to messages sent by Kaspersky Lab staff that we’ve seen. Thus, make sure you’re running the latest build.
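The app’s internals aren’t public, so the following is only a toy model of the observable behaviour: with the buggy versions, DNS queries escaped the tunnel to the local resolver, while everything should have been tunnelled, lookups included. The class and function names are invented.

```python
from dataclasses import dataclass

# Minimal model of a VPN client's per-packet routing decision, to show
# why excluding DNS from the tunnel policy leaks browsing metadata.

@dataclass
class Packet:
    proto: str   # "udp" or "tcp"
    dport: int   # destination port

def route_buggy(pkt):
    """Observable pre-fix policy: DNS escapes the tunnel."""
    if pkt.proto == "udp" and pkt.dport == 53:
        return "local"   # resolver sees your real IP and the domain name
    return "tunnel"

def route_fixed(pkt):
    """Correct policy: every packet, lookups included, rides the VPN."""
    return "tunnel"

assert route_buggy(Packet("udp", 53)) == "local"    # the leak
assert route_buggy(Packet("tcp", 443)) == "tunnel"  # web traffic tunnelled
assert route_fixed(Packet("udp", 53)) == "tunnel"   # the fix
```

The lesson generalizes: a VPN that tunnels your connections but not your name lookups still hands an observer a timestamped list of every domain you visit.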

Dhiraj Mishra, who reported the flaw in April, via Kaspersky’s HackerOne-hosted bounty program, claims he still has not received a dime.

Judging from conversations between Mishra and Kaspersky Lab staffers, seen by The Register this week, the Russian software giant decided to give him extra HackerOne reputation points, and declined to hand over cold coin.

“I believe this vulnerability leaks traces of a user who wants to remain anonymous on the internet,” Mishra told El Reg in an email on Thursday. “I reported this vulnerability on April 21, four months ago, via HackerOne, and a fix was pushed out but no bounty was awarded.”

And we can see why: according to the rules of the bug bounty, you can land up to $10,000 for finding a flaw that leaks sensitive data. Unfortunately for Mishra, that data is defined as user passwords, payment information, and authentication tokens, not IP addresses and domain-name lookups, even though a VPN toolkit like Kaspersky Lab’s is supposed to shield exactly those.

A spokesperson for Kaspersky Lab said she was unable to comment when we contacted the organization.

The Russian security house wouldn’t be the first biz to be accused of short-changing security researchers regarding vulnerability disclosures. In June, a pair of experts put multi-factor-token maker Yubico on blast after the vendor used their WebUSB security bug report to claim a bounty for itself from Google.

Yubico later apologized, and gave the researchers credit for the discovery. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/09/kaspersky_vpn_dns_leak_bug_bounty/

You can’t always trust those mobile payment gadgets as far as you can throw them – bugs found by infosec duo

Black Hat Those gadgets and apps used by small shops and traders to turn their smartphones and tablets into handheld sales terminals? Quite possibly insecure, you’ll no doubt be shocked to discover.

These mobile terminals are often seen in cafes, gyms, and other modest-sized businesses to take non-cash payments. The merchant taps out a figure, you swipe your card through some device physically or wirelessly attached to the phone or tablet, and the transaction is handled electronically over the internet. Unfortunately, these relatively cheap devices are not always particularly secure, according to a nine-month study by Positive Technologies.

The probe was carried out by Leigh-Anne Galloway and Tim Yunusov, who started off looking at just two card readers. This quickly grew into a project that studied seven card reader models from four vendors – Square, SumUp, iZettle, and PayPal – and compared their operation in two different regions: the US and Europe. Not all of them are or were vulnerable to attack, and any flaws discovered varied in severity.

Data flow … how the various components of a mobile payment terminal system fit together. Card goes into a reader, which talks to a phone or tablet via Bluetooth, which talks to payment processors over the internet via an application

The duo told El Reg they found that after swiping a card through five of the card readers – gizmos from PayPal, SumUp, iZettle and two from Square – it was possible to trick the customer into spending more money than they expected.

A dodgy merchant or nearby miscreant could eavesdrop on the encrypted Bluetooth connection between a card reader and its mobile terminal, and tamper with the values so that the final bill is higher than the amount previously shown on the reader.

Below is a card reader that informed the customer the item they bought will cost a quid, when in reality, because the over-the-air connection between the gizmo and the phone was twiddled, the smartphone app thought a higher amount was authorized.

Stung … The card reader says £1.00, but the payment app will bill the customer £1.23
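The tampering works because the app trusts whatever amount arrives over Bluetooth. Here is a hedged sketch of one countermeasure: have the reader authenticate the amount it actually displayed, so a value modified in transit fails verification in the app. The key handling and message format below are invented, not how any of these vendors actually work.

```python
import hashlib
import hmac
import struct

# Toy mitigation: the reader tags the displayed amount with an HMAC, so
# a man-in-the-middle on the Bluetooth link can't silently raise the bill.
READER_KEY = b"per-device-secret"  # hypothetical key provisioned at pairing

def reader_confirm(amount_pence, key=READER_KEY):
    """Reader side: authenticate the amount shown on its display."""
    msg = struct.pack(">I", amount_pence)
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def app_verify(msg, tag, key=READER_KEY):
    """App side: return the amount only if the tag checks out, else None."""
    if not hmac.compare_digest(hmac.new(key, msg, hashlib.sha256).digest(), tag):
        return None  # amount was altered in transit
    return struct.unpack(">I", msg)[0]

msg, tag = reader_confirm(100)              # reader shows £1.00
assert app_verify(msg, tag) == 100          # honest path: app bills £1.00
forged = struct.pack(">I", 123)             # MitM bumps it to £1.23
assert app_verify(forged, tag) is None      # tampered amount is rejected
```

The point is simply that the amount the cardholder saw and the amount the app charges must be cryptographically bound together, rather than sent as a bare value.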

The Positive Technologies pair also identified two terminals that can be sent arbitrary commands to change what’s displayed on their screens. This means malicious software could tell a customer, via the card reader’s display, to use a less secure method of payment, such as the magnetic stripe rather than chip’n’PIN, or display “payment declined” to trick a cardholder into carrying out additional transactions, racking up a remarkable bill.

Lastly, two terminals – Miura-built devices for Square and PayPal – were identified as vulnerable to arbitrary code execution, allowing a miscreant to explore the device’s file system, read from the PIN keypad in plaintext mode to snoop on codes, and intercept confidential so-called track 2 data from swiped payment cards before it is encrypted.

Here’s one device, a Miura M010 reader, displaying a cute cat cartoon after it was infiltrated via a remote-code execution exploit, we’re told:

Meow! A Nyan cat on a Miura M010 reader after a remote-code execution hack

Although the code execution flaws were severe, the ability to change the amounts charged during transactions was the biggest practical danger, PT’s Galloway told El Reg. Anti-fraud mechanisms varied widely between vendors, and Galloway said this patchy security is due to the lack of maturity in mobile payment technology.

“If a product costs less than $100 it’s not going to have some level of [security] development,” Galloway said. “Some vendors are following PCI to the letter and only implementing minimum requirements.”

Square, by contrast, is more mature. For example, it has run a bug bounty program since 2014 that has helped it develop more sophisticated anti-fraud mechanisms. Square’s tech, we’re told, will detect if a mobile phone used in conjunction with its terminals has been compromised.

Any bugs found were reported in April to the reader and app makers, who are in the process of patching, or have finished patching, the security blunders. Miura said it addressed the above code-execution flaw in a 2016 update. Square is phasing out its use of the M010 hardware, and PayPal has put in place mitigations.

Galloway and Yunusov presented their research, For the Love of Money: Finding and Exploiting Vulnerabilities in Mobile Point of Sales Systems, in more detail at this year’s Black Hat USA conference in Las Vegas on Thursday. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/10/mobile_pos_insecurity/

Cloud Intelligence Throwdown: Amazon vs. Google vs. Microsoft

A closer look at native threat intelligence capabilities built into major cloud platforms and discussion of their strengths and shortcomings.

BLACK HAT USA 2018 – Las Vegas – Amazon Web Services, Google Cloud Platform, and Microsoft Azure have all recently doubled down on threat intelligence to help users identify and respond to malicious activity in the public cloud. But where do these platforms differ, and how do those differences help or harm cloud security?

Brad Geesaman, an independent cloud infrastructure security consultant, aimed to clarify the strengths and shortcomings of each platform during his Black Hat session “Detecting Malicious Cloud Account Behavior: A Look at the New Native Platform Capabilities.” He set the stage for his side-by-side comparison with a broader look at how security is different in the cloud.

For starters, competition is ramping up in the space. As it does, companies are prioritizing shipping features and outsourcing non-core capabilities – including security. The cloud explosion has demolished the traditional perimeter, new kinds of infrastructure have shifted the attack surface, and the shortage of cloud security experts is felt all the more acutely amid a wave of new features and services.

Cloud environments change fundamental assumptions about security, Geesaman explained. “When everything is an API, the traditional approaches don’t fit,” he said. The scalability of the cloud grants an opportunity to amplify good behavior. It also amplifies human error. 

Direct compromise may not be needed to affect cloud security, he continued. Credential theft can happen via phishing, malware, backdoor libraries or tools, or password guessing. Malicious outsiders abuse employees’ failure to rotate, disable, or delete credentials after someone leaves the company. Credential leaks, another common vector, happen more often than one might think. 

“You’d be surprised – or maybe not – where these keys can show up,” Geesaman added. “People give them away by accident all the time.”

When shopping among major cloud services, it’s important to bear in mind that none of them have been around very long. They’re still growing, changing, and gaining new features, and they all still have work to do. “Don’t expect something that’s been in service for 10 years,” he said.

Geesaman applied the same set of questions when evaluating the intelligence tools in each cloud platform: which data sources they use, how they operate on that data, how much visibility the data provides, what the service does not cover, and how each handles onboarding, cost structure, partner integration, customization, and detection validation.

And with that, he dove into the research. First up …

Microsoft Azure
The Azure Security Center was first released in fall 2015, became generally available in spring/summer 2016, and added threat detection in summer 2017. The idea is to provide security management and threat detection and to apply security policies across hybrid cloud workloads. Microsoft charges $15 per system per month for the tool.

Its dashboard is one of the key features, Geesaman pointed out. If you’re comfortable managing Windows on-prem, much of your knowledge will carry over. 

He also highlighted its security recommendation engine, which prioritizes issues to tackle, as well as custom alert rules, file integrity monitoring, a REST API, and third-party tool integration – which he said is helpful for managing an organization’s endpoint tools of choice. The value-add comes from its hybrid-first approach, Microsoft-supported Windows/Linux agent, and Azure Log Analytics Service, in which all agent logs are searchable.

Amazon Web Services
Amazon GuardDuty builds on data sources that rolled out over several years: CloudTrail launched in spring 2013, VPC Flow Logs in summer 2015, and GuardDuty itself in winter 2017. GuardDuty offers threat detection so users can continuously monitor AWS accounts and workloads. It’s offered as a 30-day free trial and, in North America, is priced at $0.25 to $1 per GB of VPC/DNS logs and $4 per 1 million CloudTrail events.
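Using the North America list prices quoted above, a quick back-of-envelope cost estimate looks like this (the monthly event and log volumes are hypothetical examples, not figures from the talk):

```python
# GuardDuty list prices quoted in the article (North America)
CLOUDTRAIL_PER_MILLION = 4.00  # USD per 1 million CloudTrail events
VPC_DNS_PER_GB_MAX = 1.00      # USD per GB, top of the $0.25-$1 range

def guardduty_monthly_cost(cloudtrail_events: int, log_gb: float,
                           per_gb: float = VPC_DNS_PER_GB_MAX) -> float:
    """Rough monthly GuardDuty bill for a given event and log volume."""
    events_cost = cloudtrail_events / 1_000_000 * CLOUDTRAIL_PER_MILLION
    logs_cost = log_gb * per_gb
    return round(events_cost + logs_cost, 2)

# e.g. 5M CloudTrail events plus 200 GB of VPC/DNS logs at the top rate
print(guardduty_monthly_cost(5_000_000, 200))  # 220.0
```

Actual bills vary with the tiered per-GB rate, so treat this as an upper-bound sketch.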

What’s key: GuardDuty monitors data streams from CloudTrail events, VPC Flow Logs, and DNS logs. It integrates threat intel feeds of known malicious IP addresses and domains, and users can supply their own lists of “good” and “bad” hosts, he added. Further, GuardDuty can be configured to route findings from multiple AWS accounts into one centralized account, so security staff don’t have to be embedded in dev or operations teams to receive those events.

The platform detects backdoors, malicious behavior, cryptocurrency mining, persistence, Trojans, recon, and attacks conducted with pen-testing tools, among other threats. Its value-add comes from a “zero-impact” setup, clear detection listing, broad partner ecosystem, and seeing multiple types of API abuse.

“One of the things I liked about GuardDuty is they do a lot of detections, and they tell you what those detections are,” Geesaman said. It’s “very transparent” about what it’s looking for and does the best and clearest job of reporting the misuse of API keys, he added. 

Google Cloud Platform
The Google Cloud Platform (GCP) is still in its early stages, he continued. It detects botnets, cryptocurrency mining, anomalous reboots, and suspicious network traffic, and feeds those findings into a user interface that he anticipates will undergo changes as it’s still early in development.

GCP’s value-add comes from a zero-impact setup that doesn’t affect any running workflows, as well as an API and interface that feature partner solutions and integrate their output into a single interface. It’s also framework-oriented and designed to handle security events across multiple services.

Cloudy Forecast
There is room for improvement across all the major platforms, Geesaman pointed out. On the detection side, visibility is dependent on implementation. “If you’re defending your organization and you don’t know what you’re detecting, how do you know what gaps you have?” he noted.

Detection capability listings could be better, he added, as well as customization and tuning of the data. From an integration perspective, he said he foresees a lot of movement and improvement in how security events are collected, analyzed, processed, and forwarded. 

“Cloud providers are known for moving very quickly with their services,” Geesaman concluded, adding that change is in the future. He advised attendees to check providers’ next major events for updates.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/threat-intelligence/cloud-intelligence-throwdown-amazon-vs-google-vs-microsoft/d/d-id/1332527?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IoT Malware Discovered Trying to Attack Satellite Systems of Airplanes, Ships

Researcher Ruben Santamarta shared the details of his successful hack of an in-flight airplane Wi-Fi network – and other findings – at Black Hat USA today.

BLACK HAT USA – Las Vegas – Ruben Santamarta was flying from Madrid to Copenhagen in November 2017 on a Norwegian Airlines flight when he decided to inspect the plane’s Wi-Fi network security. So he launched Wireshark from his laptop and began monitoring the network.

Santamarta noted “some weird things” happening. First off, his laptop was assigned a public, routable IP address rather than an internal one, and then, more disconcerting, he suddenly noticed random network scans hitting his computer. It turned out the plane’s satellite modem data unit, or MDU, was exposed and rigged with the Swordfish backdoor, and a router from a Gafgyt IoT botnet was reaching out to the satcom modem on the in-flight airplane, scanning for new bot recruits.
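The first oddity is easy to reproduce conceptually: a client on cabin Wi-Fi would normally get a private RFC 1918 address, so a globally routable one is a red flag. Python’s standard ipaddress module can make that check (the addresses below are illustrative examples, not ones from Santamarta’s flight):

```python
import ipaddress

def looks_publicly_routable(addr: str) -> bool:
    """True if addr is globally routable, i.e. not in a
    private, loopback, link-local, or otherwise reserved range."""
    return ipaddress.ip_address(addr).is_global

# A cabin Wi-Fi DHCP server would normally hand out RFC 1918 space:
print(looks_publicly_routable("192.168.1.42"))  # False - private, expected
# A globally routable address on a client device is the red flag:
print(looks_publicly_routable("8.8.8.8"))       # True
```

A publicly routable client address means the device can be reached, and scanned, from the wider internet, which is exactly what the Gafgyt bot was doing.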

The Internet of Things (IoT) botnet code didn’t appear to have infected any of the satcom terminals on that plane or others, according to Santamarta, but it demonstrated how exposed the equipment was to potential malware infections. “This botnet was not prepared to infect VxWorks. So, fortunately, it was no risk for the aircraft,” he said.

That was one of the long-awaited details Santamarta, principal security consultant at IOActive, shared today of his research on how he was able to exploit vulnerabilities in popular satellite communications systems that he had first reported in 2014. The flaws – which include backdoors, insecure protocols, and network misconfigurations – in the equipment affect hundreds of commercial airplanes flown by Southwest, Norwegian, and Icelandair airlines. Satcom equipment used in the maritime industry and the military also are affected by the vulns.

Santamarta emphasized that while the vulnerabilities could allow hackers to remotely wrest control of an aircraft’s in-flight Wi-Fi, there are no safety threats to airplanes with such attacks. The attack can’t reach a plane’s safety systems due to the way the networks are isolated and configured. But an attacker could access not only the in-flight Wi-Fi network, but also the personal devices of passengers and crew members.

He also found the flaws in satellite earth stations and antennas on ships and in earth stations used by the US military in conflict zones.

“It can disrupt, intercept, and modify” satcom operations from the ground, he said.

Meantime, in his research he also found a Mirai botnet-infected antenna control unit on a maritime vessel. “There’s malware already infecting vessels,” he said.

Santamarta also exposed some serious physical safety risks of radio frequency (RF) heating that could cause burns or other physical damage or injury, and found the US military had satcom equipment exposed on the Internet. “You could get their GPS position” in some conflict zones, he said, declining to divulge any vuln details until all of the sites are remediated.

Santamarta’s research was a massive coordinated disclosure process involving the aviation industry, satellite equipment vendors, and other parties.

Jeffrey Troy, executive director of the Aviation-ISAC, said at a press event yesterday previewing Santamarta’s presentation that Santamarta shared his research with aviation experts who specialize in satellite communications for aircraft. “Then he learned more from the industry about his research,” Troy said.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/vulnerabilities---threats/iot-malware-discovered-trying-to-attack-satellite-systems-of-airplanes-ships/d/d-id/1332529?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple