
Miller & Valasek: Security Stakes Higher for Autonomous Vehicles

Car hacking specialists shift gears and work on car defense in their latest gigs – at GM subsidiary Cruise Automation.

No ’80s-era Adidas tracksuits. No video of a hacked Jeep Cherokee slowing to a crawl from 70 mph on a highway after losing its acceleration power. No steering-wheel hijacked Jeep stuck in a muddy ditch. This time, famed car hackers Charlie Miller and Chris Valasek came purely as defenders – rather than hackers – of automobile security.

Valasek and Miller, now both principal security architects for autonomous-vehicle manufacturer Cruise Automation, at Black Hat USA last week mapped out the key issues surrounding securing this new generation of driverless cars, based on their past three years working in the self-driving vehicle industry collectively for Uber, Didi Chuxing, and now Cruise, of which General Motors is a majority owner.

“We have a unique perspective … we’ve done a bunch of car hacking,” Miller said in an interview in Las Vegas prior to his and Valasek’s presentation. “In the last three years, it’s been all about protecting” cars from attack, he says.

His and Valasek’s 2016 hack of the Jeep Cherokee’s steering at speed was at least physically defendable, he says: such a remote attack on an autonomous vehicle would not be. “The moment we turned the steering wheel, we [the driver] had a shot at resisting the attack. But in the future, there’s not going to be steering wheels and brakes, so you’re completely reliant on the car to drive itself,” Miller says of autonomous cars. “So the stakes are even higher.”

The goal of their driverless car security work, they say, is not about sniffing out the most or least hackable self-driving cars, but to make hacking them too much work for an attacker to bother doing. “[It’s] how to make the ROI [return on investment] so low for an attacker that it’s not worth doing,” Valasek explains.

Among the devices they’re helping secure are the tablets that serve as the human interface to the autonomous vehicles. Reducing the potential attack surface found in many of today’s driver-operated cars is another goal: if Bluetooth is unnecessary, it shouldn’t be included, for example.

“There’s always going to be vulnerabilities in code. So if you don’t need something, take it out,” Miller says. If the vehicle needs Bluetooth, for instance, it should have as few ways as possible to take data from the outside world to the car, he says.

Ethernet is the communications network infrastructure of choice for autonomous vehicles, the researchers say. And the key is isolating connected components from components that control the vehicle. “For example, the communications module should not have a direct connection to the CAN bus and should not be the same as the main compute module. Likewise, the tablets should be isolated as much as possible from more trusted components of the vehicle,” Miller and Valasek wrote in a white paper they published last week.

Autonomous vehicles do come with some inherent advantages security-wise: since they’re owned and deployed by a service in many cases, that provider also handles monitoring and maintenance of its fleet. The cars return to a garage each day where they get checked or fixed, and their software can be updated, while traditional cars rarely get software updates.

If a problem is detected in a self-driving car, it can be remotely powered down, or returned to the garage. In addition, the vehicles come with custom communications modules rather than a standard Web interface.

Threats

Among the possible remote threats to an autonomous vehicle, they say, are attacks on the listening service in its communications module, on remote assistance features, and on the infotainment system, for example. An attacker also could target the vehicle’s fleet management service, or its software update service.

On the local side, Wi-Fi, Bluetooth, and tire pressure-monitoring systems could be targeted, as well as sensors in the vehicle. But Miller and Valasek say the biggest concern is a remote attack that could result in an attacker physically controlling the vehicle.

“We know what to do. We know what’s on the line,” Valasek says of their security work.

“It doesn’t matter if it’s Cruise, Waymo, or Uber – the hack of a driverless vehicle is bad for everybody,” he says. “We want to share our thought process here. We don’t want anyone to have incidents.”

The new security advancements being forged in autonomous cars likely will trickle down to traditional driver cars, too. “Once [security] is in the firmware, it will be easy to do in regular cars,” Valasek says.

Meanwhile, Cruise has not yet rolled out its autonomous vehicle models nor has it provided a timeframe for the release.

“We’re the pre-flight [security] check,” Valasek says.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/iot/miller-and-valasek-security-stakes-higher-for-autonomous-vehicles/d/d-id/1332563?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Intel Reveals New Spectre-Like Vulnerability

A new side-channel speculative execution vulnerability takes aim at a different part of the CPU architecture than similar vulnerabilities that came before it.

There’s a new speculative execution side-channel vulnerability akin to Spectre and Meltdown, but this one is different in a crucial way: it’s aimed at program data rather than program instructions.

The new vulnerability, described in CERT Vulnerability Note VU#982149, is similar to Spectre in that it leverages speculative execution — a process by which certain computer instructions are executed in case they’re the next instructions called for by the software. This is a technique that speeds up code execution on just about all modern CPUs.

It differs from Spectre in that it doesn’t work on the part of memory holding instructions. Instead, the new vulnerability – called Foreshadow, or L1TF – targets the L1 data cache.

“Researchers simply followed the thread left by Spectre and Meltdown — this isn’t a completely new class of vulnerabilities,” says Matthew Chiodi, vice president of cloud security at RedLock.

Intel yesterday released a pair of security notifications on the vulnerability. The first focuses on the hardware details, and discusses the implications for operating system and VM developers. According to Intel, microcode has been developed and pushed live to help mitigate the effects of the vulnerability.

In its Intel Software Developer blog, Intel explores the impact on application developers and provides possible mitigations for programmers working on browsers, applications in VMs, etc.

Google Cloud Security published a blog post on the vulnerability, noting that, “Directly exploiting these vulnerabilities requires control of hardware resources that are accessible only with operating system level control of the underlying physical or virtual processors.”

That’s similar to other Spectre-like vulnerabilities and one of the key reasons most security professionals seem more curious than panic-stricken about this class of vulnerabilities.

Google also noted that the primary danger of Foreshadow is that a threat actor could use it to reach across virtual machine boundaries, gaining access to the information belonging to another virtual machine — and possibly, another organization.

“Systems that utilize software-defined storage via a mid-layer filesystem will likely experience the most impact. Many software-defined storage solutions, which use a mid-layer filesystem, will likely have a much larger performance impact as a result of these fixes,” says Jeff Ready, CEO of Scale Computing.

Ken Spinner, vice president of field engineering at Varonis, says this attack can glean sensitive data from the target. “The prize of an attack like this is sensitive data. If passwords or other credentials can be directly extracted and then exploited, it’s obviously a win for attackers,” he says.

The benefits of virtual machines in the cloud are based on the ability to maintain clear, clean boundaries between virtual servers, he says. “This entire class of processor attacks proves how hard that can be in practice. Securing virtual services can directly increase operational costs for cloud providers, leading to gaps in some cases,” he says.

Keeping Calm

Few security professionals are panicked about this class of speculative-execution vulnerabilities due to the difficulty and complexity in developing the attacks, getting them implanted on a target machine, and sorting through slowly growing piles of data in the hopes of finding something interesting. “Why slip in through the second story window when the front door is unlocked?” Spinner says.

Nonetheless, Foreshadow is seen by most observers as yet another call to make sure an organization’s update and patch policies are strong and ready for waves of remediating updates.

“How far this goes and how much damage it does depends greatly on whether people install the patch,” says Michael Daly, CTO for Raytheon’s cybersecurity and special missions. “Unfortunately, history tells us many will not. While this particular threat seems minor for now, since so few systems use SGX, that can change.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/application-security/intel-reveals-new-spectre-like-vulnerability-/d/d-id/1332566?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FBI warns of choreographed ATM drainage

The FBI has alerted banks that in the coming days cybercrooks are planning to spring a highly choreographed, multinational “ATM cashout” that could drain their cash machines of millions within the span of hours.

In an ATM cashout, cybercrooks hack a bank or payment card processor; lift fraud controls such as withdrawal limits, account balance checks and caps on the number of daily withdrawals; outfit so-called “casher crews” with cloned cards; and send them out to descend simultaneously on cash machines and strip them of money before the banks sound the alarm and slam down the window of opportunity.

Cybercrime journalist Brian Krebs on Sunday reported that the FBI alert to banks indicated that the plot could be triggered any day now.

From the confidential alert, which was privately sent to banks on Friday:

The FBI has obtained unspecified reporting indicating cyber criminals are planning to conduct a global Automated Teller Machine (ATM) cash-out scheme in the coming days, likely associated with an unknown card issuer breach and commonly referred to as an ‘unlimited operation’.

According to Krebs, the FBI said that “unlimited operations” compromise a financial institution or payment card processor with malware to access bank customer card information and exploit network access, enabling large-scale theft of funds from ATMs.

Historic compromises have included small-to-medium size financial institutions, likely due to less robust implementation of cyber security controls, budgets, or third-party vendor vulnerabilities. The FBI expects the ubiquity of this activity to continue or possibly increase in the near future.

What kind of vulnerability, you may well ask? We have no idea. Perhaps it’s a vulnerability that’s got an inch or two of dust on it? In January, the US Secret Service sent out an alert about ATM “jackpotting” attacks that used malware known as Ploutus.D – malware to which ATMs running Windows XP are particularly vulnerable.

Windows what, now? Yes, Windows XP. Ahem. As we noted then, it’s way past time to update – even extended support for the stripped-down Windows XP Embedded ended more than two years ago.

At any rate, back to that FBI alert, which gave more details on ATM cashouts:

The cyber criminals typically create fraudulent copies of legitimate cards by sending stolen card data to co-conspirators who imprint the data on reusable magnetic strip cards, such as gift cards purchased at retail stores. At a pre-determined time, the co-conspirators withdraw account funds from ATMs using these cards.

As Krebs notes, ATM cashouts are typically launched on weekends, often just after banks begin closing up shop on Saturday. Krebs reported on one such heist last month: in this case, $2.4 million was withdrawn from accounts at the National Bank of Blacksburg in two separate ATM cashouts over the course of eight months.

In one of the heists, the robbers hit the bank on Memorial Day weekend 2016: a federal holiday in the US. It began on Saturday, 28 May, and continued through the following Monday. The crooks drained almost $570,000 in the 2016 attack, plus nearly $2 million in another cashout operation that started on Saturday, 7 January, 2017 and ended on Monday 9 January.

The FBI said that the next ATM cashout is coming soon: if the timing on previous heists is indicative, it could well hit over the coming Labor Day weekend.

How to fortify now

The FBI is telling banks to bolster their security, including implementing strong password requirements and two-factor authentication (2FA) using a physical or digital token when possible for local administrators and business-critical roles.

(A code generator with an authenticator app such as Sophos Authenticator – also included in our free Sophos Mobile Security for Android and iOS – can help out. Just sayin’!)

Other tips for financial organizations from the FBI alert:

  • Implement separation of duties or dual authentication procedures for account balance or withdrawal increases above a specified threshold.
  • Implement application whitelisting to block the execution of malware.
  • Monitor, audit and limit administrator and business critical accounts with the authority to modify the account attributes mentioned above.
  • Monitor for the presence of remote network protocols and administrative tools used to pivot back into the network and conduct post exploitation of a network, such as PowerShell, Cobalt Strike and TeamViewer.
  • Monitor for encrypted traffic (SSL or TLS) traveling over non-standard ports.
  • Monitor for network traffic to regions where you wouldn’t expect to see outbound connections from the financial institution. (The last two tips are illustrated in the sketch after this list.)
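
The last two monitoring tips lend themselves to simple automation. Here’s a minimal Python sketch of the idea – the log format (a CSV with destination IP, port, country and a TLS flag), the list of “standard” TLS ports and the expected-country whitelist are all illustrative assumptions, not FBI or vendor guidance:

    import csv

    # Illustrative values only - tune these to your own environment.
    STANDARD_TLS_PORTS = {443, 465, 563, 636, 853, 993, 995, 8443}
    EXPECTED_COUNTRIES = {"US", "CA", "GB"}   # regions you normally connect to

    def flag_suspicious(path):
        # Assumed columns: dst_ip, dst_port, dst_country, is_tls
        with open(path) as handle:
            for row in csv.DictReader(handle):
                port = int(row["dst_port"])
                if row["is_tls"].lower() == "true" and port not in STANDARD_TLS_PORTS:
                    print("TLS on non-standard port %d -> %s" % (port, row["dst_ip"]))
                if row["dst_country"] not in EXPECTED_COUNTRIES:
                    print("Outbound traffic to unexpected region %s -> %s"
                          % (row["dst_country"], row["dst_ip"]))

    if __name__ == "__main__":
        flag_suspicious("connections.csv")

A real deployment would feed alerts into a SIEM rather than printing them, but the checks themselves mirror the last two recommendations above.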


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nUIRRb5n0c4/

Your smart air conditioner could contribute to mass power outages

In March, the US blamed Russia for attacks on the power grid. The Department of Homeland Security (DHS) and the FBI called it a multi-stage effort by “Russian government cyber actors” who targeted small facilities’ networks with malware, spear-phishing and remote access into energy sector networks.

Nation states using sophisticated cyber weaponry to attack: it’s like a Hollywood plot. In fact, experts believe that the 2015 and 2016 Ukraine power outages were the work of cyberattackers, and that they were a dress rehearsal for doing the same to the US.

But perhaps Russia or other hostile nation states aren’t the threats we should be worried about – we should be more concerned about attacks from our air conditioners.

As in, smart air conditioners, along with other internet-connected, high-wattage appliances such as smart hot water heaters that can be looped into a botnet, or zombie network, and forced to amp up their electrical demands, thereby overloading the power grid and causing mass, cascading blackouts.

As Wired reports, researchers from Princeton University – Saleh Soltan, Prateek Mittal and H. Vincent Poor – will be presenting their findings this week at the Usenix Security Symposium.

They’re calling the theoretical attack BlackIoT: an Internet of Things (IoT) botnet that would give adversaries the ability to launch large-scale, coordinated attacks on the power grid.

Rather than an attack on the supply side of the grid, the researchers have turned the tables to describe attacks on the demand side: what they’re calling manipulation of demand via IoT (MadIoT) attacks.

They studied five variations of these attacks, in which cyberattackers would control a botnet comprising thousands of consumer IoT devices – most particularly, ones that gobble power, such as air conditioners, water heaters and space heaters.

After running five varieties of software simulations to see how many of those devices an attacker would need to simultaneously hijack in order to disrupt the stability of the power grid, they came up with a scenario that Wired called disturbing, if not yet quite practical:

In a power network large enough to serve an area of 38 million people – a population roughly equal to Canada or California – the researchers estimate that just a 1% bump in demand might be enough to take down the majority of the grid. That demand increase could be created by a botnet as small as a few tens of thousands of hacked electric water heaters or a couple hundred thousand air conditioners.

Saleh Soltan, a researcher in Princeton’s Department of Electrical Engineering and the lead author of the report, told Wired that the energy grid is OK as long as nobody throws a two-ton elephant on one side of the seesaw:

Power grids are stable as long as supply is equal to demand. If you have a very large botnet of IoT devices, you can really manipulate the demand, changing it abruptly, any time you want.
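
To get a feel for the arithmetic behind those estimates, here’s a quick back-of-the-envelope sketch in Python. It is not the researchers’ grid model – the total demand and per-device wattages below are illustrative assumptions only:

    # Rough MadIoT-style estimate: how many hijacked appliances make a 1% demand bump?
    # All parameters are assumptions for illustration, not figures from the paper.
    GRID_DEMAND_MW = 20000.0   # assumed total demand of a grid serving ~38 million people
    TARGET_BUMP = 0.01         # the ~1% increase cited in the Wired summary

    DEVICE_DRAW_KW = {
        "electric water heater": 4.5,   # assumed per-device draw
        "air conditioner": 1.0,         # assumed window-unit compressor draw
    }

    extra_demand_kw = GRID_DEMAND_MW * 1000 * TARGET_BUMP
    for device, draw_kw in DEVICE_DRAW_KW.items():
        print("%s: roughly %d hijacked devices for a %.0f MW bump"
              % (device, extra_demand_kw / draw_kw, extra_demand_kw / 1000))

With those assumed figures, a 1% bump works out to a few tens of thousands of water heaters or a couple of hundred thousand air conditioners – the same order of magnitude as the estimates quoted above.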

The researchers didn’t detail specific vulnerabilities that would have to be exploited in order to hijack a critical mass of appliances, but really, did they need to? News of IoT device vulnerabilities is abundant. We’ve already seen the havoc caused by the Mirai botnet, for one – a vast array of home routers, webcams and other low-powered IoT devices that launched a DDoS attack on well-known investigative cybercrime journalist Brian Krebs.

As Naked Security’s Paul Ducklin has framed it, the unfortunate fact is that many IoT devices are designed, built and delivered with scant regard for security, and are installed without much care, often with well-known default passwords unchanged, and with access left open to anyone who cares to come knocking.

IoT devices that cost 5% as much as your laptop tend to get 5% as much security love-and-care, or even less, although they can do 100% as much damage in a [distributed denial of service, or DDoS] attack.

The danger of power outages is particularly acute: when power goes out, so too do life-support devices that depend on electricity, for example. That includes home dialysis or breathing machines. If everybody’s power blinks out at once, that means that our hospitals, our police departments and our emergency responders all go dark.

From the report:

Insecure IoT devices can have devastating consequences that go far beyond individual security/privacy losses. This necessitates a rigorous pursuit of the security of IoT devices, including regulatory frameworks.

We couldn’t agree more.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/NuqboegXQuM/

Are your Android apps listening to you?

Here’s a thing: numerous apps on your phone have permission to access your microphone.

Some, like the Phone app itself, were on the phone when you got it, but you’ve almost certainly added others – WhatsApp, Skype and Facebook, for instance – along the way.

From the moment you gave those apps audio permission, they’ve been able to listen in whenever they want, without telling you.

In theory, you’ll never know if an app is overstepping the mark; in practice, however, there are some cool ways of checking to see when an app is listening in.

Keeping track of an app’s behaviour is a handy technical skill to have, so we’re going to show you how to look at the system calls made by your Android mobile to the audio subsystem.

No more audio secrets!

By following our tutorial, you can keep track of exactly when an app is accessing the microphone.

Note. For this article, we used a test device that was wiped first and then rooted. This means we deliberately altered the security settings to give us administrative access – on Linux/Android, the admin account is called root, so getting root access is colloquially called rooting. We strongly recommend that you don’t do research of this sort on your regular phone, just in case something goes wrong. And definitely don’t try this on your work phone!

Tracing Android API calls

The tool we’ll use to find out if apps are listening to us is called AppMon.

AppMon’s Android Tracer can monitor apps on your phone by tracing Java classes when they’re called (almost all Android apps are written in Java).

Apps that want to use the microphone use the AudioRecord class. By monitoring this with Android Tracer, we can see how and when apps are interacting with the microphone.

Most of AppMon’s documentation is focused on macOS and Linux, but in this article we’ll show you how to install it on Windows.

AppMon talks to Android via a utility called Frida, monitoring software that you’ll need to install on your test Android device first.

Frida allows you to inject scripts into Android processes, hook into functions and spy on crypto APIs. For this reason it’s a scary app to have installed on any phone outside of a test environment.

You won’t find it in the Google Play Store, so you have to sideload it. For that you’ll need to have debugging enabled on your mobile, and the Android Debugging Bridge (ADB) installed on Windows.

Here’s how to get started.

Enabling USB Debugging on Android

Firstly you’ll need to make sure your Android device is in developer mode, and that USB Debugging is enabled:

  1. Open the Settings app.
  2. Select System (Only on Android 8.0 or higher).
  3. Scroll to the bottom and select About phone.
  4. Scroll to the bottom and tap Build number 7 times. (Yes, that’s officially how you do it!)
  5. Return to the previous screen to find Developer options near the bottom.

Switch developer mode on, scroll down until you see the newly revealed option USB debugging, and turn it on. Now plug the Android device into your Windows machine with its USB cable, ready for the next phase.

Installing ADB on Windows

ADB is a handy tool for interacting with Android phones, and we’ll be using it to install Frida onto our device.

Download the android-sdk command-line tools for Windows.

Once the tools are downloaded and unzipped, you can run adb.exe from the command line.

Open a command window in the directory where adb.exe is currently located. You can do this by holding shift whilst right clicking on the explorer window where adb.exe exists, and selecting “open command window here”.

With your newly opened command prompt, type adb devices and hit return.

This will show you a list of the Android devices with debugging enabled that are currently connected.

Example:

    C:\Users\User1> adb devices
    List of devices attached
    XXXXXXXXX    device

Running Frida on Android

Download the frida-server app, which you need on your Android device to monitor apps that are running.

Download the compressed binary with the file name “frida-server-NN.N.NN-android-MMM.xz”, where NN.N.NN represents the version number and MMM is the processor type in your phone (one of arm, arm64, x86 or x86_64).

Most older Androids have ARM chips; many newer phones have the more powerful ARM64 processor – if you choose the wrong version of frida-server you won’t break anything, but it won’t work. If you aren’t sure, use a search engine to find the CPU type for your specific model of phone.

Unzip the downloaded file to the same location as adb.exe, and rename the file to frida-server (matching the commands below).

Back in the command prompt you opened earlier, push the Frida-server file onto your Android device and run it:

    adb push frida-server /data/local/tmp/
    adb shell "chmod 755 /data/local/tmp/frida-server"
    adb shell
    su
    /data/local/tmp/frida-server &

Here we’ve pushed the Frida-server file to our Android device in the location of /data/local/tmp/, which is a directory on the Android file system that permits you to execute a script.

Then we used chmod to edit the file permissions of the Frida-server to make sure it’s allowed to run.

Finally, we ran the Frida-server program (the ampersand at the end of the command makes it run in the background).

Prepping Windows

Windows now needs to be prepared to run Android Tracer. Tracer is written in Python, and Python must be installed for it to run.

By placing Python and adb within the Windows Environment Variables, you can run both ADB and Python without having to be in the directory where adb.exe or python.exe are installed.

Installing Python 2

Here’s a quick guide on getting started with Python 2.7, which at the time of writing is the most current version of Python 2.

Browse to the Python website and select “download” next to the latest version of Python 2 (2.7.15 at the time of writing). Scroll down until you see the MSIs for 64-bit and 32-bit environments, and select the one that matches your version of Windows. Once downloaded, launch the MSI file and follow the installation instructions.

Python and ADB Windows Environment Variables

On Windows 10, open Control Panel and search for “environment variables”. Under System select Edit the system environment variables.

Select Environment Variables at the bottom of the window. This pops open a new window where you’ll see User variables for your user and System variables. Under System variables there is a variable named Path. Select it so that it’s highlighted, then select Edit….

Select New to add the adb directory to the existing list, and enter the directory where adb.exe is currently located (e.g. C:\Program Files\ADB).

Select New a second time and add the location of python.exe (e.g. C:\Python27).

Select New for the last time and add the location of pip.exe (e.g. C:\Python27\Scripts).

Installing AppMon Dependencies

In order to run AppMon on Windows, there are a few Python dependencies which are required.

AppMon makes use of argparse, frida, flask, termcolor and dataset.

Run the following command on your Windows device to install these dependencies:

   pip install argparse frida flask termcolor dataset --upgrade
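
Before moving on, it’s worth a quick sanity check that the freshly installed frida bindings can actually see frida-server on your phone. This isn’t part of AppMon’s own instructions – just an optional check, and it assumes frida-server is still running from the earlier step:

    import frida

    # Lists a few processes from the USB-attached device. If this raises an
    # error, frida-server probably isn't running (or USB debugging is off).
    device = frida.get_usb_device(timeout=5)
    for process in device.enumerate_processes()[:5]:
        print("%d  %s" % (process.pid, process.name))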

Installing AppMon

On the AppMon GitHub page, select Clone or download and download the zip file containing AppMon. You’ll have to unzip the appmon-master folder to a convenient location on your Windows device.

Now navigate to the location where AppMon is unzipped and go into the folder named tracer. Hold shift and right click in this directory and then select the option to Open command window here.

You can now use Tracer to monitor if a mobile app is eavesdropping when it shouldn’t be.

An example command to see if WhatsApp is listening when it shouldn’t is provided in the following section.

Monitoring Your Microphone

This is where all the hard work pays off.

We can now run Android Tracer to monitor when the microphone starts to record:

   python android_tracer.py -a "com.whatsapp" -c "*AudioRecord*" -m "startRecording"

In the command above, -a "com.whatsapp" says you want to monitor an app called com.whatsapp; the -c option specifies the class (Java sub-program) to monitor; and -m says which specific method (Java function) to watch out for.

The asterisks in the text string “*AudioRecord*” denote that you want to match any characters at the start and end of the text.

This makes it easy to keep an eye on a whole set of related classes or methods without listing every one explicitly – any method that has “AudioRecord” somewhere in its name will match.

Android’s comprehensive developer documentation has a complete list of classes and methods you might want to monitor – for example, we’re monitoring startRecording in the AudioRecord class here, but you might want to look at takePicture in the Camera class instead.
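
If you’d rather skip AppMon and script the check yourself, the frida Python bindings installed earlier can hook the same method directly. The sketch below is a rough illustration rather than AppMon’s own code: the package name is just an example, the target app must already be running, and the hook only watches the zero-argument startRecording() overload:

    import sys
    import frida

    PACKAGE = "com.whatsapp"   # example target; any installed app's package name works

    HOOK = """
    Java.perform(function () {
        var AudioRecord = Java.use('android.media.AudioRecord');
        AudioRecord.startRecording.overload().implementation = function () {
            send('AudioRecord.startRecording() called');
            this.startRecording();   // hand control back to the real method
        };
    });
    """

    def on_message(message, data):
        # send() payloads from the hook arrive as {'type': 'send', 'payload': ...}
        if message.get("type") == "send":
            print("[mic] %s" % message["payload"])

    device = frida.get_usb_device(timeout=5)
    session = device.attach(PACKAGE)      # the app must already be running
    script = session.create_script(HOOK)
    script.on("message", on_message)
    script.load()
    print("Hook installed - waiting for microphone activity (Ctrl+C to quit)")
    sys.stdin.read()

Swap the class and method names in the hook to watch other sensitive APIs – the same pattern works for Camera.takePicture, for instance.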

You can see what we uncovered on our test device in the short video demo that accompanies the original article (watch it directly on YouTube if the embedded player won’t load).

If you’ve enjoyed researching into what your mobile is getting up to behind the scenes, check out this article on oversharing apps.

New! Improvements to Android

Since writing this article, Android 9 Pie has been released, bringing with it some much-needed privacy for us all. In a statement on the Android Developers Blog, Dave Burke, VP of Engineering, said that the microphone won’t be accessible whilst the app is idle:

The system now restricts access to mic, camera, and all SensorManager sensors from apps that are idle.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/zvWma588JSg/

The sextortionists are back, this time with your phone number as “proof”

Thanks to Brett Cove of SophosLabs for his behind-the-scenes work on this article.

Sextortion is back in the news.

That’s where someone tries to blackmail you by telling you to pay up or else they’ll reveal something truly personal about your sexuality or your sex life.

Typically, sextortionists claim to have infected your laptop or phone with malware while you were browsing, and then to have kept their eye on both your browsing habits and your webcam.

You can imagine the sort of data they claim to have sniffed out – and even if you know jolly well they couldn’t have got it from you, it still makes you wonder what they might claim you’ve been up to.

Last month, for example, we wrote about an ongoing sextortion scam campaign that tried to amplify your fear by throwing a genuine password of yours into the email.

I do know, [PASSWORD REDACTED], is your password. You do not know me and you are probably thinking why you are getting this e mail, correct? 

actually, I placed a malware on the adult videos (pornography) website and do you know what, you visited this web site to experience fun (you know what I mean). While you were watching videos, your internet browser initiated working as a RDP (Remote Desktop) that has a key logger which gave me accessibility to your display and also webcam. after that, my software program obtained all your contacts from your Messenger, Facebook, as well as email. 

The good news here is that the passwords revealed were old ones – typically from accounts that recipients had closed long ago, or where they’d already changed the password.

Even if you were still using the password they claimed to “know”, the crooks hadn’t acquired it by eavesdropping on you or hacking into your computer.

They’d bought or found a bunch of stolen data acquired in some breach or other, and were using it to try and convince you they really had hacked your device.

Well, these guys are back – or, more precisely, never went away, because we’ve seen bursts of this scam for many months already.

This time, the crooks seem to have got hold of a list that ties email addresses and phone numbers together, so they’re putting your phone number (or at least what they think is your phone number) into the email:

It seems that, +1-555-xxx-xx55, is your phone number. You may not know me and you are probably wondering why you are getting this e mail, right?

. . .

I backuped phone. All photo, video and contacts.

I created a double-screen video. 1st part shows the video you were watching (you've got a good taste haha . . .), and 2nd part shows the recording of your web cam.

exactly what should you do?

Well, in my opinion, [AMOUNT FROM $100-$1000 THIS TIME] is a fair price for our little secret. You'll make the payment by Bitcoin (if you do not know this, search "how to buy bitcoin" in Google).

In the 5000 or so samples we extracted from this week’s reports, the amount demanded varied from $100 to $1000 (last time we saw amounts up to $2900).

Interestingly, all the phone numbers had a similar North American format, with five digits Xed out; some Naked Security readers outside North America have reported receiving UK-style numbers with all but the last four digits Xed out.

We can only guess, but it looks as though the stolen data that the crooks acquired this time was pre-redacted – they’d be more convincing if they could reveal your entire number, after all.

Has anyone paid up?

When you try to track down Bitcoin payments, all you can tell is whether someone sent something to the Bitcoin addresses specified.

The 5000 samples from the past week that we used to dig into this latest email campaign each demanded payment into one of just three different Bitcoin addresses, which showed payment histories like this:

  Bitcoin address            BTC received   USD approx
  ------------------------   ------------   ----------
  1GYNxxxxxxxxxxxxxxxxxxLB    0.93094968       $6000
  19Gfxxxxxxxxxxxxxxxxxxai    0.04491935        $300
  1NQrxxxxxxxxxxxxxxxxxxrS    0.00047363          $3

  [BTC1 = $6500, roughly correct at 2018-08-15T16:00Z]

In case you’re wondering, there have been 20 payments into those three addresses, roughly distributed as follows:

   3 payments at $1000
   1 payment  at  $940
   1 payment  at  $780
   1 payment  at  $300
   1 payment  at  $210
   2 payments at  $200
   1 payment  at  $150
   1 payment  at  $100
   2 payments at   $90
   1 payment  at   $80
   1 payment  at   $10
   5 payments at    $1

Of course, we can’t tell whether any of the payments into these addresses came from victims of this scam – they could have come from anywhere, including from the crooks themselves.

What to do?

Regular Naked Security readers will know what we recommend in cases like this: DON’T PAY, DON’T PANIC, DON’T REPLY.

Even if the crooks had hacked your computer and recorded material you wish they hadn’t (it needn’t be porn, of course), why pay them not to reveal data that they already possess?

At least in a ransomware attack you are “paying for a positive” – you’re paying for a decryption key that will either work and do what you were hoping, or won’t work and that’s that.

But if you pay the crooks not to do something, they can just threaten to do it again next week, month, year…

…so it won’t get you anywhere, except to mark you out as someone who already knows how to buy and spend bitcoins.

Fortunately, in this case, the crooks don’t have any browsing logs or webcam footage at all, so it’s all just empty threats.

Hit [Delete] and you’re done with it – tell your friends.

Oh, and use this story to remind yourself, and to convince your boss, that any data breach can lead to ongoing trouble – even if the breach was “just” email addresses and phone numbers, and even if it happened long ago.

That’s the trouble with private data: once out, always out.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MJmZPpn5OPw/

Baddies of the internet: It’s all about dodgy mobile apps, they’re so hot right now

Rogue mobile apps have become the most common fraud attack vector, according to the latest quarterly edition of RSA Security’s global fraud report.

Fraud from mobile browsers and mobile applications made up 71 per cent of the roughly 402,000 fraudulent transactions recorded in Q2 2018, compared to 61 per cent in Q2 2017.

RSA Security detected 9,185 rogue applications (compared to approximately 8,000 last quarter) which collectively accounted for 28 per cent of all fraudulent attacks recorded. Rogue apps can be anything from fake banking applications designed to capture authorisation codes to counterfeit mobile software that poses as either popular games or technology from a trusted consumer brand.

In the second quarter of 2018, RSA logged nearly 5.1 million unique compromised cards and card previews in underground cybercrime bazaars and from other sources. This represents a 60 per cent increase over the number of cards RSA recovered in the previous quarter.

Fraudsters are increasingly using burner devices and throwaway accounts to carry out bogus transactions. While just 0.4 per cent of legitimate payment transactions were attempted from a new account and new device, 27 per cent of the total value of fraudulent payments were made through new accounts and devices.

The average value of a fraudulent transaction in Europe was $392 (€346, £308), compared to $171 (€151, £134) for legitimate purchases. The average UK fraudulent transaction was valued at a slightly lower $355 (€314, £280), compared to $193 (€171, £152) for legitimate transactions.

Phishing accounted for 41 per cent of all fraud attacks observed by RSA in Q2. Canada, the United States, and the Netherlands were the top three countries most targeted by phishers.

Fraud attack type distribution [source: RSA Security]

The stats were gathered by RSA Security’s Fraud and Risk Intelligence unit, a team that works undercover to infiltrate cybercriminal groups, unearth fraud campaigns and track their proliferation. The intel is used by RSA for its managed threat services product. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/15/fraud_sitrep_rsa/

Open Source Software Poses a Real Security Threat

It’s true that open source software has many benefits, but it also has weak points. These four practical steps can help your company stay safer.

Open source code has conquered the world of software. Almost every website, API, and application is built on an enormous stack of open source libraries and frameworks that totals many millions of lines of code. Millions of corporations and developers are taking advantage of the expansive array of components, zero cost, and easy integration to create more-sophisticated software far faster than building it themselves.

I am a huge advocate of open source and have led several very successful open source projects. But along with all the benefits of open source components, we have to recognize some new risks. There are millions of these libraries in all the different software languages, such as Java, .NET, Ruby, Python, Go, and many more. Dozens of new vulnerabilities are discovered every week, but we’re only scratching the surface. The problem is that only a handful of talented security researchers are doing the highly skilled work of testing this code.

That means that there are, almost certainly, large numbers of latent vulnerabilities in open source software. Having a researcher discover one of these and publish it seems like an expensive fire drill for companies, because they have to search to see if they’re using the library, replace it, recode their application to match, retest, resecure, and redeploy. But if a malicious actor finds the vulnerability and starts attacking companies with it, the damage can be much more expensive. Web applications and web APIs run with almost full privilege inside a company’s data center, and all that open source inherits the power to do anything the application can do.

Bad actors have recognized the power of the software supply chain attack vector. If finding a vulnerability gets too hard, they can switch to attacking the open source projects themselves. For example, they could simply join a project and contribute code that contains or creates a weakness. Or they could target the open source repositories by cloning an existing library, introducing malicious code, and making it available under a name similar to the original’s. Hackers have even targeted the development “tool chain” to inject their code into binaries. In all these examples, developers and end users alike would not see the attack happening in their data center, but they would be completely owned.

The ramifications of this are staggering. If an attacker were able to infiltrate a popular library like log4j, they would very quickly be running with privilege inside most data centers in the world. They could use this access not only to attack the targeted application but also as an internal launching point for attacks on the organization’s internal network. And that’s just a single library. This is the easiest path to seriously disrupting the Internet and harming huge numbers of people.

Organizations need to minimize their exposure and establish the capability to respond to novel vulnerabilities and attacks within hours. Unfortunately, most organizations take months to respond and are very exposed in the interim. Every company that is betting their future on software needs to have a strategy for beefing up the security of their software supply chain. Here are a few practical tips:

  1. Exercise Restraint: Don’t allow just any random library into your supply chain. Remember that you are betting your company on the security of that code. Set and enforce some policies around the types of code you will allow. Look for projects with high popularity, active committers, and evidence of process — including security.
  2. Establish Guardrails: Create guidelines for secure use of the libraries you select. Define how you expect each library to be used, and detail how developers should safely install, configure, and use each library in their code. Also, be sure to identify dangerous methods and how to use them safely.
  3. Constant Vigilance: Establish continuous self-inventory so you know exactly what open source libraries you are using. Ensure that you have a notification system in place, so you know exactly what applications and servers are affected by new vulnerabilities. (A minimal inventory sketch follows this list.)
  4. Runtime Protection: Use runtime application self-protection (RASP) to prevent both “known” and “unknown” library vulnerabilities from being exploited. If novel vulnerabilities are disclosed, your RASP infrastructure enables you to respond in minutes, not weeks or months.
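
As a small illustration of the self-inventory idea in the third point, here is a hedged Python sketch that dumps every library installed in the current Python environment as name==version pairs, ready to be compared against whatever vulnerability-advisory feed you use. It covers only Python dependencies and is a starting point, not a complete inventory solution:

    import pkg_resources

    # Emit name==version for every package in the current environment so the
    # list can be diffed against an advisory feed or a previous inventory.
    inventory = sorted(
        (dist.project_name, dist.version) for dist in pkg_resources.working_set
    )
    for name, version in inventory:
        print("%s==%s" % (name, version))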

In an age of “digital transformation initiatives” your software supply chain is the key to creating and deploying applications quickly. Please make sure you don’t inadvertently undermine your entire business in the rush to reinvent it.


A pioneer in application security, Jeff Williams is the founder and CTO of Contrast Security, a revolutionary application security product that enhances software with the power to defend itself, check itself for vulnerabilities, and join a security command and control …

Article source: https://www.darkreading.com/application-security/open-source-software-poses-a-real-security-threat/a/d-id/1332535?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New PHP Exploit Chain Highlights Dangers of Deserialization

PHP unserialization can be triggered by other vulnerabilities previously considered low-risk.

PHP unserialization attacks have been well known for some time, but a new exploitation method explained last week at Black Hat USA in Las Vegas demonstrated that the attack surface for PHP unserialization is broader than originally thought.  

“What I presented was basically a new way to start an unserialization attack,” says Sam Thomas, director of research at Secarma Ltd. “In PHP, there’s a specific command called ‘unserialize,’ which starts unserialization, but actually it turns out that because of other stuff that goes into the core of PHP, there’s another way to trigger it.”

Also known as deserialization, the process of unserialization happens when an application like PHP takes an object that’s been encoded into a format that can be stored and transported easily — also known as being serialized — and converts it back into a “live” object.

When an application unserializes an object that’s been maliciously created or manipulated, bad things can happen. In many instances this scenario opens up the application to remote code execution (RCE). It’s a danger that is growing in prominence: last year, OWASP added insecure deserialization to its recently updated Top 10 list, and last year’s massive Equifax breach was reportedly initiated through deserialization.

For his part, Thomas demonstrated that it is possible to take advantage of the way that PHP handles self-extracting files in what’s called a Phar archive. These Phar archives can contain serialized metadata, and any time a file operation accesses a Phar archive, that metadata is unserialized.

So, if an attacker can get any Phar archive into the target’s local file system with malformed metadata and trigger any operation to access that file – even to simply look up whether the file exists – then they can start an unserialization attack.

As a result, Thomas explained that a whole range of path-handling vulnerabilities that previously might have been considered low-risk information disclosure or server-side request forgery (SSRF) vulnerabilities can now be used at the speartip of an unserialization attack chain — often referred to as a gadget chain — to ultimately get the attacker to RCE.

What Devs Can Do

Last week he demonstrated several vulnerability and exploit examples against well-known PHP libraries, as well as an as-yet-unfixed issue in how WordPress handles thumbnail files, that exemplify how this method of unserialization plays out in the real world.

“I’ve highlighted that the unserialization is exposed to a lot of vulnerabilities that might have previously been considered quite low-risk,” he explains. “Issues which they might have thought were fixed with a configuration change or had been considered quite minor previously, might need to be reevaluated in the light of the attacks I demonstrated.”

In a paper detailing his findings, Thomas recommends that developers avoid design patterns that can result in easily abused unserialization gadgets, and that IDS and IPS systems start instituting signatures that detect malicious Phar archives.

“The research continues a recent trend, in demonstrating that object (un)serialization is an integral part of several modern languages,” he wrote. “We must constantly be aware of the security impact of such mechanisms being exposed to attacker-controlled data.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/application-security/new-php-exploit-chain-highlights-dangers-of-deserialization/d/d-id/1332559?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Instagram Hack: Hundreds Affected, Russia Suspected

Affected users report the email addresses linked to their Instagram accounts were changed to .ru domains.

A growing number of Instagram users have been hit in a hacking campaign leaving hundreds logged out of their accounts and struggling to reverse their profile content back to normal.

Users affected by the hack, first reported by Mashable, are logged out of their accounts. When they attempt to log back in, they learn their username, profile photos, password, and linked Facebook account have been changed. Email addresses linked to Instagram accounts have been switched to .ru domains, a sign the threat may originate from Russia or a Russian impersonator.

Other commonalities include deleted bios, changed handles, and a new profile photo of a Disney or Pixar film character. Nobody has confirmed the source of the hack or how the actor(s) is gaining access to these accounts.

While the attacker(s) have not deleted the photos in each profile, the fact that they edited contact info is making it harder for users to regain account access. Instagram has published guidance on how to restore affected accounts and revoke access to third-party apps.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/instagram-hack-hundreds-affected-russia-suspected/d/d-id/1332558?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple