STE WILLIAMS

“Most Hated Man in America” Martin Shkreli’s Twitter feed hijacked

On Saturday, “Most Hated Man in America” Martin Shkreli – he who raised the price of a life-saving AIDS pill from $13.50 to $750 and who pleased much of the nation last week by getting busted over an alleged securities fraud Ponzi scheme – took to Twitter to shrug off the charges:

On Sunday, internet poltergeists diverted that stream of confidence, hijacking Shkreli’s Twitter account, changing his name to “Martin the God”, and emitting seven taunting and sometimes profanity-laced tweets, including:

I am now a god

“Anyone want free money? Willing to donate hundreds of thousands to charities before I go to prison…”

A spokesman for Shkreli, who stepped down from his position as chief executive of Turing Pharmaceuticals last week, confirmed to Reuters that Shkreli’s account had been hacked and that they were working with Twitter to get it back.

By late Monday morning, Shkreli tweeted a message saying that he’d regained control of his account.

One of the responses that message got asked, in effect, why Shkreli hadn’t been using two-factor authentication.

That’s a good question. Because Twitter does, in fact, have a two-factor authentication (2FA) tool, which it calls login verification.

Twitter introduced it in February 2015 as a way to fend off hijackings like the one that Shkreli had to deal with.

We don’t know how Shkreli’s account was compromised, but we do know that there are plenty of ways to do it: he might have clicked on a phishy link, reused his password, or perhaps he just used a feeble one – like his pet’s name – instead of using a unique, hefty brute of a password.

Of course, Twitter accounts of businesses or celebrities are particularly tempting targets, and with a week like Shkreli had, he might as well have had a glowing target painted on his back.

We don’t know whether he had login verification turned on, but it would have made his account much harder to take over if he did: to hijack a 2FA-protected account, an attacker would have needed not only his login credentials but also access to his phone.
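For readers wondering what that second factor actually buys you, here is a minimal, generic sketch of a time-based one-time password (TOTP) check of the sort many 2FA systems use. It is purely illustrative – it is not Twitter’s implementation (login verification can also use SMS codes) – and the secret shown is a made-up example.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generic RFC 6238 time-based one-time password (illustrative only)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # 30-second time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    # Both factors must check out: the password AND a rolling code from the user's phone.
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret, not a real account key
    print("Current one-time code:", totp(demo_secret))
```

Even if a phisher captures the password, the login fails without the rolling code generated on the victim’s phone.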

You can check out this video from Twitter that shows you how to set up login verification.

Regardless of what you think of Shkreli, his innocence or guilt, or his guitar playing, hijacking his account was still wrong.

We hope that he, you or anybody liable to account hijacking knows about, and implements, 2FA on Twitter or any online service where it’s available.

Image courtesy of Twitter.com / Martin Shkreli

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2u2eR5ethEM/

Apple CEO Tim Cook sticks to his guns: “No encryption backdoors”

Apple CEO Tim Cook appeared on CBS’s 60 Minutes TV show last night.

As you can probably imagine, the topic of encryption came up, in particular the issue of backdoors.

Bluntly put, a backdoor is a deliberate security hole – for example, an undocumented master decryption key – that is knowingly added to a software product.
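To make that concrete, here is a deliberately simplified, hypothetical sketch of what such a backdoor can look like in login code; the hardcoded “support” password is invented for illustration and is not from any real product.

```python
import hmac

# Toy user store; a real system would store salted password hashes, not plaintext.
USER_DB = {"alice": "correct horse battery staple"}

# Hypothetical backdoor: one hardcoded password that works for every account.
SUPPORT_MASTER_PASSWORD = "letmein-support-2015"   # invented for illustration

def login(username: str, password: str) -> bool:
    if hmac.compare_digest(password, SUPPORT_MASTER_PASSWORD):
        return True                                 # backdoor: bypasses per-user checks
    expected = USER_DB.get(username)
    return expected is not None and hmac.compare_digest(password, expected)
```

Anyone who recovers that one string – from a leak, or simply by combing through the shipped binary – gets into every installation, which is why a backdoor “for the good guys” never stays that way.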

Some backdoors are there as a temporary convenience, for example to speed things up during development, a bit like wedging your real-life back door open while you’re shuttling the garbage out into the yard.

But temporary software backdoors have a way of getting forgotten, and ending up in production builds, which is a strong argument for avoiding backdoors in the first place, convenience notwithstanding.

Some backdoors are there as a “feature”, for example so that the support desk can help you more quickly if you are on the road and forget your password, without needing to read you a lengthy, one-off recovery code that you have to type in within a limited time.

But backdoors like this soon end up widely known, and widely misused, which is a strong argument for avoiding backdoors in the first place, convenience notwithstanding.

Lastly, some backdoors are requested by law enforcement or a country’s regulators, supposedly as an aid in fighting crime.

The claim is that strong encryption that can’t be cracked gives criminals and terrorists an unfair advantage, because it means they can communicate without fear of their conversations being eavesdropped or investigated.

Unbreakable encryption, say its detractors, is as good as contempt of court, because crooks can laugh at search warrants that they know can’t be carried out.

But Tim Cook told 60 Minutes that he doesn’t agree:

Here’s the situation…on your smartphone today, on your iPhone. There’s likely health information, there’s financial information. There are intimate conversations with your family, or your co-workers. There’s probably business secrets and you should have the ability to protect it. And the only way we know how to do that, is to encrypt it. Why is that? It’s because if there’s a way to get in, then somebody will find the way in. There have been people that suggest that we should have a backdoor. But the reality is if you put a backdoor in, that backdoor’s for everybody, for good guys and bad guys.

Which is a strong argument for avoiding backdoors in the first place, convenience notwithstanding.

Indeed, mandatory cryptographic backdoors will leave all of us at increased risk of data compromise, possibly on a massive scale, by crooks and terrorists…

…whose illegal activities we will be able to eavesdrop and investigate only if they too comply with the law by using backdoored encryption software themselves.

In other words, Tim Cook is right: if you put in cryptographic backdoors, the good guys lose for sure, while the bad guys only lose if they are careless.

We know this because we have tried enforcing mandatory backdoors before, and it did not end well.

In the 1990s, for example, the US had laws requiring American software companies to use deliberately-weakened encryption algorithms in software for export.

The US legislators intended that these export-grade ciphers would make it safe to sell cryptographic software even to potential enemies because their traffic would always be crackable.

But the regulations ended up affecting Americans in a double-whammy:

  • International customers simply bought non-US products instead, hurting US encryption vendors.
  • EXPORT_GRADE ciphers lived on long after they were no longer legally required, leaving behind weaknesses such as FREAK and LOGJAM that potentially put all of us at risk (a sketch of how to shut such suites out follows below).
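As a minimal sketch (ours, not from the article) of what shutting those suites out looks like, here is how a Python client can build a TLS context that refuses export-grade and other weak ciphers; modern OpenSSL builds already drop the EXPORT suites entirely, so on current systems this mostly documents intent.

```python
import ssl

# Build a client-side TLS context that excludes export-grade and other weak suites,
# so a FREAK/Logjam-style downgrade has nothing insecure to fall back to.
context = ssl.create_default_context()
context.set_ciphers("HIGH:!EXPORT:!aNULL:!eNULL:!DES:!3DES:!RC4:!MD5")

# Show what is left after the exclusions (names vary with the local OpenSSL build).
print(sorted(c["name"] for c in context.get_ciphers())[:5], "...")
```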

Those who cannot remember the past are condemned to repeat it.

💡 LEARN MORE – To encrypt or not to encrypt? We explore the issues ►

💡 LEARN MORE – The FREAK bug, a side-effect of weakened encryption ►

💡 LEARN MORE – The LOGJAM bug, another side-effect of weakened encryption ►

Image of open doors courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/u96dbmOo_oM/

Oracle ordered to admit on its website that it lost the plot on Java security

Oracle bungled the security updates of its Java SE software so badly it must publish a groveling letter prominently on its website for the next two years.

Since gobbling up Java along with Sun in 2010, Oracle’s software updates for Java SE would only affect the latest version installed. If you had multiple versions of Java SE on your system, only the latest would be replaced when installing or upgrading to a new release – leaving the old and insecure copies of Java SE on the system for hackers and malware to exploit. Vulnerabilities lurking in the outdated installations can be abused to hijack computers, steal passwords, and so on.

Why would you have multiple versions on one machine? Well, Oracle’s hopeless code would never remove old builds of Java SE from PCs: each update would leave the old vulnerable versions in place like ticking time bombs. According to US watchdog the FTC, Oracle knew in 2011 that its software was broken, as internal documents admitted the “Java update mechanism is not aggressive enough or simply not working.”
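As a rough illustration of the problem – a sketch only, not an Oracle tool, and assuming Java’s default install locations under Program Files on Windows – an administrator could list the side-by-side installs like this:

```python
from pathlib import Path

# Default Java SE install roots on Windows (an assumption; adjust for your environment).
JAVA_ROOTS = [Path(r"C:\Program Files\Java"), Path(r"C:\Program Files (x86)\Java")]

def leftover_java_installs():
    """List every Java SE directory found, e.g. jre1.6.0_45 sitting next to jre1.8.0_66."""
    found = []
    for root in JAVA_ROOTS:
        if root.is_dir():
            found.extend(p.name for p in root.iterdir() if p.is_dir())
    return sorted(found)

if __name__ == "__main__":
    installs = leftover_java_installs()
    print(f"{len(installs)} Java SE install(s) found:", installs)
    if len(installs) > 1:
        print("More than one version present - the older ones are the 'ticking time bombs' above.")
```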

Oracle fixed its installer in August 2014 to cleanse systems of older copies of Java SE, but the FTC is still jolly cross with the California tech giant – particularly because Java SE has apparently been installed on more than 850 million PCs. The regulator sued the database goliath, accusing it of breaking consumer protection laws by lying about the security of its applications.

In a settlement announced on Monday, Oracle must provide a means for people to rid their systems of older builds of Java SE, or the corporation will face fines. It must also encourage antivirus makers Avast, AVG, ESET North America, Avira, McAfee, Symantec, and Trend Micro, and Firefox maker Mozilla, to put out security advisories about the Java SE cockup.

According to the regulator:

Oracle failed to inform consumers that the Java SE update automatically removed only the most recent prior version of the software, and did not remove any other earlier versions of Java SE that might be installed on their computer, and did not uninstall any versions released prior to Java SE version 6 update 10.

As a result, after updating Java SE, consumers could still have additional older, insecure versions of the software on their computers that were vulnerable to being hacked.

The IT titan must “notify consumers during the Java SE update process if they have outdated versions of the software on their computer, notify them of the risk of having the older software, and give them the option to uninstall it. In addition, the company will be required to provide broad notice to consumers via social media and their website about the settlement and how consumers can remove older versions of the software.”

According to an order [PDF] drafted by the FTC, Oracle must put the following letter on its website for people to see:

Dear Java SE customer:

We’re sending you this message because you may have downloaded, installed, or updated Java SE software on your computer. The Federal Trade Commission, the nation’s consumer protection agency, has sued us for making allegedly deceptive security claims about Java SE. To settle the lawsuit, we agreed to contact you with instructions on how to protect the personal information on your computer by deleting older versions of Java SE from your computer. Please take the suggested steps as soon as possible.

Here’s a summary of what the FTC lawsuit is about. The FTC alleged that, in the past, when you installed or updated Java SE, it didn’t replace the version already on your computer. Instead, each version installed side-by-side at the same time. Later, after we changed this, installing or updating Java SE removed only the most recent version already on your computer. What’s more, in many cases, it didn’t remove any version released before October 2008.

Why was that a problem? Earlier versions of Java SE have serious security risks we corrected in later versions. When people downloaded a new version, we said they could keep Java SE on their computer secure by updating to the latest version or by deleting older versions using the Add/Remove Program utility in their Windows system. But according to the FTC, that wasn’t sufficient. Updating to the latest version didn’t always remove older versions. So many computers had several versions installed.

That creates a serious security vulnerability. Even if you installed the most recent version of Java SE, the personal information on your computer may be at risk because earlier, less secure versions could still be executed.

To fix this problem, visit http://java.com/uninstall, where instructions on how to uninstall older versions of Java SE are provided. This webpage also provides a link to the Java SE uninstall tool, which you can use to uninstall older versions of Java SE. You may also go to http://java.com/uninstallhelp if you have any additional questions or concerns.

To learn more about this lawsuit, call the FTC at 1-888-922-7836.

A spokesperson for Oracle was not available for immediate comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/22/ftc_oracle_java/

Google’s SHA-1 snuff plan is catching up with Microsoft, Mozilla

Google has outlined its approach to deprecating the compromised SHA-1 hash in its Chrome browser.

Like the rest of the security world, Google believes the SHA-1 hash function just isn’t safe any more. That’s a reasonable position, because researchers have shown it can be broken without enormous effort. Mozilla, Microsoft and Facebook have all therefore proposed to stop using it and also make life hard for those relying on SHA-1 certificates.

Google’s now explained its plan for SHA-1.

The Alphabet subsidiary’s cunning plan starts with Chrome 48, due early in 2016 and tweaked so that it presents users with a warning if a site serves a certificate that:

  1. is signed with a SHA-1-based signature
  2. is issued on or after January 1, 2016
  3. chains to a public CA

Subsequent versions of Chrome will display errors if SHA-1 certificates are employed.

On or before January 1, 2017, “Chrome will completely stop supporting SHA-1 certificates.”
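For site operators who want to know whether their own certificate would trip the first two criteria, here is a hedged sketch using Python’s ssl module and the third-party cryptography package; it inspects only the leaf certificate and does not check whether the chain ends at a public CA. The hostname is a placeholder.

```python
import ssl
from datetime import datetime

from cryptography import x509  # third-party "cryptography" package

def sha1_warning_candidate(host: str, port: int = 443) -> bool:
    """True if the leaf cert is SHA-1-signed and issued on or after 2016-01-01."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    sha1_signed = cert.signature_hash_algorithm.name == "sha1"
    issued_2016_or_later = cert.not_valid_before >= datetime(2016, 1, 1)
    return sha1_signed and issued_2016_or_later

if __name__ == "__main__":
    host = "example.com"   # placeholder hostname
    print(host, "would trigger Chrome's warning:", sha1_warning_candidate(host))
```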

Lucas Garron, of the Chrome security team, and David Benjamin from Chrome’s networking group write that they hope SHA-1 is kicked off the internet long before that date. The two write that Google is “considering” a move to July 1, 2016, Microsoft’s and Mozilla’s preferred date for the banishment of SHA-1 from the internet. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/22/googles_sha1_snuff_plan_is_catching_up_with_microsoft_mozilla/

Yellow Alert Sounded For Juniper Vulns, Feds Called In

SANS ISC raises infosec alert level and FBI investigates potential nation-state activity leading to backdoor vulnerabilities in Juniper ScreenOS products.

The infosec alert level for the Internet Storm Center was bumped to yellow today on the heels of the announcement of two critical vulnerabilities in Juniper firewalls that rocked the infosec world last week. And as the industry scrambles to fill these gaping holes in Juniper’s ScreenOS platform, news continues to trickle in that FBI officials are investigating potential nation-state actions that led to the insertion of an authentication backdoor affecting tens of thousands of devices on the Internet.

Johannes Ullrich, director of the SANS ISC, said in a post this morning that the ISC decided to raise the alert level for three big reasons.

“Juniper devices are popular, and many organizations depend on them to defend their networks. The “backdoor” password is now known, and exploitation is trivial at this point,” Ullrich wrote.  “With this week being a short week for many of us, addressing this issue today is critical.”

As he warns security practitioners, there is very little that organizations can do to protect themselves from these vulnerabilities, particularly the one that allows attackers to decrypt VPN traffic. The backdoor vulnerability requires knowledge of that password to be exploited – a moot point now that it is publicly known – but it can be countered by restricting access to SSH and Telnet.

“Only administrative workstations should be able to connect to these systems via ssh, and nobody should be able to connect via telnet,” he says. “This is “best practice” even without a backdoor.”
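In the spirit of Ullrich’s advice, here is a hedged sketch of a quick check an administrator could run from a non-management network to confirm that Telnet is unreachable and SSH is not exposed to arbitrary hosts; it simply tests whether the TCP ports answer. The firewall address is a placeholder.

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    firewall = "192.0.2.1"   # placeholder management address (TEST-NET)
    for name, port in (("telnet", 23), ("ssh", 22)):
        state = "OPEN" if port_open(firewall, port) else "closed/filtered"
        print(f"{name} ({port}) on {firewall}: {state}")
    # From anything other than an administrative workstation, both should be closed/filtered.
```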

Researchers with Fox-IT initially reported finding the backdoor password within six hours of getting their hands on the Juniper patch. According to a post from HD Moore, chief research officer at Rapid7, the password was chosen to look like other debug format strings in the code.

As he explains, the timing of these vulnerabilities is interesting: they were not present in versions of the software released before 2013.

“The authentication backdoor is not actually present in older versions of ScreenOS,” Moore wrote. “The authentication backdoor did not seem to get added until a release in late 2013.”

This could be another breadcrumb on the trail of investigators looking into how such a huge hole – one affecting over 26,000 devices, according to a scan made by Moore – could have been inserted into the source code of such a prominent security product. According to a CNN report, the FBI is currently investigating the possibility that nation-state actors placed the backdoor for the purpose of espionage. Meanwhile, The Register cites an anonymous former Juniper employee who pointed out that much of Juniper’s “sustaining engineering” for ScreenOS is done in China.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: http://www.darkreading.com/attacks-breaches/yellow-alert-sounded-for-juniper-vulns-feds-called-in/d/d-id/1323647?_mc=RSS_DR_EDT

How to log into any backdoored Juniper firewall – hard-coded password published

The access-all-areas backdoor password hidden in some Juniper Networks’ Netscreen firewalls has been published.

Last week it was revealed that some builds of the devices’ ScreenOS firmware suffer from two severe security weaknesses: one allows devices to be commandeered over SSH and Telnet, and the other allows encrypted VPN communications to be monitored by eavesdroppers.

An analysis by security firm Rapid 7 of the firmware’s ARM code has uncovered more details on that first vulnerability – specifically, a hardcoded password that grants administrator access. And that password is: <<< %s(un='%s') = %u.

On the face of it, this skeleton key looks like a harmless printf() format string for writing some text and an integer to a diagnostic log file – it would be lost among the rest of the firmware’s data.

However, the string is actually used during login checks. When the magic text is presented as a password over SSH or Telnet, the firmware grants total access to the equipment: regardless of the username given, it allows anyone to bypass authentication, and the password is hardwired into the operating system.
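To see why the string hid so effectively, here is a hedged sketch of the kind of strings-style triage a reverse engineer might run over a firmware image: the backdoor password looks exactly like the legitimate printf()-style log formats that surround it. The firmware file name is a placeholder.

```python
import re
import sys

def printable_strings(path: str, min_len: int = 6):
    """Pull printable ASCII runs out of a binary blob, roughly what the Unix `strings` tool does."""
    data = open(path, "rb").read()
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

if __name__ == "__main__":
    firmware = sys.argv[1] if len(sys.argv) > 1 else "screenos.bin"   # placeholder path
    for s in printable_strings(firmware):
        # Format-string-looking candidates: the backdoor password matches this pattern
        # exactly, so it blends in with hundreds of genuine debug log messages.
        if b"%s" in s or b"%u" in s:
            print(s.decode("ascii", "replace"))
```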

The Rapid 7 team found more than 26,000 internet-facing Netscreen systems with SSH open.

“We were also unable to identify the authentication backdoor in versions 6.3.0r12 or 6.3.0r14. We could confirm that versions 6.3.0r17 and 6.3.0r19 were affected, but were not able to track down 6.3.0r15 or 6.3.0r16,” said Rapid 7’s chief research officer HD Moore.

“This is interesting because although the first affected version was released in 2012, the authentication backdoor did not seem to get added until a release in late 2013 (either 6.3.0r15, 6.3.0r16, or 6.3.0r17).”

That date is important because it potentially derails a rumor that has been floating around the internet over the weekend: that the backdoor was created as part of a top-secret NSA plan to hijack Juniper’s kit for spying purposes.

NSA slide: FEEDTROUGH tech … one of the slides leaked from the NSA boasting of its ability to hijack Juniper gear

This rumor spread after people fished out an NSA document published by Der Spiegel in which the intelligence agency claimed to have full control over Juniper’s Netscreen firewalls.

But that slide was made in 2008. That’s five years before this particular backdoor was added to ScreenOS. It’s possible another backdoor was present in earlier builds, but no one has evidence of that.

Also, the NSA slide focuses on implanting surveillance malware in a device, rather than compromising the firmware’s source code to introduce a hidden skeleton key. The backdoor found by Rapid 7 seems too heavy-handed for the US spy agency. It’s possible FEEDTROUGH exploited a vulnerability to install its malware, but only after a hole was discovered – and in any case, it couldn’t have been this particular password vulnerability (unless, of course, the NSA has a TARDIS.)

If anything, ScreenOS’s use of the Dual EC DRBG random number generator in its encryption is more worrying, and points to potential NSA interference. That algorithm is the same engine that was championed by the NSA even as independent security researchers pointed out that it was seriously flawed.

So where does all this leave Juniper’s customers? The company has released a patch for the affected systems, but a fair few annoyed IT managers might leave Juniper off their shortlists when the next round of hardware upgrades comes around. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/security_code_to_backdoor_juniper_firewalls_revealed_in_firmware/

Oracle Settles FTC Charges That It Deceived Users About Java Security Updates

Oracle will have to be more forthright and communicate the truth via social media and anti-virus companies going forward.

Oracle has agreed to settle Federal Trade Commission charges that it deceived customers. Oracle told customers that installing a Java SE update would make their machines “safe and secure,” even though the update often left vulnerable versions of Java SE on their machines.

The update replaced only the most recent version of Java SE residing on the machine — it stopped short of uninstalling any other versions also residing on the computer, and did not uninstall any version earlier than Java SE 6 Update 10 at all. According to the FTC, Oracle knew of this shortcoming in 2011 and did not fix it until August of 2014.

Under the terms of the proposed consent order, according to the FTC release:

Oracle will be required to notify consumers during the Java SE update process if they have outdated versions of the software on their computer, notify them of the risk of having the older software, and give them the option to uninstall it. In addition, the company will be required to provide broad notice to consumers via social media and their website about the settlement and how consumers can remove older versions of the software.

The consent order will require Oracle to notify consumers on Facebook and Twitter, and also contact Avast Software, AVG Technologies, ESET North America, Avira Inc., McAfee, Symantec, Trend Micro, and Mozilla, to ensure they publish the information in their security bulletins as well.

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/oracle-settles-ftc-charges-that-it-deceived-users-about-java-security-updates/d/d-id/1323643?_mc=RSS_DR_EDT

9 Coolest Hacks Of 2015

Cars, guns, gas stations, and satellites all got ‘0wned’ by good hackers this year in some of the most creative yet unnerving hacks.

If there was one common thread among the coolest hacks this year by security researchers, it was the chilling and graphic physical implications. Good hackers rooted out the security holes and wowed the industry with actual images of remotely sending a car rolling into a ditch, hijacking the target of a smart rifle, and disabling a state trooper cruiser.

The most creative and innovative hacks in 2015 were both entertaining and chilling. They elicited a little nervous laughter, and then raised the discourse over just what bad guys could execute if increasingly networked things on the Internet aren’t secured or built with security in mind.

Here’s a look at some of the coolest hacks of the year:

 

1. Car hacking accelerates — from the couch

Famed car hackers Charlie Miller and Chris Valasek had been working for nearly three years toward the Holy Grail of their research, remotely hacking and controlling a vehicle, and when they finally succeeded, they demonstrated it with a live (and yes, Andy Greenberg is still alive) journalist behind the wheel of a 2014 Chrysler Jeep Cherokee on a highway at 70mph. They killed the ignition from their laptops, 10 miles away, while sitting on Miller’s couch, and Greenberg steered the car onto an exit ramp.

The controversial demo stirred debate among the security industry over whether the pair had gone too far to illustrate their research. Miller and Valasek have no regrets, and it resulted in the kind of response they had hoped for: Chrysler recalled 1.4 million vehicles possibly affected by the vulnerability the researchers found in the Jeep’s UConnect infotainment system that allowed them to hijack its steering, braking, and accelerator, among other things.

The hole was embarrassingly simple, the researchers admit: an unnecessarily wide-open communications port in the Harman uConnect infotainment system’s built-in cellular connection from Sprint, which gave them a connection to the car via their smartphones on the cellular network. Using a femtocell, they found they could access the vehicle from some 70 miles away over the cell connection.

That let them control the Jeep’s steering, braking, high beams, turn signals, windshield wipers and fluid, and door locks, as well as reset the speedometer and tachometer, kill the engine, and disengage the transmission so the accelerator pedal failed.

The hack also elicited the attention of the feds: a pair of veteran senators proposed legislation for federal standards to secure cars from cyberattacks and to protect owners’ privacy, and the National Highway Traffic Safety Administration launched its own investigation into the effectiveness of Fiat Chrysler’s recall.

Miller and Valasek’s 2014 “most hackable cars” list foreshadowed their Jeep research. At the top of that list, based on their study of the networking features of various vehicles, were the 2014 Jeep Cherokee, the 2014 Infiniti Q50, and the 2015 Escalade.

“Only a handful of people really have the baseline experience to do this type of stuff. I’m not too worried about it,” Valasek recently told Dark Reading.

2. Police cars — relatively low-tech compared with the Jeep — hackable, too

If you’re one of those drivers (like me) reassured that your older-model vehicle with no Internet connectivity isn’t hackable, think again. Researchers in Virginia this year were able to hack two Virginia State Police vehicle models, the 2012 Chevrolet Impala and the 2013 Ford Taurus.

No, the researchers in this project didn’t drive state troopers into ditches or onto highway exit ramps. The public-private partnership led by the Virginia State Police, the University of Virginia, Mitre Corp., Mission Secure Inc. (MSi), and Kaprica Security, among others, conducted the experiment to explore just what law enforcement could someday face in the age of car hacking. Like Miller and Valasek’s maiden car hacks of a 2010 Ford Escape and 2010 Toyota Prius, the hacks of the VSP cruisers required initial physical tampering with the vehicles: the researchers inserted rogue devices into the two police cars to reprogram some of their electronic operations, or to wage the attacks via mobile devices.

The project evolved out of concerns by security experts as well as police officials of the dangers of criminal or terror groups tampering with state police vehicles to sabotage investigations or assist in criminal acts.

Among the hacks were remotely disabling the gearshift and engine, starting the engine, opening the trunk, locking and unlocking doors, and running the windshield wipers and wiper fluid. Some of the attacks were waged via a mobile phone app connected via Bluetooth to a hacking device planted in the police car, thus making a non-networked car hackable.

And unlike most car-hacking research to date, the researchers built prototype solutions for blocking cyberattacks as well as data-gathering for forensics purposes.

What made this project even more eye-popping, of course, was that a state police department would agree to it. But Capt. Jerry L. Davis of the Virginia State Police’s Bureau of Criminal Investigation told Dark Reading that law enforcement officials in the state didn’t hesitate to give the car hacking project the green light. “Our executive staff was aware of the issue in the arena and some of the cascading effects that could occur if we didn’t start to take a proactive” approach, he said.

Automakers traditionally have shied away from publicly discussing cybersecurity issues. But Ford and General Motors actually provided rare public statements on car cybersecurity to Dark Reading in its exclusive report on the project. 

3. When a bad guy hacks a good guy with a gun

Just when you thought hacking couldn’t get any scarier than 0wning a car’s functions, a husband-and-wife team at Black Hat USA in August demonstrated how they were able to hack a long-range, precision-guided rifle manufactured by TrackingPoint. Runa Sandvik, a privacy and security researcher, and security expert Michael Auger reverse-engineered the rifle’s firmware, scope, and some of TrackingPoint’s mobile apps for the gun.

The smart rifle has a Linux-based scope as well as a connected trigger mechanism, and comes with its own mobile apps for downloading videos and for feeding information, such as weather conditions, to the firearm.

“The worst-case scenario is someone could make permanent, persistent changes in how your rifle behaves,” Sandvik told Dark Reading in an interview prior to Black Hat. “It could miss every single shot you take and there’s not going to be any indication on the [scope] screen why this is happening.”

The good news, though, was that there was no way for an attacker to fire the gun remotely.

Even so, an attacker with wireless access could wreak some havoc on the smart rifle, the researchers found. They discovered an easily guessed and unchangeable password in the rifle’s wireless feature. “Anyone who knows it can connect to your rifle,” Sandvik said.

Among other things, they could change the weather and wind settings the smart rifle employs. The researchers gained root access to the Linux software on the rifle and were able to create custom software updates, delivered via the WiFi connection, that could alter the behavior of the weapon.

Another major flaw was that the rifle’s software allows administrative access to the device. To view a video demonstration of the hack filmed by Wired, see this

4. Hackin’ at the car wash, yeah

Sitting in the drive-through car wash now comes with a hacking risk. Security researcher Billy Rios found that a Web interface in a popular car wash brand contains weak and easily guessed default passwords and other weaknesses that could allow an attacker to hijack the functions of the car wash to wreak physical damage or score a free wash for his or her ride.

Rios, who is best known for his research into security flaws in TSA systems and medical equipment, began to wonder about car washes after a friend, an executive at a gas station chain that includes car washes, told him a story about how technicians had remotely misconfigured one car wash location, causing the rotary arm to smash into a minivan mid-wash, spraying water into the vehicle and at the family inside.

“If [a hacker] shuts off a heater, it’s not so bad. But if there are moving parts, they’re totally going to hurt [someone] and do damage,” Rios, founder of Laconicly, told Dark Reading when he revealed his research earlier this year.

He found “a couple of hundred” PDQ LaserWash brand car washes online and exposed on the Net, but he estimates there are thousands of others online as well. The car wash uses an HTTP server interface for remote administration and control of the system. If an attacker were able to glean the default password for the car wash owner or technician and telnet in, he or she could take over the car wash controls from afar and open or close the bay doors, or disable the sensors or other machinery.

An attacker could also sabotage the sales side of the business. “You can log into it and get a shell and get a free car wash” with an HTTP GET request, Rios explained.

5. Heat jumps the air gap

Air-gapping, or physically separating and keeping sensitive systems off the network, is the simple, typical go-to for critical infrastructure plants or other similar systems. Turns out there’s a way to breach that air gap simply by using heat.

Researchers at the Cyber Security Research Center at Israel’s Ben-Gurion University (BGU) discovered a way to employ heat and thermal sensors to set up a communications channel between two air-gapped systems. The so-called BitWhisper hack, which is part of ongoing air-gap security research at the university, broke new ground with a two-way, bidirectional communications channel, and no special hardware is needed, Dudu Mimran, chief technology officer at BGU, told Dark Reading.

“What we wanted to prove was that even though there might be an air gap between systems, they can be breached,” he said.

There are a few catches, though. The air-gapped machines have to be physically close: the researchers placed them 15 inches apart. And the data transfer rate is a slow 8 bits per hour, hardly ideal for siphoning large amounts of data. Mimran said it is, however, enough to break the air gap and steal passwords or secret keys, for example.
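To put that figure in perspective, here is a trivial back-of-the-envelope calculation (ours, not the researchers’) of how long typical secrets would take to cross an 8-bit-per-hour channel.

```python
RATE_BITS_PER_HOUR = 8   # transfer rate reported for the BitWhisper channel

def hours_to_exfiltrate(bits: int) -> float:
    return bits / RATE_BITS_PER_HOUR

for label, bits in [("8-character ASCII password", 8 * 8),
                    ("128-bit AES key", 128),
                    ("1 KB document", 1024 * 8)]:
    print(f"{label}: {hours_to_exfiltrate(bits):.0f} hours")
# Password: 8 hours; AES-128 key: 16 hours; a single kilobyte: about 43 days.
```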

The researchers installed specialized malware on the machines that could connect to the systems’ thermal sensors and raise the heat on the computers in a controlled way. Just how you could distinguish between normal heat in a system and a heat-based air-gap breach is unclear, he said.

6. Gas gauge security running on empty

Renowned security researcher HD Moore earlier this year found thousands of gas tank monitoring systems at US gas stations exposed and wide open on the Internet without password protection. The implication: the gas stations were vulnerable to attacks on their monitors that could simulate a gas leak or disrupt the fuel tank operations.

Moore’s groundbreaking research inspired Trend Micro researchers to explore the problem, too, and they found similar issues with another gas tank monitoring system made by the same manufacturer, Veeder-Root. Trend Micro’s Kyle Wilhoit and Stephen Hilt then released a homegrown tool called Gaspot, which allows researchers as well as gas tank operators to set up their own virtual monitoring systems to track attack attempts and threats.

Wilhoit and Hilt had set up a series of honeypots mimicking the monitoring system and witnessed multiple attack attempts. In February, they reported finding one such Internet-facing tank monitoring system at a gas station in Holden, Maine, renamed “We_Are_Legion” from “Diesel,” suggesting either the handiwork of Anonymous hacktivists or another attacker using the group’s slogan.

The vulnerable systems Moore found were located at independent, small gas station dealer sites. Large chains affiliated with big-name petroleum companies generally aren’t vulnerable to the public-facing Net attacks because they’re secured via corporate networks.

Moore told Dark Reading earlier this year that the exposure of the fuel systems was due to a basic lack of default security, namely a VPN gateway-based connection to the devices, and authentication. 

7. Star Wars: satellite edition

With equipment costing a little less than $1,000, a security researcher was able to hack the Globalstar Simplex satellite data service used for personal locator devices, tracking shipping containers, and monitoring SCADA systems such as oil and gas drilling.

Colby Moore, information security officer at Synack, demonstrated his research findings of vulnerabilities in the service this summer at Black Hat USA, but his work was shot down by Globalstar.

Moore said an attacker could intercept, spoof, or interfere with communications between tracking devices, satellites, or ground stations because the Globalstar network for its satellites doesn’t use encryption between devices, nor does it digitally sign or authenticate the data packets. He says it’s possible to decode and spoof the satellite data transmitted, so an attacker could spoof a shipping container’s contents, or spy on an oil drilling operation.

“The real vulnerability is that it’s [the data] in plain text and not encrypted,” he said. And satellite networks are aging and not built with security in mind, he said.

But the day after Moore presented his research at Black Hat, Globalstar issued a press statement saying it studied Moore’s research and the “claims were either incorrect or implausible in practice.”

Globalstar maintained that “many … Globalstar devices have encryption implemented by our integrators, especially where the requirements dictate such because a customer is tracking a high-value asset. Synack was also incorrect when it stated, “the protocol for the communication would have to be re-architected” when in fact, no such re-architecture is required,” Globalstar claimed.

The company says its network is not “aging”: “[The] … network is the newest second-generation constellation, having recently been completed in August 2013. Many claims by Synack are simply incorrect, self-serving or misinterpret key information.”

Interestingly, Moore had contacted Globalstar several months before his presentation to alert them of his findings. “They were pretty friendly, and seemed pretty concerned,” he told Dark Reading. Moore and Synack stand by their research.

NEXT PAGE: OnStar, chemical plants, fridges and Fitbit get hit

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/9-coolest-hacks-of-2015/d/d-id/1323648?_mc=RSS_DR_EDT