
9 Coolest Hacks Of 2015

Cars, guns, gas stations, and satellites all got ‘0wned’ by good hackers this year in some of the most creative yet unnerving hacks.

If there was one common thread among the coolest hacks this year by security researchers, it was the chilling and graphic physical implications. Good hackers rooted out the security holes and wowed the industry with actual images of remotely sending a car rolling into a ditch, hijacking the target of a smart rifle, and disabling a state trooper cruiser.

The most creative and innovative hacks in 2015 were both entertaining and chilling. They elicited a little nervous laughter, and then raised the discourse over just what bad guys could execute if increasingly networked things on the Internet aren’t secured or built with security in mind.

Here’s a look at some of the coolest hacks of the year:

 

1. Car hacking accelerates — from the couch

Famed car hackers Charlie Miller and Chris Valasek had worked for nearly three years toward the Holy Grail of their research: remotely hacking and controlling a vehicle. When they finally succeeded, they demonstrated it with a live journalist (and yes, Andy Greenberg is still alive) behind the wheel of a 2014 Chrysler Jeep Cherokee doing 70mph on a highway. From their laptops on Miller’s couch, 10 miles away, they killed the ignition, and Greenberg steered the car onto an exit ramp.

The controversial demo stirred debate among the security industry over whether the pair had gone too far to illustrate their research. Miller and Valasek have no regrets, and it resulted in the kind of response they had hoped for: Chrysler recalled 1.4 million vehicles possibly affected by the vulnerability the researchers found in the Jeep’s UConnect infotainment system that allowed them to hijack its steering, braking, and accelerator, among other things.

The hole was embarrassingly simple, the researchers admit: an unnecessarily wide-open communications port in the Harman Uconnect infotainment system’s built-in Sprint cellular connection, which gave them a path to the car from their smartphones on the cellular network. Using a femtocell, they found they could access the vehicle from some 70 miles away over the cell connection.
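
Miller and Valasek’s published write-up described the exposed service as a D-Bus endpoint listening on TCP port 6667, answering with no authentication. The Python sketch below illustrates only the trivial reconnaissance step, checking whether such a port responds; the address is a placeholder, and this is not the researchers’ actual tooling.

    import socket

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a plain TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Port 6667 is the D-Bus service cited in the published Jeep research;
    # 192.0.2.10 is a documentation-range placeholder, not a real vehicle.
    if port_open("192.0.2.10", 6667):
        print("service reachable with no authentication")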

That let them control the Jeep’s steering, braking, high beams, turn signals, windshield wipers and fluid, and door locks, as well as reset the speedometer and tachometer, kill the engine, and disengage the transmission so the accelerator pedal failed.

The hack also elicited the attention of the feds: a pair of veteran senators proposed legislation for federal standards to secure cars from cyberattacks and to protect owners’ privacy, and the National Highway Traffic Safety Administration launched its own investigation into the effectiveness of Fiat Chrysler’s recall.

Miller and Valasek’s 2014 “most hackable cars list,” based on their study of the networked features of various vehicles, foreshadowed their Jeep research: at the top of that list were the 2014 Jeep Cherokee, the 2014 Infiniti Q50, and the 2015 Cadillac Escalade.

“Only a handful of people really have the baseline experience to do this type of stuff. I’m not too worried about it,” Valasek recently told Dark Reading.

2. Police cars — relatively low-tech compared with the Jeep — hackable, too

If you’re one of those drivers (like me) reassured that your older-model vehicle with no Internet connectivity isn’t hackable, think again. Researchers in Virginia this year were able to hack two Virginia State Police vehicle models, the 2012 Chevrolet Impala and the 2013 Ford Taurus.

No, the researchers in this project didn’t drive state troopers into ditches or onto highway exit ramps. The public-private partnership led by the Virginia State Police, the University of Virginia, Mitre Corp., Mission Secure Inc. (MSi), and Kaprica Security, among others, conducted the experiment to explore just what law enforcement could someday face in the age of car hacking. Like Miller and Valasek’s maiden car hacks of a 2010 Ford Escape and 2010 Toyota Prius, the hacks of the VSP cruisers required initial physical tampering with the vehicles. The researchers inserted rogue devices in the two police cars to reprogram some of their electronic operations, or to wage the attacks via mobile devices.

The project evolved out of concerns by security experts as well as police officials of the dangers of criminal or terror groups tampering with state police vehicles to sabotage investigations or assist in criminal acts.

Among the hacks were remotely disabling the gearshift and engine, starting the engine, opening the trunk, locking and unlocking doors, and running the windshield wipers and wiper fluid. Some of the attacks were waged via a mobile phone app connected via Bluetooth to a hacking device planted in the police car, thus making a non-networked car hackable.

And unlike most car-hacking research to date, the researchers built prototype solutions for blocking cyberattacks as well as data-gathering for forensics purposes.

What made this project even more eye-popping, of course, was that a state police department would agree to it. But Capt. Jerry L. Davis of the Virginia State Police’s Bureau of Criminal Investigation told Dark Reading that law enforcement officials in the state didn’t hesitate to give the car hacking project the green light. “Our executive staff was aware of the issue in the arena and some of the cascading effects that could occur if we didn’t start to take a proactive” approach, he said.

Automakers traditionally have shied away from publicly discussing cybersecurity issues. But Ford and General Motors actually provided rare public statements on car cybersecurity to Dark Reading in its exclusive report on the project. 

3. When a bad guy hacks a good guy with a gun

Just when you thought hacking couldn’t get any scarier than 0wning a car’s functions, a husband-and-wife team at Black Hat USA in August demonstrated how they were able to hack a long-range, precision-guided rifle manufactured by TrackingPoint. Runa Sandvik, a privacy and security researcher, and security expert Michael Auger reverse-engineered the rifle’s firmware, scope, and some of TrackingPoint’s mobile apps for the gun.

The smart rifle has a Linux-based scope as well as a connected trigger mechanism, and comes with its own mobile apps for downloading videos and for feeding the firearm information such as weather conditions.

“The worst-case scenario is someone could make permanent, persistent changes in how your rifle behaves,” Sandvik told Dark Reading in an interview prior to Black Hat. “It could miss every single shot you take and there’s not going to be any indication on the [scope] screen why this is happening.”

The good news, though, was that there was no way for an attacker to fire the gun remotely.

Even so, an attacker with wireless access could wreak some havoc on the smart rifle, the researchers found. They discovered an easily guessed and unchangeable password in the rifle’s wireless feature. “Anyone who knows it can connect to your rifle,” Sandvik said.

Among other things, they could change the weather and wind settings the smart rifle employs. The researchers got root access to the Linux software on the rifle and were able to create custom software updates via the WiFi connection that could alter the behavior of the weapon.
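
TrackingPoint’s interface was not published in full, so any code can only sketch the attack class: join the scope’s WiFi using its fixed password, then feed it skewed environmental inputs that quietly shift the firing solution. Everything below — the address, endpoint, and parameter names — is a hypothetical illustration, not the researchers’ actual method.

    import requests  # third-party HTTP client

    SCOPE = "http://192.168.1.1"  # assumed scope address once joined to the rifle's WiFi

    # Hypothetical endpoint and fields: skewed wind/temperature inputs would
    # change where the round lands with no indication on the scope's screen.
    resp = requests.post(
        f"{SCOPE}/api/environment",
        json={"wind_mph": 25, "temperature_f": 10},
        timeout=5,
    )
    print(resp.status_code)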

Another major flaw was that the rifle’s software allows administrative access to the device. Wired filmed a video demonstration of the hack.

4. Hackin’ at the car wash, yeah

Sitting in the drive-through car wash now comes with a hacking risk. Security researcher Billy Rios found that a Web interface in a popular car wash brand contains weak and easily guessed default passwords and other weaknesses that could allow an attacker to hijack the functions of the car wash to wreak physical damage or score a free wash for his or her ride.

Rios, who is best known for his research into security flaws in TSA systems and medical equipment, began to wonder about car washes after a friend, an executive for a gas station chain that includes car washes, told him a story about how technicians had remotely misconfigured one car wash location, causing the rotary arm to smash into a minivan mid-wash, spraying water into the vehicle and at the family inside.

“If [a hacker] shuts off a heater, it’s not so bad. But if there are moving parts, they’re totally going to hurt [someone] and do damage,” Rios, founder of Laconicly, told Dark Reading when he revealed his research earlier this year.

He found “a couple of hundred” PDQ LaserWash brand car washes exposed on the Net, and he estimates there are thousands of others online as well. The car wash uses an HTTP server interface for remote administration and control of the system. If an attacker were able to glean the default password for the car wash owner or technician and telnet in, he or she could take over the car wash controls from afar and open or close the bay doors, or disable the sensors or other machinery.

An attacker could also sabotage the sales side of the business. “You can log into it and get a shell and get a free car wash” with an HTTP GET request, Rios explained.
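
For operators who want to check their own installations, the obvious first audit is whether the web interface still answers to factory credentials. A minimal sketch follows; the address, form of authentication, and credential pairs are assumptions for illustration, not PDQ’s documented defaults.

    import requests  # third-party HTTP client

    # Assumed credential pairs for illustration, not PDQ's real defaults.
    DEFAULTS = [("admin", "admin"), ("owner", "12345")]

    # Audit sketch for equipment you own: flag any factory credential that
    # still grants access to the remote-administration interface.
    for user, password in DEFAULTS:
        r = requests.get("http://203.0.113.5/", auth=(user, password), timeout=5)
        if r.status_code == 200:
            print(f"default credential still active: {user}/{password}")
            break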

5. Heat jumps the air gap

Air-gapping, or physically separating sensitive systems and keeping them off the network, is the simple, typical go-to defense for critical infrastructure plants and similar environments. Turns out there’s a way to breach that air gap simply by using heat.

Researchers at the Cyber Security Research Center at Israel’s Ben-Gurion University (BGU) discovered a way to employ heat and thermal sensors to set up a communications channel between two air-gapped systems. The so-called BitWhisper hack, part of ongoing air-gap security research at the university, broke new ground with a bidirectional communications channel that requires no special hardware, Dudu Mimran, chief technology officer at BGU, told Dark Reading.

“What we wanted to prove was that even though there might be an air gap between systems, they can be breached,” he said.

There are a few catches, though. The air-gapped machines have to be physically close: the researchers placed them 15 inches apart. And the data transfer rate is a glacial 8 bits per hour, not exactly ideal for siphoning large amounts of data. Still, Mimran said, it’s a way to break the air gap and steal passwords or secret keys, for example.

The researchers installed specialized malware on the machines that could read the systems’ thermal sensors and raise the heat of the computers in a controlled way. Just how one could distinguish between normal heat in a system and a heat-based air-gap breach is unclear, he said.
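
A toy sketch of the encoding idea (not BGU’s implementation): the sender modulates CPU load so the machine heats up for a 1 and cools for a 0, while the receiver thresholds its own thermal-sensor readings. The “coretemp” sensor name, the 55°C threshold, and the bit period are assumptions; psutil’s temperature API works on Linux.

    import time
    import psutil  # third-party; sensors_temperatures() is available on Linux

    BIT_PERIOD = 450  # seconds per bit, in line with the reported 8 bits/hour

    def send_bit(bit: int) -> None:
        """Encode one bit as presence or absence of CPU-generated heat."""
        end = time.time() + BIT_PERIOD
        while time.time() < end:
            if bit:
                sum(i * i for i in range(10_000))  # busy-loop to raise temperature
            else:
                time.sleep(0.5)                    # stay idle so the machine cools

    def read_bit(threshold_c: float = 55.0) -> int:
        """Decode one bit by thresholding a core-temperature reading."""
        temps = psutil.sensors_temperatures().get("coretemp", [])
        return int(bool(temps) and temps[0].current > threshold_c)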

6. Gas gauge security running on empty

Renowned security researcher HD Moore earlier this year found thousands of gas tank monitoring systems at US gas stations exposed and wide open on the Internet without password protection. The implication: the gas stations were vulnerable to attacks on their monitors that could simulate a gas leak or disrupt the fuel tank operations.

Moore’s groundbreaking research inspired Trend Micro researchers to explore the problem, too, and they found similar issues with another gas tank monitoring system made by the same manufacturer, Veeder-Root. Trend Micro’s Kyle Wilhoit and Stephen Hilt then released a homegrown tool called Gaspot, which allows researchers as well as gas tank operators to set up their own virtual monitoring systems to track attack attempts and threats.

Wilhoit and Hilt had set up a series of honeypots mimicking the monitoring system and witnessed multiple attack attempts. In February, they reported finding one such Internet-facing tank monitoring system at a gas station in Holden, Maine, renamed “We_Are_Legion” from “Diesel,” suggesting either the handiwork of Anonymous hacktivists or another attacker using the group’s slogan.

The vulnerable systems Moore found were located at independent, small gas station dealer sites. Large chains affiliated with big-name petroleum companies generally aren’t vulnerable to the public-facing Net attacks because they’re secured via corporate networks.

Moore told Dark Reading earlier this year that the exposure of the fuel systems was due to a basic lack of default security, namely the absence of authentication and of a VPN gateway-based connection to the devices.
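
Moore’s Rapid7 write-up described the exposure concretely: automatic tank gauges speaking the Veeder-Root TLS-350 serial protocol on TCP port 10001, where an in-tank inventory command is answered with tank levels and no authentication at all. A lab-only sketch of that query, with a placeholder address:

    import socket

    def tank_inventory(host: str) -> str:
        """Send the TLS-350 in-tank inventory command and return the reply."""
        with socket.create_connection((host, 10001), timeout=5) as s:
            s.sendall(b"\x01I20100\n")  # SOH byte plus the inventory function code
            return s.recv(4096).decode("ascii", errors="replace")

    print(tank_inventory("192.0.2.20"))  # documentation-range placeholder, not a live station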

7. Star Wars: satellite edition

With equipment costing a little less than $1,000, a security researcher was able to hack the Globalstar Simplex satellite data service used for personal locator devices, tracking shipping containers, and monitoring SCADA systems such as oil and gas drilling.

Colby Moore, information security officer at Synack, demonstrated his research findings of vulnerabilities in the service this summer at Black Hat USA, but his work was shot down by Globalstar.

Moore said an attacker could intercept, spoof, or interfere with communications between tracking devices, satellites, or ground stations because the Globalstar network for its satellites doesn’t use encryption between devices, nor does it digitally sign or authenticate the data packets. He says it’s possible to decode and spoof the satellite data transmitted, so an attacker could spoof a shipping container’s contents, or spy on an oil drilling operation.

“The real vulnerability is that it’s [the data] in plain text and not encrypted,” he said. And satellite networks are aging and not built with security in mind, he said.
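
The missing control Moore describes is message authentication. The sketch below is not Globalstar’s protocol, just a minimal illustration of the defense: appending an HMAC to each telemetry payload lets a receiver reject spoofed or tampered packets (key distribution is assumed solved here).

    import hashlib
    import hmac
    import os

    KEY = os.urandom(32)  # shared secret; real key management is out of scope here

    def sign(payload: bytes) -> bytes:
        """Append an HMAC-SHA256 tag so the receiver can authenticate the packet."""
        return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

    def verify(message: bytes) -> bool:
        payload, tag = message[:-32], message[-32:]
        return hmac.compare_digest(tag, hmac.new(KEY, payload, hashlib.sha256).digest())

    packet = sign(b"container=1234;lat=37.77;lon=-122.42")
    assert verify(packet)                     # authentic packet accepted
    assert not verify(packet[:-1] + b"\x00")  # tampered packet rejected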

But the day after Moore presented his research at Black Hat, Globalstar issued a press statement saying it studied Moore’s research and the “claims were either incorrect or implausible in practice.”

Globalstar maintained that “many … Globalstar devices have encryption implemented by our integrators, especially where the requirements dictate such because a customer is tracking a high-value asset. Synack was also incorrect when it stated ‘the protocol for the communication would have to be re-architected’ when, in fact, no such re-architecture is required.”

The company says its network is not “aging”: “[The] … network is the newest second-generation constellation, having recently been completed in August 2013. Many claims by Synack are simply incorrect, self-serving or misinterpret key information.”

Interestingly, Moore had contacted Globalstar several months before his presentation to alert them of his findings. “They were pretty friendly, and seemed pretty concerned,” he told Dark Reading. Moore and Synack stand by their research.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/9-coolest-hacks-of-2015/d/d-id/1323648?_mc=RSS_DR_EDT

Democrats’ database bug spotlights the rise of big data in elections

In recent election cycles we’ve heard a lot about big data in political campaigns, and in the future it’s possible we’ll hear a lot more about big campaign data breaches, too.

Last week, a data breach nearly upended the race for US president on the Democratic Party side of the contest, after it was discovered that the campaign of Vermont Senator Bernie Sanders had accessed a private database of voter information collected by the rival campaign of Hillary Clinton.

The Sanders campaign fired the staffer who improperly accessed the private Clinton voter database, and Sanders personally apologized to Clinton when the two met on the debate stage on Saturday, 19 December.

Clinton said she was eager to “move on” from the incident, noting that American voters have much more pressing concerns on their minds.

But Americans who value their privacy might not want to move on quite so fast – let’s review what happened.

The Democratic National Committee (or DNC, the organizing body of the Democratic Party) maintains a big database of voters who are likely to vote for Democrats in future elections, based on a variety of information about those voters.

This “master list” of voters is rented out to individual campaigns at the state and federal level, like the Sanders and Clinton presidential campaigns, and the campaigns can combine that list with their own data to better target voters.

The DNC’s master list is maintained by a private company called NGP VAN, which provides data and fundraising tools to its clients, including thousands of political campaigns.

According to NGP VAN, it released a patch for its VoteBuilder software on Wednesday, 16 December, which itself contained buggy code that made proprietary voter scoring data available to unauthorized users.

In a span of 45 minutes, a staffer for the Sanders campaign was able to search and view scoring data that the Clinton campaign used to rank voters on their likelihood to turn out to the polls in Iowa, New Hampshire and other states holding early primary contests.
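
NGP VAN has not published the faulty code, but the bug class is a familiar one in multi-tenant systems: a query that forgets to scope results to the requesting tenant. The sketch below is purely illustrative, with an invented schema, and is not VoteBuilder’s actual code.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE scores (voter_id, turnout_score, state, campaign_id)")
    db.execute("INSERT INTO scores VALUES (1, 0.92, 'IA', 'clinton'), (2, 0.87, 'IA', 'sanders')")

    def voter_scores_buggy(state):
        # BUG: filters by state only, so any campaign sees every campaign's scores
        return db.execute(
            "SELECT voter_id, turnout_score FROM scores WHERE state = ?", (state,)
        ).fetchall()

    def voter_scores_fixed(state, campaign_id):
        # FIX: every query is scoped to the authenticated campaign (tenant)
        return db.execute(
            "SELECT voter_id, turnout_score FROM scores WHERE state = ? AND campaign_id = ?",
            (state, campaign_id),
        ).fetchall()

    print(voter_scores_buggy("IA"))             # leaks both campaigns' data
    print(voter_scores_fixed("IA", "sanders"))  # returns only the caller's own rows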

On Friday, 18 December, the DNC directed NGP VAN to block the Sanders campaign from accessing the party’s database of likely Democratic voters, as well as Sanders’s own data, until it could investigate whether the data was improperly used.

The Sanders campaign immediately swung into action, filing a lawsuit against the DNC with the US District Court in Washington, DC, to restore its access to the system.

Sanders is waging an uphill battle against Clinton – some prediction markets pick her as the favorite to win the nomination with more than 90% certainty.

Sanders has stayed in the fight thanks to his large base of small donors, but losing access to the NGP VAN system threatened to cost the Sanders campaign $600,000 in donations per day, “crippling our campaign,” Sanders said.

Sanders accused the DNC of stacking the deck in favor of the Clinton campaign, but the DNC relented and restored the Sanders campaign’s access to the system by Saturday, 19 December.

The DNC acknowledged that the data breach was possible because of a glitch in the software and was “not a hack.”

Even so, any time private information is exposed, it makes little difference to the people whose data is breached whether the exposure was intentional or accidental.

And with all of the data that campaigns and political organizations are gathering on voters, it’s time to ask more questions about how this private data is collected, and how it is secured.

Political campaigns gather lots of information about voters that can be used to determine how they might vote – from demographic information like gender, age and occupation, to data about the things they buy and what websites they visit.

The presidential campaign of Republican Senator Ted Cruz has been gathering data on “tens of millions” of Facebook users, without their permission, in order to build “psychological profiles” of potential supporters, according to an investigation by The Guardian.

Yet there are no privacy regulations about what data political campaigns can collect or how they use that information.

Maybe we should also ask if political campaigns’ use of big data means the secret ballot, a vital aspect of free, democratic elections, is under threat.


Image of Bernie Sanders and Hillary Clinton courtesy of Joseph Sohm / Shutterstock.com.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bArihfodMG4/

Woman sues Airbnb over hidden surveillance cam found in rental

All the window blinds were drawn. The front door was closed.

Edith Schumacher and her partner, Kevin Stockton, had booked the Airbnb rental in California for a month, and they were doing what people do behind closed doors when they’re on vacation and can reasonably presume that other people aren’t hiding in the closet, drooling.

In Schumacher’s case, that includes sleeping naked.

All was well for the first few days.

Then, on the third day, Stockton noticed a light coming from a shelf in the living room.

According to a complaint Schumacher filed last Monday (14 December) in the Northern District of California, that light was coming from a remote-controlled camera hidden between candles.

Schumacher, who’s from Germany, is now suing Airbnb, claiming that the surveillance camera allowed her hosts to spy on her as she walked naked from the bedroom to the bathroom.

The suit says that the hosts, Fariah Hassim and Jamil Jiva, also would have been able to use the spy camera to eavesdrop on their guests’ private conversations in the living room, including the intimate, private subjects of their financials and the nature of their relationship.

Beyond that, Schumacher’s lawyers say that their client has been left “deeply humiliated and angry,” as well as concerned that someone might post naked photos of her online.

Schumacher believed that “with the front door closed and the window blinds drawn throughout the property, she was protected and free from prying eyes,” her lawyers wrote. “This natural presumption proved to be incorrect.”

An Airbnb rep told The Recorder that “of course” the company expects hosts to obey the law and to respect guests’ privacy.

It also warns hosts to “fully disclose” whatever surveillance cameras might be tucked away, the company told Fusion:

Airbnb warns hosts to fully disclose whether there are security cameras or other surveillance equipment at or around the listing and to get consent where required.

That’s not good enough, Schumacher’s suit contends.

Her complaint says that because Airbnb was in control of the leasing process, it had a duty to exercise reasonable care in order to avoid causing personal injury to Schumacher.

The complaint says that Airbnb fails to conduct meaningful background checks or verify the personal details of people who lease their homes on the site, thereby failing to protect the guests’ privacy rights.

From the complaint:

Little to no effort is undertaken by Airbnb by way of a vetting process with respect to these hosts to ensure the safety and welfare of the third parties renting properties through Airbnb.

The Recorder reports that Schumacher is suing Airbnb for negligence and is suing Hassim and Jiva for violating her privacy and intentionally inflicting emotional distress.

We’ve noted in the past that there are a plethora of ways to get scammed when you’re trying to reserve a short-term rental through Airbnb.

All a crook needs to put up a fake listing, for example, are fake photos, a fake profile, a fake address and a real phone number.

It’s also possible for scammers to hijack a current, legitimate account, possibly through bulk purchase of breached logins, and to put up fake listings under the name of an unsuspecting user.

What’s more, plenty of people have been scammed by going along with fraudsters who talk them into taking communication and/or payment off the site.

It’s not just guests who get scammed, mind you: hosts are also potential targets.

One such ploy is for guests to submit complaints about conditions they falsely claim are objectionable.

Variations of one such story come from a site called airbnbHELL that collects uncensored stories from hosts and guests.

Similar anecdotes come from two hosts who lost 50% of their earnings after guests submitted a photo of a mouse seen in the rentals: a photo that one of the hosts found, through an image search, had been submitted to another site a year earlier.

Note that the stories on airbnbHELL haven’t been confirmed.

Regardless, what’s disturbing is that they are plausible: Airbnb does reimburse guests who stay in deplorable rentals.

Airbnb has many safeguards to keep both guests and hosts safe and to keep transactions secure, including a bevy of “Trust & Safety” pages.

There are ways to avoid getting ripped off on Airbnb from a cyber perspective, but hidden surveillance cameras are a whole other kettle of fish.

Even if Schumacher and Stockton’s hosts were using the hidden surveillance camera merely to make sure their home and possessions weren’t trashed, with no intention of nefariously capturing nude images or intercepting private information about their guests, the setup of a hidden surveillance camera, the presence of which was allegedly undisclosed, was still an egregious breach of privacy.

Even if the hosts hadn’t planned to sell or post naked images, that doesn’t mean that an intruder couldn’t hack the webcam and do it in their stead.

As we know from all the stories of hacked baby monitors and other webcams, it’s all too common for intruders to break through the security of a password-protected webcam and to take it over.

Last year we wrote about one website that was streaming the live feeds of hundreds of thousands of internet-enabled cameras that were secured with a default, out-of-the-box password.

Besides feeds from baby monitors in nurseries around the world, the site allows strangers to spy on people via security webcams delivering live feeds from bedrooms, other rooms in residential homes, offices, shops, restaurants, bars, swimming pools and gymnasiums.

Given the ubiquity of default webcam passwords that never get changed, that list could easily include Airbnb rentals.

In fact, changing the default password on baby monitors and webcams is so important, it’s number 5 on our list of Advent Tips.

Be careful in your holiday travels.

Wherever you wind up staying the night, be it an Airbnb rental, a hotel, or somewhere else, keep an eye out for glowing lights after you flip off the switch.

After all, the light source might not be Rudolph’s nose.

Image of hidden camera courtesy of Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BXhc3PPBdu0/

Facebook spars with researcher who says he found “Instagram’s Million Dollar Bug”

A spat erupted last week between Facebook and a security researcher who reported a vulnerability in the infrastructure behind its Instagram service.

In the wake of having reported the bug, Wesley Wineberg, a contract employee of security company Synack, accused Facebook of trying to threaten his job and intimidate him.

Facebook says, well, a number of things: that Wineberg was one of several to discover the vulnerability, that the company thanked him and offered him $2500 (as is “standard”, it says), that Wineberg wanted more than that, and that the researcher then crossed the line of responsible, ethical bug reporting to “rummage” through data.

The starting payout for bugs in Facebook’s bounty program is $500.

In an extensive post about the situation, Facebook chief security officer Alex Stamos on Thursday wrote that Facebook offered to pay Wineberg $2500 “despite this not being the first report of this specific bug.”

Up to the point when Facebook offered him $2500, everything Wineberg did was “appropriate, ethical, and in the scope of our program,” Stamos says.

Both parties agree on one thing: from there, it went downhill fast.

The way Stamos tells it, Wineberg used the flaw to “rummage around” for useful information, which he found – in spades.

Wineberg on Thursday said in a post on his personal blog that he had found weaknesses in the Instagram infrastructure that allowed him to access source code for “fairly” recent versions of Instagram; SSL certificates and private keys for Instagram.com; keys used to sign authentication cookies; email server credentials; and keys for more than a half-dozen critical other functions, including iOS and Android app signing keys, iOS push notification keys, and the APIs for Twitter, Facebook, Flickr, Tumblr and Foursquare.

In addition, the researcher said he’d managed to access employee accounts and passwords (some of which he said were “extremely weak”), and had access to Amazon buckets storing user images and other data.

He hit the jackpot, Wineberg said, and not just any piddling $2500 payout’s worth.

In fact, his post was titled “Instagram’s Million Dollar Bug”: a reference to Facebook having said in the past that:

If there’s a million-dollar bug, we will pay it out.

From Wineberg’s post:

To say that I had gained access to basically all of Instagram’s secret key material would probably be a fair statement. With the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member. While out of scope, I would have easily been able to gain full access to any user’s account, private pictures and data. It is unclear how easy it would be to use the information I gained to then compromise the underlying servers, but it definitely opened up a lot of opportunities.

Between 21 October and 1 December, Wineberg would find what he believed were three different issues, which he reported in three installments.

They eventually led him to all those Instagram keys, but that raised warning flags at Facebook, which responded quite differently than it had to the initial bug report.

In fact, Stamos said, issues 2 and 3 were where Wineberg crossed the line.

He found Amazon Web Service (AWS) API Keys that he used to access an Amazon Simple Storage Service (S3) bucket and download non-user Instagram technical and system data, Stamos said.

But this use of AWS keys is just “expected behavior”, Stamos said, and Wineberg should have kept his hands out of that cookie jar:

The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behavior by Wes.
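
Whatever one makes of the “expected behavior” argument, it is worth spelling out what leaked AWS keys buy: the key pair alone is enough to enumerate and download anything in an S3 bucket the keys can read. A minimal sketch, with placeholder credentials and bucket name:

    import boto3  # third-party AWS SDK for Python

    # Placeholders only: with a real leaked key pair, no further "hacking" is
    # needed -- these two strings are the entire credential.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIAEXAMPLEKEY",
        aws_secret_access_key="EXAMPLESECRET",
    )
    for obj in s3.list_objects_v2(Bucket="example-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])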

Wineberg mentioned publishing his findings. Facebook was not pleased.

So Stamos reached out to Jay Kaplan, the CEO of Synack – in spite of Wineberg doing all this on his own time, not on Synack’s dime – to tell him that writing up the initial bug was OK, but that exfiltrating data and calling it research was not OK.

That’s when Stamos dropped a reference to lawyers, saying that he “wanted to keep this out of the hands of the lawyers” but that he wasn’t sure if this was something he needed to go to law enforcement over.

This is what Stamos wanted, from Wineberg’s telling of it:

  • Confirmation that he hadn’t made any vulnerability details public.
  • Deletion of all data retrieved from Instagram systems.
  • Confirmation that he hadn’t accessed any user data.
  • An agreement to keep all findings and interactions private, and not publish them at any point (contrary to Stamos’s assertion that Facebook was OK with Wineberg writing up the initial vulnerability).

Wineberg says that he couldn’t find anything in Facebook’s responsible disclosure policy that specifically forbade what he’d done after he initially found the remote code execution (RCE) vulnerability.

What would have clarified matters, he said, would have been specificity along the lines of, say, Microsoft’s bug reporting policy, which explicitly prohibits “moving beyond ‘proof of concept’ repro steps for server-side execution issues (i.e. proving that you have sysadmin access with SQLi is acceptable, running xp_cmdshell is not).”

From his post:

Despite all efforts to follow Facebook’s rules, I was now being threatened with legal and criminal charges, and it was all being done against my employer (who I work for as a contractor, not even an employee). If the company I worked for was not as understanding of security research I could have easily lost my job over this. I take threats of criminal charges extremely seriously, and so have already confirmed with legal counsel that my actions were completely lawful and within the requirements specified by Facebook’s Whitehat program.

As of Friday afternoon, Stamos was still hashing it all out with commenters on his post, many of whom said that the “expected behavior” rationale for dismissing Wineberg’s findings was thin.

For his part, Stamos said that Facebook will look at making its policies more explicit and try to be clearer about what it considers ethical behavior.

But Facebook still doesn’t condone what Wineberg did.

From Stamos’s post:

Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk.

Readers, who do you think is in the right, here? Please share your thoughts in the comments section below.

Image of bug courtesy of Shutterstock.com

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SLKRbmgvTBs/

Advent Tip 21: Bought online? Watch out for bogus courier emails!

If you’ve been doing any last-minute online shopping for Christmas gifts, you may well be waiting with increasing anxiety for the items to be delivered.

So, don’t let your guard down when it comes to emails claiming to be from couriers.

The trick usually goes like this: the courier company tried to deliver your parcel, but no one was home, or the address wasn’t correct, or something like that.

You need to contact the couriers to check out the details and make arrangements so the delivery can be completed.

If you happen to be expecting a delivery, the email may seem perfectly well-timed…

…and, to help you out, there’ll be a web link or an attachment in the email that you can click or open to sort things out.

Even if the email doesn’t look quite right – for example, because it contains bad English, or mentions a courier company you don’t usually use – it’s still tempting to click through or open up the document, just in case.

After all, if the site turns out to be bogus, or the document to be fraudulent, you don’t have to take things any further.

Except that by then it could be too late.

Booby-trapped documents that infect your computer simply through opening them are an increasingly common weapon in the cybercrime armoury.

So too are web pages loaded with so-called exploit kits that fire off a sequence of attacks on your browser while you’re distracted by the rest of the page.

If in doubt, look up the courier company’s phone number yourself (don’t use the number in the email!) and give them a ring.

💡 LEARN MORE – The danger of booby-trapped Office attachments ►

💡 LEARN MORE – A real-world “courier delivery” scam that foisted malware on Mac users ►

💡 LEARN MORE – How exploit kits attack your browser ►

Images of Christmas tree and Advent calendar courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4tFLGXXuYhc/

Facepalm time: Windows 10 security patch wipes custom Word autotext

A cumulative Windows 10 update is pranging customised copies of Word.

Windows 10 Cumulative Update KB3124200 causes the Normal.dotm template to be renamed; Word then creates a fresh Normal.dotm when it is restarted.

The update was released last week and complaints are landing on the Microsoft Office forum page, here.

That’s a problem for anybody who customised their copy of Word using macros, tailored auto corrections or autotext for their own particular needs.

Normal.dot – Normal.dotm in Word 2007 and later – is the template that stores such settings. If this file is deleted, and if Word can’t find a copy or a backup, then Microsoft’s application will re-create the file in its original format – meaning factory settings. So far it seems to be copies of Word 2016 that are affected.
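
Until Microsoft ships a fix, a simple defensive habit is to snapshot the template before applying updates. A small sketch, assuming the standard Normal.dotm location under %APPDATA% on Windows:

    import os
    import shutil
    import time

    # Back up Word's Normal.dotm so macros and autotext survive a wipe.
    template = os.path.expandvars(r"%APPDATA%\Microsoft\Templates\Normal.dotm")
    if os.path.exists(template):
        backup = template + time.strftime(".%Y%m%d.bak")
        shutil.copy2(template, backup)
        print(f"backed up to {backup}")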

Doug Robbins, a Word MVP, wrote on the Microsoft community forum: “In 30+ years of using Word, including 2016 on Windows 10, I have never had a Normal.dot or more recently Normal.dotm template simply disappear.”

A Microsoft group engineering manager for Word posted to the forum that Microsoft is looking into the problem.

“This was not an intentional change and we (the Word product team) are working to understand both the cause as well as what steps customers can take to either avoid or recover from this,” he said. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/microsoft_windows_10_word_2016_normal_dotm/

Hello Kitty hack exposes 3.3 million users’ details, says infosec bod

Up to 3.3 million Hello Kitty users have had their personal data exposed due to a database breach at the brand’s online community SanrioTown.com, a security researcher has discovered.

The sanriotown.com breach was discovered online by researcher Chris Vickery, who informed security blog Salted Hash.

The exposed records include users’ names, birthdates, gender, nationality, email addresses, unsalted SHA-1 password hashes, and password hint questions.
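
The unsalted SHA-1 hashes are the most dangerous item on that list. Without a per-user salt, identical passwords produce identical digests, so a single precomputed table of common-password hashes cracks accounts in bulk. A few lines make the point:

    import hashlib

    # Two users with the same password get the same unsalted digest, and that
    # digest likely already sits in public lookup tables.
    for pw in ("hellokitty", "hellokitty", "123456"):
        print(pw, hashlib.sha1(pw.encode()).hexdigest())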

“While having sensitive details exposed is bad enough for adults, when the information relates to a child it’s far worse.

“If someone managed to compromise a child’s identity, the fraud might not be detected for years because most parents don’t monitor their child’s credit record,” noted Salted Hash writer Steve Ragan.

In addition to the primary SanrioTown database, two backup servers containing mirrored data were also compromised, Salted Hash reported.

The earliest known date of publication for the private information was 22 November this year.

Sanrio, as well as the ISP hosting the database, have been notified, the site reported.

The Register has contacted Sanrio for comment.

Earlier this month toymaker VTech admitted that millions of kiddies’ online profiles were left exposed to hackers – far more than the 220,000 first feared. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/hello_kitty_hack_exposes_33_million_users_details/

Iranian hackers targeted New York dam, had a quick nosy around

Iranian hackers penetrated the online control system of a New York dam in 2013, according to reports, and poked around inside the system.

The Wall Street Journal reported that hackers gained access to the dam through a cellular modem, according to an unclassified Homeland Security summary of the case.

Two people familiar with the matter said the summary refers to the Bowman Avenue Dam, a small facility 20 miles outside of New York City, and that the hackers didn’t take control of the dam but merely probed the system.

The Department of Homeland Security has declined to comment on the incident.

US intelligence agencies noticed the intrusion as they monitored computers they believed were linked to Iranian hackers targeting American firms, according to people familiar with the matter.

The analysts detected a machine that was crawling the internet for vulnerable US industrial-control systems. The hackers appeared to be focusing on certain internet addresses, according to the people.

The US has the highest number of industrial-control systems connected to the internet in the world, with 57,000 systems, according to researchers at Shodan.

An attack on a German industrial control system occurred last December, with hackers causing “serious damage” to a steel mill and wrecking one of its blast furnaces.

The hack of the unnamed mill, detailed in the annual report of the German Federal Office for Information Security, was pulled off after a victim fell for a phishing email. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/iranian_hackers_target_new_york_dam/

Security sweep firm links botnet infestation and file sharing

There’s a high degree of correlation between organisations with P2P activity and system compromises via malware infections, according to a new study by BitSight Technologies.

Correlation is, of course, different from causation. However, the booby-trapping of torrents to trick freetards into sucking down malicious code is a well-known tactic, so it’s possible that BitSight might be onto something beyond saying that firms with lax security controls in general get infected more often.

BitSight provides security sweeps of corporate networks – or, as its marketing blurb would have it, helps clients to “manage cyber risk by continuously monitoring the infosec of their business ecosystem”, a service that includes a recently introduced file-sharing monitoring component. Its study examines the P2P file sharing activity of about 30,700 companies.

The tech firm’s key findings were that 43 per cent of application files and 39 per cent of games contained malicious software (figures that seem high, to El Reg’s security desk, at least). Grand Theft Auto V and Adobe Photoshop lead the list of top torrented games and applications, respectively.

BitSight’s representatives were keen to stress the obvious point (to Reg readers, at least) that peer-to-peer file sharing and downloading of illegal content didn’t end with the Napster era. The firm reckons its work provides evidence of a correlation between botnet activity and file sharing activity.

In addition, BitSight’s study suggests government, education and utilities organisations have a larger BitTorrent problem than other sectors of the economy.

More details on the research into what BitSight describes as the “Peer to Peer Peril” can be found in a blog post here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/p2p_malware_peril/

Security Tech: It’s Not What You Buy, It’s How You Deploy

Good information security depends on a holistic strategy, not on an elite lineup of discretely moving parts.

It’s a great time to be selling security software, but a much harder time to be a CISO. Enterprise security spending has exploded in the race to protect against increasingly advanced and complex cyber threats. Much of that money is spent on modern information security tools – advanced threat detection, sandboxes, intrusion prevention systems, threat intelligence feeds, and more. The spending is growing at such a rate that Gartner predicts we will eclipse the $100 billion mark by 2018, with other industry analysts suggesting $170 billion in annual spending by 2020.

Unfortunately, buying more security software does not equal “more security.” It is not simply a matter of turning on the latest technology and walking away, problem solved. Instead, the larger challenge for security practitioners is not what to purchase, but how to deploy security tools. So much emphasis has been put on product, emerging technologies, and the elusive promise of big data analytics that there is little discussion about how to architect a secure network.

There are many different ways for deployments to fail—some are conceptual while others are matters of execution. Many organizations look at security tools and initiatives as one-off solutions, without considering the ramifications of how they intersect with other initiatives, or whether they make sense as part of the larger security architecture. Especially in layered security models, projects that aren’t clearly defined from the outset can fall flat once they are deployed.

For example, let’s consider an organization that is deploying a multi-factor authentication program alongside a network segmentation project. For the sake of discussion, the deployment team decides to finish the multi-factor authentication project first. Once it is installed and working, the team pivots to the network segmentation project, but neglects to account for the location of the multi-factor authentication server; the new segmentation rules block its access to the network. Now the team can’t log in and fix it, because it’s blocked. It sounds silly, but this happens.
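
One way to catch that class of mistake is a pre-deployment check that validates planned deny rules against an inventory of critical security services, as in the toy sketch below (the inventory and rule format are invented for illustration):

    import ipaddress

    # Invented inventory: critical security services that must stay reachable.
    CRITICAL = {"mfa-server": ipaddress.ip_address("10.1.5.20")}

    def rules_block_critical(deny_networks):
        """Return (service, rule) pairs where a planned deny rule cuts off a critical host."""
        hits = []
        for name, ip in CRITICAL.items():
            for net in deny_networks:
                if ip in ipaddress.ip_network(net):
                    hits.append((name, net))
        return hits

    print(rules_block_critical(["10.1.5.0/24"]))  # flags the MFA server before rollout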

Another critical issue organizations must address when deploying new security tools and initiatives is ensuring fast access to data while maintaining optimal performance of the various security applications on the network. A common approach to security today is to keep tools separate, with each tool competing for data and bandwidth on the network and lacking visibility into the security workflow as a whole. To ensure maximum performance – and return on investment – network and data center architectures must be designed in a way that supplies consistent access to relevant data and traffic to security tools, while avoiding sapping network bandwidth and facilitating security workflows.

With that in mind, here are four steps security leaders can take to improve their information security deployments.

  1. Have a 360-degree strategy: It can’t be overstated how critical it is to have a conceptual view of your security deployment. Without a single, overarching guide that everyone in the organization can draw from, different project teams are bound to step on each other’s toes.
  2. Clearly define your initiatives. Given the urgency created by the data breach epidemic, many security initiatives are happening in tandem. But security systems are not all discrete; there are interdependencies that need to be accounted for. Ensuring initiatives, metrics, and goals are clearly defined at the start avoids problems later.
  3. Recognize how tools interact. In the same way that we don’t want project teams getting tangled up, we need to understand how different security tools interact, how they get their data, and how they perform on the network. The overall workflow orchestration should be considered.
  4. Consider what each addition adds to the whole. There has been a rush to buy the “next generation” of a security technology to fight off the rising tide of malware. But good information security depends on a holistic strategy, not on an elite lineup of discretely moving parts. Every addition to the security architecture should be considered from the standpoint of what it adds to overall security.

It’s understandable that security practitioners want to move fast; they are surely feeling the pressure from all sides on the data breach issue. But complex problems do not often have simple solutions, and in this case that is especially true. When leaders arm security teams with clear ideas of what needs to be done, well-defined plans, and a more deployment-focused thought process, projects can thrive – and that is what will lead to better overall security.

Simon Gibson is a Fellow Security Architect at Gigamon. He provides direction and roadmaps for the product that secures applications that secure the Internet.
Simon has been working on Internet infrastructure for nearly 20 years, from small ISPs to developing streaming … View Full Bio

Article source: http://www.darkreading.com/vulnerabilities---threats/security-tech-its-not-what-you-buy-its-how-you-deploy/a/d-id/1323599?_mc=RSS_DR_EDT