
Monday review – the hot 33 stories of the week

Get yourself up to date with everything we’ve written in the last seven days – it’s weekly roundup time.

Monday 14 December 2015

Tuesday 15 December 2015

Wednesday 16 December 2015

Thursday 17 December 2015

Friday 18 December 2015

Saturday 19 December 2015

Sunday 20 December 2015

News, straight to your inbox

Would you like to keep up with all the stories we write? Why not sign up for our daily newsletter to make sure you don’t miss anything. You can easily unsubscribe if you decide you no longer want it.

Image of days of week courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VJjMO_Iq9vE/

iOS banking app security: getting better, but still bad!

Two years ago, Ariel Sanchez, a researcher at security assessment company IOActive, published a report on the sort of security you could expect if you were doing your internet banking on an Apple gadget.

The answer, sadly, turned out to be “Very little.”

Two years later, the answer’s a bit better, but it’s still pretty sad.

The good news: over the past two years, more banking apps that run on iOS have begun to protect data better and fend off man-in-the-middle (MiTM) attacks by properly validating SSL certificates or removing plaintext traffic.

The bad news?

Well, how about you get a cup of coffee and pull up a chair.

Plenty of apps are still storing data insecurely in their file systems, and many are still susceptible to client-side attacks.

For this year’s research, Sanchez again looked at 40 mobile banking apps, mostly from Europe, the Americas and Asia.

He didn’t detail the vulnerabilities he found or how to exploit them, but he did contact some of the affected banks to report the issues.

What he found was that few of the mobile banking apps he checked out provide authentication that goes beyond username and password.

Sanchez said that overall, security has improved since he researched banking apps in January 2014, but it hasn’t improved enough, given that many apps remain vulnerable.

Specific findings:

  • 12.5% of the apps didn’t validate the authenticity of the SSL certificates presented, making them susceptible to MiTM attacks.
  • 35% of the apps contained non-SSL links throughout the application. This allows an attacker to intercept traffic and inject arbitrary JavaScript/HTML code in an attempt to create fake login prompts or similar scams.
  • 30% of the apps didn’t validate incoming data and were vulnerable to JavaScript injection via insecure UIWebView implementations, allowing client-side attacks.
  • 42.5% of the apps provided alternative authentication solutions to mitigate the risk of leaking user credentials and impersonation attacks.
  • 40% of the apps still leak information about user activity or client-server interactions, such as requests to and responses from the server, via system or custom logs.

In 2014, Sanchez had found that 70% of the apps offered no support at all for two-factor authentication (2FA).

That number has since shrunk to 57.5%, which is a step in the right direction.

But that’s still an awful lot of banks that aren’t bothering with the extra security users get when they have to do something like punch in a one-time passcode, sent via SMS (text message), whenever they try to log in.
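
The mechanics behind those codes are straightforward. As a purely illustrative sketch (not any bank’s actual implementation), here is roughly what the server side of an SMS one-time passcode looks like in Python – generate a short-lived code, send it out of band, then verify what the user types back:

```python
import hmac
import secrets
import time

# Hypothetical sketch of the server side of an SMS one-time passcode.

def issue_code() -> tuple[str, float]:
    code = f"{secrets.randbelow(1_000_000):06d}"   # six-digit code, e.g. "042917"
    expires_at = time.time() + 300                 # valid for five minutes
    return code, expires_at

def verify_code(submitted: str, issued: str, expires_at: float) -> bool:
    if time.time() > expires_at:
        return False                               # code has expired
    return hmac.compare_digest(submitted, issued)  # constant-time comparison

code, expires_at = issue_code()
# ...hand `code` to an SMS gateway here (out of scope for this sketch)...
print(verify_code(code, code, expires_at))         # True
```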

But the still-concerning lack of 2FA once again pales when compared to the problem of not validating SSL certificates.

Two years ago, 40% of the apps accepted any SSL certificate for secure HTTP traffic.

That’s down to 12.5%, which is another step in the right direction.

HTTPS certificates rely on a chain of trust, and validating that chain is important, given that it signals that a Certificate Authority has vouched for somebody who claims to own a site.

The chain of trust stops anyone who feels like it from blindly tricking users with a certificate that says, “Hey, this is the banking site you’re looking for, trust us!”

According to IOActive’s recent report, more than one out of 10 (12.5%) of iOS banking apps still simply didn’t produce any warnings when faced with a fake certificate, because they didn’t check whether the certificate had been vetted or whether it was a home-baked piece of bogus.

You can feed those apps any certificate that claims to validate any website, and the app will blindly accept it.

So, if the banking app is misdirected to a phishing site, for example while you’re using an untrusted network such as a Wi-Fi hotspot, you simply won’t know.
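
To make that failure concrete, here’s a minimal Python sketch (the hostname is hypothetical, and the vulnerable apps are of course written in Objective-C or Swift, but the logic is the same) showing the difference between a client that validates the certificate and one that accepts whatever it’s handed:

```python
import socket
import ssl

HOST = "onlinebanking.example.com"   # hypothetical banking endpoint

# What a careful client does: verify the chain against trusted CAs
# and check that the certificate actually matches the hostname.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print(tls.getpeercert()["subject"])   # raises SSLError on a forgery

# What the careless apps effectively do: switch validation off entirely,
# which is exactly what lets a man-in-the-middle impersonate the bank.
lax = ssl.create_default_context()
lax.check_hostname = False
lax.verify_mode = ssl.CERT_NONE
```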

We’ve seen multiple SNAFUs in financial apps related to not checking certificates.

For example, in July 2014, the popular Bitcoin wallet Coinbase was found to have a weakness in how its Android app handled HTTPS certificates that could allow an attacker to steal authentication codes and access users’ accounts.

It’s not just banking apps that get this wrong.

Other apps that fumble HTTPS have included Pinterest’s iOS app and Microsoft’s iOS Yammer client, both of which failed to give warnings about fake certificates when Dutch security company Securify checked them out in April.

Image of iPhone courtesy of ymgerman / Shutterstock.com.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aklpEeTwbyU/

Security industry too busy improving security to do security right

The Payment Card Industry Security Standards Council (PCI SSC) has decided to delay the deadline for migration from Secure Sockets Layer (SSL) to Transport Layer Security (TLS).

Earlier this year, the Council decided the time to make the change was June 2016, a reasonable idea given SSL’s track record, from the POODLE protocol flaw to implementation bugs such as Heartbleed.

Now the Council says it’s just too hard for retailers to make the jump.

The canned statement (PDF) about the moratorium, issued deep into Friday US time, features the Council’s general manager Stephen Orfei saying migration was expected to be simple, “but in the field a lot of business issues surfaced as we continued dialog with merchants, payment processors and banks.”

Orfei laid some of the blame at the feet of mobile devices, saying that retailers’ efforts to secure transactions made on smartphones and fondleslabs, on top of “encryption, the SHA-1 browser upgrade and EMV in the US” together make for so much work that the SSL death deadline can’t be met.

“We’re working very hard with representatives from every part of the ecosystem to make sure it happens before the bad guys break in,” Orfei says.

The world will therefore have to bumble along with known-to-be-imperfect encryption for two years longer than planned, a period during which The Register imagines “the bad guys” will do their very best to take advantage of weak encryption.
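
If you run a payment endpoint and want to know where yours stands, checking what it will actually negotiate is a small job. Here’s a minimal sketch (hostname hypothetical) using Python’s ssl module to insist on TLS 1.2 or better and report what the server offers:

```python
import socket
import ssl

HOST = "payments.example.com"        # hypothetical merchant endpoint

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse SSL 3.0 and early TLS

try:
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated:", tls.version())    # e.g. "TLSv1.2"
except ssl.SSLError as err:
    print("server could not meet the minimum TLS version:", err)
```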

The new migration deadline will be formalised in the next version of the PCI DSS standard, due in April 2016. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/ssl_to_tls_migration_delayed_until_2018/

Facebook hammers another nail into Flash’s coffin

Facebook has hammered another nail into the coffin of Adobe Flash by switching from the bug-ridden plug-in to HTML5 for all videos on the site.

The Social Network™ explained the move by saying “Moving to HTML5 best enables us to continue to innovate quickly and at scale, given Facebook’s large size and complex needs.”

Flash hasn’t been completely banished: Facebook says it is “continuing to work together with Adobe to deliver a reliable and secure Flash experience for games on our platform.”

Facebook’s Daniel Baulig writes that going to HTML5 means the company can “tap into the excellent tooling that exists in browsers, among the open source community, and at Facebook in general. Not having to recompile code and being able to apply changes directly in the browser allow us to move fast.”

“HTML5 made it possible for us to build a player that is fully accessible to screen readers and keyboard input,” Baulig added, going on to explain that the standard will make it easier to develop for people with visual impairments.

But HTML5 is no panacea: Baulig wrote that “we noticed that a lot of the older browsers would simply perform worse using the HTML5 player than they had with the old Flash player.”

“We saw more errors, longer loading times, and a generally worse experience.”

The Social Network™ therefore moved to HTML5 for newer browsers some time ago, adding more browsers over time as it improved its video player. As of December 19th, however, it’s all HTML5 all the time, no matter the browser with which you venture into The House That Zuck Built.

And The House always wins: Baulig says “People like, comment, and share more on videos after the switch, and users have been reporting fewer bugs. People appear to be spending more time with video because of it.”

As Baulig’s post points out, Facebook operates at unusual scale and therefore has unusual needs. Yet the site’s considerable influence means developers everywhere are likely to be asked to consider this decision before long, not least because YouTube’s also flushed Flash. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/21/facebook_dumps_flash_for_video/

Advent tip #20: Free Wi-Fi is handy – but think before you connect!

Free Wi-Fi can be astonishingly handy.

For example, I’ve arrived at an overseas airport with instructions to “call us when you land and we’ll set off to pick you up,” only to find my mobile phone company hadn’t enabled roaming.

Wi-Fi and Skype to the rescue.

I’ve needed a multi-gigabyte security update while on the road, but had only a few hundred megabytes of mobile data left.

Wi-Fi to the rescue.

In fact, you probably have “thank goodness for free Wi-Fi” stories of your own.

But please keep your wits about you if you do decide to connect.

If it’s open Wi-Fi, where you don’t need a password at all, then anyone within a few metres (plus determined hackers who are 100m away or even more) can eavesdrop everything you send and receive.

Even if the network is encrypted, anyone else who knows the password can listen in at the moment you connect, capture what’s called your “login handshake,” and then eavesdrop the rest of your traffic anyway.

Sticking to HTTPS websites will help, because your browsing will be encrypted; and using a Virtual Private Network (VPN) is even better, because almost all of your traffic will be encrypted, not just your browsing.
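
The same habit is worth building into any scripts or tools you use on the road: refuse to send anything sensitive over plain HTTP in the first place. A minimal sketch, assuming the third-party requests library and a made-up login URL:

```python
import requests
from urllib.parse import urlparse

def careful_post(url: str, data: dict) -> requests.Response:
    """Refuse to send credentials unless the URL is HTTPS; certificate
    verification stays on (requests verifies certificates by default)."""
    if urlparse(url).scheme != "https":
        raise ValueError("refusing to send credentials over plain HTTP")
    return requests.post(url, data=data, timeout=10)

# Hypothetical usage:
# careful_post("https://login.example.com/session", {"user": "me", "pw": "..."})
```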

Of course, there’s often a sign-up page before the service provider will let you on the network in the first place, so be mindful of what you’re giving away in return for “free” internet access.

💡 LEARN MORE – What is a VPN? ►

💡 CASE STUDY – Anatomy of a free Wi-Fi hole ►

💡 LEARN MORE – Sophos Warbiking tours search for insecure Wi-Fi ►

(No video? Watch on YouTube.)

Images of Christmas tree and Advent calendar courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Q8e6MbdGqBU/

Juniper ‘fesses up to TWO attacks from ‘unauthorised code’

Juniper Networks has offered a more detailed description of the security issues resulting from its discovery of “unauthorised code” in ScreenOS, the software that powers its firewalls.

The company’s knowledge base article on the incident says “The first issue allows unauthorized remote administrative access to the device over SSH or telnet. Exploitation of this vulnerability can lead to complete compromise of the affected system.”

While the company points out that “Upon exploitation of this vulnerability, the log file would contain an entry that ‘system’ had logged on followed by password authentication for a username,” it also notes that “a skilled attacker would likely remove these entries from the local log file, thus effectively eliminating any reliable signature that the device had been compromised.”
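
For admins wondering where to start, one obvious first pass is to sweep exported logs for the entry Juniper describes, bearing in mind the company’s own caveat that a careful attacker will already have scrubbed it. A rough sketch – the log filename and the exact message format are assumptions, not Juniper’s documented syntax:

```python
import re

# Guessed pattern based on Juniper's description of a 'system' logon entry;
# the real ScreenOS message format may differ.
SUSPECT = re.compile(r"system.*logged on", re.IGNORECASE)

with open("screenos_events.log") as log:          # hypothetical log export
    for lineno, line in enumerate(log, start=1):
        if SUSPECT.search(line):
            print(f"line {lineno}: possible backdoor login: {line.strip()}")
```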

The second issue, the company says, “may allow a knowledgeable attacker who can monitor VPN traffic to decrypt that traffic. It is independent of the first issue.”

And here’s the nasty part:

“There is no way to detect that this vulnerability was exploited.”

The United States Federal Bureau of Investigation is reportedly probing the matter, while US government agencies are liaising with Juniper. And so they ought: the company markets its products as military-grade spookware.

Speculation is naturally running high as to the source of the unauthorised code, with many suggesting a state-sponsored attack and/or an attack by a criminal gang that sells government data.

For what it is worth, The Register has been contacted by a former Juniper staffer who suggested “Maybe you should be looking where Juniper’s sustaining engineering is done for the ScreenOS products.”

That work’s done in China.

The Register is well aware of the many problems that flow from assuming China is a source of attacks, not least that it is just plain convenient to blame the Middle Kingdom for attacks. We’re also not willing to assume that any competent government has failed to develop networked attack and defence capabilities.

For the record, however, ScreenOS has its roots in China, via Juniper’s 2004 acquisition of NetScreen for US$3.4bn. NetScreen was founded by Chinese nationals, but Sunnyvale, California, was home. Juniper stated, in this canned statement from December 2004, that one of the reasons it decided to open a research and development centre in the Chinese capital, Beijing, was to “leverage the Chinese roots of NetScreen Technologies.”

It’s not hard to find evidence of ongoing work on ScreenOS in Beijing: a quick trawl of LinkedIn turns up several Juniper employees who work on the operating system. The Register in no way suggests that those who work in Juniper’s Beijing offices are in any way associated with the unauthorised code.

We nonetheless asked Juniper if the code is known to have come from the Beijing facility. That question, and others on when Juniper became aware of the code and whether it has advised governments about the situation, were all met with the answer “We have nothing further to add at this time.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/20/juniper_details_two_attacks_from_unauthorised_code/

Hillary Clinton says for crypto ‘maybe the back door is the wrong door’

Democratic presidential front-runner Hillary Clinton has waded deeper into the debate on encryption with the observation that “maybe the back door is the wrong door”.

Speaking at a debate for Democratic candidates, Clinton was asked if she would legislate “to give law enforcement a key to encrypted technology”.

Clinton’s response was to say “I would hope that, given the extraordinary capacities that the tech community has and the legitimate needs and questions from law enforcement, that there could be a Manhattan-like project, something that would bring the government and the tech communities together to see they’re not adversaries, they’ve got to be partners.”

She went on to say “maybe the back door is the wrong door, and I understand what Apple and others are saying about that.”

That position softens Clinton’s previous calls for weaker encryption, but just what other “doors” she was referring to went unexplained. Clinton’s campaign site says defeating ISIS will require “better coordination and information-sharing all around to break up terror plots and prevent attacks—between European governments and law enforcement, between Silicon Valley and Washington, and between local police officers and the communities they serve.”

The nature of the “Manhattan Project” analogy for encryption co-operation was not explained, so it’s hard to say just what Clinton imagines might be the outcome of a massive, secret, three-year project. Presumably Clinton’s keen on a scheme whereby law enforcement agencies gain access to encrypted data without compromising privacy. Good luck with that, Hillary, as a vote-winner and technical challenge. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/20/hillary_clinton_says_for_crypto_maybe_the_back_door_is_the_wrong_door/

Advent tip #19: Grab hold and give it a wiggle!

The suggestion to “grab hold and give it a wiggle” may not sound like useful security advice.

But if you’re using an ATM to withdraw money then “wiggling” can be a good idea.

That’s because a quick scrutiny of the ATM and its components (e.g. the card slot, the keypad, the moulded surrounds) can help you spot things that are iffy, such as skimming devices.

That’s where crooks glue fake add-on parts onto or around the ATM in the hope of covertly reading in both your card data and your PIN.

Typical skimming components include:

  • A fake card slot stuck over the real one, so your card is read twice when you insert it – first by the crooks, and then by the bank.
  • A fake keypad layered over the real one, so your PIN registers both on the crooks’ keyboard and on the bank’s.
  • A hidden camera and transmitting device, such as a modified mobile phone, that takes a video of your PIN as you enter it.

With a copy of your card data and your PIN, the crooks may be able to clone your card and send other gang members around town with fake cards to make phantom withdrawals.

If you see something, say something!

Inform both the bank and the police, which not only protects you but protects the next guy, too.

Here’s a fun video from the Queensland Police Service in Australia that shows you how cash machine skimming works:

(No video? Watch on YouTube. No audio? Click on the [CC] icon for subtitles.)

A map from the US Attorney’s Office showing the speed at which cloned ATM cards are used in mass withdrawals:

Images of Christmas tree and Advent calendar courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KtmKiaS6vsw/

Sanders presidential campaign accuses Democrats of dirty data tricks

A hacking row is splitting the Democratic Party’s presidential campaign after an incident with the party’s database provider.

Presidential hopeful Bernie Sanders has been cut off from access to the vital voter targeting database after one of his campaign staffers improperly accessed the servers of the database’s host provider, NGP VAN. The company hosts the campaign profiles for both Hillary Clinton and Sanders, and a 45-minute firewall failure allowed a staffer on the latter’s campaign team to view data from the Clinton campaign’s files.

“We fired the staffer immediately and made certain that any information obtained was not utilized,” said Jeff Weaver, Bernie Sanders 2016 campaign manager, in a statement.

“We are now speaking to other staffers who might have been involved and further disciplinary action may be taken. Clearly, while that information was made available to our campaign because of the incompetence of the vendor, it should not have been looked at. Period.”

Weaver explained that the Sanders IT staff warned NGP VAN months ago about failures in the firewall that separates the databases of the two accounts, calling it “dangerous incompetence.”

The now-fired IT guy behind the kerfuffle, Josh Uretsky, told CNN that he took “100 per cent” responsibility. He noticed the firewall was down on Wednesday morning, and only entered the server to create a trace route that could be used as evidence that the flaw existed. He was cut off when NGP VAN closed the hole.

“In retrospect, I got a little panicky because our data was totally exposed too,” he said. “We had to have an assessment, and understand how broad the exposure was and I had to document it so that I could try to calm down and think about what actually happened so that I could figure out how to protect our stuff.”

Uretsky asserted that he was going to report the issue to the Democratic National Committee (DNC), but instead they called him first with a warning that there was something fishy going on in NGP VAN’s servers. He was fired by the campaign shortly afterwards.

DNC turns BOFH

The DNC’s chair Debbie Wasserman Schultz said that the problem was caused by a third-party vendor’s software patch and was quickly fixed by NGP VAN. The database was never open to external attackers, and no financial information, donor records, or volunteer data files were exposed, she said.

NGP VAN will be performing a forensic examination of the incident, and an independent auditor will conduct a parallel analysis to find out what exactly went on. In the meantime, the Sanders campaign will no longer have access to the voter profiling database until a full report from the Sanders team is forthcoming.

“We are working with the Sanders and Clinton campaigns and NGP VAN to establish all of the facts and move forward as quickly as possible,” chair Wasserman Schultz said in a statement.

“Our primary goal at this moment is to ensure the integrity of the data so that the campaigns – and the entire Democratic Party – can continue the important work we do of connecting with voters on the issues that matter most to them and their families.”

The timing couldn’t be worse for the Sanders campaign; the third of the party’s six presidential candidate debates is held on Saturday and it’s a prime fundraising opportunity that will be crippled without access to the database. Prolonged loss of access will be even more damaging.

Sanders campaign manager Jeff Weaver called foul on the DNC’s decision to block his team from the database and demanded an investigation into the DNC’s handling of the data, including the incident reported to NGP VAN back in October.

“By their action, the leadership of the Democratic National Committee is now actively attempting to undermine our campaign. This is unacceptable,” said Weaver at a press conference on Friday.

“Individual leaders of the DNC can support Hillary Clinton in any way they want, but they are not going to sabotage our campaign – one of the strongest grassroots campaigns in modern history. If the DNC continues to hold our data hostage, and continues to try to attack the heart and soul of our campaign, we will be in federal court this afternoon seeking an immediate injunction.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/19/sanders_accuses_dems_dirty_data_tricks/

Researcher claims Facebook tried to gag him over critical flaw

A security researcher who found a critical flaw in Instagram is claiming that Facebook’s chief security officer Alex Stamos tried to get him fired over the discovery.

Earlier this year Wes Wineberg, a contractor with enterprise security intelligence firm Synack, received a tip on IRC about an Instagram server with an open admin panel that could be vulnerable to a flaw in Ruby, since it was using an older version of the software.

After finding a default secret key for the Ruby software online, he tried it out and it was accepted, enabling remote code execution (RCE) that gave him limited access to the command line. After confirming the flaw was exploitable, he then wrote up a couple of bug reports and submitted them to the Facebook security team’s bug bounty program.
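
The underlying weakness is easy to demonstrate in the abstract: a framework that signs its session cookies with a published default secret will treat anything signed with that secret as trustworthy, and in the Ruby case that trust extends to deserialising attacker-supplied objects, which is what turns it into code execution. A deliberately simplified, hypothetical sketch (in Python rather than the actual Ruby stack):

```python
import base64
import hashlib
import hmac

# Hypothetical default secret, as shipped in sample configuration.
SECRET = b"change-me-default-secret"

def sign_cookie(payload: bytes) -> str:
    """Mimic a framework that trusts any cookie bearing a valid HMAC."""
    mac = hmac.new(SECRET, payload, hashlib.sha1).hexdigest()
    return base64.b64encode(payload).decode() + "--" + mac

# Anyone who knows the default secret can mint a "trusted" session.
forged = sign_cookie(b'{"user": "admin"}')
print(forged)
```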

But Wineberg decided to dig a little deeper. He’d submitted bugs to Facebook before and its terms and conditions ask for evidence of flaws that allow deep penetration of the firm’s servers, as long as doing so doesn’t cause server downtime. So he decided to go looking.

Using the RCE flaw, he checked out the user accounts that were stored on the compromised server and found 60 from Facebook and Instagram employees. Sensibly, the account passwords were hashed with bcrypt, but he ran them through John the Ripper, an open source password cracker capable of about 250 guesses a second.

“To my surprise, passwords immediately came back. In only a few minutes of password cracking, I had recovered 12 passwords!” he said. “These passwords were all extremely weak, which is why I was able to crack them despite them being bcrypt encrypted.”

The passwords included six instances of “changeme,” three that were the user’s own name, two of “password,” and one “Instagram.” He logged into one account to prove it could be done and filed a third bug report to Facebook. The company firewalled the server shortly afterwards.
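
bcrypt slows guessing down, but it can’t rescue passwords that sit at the top of every wordlist. A minimal sketch of the kind of dictionary check involved, assuming the Python bcrypt package and a made-up hash:

```python
import bcrypt

# Made-up example: hash a throwaway weak password the way a server might.
stored_hash = bcrypt.hashpw(b"changeme", bcrypt.gensalt(rounds=12))

wordlist = [b"password", b"letmein", b"instagram", b"changeme"]

for guess in wordlist:
    # Every check costs a full bcrypt computation, which is the point of
    # bcrypt; a short list of obvious passwords still falls in seconds.
    if bcrypt.checkpw(guess, stored_hash):
        print("cracked:", guess.decode())
        break
```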

Digging deeper

However, after a closer examination of a server configuration file, Wineberg found an Amazon Web Services key-pair. A scan revealed 82 different AWS S3 storage buckets associated with the key, but only one of them could be opened. In that he found a second key pair that opened up all 82 buckets.

In there he found Instagram’s crown jewels. The buckets stored the source code for the firm’s servers, SSL certificates and private keys for Instagram.com, iOS and Android app signing keys, and email server credentials.

“To say that I had gained access to basically all of Instagram’s secret key material would probably be a fair statement,” Wineberg said.

“With the keys I obtained, I could now easily impersonate Instagram, or impersonate any valid user or staff member. While out of scope, I would have easily been able to gain full access to any user’s account, private pictures and data.”
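
Part of what makes a leaked key pair so damaging is how little work it takes to use one. A hedged sketch of the sort of enumeration involved, with placeholder credentials and assuming the boto3 library:

```python
import boto3

# Placeholder credentials standing in for a key pair found in a config file.
session = boto3.Session(
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",
    aws_secret_access_key="REDACTED",
)
s3 = session.client("s3")

# List every bucket the key pair can see, then sample a few object names.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    listing = s3.list_objects_v2(Bucket=name, MaxKeys=5)
    for obj in listing.get("Contents", []):
        print(name, obj["Key"])
```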

He filed a detailed report to Facebook indicating seven areas of weakness involved in the hack, and on December 1 sent it in. It was then that the shit hit the fan. Facebook’s CSO called Wineberg’s boss at Synack the same day for a little chat.

Stamos informed Synack’s CEO Jay Kaplan that Wineberg had been poking around in Facebook’s servers and the company took a very dim view of the activity. Stamos said he didn’t want to get lawyers involved, but did need assurances that Wineberg wouldn’t be publishing anything on how he got into the S3 buckets and that he had deleted any data retrieved.

“I did not threaten legal action against Synack or Wes nor did I ask for Wes to be fired,” Stamos said in a Facebook post. “I did say that Wes’s behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data.”

Timing is everything

This might sound like a case of corporate bullying, but the timeline of events is important here.

Stamos said, and Wineberg agrees, that the bug report on the initial RCE flaw was confirmed and a payout of $2,500 was made. But when Wineberg submitted the report on weak user passwords, Facebook rejected it and reminded him that he wasn’t supposed to be going quite so far in his research.

“In the future we expect you will make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research,” the email, sent on October 28, stated.

Wineberg asked for clarification about how far he could go and on November 6 got an email from Facebook’s security team, saying the team would “discourage escalating or trying to escalate access, as doing so might make your report ineligible for a bounty.”

He then sent three more emails asking Facebook for clarification on the issue, but got one form response. Then, on December 1, he reported the AWS key issue and got an immediate response that the digging had violated user privacy, and stating “we do not explicitly prevent nor provide permission” to publish his findings.

What’s key, from Stamos’ perspective, is that Wineberg was warned not to do further digging. Wineberg feels that, since Facebook’s terms and conditions don’t explicitly ban this, he’s in the clear and said his legal advisor agrees with him.

Fixing the fracas

In his Facebook post Stamos acknowledges many of Wineberg’s points, but states that the timeline that they both agree on does indicate that the researcher crossed an ethical line.

“Those of us who spent time in the security community in the 1990’s and 2000’s remember the bad old days of bug reporting, when there was a constant drumbeat of stories of security researchers trying to responsibly improve security and software vendors responding to them with legal threats,” he said.

“I have personally been the target of these threats, have stood behind my researchers as a co-founder of a security firm, and have acted as a pro-bono expert witness on behalf of security researchers facing civil and criminal action.”

But a line has to be drawn between finding a bug and actually using it to roam around servers willy-nilly. Not only could such actions cause major problems in a company’s networks – they could lead to the bad old days of companies lawyering up against the security community.

Stamos said that he thought Wineberg was an employee of Synack’s because the researcher used a synack.com email address when contacting Facebook, and he blogged for the company. All the problems identified had now been fixed, he said, and Facebook is examining both its terms and conditions and the responses from its security team.

Facebook might want to take a leaf out of Microsoft’s book – Redmond’s T&Cs explicitly ban investigating its servers for flaws – and researchers submitting flaws to any bug bounty program should be very conversant in how the rules might be changing in light of this case.

But some in the security research community are peeved at this approach, feeling Facebook was too heavy-handed. Others support the move, saying Wineberg shouldn’t have gone as far as he did in exploring Facebook’s servers.

Both sides in this fight appear keen to draw a line under the affair, but the case does highlight the delicate line between legitimate research and sort-of hacking for money. Millions are paid out every year in bug bounty programs, and the system works well in supporting researchers. So long as lawyers are kept out of the picture that should continue unabated. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2015/12/18/facebook_gagged_researcher_over_critical_flaw/