STE WILLIAMS

NASA hack blunder, doxer jailed, PAYE cybercrime, $20k iPhone prize – 60 Sec Security [VIDEO]

Dolloping out Threat Intelligence

There’s a saying that too much of a good thing can be bad for you. We normally apply it to things like ice cream and chocolate, but it applies to the threat intelligence world too. You’d think that by doubling or even quadrupling the number of streaming intelligence feeds into your organization you’d be better off – better informed and more secure. Unfortunately, you’re likely to be wrong.

Over the past couple of years the threat intelligence service industry has really kicked into high gear. Many of the vendors in this area have been supplying streaming intelligence services for upwards of a decade to the manufacturers of popular security appliances and desktop protection suites, but only more recently have enterprise businesses found themselves in a position to consume the data directly.

The growing need for streaming security intelligence is a direct response to the rapidly evolving threat. As the threats that target an enterprise become more adaptive, more dynamic, and more evasive of legacy protection architectures, there’s a driving need for real-time analytics and for inputs into a new generation of dynamic analysis systems. To this end, the common logic is that “more is better” when it comes to threat intelligence. But is it?

Earlier this week I came across an opinion piece at SC Magazine by Kathleen Moriarty (global lead security architect, EMC’s office of the CTO) titled ‘Threat-intelligence sharing is dead, and here’s how to resuscitate it’, in which she touches upon the problems of sharing intelligence data and using it effectively. I agree with her that contemporary threat-intelligence sharing has failed (and, by the way, is increasingly a target for subversion) – in particular, that those participating in threat-intelligence programs have suffered from too much information, and that they struggle to deal with information that is neither actionable nor relevant. However, I believe the requirement to rely upon trusted parties is likely doomed to failure. “Trust” networks, if ad-sharing networks are any indicator, are an open invitation to new attack vectors.

The biggest problem that enterprise threat-intelligence customers are facing can be illustrated by the problem any of us would encounter if we were sitting in an office, surrounded by televisions each blaring away a separate TV news channel, while being expected to absorb and digest the day’s happenings. Too much information is overwhelming. Adding more TVs and news broadcasts only adds to the problem.

But there’s another analogy to be drawn from the same TV news illustration. You’d think that things would become simpler if there’s a late breaking story that most of the channels then start covering at the same time. The simultaneous coverage is likely an indicator that something significant is happening and should be responded to.

Two significant wrinkles with this approach spring to mind. If the majority of the TV channels are covering the same national story, what stories are now not being covered? While they’re all repeating the same news – confirming among themselves the significance of the story – other local stories are being dropped from the day’s coverage. And, as with practically any late-breaking story of significance, the TV channels – each searching for new “facts” and unique commentary – often end up repeating each other’s facts (sometimes providing attribution to a competitor if they can’t confirm it for themselves).

In the threat-intelligence community what you end up with is a myopic fixation on the high-profile threat of the day (e.g. the latest APT that’s made it to the news) to the detriment of other analysis and, I’m sorry to say, a framework that can be easily tainted by bad or mistaken information. There’s so much pressure on the various threat-intelligence providers to provide like-for-like coverage of competitor feeds that each vendor subscribes to or monitors the others and will often add any missing intelligence data to its own feed – even if it can’t confirm that data for itself. This already happens daily among the dozens of blacklist and antivirus signature vendors.

The problems facing streaming threat-intelligence feeds, their vendors, and their consumers are many and (unfortunately) endemic throughout the current intelligence-sharing model. Luckily, a new generation of machine-learning and clustering systems is making great headway in consuming the threat-intelligence feeds of a bloating industry – weeding out superfluous and inaccurate information and preemptively classifying threat categories such as botnets and related domain abuse – but it is still years away from forming the basis for prioritizing actions against the full breadth of today’s threat spectrum within the enterprise.
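A minimal sketch of the kind of consolidation such systems perform, assuming a toy feed format (vendor names mapped to sets of indicator strings); the function and the scoring are illustrative, not any vendor’s actual pipeline:

```python
from collections import defaultdict

def consolidate_feeds(feeds):
    """Merge several threat-intel feeds into one de-duplicated list.

    `feeds` maps a vendor name to a set of indicators (e.g. domains).
    An indicator reported by several vendors earns a higher
    corroboration score; single-source entries are flagged so they
    can be reviewed before anyone acts on them.
    """
    sources = defaultdict(set)
    for vendor, indicators in feeds.items():
        for indicator in indicators:
            sources[indicator].add(vendor)

    consolidated = []
    for indicator, vendors in sources.items():
        consolidated.append({
            "indicator": indicator,
            "score": len(vendors),               # naive corroboration count
            "needs_review": len(vendors) == 1,   # single-source claim
        })
    # Highest-corroboration indicators first
    return sorted(consolidated, key=lambda e: e["score"], reverse=True)

feeds = {
    "vendor_a": {"evil.example", "c2.example"},
    "vendor_b": {"evil.example"},
    "vendor_c": {"evil.example", "other.example"},
}
print(consolidate_feeds(feeds)[0])
```

Note the irony, though: if vendors copy each other’s entries, a simple “seen by N vendors” score overstates corroboration, so the independence of the sources matters at least as much as their number.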

The incestuous nature of the streaming intelligence service industry causes many problems, but also creates new opportunities. While those responsible for safeguarding their corporate networks are overwhelmed with non-actionable information from an avalanche of intelligence data, there is ample opportunity for boutique service providers to step in and provide distilled threat-intelligence advice specific to their clients’ needs.

As kids we’ve probably all dreamed at some stage about having a humongous bowl filled with every flavor of ice cream imaginable and consuming the whole thing until we exploded. As an adult I’ve learned that first asking the person behind the counter which flavors are the best in the store is often a more efficient and less explosive route to enjoyment.

Gunter Ollmann, CTO, IOActive Inc.

Article source: http://www.darkreading.com/attacks-breaches/dolloping-out-threat-intelligence/240161638

Bank robbers pose as IT guys, rig device to slurp £1.3m from Barclays

London’s Metropolitan Police have arrested eight men in connection with a £1.3 million ($2.08 million) bank heist carried out with a remote-control device they had the brass to plug into a Barclays branch computer.

The hardware included a KVM (keyboard, video monitor and mouse) switch and a 3G dongle that enabled the crooks to slurp money from accounts, according to Met Police.

These are legitimate pieces of hardware: as the police explained, a KVM switch is used in business to enable remote work on computers.

Walking into a bank and pretending to be an IT guy to install such a device is, needless to say, a less legitimate prospect.

Here’s how the crooks pulled off the ruse, according to the Met Police statement:

A male purporting to be an IT engineer had gained access to the branch, falsely stating he was there to fix computers. He had then deployed the KVM device. This enabled the criminal group to remotely transfer monies to predetermined bank accounts under the control of the criminal group.

Met Police said that the money was funneled out of the branch, located in the Swiss Cottage district of the north London Borough of Camden, in April.

Barclays Security had actually first reported the £1.3 million loss to police on 05 April 2013.

The report triggered a long-term investigation, led by the Met Police’s Police Central e-crime Unit (PeCU).

As of Thursday, searches were still being carried out across London at addresses where cash, jewelry, drugs, thousands of credit cards, and personal data had already been confiscated.

The eight men, aged between 24 and 47, were arrested in connection with an allegation of conspiracy to steal from Barclays Bank and conspiracy to defraud UK banks.

Det. Supt. Terry Wilson told the BBC that the police team working the Barclays theft is the same group that handled the foiled bank heist plot against Santander – one of the UK’s largest banks – from a week prior.

In that recent foiled Santander scheme, the same type of KVM kit came into play.

Using the same modus operandi in both plots, a man had posed as an engineer from a telecoms firm to rig up the device at a Santander branch.

But because police had been tipped off months prior—presumably, the tip was a rather large, £1.3 million-shaped hole in Barclays accounts—the suspects were arrested within hours of the hardware being put into place and before it was turned on.

On Friday, Barclays put out a whopping 72-word statement that stressed how protecting its customers against such malfeasance is priority Numero Uno:

“Barclays has no higher priority than the protection and security of our customers against the actions of would-be fraudsters.

“We have been working closely with the Metropolitan Police following a security breach at our Swiss Cottage branch in April 2013. We identified the fraud and acted swiftly to recover funds on the same day.

“We can confirm that no customers suffered financial loss as a result of this action.”

As security expert Graham Cluley points out, it sounds like the heist can be attributed at least in part to that most human of failings, politeness:

Most of us are guilty of allowing people we don’t recognise to wander around our offices, fiddling with computers, fixing photocopiers, changing the water cooler. Human nature being what it is, we feel uncomfortable questioning people too closely.

Good security training companies know this.

That’s why they send their people into clients’ offices to see what trouble they can get into and, more importantly, whether anybody’s going to stop them: ask for an ID, tell them not to wander around and peer at monitors, or prevent them from doing things like plugging remote-control hardware into computers.

Barclays, get thee to a security training expert!

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/As2sysQS8w0/

NASA hack blunder, doxer jailed, PAYE cybercrime, $20k iPhone prize

(ISC)2 Congress Addresses Security’s People Problems

There are many conferences and get-togethers around cyber security every year, but only a few would be considered “mandatory” by the whole community of security professionals. The RSA Conference, held each year in San Francisco, offers the industry’s biggest exhibit floor and a chance to see security products in action. Black Hat USA, held annually in Las Vegas, is where the smartest and best security researchers come to reveal vulnerabilities and share knowledge on potential threats.

While these events offer a depth of technological insight unmatched in IT security, though, they don’t necessarily focus on the “people” issues faced every day by the average security professional. That’s why I’ll be in Chicago next week for the third annual (ISC)2 Security Congress, the yearly meeting of the world’s biggest cyber security professionals’ organization.

(ISC)2’s Congress — held concurrently with ASIS, the granddaddy of physical security conferences — doesn’t have an overriding technological “theme” because it isn’t focused on technology. Its focus is discussing the day-to-day, non-sexy issues that all security professionals grapple with, such as staffing, hiring, management and administration. Where other events might have more of a “show” of leading-edge technology or new threats, (ISC)2 is more like a water-cooler conversation among colleagues faced with similar security problems and issues.

Meetings of security professional organizations such as (ISC)2, ISSA, and ISACA represent the “everyman” infosec pro, who may not always be up on the most current products or attacks because he or she is fighting the everyday fires of the enterprise. These are people who work in the trenches of security and are limited by time, budgets, and short staffing. They spend a frustrating amount of time in meetings, arguing with top executives or end users who don’t understand the dangers their systems face every day. Their job is not to be on the leading edge, but to get their data secure as best they can with what they’ve got.

This year, many of (ISC)2’s sessions will focus on how to do more with less, how to train staffers and end users to improve enterprise defenses, and how to make tough decisions about security in a rapidly-changing environment where the needs of the business and the growing range of threats often outweigh the security department’s resources.

If the security industry is to progress, it will occasionally have to step away from technological problems and wrestle with some of these types of people problems. How to fund, find, and keep good security people. How to teach end users not to click on suspicious attachments. How to build security policies that are realistic for the business, yet also enforceable by monitoring and security controls.

These issues won’t be solved at the conference next week, but it’s good to see security professionals working on them together. Cyber criminals are famous for sharing (and stealing) each other’s ideas and techniques, and that sharing has helped them to get an edge on enterprise defenders. Anytime security professionals get together to share their knowledge — whether in small groups or at a major conference — it improves the enterprise’s chances of successfully fighting back.

Article source: http://www.darkreading.com/management/isc2-congress-addresses-securitys-people/240161635

Facebook “Likes” gain constitutional protection for US employees

Happy day, USA: When we click “Like” on Facebook, we are now constitutionally protected from getting fired!

If you’re thinking, “Well, duh, wasn’t I already?”, join the club.

In fact, at least one court had hitherto decreed that the First Amendment to the US Constitution, which (more or less) ensures the right to free speech, didn’t apply to Facebook Likes.

The case came to court after a sheriff from the state of Virginia fired six employees for supporting his opponent in an election.

Mashable’s Lorenzo Franceschi-Bicchierai reports that B.J. Roberts, the sheriff of Hampton, Virginia, had fired the employees who supported Jim Adams, his opponent in the sheriff’s election.

One of the fired employees, former deputy sheriff Daniel Ray Carter, had Liked Adams’s Facebook page.

The fired employees, Facebook and the American Civil Liberties Union (ACLU) joined forces to fight the dismissals.

Together, they argued that a Facebook Like must be considered free speech, which would in turn mean that employers couldn’t legally fire employees for expressing their opinions on the network.

In the first federal ruling on the case, a federal district judge had said that a Like was “insufficient speech to merit constitutional protection”, as Mashable reports.

The judge ruled that a Facebook Like didn’t involve an “actual statement”, unlike Facebook posts, which have hitherto been granted constitutional protection.

On Wednesday, that decision got its own thumbs-down in a federal appeals court.

Judge William Traxler, who authored the decision, said that clicking Like is much the same as putting up a political sign supporting a candidate in your front yard:

“Liking a political candidate’s campaign page communicates the user’s approval of the candidate and supports the campaign by associating the user with it. … It is the Internet equivalent of displaying a political sign in one’s front yard, which the Supreme Court has held is substantive speech.”

Both the ACLU and Facebook’s legal counsel are applauding the decision.

The decision reinstates the claims of Carter, along with two other fired employees, but they haven’t yet actually won the case. If they do, they might get their jobs back, Franceschi-Bicchierai reports.

As commenters on the Mashable story have noted, Facebook Likes can be convoluted creatures. In order to continue to see posts appear in our news streams, we need to click Like, whether that aligns us with candidates we detest or news we abhor.

But regardless of why we click Like, it shouldn’t come back to haunt us. Facebook is now very much an outlet for speech that deserves protection, whether it’s to support a candidate or to follow news about, for example, cancer research.

We follow things. We Like things. We shouldn’t be punished for it.

That doesn’t mean you shouldn’t clean up your slimy Facebook trail if you post about your drunken binges or how much you hate your boss.

As far as I know, the First Amendment doesn’t cover dumb.

Good luck with the case, Mr. Carter, et al. I hope you get your jobs back.

UPDATE: As commenters on my original post have pointed out, this decision doesn’t necessarily protect us all, but it will hopefully set a precedent for how other courts interpret the First Amendment as it pertains to online activities. Thanks for the input goes to Don Amith and csh.

Image of suited bloke telling you to get your coat courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/FJ8JzH8DtKs/

iOS 7 lockscreen hole discovered already – all your private photos could end up online!

Serial iOS bug finder videosdebarraquito has struck again.

He found a bug in the iOS 6.1.3 lockscreen, almost as soon as that update was published (an irony, given that the main purpose of 6.1.3 was to fix various lockscreen flaws).

Now he’s made a video of himself bypassing the lock on just-released iOS 7.

(I’ve given you more than enough to find the video if you want. But I haven’t provided a direct link here. Call me an old-school wowser. I can take it.)

Lock screens have a chequered security history, with Android having its recent share of problems, too.

The main reason is complexity, one of security’s mortal enemies.

You can understand why some exceptions to a phone lock might be desirable, or even required by the regulators: the ability to call the emergency number, no matter what, for example.

Similarly, a clock is handy when the phone is locked, as well as an indication of whether there’s network service available should you want to make a call.

So some “special case” programming is needed in phone lock software, which inevitably means more to go wrong with the part that implements the actual lock.

But functionality to check whether you’ve just dialled the three digits 112, 999, 000, 911, or some other well-known emergency number, and to update a digital clock once a minute, is a far cry from the feature set implemented by the average lockscreen app on a modern smartphone.
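That old-style special case really is tiny. As a toy illustration (not how iOS or any real phone OS implements its lock screen), the entire emergency-dial exception fits in a few lines:

```python
# Well-known emergency numbers, as named in the article. A real phone
# would load the list appropriate to the local network and region.
EMERGENCY_NUMBERS = {"112", "999", "000", "911"}

def may_dial_while_locked(dialled: str) -> bool:
    """A locked phone permits a call only if it is to an emergency number."""
    return dialled.strip() in EMERGENCY_NUMBERS
```

Compare that handful of lines with a modern lockscreen app that touches the camera, notifications, media playback, and the network, and the difference in attack surface is obvious.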

We’re no longer content to have our phones locked: we want them locked, except for a huge raft of features.

Indeed, our terminology even reflects that: we tend to say, “My phone’s at the lock screen,” not, “My phone’s locked.”

In truth, the phone isn’t locked at all – the lockscreen app typically requires and makes extensive use of access to the network and the filing system, plus the ability to interact fully with the user.

Worse, we’re not content with just seeing general information on our lockscreens, like the latest weather and news headlines; we’re happy for our “locked” phones to continue to disgorge information of a more personal nature, such as posts to our Facebook walls, tweets we’re mentioned in, and more.

And heaven forfend that we ever have to fumble with the phone lock before we are able to snap a photo!

Apple addressed these issues in iOS 7 with what it describes as a feature, but that I consider a bad idea from the start. (Call me an old-school wowser. I can take it.)

It’s called Control Center, and it flies under the banner that “some things should only be a swipe away. And now they are.”

Control Center gives you quick access to the controls and apps you always seem to need right this second. Just swipe up from any screen — including the Lock screen — to do things like switch to Airplane mode, turn Wi-Fi on or off, or adjust the brightness of your display. You can even shine a light on things with a new flashlight. Never has one swipe given you so much control.

Sadly, that one swipe, combined with some dextrous fingerwork, gives videosdebarraquito so much control that he can access your photos via a backdoor entrance.

It seems he gets from the lock screen to the control center, from there to the alarm clock, and from there, by means of some deft fingerwork – described in his video as “double click on the home button, but the second click is slightly stretched” – into your photo gallery.

Now he can do whatever you could do with your photos if the phone were unlocked: look at them, delete them, upload them and post them on social networking sites.

Let’s hope that Apple fixes this bug quickly.

In the meantime:

  • Reduce the functionality available from the iOS 7 lockscreen, notably turning off access to the control center.
  • Don’t take photos of a genuinely personal or private nature on your phone. (Call me an old-school wowser. I can take it.)

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/thnm89-XIrc/

Win Bitcoins, booze and cash! Be the first to crack the iPhone’s Touch ID fingerprint sensor…

The fingerprint sensor on Apple’s new iPhone 5s could well be the device-within-a-device that brings biometrics into the everyday mainstream.

(There’s good and bad in that. The good news is that if you paid extra for a laptop, years ago, because it had a fingerprint scanner you could never get to work, you’ll no longer be seen as a technology sucker but as an early adopter.

The bad news is that any hope of arguing for the end of fingerprint scanners in US immigration lines will be lost forever. Heck, if you can do it for Apple, you can do it for Uncle Sam!)

For all that I recently wrote – this very morning, in fact – that convenience is “one of security’s mortal enemies,” Apple’s Touch ID might end up as a blessing in disguise entirely on account of its ease of use.

People who are too lazy to bother with proper passwords or even four-digit passcodes on their phones (like Marissa Mayer, CEO of Yahoo!, no less) might be willing to use Touch ID, since it makes it slicker for them to get back into their phone one-handed.

But one burning question still remains, and in common with many Naked Security readers, you’re probably asking it yourself: “How safe is it?”

Could you defeat it with a gelatin mould, for example?

Well, if you’re willing to put Touch ID to the test, you might find yourself in line for some crowdsourced prizes.

Numerous individuals have so far pledged a mixture of cash, booze and patent application payments if you can clone someone’s fingerprint (it can be one of your own, which simplifies the experimentation) and unlock an iPhone 5s.

Actually, the rules are a little stricter than that: you have to “lift” a fingerprint off something else the user has touched, so you’re not allowed to press your finger into a Gummi Bear and then swipe the confectionery over your iPhone.

A Gummi Bear hack would be cool if it worked, but it wouldn’t be enough to walk off with what currently amounts to about US$15,000 in cash, several litres of spirituous liquor, roughly 20 Bitcoins in various fragmentary sizes, “one free patent application covering the hack”, and more.

Here’s what you need to do:

It sounds like an interesting and amusing experiment, and I look forward to seeing if anyone can find a way to defeat the sensor reliably.

The Touch ID sensor isn’t supposed to work with a severed finger, which is a modest comfort, although ironically it implies that a genuinely desperate and violent criminal would need to threaten you with something worse than merely cutting off your finger to force you to unlock your phone against your will.

On the other hand, we know Touch ID doesn’t actually need a finger, or even a human being, as Darrell Etherington over at TechCrunch discovered “after commandeering a cat.”

Fancy giving it a try? (Cloning a fingerprint, not commandeering a cat.)

Go for it, although if you succeed, you’ll have another set of problems to solve: actually getting your prizes out of the crowdsourcers.

According to the website, even the terms and conditions are “up to each individual bounty offerer,” which sounds as though things might get labyrinthine.

And the lion’s, or at least the cat’s, share of the prize money so far ($10k of it) has been put up by a venture capital startup that seems to be having trouble paying to keep its own website running right now, let alone coming up with ten large ones for left-field experiments into fingerprint trickery:

But you won’t be doing it for the money, I’m sure – you’ll do it for the fame, right? (That’s listed as one of the prizes.)

Image of fingerprint on main page courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ur1-u90O1IA/

LinkedIn users sue over service’s “hacking” of contacts and spammy ways

Brian Guan, a principal software engineer at LinkedIn (currently on sabbatical), said it all when he described his role on the site:

Devising hack schemes to make lots of $$$ with Java, Groovy and cunning at Team Money!

Also, LinkedIn’s 2011 10-K [*] identified its key strategy as being to “Foster Viral Member Growth.”

Mind you, the fact that LinkedIn wants to grow virally and make money isn’t terribly surprising, but the way the professional networking site is doing it has now spawned a class action lawsuit.

Four LinkedIn users in the US are suing the company for allegedly “hacking” users’ email accounts, downloading their address books, and then repeatedly spamming out marketing email, ostensibly from the users themselves, to their presumably beleaguered contacts.

The complaint, filed on Tuesday in US District Court for the Northern District of California, outlines the steps LinkedIn goes through to “hack” into users’ external email accounts and extract email addresses, all without obtaining users’ consent or requesting a password.

First, LinkedIn requires an email address to sign up for the service. Next, it harvests email addresses of anyone with whom the users have ever exchanged email.

The service then sends a total of three emails to a given user’s contacts, including an initial pitch, followed up by two reminder emails if the users don’t sign up for a LinkedIn account.

Each of these emails contains the LinkedIn member’s name and likeness so as to make it appear that the member is endorsing LinkedIn, and none of them entails notice to, or consent from, the LinkedIn member, the complaint charges:

The hacking of the users’ email accounts and downloading of all email addresses associated with that user’s account is done without clearly notifying the user or obtaining his or her consent. If a LinkedIn user leaves an external email account open, LinkedIn pretends to be that user and downloads the email addresses contained anywhere in that account to LinkedIn servers.
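The alleged three-email sequence is simple enough to model in a few lines of Python. The function below is a reconstruction from the complaint’s description (one initial pitch to every harvested contact, then up to two reminders to anyone who hasn’t signed up); it is illustrative, not actual LinkedIn code:

```python
def run_invite_campaign(contacts, signed_up):
    """Model of the email flow alleged in the complaint.

    `contacts` is the list of harvested addresses; `signed_up` is the
    set of those who have already joined. Returns how many emails each
    contact would receive under the alleged scheme.
    """
    sent = {c: 1 for c in contacts}      # initial pitch to every contact
    for _reminder in range(2):           # two follow-up reminders...
        for c in contacts:
            if c not in signed_up:       # ...only to those who never joined
                sent[c] += 1
    return sent
```

Three emails per hold-out contact, each apparently “from” the member: multiply that by every address ever seen in the member’s mailbox and the scale of the grievance becomes clear.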

The LinkedIn users who filed the complaint are Paul Perkins, Pennie Sempell, Ann Brandwein, and Erin Eggers.

Perkins, a New York resident, formerly served as manager of international advertising sales for The New York Times, the complaint says.

Brandwein is a statistics professor at Baruch College in New York. Eggers is a film producer and former vice-president of Morgan Creek Productions in Los Angeles, and Sempell is a lawyer and author in San Francisco.

The quartet acknowledge in the complaint that LinkedIn asked for permission to “grow” their networks, but they claim that the service never said it would send a series of email invitations to their contacts.

In fact, it’s only Google that gives Gmail users a heads-up that downloading is going on, the complaint states (all four LinkedIn users on the complaint are also Gmail users):

In cases where the user’s external email account is a Google Gmail account, a Google screen pops up stating, “LinkedIn is asking for some information from your Google Account.” … The Google notification screen, however, does not indicate that LinkedIn will download and store thousands of contacts to LinkedIn servers. Rather, this notification screen misleadingly states that LinkedIn is asking for “some information.” LinkedIn does not provide this notification to its users; it is Google that provides this screen.

The complaint notes that LinkedIn’s site contains hundreds of complaints linked to the practice.

The plaintiffs are accusing LinkedIn of violating the federal wiretap law as well as California privacy laws, and are seeking class-action status.

LinkedIn users, are your friends complaining about LinkedIn’s sending spam under your name and photo?

Would you sign up for the suit, or do you instead consider LinkedIn’s process just the cost of getting a free service?

And what do you think of the word “hacking” with regard to LinkedIn’s alleged practices? It sounds more like “marketing” to me, but that all boils down to semantics.

Let us know what you think in the comments below.

[*] US companies submit Form 10-K reports each year to the Securities and Exchange Commission, giving detailed information about corporate performance, finances and so forth.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8XNVVkkzD_Y/

Layoffs at EMC’s RSA security division


RSA, the security division of EMC, has confirmed plans to restructure its business, a move that means an unspecified number of long-term staffers will be shown the door.

Details are scarce for now, but RSA said that it plans to make new hires that will more than offset the job losses by the start of 2014.


It wrote in an email:

While details remain confidential, I am able to tell you that RSA realigned resources this quarter, which resulted in some RSA employee reductions and identification of new roles to be hired. RSA intends to end 2013 with more employees than the business had at the beginning of the year.

EMC acquired RSA Security for $2.1bn in 2006. In its latest quarterly figures (released in July), EMC said its RSA Information Security business had increased revenue 3 per cent year over year, as a component of “sales and other revenues” at EMC that came out at $5.6bn for Q2 2013. EMC’s overall revenue was up 6 per cent, which means its security division is slightly behind the overall growth curve.

RSA’s SecurID hardware tokens have been an industry standard for many years, but the market is diversifying and moving towards forms of two-factor authentication based on software agents running on smartphones and other methods.
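Most of those smartphone-based schemes boil down to TOTP (RFC 6238), a time-based variant of HMAC one-time passwords: the "software agent" is just this computation seeded with a shared secret. A minimal sketch using only the Python standard library (the digit count and 30-second period are the common defaults, not anything RSA-specific):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, period=30):
    """RFC 6238 time-based one-time password (built on RFC 4226 HOTP)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // period                 # time step number
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59 seconds
print(totp(b"12345678901234567890", for_time=59, digits=8))  # prints "94287082"
```

The verifier runs the same function with the same shared secret and compares results, usually accepting a step or two of clock skew on either side.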

This change in a mature market has been predictable, and RSA has been planning for it for several years, partly through acquisition. It now has a more diversified portfolio featuring governance and compliance, network monitoring, and security management products alongside the authentication technology that made it famous.

Some argue that the change has been accelerated by the infamous 2011 breach, by state-sponsored hackers reportedly from China, of core systems associated with RSA SecurID; material stolen in that attack was later used in unsuccessful attacks against military contractors that, like so many firms, relied on RSA’s technology to secure remote-access connections.

Whatever the reasons – and RSA isn’t saying – it looks like the firm is offloading staff whose expertise is tied to its legacy token business while making new hires that align it better with new strategies, among them a focus on Big Data as a way to improve security strategy, an approach that provides a more natural fit between RSA and its owner, storage giant EMC. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/09/20/rsa_restructuring/