
Getting To The Root Of Application Security Problems

Though many enterprises invest in security testing ranging from automated vulnerability scans to full-scale penetration testing, only rarely do organizations perform root cause analysis on the results and feed that information back into the application development lifecycle. Many experts within the security community say that this lack of root cause analysis is keeping application security stuck in a rut.

“In most cases, organizations focus on solving the symptom and very rarely focus on addressing the underlying causes of their vulnerabilities,” says Ryan Poppa, senior product manager for Rapid7. “Vulnerabilities are an unfortunate part of the development cycle, but it should be taken by organizations as an opportunity to learn and to move the ball forward, instead of just being seen as a cost to the business that needs to be minimized.”

According to Caitlin Johanson, security architect at Veracode, all too often organizations continually let history repeat itself rather than learning from ghosts of application security past. This is why organizations continue to see the same types of vulnerabilities crop up over and over again in their code, even within newly developed applications.

“Your best practices should always be based on your worst failures. If you have reported incidents and keep tripping over the things you swept under the rug, it’s time to face the music and develop applications the right way,” Johanson says.

[Are you missing the downsides of big data security analysis? See 3 Inconvenient Truths About Big Data In Security Analysis.]

According to Ed Adams, CEO of Security Innovation, while automated scanners have made it easy for a fairly low-level technician to identify potential problems in software, an organization needs someone who has an understanding of how the system functions and access to the source code to fix the problem.

“Here’s the rub: recent research has shown that developers lack the standards and education needed to write secure code and fix vulnerabilities identified via security assessments,” he says, citing research his firm commissioned from the Ponemon Institute.

Further exacerbating the problem is the issue of false positives, which “drives developers nuts” as they chase down non-existent problems at the expense of building out future functionality in other projects. Meanwhile, the remediation guidance provided by most tools is watered down to the point of ineffectiveness.

“[The tools are] often just regurgitating stock CWE content which has no context for a developer and is agnostic to the language and platform in which the developer is coding,” he says. “This means it’s mostly useless and tools often become shelfware.”

More problematic, though, is that the skillsets and ethos held by coders and those held by vulnerability scanning or penetration testing experts come from two different worlds.

“Programming and penetration testing are fields that require constant learning and unfortunately the two fields don’t always line up,” says Joshua Crumbaugh, founder of Nagasec, a security research and penetration testing firm. “Programmers want to write more secure code and penetration testers want to help protect the code. But most never have time to take away from their studies to cross train in the other field and though most penetration testers know some coding they tend to use very different styles, functions and even languages.”

Long term, bridging that gap will require educational institutions to put a greater emphasis on information security across all IT and programming-based degrees, says Crumbaugh, who believes that some companies could even go so far as to encourage IT staffers and programmers to take penetration testing courses to understand the security mindset.

But that could take years. What can organizations do to start employing root cause analysis in the here and now? Right off the bat, Adams says organizations need to start with context.

“To leverage this vulnerability information to the fullest, you’ve got to make it contextual for the developer,” he says. “Start by asking your developers if they understand enough about the vulnerability reported to fix the problem and code securely per your organization’s policies and standards. Chances are the answer will be no.”

To truly offer useful context, developers need to know at least five important things, Adams says. First, what the possible countermeasures are against a particular vulnerability. Second, which countermeasure is most appropriate for the application, or which one the organization prefers. Third, whether the framework they’re using has built-in defenses available. Fourth, how to implement the chosen countermeasure correctly. And fifth, how to code that countermeasure in the appropriate development language.
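As a hedged illustration of what that kind of contextual guidance can look like (the vulnerability class, table and variable names here are ours, not Adams’s), consider the standard fix for SQL injection, sketched in Python with the built-in sqlite3 module:

import sqlite3

# Set up a throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern: concatenating untrusted input into the query.
#   query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Countermeasure: a parameterized query. The driver treats the input
# strictly as data, never as SQL, so the injection attempt is inert.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)   # [] -- the booby-trapped name matches no user

The point is precisely the one Adams makes: the countermeasure only helps a developer who knows it exists, knows whether their framework supports it, and knows how to write it in their own language.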

Additionally, organizations should consider rethinking their definitions of terms as fundamental as “Quality Assurance” and “bugs.”

“Organizations need to stop being afraid to redefine what ‘Quality Assurance’ means, because it sure doesn’t mean just functional code anymore,” Johanson says. “Quality applications should function, but function securely.”

Similarly, organizations tend to have an easier time fixing the root cause of security problems if they stop calling them vulnerabilities and start lumping them in with all the other bugs they fix.

“That’s how developers speak and they know how to triage, log and prioritize ‘bugs,'” Adams says. “It sounds silly, but for whatever reason, when organizations treat security bugs differently, the process around remediation and triaging seems to stumble.”

Organizations should also consider redesigning the way security teams interact with dev teams.

“Stepping up their game means bringing development and security together. Force them to be friends – well, not force – but they need each other,” Johanson says. “The applications will only be as intelligent as the folks behind them.”

Adams says that it helps if you visualize a pyramid with just a very small team of security professionals at the top and at the bottom an army of developers.

“High performance organizations nominate a security champion on each development team to interface with the security team. This is the middle layer of the pyramid,” he says. “The important point is that the security champion is part of the development organization. That way you’ve got software engineers interacting with the army of developers.”


Article source: http://www.darkreading.com/applications/getting-to-the-root-of-application-secur/240160693

Reality TV mother-of-eight Kate Gosselin sues husband for “hacking” email, phone, revealing private info

Kate Gosselin, who shot to fame in the US after appearing in ‘Jon & Kate Plus 8’, a reality TV docusoap about her life with her eight children, including sextuplets, is suing her ex-husband for allegedly hacking into her personal email account, her phone and her bank account, as well as stealing a hard drive full of personal files including family photos.

The information yielded by the alleged hacking and data theft went into a much-hyped book on the couple’s heavily publicised divorce, written by Robert Hoffman, a tabloid journalist and friend of Jon Gosselin, the celebrity ex-husband, who is also named in the suit.

The book was pulled by Amazon after allegations that it relied on improperly-sourced information.

Hoffman claims to have found the information by rummaging through Ms Gosselin’s bins, but is also quoted as hinting he has over 5,000 personal photos belonging to her – an unlikely find for a dumpster-diver.

The story has been carried by huge numbers of celeb-loving media outlets, including the notorious Mail Online website, probably mainly as an excuse to carry plenty of photographs of the plaintiff in a variety of outfits.

All stories of course refer to the heinous act of hacking.

The legal papers on the case, filed in the US District Court for the Eastern District of Pennsylvania and dug out by celeb site Radar Online among others, also make occasional use of the terms “hacking” and “hack”, but as so often in these cases it would appear that the words are being used in the loosest possible sense.

A more accurate way of describing the husband’s activities might perhaps be “guessing her password”, and possibly even “knowing the password having been married to her for 10 years”. There certainly seems to be no evidence of any special technical skill involved in accessing the information.

The moral of the story will of course be that you should ensure your passwords are fit for purpose and kept private.

If you are a celebrity with oodles of private information you don’t want leaked in a bestselling memoir – and you have a grumpy and possibly vindictive former partner who might know (or have enough knowledge of you to guess) that your email account password is 12345 – you are best advised to change it as soon as possible.

And to change it to something that cannot be guessed, even by someone who knows the names of all your favourite pets, former teachers and most beloved sports teams.

The same advice holds true for normal people, as well as celebrity octomoms. Better still, let a password manager utility create properly complex passwords for you, different ones for all sites, and all hidden behind a single extra-strong passphrase.
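For the curious, the core of what a password manager does is no mystery; here is a minimal sketch in Python (the alphabet and length are illustrative choices, not a recommendation from any particular product):

import secrets
import string

# Draw from letters, digits and punctuation using a cryptographically
# secure random source -- never the ordinary `random` module.
alphabet = string.ascii_letters + string.digits + string.punctuation

def make_password(length=24):
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())   # something like 'k;R8v}Qm...' -- new every run

No pets, teachers or sports teams in sight.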

There is of course another side to this story, as it would be unkind to put the blame entirely on someone who seems to be guilty of nothing more than the almost universal crime of poor password hygiene.

There have been many cases of partners falling out and using their intimacy to get at information about their estranged other halves that they really should not be seeing, and many of these cases, quite apart from being rather sad, involve some sort of crime being perpetrated.

In a lot of cases, those involved are not fully aware of the criminal nature of their activities.

So if you find yourself on the other side, trying to get at information which is not rightfully yours, ask yourself, should I really be doing this?

If it were, say, an expensive wristwatch or a fancy pair of shoes, rather than some digital bank records or racy celeb photos, would that make a difference? If it were secured by a physical lock rather than a password, would it be right to bust in and make off with the swag?

The answer should be, probably not – so leave that data alone.


Image of Kate Gosselin courtesy of s_bukley / Shutterstock.com. Image of padlock courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Go5uRE4JS0s/

Anatomy of a dropped call – how to jam a city with 11 customised mobile phones

When you think of “signal jamming,” you probably imagine some kind of fine steel mesh that blocks out radio transmissions altogether, or a source of electromagnetic noise that interferes enough to make legitimate communication impossible.

But a paper presented by a trio of German researchers at the recent USENIX Security Symposium reveals a much more subtle approach to jamming mobile phone calls.

They were able to convert a single mobile phone into a denial of service (DoS) device that could be turned against another subscriber, perhaps wherever they roamed through a whole town or city.

The paper is quite technical, and unavoidably filled with the jargon of mobile telephony, yet the authors have done an excellent job of making it into a comprehensible read that teaches you a number of useful security lessons.

As they point out very clearly, many of the security decisions taken in the early days of the GSM (Global System for Mobile) system were based at least in part on security through obscurity.

The consensus back then seemed to be, “Nobody will ever be able to build their own base station, or make their own handset!”

So why bother going to the trouble of designing in security to protect against the hardware and firmware of the network itself turning hostile?

All that has changed, with open source implementations available for both base stations and handsets.

As a result, security shortcuts that didn’t seem to matter much 20 years ago have come back to haunt us.

How your phone receives a call

Mobile phones aren’t in a perpetual state of readiness to receive calls or SMSes (text messages) instantaneously.

Instead, your phone spends most of its time in a low-power mode, from which it can be signalled to wake up fully to accept a call or message. (That’s why your phone battery may well last for days when you aren’t making or receiving calls, but typically only hours when you are.)

Rather casually simplified, and with apologies to the authors of the USENIX paper, this is what happens when a nearby cell tower decides it’s time for you to get a call:

  1. The base station sends out a broadcast page containing an identification code for your phone.
  2. Your phone recognises its own identification code.
  3. Your phone wakes up and responds to the base station.
  4. The base station and your phone negotiate a private radio channel for the call.
  5. Your phone authenticates to the base station.
  6. Your phone starts ringing (or an SMS arrives).

How an attacker can “jam” your calls

You can probably spot what computer scientists call a race condition in the sequence above, caused by the fact that authentication happens late in the game.

Every device in range can listen in to the broadcast pages inviting your phone to wake up, so a device that’s faster than yours can race you to step 5 and win, causing your phone’s attempt to authenticate to be rejected.

Of course, the “jamming” phone doesn’t know how to authenticate, but that doesn’t matter; in fact, it can deliberately fail the authentication, causing the process to bail out at step 5.

There is no step 6, so the call is lost – invisibly to you, because you lost the race to reply – and service is denied.
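To make the race concrete, here is a toy model of it in Python; the timings are invented for illustration, and real basebands obviously don’t work like this:

# The base station accepts whichever reply to its broadcast page
# arrives first, and only authenticates afterwards (step 5).
VICTIM_RESPONSE_MS = 400     # a typical commercial handset (assumed)
ATTACKER_RESPONSE_MS = 150   # a tuned open source baseband (assumed)

def deliver_call():
    responders = sorted(
        [("victim", VICTIM_RESPONSE_MS), ("attacker", ATTACKER_RESPONSE_MS)],
        key=lambda r: r[1],
    )
    winner = responders[0][0]     # steps 2-4: the fastest reply wins
    if winner == "attacker":
        return "call dropped"     # step 5: attacker fails auth on purpose
    return "phone rings"          # step 6: the legitimate outcome

print(deliver_call())   # 'call dropped' -- and the victim sees nothing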

The authors got this attack working with a tweaked open source baseband (mobile phone firmware) that was adapted to ensure that it ran faster than a wide range of commercial handsets, including the Apple iPhone 4S, Samsung Galaxy S2 and BlackBerry Curve 9300.

How an attacker finds your phone

There is no authentication or encryption during the “are you there?” message and the “here I am!” reply, so an attacker doesn’t need any cryptographic cleverness to work out which messages are meant for what devices.

There is a slight complication, however: the attacker probably doesn’t know your phone’s identification code in advance.

To be strictly correct: the code is tied to your SIM card, not to the phone hardware itself, since every SIM has a unique code called an IMSI (International Mobile Subscriber Identity) burned into it, rather like the MAC address in a network card.

But GSM phones deliberately minimise the frequency with which unencrypted IMSIs are visible on the network, in order to provide you with some safety and privacy against being tracked too openly.

Instead, occasional exchanges involving your true IMSI are used to produce a regularly changing TMSI, where T stands for Temporary.

The TMSI is a pseudorandom, temporary identifier that varies as a matter of course as you turn your phone off and on or roam through a network.

The network operator maintains a list to keep track of which TMSI relates to what IMSI at any moment, but that database is unlikely to be accessible to an attacker.

The authors used traffic analysis to get round this problem.

While sniffing all the TMSIs being broadcast on the network, they call your number 10 to 20 times in quick succession, but deliberately drop each call after a few seconds.

The TMSI that suddenly appears 10 to 20 times in quick succession in the sniffer logs, as the network tries to track you down with its broadcast pages, is almost certainly the one they want.

Easy, isn’t it?
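If you want to see just how easy, here is the gist of that traffic-analysis step in a few lines of Python (the sniffer log is made up for the example):

from collections import Counter

# TMSIs seen in broadcast pages while we placed ~15 dropped calls.
sniffed_pages = (
    ["0x2ab41c77"] * 14      # pages our bogus calls triggered
    + ["0x91f05d22"] * 3     # other subscribers' ordinary traffic
    + ["0x5c7e0b19"] * 2
)

tmsi, hits = Counter(sniffed_pages).most_common(1)[0]
if 10 <= hits <= 20:
    print("target TMSI is probably", tmsi)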

→ As long as they drop the call after the TMSI has sent in a broadcast page but before your phone gets past the authentication stage (step 5 above), your phone won’t ring and the imposter calls won’t show up. That means you won’t be aware that anything dodgy is going on. The authors used trial and error to determine a suitable call-drop delay for the network provider they targeted, finding that 3.7 seconds worked well.

How the attacker finds out which cell you are in

Here’s the thing: he doesn’t need to know more than your general location.

When you receive a call, the mobile network doesn’t page for your phone only in one cell of the network – it pages throughout your location area, which is a cluster of base stations in the vicinity.

This means that the network doesn’t need to keep precise tabs on you all the time, which in turn means that your phone doesn’t have to tell the network exactly where it is from moment to moment, thus extending battery life.

So as long as I know you are somewhere, say, in the City of Sydney, I can sit in a coffee shop at the Opera House and sniff for your TMSI wherever you go in town, because the broadcast pages that go out when I make those 10 to 20 bogus calls are duplicated everywhere in the location area.


The authors did some warmapping drives around Berlin, their home turf, and determined that location areas can be very extensive, ranging from 100 km² to 500 km².

(For comparison, the City of Sydney, which stretches from the Harbour Bridge south as far as Central Station, is just 25 km².)

How the attacker can amplify the attack

Instead of looking out for your TMSI and blocking your calls, what if the attacker wanted to block every call to knock a large metro area out in one go?

One rigged sniffer phone alone couldn’t do it.

The authors found that although their tweaked phone baseband could beat many popular mobile phones in the race to authenticate, it still took about one second to “jam” each broadcast page, limiting each phone to about 60 “jammed” pages per minute.

So they built a rig with eleven tweaked phones, thus allowing them to subvert more than 600 broadcast pages per minute (11 phones × 60 pages per minute = 660).

Their measurements suggested this would be enough to knock out the service of at least some of the four major German operators across one location area (100 km² to 500 km²) in metro Berlin.

Remember that the eleven attack phones don’t have to be distributed through the location area, since all broadcast pages are replicated through all cells in the area.

The only problem the authors faced was how to allocate the TMSI broadcasts amongst their eleven tweaked phones.

Using a messaging system to hand out each successively sniffed TMSI to the next phone on the list required the use of a serial connection to each phone, which was too slow.

In the end, they simply allowed each phone to select TMSIs by a bit pattern, so that phone 1, for example, might handle TMSIs starting with the bytes 0x00 to 0x1F, and so on.
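In other words, something like this sketch (the slicing arithmetic is ours; the paper’s actual bit patterns may differ):

NUM_PHONES = 11

def phone_for(tmsi: bytes) -> int:
    # Split the 256 possible first-byte values into 11 roughly
    # equal, contiguous slices -- no coordination needed at runtime.
    return tmsi[0] * NUM_PHONES // 256

print(phone_for(bytes.fromhex("1f00aa55")))   # first byte 0x1F -> phone 1
print(phone_for(bytes.fromhex("ff00aa55")))   # first byte 0xFF -> phone 10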

→ As an amusing side-effect of tuning the partitioning algorithm to ensure that each phone handled about the same quantity of broadcast pages, the authors noticed that the bytes in most TMSIs were far from randomly distributed. Ironically, in this case, the lack of randomness made the partitioning job harder, not easier.

What about interception, not just jamming?

As the authors note, in some mobile networks, they could go further than just cancelling your calls and knocking you off the network.

They observed that some networks, presumably for performance reasons, cheat a little on step 5, and don’t authenticate every call.

In these cases, an attacker who can win the race to the authentication stage (step 5 above) can do more than cancel your call – he can accept it instead (or receive your SMS), from anywhere in your location area, and you won’t realise.

Also, some networks still use outdated, broken versions of the A5 encryption algorithm that is part of the GSM standard.

On these networks, your calls can be sniffed and decrypted anyway, but in a busy metro area, an attacker is faced with problems of volume: how to home in automatically only on the calls he really wants to intercept, without having to listen to everyone else’s chatter too.

The authors’ “jamming” firmware could be modified to do just that job, used as a call alerting mechanism instead of for a denial of service.

→ Sniffing the call data for later decryption can’t be done from anywhere in the location area, which is a small mercy, so an attacker needs to be in the same cell as you.

What to do about it?

You can probably guess what mitigations the authors proposed, because they are obvious and easy to say; you will also probably wonder if they will ever happen, because they involve change, and potentially disruptive change at that, so they are hard to do.

Defending against the eavesdropping and call hijacking problems is straightforward: perform authentication for every call or SMS, and don’t use broken versions of the GSM cipher.

The system already supports everything that’s needed; all that is required is for it to be turned on and used by every operator.

Defending against the denial of service problem is slightly trickier, as it needs a protocol change: move authentication up the batting order to prevent the race condition.

The authors propose a technically simple way to do this, but it means shifting some of the cryptographic operations from the authentication stage (step 5 above) to the “are you there?/here I am!” stages (steps 1 and 2).

Unfortunately, these mitigations don’t include steps you can take to help yourself; they need changes from the mobile operators.

Will that happen?

Or will backward compatibility, the thorn that is making Windows XP so hard to dislodge, get in the way yet again?

Image of No Mobile Phones sign courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8oTJInuBI1U/

Apple neglects OS X privilege escalation bug for six months, gets Metasploit on its case…

Six months ago, we wrote about a risky bug in the sudo command, the Unix equivalent of Run As… on Windows.

You use sudo to run an operating system command as a different user, usually root, the all-powerful Unix administrator account.

This means that bugs in sudo are not to be sniffed at, and we were happy, back in March, to be able to praise the curators of the Sudo project for their rapid response.

The bug revisited

Our comprehensive analysis of the bug, and why the sort of programming that caused it (in-band signalling) is probably best avoided, can be summarised as follows:

  • When you first use sudo, it creates a directory called /var/db/sudo/username to record when you last ran it.
  • If you run sudo again within five minutes of the timestamp on /var/db/sudo/username, by default sudo doesn’t ask for your password, as a convenience.
  • If you run sudo -k, it sets the timestamp back to 01 January 1970, which forces sudo to ask you for a password next time, no matter how soon you run it.

You may wonder why sudo -k resets the timestamp to 01 January 1970 (the earliest date Unix cares about, represented as zero in numeric terms), rather than simply deleting the /var/db/sudo/username directory altogether, which would be a simpler and safer approach.

The reason is that if you have never run sudo before, it doesn’t just ask for your password, but gives you a little “pep talk for newbies” first.

On OS X, it’s terribly businesslike, and looks something like this:
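WARNING: Improper use of the sudo command could lead to data loss or
the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.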

Linux is a bit more community oriented, and wanders into social ethics:
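We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.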

Without the file /var/db/sudo/username, you get the pep talk every time.

Apparently, being confronted with a helpful warning when you are no longer a newbie is considered infra dignitatem, so anyone who deliberately gives up their five-minute sudo privilege window with the -k option is treated with kid gloves.

Thus the special meaning of 01 January 1970: it suppresses the mini-lecture, but still asks for your password.

Anyway, the risky bug, which existed until February 2013, was that if the clock ever actually did get set to 01 January 1970, anyone who had run sudo before would seem to have run it within the last five minutes.

As a result, they could run anything they wanted as root without entering a password.

A risky vulnerability indeed.
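The heart of the problem is easy to model. Here is a toy reconstruction in Python – emphatically not sudo’s actual source code – of why using the epoch as an in-band signal goes wrong:

GRACE = 5 * 60   # the five-minute convenience window, in seconds

def password_needed(stamp: float, now: float) -> bool:
    # Intended meaning: stamp == 0.0 (01 January 1970) says "always ask".
    # But the pre-fix logic, modelled here, only checked the age:
    return now - stamp >= GRACE

stamp = 0.0   # the timestamp left behind by `sudo -k`
now = 0.0     # what time.time() returns once the clock is wound back
print(password_needed(stamp, now))   # False -- root, no password asked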

Six months on

If you’re an Apple OS X user:

  • Apple still hasn’t updated the version of sudo that is part of OS X.
  • The time and date can easily be changed on OS X, without entering an administrative password, using the systemsetup utility.
  • A module has recently been published for the do-it-yourself break-and-enter toolkit Metasploit to exploit these holes.

That’s a bad combination.

What can you do about it?

• Deauthenticate yourself with sudo -K rather than sudo -k.

Instead of setting your timestamp to the special value of 01 January 1970, this option removes the timestamp directory altogether, as if you had never run sudo before.

Next time you run sudo you’ll get the mini-lecture and be asked for your password.

Even if the 01 January 1970 bug isn’t patched, it can’t be exploited if the /var/db/sudo/username directory doesn’t exist.

• Consider setting the timestamp_timeout value to zero in the sudo configuration file (Defaults timestamp_timeout=0 in /etc/sudoers, edited via visudo).

This means that there is no convenience period within which you can run sudo again without being asked for a password.

You will require a password every time.

• Reduce the number of users in the OS X admin group.

If you aren’t in admin then you aren’t allowed to use sudo, which reduces the overall attack surface area.

You can see the group members with the command:

duck@ret:~$ dseditgroup -o read admin
. . .
dsAttrTypeStandard:GroupMembership -
                root
                duck
                another

You can remove unwanted users (but don’t delete yourself if you are the administrator!) like this:

duck@ret:~$ sudo dseditgroup -o edit -d another admin

• Consider installing the MacPorts version of sudo.

MacPorts is not an undertaking to be entered into lightly, but it does give your OS X computer access to a huge range of handy open source goodies that you’ll wonder how you ever managed without.

The MacPorts version of sudo isn’t bang up to date, but it is patched against the 01 January 1970 flaw.

How can you tell if you’re OK?

Run the command sudo -V to show you the version string.

You should have 1.7.10p7 or later if your version string starts with 1.7, or 1.8.6p7 or later if you’re on 1.8.
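If you administer several Macs, the check can be scripted; this is a hedged sketch that simply parses the first line of sudo -V output (which looks like “Sudo version 1.7.10p7”) and compares it against the patched versions named above:

import re
import subprocess

# Minimum patched versions for the 01 January 1970 flaw.
MINIMUMS = {"1.7": (1, 7, 10, 7), "1.8": (1, 8, 6, 7)}

def parse(version):
    # "1.7.10p7" -> (1, 7, 10, 7); a missing patch level counts as 0.
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:p(\d+))?", version)
    return tuple(int(g or 0) for g in m.groups())

first_line = subprocess.run(
    ["sudo", "-V"], capture_output=True, text=True
).stdout.splitlines()[0]
version = first_line.split()[-1]

branch = version[:3]
ok = branch in MINIMUMS and parse(version) >= MINIMUMS[branch]
print(version, "patched" if ok else "-- check for updates")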

Should you say something to Apple?

Why on earth not?

You’re probably surprised to learn that the same company that excels at bringing completely new and funky products to market in just a couple of years can’t update within six months an open source tool that it chose to include with its operating system.

So, why not mention that to Apple?

Image of bullet through apple courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QP6qWB6lTQI/

Internet dating scam – mother and daughter crime duo jailed

Mother and daughter Karen and Tracy Vasseur from Colorado, US, have been jailed for a total of 27 years after they tricked unsuspecting victims into thinking they were talking to members of the US military who needed money to be sent to them.

In total, the pair managed to con 374 people out of $1.1 million, with one victim stumping up as much as $59,000, according to court documents.

Authorities said the duo had other (yet to be caught) staff working for them who would trawl the internet looking for vulnerable people on dating sites or social networks.

They would then tell them they were part of the US military, serving in Afghanistan. Once they had established a relationship with their victim they would tell them they were in need of money for things like travel to the US, retrieving property and other expenses.

When a victim had agreed to pay, they were told to transfer the money to the two women who posed as ‘military agents’.

The money was then quickly passed on to other accomplices in Nigeria, the UK, India, UAE and Ecuador.

Tracy was ordered to spend 15 years behind bars, plus an extra four years for unrelated charges.

Karen received 12 years, to run concurrently with a 10-year sentence for tricking ‘at-risk’ adults into a fake loan scheme.

Colorado Attorney General John Suthers commented:

“Not only did this mother-daughter duo break the law, they broke hearts worldwide.

“It is fitting that they received stiff sentences for their unconscionable crimes committed in the name of love and the United States military.”

This sad story just acts as another reminder to be careful who you talk to online. When someone’s hiding behind a screen you can never be sure who they really are.

Be careful who you speak to, and *never* send money to someone just because they ask you to.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PRav_KIE9uQ/

Apple apps turned upside down writing right to left – you’re only 6 characters from a crash!

Apple’s iOS and OS X are currently under what can only be described as a “jolly irritating attack.”

Certain text strings, when processed by the operating system’s CoreText rendering engine, cause the application that’s trying to display them to crash.

No-one has yet come up with a way to exploit these crashes for code execution, at least as far as I am aware, so they’re vulnerabilities of the fragility sort, rather than the you’re pwned type, but they’re still, well, jolly irritating.

The shortest string I’ve been able to come up with that provokes this bug is just eleven bytes long, and consists of six UTF-8 characters, one of which is a plain old space (hexadecimal code 0x20).

→ UTF-8 is a system for representing text that uses from one to four bytes per character. The bit pattern of each byte in a character tells you how big that character is, so moving backwards and forwards in a string is easy (you don’t need to keep re-calculating from the start of the string), and 7-bit ASCII characters are represented as themselves in one byte (so simple documents in plain ASCII don’t need converting, and don’t waste space).
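A few lines of Python make the arithmetic plain; note that the final example is merely a harmless string with the same shape as the crash string, not the crash string itself:

# Character size in UTF-8 varies from one to four bytes.
for ch in ["A", "é", "€", "😀", "ب"]:      # the last is an Arabic letter
    data = ch.encode("utf-8")
    print(ch, "->", len(data), "byte(s):", data.hex())

# Six characters can easily occupy eleven bytes: five two-byte Arabic
# letters plus one plain ASCII space (NOT the offending string).
print(len("ببببب ".encode("utf-8")))      # 11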

The crash strings I’ve seen and heard of all include Arabic characters, and Arabic is, of course, written from right to left.

But whether it’s the direction of the text, how the characters are combined and composited, or some other subtlety, I can’t yet tell you.

The problem with this problem is that it can quickly become disruptive, since an offending string can be placed by an outsider into all sorts of otherwise unexceptionable places where you might stumble across it by mistake: web page titles, email subject lines, even Wi-Fi access point names.

If the Apple application that tries to display the string uses the vulnerable rendering library code…

…down she goes.

And if the application tries to recover gracefully when it next loads, for example by reloading the web page it was busy with before…

…down she goes again.

In my testing, I ended up with Safari’s history loaded with a URL aimed at my Bad Page, provoking an HTTP reply no more threatening-looking than this:

HTTP/1.0 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 11

...the dreaded 11 bytes...

As long as my web server was running and a network connection available, relaunching Safari caused it to crash again at once.

What to do?

I tried what I thought was the obvious solution, namely removing the file:

~/Library/Safari/LastSession.plist

That file certainly referenced my dodgy URL, but removing it didn’t help, so I tried:

~/Library/Caches/com.apple.Safari

No use, but I fared better when I removed:

~/Library/Saved Application State/com.apple.Safari.savedState

That made Safari forget that it had ever heard of my crashy website, and let me browse again.

Apple notoriously likes to keep completely quiet about software problems until a fix is available, as it did with the equally amusing and embarrassing but less disruptive FILE COLON SLASH SLASH SLASH bug earlier this year.

In this case, therefore, let’s hope that Apple pumps out a fix pretty jolly quickly.

By the way, you can help make Apple aware of the impact of the problem by reporting this crash if it happens to you.

You’ll see something like this:

Choosing Report… will show you what happened, much like you see below, and ask if you want to Send to Apple:

Should you send the crash report in?

Apple assures you it’s anonymous, and although it reveals a little bit about you – your timezone, what sort of Mac you have, and more – I suspect you can send it off without too much concern.

(I’m guessing, but Apple probably learns less about you when you submit a crash report than a search engine does when you try to look for a solution to it.)

Apologies that I don’t have a general workaround or mitigation for you.

If I come across one, I’ll post it here or in the comments.

In the meantime, applications that get derailed by a CRASH: GOTO CRASH loop, like my Safari did, can probably be pointed in the right direction by digging around in the ~/Library directory, as I showed above.

Oh, and as far as browsing is concerned, while Chromium is affected, Firefox isn’t.

Firefox is currently enjoying a really strong lead in our “which browser do you trust” poll – perhaps you’ve just found another reason to try it out.

Bonne chance.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/p-7Oeq1Q0z0/

Twee…THUD: Boffins build ‘The Classifier’ to seek out, kill millions of Twitter fakes


Comp sci boffins spent a year buying up more than 100,000 fake Twitter accounts in a bid to help the teeny-tiny text transmitter beef up its spam defences. They also used their research to build a retroactive classifier that sniffed out the fakers so the Big Blue Bird itself could snuff them out.

A group of researchers, including two Twitter staffers, purchased a total of 121,027 Twitter accounts between June 2012 and April 2013 from 27 different merchants who advertised their services on web storefronts, blackhat forums, and freelance job listings sites.


The researchers suggested the @mongers were responsible for selling 10 to 20 per cent of all fake accounts flagged up as spam during the period of the experiment, racking up revenue of between $127,000 and $459,000 in the process.

These accounts are purchased and then “serve as stepping stones to more profitable spam enterprises”, such as selling dodgy anti-virus warnings and pharmaceuticals.

“Our findings show that merchants thoroughly understand Twitter’s existing defences against automated registration, and as a result can generate thousands of accounts with little disruption in availability or instability in pricing,” the authors wrote.

Like fine wines, cheeses or vinyl records, Twitter accounts also benefit from being aged, with some accounts left to mature for more than a month to make them appear more kosher. These pre-aged accounts are “a selling point in the underground market,” the boffins said.

Of course, this being a dodgy market to begin with, the spam-canners encountered a few scams. Eight of the merchants tried to sell them duplicate accounts, amounting to a total of 3,317 that they had already paid for, while one particularly shady seller tried to sell the same 1,000 three times.

At the end of the experiment, the researchers used their “insights to develop a classifier to retroactively detect several million fraudulent accounts sold via this marketplace, 95 per cent of which [they] disable[d] with Twitter’s help”.

Twitter is now building the boffins’ suggested defence mechanisms into its real-time spam busting system. After the study concluded, Twitter was briefly able to throttle 90 per cent of newly bought spam accounts at birth. One of the vendors told the researchers: “All of the stock got suspended. Not just mine. It happened with all of the sellers. Don’t know what Twitter has done.”

A Russian f@ker put up a sign on his website which said: “Temporarily not selling Twitter.com accounts.”

However, after that brief initial success, the shady vendors were able to adapt and soon began to dodge Twitter’s beefed-up defences again.

“While Twitter’s initial intervention was a success, the market has begun to recover,” the researchers wrote. “Of 6,879 accounts we purchased two weeks after Twitter’s intervention, only 54 per cent were suspended on arrival.”

The paper, “Trafficking Fraudulent Accounts: The Role of the Underground Market in Twitter Spam and Abuse”, can be read at Krebs on Security (PDF).


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/08/15/undercover_spam_scientists_build_army_of_fake_twitter_accounts/

GitHub code repository rocked by ‘very large DDoS’ attack


San Francisco–based GitHub, the online repository popular among software developers, suffered a major service outage on Thursday morning due to what it characterizes as a “very large DDoS attack.”

[Screenshot: GitHub’s status page reporting the major DDoS attack, which follows a similar one on August 4th]


The outage was first reported on the GitHub Status Messages page at 15:47 UTC (8:47am Pacific Time).

GitHub is a major code repository used by developers across the world. It hosts a mixture of public and private projects split across open and closed source.

The site works using the Git version-control system, a tool commonly used by devs to manage large code projects. Over the past few years, the site has become one of the main places that people push their repositories to, and for that reason an outage has a major effect on the developer community.

Public repositories can be posted for free, but companies must pay to gain private ones. The site is a frequent target of DDoS attacks: the last major attack was on August 4th, and before that July 29th, and before that July 19th.

One potential reason for why it is targeted so frequently is that it is a central repository for a large amount of projects, some of which are closed source. DDoS attacks are frequently used by hackers as a way of probing vulnerabilities in a site, so there is a chance these outages come from probing attempts by hackers keen to get at code stored on the service.

“The site continues to be operational, however we are going to keep the status at yellow while we continue to monitor closely and work with our upstream providers,” the site’s Status Messages reported at 9:56 Pacific time. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/08/15/github_ddos/

Java devs warned of pushbutton exploit for buggy Struts framework

Win a top of the range HP Spectre laptop

Java developers were warned, but they didn’t listen. Security researchers at Trend Micro report that old and vulnerable versions of the Apache Struts framework for Java are still in widespread use, and now Chinese hackers are using automated tools to exploit their flaws.

The vulnerabilities in question were patched in the July Struts release, according to a blog post by Trend Micro senior threat researcher Noriaki Hayashi. But many applications are still running on older, buggy versions that can allow attackers to execute arbitrary code.


The exploits are made all the easier by a new tool developed by Chinese hackers, which makes executing certain commands on vulnerable remote servers as easy as pushing a button.

With a few simple clicks, attackers can determine the name of the current user account, display the version number of the OS, view network and system configuration information, list the contents of directories, and – particularly worryingly – add new user accounts.

The tool also reportedly includes a “WebShell” feature that makes it easy to plant a backdoor into vulnerable servers. Once this is done, the hackers can execute arbitrary commands from their keyboards using only a web browser.

[Table: commands that can be executed remotely on vulnerable Struts servers – what attackers can do using the Chinese script-kiddie tool]

The attack works on servers running either Windows or Linux, although the actual commands that can be executed will differ depending on the OS.

Trend Micro is hardly the first company to warn developers of vulnerabilities in open source frameworks and the dangers of running old versions. In March, a study by Sonatype and Aspect Security found that developers downloaded out-of-date versions of the most popular frameworks 33 per cent of the time, even though newer versions with security fixes were available.

Among the top Java frameworks, Struts is also a particular target for malicious hackers. A January report by CAST found that applications built using Struts were highly likely to be misconfigured, and they delivered the lowest code quality scores overall.

Of course, the tendency to stick with outdated versions of Java frameworks is somewhat understandable. Frameworks like Struts often power highly complex, mission-critical applications that require rigorous testing before any changes can be deployed.

Security patches can also sometimes break apps when vulnerable features are removed. But with flaws as severe as the ones now being exploited in Struts, it’s essential that Java development shops stay on the ball and migrate to secure versions ASAP.

In this case, blocking the Chinese automated exploit tool requires an upgrade to Struts 2.3.15.1 – which was released in July – or to any later version. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/08/15/java_struts_automated_exploit_tool/

NORKS build TROLL ARMY to tear down S Korean surfers


North Korea has tasked 200 agents with the job of posting negative comments online, often using stolen online identities, in a bid to undermine the morale of their neighbours in the South.

The NORK trolls are part of a 3,000-strong force of cyber warriors and hackers that makes up the Reconnaissance General Bureau information warfare force, according to the Police Policy Institute.


Ryu Dong-ryul of the Police Policy Institute said, “The North has established a team of online trolls at the United Front Department and the Reconnaissance General Bureau.”

About 200 agents post derogatory comments on South Korean portals using assumed identities stolen from South Koreans, the Chosun Ilbo newspaper reports.

The South Korean thinktank reckons other members of the NORK cyber army are building malware and launching hacker attacks while the trolls are posting comments with links to N Korea propaganda sites designed to sway public opinion in favour of Pyongyang.

North Korean agents posted more than 27,000 propaganda messages designed to turn people against the South during 2011 alone, the institute estimates. In 2012 this figure increased to more than 41,000 messages, delegates at a seminar in Seoul earlier this week were told. Objectives of the campaign include getting access to pro-North Korean sites unblocked for surfers visiting from the South.

North Korea is the prime suspect in destructive malware attacks against the computer networks of banks and TV stations earlier this year, the latest in a series of attacks. Cyber assaults, including attempts to knock targeted sites offline, have been going on within the Korean peninsula for some years. Ordinary North Korean citizens have only heavily regulated access to government-controlled websites through local cybercafes.

The hermit kingdom does maintain various portals and propaganda outlets, some of which are hosted in China, such as Uriminzokkiri.

Using agents of the state in a propaganda offensive fought on social media sites and consumer portals is an unusual tactic but it’s not restricted to North Korea. As previously reported, cadres of the Iranian Revolutionary Guards include similar digital propaganda units.

Lim Jong-in of Korea University estimates 30,000 North Koreans are engaged in cyber and psychological warfare against South Korea, a group swelled every year by another 300 personnel “trained in the dark arts”, as Chosun Ilbo catchily describes it.

This horde of cyber-ninja Death Eaters is arrayed against a much smaller South Korean force that is growing by only 30 personnel a year. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/08/16/north_korea_recruits_troll_army/