Monday review

It’s weekly roundup time!

Here’s all the great stuff we’ve written in the past seven days.

Watch the top news in 60 seconds, and then check out the individual links to read in more detail.

Monday 28 October 2013

Tuesday 29 October 2013

Wednesday 30 October 2013

Thursday 31 October 2013

Friday 1 November 2013

Saturday 2 November 2013

Sunday 3 November 2013

Would you like to keep up with all the stories we write? Why not sign up for our daily newsletter to make sure you don’t miss anything. You can easily unsubscribe if you decide you no longer want it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2QBW00-S7KI/

Lightbeam shines a light on which websites you’re really visiting

Do you really know where your browser goes when you type a URL into its address bar? Do you realise that your browser not only accesses the site you intended, but may also visit 3rd party websites running connected services?

For many of us this is nothing new, but to a lot of surfers this sort of activity comes as a surprise – for the simple reason that it happens behind the scenes.

Sometimes, but by no means always, you can see the end results of this behind-the-scenes traffic on the website you’re visiting; it’s essential for delivering features like Google AdSense, Facebook Likes or Pinterest ‘Pin it’ buttons, for example.

What’s happening is that when you type a URL into your browser it fetches the web page you asked for and then it fetches anything else that web page says it needs.

Typically a page will contain instructions to fetch things like stylesheets that control the layout of the page, graphics and photographs to illustrate it and scripts to create functionality.

Those things might come from the same website as the page you asked for, but they don’t have to: the web page can also ask for things from 3rd party websites.

To both the web browser and the 3rd party websites involved, these unseen secondary requests are indistinguishable from a user typing a URL into the address bar.

This is an extremely useful feature, one that is essential to the operation of a lot of web services, but it allows the 3rd parties involved to do things you might not expect, such as tracking your ‘visit’ or setting cookies in your browser.

This isn’t a secret, but it isn’t obvious either. Web browsers have ways of showing you this traffic if you want to see it, but not in a form that makes much sense to a non-technical user.
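
If you’re comfortable with a little code, you can get a rough feel for this yourself. The Python sketch below fetches a single page and lists the hostnames of the resources its HTML asks the browser to load. It’s only an approximation – Lightbeam watches real browser traffic, including requests triggered by scripts, which a static parse like this can’t see – and the URL is just a placeholder; some sites may refuse requests that don’t look like they come from a browser.

# Rough sketch: list the 3rd party hosts a single page asks the browser to fetch from.
# This is not how Lightbeam works internally; it only inspects the page's static HTML.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class ResourceCollector(HTMLParser):
    """Collects src/href attributes of tags that trigger extra requests."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = None
        if tag in ("script", "img", "iframe"):
            url = attrs.get("src")
        elif tag == "link":                      # stylesheets, icons and so on
            url = attrs.get("href")
        if url:
            host = urlparse(urljoin(self.base_url, url)).hostname
            if host:
                self.hosts.add(host)

page_url = "https://nakedsecurity.sophos.com/"   # placeholder; use any page you like
html = urlopen(page_url).read().decode("utf-8", errors="replace")

collector = ResourceCollector(page_url)
collector.feed(html)

first_party = urlparse(page_url).hostname
for host in sorted(collector.hosts):
    if host != first_party:
        print("3rd party resource host:", host)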

Recently, Mozilla released a new add-on for Firefox called Lightbeam. The primary purpose of Lightbeam is to help people better understand how the web works and to shine a light on the realities of data tracking.

Released at this year’s MozFest, Lightbeam builds on existing technology called Collusion to give users more control over their surfing activities and how they are being monitored on the web.

In a blog post announcing Lightbeam, Mozilla’s Alex Fowler stated, “we believe that everyone should be in control of their user data and privacy”.

I thought this sounded like a great tool for those of us who want more transparency about the way our online activities are tracked, so I gave Lightbeam a quick test drive.

I picked a handful of social media and news sites (including Naked Security) to see how connected they all were and to see if I could learn about some of the 3rd party connections that I hadn’t known existed.

In all, I visited 12 sites, which connected me to 127 3rd party sites.

For example, a visit to Naked Security yielded 21 3rd party connections. Some of these connections are to services like Facebook, LinkedIn, Reddit and Twitter which we use to make it easier for our readers to share content.

Some are to services that provide additional content, like Sophos videos on YouTube, and some are analytics services which help us understand which articles are popular.

Lightbeam allows you to filter by visited and 3rd party sites. Visited sites are the ones whose URL you typed into the browser yourself, or whose content you reached by explicitly clicking a link.

3rd party sites are sites connected to the ones you visited; they might collect information about you without any explicit interaction on your part.

Lightbeam also gives you the ability to drill down into these site interactions and optionally block or watch certain sites of your choosing.

To be clear, 3rd party services and 3rd party cookies are not intrinsically bad, and can be employed for many useful purposes that don’t involve tracking.

Even those 3rd parties that are involved in tracking might be putting their data to uses that at least some of their users will agree with and benefit from.

For example, Twitter uses its tweet buttons to monitor the websites its users visit, and then uses that data to personalise its Trends.

Some Twitter users will feel this improves the site, others will be ambivalent and some will see it as unwelcome and invasive (if you’re one of those people you can disable the feature by enabling Do Not Track in your browser or through your Twitter security settings).

Fowler makes a good point when he says:

When we’re unable to understand the value these companies provide and make informed choices about their data collection practices, the result is a steady erosion of trust for all stakeholders.

For most privacy advocates this translates to transparency. If we know who is tracking us and what they’re doing with our data we can decide what level of trust and risk we’re willing to undertake.

Tools like Lightbeam give us greater visibility and control over which websites we are really visiting and allow us to make better decisions about who we transact with. A more open web means a better experience for everyone involved.

Chrome users can still download the Collusion add-on from the Chrome Web Store which will provide similar information and functionality.

If you’d like to know more about the 3rd party connections we use on Naked Security then take a look at our Cookies and Scripts page. You’ll find a list of cookies, their domains and who sets them as well as links to privacy policies and vendor opt-outs.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/moAwUNwDBfA/

Cyber-terrorists? Pah! Superhero protesters were a bigger threat to London Olympics

RSA Europe 2013 Protests from groups such as Fathers4Justice were more of a worry to London 2012 Olympic Games organisers than computer hackers, according to the former chairman of London 2012, Lord Sebastian Coe.

He said procedures put in place before the Games to guard its IT systems – including Wi-Fi networks in stadiums as well as the main Olympics website – had worked well.


In practice, risks from pressure groups and local political campaigners proved the biggest headache, but precautions against all types of threat still had to be put in place, he noted.

“You have to deliver the Games within an environment of security,” Coe said in response to questions about anti-aircraft missiles on tower blocks in East London and armed police on the street. “Protection has to be proportionate but I think we got the balance right.

“The threats of disruption came from everything from Fathers4Justice through to taxi drivers, angry they weren’t allowed into the Olympic lanes. That tended to be the level of the threat. Most of the challenges weren’t terrorists, cyber or otherwise,” said Coe, who was speaking at the RSA Conference Europe 2013 which took place in Amsterdam this week.

Fathers4Justice are a fathers’ rights campaign group whose signature form of protest involves scaling buildings while dressed as comic book superheroes.

Earlier at the conference, BT security chief executive officer Mark Hughes said that no successful cyber attack had occurred during the Games, repeating previous statements by the telco giant. BT dealt with over 212 million attempted cyber attacks on the official website during last year’s Olympic and Paralympic Games.

The only serious IT threat of any note came from concerns that power to the Olympic Stadium might be disrupted.

A recent BBC Radio 4 documentary revealed that London Olympics officials were warned hours before the opening ceremony that the event might come under cyber-attack. Olympic cyber-security head Oliver Hoare was woken at 04:15 on the day of the opening ceremony by a phone call from GCHQ warning of a credible threat to the “electricity infrastructure supporting the Games”.

The security team had already run extensive tests on the electricity supply systems supporting the Games long before the warning; the fear, based on the discovery of “attack tools and targeting information”, was that the threat might relate to the Olympics. Nonetheless, additional contingency plans were developed during high-level meetings between senior government officials and LOCOG (the London Organising Committee of the Olympic Games) during the day.

In the event nothing happened. The whole incident is more of an interesting case study on how to deliver super-reliable power supply systems rather than anything that sheds much light on the capabilities of hacktivists or other malign actors when it comes to attacking industrial control gear. It’s unclear who was behind the threat to the Olympics.

“There was a potential for cyber-attack even though we didn’t suffer any incursion,” Coe said during a press conference ahead of the closing keynote speech. “We had systems in place to defend against attack and this might have even acted as a deterrent.” ®

Starting block-note

Coe and track rival Steve Ovett were rumoured to have been less than friendly when they were competing for glory in middle distance track races at the Moscow (1980) and Los Angeles (1984) Olympics. Asked about this, Coe said that he hardly knew Ovett at the time they were competing, partly because they lived at opposite ends of the country. He added that they “get along fine” these days, having got to know each other better after retiring from athletics. Speculation of any ill-feeling “shows that rumours existed even before social media,” Coe told El Reg. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/olympics_rsa_coe/

NSA, Apple, Facebook and Adobe – 60 Sec Security [VIDEO]

Memories of the Internet Worm – 25 years later

Today is the 25th birthday of the infamous Internet Worm.

From the name alone, you can tell how different the malware scene was back in 1988.

Back in that era, the Brain virus, which was from Pakistan, was unusual for having such a usefully descriptive name. (It changed the volume labels of infected diskettes to (C) BRAIN.)

Viruses often ended up with technically unhelpful names like Jerusalem, based on where they came from – and some people even called that one the Israeli virus, because malware was rare enough that even naming a virus after a whole country was unlikely to lead to ambiguity.

And so it was with the Internet Worm – the Internet Worm, if you don’t mind, not merely an internet worm.

After all, there hadn’t been an internet worm before; if ever there were another, well, the bridge of what to call it could be crossed at that time.

The Internet Worm is also known by another name of a sort you are unlikely to see today: you will see it referred to as the Morris Worm, after its author, Robert Tappan Morris.

→ Morris’s late father, as it happens, also named Robert, worked for the NSA. A lot of this story sounds eerily familiar, even 25 years later.

Malware tends not to be named after its authors these days because their identities are rarely known – and they like to keep it that way.

Morris, however, could hardly deny being the author of the Internet Worm, because he received a criminal conviction for writing and releasing it – he was on probation for three years, did 400 hours of community service, and paid a fine of just over $10,000.

How it spread

The worm employed numerous techniques that are used to this day by cybercriminals, with three main tricks up its sleeve for spreading:

  1. It tried to exploit a stack overflow vulnerability in the system service fingerd.
  2. It tried to exploit a debug option commonly but wrongly enabled in the mail server sendmail.
  3. It tried to guess other users’ passwords.

The password guessing started off with various permutations of the user’s login name and real name, so that for a user called Paul Ducklin with a username of duck, the worm would try:

duck
duckduck
Paul
Ducklin
paul
ducklin
kcud

If none of those worked, it would use a short dictionary that it carried around with it:

char *wds[] = {
  "academia", "aerobics", "airplane", "albany",
  "albatross", "albert", "alex", "alexander",
  "algebra", "aliases", "alphabet", "amorphous",
  . . . .
  "outlaw", "oxford", "pacific", "painless",
  "pakistan", "papers", "password", "patricia",
  "penguin", "peoria", "percolate", "persimmon",
  . . . .
  "wizard", "wombat", "woodwind", "wormwood",
  "yacov", "yang", "yellowstone", "yosemite",
  "zimmerman",
  0
};
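
If you’re curious how little code the first stage needs, here is a minimal Python sketch that simply reproduces the candidate list shown above for a given login name and real name; it mirrors the article’s example rather than the worm’s actual C logic:

# Reproduce the login-name/real-name guesses listed above (illustrative Python,
# not the worm's original C code).
def guess_candidates(username, real_name):
    parts = real_name.split()
    first, last = parts[0], parts[-1]
    candidates = [
        username,             # duck
        username + username,  # duckduck
        first,                # Paul
        last,                 # Ducklin
        first.lower(),        # paul
        last.lower(),         # ducklin
        username[::-1],       # kcud
    ]
    seen = set()              # drop any duplicates but keep the order
    return [c for c in candidates if not (c in seen or seen.add(c))]

print(guess_candidates("duck", "Paul Ducklin"))
# ['duck', 'duckduck', 'Paul', 'Ducklin', 'paul', 'ducklin', 'kcud']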

What have we learned?

It would be nice to be able to say that a password cracking list of the sort shown would be useless in 2013, but experience suggests that many people are still as careless as in 1988.

Last year, for example, when Dutch industrial group Philips suffered a database breach, we quickly recovered the following choices from the dumped password hashes:

1234
12345
123456
123457 -- nice try, but no cigar!
00000000
philips -- five appearances
ph1lips -- nice try, but no cigar!
password -- no list complete without it
qwerty -- ditto
seguro -- Spanish for "secure", it isn't
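
To see why choices like these fall out of a hash dump so quickly, here is a toy dictionary attack in Python. The hash algorithm (unsalted MD5) and the “dumped” hashes below are illustrative assumptions for the sketch, not details of the Philips breach:

# Toy dictionary attack against unsalted password hashes (illustrative only).
import hashlib

wordlist = ["1234", "12345", "123456", "password", "qwerty", "philips"]

# Pretend these hashes were pulled from a breached database.
dumped = {hashlib.md5(p.encode()).hexdigest() for p in ("qwerty", "philips")}

for word in wordlist:
    digest = hashlib.md5(word.encode()).hexdigest()
    if digest in dumped:
        print("cracked:", digest, "->", word)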

→ In a delightful historical loopback, the author of the original crypt program for Unix, which introduced the storage and validation of passwords as hashes rather than plaintext, was Robert Morris’s father, Robert Morris.

Stack overflows, on the other hand, aren’t quite the security disaster they used to be.

The stack is used to store arguments passed into, temporary variables used during, and the address to jump to when returning from, system functions.

Back in 1988, if you could reliably overflow one of those temporary variables on the stack, rewrite the return address, and add some malicious shellcode, you had a very good chance of RCE, or Remote Code Execution.

That’s because you could put your shellcode right on the stack and run it.

There are occasional legitimate reasons for generating temporary stack code and jumping to it, and in 1988, most operating systems permitted it, in case you ever needed to do so.

These days, most operating systems mark the stack non-executable, to make it harder to run shellcode stored there; they also perform various regular runtime checks to look out for unauthorised tampering with stack values such as return addresses.

Could it happen again?

Received wisdom suggests that the Internet Worm infected about 10% of the 60,000 computers connected to the internet in 1988.

That sort of penetration was probably exceeded by various network worms of the early 2000s, such as CodeRed, Nimda and Slammer; in the last few years, however, viruses of that replicative power just haven’t been seen.

One reason is the success and the ubiquity of the internet itself: malware writers just don’t need to use network-spreading viruses (self-replicating malware) these days.

Instead of sending malware out into the world to find unprotected systems and break in from the outside, cybercrooks these days can simply place their malicious content on a website, and wait for the world to come to them.

That’s simpler to do, and makes it easy to change the malware as often as you like – you can even serve up a completely different sample for each visitor.

It also bypasses many firewalls, because they’re typically configured to allow connections from the inside to the web, even if they religiously block all inbound connections from the outside.

What can we learn?

Three straightforward things that you should have been doing in 1988 are still well worth doing today:

  • Pick decent passwords. Use a password manager if you have trouble remembering them all.
  • Patch regularly, so that already-known vulnerabilities simply aren’t available to the crooks.
  • Review your system configuration, removing permissions and turning off options which are unsuitable for production use.

One last thing to remember.

The magic pixie dust that made UNIX immune to viruses and other malware…it escaped forever on 02 November 1988.

Image of earthworm courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/QHmZ-ry0RMM/

Off your bikie laws: Anonymous to Queensland Premier Newman

Queensland’s police force is investigating the posting of an Anonymous-identified video to YouTube protesting the state’s anti-bikie laws, and also the publication of premier Campbell Newman’s private mobile number and home address online.

The government describes the laws as serving the purpose of dismantling outlaw motorcycle gangs, but the laws have been criticised by lawyers and civil liberty groups as excessive: anyone assembling in groups of three or more can be questioned by police. Moreover, non-outlaw clubs such as the Vietnam Veterans’ MC have been raided, and the police have reportedly suggested that all group rides should be registered with the police to avoid participants being stopped and questioned.


Anonymous has packaged up new audio with one of its stock videos to make what the government has described as a “threatening” video, because of the inclusion of the stock sign-off, “we are Anonymous. Expect us”.

The YouTube video claims that the laws “could spread Australia-wide”, that they’re unconstitutional (while also complaining that Australians lack proper constitutional protections for freedom of speech and freedom of association), and states that “the creeping fascism has already begun”.

The release of the video was followed by the publication of Newman’s home address and personal mobile number. The Register would note that Anonymous (if it’s responsible for the release of the information) was beaten to the scoop by former premier Anna Bligh who, prior to Queensland’s bitter 2012 election, revealed the now-premier’s address in a parliamentary debate, as noted by the Brisbane Times.

Readers may judge the extent of the Anonymous threat by checking out the video. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/anonymous_tells_queensland_premier_expect_us/

Majority Of Retail Sector Does Not Meet New PCI Standards

PORTLAND, OREGON — October 30, 2013 — Tripwire, Inc., a leading global provider of risk-based security and compliance management solutions, today announced the results of research on risk-based security management in the retail industry.

The survey, conducted in April 2013 with the Ponemon Institute, evaluates the attitudes of 1,320 respondents from IT security, IT operations, IT risk management, business operations, compliance/internal audit and enterprise risk management. One hundred sixty-two retail sector respondents from the U.S. and U.K. participated in the retail portion of the survey.

The most recent version of the Payment Card Industry Data Security Standard (PCI DSS 3.0) will soon require businesses to implement and perform penetration testing. In addition, PCI DSS 3.0 will also clarify different methods of secure authentication and session management so businesses can better protect themselves against man-in-the-middle, man-in-the-browser and other similar cyber attack methods. However, the study revealed that the retail industry hasn’t yet implemented these new security requirements.

Key findings include:

Only 41% of the retail sector uses penetration testing to identify security risks.

Only 34% of the retail sector measures the reduction in access and authentication violations to assess risk management efforts.

Only 44% of the retail sector has fully or partially deployed file integrity monitoring.

62% of IT professionals in the retail sector say that negative facts about security risks are filtered before being communicated with senior executives.

“Although these survey results don’t reflect it, the retail industry is very focused on PCI 3.0 compliance,” said Michael Thelander, director of product management for Tripwire. “And Tripwire is hard at work to make these new controls less expensive, easier to implement, more scalable and more intelligent out of the box.”

For more information about this survey, please visit: http://www.tripwire.com/ponemon/2013/

About the Ponemon Institute

The Ponemon Institute is dedicated to advancing responsible information and privacy management practices in business and government. To achieve this objective, the Institute conducts independent research, educates leaders from the private and public sectors and verifies the privacy and data protection practices of organizations in a variety of industries.

About Tripwire

Tripwire is a leading global provider of risk-based security and compliance management solutions, enabling enterprises, government agencies and service providers to effectively connect security to their business. Tripwire provides the broadest set of foundational security controls including security configuration management, vulnerability management, file integrity monitoring, log and event management. Tripwire solutions deliver unprecedented visibility, business context and security business intelligence allowing extended enterprises to protect sensitive data from breaches, vulnerabilities, and threats. Learn more at www.tripwire.com or follow us @TripwireInc on Twitter.

Article source: http://www.darkreading.com/management/majority-of-retail-sector-does-not-meet/240163481

Crypto boffins propose replacing certification authorities with … Bitcoin?

Whatever your opinion of Bitcoin, it does stand as a high-quality intellectual achievement. Now, a group of researchers from Johns Hopkins are suggesting its cryptographic implementation could help solve the “certificate problem” for ordinary users.

Apart from whether or not they might be universally compromised by the spooks, a problem with Public Key Infrastructure – PKI – certificates is that they depend on users’ trust of the certification authority (CA) that sits at the top of the trust hierarchy.


As we know, however, from incidents such as the DigiNotar hack, any loss of trust is fatal to a CA. Bitcoin did away with centralised trust in favour of its own cryptographic model, relying instead on a distributed transaction ledger.

In this paper, published at the International Association for Cryptologic Research, researchers Christina Garman, Matthew Green and Ian Miers of Johns Hopkins University’s Department of Computer Science propose a similar model, in which anonymous credentials could exist without a centralised CA acting as trusted issuer.

Their idea is that the distributed, public, append-only ledger model used by Bitcoin could be used “by individual nodes, to make assertions about identity in a fully anonymous fashion” – while doing away with CAs as a single point of failure.

“Using this decentralised ledger and standard cryptographic primitives, we propose and provide a proof of security for a basic anonymous credential system that allows users to make flexible identity assertions with strong privacy guarantees,” they write.

Key components of the system are:

  • A Decentralised Direct Anonymous Attestation (dDAA) – the bit that gets rid of the CA.
  • Anonymous resource management in ad hoc networks – using the dDAA technique to prevent impersonation in peer-to-peer networks.
  • Credential auditability – in the same way as the Bitcoin blockchain is auditable because it’s public, the researchers believe, the scheme offers a way to guard against people faking their certificates.
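
To make the ledger part concrete, here is a toy append-only, hash-chained ledger in Python. It is emphatically not the researchers’ credential scheme – there is no distribution, mining or anonymity here – just an illustration of why entries in such a ledger are hard to rewrite quietly:

# Toy append-only, hash-chained ledger (illustrative only).
import hashlib
import json

class Ledger:
    def __init__(self):
        self.entries = []    # each entry: {"payload": ..., "prev": hash of previous entry}

    def append(self, payload):
        prev_hash = self._hash(self.entries[-1]) if self.entries else "0" * 64
        self.entries.append({"payload": payload, "prev": prev_hash})

    def verify(self):
        # Recompute the chain; tampering with an earlier entry breaks a later link.
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev_hash:
                return False
            prev_hash = self._hash(entry)
        return True

    @staticmethod
    def _hash(entry):
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

ledger = Ledger()
ledger.append({"assertion": "public key K belongs to pseudonym P"})
ledger.append({"assertion": "public key K2 belongs to pseudonym Q"})
print(ledger.verify())    # True
ledger.entries[0]["payload"]["assertion"] = "public key K belongs to Mallory"
print(ledger.verify())    # False - the recomputed chain no longer matches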

While the researchers present an implementation, they note that this work – still seriously pre-Alpha – needs further development in the security of the transaction ledger, and in the efficiency of the algorithms. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/03/crypto_boffins_propose_getting_rid_of_cas/

Fake femme fatale dupes IT guys at US government agency

It was the birthday of the head of information security at a US government agency that isn’t normally stupid about cyber security.

He didn’t have any accounts on social media websites, but two of his employees were talking about his special day on Facebook.

A penetration testing team sent the infosec head an email with a birthday card, spoofing it to look like the card came from one of his employees.

The recipient opened it and clicked on the link inside – which was, of course, malicious – and his computer was compromised.

That gave his attackers the front-door keys, according to Aamir Lakhani, who works for World Wide Technology, the company that performed the penetration test:

This guy had access to everything. He had the crown jewels in the system.

ITWorld’s Lucian Constantin wrote up Lakhani’s account of the successful pen test, which was performed in 2012 and sanctioned by a US government agency that Lakhani neglected to name.

Lakhani, a counter-intelligence and cyber defense specialist who works as a solutions architect for World Wide Technology, presented the results on Wednesday at the RSA Europe security conference in Amsterdam.

How did World Wide Tech crack open a US government agency that Lakhani described as being, as Constantin paraphrased it, “a very secure one that specializes in offensive cybersecurity and protecting secrets and for which [World Wide Technology] had to use zero-day attacks in previous tests in order to bypass its strong defenses”?

The lynchpin, it turns out, was a spoof new hire at the agency: an attractive, smart, female graduate of MIT named Emily Williams whom World Wide Technology invented for the test.

According to the pen-test team’s fake social media profiles, Emily Williams, 28 years old, had 10 years of experience. They used a picture of a real woman, with her approval.

In fact, the real woman works as a waitress at a restaurant frequented by many of the targeted agency’s employees, Constantin reports.

Nonetheless, nobody recognized her.

Not only did the government employees not recognize their waitress, they flocked to the fake persona bearing her likeness.

Here’s how popular Emily Williams proved within just 24 hours of her birth:

  • She had 60 Facebook connections.
  • She garnered 55 LinkedIn connections with employees from the targeted organization and its contractors.
  • She had three job offers from other companies.

As time went on, Emily Williams received LinkedIn endorsements for skills, while male staffers at the agency offered to help her out with short-cuts around the normal channels set up for new hires that would net her a work laptop and network access (which the penetration testing team obtained but did not use).

Around Christmas, the pen-test team rigged Emily Williams’s profiles with a link to a site with a Christmas card.

Visitors were prompted to execute a signed Java applet that in turn launched an attack that enabled the team to use privilege escalation exploits and thereby gain administrative rights.

They also managed to sniff passwords, install other applications and steal sensitive documents, including information about state-sponsored attacks and country leaders.

Good grief.

But what about those 10 years of experience at the tender age of 28? Didn’t that sound any alarms?

Apparently not.

The bit about Emily Williams having 10 years of experience well might have been a tip of the hat to the inspiration for the ruse: namely, a fictional cyber threat analyst by the name of Robin Sage, crafted by Thomas Ryan, a US security specialist and white-hat hacker from New York, in 2009.

Like Emily Williams, Robin Sage was also set up to have 10 years of experience, though she was only 25 years old.

Ryan cooked up Robin Sage profiles on Facebook, LinkedIn, Twitter, etc., using them to contact nearly 300 people, most of whom were security specialists, military personnel, staff at intelligence agencies and defense contractors.

Despite the completely fake profile, which was populated with photos taken from an amateur pornography site, and despite the character’s name being taken from a US Army exercise, Sage was offered work at many companies, including Google and Lockheed Martin.

She was also asked out to dinner by her male friends, was invited to speak at a private-sector security conference in Miami, and was asked to review an important technical paper by a NASA researcher, the Washington Times reported.

For “her” part, Emily Williams managed to reach the very top of the government agency’s information security team.

But the attack started out low, targeting employees in sales and accounting, before hitting that high mark.

As the character’s social network grew, the attack team managed to target technical staff including security people and even executives.

Lakhani pointed out a few lessons from the experiment:

  • Attractive women can open locked doors in the male-dominated IT industry. A parallel test with a fake male social media profile resulted in no useful connections. A majority of those who offered to help Emily Williams were men. The gender disparity in social engineering has shown up in other situations, including, for example, the 2012 Capture the Flag social engineering contest at Defcon. Anecdotal evidence from the Defcon contest suggested that females might have more compunction than males about duping others, but they may be better at sniffing out a con.
  • People are trusting and want to help others. Unfortunately, low-level employees don’t always think that they could be targets for social engineering because they’re not important enough in the organization. They’re often unaware of how a simple action like friending somebody on Facebook, for example, could help attackers establish credibility.

How do you solve a problem like overly friendly, helpful employees?

Lakhani said that social engineering awareness training can help, but doing it on an annual basis doesn’t cut it. Rather, it needs to be constant, so employees develop instincts.

Other training tips from Lakhani, via Constantin, include training employees to:

  • Question suspicious behavior and report it to the human relations department.
  • Refrain from sharing work-related details on social networks.
  • Not use work devices for personal activities.

On the systems front, he recommended:

  • Protecting access to different types of data with strong and separate passwords.
  • Segmenting the network so that if attackers compromise an employee with access to one network segment they can’t access more sensitive ones.

We think that your defence against social engineering should also include someone that you can call to report phishing expeditions, whether by phone or email.

Attackers using the phone have a habit of working through the organizational phone book. If you can’t report a suspicious call to someone who can send out a warning, each phone call will stand alone. If the attacker fails to trick the first user they call, you’ll want the next user to have been alerted in advance that an attack is going on.

This advice also needs to be integrated into a strategy of defence in depth.

Your existing security software and procedures can help to prevent or limit damage from a social engineering attack and of course attackers won’t necessarily limit themselves to just using social engineering, or indeed any one vector.

For more thoughts on planning your security, including defending against social engineering, read our Practical IT guide to planning against threats to your business.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rMffnTtApcM/
