STE WILLIAMS

Anatomy of a password disaster – Adobe’s giant-sized cryptographic blunder

One month ago today, we wrote about Adobe’s giant data breach.

As far as anyone knew, including Adobe, it affected about 3,000,000 customer records, which made it sound pretty bad right from the start.

But worse was to come, as recent updates to the story bumped the number of affected customers to a whopping 38,000,000.

We took Adobe to task for a lack of clarity in its breach notification.

Our complaint

One of our complaints was that Adobe said that it had lost encrypted passwords, when we thought the company ought to have said that it had lost hashed and salted passwords.

As we explained at the time:

[T]he passwords probably weren’t encrypted, which would imply that Adobe could decrypt them and thus learn what password you had chosen.

Today’s norms for password storage use a one-way mathematical function called a hash that […] uniquely depends on the password. […] This means that you never actually store the password at all, encrypted or not.

[…And] you also usually add some salt: a random string that you store with the user’s ID and mix into the password when you compute the hash. Even if two users choose the same password, their salts will be different, so they’ll end up with different hashes, which makes things much harder for an attacker.
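The hash-and-salt scheme described above can be sketched in a few lines of Python. This is an illustration only: `hash_password` is our own stand-in name, and a fast digest like SHA-256 is used purely for brevity, where a production system would use a deliberately slow KDF such as bcrypt, scrypt or PBKDF2:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash a password with a per-user random salt (illustrative sketch only;
    real systems should use a slow KDF such as bcrypt, scrypt or PBKDF2)."""
    if salt is None:
        salt = os.urandom(16)          # unique random salt for each user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

# Two users who choose the same password end up with different hashes:
salt1, hash1 = hash_password("letmein")
salt2, hash2 = hash_password("letmein")
assert hash1 != hash2                  # different salts -> different hashes

# Verification recomputes the hash using the salt stored with the record:
_, check = hash_password("letmein", salt1)
assert check == hash1
```

Note that the password itself is never stored, and there is no key anywhere that could reverse the process.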

It seems we got it all wrong, in more than one way.

Here’s how, and why.

The breach data

A huge dump of the offending customer database was recently published online, weighing in at 4GB compressed, or just a shade under 10GB uncompressed, listing not just 38,000,000 breached records, but 150,000,000 of them.

As breaches go, you may very well see this one in the book of Guinness World Records next year, which would make it astonishing enough on its own.

But there’s more.

We used a sample of 1,000,000 items from the published dump to help you understand just how much more.

→ Our sample wasn’t selected strictly randomly. We took every tenth record from the first 300MB of the compressed dump until we reached 1,000,000 records. We think this provided a representative sample without requiring us to fetch all 150 million records.

The dump looks like this:

By inspection, the fields are as follows:

Fewer than one in 10,000 of the entries have a username – those that do are almost exclusively limited to accounts at adobe.com and stream.com (a web analytics company).

The user IDs, the email addresses and the usernames were unnecessary for our purpose, so we ignored them, simplifying the data as shown below.

We kept the password hints, because they were very handy indeed, and converted the password data from base64 encoding to straight hexadecimal, making the length of each entry more obvious, like this:
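The base64-to-hex conversion is a one-liner in Python. The sample blob below is our own hand-built example (it encodes the 8-byte value discussed later in the article), not a record copied from the dump:

```python
import base64

# A hypothetical 8-byte (one cipher block) password blob, base64 as in the dump:
blob_b64 = "EQ7fIpT7i/Q="

blob = base64.b64decode(blob_b64)
print(blob.hex())    # straight hexadecimal: '110edf2294fb8bf4'
print(len(blob))     # 8 bytes, so the length is immediately obvious
```

Hex makes the lengths easy to eyeball: every 16 hex digits is exactly one 8-byte block.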

Encryption versus hashing

The first question is, “Was Adobe telling the truth, after all, calling the passwords encrypted and not hashed?”

Remember that hashes produce a fixed amount of output, regardless of how long the input is, so a table of the password data lengths strongly suggests that they aren’t hashed:

The password data certainly looks pseudorandom, as though it has been scrambled in some way, and since Adobe officially said it was encrypted, not hashed, we shall now take that claim at face value.

The encryption algorithm

The next question is, “What encryption algorithm?”

We can rule out a stream cipher such as RC4 or Salsa20, where encrypted strings are the same length as the plaintext.

Stream ciphers are commonly used in network protocols so you can encrypt one byte at a time, without having to keep padding your input length to a multiple of a fixed number of bytes.

With all data lengths a multiple of eight, we’re almost certainly looking at a block cipher that works eight bytes (64 bits) at a time.
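The length check itself is trivial to automate. The blobs below are hypothetical stand-ins for dump records, but the logic is exactly the test described above: decode each blob and confirm every length is a multiple of the 8-byte block size:

```python
import base64
from collections import Counter

# Hypothetical base64 password blobs standing in for records from the dump:
sample = [
    "EQ7fIpT7i/Q=",              # decodes to 8 bytes
    "j9p+HwtWWT86aMjgZFLzYg==",  # decodes to 16 bytes
    "L8qbAD3jl3jioxG6CatHBw==",  # decodes to 16 bytes
]

lengths = Counter(len(base64.b64decode(b)) for b in sample)
print(lengths)                               # tally of blob lengths in bytes

# Every length divisible by 8 points at a 64-bit block cipher, not a hash:
assert all(n % 8 == 0 for n in lengths)
```

A hash function would instead give every blob an identical length, whatever the password.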

That, in turn, suggests that we’re looking at DES, or its more resilient modern derivative, Triple DES, usually abbreviated to 3DES.

→ Other 64-bit block ciphers, such as IDEA, were once common, and the ineptitude we are about to reveal certainly doesn’t rule out a home-made cipher of Adobe’s own devising. But DES or 3DES are the most likely suspects.

The use of a symmetric cipher here, assuming we’re right, is an astonishing blunder, not least because it is both unnecessary and dangerous.

Anyone who computes, guesses or acquires the decryption key immediately gets access to all the passwords in the database.

On the other hand, a cryptographic hash would protect each password individually, with no “one size fits all” master key that could unscramble every password in one go – which is why UNIX systems have been storing passwords that way for about 40 years already.

The encryption mode

Now we need to ask ourselves, “What cipher mode was used?”

There are two modes we’re interested in: the fundamental ‘raw block cipher mode’ known as Electronic Code Book (ECB), where patterns in the plaintext are revealed in the ciphertext; and all the others, which mask input patterns even when the same input data is encrypted by the same key.

The reason that ECB is never used other than as the basis for the more complex encryption modes is that the same input block encrypted with the same key always gives the same output.

Even repetitions that aren’t aligned with the blocksize retain astonishingly recognisable patterns, as the following images show.

We took an RGB image of the Sophos logo, where each pixel (most of which are some sort of white or some sort of blue) takes three bytes, divided it into 8-byte blocks, and encrypted each one using DES in ECB mode.

Treating the resulting output file as another RGB image delivers almost no disguise:
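You can reproduce the effect without an image, or even a real cipher. The toy function below is not DES: it uses a keyed hash as a stand-in for a block cipher, purely to show ECB’s defining property, namely that identical plaintext blocks always encrypt to identical ciphertext blocks:

```python
import hashlib

KEY = b"demo-key"

def toy_ecb_encrypt(data):
    """ECB-style scrambling using a keyed hash as a stand-in block cipher.
    (Illustration only; the point is that ECB maps identical 8-byte
    plaintext blocks to identical outputs, leaking any repetition.)"""
    out = bytearray()
    for i in range(0, len(data), 8):
        block = data[i:i + 8]
        out += hashlib.sha256(KEY + block).digest()[:8]  # same in -> same out
    return bytes(out)

ct = toy_ecb_encrypt(b"AAAAAAAA" + b"BBBBBBBB" + b"AAAAAAAA")
blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]

assert blocks[0] == blocks[2]   # the repeated plaintext block shows through
assert blocks[0] != blocks[1]
```

In the Sophos logo image, every run of identically-coloured pixels becomes a run of identical ciphertext blocks, which is why the shape survives.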

Cipher modes that disguise plaintext patterns require more than just a key to get them started – they need a unique initialisation vector, or nonce (number used once), for each encrypted item.

The nonce is combined with the key and the plaintext in some way, so that the same input leads to a different output every time.
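A sketch of the idea, again with a hash-based keystream standing in for a real cipher mode such as AES-CTR or CBC. Mixing a fresh per-record nonce into the encryption means the same password no longer produces the same blob twice:

```python
import hashlib
import os

KEY = b"demo-key"

def toy_encrypt_with_nonce(data, nonce):
    """Nonce-based encryption sketch: derive a keystream from key + nonce +
    block counter and XOR it into the plaintext, so identical inputs give
    different outputs. (Real code would use a vetted mode like AES-CTR.)"""
    out = bytearray()
    for i in range(0, len(data), 8):
        keystream = hashlib.sha256(KEY + nonce + i.to_bytes(4, "big")).digest()[:8]
        block = data[i:i + 8]
        out += bytes(a ^ b for a, b in zip(block, keystream))
    return bytes(out)

pt = b"password"
c1 = toy_encrypt_with_nonce(pt, os.urandom(8))   # fresh random nonce...
c2 = toy_encrypt_with_nonce(pt, os.urandom(8))   # ...for every record

assert c1 != c2    # same plaintext, different ciphertexts
```

Had Adobe done anything like this, the repeated-blob analysis that follows would have been impossible.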

If the shortest password data length above had been, say, 16 bytes, a good guess would have been that each password data item contained an 8-byte nonce and then at least one block’s worth – another eight bytes – of encrypted data.

Since the shortest password data blob is exactly one block length, leaving no room for a nonce, that clearly isn’t how it works.

Perhaps the encryption used the User ID of each entry, which we can assume is unique, as a counter-type nonce?

But we can quickly tell that Adobe didn’t do that by looking for plaintext patterns that are repeated in the encrypted blobs.

Because there are 2⁶⁴ – close to 20 million million million – possible 64-bit values for each ciphertext block, we should expect no repeated blocks anywhere in the 1,000,000 records of our sample set.

That’s not what we find, as the following repetition counts reveal:
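Tallying repeats is a few lines of Python. The hex blobs below are hypothetical stand-ins for the simplified dump (only the first one corresponds to a value discussed in this article); the technique is simply to split every blob into 16-hex-digit blocks and count duplicates:

```python
from collections import Counter

# Hypothetical hex password blobs as they might appear in the simplified dump:
blobs = [
    "110edf2294fb8bf4",                    # one block
    "110edf2294fb8bf4",                    # the same block again
    "2fca9b003de39278a2b1c9e1b3f5d4c6",    # two blocks (hypothetical)
    "110edf2294fb8bf4e2a311ba09ab4707",    # two blocks, first one a repeat
]

# Split every blob into 8-byte (16 hex digit) cipher blocks and tally them:
counts = Counter(blob[i:i + 16] for blob in blobs for i in range(0, len(blob), 16))

for block, n in counts.most_common():
    if n > 1:
        print(block, n)    # any repeat at all betrays ECB mode
```

Run over the real sample, this kind of tally produces the repetition counts shown above.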

Remember that if ECB mode were not used, each block would have been expected to appear just once every 2⁶⁴ times, for a minuscule prevalence of about 5 × 10⁻¹⁸%.

Password recovery

Now let’s work out, “What is the password that encrypts as 110edf2294fb8bf4 and the other common repeats?”

If the past, all other things being equal, is the best indicator of the present, we might as well start with some statistics from a previous breach.

When Gawker Media got hacked three years ago, for example, the top passwords that were extracted from the stolen hashes came out like this:

(The word lifehack is a special case here – Lifehacker being one of Gawker’s brands – but the others are easily-typed and commonly chosen, if very poor, passwords.)

This previous data combined with the password hints leaked by Adobe makes building a crib sheet pretty easy:

Note that the 8-character passwords 12345678 and password are actually encrypted into 16 bytes, denoting that the plaintext was at least 9 bytes long.

It should come as no surprise to discover that this is because the input text consisted of: the password, followed by a zero byte (ASCII NUL), used to denote the end of a string in C; followed by seven NUL bytes to pad the input out to a multiple of 8 bytes to match the encryption’s block size.
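The padding arithmetic described above, expressed in code (a sketch of the scheme we inferred, not Adobe’s actual implementation):

```python
# How an 8-character password ends up occupying two cipher blocks:
password = b"password"                   # exactly 8 bytes
padded = password + b"\x00"              # C-style NUL string terminator
padded += b"\x00" * (-len(padded) % 8)   # pad up to the 8-byte block size

print(len(padded))      # 16 -> two 64-bit blocks
print(padded[8:])       # the second block is eight NUL bytes
```

So every password of exactly 8 characters produces a second input block of eight NUL bytes, and under ECB those blocks all encrypt identically.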

In other words, we now know that e2a311ba09ab4707 is the ciphertext that signals an input block of eight zero bytes.

That data shows up in the second ciphertext block in a whopping 27% of all passwords, which leaks to us immediately that all those 27% are exactly eight characters long.
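That observation turns directly into a filter. Given the ciphertext of the all-NUL block, picking out every exactly-8-character password is one list comprehension (the second and third blobs below are hypothetical examples, not records from the dump):

```python
# Known ciphertext of a block of eight NUL bytes, recovered via the crib sheet:
NUL_BLOCK = "e2a311ba09ab4707"

blobs = [
    "110edf2294fb8bf4e2a311ba09ab4707",   # 8 chars + NUL padding
    "2aa64ce871dbc353e2a311ba09ab4707",   # hypothetical: also exactly 8 chars
    "110edf2294fb8bf4",                   # a single block: 7 chars or fewer
]

# Any two-block blob whose second block is the NUL-padding ciphertext
# belongs to a password of exactly 8 characters:
eight_char = [b for b in blobs if len(b) == 32 and b[16:32] == NUL_BLOCK]
print(len(eight_char))
```

Applied to the full sample, this is how the 27% figure falls out with no cryptanalysis at all.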

The scale of the blunder

With very little effort, we have already recovered an awful lot of information about the breached passwords, including: identifying the top five passwords precisely, plus the 2.75% of users who chose them; and determining the exact password length of nearly one third of the database.

So, now we’ve shown you how to get started in a case like this, you can probably imagine how much more is waiting to be squeezed out of “the greatest crossword puzzle in the history of the world,” as satirical cartoon site XKCD dubbed it.

Bear in mind that salted hashes – the recommended programmatic approach here – wouldn’t have yielded up any such information, and you can appreciate the magnitude of Adobe’s blunder.

There’s more to concern yourself with.

Adobe also described the customer credit card data and other PII (Personally Identifiable Information) that was stolen in the same attack as “encrypted.”

And, as fellow Naked Security writer Mark Stockley asked, “Was that data encrypted with similar care and expertise, do you think?”

If you were on Adobe’s breach list (and the silver lining is that all passwords have now been reset, forcing you to pick a new one), why not get in touch and ask for clarification?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/mQJDB_L5API/

The Schmidt hits the Man: NSA spying on Google servers? ‘OUTRAGEOUS!’

Google’s executive chairman Eric Schmidt has branded the NSA’s alleged surveillance of web giants’ data centers “outrageous”.

Speaking to The Wall Street Journal, Schmidt lashed out at American spooks after documents from whistleblower Edward Snowden suggested Google and Yahoo! data center links were being snooped on.


“It’s really outrageous that the National Security Agency was looking between the Google data centers, if that’s true,” the search engine supremo was quoted as saying.

“The steps that the organization was willing to do without good judgment to pursue its mission and potentially violate people’s privacy, it’s not OK.”

The remarks from the exec chairman come in the wake of revelations of a top-secret project known as MUSCULAR, which is believed to have harvested private and personal data by tapping into the fiber lines linking up server warehouses operated by Google, Yahoo! and others.

It’s alleged more than 180 million records have been slurped in the MUSCULAR dragnet, run by Uncle Sam’s NSA and the UK’s GCHQ.

The outrage from Schmidt contrasts with Google’s sometimes arguably cavalier attitude to user privacy: the firm has been accused at various times of snooping on citizens’ activities, including a scandal over the collection of Wi-Fi data by its Street View cars, which dogged Google for years. More recently, politicians have called on Google executives to discuss privacy concerns over the company’s headset computers, Google Glass.

Schmidt himself said in 2009 of user privacy concerns: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

More recently, however, Google has been among the most vocal critics of government surveillance programs, speaking out against the internet dragnet-like PRISM platform and calling for greater transparency in how firms can report their interactions with spies, and demanding the ability to warn users when their information has been sought out. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/eric_schmidt_blasts_nsa_for_spying_on_google/

Switzerland to set up ‘Swiss cloud’ free of NSA, GCHQ snooping (it hopes)

Swisscom, the Swiss telco that’s majority owned by its government, will set up a “Swiss cloud” hosted entirely in the land of cuckoo clocks and fine chocolate – and try to make the service impervious to malware and uninvited spooks.

Companies providing secure communications, such as Silent Circle, already use Swiss data centers because the country has very tight data privacy laws. And surveillance can only take place after prosecutors secure a court order, as opposed to the secret clearances issued in the US.


“Data protection and privacy is a long tradition in Switzerland, and that’s why it’s pretty difficult to get to something,” Swisscom’s head of IT services Andreas Koenig told Reuters. “But if legal requirements are there and we are asked by the judge to obtain or deliver certain information then we would obviously have to comply with it.”

Koenig insisted that introducing the Swiss cloud was less about addressing concerns over foreign spying and more to do with providing a local service that could be fast and secure, but acknowledged fear of surveillance and the existence of Switzerland’s tight privacy policies may make the service attractive to those worried about the safety of their cloud data.

That includes the Swiss government. Last week the Swiss newspaper Basler Zeitung reported that the nation’s administration was increasingly concerned about the extent of state-sponsored spying on cloud data. Much of the country’s revenue comes from its banking sector and with an estimated $2 trillion stored by the Gnomes of Zurich, Swiss servers are bound to be of interest to snoopers and scammers.

The Swisscom cloud will use HTML5 for its user interface, and will host all its data within Swiss borders. While designed for domestic customers, Koenig said that the service could appeal to others who want their data to be as secure as their savings.

“If you are a provider in a cloud environment you need to apply the highest standards of security you can get,” he explained. “It’s like opening a data tunnel from the server to your screen and then displaying the data on your screen. That makes it pretty, pretty difficult for anyone to see what’s there.”

Swisscom aims to get about three quarters of its data, estimated to be between 200 and 300 petabytes, on its cloud by 2016, but it gave no more details on how fast the service could be rolled out to others. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/switzerland_to_set_up_swiss_cloud_free_of_nsa_snooping/

Sysadmins! Microsoft now offers $100k for tales of your horrible infections

It may have been a latecomer to the practice of offering cash rewards for reporting code flaws, but Microsoft is making up for lost time with an expansion of its security bug bounty program.

The Windows 8 giant started paying for vulnerability reports in June having ring-fenced a $100,000 prize pot just for security researchers. Now anyone who registers at doa@Microsoft.com can take part, so that if a new exploitable hole is discovered or – more importantly – hackers have found ways to defeat built-in protections, Redmond can get on the case as soon as possible.


Ultimately, Microsoft would like anyone from sysadmins to software engineers on the sharp end of a digital attack in the wild to report their findings. If your machine or entire network suddenly goes screwy, make sure that you collect all the evidence you can, because Redmond wants proof-of-concept exploit code and a technical analysis of the assault before it will hand over the prize.

“Individual bugs are like arrows. The stronger the shield, the less likely any individual bug or arrow can get through,” said Katie Moussouris, senior security strategist lead for Microsoft Trustworthy Computing.

“Learning about ‘ways around the shield,’ or new mitigation bypass techniques, is much more valuable than learning about individual bugs because insight into exploit techniques can help us defend against entire classes of arrows as opposed to a single bug – hence, we are willing to pay $100,000 for these rare techniques.”

Design flaws and coding gaffes can be reported even if they’re not in production software: if beta code or preview versions contain exploitable bugs, Microsoft wants to know before the final code is released, and will pay for the knowledge.

“We want to learn about these rare new exploitation techniques as early as possible, ideally before they are used, but we’ll pay for them even if they are currently being used in targeted attacks if the attack technique is new – because we want them dead or alive,” Moussouris explained in a blog post, adding the Bon Jovi track of that title is one of her favorite pieces of music.

Brit James Forshaw, head of vulnerability research at Context Information Security, was the first person to benefit from Microsoft’s big-bucks foray into bug bounties. He bagged $100,000 in October after finding a fundamental flaw in Windows 8.1 security, and Redmond also paid out $28,000 to researchers who poked holes in Internet Explorer 11.

While some in the infosec community may be less than happy about allowing others to participate in the bug bounty program, it makes sense from a practical perspective to allow anyone a shot at getting a reward for flaw finding. It’s not just researchers who search for this stuff and a payout would be a nice way to compensate an IT admin for the sleepless nights caused by a cunning new infection. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/microsoft_expands_bug_bounties_to_give_it_admins_a_chance/

Monitoring Where Search Engines Fear To Tread

The Tor Onion Routing network has long been a favorite way for privacy-seeking online users to add a series of anonymizing layers between themselves and sites on the Internet. From hackers and dissidents to companies and governments seeking to cloak their activities online, Tor has gained a significant following of users.

Yet anonymizing networks, or darknets, are also used by online criminals looking to hide their tracks. Two recent incidents underscore the appeal: One, the Silk Road, an online bazaar of drugs and illegal goods, was operated as a hidden site until the FBI arrested the alleged owner of the site in San Francisco in early October. The other, a recent botnet known as Mevade, or Sefnit, routed its command-and-control traffic through the Tor network to hide the locations of infected nodes. The botnet traffic had a tremendous impact on Tor, driving its measure of simultaneous users from approximately 800,000 to more than 5 million, according to statistics on the Tor Project site.

Companies need to watch their networks for signs of the presence of darknets and for traffic to anonymous sites created to evade search engine crawlers, known as the deepweb, says Jon Clay, a security technology expert with antivirus firm Trend Micro.

“The criminals are using these techniques,” he says. “The question that an organization needs to look at, and discuss among themselves, is whether these communications channels, such as the TOR network, is something that employees should be using internally. If not, then you need to flag that and investigate any detections.”

The recent takedown of Silk Road has spotlighted the use of the deepweb sites and hidden services for illegal activities. In a report published following the arrest of the suspected operator of Silk Road, Trend Micro stressed that Tor is only the best known of the deepweb networks. Other networks and technologies for anonymizing communications and creating hidden services include the Invisible Internet Project (I2P), Freenet, and alternative domain roots.

Each technology has legitimate uses. Tor allows users to hide the source of their traffic, hidden services are used by many journalists as a drop box for anonymous sources, and alternative domain roots have offered top-level domains for certain groups of people, such as Kurdish, Tibetan, and Uyghur ethnic groups. The technologies serve a legitimate role by giving people in oppressive regimes the ability to communicate.

[The Tor-based ‘LazyAlienBiker’ — a.k.a. Mevade — botnet’s attempt to evade detection using the anonymous Tor network ultimately exposed it. See How The Massive Tor Botnet ‘Failed’.]

Whether a company should block deepweb sites and darknets is a discussion for management, but each company should look for signs of the anonymizers to know whether they have a problem, says Wade Williamson, a senior security analyst with network-security firm Palo Alto Networks. The first step should be a survey of systems on the network to look for such applications, he says.

“If you see Tor or one of these other anonymizing networks on a computer in your network, that should be a canary in the coal mine,” Williamson says. “You at least need to investigate at that point.”

Many cybercriminals tend to create their own anonymization networks, which are fairly easy to detect and block, once analyzed by security firms. The team behind Tor, which was originally created by the U.S. Naval Research Laboratory to protect government communications, has made that technology much harder to find.

While some groups and threat-intelligence firms compile lists of Tor relays and exit nodes to allow companies to block communications with those sites, unlisted bridge relays can act as intermediaries to bypass such blocking. The Tor project has even created obfuscated bridge relays to defeat techniques for inspecting traffic and blocking Tor-like traffic.

In the end, companies serious about blocking any sort of anonymizing traffic may want to only allow IP addresses and domains with known, good reputations, says Trend Micro’s Clay.

“You want to have as much information as you possibly can, and if you don’t know if an IP or a domain is bad, you might block it,” he says.

Have a comment on this story? Please click “Add Your Comment” below. If you’d like to contact Dark Reading’s editors directly, send us a message.

Article source: http://www.darkreading.com/monitoring/monitoring-where-search-engines-fear-to/240163477

ThreatSim’s State Of The Phish Finds Most Organizations Do Not Recognize Phishing As A True Threat

HERNDON, VA–(Marketwired – Nov 4, 2013) – ThreatSim, the leading innovator of simulated phishing training and awareness solutions, today announced key findings for its 2013 State of the Phish awareness index, gauging phishing training, awareness and readiness among 300 IT executives, administrators and professionals in organizations throughout the United States.

The key finding: most organizations (57%) rate phishing as a ‘minimal’ impact threat (resulting in investigation and account resets), while one in four respondents (27%) reported phishing attacks that led to a ‘material’ breach within the last year. The survey defined ‘material’ as some form of malware infection, unauthorized access and lost/stolen data from a breach tied to phishing.

“While material impacts from phishing attacks can cause more damage and headlines, our customers report the cumulative effects from ‘minimal’ impact events are daily challenges,” said ThreatSim CEO Jeff LoSapio. “There is a ‘nuisance factor’ in which investigations are launched, accounts reset, and staff are unable to work as their laptop is cleaned. The opportunity cost is huge especially for medium size companies where up to 50% of time in a week can be spent handling these ‘minimal’ impact fire drills. Reducing end user susceptibility to phishing attacks has a direct impact on reducing IT cost and increasing the security team’s productivity.”

The weekly headlines show that phishing continues to be one of the most active, growing and consistent threat vectors, and State of the Phish findings show most organizations are still not proactive or taking an effective stance to train end users on how not to get phished. The majority (69%) are using ineffective techniques including email notifications, webinars, and in person training.

While sixty percent (60%) of all respondents reported phishing attacks targeting their organization were increasing each year, only 10% are using phishing simulation to train their users, a technique that has proven to reduce users’ click rates by up to 80%.

Other key findings from the index include:

30% of all respondents plan to increase budget for security training and awareness in 2014.

60% of all respondents see the rate of phishing increasing each year.

70% of all respondents reported not measuring their organization’s exposure to phishing.

Out-of-date 3rd party software on desktops should be viewed as another critical threat vector. The index found 44% of all respondents are not formally managing 3rd party software, representing a weakness in organizational ability to prevent damage associated with phishing attacks.

“Phishing simulation is proven to be the most effective means to educate end users and reduce susceptibility to phishing attacks,” LoSapio said. “While budgets are increasing for thirty percent of all respondents, sadly fewer than ten percent are using this successful technique.”

State of the Phish surveys were completed double blind and transmitted electronically via a third-party survey service between Sept. 26 and Oct. 4, 2013. To download the complete key findings and methodology report, including a special report featuring ThreatSim consolidated customer trending data during 2013, visit http://threatsim.com/resources/2013-state-of-the-phish/.

ThreatSim customers, including a top 10 mutual fund firm, a top three U.S. utility and one of the largest government defense contractors, have achieved up to an 80% reduction in the rate of employees who click on phishing e-mail messages. Available in 11 languages and country themes, ThreatSim simulations are extremely realistic, coupled with effective training content that equips employees with the skills to identify and avoid phishing attacks. ThreatSim is a secure hosted Software-as-a-Service that requires no installation or configuration.

About ThreatSim

ThreatSim is the leading innovator of simulated phishing defense training and awareness solutions. Headquartered in Herndon, Va., outside Washington, D.C., ThreatSim delivers highly-scalable, feature-rich, SaaS-based phishing and advanced threat training campaigns that measurably lower organizational risk exposure. ThreatSim customers include large commercial enterprises, SMBs, government organizations and academic institutions. Request a demo, visit www.threatsim.com and follow @ThreatSim.

Article source: http://www.darkreading.com/applications/threatsims-state-of-the-phish-finds-most/240163542

Is A Tsunami Of SAP Attacks Coming?

Last week at RSA Europe, a leading researcher in the security of business critical applications warned that a new wave of SAP attacks could crash down on enterprises after the discovery of an old banking Trojan had been modified to look for SAP GUI installation on infected endpoints.

The modified application was Trojan.ibank, which was found to be trolling for SAP installations by researchers at Dr.Web recently, says Alexander Polyakov, co-founder and CTO of ERPScan, who brought up the modified malware in a broader talk at RSA about the dangers of SAP and ERP vulnerabilities. Polyakov told Dark Reading that one of the likely ways attackers could be using such targeted malicious functionality could be for the purpose of gathering information that could be sold to third parties on the black market. But there could be another, more dangerous motive.

[How do you know if you’ve been breached? See Top 15 Indicators of Compromise.]

“A second way to use it for attackers is to wait until critical mass of systems are infected and then upload a special module for SAP,” he says, explaining that this could be disastrous when combined with ibank’s password-stealing functionality. “There are dozens of ways to steal those passwords and use them. It is possible to connect to SAP Server and do any kind of fraud in the system or simply steal critical information such as client lists or employees’ personal information. We decided to warn people and SAP’s Security response team with whom we closely work before this can happen.”

A long-time advocate for the improvement of security in business-critical applications such as SAP, Polyakov also presented findings last week from a recent survey of common SAP vulnerabilities and misconfigurations found within the typical enterprise. One of the key findings revolved around lingering problems from an extremely critical heap overflow vulnerability that ERPScan discovered, and which was nominated for a Pwnie Award at Black Hat Las Vegas.

“The vulnerability allows attackers to get full control on SAP Router within one TCP packet and thus obtain access to internal network of company,” he says. “This issue was closed in May but after almost half a year we found that only 15% from about 5000 SAP Routers available on the Internet were patched.”

According to Polyakov, while business-critical application vulnerabilities have gotten more attention in the last few years, enterprises still have a lot of work ahead of them. He says that these systems are “way easier” to break than browsers or operating systems, and yet they are at the heart of most business processes and could make or break the viability of the business. For example, he points to an attack against Istanbul Provincial Administration, where hackers were essentially able to erase debts by breaking into a business critical application.

“But if we talk about espionage, it’s hard to find many public examples mostly because only 10% of SAP systems that we analyzed use logging,” he says. “Which means that even if there is a breach it’s almost impossible to find it. As for the unpublished attacks, some customers told us about internal fraud like salary modification or backdoors left in ABAP code by third-party developers.”

As enterprises endeavor to lock down these applications, he encourages them to follow the work his firm is doing on www.EAS-SEC.org, which he says will provide a framework for securing implementation, maintenance and development of custom applications.

“It’s kind of OWASP for Business applications but slightly different,” he says. “This framework can help you to find most critical issues at first and then go to less important.”

Have a comment on this story? Please click “Add Your Comment” below. If you’d like to contact Dark Reading’s editors directly, send us a message.

Article source: http://www.darkreading.com/attacks-breaches/is-a-tsunami-of-sap-attacks-coming/240163543

Shape Security Adds Execs From Palo Alto Networks, Sencha And Mozilla To Expand Exec Bench

MOUNTAIN VIEW, CA–(Marketwired – Nov 4, 2013) – Shape Security, a startup developing a new type of web security technology, today announced three hires to strengthen the company’s executive team: Mark Rotolo, VP of sales; Michael Coates, director of product security; and Dr. Ariya Hidayat, director of engineering. Founded in 2011 by leaders from Google, the Pentagon, and major defense contractors, Shape’s highly-accomplished team positions the company for immediate success in protecting organizations against sophisticated and dangerous attacks.

“Michael, Ariya, and Mark have exceptional track records and could work anywhere in Silicon Valley,” said Derek Smith, CEO of Shape Security. “They chose to join Shape because our technology is game changing for every website and web application which handles important data.”

Michael Coates, director of product security at Shape, was previously head of security at Mozilla, where he built the security program from the ground up to protect Firefox and other products. Coates is also the Chairman of OWASP, the worldwide organization dedicated to web application security, with over 40,000 participants in more than 100 countries.

“Maybe once every ten years, a new technology hits the market that truly changes the way companies think about security. Shape’s technology is that next leap,” said Coates. “They are operating on a different layer of the security problem than anyone else. I am thrilled to help build this product.”

Dr. Ariya Hidayat joins Shape as director of engineering. He is the author of PhantomJS, a widely-used web testing and automation framework. Hidayat joins Shape from Sencha, where as engineering director he led a team building hybrid HTML5 run-time platforms. He also created Esprima, which serves as the foundation of many JavaScript tools.

Mark Rotolo is Shape’s new vice president of sales. He has 25 years’ experience in technology sales, leading three companies to acquisition for a combined $628 million. Rotolo most recently served as vice president of sales – West at Palo Alto Networks, where he helped grow revenues from zero to over $350 million per year.

“What is most exciting to me is how broadly applicable Shape’s technology is,” said Rotolo. “I am looking forward to rapidly growing our efforts to help organizations protect themselves from dangerous attacks which no other product or solution can defend against.”

The three join a team which includes Google’s former click fraud czar and the former general manager of Cisco’s Application Delivery business unit. The company also announced in October that Dr. Xinran Wang, the creator of Palo Alto Networks’ WildFire product, had joined Shape as chief security scientist.

Shape Security is currently in private beta and will be launching its public product in early 2014.

About Shape Security

Based in Mountain View, California, Shape Security is a startup developing a new category of web security products. The company has raised $26 million in Series A and B funding from Kleiner Perkins Caufield Byers, Venrock, Google Ventures, Allegis Capital, Google Executive Chairman Eric Schmidt’s TomorrowVentures, former Symantec CEO Enrique Salem, and top executives from LinkedIn, Twitter, Square, and Dropbox. Shape has a world-class team hailing from Google, the Pentagon, Cisco, VMware, major defense contractors, and other leading corporations. The company’s patent-pending technology will provide customers with a fundamentally better approach to protecting websites against modern web attacks. Shape is hiring. Visit shapesecurity.com to learn more.

Article source: http://www.darkreading.com/management/shape-security-adds-execs-from-palo-alto/240163545

25 Years After: The Legacy Of The Morris Internet Worm

Stuart McClure was an undergraduate student at the University of Colorado in Boulder 25 years ago when dozens of the university’s servers suddenly began crashing. The university, like other universities, government agencies, and organizations, had been hit with a historic computer worm that crippled thousands of machines around the Internet in an apparent informal research project gone wrong.

“I basically cut my teeth on the low-level reverse-engineering of that worm,” recalls McClure, who analyzed the worm when he became a teaching assistant at the university. “I remember thinking, ‘This was way too easy'” to execute, he says of the worm.

Nov. 2 marked the 25th anniversary of the infamous “Morris worm,” the Internet’s first major cybersecurity event that ultimately propelled the then-nascent Internet into a new world of rogue-code attacks on the once-hallowed ground of academia, research and development, military, and government communications. The worm was written and released by then-Cornell University computer science graduate student Robert Tappan Morris, who later confessed that he wrote the code as an experiment that had inadvertently spun out of his control.

A parade of high-profile worm infections has followed the Morris worm over the past 25 years, including Code Red, Blaster, Sasser, ILoveYou, Nimda, and SQL Slammer, all of which were unleashed mainly to grab attention and wreak havoc and, like the Morris worm, mainly hurt victim organizations’ productivity and operations without damaging their data. That had traditionally been the one saving grace of worms: they were more of a headache than a destructive attack. But the worm’s wrath has changed dramatically with the newest generation, such as the targeted Stuxnet, aimed at sabotaging Iran’s nuclear facility, and the Shamoon worm, which was unofficially identified as the malware that wiped data from some 30,000 machines at oil giant Saudi Aramco. These newest iterations make the Morris worm look quaint in comparison to their targeted, damage-inflicting missions.

“Anybody who would try to convince Saudi Aramco or RasGas that they don’t have to worry about malicious worms [today] would get some pushback on that,” says Eugene “Spaf” Spafford, a security industry pioneer who was one of the first to analyze the Morris worm, referring to the malicious data-wiping worms that hit those energy organizations last year.

Spaf, who is executive director of Purdue University’s Center for Education and Research in Information Assurance and Security and a professor of computer sciences at Purdue, says the Morris worm’s significance was more about its timing than its technical impact. “It would have made news no matter what he had done because we had never seen anything like that,” Spaf says. “Not many people had thought about the potential for anything like that” at the time, he says.

The Morris worm wasn’t particularly elegant, either, according to Spaf and others who analyzed the code. Although Morris wrote it to exploit flaws in the Sendmail utility in Unix, his worm had some bugs of its own that caused it to go into overdrive and spread out of control. “The code was apparently unfinished and done by someone clever but not particularly gifted, at least in the way we usually associate with talented programmers and designers. There were many bugs and mistakes in the code that would not be made by a careful, competent programmer. The code does not evidence clear understanding of good data structuring, algorithms, or even of security flaws in Unix,” Spaf wrote in his renowned 1988 analysis of the Morris worm (PDF).

[Internet security pioneer Eugene Spafford talks about why security has struggled even after its first big wake-up call 25 years ago, the Morris worm. See ‘Spaf’ On Security.]

NASA-Ames was reportedly one of the first sites to spot the Internet worm clogging its servers; it wasn’t long before other sites were seeing similar symptoms: unusual files showing up in machine directories and odd messages in Sendmail’s log files. As the worm replicated itself and reinfected each machine over and over, computers became so overloaded that some fell over altogether under the weight of it.

McClure, founder and CEO/president of Cylance and former global CTO and general manager of the Security Management Business Unit for McAfee/Intel, remembers knowing right away that the worm had reached the University of Colorado’s servers when systems began going down with no explanation.

The multiplatform capability of the worm — it infected then-pervasive Unix-based Sun Microsystems Sun 3 and DEC VAX computers running 4 BSD versions of Unix connected to the Internet — impressed McClure. “It was multiplatform, which was really cool,” he says. “It was not just Sendmail, but other pieces that it went after and exploited features.

“When I looked at the code … it was fascinating. That really kicked off my [security] career.”

The Internet has come a long way since 1988, for sure, but there are some hauntingly familiar themes in both the Morris worm and today’s threats. Not only did Morris exploit weak passwords in the systems (sound familiar?), but he also exploited a buffer overflow vulnerability, a type of software bug still abused today, notes Marc Maiffret, CTO at BeyondTrust.

Maiffret and colleague Ryan Permeh at eEye Digital Security discovered Code Red in July 2001. They named it after Code Red Mountain Dew, the cherry soda the two were drinking while they picked apart the worm, which ultimately infected some 350,000 servers running Microsoft’s IIS.

Worms throughout history have reflected the times, he says. “If you look at the Morris worm … it started as seeing if something would work. It was not meant to be malicious in any specific way,” he says. “Code Red was very similar in a way, although both worms were written with different intentions … Code Red had a payload to attack the White House’s Web server, but it was not that well-written, and it was malicious in more of a, ‘Hey, look at me,'” way, he says.

Cybercrime was still in its infancy in 2001 as well, he notes, and the hackers behind it and worms prior were more about exploration or making a name for themselves rather than a profit, he says. “Code Red was a good [example] of that middle ground. It was not cybercrime and stealing. It was really more to make a name or put out a message, just to make a statement. That mirrored the culture of what was happening” in hacking at the time, he says.

The Morris worm, Code Red, and other early worms were considered more of a nuisance, but they also are credited with raising awareness among the security and user communities.

Fast forward to today’s worms, however, and awareness is the least of victims’ worries. With a lucrative cybercrime landscape and cyberespionage driving most of today’s malware and hacking, worms mostly play a different role. “They are very tailored and very specific,” Maiffret says. Worms are deployed via automated command-and-control infrastructures today, and attempt to remain more stealthy for cyberspying purposes, for instance. “The goal there is to be stealthy, not make a name, and extract data,” for instance, Maiffret says.

But worms are not the most popular form of malware for most attackers, mainly because it’s difficult to remain stealthy while spreading toward a specific target without triggering any alarms. Stuxnet, meanwhile, needed worm-like spreading precisely to reach its air-gapped target environment. “You can’t sit there at the computer and do a targeted attack of an airgapped network. You need something automated that can find its way” in by propagating itself in a controlled way, Maiffret says.

But even the highly sophisticated Stuxnet worm was eventually found out when it landed outside its target zone. “You don’t want it to end up detected somewhere or on a researcher’s site where it can be reverse-engineered,” he says. “Worm-like characteristics are for automatically spreading, but how do you control it? Look how we’ve seen plenty of mistakes [with targeted worms].”

Then there are the fast-moving, destructive worms like the one that hit Saudi Aramco. It snuck in, but then loudly wiped data from some 30,000-plus Windows machines. “That is definitely a different animal. We’ve seen old viruses back in the day that at a specific date messed up the BIOS so the system would not boot,” Maiffret says. “It was weird that they were using some stealth and also characteristics that are frankly similar to things we have seen more than 10 years ago.”

Next Page: Another ‘Morris Moment?’

Article source: http://www.darkreading.com/attacks-breaches/25-years-after-the-legacy-of-the-morris/240163523

That time when an NSA bloke’s son borked the ENTIRE INTERNET…


It’s 25 years since the Morris Worm taught the world that computers were capable of contracting viruses.

The Morris Worm hit on 2 November 1988, spreading rapidly by exploiting vulnerabilities in sendmail, then the most widely used email server software.


Many contemporary Unix servers were running versions of sendmail featuring buggy debugging code, a shortcoming the worm exploited to devastating effect. The worm also bundled other spreading tricks, including the ability to guess passwords and a stack buffer overflow in the Unix finger daemon.

The worm only exploited known vulnerabilities in Unix sendmail, finger, and rsh/rexec, as well as weak passwords. It was meant only to gauge the size of the nascent internet, but mistakes in its spreading mechanisms had unintended consequences that turned it into a powerful server-crashing tool.

Computer systems were flooded with malicious traffic as the worm tried to spread itself further, with many systems either crashing or grinding to a halt. According to estimates, 6,000 of the 60,000 systems in the early (and much smaller) internet of the day got infected by the worm.

Although the worm had no malicious payload, its promiscuous spread created chaos.

A PBS television news report about the worm, which created all manner of headaches for early sysadmins, set the scene for many reports to follow. The suspect was a “dark genius”, it concluded.

“Life in the modern world has a new anxiety,” said the news anchor. “Just as we’ve become totally dependent on our computers they’re being stalked by saboteurs, saboteurs who create computer viruses.”


The creator of the worm was eventually identified as Robert T Morris, a computer science graduate student in the first year of his doctorate at Cornell University.

Morris was prosecuted over his actions, found guilty of breaking US computer abuse laws (specifically, the recently passed Computer Fraud and Abuse Act), and sentenced to three years’ probation. He was also ordered to complete 400 hours of community service and fined just over $10,000.

“What Morris did was stupid and reckless – there is no doubt about that. But he wasn’t the first person to write a virus, and he was far from the last to create and spread destructive malware,” notes veteran security researcher Graham Cluley in a blog post marking the Morris Worm anniversary.

Morris, now a professor at the Computer Science and Artificial Intelligence Laboratory at MIT, declined The Register’s invitation to answer questions on the creation of the worm. Morris’s late father, also named Robert, worked for the NSA.

“A lot of this story sounds eerily familiar, even 25 years later,” notes anti-virus industry veteran Paul Ducklin in a post on the Sophos Naked Security blog. Ducklin’s post reflecting on the Morris Worm event, and the lessons that can be learned from the incident – many of which continue to be relevant today – can be found here.

Bugnote

Shortly after the outbreak, security researcher Eugene Spafford wrote what remains the definitive analysis (PDF) of the bugs exploited by the Morris Worm.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2013/11/04/morris_worm_anniversary/