STE WILLIAMS

DerpTrolling game server DDoS attacker pleads guilty

It took a while, but a DDoS attacker responsible for bringing down game sites has finally been brought to justice.

The 23-year-old Utah resident, Austin Thompson, pleaded guilty on November 6 in federal court in San Diego to knowingly causing damage to third-party computers.

Thompson went on a spree in December 2013 and January 2014 under the Twitter handle @DerpTrolling. He allegedly used the Low Orbit Ion Cannon (LOIC) denial of service tool to hit servers owned by EA, Steam and Sony Online Entertainment. His attacks also affected Blizzard’s Battle.net system.

Thompson caused $95,000 in damage, according to the US Attorney’s Office for the Southern District of California, where Sony Online Entertainment is based. He is said to have used his Twitter account to announce the attacks.

On Twitter, which he joined in 2011, he often referred to DerpTrolling as a group, but there are no other defendants in his case.

He annoyed others enough that they doxxed him, posting what they claimed were his personal details online. However, he appears to have continued tweeting until August 14, 2014, when he went quiet for about 18 months. He came back in January 2016 for a day and then disappeared for good.

At the time, DerpTrolling told online video gaming channel #DramaAlert that he was supporting a gaming streamer who had egged him on. He would also respond to requests suggesting targets.

He will be sentenced on March 1, 2019, and faces a potential 10 years in prison and a fine of up to $250,000.

Said US attorney Adam Braverman:

Denial-of-service attacks cost businesses millions of dollars annually. We are committed to finding and prosecuting those who disrupt businesses, often for nothing more than ego.

Thompson isn’t the only person to have launched attacks against game servers. Around the same time as his attacks, two other Twitter users claimed responsibility for attacks against Steam.

A group of gaming grinches called Lizard Squad took down servers on Christmas Day 2014 “to amuse ourselves”, stopping countless kids from setting up their new consoles. They were arrested almost two years later for running a DDoS-for-hire service.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/B_DDr0QvwX8/

Update now! WordPress sites vulnerable to WooCommerce plugin flaw

Researchers have published details of a dangerous flaw in the way the hugely popular WooCommerce plugin interacts with WordPress that could allow an attacker with access to a single account to take over an entire site.

WooCommerce’s four-million-plus users were first alerted to the issue a few weeks back in the release notes for the updated version:

Versions 3.4.5 and earlier are affected by a handful of issues that allow Shop Managers to exceed their capabilities and perform malicious actions.

This week, PHP security company RIPS Technologies published the research that led to this warning, which gives WooCommerce and WordPress admins more of the gory detail.

There are two parts to the vulnerability, the first of which the researchers describe as a “design flaw in the privilege system of WordPress.”

The second, in WooCommerce itself, is an apparently simple file deletion vulnerability affecting versions 3.4.5 and earlier.

Which of the two is the bigger issue depends on whether you worry more about a site’s e-commerce function or about its administration – either way, the combination spells trouble.

The vulnerability

After gaining access via a phishing attack or as an inside job, an attacker could use a weakness in the log file deletion routine to delete woocommerce.php, taking down the site and causing WordPress to disable the plugin.

This, RIPS Technologies researcher Simon Scannell discovered, would be enough for any WooCommerce user with a Shop Manager account and an understanding of what they’d just done to compromise the entire site.
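The deletion weakness is a classic path-traversal pattern: a filename parameter meant to name a log file isn’t confined to the log directory, so a relative path like ../../plugins/woocommerce/woocommerce.php escapes it. Here is a minimal Python sketch of the pattern and its fix; the directory path and function names are illustrative, not WooCommerce’s actual code:

```python
import os

LOG_DIR = "/var/www/wp-content/uploads/wc-logs"  # illustrative path

def delete_log_unsafe(filename):
    # Vulnerable pattern: the caller-supplied name is joined in as-is,
    # so "../../plugins/woocommerce/woocommerce.php" escapes LOG_DIR.
    return os.path.join(LOG_DIR, filename)

def delete_log_safe(filename):
    # Fix: resolve the path, then verify it still lives under LOG_DIR.
    base = os.path.realpath(LOG_DIR)
    target = os.path.realpath(os.path.join(LOG_DIR, filename))
    if os.path.commonpath([base, target]) != base:
        raise ValueError("path escapes log directory")
    return target
```

The unsafe version happily resolves to the plugin’s own entry-point file; the safe version rejects anything that normalises to a path outside the log directory.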

But how?

When WooCommerce is installed, the Shop Manager role is assigned the potent edit_users capability needed to edit customer accounts, which is stored by WordPress itself.

Because this could be used to edit the WordPress site’s admin account too, its scope is limited by a special WooCommerce ‘meta capability’ filter.

Unfortunately, for WordPress to apply this safeguard the plugin needs to be active – which it wouldn’t be if an attacker has exploited the WooCommerce file deletion weakness.

Writes Scannell:

The meta privilege check which restricts shop managers from editing administrators would not execute and the default behavior of allowing users with edit_users to edit any user, even administrators, would occur.

An attacker with a Shop Manager account would then be able to edit the site’s admin account, change its password, and with it take control of the entire site.
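The interaction can be modelled in a few lines. This is a toy Python model of the logic RIPS describes, not WordPress’s real PHP API: a broad base capability (edit_users) is granted at plugin install time, and the plugin’s meta-capability filter narrows it only while the plugin is active.

```python
def can_edit_user(actor_role, target_role, woocommerce_active):
    # Shop Managers are granted edit_users when WooCommerce is installed;
    # the grant is stored by WordPress itself, independent of the plugin.
    has_edit_users = actor_role in ("administrator", "shop_manager")
    if not has_edit_users:
        return False
    # WooCommerce's meta-capability filter limits Shop Managers to
    # customer accounts -- but it only runs while the plugin is active.
    if woocommerce_active and actor_role == "shop_manager":
        return target_role == "customer"
    # Plugin disabled: WordPress falls back to the raw capability,
    # so edit_users now reaches administrator accounts too.
    return True
```

With the plugin deactivated (say, after the file-deletion trick), the filter never runs, and the Shop Manager’s leftover edit_users grant covers the admin account.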

What to do

On the WooCommerce side, ensure it has been upgraded to version 3.4.6, which appeared on 11 October. Plugins aren’t updated automatically by default, which means admins will have to initiate the update themselves via the wp-admin dashboard’s plugins sidebar.

As for the WooCommerce fix:

With this release, Shop Managers can only edit users with the Customer role by default, and there is a whitelist of roles that Shop Managers can edit.

Redesigning the way the WordPress permission system interacts with plugins might take a little longer.

For reasons as long as your arm, plugins have always been WordPress’s underbelly. The TL;DR is that they need constant tending as does the platform itself – never take either for granted.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Q5fRlQzKUNI/

Sent a photo to the wrong person? Facebook Messenger to let you unsend it

Back in April, Facebook automagically retracted CEO Mark Zuckerberg’s messages from recipients’ inboxes.

It was good enough for Zuck and other Facebook execs, but alas, beyond the reach of us mere mortal users. But relax, Facebook said at the time: we’re going to bring “Unsend” to one and all in a matter of months.

Well, the delete-messages time is finally nigh. Facebook said on Tuesday that Messenger is soon going to get an “Unsend” feature. Keep those fingers flexible, though: you’re only going to get up to 10 minutes to delete messages from chats after you send them.

Facebook mentioned the upcoming feature in the release notes for version 191.0 of the Messenger iOS app. Here’s what it said:

Coming soon: Remove a message from a chat thread after it’s been sent. If you accidentally send the wrong photo, incorrect information or message the wrong thread, you can easily correct it by removing the message within 10 minutes of sending it.

10 minutes? Well, it’s a lot less time than the hour Facebook gives users to delete WhatsApp messages, but it’s better than nothing, particularly when “nothing” translates into “dishonor and/or idiocy preserved for eternity.”

The time that the overlords give us to stamp out our messages seems to be somewhat changeable, at any rate. When WhatsApp first introduced the ability for users to delete messages in October 2017, it gave us only 7 minutes. Since then, message kill time has been extended to “about one hour.”

How much does it really matter? One minute, 7 minutes, 10 minutes, an hour, a Snapchat dash – no matter how much or how little time you have to hit unsend, it’s good to keep in mind that all it takes is somebody with a camera to capture a picture of the chat to turn that “delete” into “gotcha!”

Then there’s the possibility that Facebook Messenger messages won’t really go anywhere at all. It wouldn’t be surprising if they wound up haunting devices, just like their brethren: Snapchat Snaps that were supposed to have “disappeared forever” turned out to stay right there on your phone. A year ago, we also discovered that deleted WhatsApp messages were actually still on the device and could be easily accessed.

So keep this in mind when you compose any message, be it on Snapchat, WhatsApp or the soon-to-be-retractable Messenger: it’s a lot easier not to write it in the first place and keep yourself out of trouble than it is to guarantee that it’s really, truly deleted.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EJqVN2st_IA/

258,000 encrypted IronChat phone messages cracked by police

Police in the Netherlands announced on Tuesday that they’ve broken the encryption used on a cryptophone app called IronChat.

The Dutch police pulled off the coup a while ago. They didn’t say when, exactly, but they did reveal that they’ve been quietly reading live communications between criminals for “some time.” At any rate, it was enough time to read 258,000 chat messages: a mountain of information that they expect to lead to hundreds of busts.

Already, the breakthrough has led to the takedown of a drug lab, among other things, according to Aart Garssen, Head of the Regional Crime Investigation Unit in the east of the Netherlands. He was quoted in the press release:

This operation has given us a unique insight into the criminal world in which people communicated openly about crimes. Obviously, this has led to some results. For example, we rolled up a drug lab in Enschede.

In the course of this investigation we also discovered 90,000 euros in cash, automatic weapons and large quantities of [hard drugs] (MDMA and [cocaine]). In addition, we became aware of a forthcoming retaliatory action in the criminal circuit.

IronChat’s marketing leaned on tinfoil fluff, going so far as to simply make up at least one celebrity endorsement, from Edward Snowden.

Also on Tuesday, Dutch police shut down the site that sold the phones, Blackbox-security.com. An archived page shows this purported endorsement from Snowden …

I use PGP to say hi and hello, i use IronChat (OTR) to have a serious conversation

… an endorsement that, Snowden said through a representative at the American Civil Liberties Union (ACLU), he never made. In fact, he’s never heard of the phone, Snowden said. Ben Wizner, director of the ACLU’s Speech, Privacy, and Technology Project, relayed this message from Snowden in an email to Dan Goodin at Ars Technica:

Edward informs me that he has never heard of, and certainly never endorsed, this app.

Police said that they discovered the server through which encrypted IronChat communications flowed when police in Lingewaard, in the east of the Netherlands, traced a supplier of the cryptophones during a money-laundering investigation.

The phones cost about 3,000 euros per year (around US$3,400). The devices could only be used for texting, not for phone calls or web browsing, with the encryption happening on a separate server that rendered the communications unreadable by police.

The company was owned by a 46-year-old man from Lingewaard and his partner, a 52-year-old man from Boxtel. Both have been arrested under suspicion of money laundering and participation in a criminal organization. Their homes and the IronChat office have been searched, in addition to other, unspecified locations around the country.

The police could have let this play out until lord knows when but eventually pulled the plug on IronChat because they’d have had to step over dead bodies to keep up the investigation. As it was, criminals were suspecting each other of playing stool pigeon and leaking information to the police.

When they saw chats indicating that there was this kind of finger-pointing going on, they made it clear that “it was us acting upon information from the chats,” police said.

How did they crack the supposedly uncrackable?

Police aren’t saying: no surprise there. Frank Groenewegen, a security researcher at Fox-IT, told De Telegraaf that the likeliest explanation is that there was a mistake in the encryption:

In my opinion, that is the most likely option. If encryption is properly applied, nobody can do anything to make a message visible, but it sometimes depends on a comma that is wrong somewhere. Then you can put fifteen locks on a safe door, but if the hinges come loose and the door falls out, you will enter.

If, however, the encryption was in fact “iron-clad,” with no stray commas or other mistakes, it could be that police managed to crack the encryption algorithms, Groenewegen said. That would make this a problem for everyone who relies on the encryption in question, he said, not just Dutch crooks.

If that were the case, the police would be able to read all the chats with that encryption all over the world, so to speak… Then everyone has a problem.

For his part, Rik van Duijn, a security researcher with Dearbytes, told Dutch public broadcaster NOS that IronChat had multiple security issues.

For one thing, the app warned users about possible message interception in teensy type, worded in such a way that an average user wouldn’t understand, he said, if they read the smaller font at all. The warning:

Encryption is enabled, but conversation partner is not authenticated.

Van Duijn:

The average user does not understand exactly what this means. You would expect that an app that so clearly focuses on encryption is clearer.

According to NOS, a police spokesman confirmed on Tuesday evening that the server used to exchange messages was cracked. Police aren’t saying how, but Van Duijn has ideas: besides other flaws, he noticed that the app didn’t have much protection from people who want to use it for free.

He himself cracked the code users needed to show that they had paid for the phone: it was just a “combination of a number of numbers” that he gleaned from the app’s source code, he said.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eiiptApz3gc/

Bruce Schneier: You want real IoT security? Have Uncle Sam start putting boots to asses

Any sort of lasting security standard in IoT devices may only happen if governments start doling out stiff penalties.

So said author and security guru Bruce Schneier, who argued during a panel discussion at the Aspen Cyber Summit that without regulation, there is little hope the companies hooking their products up to the internet will implement proper security protections.

“Looking at every other industry, we don’t get security unless it is done by the government,” Schneier said.

“I challenge you to find an industry in the last 100 years that has improved security without being told [to do so] by the government.”

Schneier went on to point out that, as it stands, companies have little reason to implement safeguards into their products, while consumers aren’t interested in reading up about appliance vendors’ security policies.

“I don’t think it is going to be the market,” Schneier argued. “I don’t think people are going to say I’m going to choose my refrigerator based on the number of unwanted features that are in the device.”

Schneier is not alone in his assessment, either. Fellow panellist Marene Allison, CISO of Johnson & Johnson, noted that manufacturers have nothing akin to a bill of materials for their IP stacks, so even if customers want to know how their products and data are secured, they’re left in the dark.

“Most of the stuff out there, even as a security professional, I have to ask myself, what do they mean?” Allison said.

That isn’t to say that this is simply a matter of manufacturers being careless. Even if vendors want to do right by data security, a number of logistical hurdles will arise both short and long term.

Allison and Schneier agreed that simply trying to port over the data security policies and practices from the IT sector won’t work, thanks to the dramatically different time scales that both industrial and consumer IoT appliances tend to have.

“Manufacturers do not change all the IT out every five years,” Allison noted. “You are looking at a factory having a 25- to 45-year lifespan.”

Support will also be an issue for IoT appliances, many of which go decades between replacement.

“The lifespan for consumer goods is much more than our phones and computers, this is a very different way of maintaining lifecycle,” Schneier said.

“We have no way of maintaining consumer software for 40 years.”

Ultimately, addressing the IoT security question may need to be spearheaded by the government, but, as the panelists noted, any long-term solution will require a shift in culture and perception from manufacturers, retailers and consumers. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/09/bruce_schneier_want_real_iot_security_get_the_government_to_put_boots_to_asses/

Learn the tricks of the cyber criminals’ trade at SANS Dublin training event

Promo The internet is full of powerful hacking tools and the cyber criminals are devising ever more ingenious ways of using them. Keeping abreast of their latest tactics and techniques is more vital than ever for those defending their organisations against ever-present threats.

SANS Dublin December 2018 offers six days of immersion training that will arm security professionals with the all-encompassing skills and detailed knowledge they need to turn the tables on the bad guys.

This training event takes place from 3-8 December and offers three cyber security courses, each taught by a respected expert and providing the chance to gain valuable GIAC certifications. SANS assures attendees that they will be equipped to use the new skills they have learned as soon as they return to work.

The courses cover the following topics:

  • Hacker tools, techniques, exploits and incident handling: Everything from the cutting edge in insidious attack vectors to the golden oldies that won’t go away. The course outlines a step-by-step process for responding to computer incidents and an account of how attackers undermine systems. It also explores the legal issues surrounding responses to breaches such as monitoring employees and handling evidence.
  • Web app penetration testing and ethical hacking: Web application flaws play a major role in large breaches and intrusions, and contrary to what many organisations believe, a security scanner is not a reliable way of discovering them. This course helps students move from push-button scanning to thorough web app penetration testing.

    As well as focusing on the major flaws and how to find them, the course shows security practitioners how to convince their organisations to take the business risk seriously and put the right measures in place.

  • Network penetration testing and ethical hacking: This is the flagship SANS course for penetration testing, with its comprehensive coverage of tools, techniques and methodologies. It starts with proper planning, scoping and recon, then dives deep into scanning, target exploitation, password attacks and web app manipulation, with more than 30 detailed hands-on labs throughout.

    Students can put their newfound skills into practice at a hacker tools workshop and compete for a prize in a capture-the-flag event.

Read the full details and register here.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/09/cyber_criminals_sans_dublin_training_event/

Route security, Cisco cert expiration, ETSI and middleboxes

Cisco admins, you thought your week was over, right? Sorry: if you have kit that runs Adaptive Security Appliance software or the Firepower Extensible Operating System, there’s one more item on the task list: updating your certificate.

Switchzilla’s field notice explained that Cisco’s root CA for tools.cisco.com was rolled over to a QuoVadis Root CA 2 cert on October 5, and that could affect “Smart Licensing and Smart Call Home functionality for all versions” of ASA or FXOS.

That causes a “Communication message send response error”, and because the platforms can’t register with the Cisco servers, “smart licenses might fail entitlement and reflect an Out of Compliance status”.

You can either upgrade, or import the new cert from the CLI.

And there’s one more wrinkle to be aware of: the QuoVadis cert isn’t FIPS-compliant. If you need FIPS compliance, there’s a different certificate to import, the HydrantID SSL ICA G2 intermediate certificate, also available from the CLI.

Better route security comes to APNIC

The Asia-Pacific Network Information Centre, APNIC, this week announced extra routing security.

Its members can now run Resource Public Key Infrastructure (RPKI) operations in MyAPNIC, including generating an AS0 Route Origin Authorisation.

As we explained in September, RPKI means a network can positively identify its authority to make route announcements, and America’s National Institute of Standards and Technology recommended its adoption.
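The validation logic itself is simple to state. The sketch below is an illustrative Python rendering of origin validation in the spirit of RFC 6811 (it is not APNIC’s or any router vendor’s implementation): an announcement is valid if some covering ROA authorises its origin ASN within the ROA’s maximum prefix length, invalid if covered but not authorised, and not-found if no ROA covers it. An AS0 ROA, as now supported in MyAPNIC, makes every covered announcement invalid, since no legitimate route originates from AS 0.

```python
import ipaddress

def validate_origin(prefix, origin_asn, roas):
    """Classify a BGP announcement against a list of ROAs.

    roas: iterable of (roa_prefix, max_length, asn) tuples.
    Returns 'valid', 'invalid' or 'not-found'.
    """
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # A ROA covers the announcement if the announced prefix
        # falls inside the ROA's prefix.
        if announced.version == roa_net.version and announced.subnet_of(roa_net):
            covered = True
            if asn == origin_asn and announced.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"
```

For example, with a ROA of (198.51.100.0/24, 24, 0) – an AS0 ROA – any announcement of that prefix is invalid regardless of the claimed origin.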

ETSI publishes TLS 1.3 “middlebox” workaround

The European Telecommunications Standards Institute, ETSI, this week published what it called a “Middlebox Security Profile specification”, Enterprise TLS (eTLS).

Hang on, I hear you ask: isn’t the Internet Engineering Task Force responsible for TLS standards?

Yes, and that was part of the problem. Welcomed for improving user security, TLS 1.3 is unloved by attackers, spooks, and those who want to proxy the security protocol at the enterprise edge.

IETF standards bods have considered the matter of TLS 1.3 proxies, but so far nobody’s hummed up sufficient support to get an RFC published – and that’s where ETSI comes in. It pitches eTLS as an enabling technology that allows net admins to carry out operations like “compliance, troubleshooting, detection of attacks (such as malware activity, data exfiltration, DDoS incidents), and more, on encrypted networks”.

eTLS only allows decryption where “both parties in a connection … are under the control of the same entity”, in which case it implements its own key exchange mechanism so TLS 1.3 packets can be decrypted.

When that happens, users can see that their communications are being examined by checking the certificate (which everybody knows how to do, right?).

As we’ve reported more than once, middleboxes aren’t just invasive, they’re frequently insecure.

But at least there’s a standard for them now …

Mode cuts SD-WAN deal with Versa

Packetpushers has reported that startup MPLS private network Mode has cut a deal with SD-WAN vendor Versa, allowing customers to set up connections to Mode services from within Versa’s portal.

BIND, OpenSSH replace WordPress and Drupal in ZDI bounty-list

The Zero Day Initiative has tweaked its Targeted Incentive Program, replacing Drupal and WordPress with OpenSSH and BIND as “high value” targets.

A successful OpenSSH code execution chain will earn you a cool $200,000, which ZDI said reflects “how much we rely on OpenSSH”.

BIND, the world’s most common DNS server, is also down for $200k, as is Windows SMB, for versions newer than 1.0.

IETF docs get sloshed

A four-party collaboration has come up with an Internet-Draft answering a conundrum you might not know existed: what’s a good way to render long lines in Internet standards documents?

Recall that the Internet standards process is ancient, and as a result, it has inherited a 72-character line length from ”green-screen” terminals.

A few years ago, the IETF adopted XML as the canonical standard for storing documents like drafts and RFCs, but humans still need to read plain text.

Code fragments pose a problem (as does the ubiquitous ASCII art of Internet documents), because they need to be stored and rendered as they are, if possible.

“Handling Long Lines in Artwork in Internet-Drafts and RFCs” suggests a simple approach: use a backslash (“\”, also referred to as a “slosh”) to indicate that a line has been folded.

As Kent Watsen (Juniper), Qin Wu (Huawei), Adrian Farrel (Old Dog Consulting) and Benoit Claise (Cisco) wrote: “The approach produces consistent results regardless of the content and uses a per-artwork header. The strategy is both self-documenting and enables automated reconstitution of the original artwork.” ®
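The fold-and-reconstitute idea can be sketched in a few lines of Python. This is a simplified illustration of the slosh convention, not the draft’s exact algorithm (the real format also prepends a header line announcing that folding is in use, and has rules for content that already ends in a backslash):

```python
def fold(text, width=69):
    # Break any line longer than `width` into pieces, marking each
    # folded break with a trailing backslash ("slosh").
    out = []
    for line in text.splitlines():
        while len(line) > width:
            out.append(line[:width - 1] + "\\")
            line = line[width - 1:]
        out.append(line)
    return "\n".join(out)

def unfold(text):
    # Reconstitute the original artwork: a backslash at the end of a
    # line means the next line is a continuation of it.
    return text.replace("\\\n", "")
```

Because the fold marker sits at a fixed, self-documenting position, unfolding is a mechanical round trip: unfold(fold(artwork)) returns the original artwork, which is exactly the automated reconstitution the authors describe.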

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/09/network_roundup_november_9/

Vulnerabilities in our Infrastructure: 5 Ways to Mitigate the Risk

By teaming up to address key technical and organizational issues, information and operational security teams can improve the resiliency and safety of their infrastructure systems.

Excluding the financial services industry, 649 of the breaches reported and analyzed for the 2018 Verizon Data Breach Investigations Report (DBIR) occurred in industries considered part of infrastructure verticals. These include utilities, transportation, healthcare, and others that employ operational technology (OT) systems in addition to traditional IT for their main operations.

In total, that represents 29.2% of reported breaches (not incidents). So, what exactly does that mean?

It means that just because an incident hasn’t happened in your infrastructure environment, that doesn’t mean it won’t happen or that you can postpone or underfund your cybersecurity efforts. No, I don’t believe we are facing a “Cyber Pearl Harbor.” But I do believe organizations operating both IT and, particularly, OT systems need to put a more conscious effort into securing these systems not only from a security perspective but in terms of quality, safety, and reliability.

Although OT industries face a similar set of problems as traditional IT, the overall application of security programs and technologies is quite different in OT, and there is even more differentiation based on the characteristics of each vertical. That being said, there are best practices in key areas, both technical and organizational, that can help mitigate the risk to infrastructure environments, regardless of the vertical. Here are five.

Risk 1: Your Environment
An organization is at a serious disadvantage if it doesn’t take the time to inventory its systems and assess the security posture for a given environment. It is nearly impossible to secure an environment if you are unaware of what is in it, how everything is connected, what data it uses (or generates), and how it affects your bottom line.

Best Practice: One of the best pieces of advice for organizations with a large installed base or many infrastructure environments is to pick a single important or representative environment and assess it first. Once you have, move forward by cascading the lessons you’ve learned to the rest of your environments.

Risk 2: Patch Management
One of the prevailing issues in OT networks is the lack of technical solutions and organizational practices for patching. This is particularly relevant if the application sits on a commercial OS, as most do. In my experience, the average number of remote code execution vulnerabilities on the host operating system alone in OT environments is around 55! Consequently, developing and maintaining a strong patch management strategy is one of the most effective activities an organization can undertake. It’s also a daunting undertaking.

Best Practice: To get started, interact with your system vendors. If your representative isn’t familiar with the company’s patching solutions, press deeper into the organization. Most major automation manufacturers are working toward solution sets compliant with standards such as IEC 62443, and customer pressure can convince niche vendors to address this problem as well.

Risk 3: Network Segmentation
Many OT systems are deployed in a flat network topology or without any segmentation between systems that should not be able to interact. There are two reasons for this: first, a misunderstanding about which systems need to communicate with one another; and second, the deployment of systems from multiple vendors or integrators over time.

Best Practice: After assessing the network topology and data flows, you will need to develop network segmentation policies, similar to the “zones and conduits” language used to describe access control in various industry standards. The goal of these policies is to mitigate the damage potential of breaches or issues related to anomalous network traffic. Bottom line: only required traffic should pass between systems, and restrictions on communication paths between various zones should be enforced.
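A zones-and-conduits policy boils down to a default-deny rule set: traffic between zones passes only through an explicitly named conduit. The Python sketch below illustrates the idea; the zone names, address ranges, and protocols are invented for the example and are not drawn from any standard.

```python
import ipaddress

# Illustrative zone map: address range -> zone name.
ZONES = {
    "10.1.0.0/16": "enterprise-it",
    "10.2.0.0/16": "dmz-historian",
    "10.3.0.0/16": "control-network",   # PLCs, HMIs
}

# Explicitly permitted conduits: (source zone, destination zone, protocol).
CONDUITS = {
    ("enterprise-it", "dmz-historian", "https"),
    ("dmz-historian", "control-network", "opc-ua"),
}

def zone_of(ip):
    addr = ipaddress.ip_address(ip)
    for net, name in ZONES.items():
        if addr in ipaddress.ip_network(net):
            return name
    return None

def allowed(src_ip, dst_ip, protocol):
    # Default deny: cross-zone traffic needs a named conduit.
    src, dst = zone_of(src_ip), zone_of(dst_ip)
    if src is None or dst is None:
        return False
    if src == dst:
        return True  # intra-zone traffic is a zone-internal concern
    return (src, dst, protocol) in CONDUITS
```

Note that conduits here are directional: the enterprise network can reach the historian DMZ over HTTPS, but nothing lets enterprise hosts talk to the control network directly.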

Risk 4: Your Supply Chain
In many OT environments, vendors maintain an aspect of control over the technical implementation of the solutions they provide through support contracts and changes that must be validated and certified to ensure the safe operation of a given system.

Best Practice: Your organization should include security requirements for the procurement of new systems, as well as for ongoing maintenance efforts, within its vendor management program. Industry standards such as IEC 62443 can provide guidance in this effort.

Risk 5: IT vs. Process Control Teams
Over the past few years, at both the leadership and execution levels, IT security teams have become involved in OT network security efforts. In several cases, differences in priorities and in the understanding of technology have led to organizational stalemates and differing opinions on how to address security in operational environments.

Best Practice: Organizations need to bring these groups together with a common goal in order to foster a culture of cooperation between the two groups to address cyber threats. Training for both OT and IT security personnel should be part of that effort, including the development of a common understanding of objectives and solutions that work for your organization.

Michael Fabian is a principal consultant within the Synopsys Software Integrity Group. His primary area of specialization involves adapting and bringing systems-level security objectives, processes, and technical solutions into a variety of non-traditional cyber systems in …

Article source: https://www.darkreading.com/vulnerabilities---threats/vulnerabilities-in-our-infrastructure-5-ways-to-mitigate-the-risk/a/d-id/1333211?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Morris Worm Turns 30

How the historic Internet worm attack of 1988 has shaped security – or not.

Michele Guel was sound asleep on Nov. 3, 1988, when the call came at 3:30 a.m.: An unknown virus had infiltrated NASA Ames Research Laboratory’s Sun Microsystems file servers and workstations and was sapping their resources, slowing them to a crawl. She headed to the lab in the dead of the night and with her team at NASA scrambled to stop the attack. They manually powered down each machine, one by one. “We walked around and shut them down … [and] unplugged the cables,” Guel recalls.

The attack was draining memory resources of NASA’s computers and spreading fast among its Digital Equipment Corp. (DEC) VAX, Silicon Graphics Unix, and Cray supercomputer machines as it targeted systems running the version 4 BSD Unix operating system. In doing so, it was exploiting security flaws in the Sendmail email application and the Finger network user-lookup service, as well as cracking passwords by brute force and taking advantage of a Unix feature that allowed users of one system to log in to another without a password.

“In the moment, it was all hands on deck. There was a need to get the workstations fixed, get them back up and not reinfected, so scientists could get back to their work as fast as possible,” says Guel, then a lab administrator at NASA Ames. The supercomputer systems in the Mountain View, Calif., facility also were used by outside organizations such as Boeing, and for projects such as space capsule rocket design.

NASA, along with the US Department of Defense, Harvard, Lawrence Livermore National Laboratory, MIT, UC-Berkeley, Stanford, and several other major universities and government research arms, all were hit that day with a worm that knocked out servers and workstations connected to the then-nascent Internet, a cloistered and collegial community of academia, research and development, military, and government users.

The worm was a graduate student experiment gone awry: Robert Tappan Morris, then a Cornell University computer science student, later confessed that he wrote the program to spread as much as possible around the Internet in order to gauge its size, not to cause harm or take down machines. His project backfired, though, ultimately crashing some 6,000 Unix-based machines – 10% of the Internet at that time – and leaving systems out of commission and offline for two or more days in some cases.

It’s been three decades since Morris first unleashed his Frankenstein-esque worm on the evening of Nov. 2. It was the first major Internet security event and a loss of innocence for the young Net. “No one was preparing” for something like that, recalls Guel, who is now the chief security architect for Cisco’s security and trust organization. “The Internet was a big happy place, and we were all using it for good purposes. It did catch us all by surprise.”

But the lessons of the Morris Worm still haunt Internet security today, according to experts who responded to and cleaned up the attack. While its impact ultimately led to the emergence of the information security industry, some of the same security issues that let the worm rapidly wriggle its way through the Internet, from machine to machine, surround networks today: weak passwords, vulnerable software, and a lack of layered security. Today’s worms are more dangerous than ever and are mostly in the hands of nation-states and other malicious actors.

The new generation of self-replicating malware has become a handy tool for spreading and dropping destructive payloads: ransomware in North Korea’s 2017 WannaCry attack, and the data-wiping exploits carried by NotPetya, which Russian military hackers aimed mostly at Ukrainian targets using destructive software posing as ransomware. These Morris Worm descendants make their payload-less ancestor seem almost quaint in comparison.

“Ultimately, you can draw a line from then until now because [the Morris Worm] was a seminal event. It was the first time we realized a global, connected infrastructure was going to be globally vulnerable,” says Paul Vixie, who pioneered the Internet’s Domain Name System (DNS). The mass infection also demonstrated how running a mix of different types of computer systems and operating systems can save the Internet from an all-out outage, notes Vixie, who battled the worm while working for DEC, in Palo Alto, Calif., where just a few research computers slowed as the worm tried to replicate among them. The attack was the first buffer overflow attack he had ever seen.

“It consumed a lot of resources while trying to spread,” he recalls, though email and the Internet gateway stayed up at the DEC research site. “I stayed up all night listening to email chatter about various people that had been affected or hadn’t been.” 

Organizations today that are moving toward more homogeneous computing environments are risking a single point of failure during a big cyberattack, such as a worm, according to Eugene “Spaf” Spafford, who was one of the first to analyze the Morris Worm after battling the attack on Purdue University while a software engineer there. He says the network community learned a lot from the worm, but even all these years later still hasn’t applied those lessons across the board, leaving them at risk. “Organizations that have the same one or two platforms, and the same storage technology, and same baseband network – if something bad happens like ransomware, it sweeps through the whole organization,” Spaf says.

Purdue computers that weren’t running 4 BSD Unix, including the university’s Sun Solaris SPARC machines and its Sequent machine in the computer science department, emerged from the worm unscathed. “We had a very divided computing environment at that time, so, as a result, we didn’t lose a lot [of systems],” Spaf says. “Our DEC VAX and Sun Solaris machines either slowed down too much or, on a couple of occasions, crashed. It was primarily brute-forcing email servers. We had one or two classroom machines that went down.”

The Morris Worm was all about spreading from machine to machine; once it landed, it attempted to hide by changing its process name and deleting temporary files, for instance. At each computer it hit, the code checked whether the target was already infected with the worm. A flaw in that check left multiple copies of the worm running on the same machine, producing an apparently unintended denial-of-service effect.
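That pile-up is easy to model. The sketch below is a hypothetical Python simulation, not the worm’s code: published analyses of the worm describe a roughly 1-in-7 chance that a new copy stayed resident even when an existing copy answered the infection check, and the attempt count and seed here are invented for illustration.

```python
import random

def reinfect_attempts(attempts: int, stay_prob: float = 1 / 7, seed: int = 1) -> int:
    """Count worm copies left running on one already-infected host.

    Each arriving copy asks whether the host is infected, but stays
    resident anyway with probability stay_prob, so copies accumulate.
    """
    rng = random.Random(seed)
    copies = 1  # the first copy always runs
    for _ in range(attempts):
        if rng.random() < stay_prob:
            copies += 1
    return copies

# With hundreds of infection attempts arriving over the network, even a
# 1-in-7 persistence rate piles up enough copies to exhaust a 1988-era
# machine's CPU and memory.
print(reinfect_attempts(700))
```

A stay probability of zero would have kept each host at a single copy; the nonzero rate is what turned a census experiment into a denial of service.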

For NASA Ames, the Morris Worm infection and computer outage was a rude wakeup call that physical security wasn’t enough anymore for the lab. “Many Unix workstations and one of our Crays and big VAXes were grinding to a halt,” Guel recalls. “It took us off the map.” For most organizations, the fact that the worm took off in the later hours of the day and evening probably spared them much of the initial downtime, but at NASA Ames, its many projects ran round-the-clock. “The Cray [supercomputer] ran 24/7 … and there was always a backlog of processes [by engineers], so there was a lot of processing time lost,” she says.

After the worm had been eradicated from NASA’s computers, Guel was tasked with building the lab’s first security team, establishing a patching process and strong password policy, configuration management, and building an incident response team – the same tasks many organizations still wrestle with now. “Today there are still organizations that struggle to monitor 24/7, where they can have the right level of visibility or enough people,” she says.

Plaintext and reused passwords remain a poor but pervasive practice today, Cisco’s Guel points out. We’re still patching software and failing at basic security hygiene, she says: running programs as root, using weak passwords, and leaving unnecessary programs running.

Spaf’s Story
On Nov. 2, 1988, Spaf had just celebrated his wedding anniversary with a day off and a nice dinner with his wife. On the morning of Nov. 3, he woke up and logged into his home machine to check his email. He immediately noticed something was wrong. “I discovered my lab machine running an insane process load,” he recalls, so he rebooted and went to get ready for work while the machine reset.

“When I came back, the load was climbing upward rapidly, and I knew something was wrong,” he recalls.

Spaf hurried to his office at Purdue, where he found other university machines experiencing similar problems, so he disconnected them from the network. “We began to piece together what was happening, and by midafternoon we had come up with a mostly reliable way of keeping the worm from reinfecting the machines and began to tell people to bring machines back online,” he says.

Around the same time, experts at UC-Berkeley and MIT were comparing notes and sharing their analyses of the attack. But communication among the Internet community was effectively cut off by the worm, since most members couldn’t access their online and email connections during the attack. “Many of us knew each other online, but we didn’t have each other’s phone numbers. One of the lessons [of the worm] was that we needed a more reliable out-of-band connection,” Spaf says. “The concern many of us had was who set this off and why – and how do we get the word out to deal with it?”

So that very day he set up an online mailing list called the Phage List, where responders could communicate during the incident. DARPA soon funded the Computer Emergency Response Team (CERT) at Carnegie Mellon, which opened in early 1989 to help organizations coordinate their responses to cyberattacks.

There were few widely available tools to analyze software in ’88, with the exception of disassemblers, Spaf recalls, so much of the Morris Worm analysis came via manual debugging. “It was a very tedious process,” he says.

A few hours after the attack, the Computer Systems Research Group at Berkeley developed a temporary patch to stop the worm’s spread. It later issued software patches for the 4 BSD Unix operating system.

Spaf, now the executive director of Purdue University’s Center for Education and Research in Information Assurance and Security and a professor of computer science there, argues that most problems in security today, post-Morris Worm, are well-known and actually can be avoided or prevented. “But it costs money and possibly interferes with the way people are currently doing business. And security just isn’t valued enough in most of these environments to want to take those steps,” he says. That’s partly because there aren’t sufficient ways to measure security in order to make the right business decisions, he adds.

When organizations nowadays suffer major attacks such as ransomware, they typically just patch the system rather than rethink the security architecture that allowed the attack to succeed, he explains.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-morris-worm-turns-30-/d/d-id/1333225?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GDPR USA? ‘A year ago, hell no … More people are open to it now’ – House Rep says EU-like law may be mulled

The rash of high-profile IT security breaches, data thefts, and other hacks that have erupted over the last year or so may push US legislators to consider laws similar to Europe’s privacy-protecting GDPR.

This is according to Representative Will Hurd (R-TX), who told attendees at the Aspen Cyber Summit in San Francisco today that examining the EU’s hard-line safeguards for personal information, which took effect in May, could be on the agenda in America when a Democrat-controlled House begins its next session in January. For the next two months, Republicans still hold that side of Congress.

“One of the things we will be looking at is GDPR. Is it working, is it not working, is it something that we may be moving to?” Hurd told attendees at the cyber-shindig.

“A year ago, the answer would have been not ‘no,’ but ‘hell no.’ I think more people are open to that now because of some of the breaches.”

Indeed, the GOP had no time for the EU’s drive to strictly regulate how companies collect, store, and share customer information, giving GDPR short shrift. A Dem-led House may have other ideas. And although the Senate is still controlled by the Republicans, and thus may block any attempt to develop a GDPR-style regime in America, the mega-hacks of recent months and years may change some minds.

From what we’ve gathered, a string of high-profile computer network breaches seems to have changed attitudes, and Washington DC may be willing to reexamine Europe’s way of enforcing privacy.

Data protection, American style

Hurd – who is chairman of the Information Technology Subcommittee of the House Committee on Oversight and Government Reform – told The Register that no legislation is planned right now. Anything introduced, he added, would be far from a carbon copy of the EU’s controversial personal privacy standards.

Rather, he explained, the US would reevaluate, with an open mind, some of the concepts of a law that a year ago he and most of his peers would not have touched with a ten-foot pole.


“We need to be evaluating what our friends across the Atlantic did because it is still coming up in conversations about privacy here in the United States,” the ex-CIA Texas Rep said. “I think a component of the privacy conversation in the 116th Congress is going to be, is GDPR working, and how is that impacting the United States?”

At least one US state is not waiting for the federal government to take action. Earlier this year California passed its own strict privacy standards, with plans to put it into effect in January 2020.

California Attorney General Xavier Becerra said that, over the coming year, the Golden State would look to strike a balance between privacy and convenience, but that a central tenet will be shifting more responsibility for data protection to companies and pursuing charges against those that don’t take proper care of customer information.

“I would say to any company that wants to collect data, it is like having a baby. If you drop that baby in the wrong way, you’ve committed a crime,” said Becerra.

“Our job is to make sure you are responsible in the way you handle that baby.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/11/08/gdpr_usa_congressman/