STE WILLIAMS

Glitter bomb engineer exacts revenge on parcel thieves

Well, who knew: the sweet smell of success actually smells like farts and is uber fabulously glittery.

That was proved by former NASA engineer Mark Rober, who, in his own words, “over-engineered the crap” out of a glitter bomb to sprinkle glee and regular emissions of aerosolized fart odor upon package thieves…

…and who, in order to record the newly sparkling thieves’ Emmy award-worthy reactions – the majority of which amount to variations on “what the fuuhh…..??!!!” – used an accelerometer in the fake package to detect movement, geofencing to send an alert to his phone when the package left his property, and four camera phones to record visual and audio of the package-nappers.
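
Rober hasn’t published his firmware, but the geofencing step he describes boils down to comparing each GPS fix against a radius around home and raising an alert once the parcel wanders outside it. A minimal sketch of that logic, with entirely hypothetical coordinates and threshold:

```python
import math

HOME = (37.3318, -122.0312)  # hypothetical porch coordinates (lat, lon)
RADIUS_M = 50                # hypothetical geofence radius in metres

def distance_m(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in metres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def left_geofence(fix):
    """True once a GPS fix lands outside the home radius -- time to text the owner."""
    return distance_m(HOME, fix) > RADIUS_M

print(left_geofence((37.3318, -122.0312)))  # still on the porch -> False
print(left_geofence((37.3400, -122.0312)))  # roughly 900 m away -> True
```

In the real package this check would run on each fix from the GPS module, with the alert delivered to a phone rather than printed.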

But enough with blah blah blah. A collection of glitter explosion video captures alone is worth a thousand words, never mind the value of the suspense that mounts at the sound of a motor cranking as the fart-smell machinery revs up for its 30-second-interval gusts.

Rober spent nine years working at NASA – mostly on the Curiosity rover – at its Jet Propulsion Laboratory. He’s an engineer (obviously!), an inventor (ditto!), and a YouTube personality. To date, pre-glitter fart bomb, his most popular invention has been Digital Dudz: a selection of Halloween costumes that incorporate mobile apps with clothing.

The glitter fart bomb, however, is his magnum opus, he said.

His muse was a package thief. Two package thieves, to be exact, both of whom had made off with a delivery from his California porch one day in broad daylight, about seven months ago.

In spite of his security camera having picked up the video of the thieves in action, police just shrugged, telling him it wasn’t worth their time to investigate.

So besides feeling violated by the theft, the shrug left him feeling powerless. In the YouTube video, which he posted on Monday, Rober said that he “just felt like something needs to be done to take a stand against dishonest punks like this.”

And then I was like, hold up. I built a dart board that moves to get a bulls-eye every time. I spent nine years designing hardware that’s currently roving around on another freakin’ planet. If anyone was going to make a revenge bait package and over-engineer the crap out of it, it was going to be me.

In designing his booby-trap, Rober said he was inspired by his “childhood hero” Kevin McCallister, the young and resourceful protagonist (played by Macaulay Culkin) of the “Home Alone” films of the 1990s.

With the help of friends – one in particular who excelled at working with tiny microelectronics – Rober mocked up a design for his ideal trap. The bomb was disguised as a package – specifically, a cellophane-wrapped Apple HomePod box he knew would be “enticing” for any porch pirate.

Not only was the package GPS-enabled so Rober could track its journey once it left his home perimeter, and not only would it record video with embedded cellphones no matter how the thief picked up the parcel, and not only would its noxious emissions likely prompt the thieves to toss it as soon as possible, but the video would also upload to the cloud: no chance of missing those reaction videos in case a smell-impaired thief realized they were in possession of four phones.

Once triggered, a spinning cup at the top would “celebrate their choice of profession with a cloud of glitter,” Rober said. One pound of glitter, in fact.

It took about six months to engineer and test. After that, it was time to set the trap.

It didn’t take long before a thief picked it up. Rober went and retrieved the discharged package from a parking garage near his house, then rigged it again. Another thief grabbed it, went through the glitter explosion and fart-spray exposure. And then another. And another.

As of Tuesday afternoon, Rober’s video had more than 11 million views, and people were begging to buy one.

Sorry, it’s not for sale, Rober told the Washington Post.

Rather, the point of the video is “to get people pumped about science and engineering and education,” like the others on his YouTube channel, including how to create a “hot tub” of liquid sand, how to skin a watermelon and how to engineer that moving dartboard of his.

Rober said that he hasn’t had any other packages stolen since he sprung the glitter-fart decoy on the parade of thieves. However, he told the Post, six months of testing have left him with a glitter-filled house and workshop.

So at the end of the day, the joke is sort of on me.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BFCMWXPWp0U/

Most home routers lack simple Linux OS hardening security

More disconcerting news for router owners – a new assessment of 28 popular models for home users failed to find a single one with firmware that had fully enabled underlying security hardening features offered by Linux.

CITL (Cyber Independent Testing Laboratories) says it made this unexpected discovery after analysing firmware images from Asus, D-Link, Linksys, Netgear, Synology, TP-Link and Trendnet running versions of the Linux kernel on two microprocessor platforms, MIPS and ARM.

The missing security protections included Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), and RELocation Read-Only (RELRO).

Granted, this will sound like a jumble of technical terms to most router owners, but in modern operating systems this layer of security should matter.

Linux pioneered features such as ASLR (Windows added it in Vista in 2007), while DEP takes advantage of the memory protection features of modern CPUs via something called the NX bit (no-execute).

As its name suggests, ASLR frustrates buffer overflow attacks by randomising where system executables are loaded into memory (so attackers don’t know where they are).

Meanwhile, its relative, DEP, stops code from executing in areas of memory that the OS has marked as data.

The point of security hardening like this is to make it harder for attackers to exploit software flaws as and when they are found.
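
The effect of ASLR is easy to observe on an ordinary desktop system. A quick sketch (it assumes a host where ASLR hasn’t been disabled system-wide, and relies on the CPython detail that `id()` returns an object’s memory address):

```python
import subprocess
import sys

def fresh_heap_address() -> int:
    """Launch a new interpreter and report where one of its objects landed.
    With ASLR active, the heap base shifts on every launch."""
    result = subprocess.run(
        [sys.executable, "-c", "print(id(object()))"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

a, b = fresh_heap_address(), fresh_heap_address()
print(hex(a))
print(hex(b))  # with ASLR enabled, the two addresses normally differ between runs
```

An attacker who has to guess addresses like these has a much harder time landing an exploit, which is exactly the protection the routers in CITL’s study were missing.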

How does this affect routers?

Router makers base their firmware on a version of the Linux kernel atop which they implement proprietary extensions.

In principle, there is nothing stopping them from implementing features such as ASLR, but according to CITL that doesn’t seem to have been happening.

For ASLR, all models assessed achieved a low score ranging from 0% use to almost 9% in one case, with most around half of that. With the exception of a Linksys model that scored 95%, RELRO implementation wasn’t much better.

For comparison, Ubuntu 16.04 LTS implemented ASLR on 23% of its executables and RELRO protection on 100%.
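
Checks like CITL’s boil down to inspecting each binary for hardening markers. Two of them, NX and RELRO, are recorded directly in an ELF file’s program headers, so a rough check needs nothing beyond the standard library. This sketch handles only 64-bit little-endian ELF files (e.g. a desktop Linux binary), not the MIPS/ARM firmware CITL analysed:

```python
import struct
import sys

PT_GNU_STACK = 0x6474E551  # its flags reveal whether the stack is executable (NX/DEP)
PT_GNU_RELRO = 0x6474E552  # presence marks a read-only-after-relocation segment (RELRO)
PF_X = 0x1                 # "executable" permission flag

def hardening_flags(path: str) -> dict:
    """Parse a 64-bit ELF's program headers and report two hardening markers."""
    with open(path, "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF", "not an ELF file"
    assert data[4] == 2, "only 64-bit ELF handled in this sketch"
    e_phoff, = struct.unpack_from("<Q", data, 0x20)          # program header table offset
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 0x36)
    nx, relro = False, False
    for i in range(e_phnum):
        p_type, p_flags = struct.unpack_from("<II", data, e_phoff + i * e_phentsize)
        if p_type == PT_GNU_STACK:
            nx = not (p_flags & PF_X)  # stack segment present and non-executable
        elif p_type == PT_GNU_RELRO:
            relro = True
    return {"nx": nx, "relro": relro}

print(hardening_flags(sys.executable))
```

On a typical modern distribution both flags come back `True`; CITL’s point is that router firmware binaries routinely fail checks like these.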

MIPS vulnerability

A clue as to why this is happening could be the particularly weak scores of the 10 routers running MIPS for protections such as DEP.

These weak scores stem in part from a flaw in the way Linux kernels between 2001 and 2016 implemented floating-point emulation. The researchers also noticed a potential security-hardening bypass introduced by a 2016 kernel patch.

We also observe a significant lag in adoption of the latest Linux kernels, and related compiler toolchains, in many MIPS devices including end user devices.

The Linux kernel version shouldn’t in itself result in poor security hardening (most of these protections have been available in Linux for many years), but it does suggest the firmware used by many of these routers was developed at a time when security was a lower priority.

Indeed, the same issue might explain why so many routers still run on MIPS, an ageing platform left over from the early 2000s and Broadcom’s Wi-Fi reference design, which came bundled with its chips. For MIPS, the researchers advise:

We believe consumers should avoid purchasing products built on this architecture for the time being.

CITL argues that although ARM-based routers are a more secure choice, even here the security hardening varies widely within the same vendor’s products.

Should we be worried?

Yes, and no. Yes, because a router lacking these basic protections is inherently less secure; but no, because even if this were fixed, there would still be many other security problems within routers for attackers to aim at.

For instance, the router industry has a mixed reputation for fixing security vulnerabilities when they are discovered, in some cases apparently abandoning some models (and their users) to their fate.

In fairness, when it comes to patching, the router industry has improved a lot. However, CITL’s analysis suggests more fundamental work still lies ahead.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/xaFu3uIqCeI/

Facebook denies sharing private messages without user knowledge

Facebook hit back at press reports this week that highlighted a deep network of privileged data-sharing partnerships between the social media company and other large organisations.

The bilateral relationships saw companies including Amazon, Netflix, Microsoft and Spotify exchange user data that helped both them and Facebook extend their reach by learning more about their users, often without those users being aware. The partnerships also extended to businesses in other sectors, ranging from finance to the auto industry.

The New York Times explained that there were over 150 of these partnerships, so many that the social network giant needed a technology tool to keep track. Some of the deals raised privacy concerns due to the private information that they exchanged, the paper said.

Information flowed both ways. Not only could partners see data including the contact details of people’s friends and some private messages, but Facebook also received data about individuals from those companies:

Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”

The story sheds new light on a pattern of relationships that Facebook had already announced in 2010 at its F8 conference. Called instant personalization, it shared Facebook user information with other websites to help them personalize a person’s experience when they visited. The company closed down the instant personalization feature, which shared public data, but the New York Times story is one of several that documented links between Facebook and some companies that existed beyond that point.

In some cases, those relationships persisted until this year. In June, Reuters revealed data sharing partnerships between Facebook and four companies including Huawei, which the US government has labelled a security risk.

In its blog post responding to the news report, Facebook argued that it wasn’t giving away any information that it wasn’t entitled to share:

To be clear: none of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC.

The consent decree stemmed from a 2011 FTC case against Facebook that accused it of sharing information that people had thought was private. The settlement required Facebook’s provision of…

…clear and prominent notice and obtaining consumers’ express consent before their information is shared beyond the privacy settings they have established.

According to the Times, Facebook executives deem these partnerships exempt from the settlement because the company views those companies as service providers and therefore an extension of the social network. However, the Times also quoted former FTC officials who disputed that notion and believed the company may have violated the agreement.

Instant personalization was reportedly switched on by default in 2010, meaning that users had to explicitly opt out by navigating the company’s privacy settings. Facebook also explained what it was doing in relatively vague terms in its data policy. In January 2013, after it had reached its consent decree with the FTC, the policy said:

We use the information we receive about you in connection with the services and features we provide to you and other users like your friends, our partners, the advertisers that purchase ads on the site, and the developers that build the games, applications, and websites you use.

It updated its policy in April, coinciding with a decision to restrict developer access to its APIs. The policy now clarifies the definitions of companies that it shares data with, along with the data that it shares. It also says that it is taking further steps to tighten up third-party access to user data.

In another blog post yesterday, Facebook took issue with another assertion: that companies had been able to read users’ private messages without their knowledge. Not so, it said. It gave four partners – Spotify, Netflix, Dropbox and the Royal Bank of Canada – read/write access to people’s messages, sure, but only so they could use Facebook Messenger to tell people what Spotify tracks they were listening to or what they were watching on Netflix, send links to Dropbox folders, or acknowledge money transfers.

It said:

These experiences were publicly discussed. And they were clear to users and only available when people logged into these services with Facebook.

In spite of its assertions that it didn’t violate any legal agreements or user rights, Facebook did offer a mea culpa in its first blog post:

Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them.

It added:

We shouldn’t have left the APIs in place after we shut down instant personalization.

All of which is to say that Facebook has a long journey ahead if it hopes to win back the trust of many users.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/SAxq2HqygdM/

London’s Gatwick airport suspends all flights after ‘multiple’ reports of drones

Updated No flights have arrived or left London’s Gatwick Airport since just before 21:00 UTC last night after drones were apparently spotted over the airspace.

Chris Woodroofe, Gatwick’s chief operating officer, told the BBC’s Today programme [from 1:09.39] on Radio 4 this morning that 20 police units from two forces were hunting down the drone operator as “that is the way to disable the drone”. He added: “We also have the helicopter up in the air.”

Woodroofe said that “two drones” had been spotted by staff the night before. “They were over [the runway]… over the perimeter fence and into where the runway operates from…”

The COO confirmed that another drone sighting had been made just minutes before he began speaking to host John Humphrys. At around 07:12 UTC, he said: “In the last five minutes we saw drones back over the perimeter fence in our runway and taxiway area”…

Answering the question on everyone’s lips as they pulled imaginary triggers at the air while scoffing, he told Today that police had advised that “it would be dangerous to seek to shoot the drone down because of what may happen to the stray bullets”.

Reg reporter Richard Speed, literally our man on the ground, is one of the 2,000 people whose flights have been unable to take off. He told El Reg‘s London HQ: “For some reason, only the robot voice is allowed to use the tannoy, [which] means airline staff are having to yell updates from the info desks.

“Flight crew and cabin crew are also milling about. ‘We know as much as you do – no one is telling us anything’.”

“Anything scheduled after 8am is now cancelled. If I was a drone hobbyist I would be seriously worried about [the] kneejerk reaction.”

Air traffic control organisation Eurocontrol said Gatwick – the UK’s second-biggest airport – would be closed until 12:00 UTC, in an update issued at 09:06 UTC.

The runway was closed at 21:00 on Wednesday night – trapping thousands of people in the terminals awaiting direct and connecting flights in the run-up to the Christmas break.

The airfield was opened briefly at 03:01 but was sealed off about 44 minutes later due to a “further sighting of drones”.

The airport said in a tweet an hour ago that it was working with the Sussex* police and wouldn’t reopen the runway until it had “suitable reassurance” it was safe to do so.

Angry travellers took to the microblogging platform to complain. Tinkerbell81 opined: “You seriously expect us to believe that ‘drone activity’ shuts an airport down for nearly 12 hours …….. it was raining hard most of the night! Finding the ‘operator’ would be a needle in a haystack…”

Although “Jaeyeon Park”, who asked the location of the gate for a British Airways flight, must have been relieved to get an answer.

Said our man on the scene: “I’d have an airport beer, but the queue for Wetherspoons is epic.” ®

* In the tradition of London airports (other than the marvellous London city, in East London’s Docklands), Gatwick is in one of the home counties surrounding Greater London – in LGW’s case, Sussex.

Updated to add at 10:42 UTC

Passengers are being sent to their gates. But El Reg has noticed air traffic control organisation Eurocontrol pushing back the reopening time to 13:00 UTC.

Sussex police has called the drone disruption a “deliberate act” but said “There are no indications to suggest this is terror-related”. It has asked anyone who can help identify the operators to ring 999.

Passengers have been turned back from the gates again, Reg man Richard Speed told us.

Updated to add at 12:30 UTC

At 11:20 UTC Eurocontrol extended the closure until 14:00 UK time, but just 20 minutes later, it sent out an update saying the closure would remain in place until 16:00 UTC. We will keep you posted.

Updated to add at 12:40 UTC

Sussex cops have assured the public that the drone pic it used in a previous tweet – a Shutterstock image of a drone quadcopter with digital camera – was “not [one of] the devices being sought”. It added: “It is believed that the Gatwick devices used are of an industrial specification. We are continuing to search for the operators.”

The Reg has deduced this is to allay any concerns that the devices were military. We have contacted the cops asking for clarification.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/20/london_gatwick_airport_drones/

On the first day of Christmas, Microsoft gave to me… an emergency out-of-band security patch for IE

Microsoft today emitted an emergency security patch for a flaw in Internet Explorer that hackers are exploiting in the wild to hijack computers.

The vulnerability, CVE-2018-8653, is a remote-code execution hole in the browser’s scripting engine.

Visiting a malicious website abusing this bug with a vulnerable version of IE is enough to be potentially infected by spyware, ransomware or some other software nasty. Thus, check Microsoft Update and install any available patches as soon as you can.

Any injected code will run with the privileges of the logged-in user, which is why browsing the web using Internet Explorer as an administrator is like scratching an itch with a loaded gun.

According to Redmond:

A remote code execution vulnerability exists in the way that the scripting engine handles objects in memory in Internet Explorer. The vulnerability could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user.

An attacker who successfully exploited the vulnerability could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker who successfully exploited the vulnerability could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

In a web-based attack scenario, an attacker could host a specially crafted website that is designed to exploit the vulnerability through Internet Explorer and then convince a user to view the website, for example, by sending an email.

While exploit code for the bug has not been publicly disclosed, it is being leveraged in the wild to attack victims, according to Microsoft – hence the decision to fling the patches out today, out-of-band, rather than slip them into January’s Patch Tuesday.

Clement Lecigne of Google’s Threat Analysis Group is credited with uncovering the flaw. We’ve pinged Google for more details on how miscreants are abusing the programming blunder.

A spokesperson for Microsoft’s security team said: “Today, we released a security update for Internet Explorer after receiving a report from Google about a new vulnerability being used in targeted attacks.

“Customers who have Windows Update enabled and have applied the latest security updates, are protected automatically. We encourage customers to turn on automatic updates. Microsoft would like to thank Google for their assistance.”

Internet Explorer 9 to 11 on Windows 7 to 10, Server 2008 to 2019, and RT 8.1 are affected, though the server editions run IE in a restricted mode that should thwart attacks via this vulnerability.

One workaround, if you want to hold off on installing patches immediately, is to disable access to jscript.dll using the commands listed by Microsoft in its above-linked advisory. That will force IE to use jscript9.dll, which is not affected by the flaw. Any websites that rely on jscript.dll will break, though.

A possible alternative is to not use Internet Explorer, of course. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/12/19/microsoft_internet_explorer_cve_2018_8653/

Privacy Futures: Fed-up Consumers Take Their Data Back

In 2019, usable security will become the new buzzword and signal a rejection of the argument that there must be a trade-off between convenience and security and privacy.

According to a recent Pew survey, two-thirds of Americans do not believe current laws are doing enough to protect their privacy, and six out of 10 respondents would like greater autonomy over their personal data. In an even more surprising turn of events, leaders at several major tech companies — among them Apple CEO Tim Cook — are now encouraging smarter government regulation and data privacy laws. These shifts indicate a growing awareness of, and concern about, data privacy and data protection within the United States.

In 2019, I predict constituents across the US will seek even greater data protection legislation from their representatives. In the aftermath of the recent Marriott data breach, for example, several members of Congress demanded cybersecurity legislation focusing on consumer protection and privacy. Among them was Senator Mark Warner (D-Va.), who asked for “laws that require data minimization, ensuring companies do not keep sensitive data that they no longer need … and data security laws that ensure companies account for security costs rather than making their consumers shoulder the burden and harms resulting from these lapses.”

Battle Royale: Authoritarian vs. Democratic
While the US has been, for the most part, sitting on the sidelines over the past few years, we’ve seen a steady march toward greater data localization laws that foreshadow a global battle over data security and privacy. On the one hand are authoritarian regimes implementing data localization policies to enable greater government access to both personally identifiable information and intellectual property. This digital authoritarianism includes Internet controls and restrictions, disinformation campaigns, and limiting individual data access through various forms of censorship. On the other are democratic nations using legislation such as the European Union’s General Data Protection Regulation (GDPR), which favors the rights of the individual over government access to the data of private citizens.

Russia, for instance, recently announced greater oversight and harsher fines under its existing data laws, which include requiring government access to encryption keys and storing Russian users’ personal data in Russia. But Russia is not alone. According to Freedom House’s Freedom on the Net report, this form of digital authoritarianism is the dominant trend, coinciding with eight consecutive years of rising global Internet censorship.

Conversely, GDPR and now the California Consumer Privacy Act (CCPA) represent the emergence of more democratic models that focus on individual data protection and provide a counterweight to digital authoritarianism. Given these global trends — coupled with constituent pressure — the US will find it increasingly difficult to maintain its current patchwork of industry and state-specific approaches to cybersecurity and data protection. Expect to see the US step off the bench and put some skin in the game.

2019: The Year of Security UX?
While the United States will inevitably see additional forms of data protection legislation introduced in 2019, given the stagnation of current cybersecurity legislation in Congress and the nonstop mega-breaches, the public likely will not be satisfied to sit back and wait and see if legislation gets passed. In the last few weeks of 2018, the recent Marriott mega-breach, the National Republican Congressional Committee email hack, and the Facebook email dump have served as constant reminders about the magnitude of this problem. Given the confluence of corporate breaches, proliferation of attackers, and the global diffusion of surveillance and censorship, individuals want to take back control and gain agency in their own data protection.

The security industry notoriously lacks usability and often blames the user as the weakest link and source of all security problems. But in 2019, users will revolt against this and demand greater, more intuitive individual control over their data. The movement toward usable security will also drive security professionals to work closely with social scientists and user experience experts to ensure that incentive structures and human-computer interaction match those for the broader population of product users and consumers. Usable security will become the new buzzword and signal a rejection of the argument that there must be a trade-off between convenience and security and privacy. The public will demand both convenience and data protection, and there will finally be some progress toward true democratization of security for the masses.

Dr. Andrea Little Limbago is the chief social scientist at Virtru, a data privacy and encryption software company, where she specializes in the intersection of technology, cybersecurity, and policy. She previously taught in academia before joining the Department of Defense, …

Article source: https://www.darkreading.com/perimeter/privacy-futures-fed-up-consumers-take-their-data-back/a/d-id/1333499?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Attack Campaign Targets Financial Firms Via Old But Reliable Tricks

Among other tried-and-true cyberattack methods, the attackers hosted malware on the Google Cloud Storage service domain storage.googleapis.com to mask their activity.

An ongoing targeted attack campaign against financial institutions demonstrates how older and well-trodden hacking methods still remain effective. 

Since August, a group of attackers have used Java-based remote access Trojans, phishing emails, and zip-compressed files – and hosted their malware on popular cloud services – to target employees at banks and other financial institutions, according to a report released this week by Menlo Security.

The attackers write their initial infectors in Java and Visual Basic, and customize versions of popular malware frameworks to steal account information, the company says.

“A lot of these attacks are stealing credit card information, they also steal accounts and steal money directly from the accounts,” says Vinay Pidathala, director of research at Menlo Security, a Web security firm. “They can inject code directly into the pages to infect account holders, and they can put a keylogger, along with taking screenshots.”

That these older tactics work should not be a surprise. Attackers still use these techniques because they work. In 2017, for example, 93% of breaches had a phishing e-mail component, according to the 2018 Verizon Data Breach Investigations Report (DBIR). While only 4% of recipients clicked the malicious link in a phishing e-mail on average, only a single person needs to let in the attacker.

Menlo Security found in its research that 4,600 phishing sites use legitimate hosting services. In the latest campaign, the attackers used storage.googleapis.com to host their malicious payload.

“Attackers are increasingly using popular domains to host their attacks,” Pidathala says. “It’s an easy way around being blocked by security software, because these sites are on a known good list.”
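
Pidathala’s point is easy to demonstrate: a filter that trusts hostnames alone cannot distinguish a cloud provider’s legitimate content from an attacker’s bucket on the same domain. A toy illustration (the bucket paths below are made up):

```python
from urllib.parse import urlparse

# A simplistic "known good" list of the kind Pidathala describes.
ALLOWLIST = {"storage.googleapis.com"}

def naive_filter_allows(url: str) -> bool:
    """A hostname-only allowlist check: it never looks at the path,
    so anything hosted under the trusted domain sails through."""
    return urlparse(url).hostname in ALLOWLIST

# Both URLs pass the same check -- the second could be anyone's bucket.
print(naive_filter_allows("https://storage.googleapis.com/legit-app/data.json"))      # True
print(naive_filter_allows("https://storage.googleapis.com/attacker-bucket/payload"))  # True
```

Defences that inspect the fetched content, rather than just the serving domain, avoid this blind spot.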

Rise of the jRATs 

Another common technique is using Adobe Flash or Oracle’s Java as the initial infector. While personal computing has been moving away from these ubiquitous runtime agents, for malware writers the write-once-run-anywhere technology allows a single file to run on Mac systems as well as Windows.

The capability has resulted in consistent efforts to infect systems using malware written in those languages. More than a year ago, security firms warned that Java-based remote access Trojans, or jRATs, were targeting business users with attachments that appeared to be communications from the Internal Revenue Service (IRS) or purchase orders, according to an April 2017 analysis by security firm Zscaler.

“The jRAT payload is capable of receiving commands from a C&C server, downloading and executing arbitrary payloads on the victim’s machine,” writes Zscaler security researcher Sameer Pail. “It also has the ability to spy on the victim by silently activating the camera and taking pictures.”

Java-based RATs allow attackers to initiate an attack and download specific executables, depending on the operating system encountered. As Macs become an increasing part of the corporate world, such flexibility is key, experts say.

“More and more enterprises are using Macs, and with one JAR file you can design an attack that can infect both platforms,” says Menlo Security’s Pidathala. “Java is still installed on a significant number of computers around the world.”

Old But Modified RATs

The attackers also used well-known remote access Trojans: Houdini and qRAT. Both have modular architectures, so attackers can customize their payloads and add capabilities as needed.

Menlo Security’s Pidathala argues that such RATs are more useful than automated botnets because attackers can easily tailor their attack to attempt to bypass the victim’s defenses.  

“It is a RAT, so it is very flexible because it is modular—it can do lateral movement, or it can do reconnaissance, just by updating its modules,” he says. “Going forward, the concept of botnets, meaning malware that has automated functionality to steal specific things, will die down in favor of more malware that can be customized to the attackers’ needs.”

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/perimeter/attack-campaign-targets-financial-firms-via-old-but-reliable-tricks/d/d-id/1333528?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

NASA Investigating Breach That Exposed PII on Employees, Ex-Workers

Incident is latest manifestation of continuing security challenges at agency, where over 3,000 security incidents have been reported in recent years.

NASA is investigating a data breach that exposed personally identifiable information (PII) — including Social Security numbers — belonging to current and former employees who joined the agency after July 2006.

The breach is the latest of numerous major and minor security incidents at NASA in recent years and is sure to heighten scrutiny of its cybersecurity practices.

In an internal memo to employees Dec. 18 (posted here), NASA’s head of human resources, Bob Gibbs, said the space agency’s cybersecurity staff discovered the breach while investigating a potential compromise of several servers in late October. An initial analysis showed that one of the impacted servers contained PII on NASA employees that the attackers may have stolen.

“Upon discovery of the incidents, NASA cybersecurity personnel took immediate action to secure the servers and the data contained within,” Gibbs’ memo stated without further elaboration.

NASA and other federal cybersecurity partners are doing a forensic analysis of the impacted systems to understand the full scope of the breach and to identify employees whose data might have been stolen, the statement noted. The process will take time but is a top priority at NASA, with senior leadership actively involved in understanding the breach and developing a response.

“NASA does not believe that any agency missions were jeopardized by the intrusions,” a spokeswoman said in a separate emailed statement to Dark Reading. “The agency is continuing its efforts to secure all servers and is reviewing its processes and procedures to ensure the latest security practices are followed throughout the agency.” NASA did not respond to a question about why the agency waited so long to disclose the breach.

The server intrusions appear to be the latest manifestation of what NASA’s Office of Inspector General (OIG) has previously described as long-standing security issues at the agency.

In a November 2017 assessment of NASA’s top management and performance challenges, inspector general Paul Martin identified IT governance and information security as one key issue. According to the OIG, NASA reported more than 3,000 computer security incidents involving malware or unauthorized access to agency computers in the two years preceding the report.

“These incidents included individuals testing their skills to break into NASA systems, well-organized criminal enterprises hacking for profit, and intrusions that may have been sponsored by foreign intelligence services seeking to further their countries’ objectives,” the report noted. In one instance, a contract employee was indicted for illegally accessing and attempting to sabotage NASA systems.

To address these issues, NASA has implemented a series of initiatives, including expanded network penetration testing, more incident response assessments, broader deployment of intrusion detection systems, and increased Web application security scanning. Despite such measures, problems persist, the OIG said. Among them: inadequate IT acquisition and governance practices, gaps in the agency’s incident detection and handling capabilities, and inadequate monitoring tools and Web application security controls.

Also troubling, according to the OIG, were NASA policies that did not distinguish operational technology (OT) systems from IT systems.

As of November 2017, the agency managed more than 500 information systems for everything from controlling spacecraft and processing scientific data to enabling NASA personnel to collaborate with peers around the world. NASA also manages some 1,200 publicly accessible Web applications — or about 50% of all non-military federal websites.

Not a Houston Problem Alone
NASA, by far, is not the only federal agency with cybersecurity challenges. Though civilian US federal agencies spent an estimated $5.7 billion on cybersecurity last year, many serious deficiencies persist across the spectrum, said the White House Office of Management and Budget (OMB) in a report in May. Among them were gaps in network visibility that prevented agencies from fully knowing what was going on in their networks, lack of standardized processes and capabilities, and limited situational awareness. One example: In 38% of federal cybersecurity incidents, investigators were not able to identify an attack vector.

Michael Magrath, director of global regulations and standards at OneSpan, says breaches like the one at NASA are not surprising, given how big a target federal agencies are for cybercriminals because of the PII they collect and store. “That large human resources target plus the potential damage that can be inflicted from a national security standpoint means that federal agencies will always [face] cyberthreats,” he says.

The OMB is expected to soon release final policy to address federal agencies’ implementation of Identity, Credential, and Access Management (ICAM) policy, he says. The policy will update previous requirements for multifactor authentication, digital signatures, encryption acquisition, and other areas of security. “It remains to be seen what is included in the updated requirements,” Magrath says. “Hopefully it addresses the growing number of successful cyberattacks on federal agencies.”

Somewhat ironically, the latest breach is unlikely to make a huge difference for the victims because a lot of their PII was likely already compromised in the 2015 intrusion at the US Office of Personnel Management (OPM). In that incident, PII belonging to as many as 21.5 million current and former federal employees and others was compromised.

“Given the depth of the OPM breach, it is likely that most of the information has already been made available,” says Keenan Skelly, vice president of global partnerships at Circadence.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/nasa-investigating-breach-that-exposed-pii-on-employees-ex-workers/d/d-id/1333529?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US Names, Sanctions Russian GRU Officials for 2016 Election Hacks

Treasury Department names and imposes economic sanctions on the alleged major players behind the Russian election-meddling operation, as well as the World Anti-Doping Agency breach.

Nine Russian GRU officers were sanctioned today by the US Department of the Treasury for their alleged hacking and theft of documents from US election systems during the 2016 presidential race, as well as for hacking the World Anti-Doping Agency database. The officers, who were also indicted by the US on July 13, are:

  • Aleksey Aleksandrovich Potemkin (Potemkin), who ran the computer infrastructure used by the hackers for releasing stolen documents online.
  • Anatoliy Sergeyevich Kovalev (Kovalev), who in July 2016 hacked a state board of elections website and pilfered voter information, as well as hacked a US firm that provided voter registration verification software for the 2016 election. 
  • Viktor Borisovich Netyksho (Netyksho), a GRU unit commander.
  • Boris Alekseyevich Antonov (Antonov), who managed the actors targeting the US election.
  • Ivan Sergeyevich Yermakov (Yermakov), who hacked email accounts and servers related to the US election, as well as the World Anti-Doping Database in 2016.
  • Aleksey Viktorovich Lukashev (Lukashev), who sent spear-phishing emails to US election campaign officials.
  • Nikolay Yuryevich Kozachek (Kozachek), who wrote the malware the GRU used to hack into networks during the election.
  • Artem Andreyevich Malyshev (Malyshev), who monitored the GRU malware and helped hack WADA’s database.
  • Aleksandr Vladimirovich Osadchuk (Osadchuk), a GRU commanding officer. 

Read the full sanctions report here, which includes other Russian officials sanctioned for the WADA hack, the online election-influence campaign, and the March 2018 nerve agent attack on Sergei Skripal and his daughter.

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/perimeter/us-names-sanctions-russian-gru-officials-for-2016-election-hacks-/d/d-id/1333530?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Remotely Brick a Server

Researchers demonstrate the process of remotely bricking a server, which carries serious and irreversible consequences for businesses.

Attackers with access to your server hold your company in their hands – and it’s not hard for them to abuse that power and brick the server from anywhere, researchers report.

Most people view firmware attacks, and other attacks that cause permanent damage, as physical in nature. Analysts at Eclypsium sought to demonstrate how it’s possible to remotely brick a server and disrupt infrastructure by exploiting vulnerabilities in the baseboard management controller (BMC) and system firmware. The result would spell enterprise disaster.

The idea of bricking systems is not new, says John Loucaides, vice president of engineering at Eclypsium. While the concept has been around for a while, and security experts have discovered the vulnerabilities that could lead to this level of compromise, few have shown it. Eclypsium’s goal in documentation published today is to help improve understanding of the remote attack vector, which can be performed at scale with enormous potential damage.

“It’s a fairly significant impact,” Loucaides points out. Recovery for most malware involves wiping affected systems and restoring good data. Recovery for this type of attack would require opening each affected server and physically connecting to deliver new firmware. It’s a slow, technical process that’s beyond the abilities of most IT staff and current enterprise systems, Loucaides explains. “This is an area that normal security technologies are missing,” he says.

It doesn’t take a sophisticated actor to pull this off, he notes. Many people will think of this as a nation-state-level attack, he continues, but open source toolkits on the Internet can give attackers the access they need to render a target system inoperable. Eclypsium’s demonstration is the first to use this specific method and technique, and it underscores the low barrier to entry for launching a successful attack of this nature.

Similar threats have been seen in the wild, Loucaides explains. Attackers have replaced server components with corrupted firmware, for example, or firmware that doesn’t work. Eclypsium’s method, which leverages past BMC research, bricks a server by remotely exploiting a BMC. If you’re not familiar, the BMC is an independent computer within the server. It’s used to remotely configure the system without relying on the host operating system or applications.

How It Plays Out
Step one is getting a foot in the door. “The first thing we’re doing is assuming you have some sort of compromise,” Loucaides explains. Perhaps the system got infected with malware; perhaps credentials were lost and picked up by the wrong person.

In Eclypsium’s demonstration, researchers then used normal update tools to pass a malicious firmware image to the BMC. No special authentication or credentials are required to do this, and the firmware update contains additional code which, once triggered, erases the UEFI system firmware and essential components of the BMC firmware itself, analysts say in a blog.

Why target the BMC? You could target any part of the server and get a similar result, says Loucaides, but the BMC “is the most understandable and the most obvious.” In a ransomware attack or other major-impact scenario, the BMC is used to recover the system.

Step three is when the BMC boots to the attacker-supplied image. Because the BMC handles system management and recovery, it can install components into any part of the system. Researchers could use the malicious capability they installed in the BMC to corrupt system firmware; by corrupting the BMC, they leave no path for a system operator to recover it.

There is an arbitrary amount of time between stages three and four, in which the code executes, Loucaides explains. Attackers could launch malicious code as soon as they gain access via credential compromise, or they could install a component in the BMC and leave it there for as long as they like. “It doesn’t all have to happen at the same time,” he adds. The final payload could be triggered by a timer or external command and control.

The window between stages three and four depends on the attacker’s goals. An attacker going for maximum damage and disruption, Loucaides says, would likely take their time and infect as many components as possible before bringing it all down at once. In step five, the BMC reboots the server, which is now unusable.

What You Can Do
Existing security defenses don’t focus on firmware or hardware, says Loucaides, but there are ways to stop this type of attack. It starts with preventing initial compromise, which goes back to basic cyber hygiene: protecting credentials, for example, and using multifactor authentication.

“You can’t do everything perfectly,” he admits. “Something is going to go wrong. The trick is to be assessing the integrity of different components in your system.”

Updates get plenty of attention at the application and operating system level, he continues, but not many people pay attention to firmware updates. Security teams should be running scans and monitoring infrastructure for anomalies, and interrupting the process before it’s complete.
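One form that firmware monitoring can take is integrity checking: hashing the image you are about to flash (or the one already deployed) and comparing it to a trusted baseline. A minimal Python sketch, where the known-good digest is a hypothetical stand-in for a vendor-published value from a signed manifest:

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical known-good digest for illustration. In practice this would
# come from the vendor's signed manifest for the deployed firmware build.
KNOWN_GOOD_SHA256 = hashlib.sha256(b"vendor-firmware-v1.0").hexdigest()

def firmware_matches(image_path: Path, expected_sha256: str) -> bool:
    """Hash a firmware image on disk and compare it to a trusted digest."""
    digest = hashlib.sha256(image_path.read_bytes()).hexdigest()
    return digest == expected_sha256

# Demo with stand-in images written to a scratch directory:
scratch = Path(tempfile.mkdtemp())
(scratch / "bmc_fw.bin").write_bytes(b"vendor-firmware-v1.0")
(scratch / "bmc_fw_bad.bin").write_bytes(b"tampered-image")
print(firmware_matches(scratch / "bmc_fw.bin", KNOWN_GOOD_SHA256))      # True
print(firmware_matches(scratch / "bmc_fw_bad.bin", KNOWN_GOOD_SHA256))  # False
```

A hash check like this would not stop an attacker who already controls the BMC, but it gives security teams a way to catch a tampered update image before it is applied, which is the interruption point Loucaides describes.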

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/application-security/how-to-remotely-brick-a-server/d/d-id/1333531?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple