
5 IT Practices That Put Enterprises at Risk

No one solution will keep you 100% protected, but if you avoid these common missteps, you can shore up your security posture.

A billion data records are compromised in the US in more than 90 million different cyber-related incidents each year, with each event costing a company an average of $15 million in damages. Certainly, cybersecurity threats continue to increase in size and complexity, but the real problem is that too many IT organizations are leaving their enterprises vulnerable to attacks because they overlook a number of simple tasks. Although no single solution or approach will keep organizations completely protected, there are some things to avoid so that IT teams can shore up their security posture and ensure continual improvement to match the advancing threat.

1. Using Old Printers
Surprisingly, office printers present three threat vectors. First, printers store images of the documents they process. Before a printer leaves your control, destroy the remains of sensitive corporate data or personally identifiable information that may be left on its internal hard drive.

Second, IT staffers often miss firmware updates or news of exploitable printer vulnerabilities. Tracking firmware updates for printers is something for which no one really has the time or patience, and most updates will require at least some physical access to the device (especially if something doesn’t go as expected). Routine update checks are a great idea, and if you can’t keep up with multiple vendor patches, make sure that you at least isolate printers on a separate VLAN with access limited to core protocols for printing (a quick probe like the sketch at the end of this section can verify the isolation).

Finally, third-party vendor access can cause issues. Managed print providers often hold VPN credentials for enterprises so they can perform maintenance and inventory. This is another gateway into your environment and a third-party exposure that must be monitored. Limit their access as much as possible and require that it be granted on a least-privilege basis.
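On the VLAN advice above: a quick way to verify that a printer segment really is limited to core printing protocols is to probe it from elsewhere on the network. Below is a minimal sketch in Python; the printer addresses and port lists are hypothetical placeholders for your own environment.

# check_printer_vlan.py: a minimal sketch, not a hardened audit tool.
# The printer IPs and port lists below are assumptions; substitute
# the values for your own environment.
import socket

PRINTERS = ["10.20.30.11", "10.20.30.12"]      # hypothetical printer IPs
ALLOWED = {9100, 631, 515}                     # raw printing, IPP, LPD
PROBED = {22, 23, 80, 443, 515, 631, 9100}     # common printer-exposed services

def port_open(host, port, timeout=1.0):
    # Returns True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PRINTERS:
    unexpected = sorted(p for p in PROBED - ALLOWED if port_open(host, p))
    if unexpected:
        print(f"{host}: unexpected open ports {unexpected}, tighten the VLAN ACL")
    else:
        print(f"{host}: only core printing protocols reachable")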

2. Disregarding Alerts
The average enterprise generates nearly 2.7 billion actions from its security tools per month, according to a recent study from the Cloud Security Alliance (CSA). A tiny fraction of these are actual threats: fewer than 1 in 100. What’s more, over 31% of respondents to the CSA study admitted they ignore alerts altogether because they believe so many of them are false positives. The sheer volume of incoming alerts is creating a general sense of overload for anyone in IT. Cybersecurity practitioners must implement a better means of filtering, prioritizing, and correlating incidents. Executives should have a single platform for collecting data, identifying cyberattacks, and tracking resolution. This is the concept of active response: not only identifying threats, but being able to respond to them immediately as well.
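As a toy illustration of that filter/correlate/prioritize pipeline, consider the sketch below. The alert fields and severity thresholds are hypothetical; a real implementation would feed from your SIEM or the security tools themselves.

# alert_triage.py: a minimal sketch of filtering, correlating, and
# prioritizing alerts. All fields and thresholds are hypothetical.
from collections import defaultdict

alerts = [
    {"id": 1, "host": "web01", "type": "port_scan",   "severity": 2},
    {"id": 2, "host": "web01", "type": "brute_force", "severity": 4},
    {"id": 3, "host": "db02",  "type": "port_scan",   "severity": 2},
    {"id": 4, "host": "web01", "type": "new_admin",   "severity": 5},
]

# 1. Filter: drop low-severity noise that is usually a false positive.
candidates = [a for a in alerts if a["severity"] >= 3]

# 2. Correlate: group surviving alerts by host so related events are
#    handled as one incident rather than in isolation.
incidents = defaultdict(list)
for a in candidates:
    incidents[a["host"]].append(a)

# 3. Prioritize: rank incidents by their highest-severity alert.
for host, related in sorted(incidents.items(),
                            key=lambda kv: -max(a["severity"] for a in kv[1])):
    print(f"{host}: {len(related)} correlated alert(s), "
          f"max severity {max(a['severity'] for a in related)}")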

3. Giving Away Admin Rights
Administrative rights arm malware and other unwanted applications with the authority needed to inflict damage to an enterprise. The access granted to manipulate system-level services and core file systems is greater than a power user needs on a regular basis. Forcing users to provide administrator credentials to deploy new applications tremendously cuts down threat exposure. This also creates an audit trail that lets security analysts rapidly identify issues, especially those that present signs of intrusion.

Any form of administrator rights must come with a degree of risk analysis on the part of the IT department. IT executives should consider what damage is possible if a user account is compromised, and what ripple effect administrative rights would have on secondary systems. Administrator access should be the exception, not the norm. Applied properly, this lets organizations proactively identify issues rather than spend all weekend cleaning up compromised systems.

4. Ignoring Employee Apps
Do you really know what cloud services are being actively used in your network? Many organizations look the other way when employees use social media and cloud services on their own. But an IT crisis can quietly brew as internal business users create their own IT infrastructure without any adherence to corporate policy. Monitoring cloud application connections can create increased visibility into unapproved software-as-a-service use and limit the potential loss of intellectual property or sensitive information. Cloud access security broker solutions proxy outbound traffic to cloud applications and offer a detailed view into user behaviors.

5. Being Unprepared for Device Loss
Road warriors often fall victim to theft or accidentally leave a laptop or smartphone in a taxi, never to be seen again. This can be a non-event if the device is remotely managed and encrypted, but a major threat if it contains unsecured sensitive data. IT administrators need to understand what data is stored where. If any of it is sensitive, they should ensure that devices are properly encrypted and that remote access tools such as VPNs are in use and can be disabled in the event of a loss. Documenting that devices are encrypted and properly locked down will also go a long way in the event of a data leak.

As cyberthreats have evolved, so has incident management. What hasn’t changed, unfortunately, is the need to address the simple and often tedious IT practices that, when ignored, can threaten enterprise security. From forgetting to revoke administrative privileges to providing third-party access to printers, the common cybersecurity challenges that enterprises face can be fixed, putting enterprises in the best position to address the current and evolving cyberthreat.


Darren McCue is President of Dunbar Security Solutions, where he has led the integration of Dunbar’s Cybersecurity, Security Systems, and Protective Services businesses and is responsible for strategically growing the company. For more than 22 years, he has spearheaded growth …

Article source: https://www.darkreading.com/perimeter/5-it-practices-that-put-enterprises-at-risk-/a/d-id/1330004?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Tightens Web Security for 45 TLDs with HSTS

Google broadens HTTP Strict Transport Security to Top Level Domains under its control and makes them secure by default.

Google is buckling down on Web security by extending HTTP Strict Transport Security (HSTS) to Top Level Domains (TLDs) under its control, the company reports.

HTTPS prevents traffic from being intercepted or misdirected in transit. HSTS automatically enforces HTTPS for connections between clients and Web servers. If someone types http://gmail.com, the browser changes it to https://gmail.com before sending the request.

In doing so, it makes connections more secure and prevents threats like downgrade attacks and cookie hijacking. Google maintains the HSTS preload list, which is built into all major browsers and contains individual domains, subdomains, and TLDs for which browsers automatically enforce HTTPS connections. Google operates 45 TLDs, including .google, .how, and .soy.
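For domains that don’t sit on a preloaded TLD, HSTS is opted into by serving a single response header over HTTPS. Here is a minimal sketch using Python’s standard library; the two-year max-age, includeSubDomains, and preload directives reflect the commonly cited requirements for preload-list submission.

# hsts_server.py: a minimal server-side HSTS sketch using only the
# standard library. Real deployments set this header in the web server
# or framework config instead.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell the browser: for the next two years, upgrade every request
        # to this host (and its subdomains) to HTTPS before sending it.
        self.send_header("Strict-Transport-Security",
                         "max-age=63072000; includeSubDomains; preload")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS enabled\n")

if __name__ == "__main__":
    # Note: browsers only honor the header when it arrives over HTTPS;
    # plain HTTP here just keeps the sketch self-contained.
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()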

Back in 2015, Google created the first secure TLD by adding .google to the HSTS preload list. Now it’s extending HSTS to more of its TLDs, starting with .foo and .dev, making these websites secure by default without additional work for their users.

“Registrants receive guaranteed protection for themselves and their users simply by choosing a secure TLD for their website and configuring an SSL certificate, without having to add individual domains or subdomains to the HSTS preload list,” explained Google Registry’s Ben McIlwain in a blog post on the news.

This move will also accelerate the security update process. Normally, there are a few months between the time a domain name is added to the list, and the time browser upgrades reach most users. Using a secure TLD means users are immediately protected.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/google-tightens-web-security-for-45-tlds-with-hsts/d/d-id/1330024?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

FBI’s secret iPhone hacking tool must stay under wraps, court rules

Here’s what we know about the tool that the FBI used to break into the San Bernardino terrorist’s encrypted iPhone:

  • It cost $900,000, as confirmed by Sen. Dianne Feinstein during an open hearing with then-FBI director James Comey in May.
  • It only works on a “narrow slice of phones,” according to Comey: the process, or tool, or whatever it is, doesn’t work on the iPhone 5s or later, and was narrowly tailored to work only on an iPhone 5C running iOS 9.

That’s it. News agencies that filed a Freedom of Information Act (FOIA) request can take a hike with their quest for more details: the mystery company that sold the tool to the FBI can’t be identified because it’s got sub-par security and the tool could be hacked out of it, according to the FBI.

Thus wraps up the quest to find out the tool’s vendor and its cost, according to a court decision on Saturday that sided with the FBI in finding that it’s allowed to keep the tool secret.

The iPhone in question was that of Syed Rizwan Farook. He and his wife were allegedly responsible for the 2015 mass shooting in San Bernardino, California. Fourteen people were murdered before the couple fled the scene in a rented vehicle, only to end up dead themselves after trying to shoot it out with the police from inside their car.

The couple had apparently destroyed their own mobile phones before the attack, but the husband’s work phone – technically, it belonged to his employer – was bagged by the FBI to see what evidence it might reveal.

After the password for Farook’s iCloud backup account was changed – either accidentally or on purpose – the FBI took Apple to court, demanding that the iPhone maker install a backdoor in its encryption to enable law enforcement to crack the iPhone.

But just as the court ordered Apple to weaken its encryption, suddenly, there was no need, the FBI said. It had figured out, with the help of the mystery vendor, how to get into the phone without Apple’s help.

Ever since the FBI threw in the towel on the court case, people have wanted to know a whole lot more about how the bureau pulled it off. The Associated Press, Vice News and USA Today had each filed FOIA requests relating to the FBI’s agreement with the hacking tool vendor. After getting some material out of the FBI, the media companies had narrowed their requests to two specific pieces of information: the name of the vendor and how much the tool cost.

In spite of Feinstein and Comey having publicly disclosed the cost of the tool, the court said that putting an exact figure on the sale could help hackers figure out where to target their efforts to get their hands on it. From the decision:

Releasing the purchase price would designate a finite value for the technology and help adversaries determine whether the FBI can broadly utilize the technology to access their encrypted devices.

The court also dismissed the notion that Comey’s statements about the price of the tool amount to an “official disclosure” that compels the release of information.

As far as the vendor’s ability to ward off thieves goes, the FBI had argued that its networks weren’t as sophisticated as the bureau’s cyber security facilities. Releasing its name would thus mean that a company with unhardened security would have a bulls-eye painted on its back.

If an adversary were determined to learn more information about the iPhone hacking tool the FBI acquired, it is certainly logical that the release of the name of the company that created the tool could provide insight into the tool’s technological design. Adversaries could use this information to enhance their own encryption technologies to better guard against this tool or tools the vendor develops for the FBI in the future.

The plaintiffs had argued that it’s not plausible that the FBI would have left the tool – one that’s “critically important to national security,” as the FBI claims – in the hands of a “poorly guarded vendor.”

The court didn’t buy it, saying that there are any number of reasons relating to national security why the tool should stay in the hands of the vendor.

That’s it, case closed: the media companies won’t be allowed to appeal the case.

If the court decision makes it harder for unknown adversaries to steal a tool that can crack open an iPhone 5C, that’s a good thing. However, it seems we’ll never know how the FBI cracked Syed Rizwan Farook’s iPhone, whether there’s an unpatched iPhone vulnerability, how successful the crack was, whether the phone contained anything of value, and what value for money US taxpayers got from what seems to have been a very expensive tool.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UL3HOSxu_7w/

Government demands for Apple and Google data keep on climbing

This autumn’s crop of transparency reports, covering the first half of the year, reveals that Google has had to respond to an all-time high of government data requests, while US national security orders for data from Apple have likewise burgeoned.

Google, the search engine Goliath, released its biannual transparency report on Thursday, disclosing to the public how often it gets – and complies with – requests from the government for users’ private information.

Google displays the information in the form of a bar chart that shows the number of user data requests from government authorities, alongside the total number of users/accounts those requests apply to, in six-month increments from July 2009 up until 31 July 2017 (though it only began reporting on the number of users/accounts affected in 2011).

Those bars have been climbing skyward. On a worldwide level, Google received 12,539 requests in the second half of 2009. That number had hit 48,941 by the first half of 2017. So far this year, 83,345 users/accounts were involved, topping the previous high of 81,311 users/accounts in the second half of 2015.

Google’s compliance has been on a gentle downward slope. It produced data for 65% of the requests as of Jan. 1, 2017, compared with 76% in July 2010.

It’s worth noting that there’s nothing gentle about the reaction of US courts to this kind of failure to comply. In fact, Ars Technica described the Department of Justice (DOJ) as “going nuclear” recently in its fury over Google’s fight over a search warrant: in this particular case, Google has declined to abide by court orders to turn over data tied to 22 e-mail accounts.

“Willful and contemptuous,” the government called it.

At any rate, back to that transparency report; Google also discloses the number of National Security Letters (NSLs) it receives. These are requests for information that the US government makes, via the FBI or other government agencies in the executive branch, in the course of conducting national security investigations. The FBI can use an NSL to seek a user’s name, address, length of service, and local and long-distance phone records. The Electronic Communications Privacy Act (ECPA) forbids the FBI from using NSLs to obtain content: no Gmail content, no search queries, no YouTube watch lists nor user IP addresses.

No surprise here: the number of NSLs Google receives has been climbing. Reporting in bands of 500, Google says that it received between 1000 and 1499 NSLs.

Last week, Apple also released its latest transparency report (PDF) outlining government data requests received in the first half of this year, from 1 January to 30 June.

Apple strives to make its reports detailed. One of the information-request buckets it provides data for is how many requests it receives for information about devices. Examples include law enforcement agencies trying to help people find their lost or stolen devices; the category also covers information about devices suspected of being used in fraud. These types of requests generally seek details on the customers connected to the devices themselves and/or to Apple services.

In the US, Apple received 4479 requests for 8958 devices and provided data 80% of the time (in 3565 cases). Worldwide, Apple received 30,814 requests for data from 233,052 devices and provided data 80% of the time (in 23,856 cases).

For whatever reason(s) – more people losing their phones? More phone thieves? More fraudsters? – Germany sticks out like a sore thumb in this category. Germany accounts for 12,677 requests for data on 24,446 devices. And as far as overall regions go, there’s quite a lot going on in Asia Pacific: 160,221 devices were the subject of 6120 data requests.

Overall demands for data were slightly down compared to requests during the second half of 2016: Apple got 30,184 requests that covered 151,105 devices during that time period.

But Apple disclosed a much higher number of national security requests that include orders received under FISA and NSLs. According to Apple, to date, it has not received any orders for bulk data.

Apple says that in the first half of this year, it received between 13,250 and 13,499 national security requests, affecting between 9,000 and 9,249 accounts. It was able to declassify a big fat 0% of them.

That’s a huge jump over the second half of last year. Between 1 July and 31 December 2016, Apple reported (PDF) receiving between 5,750 and 5,999 such requests. The orders affected between 4,750 and 4,999 accounts, and Apple declassified only one of the NSLs.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/GAoAH6S5yo0/

UK lotto players quids in: Website knocked offline by DDoS attack

The UK National Lottery has apologised for a website outage that left money in the pockets of punters unable to play games on Saturday evening.

“We’re very sorry that many players are currently unable to access The National Lottery website or app. Our 46,000 retailers are unaffected,” it said on Twitter before adding “please accept our sincere apologies if you were unable to play tonight’s games due to the website issue that affected many players.”

By Sunday the National Lottery had confirmed that the outage was the result of a denial-of-service attack. The attack ran for about 90 minutes on Saturday, between 6pm and 7.30pm, at a time of peak demand, the Daily Mirror adds.

On Saturday 30 September, a DDoS extortion group called Phantom Squad sent out a ransom demand to companies all over the world, threatening denial-of-service attacks. It’s unknown whether any of its attack threats were genuine – much less whether they were connected to the UK Lottery DDoS.

Criminals with no capacity to launch DDoS attacks have been known to threaten such assaults in a bid to coerce targets into paying up when no threat is present. ®



Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/02/lottery_ddos/

Weakness In Windows Defender Lets Malware Slip Through Via SMB Shares

CyberArk says the manner in which Defender scans for malicious executables in SMB shares gives attackers an opening.

Researchers at CyberArk Labs have devised what they claimed is a relatively simple way for attackers to sneak known malware past Windows Defender and get it to execute on devices running Windows 10 and Windows 8.1.

The tactic will likely work against other anti-virus tools as well, CyberArk said in an advisory Thursday. But for the moment it has only been tested and shown to work against Windows Defender. As many as 480 million devices with Windows Defender are completely unprotected against attacks that use the approach, the security vendor warned.

The technique that CyberArk has developed exploits a weakness in the process that Windows Defender uses for antivirus scanning of Server Message Block (SMB) shares. The vulnerability gives attackers a way to trick Windows Defender into scanning a different file than the one carrying the malware and that is being executed on a system.

Attackers can execute known malware under the guise of a legitimate file over an SMB server, CyberArk said. “Imagine a situation where you double-click a file and Windows loads that file, but your antivirus scans another file or even scans nothing at all.”

In a statement, a Microsoft spokeswoman downplayed the severity of the threat posed by the CyberArk exploit.

“The technique described has limited practical applicability, since it requires an attacker to first gain privileges or control of an internal server,” the spokeswoman said. “Should the attacker achieve that prerequisite, Windows Defender Antivirus and Windows Defender Advanced Threat Protection will detect further actions by the attacker.”

Steve Lowing, a product and marketing lead at CyberArk, says the problem has to do with the manner in which Windows Defender handles processes loaded from SMB shares.

In theory, Windows Defender should “treat the process flow for handling SMB loading exactly like it would for loading a local file on your C drive,” he says. The process should not be any different in the sense of opening and reading a file, Lowing says.

But CyberArk’s investigation showed that Windows Defender has a different code execution pathway and poor error handling for SMB-loaded files. “Through our evaluation and analysis of this weakness, depending on SMB server responses, Defender would indicate success (no malware) when the file was loaded from an SMB server.”

CyberArk’s attack method involved implementing a custom SMB server and creating a “pseudo-server” to differentiate requests being made by Windows Defender and those made by other Windows native processes. Then, by manipulating the responses to those requests, CyberArk said it could get malware to bypass Defender’s scanning.

For example, when Windows Defender requests a malicious executable file for scanning from the SMB server, the pseudo-server identifies the origin of the request and sends it a benign file to scan instead. CyberArk said it got Windows Defender to see the Mimikatz post-exploitation tool as a completely different, benign file this way. Similarly, the SMB server could be made to block scanning requests in a manner that would cause Defender to eventually give up attempting to intercept the file and let it run normally.
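To make the trick concrete, here is an illustrative sketch of the dispatch logic described above. It is not a working SMB server, and the classification heuristic and file names are hypothetical stand-ins; CyberArk’s real pseudo-server distinguishes Defender’s requests from the Windows loader’s at the SMB protocol level.

# defender_bypass_sketch.py: an illustrative sketch, NOT a working SMB
# server. All names and the classification heuristic are hypothetical.

BENIGN_FILE = "lookalike.exe"   # what the antivirus scanner is shown
REAL_FILE = "payload.exe"       # what the Windows loader actually receives

def looks_like_defender(request: dict) -> bool:
    # Stand-in for protocol-level fingerprinting of Windows Defender's
    # scan requests versus the OS loader's requests.
    return request.get("requester") == "defender"

def serve_file(request: dict) -> str:
    # Defender's scan request gets a benign file (or an error that makes
    # the scan eventually give up); the loader's request gets the payload.
    return BENIGN_FILE if looks_like_defender(request) else REAL_FILE

# The same path, requested twice, yields two different files:
print(serve_file({"requester": "defender"}))  # -> lookalike.exe
print(serve_file({"requester": "loader"}))    # -> payload.exe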

“It is fairly easy for someone that wants to leverage this weakness in Defender to implement their own SMB server,” Lowing says.

“The most obvious attack vector would be through a phishing campaign where the attacker would either have already compromised an internal server or even the endpoint where the email is being read,” he notes. “The end user would need to be tricked into running the file and the file would need to be served up from a nefarious SMB server. An effective attacker would be able to accomplish both of these tasks.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/perimeter/weakness-in-windows-defender-lets-malware-slip-through-via-smb-shares/d/d-id/1330021?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Java security plagued by crappy docs, complex APIs, bad advice

Relying on search engines to find answers to coding problems has become so common that two years ago it was suggested computer programming be renamed “googling Stack Overflow,” in reference to the oft-visited coding community website.

But researchers from Virginia Tech contend more care needs to be taken when copying code from accepted Stack Overflow answers, at least in the context of Java.

In a paper released on Thursday titled “Secure Coding Practices in Java: Challenges and Vulnerabilities” [PDF], five computer boffins – Na Meng, Stefan Nagy, Daphne Yao, Wenjie Zhuang and Gustavo Arango Argoty – analyzed Stack Overflow posts related to Java security.

They found that many developers don’t understand security well enough to implement it properly, that the overly complicated APIs in the Spring security framework and other libraries lead to frustration and errors, and that some popular Stack Overflow answers are unsafe and outdated.

“The significance of this work is that we provided empirical evidence for a significant number of alarming secure coding issues, which have not been previously reported,” the paper says. “These issues are due to a variety of reasons, including the rapidly increasing need for enterprise security applications, the lack of security training in the software development workforce, and poorly designed security libraries.”

The researchers analyzed 497 Stack Overflow posts related to Java security because, they said, the site is popular with developers and plays an important role in educating them.

They looked at common concerns related to secure Java coding, common development challenges, and common security vulnerabilities.

And they found that many of the answers endorsed by the Stack Overflow community led to insecure code. For example, accepted answers often recommended the use of MD5 and SHA-1 crypto algorithms – despite the fact that they’re insecure and should not be used.

Stack Overflow answers also recommended trusting all SSL/TLS certificates to bypass cert verification errors, even though this disables SSL security checks.

Similarly, advice related to implementing authentication in Spring suggested disabling Cross-Site Request Forgery checks.

The researchers had nothing nice to say about Spring, which accounted for 55 per cent of the Java security implementation questions that were analyzed.

“We provided substantial empirical evidences showing that APIs in Spring security (designed for enterprise security applications) are overly complicated and poorly documented, and error reports from runtime systems cause confusion,” the paper states.

They also observed that in some instances, the higher social reputation of Stack Overflow respondents led to incorrect answers being accepted over more correct fixes offered by individuals with lesser reputation scores.

‘Resist the urge to apply solutions’

Library designers, the paper advises, should remove or deprecate APIs with weak security, like MD5. And tool builders should develop more automated security checking capabilities.

The paper recommends that developers spend more time testing security features, avoid disabling security checks, and exercise caution with community answers.

And it advises Stack Overflow admins to think about how outdated posts are presented, given the perils of unsafe code.

“There is always a risk when developers use code they do not fully understand,” said Josh Heyer, community management team lead for Stack Overflow, in an email to The Register.

“We encourage all programmers looking to benefit from the guidance found on Stack Overflow to resist the urge to apply solutions to their work without first reading the full answer, other answers to the same question, and any warnings left as comments by previous readers.”

Heyer said developers should always refer back to official documentation when using an unfamiliar API for the first time, noting that the most trustworthy community-based answers tend to cite the docs.

tl;dr RTFM. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/29/java_security_plagued_stack_overflow/

Signal app’s address book security could upset governments

Signal, arguably the world’s most respected secure messaging app, plans to use the DRM (Digital Rights Management) secure enclave built into Intel’s Skylake chips as a way of hiding away how people are connected.

It sounds esoteric, but it fixes an important privacy weakness that has dogged end-to-end encrypted messaging: users want to know who else they know that uses the same service. This requires that apps check who else among a person’s contacts uses it by consulting a central “social graph” of how people are connected.

This is a privacy compromise because it means that while the service’s own encryption stops it from reading your messages (or letting intelligence agencies that later ask for access to this data read them either) it can end up knowing a lot about who you know.

Signal tries to counteract this by not maintaining its own centralised social graph but instead using yours: your address book.

To find out if someone you know uses Signal, the app turns their number into a truncated SHA256 hash first, and matches it against a central directory of hashes (this is similar to the way that password authentication works). Anyone intercepting the traffic or hacking the directory will see hashes rather than telephone numbers.

The only way for a hacker with stolen hashes to figure out what telephone numbers they’ve got is to guess. Guess a number, run it through the hashing algorithm and see if it matches one that you’ve stolen. If it doesn’t match anything, guess another number, and another, and another… and so on until you find a match.

There is a problem with this scheme (to quote Signal’s developers Open Whisper Systems) because the “pre-image space” for 10-digit numbers is small, “inverting these hashes is basically a straightforward dictionary attack”, which is another way of saying that it’s feasible for a computer to make guesses quickly and cheaply enough to compromise the security of the hashes.
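A minimal sketch of why that dictionary attack works, assuming (for illustration only) a hex-truncated SHA256 rather than Signal’s exact encoding:

# contact_hash_attack.py: a toy demonstration that the pre-image space
# of phone numbers is small enough to enumerate. The truncation length
# is an assumption for illustration, not Signal's actual scheme.
import hashlib

def truncated_hash(number: str, length: int = 10) -> str:
    # Truncated SHA-256 of a phone number, hex-encoded.
    return hashlib.sha256(number.encode()).hexdigest()[:length]

# An attacker steals this hash from a contact-discovery directory:
stolen = truncated_hash("+15551234567")

# Inverting it is a dictionary attack over the (small) space of
# plausible numbers. A toy range here; a real attacker enumerates all
# ~10 billion ten-digit numbers, which is cheap on modern hardware.
for n in range(5551230000, 5551240000):
    candidate = f"+1{n}"
    if truncated_hash(candidate) == stolen:
        print("recovered:", candidate)
        break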

Signal doesn’t keep any record of the lookups it’s performed and allows you to satisfy yourself that it doesn’t by giving you access to its source code:

…if you trust the Signal service to be running the published server source code, then the Signal service has no durable knowledge of a user’s social graph if it is hacked or subpoenaed.

But who’s to say that it’s the published server source code that’s actually running on Signal’s server rather than some version of it that’s been modified by a hacker or the demands of an intelligence agency?

…someone who hacks the Signal service could potentially modify the code so that it logs user contact discovery requests, or (although unlikely given present law) some government agency could show up and require us to change the service so that it logs contact discovery requests.

Open Whisper Systems’ founder Moxie Marlinspike thinks the Software Guard Extensions (SGX) technology built into Intel chips as a secure enclave for Digital Rights Management (DRM) offers a way out of the problem, and has integrated it into a new Signal open source beta.

This is similar to ARM’s TrustZone technology that forms the basis of Samsung’s Knox security system, but was designed with DRM-oriented features such as “remote attestation”.

Remote attestation is normally used by content providers to verify that you and I are running the software we are permitted to, software that will respect DRM restrictions, rather than something that can pirate the content it’s playing.

In Signal’s case this arrangement is inverted. The enclave is on its server rather than on your device and remote attestation allows you, the client, to attest that the server is running a squeaky clean copy of Signal’s software.

Furthermore, because the verified copy of Signal’s software is running in an enclave, neither it nor the messages that pass between you and the enclave can be interfered with by other software on the server.

A practical hurdle to this is SGX’s 128MB RAM limit, which sounds like a lot of protected memory for a microprocessor but is nowhere near enough to hold a database that might contain billions of hashes.

Not to mention:

Even with encrypted RAM, the server OS can learn enough from observing memory access patterns … to determine the plaintext values of the contacts the client transmitted!

Open Whisper Systems’ solution is to perform “a full linear scan across the entire data set of all registered users for every client contact submitted,” which is to say access lots of hashes in the database so anyone with control of the OS can’t detect a pattern.

For any sizable user base, this would be incredibly slow if it had to be done for every user, almost every time they connect to the service (messaging apps perform regular checks in case new users appear).
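A toy sketch of the full-scan idea (Python offers no real constant-time guarantees, and the data structure here is an assumption; the point is only that the access pattern is identical for every query):

# oblivious_scan.py: illustrate the "full linear scan" idea by touching
# every entry on every lookup, so an observer watching memory access
# patterns learns nothing about which contact was queried.
import hashlib
import hmac

# Hypothetical directory of registered-user hashes:
registered = [hashlib.sha256(f"+1555000{i:04d}".encode()).digest()
              for i in range(10_000)]

def oblivious_lookup(query_hash: bytes) -> bool:
    found = False
    for h in registered:                        # every entry is read,
        match = hmac.compare_digest(h, query_hash)  # compared in constant time,
        found = found or match                  # and the result folded in
    return found

q = hashlib.sha256(b"+15550000042").digest()
print(oblivious_lookup(q))  # True, but the scan looks the same for
                            # hits, misses, and every query value

The sketch also shows why this is slow: the cost of one lookup grows linearly with the whole user base.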

To avoid this turning into a computer science lecture, we’ll sum up Marlinspike’s proposed solution by saying that it is based around disordering the way hashes are stored within the hash table to make it harder to carry out surveillance on them.

Does any of this matter beyond this one app?

Undoubtedly. Signal’s user base is small, but where Signal goes, other secure messaging apps have a habit of following, including WhatsApp and Facebook Messenger with their billion or more users. Since adopting Signal’s underlying platform in 2016, both appear to be implementing its innovations over time.

We don’t know whether this will include using server-side SGX enclaves, but if it does it could provoke a response from governments already questioning the use of encrypted messaging.

App companies want to preserve user privacy for complex reasons we’ve written about before, including a desire not to turn into large-scale surveillance platforms for global governments in ways that might hurt their popularity.

But the bottom line is clear: losing access to address book metadata will not go down well with the powers that be.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ZUVI3cZhfS0/

Equifax mea-culpas with free credit “locks” forever

Equifax is mea-culpa-ing by offering free credit locks for life, starting on 31 January.

These are not credit freezes, mind you. No, Equifax is giving away credit padlocks that it says are a new service.

We don’t know much about the credit locks outside of what Equifax’s new interim CEO, Paulino do Rego Barros Jr., said in an editorial published by the Wall Street Journal on Wednesday, the day after he was appointed.

Barros got his new gig the same day that Equifax’s previous CEO, Richard Smith, washed his hands and walked away from the embarrassing mess. That is, Smith washed his hands, but he didn’t wash off the $18 million pension he took with him after his 12-year tenure.

Barros said the credit locks will be easy for consumers to lock and unlock, unlike credit freezes, which require PINs (yes, those PINs) to unlock … and which stop thieves dead in their tracks … and which cost the credit bureaus money they’d otherwise make when banks, credit card companies, cell phone companies and the like pull customers’ credit reports, as the New York Times explains.

The data monger has a lot to mea culpa about. The credit lock freebie-4-ever comes three weeks after Equifax’s breach affected about half of everybody in the US, 400,000 in the UK and 100,000 in Canada.

…mind you, it was a breach that was enabled by a critical RCE (Remote Code Execution) flaw for which patches had been available for two months before the mid-May attack.

Equifax has been pratfalling ever since, as Barros is well aware.

As ZDNet’s Zack Whittaker reported, an XSS (cross-site scripting) vulnerability was found in Equifax’s fraud alerts website: a flaw that could be used in phishing emails to trick consumers into turning over personal data.

And there was that leaky customer portal in Argentina – username ‘admin’, password ‘admin’.

It just kept getting more and more pratfally: there were the woeful PINs that put frozen credit files at risk, and then there was Equifax’s not-so-neat party trick of ditching its tried and trusted equifax.com domain and instead putting its breach info site onto the easy-to-typosquat and bafflingly convoluted domain equifaxsecurity2017.com … a domain name which it proceeded to scramble at least three times, sending customers to a fake phishing site for weeks.

Beyond the pile of cyber D’oh!, there were too few, underprepared operators at the call centres, leaving alarmed customers facing delays and agents who couldn’t answer questions.

There’s no excuse for any of it, Barros said in his editorial. The company is adding agents and getting them trained, and he’s getting a daily update on the situation.

As well, Equifax is going to fix that problematic site of theirs. If it can’t fix it, it’s going to build a new one from scratch, Barros said. It’s also extending the window for free credit freezes and its TrustedID Premier credit monitoring service; you can sign up for both through the end of January.

I’m sure Equifax is sincerely sorry about this mess. But here’s the thing: given its track record, would you trust the company’s new credit lock service? From the NYT’s Ron Lieber:

This is the same company … that could not create a functioning website for people worried about whether thieves had stolen their Social Security numbers. People who have been trying to freeze their files have run into too many problems to name, and many of them do not yet have PINs. I’ve received hundreds of emails complaining about Equifax’s basic dysfunction.

Why does Equifax even need a new service? Why can’t it just give free credit freezes for life?

Lieber sent Equifax 18 questions that we still need answered, including:

Whether Equifax will force people to submit to mandatory arbitration or some other loss of privileges or rights in exchange for free locks for life. Or whether your name will end up on lists for various offers of credit. This is how TransUnion’s similar free service works, one that it’s been pushing hard at people who have come to its website looking for a credit freeze in the wake of the Equifax hack.

Good questions. As Mother Jones has noted, credit freezes or credit locks come with strings. TransUnion’s Disclaimers and Warranties suggest that in order to interact with the company at all, you have to absolve it of liability for anything that might happen to your data on its watch.

TransUnion, by the way, also has credit locks, and they’re definitely not free. When I tried to set one up, it looked like I was heading toward a $19.99/month credit monitoring bleed.

Will the free credit locks cause the other credit bureaus to follow suit? I’m not holding my breath. At any rate, I want my $5 back. I want all my $5 payments back: as a citizen of Massachusetts, that’s how much I had to fork over to TransUnion and to Experian to freeze my credit at those bureaus, all on account of Equifax’s pratfall. People in other states have had to shell out even more.

I called Equifax’s “We’re sorry, we’re sorry, we’ve got enough phone operators on hand now, we swear!” number to ask if Equifax had any intention of refunding customers the money we’ve had to fork over because of its breach.

Its trained operators might not have been trained to handle that one yet: the answer was a stammered “I haven’t heard of anything like that…”

No, I’m not surprised. Again, I’m not holding my breath on that one, either.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/PztooK9dPkI/

Apple Mac fans told: Something smells EFI in your firmware

Pre-boot software on Macs is often outdated, leaving Apple fans at a greater risk of malware attack as a result, according to new research.

An analysis of more than 73,000 Apple Macs by Duo Security found that users are unknowingly exposed to sophisticated malware-based attacks because of outdated firmware. On average, 4.2 per cent of the 73,324 real-world Macs used in the enterprise environments analysed were running an EFI firmware version different from the one they should have been running, based on the hardware model, the OS version, and the EFI version released with that OS version.

In one iMac model (21.5in, late 2015) analysed, 43 per cent (941 out of 2,190) were running outdated, insecure firmware. Three variants of the late-2016 13in MacBook Pro showed rates of deviance between 25 and 35 per cent, and two variants of the early-2011 MacBook Pro showed a deviance from expected EFI firmware versions of 15 per cent and 12 per cent.

Variance from the expected EFI firmware versions also differed markedly across versions of the OS: macOS 10.12 (Sierra) had a significantly higher average rate of deviance, at 10 per cent, followed by OS X 10.11 (El Capitan) with 3.4 per cent and OS X 10.10 (Yosemite) with 2.1 per cent.

Patched apps, obsolete firmware

The research shows that Mac fans might easily be running systems that are fully up to date for OS and applications but years out of date in terms of EFI firmware, leaving their Mac computers vulnerable to publicly disclosed vulnerabilities and exploitation.

A total of 3,400 (4.6 per cent) of the Macs analysed were systems that Apple still supports with software security updates but that have not received EFI firmware updates.

Sixteen combinations of Mac hardware and OSs have never received any EFI firmware updates over the lifetime of the 10.10 to 10.12 versions of OS X/macOS that Duo Security analysed. They do, however, continue to receive security updates from Apple for their OS and bundled software.

Researchers were taken aback by the update gap.

“The size of this discrepancy is somewhat surprising, given that the latest version of EFI firmware should be automatically installed alongside the OS updates,” according to Duo Security. “As such, only under extraordinary circumstances should the running EFI version not correspond to EFI version released with the running OS version.”
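If you want to check a Mac yourself, the running EFI version is reported as the Boot ROM version. Below is a minimal sketch that shells out to the built-in system_profiler tool; mapping the result to the expected version for your model and OS pair is left to the reader, with Duo’s published data as the reference point.

# check_boot_rom.py: report the EFI (Boot ROM) version a Mac is
# actually running, via the built-in system_profiler tool on macOS.
import subprocess

def boot_rom_version() -> str:
    out = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "Boot ROM Version" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    print("EFI / Boot ROM version:", boot_rom_version())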

Thunderstruck

Security flaws in firmware could expose users to the Thunderstrike vulnerability. Attacks exposed in the WikiLeaks Vault 7 data dumps also rely on out-of-date firmware.

Duo Security said its research raised questions about the level of QA being afforded to these EFI firmware updates in comparison to the much better job Apple is doing with software security updates.

Further analysis of Apple’s updates also highlighted what seems to be the erroneous inclusion of 43 versions of EFI binaries in the 2017-001 security updates for 10.10 and 10.11 that were older than the versions of EFI binaries that were released in the previous updates 2016-003 (10.11) and 2016-007 (10.10).

This would indicate a regression or a release QA failing where incorrect versions of EFI firmware were shipped in OS security updates.

Duo speculates that something might be interfering with the way bundled EFI firmware updates are getting installed, leading to systems running old EFI versions.

Part of the problem is that there is very little visibility to the state of EFI firmware security for Apple systems. There are no published timelines for how long EFI firmware will be supported for firmware patches, or any lists of which systems are no longer going to receive firmware updates, despite continuing to receive software security updates. Enterprise patch deployment tools may also be an issue, in at least some cases.

But part of the firmware security gap could be the fault of BOFHs rather than Apple. Mac sysadmins too often ignore the importance of EFI firmware updates, or actively remove them due to past issues with their deployment. Applying EFI firmware updates used to be laborious, requiring hands-on interaction by IT support staff.

Due to this, many Mac sysadmins over time decided to remove or disable the deployment of EFI firmware updates alongside OS or security updates, deciding to “deal with it” as needed.

This approach is no longer sustainable, according to Duo Security, which advocates that EFI firmware updates should be delivered and applied alongside OS or security updates. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/29/mac_firmware_insecurity/