STE WILLIAMS

It’s Not What You Know, It’s What You Can Prove That Matters to Investigators

Achieving the data visibility to ensure you can provide auditors with the information they need after a breach, and do so in just a few days, has never been more difficult.

Speaking at a recent conference, Heather Adkins, Google’s information security manager, posed a question to the audience that every organization should ask itself: “The question is not whether or not you’re going to get hacked, but are you ready? Are you going to be able to very quickly make decisions about what to do next?”

When a breach occurs, you must do more than confirm you have implemented the appropriate information security technologies. You will also have to demonstrate you have complete visibility into the location of sensitive data, who has accessed it, and how they used and shared those files.

And you will have to do so more quickly than you might expect.

Research shows that it takes security and IT teams days, even weeks, to identify the cause of a breach and determine which files may have been exposed. The Ponemon Institute reports that it takes an average of 191 days to identify a breach, and more than two months (66 days) to contain it. That’s not going to fly with lawmakers and regulatory agencies.

Consider the looming May 25, 2018, deadline to demonstrate compliance with the EU’s General Data Protection Regulation (GDPR), which establishes strict requirements for protecting customer data. There are two key components of GDPR that create anxiety among security professionals and compliance officers: the broad definition of what constitutes personally identifiable information, and the short time frame for reporting a breach: within 72 hours of discovery.

In the US, the North Carolina state legislature is considering an update to the Act to Strengthen Identity Theft Protections that requires that a breached entity notify affected consumers and the attorney general’s office within 15 days. You can rest assured that other states will follow suit.

To make matters worse, achieving the level of data visibility needed to tell auditors which specific files may have been exposed in a breach, and to do so in just a few days, has never been more difficult.

For one, there is more technology creating more data — and most of it is sensitive. Second, a typical organization has sensitive information widely distributed across its network, both on-premises and in the cloud. Your organization may have customer information stored in content systems and business applications like SAP, SharePoint, OpenText, Oracle, Box, Office 365, and many more.

These challenges pertain only to the content that stays inside your organization. In today’s environment, it has become virtually impossible to get business done without sharing confidential information with partners and other third-party service providers.

Consider a loan application received by a bank that must be routed to multiple third parties to conduct background and credit checks as well as a property appraisal, and secure the tax and lien history. Or a physician who forwards a patient’s medical records to a colleague for a second opinion and then sends the agreed-upon treatment plan to the patient’s insurance company. These kinds of use cases occur all the time.

Managing and monitoring who is accessing sensitive information and what they’re doing with it is seldom done efficiently — if done at all — and yet these are critical to data security and regulatory compliance. 

While IT has ceded the control it once had over enterprise content, it remains responsible for protecting sensitive information from both outside hackers and the rising insider threat — either the malicious actor who steals information or the innocent employee whose mistake accidentally exposes sensitive data.

The trouble is, security has become anathema to efforts to improve business agility and employee productivity levels. End users are increasingly interrupted with notifications and requests from security solutions to install updates and perform system sweeps, disrupting workflows and hurting employee productivity.

This constant tug of war between IT’s efforts to secure data and users’ needs to share data has given rise to Shadow IT. Users embrace consumer (read: insecure) solutions without IT’s permission (or knowledge) to share files by downloading them to USB thumb drives, or uploading them to a cloud-based service like Dropbox. Users are drawn to these solutions for their functionality and ease of use; however, IT loses critical visibility over the movement and usage of files. The problem boils down to this: you can’t secure what you can’t see.

IT must have the controls necessary to demonstrate compliance with internal policies and industry standards. There are three key steps you can take to ensure you are able to supply information on file activity in an auditable format to internal auditors, government regulators, and/or legal teams when a security incident happens:

  1. Monitor all sensitive data: You must gain an understanding of how your employees and partners are accessing, using, and sharing your organization’s sensitive data.
  2. Create detailed audit logs that show all content activity: Knowing exactly who accessed your content, when they accessed it, and how they used it will enable you to demonstrate your organization’s compliance with rigorous industry regulations.
  3. Keep content where it belongs: Instead of creating duplicate copies of content for the purposes of collaboration, integrate with the applications that create the content to manage the associated workflows. 
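Step 2 above can be sketched as a minimal structured audit record. This is only an illustration; the helper and field names below are invented, not any particular product’s schema:

```python
import json
from datetime import datetime, timezone

def audit_event(user, action, path, shared_with=None):
    """Build one append-only audit record for a single file action."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,              # e.g. "read", "edit", "share"
        "file": path,
        "shared_with": shared_with or [],
    }

event = audit_event("alice@example.com", "share", "/finance/q3-forecast.xlsx",
                    shared_with=["analyst@partner.example"])
print(json.dumps(event))
```

Emitting one such record per access, to append-only storage, is what makes “who accessed it, when, and how” answerable in days rather than weeks.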

Simply showing investigators your security system will not relieve you of responsibility for a data breach and the resulting financial and legal consequences. You and your IT department must know the details of the location, access controls, and activity around every single file, including how it may be shared externally. That knowledge is critical to quickly and effectively identifying the root cause of a security incident, mitigating the damage, and demonstrating full compliance.


Yaron Galant joined Accellion in 2017 and brings 25 years of experience in product strategy, management, and development. A pioneer in security and analytics, Mr. Galant has played a leading role in the creation of the Web application security space.

Article source: https://www.darkreading.com/attacks-breaches/its-not-what-you-know-its-what-you-can-prove-that-matters-to-investigators/a/d-id/1331095?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Another baby monitor is allowing strangers to spy on children

Internet-enabled cameras: they’re supposed to secure and monitor our babies, or our pets, or our homes and offices. Realistically? All too often, a child could hack them.

The latest news from the department of Internet of Things (IoT) gadgets that you can use to spy on people: SEC Consult, an Austrian cybersecurity company, on Wednesday urged owners of MiSafes Mi-Cam baby monitors to turn them off if they want to keep their kids from being eyeballed by prying eyes or chatted up by strangers roaming the internet.

One of what the firm called multiple critical vulnerabilities allows for the hijacking of arbitrary video baby monitors. An attacker can eavesdrop on nurseries and talk to whoever’s near the baby monitor by simply modifying a single HTTP request, SEC Consult says.

The tweaked HTTP request allows an attacker to get at information about a given cloud-based Mi-Cam customer account and whatever baby monitors are paired with it, and to view and interact with those connected webcams. This video demonstrates the attack.

The baby monitors also have outdated firmware riddled with numerous publicly known vulnerabilities; root access protected by only four digits’ worth of credentials (and default credentials, at that); and a forgotten-password function that sends a six-digit validation key that’s good for 30 minutes: plenty of time for a brute-force attack.
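To see why 30 minutes is “plenty of time,” assume the attacker can submit guesses without rate limiting (which the advisory implies); the average request rate needed to brute-force the key is modest:

```python
# A six-digit numeric key has 10**6 possible values; on average an
# attacker hits the right one after trying half the keyspace.
keyspace = 10 ** 6
window_seconds = 30 * 60          # the key stays valid for 30 minutes
expected_tries = keyspace / 2
rate_needed = expected_tries / window_seconds
print(f"about {rate_needed:.0f} guesses/second on average")
```

A few hundred requests per second is trivially achievable against an unthrottled endpoint.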

As far as the software goes, one of the problems with the Mi-Cam app is broken session management, SEC Consult says:

A number of critical API calls can be accessed by an attacker with arbitrary session tokens because of broken session management.

This allows an attacker to retrieve information about the supplied account and its connected video baby monitors. Information retrieved by this feature is sufficient to view and interact with all connected video baby monitors for the supplied UID [unique identifier].
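That missing binding between token and account can be sketched as follows; the session store, function name, and identifiers here are hypothetical, chosen only to illustrate the check the advisory says was skipped:

```python
# Hypothetical session store: token -> the account (UID) it was issued for.
SESSIONS = {"token-abc": "uid-1001"}

def get_monitor_info(session_token, uid):
    """Honour an API call only if the token was issued for this UID."""
    owner = SESSIONS.get(session_token)
    if owner is None or owner != uid:
        raise PermissionError("token not valid for this account")
    return {"uid": uid, "cameras": ["nursery"]}
```

With broken session management, the `owner != uid` comparison is effectively absent, so any valid-looking token unlocks any supplied UID.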

SEC Consult isn’t giving away much detail about these vulnerabilities. That’s because it can’t figure out how to get through to the vendor to responsibly disclose them: it’s been trying to get in touch with MiSafes since December, without any luck. It’s also tried to ask the Chinese Computer Emergency Response Team for coordination support, but CERT/CC decided not to coordinate a response or to publish the vulnerabilities.

What’s the best you can do if you’re one of the 52,000 or so people who own one of these baby monitors?

Turn it off.

After that, you might want to check out our tips on how to secure your baby monitor or other IP cameras.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rGLQwlyvh_k/

Tesla cryptojacked by currency miners

Tesla’s Amazon Web Services (AWS) cloud account was broken into by hackers who suckled at its computer power for cryptocurrency mining, according to security researchers at RedLock.

The researchers said on Tuesday that the hackers managed to get into the administration console for Tesla’s Kubernetes account because it wasn’t password-protected. Kubernetes is an open-source container-orchestration system, originally designed by Google, for deploying and managing cloud applications.

Once they were in, they found access credentials for Tesla’s AWS environment. They also got at an Amazon S3 (Amazon Simple Storage Service) bucket that had sensitive telemetry data related to Tesla cars.

To mine cryptocurrency – the researchers didn’t say what kind or how much the hackers got – the attackers hid the true IP address of a mining-pool server behind an IP address hosted by CloudFlare, a free content delivery network.

RedLock says it immediately reported the issue to Tesla, which quickly scrubbed itself clean of the infection. Tesla sent a statement to media outlets in which it said that it hadn’t uncovered any sign of customer privacy or vehicle safety or security having been compromised:

We maintain a bug bounty program to encourage this type of research, and we addressed this vulnerability within hours of learning about it. The impact seems to be limited to internally-used engineering test cars only, and our initial investigation found no indication that customer privacy or vehicle safety or security was compromised in any way.

RedLock said that the Tesla attack is similar to those it’s discovered in the past few months targeting Aviva, a British multinational insurance company, and Gemalto, the world’s largest manufacturer of SIM cards.

RedLock said that Tesla, Aviva and Gemalto all had at least one thing in common: Kubernetes consoles accessible to anybody on the internet, all lacking password protection. In fact, the researchers said they found hundreds of such unprotected consoles.

Within the past few months, cryptomining has cropped up on at least a couple of sites, such as The Pirate Bay and Salon, that are purposefully doing it to make money they say they can’t get through advertising.

Unauthorized cryptomining, known as cryptojacking, has shown up in unexpected places: a recent example simultaneously infected numerous government websites in at least the US, the UK, and Australia, when a third-party service that they all used for text-to-speech conversion got hacked.

Another cryptojacking instance popped up at a Buenos Aires Starbucks Wi-Fi in December.

Did the franchise do it on purpose? Or was it victimized by cryptojackers? Most of the time, cryptojacking is intentional, Naked Security’s Paul Ducklin told Wired at the time of the Starbucks incident:

It’s hard to guess the motivation of an unknown website operator, but based on an analysis of our detection data for the month of November [2017], most coinmining sites were doing it on purpose, and a significant majority were taking all the CPU they could get.

As we noted with Salon’s “turn off your adblocker or get to work mining for us,” you’ll probably have a tough time making money from browser-based cryptomining.

Even Coinhive, a website dedicated to providing cloud-based cryptomining services for a 30% cut of the take, admits that your takings are likely to be modest: a site with 1,000,000 page visits a month, each of which lasts a full five minutes, none of which are from mobile devices, and where visitors are forced to mine all the time they’re on the site, is only going to pull in about 0.27 Monero a month – currently about $100.
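Coinhive’s arithmetic is easy to reproduce; note that the exchange rate below is the one implied by the article’s “$100” figure, not a quoted market price:

```python
visits_per_month = 1_000_000
minutes_mined_per_visit = 5        # every visitor mines the full visit
monthly_xmr = 0.27                 # Coinhive's estimate for this traffic
usd_per_xmr = 370                  # rate implied by "currently about $100"
monthly_usd = monthly_xmr * usd_per_xmr
usd_per_visit = monthly_usd / visits_per_month
print(f"${monthly_usd:.0f}/month, or ${usd_per_visit:.6f} per five-minute visit")
```

A hundredth of a cent per captive five-minute visit explains why honest browser mining struggles to replace advertising revenue.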

How much cryptocoin does unauthorized sipping off an AWS account get you?

It doesn’t matter. It’s illegal.

Come up with a different hobby rather than go down that route, and while we’re on the topic, make sure to password-protect those Kubernetes consoles!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EC4k4trRgXE/

How one guy could have taken over any Tinder account (but didn’t)

An Indian researcher has put Tinder’s online security in the spotlight again.

Last month, we explained how missing encryption in Tinder’s mobile app made it less secure than using the service via your browser – in your browser, Tinder encrypted everything, including the photos you saw; on your mobile, the images sent for your perusal could not only be sniffed out but covertly modified in transit.

This time, the potential outcome was worse – complete account takeover, with a crook logged in as you – but thanks to responsible disclosure, the hole was plugged before it was publicised. (The attack described here therefore no longer works, which is why we are comfortable talking about it.)

In fact, researcher Anand Prakash was able to penetrate Tinder accounts thanks to a second, related bug in Facebook’s Account Kit service.

Account Kit is a free service for app and website developers who want to tie accounts to phone numbers, and to use those phone numbers for login verification via one-time codes sent in text messages.

Prakash was paid $5000 by Facebook and $1250 by Tinder for his troubles.

Note. As far as we can see in Prakash’s article and accompanying video, he didn’t crack anyone’s account and then ask for a bug bounty payout, as seemed to have happened in a recent and controversial hacking case at Uber. That’s not how responsible disclosure and ethical bug hunting works. Prakash showed how he could take control of an account that was already his own, in a way that would work against accounts that were not his. In this way, he was able to prove his point without putting anyone else’s privacy at risk, and without risking disruption to Facebook or Tinder services.

Unfortunately, Prakash’s own posting on the topic is rather abrupt – for all we know, he abbreviated his explanation on purpose – but it seems to boil down to two bugs that could be combined:

  • Facebook Account Kit would cough up an AKS (Account Kit security) cookie for phone number X even if the login code he supplied was sent to phone number Y.

As far as we can tell from Prakash’s video (there’s no audio explanation to go with it, so it leaves a lot unsaid, both literally and figuratively), he needed an existing Account Kit account, and access to its associated phone number to receive a valid login code via SMS, in order to pull off the attack.

If so, then at least in theory, the attack could be traced to a specific mobile device – the one with number Y – but a burner phone with a pre-paid SIM card would admittedly make that a thankless task.

  • Tinder’s login would accept any valid AKS security cookie for phone number X, whether that cookie was acquired via the Tinder app or not.

We hope we’ve got this correct, but as far as we can make out…

…with a working phone hooked up to an existing Account Kit account, Prakash could get a login token for another Account Kit phone number (bad!), and with that “floating” login token, could directly access the Tinder account associated with that phone number simply by pasting the cookie into any requests generated by the Tinder app (bad!).

In other words, if you knew someone’s phone number, you could definitely have raided their Tinder account, and perhaps other accounts connected to that phone number via Facebook’s Account Kit service.

What to do?

If you’re a Tinder user, or an Account Kit user via other online services, you don’t need to do anything.

The bugs described here were down to how login requests were handled “in the cloud”, so the fixes were implemented “in the cloud” and therefore came into play automatically.

If you’re a web programmer, take another look at how you set and verify security information such as login cookies and other security tokens.

Make sure that you don’t end up with the irony of a set of super-secure locks and keys…

…where any key inadvertently opens any lock.
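One way to avoid that irony is to bind each login token cryptographically to the account it was issued for, so a token minted for phone number X fails verification for phone number Y. A minimal sketch using an HMAC; key handling and token expiry are deliberately elided, and the secret here is illustrative:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"   # illustrative; real keys need proper management

def issue_token(phone_number):
    """Sign the identity the token is issued for, binding the two together."""
    sig = hmac.new(SECRET, phone_number.encode(), hashlib.sha256).hexdigest()
    return f"{phone_number}:{sig}"

def verify_token(token, phone_number):
    """Accept only if the bound identity AND the signature both match."""
    bound_to, sig = token.rsplit(":", 1)
    expected = hmac.new(SECRET, phone_number.encode(), hashlib.sha256).hexdigest()
    return bound_to == phone_number and hmac.compare_digest(sig, expected)

token = issue_token("+15550001111")
print(verify_token(token, "+15550001111"))   # the right key opens the right lock
print(verify_token(token, "+15550002222"))   # a key for X won't open lock Y
```

The point is the second check: verifying that a token is merely well-formed, without verifying whom it was issued to, is exactly the pattern that bit Account Kit and Tinder.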


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/y6spMCGFVtg/

Security Liability in an ‘Assume Breach’ World

What’s This?

Cybersecurity today is more than an IT issue. It’s a product quality issue, a customer service issue, an operational issue, and an executive issue. Here’s why.

Sara Boddy contributed to this article.

Anything we put online must swim in a sea of enemies. The F5 Labs report, Lessons Learned from a Decade of Data Breaches, reveals that the average breach leaked 35 million records. Nearly 90% of the US population’s social security numbers have been exposed to cybercriminals. When confronted by staggering statistics like these, it is prudent to assume it’s a matter of “when, not if” your systems will be hacked. The safest stance is to operate in an “assume breach” mode. This means anticipating that most of the systems and devices you use on a day-to-day basis, from IoT devices in homes to web servers supporting applications, are susceptible to attack.

At the heart of this, CISOs are so worried about the impacts of a breach that 81% of them either won’t report a breach or will report only a material breach, which, depending on the size of the company and its materiality threshold, could mean that very significant breaches go unreported. The recent F5 and Ponemon report, “The Evolving Role of CISOs and their Importance to the Business,” found that:

  • 19% of CISOs report all breaches to the CEO/Board
  • 46% of CISOs report only material breaches
  • 35% do not report breaches at all

So why are CISOs reluctant to report a breach? It seems that every high-profile breach is followed by the cleaning out of the C-suite. From Equifax to Uber, a breach means those in charge of cybersecurity are sent off in search of new employment.

This should raise serious questions regarding liability for corporate leadership. But according to Harvard Business Review: “Just 38% of directors reported having a high level of concern about cybersecurity risks, and an even smaller proportion said they were prepared for these risks.” That’s no surprise. Organizations run on IT, and most of our IT systems are being engineered beyond our ability to operate or monitor them.

Complex and Multifaceted
Compounding this problem is the fact that cybersecurity is a complicated field with many facets and sub-disciplines. Consider these eight areas identified by the International Information System Security Certification Consortium as essential knowledge for cybersecurity:

  • Security and Risk Management
  • Asset Security
  • Security Engineering
  • Communications and Network Security
  • Identity and Access Management
  • Security Assessment and Testing
  • Security Operations
  • Software Development Security

Most executives have a working knowledge of sales, contract law, and accounting, but once you dive into the deep water of IT security, comprehension gets far more difficult. It doesn’t help that many IT professionals speak in terms of technology, not business. From this, we can conclude:

  • Breaches should be expected.
  • CISOs are not fully reporting breaches.
  • Executives lack the awareness and the ability to understand the risk.

This lack of awareness raises serious questions about liability for the organization, the CISO, and the leadership itself. While businesses generally work to limit their liability whenever and wherever they can, when it comes to cyber-risk, executive teams seem to be exposing themselves unnecessarily. The exposure stems from regulations that make protecting other people’s information a business duty and obligation.

‘Commercially Reasonable’ Data Protection
In general, the law states that organizations must use “commercially reasonable” methods to secure access to the data they collect and process about their employees, customers and, in the case of hosting/outsourcing organizations, their customers’ customers. There are plenty of standards for measuring what is commercially reasonable, such as those published by the National Institute of Standards and Technology (NIST), as well as commercial standards such as the Payment Card Industry Data Security Standard (PCI DSS), which also introduces contractual liability if its requirements are not met.

When it comes to contractual liability, there are many ways things could go badly for an organization and many grounds on which it can be penalized: breach of contract (if the contract requires cybersecurity), general negligence (especially if internal processes are not being followed), or breach of warranty (if the contract guarantees a certain level of security quality). On top of that, there is the potential for class-action customer and shareholder lawsuits for negligence. This is not counting all the regulatory liability that stems from FTC lawsuits for false advertising regarding security, state attorneys general suing over improper notification, and the forthcoming GDPR regulatory requirements.

With such painful liability threats looming, organizations need to look at their information security plan as a liability defense plan. Cybersecurity is part of the business, which means that it’s not just an IT issue, but also a quality issue (liability regarding product quality), a customer service issue (liability regarding customer data), and an operational issue (liability regarding service delivery). Security leaders should be aware of the potential liability and make use of trusted advisors, both within and outside the organization, to help manage this risk. Lastly, executives, once aware of their risks and liabilities, need to follow up and monitor operations in order to ensure that the business liability is kept to a minimum.

Get the latest application threat intelligence from F5 Labs.

 

Raymond Pompon is a Principal Threat Research Evangelist with F5 Labs. With over 20 years of experience in Internet security, he has worked closely with Federal law enforcement in cyber-crime investigations. He has recently written IT Security Risk Control Management.

Article source: https://www.darkreading.com/partner-perspectives/f5/security-liability-in-an-assume-breach-world/a/d-id/1331100?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Hey, you. App dev. You like secure software? Let’s learn from Tinder, Facebook’s blunders

App developers should take a long, hard look at how they use Facebook’s Account Kit for identifying users – after a flaw in the system, and Tinder’s use of the toolkit, left shag-seekers open to account hijacking.

When a horny netizen logs into their Tinder profile using their phone number as a username, the hookup app relies on the Facebook-built AccountKit.com to check the person is the legitimate owner of that account.

Facebook’s system texts a confirmation code to the punter, they receive it on their phone, and type the code into Account Kit’s website. Account Kit verifies the code is correct, and if it is, issues Tinder an authorization token, allowing the login attempt to complete.

It’s a simple, easy, and supposedly secure password-less system: your Tinder account is linked to your phone number, and as long as you can receive texts to that number, you can log into your Tinder account.

However, Appsecure founder Anand Prakash discovered Account Kit didn’t check whether the confirmation code was correct when the toolkit’s software interface – its API – was used in a particular way. Supplying a phone number as a “new_phone_number” parameter in an API call over HTTP skipped the verification code check, and the kit returned a valid “aks” authorization token.

Thus, you could supply anyone’s phone number to Account Kit, and it would return a legit “aks” access token as a cookie in the API’s HTTP response. That’s not great.

Prepare for trouble, and make it double

Now to Tinder. The app’s developers forgot to check the client ID number in the login token from Account Kit, meaning it would accept the aforementioned “aks” cookie as a legit token. Thus it was possible to create an authorization token belonging to a stranger from Account Kit, and then send it to Tinder’s app to log in as that person.
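The before-and-after checks can be sketched like this; the token fields and client ID are invented for illustration and bear no relation to Facebook’s actual token format:

```python
# Invented token structure for illustration only.
TINDER_CLIENT_ID = "tinder-app"

def accept_login_buggy(token):
    # Pre-fix behaviour: any structurally valid "aks" token is trusted.
    return token.get("valid", False)

def accept_login_fixed(token):
    # Post-fix behaviour: the token must also name this application's client ID,
    # so a token minted for some other app (or by an attacker) is rejected.
    return token.get("valid", False) and token.get("client_id") == TINDER_CLIENT_ID

stolen = {"valid": True, "client_id": "another-app", "phone": "+15550001111"}
print(accept_login_buggy(stolen), accept_login_fixed(stolen))  # True False
```

The one extra comparison is what separates “any valid-looking cookie logs you in” from a token that only works where it was issued.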

All you’d need is a victim’s phone number, and bam, you’re in their Tinder profile, reading their saucy messages between hookups or discovering how much of an unloved sad sack they were, and setting up dates.

“He will be logged in to the victim’s Tinder account,” explained Prakash earlier this week, apparently assuming only guys would be interested in this kind of caper. Pssh, as if.

“The attacker basically has full control over the victim’s account now — he can read private chats, full personal information, swipe other user profiles left or right, etc.”

Prakash reported the flaws to Facebook and Tinder, and went public with his findings after the bugs were ironed out of the backend systems and app. Facebook paid out $5,000 in bug bounties, with Tinder kicking in an extra $1,250.

Thankfully, it doesn’t appear the holes were exploited in the wild. Hopefully this episode will encourage some programmers to double-check they’re not making the same blunders in their source code. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/22/tinder_account_kit_vulnerabilities/

uTorrent file-swappers urged to upgrade after PC hijack flaws fixed

Users of uTorrent should grab the latest versions of the popular torrenting tools: serious security bugs, which malicious websites can exploit to commandeer PCs, were squashed this week in the software.

If you’re running a vulnerable Windows build of the pira, er, file-sharing applications while browsing the web, devious JavaScript code on an evil site can connect to your uTorrent app and leverage it to potentially rifle through your downloaded files or run malware.

The flaws were found by Googler Tavis Ormandy: he spotted and reported the vulnerabilities in BitTorrent’s uTorrent Classic and uTorrent Web apps in early December. This month, BitTorrent began emitting new versions of these products for people to install by hand or via the built-in update mechanism. These corrected builds were offered first as beta releases, and in the coming days will be issued as official updates, we’re told.

Look out for version 3.5.3.44352 or higher of the desktop flavor, or version 0.12.0.502 or higher of the Spotify-styled Web build.

The latest classic desktop app looks to be secure. However, Ormandy was skeptical the uTorrent Web client had been fully fixed, believing the software to still be vulnerable to attack. On Wednesday this week, he went public with his findings since he had, by this point, given BitTorrent three months to address their coding cockup.

“The vulnerability is now public because a patch is available, and BitTorrent have already exhausted their 90 days anyway,” Ormandy wrote in his advisory.

“I see no other option for affected users but to stop using uTorrent Web and contact BitTorrent and request a comprehensive patch. We’ve done all we can to give BitTorrent adequate time, information and feedback, and the issue remains unsolved.”

BitTorrent told The Register the flaws should all be resolved this week, including the Web app Ormandy was concerned about.

“All users will be updated with the fix automatically over the following days. The nature of the exploit is such that an attacker could craft a URL that would cause actions to trigger in the client without the user’s consent (e.g. adding a torrent),” said Dave Rees, BitTorrent’s veep of engineering, late on Tuesday.

The security weaknesses are a result of the apps creating a local HTTP RPC server: on port 10000 for uTorrent Classic, or port 19575 for uTorrent Web. A malicious webpage, or anything else running on the PC, could perform a DNS rebinding attack to inject commands into the torrenting apps.

Pre-patch, the desktop app could be abused to allow “any website you visit [to] read and copy every torrent you’ve downloaded,” according to Ormandy. The flaws were more serious in the Web app: the code could be attacked to download an arbitrary .exe into the operating system’s startup folder, effectively ensuring malware runs during the next boot up. ®
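DNS rebinding works because the browser’s request still carries the attacker’s hostname in its Host header; a standard defence for a localhost-only RPC server is therefore to reject any request whose Host header names anything else. A sketch of that check, with illustrative port and host list:

```python
# Hosts a local RPC server should answer to; anything else is suspect.
ALLOWED_HOSTS = {"localhost:10000", "127.0.0.1:10000"}

def accept_request(host_header):
    """Reject requests whose Host header names anything but localhost."""
    return host_header in ALLOWED_HOSTS

print(accept_request("127.0.0.1:10000"))       # legitimate local client
print(accept_request("evil.example:10000"))    # rebound DNS name, rejected
```

Even after rebinding resolves `evil.example` to 127.0.0.1, the browser still sends `Host: evil.example:10000`, so the check above defeats the attack from web pages (though not from other malware already on the PC).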

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/22/utorrent_client_vulnerabilities/

If at first you don’t succeed, you’re likely Intel: Second Spectre microcode fix emitted

For the second time of asking, Intel has issued microcode updates to computer makers that it prays, sorry, says will mitigate the Spectre variant two design flaw impacting generations of x86 CPUs spewed out over previous decades.

Yep, old Chipzilla has turned up at the scene of the metaphorical IT industry earthquake with a dustpan and brush*: the firmware updates are for the sixth generation (Skylake), the seventh generation (Kaby Lake), and the eighth generation (Coffee Lake), the X-series line, and the Xeon Scalable and Xeon D processors.

Since 2 January, when The Register exposed the existence of the Meltdown and then Spectre chip design blunders, Intel and other CPU vendors have been working to mitigate the vulnerabilities.


The 12 January release of the firmware updates for Meltdown and Spectre made PCs and servers less stable, and so vendors including Lenovo, VMware and Red Hat delayed rolling out patches.

“We have now released production microcode updates to our OEM customers and partners,” said Navin Shenoy, veep and GM for mobile client platforms at Intel. “The microcode will be made available in most cases through OEM firmware updates”.

Intel said the firmware is in beta mode for Sandy Bridge, Ivy Bridge, Haswell and Broadwell. The microcode patch update schedules for the chips are here.

Shenoy said there are “multiple mitigation techniques available that may provide protection against these exploits”, including Google-developed binary modification technique Retpoline (white paper here).

According to Google: “Retpoline sequences are a software construct which allow indirect branches to be isolated from speculative execution.

“This may be applied to protect sensitive binaries (such as operating system or hypervisor implementations) from branch target injection attacks against their indirect branches”.

Retpoline is a portmanteau of return and trampoline: it is a trampoline construct built using return operations which “also figuratively ensures that any associated speculative execution will ‘bounce’ endlessly.”
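The construct described above can be sketched in x86-64 assembly. This is an illustrative sketch of a retpoline thunk for an indirect jump through `%r11` (label names are ours, chosen to resemble compiler-generated thunks, not taken from the article): the indirect branch becomes a call/return pair whose return address is overwritten with the real target, while the speculator is trapped in a harmless loop.

```
__x86_indirect_thunk_r11:
    call  .Lset_target       # pushes address of the capture loop below
.Lcapture_spec:              # speculative execution "returns" here...
    pause                    # ...and bounces endlessly, doing no work
    lfence
    jmp   .Lcapture_spec
.Lset_target:
    mov   %r11, (%rsp)       # overwrite return address with real target
    ret                      # architectural execution lands at *%r11
```

Because the CPU's return-stack predictor expects `ret` to go back to `.Lcapture_spec`, any speculation is contained there; the architectural path, meanwhile, follows the rewritten return address to the intended target.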

Intel, which is facing 32 separate lawsuits in the US over Spectre and Meltdown – from both customers and investors – extended its “appreciation” to the rest of the industry for their “ongoing support”.

Some hard-pressed techies dealing with the fallout are not yet convinced by Intel’s latest microcode update – at least, those who expressed doubts on Reddit.

“Don’t patch yet,” was the advice from one, “MS had to revert one of Intel’s fixes already. Best to wait until it’s verified not to cause issues with the OS.”

Another said, “I would imagine… at least hope, that the second time around they’d make sure they get it right. But probably still a good call.”

A third said he was “cautiously optimistic” as it will still be “up to the motherboard manufacturers to provide BIOS updates”.

And therein lies the problem: pessimists are rarely disappointed, but for optimists… it is the hope that gets them in the end. El Reg suspects Linux supremo Linus Torvalds, based on experience, fits into the former bracket where Intel is concerned. ®

* Sorry, Sean Lock, couldn’t resist pinching your joke. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/21/intel_spectre_2_microcode_patch/

Guess who else Spectre is haunting? Yes, it’s AMD. Four class-action CPU flaw lawsuits filed

It’s not just Intel facing a legal firestorm over its handling of the Spectre and Meltdown CPU design flaws – AMD is also staring at a growing stack of class-action complaints related to the chip vulnerabilities.

At least four separate lawsuits have now been filed against the California-based processor slinger, alleging violations ranging from securities fraud to breach of warranty, unfair competition, and negligence. The cases – the Barnes, Speck, Hauck, and Kim complaints – were all submitted to a US district court in San Jose.

The first three suits, which could be merged into each other at some point, seek damages from AMD on behalf of those who bought an AMD processor blighted by the Spectre design vulnerability prior to the flaw’s public disclosure by researchers in January of this year.

While Meltdown primarily affects Intel chips, AMD’s CPUs – like many modern processor architectures including Intel’s – suffer from Spectre-class bugs. The trio of suits cite El Reg‘s exclusive reporting on the semiconductor security cockups.

The lawsuits note that AMD knew of these side-channel attack vulnerabilities before the public disclosure, and yet didn’t issue any mitigations nor warn users of the risks, even as it pushed its products to market.

“Despite its knowledge of the Spectre Defect, AMD continued to sell its processors to unknowing customers at prices much higher than what customers would have paid had they known about the Spectre Defect and its threat to critical security features as well as on the processing speeds of the devices they purchased,” reads the Barnes complaint.

Additionally, the cases note that because Spectre is rooted so deeply into the CPU architecture, a permanent fix will be difficult to roll out and will likely cause a drop in performance.

“Defendant has been unable or unwilling to repair the security vulnerabilities in the subject CPUs or offer Plaintiff and class members a non-defective CPU or reimbursement for the cost of such CPU and the consequential damages arising from the purchase and use of such CPUs,” reads the Speck complaint.

“The software updates or ‘patches’ pushed by AMD onto CPU owners does not appear to provide protection from all the variants of Spectre. At the very least, firmware updates or changes will be required. Even then, these ‘patches’ dramatically degrade CPU performance.”

The Speck, Barnes, and Hauck complaints levy charges against AMD including breach of implied warranty, breach of express warranty, violation of the Magnuson-Moss Warranty Act, negligence, strict liability, unjust enrichment, and violations of unfair competition and consumer protection laws in California and Ohio.


Meanwhile, the Kim complaint, as we reported last month, looks to recover cash on behalf of shareholders who bought AMD stock between February 21, 2017 and January 11, 2018. The suit alleges that AMD misled investors and violated securities laws when it failed to disclose the bugs and, after the flaws were disclosed, downplayed their severity.

As a result, the suit alleges, shareholders took a financial hit when the vulnerabilities were confirmed in AMD chips and its stock price fell 0.99 per cent on January 12, 2018.

“AMD and the Individual Defendants, individually and in concert, directly or indirectly, disseminated or approved the false statements… which they knew or deliberately disregarded were misleading in that they contained misrepresentations and failed to disclose material facts,” the complaint reads.

All four complaints seek a jury trial to determine damages. A spokesperson for AMD could not be reached for immediate comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/21/amd_spectre_lawsuits/

Guys, you’re killing us! LA Times homicide site hacked to mine crypto-coins on netizens’ PCs

A Los Angeles Times website has been silently mining crypto-coins using visitors’ web browsers and PCs for several days – after hackers snuck mining code onto its webpages.

The newspaper’s IT staffers left at least one of the publication’s Amazon Web Services S3 cloud storage buckets wide open for anyone on the internet to freely change, update, and tamper with.

Miscreants seized upon this security blunder to slip CoinHive’s Monero-mining JavaScript code into the LA Times’ interactive county homicide map at homicide.latimes.com.

People visiting this site will inadvertently start crafting alt-coins for whoever injected the code, unless they have antivirus or ad-blockers installed that prevent such scripts from loading. This particular coin-crafting script has remained hidden on the website since February 9.

For now it’s probably a good idea to avoid that website and other LA Times online properties until the bucket is protected – software more malicious than a miner could be uploaded and injected, such as password sniffers and drive-by malware installers.
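A quick-and-dirty way to spot this class of injection is to pull a page’s HTML and scan it for the miner’s tell-tale strings. Below is a minimal sketch; the marker strings are ones commonly associated with CoinHive-style miners, and a thorough check would also fetch and scan externally referenced scripts rather than just the page source.

```python
import urllib.request

# Strings commonly seen in CoinHive-style in-browser miners
MINER_MARKERS = ("coinhive.min.js", "CoinHive.Anonymous", "coin-hive.com")

def miner_markers_in(html: str) -> list:
    """Return any miner-related markers found in the page source."""
    lowered = html.lower()
    return [m for m in MINER_MARKERS if m.lower() in lowered]

def check_page(url: str) -> list:
    """Fetch a page and scan its HTML for miner markers."""
    with urllib.request.urlopen(url) as resp:
        return miner_markers_in(resp.read().decode("utf-8", "replace"))

# Demonstrate on a canned snippet rather than a live fetch:
sample = '<script src="https://example.test/coinhive.min.js"></script>'
print(miner_markers_in(sample))  # ['coinhive.min.js']
```

Ad-blocker and antivirus filter lists do essentially the same thing, which is why users running them were spared from the LA Times miner.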

The scumbags who implanted the hidden crypto-miner were not the only ones to find the newspaper’s world-writable S3 bucket. Others left a warning note, with the filename BugDisclosure.txt, in the vulnerable cloud storage urging technicians to secure the account:

Hello, This is a friendly warning that your Amazon AWS S3 bucket settings are wrong. Anyone can write to this bucket. Please fix this before a bad guy finds it.

The bucket is used to host graphics and other material for the daily paper’s website. It appears an administrator has not only left read permissions open on the silo, but also enabled global write permissions, meaning anyone so inclined would be able to waltz right in and inject code and other files into the paper’s websites.

Naturally, someone soon did just that – the malicious JavaScript code can be found perched atop some innocent code within the murder map.

Off script … The injected evil code found on an LA Times website

We have asked the LA Times for comment. A spokesperson was not immediately available. Infosec researcher Troy Mursch, who has been tracking these kinds of crypto-jacking attacks, also reached out to the LA Times earlier today, and said he had received no response. We also reported the mining activity to CoinHive.

This is not the first case of a biz being exposed by an incorrectly configured S3 storage bin. Security researchers have created a cottage industry out of combing the internet for AWS buckets that have been improperly configured, resulting in the accidental exposure of millions of records and pieces of personal information.

Only this week, experts were warning that it’s not just world-readable silos people need to worry about – world-writable ones allow miscreants to inject malware into websites, encrypt documents and hold them to ransom, and so on.

Hundreds of warning notes, alerting IT admins to insecure world-writable buckets, have recently appeared in S3 silos, courtesy of gray-hat hackers.

Needless to say, if you administer one or more S3 storage buckets, now would be a good time to make sure your access controls (both read and write) are properly configured to keep unauthorized netizens out. Amazon has tools available to prevent this kind of cockup. S3 silos are, by default, not accessible to the public internet. ®
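One concrete check is to audit a bucket’s ACL for grants to S3’s global AllUsers group – that is what makes a bucket world-readable or, as in this case, world-writable. A minimal boto3 sketch follows; the bucket name is hypothetical, the live call needs AWS credentials with permission to read the ACL, and the grant-inspection logic is split out so it can be exercised on a canned response.

```python
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def risky_grants(acl: dict) -> list:
    """Return (permission, grantee-URI) pairs granted to everyone.

    `acl` has the shape returned by S3's GetBucketAcl API:
    {"Owner": {...}, "Grants": [{"Grantee": {...}, "Permission": ...}, ...]}
    """
    return [
        (g["Permission"], g["Grantee"]["URI"])
        for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") == ALL_USERS_URI
    ]

def audit_bucket(bucket: str) -> list:
    """Fetch a bucket's ACL via boto3 and flag world-open grants."""
    import boto3  # requires credentials with s3:GetBucketAcl
    acl = boto3.client("s3").get_bucket_acl(Bucket=bucket)
    return risky_grants(acl)

# e.g. audit_bucket("example-news-assets")  # hypothetical bucket name
```

Any `WRITE` or `WRITE_ACP` entry for AllUsers in the output means exactly the kind of open door the LA Times left ajar.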

Updated to add at 00:56 UTC

The CoinHive code has been stripped from LA Times’ website.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/22/la_times_amazon_aws_s3/