STE WILLIAMS

We translated Intel’s crap attempt to spin its way out of CPU security bug PR nightmare

Analysis In the wake of The Register’s report on Tuesday about the vulnerabilities affecting Intel chips, Chipzilla on Wednesday issued a press release to address the problems disclosed by Google’s security researchers that afternoon.

To help put Intel’s claims into context, we’ve annotated the text. Bold is Intel’s spin.

Intel and other technology companies have been made aware of new security research describing software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from computing devices that are operating as designed.

Translation: When malware steals your stuff, your Intel chip is working as designed. Also, this is why our stock price fell. Please make other stock prices fall, thank you.

By the way, here’s what Linux kernel supremo Linus Torvalds had to say about this: “I think somebody inside of Intel needs to really take a long hard look at their CPUs, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed.

“Is Intel basically saying ‘we are committed to selling you shit forever and ever, and never fixing anything’?”

What Intel described as “software analysis methods,” security researchers describe thus: “Meltdown breaks all security assumptions given by the CPU’s memory isolation capabilities.”

“Meltdown” is the name given to a side-channel attack on memory isolation that affects most Intel chips since 2010, as well as a few Arm cores. Intel’s chips may be “operating as designed” but it is this processor design that’s the issue; based on the research that has been published, the current design is inadequate and insecure.

Meltdown – on Intel CPUs and the Arm Cortex-A75 – allows normal applications to read protected kernel memory, allowing them to steal passwords and other secrets. It is easy to exploit, but easy to patch – and workarounds to kill the vulnerability are available for Windows and Linux, and are already in macOS High Sierra, for Intel parts. There are Linux kernel patches available for the Cortex-A75.

There’s also another security flaw named Spectre that affects, to varying degrees, Intel, AMD, and Arm. Depending on your CPU, Spectre allows normal apps to potentially steal information from other apps, the kernel, or the underlying hypervisor. Spectre is difficult to exploit, but also difficult to fully patch – and is going to be the real stinger from all of this.

Intel believes these exploits do not have the potential to corrupt, modify or delete data.

Translation: Look, over here! Scary words! And we deny them! And you’ll forget that this is about stealing information, not tampering with it.

Funnily enough, no one said the security flaws could be used to directly alter data. Instead of talking about what these exploits don’t do, let’s focus on what they make possible.

On vulnerable systems, Meltdown allows user programs to read from private and sensitive kernel address spaces, including kernel-sharing sandboxes like Docker or Xen in paravirtualization mode. And when you’ve stolen the keys to the kingdom, such as cryptographic secrets, you’ll probably find you can indeed corrupt, modify or delete data, pal.

Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect.

Translation: Pleeeeeease, pleeeeease do not sue us for shipping faulty products or make us recall millions of chips.

Bug. Flaw. Security shortcoming. Design oversight. Blueprint blunder. Bungled architecture. It’s the same difference. Security researchers, describing Meltdown, said: “On the microarchitectural level (e.g., the actual hardware implementation), there is an exploitable security problem.”

The exploits described this week against processors rely on unsafe system designs. Flawed system designs, if you will. Buggy system designs.

Based on the analysis to date, many types of computing devices — with many different vendors’ processors and operating systems — are susceptible to these exploits.

Translation: We weren’t the only one. And if we’re going down, we’re taking every last one of you with us.

Chipzilla doesn’t want you to know that every Intel processor since 1995 that implements out-of-order execution is potentially affected by Meltdown – except Itanium, and the Atom before 2013.

Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively. Intel has begun providing software and firmware updates to mitigate these exploits. Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time.

Translation: Just fucking leave us alone.

While Intel may be working with AMD, Arm, and other system vendors, any performance hit incurred from patching Meltdown is not an issue for these vendors. AMD isn’t affected by the bug, and only the Arm Cortex-A75 is susceptible to the vulnerability.

The performance hit is due to implementing Kernel Page Table Isolation, or KPTI (previously called KAISER) to close the security hole. This pushes the kernel into a separate address space so it cannot be accessed at all by the running processes. The downside is that whenever an application needs the kernel to do something useful, such as read a file or send some network traffic, the CPU has to switch over to the kernel’s virtual address space. This takes time, and doing it a lot will slow down the machine.
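For the curious, a Linux box will tell you whether KPTI is in effect. Here's a minimal sketch, assuming a Linux system; note the sysfs vulnerabilities file only appears on kernels patched after this disclosure, so its absence isn't conclusive:

```python
# Sketch: check whether Kernel Page Table Isolation (KPTI) appears active.
# Assumes Linux; the paths are the conventional ones, but older or
# unpatched kernels may lack one or both indicators.

def kpti_status():
    # Patched kernels advertise a "pti" CPU flag in /proc/cpuinfo.
    try:
        with open("/proc/cpuinfo") as f:
            if " pti " in f.read().replace("\n", " "):
                return "pti flag present (KPTI likely enabled)"
    except OSError:
        pass
    # Kernels patched for Meltdown also expose a per-vulnerability status file.
    try:
        with open("/sys/devices/system/cpu/vulnerabilities/meltdown") as f:
            return f.read().strip()
    except OSError:
        return "unknown (no KPTI indicators found)"

if __name__ == "__main__":
    print(kpti_status())
```

On a patched kernel you'd expect something like "Mitigation: PTI"; on anything else, the function falls through to "unknown" rather than guessing.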

The KPTI overhead for the Cortex-A75 isn’t known at the moment, though it is expected to be minimal. As for Intel systems, we stated in our original report:

The effects are still being benchmarked, however we’re looking at a ballpark figure of five to 30 per cent slowdown, depending on the task and the processor model. More recent Intel chips have features – such as PCID – to reduce the performance hit. Your mileage may vary.

These figures were based on estimates from people working on the KPTI patches, and from simple benchmarks, such as asking a PostgreSQL database to select all records in a data store, and adding up all file sizes of documents on a disk. Some people have already reported noticeable degraded performance from their cloud-hosted virtual machines after applying the Meltdown patches. Some cloud vendors insist there won’t be a problem.

It really boils down to – as we said, and Intel pointed out – your workload. If you just play games on your PC, you will not see a slowdown because the software rarely jumps to the kernel during gameplay. Your game will be mostly talking to the graphics processor.

If all you do is browse Twitter, write emails, and type away in a word processor, you probably won’t notice any difference. If you do a lot of in-memory number crunching, you won’t see much of an impact because again the kernel isn’t getting in the way. If you have PCID support enabled on your hardware and in your kernel, any performance hit should be minimized.

If you hammer the disk, the network, or use software that makes lots of system calls in and out of the kernel, and you’re lacking working PCID support, you will see a performance hit. And it’s a good idea to warn you, right?
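That workload dependence is easy to see for yourself with a crude microbenchmark: time a loop that enters the kernel on every iteration against one that stays entirely in user space. A rough sketch, not a rigorous benchmark; the absolute times and the ratio between them will vary with machine, kernel, and patch state:

```python
import os
import time

def bench(fn, n=200_000):
    """Return seconds taken to call fn() n times."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# Syscall-heavy loop: os.getpid() typically traps into the kernel on
# every call, so each iteration pays the user/kernel transition cost
# that KPTI makes more expensive.
syscall_time = bench(os.getpid)

# In-memory loop: pure userland arithmetic, never enters the kernel,
# so KPTI barely touches it.
x = 0
def compute():
    global x
    x = (x + 1) & 0xFFFF

compute_time = bench(compute)

print(f"syscall loop: {syscall_time:.3f}s, userland loop: {compute_time:.3f}s")
```

Run the same script before and after applying the KPTI patches and the syscall loop is where any regression will show up; the userland loop is your control.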

It’s a given for this particular issue that any slowdown is dependent upon the kind of work the affected system is being asked to do. Gamers will maintain their frame rates, but that’s not what this is about. It’s about enterprise workloads and data centers. With reports of SQL database slowdowns of up to 20 or so per cent, it seems premature to say the impact should not be significant. If a company’s AWS, Microsoft Azure, or Google Cloud bill ends up being, say, three, five or eight per cent higher as a consequence of prolonged compute times, that’s significant.

No doubt the patches will be benchmarked, and we’ll write about them.

Intel is committed to the industry best practice of responsible disclosure of potential security issues, which is why Intel and other vendors had planned to disclose this issue next week when more software and firmware updates will be available. However, Intel is making this statement today because of the current inaccurate media reports.

Translation: We were gonna say something next week, but those bastards at The Register blew the lid on it early so, uh, so, fake news! Fake news! NO PUPPET!

The preferred phrase at present is “coordinated disclosure.” “Responsible disclosure” suggests the media and security researchers have been irresponsible for reporting on this issue before Intel was ready to go public. Once we get into assigning blame, that invites terms like “responsible microarchitecture design” or “responsible sales of processors known to contain vulnerabilities” or “responsible handling of security disclosures made last June.”

Also, it’s not clear which media reports are inaccurate, since Intel is not addressing anyone specifically.

Check with your operating system vendor or system manufacturer and apply any available updates as soon as they are available. Following good security practices that protect against malware in general will also help protect against possible exploitation until updates can be applied.

Translation: Don’t click on any bad links or emails, you rubes.

Thanks for that. And remember to lock your doors at night.

Intel believes its products are the most secure in the world and that, with the support of its partners, the current solutions to this issue provide the best possible security for its customers.

Translation: Who else are you gonna buy this stuff from?

One step below security by obscurity, there’s security by belief. Demand more. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/04/intels_spin_the_registers_annotations/

Attention, vSphere VDP backup admins: There is a little remote root hole you need to patch…

VMware on Tuesday published a security advisory for its vSphere Data Protection (VDP) backup and recovery product.

The virtualization giant identified three vulnerabilities, one of which it deems critical, with the two others categorized as important.

The issues affect VDP 5.x, 6.0.x, and 6.1.x.

CVE-2017-15548 is the critical flaw, which the biz described as a remote authentication bypass. If exploited, it could allow a remote unauthenticated attacker to bypass authentication protections, and gain root control of the server.

The second flaw, CVE-2017-15549, could allow arbitrary file uploads to an affected system from a remote authenticated user with low privileges.

The third, CVE-2017-15550, is a path traversal vulnerability. According to VMware, a remote authenticated user with low privileges could use the flaw to access files on an affected system in the context of the vulnerable app.
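VMware hasn’t published details, but path traversal bugs of this class typically come from joining attacker-controlled names onto a base directory without canonicalizing the result. A generic sketch of the standard defence follows; the paths and function name are illustrative only and have nothing to do with VDP’s actual code:

```python
import os

def safe_resolve(base_dir, user_path):
    """Resolve user_path under base_dir, rejecting traversal attempts
    like '../../etc/passwd'. Raises ValueError on escape attempts."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, user_path))
    # After canonicalization, the target must still sit inside base_dir.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path escapes {base_dir}: {user_path!r}")
    return target

# A name that stays inside the base resolves normally;
# '../' sequences (or absolute paths) that climb out are rejected:
# safe_resolve("/var/backups", "../../etc/passwd")  -> ValueError
```

The key point is canonicalizing with realpath before the containment check; a naive string comparison on the raw input misses `..` components and symlinks.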

Further details about the vulnerabilities were not immediately available.

VMware has published patches to fix things, with version 6.1.6 as the update path for 6.1 users and version 6.0.7 for those using 6.0.x or 5.x. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/vmware_vsphere_vdp/

In Mobile, It’s Back to the Future

The mobile industry keeps pushing forward while overlooking some security concerns of the past.

The mobile revolution has advanced so fast that we might have missed some critical steps on the way. For example, ever notice how many key elements in this dynamic field seem highly contradictory?

First, of course, there’s the work-play equation: every emerging mobile innovation is specifically designed to be consumer-friendly, yet many are undeniably fundamental to business productivity, which mandates different priorities. Next, most users know just enough about mobile technologies to embrace and depend on new tools as they arrive, but not nearly enough to keep those practices secure. And perhaps most importantly, the mobile industry is constantly pushing us forward — new form factors, new platforms, new channels, new apps — but the challenge to true progress might be some security concerns from the past.

All that helps explain why the near future is such a mix of potential and peril. Sure, the endless stream of new technologies will keep coming — think 5G, or the Internet of Things, and surely others we don’t know about yet. Each innovation will bring with it greater access, lower costs, enhanced convenience, and a bunch of other benefits. But at the same time, without some remedial action, we’ll leave ourselves increasingly vulnerable to hacks, attacks, and outright theft.

So, what can we see coming down the pike that might bring dangers later on? More to the point, what should users know that they don’t?

Let’s start with SS7. Officially, this is a telecom protocol defined by the International Telecommunication Union as a way to offload public switched telephone network data traffic congestion onto a wireless or wireline digital broadband network. Because that likely doesn’t mean much to folks not working in telecommunications, here’s just a sampling of the areas in which it’s used: basic call setup, management, and teardown; personal communications and other wireless services, wireless roaming, and mobile subscriber authentication; local number portability; toll-free services; enhanced features such as call forwarding and three-way calling; and optimal security. In other words, even if we don’t think about it, we all use it every day.

Now let’s turn to 5G. Think your current download speed is pretty good? This pending standard will make it seem tortoise-like. It’s the next big thing, succeeding the International Mobile Telecommunications-Advanced Standard, or 4G (and sometimes 4.5G). The benefits are undeniable: Data rates of 100 Mbps in metropolitan areas, 1 Gbps simultaneously to workers on the same office floor, hundreds of thousands of simultaneous connections for wireless sensors, and much more. It will alter our reception and appreciation of everything from cable TV to physical objects that get an IoT hookup.

Finally, consider Diameter. This is the authentication, authorization, and accounting protocol behind modern mobile networks, and its traffic is growing fast: by 2021, it’s expected to generate 595 million messages per second.

And now for the bad news.

It was reported this summer that some cybercriminals were draining bank accounts around Germany. They didn’t actually hack the banks — they got a customer’s username, password, and telephone number, then used SS7 vulnerabilities to reroute the two-factor codes that serve as the ultimate defense.

Remember, the whole point of SS7 is carrier interoperability. Without it, we couldn’t get a text or call from anyone outside the network (or the country). The basic belief is that this process can’t happen — seamlessly, instantly, easily — without a certain level of trust. For example, carriers need to identify the location of a device specifically to route the call to the nearest cell tower. If scammers can spoof a carrier to ask the same question, they’ll get the same answer — and enable all kinds of fraud.

But here’s the worst part: this is not new. Security specialists and others have been saying for years that SS7 has fundamental security issues — and Diameter has them too. So, in the future we’ll have not just mobile devices but every corner of IoT (cars, kitchens, utilities) running on 5G, SS7, and Diameter. It will be high speed and highly insecure.

There’s some action on the legislative front: Arizona’s HB 2365 law seeks to streamline the permitting process for faster networks (as does pending legislation in other states), while the US Senate is considering action to accelerate 5G implementation. But security is a more difficult issue.

With new mobile networks relying on network protocols littered with vulnerabilities, multiple layers of filtering can help secure SS7. But ultimately, every organization in the chain needs constant monitoring and assessment, not only to identify vulnerabilities but also to stay one step ahead of emerging zero-day exploits. That will require innovative technologies such as artificial intelligence and machine learning, to be sure, but also solid reverse engineering and network visibility coupled with human analysis. Some of this might sound old-fashioned — but in the future, that’s just what we need.

Related Content:

Michael Downs has been assisting telecoms and mobile providers address the business impact from cybersecurity risks for nearly 20 years. At Positive Technologies, he works side by side with the penetration testing team and research specialists to help mobile network operators … View Full Bio

Article source: https://www.darkreading.com/endpoint/in-mobile-its-back-to-the-future/a/d-id/1330695?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Intel Processor Security Flaw Prompts Kernel Makeovers in Linux, Windows

An as-yet undisclosed design flaw in Intel processors has OS programmers working on kernel updates that reportedly could slow performance.

A design flaw in Intel microprocessors has Linux and Microsoft Windows developers reworking their kernels to defend against exploitation of the security bug.

Details of the flaw have not yet been made public, and Intel and Microsoft have remained mum about the chip design flaw, which was first reported by The Register this week. The report said Microsoft is expected to issue updates for Windows in next week’s Patch Tuesday batch, while Linux developers have been openly working on fixes online. According to the report, the OS updates ultimately could slow performance of the systems, in some cases by 5% to 30%. Newer Intel processors aren’t as susceptible to a performance impact, the report said.

Renowned security expert Dan Kaminsky says without the details of the flaw out yet, it doesn’t make sense to theorize about its ramifications. “I think we shouldn’t speculate until the bug is disclosed,” Kaminsky says. “Clearly, the notable part of this is whatever it is can’t be addressed in microcode.”

Intel had not responded to press inquiries as of this posting, and Microsoft declined to comment.

The flaw – which reportedly affects processors in millions of computers – could allow applications, including JavaScript in a Web browser, to read protected areas of the kernel memory. 

The kernel is designed to separate “userland” from sensitive kernel areas “so that userland programs can’t take over from the kernel itself and subvert security, for example by launching malware, stealing data, snooping on network traffic and messing with the hardware,” wrote Sophos security analyst Paul Ducklin in a post today.

The new Linux patch will isolate the kernel memory from the user process via so-called Kernel Page Table Isolation (KPTI).

“This security fix is especially relevant for multi-user computers, such as servers running several virtual machines, where individual users or guest operating systems could use this trick to ‘reach out’ to other parts of the system, such as the host operating system, or other guests on the same physical server,” Ducklin explained.

The risk of attack on appliances or endpoints such as a laptop appears to be low, he said, because an attacker would have to run code on the targeted machine to exploit it.

“On shared computers such as multiuser build servers or hosting services that run several different customers’ virtual machines on the same physical hardware, the risks are much greater: the host kernel is there to keep different users apart, not merely to keep different programs run by one user apart,” Ducklin said.

Intel has been under the security microscope several times in the past year, starting with its May 2017 disclosure of a critical privilege-escalation bug in its Active Management Technology (AMT) firmware used in many Intel chips that affected AMT firmware versions dating back to 2010. It’s up to hardware OEMs to update their platforms with Intel’s fix.

The AMT vulnerability, discovered by security vendor Embedi, gives attackers a way to access the AMT functionality without the need to authenticate to it first. The flaw allows an attacker to remotely delete or reinstall the operating system on a vulnerable system, or control the mouse and keyboard, for instance. 

Last fall, Intel patched a vulnerability in its microprocessors  that could be used by an attacker to burrow deep inside a machine and control processes and access data – even when a laptop, workstation, or server is powered down. Researchers from Positive Technologies first discovered the flaw, a stack buffer overflow bug in the Intel Management Engine (ME) 11 system that’s found in most Intel chips shipped since 2015. ME, which contains its own operating system, is a system efficiency feature that runs during startup and while the computer is on or asleep, and handles much of the communications between the processor and external devices.

And now the Intel design flaw, the details of which remain a mystery. “This flaw has existed for years and has been documented about for months, at least, so there is no need to panic; nevertheless, we recommend that you keep your eyes out for patches for the operating systems you use, probably in the course of January 2018, and that you apply them as soon as you can,” Sophos’ Ducklin advised.

The flaw also reportedly affects cloud services such as Amazon EC2, Microsoft Azure, and Google Compute Engine. “Amazon just sent a notice about a major security update and EC2 is scheduled to reboot this Friday,” said Chris Morales, head of security analytics at Vectra. “If the Azure and Amazon reboots are related to the Intel flaw, it would demonstrate how far reaching the impact is. A phrase like ‘the cloud is rebooting’ is not something that anyone has had to say before, and it reminds me of the kind of far reaching impact that Y2K was feared to have had.”

Related Content:

 

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio

Article source: https://www.darkreading.com/endpoint/intel-processor-security-flaw-prompts-kernel-makeovers-in-linux-windows/d/d-id/1330738?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Open Source Components, Code Volume Drag Down Web App Security

The number of new Web application vulnerabilities published last year was 212% greater than the number disclosed in 2016, Imperva says in a new report this week.

If there’s something of a déjà vu-like quality to vendor and analyst reports summing up the state of Web application security these days, it’s because they all inevitably arrive at the same conclusion: Web apps are becoming more insecure, not less.

The latest reminder of that trend is a report from Imperva released Wednesday showing a 212% increase in the number of new Web application vulnerabilities disclosed in 2017 compared to the year before. Using data gathered from multiple sources including vulnerability databases, forums, newsletters, and social media, Imperva tallied a total of 14,082 new vulnerabilities in Web applications last year compared to 6,615 in 2016.

The vendor found that more than half of Web applications have an exploit available publicly to hackers, meaning attacks against the apps are possible at any time. If that was not bad enough, some 36% of Web application vulnerabilities did not have a software patch, upgrade, or other available workaround. “Web application vulnerabilities are always on the rise, and 2017 was a record year,” says Nadav Avital, security research team leader at Imperva. “Organizations should plan how to deal with the increase in vulnerabilities through carefully planned maintenance and patching programs or through external security solutions.” 

Yet again, cross-site scripting (XSS) errors were the most common Web application vulnerability, accounting for 1,863 of the new vulnerabilities in Imperva’s report, compared to just 630 the previous year. XSS continues to be one of the most basic Web application vulnerabilities and is very easy to test for and find, Avital says. “Many of the products that suffer from XSS vulnerabilities are open source which makes it even easier to find the XSS vulnerabilities.”
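Defending against the bulk of these XSS bugs is equally basic: escape untrusted input before reflecting it into HTML. A minimal illustration using Python’s standard library; the HTML fragment is a made-up example, not code from any product mentioned here:

```python
import html

def render_comment(user_input):
    """Embed untrusted text in an HTML fragment, escaping the
    characters (<, >, &, quotes) that would let it become markup."""
    return f'<p class="comment">{html.escape(user_input)}</p>'

# A classic XSS payload comes out as inert text rather than a script tag:
payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# → <p class="comment">&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

The same rule applies in any templating system: escape at output time, in the context (HTML body, attribute, URL, JavaScript) where the untrusted data lands.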

Vulnerable Web applications have been a major cause of data breaches in recent years. Last year’s monster breach at Equifax that exposed personal data on more than 140 million individuals resulted from a Web application flaw that gave intruders a way inside the credit reporting giant’s network. Botnet-enabled attacks on vulnerable Web applications in fact accounted for more breaches (571) than any other vector in Verizon’s 2017 Data Breach Investigations Report. In contrast, cyber espionage, the second most common cause, accounted for just 289 breaches.

Security experts point to a handful of causes for the prevailing state of Web application security.

Chris Wysopal, CTO of CA Veracode, says one reason is the increasing use by developers of open source components to build applications. Often these components have bugs that then get inherited by the application that is built with them. Even with a process known as software composition analysis, checking for and replacing known vulnerabilities in open source components, there is still the issue of vulnerabilities being discovered after the application is deployed, Wysopal says.

“For example, CA Veracode’s State of Software Security Report 2017 found that 88% of Java applications had at least one flaw in a component,” he says. The CA Veracode report found that applications produced internally and sourced externally have gotten worse when measured against the OWASP Top Ten list of vulnerabilities, he notes.

The sheer volume of Web applications being produced these days is another issue. “Modern software development frameworks have had a highly positive impact on Web application vulnerabilities over the years,” says Jeremiah Grossman, chief of security strategy at SentinelOne. “[But] the bottom line is there’s an increasing amount of Web application code going into production.”

“Similar to software bugs in general, more code equals more vulnerabilities. What we need to focus on is how to make sure a breach doesn’t happen due to exploiting just a single vulnerability,” he says.

The growing adoption of DevOps, agile development, and CI/CD practices at many organizations has been a factor as well. “If development teams integrate security testing as an automated process as part of their CI/CD pipeline, then there should be an improvement in security,” notes Wysopal. But if security remains outside of the continuous integration and continuous delivery pipeline, more applications are likely to be released without proper testing or without the proper fixes being applied to code before release, he says.

“DevOps has provided both significant upsides and downsides” with regard to Web application security, agrees Grossman. “On the upside, the rapid and frequent release cycles of DevOps provide more windows of opportunity to resolve identified vulnerabilities.”

DevOps processes also shorten the time available to security teams to find and fix flaws in applications before they make it to production, Grossman says.

Related Content:

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/application-security/open-source-components-code-volume-drag-down-web-app-security-/d/d-id/1330744?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Critical Microprocessor Flaws Affect Nearly Every Machine

Researchers release details of ‘Meltdown’ and ‘Spectre’ attacks that allow programs to steal sensitive data.

After a day of speculation over a reported design flaw in Intel processors, security researchers came clean late today with full disclosure of a new and widespread class of attack that affects most computers worldwide.

Researchers from Google’s Project Zero team, Cyberus Technology, Graz University of Technology, the University of Pennsylvania and the University of Maryland, Rambus, and the University of Adelaide and Data61 discovered critical flaws in a method used by most modern processors for performance optimization that could allow an attacker to read sensitive system memory, which could contain passwords, encryption keys, and emails, for example. The vulnerabilities affect CPUs from Intel, AMD, and ARM, according to Google.

The two attacks, called Meltdown and Spectre, can be executed on desktop machines, laptops, mobile devices, and in cloud environments. That means an attacker could steal information from other cloud customers’ systems, for example.

Meltdown allows user applications to pilfer information from the operating system memory, as well as secrets of other programs. “If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure,” the researchers wrote in an FAQ about the attacks. “Luckily, there are software patches against Meltdown,” referring to Linux, Windows, and OS X updates (not all of which are yet available, however).

Most Intel processors since 1995 are affected by Meltdown, with the exception of Intel Itanium and Intel Atom prior to 2013. Only Intel processors are confirmed to be affected by it so far.

Spectre forces an application to share its secrets, and is a more difficult attack to pull off. According to the researchers, “safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre.” It affects Intel, AMD, and ARM processors on desktops, laptops, cloud servers, and smartphones.

“Both attacks use side channels to obtain the information from the accessed memory location,” the researchers said.

The researchers say they don’t know if the exploits have been used in the wild. 

Google senior security engineer Matt Linton and technical program manager Pat Parseghian said in a blog post today that Google went public with the vulns in advance of the planned January 9 coordinated disclosure date for all affected vendors “because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation.”

Intel earlier in the day issued a statement, noting that it “believes these exploits do not have the potential to corrupt, modify or delete data.” The chip vendor said it has been providing software and firmware updates to mitigate the attacks, and reports of performance degradation won’t “be significant” for the average user and “will be mitigated over time.”

ARM issued a statement as well, noting that “The majority of Arm processors are not impacted by any variation of this side-channel speculation mechanism.”

Amazon also issued a statement: “This is a vulnerability that has existed for more than 20 years in modern processor architectures like Intel, AMD, and ARM across servers, desktops, and mobile devices. All but a small single-digit percentage of instances across the Amazon EC2 fleet are already protected. The remaining ones will be completed in the next several hours, with associated instance maintenance notifications,” Amazon said.

But fair warning: those updates protect AWS’s underlying infrastructure only. Full protection requires the operating systems also be patched, Amazon noted.  

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing and Secure Enterprise.

Article source: https://www.darkreading.com/endpoint/critical-microprocessor-flaws-affect-nearly-every-machine/d/d-id/1330745?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sensor data can be used to guess your PIN, unlock your phone

Turns out that those sensors in your smartphone that do all kinds of cool, magical things – give you directions, find your friends, let your Uber or Lyft driver find you, count the steps in your workout, let you know where traffic is bad, and a host of other conveniences – have a not-so-cool downside.

According to researchers from the Nanyang Technological University (NTU) in Singapore, malicious apps on your phone could use the datastream from those sensors to build up information on how the phone is used and ultimately guess the phone’s PIN.

The researchers’ algorithm was able to guess a PIN with a 99.5% accuracy on the first try using a list of the top 50 most common PINs, although the success rate went down to 83.7% when it tried to guess all 10,000 possible combinations of four-digit PINs within 20 tries.

There’s no barrier to collecting the data because those sensors are what’s known as “zero-permission” – essentially, an app doesn’t need a user’s consent to access data from them.

Which, on the surface, might not seem like much of a threat. The data collected by such sensors are labeled rather dismissively – at least for security purposes – as “non-critical.” Why should we care if an app has access to a device’s accelerometer, gyroscope, magnetometer, proximity sensor, barometer or ambient light sensor?

They don’t store passwords, Social Security numbers, credit card numbers or other personally identifiable information (PII). They just measure things like whether you’re moving and how fast, where you are, what your altitude is and whether you’re looking at the phone or have it next to your ear.

But they are yet another example of how data from seemingly disparate and unrelated sources can be merged to provide information that is much more invasive than you thought. In this case, enough to guess your PIN and invade your phone, at which point your critical data is at risk.

The NTU researchers are not the first to demonstrate this – there are now multiple examples of how much those sensors can give away. A couple of weeks ago Naked Security reported on a team of researchers from Princeton who demonstrated that they could track the location of smartphone users even if they had their GPS (“location services”) turned off.

In April, researchers from the University of Newcastle in the UK published a paper in the International Journal of Information Security, in which they described a “JavaScript-based side channel attack” that allowed them to guess the PINs on Android devices.

In this attack, once the user visits a website controlled by an attacker, the JavaScript code embedded in the web page starts listening to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams, it infers the user’s PIN using an artificial neural network. Based on a test set of fifty 4-digit PINs, PINlogger.js is able to correctly identify PINs in the first attempt with a success rate of 74% which increases to 86% and 94% in the second and third attempts, respectively.

The NTU researchers did even better (also attacking Android devices), combining data “leakage from a pool of zero-permission sensors to reconstruct a user’s secret PIN.”

By harvesting the power of machine learning algorithms, we show a practical attack on the full four-digit PIN space. Able to classify all 10,000 PIN combinations, results show up to 83.7% success within 20 tries in a single user setting. Latest previous work demonstrated 74% success on a reduced space of 50 chosen PINs, where we report 99.5% success with a single try in a similar setting.

The researchers also note the obvious – that since Android has 81.7% of the smartphone market, this amounts to a massive attack surface.

They acknowledge that the flaws of zero-permission sensors have been noted in at least a dozen other publications, but say defeating the PIN code is more complicated, “because the exploited movements are less pronounced and hence, harder to classify correctly.”

But the success of their research, along with expected improvements, is bad news for smartphone security – even for users who use more than four digits for a PIN.

The classification algorithms are able to easily weigh the importance of each sensor in PIN recovery and allow high recovery success. Since the methodology works on a single digit, it is scalable to PINs longer than 4-digits.

There have been some moves made toward at least partially closing the door. Bleeping Computer noted last April that both Mozilla and Apple updated the Firefox and Safari browsers in early 2016 to allow JavaScript access to motion and orientation sensors only to top-level documents and same origin iframes, but that such restrictions don’t apply yet with Google Chrome.

The much more fundamental fix, of course, has to involve the OS. There is more than enough research out there now to demonstrate that there should no longer be any such thing as a zero-permission sensor. As is regularly said, if the good guys are doing it, the bad guys are too.

At a minimum, every app should require an affirmative consent from users. Meanwhile, this and other research should serve as yet another warning that it’s asking for trouble to get your apps anywhere other than a reliable store that vets them for you. If you get fooled by a malicious one, it’s like inviting hackers through an open door.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/LsbvpSGlOYY/

Your Nigerian Prince is a 67 year old from Louisiana

Somewhere in Nigeria, Sunday was a sad day, for on that day, a prince lost his valet.

Well, a “Nigerian Prince” lost what’s better known as his mule, depending on whether charges stick. Police in the Louisiana town of Slidell in the US announced on Sunday that they’d arrested 67-year-old Michael Neu for 269 counts of wire fraud and money laundering.

On Tuesday, Slidell detectives told local news outlet NOLA/The Times-Picayune that the scams, which also included romance scams, raked in more than $250,000. The detectives said that so far, they’ve tracked transactions involving victims in 17 states, including some in Louisiana.

Neu reportedly admitted to being a middle man for the scammers. He was arrested on 28 November and, as of Tuesday, was in custody pending $30,000 bond. NOLA reports that Slidell police had asked Neu to come in for a chat after they got a request in May 2016 from police in Dodge City, Kansas, to check him out. That request had been sparked by a complaint from somebody in Dodge City who said they’d been ripped off in an internet scam, having sent thousands of dollars to Neu.

Detectives Michael Giardina and Nick Burtanog of the Slidell Police Department’s Financial Crimes Unit didn’t even know if Michael Neu was real, they told NOLA.

He was. When he came in for a conversation with the Slidell detectives, Neu reportedly told them that years earlier he’d been victimized in a romance scheme himself, by someone he met on Facebook who called themselves “Maria Mendez.” At some point, he switched from being a victim to being a middleman in the scams, NOLA reports.

Neu himself wasn’t the purported Nigerian prince, per se. The investigation picked up the “Nigerian” tag at some point, police said, though it started out as a romance scam. Neu wasn’t personally offering to share a percentage of millions of princely dollars in exchange for illegally transferring the loot out of the country.

Nonetheless, police said that some of the money obtained by Neu did, in fact, wind up being wired to co-conspirators in Nigeria.

You likely know the drill with these scams, which are also referred to as Nigerian letter or 419 scams: there’s typically an email from a purported Nigerian prince who claims something like he’s the named beneficiary in a will, where he stands to inherit an estate worth a million or more, or that you’re the beneficiary.

The thing is, he needs personal financial information from the victim – who’s obviously not averse to ripping off the Nigerian government and hence, in FBI parlance, shows a “propensity for larceny” – to prove their trustworthiness. Or if you’re the alleged beneficiary, you then have to prove you’re you… which typically involves sending blank letterhead stationery, bank name and account numbers, and other identifying information using a fax number given in the letter or return email address provided in the message.

And then come all those darn fees. There’s just so much red tape involved in illegally transferring princely piles of plunderage. Or in helping the newfound love of your life out of debt, for that matter. The prince has to draw up an affidavit, pay the fees for the checks so they can clear, cover the contract tax and stamp duty, grease some palms, or, say, get human body parts to satisfy the voodoo part of the deal.

Crazy, huh? Who would fall for such a scam?

Well, let’s see: That would be A LOT OF PEOPLE.

The FBI’s Internet Crime Complaint Center (IC3), which monitors internet scams, said in its 2016 Internet Crime Report that it had received a total of 298,728 complaints with reported losses in excess of $1.3 billion for the year. 419 scams – so-called because the number “419” refers to the section of the Nigerian Criminal Code dealing with fraud – were one of the top three crime types reported by victims.

Those victims can be practically any age, though the older the victims get, the more money the scammers tend to make off with. People in their 30s and those older than 60 each make up about 20% of the total number of victims, according to the IC3.

At any rate, back to Neu. Slidell police said that they arrested him after an “extensive” 18-month investigation that’s still ongoing. It’s a tough one, they said, given that many of the leads point to people who live outside the country.

Slidell Police Chief Randy Fandal warned us all that the arrest is yet another example of how something that sounds too princely perfect likely isn’t for real:

If it sounds too good to be true, it probably is. Never give out personal information over the phone, through email, cash checks for other individuals, or wire large amounts of money to someone you don’t know. 99.9% of the time, it’s a scam.

Don’t send that money!

Here are some tips on how to avoid Nigerian letter or 419 scams:

  • If it’s too good to be true, it likely is. Don’t believe the promise of large sums of money in exchange for your cooperation.
  • Guard your data carefully. Don’t give out your personal information online.
  • If you receive a letter or email from Nigeria asking you to send personal or banking information, don’t reply. Report the letter or message to your local FBI office if you’re in the US, Action Fraud if you’re in the UK, or similar body if you’re elsewhere in the world.
  • If you know someone who is corresponding in one of these schemes, encourage that person to report it as soon as possible.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RvukDN16Hig/

Ad scripts track users via browser password managers

Researchers have spotted a sly new technique adopted by advertising companies to track web users that can’t be stopped by private browsing, clearing cookies or even changing devices.

The method, discovered by Princeton’s Center for Information Technology Policy, exploits the fact that many web users rely on the login managers built into browsers to autofill login details (email address and password) when they visit a familiar website.

Normally this is an innocent process, but on a small number of sites that have embedded either one of two tracking scripts – AdThink and OnAudience – the user is fed a second invisible login screen on a subsequent page that is autofilled by most browser password managers without the user realising this is happening.

At this point, the scripts capture a hashed version of the user’s email address, which is sent to one or more remote servers run by the advertising companies including, in the case of AdThink, large data broker Acxiom.

But what use is a hashed and therefore unusable email address? Quite simply:

Email addresses are unique and persistent, and thus the hash of an email address is an excellent tracking identifier.

Email addresses don’t change often or at all, which means:

The hash of an email address can be used to connect the pieces of an online profile scattered across different browsers, devices, and mobile apps.

The researchers speculate that tracking users via an email address identifier might even allow advertisers to join different browsing histories together even after cookies have been cleared.

This means that changing browsers, or devices, or even deleting cookies after every session would offer little protection as long as a user’s email address remains the same and is used regularly enough.
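The mechanics here are nothing exotic: any stable hash of the email address will do. As a rough sketch (in Python, with a made-up `tracking_id()` helper – the actual scripts may hash differently):

```python
import hashlib

def tracking_id(email: str) -> str:
    """Derive a stable identifier from an email address.

    Trackers don't need the address itself: because the input is
    unique and rarely changes, its hash is just as good a key.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same address yields the same identifier on any browser or device,
# even after cookies are cleared - which is exactly what makes it useful
# for stitching profiles together.
laptop_id = tracking_id("alice@example.com")
phone_id = tracking_id("Alice@Example.com ")  # different device, sloppier input
print(laptop_id == phone_id)  # True - both normalize to one identifier
```

Note that the hash is one-way only in the cryptographic sense; as an identifier it is every bit as linkable as the address itself.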

It sounds alarming, so let’s mention the technique’s limitations.

The first is simply that it is not common, with the two scripts being found on only 1,110 of the Alexa top one million websites.

The user must also be using a browser’s integrated login manager rather than a third-party platform such as LastPass, 1Password or Dashlane (which don’t autofill invisible forms), and importantly, to have entered their login information on the domain – just visiting isn’t enough.

It’s likely that browser script blockers such as Ghostery, NoScript or Privacy Badger would make short work of the scripts assuming they have been updated to add them to their list of invisible trackers.

The problem is that there will be plenty of internet users who continue to use browser login managers out of convenience, and who don’t run script blockers.

As the researchers point out, the underlying fault here is the Same Origin Policy model of web trust, in which publishers either completely trust or completely mistrust third parties.

When a script is embedded on a site by an ad partner the easiest option is simply to trust it because not doing so might limit its functions – even if that third-party script is capturing hashes of email addresses entered by customers.

For users, the latest discovery only adds to the strong sense that ad tracking is running out of control in ways that can be extremely hard to keep tabs on. Tracking can be mitigated to some extent but only if users understand such a thing is necessary in the first place.

Today, the default position is that users should take web tracking systems on trust. The discovery of AdThink and OnAudience suggests they need to become far warier.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wzJcze1NFmw/

F**CKWIT, aka KAISER, aka KPTI – Intel CPU flaw needs low-level OS patches

In the near future – in all likelihood, later this month – at least Windows and Linux will get security updates that change the way those operating systems manage memory on Intel processors.

There’s a lot of interest, excitement even, about these changes: they work at a very low level and are likely to affect performance.

The slowdown will depend on many factors, but one report suggests that database servers running on affected hardware might suffer a performance hit around 20%.

“Affected hardware” seems to include most Intel CPUs released in recent years; AMD processors, apparently, have different internals and are immune to this problem.

So, what’s going on here?

On Linux, the forthcoming patches are known colloquially as KPTI, short for Kernel Page Table Isolation, though they have jokingly been referred to along the way as both KAISER and F**CKWIT.

The latter is short for Forcefully Unmap Complete Kernel With Interrupt Trampolines; the former for Kernel Address Isolation to have Side-channels Efficiently Removed.

To explain.

Inside most modern operating systems, you’ll find a privileged core, known as the kernel, that manages everything else: it starts and stops user programs; it enforces security settings; it manages memory so that one program can’t clobber another; it controls access to the underlying hardware such as USB drives and network cards; it rules and regulates the roost.

Everything else – what we glibly called “user programs” above – runs in what’s called userland, where programs can interact with each other, but only by agreement.

If one program could casually read (or, worse still, modify) any other program’s data, or interfere with its operation, that would be a serious security problem; it would be even worse if a userland program could get access to the kernel’s data, because that would interfere with the security and integrity of the entire computer.

One job of the kernel, therefore, is to keep userland and the kernel carefully apart, so that userland programs can’t take over from the kernel itself and subvert security, for example by launching malware, stealing data, snooping on network traffic and messing with the hardware.

The CPU itself provides hardware support for this sort of separation: the x86 and x64 processors provide what are known as privilege levels, implemented and enforced by the chip itself, that can be used to segregate the kernel from the user programs it launches.

Intel calls these privilege levels rings, of which there are four; most operating systems use two of them: Ring 0 (most privileged) for the kernel, and Ring 3 (least privileged) for userland.

Loosely speaking, processes in Ring 0 can take control over processes and resources in higher-numbered rings, but not the other way around.

In theory, then, the processor itself blocks Ring 3 programs from reading Ring 0 memory, thus proactively preventing userland programs from peeking into the kernel’s address space, which could leak critical details about the system itself, about other programs, or about other people’s data.

In technical terms, a sequence of machine code instructions like this, running in userland, should be blocked at step 1:

mov rax, [kernelmemory]   ; this will get blocked - the memory is protected
mov rbx, [usermemory]     ; this is allowed - the memory is "yours"

Likewise, swapping the instructions, this sequence would be blocked at step 2:

mov rbx, [usermemory]     ; this is allowed - the memory is "yours"
mov rax, [kernelmemory]   ; this will get blocked - the memory is protected
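In other words, the privilege check should fire whichever order the loads appear in. As a rough sketch of that behaviour (in Python, with a made-up `load()` helper standing in for the CPU’s hardware checks):

```python
# Toy model of the privilege check described above: a pretend "MMU"
# that refuses userland reads of kernel addresses, regardless of order.
KERNEL_BASE = 0x8000_0000

memory = {0x8000_0010: 42,   # kernel-only data
          0x0000_1000: 7}    # ordinary user data

def load(address, ring=3):
    """Read memory the way a Ring 3 (userland) process would."""
    if ring != 0 and address >= KERNEL_BASE:
        raise PermissionError("blocked: kernel memory is protected")
    return memory[address]

load(0x0000_1000)        # allowed - the memory is "yours"
try:
    load(0x8000_0010)    # blocked - the memory is protected
except PermissionError as err:
    print(err)
```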

Now, modern Intel and AMD CPUs support what is called speculative execution, whereby the processor figures out what the next few instructions are supposed to do, breaks them into smaller sub-instructions, and processes them in a possibly different order to how they appear in the program.

This is done to increase throughput, so a slow operation that doesn’t affect any intermediate results can be started earlier in the pipeline, with other work being done in what would otherwise be “dead time” waiting for the slow instruction to finish if it ran at the end of the list.

Above, for example, the two instructions are computationally independent, so it doesn’t really matter what order they run in, even though swapping them round changes the moment at which the processor intervenes to block the offending instruction (the one that tries to load memory from the kernel).

Order does matter!

Back in July 2017, a German security researcher did some digging to see if order does, in fact, matter.

He wondered what would happen if the processor calculated some internal results as part of an illegal instruction X, used those internal results in handling legal instruction Y, and only then flagged X as disallowed.

Even if both X and Y were cancelled as a result, would there be a trace of the internal results from the illegal X left where it could be found?

The example he started with looked like this:

1.  mov rax, [K]      ; K is a kernel address that is banned
2.  and rax, 1
3.  mov rbx, [U+rax]  ; U is a user address that is allowed

Don’t worry if you don’t speak assembler: what this does is:

  • Load the A register from kernel memory.
  • Change A to 0 if it was even or 1 if it was odd (this keeps the thought experiment simple).
  • Load register B from memory location U+0 or U+1, depending on A.
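The thought experiment can be simulated, very loosely, in software. Here’s a sketch in Python (a pretend cache and made-up addresses – this models the idea, it is not a working exploit):

```python
# Toy simulation: the CPU speculatively runs the three-instruction
# gadget, faults on the kernel load, rolls back the registers - but
# the cache line touched in step 3 stays "warm".
KERNEL_SECRET = 41               # value at the banned address K (odd here)
U = 0x1000                       # user-accessible probe base address

cache = set()                    # addresses brought into the pretend cache

def speculative_gadget():
    rax = KERNEL_SECRET          # 1. speculatively load [K]
    rax &= 1                     # 2. keep only the low bit
    cache.add(U + rax)           # 3. touch [U+rax] -> cache side effect
    raise PermissionError        # ...only now is the violation flagged

try:
    speculative_gadget()         # architecturally, this "never happened"
except PermissionError:
    pass                         # registers rolled back - but not the cache

# Side channel: check which probe address is now fast (i.e. cached).
leaked_bit = 1 if (U + 1) in cache else 0
print("low bit of kernel secret:", leaked_bit)   # prints 1, since 41 is odd
```

On real hardware the attacker can’t inspect the cache directly, of course; instead they time accesses to U and U+1 and see which one comes back suspiciously fast.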

In theory, speculative execution means that the CPU could finish working internally on instruction 3 before finishing instruction 1, even though the whole sequence of instructions would ultimately be invalidated and blocked because of the privilege violation in 1.

Perhaps, however, the side-effects of instruction 3 could be figured out from elsewhere in the CPU?

After all, the processor’s behaviour would have been slightly different depending on whether the speculatively-executed instruction 3 referenced memory location U or U+1.

For example, this difference might, just might, show up in the CPU’s memory cache – a list of recently-referenced memory addresses plus their values that is maintained inside the CPU itself for performance reasons.

In other words, the cache might act as a “telltale”, known as a side channel, that could leak secret information from inside the CPU – in this case, whether the privileged value of memory location K was odd or even.

(Looking up memory in CPU cache is some 40 times faster than fetching it from the actual memory chips, so enabling this sort of “short-circuit” for commonly-used values can make a huge difference to performance.)
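You can see the same principle by analogy in software memoization. Python’s `lru_cache` below is a stand-in for the CPU cache, not the real thing: the point is simply that repeat lookups skip the slow path, and that the skipping itself is observable.

```python
from functools import lru_cache

# Software analogy for the hardware cache: answering from a store of
# recently used values is far cheaper than recomputing them (or, for
# a CPU, refetching them from the memory chips).
@lru_cache(maxsize=None)
def fetch(address):
    # Stand-in for a slow trip out to RAM.
    return sum(range(10_000)) + address

fetch(0x1234)                    # first access: slow path, fills the cache
fetch(0x1234)                    # repeat access: served from the cache
print(fetch.cache_info().hits)   # prints 1 - one access never hit the slow path
```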

The long and the short of it is that the researcher couldn’t measure the difference between A is even and A is odd (or, alternatively, did the CPU peek at U or did the CPU peek at U+1) in this case…

…but the thought experiment worked out in the end.

The researcher found other similar code constructions that allow you to leak information about kernel memory using address calculation tricks of this sort – a hardware-level side channel that could expose privileged memory to unprivileged programs.

The rest is history

And the rest is history.

Patches are coming soon, at least for Linux and Windows, to deliver KAISER: Kernel Address Isolation to have Side-channels Efficiently Removed, or KPTI, to give its politically correct name.

Now you have an idea where that name KAISER came from: the patch keeps kernel and userland memory more carefully apart so that side-effects from speculative execution tricks can no longer be measured.
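Loosely speaking (this is a sketch of the idea, not of the actual patch code), the change can be modelled like this: before KPTI, kernel pages are present in the user-mode page table, merely flagged as privileged; after it, userland runs on a page table from which the kernel mappings are simply absent.

```python
# Rough model of what KPTI changes. A speculative side channel needs
# the target address to be mapped at all; with KPTI, there is nothing
# in the userland page table left to probe.
kernel_pages = {0x8000_0000: "kernel data"}
user_pages   = {0x0000_1000: "user data"}

pre_kpti_user_view  = {**user_pages, **kernel_pages}  # mapped, just protected
post_kpti_user_view = dict(user_pages)                # not mapped at all

print(0x8000_0000 in pre_kpti_user_view)    # True  - probeable before the patch
print(0x8000_0000 in post_kpti_user_view)   # False - nothing there to leak
```

The cost is the extra page-table switching on every transition between userland and kernel, which is where the reported performance hit comes from.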

This security fix is especially relevant for multi-user computers, such as servers running several virtual machines, where individual users or guest operating systems could use this trick to “reach out” to other parts of the system, such as the host operating system, or other guests on the same physical server.

However, because CPU caching is there to boost performance, anything that reduces the effectiveness of caching is likely to reduce performance, and that is the way of the world.

Sometimes, the price of security progress is a modicum of inconvenience, in much the same way that 2FA is more hassle than a plain login, and HTTPS is computationally more expensive than vanilla HTTP.

In eight words, get ready to take one for the team.

What next?

A lot of the detail behind these patches is currently hidden behind a veil [2018-01-03T16:30Z] – that seems to be down to non-disclosure clauses imposed by various vendors involved in preparing the fixes, an understandable precaution given the level of general interest in new ways to pull off data leakage and privilege escalation exploits.

We expect this secrecy to be lifted as patches are officially published.

However, you can get and try the Linux patches for yourself right now, if you wish. (They aren’t finalised yet, so we can’t recommend using them except for testing.)

So far as we know at the moment, the risk of this flaw seems comparatively modest on dedicated servers such as appliances, and on personal devices such as laptops: to exploit it would require an attacker to run code on your computer in the first place, so you’d already be compromised.

On shared computers such as multiuser build servers or hosting services that run several different customers’ virtual machines on the same physical hardware, the risks are much greater: the host kernel is there to keep different users apart, not merely to keep different programs run by one user apart.

So, a flaw such as this might help an untrustworthy user to snoop on others who are logged in at the same time, or to influence other virtual machines hosted on the same server.

This flaw has existed for years and has been documented for months at least, so there is no need to panic; nevertheless, we recommend that you keep your eyes out for patches for the operating systems you use, probably in the course of January 2018, and that you apply them as soon as you can.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9k2pS5lFW7c/