Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign

Updated A fundamental design flaw in Intel’s processor chips has forced a significant redesign of the Linux and Windows kernels to defang the chip-level security bug.

Programmers are scrambling to overhaul the open-source Linux kernel’s virtual memory system. Meanwhile, Microsoft is expected to publicly introduce the necessary changes to its Windows operating system in an upcoming Patch Tuesday: these changes were seeded to beta testers running fast-ring Windows Insider builds in November and December.

Crucially, these updates to both Linux and Windows will incur a performance hit on Intel products. The effects are still being benchmarked, but we’re looking at a ballpark figure of a five to 30 per cent slowdown, depending on the task and the processor model. More recent Intel chips have features – such as PCID – to reduce the performance hit. Your mileage may vary.

Similar operating systems, such as Apple’s 64-bit macOS, will also need to be updated – the flaw is in the Intel x86-64 hardware, and it appears a microcode update can’t address it. It has to be fixed in software at the OS level, or you can buy a new processor without the design blunder.

Details of the vulnerability within Intel’s silicon are under wraps: an embargo on the specifics is due to lift early this month, perhaps in time for Microsoft’s Patch Tuesday next week. Indeed, patches for the Linux kernel are available for all to see but comments in the source code have been redacted to obfuscate the issue.

However, some details of the flaw have surfaced, and so this is what we know.

Impact

It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the layout or contents of protected kernel memory areas.

The fix is to separate the kernel’s memory completely from user processes using what’s called Kernel Page Table Isolation, or KPTI. At one point, Forcefully Unmap Complete Kernel With Interrupt Trampolines, aka FUCKWIT, was mulled by the Linux kernel team, giving you an idea of how annoying this has been for the developers.

Whenever a running program needs to do anything useful – such as write to a file or open a network connection – it has to temporarily hand control of the processor to the kernel to carry out the job. To make the transition from user mode to kernel mode and back to user mode as fast and efficient as possible, the kernel is present in all processes’ virtual memory address spaces, although it is invisible to these programs. When the kernel is needed, the program makes a system call, and the processor switches to kernel mode and enters the kernel. When it is done, the CPU is told to switch back to user mode and reenter the process. While in user mode, the kernel’s code and data remain out of sight, but present, in the process’s page tables.
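To make that concrete, here’s a minimal Linux-flavoured C sketch of ours – not code from any of the patches – in which each syscall(2) invocation is one of those user-to-kernel-and-back round trips, exactly the crossings that KPTI makes pricier:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* write(2): the CPU traps into kernel mode, the kernel copies the
           buffer out, and control then returns to this process. */
        const char msg[] = "hello from user mode\n";
        syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);

        /* getpid(2) is about the cheapest syscall there is - almost all
           of its cost is the user/kernel transition itself. */
        printf("pid %ld\n", syscall(SYS_getpid));
        return 0;
    }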

Think of the kernel as God sitting on a cloud, looking down on Earth. It’s there, and no normal being can see it, yet they can pray to it.

These KPTI patches move the kernel into a completely separate address space, so it’s not just invisible to a running process, it’s not even there at all. Really, this shouldn’t be needed, but clearly there is a flaw in Intel’s silicon that allows kernel access protections to be bypassed in some way.

The downside to this separation is that it is relatively expensive, time-wise, to keep switching between two separate address spaces for every system call and for every interrupt from the hardware. These context switches do not happen instantly, and they force the processor to dump cached data and reload information from memory. This increases the kernel’s overhead, and slows down the computer.

Your Intel-powered machine will run slower as a result.
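If you want to put a number on it for your own workload once the patches land, a crude micro-benchmark of syscall round-trip cost – run before and after updating – gives a ballpark. A rough sketch, assuming Linux and glibc:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const long iters = 1000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);   /* one user->kernel->user round trip */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per getpid() syscall\n", ns / iters);
        return 0;
    }

Syscall-heavy workloads – databases, network servers – will feel the KPTI overhead far more than compute-bound number crunching, which barely enters the kernel at all.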

How can this security hole be abused?

At best, the vulnerability could be leveraged by malware and hackers to more easily exploit other security bugs.

At worst, the hole could be abused by programs and logged-in users to read the contents of the kernel’s memory. Suffice to say, this is not great. The kernel’s memory space is hidden from user processes and programs because it may contain all sorts of secrets, such as passwords, login keys, files cached from disk, and so on. Imagine a piece of JavaScript running in a browser, or malicious software running on a shared public cloud server, able to sniff sensitive kernel-protected data.

Specifically, in terms of the best-case scenario, it is possible the bug could be abused to defeat KASLR: kernel address space layout randomization. This is a defense mechanism used by various operating systems to place components of the kernel in randomized locations in virtual memory. This mechanism can thwart attempts to abuse other bugs within the kernel: typically, exploit code – particularly return-oriented programming exploits – relies on reusing computer instructions in known locations in memory.

If you randomize the placing of the kernel’s code in memory, exploits can’t find the internal gadgets they need to fully compromise a system. The processor flaw could be potentially exploited to figure out where in memory the kernel has positioned its data and code, hence the flurry of software patching.
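You can see KASLR at work on a Linux box: the kernel’s load address changes from boot to boot, and the kernel goes out of its way to hide it from ordinary users – with kptr_restrict enabled, unprivileged reads of /proc/kallsyms show zeroed addresses. A small sketch of ours that looks up where the kernel’s code starts:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/proc/kallsyms", "r");
        if (!f) { perror("/proc/kallsyms"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, f)) {
            char addr[32], type, name[256];
            if (sscanf(line, "%31s %c %255s", addr, &type, name) == 3
                && strcmp(name, "_text") == 0) {  /* start of kernel code */
                printf("kernel text begins at 0x%s\n", addr);
                break;
            }
        }
        fclose(f);
        return 0;
    }

Run as root, the address printed differs across reboots when KASLR is on; run unprivileged, you typically see all zeroes – which is exactly the information this hardware bug may let attackers recover anyway.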

However, it may be that the vulnerability in Intel’s chips is worse than the above mitigation bypass. In an email to the Linux kernel mailing list over Christmas, AMD said it is not affected. The wording of that message, though, rather gives the game away as to what the underlying cockup is:

AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

A key word here is “speculative.” Modern processors, like Intel’s, perform speculative execution. In order to keep their internal pipelines primed with instructions to obey, the CPU cores try their best to guess what code is going to be run next, fetch it, and execute it.

It appears, from what AMD software engineer Tom Lendacky was suggesting above, that Intel’s CPUs speculatively execute code potentially without performing security checks. It seems it may be possible to craft software in such a way that the processor starts executing an instruction that would normally be blocked – such as reading kernel memory from user mode – and completes that instruction before the privilege level check occurs.

That would allow ring-3-level user code to read ring-0-level kernel data. And that is not good.
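To be clear, no exploit details have been published, but the pattern implied by AMD’s wording would look roughly like the following heavily hedged sketch: a speculative read of kernel memory whose value is leaked through which cache line it touches. The addresses and names here are made up, the code will fault rather than run as-is, and this is our guess at the shape of the attack, not confirmed exploit code:

    #include <stdint.h>

    /* One page per possible byte value, so each value maps to its own
       cache page. */
    extern uint8_t probe[256 * 4096];

    void speculative_leak(const volatile uint8_t *kernel_addr) {
        /* Architecturally, this load is illegal from user mode and will
           fault. The suspicion is that it may still execute speculatively,
           before the privilege check completes... */
        uint8_t secret = *kernel_addr;
        /* ...long enough for this dependent load to pull one probe page
           into the cache as a side effect. */
        (void)probe[secret * 4096];
    }

    /* Recovery (not shown): flush probe[] from the cache, trigger
       speculative_leak() while catching the fault, then time reads of
       probe[v * 4096] for v = 0..255 - the fast one reveals the byte. */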

The specifics of the vulnerability have yet to be confirmed, but consider this: the changes to Linux and Windows are significant and are being pushed out at high speed. That suggests it’s more serious than a KASLR bypass.

Also, the updates to separate kernel and user address spaces on Linux are based on a set of fixes dubbed the KAISER patches, which were created by eggheads at Graz University of Technology in Austria. These boffins discovered [PDF] it was possible to defeat KASLR by extracting memory layout information from the kernel in a side-channel attack on the CPU’s virtual memory system. The team proposed splitting kernel and user spaces to prevent this information leak, and their research sparked this round of patching.

Their work was reviewed by Anders Fogh, who wrote this interesting blog post in July. That article described his attempts to read kernel memory from user mode by abusing speculative execution. Although Fogh was unable to come up with any working proof-of-concept code, he noted:

My results demonstrate that speculative execution does indeed continue despite violations of the isolation between kernel mode and user mode.

It appears the KAISER work is related to Fogh’s research, and as well as developing a practical means to break KASLR by abusing virtual memory layouts, the team may have somehow proved Fogh right – that speculative execution on Intel x86 chips can be exploited to access kernel memory.

Shared systems

The bug will impact big-name cloud computing environments including Amazon EC2, Microsoft Azure, and Google Compute Engine, said a software developer blogging as Python Sweetness in this heavily shared and tweeted article on Monday:

There is presently an embargoed security bug impacting apparently all contemporary [Intel] CPU architectures that implement virtual memory, requiring hardware changes to fully resolve. Urgent development of a software mitigation is being done in the open and recently landed in the Linux kernel, and a similar mitigation began appearing in NT kernels in November. In the worst case the software fix causes huge slowdowns in typical workloads.

There are hints the attack impacts common virtualisation environments including Amazon EC2 and Google Compute Engine…

Microsoft’s Azure cloud – which runs a lot of Linux as well as Windows – will undergo maintenance and reboots on January 10, presumably to roll out the above fixes.

Amazon Web Services also warned customers via email to expect a major security update to land on Friday this week, without going into details.

There were rumors of a severe hypervisor bug – possibly in Xen – doing the rounds at the end of 2017. It may be that this hardware flaw is that rumored bug: that hypervisors can be attacked via this kernel memory access cockup, and thus need to be patched, forcing a mass restart of guest virtual machines.

A spokesperson for Intel was not available for comment. ®

Updated to add

The Intel processor flaw is real. A PhD student at the systems and network security group at Vrije Universiteit Amsterdam has developed a proof-of-concept program that exploits the Chipzilla flaw to read kernel memory from user mode.

The Register has also seen proof-of-concept exploit code that leaks a tiny amount of kernel memory to user processes.

Finally, macOS has been patched to counter the chip design blunder since version 10.13.2, according to operating system kernel expert Alex Ionescu. And it appears 64-bit ARM Linux kernels will also get a set of KAISER patches, completely splitting the kernel and user spaces, to block attempts to defeat KASLR. We’ll be following up this week.

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

Bug-finders’ scheme: Tick-tock, this tech’s tested by flaws... but who the heck do you tell?

Security researcher Scott Helme is pushing a scheme to make it easier for bug finders to notify companies about problems with their technology.

The idea revolves around “security.txt” – a simple text file, much like robots.txt, that contains information on whom to contact or where to look for security-related information about a website. Ready access to this information would reduce the headaches involved in the often fraught security bug notification process, as Helme explains.

Bad things happen and organisations need to respond quickly to resolve them, but one thing that’s always slowed down the process was me not being able to find who I should speak to. I’ve been through call centres, online chats, support ticket systems, social media and who knows what else just to try and raise an issue with the right person.

The process is a nightmare, consumes significant amounts of my time and ultimately leaves the website and users vulnerable for even longer.

The idea, conceived by web dev E. Foudil back in September, has been put forward as an RFC. The securitytxt.org website offers more information.

The scheme is analogous to robots.txt, the file websites use to specify which pages should and shouldn’t be indexed by search engines and other web crawlers.
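By way of illustration, a hypothetical security.txt might look something like this – Contact and Encryption are fields from the draft, while the file’s exact location and full field set are still being hammered out:

    # Example https://example.com/security.txt (hypothetical)
    Contact: mailto:security@example.com
    Contact: https://example.com/security/report
    Encryption: https://example.com/pgp-key.txt

A researcher who finds a flaw simply fetches the file and knows immediately whom to email and which PGP key to encrypt the report with.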

Helme told El Reg that he’d already received positive feedback on the idea from fellow security researchers, but it has yet to see traction with vendors of online services – unsurprisingly, since the concept is just days old. “It’s getting good backing from the researchers,” Helme explained. “I’ve started tracking the use of the text file in the Alexa Top 1 Million sites and it’s as low as expected right now.”

“[It] would be nice to see sites adopting this so researchers like me can disclose quickly and easily to better protect users,” he added. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/security_notification_scheme/

A Pragmatic Approach to Fixing Cybersecurity: 5 Steps

The digital infrastructure that supports our economy, protects our national security, and empowers our society must be made more secure, more trusted, and more reliable. Here’s how.

Today’s headlines are depressingly familiar: wide swaths of personal data are stolen; ransomware locks out access to vital medical records; hostile nation-states exploit social media to influence our political system; electrical grids are compromised; another company loses intellectual property to a foreign competitor. 

Despite over $90 billion spent per year on cybersecurity, progress in securing our business systems, protecting our critical infrastructure, and ensuring consumer data is safe appears to be halting. Clearly, we are at an inflection point. The digital ecosystem that supports our economy, protects our national security, and empowers our society must be made more secure, more trusted, and more reliable. We propose government and business leaders take the following steps immediately.

Step 1: Rethink the distinction between critical and noncritical infrastructure. The economy runs on data and digital networks, from hospitals reliant on electronic medical records to serve patients to sophisticated payment networks that power small businesses. The proliferation of these digital ecosystems across all facets of our economy and society makes it very difficult to differentiate between critical and noncritical systems. We need to rethink our risk models in such an interdependent environment.

Step 2: Make more use of market and legal incentives to drive adoption of best practices, and harden our digital infrastructure across all industries. The key to securing and making networks more resilient is the greater use of market incentives and less reliance on regulation. Currently, most businesses spend enormous resources satisfying the requirements of dozens of cybersecurity frameworks and standards. This compliance-based approach adds to the cost and complexity of security with a questionable reduction in risk. A case in point: most of the large data breaches over the last several years occurred at organizations that were “compliant” with government and industry control standards.

One way forward here is to leverage the efforts of the National Institute of Standards and Technology (NIST). The federal government should take the lead by creating and promulgating one framework with associated controls standards, measurable performance criteria, uniform audit approaches, and breach disclosure criteria to replace the myriad federal, state, and industry regulatory models. Liability protection should be extended to those entities that adopt this framework, which then can be translated into action by leveraging the purchasing power of the private sector, government, and consumers using market-based incentives.

Businesses need to hold their vendors and suppliers to a better standard in terms of protecting sensitive data, and ensure that digital services are safe from disruption, destruction, or tampering. They can leverage their tremendous purchasing power to demand a higher level of cybersecurity and resilience in the same manner they currently screen vendors for financial soundness and their ability to deliver goods and services.

The US government spends hundreds of billions on suppliers and vendors as well. This purchasing power should be translated into contract language requiring basic levels of digital security. NIST’s current efforts are a good start but need to be fully implemented into the federal government’s acquisition and procurement systems to be effective.

US consumers spend over $600 billion per year on information technology and telecommunication services. To improve consumer awareness of the level of security of digital products and services, the government and industry should create the cyber equivalent of Energy Star — a rating system to inform consumers about the level of security of the products and services they buy. This would compel companies to improve the security of their products and services using market mechanisms.  

Step 3: Improve information sharing and collaboration. One of the lessons learned from our war on terror is not only the need to share information between government agencies and between the private and public sectors, but also the need for greater collaboration. We propose the creation of a National Cybersecurity Center that would include the various federal government cyber centers, the private sector’s information sharing and analysis centers (ISACs), and nonprofit entities. The goal of the center is to co-locate a diverse group of stakeholders to work collaboratively to better prepare for, prevent, detect, respond to, and recover from cyber threats.

Step 4: Launch a “Manhattan Project” to improve the research and development of next-generation technologies for the sensitive systems that drive our modern economy. This private-public initiative will require the government to lead efforts to ramp up R&D, in concert with the private sector and academia, with particular focus on securing Internet of Things technologies, advancing quantum computing and cryptography, and improving the security of autonomous systems.

Step 5: Make a large investment in our cybersecurity human capital base. Currently, over 500,000 cybersecurity jobs are unfilled, resulting in substantial gaps in key industries and bidding wars for talent. We need the equivalent of the National Defense Education Act passed after the Sputnik launch in 1957 to produce the tens of thousands of cyber specialists we need each year. Not only would this produce high-paying jobs, but it would ensure the United States maintains its competitive advantage in cyberspace for decades to come.

What we are proposing here is not new; in fact, it has been part of the recommendations of dozens of previous studies and task forces over the last 25 years. What has been missing is the leadership and commitment to translate these recommendations into action.


Mike McConnell, Senior Executive Advisor, Booz Allen Hamilton; former US Director of National Intelligence
Mike McConnell was appointed Director of National Intelligence (DNI) under Presidents George W. Bush and Barack Obama.

Article source: https://www.darkreading.com/threat-intelligence/a-pragmatic-approach-to-fixing-cybersecurity-5-steps-/a/d-id/1330729?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Barracuda Hooks PhishLine in Social Engineering Security Acquisition

Barracuda plans to use PhishLine’s user awareness training to protect against targeted email-based attacks.

Barracuda Networks has acquired PhishLine, a SaaS platform for social engineering simulation and training, the company announced today. The cloud security company plans to add PhishLine’s tech to its security portfolio to protect against email-based targeted attacks.

Email is a prime target for attackers but difficult to protect because security relies on the human factor. Barracuda’s acquisition is intended to address the social engineering threat as attacks become increasingly targeted. In addition to user awareness training, PhishLine provides data analytics and reporting for users to measure risk of both people and processes.

Barracuda has generated a wave of M&A news in recent months as it buckles down on cloud security. Last November, it purchased Sonian for cloud archiving and email security; shortly after, it expanded public cloud functionality for its Web Application Firewall and NextGen Firewall. Later the same month, it was acquired by private equity firm Thoma Bravo for $1.6B.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/barracuda-hooks-phishline-in-social-engineering-security-acquisition/d/d-id/1330739?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook ditches fake news flag, admits it was making things worse

Well, that didn’t work out like we thought it would, Facebook said last month about the “Disputed” tag, which has now been mothballed.

Since March, Facebook has been slapping disputed flags on what some of us call fake news and what others call the stories that mainstream news outlets with hidden agendas want to suffocate.

Aside from whether or not you agree with the use of disputed tags – tags that Facebook’s been allocating with the input of third-party fact-checkers such as Snopes, ABC News, Politifact, FactCheck and the Associated Press – there’s one thing that’s become clear: the tags haven’t been doing squat to stop the spread of fake news.

In fact, at least one publisher of admittedly fake news (he was eventually conscience-panged out of the lucrative business) has noted that fake news goes viral way before Facebook systems, partners or users have a chance to report it. Then there’s a consequence that seems obvious in hindsight: traffic to some articles flagged as fake has skyrocketed as a backlash to what some groups see as an attempt to bury the “truth”.

You can imagine: Hey! Facebook is trying to silence this blog! It says we shouldn’t share it! Well, in your FACE, Facebook: Share! Share! Share!

Jeff Smith, Facebook Product Designer, Grace Jackson, User Experience Researcher, and Seetha Raj, Content Strategist, explained in a more detailed post published on Medium the four ways in which Facebook found the disputed flags fell short:

  1. Disputed flags buried critical information: Although the disputed flag alerted someone that fact-checkers disputed the article, it wasn’t easy for people to understand exactly what was false. It required too many clicks, and those additional clicks made it harder for people to see what the fact-checkers had said about the disputed story.
  2. Disputed flags could sometimes backfire: We learned that dispelling misinformation is challenging. Just because something is marked as “false” or “disputed” doesn’t necessarily mean we will be able to change someone’s opinion about its accuracy. In fact, some research suggests that strong language or visualizations (like a bright red flag) can backfire and further entrench someone’s beliefs.
  3. Disputed flags required at least two fact-checkers: Disputed flags were only applied after two third-party fact-checking organizations determined an article was false because it was a strong visual signal and we wanted to set a high bar for where we applied it. Requiring two false ratings slowed down our ability to provide additional context and often meant that we weren’t able to do so at all. This is particularly problematic in countries with very few fact-checkers, where the volume of potentially false news stories and the limited capacity of the fact-checkers made it difficult for us to get ratings from multiple fact-checkers.
  4. Disputed flags only worked for false ratings: Some of our fact-checking partners use a range of ratings. For example, they might use “false,” “partly false,” “unproven,” and “true.” We only applied Disputed flags to “false” ratings because it was a strong visual signal, but people wanted more context regardless of the rating. There are also the rare circumstances when two fact-checking organizations disagree about the rating for a given article. Giving people all of this information can help them make more informed decisions about what they read, trust, and share.

Mind you, Facebook told the Guardian in May that the disputed flag was leading to decreased traffic and sharing. Some of the publishers of disputed news echoed that. But neither Facebook nor those publishers coughed up much detail on the supposedly reduced traffic.

On 20 December, Facebook Product Manager Tessa Lyons said in a blog post that the company is swapping out the disputed tags because they were working about as well as waving a red flag in front of a raging bull.

In fact, if a fake-news fan sees that type of image, they’re likely to dig in deeper, she says:

Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended.

So instead, Facebook is going to post a nice, bland, mild-mannered, black and white and completely not a red flag selection of Related Articles, to offer users a bit more context about a given disputed article.

The social media behemoth actually launched Related Articles in 2013 to offer up new articles – in News Feed – that people may find interesting about a given topic after they’ve already read an article. In April 2017, it began to test Related Articles that might appear before visitors read an article shared in News Feed. The articles appear in a box below the link and were designed to provide “additional perspectives and information,” including articles by Facebook’s third-party fact-checker partners.

Instead of a red flag, the Related Articles are simply about putting news into context. Since April, Lyons says, they’ve proved more effective at dampening shares of fake news:

Related Articles… are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts. Indeed, we’ve found that when we show Related Articles next to a false news story, it leads to fewer shares than when the Disputed Flag is shown.

There are those who’ve questioned Facebook’s sincerity with regards to turning off the spigot of marketing revenues that flow with fake news. But Facebook swears it’s truly committed to keeping fake news out, given that it “undermines the unique value that Facebook offers: the ability for you to connect with family and friends in meaningful ways.” That’s why it’s putting better tech and more people on the problem, Lyons says.

And it is indeed having an effect, she says:

Overall, we’re making progress. Demoting false news (as identified by fact-checkers) is one of our best weapons because demoted articles typically lose 80 percent of their traffic. This destroys the economic incentives spammers and troll farms have to generate these articles in the first place.

What kind of economic incentives, you may well ask? Well, you can have a chat with Russian troll Jenna Abrams and her 2,752 troll factory friends for the details. A taste: according to one former troll factory employee, $2.3 million was spent over two years, with up to 90 employees making about $846 (50,000 roubles, or £650) a month.

But while Facebook’s fight against fake news means it’s leaving money on the table, it could also spare it the financial lashings of countries that have had it up to here with fake news.

In December 2016, for example, Germany threatened Facebook with a €500,000 fine per fake news post.

It did so amid fears that its German election campaign would turn into a Trump election-like circus, “hijacked by news peddlers, conspiracy theorists, racist ideologues, trolls and cyber-bullies,” as the Financial Times put it.

The UK, France and the Netherlands have had similar fears.

Hopefully, the more neutral approach of giving context, plus the lack of a strong, red-flag graphic image, will do a better job at keeping fake news from spreading like wildfire and will keep such countries’ election campaigns on a more rational, less circusy footing.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/CKuHDGlM1e8/

Brazil says it has bagged Royal Navy flagship HMS Ocean for £84m

The flagship of the Royal Navy, HMS Ocean, has been sold to Brazil for £84m, the South American country’s government has confirmed.

The 22,000-tonne helicopter carrier, which returned from her last British deployment to the Caribbean just weeks ago, will be formally decommissioned from the RN in spring this year.

Although it was well known that Ocean was up for sale and that Brazil (as well as Turkey) was interested in buying the 20-year-old warship, confirmation of the deal and the purchase price were all “known unknowns” until now.

The news was broken by defence blog UK Defence Journal, citing a Brazilian journalist.

The Brazilian Navy’s end-of-year roundup statement, published on Christmas Eve 2017, also included the line: “Minister Raul Jungmann and the Brazilian Navy Commander Eduardo Bacellar Leal Ferreira took the opportunity to announce the purchase of the Royal Navy’s HMS Ocean multifunction vessel, valued at £84m sterling.”

A British Ministry of Defence spokesman told El Reg: “Discussions with Brazil over the long-planned sale of HMS Ocean are at an advanced stage, but no final decisions have been made. HMS Ocean has served admirably with us since 1998 and the revenue she generates will be reinvested in defence as we bolster our Royal Navy with two types of brand new frigates and two huge aircraft carriers.”

In 2012 Ocean was given a two-year, £65m refit, extending her life by three years. While this refit cycle could potentially continue for more years to come, the MoD simply doesn’t have the budget to keep her in service – especially as ever more machinery on the hard-worked old ship needs deep maintenance or replacing.

Selling the ship gives the taxpayer a net benefit of £20m over the last three years alone, which (very approximately) covers the running costs of two frigates for one year. The longer-term financial benefits will be felt by not having to stump up the £12m annual running cost (see “LPH Ocean Class”) of the ship herself.

Sunset of the ‘Mighty O’

Built for £150m in the mid-1990s and commissioned in 1998, Ocean has a dual role as an amphibious landing ship and a helicopter carrier. Her primary purpose in times of war is delivering a unit of Royal Marines to wherever in the world they are needed, whether by using helicopters flying from her large, flat deck, or landing craft operating from her midships davits and her stern ramp.

HMS Ocean on her last deployment to the Caribbean. Note landing craft hanging from the davits; the stern ramp is folded up in this picture. Crown copyright/LPhot Kyle Heller

Some controversy surrounded her build costs, with one of the bidders being accused of submitting an artificially low bid in order to win. Nonetheless, the ship was built to civilian standards, something which not only made her cheaper but also gave her a shorter expected service life than most British warships.

It is expected that Ocean will undergo a comprehensive refit in the UK to Brazil’s specifications before she departs for a new life under a new name.

The old helicopter carrier has just been supplanted as the RN’s sole flat-top by HMS Queen Elizabeth, the new F-35 aircraft carrier. Though Big Liz will eventually become a capable warship in her own right, she was not designed as an amphibious warfare vessel; the ability to deliver large numbers of British personnel onto a potentially hostile shore now rests solely with the two dedicated Albion-class warships, only one of which is ever in service at any given time. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/hms_ocean_sold_brazil/

Security catch-up: Nigerian prince email ring cops collar … Louisiana OAP?

The festive period was accompanied by the usual security shenanigans including breaches, cybercrime busts and serious security bugs.

Those security pros returning to work this week after a well-earned break may (or may not) be relieved to know it’s largely been business as usual.

Over in the United States, half a dozen of the country’s senators have teamed up (PDF) on a bipartisan effort to prevent hackers from tampering with election results. Senators James Lankford (Republican-Oklahoma), Amy Klobuchar (Democrat-Minnesota), Kamala Harris (Democrat-California), Susan Collins (Republican-Maine), Martin Heinrich (Democrat-New Mexico), and Lindsey Graham (Republican-South Carolina) joined forces to craft a potential Secure Elections Act.

The bill requires the government to provide states with any intel it has on possible election-hacking threats, and sets up a procedure for state officials to get security clearance to view these intelligence reports. It also provides grant money to help states cover the cost of switching from electronic-only voting machines to more secure models that leave a paper trail of activity.

In true US government fashion, the two parties combined to give a wonderfully wishy-washy summary of the election hacking landscape:

During the 2016 election, intelligence reports have factually established that Russia hacked presidential campaign accounts, launched cyberattacks against at least 21 state election systems, and attacked a US voting systems software company. While there is no evidence that a single vote outcome was tampered with, this dangerous precedent should be a wake-up call as we head into the 2018 election cycle.

Meanwhile, Guy Fawkes mask-wearing hacker collective Anonymous reportedly hacked an Italian speed camera database. Hacktivists hijacked a police email and database system in Correggio, Italy, and deleted speed camera tickets.

In an unrelated case, two Romanian nationals were charged with hacking CCTV cameras ahead of US pres Donald Trump’s inauguration, CNN reports. A link to the criminal complaint can be found here (pdf).

Elsewhere in the world of cybercrime investigation, a suspect purported to be part of an email scam ring styling itself as a Nigerian prince was arrested. A police charge sheet shows that a 67-year-old pensioner from Louisiana – rather than a resident of Lagos and surrounds – was charged with 269 counts of Wire Fraud and Money Laundering, as USA Today reports. The police report does maintain that law enforcement officers are also looking into suspected “co-conspirators in the Country of Nigeria”, so there’s that…

Just before the new year, John McAfee claimed his Twitter account had been hacked to encourage his followers to purchase, er, lesser-known cryptocurrencies. McAfee said the incident was Twitter’s fault, and not his, because of its failure to get to grips with fake accounts.

The incident raised more than a few eyebrows among security watchers, such as Graham Cluley. “The real John McAfee is no stranger to tweeting about which cryptocurrency his followers should invest in, so the ‘hacker’ certainly wasn’t entirely clueless about how to blend in with the security veteran’s regular postings,” Cluley opined.

Back in the world of the more tangible, the new year brought with it the disclosure of a macOS kernel flaw, along with an accompanying proof-of-concept exploit on GitHub by bug-sniffer “Siguza”, who suggests it has been around for quite a while.

With nasty flaws, social media hacks and cybercrime arrests, it’s reasonable to say that the festive period was largely a continuation of the 12 months that preceded it.

Keep being weird, infosec. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/security_catch_up/

Shopped in Forever 21? There was bank-card-slurping malware in it for, like, forever

Clothing chain Forever 21 has admitted a malware infection on its cash registers swiped customer payment card details for most of last year.

The retailer issued a statement revealing that last year, from April 3 to November 18, hackers were able to harvest payment card details from point-of-sale (POS) terminals in its stores.

According to Forever 21, the crimeware was present at various times on machines throughout the seven-month period, with some machines being infected for most or all of that time. Additionally, Forever 21 said, the malware was able to get into appliances that stored transaction logs in the stores, so it could potentially access cards read by machines that were not themselves infected.

Perhaps most infuriating to victims is the fact that Forever 21 actually had encryption tools installed – but not running – on the infected machines and log storage systems to secure those sales records from prying eyes.

“Forever 21 stores have a device that keeps a log of completed payment card transaction authorizations,” the company said.

“When encryption was off, payment card data was being stored in this log. In a group of stores that were involved in this incident, malware was installed on the log devices that was capable of finding payment card data from the logs, so if encryption was off on a POS device prior to April 3, 2017 and that data was still present in the log file at one of these stores, the malware could have found that data.”
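For a sense of why plaintext card data sitting in a log is such low-hanging fruit: a valid card number satisfies the Luhn checksum, so any 13-to-19-digit run that passes the check is a likely PAN. Here’s a minimal illustrative Luhn checker of the kind both card-scraping malware and defensive data-loss-prevention scanners build on – our sketch, not code from this incident:

    #include <stdbool.h>
    #include <string.h>

    /* Returns true if a digit string passes the Luhn checksum that all
       valid payment card numbers satisfy. */
    bool luhn_valid(const char *digits) {
        size_t n = strlen(digits);
        if (n < 13 || n > 19) return false;
        int sum = 0;
        bool dbl = false;   /* double every second digit from the right */
        for (size_t i = n; i-- > 0; ) {
            int d = digits[i] - '0';
            if (d < 0 || d > 9) return false;
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return sum % 10 == 0;
    }

    /* e.g. luhn_valid("4111111111111111") == true
       (a well-known test card number) */

Encrypting the logs, as Forever 21 intended, is precisely what stops this kind of trivial pattern scan from paying off.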

The company notes that its online store and its stores outside of the US use different payment systems that were not exposed to the malware.

Those who shopped at Forever 21 stores between April and November should keep a close eye on their bank and credit card statements, and report any suspicious activity. One tiny saving grace: while in many cases card numbers, security codes, and expiration dates were obtained, cardholder names were rarely exposed.

Forever 21 said it is working with its payment processors, the developer of the breached POS systems, and law enforcement to further investigate the cyber-break-in.

The clothing shop is far from alone in falling victim to this sort of attack. POS infections have become an increasingly common way for crooks to conduct large-scale harvesting of payment card details. The targets have ranged from hotel chains like Hilton to big-box retailer Target and even restaurant chains. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/03/forever_21_hacked/


The Cybersecurity ‘Upside Down’

There is no stranger thing than being breached. Here are a few ways to avoid the horror.

Like many in cybersecurity, I’m more than a bit of a sci-fi fan and was easily reeled in by Netflix’s Stranger Things. The show’s Upside Down is an alternative reality where none of us wants to be. Landing in the Upside Down diverts circumstances in different, unintended directions and, in some cases, permanently changes lives.

As breach headlines and the resulting fallout of these compromises continue to stream in, it’s easy to imagine that the affected companies are now experiencing their own alternative, unintended reality. This wasn’t the business plan they started the year with, but it is what will be managed for months, and likely a few years, to come. It’s more than a bit… upside down. 

The Cybersecurity Upside Down is the alternate reality organizations enter once they have been materially compromised. It stops business, costs millions, and can have an incalculable impact on current and future customers. It’s the inevitable, not-so-alternative reality for organizations if they don’t take a strategic approach to security, especially as they transform their businesses. Small changes and more investments in new, disparate tools without a seismic shift in strategy will take you to the Cybersecurity Upside Down. 

What Does the Cybersecurity Upside Down Look Like?
In two words, “reactive chaos.” You have no control of your environment and most of your efforts are diverted into understanding what happened, containing the damage, and remediating the issue. New projects, including cloud development and mergers and acquisitions, are significantly stalled. An organization new to the Cybersecurity Upside Down will quickly realize it is blind to what is happening on the network, unaware of where the weaknesses are and without the ability to quickly assess risk.

How Can You Stay Out of the Upside Down?
Do whatever you can to get visibility of your entire security posture and be able to measure it easily and, preferably, continuously so you can take proactive action. Many security organizations have started instrumenting for visibility at endpoints and networks. This is important and useful in monitoring, responding to, and, in some cases, being able to block potential exploits. But this is only a start.

Understanding and establishing true visibility for code and application security is a must for today’s enterprises. Most companies are developing technology and using many different infrastructure providers and third-party components, and they’re accelerating development practices due to competition and new methodologies such as DevOps. If organizations are not integrating security into the entire development lifecycle, they are exposed. Practices such as manual pen testing twice a year and siloed testing within development provide little visibility and make for painful remediation in an Upside Down event.

Make sure to ask questions. Knowing how organizations in your supply chain are developing and protecting your products gives you a line of sight into issues and areas of potential risk. How easily can they update you on the security of their solutions? How will they handle remediation for the solutions? Do they continuously test? 

Systemically Avoid the Cybersecurity Upside Down
Weaknesses and vulnerabilities can be insidious. So, how can organizations root out the unintended consequences of how their company is operating?  Automate wherever possible to provide better visibility. Automating code and application security, for example, takes the burden off of siloed teams and developers. More-secure software is delivered faster, and automation enables a continuous view of your security posture.  

Embed the Culture of Security
Just one trip to the Upside Down will highlight quickly how well or how ineffectively DevOps, security, and development teams are working together. Embedding security champions within development teams and automating and orchestrating security are good examples of how to advance the culture of security in an organization. Threat modeling and red teaming are also good exercises to go through, as long as the results are embedded in the security posture going forward and improve overall operations. By integrating security early and often into the application development process, you can have the visibility and assurance that you need for the best defense against the Cybersecurity Upside Down. 


Carol Clark has over 17 years of experience in the software security industry. She is currently Vice President of Marketing at CYBRIC, where she is responsible for customer success programs. She has also held numerous leadership roles at RSA Security.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-cybersecurity-upside-down/a/d-id/1330722?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple