
Encryption chip flaw afflicts huge number of computers

Researchers have discovered a serious vulnerability in Infineon Trusted Platform Module (TPM) cryptographic processors used to secure encryption keys in many PCs, laptops, Chromebooks and smartcards.

An early warning something might be up emerged on 30 August 2017 when the Estonian Information System Authority (RIA) issued an alert about a “theoretical” problem affecting 750,000 national ID cards issued after October 2014.

The RIA didn’t go into detail but the fact that cancelling the country’s national elections was floated had security people worried.

Last week we got confirmation from Infineon that the problem was serious enough to demand firmware updates from computer vendors, including HP, Fujitsu, Lenovo, Acer, Asus, LG, Samsung and Toshiba.

In cryptographic terms, this one’s a biggie: a flaw in the way RSA key pairs are generated makes it possible for an attacker to work out the private halves of 1024-bit and 2048-bit RSA keys stored on the TPM simply by having access to the public key.

According to the researchers, a factorisation attack based on the “Coppersmith” method could at worst be achieved on Amazon Web Services (AWS) against a 512-bit key in 2 CPU hours at a cost of fractions of a cent, against a 1024-bit key in 97 CPU days for $40-$80, and against a 2048-bit key in 140.8 CPU years for $20,000-$40,000.
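What makes flawed keys findable at all is that the affected Infineon library builds its primes from powers of 65537 modulo a product of small primes, which leaves a fingerprint in the public modulus itself; the researchers published a detector based on this. A much-simplified sketch of the idea (the prime list below is abbreviated and illustrative, and the real published detector uses the full fingerprint):

```python
# Heavily simplified sketch of ROCA-style key screening. The real detector
# uses a specific, longer prime list; this abbreviated one is illustrative.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97]

def powers_of_e_mod(p, e=65537):
    """Return the set of all powers of e modulo p (the subgroup e generates)."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * e) % p
    return seen

def is_roca_like(n):
    """Flag n if, for every small prime p, n mod p lies in the subgroup
    generated by 65537, which is the structure affected moduli exhibit."""
    return all(n % p in powers_of_e_mod(p) for p in SMALL_PRIMES)
```

Because the test needs only the public modulus, anyone (including an attacker) can screen large collections of certificates for vulnerable keys offline.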

That probably still puts attacks against 2048-bit keys out of the range of all but the most serious attackers. 1024-bit keys have also been regarded as too weak for some time – security strength guidelines published by the US National Institute of Standards and Technology (NIST) have graded 1024-bit RSA keys “disallowed” since the start of 2013.

Explained the researchers, who will present more information at this month’s ACM CCS conference:

The currently confirmed number of vulnerable keys found is about 760,000 but possibly up to two to three magnitudes more are vulnerable.

Do Trusted Platform Modules matter?

A TPM is a cryptographic chip built on to the motherboard of many (but by no means all) PCs and laptops as a secure place to store system passwords, certificates, encryption keys and even biometric data. The standard dates back to the early 2000s and was published as ISO/IEC 11889 in 2009.

The principle is simple: storing keys inside the TPM is a lot better than keeping them on the hard drive or letting them be managed by the operating system, both of which can be compromised.

Microsoft’s BitLocker disk encryption uses a TPM where one is available. TPMs can also be used for authentication (checking that a PC is the one it claims to be) and attestation (verifying that a system’s boot image hasn’t been tampered with), for example on Google’s Chromebooks.

The vulnerability was first reported to Infineon in February this year, but the headache now is working out which devices are (or are not) affected.

Many computers, especially older ones, don’t have TPMs and others use chips from vendors other than Infineon.

Windows users can check for the presence of a TPM by pressing Win+R to open Run and entering the command tpm.msc. If no TPM is present you’ll see a message saying so; otherwise the manufacturer is shown at the bottom of the dialogue box. The same interface can be used to regenerate keys, which might be necessary at some point.
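Windows isn’t the only place to look. On Linux, the kernel registers any TPM it detects under sysfs, so a quick check is possible there too. This is a small illustrative sketch assuming the standard /sys/class/tpm layout (the function name is ours):

```python
from pathlib import Path

def linux_tpm_devices(sys_class: str = "/sys/class/tpm") -> list:
    """Return the TPM device names the kernel has registered (e.g. ['tpm0']),
    or an empty list if none exist or sysfs is unavailable."""
    base = Path(sys_class)
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.iterdir())
```

On a machine with a TPM this typically returns ['tpm0']; from there, vendor and firmware details can be read from the device’s sysfs attributes.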

Beyond that, the best place to start assessing the flaw’s impact is on the website of the affected vendor and Microsoft’s help page.

According to the latter, what is now designated CVE-2017-15361 was given a “workaround” update in last week’s monthly Windows patch update, which should be applied before any firmware update from the TPM maker.

And it’s not just PCs: a labyrinth of other devices could also be caught up in the issue, for example around 2% of YubiKey hardware tokens. The same goes for Google Chromebooks, almost all of which seem to use Infineon’s TPMs but will, thankfully, update automatically without user intervention.

Sophos products that manage BitLocker encryption on affected hardware may be impacted. Sophos customers should check Knowledge Base article 127650 for information.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/U8KSEcRdg9I/

Is security on the verge of a fuzzing breakthrough?

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is Today’s predictions for tomorrow’s internet.

Naked Security asked me for a “from the trenches” prediction – a prediction rooted in something practical, where I’m already preparing to spend some time and energy in the next six months.

I’m expecting fuzzing to remain an important technique in security testing, and for the sophistication of fuzzing to improve significantly.

What is fuzzing?

Fuzzing is fundamentally an automated code testing technique. It can be applied to find security problems by throwing vast amounts of tweaked and permuted (fuzzed) inputs at an application and monitoring for conditions with known security implications.

People can write clever tests, but not very many in one day. Fuzzing automates the process of test creation and so it can produce vastly more tests than a person can. Typically each test is quite stupid though, perhaps attempting to provoke the code into an exception or crash with nothing more than random input.

The raw speed of fuzzing compensates for the low odds of an individual test actually finding anything.
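To make that concrete, here is a toy sketch of the idea in Python. The target function and its bug are invented for illustration, but the loop’s shape (generate random input, run the target, record anything that blows up) is the essence of dumb fuzzing:

```python
import random

# A toy "target" with a deliberate crash bug: it mishandles any input
# whose first byte is 0xff. In real fuzzing this would be a parser,
# decoder or other code that consumes untrusted input.
def parse(data: bytes) -> int:
    if data[:1] == b"\xff":
        raise RuntimeError("unhandled header byte")  # the bug we want to find
    return len(data)

def fuzz(target, iterations=10_000, seed=0):
    """Dumb fuzzing: throw random byte strings at the target and
    record every input that provokes an unexpected exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes
```

Every crash this toy loop finds starts with the one byte the target mishandles; a real fuzzer would go on to minimise and triage those inputs.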

Commodification

If you want to run millions of tests (or more – I try to test our engine for billions of iterations in each area I consider), then you need dedicated hardware, ideally lots of it.

Recently, large companies like Google and Microsoft have been trying to make it easier to do fuzzing at scale whilst also packaging it up as an accessible service.

Fuzzers that individuals can easily get running have also been rapidly improving, with the open source American Fuzzy Lop (AFL) being the standout player for me.

AFL describes itself as:

a security-oriented fuzzer that employs a novel type of compile-time instrumentation and genetic algorithms to automatically discover clean, interesting test cases that trigger new internal states in the targeted binary. This substantially improves the functional coverage for the fuzzed code.

Fuzzing can be used as a black box technique (working without access to an application’s source code), so as it becomes more accessible to you it becomes more accessible to your adversaries too. That alone is reason enough to start.

Smarter fuzzing

One way to make fuzzing more accessible and efficient is to make it less stupid. This normally involves using knowledge of how a program works and how bugs can occur to influence the process of automated test creation.

Automatic exploration of code is hard though. Sophisticated computer programs have so many possible execution paths that attempting to trace them all causes a rapid “explosion” in complexity (known as a combinatorial explosion). There are simply too many possibilities even for a computer to cope with. (How the code is explored is a detailed topic beyond the scope of this article, but if you want to go down that rabbit hole, start with symbolic execution and then perhaps compiler transformation).

Hybrid techniques try to balance the speed of stupid tests with the greater efficiency of smarter ones, while avoiding getting lost in too many choices.
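One cheap hybrid, and the heart of AFL’s approach, is to use code coverage as the feedback signal: keep any input that lights up a new path, and mutate from those. Here is a deliberately tiny Python sketch (hand-instrumented toy target, invented bug) of why feedback matters; purely random input would essentially never pass three exact byte comparisons in a row, but the guided loop climbs them one at a time:

```python
import random

# Hand-instrumented toy target: records which "branches" an input reaches,
# and hides a crash three comparisons deep.
def target(data: bytes, coverage: set) -> None:
    coverage.add("start")
    if data[:1] == b"A":
        coverage.add("A")
        if data[1:2] == b"B":
            coverage.add("AB")
            if data[2:3] == b"C":
                coverage.add("ABC")
                raise RuntimeError("deep bug")

def coverage_guided_fuzz(iterations=200_000, seed=1):
    rng = random.Random(seed)
    corpus = [b"\x00\x00\x00"]        # initial seed input
    seen, crashes = set(), []
    for _ in range(iterations):
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.randrange(256)  # one-byte mutation
        data = bytes(data)
        cov = set()
        try:
            target(data, cov)
        except RuntimeError:
            crashes.append(data)
        if not cov <= seen:            # new coverage: promote input to the corpus
            seen |= cov
            corpus.append(data)
    return crashes, seen
```

AFL adds far more on top (compile-time instrumentation, smarter mutation strategies, genetic scheduling of the corpus), but the keep-what-discovers-new-states loop is the same idea.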

The recent winner of a $2 million cyber security prize used one such approach: concolic execution. That work, however, was sponsored at least in part by the USA’s Defense Advanced Research Projects Agency (DARPA), and is not likely to be released publicly anytime soon (the goal of the challenge was to automate writing exploits…)

A breakthrough?

As code gets harder to understand, as the volume of code written each year increases, and as more and more of our lives touch computers in some way, the use of automation to find bugs will only increase in importance.

A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques are combined and made public – providing any developer with the opportunity to efficiently find bugs during development, before they cause problems.

The most promising tools that I know of come from Shellphish, but I don’t think they’re yet accessible enough to count as the breakthrough I’m hoping for.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RTZFi3MkdyA/

Hackers can track, spoof locations and listen in on kids’ smartwatches

Tests on smartwatches for children by security firm Mnemonic and the Norwegian Consumer Council have revealed them to be riddled with flaws.

The Oslo-based company teamed up with the trading standards body to investigate several smartwatches aimed at kids, specifically the Xplora (and associated mobile application Xplora T1), Viksfjord (and mobile app SeTracker) and the Gator 2 (mobile app Gator).

The project found “significant security flaws, unreliable safety features and a lack of consumer protection”.

Strangers can easily seize control of the watches and use them to track and eavesdrop on children due to a lack of encryption and other failings.

The SOS function in the Gator watch, and the whitelisted phone numbers function in the Viksfjord, are particularly poorly implemented. The alerts transmitted when the child leaves a permitted area are also unreliable. Some of the apps associated with the watches lack terms and conditions. Tests showed it wasn’t possible to delete data or user accounts.

After surreptitiously pairing their phone or tablet with the Gator watch, an attacker can remotely access the location of the watch and its location history. They can also edit and remove “geofenced” areas and even send voice messages to the watch itself, according to the research.

The Xplora watch exhibited less severe vulnerabilities. During testing, the consumer council inadvertently accessed sensitive personal data belonging to other Xplora users, including location, names, and phone numbers.

The consumer council is referring (PDF) the manufacturers to the Norwegian Data Protection Authority and the Consumer Ombudsman for breaches of the Norwegian Personal Data Act and the Marketing Control Act. These are based on EU law so the makers of the kit may have violated EU regulations. The watches are available in multiple EU member states.

“It’s very serious when products that claim to make children safer instead put them at risk because of poor security and features that do not work properly,” said Finn Myrstad, director of digital policy at the Norwegian Consumer Council.


“Importers and retailers must know what they stock and sell. These watches have no place on a shop’s shelf, let alone on a child’s wrist.”

Mobile developer Roy Solberg also looked at the Gator 2 smartwatch and blasted the kit for its absent security, warning that it posed a child-tracking risk. He reported his findings to the manufacturer in August but has received no response to date; the publication of the larger study prompted him to go public with his findings.

The Gator watch, distributed in the UK by Techsixtyfour, was previously sold at John Lewis, but consumer advice firm Which? said that after it contacted the retailer the item was pulled from its website.

The Norwegian Consumer Council tested the Viksfjord, a version sold in Norway. A similar watch in the SeTracker family is available in the UK, branded as Witmoving and sold on Amazon.

Mnemonic researchers were able to reliably generate the registration code SeTracker requires for pairing, enabling full pairing with the watch and access to its functionality. SeTracker was vulnerable to location spoofing. In addition, Mnemonic was able to develop a voice call hack, involving an attacker instructing the watch to call back to a specified number.

The warnings about children’s smartwatches add to the growing list of IoT security woes. Tony Rowan, chief security consultant at SentinelOne, commented: “It’s clear to me that we need security standards to be developed and applied to all devices that are going to be connected to any kind of network. Perhaps something along the lines of the CE or kite marking but related to security aspects of the design.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/18/child_smartwatch_privacy_peril/

What’s Next After HTTPS: A Fully Encrypted Web?

As the rate of HTTPS adoption grows faster by the day, it’s only a matter of time before a majority of websites turn on SSL. Here’s why.

HTTPS, the encrypted form of delivering websites, was slow to catch on. Over 20 years after its inception, it was still only used by roughly 1 in 20 websites, despite experts praising its security. This adoption finally started picking up speed in mid-2015, more than doubling, to reach 12% of the Web.

The rate of adoption is getting faster by the day. Data from BuiltWith marks adoption at 32.2% of the top 1 million websites, and the HTTP Archive shows 52% of the requests in the top 500,000 websites used HTTPS (up from 22% a mere two years back).

These are staggering numbers, and a huge security win. Even more impressive is the fact the adoption was not driven by compliance, which is the usual suspect when it comes to growing adoption of security controls. Unless you’re communicating private data, none of the major regulations require the use of HTTPS. Instead, this was achieved by making it easier and cheaper to use HTTPS — and making it more costly not to.

On the ease-of-use side, we saw the Let’s Encrypt initiative make SSL certificates free and easy to deploy. Platforms like Cloudflare and WordPress.com make HTTPS free and on by default, and developer tools like SSLTest and Chrome’s dev tools flag SSL configuration mistakes at no cost.

If you didn’t switch, the biggest penalties came from Google and the browser vendors, penalizing your search ranking, displaying increasingly harsh “this site is not secure” messages to users, and more. The same players, collaborating with standards bodies and content delivery networks, went on to limit new Web technologies such as HTTP/2 and Service Workers to HTTPS, blocking unsecured websites from enjoying their performance and functionality improvements.

So, with all this great progress, what will happen next? Will we actually reach (or approach) a fully encrypted Web? I believe the answer is absolutely yes. I doubt we’ll ever reach 100% coverage, but I expect the vast majority of websites to turn on SSL within a handful of years.

The main reason for that is that the Web’s giants are pushing for it. As adoption grows, it becomes increasingly easy for browsers to mark HTTP sites as insecure, which they are already doing more and more. The Web’s primary standards body, the W3C, has explicitly stated new capabilities will often require TLS, and the IETF favors that sentiment. Apps on Apple’s and Google’s platforms have to work harder to make non-HTTPS API calls, and Google is making entire top-level domains it controls require HTTPS. With all this momentum behind it, I believe HTTPS is on its way to victory.

I, for one, am thrilled to see this change take place. It shows the market and community can promote security controls without government regulation — though those don’t hurt. It also shows the value of making security easy, forecasting a great opportunity for security tools that emphasize user experience and simplicity. Lastly, I’m simply happy more of my personal and business communication will be better protected — at least while in transit!

So where else can we replicate this type of success? The same HTTPS advocates are already working on it! The best two examples of such work deal with the risk of third parties, both services and code.

Most websites rely on a frightening number of third-party services, doing anything from analytics to advertising to social content. The fairly recent Content Security Policy (CSP) standard helps contain what those services can do on your site, reducing the risk of data theft and cross-site scripting. Google, Mozilla, and Microsoft all support CSP and highlight it in their dev tools, trying to raise visibility, but configuring it needs to get a lot easier for true adoption to occur.
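For illustration, a minimal hypothetical policy restricting scripts to the site itself plus a single named analytics host (the hostname here is invented) might look like this:

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://analytics.example.com; object-src 'none'
```

In practice a policy is usually rolled out first via the Content-Security-Policy-Report-Only header, so violations are reported without breaking the site while the rules are tuned.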

Alongside third-party services, websites also rely on third-party libraries, such as the popular jQuery. Such libraries often carry known vulnerabilities, the same type of risk that led to the Equifax breach. Microsoft and Google both recently added a test for such vulnerable libraries to their browsers’ auditing tools, Sonar and Lighthouse respectively. These widely used auditing tools are sure to raise developers’ awareness of the risk, and tools will make it increasingly easy to fix such JavaScript flaws.


Guy Podjarny is CEO and cofounder at Snyk.io, focusing on securing the Node.js and npm world. Guy was previously CTO at Akamai, founded Blaze.io (acquired by Akamai), helped build the first Web app firewall security code analyzer, and was in the Israeli army cyber units.

Article source: https://www.darkreading.com/endpoint/whats-next-after-https-a-fully-encrypted-web/a/d-id/1330152?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

‘Hacker Door’ Backdoor Resurfaces as RAT a Decade Later

Sophisticated backdoor re-emerges as a RAT more than a decade after its 2004 public release, with updated advanced malicious functionality.

A sophisticated remote access Trojan (RAT) dubbed Hacker Door has appeared in active attacks, and it shares many similarities with a backdoor of the same name that was released in 2004 and last updated in 2005. The new Hacker Door has updated and advanced functionality, report researchers at Cylance.

Hacker Door contains backdoor and rootkit components. Once active, it supports a set of typical remote commands, Cylance researchers say, including grabbing screenshots and files, running other processes and commands, opening Telnet and RDP servers, and stealing Windows credentials from current sessions.

Some of its functionality includes using a signed stolen certificate to evade detection by security software designed to search for unsigned code, notes a ZDNet report. Cylance researchers note Hacker Door is largely undocumented malware and has seldom been seen in the wild.

Hacker Door appears to be used by Winnti, a Chinese advanced persistent threat group, notes Cylance. And Winnti appears to be targeting the aerospace industry, the researchers discovered.

Read more about Hacker Door here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/hacker-door-backdoor-resurfaces-as-rat-a-decade-later/d/d-id/1330159?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Future of Democratic Threats is Digital

Public policy and technological challenges take center stage as security leaders discuss digital threats to democracy.

CYBERSEC EUROPEAN CYBERSECURITY FORUM – Kraków, Poland – Technology has transformed the geopolitical landscape and nature of conflict, said Sean Kanuck, director of Future Conflict and Cyber Security at the International Institute for Strategic Studies last week.

“Cyber operations are being used to achieve traditional, economic, political, and criminal ends,” he emphasized in his keynote for the conference’s State track. “The challenges to global interoperability are not limited by physical connectivity of networks.”

Kanuck listed a few key strategic trends in cyber conflict: offensive operations below the level of armed conflict, private sector companies as enablers and targets, automation and higher visibility, collateral damage, and data manipulation and fabricated information campaigns.

Cyberattacks meant to influence democracy are stealthy, crafted to cause uncertainty. “What we actually see is nation-states intentionally operating below the threshold of armed attack that would lead to military response,” he explained. “Citizens are unsure who perpetrated attacks.”

The idea of compromising users’ trust in the democratic system is at the foundation of many geopolitical threats, said Janis Sarts, director of the NATO Strategic Communications Centre of Excellence in Latvia. He pointed to recent attacks on voting systems as an example.

“When you think, ‘What is the fundamental element of democratic society?’ — that is elections. We trust it will bring us results,” he explained in a panel titled “From Cyber With Love: Digital Threats to Democracy.”

“If you take away trust, either by hacking or by making people believe you did, it’s enough,” he continued.

Michael Chertoff, cofounder and executive chairman at the Chertoff Group and former US Secretary of Homeland Security, pointed to attacks on the integrity of information and their ability to manipulate people, citing attacks on media and advertising as an example.

“We haven’t really appreciated what it means to lose control of information, including information about ourselves,” he said. “The EU is ahead of the United States in realizing what it means to have someone else control your data.”

Threats to critical infrastructure

Some threats don’t target citizens’ trust but critical infrastructure and the economy.

“The IoT and poorly designed devices that are easily infected present a real threat to the US economy,” says Melissa Hathaway, president of Hathaway Global Strategies and former cybersecurity advisor for the George W. Bush and Barack Obama administrations, in an interview with Dark Reading.

“There needs to be an urgent focus on the few problems that affect many,” she continues, citing energy, telecommunications, and finance as three sectors vulnerable to cyberattacks. “It’s easy to disrupt service and get malware to destroy capabilities, and we lack resilience.”

Hathaway emphasizes the importance of collaborating with allies, fixing infrastructure, and focusing on trade and diplomacy. Right now, the United States is more worried about cyber weaponry than how cyberattacks could influence its economic structure, she explains. Cybersecurity isn’t always about inbound weapons, but about economic opportunity based on how actors change market forces.

We need to engage — and right now, she says, the US is not engaging. It’s critical to work with all nations in diplomatic exchanges, not only those which are like-minded. “I mean real diplomatic negotiations, understanding what the other side wants,” Hathaway explains.

The Internet is core to international interactions, trade negotiations, and communications technology. It could present a real risk, and any nation could abuse it. “Anybody can be a geopolitical threat depending on how they use or misuse technologies and market forces,” she says.

Fellow experts agree the issue of democratic threats is as much about public policy as it is about tech.

“Cybersecurity lies at the interface of a number of different areas — home, abroad, civilian, and military,” said Sir Julian King, European Commissioner for the Security Union, in his opening keynote remarks. “Many different actors need to be involved when [a cyberattack] happens, and they need to work together swiftly and efficiently.”


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

Article source: https://www.darkreading.com/vulnerabilities---threats/the-future-of-democratic-threats-is-digital/d/d-id/1330132?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Game Change: Meet the Mach37 Fall Startups

CEOs describe how their fledgling ventures will revolutionize user training, privacy, identity management and embedded system security.

With a growing pedigree in cybersecurity, the Mach37 Cyber Accelerator is back in session for the fall with a new crop of participants hoping to gain valuable advice and direction from mentors connected to the program.

Sponsored by Virginia’s Center for Innovative Technology, Mach37 has now helped 52 companies with launch or development assistance that includes close mentorship from fellow entrepreneurs, investors and domain experts in the security world, and a modest $50,000 grant to help bring ideas to fruition. Most importantly, the experience provides a valuable networking opportunity for participants to rub shoulders with the who’s who of security.

The momentum continues during the next three months with another round of six companies that aim to change the game in a range of security niches, including training, privacy, identity management and embedded systems security. We reached out to the CEOs of each firm to get to know their companies better. 

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Article source: https://www.darkreading.com/endpoint/game-change-meet-the-mach37-fall-startups/d/d-id/1330155?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Internet of Ships falling down on security basics

We may not think of ships as industrial control systems (ICS). But, according to Ken Munro, a security researcher with the UK-based Pen Test Partners, we should.

Those who operate them should as well, he said in a blog post summarizing a talk he gave at a conference in Athens, Greece on how easy it is to hack ships’ communication systems. While they may not have physical leaks, they are catastrophically porous when it comes to cybersecurity.

The same history that has led to poor security in land-based ICSs applies to ships, he wrote – they used to run on “dedicated, isolated networks,” and therefore were not at risk from online attacks. But no more:

Now ships: complex industrial controls, but floating. Traditionally isolated, now always-on, connected through VSAT, GSM/LTE and even Wi-Fi. Crew internet access, mashed up with electronic navigation systems, ECDIS, propulsion, load management and numerous other complex, custom systems. A recipe for disaster.

And there are multiple ways for disaster to happen – most of them due to a failure to practice what regular Naked Security readers will recognise as security basics.

Simply by using Shodan, the search engine that indexes internet-connected devices, Munro found marine equipment all over the world. For one of the major maritime satcom (satellite communication) vendors, Inmarsat, he found “plenty of logins for the Globe Wireless over plaintext HTTP”, along with evidence that the firmware of many of its older comm boxes was, as he put it, “dated”.

Another example, the Cobham Sailor 900 satellite antenna, was “protected” from a malicious attacker by the unique, complex username and password combo of: admin/1234.

As Catalin Cimpanu of Bleeping Computer noted, a public exploit already exists for that antenna, “that makes hacking it child’s play for any knowledgeable attacker.” He added that such antennas are not only found on container and passenger ships, “but also on navy and private security boats,” plus helicopters and airplanes.

But, where things “got a bit silly” for Munro was when he discovered a collection of KVH terminals that not only lacked TLS encryption on the login, but also included the name of the vessel plus an option to “show users.” Munro’s reaction: “WTF??”

That option gave up a list of the members of the crew online at that point. He added that a moment on Google yielded the Facebook profile of the deck cadet he had spotted using the commbox.

Simple phish, take control of his laptop, look for a lack of segregation on the ship network and migrate on to other more interesting devices.

Or simply scrape his creds to the commbox and take control that way.

It shouldn’t be this easy!

These flaws are not just now being discovered. They have been noted for years. More than four years ago, in April 2013, security firm Rapid7 reported that in just 12 hours they were able to track more than 34,000 ships worldwide using the maritime protocol Automatic Identification System (AIS).

Using those AIS receivers, it reckoned:

…we would probably be able to isolate and continuously track any given vessel provided with an MMSI number. Considering that a lot of military, law enforcement, cargo and passenger ships do broadcast their positions, we feel that this is a security risk.

And Munro’s research found that things have only gone downhill since – in the past four and a half years, the number of exposed ships has increased.

But Munro has some (rather depressingly familiar) recommendations for both civilian and military mariners: Start practicing the basics.

  • Update satcom boxes immediately.
  • Implement TLS on all satcom boxes.
  • Increase password complexity, especially for high-privilege accounts.
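That last point can at least be checked mechanically. As an illustrative sketch only (the thresholds are ours, and modern guidance such as NIST SP 800-63B favours length and breached-password screening over composition rules), a baseline check of the kind that would have rejected admin/1234:

```python
import re

def meets_baseline_policy(password: str, min_length: int = 12) -> bool:
    """Reject short or single-character-class passwords.
    Purely illustrative: length and breach screening matter more
    than composition rules in current guidance."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )
```

A check like this would sit in the device’s password-change path; the more important fix is shipping devices without a universal default credential in the first place.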

He concluded:

There are many routes on to a ship, but the satcom box is the one route that is nearly always on the internet. Start with securing these devices, then move on to securing other ship systems.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/O_2NgpG3eZc/

Ex-TalkTalk chief grilled by MPs on suitability to chair NHS Improvement

Dido Harding, the woman at the helm during TalkTalk’s 2015 mega breach, was yesterday grilled about her move to chair NHS Improvement, the body responsible for overseeing the UK’s health service – a service itself famously clobbered by a huge cyber attack this year.

Speaking in front of MPs in a pre-appointment hearing for her forthcoming role as chairman of NHS Improvement, Harding was asked about her suitability for the appointment, specifically given her handling of the TalkTalk cyber attack.

The incident affected 157,000 customers’ personal details and cost the biz £42m. In February this year, Harding stepped down as chief exec.

She said: “One of the reasons why my name is inextricably linked with cyber attacks is because at TalkTalk we made a choice to warn our customers very quickly after the attack.” She said she was most criticised in the business press for speaking out too early and saying “I don’t know” a few times on the Today Programme.

“I actually think I did exactly the right thing there and what’s more, TalkTalk customers told us after the event they thought I, and the company, had done the right thing.”

She said she would have liked to speak out earlier, but was waiting on the Metropolitan Police before deciding whether to immediately warn customers. “The police wanted us to keep quiet,” she said.

However, Ben Bradshaw MP noted that when the attack happened, TalkTalk’s share price went down 30 per cent, and when Harding announced her departure it increased by 10 per cent.

“[The public] may take from that the judgement of the market was that you weren’t a great success. And that your new desire to do good and work in the public service – a cynic might look at this and think, well, she wasn’t going to hack it at the top level in the private sector, so she is looking for a cushy government job,” he said.

Harding replied she did not think the role was a cushy job, and that the company had doubled in value since she took the reins more than seven years ago.

During the hearing, Harding was also asked if she intended to give up her private health insurance if she took the role, to which she replied she would not. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/18/mps_grill_dido_harding_over_suitability_to_chair_nhs_improvement/

BoundHook: Microsoft downplays Windows systems exploit technique

Features of Intel’s MPX (Memory Protection Extensions), designed to prevent memory errors and attacks, might be abused to launch assaults on Windows systems, security researchers claim.

Windows 10 uses Intel MPX to secure applications by detecting boundary exceptions (common during a buffer overflow attack). An exploit technique from CyberArk Labs uses the boundary exception itself as the hook, giving attackers control of Windows 10 devices.

The researchers claim the so-called “BoundHook” technique creates a potential mechanism for hackers to exploit the design of Intel Memory Protection Extensions to hook applications in user mode and execute code. According to CyberArk Labs, this could, in theory, allow attacks to fly under the radar of antivirus software or other security measures on both 32-bit and 64-bit Windows 10 devices.

Microsoft has downplayed the significance of the potential attack, telling CyberArk Labs that it’s only useful as a technique for post-compromise exploitation. MS dismisses the research as a “marketing report”, from which The Reg infers it sees no need to have the tech patched.

A Microsoft spokesperson told The Reg: “The technique described in this marketing report does not represent a security vulnerability and requires a machine to already be compromised to potentially work. We encourage customers to always keep their systems updated for the best protection.”

BoundHook is the second technique discovered by CyberArk Labs for hooking functions in Windows. The first, dubbed GhostHook, bypasses Microsoft’s attempts to prevent kernel-level attacks (e.g. PatchGuard) and uses a similar hooking approach to take control of a device. Microsoft dismissed that potential attack route as a low-risk threat, as we previously reported. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/18/boundhook_windows_10_exploit_cyberark/