STE WILLIAMS

Google AI teaches itself ‘superhuman’ chess skills in four hours

Human chess grandmaster Peter Heine Nielsen tells the BBC that he’s “always wondered how it would be if a superior species landed on earth and showed us how they played chess.”

Well, move aside, ugly, giant bags of mostly water: now we know, because Google’s “superhuman” AlphaZero artificial intelligence (AI) taught itself chess from scratch in four hours. Then, it wiped the floor with the former world-leading chess software, Stockfish 8.

AlphaZero is a game-playing AI created by DeepMind Technologies Ltd., a Google subsidiary that builds neural networks which learn to play games in a fashion similar to that of humans.

That neural network had to learn how to play chess – without human interaction, mind you – because until recently it was a Go specialist that had confined itself to going around beating the world’s best Go players in its incarnation as AlphaGo.

Now that AlphaZero has been generalized, it can learn other games. After teaching itself chess in four hours, it played a 100-game match against Stockfish 8, an open-source chess engine that consistently ranks at or near the top of most chess-engine rating lists.

In the AlphaZero/Stockfish 8 games, AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published by the DeepMind crew with Cornell University Library’s arXiv. It garnered 28 wins, 72 draws, and zero losses.

From the paper, whose authors include DeepMind founder Demis Hassabis, a childhood chess prodigy who reached the rank of chess master at the age of 13:

Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case.

Former world chess champion Garry Kasparov told Chess.com that AlphaZero’s performance is “remarkable”:

It’s a remarkable achievement, even if we should have expected it after AlphaGo. It approaches the ‘Type B,’ human-like approach to machine chess dreamt of by Claude Shannon and Alan Turing instead of brute force.

According to Chess.com, AlphaZero is like humans in that it searches far fewer positions than its predecessors. The paper claims that it looks at “only” 80,000 positions per second, compared to Stockfish’s 70 million per second.

In fact, the DeepMind programmers used a specific type of machine learning – reinforcement learning – to train AlphaZero. From Chess.com’s writeup:

Put more plainly, AlphaZero was not “taught” the game in the traditional sense. That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns.

This would be akin to a robot being given access to thousands of metal bits and parts, but no knowledge of a combustion engine, then it experiments numerous times with every combination possible until it builds a Ferrari. That’s all in less time than it takes to watch the “Lord of the Rings” trilogy. The program had four hours to play itself many, many times, thereby becoming its own teacher.
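The "becoming its own teacher" idea can be sketched in a few lines of code. The toy below uses tabular Q-learning via self-play on single-pile Nim (take 1-3 stones; whoever takes the last stone wins). This is only an illustration of learning from self-play with no domain knowledge beyond the rules; AlphaZero itself uses deep neural networks plus Monte Carlo tree search, not a Q-table, and the game, hyperparameters, and reward scheme here are my own assumptions.

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic toy run

# Q[(stones_remaining, move)] -> estimated value for the player to move
Q = defaultdict(float)
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20000

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def best_move(stones):
    return max(moves(stones), key=lambda m: Q[(stones, m)])

for _ in range(EPISODES):
    stones, history = 10, []
    while stones > 0:
        # epsilon-greedy: mostly exploit what we've learned, sometimes explore
        m = random.choice(moves(stones)) if random.random() < EPSILON else best_move(stones)
        history.append((stones, m))
        stones -= m
    reward = 1.0  # the player who took the last stone won
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward  # alternate players as we walk back up the game

print(best_move(10))
```

Both "players" share and update the same value table, so every game played against itself improves the policy for both sides, which is the essence of the self-play loop described above.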

Not all grandmasters are fully satisfied with the way the match was set up. They’re debating the processing power of the two adversarial systems, while American GM Hikaru Nakamura reportedly called the match “dishonest”, pointing out that Stockfish’s methodology requires it to have an openings book for optimal performance. Another expert, GM Larry Kaufman, said he wants to see how AlphaZero would do on a home machine, as opposed to Google’s souped-up computers.

But aside from arguments about the fairness of the match, experts say that we’re looking at actual AI at this point. From here, we could see much more than chess wins. Chess.com quotes GM Peter Heine Nielsen:

It goes from having something that’s relevant to chess to something that’s gonna win Nobel Prizes or even bigger than Nobel Prizes. I think it’s basically cool for us that they also decided to do four hours on chess because we get a lot of knowledge. We feel it’s a great day for chess but of course it goes so much further.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3PerVBB-e1w/

Phishing embraces HTTPS, hoping you’ll “check for the padlock”

After a slow-burning romance, HTTPS has recently bloomed into one of security’s great love affairs.

Google is a long-time admirer, and in October started plastering “not secure” labels in the Chrome address bar on many sites failing to use HTTPS by default, a tactic meant to persuade more website owners to share its enthusiasm.

Facebook, Twitter and WordPress, meanwhile, have been keen for years, which helps explain EFF figures from early in 2017 estimating that an impressive half of all web traffic was being secured using HTTPS.

So alluring has HTTPS become that it has now acquired suitors it could do without – phishing websites.

According to PhishLabs, a quarter of all phishing sites now use HTTPS, up from a few percent a year ago.

The increase has been so dramatic in 2017 that in a single quarter its popularity among phishing sites doubled. What’s causing this sudden interest?

One explanation:

As more websites obtain SSL certificates, the number of potential HTTPS websites available for compromise increases.

This is logical. As the number of sites using HTTPS increases, the chance that a legitimate site compromised to host phishing attacks will have it enabled increases too.

Which means that acquiring an HTTPS certificate is an empty upgrade if other vulnerabilities are not addressed at the same time.

But there’s a second, less savoury possibility:

An analysis of Q3 HTTPS phishing attacks against PayPal and Apple, the two primary targets of these attacks, indicates that nearly three-quarters of HTTPS phishing sites targeting them were hosted on maliciously-registered domains.

We’ll call this the ‘window-dressing theory’: cybercriminals believe that web users are lulled into a false sense of security by the presence of HTTPS even though their scams might work without it.

That these certificates are obtained free of charge from services such as Let’s Encrypt, set up to spread the use of HTTPS among legitimate website operators, only adds to the painful sense of unintended consequences.

The culprit here is not really HTTPS, or Let’s Encrypt, but the green padlock symbol itself, browsing’s most misunderstood and over-rated signifier.

Too many people see its glow and think it guarantees a site’s legitimacy when, of course, it does nothing of the kind. Some of this is plain naivety but there’s also confusion about what HTTPS and padlocks are for.

This is partly the industry’s fault, starting with Google. Visit an HTTPS site in Chrome and the browser will describe padlocked sites as “secure”, which refers to the connection, not the site itself.

Except that not everyone knows this.

Browsers also use a colour-coding system to designate the trustworthiness of a site (green padlocks being awarded to sites with an Extended Validation certificate), but these can still appear on phishing sites that have not been detected by integrated filtering.

Naked Security discussed this issue (and the problem of how sites are verified) in 2015 so it’s not a new worry.

The logical result of the trend PhishLabs has detected is that eventually all websites will use HTTPS whether they are phishing sites or not, at which point the misunderstanding of the whole padlock system will become apparent.

The dream of an entirely encrypted internet is a noble one but its ubiquity will be a pyrrhic victory if cybercriminals can find easy ways to manipulate it from the inside.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3PS6QSONpjo/

Next-gen telco protocol Diameter has last-gen security – researchers

Some of the well-known weaknesses of SS7 roaming networks have been replicated in the next-gen telco protocol, Diameter.

Diameter will be used for roaming connections of LTE/LTE-A mobile networks. The protocol is designed for trusted environments – roaming interconnection interfaces between providers – but the “walled garden” assumptions of telco operators do not hold in practice, so attacks including spoofing and more are possible, according to researchers from German security consultancy ERNW.

Diameter-based networks, messages and functions can be abused. Typical attacks would result in information leaks about a targeted environment, but attacks against the authentication and encryption of customers are also possible. Intelligence gleaned might be used to intercept mobile data and calls, and opens up the possibility of running various types of fraud.
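For context on what a spoofed interconnect message actually is: every Diameter message starts with a fixed 20-byte header defined in RFC 6733 (version, length, flags, command code, Application-ID, and two correlation IDs). A minimal parsing sketch is below; the example field values are made up for illustration and are not from ERNW's research.

```python
import struct

def parse_diameter_header(data):
    # RFC 6733 Diameter header: five 32-bit big-endian words.
    version_and_len, flags_and_code, app_id, hop_by_hop, end_to_end = \
        struct.unpack("!IIIII", data[:20])
    return {
        "version": version_and_len >> 24,        # always 1
        "length": version_and_len & 0xFFFFFF,    # total message length
        "flags": flags_and_code >> 24,           # R(equest), P(roxiable), E(rror)...
        "command_code": flags_and_code & 0xFFFFFF,
        "application_id": app_id,
        "hop_by_hop": hop_by_hop,                # matches requests to answers
        "end_to_end": end_to_end,                # detects duplicates
    }

# Hypothetical Capabilities-Exchange-Request: version 1, length 20,
# Request flag set (0x80), command code 257, application 0.
hdr = struct.pack("!IIIII", (1 << 24) | 20, (0x80 << 24) | 257, 0, 0x1234, 0x5678)
print(parse_diameter_header(hdr)["command_code"])  # 257
```

Nothing in this header authenticates the sender by itself, which is why the "trusted interconnect" assumption matters so much: security comes from the transport and peering arrangements, not the protocol framing.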


All around Diameter: ERNW spells it out at Black Hat

To demonstrate such attacks, researchers at ERNW developed a testing framework covering information gathering, mobile phone tracking, denial-of-service, pay fraud, and interception of data. The framework was released after a talk on the research at the Black Hat EU conference this week.

The tool is designed to enable providers and security companies to assess a telco’s Diameter network configuration and demonstrate the scope of possible malfeasance.

ERNW researchers urged telcos to secure these interfaces and assess the infrastructure components and configurations.

Diameter is an authentication, authorisation, and accounting protocol which is in the process of replacing RADIUS. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/08/diameter_protocol_security_shortcomings/

UK.gov law resources now untrustworthy, according to browsers

The SSL certificate on the criminal justice and court listing site justice.gov.uk expired yesterday, so browsers now warn users that their information is at risk.

The site can still be accessed if users click through their browser’s warnings, and contains resources on courts, procedure rules and offenders. It is separate from the Gov.uk Ministry of Justice site.

The reader who tipped us off to the snafu said: “This is a bit poor for a government department which serves out the civil procedure rules among other things.”

SSL (Secure Sockets Layer) certificates are used to prove a website’s identity and protect online transactions. They can be purchased as a subscription from one of a small group of globally trusted companies, known as certificate authorities.
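The check that tripped up justice.gov.uk is simple: every certificate carries a notAfter date, and once the clock passes it, browsers refuse to trust the certificate. A minimal sketch of that check, using the date format Python's `ssl.getpeercert()` reports (the specific date below is an illustrative guess, not the site's actual certificate):

```python
from datetime import datetime, timezone

def cert_expired(not_after, now=None):
    # ssl.getpeercert() reports expiry like "Dec  7 23:59:59 2017 GMT"
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return now > expiry

# A 2017 expiry date is long past by any current clock:
print(cert_expired("Dec  7 23:59:59 2017 GMT"))  # True
```

Renewal is purely administrative, which is why a lapse like this reads as a process failure rather than a technical one: nothing about the site broke except the calendar.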

Since no transactions take place on justice.gov.uk, an expired SSL certificate will likely not cause any major problems for users who dare to ignore their browser’s warning. It is, however, just plain bad practice on the part of the MoJ’s website team.

Sean Sullivan, security adviser for F-Secure, said the value of SSL can be overstated. However, he added: “The real question is how long does the organisation take to fix the problem? Even if I question the overall value of everything being encrypted, once you’ve committed to it, you need to do it. Or else the public will become even further confused.”

The Register has asked the MoJ for comment. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/08/moj_website_ssl_certificate_expires/

Microsoft Issues Emergency Patch for ‘Critical’ Flaw in Windows Security

Remote code execution vulnerability in Microsoft Malware Protection Engine was found by UK spy agency’s National Cyber Security Centre (NCSC).

Microsoft late yesterday issued an emergency patch for its major Windows malware protection tool that fixes a critical vulnerability discovered by the UK’s National Cyber Security Centre (NCSC), an arm of the Government Communications Headquarters (GCHQ) intelligence agency.

The remote code execution vulnerability (CVE-2017-11937) in the Microsoft Malware Protection Engine would allow an attacker to gain full control of Windows 7, 8, 10, and Windows Server systems via the Windows Defender feature that uses it. Also affected by the flaw are Microsoft Endpoint Protection, Microsoft Exchange Server 2013 and 2016, Microsoft Forefront Endpoint Protection, Microsoft Forefront Endpoint Protection 2010, and Microsoft Security Essentials.

“An attacker who successfully exploited this vulnerability could execute arbitrary code in the security context of the LocalSystem account and take control of the system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights,” Microsoft said in its security alert.

An exploit would require a “specially crafted file” be scanned by the malware protection engine, and the malicious file could be served to a victim via email, a website, instant message, or via a hosting server, Microsoft said. Systems with Windows real-time protection enabled automatically get updated with the patch.

See Microsoft’s advisory here for more details.

 


Article source: https://www.darkreading.com/vulnerabilities---threats/microsoft-issues-emergency-patch-for-critical-flaw-in-windows-security/d/d-id/1330595?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What Slugs in a Garden Can Teach Us About Security

Design principles observed in nature serve as a valuable model to improve organizations’ security approaches.

Next year marks the 40th anniversary of a book that changed the world: Bill Mollison and David Holmgren‘s Permaculture One, which described a set of agricultural and social design principles that mimic the relationships found in nature.

“In practice, permaculture is a growing and influential movement that runs deep beneath sustainable farming and urban food gardening,” Michael Tortorello wrote in The New York Times. “You can find permaculturists setting up worm trays and bee boxes, aquaponics ponds and chicken roosts, composting toilets and rain barrels, solar panels and earth houses.”

What does this have to do with information security? I believe there’s remarkable synchronicity between permaculture and security and that the use of design principles observed in natural ecosystems can serve as a valuable model to improve organizations’ approaches to security.

Think about the challenges of protecting an enterprise: lack of resources (people, technology, budget, or any combination thereof), competing priorities, balancing compliance requirements and business needs, awareness and training, enforcing policies and standards. 

It’s an environment well-suited for the application of permaculture principles, which focus on harmonious integration — working with, rather than against, nature — and embracing collaboration over competition. Permaculture, a portmanteau of “permanent agriculture,” embraces three basic ethics: care of the Earth (or, in this case, the system), care of people, and reinvestment of the surplus.

These three ethics guide 12 design principles that can be as useful in setting up and administering security systems as in agriculture, but we don’t need to go that deep in the weeds here (pun intended).

It’s also useful to think about the six permaculture zones and how they can be used to prioritize work. Permaculture zones are used to organize design elements based on frequency of use or need. The lowest number (0) denotes the most frequently touched, while the highest (5) is equivalent to wild land, requiring no human effort to produce anything.

How do security concepts line up with this zoned approach? For the purpose of illustration, let’s assume the following: You receive 25 to 50 alerts from your intrusion detection system (IDS) per day. You update your malware system or respond to alerts 10 times per week. You review VPN logs once a day. And you deploy code once per day, with integrated static code analysis.

Using this information, you can begin to align your tools with specific zones: IDS is in Zone 1 because these alerts happen frequently and are a strong indicator of compromise but don’t involve much interaction time. Malware issues have a pattern similar to IDS alerts, but the incidents are less frequent, pushing them out to Zone 2. VPN log reviews and static code analyses fall into Zone 3, thanks to less-frequent occurrences but a need for greater human intervention during such occurrences.

These are not hard-and-fast rules. If you do multiple code commits per day, for example, static code analysis would fall into a lower-numbered zone. Essentially, zone alignment is based on the number of times you need to touch the security control. It’s a great way to begin the application of the design principle — from patterns to details.
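Since zone alignment is driven purely by touch frequency, it can be sketched as a tiny classifier. The thresholds below are illustrative assumptions of mine, chosen so the article's four examples land in the zones it assigns them; they are not part of the permaculture literature.

```python
# Map how often a security control must be touched (per week) to a
# permaculture-style zone: more frequent interaction -> lower-numbered zone.

def zone_for(touches_per_week):
    if touches_per_week >= 100:   # e.g. 25-50 IDS alerts per day
        return 1
    if touches_per_week >= 10:    # e.g. malware updates/alerts ~10x per week
        return 2
    if touches_per_week >= 5:     # e.g. daily VPN log review or code analysis
        return 3
    return 4                      # rarer, mostly hands-off controls

controls = {
    "IDS alerts": 25 * 7,         # at least 25 per day
    "Malware response": 10,
    "VPN log review": 7,
    "Static code analysis": 7,
}
for name, per_week in controls.items():
    print(f"{name}: Zone {zone_for(per_week)}")
```

As the article notes, the thresholds should shift with your own workflow: multiple code commits per day would push static analysis into a lower-numbered zone.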

Some additional practical applications of permaculture in security:

The problem is the solution. Slugs are a problem in the garden. But if you add ducks, the slugs become a food source for them. And then the ducks provide eggs. In technology, an equivalent might be the training opportunities that arise when software developers deliver code that has vulnerabilities. By identifying vulnerabilities committed at an individual developer level, you can then tailor specific training material toward that user. This reduces the burden on the whole team, because they avoid mandatory training on material for which they’ve already demonstrated competence. This is a challenging concept for some people — whether something is positive or negative is entirely determined by how you view it.

Get the most benefit from the least change. In the physical world, a dam site might be chosen because it delivers the most water in relation to the least amount of earth that has to be moved. In the IT security world, an equivalent goal might be to remove admin rights from workstations, thereby immediately dropping the percentage of malware infections. This is a single action that can have a far-reaching positive effect on an entire organization.

Seeking order yields energy. Disorder consumes energy to no useful purpose, whereas order and harmony free up energy for other uses. By embedding operations staff into development teams, for example, you can avoid inefficiencies caused by engineers attempting to simultaneously manage systems while writing code.

Learn to harness natural cycles. Every cyclical event increases the opportunity for yield. Consider the software development life cycle and the plan-build-run model: both are examples of technological cycles that can make identification of IT security defects easier by coupling different tools to disparate stages.

Permitted and forced functions. Key system elements may supply many functions. However, if you force too many functions onto an element, it will buckle under the weight. Order is achieved by balancing simplicity and complexity.

Work with nature rather than against it. Pesticides destroy beneficial as well as destructive insects; the following year brings an explosion of pests because there aren’t any predators to control them. If your security controls cause inconvenience to your users, they’ll bypass them. When we build IT security policies and controls that function within the flow of the organization, enhanced security is the natural outcome.

Despite our many attempts to disrupt her, Mother Nature has been managing the world pretty efficiently for many millions of years. Permaculture reminds us to listen to what she tells us and apply this insight across every aspect of our lives. The lessons for information security are dramatic.

Related Content:

As Senior Director of Security and IT at Distil Networks, Chris Nelson leads the security and compliance initiatives across the organization, using permaculture principles in the design of policy, standards, audit, and risk assessment. He works with customers, partners, and internal teams.

Article source: https://www.darkreading.com/risk/what-slugs-in-a-garden-can-teach-us-about-security/a/d-id/1330568?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sloppy coding + huge PSD2 changes = Lots of late nights for banking devs next year

Poorly written code is leaving banks at greater risk of attack and poorly prepared for big changes in the financial sector due to come into effect early next year.

CAST, an organisation that reviews the quality of code for businesses, recently analysed more than 278 million lines of code across 1,388 applications and detected 1.3 million weaknesses.

Financial Services organisations had the highest number of violations caused by coding mistakes and non-secure coding practices per thousand lines of code (KLOC). Telecommunications firms also fared poorly in the coding quality benchmarking exercise.
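CAST's benchmark expresses results as violations per thousand lines of code (KLOC). As a quick illustration of the metric, applied to the aggregate figures above (which of course mix applications of very different quality):

```python
# CWE/violation density: weaknesses per thousand lines of code (KLOC).
def cwe_density(weaknesses, lines_of_code):
    return weaknesses / (lines_of_code / 1000)

# CAST's aggregate figures: 1.3 million weaknesses across 278 million lines.
print(round(cwe_density(1_300_000, 278_000_000), 2))  # 4.68 per KLOC
```

Per-KLOC normalisation is what makes cross-sector comparison meaningful: a bank's sprawling legacy estate and a small telco app can be scored on the same scale.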

Bad coding and poor software quality have practical ramifications for the EU financial sector: by 13 January next year, member states will have to implement the revised Payment Services Directive (PSD2) in their national regulations.

“A greater density of security weaknesses presents more opportunities for malicious actors to find vulnerabilities to exploit for unauthorised entry into systems,” Dr Bill Curtis, SVP and Chief Scientist at CAST Research Labs, told El Reg.

“Ramifications are the compromise of confidential customer information, malicious damage to systems, or worse, theft from accounts,” he added.

The financial sector has a greater Common Weakness Enumeration (CWE) density than other sectors because of the need to support legacy systems, among other factors. Banks have been slower than other sectors to adopt modern coding technology, partly because of the need to support legacy apps written in Cobol but also because of complex coding environments.

Banks want to modernise and adopt more modular and compartmentalised modern code but this is far from straightforward. Just putting a Java or .Net wrapper on backend apps running on a mainframe doesn’t help.

Curtis explained: “Financial service firms have many older systems and in some cases have not spent the effort to upgrade them to modern security standards. They must dedicate effort to remediating security vulnerabilities, even as the business continues to demand more functionality and wants it prioritised over defect-fixing.”

The importance of following coding best practices will only increase once the looming PSD2 open-banking regulations come into effect.

“Allowing multiple parties access to confidential customer information and funds will require greater software security than we currently see in financial services,” Curtis explained.

“Hackers are clever and the attack surface they can exploit will be exponentially expanded across multiple parties. Financial institutions will need a certification based on code analysis that ensures the systems gaining access to their accounts are secure and have eliminated known vulnerabilities.”

Companies tend to prioritise user experience at the expense of cybersecurity.

More generally, applications developed using Microsoft’s .NET have higher CWE densities and produce some of the poorest software quality overall. Java applications released more than six times per year have the highest CWE densities.

Applications between five and 10 years old have the greatest potential for security flaws. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/08/bank_coding_psd2/

Apple fills the KRACK on iPhones – at last

Remember KRACK, short for Key Reinstallation Attack?

Nearly two months ago, it was all over the news – what we jocularly call a BWAIN, short for “bug with an impressive name” – because it exposed a cryptographic weakness in WPA, the Wi-Fi encryption protocol that is used to secure most of the world’s wireless networks.

Very greatly simplified, KRACK involved tricking a wireless access point into sending the first two packets of a session scrambled with the same encryption key, with the result that if you knew the content of one of the packets, you could figure out the other.
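Why does reusing a key (and hence a keystream) leak data? With stream encryption, XORing two ciphertexts produced under the same keystream cancels the keystream and leaves the XOR of the two plaintexts, so a known packet reveals an unknown one. The toy below demonstrates the principle with a bare XOR cipher; it is not WPA's actual cryptography.

```python
import os

def xor(a, b):
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"KNOWN HELLO PKT!"     # attacker knows this plaintext (e.g. a header)
p2 = b"secret password!"     # attacker wants this one
keystream = os.urandom(16)   # reused for both packets: the flaw

c1, c2 = xor(p1, keystream), xor(p2, keystream)

# c1 XOR c2 == p1 XOR p2 (keystream cancels), so XOR in p1 to recover p2:
recovered_p2 = xor(xor(c1, c2), p1)
print(recovered_p2 == p2)    # True
```

This is exactly why "encrypted Wi-Fi connections aren't supposed to leak any data" is a meaningful standard: even partial keystream reuse against predictable traffic is enough to start peeling packets open.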

KRACK wasn’t the end of the world as we know it (we happily reported that Wi-Fi was still safe to use), but it was worth patching against – encrypted Wi-Fi connections aren’t supposed to leak any data, and that’s that.

Apple, amongst others, put out a patch pretty quickly for iPhone users, as we reported in early November 2017…

…but there was a twist in the fix, because it wasn’t for everyone:

According to Apple’s official support documentation, the [02 November 2017] KRACK fix only applies to iPhone 7s, iPad Pro 9.7 (early 2016) and later.

We don’t know why the KRACK patch is being made available for newer iDevices only – it’s possible a fix for earlier devices is still in the works, or perhaps Apple has determined that these older versions aren’t vulnerable to KRACK at all.

Either way, if you’re a pre-7 iPhone user, keep your eyes peeled for an update from Apple just in case.

Well, the wait is now over, because Apple’s latest round of updates includes iOS 11.2, and that officially (and at last) includes KRACK-related patches for the devices that were left out last time:

Wi-Fi.

Available for: iPhone 6s, iPhone 6s Plus, iPhone 6, iPhone 6 Plus, iPhone SE, iPhone 5s, 12.9-inch iPad Pro 1st generation, iPad Air 2, iPad Air, iPad 5th generation, iPad mini 4, iPad mini 3, iPad mini 2, and iPod touch 6th generation. (Released for iPhone 7 and later and iPad Pro 9.7-inch (early 2016) and later in iOS 11.1.)

Impact: An attacker in Wi-Fi range may force nonce reuse in WPA multicast/GTK clients (Key Reinstallation Attacks – KRACK)

As it happens, numerous other security holes were closed in the iOS 11.2 update, including four vulnerabilities listed as “may be able to execute arbitrary code with kernel privileges”, which is about as close to “good for a full jailbreak and takeover” as you’re likely to hear from Apple.

By the way, macOS goes to High Sierra 10.13.2 in the same tranche of updates, with three “may be able to execute arbitrary code with kernel privileges” fixed for Mac users, too.

Get ’em as soon as you can.

Use Settings | General | Software Update on an iPhone, and Apple Menu | About This Mac | Software Update... on a Mac.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3nAShxHRPAs/

Uber disguised $100,000 hacker payoff as bug bounty, claims Reuters

Remember the 2017 Uber breach?

The one that was actually discovered in 2016, except that Uber conveniently forgot about it for a year before admitting, “Well, yes, now you mention it, some records did get taken.”

57,000,000 records in all, apparently, including – for Uber drivers, at least – data such as driving licence and vehicle registration details.

From a regulatory point of view, Uber ought to have reported this breach promptly in many jurisdictions around the world, rather than hushing it up; in the UK, for example, the Information Commissioner’s Office has variously stated:

Uber’s announcement about a concealed data breach last October raises huge concerns around its data protection policies and ethics. [2017-11-22T10:00Z]

It’s always the company’s responsibility to identify when UK citizens have been affected as part of a data breach and take steps to reduce any harm to consumers. Deliberately concealing breaches from regulators and citizens could attract higher fines for companies. [2017-11-22T17:35Z]

Uber has confirmed its data breach in October 2016 affected approximately 2.7 million user accounts in the UK. Uber has said the breach involved names, mobile phone numbers and email addresses. [2017-11-29]

At the time the breach news broke, it also emerged that Uber had paid $100,000 in what was effectively hush money to the hacker or hackers behind the breach, making it possible for Uber to sweep the breach under the carpet.

We speculated at the time how this payout might have been orchestrated:

It’ll be interesting to see how the story unfolds – if the current Uber leadership can unfold it at this stage, that is. I suppose you could wrap the $100,000 up as a “bug bounty payout”, but that still leaves the issue of “very conveniently deciding for yourself that it wasn’t necessary to report it”.

Well, if an exclusive investigation published recently by Reuters has it right, then so did we: Reuters claims that the payoff was indeed made to look like a bug bounty payout.

Bug bounties are official rewards offered by companies to researchers who find security bugs, flaws, holes and problems, but this sort of payout is offered within a legal framework that – for obvious reasons – puts limits on exactly where bounty hunters should go, and how they should behave.

Deliberately hacking a live system in a way that is likely to crash it just to prove a point is understandably off-limits; so too is using unlawful techniques to achieve a result – stealing a physical server, for example, or threatening an employee to extract a password.

Another unlawful no-no is actually cracking into a server, stealing a giant pile of data and then offering the data back for what amounts to a ransom, even if that ransom payment would also lead to finding and fixing the security hole.

But Reuters is insisting that is pretty much how it played out in the Uber case.

According to Reuters, the attack and breach went something like this: the hacker who was ultimately paid off by Uber contracted a “researcher” to dig out Uber passwords on GitHub; those passwords led to the 57 million records; Uber then received “an email […] demanding money in exchange for user data”.

Of course, even if that wasn’t quite how it happened, or if calling this a bug bounty payout is ultimately deemed ethically acceptable…

…there’s still the issue that we described above, namely the matter of Uber very conveniently deciding unilaterally that it wasn’t necessary to report the breach.

Over here in the UK, we’ll be very interested to see what the Information Commissioner’s Office has to add to its earlier warnings.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/aLN_skR0-fs/

Security industry needs to be less trusting to get more secure

Delegates to Black Hat Europe have been encouraged to turn conventional security thinking on its head by practicing security through distrusting.

Security pros normally aim to make (computer) systems (reasonably) secure and trustworthy. This means striving to ensure everything (software, hardware, infrastructure) is trusted: the code has no bugs or backdoors, patches are always available and deployed, admins are trustworthy, and the infrastructure is reliable.


Security through distrusting

Joanna Rutkowska, chief exec of Invisible Things Lab, argued that this approach is increasingly unworkable, and that it is better to treat any single component in a system as potentially pwned. The second approach involves distrusting (nearly) all components and actors, and having no single point of failure.

“The industry has been way too much focused on this first approach, which I see as overly naive and non-scalable to more complex systems,” Rutkowska told delegates during a keynote presentation at the security conference on Thursday.

Security through distrusting is no panacea because it involves trade-offs, particularly in usability and convenience. Rutkowska has applied the principle in designing how Qubes, an operating system she created, handles image and PDF files. Other implementations are as yet thin on the ground. ®
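One way to make "no single point of failure" concrete is to split a secret so that no single component can disclose it alone, and only the combination of all components recovers it. The sketch below is simple n-of-n XOR secret splitting; it illustrates the distrust principle generally and is not how Qubes works.

```python
import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    # n-1 purely random shares, plus one share that XORs the set back
    # to the secret. Any n-1 shares together are still uniformly random.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares):
    return reduce(xor_bytes, shares)

shares = split(b"wifi-admin-password", 3)
print(combine(shares))  # b'wifi-admin-password'
```

Compromising any one holder of a share yields nothing useful, which is the "treat every component as potentially pwned" posture in miniature; the trade-off, as the talk noted, is convenience, since recovery now needs all parties.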

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/12/07/security_distrusting/