
What the KRACK was that? [Chet Chat Podcast 264]

Oct
23

This episode of the Chet Chat podcast was recorded live at the BSides Calgary conference in Alberta, Canada.

Sophos expert Chester Wisniewski (he’s the Chet in the Chat) caught up with fellow security researcher and former colleague Michael Argast for a whirlwind tour of the big security issues of the past week.

In this episode

If you enjoy the podcast, please share it with other people interested in security and privacy and give us a vote on iTunes and other podcasting directories.

Listen and rate via iTunes...
Sophos podcasts on Soundcloud...
RSS feed of Sophos podcasts...


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bBfic7HB6do/

Facebook security chief stands by “college campus” comments

Oct
23

In late July, Facebook security chief Alex Stamos told employees in a conference call that the company isn’t doing enough to respond to growing cyber threats: in fact, with Facebook’s “move fast” mantra, the vault that stores the keys to a billion lives is (deliberately) run like a college campus but has the threat profile of a defense contractor, he said.

So that’s security worry No. 1.

Security worry No. 2 is that somebody on the call—a Facebook employee, one assumes—taped him and leaked the clip to ZDNet, which published it on Thursday.

Here are Stamos’ remarks from the call, which was concerned with the challenges of protecting Facebook’s networks from the growing threat of nation-sponsored hackers:

The threats that we are facing have increased significantly, and the quality of the adversaries that we are facing. Both technically and from a cultural perspective, I don’t feel like we have caught up with our responsibility.

The way that I explain to [management] is that we have the threat profile of a Northrop Grumman or a Raytheon or another defense contractor, but we run our corporate network, for example, like a college campus, almost.

We have made intentional decisions to give access to data and systems to engineers to make them ‘move fast,’ but that creates other issues for us.

As Ars Technica points out, nation states are suspected of being behind attacks against Google, Yahoo, defense contractors, security companies and more. In March, federal prosecutors indicted Russian intelligence agency officers for a 2014 hack on Yahoo that compromised 500 million user accounts, for example, while Google said in 2010 that it had lost intellectual property in a highly targeted attack coming from China.

That’s the kind of thing that Facebook, and everybody else online, is facing. And Facebook is being run like a campus. OK. We don’t know exactly what that means, but it doesn’t sound good. It sounds sloppy. It sounds like a high-risk environment.

But before we grab our torches and burn down the frat houses, let’s take a look at what Stamos had to say when he took to Twitter to clarify the remarks on Thursday:

I was asked for comment today wrt some leaked audio from when I was speaking to my security team at Facebook. 1/11

Here it is: I’ve said this before, internally, to describe one of the basic challenges security teams face at companies like ours 2/11

Tech companies are famous for providing freedom for engineers to customize their environments and experiment with new tools 3/11

And also frameworks and development processes. Allowing for this freedom helps creativity and productivity 4/11

We have to weigh that against the fact that we have become a potential target of advanced threat actors. 5/11

As a result, we can’t architect our security the same way a defense contractor can, with limited computing options and no freedom. 6/11

Keeping the company secure while allowing the culture to blossom is a challenge, but a motivating one, I’m happy to accept. 7/11

The “college campus” wording is just a figure of speech to make the point; 8/11

My team runs network security for the company. Of course we secure it thoroughly. 9/11

It would not be correct to read my quote as a criticism of management not caring about security; they care a great deal. 10/11

It’s not a criticism of anybody, just a statement of why our team needs to be creative in how we protect our corporate network. 11/11

Some are sympathizing with Facebook. Software developer Molly McG: “…it’s actually an incredible analogy for the challenges you face and I love it … The college campus is a perfect metaphor for an environment where you can experiment while protected by institutional safeguards.”

“I don’t even see how this statement of reality is even remotely controversial,” said April King, head of website security at Mozilla. “That freedom, despite its subsequent challenges, lets you attract the kind of tech talent that you simply couldn’t get at a large corporation.”

Fair enough. But we’re talking about personal information belonging to millions of people. Hiring whiz kids is great for churning out creative new ideas, but if that creativity comes at the expense of security, whose interests does it serve? Do we want surgeons to learn how to use a scalpel on a live patient?

Then again, as he explained, Stamos didn’t mean inexperienced, or foolhardy, when he referred to a “college campus.”

From the outside it looks like Facebook takes security very seriously: ever seen an Equifax- or Yahoo-level data breach from Facebook? No? Neither have we.

One of many examples of what Facebook does right can be found in the way it locks users out of their accounts if the company finds that they’ve reused their passwords on other sites that have been breached.
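
Facebook hasn’t published exactly how that check works. As a rough illustration of the general idea – ours, not Facebook’s – a site can compare the hash of a candidate password against a corpus of credentials leaked in third-party breaches. A minimal Python sketch, with a made-up corpus standing in for the real dataset:

```python
# Illustrative only: flag passwords that appear in third-party breach dumps.
# The corpus below is a hypothetical stand-in; a real deployment would load
# a large, regularly updated dataset of breached-credential hashes.

import hashlib

BREACHED_SHA1 = {
    hashlib.sha1(b"password123").hexdigest(),  # pretend these came from dumps
    hashlib.sha1(b"letmein").hexdigest(),
}

def seen_in_breach(candidate: str) -> bool:
    """True if the candidate password matches a known-breached credential."""
    return hashlib.sha1(candidate.encode()).hexdigest() in BREACHED_SHA1

if __name__ == "__main__":
    print(seen_in_breach("password123"))                   # True
    print(seen_in_breach("correct horse battery staple"))  # False
```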

Another commendable practice: Facebook has been using secure browsing by default since July 2013. Plus, Facebook issues transparency reports to let us all know which governments are making plays for our data and how many times. On top of all that, it doesn’t balk at paying out decent bug bounties.

Plenty of other internet platforms besides Facebook are doing those security-proactive things too, but it’s still worth noting that clearly not every single Facebook security or development engineer is swinging from the ceiling fan.

Of course a company like Facebook only has to fail once for everything we’ve shared with it to be spilled.

Storing vast amounts of user data, moving fast and structuring themselves like a campus rather than a defence contractor are all deliberate decisions on Facebook’s part. Nobody obliged the company to do that, or shoulder the risks and responsibilities that go along with making it all work.

When it comes to Facebook securing its network, Naked Security’s Mark Stockley thinks that overall, it’s pretty impressive (though it’s certainly got a problem with at least one employee who felt that it’s OK to tape a confidential call and release it to a major tech publication).

On the other hand, regardless of Stamos trying to put his comments into the context of fostering creativity, the fact is that the top security guy at the company said “I don’t feel like we have caught up with our responsibility”. That’s why Mark said you could quote him on this one:

These are Facebook’s choices and the challenges it faces are real but self-imposed so I sympathize, but not enough to forgive it if they’re breached.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/oZLG4JnvdC4/

Just say “No!” – how to stop the DDE email attack [VIDEO]

Oct
23

You’ve probably heard of the DDE attack – a way of launching malware from a web download, an email attachment, or even directly from the body of an Outlook email message or calendar invite.

It sounds scary – no document macros, no tell-tale script files, no attachment to open…

…but once you know what to look for, stopping a DDE attack isn’t that hard.
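
If you’d rather triage documents programmatically, the tell-tale sign is a DDE or DDEAUTO field instruction inside the document. Here’s a minimal Python sketch – our illustration, not a Sophos tool – that flags .docx files containing such fields (attackers can obfuscate field codes, so treat a clean result as “nothing obvious”, not “safe”):

```python
# Quick triage: flag .docx files whose main document part contains DDE
# field instructions. Field code text can be split across runs, so we
# concatenate all <w:instrText> content before searching.

import re
import sys
import zipfile

def has_dde_field(path):
    with zipfile.ZipFile(path) as z:
        xml = z.read("word/document.xml").decode("utf-8", errors="replace")
    instr = "".join(re.findall(r"<w:instrText[^>]*>(.*?)</w:instrText>", xml, re.S))
    return re.search(r"\bDDE(AUTO)?\b", instr, re.I) is not None

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "->", "DDE field found" if has_dde_field(path) else "no DDE field seen")
```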

Paul Ducklin tells you how the DDE attack works, what to look out for, and what to do.

(Can’t see the video directly above this line? Watch on Facebook instead.)

(You don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.)

PS. If you like the T-shirt in the video, you can buy one at https://shop.sophos.com/.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Q5hLxHp3g2E/

‘We’ve nothing to hide’: Kaspersky Lab offers to open up source code

Oct
23

Russian cybersecurity software flinger Kaspersky Lab has offered to open up its source code for third-party review.

The firm’s Global Transparency Initiative is a response to the US Department of Homeland Security’s move to ban the use of its technology on US government systems over concerns about alleged ties to the Russian government.

The initiative comes days after reports that Russian government hackers used Kaspersky antivirus software to siphon off classified material from a PC belonging to an NSA contractor.

With this initiative, Kaspersky Lab will engage the broader information security community and other stakeholders in validating and verifying the trustworthiness of its products, internal processes, and business operations, as well as introducing additional accountability mechanisms by which the company can further demonstrate that it addresses any security issues promptly and thoroughly.

An independent review of the company’s source code by Q1 2018 will be followed by similar audits of its software updates and threat detection rules. A separate independent assessment of Kaspersky Lab’s secure development lifecycle processes and its software and supply chain risk mitigation strategies will take place in parallel.

Analysis of source code to rule out possible backdoors is all well and good, but what really counts is how the anti-malware software is configured to select which types of file are uploaded to the cloud for further scrutiny – behaviour that can be, and needs to be, altered by updates.

Kaspersky Lab further plans to open up three Transparency Centres worldwide (in Asia, Europe and the US) by 2020. In the meantime, the company has increased the value of its bug bounty awards to up to £75,000 ($100,000) for the most severe vulnerabilities.

Eugene Kaspersky, chairman and chief exec, said the initiative was designed to re-establish trust and prevent the “balkanisation” of internet security.

“Cybersecurity has no borders, but attempts to introduce national boundaries in cyberspace is counterproductive and must be stopped,” Kaspersky said in a statement. “We need to re-establish trust in relationships between companies, governments and citizens. That’s why we’re launching this Global Transparency Initiative: we want to show how we’re completely open and transparent. We’ve nothing to hide.”

Industry reaction has been mixed.

Javvad Malik, security advocate at AlienVault, said: “Following the allegations against Kaspersky, the company needs to restore public trust and this is a good way to go about rebuilding that trust.

“With an increase in cyber warfare and hostile governments, it makes sense for more companies to bring more transparency to the market. So it could encourage other companies to follow suit and help technology companies remain politically agnostic.

“This is particularly relevant to security companies whose software often runs with high privileges.”

Lee Munson, security researcher for Comparitech.com, argued that the move had more to do with reputation management than transparency.

“However Kaspersky Lab packages this new initiative, it is clear to me that this is about reputation repair, especially in the US where claims of links to Russian intelligence agencies have been severely damaging.

“That it could build trust in the security community is true, though the trust being built for Kaspersky is obviously of paramount importance here.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/kaspersky_source_code_review/

ROCA ’round the lock: Gemalto says IDPrime .NET access cards bitten by TPM RSA key gremlin

Oct
23

Some Gemalto smartcards can be potentially cloned and used by highly skilled crooks due to a cryptography blunder dubbed ROCA.

Security researchers went public last week with research that revealed that RSA keys produced for smartcards, security tokens, and other devices by crypto-chips made by Infineon Technologies were weak and crackable.

In other words, the private half of the RSA public-private key pairs in the gadgets, which are supposed to be secret, can be calculated from the public half, allowing the access cards and tokens to be cloned by smart attackers. That means keycards and tokens used to gain entry to buildings and internal servers can be potentially copied and used to break into sensitive areas and computers.

Infineon TPMs – AKA trusted platform modules – are used by various computers and gadgets to generate RSA key pairs for numerous applications. A bug in the chipset’s key-generation code makes it possible to compute private keys from public keys in TPM-generated RSA private-public key pairs. The research was put together by a team from Masaryk University in Brno, Czech Republic; UK security firm Enigma Bridge; and Ca’ Foscari University of Venice, Italy.

Infineon TPMs manufactured from 2012 onwards, including the latest versions, are all vulnerable. Fixing the problem involves upgrading the module’s TPM firmware, via updates from your device’s manufacturer or operating system’s maker.

Major vendors including HP, Lenovo and Fujitsu have released software updates and mitigation guides for their laptops and other computers. ROCA – short for Return of Coppersmith’s Attack AKA CVE-2017-15361 – hit the Estonian ID card system, too.

Although not included in the initial casualty list, it turns out some Gemalto smartcards are also affected by the so-called ROCA vulnerability. Gemalto confirmed to El Reg today that some of its tech – specifically the IDPrime .NET access cards – are affected while downplaying the significance of the problem and saying remediation work was already in hand:

There has been a recent disclosure of a potential security vulnerability affecting the Infineon software cryptographic library also known as ROCA (CVE-2017-15361). The alleged issue is linked to the RSA on-board key generation function being part of a library optionally bundled with the chip by this silicon manufacturer. Infineon have stated that the chip hardware itself is not affected. As Gemalto sources certain products from Infineon, we have assessed our entire product portfolio to identify those which are based on the affected software. Our thorough product analysis has concluded that:

It is standard practice that Gemalto’s products use our in-house cryptographic libraries, developed by our internal R&D teams and experts in cryptography. In the vast majority of cases, the crypto libraries developed by the chip manufacturer are not included in our products. We can confirm that products containing Gemalto’s crypto libraries are immune to the attack. A very limited set of customized products (including IDPrime.NET) are affected. We have already contacted the customers using these products and are currently working with them on remedial solutions.

As of today, this theoretical vulnerability has only been demonstrated as a mathematical possibility but no real cases have been seen to date.

Gemalto takes this issue very seriously and has set up a dedicated team of security experts to work on it and we will continue to monitor any evolution to the situation.

Dan Cvrcek, of Enigma Bridge and one of the ROCA researchers, said: “Gemalto stopped selling these cards [IDPrime .NET smartcards] in September 2017, but there are large numbers of cards still in use in corporate environments. Their primary use is in enterprise PKI systems for secure email, VPN access, and so on.

“ROCA does not seem to affect Gemalto IDPrime MD cards. We have also no reason to suspect the ROCA vulnerability affects Protiva PIV smart cards, although we couldn’t test any of these.”

Cvrcek has blogged about the issue here.

A paper detailing the research – titled The Return of Coppersmith’s Attack: Practical Factorization of Widely Used RSA Moduli – is due to be published at the ACM’s Computer and Communications Security conference in Dallas, Texas, on November 2. There is no public exploit code for the TPM flaws that we know of. While we all wait for more technical details of the vulnerability to be released, this online checker can be used to test RSA keys for ROCA-caused weaknesses. ®
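
For the curious, the essence of the researchers’ fingerprint test can be sketched in a few lines of Python. Keys from the buggy library have the unusual property that, for many small primes p, the modulus N mod p falls inside the multiplicative subgroup generated by 65537 mod p. The short prime list below is an assumption made for brevity – the published detector uses a longer list – so expect occasional false positives:

```python
# Minimal sketch of the ROCA fingerprint test (an illustration, not the
# researchers' tool). A modulus that passes for every prime here is only
# "possibly weak" and should be confirmed with a full detector.

SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109]

def subgroup_65537(p):
    """Return the set {65537**k mod p : k >= 0}."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % p
    return seen

def looks_like_roca(n):
    """True if modulus n matches the fingerprint for every prime in the list."""
    return all(n % p in subgroup_65537(p) for p in SMALL_PRIMES)
```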


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/roca_crypto_flaw_gemalto/

Security Training & Awareness: 3 Big Myths

Oct
23
The once-overwhelming consensus that security awareness programs are invaluable is increasingly up for debate.

Organizations of all sizes continue to invest heavily in security awareness training, hoping to transform employees into a primary defense against email phishing and other cybersecurity threats. But such an endeavor, which historically has been positioned as an inexpensive solution, is today proving costly. A recent report commissioned by Bromium discovered that large enterprises spend $290,033 per year on phishing awareness training.

Even more telling, according to security experts quoted in a recent article in The Wall Street Journal, security awareness initiatives often fall short of their intended purpose because the training is a “big turnoff for employees.” Unfortunately, such sentiment is frequently ignored by security awareness training vendors with three claims that can easily be dispelled as myths.

Myth #1: Employees must participate in numerous hours of security awareness training for it to be effective.

The Facts: While many reporters and analysts explore how to create security awareness training programs that employees “won’t hate,” few experts would argue for allocating more time than absolutely necessary. That’s because training adults on cybersecurity is a lot like training children in math or science — more time spent does not typically equate to better results.

Experiential learning techniques, such as gamified quizzes and interactive sessions in which attacks are simulated, can provide the mental stimulation required to capture the attention spans of all generations and lead to measurable improvement in employee cybersecurity aptitude. For example, the state of Missouri in 2015 implemented a cybersecurity training program that required employees to participate in short, 10-minute learning sessions each month, leading to “end users [who] have become one of the best ‘intrusion detection systems’ as a result and have alerted us to many sophisticated attacks,” according to Missouri Chief Information Security Officer Michael Roling in GCN.com.

Myth #2: Content leads to behavior change

The Facts: Changing behavior is one of the most difficult human undertakings, despite conventional wisdom to the contrary. In fact, psychologists have estimated that the average person requires 66 to almost 300 days to form a new habit. Can you imagine the backlash of mandating 66 or more days of cybersecurity training?

Instead of forcing employees to consume a plethora of content, organizations should remain focused on communicating their main security messages and repeating those messages over and over and over again. This concept of “less is more” is sometimes referred to in the corporate world as micro-learning, an educational philosophy that “allows companies to make their training relevant to the needs of their workers, easily accessible, and interesting enough to grab their attention and keep it.” While not all organizations subscribe to this way of thinking, micro-learning has been shown to increase knowledge retention, which is exactly what cybersecurity awareness training is supposed to be all about. 

Myth #3: Extensive training modules are necessary to reduce risk

The Facts: Modules, which can help employees learn how to classify and analyze data, do very little to prepare workers to identify and act on cyberattacks. Instead, the oversaturation of modules frequently confuses and frustrates employees who can’t see how such education benefits them. Organizations serious about reducing risk must cut through the background noise and prioritize direct employee feedback and experiential learning techniques in order to train a truly cyber-aware workforce.

As evidenced by the continued escalation of successful phishing attacks, it is a myth that security awareness training requires a significant time investment and an abundance of content and modules to successfully educate workers and, in turn, significantly minimize risk. What is true — if done correctly — is that security awareness and training is a necessary part of the increasingly complex cybersecurity puzzle.

Related Content:

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Eyal Benishti has spent more than a decade in the information security industry, with a focus on software R&D for startups and enterprises. Before establishing IRONSCALES, he served as security researcher and malware analyst at Radware, where he filed two patents in the … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/security-training-and-awareness-3-big-myths/a/d-id/1330165?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Play Bug Bounty Program Debuts

Oct
23
Google teams up with HackerOne to create the Google Play Security Reward Program.

Google has teamed up with HackerOne to launch the Google Play Security Reward Program.

Top Google Play application developers that have opted into the program will be listed on the Google Play Security Reward program page, which currently includes apps such as Dropbox, Tinder, and Snapchat. Google is also including some of its own apps in the program.

Independent security researchers are required to report the vulnerability to the app developer, who then works with the researcher to resolve the flaw. After the app maker pays the researcher a bounty and fixes the vulnerability, Google will provide the researcher an additional $1,000 bonus award.

Google already has public bug bounty programs Google Vulnerability Reward Program (VRP), Android Rewards, and Chrome Rewards in place. Under the VRP program, independent security researchers are paid anywhere from $100 to $31,337 for finding vulnerabilities in Google-developed apps, extensions, some of its hardware devices like OnHub and Nest, and on Google-owned Web properties. 

Read more about the Google Play Security Reward Program here.

Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/google-play-bug-bounty-program-debuts/d/d-id/1330193?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Microsoft tears into Chrome security as patching feud continues

Oct
23

The ding-dong between Microsoft and Google vulnerability researchers is not yet an inter-generational conflict but it’s showing signs of turning into one.

After being embarrassed by Google’s Project Zero over a string of software flaws, Microsoft has fired back by publicising a critical Remote Code Execution (RCE) flaw its Offensive Security Research (OSR) team spotted after crashing Chrome’s open-source JavaScript engine, V8.

Identified as CVE-2017-5121, the flaw in the just-in-time compiler was patched by Google in September (Chrome 61.0.3163.100). We now know it was reported to Google by Microsoft because, as the company’s blog reveals, its team was paid a $7,500 (£5,700) bug bounty.

Normally, that would be that, except that Microsoft’s dissection swiftly turns into a launchpad for a broader critique of weaknesses in Chrome’s design. For example:

Chrome’s relative lack of RCE mitigations means the path from memory corruption bug to exploit can be a short one.

And, significantly:

Several security checks being done within the sandbox result in RCE exploits being able to, among other things, bypass Same Origin Policy (SOP), giving RCE-capable attackers access to victims’ online services (such as email, documents, and banking sessions) and saved credentials.

Bluntly, Microsoft seems to be saying, Chrome’s much-vaunted sandboxing (a feature that limits one web page or browser tab’s access to another) doesn’t always stop criminals from pwning the user.

The vulnerability was fixed weeks ago so why would Microsoft want to tear it apart in such detail?

Perhaps to make a point about throwing stones in glasshouses after a period in which the company has received a string of similar criticisms from Google’s Project Zero team.

Only days ago, Google’s Mateusz Jurczyk laid into Microsoft over its alleged prioritisation of Windows 10 patches over those for older versions of the OS.

In May his colleague Tavis Ormandy took to Twitter to talk up a “crazy bad” RCE vulnerability affecting Windows Defender which, as it happens, Microsoft fixed only days later.

Worst of all was February’s disclosure by Jurczyk of a vulnerability in Windows he felt the company was taking too long to patch but which, he said, Google had a responsibility to tell the world about under its 90-day disclosure policy.

The difference of opinion over what constitutes responsible disclosure has turned into a particular bone of contention. As Microsoft makes a point of saying:

We responsibly disclosed the vulnerability that we discovered along with a reliable RCE exploit to Google on September 14, 2017.

Rubbing salt in the wound, Microsoft used its new MSRD Azure “fuzzing” platform to find it, perhaps subtly mocking Google’s enthusiasm for spotting flaws using the same technique.
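
MSRD itself is a hosted service, but the underlying idea of mutation fuzzing is simple enough to sketch. The following toy fuzzer – a generic illustration, not how MSRD or Google’s infrastructure works – randomly flips bits in a seed input and records any sample that makes the code under test raise an exception:

```python
# Bare-bones mutation fuzzer: flip random bits in a seed input and save
# any sample that crashes the code under test for later triage.

import random

def mutate(data: bytes, flips: int = 8) -> bytes:
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(parse, seed: bytes, iterations: int = 10_000):
    """Run `parse` (any callable taking bytes) on mutated copies of `seed`."""
    crashes = []
    for i in range(iterations):
        sample = mutate(seed)
        try:
            parse(sample)
        except Exception as exc:  # crash candidate worth triaging
            crashes.append((i, sample, repr(exc)))
    return crashes
```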

It seems unlikely that a truce will be called in this head-to-head any time soon. Google will continue hammering Microsoft for taking too long to fix flaws while Microsoft will shoot back that Google isn’t immune to security woes of its own.

For Microsoft and Google users, this is all good. Not that long ago, it seemed that the software industry lacked urgency when it came to acknowledging and fixing vulnerabilities. If that complacency is melting away, it does no harm for big companies to help the thaw by taking each other to task.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/G878w0_cWwg/

Wowee. Look at this server. Definitely keep critical data in there. Yup

Oct
23

Israel-based Illusive Networks claims that its approach of planting poison-pill servers in a network can detect incoming attacks faster than any other method.

At the startup’s Tel-Aviv office, CEO and founder Ofer Israeli told a visiting press group that his technology is a post-breach mechanism. It automatically learns the topology of a customer’s network and plants details of phoney servers and shared resources.

In a typical attack, a hacker might penetrate the network and gain privileges needed to move from node to node. Then they will move across the network to identify the target’s location.

Illusive Networks places extra network destinations and shares inside a server’s deep data stores. An attacker lands on a decoy and looks where to go next, finding a mix of real and phoney destinations, which all look genuine.

With enough fake destinations planted, attackers will eventually land on one or more of them. As soon as they do, the software knows it’s a real penetration attempt and alerts network managers so that a response team can deal with the attack.

Real users do not see the fake network addresses as they are planted deep in a server’s system data stores and will only be accessed by attackers looking for network topology data so as to progress their attack.
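
Illusive’s product is proprietary, but the underlying canary principle is easy to demonstrate. Here’s a toy Python decoy – our sketch, nothing to do with Illusive’s code – that listens on a port no legitimate user has any reason to touch, so any connection at all is treated as a signal of lateral movement:

```python
# Toy decoy listener: nothing legitimate should ever connect here, so any
# hit is logged as a likely sign of an intruder probing the network.

import logging
import socketserver

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

class DecoyHandler(socketserver.BaseRequestHandler):
    def handle(self):
        src_ip, src_port = self.client_address
        logging.info("DECOY HIT from %s:%d", src_ip, src_port)
        # A real deployment would raise an alert (SIEM, pager) here.

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 4445), DecoyHandler) as srv:
        srv.serve_forever()
```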

It works on-premises or in a public cloud. Israeli said: “We are deployed across a bank which is completely a cloud bank.”

The software can work on Windows servers and workstations, Macs, and Linux servers but not Unix ones, although Unix support is coming. There is a recent mainframe protection product which involves planting deception sites around mainframes rather than working in the mainframes directly.

The software does not work on network switches but will do so in future. Cisco is a strategic investor.

Illusive can provide risk analysis services. “We can provide risk analysis of attacks to organisations so they can respond appropriately,” Israeli said. “We can show which attacks are closest to your critical data and prioritise them.”

Business is picking up. The firm had a run rate in the high single-digit millions in 2016 and has grown fast since then. Its business model is based on annual subscriptions.

There are around 65 employees in Israel and the US, and the firm has taken in just over $30m in funding since it was founded in 2014. Citibank and Microsoft are also strategic investors.

Business is particularly good in the banking and finance sector, with the Bangladesh SWIFT attack acting as a wake-up call.

“It’s an ongoing thing,” Israeli said. “Companies will never be safe. Attackers are always developing new methods.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/illusive_networks_decoy_server_software/

Phone crypto shut FBI out of 7,000 devices, complains chief g-man

Oct
23

The FBI has been locked out of almost 7,000 seized mobile phones thanks to encryption, director Christopher Wray has said.

Speaking at the International Association of Chiefs of Police conference in Philadelphia in the US, Wray lamented that device encryption kept the g-men out of “more than 6,900, that’s six thousand nine hundred, mobile devices … even though we had the legal authority to do so.”

“It impacts investigations across the board: narcotics, human trafficking, counterterrorism, counterintelligence, gangs, organized crime, child exploitation,” he added.

Device encryption, where a mobile phone encrypts information stored on it, is a useful security measure in cases where phones are stolen. However, if a device owner arrested by police refuses to decrypt it, cops would have to crack the device to read messages stored on it.

The problem does not arise in the UK, where it is a criminal offence to refuse to give your password to State investigators.

Wray later added in his speech: “I get it, there’s a balance that needs to be struck between encryption and the importance of giving us the tools we need to keep the public safe,” according to the BBC.

Device encryption is a separate thing from end-to-end encryption, which protects messages and phonecalls from being intercepted and read over the air as they travel between the device and the server. Amber Rudd, the Home Secretary, has repeatedly attacked technology companies for their use of end-to-end encryption, which makes it far harder for small State agencies to use their Investigatory Powers Act powers to spy on alleged miscreants.

In 2014 British police complained that another phone security measure, the ability to remotely wipe the device, was being used to cleanse phones seized in criminal investigations before police could read them. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/23/fbi_director_6900_phones_encryption/