STE WILLIAMS

Spies still super upset they can’t get at your encrypted comms data

The Five Eyes nations have told the tech industry to help spy agencies by creating lawful access solutions to encrypted services – and warned that governments can always legislate if they don’t.

The UK, US, Canada, Australia and New Zealand – which have a long-standing intelligence agreement – met in Australia this week.

In an official communiqué on the confab, they claim that their inability to lawfully access encrypted content risks undermining democratic justice systems – and issue a veiled warning to industry.

The group is careful to avoid previous criticisms about their desire for backdoors and so-called magical thinking – saying that they have “no interest or intention to weaken encryption mechanisms” – and emphasise the importance of privacy laws.

But the thrust of a separate framework for their plans, the Statement of Principles on Access to Evidence and Encryption, will do little to persuade anyone that the agencies have changed their opinions.

“Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute,” the document stated.

Although governments “should recognize that the nature of encryption is such that there will be situations where access to information is not possible”, these situations “should be rare”.

The problem the Five Eyes have is that the principles that allow government agencies to search homes or personal effects don’t give them the ability to access the content of encrypted data.

The group described this situation as “a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake”.

Ever keen to amp up the threat this poses to society, it added: “Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.”

The principles set out in the Five Eyes’ statement seek to stress that law enforcement’s inability to access the content of “lawfully obtained data” is the responsibility of everyone.

“Law enforcement agencies in our countries need technology providers to assist with the execution of lawful orders,” the group said.

The agencies also pointed out that tech firms, carriers and service providers are also subject to the laws of the land – and if they don’t cooperate willingly, well, they have ways of making them.

“The Governments of the Five Eyes encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries,” it said.

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

Providers can create customised solutions that are tailored to their individual system architectures, it added, but governments should not favour a particular technology.

The communiqué also makes the common complaint that the “anonymous, instantaneous, and networked nature of the online environment has magnified” the threats of terrorism, child abuse, extremism and disinformation.

Again, tech firms should “take more responsibility for content promulgated and communicated through their platforms and applications”, with another separate statement setting out the action industry needs to take.

This includes development of capabilities to prevent uploading of illicit content, to carry out “urgent and immediate” takedowns, and more investment in human and automated detection capabilities.

Major firms should also set industry standards and help smaller firms deploy these capabilities on their own platforms.

Elsewhere, the communiqué re-committed the five nations to cooperate on terrorism, cyber security and immigration through intelligence sharing and the development of new sources of data. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/31/five_eyes_2018_meeting_encryption_terrorist_content/

Boffins are building an open-source secure enclave on RISC-V

At some point this fall, a team of researchers from MIT’s CSAIL and UC Berkeley’s EECS aim to deliver an initial version of an open source, formally verified, secure hardware enclave based on RISC-V architecture called Keystone.

“From a security community perspective, having trustworthy secure enclaves is really important for building secure systems,” said Dawn Song, a professor of computer science at UC Berkeley and founder and CEO of Oasis Labs, in a phone interview with The Register. “You can say it’s one of the holy grails in computer security.”

Song just recently participated in a workshop to advance Keystone, involving technical experts from Facebook, Google, Intel, Microsoft, UC Berkeley, MIT, Stanford and the University of Washington, among other organizations.

Keystone is intended to be a component for building a trusted execution environment (TEE) that’s isolated from the main processor to keep sensitive data safe. TEEs have become more important with the rise of public cloud providers and the proliferation of virtual machines and containers. Those running sensitive workloads on other people’s hardware would prefer greater assurance that their data can be kept segregated and secure.

There are already a variety of security hardware technologies in the market: Intel has a set of instructions called Software Guard Extensions (SGX) that address secure enclaves in its chips. AMD has its Secure Processor and SEV. ARM has its TrustZone. And there are others.

But these are neither as impenetrable as their designers wish nor as open to review as cyber security professionals would like. The recently disclosed Foreshadow side-channel attack affecting Intel’s SGX is a case in point.

That’s not to say an open source secure enclave would be immune to such problems, but an open specification with source code would be more trustworthy because it could be scrutinized.

“All these solutions are closed source, so it’s difficult to verify the security and correctness,” said Song. “With the Keystone project, we’ll enable a fully open source software and hardware stack.”

RISC-V business

In addition, the RISC-V architecture looks to be less vulnerable to side-channel attacks. As the RISC-V Foundation said following the disclosure of the Spectre and Meltdown vulnerabilities earlier this year, “No announced RISC-V silicon is susceptible, and the popular open-source RISC-V Rocket processor is unaffected as it does not perform memory accesses speculatively.”

(The RISC-V Berkeley Out-of-Order Machine, or “BOOM” processor, supports branch speculation and branch prediction, so immunity to side-channel attacks should not be assumed.)


RISC-V is relatively new to the scene, having been introduced back in 2010. Established chipmakers like ARM, however, view it as enough of a threat to attack it.

But it’s not yet clear whether makers of RISC-V hardware will go all-in on openness. Ronald Minnich, a software engineer at Google and one of the creators of coreboot, recently noted that HiFive RISC-V chips contain proprietary pieces.

“I realize there was a lot of hope in the early days that RISC-V implied ‘openness’ but as we can see that is not so,” he wrote in a mailing list message in June. “…Open instruction sets do not necessarily result in open implementations. An open implementation of RISC-V will require a commitment on the part of a company to opening it up at all levels, not just the instruction set.”

RISC-V may end up being a transition to more secure chip designs that incorporate the lessons of Spectre, Meltdown and Foreshadow. According to Song, there was discussion at the workshop about “whether we can build a new hardware architecture from ground up.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/31/keystone_secure_enclave/

Cock-ups, rather than conspiracies, top self-reported data breaches

Data breaches at organisations that ‘fess up to the UK’s data protection watchdog are about seven times more likely to be caused by human error than hackers.

According to data released under the Freedom of Information Act, 2,124 incidents reported by organisations in 2017-18 could be pinned on mistakes or incompetence. Only 292 were classed as having a cyber element.
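The headline “seven times” claim can be checked with a couple of lines of arithmetic, using the ICO figures quoted above:

```python
# Human-error vs cyber incidents self-reported to the ICO in 2017-18,
# per the FOI data obtained by Kroll.
human_error = 2124
cyber = 292

ratio = human_error / cyber
print(f"Human error outnumbers cyber incidents roughly {ratio:.1f} to 1")
```

The exact figure is about 7.3 to 1, which the article rounds down to "about seven times".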

The figures, obtained by security biz Kroll, are on self-reported incidents from organisations to the Information Commissioner’s Office, combined with data from annual reports.

Overall, the ICO has said (PDF) there were 3,156 self-reported data breaches in 2017-18 – up 29 per cent on the previous year and up 19.3 per cent on 2015-16.

The increase is due to a mix of greater awareness of what constitutes a data breach, and the fact that, since May this year, organisations are required to report serious data leaks under the General Data Protection Regulation.

The largest number of reports came from the healthcare sector, where breach reporting was already mandatory, with Kroll revealing there were some 1,214 reports made during 2017-18.

This was followed by general business (362), education and childcare (354) and local government (328).

In addition to self-reported incidents, the ICO also has to probe complaints from elsewhere. In 2017-18, it received 21,019 data protection concerns.

After investigating, the ICO can fine organisations, and an analysis by The Register earlier this year found that the mode and median fines for breaches of the Data Protection Act were £70,000 and £85,000 respectively.

The highest penalty issued for a DPA breach to date is £400,000; however, the ICO has threatened to fine Facebook £500,000 for its part in the Cambridge Analytica saga, although that penalty has yet to materialise.

According to the Kroll analysis, the most common cock-ups were people sending data to the wrong recipient by email (447 reports) or snail mail (441 reports), followed by the loss or theft of paperwork, which accounted for 438 incidents.

Failing to redact data resulted in 256 mea culpas, while leaving data in an insecure location was reported 164 times.

Everyone’s favourite technical hitch – staffers’ inability to use the bcc function in emails – was responsible for 147 breaches, closely followed by the 133 equally facepalm-inducing incidents where an unencrypted device was lost or stolen.

Cyber break-ins accounted for fewer reports than any of these: unauthorised access resulted in 102 breach reports, malware and phishing accounted for 53 and 51 respectively, while 33 reports were attributed to ransomware, 20 to brute-forcing and two to denial-of-service attacks. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/04/cckup_over_conspiracy_tops_selfreported_data_breaches/

India’s ISPs show they have good MANRS, sign up to Internet Society’s routing security scheme

India’s ISPs have agreed as a bloc to join The Internet Society’s MANRS route integrity programme.

MANRS stands for Mutually Agreed Norms for Routing Security, and was launched in 2014 to try and solve some of the Border Gateway Protocol’s most pressing problems. In essence, the programme asks its members to play their part in stopping people accidentally or deliberately “black-holing” traffic with dodgy route advertisements.

Since creating the programme, The Internet Society (ISOC) has faced the uphill battle of pitching the initiative to network operators, so an agreement with the Internet Service Providers Association of India (ISPAI) is a big step forward for MANRS.

In its announcement, ISOC described its memorandum of understanding with ISPAI, signed by Rajnesh Singh of ISOC and ISPAI president Rajesh Chharia, as “a step towards taking immediate action to improve the resilience and security of the routing infrastructure in India”.

The four pillars of MANRS are that providers filter route advertisements to catch configuration errors; block traffic with spoofed IP addresses (often used to try and conceal DDoS attacks); coordinate their communications so if a bad BGP advertisement gets through it’s stopped quickly; and publish routing data to make it easier to validate each other’s information.
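As a toy sketch of the anti-spoofing pillar (the prefixes below are documentation ranges invented for illustration; real providers push equivalent filters into router ACLs, per the BCP38 approach), source-address validation amounts to a simple membership test:

```python
import ipaddress

# Hypothetical prefixes this provider has assigned to its customers
# (invented for the example).
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def permit_source(src_ip: str) -> bool:
    """Permit a packet only if its source address belongs to a prefix
    the customer is actually entitled to use."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(permit_source("203.0.113.7"))   # from an assigned prefix
print(permit_source("192.0.2.99"))    # spoofed source, dropped at the edge
```

A provider that applies this check at its edge can't stop attacks, but it can stop its own customers being used as the launch pad for spoofed-source DDoS traffic.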

Singh told the Economic Times he believed MANRS also deserves to be incorporated into the academic curriculum, so “when the youth join the workforce, they will know what measures to take and would not be vulnerable to internet threats”.

In July, academic networks in Europe (GEANT) and Australia (AARNet) signed on with the MANRS programme. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/09/04/indian_isps_join_isoc_manrs_programme/

Lean, Mean & Agile Hacking Machine

Hackers are thinking more like developers to evade detection and are becoming more precise in their targeting.

It’s time again for another quarterly trek into the wilds of the cyber-threat landscape. As security practitioners work to put themselves in the shoes of hackers to better anticipate where attacks will be coming from, these malicious actors are starting to think more like developers to evade detection.

And lately, they are more precise in their targeting, relying less on blanket attempts to find exploitable victims. How can IT security teams keep pace with the agile development cybercriminals are employing and pinpoint the recycled vulnerabilities being used? Fortinet’s latest Global Threat Landscape Report sheds light on current criminal activity and suggests how organizations can stay a step ahead.

Agile Attacks
Malware authors have long relied on polymorphism — the ability of malware to constantly change its own code as it propagates — to evade detection, but over time, network defense systems have made improvements that make them more difficult to circumvent. Never ones to rest on their laurels, malware authors recently have turned to agile development to make their malware more difficult to detect and to quickly counter the latest tactics of anti-malware products. Addressing these emerging polymorphic swarm attacks requires a hive defense, where all of your deployed security components can see and communicate with each other, and then work in a cooperative fashion to defend the network.
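A harmless toy example of why polymorphism defeats signature matching – no malware here, just a payload string re-encoded with a fresh XOR key on each “generation”:

```python
import hashlib
import os

# The same payload, repacked with a random 8-byte XOR key each time,
# presents a different byte signature to a hash-based detector even
# though the decoded content is identical.
payload = b"EXAMPLE-PAYLOAD"

def repack(data: bytes) -> bytes:
    """Prepend a random 8-byte key and XOR the payload with it."""
    key = os.urandom(8)
    return key + bytes(b ^ key[i % 8] for i, b in enumerate(data))

def unpack(blob: bytes) -> bytes:
    """Recover the original payload from a repacked blob."""
    key, body = blob[:8], blob[8:]
    return bytes(b ^ key[i % 8] for i, b in enumerate(body))

a, b = repack(payload), repack(payload)
print(hashlib.sha256(a).hexdigest())   # two different signatures...
print(hashlib.sha256(b).hexdigest())
print(unpack(a) == unpack(b) == payload)   # ...for one underlying payload
```

Every generation hashes differently, which is why static signature databases alone can't keep up and defences need to share behavioural intelligence instead.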

Cybercriminals are using not only agile development but also automation to advance their attacks. On the rise is malware written entirely by machines, based on automated vulnerability detection, complex data analysis, and automated development of the best possible exploit for the unique characteristics of a given weakness. Organizations must counter with automation of their own, using machine learning to understand and even predict bad actors’ latest exploits so they can stay ahead of these advanced threats.

A prime example of malicious agile development is the 4.0 version of GandCrab.

GandCrab
The actors behind GandCrab are the first ransomware group to accept the Dash cryptocurrency. They appear to use the agile development approach to beat competitors to market, dealing with issues and bugs as they arise. Another unique aspect of GandCrab is its ransomware-as-a-service model, based on a 60/40 profit split between the developers and the criminals using their services. Lastly, GandCrab uses .BIT, a top-level domain unrecognized by ICANN that is served via the Namecoin cryptocurrency infrastructure, and uses various name servers to help resolve DNS and redirect traffic to it. GandCrab 2.x versions were most prevalent during the second quarter, but by the quarter’s close v3 was in the wild, and the v4 series followed in early July.

We noticed that when a .lock file named with eight hexadecimal characters is present in the system’s COMMON APPDATA folder, files will not be encrypted. This usually happens after the malware determines that the keyboard layout is Russian, one of several techniques it uses to identify computers in Russian-speaking countries. We speculate that adding this file could serve as a temporary defence. Based on our analysis, industry researchers created a tool that prevents files from being encrypted by the ransomware. Unfortunately, GandCrab 4.1.2 was released a day or two later, rendering the lock file useless.
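A hedged sketch of that “vaccine” idea follows. The real lock-file name was derived by the malware from machine-specific data, so the randomly generated name and the %PROGRAMDATA% lookup below are purely illustrative assumptions, not the researchers' actual tool:

```python
import os
import secrets
import tempfile

# Illustrative only: GandCrab derived the eight-hex-character name itself,
# so a random name would not actually have matched. %PROGRAMDATA% is
# Windows' COMMON APPDATA folder; we fall back to a temp directory so the
# sketch runs on any platform.
appdata = os.environ.get("PROGRAMDATA", tempfile.gettempdir())
lock_name = secrets.token_hex(4) + ".lock"   # eight hex chars + extension
lock_path = os.path.join(appdata, lock_name)

with open(lock_path, "w"):
    pass   # an empty marker file is all the ransomware's check looked for

print(lock_path, os.path.exists(lock_path))
```

As the article notes, this class of defence lasted only until the next release: v4.1.2 simply stopped honouring the lock file.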

Valuable Vulnerabilities
Cybercriminals are becoming smarter and faster in how they leverage exploits. In addition to using dark-net services such as malware-as-a-service, they are honing their targeting to focus on the severe exploits that will generate the biggest bang for the buck. The reality is that no organization can patch every vulnerability fast enough; instead, they must become strategic and use threat intelligence to focus on the ones that matter.

To keep pace with the agile development methods cybercriminals are using, organizations need advanced threat protection and detection capabilities that help them pinpoint the vulnerabilities currently being targeted. Examined through the lens of prevalence and volume of related exploit detections, only 5.7% of known vulnerabilities were exploited in the wild, according to our research. If the vast majority of vulnerabilities will never be exploited, organizations should take a far more proactive and strategic approach to vulnerability remediation.
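That prioritisation logic can be sketched in a few lines (the CVE identifiers, scores and exploited-in-the-wild flags below are invented for illustration):

```python
# Rank remediation work: known-exploited vulnerabilities first,
# then by descending CVSS severity within each group.
vulns = [
    {"cve": "CVE-2018-0001", "cvss": 9.8, "exploited_in_wild": False},
    {"cve": "CVE-2018-0002", "cvss": 7.5, "exploited_in_wild": True},
    {"cve": "CVE-2018-0003", "cvss": 5.3, "exploited_in_wild": False},
    {"cve": "CVE-2018-0004", "cvss": 8.1, "exploited_in_wild": True},
]

def triage(items):
    # Tuple key: False (exploited) sorts before True, then higher CVSS first.
    return sorted(items, key=lambda v: (not v["exploited_in_wild"], -v["cvss"]))

order = [v["cve"] for v in triage(vulns)]
print(order)
```

Note that the lower-severity but actively exploited issues outrank the 9.8-scored one that nobody is attacking, which is the whole point of intelligence-driven patching.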

Painting a New Security Landscape
Staying ahead of these threats requires advanced threat intelligence that is shared at speed and scale across all of the security elements, and sandboxing that provides layered, integrated intelligence. This approach shrinks the necessary windows of detection and provides the automated remediation required for the multivector exploits of today. The Cyber Threat Alliance, a group of security companies that shares advanced threat information, was created for this reason.

While many organizations are working hard to collect as much data as they can from a variety of sources — including their own — much of the work in processing, correlating, and converting it into policy is still done manually. This makes it very difficult to respond to an active threat quickly. Ideally, the processing and correlation of threat intelligence that results in effective policy needs to be automated.

Effective cybersecurity also requires diligence in patching. With the data on which vulnerabilities are currently being exploited, IT security teams can be strategic with their time and harden, hide, isolate or secure vulnerable systems and devices. If they are too old to patch, replace them.

Network segmentation — and micro-segmentation — is a must, as well. These steps ensure that any damage caused by a breach remains localized. In addition to this passive form of segmentation, deploy macro-segmentation for dynamic and adaptive defense against the never-ending onslaught of new, intelligent attacks.

Cybercriminals are relentless, making use of and adapting the latest technology to ply their trade. IT security teams can beat them at their own game by using the information and recommendations outlined above.


Derek Manky formulates security strategy with more than 15 years of cyber security experience behind him. His ultimate goal is to make a positive impact in the global war on cybercrime. Manky provides thought leadership to industry, and has presented research and strategy …

Article source: https://www.darkreading.com/endpoint/lean-mean-and-agile-hacking-machine/a/d-id/1332691?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mozilla Taps Former Google Exec as it Rethinks Privacy

News of the recent hire closely follows Mozilla’s decision to block trackers in its Firefox browser by default.

Mozilla has found a new global policy chief in Alan Davidson, the former Google policy chief who also managed Internet policy and security under the Obama administration. He’ll report to chief operating officer Denelle Dixon.

Davidson will oversee the browser company’s efforts involving policy, trust, and security, with responsibilities extending to compliance and investigations. His arrival comes at a time when Mozilla is ramping up efforts to defend the openness of the Web and privacy of its users.

It’s also the latest privacy-focused move coming from Mozilla. Last week the company announced its Firefox browser will block Web trackers by default and give users more control over the information they share with sites they access. Firefox will strip cookies and block storage access from third-party tracking content, it said.

The decision is part of a broader effort intended to protect users’ information, according to Mozilla. In addition to eliminating cross-site tracking, it plans to improve page load performance and mitigate other harmful online practices, such as cryptomining and fingerprinting users.



Article source: https://www.darkreading.com/risk/mozilla-taps-former-google-exec-as-it-rethinks-privacy/d/d-id/1332726?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Ads cracks down on tech support scammers

Remember Google’s boast earlier this year that it took down 3.2 billion bad ads during 2017?

A few months on, the company has now admitted that its systems for detecting one especially tenacious form of malevolent ad – those pushing tech support scams – need a lot more help.

In an announcement late last week, Google said that in future, any company wanting to advertise technical support services would have to pass manual verification checks first.

Assuming this resembles Google Ads’ established advanced verification system, this means that tech support is about to join other abused services such as payday loans and locksmiths on the league table of suspicion.

Presumably, Google has been using some form of automated ad checking, but this hasn’t worked. It’s not hard to imagine how this could go wrong. Wrote Google’s director of product policy, David Graff:

As the fraudulent activity takes place off our platform, it’s increasingly difficult to separate the bad actors from the legitimate providers.

Which is to say that when Google accepts paid ads, it has no quick way of knowing whether they’re honest because users who fall victim to scammers can’t feed that fact back to them.

Tech support fraudsters also change their paid ads regularly, which makes keeping up with them a frustrating game of whack-a-mole that’s difficult to win.

What’s a little odd is how long it seems to have taken the world’s leading search engine to wake up to the evidence that something has been going wrong with many of these ads.

Scams manifest in different ways, the simplest of which is that users running searches for phrases related to computer tech support turn up bad ads, which they trust simply because – yes – they’re on Google.

Another is to use Google Ads to push people entering unrelated search terms to promoted links that throw up bogus malware warnings in an effort to fuel even more tech support chicanery.

Will verification winkle the fraudsters out of paid search placement? It might, but only for those scammers who use ads in the first place.

Advanced verification involves jumping through hoops such as having to provide evidence of a business’s location, status and licenses, and perhaps trading history. This is bound to make it harder for criminals because at the very least it introduces some delay into getting ads up.

Unfortunately, there are other ways to trick people into buying unnecessary tech support such as malware infection, pop-ups inside browsers, and even cold calls from people claiming they’re working for a big company such as Microsoft.

There’s also an argument that the new protections don’t go far enough, as they don’t extend to other dubious online advertisers – for example, those duping travellers into paying inflated fees for ESTA visa checks to visit the US, a service that costs just $14 via the official site.

These have been a pest ever since the ESTA visa system started nearly a decade ago and yet Google continues to take money for ads pushing them despite several campaigns asking it to change its terms and conditions.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eFXyaF3e5a8/

Hollywood accuses itself of piracy

“Hey!” Hollywood studios are saying to those darn IMDb “pirates”: “Those listings of our own work look suspiciously like our own work!”

As Torrent Freak reports, it’s not Sony Pictures Television, National Geographic or Columbia Pictures’ copyright lawyers that have spontaneously developed dementia, per se. Rather, it’s the armies of “largely automated” bots they deploy each day to scour the internet for references to pirated content.

The result: a slew of bone-headed DMCA notices have been sent out to perfectly legitimate sites, including IMDb, which stands for Internet Movie Database and contains a wealth of information about films, TV programs, video games, and internet streams, including cast, production crew and personnel biographies, plot summaries, trivia, and fan reviews and ratings. It is, in short, the holy scriptures of film, yet because of buggy bots, it’s being treated as a copyright-infringing ragamuffin.

After the bots spot piracy, they report the links to various online services, including Google. It works fine, except when it doesn’t.

Last month, bots with bugs started to wheeze. As Torrent Freak reported, even its own publication was targeted with takedown notices, along with several other sites that cover censorship-related issues. Multiple Hollywood studios have thus been inadvertently asking Google to remove IMDb listings of their own work, according to the publication.

Torrent Freak, along with the also-targeted Electronic Frontier Foundation (EFF), says that a “content protection” service called Topple Track was at the heart of the problem. At least, it was at the heart of the first wave of these frivolous notices, until it was forced to cease operations last month. Now, however, a new, equally flawed reporting service appears to have picked up where Topple Track sputtered off.

Torrent Freak gave this example of a takedown notice sent to perfectly legitimate sites: it went to Google on behalf of Melbourne artist Gamble Breaux, supposedly listing sites that linked to infringing copies of her song This Time.

Yet as the publication notes, most, if not all, of the links in the notice are “completely unrelated.” That goes for a Torrent Freak article on pirate site blocks in Denmark, and a Danish news report on the rise of illegal streaming.

As the EFF noted in an article about the issue last month, Topple Track has been peppering a variety of innocent sites with false infringement claims: besides the digital rights group itself, it’s seen bogus notices sent to news organizations, law professors and musicians.

Symphonic Distribution, which runs Topple Track, apologized to the EFF, explaining that “bugs within the system” had resulted in whitelisted domains receiving the notices by mistake. Symphonic said that it’s issued retraction notices and, as of 10 August, was working to resolve the issue.

The EFF’s response: Meh! We’ll believe it when we see it, given that your technology sucks:

While we appreciate the apology, we are skeptical that its system is fixable, at least via whitelisting domains. Given the sheer volume of errors, the problem appears to be with Topple Track’s search algorithm and lack of quality control, not just with which domains they search.

The EFF calls Topple Track “a poster child for the failure of automated takedown processes.”

It linked to a slew of improper notices, the targets of which have included Daniel Nazer, EFF senior staff attorney and Mark Cuban Chair to Eliminate Stupid Patents – and the list goes on and on.

Nazer said that the EFF “cannot comprehend how Topple Track came to target EFF or Eric Goldman” on behalf of an artist going by the name of “Luc Sky.”

Torrent Freak, however, can. On Sunday, it reported that upon taking a close look at the reports, it appears that most are going to “classic pirate sites.” But the Topple Track bug seems to be switching certain links to the “infringing field” in some circumstances:

The IMDB links are mostly used as a reference to the original content. However, it appears that due to a bug in the system the IMDb links move to the “infringing content” field when there are no pirate links to report, as shown below.
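A hypothetical reconstruction of that failure mode (the field names and helper below are invented, not Topple Track’s actual code) shows how a single careless fallback can misfile reference links as infringing:

```python
# A notice generator that blindly fills its "infringing" field from
# whatever links it has left will report a harmless reference link,
# such as an IMDb page, whenever no pirate links were actually found.
def build_notice(reference_links, pirate_links):
    return {
        # BUG: `or` falls back to the reference links instead of an
        # empty list when there is nothing to report.
        "infringing": pirate_links or reference_links,
        "reference": reference_links,
    }

notice = build_notice(["https://www.imdb.com/title/tt0000001/"], [])
print(notice["infringing"])   # the IMDb reference link, misfiled
```

The fix is trivial (report an empty list, or skip the notice entirely), which is what makes the weeks of bogus DMCA notices so hard to excuse.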

Last month, Topple Track disabled the tool that was spewing the faulty takedown notices. The problem, however, turns out to be more widespread than that: Torrent Freak says that the past few weeks have seen a bunch of DMCA notices going out over IMDb links, but this time around they’re coming from a UK-based reporting agency called Entura International. The publication reached out to Entura to report the issue and request a comment, but it hadn’t heard back as of Monday.

Fortunately, most of IMDb’s links are whitelisted, so Google hasn’t removed any of the inaccurately reported sites. But as the EFF points out, even the most ridiculous takedown notices are no laughing matter: they have serious consequences.

Nazer:

Many site owners will never even learn that their URL was targeted. For those that do get notice, very few file counternotices. These users may get copyright strikes and thereby risk broader disruptions to their service. Even if counternotices are filed and processed fairly quickly, material is taken down or delisted in the interim. In Professor Goldman’s case, Google also disabled AdSense on the blog post until his counternotice became effective.

As for smaller sites? Some that lack the clout of whitelisted IMDb pages might be taken down for no reason whatsoever, as Torrent Freak notes.

One takeaway from the mess: bots might seem like nothing but brainless little snippets of code – but like any code, they don’t need brains to wreak havoc.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Ba2-EKXrGWs/

Governments demand companies allow access to data, or else

A decades-old alliance of national intelligence partners promised to get at encrypted data last week, whether tech companies helped them or not.

Australia, Canada, New Zealand, the United Kingdom and the United States released a joint statement calling on tech companies to help them access data when authorised by the courts – or else.

The alliance of countries is known as the Five Eyes, and it was formed after the Second World War as a collaborative effort to share intelligence information. The group released an Official Communiqué at a meeting last week, outlining several broad goals. One of these goals involved increasing government powers to target encrypted data when the courts authorised it (a concept known as ‘lawful access’).

The group went into more depth in its Statement of Principles on Access to Evidence and Encryption, released at the same time. The document starts off conciliatory enough, arguing that encryption is necessary:

Encryption is vital to the digital economy and a secure cyberspace, and to the protection of personal, commercial and government information.

Then came the common refrain: You can have too much of a good thing.

However, the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security.

The same encryption that protects legitimate information is also protecting criminals, the statement said, adding that while privacy laws are important, the authorities need a way to access communications when a court has allowed it. The countries’ reasoning here is that the same principles have applied to searches of homes and other physical spaces for years. They want the same warrant principles to apply in cyberspace.

The unified governments set out three principles. One reinforced the rule of law, explaining that governments must follow due process when accessing data.

Assuming they do that, though, another principle says that technology product and service providers – including carriers, device manufacturers or over-the-top service providers – have a responsibility to help governments access the data that they need. These companies should assist governments in getting access to data, the statement said, adding that situations where governments cannot access information with the courts’ consent should be rare.

The final principle has the stinger. Entitled ‘Freedom of choice for lawful access solutions’, it encourages companies to “voluntarily establish lawful access solutions to their products and services that they create or operate in our countries”. But what if they don’t volunteer?

Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.

So there it is. Companies must help governments gain lawful access to data, or else.

The Five Eyes’ approach to lawful access appears conflicted. On the one hand, its Communiqué says:

The five countries have no interest or intention to weaken encryption mechanisms.

On the other hand, its statement on encryption appears to advocate exactly that. If encryption is removed in transit to give the Five Eyes access to data, then that encryption is, by definition, weakened.

No ungoverned spaces

The other focus for Five Eyes was on online spaces (think Facebook, YouTube and suchlike). It advocated for a “free, open, safe and secure internet”. This means stopping wrongdoers online, including terrorists and child abusers. It also singled out foreign interference and disinformation.

In its Statement on Countering the Illicit Use of Online Spaces, it said that it had asked tech leaders to help it look at this problem but came up empty-handed. So it outlined a set of goals anyway.

It urged the tech sector to figure out ways to prevent illegal content from being uploaded, to take it down more quickly once identified, and to review content already online as well. Tech companies should share hashes of this material more readily to co-operate on takedowns, it said, adding that the governments would also share these hashes among themselves and with the tech sector.
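The hash-sharing scheme described here boils down to a simple idea: each party computes a digest of known illegal content and circulates only the digest, so others can match uploads against the list without the content itself ever being exchanged. A minimal sketch of exact-match filtering (the blocklist entry below is an invented placeholder, not real shared data):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying a piece of content."""
    return hashlib.sha256(data).hexdigest()

# A shared blocklist holds only digests, never the underlying content.
shared_blocklist = {content_hash(b"known-bad-content")}

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the shared digest list."""
    return content_hash(upload) in shared_blocklist

print(should_block(b"known-bad-content"))  # True: digest matches the list
print(should_block(b"harmless photo"))     # False: no match
```

One caveat: a cryptographic hash like SHA-256 only catches byte-for-byte copies, so real matching systems typically rely on perceptual hashes, which let slightly altered copies of the same image or video still match.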

The five governments will also be watching the tech industry and reporting back on a quarterly basis, the statement concluded.

This more aggressive, official Five Eyes stance on governmental control of and access to internet information has been in the works for a while. Australia has been particularly outspoken on the issue.

Recently ousted Australian Prime Minister Malcolm Turnbull called directly on Five Eyes for more action in June 2017, in a speech to the Australian Federal Council:

The internet cannot be an ungoverned space. We cannot continue to allow terrorists and extremists to use the internet and the big social media and messaging platforms – most of which are hosted in the United States I should say – to spread their poison.

Australia recently announced its own stricter rules on lawful access, following the United Kingdom’s lead.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cExc63pItgQ/

How refusing to give police your Facebook password can lead to prison

A 24-year-old murder suspect was sentenced to 14 months in prison on Friday for refusing to hand over his Facebook account password to detectives who are investigating the death of 13-year-old schoolgirl Lucy McHugh.

As The Sun reports, Lucy had been missing for two days last month before her body was found in the woods near a sports center in Southampton, UK. She was stabbed to death.

Stephen Nicholson, a friend of the family who’d been staying with them, was allegedly in contact with Lucy the morning of her disappearance. Police took him into custody and asked him – twice – for his password so they could check out the alleged conversation and whatever other content might help the investigation.

Nicholson has been jailed not for the murder, but for his refusal to cooperate with the detectives and let them into his account.

On Friday, he pleaded guilty to failing to disclose access codes to an electronic device under the Regulation of Investigatory Powers Act 2000 (RIPA).

According to the Independent, Nicholson argued that giving police access to his private Facebook messages could expose information relating to cannabis.

The judge scoffed, describing the excuse as “wholly inadequate”, considering the severity of the case.

Part 3 of RIPA empowers UK authorities to compel the disclosure of encryption keys or decryption of data. Refusal to comply can result in a maximum sentence of two years’ imprisonment, or five years in cases involving national security or child indecency.

Nicholson isn’t the first to be prosecuted under RIPA for refusing to decrypt devices for British authorities. The first case, in 2009, was that of a then-33-year-old man whom the Register described as a “schizophrenic science hobbyist with no previous criminal record.” He was detained after sniffer dogs picked up the scent of a model rocket in his belongings. He was then jailed for nine months for refusing to decrypt files.

Then, in 2010, 19-year-old Oliver Drage was sentenced to four months in jail after refusing to hand over his 50-character encryption key to detectives who were investigating a child exploitation network.

At the time, Detective Sergeant Neil Fowler said that Drage’s sentence showed how serious his offense was, according to the Independent, which quoted Fowler:

Computer systems are constantly advancing and the legislation used here was specifically brought in to deal with those who are using the internet to commit crime. It sends a robust message out to those intent on trying to mask their online criminal activities that they will be taken before the courts with the ultimate sanction, as in this case, being a custodial sentence.

RIPA is one of two laws that can be used to compel password or encryption key disclosure in the UK. The second is the Terrorism Act 2000, which was used against Muhammad Rabbani. A year ago, the international director for campaign group CAGE was found guilty of withholding his PIN; he said his devices contained confidential data relating to the case of a man he had just met in Qatar, who alleged he had been tortured while in US custody.

Password disclosure in the US

In contrast with the UK’s RIPA and Terrorism Act, the US has a patchwork of laws governing password disclosure. Judges can and do order disclosure, such as in the case of a former policeman accused of storing child abuse images who is in jail indefinitely, until he lets authorities into his hard drive.

The legal landscape in the US seems to change by the minute, though. Within the past two weeks, a Court of Appeals ruled that forcing a woman to unlock her iPhone violates Fifth Amendment protection against self-incrimination, for example.

Does that mean that the US has turned the corner when it comes to compelled disclosure?

Hardly. The ongoing legal debate keeps getting swatted from one end of Fifth Amendment interpretation to the other. Is a password something we know, which would be protected, as opposed to a fingerprint, which is something we are, and hence isn’t? And are files on a phone, or content within a Facebook post, similar to paper files in a cabinet, the unlocking of which the authorities can compel?

That most recent Court of Appeals majority decision was written by Judge Paul Mathias, who hopes that Fifth Amendment protection will, indeed, cover passwords and encryption keys. He went so far as to create a blueprint “for resolving decryption requests from law enforcement authorities” and asked reviewing courts of last resort to consider following it.

Regardless of legal interpretations of UK and/or US law, it would be nice to think that the most important aspect of Lucy McHugh’s case is that justice is served.

As he serves his jail term with his password safely hidden from detectives, Stephen Nicholson will not be helping to bring anybody that justice. But as legal firm Saunders Law pointed out to the Independent, that could be a self-protecting course for him to take: if disclosing his Facebook password led to incriminating evidence, the 14 months for his RIPA offence might look like chump change compared with the sentence that evidence could bring.

The news publication printed this statement from Saunders Law:

There could be a completely disproportionate result if someone is imprisoned for not providing a password but not the crime they are originally under investigation for, of which they might be innocent.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pl-v_8uqTx0/