STE WILLIAMS

London police’s use of facial recognition falls flat on its face

A “top-of-the-line” automated facial recognition (AFR) system trialled for the second year in a row at London’s Notting Hill Carnival couldn’t even tell the difference between a young woman and a balding man, according to a rights group worker invited to view it in action.

Because yes, of course they did it again: London’s Met police used controversial, inaccurate, largely unregulated automated facial recognition (AFR) technology to spot troublemakers. And once again, it did more harm than good.

Last year, it proved useless. This year, it proved worse than useless: it blew up in their faces, with 35 false matches and one wrongful arrest of somebody erroneously tagged as being wanted on a warrant for a rioting offense.

Silkie Carlo, the technology policy officer for civil rights group Liberty, observed the technology in action. In a blog post, she described the system as showing “all the hallmarks of the very basic pitfalls technologists have warned of for years – policing led by low-quality data and low-quality algorithms”.

In spite of its lack of success, the Met’s project leads viewed the weekend not as a failure but as a “resounding success,” Carlo said, because it had come up with one solitary successful match.

Even that was undermined by sloppy record-keeping that got an individual wrongfully arrested: the AFR was accurate, but the person had already been processed by the justice system and was erroneously included on the suspect database.

The Notting Hill Carnival pulls in some 2m people to the west London district on the last weekend of August every year. Out of 454 arrested people last year, the technology didn’t tag a single one of them as a prior troublemaker.

But why let failure puncture your technology balloon? London’s Metropolitan Police went right ahead with plans to again use AFR to scan the faces of people partying at Carnival, in spite of Liberty having called the practice racist.

Studies bear out the claim that AFR is an inherently racist technology. One reason is that black faces are over-represented in face databases to begin with, at least in the US: according to a study from Georgetown University’s Center for Privacy and Technology, in certain states black Americans are arrested at up to three times the rate their share of the population would predict. A demographic’s over-representation in the database means that whatever error rate accrues to a facial recognition technology will be multiplied for that demographic.
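That multiplication is easy to make concrete. Here is a toy back-of-the-envelope model; all of the numbers below are illustrative assumptions, not figures from the Georgetown study:

```python
# Toy model: expected false matches scale with a group's share of the
# watchlist database. All numbers here are illustrative assumptions.

def expected_false_matches(scans, db_share, false_match_rate):
    """Expected false matches for one demographic group, assuming a
    false hit is proportional to the group's share of the database."""
    return scans * db_share * false_match_rate

scans = 100_000            # faces scanned over a weekend (assumed)
false_match_rate = 0.001   # per-scan false positive rate (assumed)

# A group that is 10% of the population but 30% of the database accrues
# three times the false matches its population share alone would predict.
print(round(expected_false_matches(scans, 0.10, false_match_rate)))  # 10
print(round(expected_false_matches(scans, 0.30, false_match_rate)))  # 30
```

The error rate itself is identical for everyone here; the disparity comes purely from who is in the database.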

Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.

During a recent, scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.

That’s a lot of people wrongly identified as persons of interest to law enforcement.

The problems with American law enforcement’s use of AFR are replicated across the pond. The Home Office’s database of 19m mugshots contains hundreds of thousands of facial images that belong to individuals who’ve never been charged with, let alone convicted of, an offense.

Here’s Carlo describing this woebegone technology making mistakes in real time at Carnival:

I watched the facial recognition screen in action for less than 10 minutes. In that short time, I witnessed the algorithm produce two ‘matches’ – both immediately obvious, to the human eye, as false positives. In fact both alerts had matched innocent women with wanted men.

The police brushed it off, she said:

They make their own analysis before stopping and arresting the identified person anyway, they said.

‘It is a top-of-the-range algorithm,’ the project lead told us, as the false positive match of a young woman with a balding man hovered in the corner of the screen.

Carlo writes that Carson Arthur, from the accountable policing organization StopWatch, was also observing the AFR trial. When he asked the officers what success would look like, here’s how a project leader reportedly responded:

We have had success this weekend – we had a positive match!

That’s not a lot of return on investment, to put it lightly: the arrest was erroneous, and police stopped dozens of innocent people to request identification after they were incorrectly tagged as troublemakers (thankfully, they had it on hand; otherwise, they could have been wrongfully arrested). Carlo points out that the single match came at the price of biometric surveillance of 2m carnival-goers and plenty of police resources.
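Expressed as a precision figure, using only the numbers reported above (35 false matches against one “true” match), the weekend works out like this:

```python
# Precision of the Carnival deployment, from the reported figures:
# 35 false matches against a single 'true' match (itself a wrongful arrest).
true_matches = 1
false_matches = 35

precision = true_matches / (true_matches + false_matches)
print(f"{precision:.1%}")  # 2.8%
```

Roughly 97% of the people the system flagged were flagged wrongly.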

The lack of law enforcement acknowledgement of AFR’s poor track record and invasion of privacy is par for the course, Carlo said:

None of our concerns about facial recognition have registered with the police so far. The lack of a legal basis. The lack of parliamentary or public consent. The lack of oversight. The fact that fundamental human rights are being breached.

Carlo asks where we’ll wind up if this “offensively crude” technology dominates public spaces, saying:

If we tolerated facial recognition at Carnival, what would come next? Where would the next checkpoint be? How far would the next ‘watch list’ be expanded? How long would it be before facial recognition streams are correlated?

We can look to China for answers about what pervasive AFR looks like. We already know how it looks in Beijing: it looks like being followed into public restrooms as authorities ration toilet paper.

We can also look to China for recent police use of AFR that was actually effective, if one assumes local media accounts are trustworthy and haven’t been airbrushed by censors. On Monday, local media reported that police in Qingdao, a coastal city in eastern China’s Shandong province, used the technology to identify and arrest 25 suspects during an Oktoberfest held in August.

The system also recognized people with histories of drug addiction, 19 of whom tested positive for drug use and were subsequently arrested, as were five people with previous convictions for theft who were found to have stolen phones and other items at the festival. According to Sixth Tone, 18 cameras installed at four entrances captured a total of 2.3m faces.

We’ve also seen China roll out AFR in a range of other situations.

Sixth Tone quoted a Shanghai lawyer who said that China has a long way to go when it comes to protecting individuals’ privacy rights. While cities and provinces have published or proposed guidelines, a set of rules at the national level that were drafted and published in November 2016 still haven’t been passed.

But really, what good are laws protecting individuals’ privacy when they’re simply ignored?

Both UK and US police have been on non-sanctioned AFR sprees. In the US, the FBI never bothered to do a legally required privacy impact assessment before rolling out the technology. In the UK, retention of millions of people’s faces was declared illegal by the High Court back in 2012. At the time, Lord Justice Richards told police to revise their policies, giving them a period of “months, not years” to do so.

“Months”, eh? It took five years. The Home Office only came up with a new set of policies in February of this year.

In the absence of policies, China’s gone whole-hog for AFR. Hell, soon people are going to be able to purchase KFC fried chicken by smiling.

We’re used to clucking our tongues over China’s approach to surveillance and censorship, from extensive media coverage of the Great Firewall to its forcing spyware onto a minority group.

But as the Notting Hill Carnival shows yet again, there’s nothing uniquely Chinese, or British, or American, about police using biometrics willy-nilly, without regard for effectiveness, privacy invasion or legality.

Apparently, the gee-whiz nature of the technology sparkles so brightly that it obscures its flaws and repercussions. Politicians and law enforcement have, too often, regrettably, proven deaf and blind to the downsides.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rXPrUgMdGXA/

Bazinga! Social network Taringa ‘fesses up to data breach

Latin American social networking site Taringa has suffered a database breach that has resulted in the spill of more than 28 million records.

Usernames, hashed passwords (using the weak MD5 algorithm) and personal email addresses have been exposed by the breach. Argentina-based Taringa’s breach statement (in Spanish) can be found here. Neither phone numbers nor addresses from Bitcoin wallets associated with a Taringa program were exposed by the breach, according to the Reddit-like social networking site.

LeakBase claims that it has already cracked 94 per cent of password hashes exposed in the latest dumps.

In response, Taringa – which has users all over the Spanish-speaking world – has applied a password reset and urged consumers to review their use of login credentials elsewhere to make sure they are not using the same (now compromised) passwords on other sites.

Although the breach affects a consumer site, it poses a risk for corporates because it opens the door to the well-practised hacker tactic of using the same login credentials to break into more sensitive (webmail, online banking) or corporate accounts. The still widespread practice of password reuse opens the door to such credential stuffing attacks.

A list of top 50 common/worst passwords chosen by Taringa users can be found here.

Andrew Clarke, EMEA director at One Identity, opined: “The reported breach at Taringa highlights some fundamental issues. The fact that an administrative file holding passwords was accessible demonstrates little or no control over privileged accounts. Then the passwords were easily cracked since the company used a weak MD5 (128-bit) algorithm rather than SHA-256.” ®
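To see why unsalted MD5 falls so quickly, remember that an attacker only has to hash candidate passwords and compare. A minimal dictionary-attack sketch in Python; the “leaked” hash and the tiny wordlist here are invented for illustration (the hash is simply the MD5 digest of “123456”, perennially one of the most common passwords):

```python
import hashlib

# A leaked, unsalted MD5 hash -- here, the digest of "123456".
leaked_hash = "e10adc3949ba59abbe56e057f20f883e"

# A tiny stand-in for the multi-million-entry wordlists crackers use.
wordlist = ["password", "qwerty", "123456", "letmein"]

def crack_md5(target_hash, candidates):
    """Return the first candidate whose MD5 digest matches, else None."""
    for candidate in candidates:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(crack_md5(leaked_hash, wordlist))  # 123456
```

Because MD5 is fast and the hashes were unsalted, this loop can be run at billions of guesses per second on commodity GPUs, and every user with the same password shares the same hash. Purpose-built password hashes such as bcrypt or scrypt are designed to make exactly this loop expensive.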

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/05/taringa_data_breach/

Kurat võtku! (Estonian for “damn it!”) Estonia identifies security risk in almost 750,000 ID cards

The Estonian government has discovered a security risk in its ID card system, potentially affecting almost 750,000 residents.

“When notified, Estonian authorities immediately took precautionary measures, including closing the public key database, in order to minimise the risk while the situation can be fully assessed and a solution developed,” according to an email by Kaspar Korjus, managing director of e-Residency, to users.

The government said the security risk is still theoretical, and that it is not aware of anyone’s digital identity having been misused. The use of an ID card is still safe for online authentication and digital signing.

ID cards issued before October 16, 2014, use an alternate chip and are not affected, nor are mobile-IDs.

In a statement Taimar Peterkop, director general of the Estonian Information System Authority, said: “According to the current assessment of Estonian experts, there is a security risk and we will continue to verify the scientists’ claims.”

Gareth Niblett, a security consultant holding Estonian residency, said this is not the first time there have been issues with the e-ID card.

“Last year a number of cards and certificates had to be reissued due to how Google Chrome did certificate validation checks and also a migration to SHA-2. This makes me confident that they will manage to deal with this issue too.”

Estonia has often been positioned as a poster boy for digital government, with all residents interacting with the state online via the country’s ID card system.

In late 2014 Estonia became the first country to offer electronic residency to people from outside the country, a step that the Estonian government terms as “moving towards the idea of a country without borders”.

Estonia’s state apparatus is relatively new, having restored its independence as a sovereign nation in 1991 following the Soviet occupation. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/05/estonia_identifies_security_risk_in_750000_id_cards/

UK not as keen on mobile wallets as mainland Europe and US

The UK is lagging behind other countries in mobile wallet adoption, according to a new survey out today.

Consumers in the US and Europe are catching up with those in fast-growing economies in Asia and Latin America where mobile wallets have already become the dominant payment platform, according to an online survey of 6,000 consumers in 20 countries worldwide sponsored by global payments software firm ACI Worldwide.

The research shows that 17 per cent of US consumers now regularly use their smartphone to pay, up from 6 per cent in 2014 when the survey was last conducted. In Europe, Spanish consumers are the most active users of mobile wallets, with 25 per cent using them regularly, followed by Italy (24 per cent), Sweden (23 per cent) and the UK (14 per cent).

Mobile wallet security

As adoption rises, mobile payments are becoming the new battleground between banks and fintech firms.

Consumer confidence regarding mobile wallet security remains high. In the UK, 37 per cent of respondents said they trust their bank to protect their personal information when paying via smartphone. This confidence might be misplaced. ACI’s report warns that as more consumers adopt mobile wallets, they may also become a bigger target for criminals.

The Revised Directive on Payment Services (PSD2) next year means that banks will be obliged to open their customers’ accounts to third-party payment and information requests, a measure designed to spur innovation in digital financial services.

“The rollout of immediate payments schemes worldwide, combined with new regulation in Europe coming into effect in early 2018, will only increase the importance of mobile payments,” said Lu Zurawski, practice lead, retail banking and consumer payments, ACI Worldwide. “This will open the door for a range of new players in the payments market and we may see mobile becoming the new plastic sooner than we thought.”

“Another important factor in the US is the ubiquity of mobile wallet acceptance. With the EMV rollout behind us, most stores are NFC-enabled and the acceptance of mobile wallets is now almost guaranteed by most larger retailers and even many smaller ones,” she added.

Adoption

India tops the list of countries surveyed, with 56 per cent of consumers saying they pay with a smartphone regularly, followed by Thailand (51 per cent) and Indonesia (47 per cent). These emerging markets are leap-frogging traditional card infrastructures and usage patterns in North America and Western Europe.

“Mobile wallets really started to grow in popularity after the launch of Apple Pay almost three years ago,” Zurawski explained. “What we are seeing is a tipping point regarding adoption, which can be attributed to consumers worldwide now almost exclusively using payment-enabled devices, as older models have cycled out, with a few exceptions.”

The Chinese market is dominated by two players – Alipay and WeChat Pay. Both schemes use optical scanning “QR code” techniques at the point of sale instead of the plastic card industry standards like NFC (Near Field Communication). These new Chinese payments services are expected to drive new payment behaviours across Asia and globally. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/05/mobile_wallet_adoption_survey/

News in brief: Pratchett’s data steamrollered; WikiLeaks hit by hackers; Instagram details for sale

Your daily round-up of some of the other stories in the news

Terry Pratchett’s hard drive destroyed

Destroying data securely is something we at Naked Security care about, so as sad as we were to learn that there won’t be any posthumous books from the fantasy author Terry Pratchett, who died two years ago, we couldn’t help but approve of the steps taken by his business manager, Rob Wilkins, to carry out Pratchett’s wish.

Rather than using a software solution, a hard drive containing Pratchett’s unfinished works was crushed by a steamroller at a fair in Dorset, in southern England, last week, fulfilling Pratchett’s instructions. Fellow fantasy author Neil Gaiman told The Times after Pratchett’s death that he had wanted “whatever he was working on at the time of his death to be taken out, along with his computers, to be put in the middle of a road and for a steamroller to steamroll over them all”.

Wilkins posted before and after pictures on Twitter of the hard drive – which you can see if you’re going to be in southern England between next week and the middle of January, as it will be part of an exhibition, Terry Pratchett HisWorld, at the Salisbury Museum.

WikiLeaks hit by cyberattackers

WikiLeaks, which has always promised would-be whistleblowers that it offers a secure way to leak confidential information, suffered something of an embarrassment last week when its website was defaced by a group calling itself OurMine.

However, it seems unlikely that WikiLeaks itself was breached: the unexpected page seems to have been the result of DNS hijacking, where the attackers managed to redirect DNS queries, sending browsers to the replacement home page rather than the real WikiLeaks’ page.

The OurMine group that claimed responsibility isn’t new: it’s thought to be behind the leaking of Game of Thrones episodes, as well as attacks on BuzzFeed, Wikipedia founder Jimmy Wales and Facebook boss Mark Zuckerberg.

The page that greeted visitors to WikiLeaks said: “don’t worry we are just testing your … blabblablab, Oh wait, this is not a security test!” WikiLeaks regained control of the page quickly, while WikiLeaks Task Force, which says it’s the official support account for WikiLeaks, said on Twitter that it was “fake news”.

Instagram users’ details for sale

We reported last week about a glitch in Instagram’s API that apparently exposed the details of high-profile users. Instagram fixed the glitch, and all seemed to be well.

Not so, according to The Daily Beast, which reported over the weekend that hackers had posted a searchable database of the details of some of the social media platform’s highest-profile users.

The hackers said that anyone can search the database for contact details – for $10 a search.

The people claiming to be behind the database, called “Doxagram”, gave The Daily Beast a sample of the data including details of what they said were 1,000 Instagram accounts, saying: “Instagram clearly hasn’t yet understood the full impact of this bug.”

If you’ve got an Instagram account, whether you’re a celebrity or not, now would be a good time to review the security of your account: make sure you’ve got 2FA enabled on it, and have a look at our five tips to secure your account.

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/slui1FLzdrM/

Asterisk RTP bug worse than first thought: Think intercepted streams

One of the Asterisk bugs published last week is worse than first thought: Enable Security warns it exposes the popular IP telephony system to stream injection and interception without an attacker holding a man-in-the-middle position.

A reader (@kapejod, who collaborated with @sandrogauci on the work) alerted The Register to this advisory, published last Friday.

In it, Enable Security explains that a bug it’s dubbed “RTPbleed” (the “RTP” stands for Real-time Transport Protocol) first emerged in September 2011, was patched in the same month, but was then reintroduced in 2013. As this page states, it doesn’t only affect Asterisk, because the bug is in RTP proxy code.

The problem occurs when comms systems like IP telephony have to get past network address translation (NAT) firewalls. The traffic has to find its way from the firewall’s public IP address to the internal address of the device or server, and to do that, RTP learns the IP and port addresses to associate with a call.

The problem is, the process doesn’t use any kind of authentication.
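That unauthenticated “learning” step is often called latching: the proxy treats whichever address it hears from first on the negotiated port as the remote party. A simplified model of the vulnerable logic (for illustration only, not Asterisk’s actual code) shows why that is dangerous:

```python
class VulnerableRtpProxy:
    """Simplified model of RTP 'latching' behind NAT: the proxy locks on
    to the first source address it sees, with no authentication."""

    def __init__(self):
        self.latched_peer = None

    def on_packet(self, src_addr, payload):
        # Latch to the first sender. In real deployments this is meant to
        # be the caller's post-NAT address, but nothing verifies it.
        if self.latched_peer is None:
            self.latched_peer = src_addr
        # Relay media to whoever we latched on to.
        return (self.latched_peer, payload)

proxy = VulnerableRtpProxy()

# An attacker who sprays packets at the RTP port before the caller's
# media arrives becomes the 'peer' -- and receives the relayed audio.
proxy.on_packet(("attacker.example", 40000), b"")
dest, audio = proxy.on_packet(("caller-nat.example", 52000), b"caller-audio")
print(dest)  # ('attacker.example', 40000)
```

In the real attack the spraying happens over UDP against the ports the server has allocated for RTP, but the flaw is the same: first packet wins.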

For Asterisk, the bug is triggered when the system “is configured with the nat=yes and strictrtp=yes” – and because NAT is pretty much ubiquitous, those are default settings.

What’s special about this bug is that the attacker doesn’t need to be between the two ends of the conversation: a system with a vulnerable RTP implementation can be persuaded to reflect media streams towards the attacker.

“To exploit this issue, an attacker needs to send RTP packets to the Asterisk server on one of the ports allocated to receive RTP. When the target is vulnerable, the RTP proxy responds back to the attacker with RTP packets relayed from the other party. The payload of the RTP packets can then be decoded into audio.”

It’s a pretty knotty problem: admins can turn off the nat=yes flag, but only if they’re not using NAT; they can authenticate and encrypt media streams with Secure Real Time Protocol (SRTP), but only if both ends support it.

The Asterisk patch “limits the window of vulnerability to the first few milliseconds”, which is just as well, given that the other suggested mitigations – turning off NAT, or requiring SRTP support at both ends – can be troublesome for sysadmins.

There are still issues with the patch:

Note that as for the time of writing, the official Asterisk fix is vulnerable to a race condition. An attacker may continuously spray an Asterisk server with RTP packets. This allows the attacker to send RTP within those first few packets and still exploit this vulnerability.

The official Asterisk fix also does not properly validate very short RTCP packets (e.g. 4 octets, see rtcpnatscan to reproduce the problem) resulting in an out of bounds read disabling SSRC matching. This makes Asterisk vulnerable to RTCP hijacking of ongoing calls. An attacker can extract RTCP sender reports containing the SSRCs of both RTP endpoints.

@kapejod links to his own contribution to fixing the issue. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/03/asterisk_rtp_bug_allows_intercepted_calls/

Tempted to join the games in the cryptocurrency playground?

Should central banks be worried about cryptocurrency – Bitcoin, Ethereum, Zcash, Monero and hundreds of others? Perhaps more important, should you – the average, privacy-conscious person or even the not-so-average Dark Web drug dealer – be worried?

That depends in part on who you ask. For the banks, the answer last week from Andrew Sheng, chief adviser to the China Banking Regulatory Commission and Distinguished Fellow of the Asia Global Institute, University of Hong Kong, is an emphatic yes.

He told Bloomberg that “central banks cannot afford to treat cyber currencies as toys to play with in a sand box. It is time to realize that they are the real barbarians at the gate.”

Indeed, one of the main selling points of cryptocurrency is that it circumvents the conventional money system. Which means that central banks, which set the value and quantity of conventional money, are losing control of an increasing amount of overall currency.

Not very quickly yet. The estimated $150bn value of the top 20 cryptocurrencies is still a tiny fraction – 3% – of the $5tn in conventional currency circulating daily.

But its growth has been explosive. There are now an estimated 900 cryptocurrencies available. Initial coin offerings (ICOs) – the “Kickstarter” way to launch a new currency – have spiked. William Mougayar, a Toronto-based venture adviser and investor, wrote in July that funding of ICOs in June was more than $560m – more than that of early stage venture capital funding.

The value of these coins is also through the roof. Bitcoin, still the best known and most popular, although others like Ethereum are gaining on it, was worth about $13 in 2013. Bitcoin was listed at $4,399.59 on Monday. A year ago, it was worth $570.

But Sumit Agarwal of Georgetown University, previously a senior financial economist at the Federal Reserve Bank of Chicago, is convinced it is largely a speculative bubble, telling Bloomberg:

It is a fad that will die down and it will be used by less than 1% of consumers and accepted by even fewer merchants.

Perhaps that is in part because the more important question for both businesses and individuals is: should you trust cryptocurrencies? And the answer to that also depends on who you ask, not least because it depends on what kind of appetite you have for a currency that is both volatile and risky.

The big lure of cryptocurrency is that it promises anonymity – the website of Monero offers “secure, private, untraceable currency”. And since criminals are in the risk business, that might pretty much seal the deal.

But for everybody else, both the advice and the evidence are compelling that you should only traffic in cryptocurrencies with money you can afford to lose.

One obvious reason is that there is no US Federal Reserve managing the existence and value of cryptocurrencies – that, as noted earlier, is one of their selling points – but it also means there is no “full faith and credit” of the government behind them. There is no Federal Deposit Insurance Corporation (FDIC) equivalent to guarantee that your holdings are secure or worth a certain amount. Their value is totally dependent on what other people are willing to pay.

Another reason is that it might ultimately not be as anonymous as the promise. The pending case of the Internal Revenue Service (IRS) seeking transaction data from Coinbase – the largest US digital currency exchange – in order to track potential tax cheats, is evidence of that.

So is the IRS’s contract with Chainalysis, which markets a tool to track and analyze the movement of Bitcoin.

But the most important reason – with a growing list of ominous examples – is that lots of money attracts lots of scammers and hackers. And while numerous experts praise the security of blockchain, the underlying technology for cryptocurrencies, that doesn’t make the entire system secure – especially the exchanges.

As Naked Security’s Paul Ducklin reported more than two years ago, since Bitcoin (and other cryptocurrencies) are not conventional currency,

… generally speaking, it’s not covered by any of the laws relating to currency trading, brokerage, banking and so on.

In other words, if the company to which you entrusted your precious Bitcoins suddenly tells you, ‘So sorry, they seem to have vanished,’ then, well, that’s that: you’re out of luck.

Ducklin noted several examples in the $250,000 range, and then the “Big Daddy” of exchanges, Japan-based Mt. Gox, which in 2014 “lost” 650,000 Bitcoins worth about $500m.

And the risks are not just with the exchanges. In June 2016, a hacker was able to skim $50m in Ether (one of the Ethereum currencies) from the Decentralized Autonomous Organization, an experimental virtual currency project designed, ironically enough, “to prove the safety and security of digital currency”.

And just this past week, the digital financial services developer Enigma had to report that hackers had taken over its website, mailing list and Slack accounts and launched a scam “pre-sale” ahead of a planned September 11 ICO, draining investors of an estimated $500,000 in cryptocurrency.

All of which has not slowed either the creation of new cryptocurrencies or the rampant growth of the first ones on the block. But, as Forbes contributor Adam Hartung put it a couple of weeks ago, while the 750% increase in Bitcoin over the past year is drawing the interest of people “hoping to make a fast fortune … that would be very risky”.

In short, it seems that cryptocurrencies are a playground for speculators who can afford to lose money.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/2WwEOPsapcY/

Security-focused phone launches crowdfunding drive

We’ve covered a number of stories here at Naked Security about how apps and settings on popular smartphone models often reveal more about their users than those users realize, frequently without their knowledge. Our devices share our browsing habits, locations, contacts, WiFi connections and even health data with third parties.

That said, there’s a growing cadre of phone makers that are going in a different direction.

The latest of these is Purism, a San Francisco-based company that makes security-focused laptops, which says it has “a strict belief in users’ rights to privacy, security, and freedom”.

Purism said last month that it would be adding a smartphone to its product range, the Librem 5, which it says will “empower users to protect their digital identity in an increasingly unsafe mobile world”.

Beyond the platitudes, what this really means is Purism hopes to sell a smartphone where every feature we normally associate with a smartphone is built with security in mind first, and to give users as much control over their phone as possible. For example, the Librem 5 will not use a closed or proprietary operating system; instead, it will run a fully open-source Linux distro, PureOS.

The phone will also have location settings disabled by default; end-to-end encryption set up by default for phone calls, texts and emails; a VPN for web browsing; and dedicated off switches for components that can be problematic for privacy, such as the camera, microphone, Bluetooth and WiFi.

On phones from major market players such as Apple, Samsung and Google, security and privacy can seem like afterthoughts, especially for app developers. Apps will ask for permissions to functionality, location data and hardware that they don’t really need access to, and they aren’t always transparent about why they’re asking for this data in the first place.

Android itself has a number of high-profile vulnerabilities, and whether or not your phone can be patched often depends on the phone’s carrier, not Google, which means some devices can be several versions behind and lack sorely needed security updates. As a result, smartphones can be vulnerable to many issues that for a long time were thought to be in the realm of PCs only: arbitrary code executions, and even ransomware.

Often our best advice to readers is to be aware of what kinds of access their apps are asking for, and to frequently check their apps’ settings and turn off any permissions they don’t want an app to have, such as always-on location tracking or data sharing.

But not everyone is going to have the technical know-how, initiative, or even just the time to stay on top of security issues for their phone. The hope with security-centric phones like the Librem 5 is that with more security features built into the phone’s core design, consumers will have less to actively manage without having to sacrifice their privacy.

It remains to be seen if Purism’s approach to the smartphone security conundrum is successful – it is certainly not the first phone maker to try to run a Linux distro. Canonical’s Unity8 Ubuntu phone was abandoned earlier this year, with Canonical citing the smartphone market’s lack of interest in the platform.

Purism argues that by using a pure open-source OS for their phone, savvy phone users can even modify the source code on their phone to tweak and secure it as they like, but one wonders if there are enough phone users who will actually take advantage of this capability to sustain the market for a phone like this.

That’s the big question of course, and Purism is letting the market speak. To “gauge demand” and to get the funding needed to start manufacturing, Purism opened up a crowdfunding campaign to raise $1.5m. At the time of this writing, they’ve hit more than 10% of their funding goal with 49 days left to go, so it’s possible they’ll hit their target. Supporters of the crowdfunding campaign can vote with their dollars to get a Librem 5 at $599.

It will be interesting to see if consumers rally around products like this that set out to protect privacy and if this phone hits its fundraising milestone.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/BmzWmnYvBy8/

Lawyer suggests tying access to encryption to verified ID

Encryption has become one of those uncomfortable itches that nobody in the British government or its platoon of advisers seems quite able to scratch.

But every now and then, somebody feels compelled to try, the latest example of which emerged last week in comments made by Max Hill QC, who is leading the Independent Review of Terrorist Legislation (IRTL).

We only have the Evening Standard’s presentation of his comments plus a few follow-up observations by Hill to go on, but what he seemed to be saying was the following:

Social media accounts are used for direct communication and to spread terrorist propaganda, much of which uses encryption and is therefore difficult to monitor. The solution is to force all users to prove who they are before they get access to accounts with encryption privacy turned on.

In his words:

A discussion I have had with some of the tech companies is whether it is possible to withhold encryption pending positive identification of the internet user.

If the technology would permit that sort of perusal, identification and verification, prior to posting, that would form a very good solution… and would not involve wholesale infringement on free speech use of the internet.

According to Hill, this ID checking could be done in “nano-seconds” and at a cost that is reasonable for tech companies to bear given the profits they make.

Before dissecting how this might work – or not – let's give Hill credit for opening his mouth in the first place. A lot of people will ridicule the proposal, but it's better to hear what people of influence think about the subject, so that its flaws can be exposed before it shapes policy-making.

Hill’s idea of identity checks sounds different from the home secretary Amber Rudd’s interest in bypassing encryption through technical means, but arguably all it’s doing is translating one problem (encryption privacy) into a new one (assessing identities).

The problem is that no such system of identity exists on the internet, let alone one that works in real time. Even making this work in one country, the UK, or on one platform, Facebook, sounds difficult.

And who would be the gatekeeper for an approved identity? The tech companies? A government appointee? ISPs? The latter already face a complicated challenge to implement age verification for UK citizens who wish to access porn from 2018 and that’s a relatively straightforward problem by comparison.

Then, as with the debate over bypassing encryption, there’s the problem of displacement, as Hill acknowledges:

It would not be an effective solution to the problem of online extremism simply to drive the criminal publishers of that material into dark spaces which neither the police nor anybody else can reach.

Even if an identity system could be invented, there’s the likelihood that criminals would simply game it by using bogus or stolen identities.

This is because the internet is a system that thrives on its lack of identity checking. This has negative consequences – criminals impersonating people and stealing their identities – but in other instances, protecting oneself from the growing number of nosy, censorious governments, say, it is fundamental.

Surely it is not identity that should be at issue but online behaviour. Funnily enough, online behaviour is another thing tech companies have earnestly promised to filter in real time, despite having so far failed to do so.

Why tech companies have struggled with this is a matter of conjecture. But until they can control what goes on inside their own platforms, withholding encryption for the badly behaved sounds like another example of fixing the symptom, not the cause.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3QWbqnomZ00/

Leaky S3 bucket sloshes deets of thousands with US security clearance

Thousands of files containing the personal information of US citizens with classified security clearance have been exposed by an unsecured Amazon server.

The sensitive information of an estimated 9,400 job seekers, mostly military veterans, was stored on an Amazon Web Services S3 storage server that required no password to access. The details were held by third-party recruitment company TalentPen, which in turn had been hired by TigerSwan, a North Carolina-based security firm. Many of the job seekers cited secret US government work.

Cyber resilience company Upguard discovered and reported the resumé file breach to TigerSwan, which after investigation confirmed the problem. In a statement, TigerSwan apologised and said it was in the process of notifying those whose files were exposed. TigerSwan blamed TalentPen for the whole snafu.

TigerSwan also provided Gizmodo with emails confirming the third-party recruitment firm in question had been “dissolved”.

It is our understanding that Amazon Web Services informed TalentPen of this issue sometime in August, resulting in TalentPen removing the resumé files on August 24. TalentPen never notified us of their negligence with the resume files nor that they only recently removed the files. It was only when we reached out to them with the information on August 31 did they acknowledge their actions.

Rich Campagna, chief exec at Bitglass, said AWS leaks are a growing problem largely as a result of human error.

“In the last few months, we’ve seen a string of high-profile data incidents of this nature, including Deep Root Analytics, Verizon Wireless and Dow Jones,” Campagna said. “These exposures are difficult to stop because they originate from human error, not malice.”

Amazon recently introduced Macie, a sort of “data loss prevention bot”, to discover, classify and protect sensitive data in AWS S3. “Organisations using IaaS must leverage at least some of the security technologies available to them, either from public cloud providers, IDaaS providers, or CASBs, which provide visibility and control over cloud services like AWS,” Campagna concluded.
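The misconfiguration behind leaks like this is usually a bucket ACL that grants access to one of S3's "everyone" groups. As a rough illustration (not TigerSwan's actual configuration), the sketch below checks an ACL of the shape returned by boto3's `get_bucket_acl()` for such grants; the owner ID and grants are invented example data.

```python
# Sketch: spot S3 ACL grants that expose a bucket to the world.
# The dict layout mirrors boto3's get_bucket_acl() response; the
# specific bucket contents here are illustrative, not real data.

# S3's predefined "everyone" grantee groups
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the permissions an ACL hands out to public groups."""
    found = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEE_URIS:
            found.append(grant.get("Permission"))
    return found

# Example ACL resembling a misconfigured, world-readable bucket:
leaky_acl = {
    "Owner": {"ID": "example-owner-id"},
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "example-owner-id"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ],
}

print(public_grants(leaky_acl))  # → ['READ']
```

In a real audit the ACL would come from `boto3.client("s3").get_bucket_acl(Bucket=...)`, and AWS's account-level Block Public Access settings can override grants like these outright.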

Thomas Fischer, global security advocate at Digital Guardian, said: “This incident could likely have been avoided if TigerSwan had an effective security policy review process in place and was integrating third parties into this methodology. Outsourcing to new technology partners does not mean that you no longer need stringent security initiatives. In fact, it actually means you need to put into place a stronger set of controls.”

Javvad Malik, security advocate at AlienVault, added: “Massive breaches through unsecured AWS S3 buckets continue to be a troubling trend. While cloud providers take care of certain aspects of security, it is imperative that organisations ensure they are doing their part to ensure the security of data that is uploaded. As with other aspects of security, cloud environments need to be continually monitored and the security assessed. Otherwise organisations have no assurance as to whether the data is secure or not, and in this case, can be left exposed for long periods of time.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/04/us_security_clearance_aws_breach/