STE WILLIAMS

Facebook Launches ‘Secure the Internet Grants’ Program

The new initiative encourages universities, non-profits, and NGOs to submit applied research proposals for new security defense technologies that can be used in practice.

Facebook today opened its “Secure the Internet Grants” program and issued an invitation for university researchers and faculty, non-profits, and NGOs to submit applied research proposals to be considered.

In his keynote at Black Hat USA 2017, Facebook chief security officer Alex Stamos announced the company would invest up to $1 million in defense research to fight threats people face each day including password reuse, phishing attempts, and other common forms of cybercrime.

Secure the Internet Grants are part of this investment. The goal is to drive development of new security technology that can be applied in practice, rather than produced purely for research purposes. Applicants can now submit two-page grant proposals on these focus areas: abuse detection and reporting, anti-phishing, post-password authentication, privacy-preserving technologies, security in emerging markets, and user safety.

The deadline is March 30, 2018, and winners will be announced at Black Hat USA this year.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/endpoint/facebook-launches-secure-the-internet-grants-program/d/d-id/1330865?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Under the hoodie: what makes bug bounty hunters tick?

If you were a company interested in starting a bug bounty program – say, like Google did a few months ago in an effort to clean up the rather grungy Play Store – wouldn’t you like to know what type of person is eager to pull your code apart?

Wouldn’t you want to know who these hackers are? Where they come from? How old they are? If they’re teenagers using homemade tools, or professionals who work with sophisticated technologies? What soft underbellies do they target, and what are their favorite attack vectors?

Are they in it for the money, and if not, what are they in it for?

We can actually answer those questions, because the bug bounty program management website HackerOne asked.

Last month, HackerOne surveyed 1,698 hackers from over 195 countries and territories, all of whom have successfully reported one or more valid security vulnerabilities, as verified by the organization that received the vulnerability report. Also in the mix are findings collected from the HackerOne platform using its proprietary data, which is based on over 900 collective bug bounty and vulnerability disclosure programs.

The result is the 2018 Hacker Report: what HackerOne says is the largest documented survey ever conducted of the ethical hacking community.

According to HackerOne, it’s seen a tenfold increase in registered users – as in, ethical hackers – in just two years. As of December 2017, the platform had more than 166,000 registered hackers. It had logged more than 72,000 valid vulnerabilities, for which more than $23.5m had been paid in bug bounties.

Where are they?

If somebody found your bug, that somebody is most likely in India, where 23% of the HackerOne community lives. The United States comes in second with 20%. Russia comes in third with 6%, Pakistan is at 4%, and the United Kingdom is also home to 4% of registered hackers.

How much do they rely on bug bounties as income?

HackerOne compared competitive salaries for an equivalent job to the bug bounty earnings of top performers in each country and noted that the bounties can be “life-changing.” On average, top-earning bug bounty hunters make 2.7 times the median salary of a software engineer in their home country.

But once you get to countries with low median salaries, the multiplier blossoms. It was the highest in India for 2017, with hackers making 16 times the median salary of an India-based software engineer.

That’s quite the incentive to get hacking, the report notes, before quoting Troy Hunt, security expert and creator of Have I Been Pwned:

Most bug bounties (usually) have no geographical boundaries, which means the ROI for the bug hunter can be enormously attractive… Consider what the “return” component of the ROI is for someone living in a market where the average income is a fraction of that in the countries many of these services are based in; this makes bounties enormously attractive and gets precisely the eyes you want looking at your security things. Bounties are a great leveler in terms of providing opportunity to all.

How old?

These are by and large “young, curious, gifted professionals,” HackerOne says. Over 90% of bug bounty hackers on the platform are under the age of 35, with over 50% under 25 and just under 8% under 18. The best-represented age group, at 45.3% of registered hackers, is 18 to 24, closely followed by the 37.3% of hackers who are between 25 and 35 years old.

But there is a scattering of both older and younger hackers finding bugs: 0.4% are under the age of 13, and 0.5% are between the ages of 50 and 64.

How did they learn how to do this?

HackerOne found that a majority, 58%, are self-taught, while 44% are IT professionals. Some 67% learned tips and tricks through online resources, blogs and books, or through their community (other hackers, friends, colleagues and so on).

As far as job titles go, the best-represented category is IT/software/hardware, at 46.7%, followed by “student” at 25.2%. Some 13% say they hack full time, i.e. 40 or more hours per week.

What are their favorite tools?

Build-your-own is the second most popular type of tool they use. Here’s what else they like:

  1. Burp Suite 29.3%
  2. I build my own tools 15.3%
  3. Web proxies and scanners 12.6%
  4. Network vulnerability scanners 11.8%
  5. Fuzzers 9.9%
  6. Debuggers 9.7%
  7. WebInspect 5.4%
  8. Fiddler 5.3%
  9. ChipWhisperer 0.8%

Why hack?

Money is undoubtedly a strong motivation, but according to HackerOne, it’s fallen from the No. 1 motivator in 2016 to its current position at No. 4 on the list.

  1. To learn tips and techniques 14.7%
  2. To be challenged 14%
  3. To have fun 14%
  4. To make money 13.1%
  5. To advance my career 12.2%
  6. To protect and defend 10.4%
  7. To do good in the world 10%
  8. To help others 8.5%
  9. To show off 3%

Many say they share knowledge freely with the community of hackers and security researchers.

They’ve also helped the US Department of Defense (DoD) resolve almost 3,000 vulnerabilities, HackerOne says. In March 2016, the DoD announced “Hack the Pentagon”: the first cyber bug bounty program in the history of the federal government.

It was carefully controlled, with dozens of pre-selected security researchers hunting down vulnerabilities in certain public-facing DoD websites, but it was undeniably effective: 138 unique vulnerabilities were found, and the DoD paid out tens of thousands of dollars to 58 hackers, Wired reports.

Where do they spend the loot?

HackerOne got some stories from some of its hackers. Here are two:

IBRAM MARZOUK
One of the things that I did with my bounty money was helping my parents buy a house when I first came to the US, so that’s probably the biggest thing I’ve done with bounty money.

DAVID DWORKEN
The most meaningful result of a bounty for me was actually one from Starterbox where there was some sort of miscommunication where they thought something was a bug and it ended up not being a bug. So [when] I talked to them, we actually just decided to donate the bounty that they had already awarded to the EFF.

According to HackerOne, over 24% of its hackers have donated bounty money to charity organizations. Besides the Electronic Frontier Foundation (EFF), the recipients have included the Red Cross, Doctors Without Borders, Save the Children and local animal shelters. Companies like Qualcomm, Google, and Facebook have “bounty match” promotions, matching any bounties earned that hackers in turn donate to a cause.

Lone wolves or pack animals?

The largest group – 30.6% – prefers working alone, but hackers still rely on each other to learn: 31.3% like to read other hackers’ blogs and publicly disclosed vulnerability reports, 13% sometimes work with peers, 9% regularly work with other hackers, 8.7% serve as mentors or mentees to other hackers, and 7.1% have filed at least one bug report with other hackers as part of a team.

How do they select targets?

Surveyed hackers said they respond primarily to two pheromones: the sweet smell of cash (23.7%), and the sweet smell of the opportunity to learn and hone their skills (20.5%).

Other incentives include going after a brand they like (13%) or going after a brand they don’t like (2.1%). They also like to target companies with good security (8.9%) and companies with lousy security (6.6%), as well as companies with responsive security teams (10.7%).

What’s their favorite attack vector?

Over 28% of hackers surveyed said they prefer searching for cross-site scripting (XSS) vulnerabilities. That’s no surprise: XSS has appeared on the OWASP list of the most critical web application security risks for years, including the 2017 edition. HackerOne took the OWASP list and created a flashcard reference guide to download, print, and share “for easy learning!”

Their other favorite attack vectors were SQL injection (23.1%), fuzzing (5.5%) and brute force (4.5%), among other methods.
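The appeal of XSS is that it turns up anywhere user input is echoed back into a page without encoding, and the core defence is correspondingly simple: escape output. A minimal, illustrative sketch in Python (the function name and payload are invented, not from the report), using the standard library's escaper:

```python
import html

def render_comment(user_input: str) -> str:
    # Escaping <, >, & and quotes turns injected markup into
    # inert text entities rather than executable script.
    return "<p>" + html.escape(user_input) + "</p>"

payload = '<script>alert(1)</script>'
safe = render_comment(payload)
print(safe)  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Real frameworks do this automatically in their template engines; the bugs bounty hunters find are almost always places where that escaping was bypassed or forgotten.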

In aggregate, your prototypical bounty hunter is…

…the kind of person who loves a challenge, loves to pick apart systems to find loopholes, loves to learn, isn’t allergic to cash but tends to be invested in the public good, and is, in the words of Keren Elazari, a vital part of “the internet’s immune system.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/h1qvy3DeZjU/

Stock exchange finally fixes telnet router weakness

Oman’s stock exchange has fixed a serious router security misconfiguration after months of apparently ignoring the pleas of the researcher who tried to report it.

The technical aspect of this story dates back to a leaked list of 33,138 telnet credentials that appeared on Pastebin last June (telnet being an aging, vulnerable protocol once widely used by admins to manage network systems).

Although it later emerged that only 1,775 of these still worked, one that did was for a Huawei router that belonged to Oman’s Muscat Securities Market (MSM), as Dutch GDI Foundation researcher Victor Gevers discovered.

An enterprise-grade model, the router was running a telnet interface protected only by the default username and password – both ‘admin’. Anyone finding it would have had admin-level privileges on a key piece of network infrastructure.

“Owning the network is a breeze,” Gevers told tech news site ZDNet.

There’s no evidence that anyone did, but finding it using a port scanner wouldn’t have been a difficult exercise. Once located, default credentials are the first thing an attacker would try.
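That workflow – find the open port, try the factory defaults – is trivially scriptable, which is exactly why default credentials are such low-hanging fruit. As a hedged illustration of the defensive half of that check, here is a short Python sketch that screens a device's configured login against known vendor defaults (the credential list and function name are invented for illustration):

```python
# Common factory-default credential pairs (illustrative subset only).
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}

def uses_factory_default(username: str, password: str) -> bool:
    """Return True if the login matches a known vendor default."""
    return (username.lower(), password.lower()) in DEFAULT_CREDENTIALS

# The MSM router's 'admin'/'admin' combination would fail immediately.
print(uses_factory_default("admin", "admin"))    # True
print(uses_factory_default("admin", "x7#kQ9!"))  # False
```

A check like this belongs in deployment tooling, so a device never reaches production with its out-of-the-box login intact.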

Gevers reportedly set about trying to contact the owners of each vulnerable telnet device but, in the case of the Omani Huawei router, failed to get anywhere.

He eventually contacted ZDNet with his story but even their help failed to make any headway.

As ZDNet says:

Several attempts by both Gevers and ZDNet over the past few months to contact Omani authorities and officials at the Omani consulate in New York by phone and email were unsuccessful.

Eventually, in the last few weeks, someone inside MSM noticed the router problem and (most likely) disabled external telnet access completely.

One could extract from this story a moral about treading carefully around things like telnet, about changing default credentials, or simply about using something more secure such as SSH.

But the even bigger problem in this incident was that the organisation using vulnerable equipment appears to have had no channel to receive bad news.

Security throws up lots of difficult problems but this, surely, should never be one of them.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5A-ES8FwICw/

California to make it harder for your license plate to be tracked

On 9 January, a California committee passed senate bill SB-712: a piece of legislation that would make what might seem like a tiny tweak to a law that says you can’t cover your car’s license plate.

In California, it’s currently legal to cover your entire vehicle, including the license plate, to protect the car from the weather, as long as the cover is easy enough to pull up to get a look at the license plate.

However, it’s illegal to cover just the license plate, which you may very well want to do to protect your privacy from automated license plate readers (ALPRs).

The proposed tweak, introduced by Sen. Joel Anderson in 2017 and endorsed by the Electronic Frontier Foundation (EFF), would change the law to read like so:

A covering shall not be used on license plates except as follows:
(1) The installation of a cover over a lawfully parked vehicle to protect it from the weather and the elements, or the installation of a cover over the license plate of a lawfully parked vehicle, does not constitute a violation of this subdivision. A peace officer or other regularly salaried employee of a public agency designated to enforce laws, including local ordinances, relating to the parking of vehicles may temporarily remove so much of the cover as is necessary to inspect any license plate, tab, or indicia of registration on a vehicle.

In other words, keep your spying, data-collecting, privacy-invading cameras away from our cars. There are businesses that send ALPRs, mounted on vehicles, driving up and down streets to document the travel patterns of drivers, to take photos of every license plate they see, to time-stamp and location-stamp those photos, to upload them to a central database, and to sell the data to lenders, insurance companies, and debt collectors.

The data-aggregators-on-wheels companies also sell information to law enforcement, including to the US Department of Homeland Security (DHS), the EFF says. The DHS, in fact, last month released its updated policy for using this commercial ALPR data for immigration enforcement.

DHS’s notice mentions “privacy and civil liberties protections that have been implemented by the agency and the vendor,” but we’ve seen time and again how casually vendors treat sensitive ALPR data and how nonresponsive they’ve been to police when they’ve tried to protect investigation details from unauthorized eyes.

In August, Wired detailed one such instance, wherein a sensitive case became an open book to police departments that had no business looking at it, all because they were able to enter a license plate number involved in the case.

These things suck up so much data, with so little protection for privacy, that legislators and police seem to have gone a little kid-in-a-candy-store. In 2015, for example, the Los Angeles City Council thought it was a good idea to have the city attorney’s office analyze a proposal to use license plate readers to determine who owns vehicles spotted parking in, or driving slowly through, neighborhoods known for prostitution. The idea was to then send “Dear John” letters, warning of sexually transmitted diseases (STDs), to the car owners’ home addresses, in the hopes that wives or girlfriends would intercept them.

The Atlantic has called the amassing of this huge database of driver information an “unprecedented threat to privacy.” From its coverage:

[Vigilant Solutions, an ALPR surveillance company] has taken roughly 2.2 billion license-plate photos to date. Each month, it captures and permanently stores about 80 million additional geotagged images. They may well have photographed your license plate. As a result, your whereabouts at given moments in the past are permanently stored. Vigilant Solutions profits by selling access to this data (and tries to safeguard it against hackers). Your diminished privacy is their product. And the police are their customers.

We have seen the security of ALPR live feeds compromised, streaming out the license plates of mostly innocent drivers caught in vast police dragnets. We have also seen police in Oakland, California, hand over an entire license plate reader data set to a journalist.

Should we care? After all, the ALPRs are just automating the collection of what can be seen by anybody driving down the street and jotting down license plate numbers, the argument goes. That’s how a Vigilant company official framed it when talking to the Washington Post:

[The technology] basically replaces an old analog function – your eyeballs… It’s the same thing as a guy holding his head out the window, looking down the block, and writing license-plate numbers down and comparing them against a list. The technology just makes things better and more productive.

The response to that from The Atlantic’s Conor Friedersdorf:

By this logic, Big Brother’s network of cameras and listening devices in 1984 was merely replacing the old analog technologies of eyes and ears in a more efficient manner, and was really no different from sending around a team of alert humans.

As we’ve noted in the past when reporting about Big Data, we have to stop thinking about data sets in terms of individual records and start thinking about them in terms of huge networks of possible relationships that exist between those records.

As Naked Security’s Paul Ducklin has pointed out, license plate readers are a good example of how seemingly innocuous pieces of discrete data – i.e., where your license plate was and when – manifest into something entirely different when amassed in huge data sets and cross-correlated, given that your plate number stays constant while your location changes.

There are properties and capabilities that emerge from large collections of data that don’t exist in the same data at smaller scales (it’s why we had to invent a term – Big Data – to describe it).

While one data point about a license plate could – and has – been used to do things such as track fugitives or solve a gang-related homicide, there’s no saying what the government can do with massive amounts of correlated data spanning years – the vast majority of which has been collected from innocent people who aren’t breaking any laws.
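The emergent-property argument is easy to demonstrate: a single timestamped sighting says almost nothing, but grouping sightings by the one constant field – the plate – yields a travel history. A toy Python sketch with entirely invented data:

```python
from collections import defaultdict

# Invented sightings, as an ALPR feed might record them:
# (plate, timestamp, camera location)
sightings = [
    ("7ABC123", "2018-01-20 08:05", "Main St & 1st"),
    ("4XYZ789", "2018-01-20 08:07", "Main St & 1st"),
    ("7ABC123", "2018-01-20 17:40", "Oak Ave & 9th"),
    ("7ABC123", "2018-01-21 08:03", "Main St & 1st"),
]

# Grouping by the constant identifier turns scattered, individually
# innocuous records into a per-vehicle movement profile.
tracks = defaultdict(list)
for plate, when, where in sightings:
    tracks[plate].append((when, where))

for plate, track in sorted(tracks.items()):
    print(plate, "->", track)
```

Three rows are enough to suggest a commute pattern for one driver; multiply by billions of rows and years of retention and the privacy stakes become obvious.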

To put some numbers around the scope of how (in)effective ALPR data has been in fighting crime, the EFF has filed dozens of public records requests. It’s found that “less than 0.1% of license plate data collected by police are connected to a crime at the point of collection, but the remaining 99.9% of the data is stored and shared anyway.”

On 10 January, the bill that would make it legal to cover up cars’ license plates was ordered to a third reading, so as of 22 January, it’s not yet a done deal.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EGPVOVr49PA/

Meltdown/Spectre week three: World still knee-deep in something nasty

It is now almost three weeks since The Register revealed the chip design flaws that Google later confirmed and the world still awaits certainty about what it will take to get over the silicon slip-ups.

The short version: on balance, some steps forward have been taken but last week didn’t offer many useful advances.

In the “plus” column, Microsoft and AMD got their act together to resume the flow of working fixes. Vendors started to offer tools to manage the chore of fixing the twin flaws, such as VMware’s dashboard kit for its vRealize Operations automation tools.

Typing

 $ grep . /sys/devices/system/cpu/vulnerabilities/*

into a Linux terminal window now reveals whether you have a Meltdown/Spectre problem to address.
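The same sysfs interface can be read programmatically. Here is a small Python sketch (illustrative, not from the article) that builds the same per-flaw report and degrades gracefully on kernels that lack the directory:

```python
from pathlib import Path

def cpu_vulnerabilities(base="/sys/devices/system/cpu/vulnerabilities"):
    """Map each flaw name (e.g. 'meltdown') to the kernel's status string."""
    report = {}
    root = Path(base)
    if not root.is_dir():  # older kernels don't expose this interface
        return report
    for entry in sorted(root.iterdir()):
        report[entry.name] = entry.read_text().strip()
    return report

# On a patched kernel this prints lines such as:
#   meltdown: Mitigation: PTI
for name, status in cpu_vulnerabilities().items():
    print(f"{name}: {status}")
```

Anything reported as "Vulnerable" rather than "Mitigation: …" or "Not affected" still needs a kernel or microcode fix.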

On the downside, Intel faced a rebellion of sorts as major enterprise vendors like Red Hat, Lenovo, VMware and many others told their users to ignore Chipzilla’s first batch of microcode updates because they made servers reboot a lot. Intel first said only Broadwell and Haswell CPUs had the problem, but later admitted that its older Sandy Bridge and Ivy Bridge parts and its newer Skylake and Kaby Lake architectures are all misbehaving after patching. The company also revealed that data centre workloads will be slower after it’s done patching.

That’s bad news for all sorts of reasons, not least that some users rushing to cope with the twin menaces may have overlooked the fact that appliances sold as “it just does the job, don’t worry about the innards” often have Intel Inside. Hence analyst firm Gartner’s advice to remember that devices like application delivery controllers or WAN optimisation boxen pack x86s, need a fix and won’t optimise things quite as optimally from now on. Which means talking to telcos and all sorts of other fun.

News that software-defined storage powered by ZFS or Microsoft may slow down can’t have put smiles on too many faces either.

Also unwelcome was news that Spectre impacts Oracle’s SPARC platform, with patches due some time in February. Nor are the hordes of smaller ARM licensees making much noise.

News that the sky has not fallen in on public clouds won a better reception. Indeed, there are even signs that big players have stopped worrying and learned to love the bomb, or at least minimise the impact of their patches.

Smaller clouds have had less to say, perhaps because they resent not having been included in the original cabal that nutted out a response to Meltdown/Spectre. The Register hears gossip to the effect that Oracle, for one, is furious it wasn’t immediately invited to the top table. It has, however, scheduled and/or executed patches for its x86 cloud. We’ve seen evidence of the same at VMware-on-AWS, Linode, IBM cloud and others.

But we’ve also heard an industry-wide silence about CPU-makers’ roadmaps for a Meltdown-and-Spectre-free future. Rumours are rife that a generation of products will have to be redesigned, at unknowable expense, delaying next-generation products by un-guessable amounts of time.

The news isn’t all glum, however: marketers have cottoned on to the fact that Meltdown and Spectre represent an opportunity to spruik products like data centre inventory tools or performance analysis code. Their offers aren’t classy, but are at least far more sensible than all the initial coin offerings landing in Reg inboxes. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/meltdown_spectre_week_three_the_good_the_bad_and_the_wtf/

Dridex redux, with FTP serving the nasties

Keep your eyes open for yet-another Dridex-based malware attack.

Forcepoint researchers spotted the campaign last week, noting that instead of using HTTP links, the attackers are pointing victims at compromised FTP sites (and exposing those sites’ credentials in the process).

The FTP sites in question were used to host the malware served to victims who clicked on links (insert usual statement about care with links), and the post noted that the attackers didn’t care that they exposed the logins of the sites they abused. The upshot could be that other attackers also get a chance to abuse the same targets.

Around half of the phishing messages in the campaign went to .com domains, roughly a quarter to .fr domains, with Australia and the UK among other regional targets.

A victim who clicked the link was compromised either via DDE (a popular vector late last year) or via an Excel file carrying a malicious macro.

Forcepoint’s post associates the campaign with the Necurs botnet, because the distribution domains were already in the company’s records; Necurs has spread Dridex in the past; and “The download locations of the XLS file also follows the traditional Necurs format.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/dridex_redux_with_ftp_serving_the_nasties/

The Reg visits London Met Police’s digital and electronics forensics labs

More than 90 per cent of crime has “a digital element,” we were told as The Reg was welcomed into London Metropolitan Police’s Central Communications Command Centre, near Lambeth Bridge on the Thames.

Not only does that mean an exponential increase in the amount of data stored, with the increasing seizure of phones, it also raises questions over privacy and security, and the role of encryption.

Not surprisingly, the Metropolitan Police force deals with the highest volume of digital forensics (it is after all the largest force in the country, with more than 30,000 officers).

In a tour of its labs, Mark Stokes, head of digital electronics forensics at the Met, reveals the changing nature of his department’s work.

Among its work, he points out a pile of CCTV footage that police are still trying to recover from Grenfell Tower – the 24-storey public housing block in North Kensington, west London, where at least 70 people died in a fire in June 2017. Over 100 officers and civilian staffers are still working on the criminal investigation while the public inquiry rolls on – and the forensic examination of evidence collected is an important part of that.

Stokes also shows us a machine that restores damaged chips from smartphones as well as a display of now-obsolete mobiles from the last decades.

Some of those relics include a Nokia handset from 1995, an HP PDA with a built-in mobile phone, the original iPhone and, most ancient of all, a Mobira Cityman 1320 from the 1980s – which looks worthy of Gordon Gekko.

Stokes said he sees them as a visual reminder of how fast technology has moved on in a relatively short space of time.

“It’s funny when you think about it; digital forensics didn’t really exist 25 years ago,” he says.

At one point he picks up the motherboard from an old BlackBerry handset. “We used to see a lot of these after the London Riots [in 2011], but now we seldom see them.”

Not only has the field grown rapidly, but Stokes believes police are reaching a point where it is very difficult for a person to go through all the data on a computer. At the (ISC)2 Secure Summit in London he recently explored machine learning and quantum computing as fields that might help, although he admits the technology required is not yet here.

Maximum capacity

Because the volume of work is so huge, there’s an emphasis on self-serve, he says. Most data retrieval from mobile phones occurs at station level.

“If it’s from a victim, they can hand the phone over, download it and hand it back. Around three years ago, if you said to someone we need to take away your phone and hand it back in a month, they would not be happy about it. So we’ve enabled more reporting of crime, but the flip side of that is we have opened up a lot more demand and there is a lot more data to manage.”

One answer to the data explosion is, of course, the long-talked-about shift to cloud. Stokes says the force processes around one terabyte of data every hour. “The volume of data is going to end up in petabytes.”

“At the moment we are working on a case with 200 computers we need to ingest data from. We don’t do that every day… so the cloud would make sense. We could ingest the data, index, review and analyse it, and when we are finished scale back down.”
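Stokes’s own figure makes the scale concrete: at one terabyte an hour, petabytes arrive quickly. A quick back-of-the-envelope calculation:

```python
TB_PER_HOUR = 1            # the Met's stated ingest rate
HOURS_PER_YEAR = 24 * 365  # ignoring leap years

tb_per_year = TB_PER_HOUR * HOURS_PER_YEAR  # 8,760 TB
pb_per_year = tb_per_year / 1000            # decimal units: 1 PB = 1000 TB

print(f"{tb_per_year} TB/year, i.e. {pb_per_year:.2f} PB/year")
```

Nearly nine petabytes a year at the current rate, before the phone-seizure volume grows further, which is why elastic cloud capacity looks attractive.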

Consequently, the force is considering Google, AWS and Azure. “At the moment a number of those aren’t secure platforms in terms of the level of security policing requires, but they have moved into that space, offering secure cloud segmented away,” he says.

“Microsoft call it a secure government cloud, data centres separate to public offering, although they have to get enough customers across government [to make it worthwhile]. Microsoft are doing some work with the MoD doing that cloud provision for them.”

But while cloud could be an answer to some of the problems, it is also posing some issues from a legislation perspective.

Legislating change

Currently the Met cannot access remotely stored data – for example, on the Dropbox service. To do so, it would have to go through the process laid down by the Regulation of Investigatory Powers Act (RIPA) – the controversial Act that regulates the powers of public bodies to carry out surveillance and investigation, and the interception of communications. This process can be lengthy, he says.

He believes legislation has not yet caught up with the technology, because it wasn’t envisaged that people would be sharing a lot of data remotely. “It’s about having a conversation with the public around the necessity and the proportionality of that. And if everyone agrees that should drive some sort of balanced legislative change that allows that to occur.”

Under the changes, in select cases individuals could be forced to hand over their passwords or face jail. But Stokes stresses that the use of such powers would have to be proportionate.

Another contentious challenge is encryption, although Stokes is sceptical that it can ever be completely cracked – leaving aside the question of whether doing so would be desirable.

“If they have used good encryption and good password, that is the end of the day. But the reality is… if you are clever enough and want to do the work you will know how to cover your tracks.”

He adds: “A lot of criminals are chaotic. You have your serious and organised criminals [who] will plan various things… but then you have the rest that is probably not even thought through properly. Digital systems leave traces… It would be very difficult to go into a house and not leave some kind of trace behind, and a digital system is exactly the same.”

But that is not to say detection doesn’t remain a challenge.

“At the moment, the encryption thing: we are concerned about it going forward. Security is getting harder… it is becoming more difficult. That is absolutely the case. But there are bits of legislation that allow us to get passwords from people, and hand them over. Obviously victims and witnesses are more than happy for us to have access to the device. We just have to be careful about what we extract.”

Facial recognition

Biometric passwords could help in this regard. “It is very easy for someone to say ‘I forgot my password’, whereas they can’t say ‘I forgot my face’!”

When it comes to the national biometrics database, he believes we are in the “relatively right space” because it is about people who have been convicted of crime. “If they are not, then the data is not retained. I think that is a reasonable balance, and as we move forward does the legislation change to give us a good proportional balance that is open to inspection by the public?”

What about the more controversial retention of custody images – a database which now contains 20 million images of people, many of whom have not been convicted?

“I can’t comment directly on that. But I think going forward it is biometric and it should be treated in the same way as fingerprints potentially… certainly DNA and fingerprints are only retained if there has been a criminal conviction.”

He adds: “And that probably needs to be where we are heading with other sources of biometric data in the future.”

Facial recognition technology is still in its early days, he says, something he points out when showing us some screen examples of facial recognition profiling. In fact, most of it is still done manually using software, rather than through automation.

The most notable example of this going wrong was its use at the Notting Hill Carnival last year, which led to a wrongful arrest.

“I think as you get higher and better resolutions it will improve, but if you take a lot of the products we get on CCTV… you wouldn’t find anything because they are fuzzy blobs.”

Stokes reiterates that it is all about striking the right balance. “We are not prying, we are just trying to get the best and proportional evidence. We should manage how we control access to data and devices, with proper governance in place,” he says.

“There is a necessity to develop systems and processes [so that] we work on that data in an evidentially sound manner. Everything we do has to be examinable by the court, and probed by barristers once we get to that point.

“Although we are in policing, we see ourselves as independent forensic professionals. We are not here to prosecute or defend, we are here to recover data and explain what that data means.” ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/digital_forensics/

HMRC dev support team cc blurtfest: Over 1,400 email addresses blabbed

Almost 1,500 software developers registered to use HMRC’s sandbox or API platform have had their email addresses blabbed in a mass email.

The snafu happened on Friday afternoon, when an email about the HMRC Developer Hub was accidentally sent with users’ addresses visible in the CC field.

The email, with the subject line “API Platform update”, was sent by the software developer support team at 1604 GMT.

“Please note the HMRC Developer Hub will remain shuttered over the weekend to allow us to continue testing the service. The Developer Sandbox for testing remains available. The API Platform is working as expected,” the seemingly innocent email stated.

However, about an hour later, someone must have pointed out the mistake, and the team issued a recall for the message, which meant the same group received another email with all 1,455 or so email addresses cc’d in.
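The difference between the leaky approach and the safe one is small, which is why this kind of slip is so common. A minimal sketch using Python’s standard-library email module – with made-up addresses standing in for HMRC’s real list – shows the pattern: keep the recipient list out of the visible headers entirely and hand it only to the SMTP envelope, which is what “Bcc” amounts to in practice.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses standing in for the real list of ~1,455 developers
recipients = ["dev1@example.invalid", "dev2@example.invalid", "dev3@example.invalid"]

msg = EmailMessage()
msg["From"] = "SDS Team <sdsteam@example.invalid>"   # illustrative, not HMRC's real address
msg["To"] = "SDS Team <sdsteam@example.invalid>"     # send to self; recipients stay hidden
msg["Subject"] = "API Platform update"
msg.set_content("The Developer Hub will remain shuttered over the weekend.")

# The mistake: a visible Cc header exposes every address to every recipient
# msg["Cc"] = ", ".join(recipients)

# The fix: never put the list in a header; pass it only to the SMTP envelope
# with smtplib.SMTP("smtp.example.invalid") as conn:
#     conn.send_message(msg, to_addrs=recipients)

assert "Cc" not in msg   # nothing in the headers leaks the recipient list
```

Note that recalling a message, as the support team attempted, cannot un-leak headers that have already been delivered – it just gives the list a second outing.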

At 1809, a third email – this time blind-copying in the list – was sent to apologise for the breach.

“HMRC’s policy is always to protect customer data, and we take this responsibility very seriously,” the email said.

“Unfortunately, in a recent email, a mistake was made and your email address may have been shared with other recipients.

“I wish to apologise for this error and for any distress this may have caused.”

As the Reg reader who alerted us to the cock-up observed, this kind of error is easily made, especially when the time is ticking away to beer o’clock.

An HMRC spokesperson said: “HMRC takes the protection of customer data extremely seriously and has a strong security culture.

“We can confirm that this matter was immediately reported through our internal incident reporting process and will be fully reviewed. We have contacted the software developers affected to alert them and to apologise.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/hmrc_sends_email_cc_developers_error/

UK Army chief: Russia could totally pwn us with cable-cutting and hax0rs

The UK needs to invest in up-to-date army tech, including protection from cyber attacks, the Ministry of Defence’s Chief of the General Staff will warn today.

In a speech to be given today at the defence think tank the Royal United Services Institute (RUSI), General Sir Nick Carter will warn of the military capabilities of Putin’s administration and the threats they pose.

One such danger is to undersea cables, as discussed in a Policy Exchange report and in a previous speech to RUSI by Air Chief Marshal Sir Stuart Peach. Movements by Russian Navy “intelligence ships” have centred on these important lines of communication, and the cutting of the main comms line to the Crimean peninsula during Russia’s annexation of the region sets a precedent.

The speech will also discuss the Russians’ long-range strike capabilities and other areas where Carter feels the UK is lagging behind.

This comes after calls from MPs to increase defence spending, as well as reports of plans to slim down and consolidate the Armed Forces to save £20bn – plans opposed by defence minister Gavin Williamson, who has nonetheless reportedly given his approval to Carter’s lecture.

“The threats we face are not thousands of miles away but are now on Europe’s doorstep – we have seen how cyber warfare can be both waged on the battlefield and to disrupt normal people’s lives – we in the UK are not immune from that,” an excerpt from Carter’s upcoming speech reads.

“We must take notice of what is going on around us or our ability to take action will be massively constrained. Speed of decision making, speed of deployment and modern capability are essential if we wish to provide realistic deterrence.

“The time to address these threats is now – we cannot afford to sit back.”

Concerns around the Russian cybersecurity threat have been growing in recent months, leading to a government ban on the use of Russian-made antivirus software on its own computers; while Ciaran Martin, chief exec of the National Cyber Security Centre, revealed that hackers acting on behalf of Russia had targeted the UK’s telecommunications, media and energy sectors.

Foreign Secretary Boris Johnson also warned during a trip to Moscow that the UK would not stand for Russian cyber attacks and would retaliate to protect its interests. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/uk_chief_of_staff_beware_of_russian_cable_cutting_and_cyber_attacks/

China flaunts quantum key distribution in-SPAAACE by securing videoconference

China has revealed more detail of its much-hyped satellite quantum key distribution network.

In a paper published in Physical Review Letters, Liao Shengkai of the University of Science and Technology of China and fellow researchers describe an experiment in which they passed quantum-created keys between Xinglong in China and Graz in Austria.

In quantum key distribution (QKD), quantum entanglement is used to share secret keys in a way that makes eavesdropping detectable. Those keys are then used to secure communications transmitted over conventional, non-quantum channels.
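That division of labour – quantum channel for the key, ordinary channel for the traffic – can be illustrated with a toy sketch. Here a random byte string merely stands in for what both endpoints would hold after a QKD exchange, and the XOR one-time-pad cipher is purely illustrative (a real link like Micius’s would feed the key into a standard symmetric cipher):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR; the key must be at least as long as the data
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

# Stand-in for a quantum-distributed key: after a QKD exchange, both
# endpoints hold the same random string; here we simply generate it.
shared_key = secrets.token_bytes(32)

plaintext = b"videoconference frame"
ciphertext = xor_bytes(plaintext, shared_key)   # sent over the classical channel
recovered = xor_bytes(ciphertext, shared_key)   # decrypted at the far end

assert recovered == plaintext
```

The quantum part of the system exists only to get `shared_key` to both parties with eavesdropping made detectable; everything after that is conventional cryptography.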

The Chinese experiment demonstrated communication with transmitted images, and followed that up with a 75-minute videoconference on 29 September 2017 secured with quantum-distributed keys.

This is already commonplace on terrestrial fibre networks. China set its sights higher when it launched a satellite named “Micius” in 2016. That craft can create entangled particles used to carry encryption keys.

In June 2017, Chinese researchers demonstrated that they could maintain space-to-ground entanglement, and at the same time maintained entanglement over a record distance of 1,200 km between ground stations.

Micius distributing keys

In the latest research, the researchers added an Austrian node into the mix, achieving kilohertz-rate key distribution between stations that were 7,600 km apart.

The researchers wrote that the experiment proves the value of a satellite platform like Micius, because noise limits the distance that optical fibre can be used for QKD. “Due to photon loss in the channel, the secure QKD distance by direct transmission of the single photons in optical fibres or terrestrial free space was hitherto limited to a few hundred kilometres,” the paper stated.
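A quick back-of-the-envelope calculation shows why. Assuming a typical telecom-fibre attenuation of around 0.2 dB per kilometre at 1550 nm (our figure, not the paper’s), the fraction of photons surviving a direct fibre run falls off exponentially with distance:

```python
FIBRE_LOSS_DB_PER_KM = 0.2  # typical telecom fibre at 1550 nm (assumed figure)

def survival_probability(km: float) -> float:
    """Fraction of photons surviving a direct fibre run of the given length."""
    return 10 ** (-FIBRE_LOSS_DB_PER_KM * km / 10)

# A few hundred kilometres is already punishing: about 1 photon in a
# million makes it through 300 km of fibre
p300 = survival_probability(300)

# At the 7,600 km Xinglong-Graz separation, a direct fibre link is hopeless
p7600 = survival_probability(7600)

print(f"300 km: {p300:.1e}, 7,600 km: {p7600:.0e}")
```

At 300 km roughly one photon in a million arrives; at 7,600 km the survival probability collapses to around 10^-152, which is why a satellite relay is the only practical route at intercontinental range.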

On a satellite channel, once the signal is out of the atmosphere, noise is far less troublesome: “most of the photons’ propagation path is in empty space with negligible loss and decoherence.”

China doesn’t have the quantum space race all to itself. Last year, boffins from the University of Padua in Italy conducted their own earth-to-space experiment.

However, unlike Micius, which carries its own QKD transmitter, the Italian experiment created an entangled state on the ground and bounced it off a satellite, demonstrating that they were able to maintain the photons’ quantum state on the round trip.

Japan also has a quantum satellite experiment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/22/china_flaunts_its_qkdinspaaace_by_securing_videoconference/