
Apache “Optionsbleed” vulnerability – what you need to know

Remember Heartbleed?

That was a weird sort of bug, based on a feature in OpenSSL called “heartbeat”, whereby a visitor to your server can send it a short message, such as HELLO, and then wait a bit for the same short message to come back, thus proving that the connection is still alive.

The Heartbleed vulnerability was that you could sneakily tell the server to reply with more data than you originally sent in, and instead of ignoring your malformed request, the server would send back your data…

…plus whatever was lying around nearby in memory, even if that was personal data such as browsing history from someone else’s web session, or private data such as encryption keys from the web server itself.

No need for authenticated sessions, remotely injected executable commands, guessed passwords, or any other sort of sneakily labyrinthine sequence of hacking steps.

Crooks could just ask a server the same old innocent-looking question over and over again, keep track of the data that it let slip every time, and dig through it later on, looking for interesting stuff.

Well, something similar has happened again.

This time, the bug isn’t in OpenSSL, but in a program called httpd, probably better known as the Apache Web Server, and officially called the Apache HTTP Server Project.

The vulnerability has been dubbed Optionsbleed, because the bug is triggered by making HTTP OPTIONS requests.

If you know a bit about HTTP, you’ve probably heard of GET and POST requests, typically used to fetch data or web pages or to upload files or filled-in forms.

But there are numerous other HTTP request types, known officially as methods, including HEAD, PUT, DELETE, TRACE and PATCH.

Not all web servers support all the many official methods, so there’s a special method to help you out: OPTIONS, which asks a server which methods it’s willing to accept.

By using OPTIONS you can avoid hammering a web server with requests that are never going to work, thus avoiding frustration at your end of the connection, and saving the server from wasted effort at the other.

For example:

  $ wget -S --method=OPTIONS https://my.example/index.html
  HTTP request sent, awaiting response... 
    HTTP/1.1 200 OK
    Allow: OPTIONS, TRACE, GET, HEAD, POST
    Public: OPTIONS, TRACE, GET, HEAD, POST
    Content-Length: 0
    Date: Tue, 19 Sep 2017 14:09:26 GMT
  $

Here, you can see the methods that the my.example server accepts, listed in the Allow: header.
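
If you’d rather poke at a server programmatically, the same check is easy to script. Here’s a minimal sketch using only Python’s standard library; it spins up a throwaway local server standing in for my.example (so the Allow list below is our own, not any real site’s) and asks it for its OPTIONS:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Advertise the methods this toy server accepts.
        self.send_response(200)
        self.send_header("Allow", "OPTIONS, GET, HEAD, POST")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# A throwaway server on a random free port stands in for my.example.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.getheader("Allow"))  # 200 OPTIONS, GET, HEAD, POST
server.shutdown()
```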

Harmless enough, you might think – and that’s how it ought to be.

But software and security researcher Hanno Böck found otherwise.

He set out to measure what OPTIONS were supported by the Alexa Top 1,000,000 websites, asking each of them in turn.

Böck spotted that a small but noticeable fraction of the servers sent back garbled or corrupted-looking responses, such as:

  Allow: ,GET,,,POST,OPTIONS,HEAD,,
  Allow: POST,OPTIONS,,HEAD,:09:44 GMT
  Allow: GET,HEAD,OPTIONS,=write HTTP/1.0,HEAD,,HEAD,POST,,HEAD,TRACE

One that Böck lists looked curiously suggestive of a “bleed-type” data leak:

  Allow: GET,HEAD,OPTIONS,,HEAD,,HEAD,,HEAD,, 
    HEAD,,HEAD,,HEAD,,HEAD,POST,,HEAD,, HEAD,!DOCTYPE 
    html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"

Not only are many of the options suspiciously and weirdly repeated, but a fragment of what looks like a web page – content expected in an HTTP reply body, not in an OPTIONS response, which typically has no body content at all – has crept into the data sent back from the server.
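
There’s no official test for a “bled” response, but a rough heuristic suggests itself: a healthy Allow: header should be a comma-separated list of known method names and nothing more. Here’s a small sketch of that idea (the set of methods and the function name are our own invention, not part of any scanning tool):

```python
# Methods a sane Allow: header might legitimately contain.
KNOWN_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE",
                 "OPTIONS", "TRACE", "PATCH", "CONNECT"}

def looks_bled(allow_header):
    """Flag an Allow: header whose tokens aren't plausible HTTP methods."""
    tokens = [t.strip() for t in allow_header.split(",")]
    # Empty tokens (from ",,") or junk such as "=write HTTP/1.0" suggest
    # that stray memory has leaked into the response.
    return any(t == "" or t not in KNOWN_METHODS for t in tokens)

print(looks_bled("OPTIONS, TRACE, GET, HEAD, POST"))        # False
print(looks_bled(",GET,,,POST,OPTIONS,HEAD,,"))             # True
print(looks_bled("GET,HEAD,OPTIONS,=write HTTP/1.0,HEAD"))  # True
```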

What went wrong?

Böck noticed leaked data in some of the garbled replies that strongly suggested that Apache servers were involved. (Some of the leaked data apparently looked like Apache-specific configuration settings.)

With the help of Apache developer Jacob Champion, they got to the bottom of it, and the bug is intriguingly arcane – a strong reminder of how far-reaching the effects of a minor-sounding bug can be.

First, some background.

Apache servers can be configured by putting files called .htaccess into the directory tree of content that is stored on the server.

Each .htaccess file sets configuration options for the directory it’s in and all the others below it, unless their settings are overridden by another .htaccess file lower down, and so on.

Although this has the negative effect of scattering Apache’s configuration settings all across the server’s disk, it has the positive effect of safely letting one server, and one directory tree, serve many different websites (known in the jargon as virtual hosts) at the same time.

Whether the virtual hosts represent multiple departments inside the same organisation, or separate companies buying into a shared web hosting service, each customer can be given their own directory subtree.

The subtrees can be locked down to safe defaults at the top of the tree; each customer can then reconfigure their own part of the server to be even stricter if they want.

Rather than needing one computer, or one virtual machine, and one running copy of httpd for each customer, the hosting company can split up one high-powered server safely – in theory, at least – between numerous websites and customers.

One of the settings you can configure in your .htaccess files – Apache calls these settings by the rather bureaucratic name directives – is known as <Limit>, which, as the name suggests, limits the available OPTIONS in the current directory tree.

Like this, for example:

  <Limit POST PATCH PUT DELETE>
    Deny from all
  </Limit>

If you have a part of your website’s directory tree that is only there to serve up files, this sort of restriction makes sense – it’s a good example of what’s known in the trade as “reducing your attack surface”, and you do it for much the same reasons that you don’t let everyone log in as an administrator all the time.

But here’s the thing: if any of the HTTP methods you configure in a <Limit> directive are inapplicable, the “Optionsbleed” bug is triggered.

By inapplicable, we mean either that the method has already been turned off in the global httpd configuration settings, or that the method doesn’t actually exist, for example because you typed DELLETE instead of DELETE.
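
To make that concrete, here’s a hypothetical .htaccess fragment – our own illustration, not taken from any real server – that would tick the “inapplicable” box purely because of the typo:

```apacheconf
# DELLETE is a typo for DELETE – enough, it seems, to provoke the bug
<Limit GET POST DELLETE>
  Deny from all
</Limit>
```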

You’d think that this shouldn’t matter, beyond perhaps a warning in a logfile somewhere – banning an option that is already banned or turning off a non-existent option is a waste of time, but shouldn’t make things worse.

However, it seems that trying to set a Limit that doesn’t exist (or doesn’t apply) causes Apache to free up memory that it no longer needs…

…but to carry on referring to that memory, perhaps even after it has been allocated to and used by another part of the Apache program.

This sort of bug has the self-explanatory name of use-after-free, and you can see how these bugs lead to data leakage vulnerabilities: you can easily end up using someone else’s data, perhaps even copying some of their personal or private information, and sending it out over the network by mistake.
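
Apache is written in C, but the shape of the failure can be mimicked in any language with a toy allocator. In this sketch (the Pool class is entirely our own illustration, not Apache’s memory manager), a stale index plays the role of the dangling pointer:

```python
class Pool:
    """A toy allocator: released slots get recycled for the next caller."""
    def __init__(self):
        self.slots, self.free_list = [], []

    def alloc(self, data):
        if self.free_list:                 # reuse a freed slot if we can
            i = self.free_list.pop()
            self.slots[i] = data
        else:
            self.slots.append(data)
            i = len(self.slots) - 1
        return i

    def release(self, i):
        self.free_list.append(i)           # mark the slot reusable

pool = Pool()
a = pool.alloc("limits: GET,HEAD")         # config parsing grabs a block...
pool.release(a)                            # ...frees it too early...
b = pool.alloc("another tenant's secret")  # ...someone else gets the block
print(pool.slots[a])                       # the stale "pointer" now leaks it
```

The fix for a use-after-free is the same in any language: drop (or null out) every reference to a block the moment it is released.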

So, as far as Böck and Champion could tell, a memory mismanagement bug, provoked when Apache processes an .htaccess file that is meant to improve security…

…can end up reducing security by leaking data later on when a completely different part of Apache processes an OPTIONS request.

Thus the name Optionsbleed.

How bad is it?

The good news is that the side-effects of the bug don’t seem to show up often, given that it requires the coincidence of an incorrectly-configured .htaccess file and an unluckily-timed OPTIONS request.

Indeed, Böck’s tests provoked just 466 “optionsbleeds” from 1,000,000 different web servers. (Statistics suggest that about 200,000 of those would have been running Apache, for a bug-trigger rate of about 0.25% of vulnerable systems.)

Nevertheless, it’s important to remember that on a server that’s hosting many different domains for many virtual hosts in many different directory trees, one malevolent customer could provoke this bug by deliberately setting an invalid option in their own .htaccess, and then repeatedly visiting one of their own URLs to see what data might leak out.

The leaked data comes from the memory of the Apache server software, and could in theory include content from other customers, or from the server itself.

Similarly, a well-meaning customer could ruin it for everyone else by copying-and-pasting an .htaccess file from an example configuration, and leaving redundant options in there on the very reasonable grounds – as we mentioned above – that turning off options that are already off should be harmless.

What to do?

Fortunately, there’s a fix available: a patch for Optionsbleed is available from the Apache source code servers.

If you outsource your servers or your web hosting, ask your provider if they can apply the patch for you.

Unfortunately, at the time of writing [2017-09-19T17:00Z], Apache hasn’t yet published a formal advisory or an officially-updated Apache version – for the time being, you’ll need to apply the code changes yourself and rebuild your own updated version of httpd.

Of course, with no official approval yet from Apache, it’s hard to judge if the currently-available patch is indeed the best and safest way to squash this bug.

So, if you can’t or don’t want to patch immediately, we suggest you visit all your .htaccess files looking for settings that aren’t applicable (or are mis-spelled), even though you shouldn’t really have to remove settings that are legal and purposeful but just happen to be redundant in the current configuration.

Whichever route you choose, keep your eye out for Apache’s next official security update – the current patch may be replaced, improved, extended or superseded.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Xi51q8ai8JE/

PyPI Python repository hit by typosquatting sneak attack

Somebody with time on their hands has tested out a devious new form of typosquatting targeting developers installing Python packages from the PyPI (Python Package Index) repository.

According to an advisory posted by the Slovak National Security Office (NBU), ten packages for Python 2.x were removed from the site after being found to contain malicious code hidden inside software using filenames either nearly identical to, or which could be mistaken for, legitimate ones. In the most striking case, the difference between the real and the fake package name is a single character:

Real: urllib3-1.21.1.tar.gz
Fake: urllib-1.21.1.tar.gz

The other fake packages were (correct names in parentheses):

  • acqusition (acquisition)
  • apidev-coop (apidev-coop_cms)
  • bzip (bz2file)
  • crypt (crypto)
  • django-server (django-server-guardian-api)
  • pwd (pwdhash)
  • setup-tools (setuptools)
  • telnet (telnetsrvlib)
  • urllib (urllib3, a second attack on the library mentioned above).
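
A crude defence against this trick is to measure how close a requested name sits to a well-known one. The sketch below is our own illustration (the POPULAR list is just a sample, not PyPI’s actual index), using Levenshtein edit distance:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

POPULAR = ["urllib3", "setuptools", "bz2file", "acquisition", "pwdhash"]

def possible_typosquats(requested, known=POPULAR, max_dist=2):
    """Names within a small edit distance of a popular package are suspect."""
    return [k for k in known if 0 < edit_distance(requested, k) <= max_dist]

print(possible_typosquats("urllib"))      # ['urllib3']
print(possible_typosquats("acqusition"))  # ['acquisition']
```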

Says the NBU:

These packages contain the exact same code as their upstream package thus their functionality is the same, but the installation script, setup.py, is modified to include a malicious (but relatively benign) code.

There’s a lot to discuss here, but clearly the attack relies on two subterfuges, the first being devs-in-a-hurry mis-typing the package name when using Python command-line installers such as pip.

That’s easy to do and there’s no way of knowing that something untoward has happened because, as the advisory says, everything looks normal when the packages install on Python 2.x using admin privileges. The pip installer, it has been pointed out, lacks any way of verifying a package using a cryptographic signature.
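
Until signature checking arrives, developers can at least pin artifact hashes themselves – pip does support this via --require-hashes in a requirements file. The check it performs boils down to something like this sketch (the filename is illustrative):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Hex SHA-256 digest of a file, read in chunks to spare memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Before installing, compare against the digest the project publishes:
# if sha256_of("urllib3-1.21.1.tar.gz") != expected_digest: refuse to install
```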

But some of the names seem designed to impersonate popular packages in a more general way – file names that look plausible rather than identical – which suggests that the people who put the bogus files on PyPI are also trying to catch out developers who download the source code direct.

As to the issue of motivation, could this be a proof-of-concept attack?

The NBU describes the fake packages as the “code execution of benign malware”, which sounds about right given that they collect data on the users installing them, hostnames, and which package was installed.

Except that anything installed without consent with the intention of collecting identifiable information is, arguably, harmful even if the precise motive is not clear.

Separately, researchers Benjamin Bach and Hanno Böck used the same “Pytosquatting” MO to upload 20 modified Python libraries (now removed) designed to track the IP addresses of those accessing them. The results showed that since June the packages were accessed 45,000 times on 17,000 domains.

This research, the pair said, was designed to probe insecurities in the way repositories are being used. Typosquatting is more often associated with rogue websites, but this research, and the attack spotted by the NBU, is a warning that the technique can be deployed in any context.

Unravelling the PyPI attack will still be a slog:

There is evidence that fake packages have been downloaded and incorporated into software multiple times between June 2017 and September 2017.

If the tail has a sting it’s that not only is the code sitting on an unknown number of servers, it is now part of real software.

Admins who see outbound connections to 121.42.217.44 on port 8080 may be harbouring a rogue. If you have one, re-installing the correct packages should fix the issue.

Developers – be careful including other people’s code in your projects.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/X_sq4CX1mUU/

European Commission proposes more powers for EU’s infosec agency

The European Commission has proposed an expansion in the role of ENISA, the EU’s cybersecurity agency.

During his State of the Union speech on Wednesday, Jean-Claude Juncker outlined plans to widen ENISA’s remit through a Cybersecurity Act. Under a revised mandate, ENISA would be tasked with introducing an EU-wide cybersecurity certification scheme. The thinking is that the agency would be able to counter threats more actively by becoming a centre of expertise for cybersecurity certification and standardisation of ICT products and services.

The agency would also support member states in implementing the Network and Information Security (NIS) Directive and take a role in reviewing the EU Cybersecurity Strategy, an upcoming blueprint for cyber-crisis cooperation.

Dr Udo Helmbrecht, executive director of ENISA, said in a canned statement: “I welcome the proposal from the Commission to strengthen and expand ENISA’s mandate by addressing certification and standardisation of ICT products and better cooperation in relating to preparing and addressing cross-border cybersecurity challenges in Europe. I believe that these initiatives will improve the Digital Single Market and strengthen the ICT industry in Europe.”

Senior eurocrats said the revised mandate would include the development of new cybersecurity tools, but details remain unclear. ENISA’s press representatives are yet to respond to El Reg‘s questions on this and several other points, such as whether it will be taking on more staff and what its budget will be. We’ll update this story as and when more information comes to hand.

ENISA has been operational since 2005 and is based on the Greek island of Crete. The agency has sought to improve network and information security in the EU, partly by acting as a centre of expertise for both member states and EU institutions. As a think tank, ENISA has published research about everything from connected cars to the security of industrial control systems. ENISA has also played a central role in organising a series of pan-European cybersecurity exercises.

Infosec researcher Claus Cramon Houmann commented: “Good news overall, specifics remain to be proven beneficial and ENISA in Greece isn’t optimal at all.” ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/19/enisa_revamp/

What’s that, Equifax? Most people expect to be notified of a breach within hours?

Equifax hasn’t found time for a houseclean and is making claims of authority and competence about security breaches that, following its own recent high profile breach, come off as pretty cringeworthy.

An autumn 2016 whitepaper from Equifax – still available here at the time of publication – attempts to position the credit scoring agency as a go-to firm for organisations unfortunate or careless enough to suffer a security breach.

How would you reassure your customers – and satisfy regulators – if your business experienced a data breach?

Equifax is ideally placed to help businesses if they experience a data breach. We have one of the largest sources of detailed consumer data in the UK.

Equifax knows breaches

The offer is particularly out of place in the wake of Equifax’s widely criticised response to a breach at the credit reference agency that exposed the personal details of 143 million US consumers and 400,000 Brits.

Perhaps Equifax execs might want to revisit their own Identity Theft and Data Breach whitepaper – assuming they still have a job, that is. It might interest them that 63 per cent of punters want to know about a breach within hours of its occurrence. Not, er, the months it took Equifax to reveal its own dirty secret.

Alternatively, they might want to talk to experts at FireEye Mandiant, the incident response arm of the security firm, who have been brought in to help sort out the mess at the consumer credit scoring agency.

Last week we reported how FireEye removed an Equifax case study from its website in response to the recently disclosed mega-breach at the credit reference agency.

Equifax’s endorsement of FireEye’s zero-day detections capabilities no longer counted as much of a recommendation after Equifax was comprehensively pwned by hackers who exploited an unpatched Apache Struts vulnerability.

Hat-tip

A tip of the hat to vulture-eyed reader Laurence M for the heads-up on Equifax’s promised expertise. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/19/equifax_breach_experts/

Sexploitation gang thrown in clink for 171 years after ‘hunting’ kids online and luring them in front of webcams

Four men have joined their two accomplices behind bars for tricking young girls into performing sex acts online so they could film them.

The six were charged in Michigan, USA, with 28 counts [PDF] of producing and viewing child abuse images, engaging in a child exploitation enterprise, conspiracy to access with intent to view child pornography, and enticement of a minor to engage in illegal sexual activity. Their combined sentences come to 171 years in prison, and most will face at least 20 years of supervision after that.

Between November 16, 2013 and March 10, 2016 the gang ran a sophisticated web ring to haunt internet forums that children use regularly. They would then lure each victim, one by one, into a private group chat session and take on various roles to achieve their ends.

According to federal court documents filed earlier this year, each gang member took on the part of either hunters, talkers, loopers or watchers. The hunter would find a promising young girl and encourage her to join a private chatroom, while talkers would engage with her and reassure their victim.

After a suitable period, the talkers would encourage a game of dare, and the looper would play a video of a teenage boy engaging in sexual acts to encourage the victim to also strip off and perform for the group. All of the gang present would cap – slang for record – the victim’s actions to disk. The watchers would keep an eye on all the participants of the chatroom to make sure no one outside the group had joined. These highly compromising recordings would then be shared with others.

On November 16, 2015, one of the gang members was arrested by the police and charged with possession of child pornography. He quickly folded under questioning and helped the cops identify Kik channels the gang was using to coordinate their actions. After the ring’s public IP addresses were obtained and traced back to their ISPs, via subpoenas, the gang members were cuffed and all pled guilty.

On Friday, Michigan District Judge Judith Levy sentenced Justin Fuller, 37, of Modesto, California, to 35 years in prison, with co-conspirator John Garrison, 52, of Glenarm, Illinois, receiving the same sentence.

Thomas Dougherty, 54, of Vallejo, California, received 26 years for his crimes while associate Virgil Napier, 54, of Waterford, Michigan, was sentenced to 20 years in the Big House. The two other gang members, Brandon Henneberg, 31, of Diller, Nebraska, and Dantly Nicart, 39, a citizen of the Philippines residing in Las Vegas, Nevada, received 35 years and 20 years in prison respectively. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/19/sexploitation_gang_sentenced_to_171yrs/

Pirate Bay digs itself a new hole: Mining alt-coin in slurper browsers

Bittorrent search engine and mortal enemy of intellectual property lawyers, The Pirate Bay, has upset the one group of people that actually likes it: its users.

Over the weekend, visitors to the infamous file-sharing watering hole were surprised to find their browsers working overtime, with their computers’ CPU usage rocketing on certain pages. It didn’t take them long to figure out why. The Bay’s HTML was trying to run this JavaScript code from coin-hive.com.

That script looked suspiciously like a Bitcoin mining operation, using visitors’ browsers to crunch through hash calculations. And that wasn’t far from the truth. The embedded code quietly mines a fairly new cryptocurrency called Monero. Coinhive offers an in-browser Monero miner so websites can embed the code in pages and make money from passing web surfers. And that’s exactly what The Pirate Bay did.

It’s supposed to be an alternative to adverts, although we’ve heard, by way of anecdote, some web ads running the same sort of mining code in the background in browsers to earn a few extra pennies while pages are read.

The impact was noticeable and immediate, with Pirate Bay fans complaining on the site’s forum about the enormous resource hog. “I was looking at a torrent page and suddenly all my CPU threads went 80‑85 per cent – something which usually only occurs when I’m encoding stuff,” wrote one user on Reddit.

The mining script was placed on specific pages rather than across the site: it ran on search pages and category pages but not on the homepage nor individual torrent pages – ensuring the code got the most footfall.

Following the outcry, the Pirate Bay’s admins put out a statement.

Digging a hole

“As you may have noticed we are testing a Monero javascript miner,” a short post on the site’s blog reads. “This is only a test. We really want to get rid of all the ads. But we also need enough money to keep the site running.”

It turns out that “a typo” caused the mining operation to try to eat up every available cycle, rather than the 20‑30 per cent usage that The Pirate Bay planned for. And the initial installation may also have worked on every tab instance, so if people had multiple Pirate Bay pages open at the same time, it killed their browser’s ability to even walk downhill. Both issues have since been fixed.

Surprisingly, however, many netizens seem pretty content with The Pirate Bay leeching their computers’ processing energy to make money – especially if it means getting rid of ads.

“If this can become a viable way to support the website without seeing porno I am in,” wrote one user in response to the blog post, referring to the fact that the only companies that appear willing to pay a notorious copyright infringing site to advertise their services come from the adult industry.

“Do you want ads or do you want to give away a few of your CPU cycles every time you visit the site?” asked The Pirate Bay’s admin.

Of course, being the individualistic bunch they are, most Pirate Bay users have adopted a different tack altogether: ad-blockers and other browser extensions that block the Coinhive script.

Mo money?

Mining Bitcoin in a browser is now unfeasible: to get anywhere you need seriously souped-up computer gear or specialized dedicated mining equipment. Monero, on the other hand, is only three years old and still relatively easy to mine, especially on ordinary machines, hence Coinhive’s decision to go with it. One XMR is worth just $101 (£74), though, whereas 1 BTC is $3,950 (£2,920).

Given that the site doesn’t have to pay for people’s electricity and hardware, and just milks them of crypto-cash, let’s do some back-of-the-envelope calculations for Pirate Bay. Assuming:

  • The average Pirate Bay user’s browser running Coinhive’s script is capable of mining Monero at 30 hashes per second. A decent Intel Core i7 can do 90 per second in a browser. Your humble hack’s Core i5 in a laptop can do 30. Let’s just go for 30.
  • The site attracts 200 million users a day, apparently, and half of those don’t have some kind of script blocking in their browser, ie, there are about 100 million users crunching away at 30H/s for some period during the day.
  • Let’s assume they each spend just a minute on a mining page on the site, on average, a day. Some will stay on a mining page for a while, some will leave after a few seconds. That’s, let’s see, 180 billion hashes cracked a day in total, on average, or 2,083,333 a second over the day.
  • According to this online calculator, that would cough up $123k a month – again, based on these almost random estimates we’ve come up with. We’re willing to be way off the mark.
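
Those sums are easy to sanity-check. The figures below are the article’s own guesses, nothing more:

```python
users = 100_000_000       # daily visitors without a script blocker
rate_hs = 30              # hashes per second per average browser
seconds_mining = 60       # average daily time spent on a mining page

hashes_per_day = users * rate_hs * seconds_mining
sustained = hashes_per_day / 86_400   # spread over a 24-hour day

print(hashes_per_day)     # 180000000000, i.e. 180 billion
print(round(sustained))   # 2083333 hashes per second
```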

It’s impossible to know for sure, and The Pirate Bay is unlikely to release figures about how profitable its mining approach is, once it’s paid its hosting bills and expenses in XMR. But you can certainly see why it felt the idea was worth exploring – especially if it means the site can dump its ads and its users seem to accept the trade-off of free content in return for involuntarily donating their CPU cycles.

So long as Pirate Bay users don’t go ballistic and threaten to boycott the site altogether, perhaps the best measure of whether the mining operation is profitable will be to see if the organization continues to implement it past its “test.”

If it does, you can expect to see many more sites switch to Monero mining in their browsers. Online and off, nothing speaks louder than money. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/19/pirate_bay_bitcoin_mining_script/

DOJ lets itself off the privacy hook

The ever-growing pile of evidence that privacy is dead just got a bit larger.

Last week, privacy advocates lost another round with the US Department of Justice (DOJ) in the battle over the relatively unfettered collection, analysis and distribution of massive amounts of personal data of those both in and outside of government.

The DOJ’s goal is both laudable and necessary – the mitigation of insider threats. But, its method of reaching that goal involves eliminating significant protections for another crucial goal of a free society – personal privacy.

In a “final rule”, the DOJ excused its insider threat database – officially the “DOJ Insider Threat Program Records” – from multiple provisions of the 1974 Privacy Act.

The Electronic Privacy Information Center (EPIC), which filed objections to the exemptions when they were first proposed in June, noted that the database includes:

… detailed, personal data on a large number of individuals who have authorized access to DOJ facilities, information systems, or classified information, including present and former DOJ employees, contractors, detailees, assignees, interns, visitors, and guests. The scope of ‘insider threat’ is broad and ambiguous; the extent of data collection is essentially unbounded.

It added that the DOJ, “proposes to disclose information within the Database to multiple entities not subject to the Privacy Act, including state, local, tribal, or foreign law enforcement, private organizations, contractors, grantees, consultants, and the news media.”

Most of the people whose data is being collected, analyzed and shared, EPIC noted, aren’t under suspicion or the target of any investigation. And the “insider threat” information collected is not just used for law enforcement or intelligence purposes. It can then be shared with other agencies for what amounts to human resources purposes like hiring, retention, promotions and deployments.

The DOJ, which rejected all of EPIC’s requests to narrow or eliminate the exemptions, said they are all necessary to, “avoid interference with efforts to detect, deter, and/or mitigate insider threats.”

In response to EPIC’s request that only “relevant and necessary” records are maintained to detect and prevent insider threats, DOJ argued that it is impossible to say at the time data is collected whether some of them might become relevant later.

It said it protects the security and confidentiality of the data with, “appropriate administrative, technical and physical safeguards,” and is in compliance with multiple security standards, including those of NIST (National Institute of Standards and Technology) and the federal Office of Management and Budget (OMB).

That draws some intense skepticism from Shahid Buttar, director of grassroots advocacy at the Electronic Frontier Foundation (EFF), who said he was one of the 22 million current and former federal employees, “whose information submitted through the security clearance process ended up in the hands of Chinese intelligence agents” – the result of the notorious 2014 Office of Personnel Management breach.

“The relatively unbounded information that DOJ seeks for the Insider Threat system is not only overbroad, but also creates unnecessary security risks given its tremendous sensitivity,” he said.

Still, in response to EPIC saying that those whose personal information is collected ought to be able to have access to it and amend things that are incorrect, the DOJ said doing so:

… could compromise or lead to the compromise of information classified to protect national security; disclose information that would constitute an unwarranted invasion of another’s personal privacy; reveal a sensitive investigative or intelligence technique; disclose or lead to disclosure of information that would allow a subject to avoid detection or apprehension; or constitute a potential danger to the health or safety of law enforcement personnel, confidential sources, or witnesses.

The DOJ, noting that its data collection is, “for authorized law enforcement and intelligence purposes,” said it “follows lawful, vetted investigative practices and procedures.”

It claimed that it, “takes seriously its obligations to protect the privacy of Americans,” and said it might even waive one or more of the exemptions on occasion. But, of course, the decision to do that would be, “in its sole discretion” – a phrase that appears several times in the document.

Along with EPIC, other privacy advocates like Buttar say “final rules” like this make the DOJ essentially an unaccountable law unto itself.

Buttar suggested that government collection of data on “insiders” is as much, or more, about protecting itself as it is about protecting the nation:

“The Insider Threat program is itself a threat to the national security of the United States, by insulating from public accountability executive agencies that have repeatedly violated their constitutional and statutory limits,” he said. “Whistleblowers are conscientious public servants who advance the public interest by revealing fraud, waste, and abuse. They are heroes, not threats.”

Those arguments have obviously not swayed the DOJ. But that doesn’t mean privacy advocates are entirely out of options.

Final rules can be challenged through “judicial review” – a lawsuit.

And EPIC president Marc Rotenberg said Congress, the Privacy and Civil Liberties Oversight Board (PCLOB) and the Chief Privacy Officer (CPO) for the DOJ all have some oversight authority for enforcement of the Privacy Act and reviewing government agency surveillance.

“EPIC has frequently written to Congress, PCLOB, and agency CPOs about similar issues,” he said. “And we will now add the DOJ Insider Threat database to our list of programs that we expect officials charged with oversight of the DOJ to investigate.”

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Tl32m0ys62U/

DRM now a formal Web recommendation after protest vote fails

Anti-piracy and anti-copying protections are now formally part of the World Wide Web after an effort to vote down content controls at the WWW’s standards body failed.

The World Wide Web Consortium (W3C) has been embroiled in controversy for five years over the introduction of the Encrypted Media Extensions (EME) specification. It finally went to a vote and was approved by 58.4 per cent, with 30.8 per cent opposed and the rest abstaining.

Some argue that a so-called digital rights management standard is needed so browsers have a common way to make sure that things like copyrighted videos are protected uniformly across the web. The EME technology is designed to stop people saving, copying and sharing copies of movies and other high-quality stuff streamed online without permission.

Meanwhile, others have fiercely opposed the DRM mechanisms on principle, contending that the W3C should only ever create standards that promote an open internet.

The argument has also laid bare a rift in the standards body between internet engineers who see themselves as the guardians of the internet’s open philosophy, and corporate interests who have increasing sway over the W3C.

The tipping point likely came in the form of web inventor Tim Berners-Lee who, as director of the W3C, argued in March for the inclusion of EME, having long tried to steer clear of the controversy.

“W3C is not the US Congress, or WIPO, or a court,” he argued. “W3C is a place for people to talk, and forge consensus over great new technology for the web. Yes, there is an argument made that in any case, W3C should just stand up against DRM [digital rights management], but we, like Canute, understand our power is limited.”

Berners-Lee noted that EME would promote greater interoperability and enable the data provided by content use to be limited, improving online privacy.

Defeatist

But critics were not persuaded, criticizing the W3C and Berners-Lee for their “defeatist” attitudes and for selling out to commercial interests. They argued that the proposal will “give corporations the new right to sue people who engaged in legal activity.” Three formal objections to the proposal were launched:

  • It does not provide adequate protection for users
  • It will be hard to include in free software
  • It doesn’t legally protect security researchers

After some back-and-forth and a few minor changes, the EME recommendation moved forward to a vote and was passed.

The decision sparked a considerable effort by the W3C to argue its case. CEO Jeff Jaffe wrote a blog post in which he argued that the debate over EME wasn’t actually about W3C standards, so much as a societal shift.

“W3C did not create DRM and we did not create DMCA. DRM has been used for decades prior to the EME debate,” he wrote. “But in recent years it is a credit to the World Wide Web that the web has become the delivery vehicle for everything, including movies. Accordingly, it was inevitable that we would face issues of conflicting values and the appropriate accommodations for commercial use of the web.”


Jaffe argued that discussion within the W3C has done what it was supposed to do: improve a technical specification. “Critics of early versions of the spec raised valid issues about security, privacy, and accessibility,” he wrote.

He also acknowledged that critics’ last sticking point – an effort to legally protect security researchers by allowing them to test and break encryption methods – did not make it through. That was largely because corporate interests felt such an approach would amount to a legal backdoor for hackers.

His fault

But even Jaffe wasn’t keen to stick his neck out, noting: “In the end, the inventor of the world wide web, the director of W3C, Tim Berners-Lee, took in all of the diverse input and provided a thoughtful decision which addressed all objections in detail.”

He concluded: “My personal reflection is that we took the appropriate time to have a respectful debate about a complex set of issues and provide a result that will improve the web for its users.”

The W3C also published a list of testimonials from companies including Netflix, Microsoft and the Motion Picture Association of America (MPAA) arguing why EME was a good thing, and outlined in detail why users will benefit from the recommendation.

But one of the most persistent critics of the proposal, Cory Doctorow of the Electronic Frontier Foundation, who also proposed and helped organize the protest vote, was not persuaded.

In an “open letter” to the W3C, Jaffe and Berners-Lee, Doctorow focused on the failed “compromise” that would give legal protection to “legitimate activities, like research and modifications, that required circumvention of DRM,” and attacked Berners-Lee for “personally overriding every single objection raised by the members.”

He went on: “Somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.”

The W3C had “squandered the perfect moment to exact a promise to protect those who are doing that work for them,” Doctorow said, and somewhat dramatically announced at the end of the letter that the EFF was leaving the standards organization in response.

And so the Great Browser Battle of 2017 was ended. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/18/w3c_approves_eme/

Someone checked and, yup, you can still hijack Gmail, Bitcoin wallets etc via dirty SS7 tricks

Once again, it’s been demonstrated that vulnerabilities in cellphone networks can be exploited to intercept one-time two-factor authentication tokens in text messages.

Specifically, the security shortcomings lie in the Signaling System 7 (SS7) protocol, which is used by networks worldwide to talk to each other to route calls, and so on.

There are few or no safeguards in place on SS7 once you have access to a cell network operator’s infrastructure. If you can reach the SS7 equipment – either as a corrupt insider or a hacker breaking in from the outside – you can reroute messages and calls as you please. Someone working for, or who has compromised, a telco in Morocco, for instance, can quietly hijack and receive texts destined for subscribers in America.

Infosec outfit Positive Technologies, based in Massachusetts, USA, obtained access to a telco’s SS7 platform, with permission for research purposes, to demonstrate this month how to commandeer a victim’s Bitcoin wallet. First, they obtained their would-be mark’s Gmail address and cellphone number. They then requested a password reset for the webmail account, which involved sending a token to the cellphone number. Positive’s team abused SS7 within the telco to intercept the authentication token and gain access to the Gmail inbox. From there, they were able to reset the password to the user’s Coinbase wallet, log into that, and empty it of crypto-cash.

Minimal personal information about a victim – just their first name, last name, and phone number – was enough to get their email address from Google’s find-a-person service and hack a test wallet in Coinbase.

Earlier this year, crooks exploited these aforementioned weaknesses in SS7 to log into victims’ online bank accounts in Germany and drain them of funds. The cyber-robbers intercepted texts with login authentication codes sent to customers of Telefonica Germany before using the stolen information to carry out unauthorized transactions, as we previously reported.


“Exploiting SS7-specific features is one of several existing ways to intercept SMS,” said Dmitry Kurbatov, head of the telecommunications security department at Positive Technologies.

“Unfortunately, it is still impossible to opt out of using SMS for sending one-time passwords. It is the most universal and convenient two-factor authentication technology. All telecom operators should analyze vulnerabilities and systematically improve the subscriber security level.”

Banks try to strike a balance between usability and security. Tokens in text messages are easy to receive and type in. For sensitive accounts, using a phone for authentication will be risky if SS7 hijacks increase. However, if the choice is phone authentication or no two-factor authentication at all, it’s a good idea to use the phone for security reasons – or, even better, find a service that offers second-factor authentication from an app, key fob or other gizmo.
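To illustrate what an authenticator app or key fob does instead of SMS, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238) in Python, using only the standard library. The base32 secret below is the well-known RFC test value, not a real credential; real apps exchange the secret once (usually via QR code) and never send codes over the phone network at all.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now: float = None) -> str:
    """Derive a time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Both sides count 30-second intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // interval)
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at Unix time 59 the 8-digit SHA-1 TOTP is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", now=59, digits=8))  # → 94287082
```

Because the code is derived locally from a shared secret and the clock, there is nothing in transit for an SS7 attacker to intercept.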

Ultimately, login-token theft via SS7 is still rare. Most headaches with SMS tokens are caused by people getting locked out of their stuff, rather than having it all stolen.

“We should stop using SMS for 2FA, but also worth noting: for providers the biggest problem with 2FA is account lockouts, not bypasses,” said Martijn Grooten, a security researcher and editor of industry journal Virus Bulletin. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/09/18/ss7_vuln_bitcoin_wallet_hack_risk/

Chrome to brand FTP as “not secure”

On 14 September, it was announced in a Chrome developers group that Chrome will mark FTP (File Transfer Protocol) resources in the address bar as “not secure.” The change is expected to be made by the release of Chrome 63 in December 2017.

Developer Mike West wrote:

We didn’t include FTP in our original plan (for Chrome development), but unfortunately its security properties are actually marginally worse than HTTP. Given that FTP’s usage is hovering around 0.0026% of top-level navigations over the last month, and the real risk to users presented by non-secure transport, labeling it as such seems appropriate.

FTP is so old that it originally ran on top of NCP (Network Control Program) before switching to the internet protocol suite, TCP/IP, in 1980. As of 2017, it’s about 46 years old, which makes it 13 years older than I am.

Back in 1971, when FTP was invented, the internet as we know it didn’t exist. Its precursor, ARPANET, did, but it was used exclusively by academics and members of the military.

Computer networks were a lot simpler than they are today, and they didn’t have to deal with malware, criminal hackers, cyberattacks and the other risks that are an everyday reality now.

These days FTP is normally used for downloading files from public archives or for uploading webpages and media files to web servers. FTP can be set up so that users have to supply a username and password, or in an anonymous configuration where authentication isn’t required.

What makes FTP “not secure” is that all the data that’s uploaded and downloaded is sent in unencrypted plain text, including your username and password.

This means that FTP users are vulnerable to Man-in-The-Middle (MiTM) attacks that can steal usernames and passwords or modify files as they pass over a network.
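To make the plaintext problem concrete, here is a small Python sketch of what an FTP login actually puts on the wire. Per RFC 959, credentials travel as bare USER and PASS commands; the username and password below are made up for illustration.

```python
def ftp_login_bytes(user: str, password: str) -> bytes:
    """Build the client side of an FTP login exactly as it crosses the network.

    RFC 959 defines login as plain-text USER and PASS commands,
    each terminated by CRLF - no encryption, no hashing.
    """
    return f"USER {user}\r\nPASS {password}\r\n".encode("ascii")

wire = ftp_login_bytes("alice", "s3cret")
# Anyone sniffing the TCP stream sees the password verbatim:
print(wire)  # → b'USER alice\r\nPASS s3cret\r\n'
```

A passive eavesdropper on any hop between client and server can read those bytes as-is, which is exactly why a network sniffer suffices to steal FTP credentials.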

As Cyber-Ark’s Adam Bosnian put it when speaking to Security Week about the security weaknesses of FTP, “any network sniffer can hijack it”.

When people use FTP to transfer their files, they’ll often use an FTP client like FileZilla, but all modern web browsers support FTP too – and aside from the ftp:// in the address bar, you probably wouldn’t notice.

As Mike West wrote, FTP addresses made up just 0.0026% of top-level navigations recorded by Chrome in August, so very few Chrome users will notice the new “not secure” label.

West also recommended that developers follow the example set by The Linux Kernel Archives to migrate public-facing downloads from FTP to the much more secure HTTPS.

As a response to West’s post, Chrome developer Chris Palmer added:

Because FTP usage is so low, we’ve thrown around the idea of removing FTP support entirely over the years. In addition to not being a secure transport, it’s also additional attack surface, and it currently runs in the browser process.

There are other solutions for transferring files, not least SFTP (SSH File Transfer Protocol), which runs file transfers over an encrypted SSH connection. The AS2 (Applicability Statement 2) and MFT (managed file transfer) approaches can also serve as secure FTP alternatives, as can tools like scp and rsync.

Frankly, I’d like to see FTP phased out entirely, in all its implementations. Computer networks would be more secure, and could function better, if FTP went the way of other ill-conceived 1970s inventions like pet rocks and vinyl-topped cars.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qBgTTn_CheI/