STE WILLIAMS

Foxit PDF Reader is well and truly foxed up, but vendor won’t patch

The Zero Day Initiative (ZDI) has gone public with a Foxit PDF Reader vulnerability without a fix, because the vendor resisted patching.

The ZDI made the decision last week that the two vulns, CVE-2017-10951 and CVE-2017-10952, warranted release so at least some of Foxit’s 400 million users could protect themselves.

In both cases, the only chance at mitigation is to use the software’s “Secure Mode” when opening files, something that users might skip in normal circumstances.

CVE-2017-10951 allows the app.launchURL method to execute a system call built from a user-supplied string, with insufficient validation.

CVE-2017-10952 means the saveAs JavaScript function doesn’t validate what the user supplies, letting an attacker write “arbitrary files into attacker controlled locations.”

Both are restricted to execution with the user’s rights.
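Both bugs belong to the same class: an attacker-controlled string reaching a privileged sink without validation. A minimal Python sketch of that pattern and its mitigation – function names and logic are illustrative, not Foxit’s actual internals:

```python
import shlex

ALLOWED_SCHEMES = ("http://", "https://")

def launch_url_unsafe(user_string: str) -> str:
    # Vulnerable pattern (the CVE-2017-10951 analogue): the
    # attacker-supplied string goes straight into a command line.
    return "open " + user_string

def launch_url_safe(user_string: str) -> str:
    # Mitigated pattern: whitelist the scheme, then shell-quote the
    # argument so it cannot inject additional commands.
    if not user_string.startswith(ALLOWED_SCHEMES):
        raise ValueError("refusing non-HTTP(S) URL")
    return "open " + shlex.quote(user_string)
```

Safe Reading Mode sidesteps the problem differently, by refusing to run the JavaScript at all.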

No fix

ZDI went public after its usual 120-day disclosure window because Foxit made it clear no fix was coming, with this response:

“Foxit Reader & PhantomPDF has a Safe Reading Mode which is enabled by default to control the running of JavaScript, which can effectively guard against potential vulnerabilities from unauthorized JavaScript actions.”

Foxit Software appears to be content to suggest users run its wares in Safe Mode, as its security advisories home page offers that advice for bugs identified in 2011.

The company did patch a dirty dozen bugs in 2016. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/foxit_reader_vulnerabilities/

Mirai copycats fired the IoT-cannon at game hosts, researchers find

The Mirai botnet that took down large chunks of the Internet in 2016 was notable for hosing targets like Krebs on Security and domain host Dyn, but research presented at a security conference last week suggests a bunch of high-profile game networks were also targeted.

Although Mirai’s best-known targets were taken out by the early infections, other ne’er-do-well types saw its potential and set up their own Mirai deployments, finishing up with more than 100 victims on the list.

That’s the conclusion suggested in a paper, Understanding the Mirai Botnet, presented at last week’s Usenix Security conference in Canada and penned by a group spanning Google, Akamai, Cloudflare, two universities and not-for-profit networking services provider Merit Network.

The authors confirm the kinds of infection targets seen by other Mirai researchers – digital video recorders, IP cameras, printers and routers – and observe that the devices hit were “strongly influenced by the market shares and design decisions of a handful of consumer electronics manufacturers.”

Helping matters out were known administrator passwords, such as “0000000” for a Panasonic printer and “111111” for a Samsung camera.
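Mirai’s propagation amounted to little more than trying a short dictionary of factory defaults against every device it could reach. A simplified sketch – the passwords come from the article (usernames are assumed), and the login callable stands in for a real Telnet attempt:

```python
# Factory defaults of the kind Mirai carried; the first two passwords
# are the examples cited above (usernames assumed for illustration).
DEFAULT_CREDS = [
    ("root", "0000000"),   # Panasonic printer
    ("admin", "111111"),   # Samsung camera
    ("root", "admin"),
]

def try_defaults(device_login, creds=DEFAULT_CREDS):
    """Return the first (user, password) pair the device accepts, else None.

    device_login is a callable standing in for a real Telnet login attempt.
    """
    for user, password in creds:
        if device_login(user, password):
            return (user, password)
    return None
```

A device whose defaults were never changed falls to the very first matching pair; changing the password defeats the whole dictionary.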

The authors were also surprised to find targets that previous Mirai research hadn’t revealed. They say the PlayStation Network – which Flashpoint last year hinted at but didn’t name – was a target, as was Xbox Live. Other groups operating Mirai botnets targeted “popular gaming platforms such as Steam, Minecraft, and Runescape.”

Even that’s a small slice of the overall attack distribution, since the total of more than 15,000 Mirai attacks that ended up in the researchers’ sample hit 5,046 victims on 4,730 individual IP addresses, 196 subnets, and 120 individual domain names. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/21/mirai_copycats_fired_the_iotcannon_at_game_hosts_researchers_find/

Bitcoin-accepting shops leave cookie trail that crumbles anonymity

Bitcoin transactions might be anonymous, but on the Internet, its users aren’t – and according to research out of Princeton University, linking the two together is trivial on the modern, much-tracked Internet.

In fact, linking a user’s cookies to their Bitcoin transactions is so straightforward, it’s almost surprising it took this long for a paper like this to be published.

The paper sees privacy researcher Dillon Reisman and Princeton’s Steven Goldfeder, Harry Kalodner and Arvind Narayanan demonstrate just how straightforward it can be to link cookies to cryptocurrency transactions:

Sorry Alice: we know who you are. Image: Arxiv paper.

Only small amounts of transaction information need to leak, they write, in order for “Alice” to be associated with her Bitcoin transactions. It’s possible to infer the identity of users even if they use privacy-protecting services like CoinJoin, a protocol designed to make Bitcoin transactions more anonymous. The protocol aims to make it impossible to infer which inputs and outputs belong to each other.

Of 130 online merchants that accept Bitcoin, the researchers say, 53 leak payment information to 40 third parties, “most frequently from shopping cart pages,” and most of these on purpose (for advertising, analytics and the like).

Worse, “many merchant websites have far more serious (and likely unintentional) information leaks that directly reveal the exact transaction on the blockchain to dozens of trackers”.
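The linking attack needs nothing sophisticated: a tracker that sees a payment’s amount and rough timestamp from a cart page only has to find the blockchain transaction matching both. A toy illustration with synthetic data, not the paper’s actual pipeline:

```python
def link_purchase(leaked_amount_btc, leaked_time, blockchain, window=3600):
    """Return blockchain transactions matching a leaked cart total
    within `window` seconds of the leaked timestamp.

    blockchain is a list of dicts with "txid", "amount" and "time"
    (Unix seconds) keys - a stand-in for a real chain index.
    """
    return [
        tx for tx in blockchain
        if tx["amount"] == leaked_amount_btc
        and abs(tx["time"] - leaked_time) <= window
    ]
```

On a real chain, an exact amount plus a one-hour window typically narrows millions of transactions to a handful of candidates – which is why the paper treats even "small" leaks as deanonymizing.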

Of the 130 sites the researchers checked:

It doesn’t help that even for someone running tracking protection, a substantial amount of personal information was passed around by the sites examined in the study.

A total of 49 merchants shared users’ identifying information, and 38 did so even when the user tried to stop them with tracking protection.

Users have very little protection against all this, the paper says: the danger is created by pervasive tracking, and it’s down to merchants to give users better privacy.

Since, as they write, “most of the privacy-breaching data flows we identify are intentional”, that seems a forlorn hope. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/20/bitcoins_anonymity_easy_to_penetrate/

US DoD, Brit ISP BT reverse proxies can be abused to frisk internal systems – researcher

BSides Minor blunders in reverse web proxies can result in critical security vulnerabilities on internal networks, the infosec world was warned this week.

James Kettle of PortSwigger, the biz behind the popular Burp Suite, has taken the lid off an “almost invisible attack surface” he argues has been largely “overlooked for years.” Kettle took a close look at reverse proxies, load balancers, and backend analytics systems, and on Thursday revealed his findings. For the unfamiliar, when browsers visit a webpage they may well connect to a reverse proxy, which fetches the content behind the scenes from other servers, and then passes it all back to the client as a normal web server.

Malformed requests and esoteric headers in HTTP fetches can potentially coax some of these systems into revealing sensitive information and opening gateways into a victim’s networks, Kettle discovered. Using these techniques, Kettle was able to perforate US Department of Defense networks, trivially earning more than $30k in bug bounties in the process, as well as accidentally exploiting his own firm’s ISP, BT.

“While trying out the invalid host technique, I noticed pingbacks arriving from a small pool of IP addresses for payloads sent to completely unrelated companies, including cloud.mail.ru,” Kettle explained. A reverse DNS lookup linked those IP addresses to bn-proxyXX.ealing.ukcore.bt.net – a collection of systems belonging to BT, PortSwigger’s broadband ISP. In other words, sending malformed HTTP requests to Mail.ru resulted in strange responses from his ISP’s servers.

“Getting a pingback from Kent, UK, for a payload sent to Russia is hardly expected behaviour,” he added. This sparked his decision to investigate. The responses were coming back in 50ms, which was suspiciously fast for a request that’s supposedly going from England to Russia and back via a datacenter in Ireland.

A TCP trace route revealed that attempts to establish a connection with cloud.mail.ru using port 80 (aka HTTP) were intercepted by BT within the telco’s network, but traffic sent to TCP port 443 (aka encrypted HTTPS) was not tampered with. “This suggests that the entity doing the tampering doesn’t control the TLS certificate for mail.ru, implying that the interception may be being performed without mail.ru’s authorisation or knowledge,” Kettle explained.
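The timing discrepancy itself is easy to probe for: compare TCP connect latency on ports 80 and 443 to the same distant host. A rough sketch of that comparison – thresholds and interpretation are illustrative, not Kettle’s actual methodology:

```python
import socket
import time

def connect_time_ms(host, port, timeout=5.0):
    """Measure TCP connect latency to host:port in milliseconds."""
    start = time.monotonic()
    sock = socket.create_connection((host, port), timeout=timeout)
    sock.close()
    return (time.monotonic() - start) * 1000.0

def looks_intercepted(http_ms, https_ms, ratio=3.0):
    # A plaintext port answering several times faster than the TLS port
    # on the same distant host hints that an in-path box is terminating
    # port 80 locally - the ~50ms responses Kettle saw from BT's network
    # on a supposed round trip to Russia.
    return https_ms > ratio * http_ms
```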

Further digging by the researcher revealed that the system he’d stumbled upon was primarily being used to block access to stuff like child sex abuse material and pirated copyrighted material. Essentially, these were the boxes inspecting and filtering Brits’ internet traffic. “For years I and many other British pentesters have been hacking through an exploitable proxy without even noticing it existed,” according to Kettle.

Crucially, Kettle said he could reach BT’s internal control panels for its snooping tech via these proxy servers. “I initially assumed that these companies must collectively be using the same cloud web application firewall solution, and noted that I could trick them into misrouting my request to their internal administration interface,” he said.

Kettle added that, as well as this worrying security vulnerability, putting subscribers behind shared proxies is bad because if one of the boxes ends up on a blacklist, everyone behind it gets blocked:

All BT users share the same tiny pool of IP addresses. This has resulted in BT’s proxy IPs landing on abuse blacklists and being banned from a number of websites, affecting all BT users. Also, if I had used the aforementioned admin access vulnerability to compromise the proxy’s administration panels, I could potentially reconfigure the proxies to inject content into the traffic of millions of BT customers.

Kettle reported the ability to access the internal admin panel to a personal contact at BT, who made sure it was quickly protected. The interception system is related to CleanFeed, which was built by BT in the mid-2000s to block access to images and videos of children being sexually abused. This technology was repurposed to target pirates illegally sharing movies, music, software and other copyrighted stuff. A Colombian ISP called METROTEL had a similar set up.

Later in his research, Kettle discovered that US Department of Defense proxies whitelist access to internal services using the Host header in HTTP requests, but forget that the hostname in the GET request takes precedence over the Host header. So a browser could connect to the external-facing proxy, set the Host header in the request to a public-facing site like “darpa.mil” but GET “some-internal-website.mil”, and get through to that intranet portal.

Essentially, he was able to route requests to servers intended to be accessible to US military personnel only.
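In raw HTTP terms, the bypass looks something like this – the whitelisted hostname is from the article, while the internal name is invented for illustration:

```python
def build_bypass_request(whitelisted_host, internal_host, path="/"):
    """Build an HTTP/1.1 request whose absolute-URI request target names
    an internal host while the Host header names a whitelisted one.

    RFC 7230 section 5.4 requires a proxy receiving an absolute-form
    target to ignore the Host header in favour of it - the precedence
    the DoD proxies honoured while whitelisting on Host alone.
    """
    return (
        f"GET http://{internal_host}{path} HTTP/1.1\r\n"
        f"Host: {whitelisted_host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
```

The proxy checks `Host: darpa.mil` against its whitelist, then dutifully routes the request to the internal hostname in the request line.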

“This flaw has since been resolved. It’s likely that other non-DoD servers have the same vulnerability, though,” Kettle told El Reg.

Kettle also discovered a system that enabled reflected cross-site scripting attacks to be escalated into server-side request forgeries.

On the back of his research, Kettle developed and released Collaborator Everywhere, an open-source Burp Suite extension that helps uncloak backend systems by automatically injecting non-damaging payloads into web traffic.

“To achieve any semblance of defence in depth, reverse proxies should be firewalled into a hardened DMZ, isolated from anything that isn’t publicly accessible,” Kettle concluded.

His research is summarized in this blog post. To defend against attacks, basically make sure you’re not susceptible to this kind of interference. Kettle presented his work at BSides in Manchester, England, on Thursday. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/19/reverse_proxy_war/

Berkeley boffins build better spear-phishing black-box bruiser

Security researchers from UC Berkeley and the Lawrence Berkeley National Laboratory in the US have come up with a way to mitigate the risk of spear-phishing in corporate environments.

In a paper presented at Usenix 2017, titled “Detecting Credential Spearphishing in Enterprise Settings,” Grant Ho, Mobin Javed, Vern Paxson, and David Wagner from UC Berkeley, and Aashish Sharma of The Lawrence Berkeley National Laboratory (LBNL), describe a system that utilizes network traffic logs in conjunction with machine learning to provide real-time alerts when employees click on suspect URLs embedded in emails.

Spear-phishing is a social engineering attack that involves targeting specific individuals with email messages designed to dupe the recipient into installing a malicious file or visiting a malicious website.

Such targeted attacks are less common than phishing attacks launched without a specific victim in mind, but they tend to be more damaging. High profile data thefts at the Office of Personnel Management (22.1 million people) and at health insurance provider Anthem (80 million patient records), among others, have involved spear-phishing.

The researchers are concerned specifically with credential theft since it has fewer barriers to success than exploit-based attacks. If malware is involved, diligent patching and other security mechanisms may offer defense, even if the target has been fooled. If credentials are sought, tricking the target into revealing the data is all that’s required.

The researchers focus on dealing with attacks that attempt to impersonate a trusted entity, which may involve spoofing the name field in emails, inventing a name that’s plausibly trustworthy, like [email protected], or sending messages from a compromised trusted account. Another means of impersonation, email address spoofing, is not considered because it can be dealt with through email security mechanisms like DKIM and DMARC.

The challenge in automating spear-phishing detection is that such attacks are rare, which is why many organizations still rely on user reports to trigger an investigation. The researchers note that their enterprise dataset contains 370 million emails – about four years’ worth – and only 10 known instances of spear-phishing.

So even a false positive rate of 0.1 per cent would mean 370,000 false alarms, enough to paralyze a corporate IT department. And the relative scarcity of spear-phishing examples ensures that machine learning techniques lack the volume of data to create a viable training model.
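That base-rate arithmetic is worth spelling out:

```python
def false_alarms(total_emails, fp_rate):
    """Expected false alarms for a detector with the given false positive rate."""
    return round(total_emails * fp_rate)

# 0.1 per cent of 370 million emails is 370,000 spurious alerts,
# all chasing just 10 genuine spear-phishing emails.
print(false_alarms(370_000_000, 0.001))
```

Any usable detector therefore needs a false positive rate orders of magnitude below what would pass for excellent in most classification tasks.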


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/18/spear_phishing_detector/

No, the cops can’t get a search warrant to just seize all devices in sight – US appeals court

It’s a ruling sending shockwaves through the worlds of privacy, device security, and law enforcement in America.

The US Circuit Court of Appeals in the District of Columbia on Friday overturned the conviction of a gang member because investigators obtained a search warrant for his devices without probable cause.

In other words, crucial evidence obtained by investigators using a search warrant to seize and scan all phones and other gadgets on sight has been thrown out.

Ezra Griffith, of Washington DC, was found guilty in 2013 of unlawful possession of a firearm by a felon, having been previously convicted of attempted robbery.

The then 23-year-old came under the suspicion of DC police officers investigating a gang-related murder due to his involvement with a rival gang and because of surveillance video.

His arrest on firearms charges followed a search of his girlfriend’s apartment that resulted in the recovery of a gun, something Griffith could not lawfully possess.

The officers at the scene had come with a search warrant for all cell phones and seized six, along with a tablet computer.

Hearing Griffith’s appeal to invalidate the warrant that led to the evidence used to convict him, the appeals court found that authorities had obtained permission to search the girlfriend’s apartment for electronic devices without adequately establishing probable cause.

The affidavit filed to obtain the warrant sought any cell phones or electronic devices belonging to Griffith, along with any written or printed material related to the homicide being investigated.

But authorities did not state any reason for believing that Griffith’s devices contained evidence of a crime.

“The government’s argument in support of probable cause to search the apartment rests on the prospect of finding one specific item there: a cell phone owned by Griffith,” the court ruling says. “Yet the affidavit supporting the warrant application provided virtually no reason to suspect that Griffith in fact owned a cell phone, let alone that any phone belonging to him and containing incriminating information would be found in the residence.”

Not only that, but the warrant allowed officers to seize any electronic devices present, even if they belonged to someone other than Griffith.

For the DC Court of Appeals, at least, such a broadly drawn warrant went too far.

“[W]e do not doubt that most criminals – like most people – have cell phones, or that many phones owned by criminals may contain evidence of recent criminal activity,” the court said. “Even so, officers seeking authority to search a person’s home must do more than set out their basis for suspecting him of a crime.”

The court’s affirmation of the Fourth Amendment comes as the Supreme Court is weighing how to apply the Constitution’s prohibition on unreasonable searches to cell phone location data. Earlier this week, tech companies joined media companies and other organizations to urge the Supreme Court to disallow demands for location data without a warrant.

Orin Kerr, a law professor at Georgetown University, observed via Twitter: “On a first read, at least, Judge Srinivasan’s alternative holding in Griffith is going to create a mess.”

However, he added, it’s great for defense attorneys. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/18/appeals_court_defends_device_privacy/

News in brief: few girls studying computing; new Galaxy Note battery issue; fine over parking data breach

Your daily round-up of some of the other stories in the news

Concern at number of girls studying computing

There’s been a lot of focus on how to improve the representation of women in the tech industry in the wake of concerns about the culture at companies such as Uber, and many experts agree that it’s important to focus on the pipeline and to encourage girls and young women to choose relevant subjects at school.

So the news that just 9.8% of those taking the A-level computing exam at 18 are girls has sparked concern, as did the low overall numbers taking the course, the BBC reported.

Bill Mitchell of BCS, the chartered institute for IT, said in response to the figures from the Joint Council for Qualifications: “Today’s announcement that nearly 7,600 students in England took A-level computing means it’s not going to be party time in the IT world for a long time to come,” and added: “At less than 10%, the numbers of girls taking computing A-level are seriously low.”

He went on: “We need to make sure that our young women are leaving education with the digital skills they need to secure a worthwhile job, an apprenticeship or go on to further study.”

Battery fears hit Samsung again

Remember the debacle over the Samsung Galaxy Note 7 and the overheating batteries? Now Samsung has been hit by another battery issue – some refurbished Galaxy Note 4 devices are having their batteries recalled.

However, this time it’s not Samsung’s fault: the 10,000-odd affected devices, according to the US Consumer Product Safety Commission, which issued the recall, are “batteries placed into refurbished AT&T Samsung Galaxy Note 4 cellphones by FedEx Supply Chain and distributed as replacement phones through AT&T’s insurance program only”.

The affected batteries are apparently counterfeit, and are at risk of overheating. Although the Note 4 is three years old, the affected phones were sent out to customers fairly recently – between December 2016 and April this year – as replacements via AT&T.

If you’ve got one of these devices, power down the phone and don’t use it – you’ll be hearing from FedEx.

Council fined over parking data breach

A local authority in London has been fined £70,000 after it exposed the personal information of 89,000 people via its parking ticket system, which allowed people to see CCTV images of their alleged parking offence.

The Information Commissioner’s Office, the UK’s data regulator, fined the council after a member of the public realised that by manipulating a URL on the council’s Ticket Viewer system they could access the information of other people including bank details, medical evidence and home addresses and phone numbers.

Sally Anne Poole, the ICO enforcement officer, said: “People have a right to expect their personal information is looked after. Local authorities handle lots of personal information, much of which is sensitive. If that information isn’t kept secure, it can have distressing consequences for all those involved.”

The ICO said the council hadn’t tested the system before it went live, nor regularly afterwards.
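The flaw described is a textbook insecure direct object reference: the identifier in the URL was the only access control. A minimal sketch of the broken lookup alongside the ownership check that would have prevented it – data and names are hypothetical:

```python
TICKETS = {
    "1001": {"owner": "alice", "address": "10 High St"},
    "1002": {"owner": "bob", "address": "22 Low Rd"},
}

def view_ticket_broken(ticket_id, requester):
    # Whatever ID appears in the URL is served, so anyone can walk
    # the sequence 1001, 1002, ... and read strangers' records.
    return TICKETS.get(ticket_id)

def view_ticket_fixed(ticket_id, requester):
    # Serve the record only to its authenticated owner.
    ticket = TICKETS.get(ticket_id)
    if ticket is None or ticket["owner"] != requester:
        return None
    return ticket
```

Basic pre-launch testing of the kind the ICO faulted the council for skipping would catch exactly this: request a ticket ID you don’t own and check it is refused.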

Catch up with all of today’s stories on Naked Security


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wY9-8m9OoOg/


Phone location privacy – for armed robber – headed to Supreme Court

Armed robbers are not sympathetic characters. Which means defending their right to privacy might not get much sympathy either.

But, as multiple privacy advocates note, it’s not just about them – it’s about the rest of us: if their privacy isn’t protected, neither is yours and neither is anyone’s.

That is at the heart of a case now headed to the US Supreme Court (SCOTUS). The legal issue is whether cell phone users “voluntarily” turn over cell tower location data to the carriers, which therefore means it is not private. It is a sure bet that almost nobody thinks that, since they don’t get to volunteer. If they want to use their phones, the carrier collects the data.

But the emotional/political issue is that it’s about a convicted criminal. Which recalls the words of HL Mencken, the iconic journalist and cultural critic, who famously said:

The trouble with fighting for human freedom is that one spends most of one’s time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

The scoundrel in this case is Timothy Ivory Carpenter, convicted in 2013 of six robberies of cell phone stores in the Detroit area, and using a gun in five of them. He was sentenced to 116 years for his role in the crimes, committed with several others, including his half-brother, Timothy Michael Sanders.

But part of the evidence used to convict Carpenter was data from wireless carriers, which prosecutors said placed his phone within a half mile to two miles of the sites of the robberies when they were committed.

Carpenter and Sanders appealed, with the backing of the American Civil Liberties Union (ACLU) and other groups, arguing that the collection of that data without a warrant violated his Fourth Amendment protection against unreasonable search and seizure.

They failed at the Appeals Court level in April 2016, when the Sixth US Circuit Court of Appeals found that while personal communications are private, “the federal courts have long recognized (that) the information necessary to get those communications from point A to point B is not,” which includes the metadata from cell phone towers. The court added that such data

… are information that facilitate personal communications, rather than the content of those communications themselves. The government’s collection of business records containing these data therefore is not a search.

It also noted that access to the phone records had been granted by magistrate judges under the Stored Communications Act (SCA), which the FBI sought after one of the robbers confessed and then gave the agency his cellphone along with the numbers of other participants.

However, the FBI didn’t seek a warrant. And that prompted the appeal, which the Supreme Court is scheduled to hear in the term that begins in October, and which has prompted a small blizzard of amicus briefs from privacy advocates including the Electronic Frontier Foundation (EFF), the Electronic Privacy Information Center (EPIC) and another from more than a dozen of the nation’s top tech companies including Airbnb, Apple, Cisco Systems, Dropbox, Evernote, Facebook, Google, Microsoft, Mozilla, Nest Labs, Snap, Twitter and Verizon.

One of their biggest objections to the Appeals Court decision is that it is based, as the court said, on “long recognized” precedent. Long, as in long ago, in the 1970s, when nobody had a cellphone. It holds that information voluntarily given to a third party as part of a business transaction doesn’t qualify for Fourth Amendment protection.

That, the advocates say, is vastly out of date – applying analog rules to a digital world – since just about everybody now carries a cellphone. There are now an estimated 396m mobile accounts in the US (more than the nation’s population), and the location data gathered by cell towers is becoming as precise as GPS tracking.

Even if location services is shut off on a phone, simply operating the phone means it connects to cell towers, generating data called cell site location information (CSLI). According to the EFF brief, “as the number of cell towers has increased and cell sites have become more concentrated, the geographic area covered by each cell sector has shrunk,” which makes it possible to determine where a phone is within 50 meters.

The tech companies’ brief also noted that the SCA, under which the FBI sought the phone metadata, was enacted in 1986, when, “few people used the internet, almost none had portable computers, and only around 500,000 Americans subscribed to basic cell phone service”.

Other reasons cited by privacy advocates for the Fourth Amendment applying to CSLI include:

  • Users don’t really “voluntarily” turn over that data to the wireless carriers, since they can’t use the phone without doing so. Alan Butler, senior counsel at EPIC, said the Supreme Court has already signaled that it understands that mobile devices “have become embedded into our daily lives. I think the notion that cell phone users necessarily ‘assume the risk’ or ‘consent’ to collection and disclosure of their location information is nonsense and flips privacy law entirely on its head.”

Butler, who also authored a recent post on SCOTUSblog about the Carpenter case, noted that the Supreme Court in 2012 unanimously threw out a conviction for drug trafficking because of evidence gathered by law enforcement putting a GPS tracker on the defendant’s car.

Carrying a phone, he and others have noted, amounts to a GPS tracker monitoring not just where your car goes, but where you go, all the time.

  • If precedent stands, Big Brother can track just about anybody without a warrant. EFF noted that “AT&T alone received 70,528 requests for CSLI in 2016 and 76,340 requests in 2015. Verizon received 53,532 requests in 2016 and 50,066 requests in 2015.” The majority of them were warrantless.
  • The location tracking of people extends far beyond real time, unlike human surveillance. It can go back months, or even years, creating a highly detailed record of everywhere a person has been.
  • Given the necessity of cell phones, people now have a “reasonable expectation” that their location information is private.

The lobbying for the Carpenter conviction to be overturned is not unanimous, however. Orin Kerr, a research professor at the George Washington University Law School, in a post on SCOTUSblog, argued that what is really at issue is “what you might call the eyewitness rule: the government can always talk to eyewitnesses”.

In this case, he said, the wireless carrier is an eyewitness. “Customers use their services and hire the companies to place calls for them,” he wrote, which means the business record of what they did for customers doesn’t have Fourth Amendment protection.

The right question for the court, he contended, is not Carpenter’s “expectation” of privacy, but whether he should “have a right to stop others from telling the government about what they saw [him] do”.

Of course, this is about billions of digital “eyes”, not people on the street.

Which calls to mind a talk by Christopher Soghoian late last year, when he was chief technologist at the ACLU, titled “Stopping Law Enforcement Hacking” at the Chaos Communication Congress (CCC).  He said:

Many of the court cases that define our basic privacy rights come from cases involving drug dealers, people smuggling alcohol, and paedophiles. So it can be very unpleasant for people to engage in these cases.

But if you wait until the government is using its powers against journalists and freedom fighters, by that point the case law is settled.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/eyh4yc-2oN4/

‘Pulse wave’ DDoS – another way of blasting sites offline

After all the excitement over 2016’s Mirai Internet of Things (IoT) DDoS attack, you could be forgiven for thinking that the criminal pastime of overloading servers with lots of unwanted traffic has gone a bit quiet recently.

It’s been this way for years. DDoS attacks tend not to be noticed by anyone other than service providers unless they are particularly huge, hit well-known websites, or manifest nastiness such as the notorious DD4BC extortion gang attacks of 2015.

Such attacks make headlines only occasionally, but beneath the firefighting by service providers – and the commercial secrecy that keeps many attacks unreported – innovation rumbles on.

Now, mitigation company Incapsula has spotted an example of this behind-the-scenes evolution in the form of “pulse wave”, a new type of attack pattern which, from the off, had its experts intrigued.

DDoS attacks, which spew forth from botnets of one type or another, normally follow a format in which traffic increases before a peak is reached, after which comes either a gradual or sudden drop. The rise has to be gradual because bots take time to muster.

The recent wave of pulse attacks during 2017 looked different, with massive peaks popping out of nowhere rapidly, often within seconds. Demonstrating that this was no one-off, successive waves followed the same pattern.

Says Incapsula:

This, coupled with the accurate persistence in which the pulses reoccurred, painted a picture of very skilled bad actors exhibiting a high measure of control over their attack resources.

Granted, but to what end?

The clue was in the gaps between the “pulses” of each attack. In fact, the botnet or botnets behind these attacks were not necessarily being switched off at all – the gaps were just the attackers pointing it at different targets, like turning a water cannon.  This explained the rapid surge in traffic on the commencement of each attack.

It’s likely not a coincidence, Incapsula claims, that this pattern causes problems for one DDoS defence, which is to use on-site equipment with fail-over to a cloud traffic “scrubbing” system in the event that an attack gets too big. Because traffic ramps almost instantly, that fail-over can’t happen smoothly, and the network might rapidly find itself cut off.
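The distinguishing signal Incapsula describes is the ramp rate: a botnet mustering bots takes minutes to reach peak, while a redirected one peaks within seconds. A toy classifier over per-second traffic samples – the thresholds are illustrative, not Incapsula’s:

```python
def is_pulse_wave(samples_gbps, peak_fraction=0.9, ramp_seconds=10):
    """Return True if traffic reaches peak_fraction of its maximum
    within ramp_seconds of the attack starting - the near-instant
    surge characteristic of a redirected ('pulse wave') botnet."""
    peak = max(samples_gbps)
    if peak == 0:
        return False
    for second, value in enumerate(samples_gbps):
        if value >= peak_fraction * peak:
            return second <= ramp_seconds
    return False
```

A mitigation system that triggers fail-over only once traffic crosses a threshold has, under this pattern, essentially no warning window in which to hand off cleanly.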

If that’s true, organisations that have built their datacentres around sensible layered or “hybrid” DDoS defense will be in a pickle. Either they’ll have to beef up their in-house mitigation systems or convince their cloud provider to offer rapid fail-over. Incapsula, we humbly note, sells cloud-based mitigation.

All in all, it sounds like a small but important technical innovation that will be countered with the same. Given the impressive traffic these botnets seem able to summon at will – reportedly 300Gbps for starters – it would be unwise to dismiss it as just another day at the internet office.

Or perhaps the real innovation in DDoS criminality isn’t in the way traffic is pointed at victims so much as the tragic wealth of undefended servers and devices that can be hijacked to generate the load in the first place.

This was one of the surprising lessons of Mirai and perhaps it has yet to be learned: never underestimate the damage a motley collection of ignored and forgotten webcams and home routers can do to some of the internet’s biggest brands if given the chance.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/1oocShyuNFs/