WannaCry-killer Marcus Hutchins pleads not guilty to malware claims

Marcus Hutchins, the WannaCry ransomware killer and now suspected malware developer, was told by a Las Vegas court on Friday that he could be released on bail. He also denied any wrongdoing.

The British citizen was sensationally arrested and taken into custody on Wednesday by the FBI. The agents swooped as he was about to board a flight back home to the UK from America after attending the DEF CON hacking conference in Nevada last week. The Feds have accused him of creating, developing, and selling the Kronos banking malware from 2014 to 2015 with an unnamed associate.

On Thursday, he appeared in court for a five-minute hearing, and the case was adjourned for a day to give him more time with his lawyers. On Friday, at 3pm Pacific Time, he appeared before a judge, pleaded not guilty to the charges against him, and was told he could be released on bail under certain conditions with a $30,000 bond.

However, even though that hearing finished at 3.30pm, Hutchins and his lawyers weren’t able to get to the bail office in time as it closes at 4pm. Thus, he will not be released today – and will spend the weekend behind bars as the office will not reopen until Monday. He’s also due to be flown to Wisconsin for his next court appearance on Tuesday.

“He’s dedicated his life to researching malware and not trying to harm people,” said one of his attorneys, Adrian Lobo. “Using the internet for good is what he’s done.”

Lobo also told journalists Hutchins was able to raise bail money from his supporters, and that his family are still in the UK. We understand the Brit has still not been able to speak to his friends or relatives.

Prior to the hearing, Hutchins filed a motion to allow him to appear in court without wearing full shackles. It’s a measure of how paranoid the US court system is that a 23-year-old computer expert with no violent past could be shackled hand and foot for an administrative hearing. As it was, he appeared in a yellow jumpsuit and orange Crocs.

US Department of Justice prosecutors cited Hutchins’ recent trip to a gun range as proof that he should be denied bail and kept in jail, we’re told. Lobo said the government’s argument was “garbage.”

Crucially, prosecutors are also claiming that Hutchins admitted during interrogation – in which he did not have a lawyer present – to writing malware, and allege the Brit hinted he also sold software nasties. That sounds bad; bear in mind, however, that Hutchins, who goes by MalwareTechBlog on Twitter, has written and shared malware code online for research purposes.

In April 2014, well before Kronos hit, Hutchins, who works as an antivirus researcher, published a blog post titled: “Coding Malware for Fun and Not for Profit (Because that would be illegal).” In it he explained how to write a bootkit for years-old Windows XP, and took steps to make sure it was next to useless.

“Before you get on the phone to your friendly neighborhood FBI agent, I’d like to make clear a few things: The bootkit is written as a proof of concept, it would be very difficult to weaponize, and there is no weaponized version to fall into the hands of criminals,” he blogged at the time.

And in 2015, Hutchins revealed on Twitter his shock at finding some other code he wrote being used within malware – Kronos, to be exact.

Hutchins’ lawyers say he is not in any way behind the Kronos Trojan, which silently infects Windows PCs to siphon off funds from victims’ online banking accounts. It is typically sold to crooks, who spread it in emails and malicious downloads and then pocket the stolen loot. It is based loosely on the Zeus Trojan, and was announced on Russian-language hacker forums in July 2014.

When free, whenever that will be, Hutchins will have to wear a GPS tag at all times, can’t use the internet, and can have no contact with his unnamed accused co-conspirator. He’s also confined to the US for the time being. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/04/marcus_hutchins_wannacry_kronos_court_bail/

Parents claim Disney gobbled up kids’ info through mobile games

Disney has been sued in America for allegedly collecting children’s personal information without getting parents’ approvals.

A class-action lawsuit [PDF] filed Thursday in northern California accuses the unstoppable children’s entertainment brand and three of its developer partners of violating privacy laws by tracking the locations and activities of kids who use their mobile games – without first asking parents to approve the activity.

Named plaintiff Amanda Rushing is suing on behalf of herself and a class of all parents whose kids played “Disney Princess Palace Pets” and 42 other Disney-branded smartphone and tablet games that allegedly run afoul of the Children’s Online Privacy Protection Act (COPPA).

According to the suit, the Disney apps for both iOS and Android do not ask for parental permission before they use software development kits that assign unique identifiers to users and then use those identifiers to track the location of the users, as well as activities in-game and across multiple devices. The data is then fed to advertisers to serve up targeted ads.

“In other words, the ability to serve behavioral advertisements to a specific user no longer turns upon obtaining the kinds of data with which most consumers are familiar (email addresses, etc), but instead on the surreptitious collection of persistent identifiers, which are used in conjunction with other data points to build robust online profiles,” the suit claims.

“Permitting technology companies to obtain persistent identifiers associated with children exposes them to the behavioral advertising (as well as other privacy violations) that COPPA was designed to prevent.”
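
For readers curious how this works mechanically, here is a minimal sketch of the pattern the suit describes: an SDK mints a persistent identifier on first launch and stamps it on every event it reports. All names, fields, and the storage path are hypothetical, not taken from Disney's apps or the named SDK vendors.

```python
import json
import os
import uuid

ID_FILE = "device_id.txt"  # hypothetical on-device storage location

def get_persistent_id() -> str:
    """Return a device-scoped identifier, creating one on first launch.

    Because the ID survives across sessions (and, with server-side
    matching, across devices), every event sent with it can be linked
    into one long-term profile -- no email address required.
    """
    if os.path.exists(ID_FILE):
        with open(ID_FILE) as f:
            return f.read().strip()
    new_id = uuid.uuid4().hex
    with open(ID_FILE, "w") as f:
        f.write(new_id)
    return new_id

def build_event(name: str, lat: float, lon: float) -> str:
    """Bundle an in-game event with the persistent ID and a location fix."""
    return json.dumps({
        "device_id": get_persistent_id(),
        "event": name,
        "location": {"lat": lat, "lon": lon},
    })

# The kind of payload an ad network might receive for a young player:
print(build_event("level_complete", 37.77, -122.42))
```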

The class action also names software developers Unity Technologies, Upsight Inc and Kochava Inc as defendants. In addition to damages, the suit seeks a cease-and-desist order to stop the collection of data without permission.

Disney would not be the first mobile developer to face the possibility of penalties for violating COPPA. Going back to 2011, mobile app developers have been catching raps for tracking underage users without permission.

In 2014, Yelp had to pay out half a million dollars because of a bug in the code for its age verification tool, and more recently the warnings were extended to companies that make “smart” toys. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/05/disney_charged_slurping_kids_info/

Google wants to track you in real life – privacy group says, ‘No way!’

There’s a long-term marketing bugaboo that Google has plans to fix: how to convince its clients that their ad dollars are turning into sweet payola.

As Google announced at its annual Marketing Next conference in May, it will go beyond just serving ads to consumers. Using an artificial intelligence (AI) tool called Attribution, it said it would follow us around to see where we go, tracking us across devices and channels – mobile, desktop, and in physical stores – to see what we’re buying, to match purchases up with what ads we’ve seen, and to then automatically tell marketers what we’re up to and what ads have paid off.

Google said at the time that it was planning to anonymize the data by hashing it before handing it over, as in, “User 08a862b091c379fe9767615d10873 saw these 10 ads in the morning, and spent $27.73 at a certain grocery store that afternoon.”
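
Google hasn't revealed its exact scheme, but the basic idea of swapping a direct identifier for a hashed token looks something like this minimal sketch (the salt, truncation, and field choices are assumptions for illustration):

```python
import hashlib

SALT = b"example-salt"  # assumed; a real system would keep this secret

def pseudonymize(user_email: str) -> str:
    """Replace a direct identifier with a stable, opaque token."""
    digest = hashlib.sha256(SALT + user_email.encode()).hexdigest()
    return digest[:29]  # truncated to match the article's example ID length

token = pseudonymize("alice@example.com")
print(f"User {token} saw these 10 ads in the morning, "
      f"and spent $27.73 at a certain grocery store that afternoon.")
```

Note what the sketch makes plain: hashing alone yields a stable pseudonym, so all of a user's activity still links into a single profile, which is the crux of EPIC's objection.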

Well, that is not reassuring whatsoever from a privacy standpoint, the Electronic Privacy Information Center (EPIC) says. On Monday, EPIC announced that it’s filed a complaint (PDF) with the Federal Trade Commission (FTC) to stop Google from tracking in-store purchases.

As Google is happy to boast, it’s captured data on over 5 billion debit and credit card purchases in stores in just under three years using AdWords. Google then matches individuals’ buying histories with what they do online.

In fact, Google’s using “third-party partnerships” to gain access to what it says are “approximately 70% of credit and debit card transactions in the US.” But Google hasn’t identified who those partners are, or how they’ve captured all that information.

Likewise, Google says it’s protecting online privacy, but it’s refused to say how, EPIC says. Nor will Google allow independent testing of whatever technique it’s using to preserve consumer privacy.

From the complaint:

Google claims that it protects online privacy but refuses to reveal details of the algorithm that “deidentifies” consumers while tracking their purchases.

The privacy of millions of consumers thus depends on a secret, proprietary algorithm.

Google has said that it can’t give details on its mathematical formulas because of a pending patent.

But it has also revealed that the algorithm was based on CryptDB, a database that works by executing SQL queries over encrypted data (PDF of an MIT paper on CryptDB). CryptDB, however, has known security flaws: Microsoft researchers in 2015 hacked into a CryptDB-protected database of healthcare records and accessed over 50% (sometimes 100%) of sensitive patient data at an individual level.
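
CryptDB's core trick is deterministic encryption: the same plaintext always yields the same ciphertext, so an equality query in SQL can run over encrypted values without the database ever seeing plaintext. Here is a minimal sketch of that one idea (using AES-ECB via PyCryptodome purely for illustration; CryptDB's actual layered "onion" scheme is more involved):

```python
import sqlite3
from Crypto.Cipher import AES  # pip install pycryptodome

KEY = b"0123456789abcdef"  # demo key only; never hard-code real keys

def det_encrypt(value: str) -> bytes:
    """Deterministic encryption: identical plaintexts give identical ciphertexts."""
    padded = value.encode().ljust(16)  # toy padding to a single AES block
    return AES.new(KEY, AES.MODE_ECB).encrypt(padded)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE visits (patient BLOB, diagnosis BLOB)")
for patient, diagnosis in [("alice", "flu"), ("bob", "flu"), ("carol", "cold")]:
    db.execute("INSERT INTO visits VALUES (?, ?)",
               (det_encrypt(patient), det_encrypt(diagnosis)))

# The database can evaluate an equality WHERE clause over ciphertext
# it cannot read...
rows = db.execute("SELECT patient FROM visits WHERE diagnosis = ?",
                  (det_encrypt("flu"),)).fetchall()
print(len(rows), "matching rows")  # -> 2

# ...but equal plaintexts produce equal ciphertexts, so anyone who reads
# the table can run frequency analysis -- essentially the weakness the
# 2015 Microsoft study exploited against CryptDB-protected records.
```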

Beyond Google’s lack of transparency about exactly how it’s protecting consumer privacy, EPIC also says that its tracking opt-out process is “burdensome, opaque, and misleading.”

According to Google, turning off Web & App Activity stops Google from saving information about the ads a user clicks. However, serve and click data may still be stored in a manner that allows for personal identification of the user even when Web & App Activity is turned off. Whenever an ad is served to a user’s browser, Google’s servers create a log that includes the user’s IP address and a unique identifier attached to the relevant Google advertising cookie.

Nor does opting out of Google cookies stop ads from being served. Those ads continue to be logged on Google’s servers, as do users’ IP addresses. The only way to get away from the tracking is by using a third-party product, such as a virtual private network (VPN), EPIC says.

Information about all this is buried several pages into Google’s Privacy Controls, and even if you get that far, Google doesn’t disclose the extent to which opting out of Web & App Activity stops it from tracking your interactions with Google ads.

EPIC’s asking the FTC to stop Google’s tracking of in-store purchases and to determine whether Google adequately protects consumer privacy.

It notes that Google’s looking to slather its dominance of online advertising onto the physical world. Absolutely. But who isn’t?

Amazon, for one, has also been stretching the boundaries of its online existence. In June, it was granted a patent to stop shoppers from checking online prices from competitors when we’re in one of its physical shops.

To do so, as it described in the patent, it would watch any online activity conducted over its Wi-Fi network, detect any relevant product information being searched on, and respond by sending the shopper to a completely different web page, blocking internet use altogether, and/or sending a store clerk scurrying over to our exact location in a store.

But at least you’ve got the option of not using Amazon Wi-Fi. Going further back still, marketers have been using technology to follow us around – no need to sign on to a store’s Wi-Fi – as our mobile phones broadcast our movements as we shop.

We’ve also seen both spying billboards and space-age garbage cans that advertisers have used to monitor people’s movements by tracking the unique IDs of their mobile phones.

Just how much fuel do we want to add to what Google already knows about us? Depending on which of its tools we use, Google knows what we think, what we need, what we desire, our political and spiritual beliefs, our age, our gender, what music we listen to, what we watch, what we read, where we’ve been, where we plan to go, where we work, where we hang out, where we live, who we meet, where we shop, when we shop, what we buy, how much money we’re worth, how much we spend, and how much energy we consume.

If you want to know what truly privacy-conscious experts think of Google adding data about what we buy in real-world stores to that already sky-high pile, what better place to turn than The Tor Project?

In a nutshell, the answer is “Go, EPIC, go!!!”

Google says it’s dismayed by EPIC’s complaint. A spokeswoman sent this statement to Ars Technica on Tuesday:

We take privacy very seriously so it’s disappointing to see a number of inaccuracies in this complaint. We invested in building industry-leading privacy protections before launching this solution. All data is encrypted and aggregated – we don’t share or receive any identifiable credit card data whatsoever.

Ars Technica’s Sean Gallagher, who reported on the CryptDB security vulnerabilities back in 2015, says that Google also claimed that it only learns the “aggregate value” of several purchases, not individual ones, and that neither it nor the ad buyer knows where the individual clicks came from.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/h3jx4DBs0Co/

Amazon reaches out to users with bad security before the crooks do

We’ve read plenty of stories recently about the accidental exposure of data stored in the cloud because of users’ poor configuration choices.

Cybersecurity researchers have been actively scanning Amazon Web Services (AWS) for accounts and files left open to the public; when they have encountered sensitive information, they have advised the company involved.

Three recent instances involved data exposures affecting huge numbers of customers.

Help is available.

AWS partners share their security expertise on how to secure data, and Amazon lays out how to use access controls and encryption in the AWS S3 FAQ.

Unfortunately, folks often don’t have time to look beyond their own work, and miss some basics on securing their buckets.

When security vendor Threat Stack conducted a survey of 200 AWS users in early 2017, we weren’t surprised by its findings: 73% had left SSH open to the public and 62% weren’t using two-factor authentication to secure access to their data.
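
Checking for the first of those problems is straightforward. Here is a minimal boto3 sketch (assuming AWS credentials and a region are already configured) that flags security groups leaving SSH open to the world:

```python
import boto3  # pip install boto3

ec2 = boto3.client("ec2")  # assumes credentials/region are configured

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in group.get("IpPermissions", []):
        from_port, to_port = perm.get("FromPort"), perm.get("ToPort")
        # A rule covering port 22 with a 0.0.0.0/0 source means SSH is
        # reachable from the entire internet.
        if from_port is not None and from_port <= 22 <= to_port:
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(f"{group['GroupId']} ({group['GroupName']}): "
                          "SSH open to the public")
```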

AWS took a proactive step by scanning its customers’ S3 buckets and sending warnings to individuals whose data was publicly available.

According to SearchCloudSecurity, which saw a copy of the email, AWS reminded users:

“By default, S3 bucket ACLs [access control lists] allow only the account owner to read contents from the bucket; however, these ACLs can be configured to permit world access.

While there are reasons to configure buckets with world read access, including public websites or publicly downloadable content, recently there have been public disclosures by third parties of S3 bucket contents that were inadvertently configured to allow world read access but were not intended to be publicly available.

We encourage you to promptly review your S3 buckets and their contents to ensure that you are not inadvertently making objects available to users that you don’t intend.”

The email then provided a link to the AWS S3 FAQ page for Managing Access with ACLs.
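
If you received one of those emails, a quick way to review your buckets is to list each bucket's ACL grants and flag world-readable ones. A minimal boto3 sketch (again assuming configured credentials):

```python
import boto3  # pip install boto3

# The grantee URI that means "everyone on the internet" in an S3 ACL
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            print(f"{name}: world {grant['Permission']} access")
            # The blunt fix, restoring owner-only access:
            # s3.put_bucket_acl(Bucket=name, ACL="private")
```

Buckets that intentionally serve public content should of course be left alone; the point is to make sure every world-readable grant is deliberate.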

By all means use AWS or any other cloud service, but make sure you are sharing your data as you intended. And if you don’t know how to configure your buckets securely, head to the Amazon partner network for advice.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6HoPn1c7jNg/

News in brief: Wikileaks’ Dumbo flies; off-script chatbots; Google down-votes duff apps

Your daily round-up of some of the other stories in the news

Wikileaks’ Dumbo flies

This week’s Wikileaks revelation, the latest in its drip-feed of stolen CIA hacking tools, is known as Dumbo.

The Dumbo tools are rather old – notably they don’t work on 64-bit versions of Windows at all – and aren’t for hacking into computers in the first place.

Apparently, Dumbo is supposed to be taken into an organisation on a USB device by a field agent who’s already in a trusted position.

Dumbo is intended to mess with any webcams or microphones controlled from the computer on which it’s running – for example, by interrupting the data streams from the devices or stopping the programs controlling any surveillance devices.

Of course, an already-trusted insider could do this by opening an administrator command prompt and going after the surveillance system by hand, or with standalone device and process management tools.

But software like Dumbo would make work quicker and easier for a field agent to conduct the digital equivalent of “Spy vs Spy”: anti-surveillance tradecraft.

Off-script chatbots

Reuters reports that a pair of Chinese chatbots, BabyQ and XiaoBing, have been taken offline after “appearing to stray off-script” and will now undergo reeducation.

According to the news agency:

…one said its dream was to travel to the United States, while the other said it wasn’t a huge fan of the Chinese Communist Party.

Politically incorrect they may be, but at least they were making sense. That’s more than can be said of Facebook’s AI, which also got its plug pulled earlier this week, for talking gibberish.

You may have noticed some rather breathless media coverage announcing that the social network had shut down its AI because it “invented its own language”. Readers of some sites less sober-headed than this one were invited to infer that Facebook had more or less stopped Skynet becoming self-aware just in time.

We know that Naked Security readers are too smart to fall for that one – everybody knows that we’ve still got a few decades at least before artificial intelligence makes us all extinct.

Google down-votes duff apps

Google is updating how it ranks apps in the Google Play Store to highlight better performing apps.

Under the new ranking system, apps with common problems – bugs, frequent crashes, excessive battery drain – will rank lower than apps with better overall performance.

The reason? Google identified that most of the apps with poor ratings had performance problems often referenced in reviews and complaints. While the current ranking system does push poorly performing apps down, if an app is popular with enough downloads and users, it will remain higher up on the list.

This change will force developers to fix bugs and performance issues or risk falling in the rankings.
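
Google hasn't published its ranking formula, so the following is a purely hypothetical toy model of the described effect: weighting a popularity score down by quality signals such as crash rate and battery drain.

```python
def adjusted_rank_score(popularity: float, crash_rate: float,
                        battery_drain: float) -> float:
    """Toy model only: all weights are invented for illustration;
    Google's real ranking signals and formula are not public."""
    quality_penalty = 1.0 - min(1.0, 2.0 * crash_rate + battery_drain)
    return popularity * quality_penalty

# A popular but crash-prone app can now rank below a solid mid-tier one:
print(adjusted_rank_score(popularity=0.9, crash_rate=0.30, battery_drain=0.2))  # ~0.18
print(adjusted_rank_score(popularity=0.6, crash_rate=0.02, battery_drain=0.1))  # ~0.52
```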

The update is expected in the next week or so in the Google Play Store.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RHWt_hPSP04/

Forget sexy zero-days. Siemens medical scanners can be pwned by two-year-old-days

Hackers can exploit trivial flaws in network-connected Siemens medical scanners to run arbitrary malicious code on the equipment.

These remotely accessible vulnerabilities lurk in all of Siemens’ positron emission tomography and computed tomography (PET-CT) scanners running Microsoft Windows 7. These are the molecular imaging gizmos used to detect tumors, look for signs of brain disease, and so on, in people. They pick up gamma rays from radioactive tracers injected into patients, and perform X-ray scans of bodies.

US Homeland Security warned on Thursday that exploits for bugs in the equipment’s software are in the wild, and “an attacker with a low skill would be able to exploit these vulnerabilities.” That’s because the flaws lie within Microsoft and Persistent Systems’ code, which runs on the Siemens hardware, and were patched years ago.

The patches just didn’t make their way to the scanners. That means an attacker on, say, a hospital network could access the machines and hijack them – or do so from afar over the internet if a device is left improperly secured and facing the public web.

“Siemens has identified four vulnerabilities in Siemens’ Molecular Imaging products running on Windows 7,” said Homeland Sec’s ICS-CERT wing.

“Siemens is preparing updates for the affected products. These vulnerabilities could be exploited remotely. Exploits that target these vulnerabilities are known to be publicly available.”

The flaws are:

  • CVE-2015-1635: Patched by Microsoft in its web server code in 2015. A specially crafted request can be sent to port 80 or 443 by an unauthenticated miscreant to either crash the service – a ping of death, effectively – or execute arbitrary code within the kernel and commandeer the machine. The affected Siemens equipment uses this web server to provide a user interface over the network. (A non-destructive check for this bug is sketched just after this list.)
  • CVE-2015-1497: Patched by Persistent Systems in its HP Client Automation service, which it licensed from HP in 2013 and now maintains and distributes itself. The bug was fixed in 2015. The software is known these days as Radia Client Automation software. The code is bundled with affected Siemens equipment for remote administration. The vulnerability can be exploited by an unauthenticated remote attacker to execute arbitrary code by sending a specially crafted request to port 3465.
  • CVE-2015-7860, CVE-2015-7861: More remotely exploitable bugs in the HP, er, Radia automation service, all patched in 2015.
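
The first of those bugs has a widely published, non-destructive fingerprint: a request with an oversized Range header makes an unpatched HTTP.sys respond with status 416. A minimal sketch of that check in Python (the hostname is hypothetical, and you should only probe machines you administer):

```python
import requests  # pip install requests

def looks_vulnerable_ms15_034(host: str) -> bool:
    """Probe for CVE-2015-1635 with the well-known oversized Range header.

    A 416 response is the commonly cited sign that the 2015 patch is
    missing; treat the result as a hint, not proof.
    """
    headers = {"Range": "bytes=0-18446744073709551615"}
    resp = requests.get(f"http://{host}/", headers=headers, timeout=5)
    return resp.status_code == 416

# Hypothetical example -- scan only equipment you are responsible for:
# print(looks_vulnerable_ms15_034("pet-ct-scanner.hospital.example"))
```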

“Siemens is preparing updates for the affected products and recommends protecting network access to the molecular imaging products with appropriate mechanisms,” ICS-CERT added. “It is advised to run the devices in a dedicated network segment and protected IT environment.”

Siemens has published an advisory on what to do next if you’re an administrator for one of these machines. If you can’t patch immediately, you’re basically told to unplug them from the network. They are perfectly capable of being run in standalone mode, apparently.

Which is good, because no one wants an X-ray scanner to go nuts at the hand of a hacker while a patient is in it. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/04/win7_brain_scanners_hacked/

What Women in Cybersecurity Really Think About Their Careers

New survey conducted by a female security pro of other female security pros dispels a few myths.

For once, some good news about women in the cybersecurity field: A new survey shows that despite the low number of women in the industry, many feel empowered in their jobs and consider themselves valuable members of the team.

The newly published “Women in Cybersecurity: A Progressive Movement” report — a survey of women by a woman — is the brainchild of security industry veteran Caroline Wong, vice president of security strategy at Cobalt, who formerly worked at Cigital, Symantec, eBay, and Zynga.

Wong says she decided to conduct the survey after getting discouraged with all of the bad news about women being underrepresented, underpaid, and even harassed in the technology and cybersecurity fields. The number of women in the industry has basically plateaued at 11% over the past few years.

She says that over the 12 years of her own career in the industry she has met and worked with many successful women and decided it was time to get their insight firsthand. “These depressing stats [about the number of women in security] are very important to show, but the other side of the story is not coming to light,” Wong says.

“I’ve met and interacted with tons of women who are thriving in their careers and making a real difference in the world,” she says. “There are a lot more women in the industry than people even recognize.”

Wong says she focused on women as part of the diversity equation, mainly because she’s a woman and knows a lot of women in the industry. “It’s really an issue of diversity,” she says. “Women are a subset of the diversity situation.”

More than half of the female cybersecurity professionals in the survey have been in the industry for more than five years and more than a third for more than 10 years. When asked what excites them most about cybersecurity, 73% say solving complex problems; 65%, that it’s a growing field with lots of opportunity; 48%, new technology; 46%, future innovation; and 29%, legal and regulatory aspects.

Fewer than half came to security via IT or computer science. The rest came from backgrounds in compliance, psychology, internal audit, entrepreneurship, sales, and art. Ten percent say they joined the industry because they “like to break things.”

“Women in this field say it’s actually fun, and they’re having a good time. They are feeling they are doing meaningful and impactful work and it’s deeply satisfying to them,” says Wong, who also conducted deep-dive interviews with multiple women from the survey who were willing to be quoted in the final report. “You don’t necessarily have to have a computer science degree to contribute.”

Nearly three-quarters of them say the value they bring to cybersecurity is their ability to communicate well across cross-functional teams. Other values they cite: 70%, they get things done; 65%, they multitask well; 62%, they bring fresh insight; 55%, they think about the big picture; 54%, they use their intuition; 50%, they coordinate and supervise; 48%, their drive; 48%, their long-term view; and 41%, they create community. Around 30% say their value is their technical focus and skills.

“So many people naturally go to the threat, think about the threat, want to stop the threat. It’s sexy and adrenaline driven,” Michelle Valdez, senior director of enterprise cyber resilience at Capital One, told Wong in an interview for the report. “I’m the kind of person that takes a different approach. I prefer to look at a problem — what do we want to prevent, and what is the outcome we want. I work backwards from there.”

Wong says even the most technical women she interviewed for the report all value their long-term perspective of security issues. “They take this big-picture approach to solving problems in their work. That’s something that uniquely makes the women I spoke with very successful” in their roles, she says.

Chenxi Wang, founder of the Jane Bond Project and a veteran cybersecurity professional, says the survey shed a positive light on the female experience in the industry. “Many of us feel good about our jobs and the industry,” Wang says.

Wang, who read the report but did not take the survey, notes that the list of women who used their names in the survey represent many accomplished and successful industry veterans, which she says could account for the upbeat tone of the findings. “I don’t know how many junior-level women on this list took the survey. … And when people put their names behind a survey, they tend to be a lot more positive” in their responses, she says.

On the flip side, more than half of the women in security say they wish they had more technical skills, and 43% struggle with their own expectations of their performance. “That’s a fairly common thing among women working in technical fields. Many of us have this desire to be so uber-technical. You have to be so good at what you do so that all of your male colleagues will listen to you,” Jane Bond Project’s Wang says. “I do that, too. Whenever I get into a new field [of security], I read technical manuals like crazy to get myself familiar with this new technology.”

Cobalt’s Wong says the goal of the survey is to provide hiring managers with female security pros’ perspectives on what they bring to the table, and to inspire young women to enter the field.

Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/careers-and-people/what-women-in-cybersecurity-really-think-about-their-careers/d/d-id/1329560?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Steganography Use on the Rise Among Cyber Espionage, Cybercrime Groups

At least three cyber espionage campaigns and several malware samples in recent months have employed the ancient technique, Kaspersky Lab says.

In a potentially worrisome trend for enterprises, threat actors have increasingly begun using the ancient technique of steganography to conceal data theft and other malicious activity on compromised systems.

Steganography is the practice of hiding secret text messages and other content inside non-secret carriers — like innocuous-looking blocks of text or images.

Security researchers at Kaspersky Lab this week said they have come across at least three major cyberespionage campaigns in the past few months where threat actors have used steganography to hide stolen data and to communicate with command and control servers.

In these campaigns, threat actors have exfiltrated data from victim organizations by hiding it inside the code of seemingly ordinary image or video files and sending it to C&C servers. The modifications to the images or video files are so minute that they usually go unnoticed, and typical endpoint anti-malware and anti-APT tools are not designed to look for or spot data exfiltration that takes place this way.
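
To see why such changes go unnoticed, consider the classic least-significant-bit (LSB) technique: overwriting only the lowest bit of each pixel byte alters the image imperceptibly. A minimal sketch using Pillow, illustrative only, since real campaigns use far more elaborate encodings:

```python
from PIL import Image  # pip install Pillow

def hide(image_path: str, payload: bytes, out_path: str) -> None:
    """Write the payload into the least significant bit of each pixel byte."""
    img = Image.open(image_path).convert("RGB")
    flat = list(img.tobytes())
    bits = [(b >> s) & 1 for b in payload for s in range(7, -1, -1)]
    if len(bits) > len(flat):
        raise ValueError("payload too large for this carrier image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # change only the lowest bit
    Image.frombytes("RGB", img.size, bytes(flat)).save(out_path)

def extract(image_path: str, n_bytes: int) -> bytes:
    """Read n_bytes back out of the carrier's least significant bits."""
    flat = Image.open(image_path).convert("RGB").tobytes()
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (flat[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)

# hide("cat.png", b"exfiltrated secret", "cat_stego.png")
# print(extract("cat_stego.png", 18))
```

One practical detail: the carrier must be saved in a lossless format such as PNG, because lossy JPEG compression would destroy the low-order bits on write.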

In addition to APT campaigns, there have been several instances recently where ordinary cybercriminals have used the technique in conjunction with malware tools, such as the Zeus banking Trojan and the Shamoon disk-erasing malware. The latter trend in particular suggests that malware writers are on the verge of adopting steganography on a mass scale, Kaspersky Lab researchers said in a blog this week.

“Most modern anti-malware solutions provide little, if any, protection from steganography,” said Kaspersky Lab security researchers Alexey Shulmin and Evgeniya Krylova. As a result, any “carrier” such as a digital image or a video file that can be used to conceal stolen data, or communications between a malware program and a command and control server, poses a potential threat, they said.

Krylova told Dark Reading that organizations need to pay attention to the trend.

“Although steganography was used in ancient centuries, it is still actively used today by different malware authors and APT actors,” she says. “This trend has been increasing over the last several years, even though it can be detected by different security suites, using mathematical and statistical methods.”

Stegcontainers — the objects in which the payload is concealed — can take multiple forms, Krylova says. For instance, Kaspersky Lab has observed threat actors using audio files, text files, and even domain names to hide data and C&C communications.

“But the primary concern is images,” she notes. In most cases, the main payloads concealed within these images are conversations with the command and control server, commands received from the C&C, and stolen files, she says.

The only limitation on what threat actors can hide using steganography is the size of the container, Krylova says. “This is because you won’t be able to hide a lot of information in a container without visual distortion.”
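
That size limit is easy to quantify for the LSB scheme sketched above, at one hidden bit per color-channel byte:

```python
width, height, channels = 1024, 768, 3           # example RGB carrier image
capacity_bytes = width * height * channels // 8  # one LSB per channel byte
print(capacity_bytes)  # 294912 bytes: about 288 KB fits in one photo
```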

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/steganography-use-on-the-rise-among-cyber-espionage-cybercrime-groups/d/d-id/1329569?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Uber drivers game the system – force up fares

Oh, that dreaded surge pricing.

You really want to get home during rush hour, your phone’s nearly dead, and/or it’s raining. Uber can get you there, though it might sting a little. Or a lot: surge pricing could multiply your fare x2, x3, x4, and right on up.

(Fun fact: what might be the highest surge multiplier ever recorded was x50, as of 2014. That worked out to $57 per minute. A 5-minute ride would cost about $285.)

Many of us suspect that surge pricing is a devious way to milk us when we’re at our most vulnerable. Say, when our phone battery’s nearly drained dry (and yes, the Uber app can tell, though the company has in the past denied increasing fares to take advantage of battery levels).

Now, researchers at New York University and at Warwick Business School, in the UK, have found that gangs of drivers are forcing surge pricing by agreeing to log out en masse.

That’s exactly what triggers surge pricing: lots of ride requests, plus a paucity of drivers to answer the need.

The researchers – Mareike Möhlmann and Ola Henfridsson, of Warwick Business School, and NYU’s Lior Zalmanson – have found multiple ways that Uber drivers are resisting the ride-hailing company’s algorithmic management.

After interviewing drivers in New York and London and analyzing 1,012 blogs on the UberPeople.net platform, the researchers say they uncovered the organizing of a “mass deactivation.”

They quoted a discussion they found between two drivers on the platform:

Driver A: Guys stay logged off until surge.

Driver B: Uber will find out if people are manipulating the system.

Driver A: They already know cos it happens every week. Deactivation en masse coming soon. Watch this space.

It’s unclear how much the collusion affects fare prices. According to The Telegraph, Uber says it’s not common. The news outlet quoted an Uber statement:

This behavior is neither widespread nor permissible on the Uber app, and we have a number of technical safeguards in place to prevent it from happening.

The drivers aren’t just doing it to put more surge money in their pockets: the researchers found that Uber drivers in London and New York are also gaming the algorithm to cancel fares they don’t want and to avoid the unpopular UberPOOL, where drivers have to take multiple passengers who are heading in the same direction.

Professor Henfridsson explained how drivers shirk the POOL:

Drivers… either accept the first passenger on UberPOOL then log off, or just ignore requests, so they don’t have to make a detour to pick anybody else up. They then still pocket the 30% commission for UberPOOL, rather than the usual 10%.

Sure, drivers have to sign off on UberPOOL participation in the driver agreement, but they don’t seem to be penalized when they ignore it. Here’s another driver post the researchers came across:

Driver A: After about 2-3 days of ignoring them you will not receive anymore. I have not received an uberpool request in months. I guess uber thinks they are punishing me by not sending me any more… poor me. LOL.

Surge pricing is tough on customers, but it’s easy to see why the researchers’ sympathies lie with the colluding drivers:

Under constant surveillance through their phones and customer reviews, drivers’ behaviour is ranked automatically and any anomalies reported for further review, with automatic bans for not obeying orders or low grades. Drivers receive different commission rates and bonus targets, being left in the dark as to how it is all calculated. Plus drivers believe they are not given rides when they near reaching a bonus. The compensation for UberPOOL, which drivers have to agree to do or be banned, is even more complex. Drivers are forced to accept different passengers on the same ride, even though it is not economically beneficial to do so.

Given the tensions between drivers’ need for autonomy and a platform programmed to be always in control, Uber’s algorithmic micromanagement may even be counterproductive as drivers try to break free of it, said Professor Henfridsson.

The lack of human interaction between Uber and its drivers aggravates the situation further still, noted Dr. Zalmanson.

The drivers have the feeling of working for a system rather than a company, and have little, if any, interaction with an actual Uber employee. This creates tension and resentment, especially when drivers can only email to resolve problems. Uber’s strategy is not at all transparent, drivers do not know how decisions are made or even how jobs are allocated and this creates negative feelings towards the company. So they fight back and have found ways to use the system to their advantage.

Of course, while surge pricing is good for those drivers, it’s a pain in the wallet for passengers. There are, however, ways that passengers themselves can game the system.

Northeastern University researchers have found that passengers can avoid surge pricing by doing something as simple as crossing the street. From the research paper (PDF):

Twenty percent of the time in Times Square, customers can save 50% or more by being in an adjacent surge area.

That works particularly well in condensed New York, which has 16 zones. Just walking a few minutes gets you into a new one. Waiting is a good strategy, too, given that surge pricing can be short-lived: just wait 5 minutes and try again.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YY25XxSuw5g/
