STE WILLIAMS

Uber’s Response to 2016 Data Breach Was ‘Legally Reprehensible,’ Lawmaker Says

In Senate hearing, Uber CISO admits company messed up in not quickly disclosing breach that exposed data on 57 million people.

A senior US lawmaker Tuesday slammed ride-hailing giant Uber for not promptly disclosing a November 2016 breach that exposed personal data on 57 million people, choosing instead to pay $100,000 to keep the two perpetrators of the theft silent.

At a hearing before the Senate Committee on Commerce, Science, and Transportation, Sen. Richard Blumenthal (D-Conn.) described Uber’s action as “morally wrong and legally reprehensible.”

Uber’s payoff violated not only the law but also the norm of what should be expected in such situations, Blumenthal said. “Drivers and riders were not informed and neither was law enforcement. In fact, it was almost a form of obstruction of justice.”

Uber CEO Dara Khosrowshahi last November disclosed that the company had quietly paid two hackers $100,000 a year earlier to destroy data they had stolen from a cloud storage location. The compromised data included names and driver’s license information for some 600,000 Uber drivers in the US, as well as the names, email addresses, and cellphone numbers of some 57 million Uber riders and drivers worldwide.

Khosrowshahi claimed he learned of the data breach and the payoff only just prior to disclosing it and vowed to make changes to ensure the same lapse would not happen again. He also disclosed that Uber had fired CISO Joe Sullivan and another executive who had led the response to the breach.

In testimony before the Senate Committee Tuesday, Uber CISO John Flynn admitted the company had made a mistake in not disclosing the breach in a timely fashion. But he claimed the primary goal in paying the intruders was to protect the stolen data.

“I would like to echo statements made by new leadership, and state publicly that it was wrong not to disclose the breach earlier,” Flynn said. There is no justification for Uber’s breach notification failure, Flynn candidly admitted. “The real issue was we didn’t have the right people in the room making the right decisions.”

According to Flynn, Uber first learned about the breach on November 14, 2016, when one of the attackers sent an email informing the company that its data had been accessed. The attacker demanded a six-figure ransom payment for not leaking the data.

Uber security engineers were quickly able to determine the thieves had accessed copies of certain databases and files stored in a private location on Amazon’s AWS cloud.

The attackers apparently gained access to the data using a legitimate credential they found in code in a GitHub repository used by Uber engineers. The discovery prompted Uber to take immediate measures, such as implementing multifactor authentication on GitHub and across the company and adding auto-expiring credentials, to protect against similar attacks.
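Leaks like this are why many teams now scan commits for credential patterns before code reaches a shared repository. A minimal sketch, assuming AWS-style access key IDs (the `AKIA`/`ASIA` prefixes are documented AWS conventions; the snippet and key below are illustrative, not real credentials):

```python
import re

# AWS access key IDs follow a well-known pattern: a 4-letter prefix
# ("AKIA" for long-lived user keys, "ASIA" for temporary ones) followed
# by 16 uppercase alphanumeric characters. Scanning committed text for
# that pattern is a cheap first line of defense against leaked keys.
AWS_ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_ACCESS_KEY_RE.findall(text)

snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\nregion = "us-east-1"'
print(find_leaked_keys(snippet))  # ['AKIAIOSFODNN7EXAMPLE']
```

Tools like git-secrets and trufflehog apply the same idea with larger pattern sets and entropy checks; a regex scan alone will miss secrets that don't follow a fixed format.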

In response to questions, Flynn admitted that Uber’s decision to pay the $100,000 ransom, framed as a bug bounty, was a mistake as well. Like many other companies, Uber runs a vulnerability disclosure program that offers cash rewards to white-hat hackers who find and report exploitable security bugs in its services. HackerOne has managed Uber’s bug bounty program since 2016.

Blumenthal and Sen. Catherine Cortez Masto (D-Nev.) wanted to know why Uber had decided to pay the attackers the demanded $100,000 in the form of a bug bounty. Both lawmakers pointed out that the actions of the two individuals were criminal in nature and not something anyone would consider responsible bug disclosure activity.

“The key distinction is they not only found a weakness but also exploited it in a malicious fashion to access and download data,” Blumenthal noted. “Concealing it, in my view, is aiding and abetting the crime.”

Cortez Masto expressed some incredulity that Uber would try to point malicious attackers to its managed bug bounty program in the first place. “So there was a criminal element trying to get into your data…and you are trying to put them on the right path?” she asked.

Flynn admitted that the intruders in this case were fundamentally different from usual bug hunters. He claimed that Uber decided to use the bug bounty program only as a means to gain attribution and assurances from the attackers. “We recognize that the bug bounty program is not an appropriate vehicle for dealing with intruders who seek to extort funds from the company,” he noted.

Casey Ellis, founder of managed bug hunting service Bugcrowd, said today’s hearing shines a spotlight on the ethics of Uber’s response. “This was not a bug bounty payout. This was extortion, and the difference between the two is unambiguous.”

Related Content:

 

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/informationweek-home/ubers-response-to-2016-data-breach-was-legally-reprehensible-lawmaker-says/d/d-id/1330997?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple


Adobe: Two critical Flash security bugs fixed for the price of one

Adobe has issued an emergency security patch for two bugs in its Flash player – after North Korea’s hackers were spotted exploiting one of the flaws to spy on people investigating the creepy hermit nation.

At the start of the month, South Korea’s Computer Emergency Response Team put the world on alert after it found miscreants abusing Flash to take control of and surveil Windows PCs in its country via Office documents carrying embedded malicious SWF files. Subsequent analysis showed the hacking was being done by Group 123, one of Kim Jong-un’s cyber-squads, who were targeting folks investigating North Korea’s abuses and operations.

Adobe acknowledged its software was still a security shit show shortly afterwards, and promised a patch this week.

Now that update has landed – and it contains a fix for not just one programming blunder but two, thanks to researchers at Qihoo 360 Vulcan Team. The Qihoo crew found a remote-code execution hole in Flash that is addressed with this update. Both bugs are rated critical for all supported OSes except the Linux build of Adobe Flash Player Desktop Runtime.

Essentially, patch your Flash installation now to stop scumbags exploiting two newly discovered bugs, one of which is being exploited by the North Koreans while the other was found by Qihoo’s infosec boffins. Opening a webpage or other document with a malicious Flash file embedded is enough to trigger a malware infection on a vulnerable computer.
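One coarse way to triage documents for embedded Flash content relies on the SWF container's documented magic bytes: files begin with `FWS` (uncompressed), `CWS` (zlib-compressed), or `ZWS` (LZMA-compressed). The sketch below is deliberately crude, a substring scan will false-positive on ordinary text containing those letters, so treat it as a first-pass filter, not a detector:

```python
# SWF (Flash) files start with one of three magic sequences. Scanning a
# document blob for those markers flags attachments that may carry
# embedded Flash content for closer inspection.
SWF_MAGICS = (b"FWS", b"CWS", b"ZWS")

def contains_swf(data: bytes) -> bool:
    """Return True if the blob contains what looks like a SWF header."""
    return any(magic in data for magic in SWF_MAGICS)

# Illustrative blob: Office-style header bytes followed by a zlib-SWF marker.
doc = b"\xd0\xcf\x11\xe0 ...office junk... CWS\x0f\x12\x34 ...flash..."
print(contains_swf(doc))  # True
```

A real mail-gateway or endpoint scanner would parse the container format properly and check the marker's position and the declared file length rather than matching substrings anywhere.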

“These updates address critical vulnerabilities that could lead to remote code execution, and Adobe recommends users update their product installations to the latest versions,” the Photoshop giant said today.

The Nork-exploited remote-code execution bug is CVE-2018-4878, and the Vulcan Team found CVE-2018-4877.

So, get updating, or better still, just dump the plugin. The Flash suite is over 20 years old, and is due for retirement in 2020 at the latest. HTML5 or bust, baby. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/06/adobe_emergency_bug_fix/

Adobe Patches Flash Zero-Day Used in South Korean Attacks

Critical flaw is one of two critical use-after-free vulnerabilities in Flash fixed today by the software firm.

Adobe issued its planned security update today for a previously unknown vulnerability in Flash Player that was exploited in targeted attacks against South Korean individuals. The software firm last week promised to patch the critical use-after-free bug, which was discovered and reported by South Korea’s Computer Emergency Response Team.

The attacks, believed to be the handiwork of a state-sponsored campaign by North Korea, inserted malicious Flash content inside Microsoft Office documents emailed to the victims. The vulnerability (CVE-2018-4878) allows remote code execution.

Adobe in its Flash update also patched a second critical use-after-free flaw in Flash, CVE-2018-4877, which also allows an attacker to remotely execute code on the victim’s machine.

For details on the security update, see Adobe’s advisory here.

 

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/adobe-patches-flash-zero-day-used-in-south-korean-attacks/d/d-id/1330993?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ukraine Power Distro Plans $20 Million Cyber Defense System

After NotPetya and severe blackouts, Ukrenergo responds with an investment in cybersecurity.

Ukrenergo, Ukraine’s state-run power distributor, will invest up to $20 million in a new cyber defense system, according to its chief executive Vsevolod Kovalchuk, Reuters reports.

The Ukrainian energy industry has been a hot target for threat actors in recent years. On Dec. 23, 2015, attackers used stolen user credentials to remotely access and manipulate the industrial control systems of three regional power firms and shut down power for 225,000 customers in Western Ukraine. One year later, a Dec. 16, 2016, power outage in Kiev was also confirmed to have been caused by a cyberattack and linked to the same group of threat actors as the 2015 blackout. Ukrenergo was also a victim of the NotPetya outbreak in June 2017.

Kovalchuk told reporters in a briefing that the $20 million investment would go both to security technology and administrative action, and would be in place by 2020.

“We have developed a new concept of cybersecurity whose key goal is to make it physically impossible for external threats to affect the Ukrainian energy system,” he said, reports Reuters.

For more information, read here.


Article source: https://www.darkreading.com/operations/ukraine-power-distro-plans-$20-million-cyber-defense-system/d/d-id/1330994?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security vs. Speed: The Risk of Rushing to the Cloud

Companies overlook critical security steps as they move to adopt the latest cloud applications and services.

Businesses deploying cloud-based applications and services often overlook critical security steps as they scramble to keep up with the latest technology, and the rush is putting them at risk.

“There’s a lot of customers who have this cloud-first mandate,” says JK Lialias, senior director of cloud access at Forcepoint. “They’ve been told, ‘thou shalt move to the cloud as much infrastructure as you possibly can.'”

A lot of pressure is on line-of-business employees to adopt cloud applications and infrastructure, he continues. IT departments are essential in delivering these services but often neglect to understand how on-premises data and processes translate to the cloud.

“What’s happening in the move to the cloud has happened in the tech industry from the beginning,” says Michael Landewe, Avanan co-founder and VP of business development. “People move to new tech based on new features and capabilities. Security always follows.”

The gap between moving to the cloud and implementing strong security has shrunk as new technologies accelerate the process, he explains. However, most companies are still followers and don’t take all the necessary steps, sacrificing security in the process.

Never Assume You’re Secure
There’s a lot of assumption when it comes to cloud responsibility. “Some businesses think the whole security issue is something you put into the provider’s realm,” says Jim Reavis, CEO of the Cloud Security Alliance. “The cloud provider may have security services and capabilities, which you can order as an extra, but a lot of responsibilities shift to the cloud.”

Cloud providers typically own the hardware, network, host operator, and virtual machines, says Dan Hubbard, senior security architect at Lacework. The customer owns everything above that: operating systems, containers, applications, and all of the related access controls.

“This is where things get a little muddy from a corporate perspective,” he explains. Most companies have perimeters in traditional data centers, and those core principles and rules don’t apply in the public cloud.

Landewe points to the shared responsibility model, which reminds companies they must secure data they move to the cloud. Many businesses, especially those with small IT departments, hand responsibility for data access and security to cloud providers. The service-level agreement from most vendors explains where customers are responsible for their data.

“You need to have an honest conversation with the vendor and ask, ‘where does your security responsibility end and where does mine begin?'” he explains. The owner of the data still has to be entirely responsible for that information.

Skipped Steps and Dangerous Consequences
“It’s one of those things where the speed sometimes impedes overall understanding and education,” says Lialias of the transition to cloud. “This is one of the areas where it needs to be balanced.”

Hubbard puts companies into two categories: cloud natives, which were founded in the cloud and don’t need to migrate, and larger businesses with traditional data centers. The latter group is navigating the transition to public cloud and overlooking critical steps in the process.

Proper account configuration is key here. Last year’s series of Amazon Web Services (AWS) leaks affecting major organizations, from Viacom to the Republican National Committee, demonstrated a broad oversight of basic cloud configuration steps. It’s an easy and dangerous misstep.

“From what we have seen and what we know about these, they have all come down to client-based issues; mistakes they’ve made,” says Reavis. AWS has strong security, but most people don’t know how to properly configure their access so that data is secured. If they’re making these configuration errors in AWS, they’re likely making them in other services, he adds.
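Configuration checks of this kind can be automated. As an illustrative sketch (not any vendor's actual tooling), the helper below inspects an S3 ACL document, in the dict shape that boto3's `get_bucket_acl` call returns, for grants to the public "AllUsers" or "AuthenticatedUsers" groups; it works on an already-fetched ACL, so no AWS access is needed to test the logic:

```python
# S3 bucket leaks typically trace back to ACL grants that expose data
# to "AllUsers" (anyone on the internet) or "AuthenticatedUsers" (any
# AWS account holder). These are the canonical group URIs AWS uses.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list[str]:
    """Return the permissions an S3 ACL document grants to public groups."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            findings.append(grant["Permission"])
    return findings

# Example ACL with one safe grant and one public-read grant.
leaky_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(public_grants(leaky_acl))  # ['READ']
```

In practice the same check also needs to cover bucket policies and the account-level Block Public Access settings, since a bucket can be exposed by any of the three.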

Cloud credentials must also be secured, Hubbard emphasizes. Attackers frequently steal login data for platforms like AWS and Azure and abuse the power of the cloud at customers’ expense to mine cryptocurrency, send spam, and launch distributed denial-of-service attacks.

“If someone gets access to those, they can impersonate you in your portion of the cloud,” he says. “You need to manage access to the machines … who logs into machines, from where, and what do they do when they log in.”

Admins should adopt two-factor authentication and lock down access so administrative accounts can log in only from certain IP addresses. Uneducated admins can do a lot of damage very quickly, says Reavis, who expects phishing and credential-based attacks to be common going forward. There should be closer scrutiny of how admin accounts are hardened.
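A hedged sketch of that lockdown: in AWS IAM terms, one deny statement blocks requests from outside a trusted network and a second blocks requests made without MFA. `aws:SourceIp` and `aws:MultiFactorAuthPresent` are standard IAM condition keys; the CIDR range below is a placeholder, not a real network, and this is an illustration of the pattern rather than any company's actual policy:

```python
import json

def admin_guard_policy(trusted_cidrs):
    """Build an IAM policy dict enforcing IP allow-listing plus MFA.

    Two separate Deny statements are used on purpose: conditions within
    one statement are ANDed, so a single combined statement would only
    deny requests that fail BOTH checks at once.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideTrustedNetwork",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {"NotIpAddress": {"aws:SourceIp": trusted_cidrs}},
            },
            {
                "Sid": "DenyWithoutMFA",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
                },
            },
        ],
    }

print(json.dumps(admin_guard_policy(["203.0.113.0/24"]), indent=2))
```

Because IAM evaluates explicit denies before allows, attaching a policy like this to admin groups overrides whatever permissive policies those accounts otherwise carry.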

“Once someone has access to your account, they do everything in their power to maintain that control,” says Landewe. Administrators aren’t the only ones at risk, he notes. Many attackers target low-level employees and, once they’re in, use that access to target high-level workers.

Do Your Due Diligence
The average enterprise has about 1,000 software-as-a-service applications in use, says Lialias. It probably knows about 600 of them, and perhaps 30 could be very high risk. Businesses know they house both sanctioned and unsanctioned applications. It’s up to them to understand what’s out there and assume control over the software their employees use.

“The key for moving to the cloud is doing due diligence,” he explains. “They swipe a card and click a button, and they forget their due diligence.”

While mistakes can and will happen, businesses can stay one step ahead by ensuring accounts are properly configured, credentials are secured, and they have visibility into the applications being used and the people using them. Being able to see and control data is essential.

Experts “hope” to see a slowdown in incidents like AWS bucket leaks and see companies marry caution with speed. However, many will need a wake-up call before adopting best practices.

“We’re going to see more of the same in organizations needing to make a mistake to learn that they need to take this seriously,” says Reavis. He advises businesses to look to educational programs from major cloud providers, the Cloud Security Alliance, and (ISC)², which all have cloud security courses.

Related Content:

Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/cloud/security-vs-speed-the-risk-of-rushing-to-the-cloud/d/d-id/1330996?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple


Keeping kids safe online – trying to practice what I preach

Being a blogger in the world of cybersecurity, I’ve rather firmly established myself in the eyes of my friends and family as the person to go to with questions about an app they heard about on the news, or what to do about some new hack or big security bug, and how to keep their information safe.

I take a great deal of pride in being able to help people like that. When I was pregnant with my first child last year, one of my family members with young kids said something along these lines to me:

I can’t keep up with all the new tech and apps that kids have access to nowadays, it’s all happening so fast. But if anyone can sort it all out, you can.

I wish I shared that confidence.

My approach to keeping my kid safe online is easy right now because she’s a baby and it’s all fully under my control. My main concern is her future privacy, and I know it only gets harder from here.

I want my kid to have the choice about what to do with her data – as much as possible, anyway – without my actions removing all choice from her before she even has a say. After all, what we do know about what social networks actively do with identity and demographic information is alarming (or impressive, if you’re a marketer who wants to sell people stuff on Facebook).

Despite all the promises these companies make about how they take data privacy and protection seriously, breaches can happen to the most well-intentioned organization. The best personal data protection is ultimately preventative: Limit what data is available to companies in the first place.

In light of this, in trying to practice what I preach about data privacy online, these are the choices I’ve made:

  • I do not post my child’s name, date of birth, or any photos of her online.
  • I make sure my friends and family do the same.

My hope is that this will allow her to decide on her own, as an adult, when and how to carve out her own identity online and share her childhood photos with the world. And, though it might be futile in a world where people who had never heard of Equifax were still affected by the breach, I hope that by keeping as many of her personal details off the internet for as long as possible, I might help guard her information from being stolen and used in identity theft. After all, we know babies and children are favorite targets for this kind of thing.

Even though I explained this was to protect my child’s privacy, these decisions were met with surprising resistance from people I knew. Worse, I was assured that I wouldn’t be able to keep it up for more than a week or two after my kid’s arrival, and that the “parental need to overshare” would override this privacy nonsense.

Thankfully, I’m holding fast. (If anything, this assertion only riled up my contrarian side, spurring me to prove them all wrong!)

I say all this as the child of immigrants with almost all my family living countries and continents away: I know very well how easy social networks have made reconnecting with distant family, and what a blessing this is for so many of us.

From a cost and convenience point of view alone, putting up daily photos of Baby on Facebook for Auntie back in the homeland is far superior to a lucky-if-it-even-arrives mailed photograph, or an expensive and crackly long-distance phone call.

Despite all that, it is still alarming how many of us have adapted to a sense of inevitability (in the case of parents) or entitlement (in the case of friends and family) that we should of course be seeing plenty of photos of kids’ daily development through social media. Why is this, and how did we get here so fast?

It is too easy to forget that apps are inherently selfish with our data – they want as much of it as they can get, as often as they can get it, even if that’s not in our best interest. After all, this is what apps need to assure their evolution and survival.

Most of us aren’t even aware that our web browsers and smartphones are comfortable homes to data parasites. Social networks want us to share our lives on there, and though I know a “like” or a comment about my adorable child might feel nice, in the long run, they don’t outweigh larger concerns.

The social networks are providing services for us, and those services have made the world feel a lot smaller than it ever has, but, ultimately, our data is a resource to slice, dice and exploit.

I understand that I seem a little (for lack of better term) paranoid, and the choices I’ve made won’t work for many other families. We all have to figure out what’s realistic for how we live our lives.

So instead of a prescriptive list of dos and don’ts, here is what I urge you to keep in mind as you navigate this complex issue:

  • Whatever you put online stays there forever, even if you delete it*. Social networks are notorious for holding on to data even after users have deleted their profiles. Ask yourself if this photo or post would be something your child would not appreciate coming to light when they’re older.
    *Even the “right to be forgotten” in the EU won’t completely cover your tracks.
  • Be aware of the privacy settings you’re using on your social networks. Do your posts really need to be set to public, or to all your friends, or would a more narrow and restricted group (perhaps a family and close friends-only group) suffice?
  • Be especially aware of posting public photos of your child at locations you frequent. There’s no need to hand over this information on a silver platter to just anyone who you are connected with online.
  • Think about alternative methods for sharing photos and information if the advertising and profiling habits of social networks, like Facebook, make you uneasy. There’s no one-size-fits-all solution, but alternatives to social networks can include a private email chain, group texts, or a private photo-sharing app like TinyBeans. (In my case I’m a purposeful Luddite who sends people actual printed photographs!)

Though my tin foil hat is on awfully tight, I know I can’t keep my child in a bubble forever. There is a part of me that also wants to keep her photo off Facebook so the ubiquitous ad platform doesn’t create an advertising profile and facial recognition algorithm for her before she’s out of diapers.

However, I use an iPhone to take photos of her, and given how my phone already “conveniently” scours photos for familiar faces, no doubt Apple already has a profile for her ready to go. In that regard, I’ve already lost the battle.

And I know right now, as parents of older children are no doubt thinking as they read this, this specific privacy battle is the easy stuff. As my child makes her way in the world and starts using tech on her own terms, I won’t be able to control things as I am right now (or at all).

Ultimately, protecting privacy is a muscle that needs frequent exercise, especially in our overshare-happy world. Opting out of these social networks is, of course, the simplest way to avoid these issues and it is absolutely a valid choice!

In my case, it’s not a step I wanted to take, so I’ve set these ground rules to exercise that privacy-protection muscle. I hope that by trying to protect her identity and privacy from a young age, it becomes second nature for me and helps inform her as she moves through this world and makes her own decisions about her privacy.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/F72OM8uu1RU/

Early Google, Facebook employees band together to tame tech addiction

Fake news, foreign tinkering in the US 2016 presidential election, and mounting evidence about how bad technology is for kids: it’s all led to a tsunami of regret from those who helped to create the social media platforms that enable it all.

A quote from an early ex-Facebook employee, as reported by Vanity Fair:

Most of the early employees I know are totally overwhelmed by what this thing has become. They look at the role Facebook now plays in society, and how Russia used it during the election to elect Trump, and they have this sort of ‘Oh my God, what have I done’ moment.

We’ve seen ex-president of Facebook Sean Parker admit that from the get-go, the main goal has been to get and keep people’s attention, by hook, by crook or by dopamine addiction. Former vice president of Facebook user growth Chamath Palihapitiya has expressed remorse for his part.

Facebook has admitted that social media can be bad for you, Facebook founder Mark Zuckerberg has said that his platform needs fixing, Apple’s Tim Cook is keeping his nephew off social media, and, well, the list goes on.

The latest “woops!!!” news: a group of “what kind of mind-gobbling social media monster have we created?” repentants have come together to form the nonprofit Center for Humane Technology (CHT). On Sunday, the group launched a new campaign to protect young minds from what they say is “the potential of digital manipulation and addiction.”

Members include former employees and advisors to Google, Facebook, and Mozilla.

The CHT is partnering with Common Sense – a nonprofit that advocates for children and families – for the campaign, which is titled Truth About Tech.

The group’s notables include early Facebook investor Roger McNamee; former in-house Google ethicist Tristan Harris (an outspoken critic of Big Tech who’s leading the group); former Facebook operations manager Sandy Parakilas; former Apple and Google communications executive Lynn Fox; technologist Renée DiResta; and Justin Rosenstein, the co-founder of Asana who created Facebook’s Like button.

The New York Times quotes Harris:

We were on the inside. We know what the companies measure. We know how they talk, and we know how the engineering works.

McNamee said that the group is a chance for him to “correct a wrong.”

[With smartphones,] they’ve got you for every waking moment.

The NYT reports that the CHT plans to lobby for laws that will curtail the power of big tech companies. Its initial focus will be on two pieces of legislation: a bill being introduced by Senator Edward J. Markey – an author of the Children’s Online Privacy Protection Act (COPPA) – that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.

Do we really need more research into whether technology is bad for kids? Or for anybody, for that matter? It feels like we’re already awash in it.

For example, a recent study from the Harvard Business Review found that while face-to-face, real-world social networks were positively associated with overall wellbeing, the use of Facebook was negatively associated with overall wellbeing. In fact, researchers concluded, it might even affect your physical health, never mind your mental wellbeing.

Yet another of many studies found that Facebook’s dark side includes managing inappropriate or annoying content, being tethered to the platform, perceived lack of privacy and control, social comparison and jealousy, and relationship tension.

Correlation doesn’t equal causation, but yet another study has found that as social media use has surged, so too has the US teen suicide rate.

Unsurprisingly, children’s health advocates don’t seem to need much more convincing that social media is bad for kids. In fact, the Campaign for a Commercial-Free Childhood last month told Facebook that its Messenger for Kids should be junked.

At any rate, the CHT plans to target 55,000 US public schools and is going to try to enlist designers and technologists. The group wants them to think about their moral responsibility to use technology for the greater good and to keep it from harming children.

CHT members on 7 February will participate in a conference in Washington D.C., hosted by Common Sense, that will focus on digital health for kids.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/OXHMkIAsQos/

Firefox 59’s privacy mode plugs leaky referrers

In a small and partly symbolic tweak, the Firefox browser’s Private Browsing Mode is to stop passing websites the data that identifies the last web page a user visited.

Currently, when a user clicks on a link to visit a new website in any leading browser, that site is told the address of the page the visitor is coming from – the referring URL – via the (yes, misspelled) HTTP Referer header.

For example, if you visited Naked Security from our recent post about Intercept X on the Sophos News site, Naked Security would be passed the following:

Referer: https://news.sophos.com/en-us/2018/02/02/intercept-x-the-executives-view/

In some cases the referrer value can reveal a lot about a user’s interests, and it’s not just the web page you’re visiting that gets to see it. These days, many websites embed code from third parties, to perform tasks like web analytics or advertising, and they also get to see the referrer data.

In 2015, a study by Timothy Libert, a doctoral student at the University of Pennsylvania, found that nine out of ten visits to health-related web pages result in data being leaked to third parties like Google, Facebook and Experian.

The most infamous example of leaky Referer headers is probably the US government’s healthcare.gov website (the sign-up system for the US Affordable Care Act) which, thanks to URLs like the one below, could leak information about whether users were pregnant or a smoker; as well as their age, salary and zip code.

Referer: https://www.healthcare.gov/see-plans/85601/results/?county=04019&age=40&smoker=1&pregnant=1&zip=85601&state=AZ&income=35000

Using Firefox 59’s privacy mode, that same address will have the path information shorn from the URL, passing only:

Referer: https://www.healthcare.gov/
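The trimming Firefox performs here amounts to stripping everything after the origin. As a minimal sketch (not Firefox’s actual implementation), here is what reducing a referrer to its origin looks like in Python, using the healthcare.gov example above:

```python
from urllib.parse import urlsplit

def trim_referrer(url: str) -> str:
    """Strip the path and query string, keeping only scheme://host/."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}/"

full = ("https://www.healthcare.gov/see-plans/85601/results/"
        "?county=04019&age=40&smoker=1&pregnant=1"
        "&zip=85601&state=AZ&income=35000")

print(trim_referrer(full))  # → https://www.healthcare.gov/
```

Everything sensitive – the age, smoker and pregnancy flags, zip code, income – lives in the path and query string, which is exactly the part that gets discarded.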

But here’s the rub. First, Firefox will only remove path information in Private Browsing mode, not in regular browsing windows.

Second, intriguingly, Firefox users have been able to turn off information about the referring page for more than 15 years, by delving into the browser’s about:config screen (read this document for Mozilla’s explanation of these settings).

Be warned though – turning off referrer data could break some websites.
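For the curious, the long-standing switch referred to above is the `network.http.sendRefererHeader` preference in about:config. Roughly (verify against Mozilla’s documentation, as preference names and semantics can change between releases):

```
network.http.sendRefererHeader = 0   # never send the Referer header
network.http.sendRefererHeader = 1   # send it only for clicked links
network.http.sendRefererHeader = 2   # send it for links and embedded content (default)
```

Setting it to 0 is the bluntest option, and the one most likely to trip up sites that use the referrer for anti-hotlinking or CSRF checks.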

This still raises the question of why Mozilla has had a burst of enthusiasm for the concept now.

The answer might be that Mozilla had an epiphany regarding privacy, the result of which was November’s Firefox Quantum overhaul. This boasted a range of security and privacy enhancements, which are being added to with every point release.

Removing the referrer path in privacy mode is unlikely to have a major impact on Firefox users’ privacy, but it does remind users that referrer data is a risk they should at least pay attention to.

For years, privacy has been taken for granted, or at least the lack of it accepted as a necessary sacrifice so the web could work for website owners. Countering this philosophy could turn out to be the fuel for Firefox’s second coming.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hdXIxpVscf4/