
Epic in hot water over Steam-scraping code

Epic Games, the company behind online gaming phenomenon Fortnite, is at the centre of a privacy storm after players noticed that it was gathering data from their Steam accounts and storing it on their computers without permission.

Fortnite has been a gaming sensation. The game, which pits players against each other in an online world, is downloadable directly from Epic, which launched its own online Epic Games Store in December.

Last week, players found it gathering information about their accounts on rival online gaming service Steam, and Reddit was up in arms.

Reddit user notte_m_portent alerted Fortnite users to alleged suspicious activity in the Epic Games Launcher (EGL), which controls the Fortnite software. They claimed that it was watching other processes on the machine, reading root certificates, and storing hardware information in the registry, among other things.

Crayten, another Reddit user, also claimed to have found the EGL creating an encrypted copy of the user’s localconfig.vdf file, which lists all of their friends on Steam and those friends’ name histories.

Epic VP of engineering Dan Vogel explained to concerned Redditors that tracking JavaScript feeds information to the company’s Support-a-Creator program, enabling it to pay creators. Epic describes these as “active video makers, streamers, storytellers, artists, cosplayers, musicians, and community builders” supporting its products.

The hardware survey data sends hardware information in line with the company’s privacy policy, he added, while the EGL looks at existing processes to ensure that it doesn’t try to update games that are currently running. That information isn’t sent to Epic, he said.

He added:

We only import your Steam friends with your explicit permission. The launcher makes an encrypted local copy of your localconfig.vdf Steam file. However information from this file is only sent to Epic if you choose to import your Steam friends, and then only hashed ids of your friends are sent and no other information from the file.

Even though Epic says it only sends users’ Steam data to its servers with their permission, it still scrapes the data and creates the file on the hard drive without asking the user first. Reddit user DukeNukem89 was concerned about this.

The same user also complained that Epic wasn’t accessing friends lists through the Steam application programming interface (API). An API is an interface that one piece of software can use to query another application or service online. Given the user’s permission, the EGL could query the API on behalf of any Fortnite user logged into Steam, but Epic chose to ignore the API and scrape the data from users’ hard drives instead.
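For comparison, the Steam Web API already exposes a friends-list endpoint. The sketch below shows, in rough terms, what an API-based approach might look like; it assumes a registered Web API key and a user who has granted permission, and the key and Steam ID shown are placeholders, not real values.

```python
# Minimal sketch of fetching a Steam friends list via the public Steam Web API
# instead of reading localconfig.vdf from disk. Key and Steam ID are placeholders.
import requests

STEAM_API_KEY = "YOUR_WEB_API_KEY"   # placeholder: issued via Steam's Web API registration
STEAM_ID = "76561198000000000"       # placeholder 64-bit Steam ID

def fetch_steam_friends(api_key, steam_id):
    """Return the Steam IDs of a user's friends via ISteamUser/GetFriendList."""
    resp = requests.get(
        "https://api.steampowered.com/ISteamUser/GetFriendList/v0001/",
        params={"key": api_key, "steamid": steam_id, "relationship": "friend"},
        timeout=10,
    )
    resp.raise_for_status()
    friends = resp.json().get("friendslist", {}).get("friends", [])
    return [f["steamid"] for f in friends]

if __name__ == "__main__":
    print(fetch_steam_friends(STEAM_API_KEY, STEAM_ID))
```

Whether this would have met Epic’s needs is a separate question – Sweeney’s stated objection was to relying on third-party APIs at all – but it illustrates the alternative Redditors were pointing to.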

Epic’s CEO Tim Sweeney explained that this was a throwback from an earlier development:

You guys are right that we ought to only access the localconfig.vdf file after the user chooses to import Steam friends. The current implementation is a remnant left over from our rush to implement social features in the early days of Fortnite. It’s actually my fault for pushing the launcher team to support it super quickly and then identifying that we had to change it. Since this issue came to the forefront we’re going to fix it.

He added that the company doesn’t like using third-party APIs because they can potentially create more security holes.

Valve, which runs Steam, told Bleeping Computer that it is looking into the issue, stating:

We are looking into what information the Epic launcher collects from Steam… This is private user data, stored on the user’s home machine and is not intended to be used by other programs or uploaded to any 3rd party service.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/P4Npm6nrBAo/

Court: Embarrassing leaks of internal Facebook emails are fishy

All those internal Facebook emails that got handed to MP Damian Collins during the UK’s fake-news inquiry, when he purportedly spooked Six4Three’s managing director into whipping out a USB drive?

A California court agrees with Facebook: a judge said on Friday that the “I panicked” explanation from Six4Three’s Ted Kramer could stand a bit of scrutiny. After all, Kramer handed over highly confidential documents, which he was explicitly told not to do during the company’s legal battle with Facebook. The whole thing looks more like a plot to leak confidential data than a flustered moment in an MP’s office, the court says.

Judge V. Raymond Swope, of the Superior Court of California, ruled that there was prima facie evidence that Six4Three had plotted to “commit a crime or fraud” by leaking the emails in violation of an earlier court order. Prima facie evidence is that which is sufficient to establish a fact or raise a presumption unless disproved or rebutted.

Six4Three’s legal team had been trying to hide the developer’s conversations with British MPs, claiming that they should be protected under attorney-client privilege. But given that prima facie evidence points to Six4Three having potentially leaked the emails, the court has ordered the developer to hand over all such records.

A bikini-sized bite of background

Six4Three, the former ‘wanna-see-your-gal-pals-in-bikinis?’ app developer, in 2015 launched a suit over Facebook’s 2014 decision to shut down the Friends data API, through which users could allow thousands of third-party apps to track their friends’ location, status, and interests.

No Facebook Friends API, no “pikinis” from Six4Three – a kicking-off-the-knees move that crippled the developer. In its suit, Six4Three alleged that Facebook turned off the tap as a way of forcing developers to buy advertising, transfer intellectual property or even sell themselves to it at a bargain basement price.

Why pull out a USB stuffed with confidential documents?

The story from MP Collins and Kramer: the two met in Collins’s London office on 20 November. Collins told Kramer that he was under active investigation in the UK’s fake news inquiry, that he was in contempt of parliament, and that he could potentially face fines and imprisonment.

Kramer claims to have “panicked” and whipped out a USB drive before frantically searching his Dropbox account for relevant files obtained under civil discovery. He purportedly looked for any files whose names suggested they might be relevant, purportedly dragged them onto the USB drive without even opening them, and handed over the USB stick – in spite of Facebook having labelled the documents highly confidential, and “against the explicit statements by counsel.”

In December, Collins’s parliamentary committee published about 250 pages of internal emails showing how Facebook limited the data on users’ friends that developers could see… at any rate, it cut off access to some developers. Facebook kept a whitelist of certain companies that it allowed to maintain full access to friend data.

The emails also showed that Facebook knew that changing its policies on the Android mobile phone system to enable the Facebook app to collect a record of users’ calls and texts would pull in bad PR… so the plan was to bury it deep, “to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app,” as Collins characterized it.

As far as the “I panicked” story goes, the response from Facebook has been: Nah, we don’t buy it. Judge Swope thinks the company’s lawyers are on to something. The Telegraph quoted Judge Swope as he concluded what was reportedly a combative two-day hearing in Redwood City, California, near Facebook’s campus in Menlo Park:

The evidence reflects that Six4Three utilized the services of counsel to aid in committing a crime or fraud. It is apparent from the evidence that Six4Three’s counsel was engaged in the ‘heavy lifting’ of analyzing and summarizing Facebook’s confidential parties, and not merely acting in an advisory role.

“Scared out of” vs. “served up on a silver platter”

One example of Six4Three’s proactive communications noted by Judge Swope was an email exchange with the UK’s Information Commissioner’s Office (ICO). It was initiated by Six4Three lawyer David Godkin, who introduced himself in an initial email that bore the subject line “Extensive evidence regarding Facebook’s treatment of friend data and user privacy.”

He went on to say that his law firm had obtained…

extensive discovery of communications between [Facebook CEO Mark] Zuckerberg and numerous executives… that [they] believed is highly relevant to the Cambridge Analytica investigation.

The judge also found that lawyers in the case had shared summaries of legal filings with journalists and government agencies. Those summaries didn’t just include allegations; they “also analyze in detail the confidential information obtained from Facebook.”

The leaks keep coming

The leaks in December were just the start. Since then, 60 more pages of unredacted legal documents were published on GitHub.

The second leak of internal emails revealed that Facebook planned to spy on Android users and that Facebook itself had what it called a near-fatal brush with a data privacy breach when a third-party app came close to disclosing its financial results ahead of schedule.

It’s not clear how many more internal documents may have made it into the public domain or might still be destined to appear. Facebook’s lawyers have been fighting to shut down the leaks and identify their source by pushing for Six4Three’s legal team to be cross-examined under oath.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KLB4FHSdvLw/

Gargantuan Gnosticplayers breach swells to 863 million records

A hacker using the identity ‘Gnosticplayers’ has topped up one of the largest data breaches ever publicised by offering for sale 26 million records stolen from another six online companies.

The first of four data caches came to light in early February when The Register got wind that a database of 617 million records pilfered from 16 companies had been put up for sale on the Dark Web for $20,000.

Days later, Gnosticplayers added another 127 million records from a further eight websites, before adding a third round on 17 February comprising another 93 million from a further eight sites.

Round 4

The fourth round, posted to Dark Web market Dream Marketplace last weekend, brings the total number of hacked records to 863 million from 38 sites.

The data at risk varies by site but reportedly includes email addresses, usernames and IP addresses, plus, in some cases, personal details and settings, and in one case phone numbers.

Passwords are also at risk, with a variety of hashing algorithms used to secure them, including SHA1 (with and without salting), SHA256, SHA512 (with salting) and, in the case of LifeBear, MD5.
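The distinction matters: fast, unsalted hashes such as MD5 can be attacked with precomputed tables, whereas a random per-user salt forces attackers to crack each hash separately. A toy illustration of the difference – not the code any of these sites actually used:

```python
# Toy illustration of unsalted vs. salted password hashing.
# Not taken from any of the breached sites; for explanation only.
import hashlib
import os

password = b"correct horse battery staple"

# Unsalted MD5: the same password always yields the same hash, so precomputed
# (rainbow) tables work and identical passwords are visible across users.
unsalted = hashlib.md5(password).hexdigest()

# Salted SHA-256: a random per-user salt makes every stored hash unique,
# defeating precomputed tables (a deliberately slow KDF such as bcrypt or
# scrypt would be better still, but salting alone already raises the cost).
salt = os.urandom(16)
salted = hashlib.sha256(salt + password).hexdigest()

print("unsalted MD5  :", unsalted)
print("salted SHA-256:", salt.hex(), salted)
```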

Naked Security was unable to independently confirm the victims, but ZDNet has named the sites in the latest round as Bukalapak (13 million records), GameSalad (1.5 million), Estante Virtual (5.4 million), Coubic (1.5 million), LifeBear (3.8 million) and Youthmanual.com (1.1 million).

Japanese site LifeBear, when contacted by another news site, provided the following statement:

We currently have been investigating the situation. We apologize for the inconvenience this may cause. We already have made contact with police department in Japan and a lawyer to consult this situation.

Two things stand out about these breaches, the first being that few of the companies seemed to be aware they’d been breached until contacted for confirmation by journalists.

A second is the sheer number of breached companies the hackers were able to break into over a period of months.

The first round of breached data included photography site 500px, which later said the data had been taken from its servers around 5 July 2018. According to ZDNet, five of the six companies in the latest cache appear to have been breached as recently as last month.

According to the same report, other alleged victims avoided being named because they agreed to pay an extortion demand to keep their breaches private.

What to do

Here’s the list of sites previously known to have been breached as part of the Gnosticplayers leak (in addition to those from the latest cache mentioned above):

500px, Dubsmash, MyFitnessPal, MyHeritage, ShareThis, HauteLook, Animoto, EyeEm, 8fit, Whitepages, Fotolog, Armor Games, BookMate, CoffeeMeetsBagel, Artsy, DataCamp, Xigo, YouNow, Houzz, Ge.tt, Coinmama, Roll20, Stronghold Kingdoms, PetFlow, Legendas.tv, Jobandtalent, Onebip, StoryBird, StreetEasy, GfyCat, ClassPass, Pizap.

Anyone who has an account with any of these sites should change their password as soon as possible, regardless of whether they’ve been asked to do so. If two-factor authentication (2FA) is offered, turn it on.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jh5Dc8a5h3c/

PuTTY in your hands: SSH client gets patched after RSA key exchange memory vuln spotted

Venerable SSH client PuTTY has received a pile of security patches, with its lead maintainer admitting to The Register that one fixed a “‘game over’ level vulnerability”.

The fixes implemented in PuTTY over the weekend, alongside some new features, plug a plethora of vulns in the Telnet and SSH client, most of which were uncovered as part of an EU-sponsored HackerOne bug bounty.

Version 0.71 of PuTTY includes fixes for:

  • A remotely triggerable memory overwrite in RSA key exchange, which can occur before host key verification
  • Potential recycling of random numbers used in cryptography
  • On Windows, hijacking by a malicious help file in the same directory as the executable
  • On Unix, remotely triggerable buffer overflow in any kind of server-to-client forwarding
  • Multiple denial-of-service attacks that can be triggered by writing to the terminal

Lead maintainer and “benevolent dictator” of all things PuTTY Simon Tatham told El Reg that “of all the things found by the EU bug bounty programme, the most serious was vuln-dss-verify. That really is a ‘game over’ level vulnerability for a secure network protocol: a MITM attacker could bypass the SSH host key system completely.”

“Luckily,” he continued, “it never appeared in a released version of PuTTY: it was introduced during work to rewrite the crypto for side-channel safety, and spotted only a few weeks later by a bug-bounty participant, well before the release came out. So the EU protected almost everybody from that one.”

Another one of the patched vulns was PuTTY not enforcing minimum key lengths during RSA key exchange, creating an integer overflow situation. Tatham explained that this “could be triggered by a server whose host key hasn’t yet been authenticated. So you’d not only have been at risk from servers you actually trust turning out to be untrustworthy; you were also at risk from anyone who could MITM your connection to such a server, because the usual mechanism that protects you from MITM has not yet kicked in at that stage in the connection.”
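In outline, the danger of skipping that minimum-length check is that a buffer size derived from the server-chosen key length can go negative – and, in C, wrap around to a huge unsigned value. The following is a simplified, hypothetical sketch of the failure mode, not PuTTY’s actual code; the constants are illustrative only.

```python
# Hypothetical sketch of why a missing minimum-key-length check is dangerous.
# This is NOT PuTTY's code; it only illustrates the arithmetic that goes wrong.

HASH_LEN = 32           # illustrative: bytes of hash data packed into the exchange
ENCODING_OVERHEAD = 11  # illustrative: fixed padding/encoding overhead

def payload_capacity(server_key_bits):
    """Bytes the client expects to fit inside the server-supplied RSA modulus."""
    key_bytes = server_key_bits // 8
    return key_bytes - HASH_LEN - ENCODING_OVERHEAD

print(payload_capacity(2048))  # 213: a sane key size gives a sane buffer size
print(payload_capacity(256))   # -11: a hostile, tiny key drives the size negative

# Python just produces a negative number; in C the same subtraction on an
# unsigned type wraps to an enormous value, and using it unchecked as a buffer
# size is how a not-yet-authenticated server can corrupt client memory.
def payload_capacity_checked(server_key_bits, minimum_bits=1024):
    if server_key_bits < minimum_bits:
        raise ValueError("server RSA key below minimum accepted length")
    return payload_capacity(server_key_bits)
```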

The other major vuln patched in v0.71 involved planting a malicious help file in the PuTTY root directory, something Tatham said wouldn’t have applied to those using the regular Windows .msi installer.

Opened in January, the EU review of PuTTY paid out more than $17,500 and was funded by the EU Directorate-General for Informatics, which describes itself as “providing digital services that support other Commission departments”. The bounty formed part of the EU’s wider, ongoing Free and Open Source Software Audit, or FOSSA. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/19/putty_patched_rsa_key_exchange_vuln/

New Mirai Version Targets Business IoT Devices

The notorious Internet of Things botnet is evolving to attack more types of devices – including those found in enterprises.

In yet another sign that the infamous Mirai botnet is evolving to target enterprise Internet of Things (IoT) systems, researchers have spotted a new iteration of the malware that infects WePresent WiPG-1000 Wireless Presentation systems and LG Supersign TVs.

“Both these devices are intended for use by businesses. This development indicates to us a potential shift to using Mirai to target enterprises,” according to Palo Alto Networks’ Unit 42 research group, which published its findings on the new botnet variant this week. The researchers last fall found Mirai exploiting vulnerabilities in Apache Struts and SonicWall.

The new Mirai malware version also targets routers, network storage devices, network video recorders (NVRs), and IP cameras, and includes 11 new exploits among a total of 27. Unit 42 found the new Mirai variant hosted on a compromised website in Colombia belonging to a business that describes itself as an electronic security and alarm monitoring firm.

Read more here

 

 


Article source: https://www.darkreading.com/iot/new-mirai-version-targets-business-iot-devices/d/d-id/1334193?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Bandersnatch to gander snatched: Black Mirror choices can be snooped on, thanks to privacy-leaking Netflix streams

Boffins have found a side channel to observe the choices netizens make when viewing interactive streaming videos.

In recent years, computer scientists have demonstrated how they can determine the movie titles people watch over HTTPS connections on Netflix and YouTube.

At least at one point in history people worried about such things: three decades ago, the availability of home video rental data was deemed, in America at least, to be enough of a privacy invasion to prompt the Video Privacy Protection Act of 1988.

Now researchers at the Indian Institute of Technology Madras have found that the specific choices viewers make while watching interactive videos can be determined from network traffic, opening up the possibility that ISPs may seek to sell such data as yet another signal for ad targeting, or that authorities might demand it to assess political attitudes.

Netflix last year presented an interactive movie, Black Mirror: Bandersnatch, in which viewers can make choices along the way that affect the path of the story.

When viewers watching the video choose one of the two narrative paths at various branch points in the story, that information gets sent back to Netflix to display the appropriate video segment. And it turns out to be possible to discern which branch each viewer took through network packet analysis.

In a paper just released through pre-print service ArXiv, “White Mirror: Leaking Sensitive Information from Interactive Netflix Movies using Encrypted Traffic Analysis,” a handful of the institute’s computer scientists show that story choices – sent from the viewer’s browser to Netflix via a JSON file – can be inferred despite the encryption of network traffic.

“Our experiments revealed that the packets carrying the encrypted type-1 and type-2 JSON files can be distinguished from other packets by their SSL record lengths which are visible even from encrypted traffic,” explain Gargi Mitra, Prasanna Karthik Vairam, Patanjali SLPSK, Nitin Chandrachoodan, and Kamakoti V in their paper.
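In other words, the attack reduces to inspecting record sizes rather than content. A toy sketch of the idea, with invented length thresholds standing in for the fingerprints the researchers measured from real Netflix traffic:

```python
# Toy sketch of inferring interactive-video choices from encrypted record lengths.
# The length ranges are invented for illustration; the paper derives its
# fingerprints empirically from captured Netflix traffic.

FINGERPRINTS = [   # (record-length range, inferred event) pairs - hypothetical values
    (range(640, 700), "choice JSON (option A)"),
    (range(700, 760), "choice JSON (option B)"),
]

def classify_records(record_lengths):
    """Map observed encrypted record lengths to inferred viewer events."""
    events = []
    for length in record_lengths:
        for bucket, label in FINGERPRINTS:
            if length in bucket:
                events.append((length, label))
    return events

# Example: lengths as they might appear in a packet capture (hypothetical values)
print(classify_records([1460, 1460, 652, 1460, 731]))
```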


Using a data set of 100 viewers, the researchers claim they successfully determined the viewers’ choices 96 per cent of the time.

A data set of decisions in an interactive narrative may sound inconsequential in an era of social media oversharing, but the researchers nonetheless suggest the information could have commercial or political applications.

“Interestingly, the choices made and the path followed can potentially reveal viewer information that ranges from benign (e.g., their food and music preferences) to sensitive (e.g., their affinity to violence and political inclination),” they explain.

The Register asked Netflix if it has any concerns about these findings. We’ve not heard back.

The boffins from Madras suggest a straightforward mitigation: altering the JSON files sent from the viewer’s browser so they’re equally long and indistinguishable from one another. ®
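A rough sketch of that mitigation – padding every choice report to one fixed size before encryption, with an arbitrary target size standing in for whatever Netflix might actually choose:

```python
# Rough sketch of the suggested mitigation: pad every choice report to a fixed
# length before encryption so record sizes no longer reveal which option was
# picked. The target size is arbitrary and for illustration only.
import json

TARGET_SIZE = 2048  # bytes; must exceed the largest legitimate report

def pad_report(report: dict) -> bytes:
    payload = json.dumps(report).encode("utf-8")
    if len(payload) > TARGET_SIZE:
        raise ValueError("report larger than padding target")
    # Trailing whitespace keeps the payload valid JSON while fixing its length.
    return payload + b" " * (TARGET_SIZE - len(payload))

print(len(pad_report({"choice": "A"})))           # 2048
print(len(pad_report({"choice": "B", "t": 42})))  # 2048 - indistinguishable by size
```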


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/19/netflix_privacy_hole/

Bad cup of Java leaves nasty taste in IBM Watson’s ‘AI’ mouth: Five security bugs to splat in analytics gear

IBM has issued a security alert over five vulnerabilities in its golden boy Watson analytics system.

Big Blue has issued an update today to clean up a series of security flaws in Watson that stem from the analytics system’s use of Java components. The bugs are present in installations of Watson Explorer and IBM Watson Content Analytics.

In total, IBM says, five CVE-listed vulnerabilities are cleared up by the latest update, ranging from information disclosure flaws to remote takeover vulnerabilities.

The most serious of the five bugs is CVE-2018-2633, a flaw in Java SE, Java SE Embedded, and JRockit JNDI that can allow an attacker with local network access to remotely take control of the targeted box. While details of the flaw were not given, the exploit is said to require user interaction.

“Successful attacks require human interaction from a person other than the attacker and while the vulnerability is in Java SE, Java SE Embedded, JRockit, attacks may significantly impact additional products,” the CVE summary of the bug reads.

“Successful attacks of this vulnerability can result in takeover of Java SE, Java SE Embedded, JRockit”

While IBM notes that the flaw is particularly difficult for an attacker to exploit, Watson boxes are a valuable target, so admins would be wise to address the bugs post-haste.


Another flaw, CVE-2018-2603, would allow an attacker to crash the targeted Watson system by mounting a denial-of-service attack. Unlike the remote takeover bug, this flaw can be more easily exploited by an attacker with network access, but Big Blue was skimpy on the details of what makes it so.

“Easily exploitable vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Java SE, Java SE Embedded, JRockit,” the summary reads.

“Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of Java SE, Java SE Embedded, JRockit.”

The remaining three flaws, CVE-2018-2579, CVE-2018-2588, and CVE-2018-2602, all relate to information disclosure flaws. All of those three would allow an attacker to potentially retrieve sensitive information from the target machine, though Big Blue held off on saying exactly how the flaws were exploited.

Each of the flaws can be patched up by getting the latest version of the Java Runtime. Admins are advised to test and install the patch as soon as possible. ®

Speaking of IBM… If you’re using Big Blue’s BigFix relay server, ensure relay authentication is enabled. “Not doing so exposes a ridiculous amount of information to unauthenticated external attackers, sometimes leading to a full remote compromise,” infosec bod HD Moore warned today.

“Also note that an attacker who has access to the internal network or to an externally connected system with an authenticated agent can still access the BigFix data, even with Relay Authentication enabled. The best path to preventing a compromise through BigFix is to not include any sensitive content in uploaded packages.”


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/18/java_watson_flaws/

New Europol Protocol Addresses Cross-Border Cyberattacks

The protocol is intended to support EU law enforcement in providing rapid assessment and response for cyberattacks across borders.

The Council of the European Union has adopted a new EU Law Enforcement Emergency Response Protocol that is intended to aid in the response to large-scale, cross-border cyberattacks.

The protocol determines secure communication channels, contact points for exchanging critical data, and the procedures, roles, and responsibilities of key players inside and outside the EU. It’s meant to complement the EU’s crisis management processes by streamlining international cooperation and enabling collaboration between cybersecurity pros and the private sector.

Europol’s European Cybercrime Centre (EC3) has a central role in this protocol and is part of the EU Blueprint for Coordinated Response to Large-Scale Cross-Border Cybersecurity Incidents and Crises. The protocol aims to help law enforcement immediately respond to cyberattacks.

“Only cyber security events of a malicious and suspected criminal nature fall within the scope of this Protocol; it will not cover incidents or crises caused by a natural disaster, man-made error or system failure,” officials report.

Further, they explain, the protocol is a multistakeholder process with seven core stages, from the early detection and identification of a major cyberattack, to threat classification, to law enforcement operational plan, to emergency response protocol closure.

Read more details here.

 

 


Article source: https://www.darkreading.com/analytics/new-europol-protocol-addresses-cross-border-cyberattacks/d/d-id/1334189?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New IoT Security Bill: Third Time’s the Charm?

The latest bill to set security standards for connected devices sold to the US government has fewer requirements, instead leaving recommendations to the National Institute of Standards and Technology.

For the third time in as many years, lawmakers have introduced a bill that would require Internet of Things (IoT) products sold by federal contractors and vendors to abide by government guidelines to ensure a baseline of cybersecurity.

The current bill, supported by members of both parties and known as the Internet of Things Cybersecurity Improvement Act of 2019, eschews specific recommendations and instead calls for the National Institute of Standards and Technology (NIST) to develop security guidelines for IoT devices sold to the US government. 

The hope is that such legislation, if signed into law, would mean more secure IoT equipment overall, including in the consumer and commercial sector.

It is the latest attempt to convince manufacturers of connected devices to take security more seriously. The original legislation, known as the Internet of Things Cybersecurity Improvement Act of 2017, was introduced after the Mirai botnet targeted Internet infrastructure provider Dyn and disrupted a wide variety of other Internet services in October 2016. Mirai compromised weakly secured digital video recorders and connected cameras, creating a botnet of more than 100,000 endpoints that leveled a series of distributed denial-of-service (DDoS) attacks on Dyn.

The original bipartisan IoT security legislation required that any device sold to the US government follow basic, well-established cybersecurity norms, such as avoiding hard-coded passwords and being easily patchable. Even so, the legislation failed to overcome industry resistance.

The new IoT security legislation will likely result in the same requirements, because the original bill had only focused on common-sense, or “evergreen,” guidelines, says Josh Corman, chief security officer for PTC, a maker of industrial IoT solutions.

“I think we are going to end up in a similar place, but later,” he says. “So it means more exposure to threats and less secure devices being brought to market.”

The bill is all about using the government’s buying power as an incentive for companies to create more secure connected devices, Sen. Mark Warner, D-Va., vice chairman of the Senate Select Committee on Intelligence and a co-sponsor of the bill, said in a statement.

“While I’m excited about their life-changing potential, I’m also concerned that many IoT devices are being sold without appropriate safeguards and protections in place, with the device market prioritizing convenience and price over security,” he said.

The current bill tasks NIST with creating requirements for federal agencies that consider the secure development, identity management, patching and configuration management of IoT devices. In addition, NIST is also tasked with developing recommendations on the management and use of IoT devices by March 31, 2020.

By removing the exact requirements from the legislation, lawmakers have eased the way to passage, says Nathan Owens, an attorney and partner with the law firm of Newmeyer Dillion, who focuses on risk management and cybersecurity.

For lawmakers under pressure from the industry, “making it broader and going to an organization like NIST is easy to get behind,” he says. “It makes the (passage of the) legislation really easy and palatable.”

Within six months of the passage of the bill, NIST will be required to issue a report on the impact connected devices will have on federal operations and how agencies can mitigate cybersecurity risk. In addition, the bill specifically calls for NIST to develop coordinated vulnerability disclosure guidelines.

Finally, the bill would require the Office of Management and Budget to issue guidelines for each federal agency, following the NIST report and the creation of its recommendations with regard to Internet of Things technology. The bill requires that all federal contractors and vendors adhere to these guidelines.

Feds Only

While legislators aim to broadly improve the security of IoT devices through the power of federal purchasing, the bill itself only focuses on government requirements for the devices that agencies buy for their own use, says PTC’s Corman. If the Pentagon purchases battlefield systems, for example, it wants to make sure they cannot be hacked by other nations’ militaries.

“The government has every right as a purchaser of devices to want to have those products not be hackable or trackable,” he says. “The manufacturers of a $100 device don’t realize that, if it’s deployed by the federal government, the attack surface and threat model is markedly different.”

In many ways the legislation already follows the precedent set in legislation passed by California late last year, which requires manufacturers to incorporate “reasonable security feature or features” into IoT devices.

In a May 2018 report on defending against botnets and automated threats, the US Department of Homeland Security and US Department of Commerce recommended that technology and products include security at every stage of their development and manufacture. In addition, the report argued that the federal government should use its purchasing power to incentivize manufacturers to create more secure technology.

“Product developers, manufacturers, and vendors are motivated to minimize cost and time to market, rather than to build in security or offer efficient security updates,” the report stated. “Market incentives must be realigned to promote a better balance between security and convenience when developing products.”


 

 

 


Article source: https://www.darkreading.com/iot/new-iot-security-bill-third-times-the-charm/d/d-id/1334190?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Home DNA kit company now lets users opt out of FBI data sharing

Update 18 March 2019

FamilyTreeDNA emailed users last week to let them know that they can now opt out of DNA matching that will be used to help police identify the remains of deceased people or to help them track down violent criminals.

It’s now calling that type of investigative DNA research Law Enforcement Matching (LEM). The gene-matching company also set up a separate process for police to upload genetic files to the database. Police-uploaded files may now only be used to identify a dead person or the perpetrator of a homicide or sexual assault.

Those EU residents who created accounts before 12 March 2019 have been automatically opted out of LEM. They still have the option of adjusting their Matching Profiles to opt back into LEM, however. To do so, users should visit the Privacy Sharing section within their Account Settings.

Original article, published 5 February 2019

FamilyTreeDNA – one of the larger makers of at-home genealogy test kits – has disclosed that it’s been giving the FBI access to DNA profiles to help solve violent crime.

Investigators’ use of public genealogy databases is nothing new: law enforcement agencies have been using them for years. But the power of online genealogy databases to help track down and identify people became clear in April 2018, when police arrested Joseph James DeAngelo on suspicion of being the Golden State Killer: the man allegedly responsible for more than 50 rapes, 12 murders and more than 120 burglaries across the state of California during the 70s and 80s.

What’s new about FamilyTreeDNA’s cooperation with the FBI – as reported by BuzzFeed News on Thursday – is that it’s the first time that a private genealogy company has publicly admitted to voluntarily letting a law enforcement agency access its database.

A spokesperson for FamilyTreeDNA told BuzzFeed that the company hasn’t signed a contract with the FBI. But it has agreed to use its private lab to test DNA samples at the bureau’s request, and to upload the profiles to its database, on a case-by-case basis. It’s been doing so since this past autumn, according to BuzzFeed.

The spokesperson said that working with the FBI is “a very new development” that started with one case last year and “morphed.” At this point, she said, the company has cooperated with the FBI on fewer than 10 cases.

Privacy implications

The more people who submit DNA samples to these databases, the more likely it is that any of us can be identified. Over the course of years of searching for the Golden State Killer, investigators had collected and stored DNA samples from the crime scenes. In that and other cases that have hinged on DNA searches, they ran the genetic profile they derived from the DNA samples through an online genealogy database and found it matched with what turned out to be distant relatives – third and fourth cousins – of whoever left their DNA at the crime scenes.

Getting a match with the database’s records helped investigators to first locate DeAngelo’s third and fourth cousins. The DNA matches eventually led to DeAngelo himself, who was arrested on six counts of first-degree murder.

It wasn’t that DeAngelo submitted a DNA sample to any one of numerous online genealogy sites, such as FamilyTreeDNA, 23andMe or AncestryDNA. Rather, it was relatives with genetic makeups similar enough to whoever left their DNA sample on something at a crime scene who made the search possible.

According to research published in October, the US is on track to have so much DNA data on these databases that we’re getting to the point that you don’t even have to submit a saliva sample in order to be identifiable via your DNA.

Researchers from Columbia University found that 60% of searches for individuals of European descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers.

As time goes by, given the rate of individuals uploading genetic samples to sites that analyze their DNA, “nearly any US individual of European descent” will be able to be identified in the near future, they said.

Many, if not most, of these databases are free for the public to search, and law enforcement have gone that route, accessing genetic profiles that people have willingly uploaded.

Now, given its cooperation with FamilyTreeDNA, the FBI has gained access to more than a million DNA profiles, “most of which were uploaded before the company’s customers had any knowledge of its relationship with the FBI,” BuzzFeed notes.

You can opt out, with caveats

In December, FamilyTreeDNA changed its terms of service to allow law enforcement to use the database to identify suspects of violent crimes, such as homicide or sexual assault, and to identify victims’ remains. But FamilyTreeDNA says that investigators won’t be able to do so without the proper legal documents, such as a subpoena or search warrant.

In other words, FamilyTreeDNA is giving the FBI the same level of access to its records that the public now has, according to the company’s founder and CEO Bennett Greenspan:

We came to the conclusion that if law enforcement created accounts, with the same level of access to the database as the standard FamilyTreeDNA user, they would not be violating user privacy and confidentiality.

FamilyTreeDNA has been lauded for its strict protection of consumer privacy, and Greenspan said that the company has no intention of shedding that reputation to become a data broker:

Working with law enforcement to process DNA samples from the scene of a violent crime or identifying an unknown victim does not change our policy never to sell or barter our customers’ private information with a third party. Our policy remains fully intact and in force.

FamilyTreeDNA told BuzzFeed that to keep their profiles from being searched by the FBI, customers can opt out of having their familial relationships mapped out. But that would also mean they couldn’t use the service to find possible relatives, which is one of the key attractions of such a database.

‘I feel violated’

Regardless of FamilyTreeDNA’s promise not to share information with the FBI beyond what other consumers can access – at least, not without a valid court order – multiple FamilyTreeDNA users are pondering whether to opt out of DNA matching or even have their previously submitted DNA kits destroyed. One such user, whom BuzzFeed talked to, was Leah Larkin, a genetic genealogist based in Livermore, California:

All in all, I feel violated, I feel they have violated my trust as a customer.

Overall, however, it looks like most genealogy enthusiasts have no problem with helping to track down violent criminals. BuzzFeed cited an informal survey conducted by genealogist Maurice Gleeson that found that 85% of respondents – all of them involved in genealogy in the US or Europe – said they were comfortable with their DNA being used to catch a serial killer or rapist.

How well is this data being protected?

It’s easy to agree with wanting to help catch violent criminals by using DNA matching – what’s known as “enhanced forensic capabilities.” But we need to keep in mind that these databases and services are open to everyone, and not everyone will use them with good intentions.

For example, research subjects can be re-identified from their genetic data. However, rules that, starting this year, will regulate federally funded human subject research fail to define genome-wide genetic datasets as “identifiable” information.

The Columbia University researchers said that their work shows that such datasets are indeed capable of identifying individuals. That’s why they’re encouraging US Health and Human Services (HHS) to rethink that classification.

To better protect our genomes, the researchers proposed that the text files of raw genetic data be cryptographically signed:

Third-party services will be able to authenticate that a raw genotyping file was created by a valid [direct-to-consumer] provider and not further modified. If adopted, our approach has the potential to prevent the exploitation of long-range familial searches to identify research subjects from genomic data. Moreover, it will complicate the ability to conduct unilaterally long-range familial searches from DNA evidence. As such, it can complement previous proposals regarding the regulation of long-range familial searches by law enforcement and offers better protection in cases where the law cannot deter misuse.
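As a rough illustration of what such signing could look like – a generic Ed25519 example built on the Python cryptography package, not the researchers’ concrete proposal – a provider would sign each raw genotype file once and publish its public key, so that third-party services can refuse files that fail verification:

```python
# Generic illustration of signing a raw genotype text file so third parties can
# verify it came from a provider unmodified. Uses Ed25519 from the 'cryptography'
# package; this is not the researchers' exact scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provider side: generate a keypair once and publish the public key.
provider_key = Ed25519PrivateKey.generate()
public_key = provider_key.public_key()

raw_genotype = b"rs4477212\t1\t82154\tAA\n"   # illustrative one-line raw data file
signature = provider_key.sign(raw_genotype)

# Third-party service side: accept an upload only if the signature verifies.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(raw_genotype, signature))                 # True
print(is_authentic(raw_genotype + b"tampered", signature))   # False: altered file rejected
```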

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/pxlFTOJiAG8/