
Over 59K Data Breaches Reported in EU Under GDPR

In addition, 91 reported fines have been imposed since the regulation went into effect last May.

The General Data Protection Regulation (GDPR) officially went into effect across the European Union on May 25, 2018. Since then, more than 59,000 personal data breaches have been reported to regulators.

New data breach notification laws have “fundamentally changed” the risk profile of businesses hit with data breaches, reports global law firm DLA Piper. Breaches likely to cause harm to the individuals affected must be reported. Failure to comply can result in fines of up to €10 million ($11.4 million) or up to 2% of the firm’s global annual turnover for the previous financial year – whichever is higher.
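
For those doing the arithmetic, the “whichever is higher” rule means the €10 million figure is only a floor for large firms. Here is a minimal Python sketch of the lower-tier cap described above (the function name and sample turnover are illustrative):

```python
def notification_fine_cap_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine for a breach-notification failure under GDPR's
    lower tier (Article 83(4)): EUR 10 million or 2% of worldwide
    annual turnover for the preceding financial year, whichever is
    higher."""
    return max(10_000_000, 0.02 * global_annual_turnover_eur)

# A firm with EUR 2 billion in annual turnover faces a cap of
# EUR 40 million, not the EUR 10 million fixed figure.
print(notification_fine_cap_eur(2_000_000_000))  # 40000000.0
```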

In the eight months since GDPR took effect, 91 reported fines have been imposed. Not all were for personal data breaches. The highest to date was a €50 million ($57 million) fine imposed on Google for processing personal data for advertising without valid authorization. A German company was fined €20,000 ($22,810) for failing to hash employee passwords, which led to a security breach.

The Netherlands reported the most data breaches (15,400 incidents), followed by Germany (12,600) and the United Kingdom (10,600). Those with the lowest number of breaches reported include Liechtenstein (15), Iceland (25), and Cyprus (35). Cyberattacks reported under GDPR range from minor security breaches to major, publicized hacks affecting millions of individuals.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/over-59k-data-breaches-reported-in-eu-under-gdpr/d/d-id/1333798?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Vulnerabilities Make RDP Risks Far From Remote

More than two dozen vulnerabilities raise the risk of using RDP clients to remotely manage and configure systems.

Researchers have announced a flurry of vulnerabilities in three separate implementations of RDP, the remote desktop protocol that is widely used in remote technical support and configuration operations at large enterprises and service providers.

In a presentation at their company’s annual conference, Check Point security researchers detailed 25 “reverse RDP” vulnerabilities in three separate RDP clients: FreeRDP, rdesktop, and mstsc.exe. Two of the clients ship with operating systems: rdesktop is the client included in Kali Linux, while mstsc.exe is Microsoft’s RDP client included with Windows.

In all of these reverse RDP vulnerabilities, it’s the connecting client machine, not the system being connected to, that’s vulnerable. As Yaniv Balmas, head of technical research at Check Point, says, “Once we have a direct channel back to your machine, we can practically do anything we want on that machine. We can do everything we want. The machine is ours.”

While many IT professionals believe that only display and user interface data is exchanged in an RDP session, Balmas says RDP clients have more capabilities than that, and it’s those additional capabilities that are the source of the vulnerabilities.

In both of the open source RDP clients, Check Point found that malware on the “host” system could use a buffer overflow technique to force remote code execution on the client machine. There are actually a variety of ways to do this; so far, 19 vulnerabilities have been identified and given CVE designations in rdesktop, while six have been identified in FreeRDP.

All of these vulnerabilities were submitted to the open source community prior to public disclosure, and all have been patched. “So the remediation for the two free versions is essentially to make sure you’re using the latest patched version,” Balmas says.

The situation with mstsc.exe is different. The researchers found that the code Microsoft uses is much stronger than that of the open source versions. There is one feature, though, that creates an opportunity for malicious behavior: through the RDP client, the host and remote systems share a clipboard.

As the researchers wrote in their blog post on the vulnerabilities, “If the client fails to properly canonicalize and sanitize the file paths it receives, it could be vulnerable to a path-traversal attack, allowing the server to drop arbitrary files in arbitrary paths on the client’s computer, a very strong attack primitive.”

What this means in practical terms also is detailed in the post: “If a client uses the ‘Copy Paste’ feature over an RDP connection, a malicious RDP server can transparently drop arbitrary files to arbitrary file locations on the client’s computer, limited only by the permissions of the client. For example, we can drop malicious scripts to the client’s ‘Startup’ folder, and after a reboot they will be executed on his computer, giving us full control.”
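
The defense implied here is proper canonicalization before any file is written. A minimal Python sketch of that check (not Check Point’s or Microsoft’s code; the helper name and drop directory are illustrative):

```python
import os

def safe_join(base_dir: str, untrusted_relpath: str) -> str:
    """Resolve an untrusted relative path and refuse anything that
    escapes base_dir, e.g. a clipboard-supplied '..\\..\\Startup\\evil.bat'."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, untrusted_relpath))
    # After canonicalization, the result must still live under base_dir.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path traversal attempt: {untrusted_relpath!r}")
    return candidate

print(safe_join("/tmp/rdp-drop", "notes/readme.txt"))  # allowed
# safe_join("/tmp/rdp-drop", "../../etc/passwd")       # raises ValueError
```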

The researchers were able to build proof-of-concept code that pushed content onto the clipboard without the user’s permission or awareness, Balmas says. Then, if the remote user pastes anything from the clipboard, the malicious content is pasted along with it to an attacker-chosen location.

Because the exploit involves user interaction, Microsoft does not classify this as a code vulnerability, and it has not been given a CVE designation. Despite that, “We consider this to be critical, or at least important for users to know, because we think that this kind of — I would call it the bug — goes unnoticed and can definitely be used by malicious actors,” Balmas says.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/vulnerabilities---threats/new-vulnerabilities-make-rdp-risks-far-from-remote/d/d-id/1333799?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Mitigating the Security Risks of Cloud-Native Applications

While containers can create more secure application development environments, they also introduce new security challenges that affect security and compliance.

Containers represent the most significant computing advancement for enterprise IT since VMware introduced its first virtualization product, Workstation 1.0, in 1999. They enable organizations to build, ship, and run applications faster than ever, fueling the rise of the DevOps movement. It’s important for CISOs to realize that while containers can create more secure application development environments, they also introduce new security and compliance challenges when rolled out in production.

When we talk to our customers, many cite a common challenge: how fluid and dynamic the landscape has become. Three years ago, container technologies were almost exclusively used in development, and in the move to production, applications were refactored to meet the operational requirements of the live systems running in the data center. In this window, the security team had plenty of time to evaluate risks and provide late-stage guidance to ensure compliance. At the time, Docker was by far the dominant technology in use.

Fast forward to today, when enterprises are implementing technologies such as Kubernetes for orchestration alongside serverless functions from all of the big cloud vendors, then deploying them “continuously” into production. The window for the security team to properly review the application and its infrastructure has become much shorter, if it still exists at all.

Security Issues
Traditional security tools cannot handle the velocity, scale, and dynamic networking capabilities of containers. Taking this a step further, serverless functions prioritize simplicity and agility by abstracting infrastructure concerns away to provide a simple execution environment for applications and microservices. Attackers may leverage a vulnerability in container base images, third-party libraries, or serverless function code, or take advantage of misconfigured permissions in the cloud infrastructure to reach services that contain sensitive information.

The reliance on open source applications or code snippets creates another source of security vulnerabilities. No one is writing new code from scratch; everyone grabs components from GitHub, Docker Hub, and other open source repositories, and reuses code written earlier for other projects inside the company. The people writing the code may not be familiar with what they are starting from, nor with any vulnerabilities that may be present (or that surface later, after the borrowed code has been embedded). They also use general-purpose components that encompass many more capabilities and privileges than their specific applications actually require — creating an unnecessarily large attack surface.

Shift Left, and Then Shift Up
DevOps and information security teams should work together to address these challenges by facilitating security’s “shift left” to the beginning of the development cycle. Shift left is a well-understood concept in developer circles, and it needs to become just as familiar from a security perspective in order to identify and remedy potential security issues before they move into production.

Security must also “shift up” to focus on its new priority — protecting the application layer — and success requires making these new controls and processes mandatory. The shift-left concept can’t fully address the new security issues that containers and serverless functions can create. For example, shifting left does not provide for effective detection and incident response in the case of a new zero-day attack on a running container. Effective incident response requires identifying the incident, understanding its causes and potential effects, then making a decision regarding appropriate action — something that is only possible with controls over the runtime environment.

Consider the challenge of securing the runtime environment. In a traditional server infrastructure, on-premises or in the cloud, the application runs on a virtual machine (VM), and anti-malware is installed on the VM’s operating system. If the application is compromised, the anti-malware solution stops the attack. But if you are using AWS Fargate or Azure ACI, where do you install anti-malware?

The traditional location for executing security policies, in the middle layers, is no longer under your control. The serverless model exacerbates the problem, and security organizations are realizing these controls remain critically important even after they have worked with DevOps to facilitate the shift left. The “enforcement point” that used to sit on the underlying operating system has to go somewhere; ideally it sits inside the container, where you execute the controls, manage incident response, and so on. All the controls that were once executed in the operating system remain critical: preventing rogue deployments and malicious code injection, securing user credentials, guarding network connections, and thwarting zero-day attacks. Shifting up requires you to spread these controls among the container, orchestration, and development environments.

You must decide which controls need to be executed, and where. Some things will shift left, including identifying potential vulnerabilities or deficiencies in application code and in the configuration of the image. Others should be implemented in the runtime, such as monitoring what containers are doing and understanding what software is running in them, which requires a shift up to protect these new infrastructures. That is how security becomes a facilitator of the DevOps movement and is seen as an ally in releasing secure applications quickly on these newer cloud-native infrastructures.
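
As a toy illustration of one such runtime (“shift up”) control, the sketch below compares the processes visible inside a container against an expected allowlist. This is an assumption-laden sketch, not any vendor’s product: the allowlist is hypothetical, and the script only works when run inside a Linux container’s PID namespace.

```python
"""Toy runtime monitor: flag processes a container isn't expected to run."""
import os

ALLOWED = {"python3", "gunicorn", "nginx"}  # hypothetical expected processes

def running_processes():
    # Each numeric directory under /proc is a live process in this namespace.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                yield f.read().strip()
        except OSError:
            continue  # process exited between listdir and open

unexpected = set(running_processes()) - ALLOWED
if unexpected:
    print(f"ALERT: unexpected processes in container: {unexpected}")
```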


Dror Davidoff is co-founder and CEO of Aqua Security. Dror has more than 20 years of experience in sales management, marketing, and business development in the enterprise software space. He has held executive positions at several emerging IT security and analytics companies. …

Article source: https://www.darkreading.com/cloud/mitigating-the-security-risks-of-cloud-native-applications/a/d-id/1333773?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybercriminals Exploit Gmail Feature to Scale Up Attacks

Criminals are taking advantage of Gmail’s ‘dots don’t matter’ feature to set up multiple fraudulent accounts on websites, using variations of the same email address, Agari says.

Some cybercriminals are taking advantage of a long-standing feature in Google Gmail designed to enhance account security, to create multiple fraudulent accounts on various websites quickly and at scale, security vendor Agari said this week.

The feature, which some have warned about previously, basically ensures that all dotted variations of a Gmail address belong to the same account. For example, Google treats johnsmith (at) gmail.com the same as john.smith (at) gmail.com and jo.hn.smith (at) gmail.com. An individual with johnsmith (at) gmail.com as their email address would therefore receive emails sent to all dotted variations of the same address.

A Google spokesperson declined to comment on the Agari research, but pointed to Google’s official description of the dots feature, where Google says that the “dots don’t matter” approach in Gmail ensures no one can take another person’s username. “Your Gmail address is unique. If anyone tries to create a Gmail account with a dotted version of your username, they’ll get an error saying the username is already taken,” Google said in its post on the feature.

But the feature can be problematic for organizations that support the creation of new user accounts on their websites — such as credit card companies and social media sites — Agari says. Most such sites, and indeed the vast majority of the Internet, treat each dotted variant as a separate email account. For instance, most websites that support account creation would treat johnsmith (at) gmail.com as a separate email address from john.smith (at) gmail.com. So a criminal can easily create multiple accounts on a website using dot variants of the same email address, Agari said in its new research.

Agari researchers recently have observed business email compromise (BEC) scammers taking advantage of the feature to set up dozens of accounts on a single website and have all communications associated with those accounts directed to a single Gmail account.

Over the past year, the criminals have used the approach to submit 48 credit card applications at four US-based companies, netting at least $65,000 in fraudulent credit, Agari said. The attackers have also used the Gmail account feature to file 11 fraudulent tax returns via an online filing service; submit 12 change-of-address requests with the postal service; apply for unemployment benefits; and apply for FEMA disaster assistance using identities belonging to other people.

To be clear, the Gmail feature itself did not enable the actual scams: it just made it easier for the BEC attackers to monitor and receive communications across the multiple accounts using a single Gmail address.

“By exploiting this feature in Gmail accounts, scammers are able to scale their operations more efficiently,” Agari said. They don’t need to create and monitor a new email account for every new account on a website, so their online scams are faster and more efficient.

“Any enterprise that includes account creation in its business model, such as financial services or social media, should be aware that attackers can use the Google ‘dot’ exploit to create a large number of fraudulent accounts,” says Crane Hassold, senior director of threat intelligence at Agari.

“Organizations can either treat dots the way that Google treats dots, which is to ignore them … or they can monitor for rapid-account creation from email addresses that include multiple dots to flag potentially suspicious behavior,” he says.
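Hassold’s first suggestion, treating dots the way Google does, amounts to normalizing addresses before checking for duplicates. A minimal sketch in Python (simplified: it handles only gmail.com, and ignores “+tag” suffixes and googlemail.com, which a production check would also fold in):

```python
def normalize_gmail(address: str) -> str:
    """Collapse dotted Gmail variants to one canonical form so
    duplicate signups can be detected."""
    local, _, domain = address.lower().partition("@")
    if domain == "gmail.com":
        local = local.replace(".", "")  # Gmail ignores dots in the local part
    return f"{local}@{domain}"

# Both dotted variants map to the same canonical account.
assert normalize_gmail("jo.hn.smith@gmail.com") == normalize_gmail("johnsmith@gmail.com")
```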


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/cybercriminals-exploit-gmail-feature-to-scale-up-attacks-/d/d-id/1333800?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Kids’ GPS watches are still a security ‘train wreck’

A year after Norwegian researchers found that child-tracking, GPS-connected smartwatches had major security flaws – flaws that would have let strangers eavesdrop on a child, talk to them behind their parent’s back, use the watch’s camera to take their picture, stalk them, or lie about their whereabouts – not much has changed.

When Pen Test Partners decided to check up on how one of the four models the Norwegian researchers had looked at had shaped up over the course of 14 months, things turned out to be much as before: the security of TechSixtyFour’s Gator watch and thousands of other watches was still a train wreck.

Pen Test Partners’ TL;DR:

Guess what: a train wreck. Anyone could access the entire database, including real time child location, name, parents details etc. Not just Gator watches either – the same back end covered multiple brands and tens of thousands of watches

Following the Norwegian Consumer Council’s (NCC’s) 2017 report about these Internet-of-Things (IoT) wrist wraps, bad press broke out like so much prepubescent acne. At least one UK retailer, John Lewis, responded by yanking the Gator 2.

In November 2018, TechSixtyFour founder Colleen Wong said on the company’s blog that it had responded to the NCC’s report with a complete, one-month-long system overhaul. It also hired a vulnerability assessment firm to review its systems on an ongoing, monthly basis.

But last week, Pen Test Partners’ Vangelis Stykas said that when his firm checked in on the Gator, it was still a snap to hijack the kids’ watches. He explained that Pen Test Partners found that they could change user level to “super admin access” via the web portal for the Gator 3 – it was simple, given that the system didn’t bother to check whether or not the user should get that kind of privileged access:

The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!

It meant that an attacker could get “full access to all account information and all watch information,” Stykas said. The fact that parents were able to specify their access level via a user-controlled parameter that they could boost to admin level means that child predators or other malicious attackers could have snooped on as many as 20,000 customer accounts and 35,000 affected devices. Malicious actors could have obtained user contact details, and they could have identified and tracked the locations of those users’ children.

They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.
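
The missing control here is classic server-side authorization: the user’s privilege level must come from state the server holds, never from a parameter the client submits. Below is a minimal sketch of that check, assuming a Flask-style backend (all names are hypothetical, not Gator’s actual code):

```python
"""Role checks must read server-side session state, not a request parameter."""
from functools import wraps
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder for a real secret

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            # Authorization is decided from server-side state only;
            # a tampered client parameter can't elevate privileges.
            if session.get("role") != role:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin/accounts")
@require_role("super_admin")
def all_accounts():
    return "account listing for admins only"
```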

On the plus side, Gator snapped fast

TechSixtyFour is apparently the UK distributor for the Gator watches, which rely on back-end service from a Chinese company called Caref Watch Co Ltd.

To its credit, TechSixtyFour made it fairly straightforward to report the vulnerability, given that, unlike some vendors, it publishes a contact and a policy for vulnerability disclosure. It was also pretty speedy about getting the flaw patched: after Pen Test Partners disclosed the vulnerability on 11 January, TechSixtyFour (eventually) closed it within 48 hours. …

… but only after its first “fix,” which consisted of blocking the researchers’ accounts with HTTP 502 errors. This first, ineffective attempt also involved removing the form element from the web page that had allowed the researchers to change to super admin access … while leaving the problem-causing parameter untouched.

The following day, Caref apologized for the non-fix and removed the offending form and parameter. It was all patched up for real as of 16 January.

Fortunately, nobody seems to have maliciously exploited the bug, as TechSixtyFour told The Register on Friday:

We appreciate Ken Munro of Pen Test Partners disclosing this vulnerability to us, and our team have taken this seriously as our fix was completed within 48 hours. An internal investigation of the logs did not show that anybody had exploited this flaw for malicious purposes.

[TechSixtyFour’s engineers] implemented a partial fix within 12 hours. They then identified the root cause and deployed a full fix within 48 hours of the notification.

Then there’s the ‘why hasn’t anything changed?’ side…

The problem with these kids’ watches is they should be getting far more thorough security testing before they hit the market, Stykas said. An automated vulnerability assessment service doesn’t cut it, he said:

They’re just not thorough or capable enough to really dig deep, particularly into an API.

Unfortunately, given the thin margin for these gadgets, that’s unlikely to change.

The problem is that the price point of these devices is so low that there is little available revenue to cover the cost of security.

Bottom line: steer clear of these things, Stykas said.

Our advice is to avoid watches with this sort of functionality like the plague. They don’t decrease your risk, they actively increase it.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/F4RRyGWsis8/

Crypto exchange in limbo after founder dies with password

Customers of Canadian cryptocurrency exchange QuadrigaCX are missing over $250 million CAD in fiat and virtual currency (a total of around $190m in US dollars) after its founder died without telling anyone the password for his storage wallet.

QuadrigaCX enabled users to trade between fiat currency and cryptocurrencies including Bitcoin, Bitcoin Cash, Litecoin and Ethereum.

Gerry Cotten, the 30-year-old founder of the Vancouver-based exchange, passed away in India on 9 December 2018 due to complications from Crohn’s disease. In an affidavit to the Supreme Court of Nova Scotia, his partner Jennifer Robertson explained that cryptocurrencies had been stored in a cold wallet under his sole control.

In cryptocurrency trading, a wallet is a repository for cryptocurrency addresses that contain assets, along with private keys to access them. There are two kinds of wallet: hot, and cold.

A hot wallet is a software program connected to a blockchain, enabling it to make cryptocurrency transactions. A hot wallet can be vulnerable to hacking via software compromise.

A cold wallet stores address and private key details off the blockchain. It can take several forms. A paper wallet stores the details in writing, while a hardware wallet stores addresses and keys in a device. A cold storage wallet could even be a simple text file containing the appropriate addresses and keys. It can still be physically stolen, but because it isn’t connected to a blockchain it isn’t vulnerable to online compromise.

It is good practice for cryptocurrency exchanges to keep the majority of their funds in a cold wallet to protect them from hacking, and this is apparently what Cotten did. The mistake he made was in being so secure that he didn’t share the access details with anyone else. His untimely death left Robertson, who had not previously been involved with the company, unable to access the customers’ funds.

In the affidavit, which supported an application for bankruptcy protection for QuadrigaCX, Robertson said:

The laptop computer from which Gerry carried out the Companies’s [sic] business is encrypted and I do not know the password or recovery key. Despite repeated and diligent searches, I have not been able to find them written down anywhere.

The company continues to try to access the cold storage, she went on. It has hired an external expert, Chris McBryan, to try to hack into Cotten’s computers. He is also trying – so far unsuccessfully – to access an encrypted USB key.

Cotten was the sole officer and director for the company, and Robertson explained in the affidavit that she couldn’t find any business records. The search for any pertinent business documents, along with access to Cotten’s computer, is ongoing.

Meantime, Robertson has been dealing with social media comments from those who refuse to believe Cotten is dead, accusing him or others of stealing the coins as part of an exit scam. She has received threatening messages, and one person even messaged everyone in her Facebook contact list.

Further complications

To further complicate matters, QuadrigaCX had also been denied access to around CAD$25 million in funds following a dispute between CIBC (one of the Big Five banks in Canada) and one of its payment processors, called Costodian. This service, engaged by another of QuadrigaCX’s payment processors, called Billerfy, had its accounts frozen by CIBC after they became overdrawn. CIBC then refused to permit any further withdrawals from the Costodian account until it could be proven who owned the funds.

The money was held in trust by the court, which eventually accepted that it belonged to QuadrigaCX. It released the money as bank drafts to Costodian, which could not find a bank to accept them. QuadrigaCX couldn’t open a bank account of its own in which to deposit the drafts either.

The details in the affidavit, uploaded by cryptocurrency news site Coindesk, showed just how much the back end of this cryptocurrency exchange was patched together. QuadrigaCX had no corporate bank accounts, but instead conducted all its business via personal ones. This is because Canadian banks didn’t want to deal with a cryptocurrency company, according to the affidavit.

If there’s one lesson any cryptocurrency user should take away from this, it is to limit the amount of ‘ready money’ lying around in an exchange. You should store cryptocurrency securely at home, offline, in a cold wallet. Then, decide how to back up the password so it can be reconstructed by your executors in the event of your death.
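
One way to make a passphrase recoverable by executors without handing any single person the secret is to split it into shares. The two-of-two XOR split below is a toy illustration only, not a recommendation of a specific scheme; real setups typically use Shamir secret sharing for k-of-n recovery:

```python
"""Toy 2-of-2 secret split: neither share alone reveals the passphrase."""
import secrets

def split(secret: bytes) -> tuple[bytes, bytes]:
    share1 = secrets.token_bytes(len(secret))              # random pad
    share2 = bytes(a ^ b for a, b in zip(secret, share1))  # secret XOR pad
    return share1, share2  # give one share to each executor

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"correct horse battery staple")
assert combine(s1, s2) == b"correct horse battery staple"
```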

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/TYp3Z4qtmy0/

Half of IoT devices let down by vulnerable apps

Testing Internet of Things (IoT) devices for security weaknesses can often resemble a large fist punching a wet paper bag. Researchers report a litany of firmware vulnerabilities, insecure wireless communications, and consumer complacency about the risks of connecting smart devices to a home network.

With so much bad press, might things be improving?

Not as fast as they should be, according to a test by researchers from Brazil’s Federal University of Pernambuco and the University of Michigan, who took a closer look at 32 smartphone apps used to configure and control the 96 top-selling Wi-Fi and Bluetooth-enabled devices sold on Amazon.

There’s a lot for IoT makers to secure, including the apps themselves, their connection to cloud proxies (typically used during initial setup), and the subsequent wireless connection and authentication to and from the IoT device.

It’s also a lot of equipment to test, which is why the researchers in this study started by inferring potential weaknesses using heuristic analysis of the apps themselves.

Disappointingly, 31% of the apps (corresponding to 37 of the 96 devices) had no encryption at all, while another 19% had hard-coded encryption keys that an attacker might be able to reverse engineer even if they had been obfuscated.

The researchers backed up their findings by developing proof-of-concept attacks against five devices controlled by four apps: TP-Link’s Kasa app, used with multiple devices; the LIFX app, used with that company’s Wi-Fi-enabled light bulbs; Belkin’s WeMo for IoT; and Broadlink’s e-Control app.

Three used no encryption, and three communicated riskily via broadcast messages that would give an attacker a way of monitoring the nature of app-device communication with a view to compromise.

Based on our in-depth analysis of 4 of the apps, we found that leveraging these weaknesses to create actual exploits is not challenging. A remote attacker simply has to find a way of getting the exploit either on the user’s smartphone in the form of an unprivileged app or a script on the local network.

One of the vulnerable devices assessed with the Kasa app was TP-Link’s Smart Plug, which the researchers point out has been reviewed 12,000 times on Amazon, achieving a star rating of 4.4 out of 5:

TP-Link shares the same hard-coded encryption key for all the devices of a given product line, and the initial configuration of the device is established through the app without proper authentication.
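
A common mitigation for this class of flaw, offered here only as an illustrative sketch rather than TP-Link’s actual fix, is to derive a unique per-device key from a factory master secret and the device’s serial number, so extracting one device’s key exposes nothing else (the master secret itself must stay out of the app):

```python
"""Derive distinct per-device keys instead of shipping one shared key."""
import hashlib
import hmac

def per_device_key(master_secret: bytes, device_id: str) -> bytes:
    # One-step HKDF-like derivation: each device_id yields a distinct key,
    # provisioned onto the device at manufacture.
    return hmac.new(master_secret, device_id.encode(), hashlib.sha256).digest()

k1 = per_device_key(b"factory-master-secret", "serial-0001")
k2 = per_device_key(b"factory-master-secret", "serial-0002")
assert k1 != k2  # compromising one device's key reveals nothing about another's
```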

IoT utopia

Interestingly, the researchers single out one product – Google’s Nest thermostat and its app – as an example of how IoT security might be done (provided, of course, that the user applies their own basic security), for example by conducting all configuration over connections secured with SSL/TLS to the cloud or over Wi-Fi with WPA (the Nest has the advantage of a display to help configure this, which many other types of IoT devices lack).

Summing up the test, half of the apps are insecure in a variety of ways. Clearly, some vendors are better than others, an impression reinforced by the lack of response the researchers received from affected vendors in the detailed test of five devices.

Write the authors:

None of them have sent any response to our disclosures and to the best of our knowledge, have not released patches relative to these vulnerabilities.

Given the wider reputation IoT devices have for iffy security, this sounds dangerously like heads stuck in the sand.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qn1iVh5TvbM/

Home DNA kit company says it’s working with the FBI

FamilyTreeDNA – one of the larger makers of at-home genealogy test kits – has disclosed that it’s quietly been giving the FBI access to its database of 1 million DNA profiles to help solve violent crime.

Investigators’ use of public genealogy databases is nothing new: law enforcement agencies have been using them for years. But the power of online genealogy databases to help track down and identify people became clear in April 2018, when police arrested Joseph James DeAngelo on suspicion of being the Golden State Killer: the man allegedly responsible for more than 50 rapes, 12 murders and more than 120 burglaries across the state of California during the 70s and 80s.

What’s new about FamilyTreeDNA’s cooperation with the FBI – as reported by BuzzFeed News on Thursday – is that it’s the first time that a private genealogy company has publicly admitted to voluntarily letting a law enforcement agency access its database.

A spokesperson for FamilyTreeDNA told BuzzFeed that the company hasn’t signed a contract with the FBI. But it has agreed to use its private lab to test DNA samples at the bureau’s request, and to upload the profiles to its database, on a case-by-case basis. It’s been doing so since this past autumn, according to BuzzFeed.

The spokesperson said that working with the FBI is “a very new development” that started with one case last year and “morphed.” At this point, she said, the company has cooperated with the FBI on fewer than 10 cases.

Privacy implications

The more people who submit DNA samples to these databases, the more likely it is that any of us can be identified. Over the course of years of searching for the Golden State Killer, investigators had collected and stored DNA samples from the crime scenes. In that and other cases that have hinged on DNA searches, they ran the genetic profile they derived from the DNA samples through an online genealogy database and found it matched with what turned out to be distant relatives – third and fourth cousins – of whoever left their DNA at the crime scenes.

Getting a match with the database’s records helped investigators to first locate DeAngelo’s third and fourth cousins. The DNA matches eventually led to DeAngelo himself, who was arrested on six counts of first-degree murder.

It wasn’t that DeAngelo submitted a DNA sample to any one of numerous online genealogy sites, such as FamilyTreeDNA, 23andMe or AncestryDNA. Rather, it was relatives with genetic makeups similar enough to whoever left their DNA sample on something at a crime scene who made the search possible.

According to research published in October, the US is on track to have so much DNA data on these databases that we’re getting to the point that you don’t even have to submit a saliva sample in order to be identifiable via your DNA.

Researchers from Columbia University found that 60% of searches for individuals of European descent will result in a third cousin or closer match, which can allow their identification using demographic identifiers.

As time goes by, given the rate of individuals uploading genetic samples to sites that analyze their DNA, “nearly any US individual of European descent” will be able to be identified in the near future, they said.

Many, if not most, of these databases are free for the public to search, and law enforcement have gone that route, accessing genetic profiles that people have willingly uploaded.

Now, given its cooperation with FamilyTreeDNA, the FBI has gained access to more than a million DNA profiles, “most of which were uploaded before the company’s customers had any knowledge of its relationship with the FBI,” BuzzFeed notes.

You can opt out

In December, FamilyTreeDNA changed its terms of service to allow law enforcement to use the database to identify suspects of violent crimes, such as homicide or sexual assault, and to identify victims’ remains. But FamilyTreeDNA says that investigators won’t be able to do so without the proper legal documents, such as a subpoena or search warrant.

In other words, FamilyTreeDNA is giving the FBI the same level of access to its records that the public now has, according to the company’s founder and CEO Bennett Greenspan:

We came to the conclusion that if law enforcement created accounts, with the same level of access to the database as the standard FamilyTreeDNA user, they would not be violating user privacy and confidentiality.

FamilyTreeDNA has been lauded for its strict protection of consumer privacy, and Greenspan said that the company has no intention of shedding that reputation to become a data broker:

Working with law enforcement to process DNA samples from the scene of a violent crime or identifying an unknown victim does not change our policy never to sell or barter our customers’ private information with a third party. Our policy remains fully intact and in force.

FamilyTreeDNA told BuzzFeed that to keep their profiles from being searched by the FBI, customers can opt out of having their familial relationships mapped out. But that would also mean that people couldn’t use the service to find possible relatives, which is one of the key attractions of such a database.

‘I feel violated’

Regardless of FamilyTreeDNA’s promise not to share information with the FBI beyond what other consumers can access – at least, not without a valid court order – multiple FamilyTreeDNA users are pondering whether they want to opt out of DNA matching or even have their previously submitted DNA kits destroyed. One such user whom BuzzFeed talked to was Leah Larkin, a genetic genealogist based in Livermore, California:

All in all, I feel violated, I feel they have violated my trust as a customer.

Overall, however, it looks like most genealogy enthusiasts have no problem with helping to track down violent criminals. BuzzFeed cited an informal survey conducted by genealogist Maurice Gleeson that found that 85% of respondents – all of them involved in genealogy in the US or Europe – said they were comfortable with their DNA being used to catch a serial killer or rapist.

How well is this data being protected?

It’s easy to agree with wanting to help catch violent criminals by using DNA matching – what’s known as “enhanced forensic capabilities.” But we need to keep in mind that these databases and services are open to everyone, and not everyone will use them with good intentions.

For example, research subjects can be re-identified from their genetic data. However, rules that, starting this year, will regulate federally funded human subject research fail to define genome-wide genetic datasets as “identifiable” information.

The Columbia University researchers said that their work shows that such datasets are indeed capable of identifying individuals. That’s why they’re encouraging US Health and Human Services (HHS) to rethink that classification.

To better protect our genomes, the researchers proposed that the text files of raw genetic data be cryptographically signed:

Third-party services will be able to authenticate that a raw genotyping file was created by a valid [direct-to-consumer] provider and not further modified. If adopted, our approach has the potential to prevent the exploitation of long-range familial searches to identify research subjects from genomic data. Moreover, it will complicate the ability to conduct unilaterally long-range familial searches from DNA evidence. As such, it can complement previous proposals regarding the regulation of long-range familial searches by law enforcement and offers better protection in cases where the law cannot deter misuse.
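
What the proposal amounts to in code is an ordinary digital signature over the raw file. Below is a sketch using Ed25519 via the Python `cryptography` package (illustrative only: key management, file formats, and distribution of the provider’s public key are all assumed away):

```python
"""Sign a raw genotype file so third parties can verify its provenance."""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

provider_key = Ed25519PrivateKey.generate()  # held by the DTC provider

raw_genotype = b"rs4477212\t1\t82154\tAA\n"  # toy file contents
signature = provider_key.sign(raw_genotype)  # shipped alongside the file

# A third-party service verifies with the provider's public key.
public_key = provider_key.public_key()
try:
    public_key.verify(signature, raw_genotype)
    print("authentic provider file")
except InvalidSignature:
    print("rejected: modified or unsigned file")
```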

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/VhAhDhVYG8U/

Original WWII German message decrypts to go on display at National Museum of Computing

Bletchley Park’s National Museum of Computing will be exhibiting original, freshly discovered decrypted WWII messages to coincide with the 75th anniversary of D-Day this June – messages that were broken by the Colossus machines based at what is now the museum’s site.

The decrypts are due to be put on display in The National Museum of Computing’s (TNMOC) Colossus gallery, which houses the world’s only working replica of the Colossus code-breaking computer.

Colossus itself remained a state secret for 30 years after the war ended because the British government didn’t want anyone to know just how advanced its code-breakers were. Sadly, that policy also meant most of the people involved in the war-winning efforts were all but forgotten about. While the British public lionised war heroes such as Douglas Bader and Tasker Watkins, names such as Tommy Flowers, Alan Turing and Bill Tutte remained in obscurity for decades.

“By accelerating the discovery of the wheel patterns of the ever-changing Lorenz-encrypted messages, 63 million characters of high-grade German messages had been decrypted by 550 people working on the ten functioning Colossus machines at Bletchley Park by the end of the war,” said TNMOC in a statement.

Today also marks the 75th anniversary of Colossus attacking its first encrypted Nazi German message. That message had been encrypted by a German Lorenz machine, used for ultra-secret communications between Hitler’s top generals during the Second World War.

Britain’s maths and code-breaking geniuses managed to crack German codes through a combination of sheer determination along with largely manual methods. Early valve-driven computers were developed at Bletchley Park to speed up the process of decryption once the code-breakers had perfected the process through laborious hours with pen and paper.

Colossus earned its official name well, standing 7ft tall and 17ft wide (2m x 5m) and weighing in at five metric tonnes.

Andrew Herbert, chair of TNMOC, added: “The achievements of those who worked at wartime Bletchley Park are humbling. Despite decades of secrecy, the names of the key characters involved with the breaking of Lorenz and the construction of Colossus are becoming increasingly well-known. Bill Tutte who deduced how the Lorenz machine worked, Colonel Tester and the Testery who broke the cipher by hand, and Tommy Flowers who designed Colossus to speed up the code-breaking process are just a few of the names that are familiar to many today, but who were largely unknown for decades after the war.”

He concluded: “The working rebuild of Colossus at The National Museum of Computing is an inspiration to all our visitors who are astonished when they see the working machine and learn of its impact.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/05/tnmoc_original_ww2_colossus_decrypts/

RIP, RDP: Security house Check Point punches holes in desktop controls

Security firm Check Point has found some 25 security vulnerabilities in three of the most popular remote desktop protocol (RDP) tools for Windows and Linux.

The company tasked its bug-hunters with a manual code audit on Microsoft mstsc as well as the FreeRDP and Kali Linux remote desktop utilities, and what they turned up was a glut of potentially serious flaws and security workarounds.

Of the 25 CVE-listed vulnerabilities included in Check Point’s report, 15 would potentially allow for remote code execution attacks. Rather than assume a malicious client (the person connecting to the remote machine) would dupe a victim running an RDP server, Check Point focused its effort on flaws that would go from the server to the client.

The idea of the study, Check Point said, was to look at the ways someone trying to connect to a machine (such as an admin or tech support staff) could actually be compromised by the box they wanted to remotely access.

“In a normal scenario, you use an RDP client, and connect to a remote RDP server that is installed on the remote computer. After a successful connection, you now have access to and control of the remote computer, according to the permissions of your user,” Check Point said.

“But what if the scenario could be put in reverse? We wanted to investigate if the RDP server can attack and gain control over the computer of the connected RDP client.”

As it turns out, there are more than a few ways the RDP server could be used to attack the remote user. The researchers found that many of the channels used to exchange data between the two endpoints do not properly check the length of packets being sent, allowing the server to throw malformed packets at the client to trigger out-of-bounds reads and integer overflows that could set up remote code execution attacks.
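
The underlying flaw class is a parser trusting a length field from the wire. Here is a toy Python sketch of the safe pattern, not the patched clients’ actual code: validate the declared length against the bytes actually received before slicing.

```python
"""Reject channel chunks whose declared length exceeds the data on hand."""
import struct

def read_channel_chunk(buf: bytes) -> bytes:
    if len(buf) < 4:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack_from("<I", buf, 0)  # little-endian u32
    # Without this check, a malicious server's declared_len walks the
    # parser past the end of the buffer (out-of-bounds read).
    if declared_len > len(buf) - 4:
        raise ValueError(f"declared length {declared_len} exceeds payload")
    return buf[4:4 + declared_len]
```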

Another particularly vulnerable point of attack was the way both the client and server shared data through a common clipboard. Because, again, the data traffic over this channel is not properly sanitized, the shared clipboard would allow for path-traversal attacks or information disclosure caused by the server peeking into the activity of the client’s local clipboard.

A malicious RDP server can modify any clipboard content used by the client, worryingly, even if the client does not issue a “copy” operation inside the RDP window. “If you click ‘paste’ when an RDP connection is open, you are vulnerable to this kind of attack,” noted Check Point.

“For example, if you copy a file on your computer, the server can modify your (executable?) file / piggyback your copy to add additional files / path-traversal files using the previously shown PoC,” it added.

In total, the manual source code review led to the assignment of 19 CVE-listed vulnerabilities in rdesktop and six in FreeRDP.

Foggy Windows

The findings for Microsoft’s closed-source RDP client were a bit more murky. Though Check Point found Windows RDP to be vulnerable to the above-mentioned clipboard issues, the security house said Redmond did not see it as serious enough to merit a CVE or security patch assignment.


Regardless, Check Point ultimately concluded that there is real potential for RDP to be abused by an attacker posing as a remote user or employee, who might then compromise an admin simply by requesting an RDP session. It also mused that the flaws could be used by criminals to fight back against malware researchers who use RDP to connect to virtual machines for analysis.

On a lighter note, Check Point also suggested that the bugs could allow for a bit of mischief between security teams.

“As rdesktop is the built-in client in Kali Linux, a Linux distro used by red teams for penetration testing, we thought of a 3rd (though probably not practical) attack scenario,” the report stated.

“Blue teams can install organizational honeypots and attack red teams that try to connect to them through the RDP protocol.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/02/05/rip_rdp_check_point_punches_over_twodozen_holes_in_desktop_controls/