STE WILLIAMS

Security Lessons from My Game Closet

In an era of popular video games like Fortnite and Minecraft, there is a lot to be learned about risk, luck, and strategy from some old-fashioned board games.

I was recently looking over my collection of board games. As my eyes moved from game to game, I thought about the strategy and approach with which I play them. But, then, an entirely different set of thoughts went through my head. I started to think about the security lessons each game can teach us, and in this piece, I’d like to share those valuable lessons with you. What can old-fashioned board games teach us about security? More than you think.

Risk: Where You Start from Matters
If you’ve ever played Risk, you know that starting in Australia gives any player a unique advantage. Since attacks can only come from one direction, there is only one direction to defend. This allows the player to focus on advancing more quickly. Likewise, in real life, reducing the attack surface gives security organizations a distinct advantage. If there is less risk exposure to defend, the security organization can focus its efforts on improving and maturing its capabilities, thus defending the enterprise more effectively.

Risk also teaches us about strategic distribution of resources. That means to avoid concentrating all of your resources in one area, and to be careful not to spread your resources too thinly. This is an important lesson in security as well. Determining the right mix of resources dedicated to a specific area is a key part of properly reducing risk and defending an enterprise.

Monopoly: Knowing When to Capitalize on Luck
While there is some skill involved in the game of Monopoly, there is also quite a bit of luck. A good Monopoly player knows how to turn a stroke of good luck into a strategic advantage. A good security team should understand how to do the same. On the other hand, it’s important for security teams to know how to account for bad luck: We all encounter bad luck from time to time. The question isn’t whether or not misfortune comes our way but, rather, what we do with it. In Monopoly, knowing how to account for bad luck and play through it is an important part of playing the game successfully. 

The same holds for security. For example, when staring at a stack of Monopoly money, it can be tempting to buy up everything in sight. The problem with this approach is that it can leave a player overextended and unable to pay expenses that may arise as the game unfolds. In security, it’s important to reserve resources for events and incidents that may arise over time rather than overextending and being left without any means with which to handle bumps in the road.

Clue: If It Isn’t Written Down, It Didn’t Happen
I once worked with someone who enjoyed repeating the mantra, “if it isn’t written down, it didn’t happen.” In the game of Clue, it’s important to document each piece of relevant information to ensure that it isn’t forgotten and that it can be leveraged later, as necessary. The same is true in a successful security program. Whether you are talking about security operations, incident response, engineering, compliance, risk management, or any other aspect of security, you must ensure that each relevant detail is properly described.

It’s also critical that you understand the impact of each piece of information. When confronted with information, what possibilities does it eliminate? What possibilities does it allow? As with Clue players, successful security teams understand how to map each relevant piece of information to the impact it has on the organization. This allows the team to continue to react, adapt, and improve as additional information comes to light, which is an important component of a mature security team.

Life: Every Security Program Is at a Different Stage
In the game of Life, different life events happen at different times. An event that may be welcome and joyful in one stage of life may be less so at a different stage. The same is true in security. Security teams vary in their capabilities and maturity. What may be a sensible undertaking for one organization may be either overwhelming or woefully inadequate for another. It’s important to understand where your organization stands in order to properly recognize which efforts are right and appropriate.

The path through development and maturity needs to be planned out. A victory in the game of Life does involve some luck, but it also involves some skill and a strategically planned trajectory. In security, it’s important to strategically plan the improvement, growth, and maturing of your company’s security capability. Further, this strategic plan needs to be executed well at each different phase. This is easier said than done, of course, though example after example shows that haphazardly managing the evolution of a security program yields inferior results.

Checkers: The Pieces in Motion Matter
The pieces you move around a checkerboard, and the order in which you move them, directly affect the outcome of the game. The same holds true in security. A successful security program has many moving parts, and knowing which parts to move, at what time, and in what order is a challenge. No checkerboard allows for unlimited playing pieces, and no security team has unlimited resources. Every enterprise has crown jewels that need protecting, and knowing how to prioritize limited resources to protect the king is an essential skill for resource-constrained security teams.


Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Josh (Twitter: @ananalytical) is an experienced information security leader with broad experience building and running Security Operations Centers (SOCs). Josh is currently co-founder and chief product officer at IDRRA and also serves as security advisor to ExtraHop. Prior to … View Full Bio

Article source: https://www.darkreading.com/analytics/security-lessons-from-my-game-closet-/a/d-id/1334207?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Two Found Guilty in Online Dating, BEC Scheme

Cybercriminals involved in the operation created fake online dating profiles and tricked victims into sending money to phony bank accounts.

Two men have been found guilty for their roles in a fraud operation in which cybercriminals spoofed emails, built fake online dating profiles, and fooled victims into sending them money.

Nigerian citizen Olufolajimi Abegunde and Mexican citizen Javier Luis Ramos-Alonso were both part of a cybercriminal organization that manipulated people into sending money to bogus bank accounts under the group’s control, the Department of Justice reports. Funds were laundered and wired out of the United States to various locations, West Africa among them.

Abegunde participated in black-market currency exchanges throughout the scheme and portrayed himself as a legitimate businessman, the report states. Evidence shows the business he claimed to work with was not operational in late 2017; his primary income came from off-the-record currency transactions. He reportedly told people he preferred cash as it “eliminated the risk.”

Using the cybercriminal network, Abegunde and Ramos-Alonso laundered fraud funds from a July 2016 business email compromise (BEC) scam of a real-estate business based in Tennessee, followed by an October 2016 BEC scam on a Washington-based land title company. Further, Ramos-Alonso engaged in a three-year relationship with someone he met through an online dating website. He consistently sent her money using a network of people based in Africa and the US, and evidence showed the woman was a front for people linked to the money-laundering scheme.

Five more people have pleaded guilty for their roles in this operation, and three foreign nationals are awaiting extradition to the US to face trial. Others remain at large. Sentencing for Abegunde and Ramos-Alonso is set for June 21, 2019.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/two-found-guilty-in-online-dating-bec-scheme/d/d-id/1334232?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Inside Incident Response: 6 Key Tips to Keep in Mind

Experts share the prime window for detecting intruders, when to contact law enforcement, and what they wish they did differently after a breach.

(Image: Jcomp – stock.adobe.com)

The data breaches that make headlines are the biggest ones, but a security incident doesn’t need to be of Equifax proportions to bring down a company. Impact is relative to business size.

“The vast majority of breaches are small [and] don’t get much attention,” said Suzanne Widup, senior analyst at Verizon Enterprise Solutions, during a talk at the recent RSA Conference in San Francisco. “The impact is going to depend on the resources of the company.”

To illustrate, she compared a cyber disaster to a natural disaster: If a small tornado descends on a small town, recovery won’t be the same as it would be for a large metropolis. Similarly, a breach that would make a major enterprise “barely pause” could shut down a small business.

While the incident response process is different for large businesses versus small ones, there are key components all companies should keep in mind when reacting to a data breach. Discussing the details of breach response was a common trend throughout this year’s RSA Conference, with experts across industries, roles, and company sizes sharing their thoughts on the process.

Less than two weeks after RSAC concluded, the security community watched the response to a major cyberattack unfold when Norsk Hydro, a major producer of aluminum, was hit with LockerGoga ransomware. Since it first disclosed the attack, Norsk has demonstrated a level of preparedness and transparency that indicates it was equipped to handle a cybersecurity incident of this magnitude.

Here, we shed light on the key takeaways from conversations and conference sessions related to breach response, from initial detection through the legal details. Are there any lessons learned you’d add to our list? Feel free to continue the conversation in the Comments section, below.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/analytics/inside-incident-response-6-key-tips-to-keep-in-mind/d/d-id/1334187?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Scammer pleads guilty to fleecing Facebook and Google of $121m

Large, worldly tech companies would never fall for a wire transfer invoice scam, would they?

The truth is that any company can fall prey if the fraud is convincing enough – as shown by the case of 50-year-old Lithuanian, Evaldas Rimasauskas, who this week pleaded guilty to conspiring with others to fleece $121 million (£93 million) out of industry giants Facebook and Google.

Rimasauskas, who was arrested in Lithuania two years ago, orchestrated a phishing campaign between 2013 and 2015, according to US authorities, in which employees of the two companies were emailed spoofed invoices that appeared to come from the Taiwanese computer maker Quanta Computer.

The scammers even went as far as registering a company in Latvia under the same name to make the funds request look more plausible, as well as forging invoices using fake embossed corporate stamps.

In total, payments of $23 million from Google and as much as $98 million from Facebook ended up in bank accounts in Latvia and Cyprus, from where they were wired to bank accounts in Slovakia, Lithuania, Hungary, and Hong Kong.

The very thing that might normally arouse suspicion – the size of the invoices – was on this occasion what made them seem normal to two large companies that did regular business with the Asian supplier.

Just as small-time phishing scams are tailored to the sort of person they hope to defraud, larger ones adopt the same tactic, reconfigured to fool the invoice departments at big companies.

Company A and Company B

An intriguing aspect of the case is that the prosecutors have still not named the companies involved, even now referring to them as “US-based internet companies (the Victim Companies).”

The fact that Facebook and Google were involved emerged in the press in 2017 after Rimasauskas’s arrest, with both companies eventually confirming their involvement. Google later said it had recovered all the scammed funds while Facebook said it recovered “most” of the money.

When he’s sentenced in New York on 24 July on the charge of wire fraud, Rimasauskas faces up to 30 years in jail. Manhattan Attorney Geoffrey S. Berman said:

Rimasauskas thought he could hide behind a computer screen halfway across the world while he conducted his fraudulent scheme, but as he has learned, the arms of American justice are long, and he now faces significant time in a US prison.

Stay vigilant to email scams

Facebook and Google aren’t the only companies to fall victim to huge wire transfer scams.

Wire transfer fraud is just one of the ways that crooks attempt to part businesses from their money. To defend against this kind of email threat, here are some security tips:

  • Revisit your outbound email filtering rules to prevent sensitive information from going out to inappropriate destinations.
  • Require multiple approvals for overseas wire transfers.
  • Have strict controls over changes in payment details or the creation of new accounts.
  • Use strong passwords and consider two-factor authentication (2FA) to make it harder for crooks to gather intelligence from your network in the first place.
  • Consider a “back to base” VPN for remote users so their online security is kept up, even on the road.
  • Have your own “central reporting” system, in the manner of IC3, where staff can call in suspicious messages to prevent crooks trying different employees with the same scam until a weak spot is found.
  • Think twice about publicly posting personnel information that could be abused in phishing attacks.
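The lookalike-company trick used in this scam also shows up as lookalike sender domains, and those can be screened for programmatically. Below is a minimal sketch in Python; the trusted-domain list, the similarity threshold, and the `looks_like_spoof` name are all hypothetical, not part of any real product:

```python
import difflib

# Hypothetical list of domains belonging to known, trusted suppliers.
TRUSTED_DOMAINS = {"quantatw.com", "payments.example.com"}

def looks_like_spoof(sender_domain, trusted=TRUSTED_DOMAINS, threshold=0.8):
    """Flag domains that are suspiciously close to a trusted supplier's
    domain without being an exact match -- a common invoice-fraud trick."""
    d = sender_domain.lower()
    if d in trusted:
        return False  # exact match: the real supplier
    return any(
        difflib.SequenceMatcher(None, d, t).ratio() >= threshold
        for t in trusted
    )
```

With these assumed inputs, a near-miss such as "quanta-tw.com" is flagged while the genuine "quantatw.com" passes. A real mail gateway would combine a check like this with SPF/DKIM/DMARC results rather than rely on string similarity alone.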

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YNlmteqWZVA/

Spycam sex videos of 1,600 motel guests sold to paying subscribers

We’ve heard before about hotel owners or Airbnb creep-hosts who’ve set up hidden webcams to capture videos of people having sex, but it seems there are also scumbags selling the live-streamed or prerecorded videos to paying subscribers.

The Korea Herald reported on Wednesday that police have arrested two people for setting up the spycams used to secretly film about 1,600 motel guests over the past year, and that the Seoul Metropolitan Police Agency’s cyber investigation unit had also booked two people for selling the videos.

The Korea Herald, and/or Seoul police, didn’t specify how they got tipped off, but however that happened, the investigation uncovered wireless IP cameras set up at 42 motel rooms at 30 motels in 10 cities in the North and South Gyeongsang and Chungcheong Provinces between 24 November 2018 and 2 March – as in, this was going on up until a few weeks ago.

Tiny hidden cameras

Investigators found “ultra-mini” webcams equipped with what the newspaper said were 1mm lenses – which I take to mean, based on optics-focused discussions like this, that the cameras were teensy. All the better to hide from you, my dear, tucked away in TV set-top boxes and wall sockets, among other hiding places.

In case you’re keeping a list of all the places you should look for these things, you should know that we’ve also heard of a spycam hidden in a clock that one traveler found in an Airbnb room. It was pointed at the bedroom. Itsy bitsy cameras can also be found in coat hooks, smoke alarms, USB power plugs, lightbulbs, teddy bears, air fresheners, picture frames and wall outlets.

In July 2018, Airbnb host Wayne Natt was sentenced to 364 days in jail for secretly filming his guests. He told police that he’d rigged his condo to film sex parties, but somehow, those cameras mustn’t have discriminated between willing and nonconsenting participants, since he was convicted of 14 charges of video voyeurism.

At any rate, Seoul police said that the clips filmed in the Korean motels were live-streamed via a website server based overseas. The group reportedly made about 7 million won (USD $6,200) from 97 subscribers who purchased 803 illegally filmed videos. The website had far more than just those 97, though: in fact, it had a total of 4,099 members.

The suspects – it’s unclear if they’ve been convicted – are looking at up to five years in jail and a penalty of 30 million won if found guilty of distributing illegal videos, and up to a year and 10 million won for distributing porn.

With all the creepy teddy-cams and tiny cameras that are on the market nowadays, this problem won’t go away anytime soon. Regardless of whether you’re staying in a high-priced hotel, a fleabag motel or an Airbnb with rave reviews, it’s good to know how to detect these things and what to do if you find them. Here’s how:

How to detect a hidden webcam

Derek Starnes, who works in tech and who detected a smoke detector hidden webcam in a Florida Airbnb rental, told WFTS that he spotted a small black hole on the alarm and became curious. Poking around, he found a camera and microphone had been hidden inside the smoke detector. He immediately alerted police.

But you need other ways to pick up on these things besides spotting curious little holes. Some of them, including Nest Dropcam, can be hiding behind furnishings, decorations or vents.

Bear in mind that for a camera to see you, it needs a line of sight, and that means that you can see it. So visually inspect vents for holes or gaps – you could even look for a lens reflection by turning off the lights and scanning the room with a flashlight.

If you’re feeling flush, you could pick up a gizmo for finding cameras (they can get pricey), or if you’re technical you could use Nmap or similar to see what gadgets are using the Wi-Fi (although, of course, your host/peeping Tom might have a separate network for spying purposes, or might have a hard-wired surveillance device).

What to do if you detect a hidden camera

  1. Take photos of the device for evidence.
  2. Take photos of your accommodation so you can prove that you haven’t trashed the place: some Airbnb hosts have reportedly made such false accusations.
  3. Get your clothes on and get out of there.
  4. Report it to the police. You want to stop that stream before other people get swept up in it.
  5. If you’re in a rental, report it to the management company, along with your evidence, before it happens to another victim.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vSBndS8zmJQ/

Sacked IT guy annihilates 23 of his ex-employer’s AWS servers

An employee-from-hell has been jailed after he got fired (after a measly four weeks), ripped off a former colleague’s login, steamrolled through his former employer’s Amazon Web Services (AWS) accounts, and torched 23 servers.

The UK’s Thames Valley Police announced on Monday that 36-year-old Steffan Needham, of Bury, Greater Manchester, was jailed for two years at Reading Crown Court following a nine-day trial.

Needham pleaded not guilty to two charges under the Computer Misuse Act – one count of unauthorized access to computer material and one count of unauthorized modification of computer material – but was convicted in January 2019.

As the Mirror reported during Needham’s January trial, the IT worker was sacked after a month of lousy performance working at a digital marketing and software company called Voova in 2016.

In the days after he got fired, Needham got busy: he used the stolen login credentials to get into the computer account of a former colleague – Andy “Speedy” Gonzalez – and then began fiddling with the account settings. Next, he began deleting Voova’s AWS servers.

The company lost big contracts with transport companies as a result. Police say that the wreckage caused an estimated loss of £500,000 (~USD $655,000). The company reportedly was never able to claw back the deleted data.

It took months to track down the culprit. Needham was finally arrested in March 2017, when he was working for a dev ops company in Manchester.

Should-a, could-a, would-a

Voova, like all companies, should have done a few things to protect itself from this sort of nightmare. Security experts had agreed, prosecutor Richard Moss noted during the trial, that Voova could have done a better job at security.

Voova CEO, Mark Bond, admitted to the court that the company could have implemented two-factor authentication (2FA):

There was no multi-factor authentication, a means of confirming the user ID which requires a user to verify their identification by something they know or possess.

If done properly (as in, not with SMS-based codes, which are prone to getting ripped off in SIM swap attacks), 2FA would have stopped Needham from traipsing through Voova’s AWS account posing as “Speedy.”
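As an aside, the one-time codes behind app-based 2FA are simple to reason about: a TOTP code is just an HMAC of the current 30-second time window. Here is a minimal RFC 6238-style sketch in Python; real authenticator apps layer base32 secret handling, clock-drift tolerance, and rate limiting on top of this:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", unix_time // period)  # 30-second time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this produces the specification’s expected 8-digit code, 94287082. The point is that the code is derived from a secret the attacker doesn’t hold, so a stolen password alone – as in Needham’s case – isn’t enough.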

Of course, you also have to lock the door after employees leave by shutting down their accounts.

Make sure you have a plan in place for when employees leave that covers everything from physical access to your property and hardware like laptops, phones and access tokens, to email and call forwarding, and logins to every piece of software or service they used.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/JpwpxtPWZr0/

Microsoft Windows 7 patch warns of coming patchocalypse

Microsoft has issued a patch to remind Windows 7 users that they’ll soon have no patches.

The update tells users that they won’t be able to get support for Windows 7 after 14 January 2020, and it’s effectively a nudge to upgrade to a later operating system (Microsoft has been pressuring people for a long time to upgrade to Windows 10).

What does end of support really mean?

Each version of Windows goes through different support stages. In mainstream support, it gets all the updates and patches you’d expect, but this phase eventually ends, at which point the operating system version switches to extended support. This still provides security updates, but non-security updates are no longer available for desktop consumer products. Enterprises can only get them with extended hotfix support.

Mainstream support for Windows 7 without Microsoft’s Service Pack 1 (SP1) add-on ended on 9 April 2013. For users who had installed SP1, mainstream support ended on 13 January 2015. Since then, Windows 7 SP1 users have been on extended support. The end of support that Microsoft is talking about on 14 January 2020 is the end of that extended support, which is a little like running off a cliff, security-wise.
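For illustration only, the lifecycle dates above can be folded into a small helper that classifies where a Windows 7 SP1 machine sits. The table and function below are a hypothetical sketch, not anything Microsoft ships:

```python
from datetime import date

# Lifecycle dates for Windows 7, taken from the paragraph above.
WINDOWS7_EOL = {
    "mainstream_sp1": date(2015, 1, 13),  # end of mainstream support (SP1)
    "extended_sp1": date(2020, 1, 14),    # end of all security updates
}

def support_status(today: date, eol=WINDOWS7_EOL) -> str:
    """Classify where a Windows 7 SP1 machine sits in its support lifecycle."""
    if today <= eol["mainstream_sp1"]:
        return "mainstream"
    if today <= eol["extended_sp1"]:
        return "extended"
    return "unsupported"
```

For example, a machine checked in March 2019 would be classed as "extended", and the same machine in February 2020 as "unsupported".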

Microsoft says that after extended support ends, the security updates stop coming, which means the company won’t issue patches designed to seal off security bugs in Windows 7 SP1 as part of its Patch Tuesday releases anymore:

After that, technical assistance and software updates from Windows Update that help protect your PC will no longer be available for the product.

Users will be on their own – sitting ducks for attackers who discover zero-day bugs in Windows 7. So, what can they do? Microsoft wants you to upgrade, of course:

Microsoft strongly recommends that you move to Windows 10 sometime before January 2020 to avoid a situation where you need service or support that is no longer available.

The company also has a webpage explaining how to do it, along with a video, just in case you didn’t get the message, showing happy people abandoning their dirty, dysfunctional old stuff to buy shiny new stuff.

But wait, didn’t Microsoft at one point offer Windows 10 for free?

When the OS first launched in 2015, Microsoft offered free upgrades under its Get Windows 10 program, but those ended in July 2016. The only exception was for those using assistive technologies on the operating system, in which case it ended in December 2017.

That means, strictly speaking, users who want to upgrade now have to pay.

I say ‘strictly speaking’ because Microsoft allows people to install legitimately downloaded versions of the software and not activate it.

Microsoft’s lenience about securing unactivated Windows 10 installs is presumably because it’s better to have consumers protected for the good of the entire ecosystem. It remains to be seen how it might treat businesses trying to get away with the same thing, however.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/e8jH7TyeadY/

BitLocker hacked? Disk encryption – and why you still need it [VIDEO]

A security researcher in New Zealand just showed that it’s possible to wire up a low-cost data sniffer to the security chip in a Microsoft Surface laptop…

…and read out the decryption key used by BitLocker, the software that is there to keep the data on your hard disk safe.

That has led to us getting asked, “Is BitLocker cracked? Is disk encryption still worth it?”

The answers are “No” and “Yes”, and this week’s Naked Security Live video explains why.

Watch now for answers to the following questions and more:

  • Why is BitLocker suddenly in the spotlight?
  • How do BitLocker and “full-disk encryption” differ from encryption in general?
  • Does this hack mean anyone who steals my encrypted laptop can get at all my data anyway?
  • How do I set up disk encryption securely?
  • Will encryption slow my laptop down?
  • What’s the point of encrypting everything if most of the files aren’t personal data?
  • What if I forget the password – wouldn’t a hackable system be handy in that case?

(Watch directly on YouTube if the video won’t play here.)

PS. Like the shirt in the video? They’re available at: https://shop.sophos.com/

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/x2xkBTTEHrE/

Hey, what’s Mandarin for ‘WTF is going on?’ Nokia phones caught spewing device IDs to China, software blunder blamed

An undisclosed number of Nokia 7 Plus smartphones have been caught sending their identification numbers to a domain owned by a Chinese telecom firm.

The handsets spaffed the data in clear text over the internet to a server behind the domain vnet.cn, which appears to be owned by China Telecom. The HTTP POST requests from the devices included IMEI numbers, SIM numbers, and MAC identifiers, which can be potentially used to identify and track the cellphones.
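Leaks like this show up immediately in a traffic capture. As a rough illustration – the capture format, hostnames, and function name here are invented for the example – a few lines of Python can flag requests that carry an IMEI-like identifier over plain HTTP:

```python
import re
from urllib.parse import urlparse

IMEI_RE = re.compile(r"\b\d{15}\b")  # IMEIs are 15 decimal digits

def leaky_requests(captured):
    """Given (url, body) pairs from a traffic capture, return those that
    send an IMEI-like identifier over unencrypted HTTP."""
    return [
        (url, body)
        for url, body in captured
        if urlparse(url).scheme == "http" and IMEI_RE.search(body)
    ]
```

A pattern match like this is only a heuristic (any 15-digit number will trigger it), but it is the kind of quick filter researchers run over device traffic before digging into individual requests.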

According to HMD Global, which bought the Nokia phone business from Microsoft in 2016, a limited number of Nokia devices have been communicating by mistake to “a third party server.”

“We have analyzed the case at hand and have found that our device activation client meant for another country was mistakenly included in the software package of a single batch of Nokia 7 Plus,” an HMD Global spokesperson explained to The Register in an email. “Due to this mistake, these devices were erroneously trying to send device activation data to a third party server.”

The company’s spokesperson did not respond to requests to say how many phones are in “a small batch” or to confirm the software was intended for phone activation in China.

In January, security researcher Dirk Wetter identified a GitHub repo with Java code designed to handle some form of Android device registration, credited to Qualcomm, that includes the vnet.cn domain and a reference to China Telecom.

According to a whois lookup, vnet.cn is registered to China Telecom.


HMD insists “no personally identifiable information has been shared with any third party” and that the data sent was never processed – presumably because the activation attempt would fail in the absence of account data associated with an actual telecom customer in China.

The Finnish phone maker says a patch to fix the activation software in affected phones was released in February and nearly all these devices have installed it. The biz adds that collecting activation data is standard practice in the telecom industry and that it “takes the security and privacy of its consumers seriously.”

So too does Finland’s data protection ombudsman Reijo Aarnio, who is looking into the incident for possible data protection law violations, according to Reuters.

While HMD may have run afoul of the EU data protection regime, its misplaced activation software looks less problematic than apps and SDKs that transmit sensitive data deliberately. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/22/nokia_phones_leak/

‘Sharing of user data is routine, yet far from transparent’ is not what you want to hear about medical apps. But 2019 is gonna 2019

Folks using healthcare-related Android apps: after you’ve handed over your private details to that software, do you know where it is sending your data? If you don’t, nobody should blame you. It turns out it can be a complicated and obfuscated affair.

So much so that eggheads probing the data-sharing practices of mobile health applications have urged software developers to be more transparent about how they handle people’s personal info, after observing all sorts of records being passed on to third parties. Parent companies, advertising networks, analytics platforms, data brokers, and more are seemingly getting their hands on at least some part of the pile, directly or indirectly.

And while the studied applications could well be above board, at least within their fine print and terms of use, and sharing data carefully and with consent, the lack of transparency and the sheer volume of information flowing out may deal a blow to any trust you may have in them.

Furthermore, even if the information is anonymized prior to sharing, the data tends to flow through the usual few suspects – Google, Facebook, etc – which could, in theory, piece together the identity of individual netizens using these apps, seeing as they capture so many data points.

Report

Academics hailing from universities in Canada, Australia, and the US together studied 24 popular Android health and medicine-related apps, and found that nearly 80 per cent were passing on at least some of their users’ data to third parties. Their findings were published this week in the British Medical Journal. Check it out for the full details; we’ll summarize them here.

“Sharing of user data is routine, yet far from transparent,” the group concluded in their paper. “Clinicians should be conscious of privacy risks in their own use of apps and, when recommending apps, explain the potential for loss of privacy as part of informed consent. Privacy regulation should emphasise the accountabilities of those who control and process user data. Developers should disclose all data sharing practices and allow users to choose precisely what data are shared and with whom.”

We’re told that 38 per cent of the studied apps shared browser activities, such as medicines looked up and pharmacy websites visited, with third parties; the same again passed on users’ email addresses; 25 per cent handed over the list of drugs people are taking; 21 per cent the users’ first and last names; 17 per cent the users’ medical conditions; and so on.

These stats were produced by studying the network traffic of the applications, which range in install bases of 500 devices to 10 million and are among the top 100 most-used in their sector. “Although most (20/24, 83%) appeared free to download, 30% (6/20) of the ‘free’ apps offered in-app purchases, and 30% (6/20) contained advertising as identified in the Google Play store,” the academics noted. “Of the for-profit companies (n=19), 13 had a Crunchbase profile (68%).”

The types of details leaked by each medical app, according to traffic analysis (see page six of this PDF). Source: Grundy et al

One silver lining is that most of the programs encrypted this data while in transit, leaving six per cent that did not and broadcast private information in clear text. “Network analysis revealed that first and third parties received a median of 3 unique transmissions of user data,” the paper stated. “Third parties advertised the ability to share user data with 216 ‘fourth parties’ within this network.”

And where is this data going? The list should not surprise you:

Organizations receiving app data (see page nine of this PDF). Source: Grundy et al

The obvious concern is whether or not people’s personal information is being properly scrubbed of any identifying info before it is offered to other organizations and advertisers. Unlike other types of user information, medical records are subject to strict regulations, and limits on how data can be disclosed, so you’d hope that stays within the app or its backend. What exactly is going where is still a bit of a black box mystery, which is kinda worrying given the sensitivity of the info we’re talking about here.

The researchers said developers need to be aware of these regulations, and should do a better job of informing everyone how they collect, scrub, and share patient information with outside groups. They also called on doctors and care providers to step up, and take a closer look at the apps they use.

“Most health apps fail to provide privacy assurances or transparency around data sharing practices. User data collected from apps providing medicines information or support may also be particularly attractive to cybercriminals or commercial data brokers,” said Quinn Grundy, an assistant professor and lead author of the study.

“Health professionals need to be aware of privacy risks in their own use of apps and, when recommending apps, explain the potential for loss of privacy as part of informed consent.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/21/medical_apps_personal_data/