STE WILLIAMS

Celebgate 3.0: Miley Cyrus among victims of photo thieves

Here we go again: it’s Celebgate 3.0, and that means a new round of stolen intimate photos of celebrities and tee-hee’ing jerks.

This time around, photos have been gang-grabbed from Miley Cyrus (pictured), Stella Maxwell, Kristen Stewart, Tiger Woods, Lindsey Vonn and Katharine McPhee.

The celebrity leak sites that posted the stolen content don’t merit whatever traffic they might get if we shared their names. Suffice it to say that one of them considers itself a “satirical website” that publishes rumors, speculation, assumptions, opinions, fiction, and what it calls facts… And, obviously, illegal stolen content.

According to Fossbytes, McPhee is taking legal action against the sites that published her content. Ditto for Woods and for Kristen Stewart and her girlfriend, Stella Maxwell, said TMZ.

Vonn, an Olympic skier and Woods’ former girlfriend, called the theft “a despicable invasion of privacy”. The photos were stolen from her cell phone a few years ago. Her spokesman told People that she’s lawyering up:

Lindsey will take all necessary and appropriate legal action to protect and enforce her rights and interests. She believes the individuals responsible for hacking her private photos as well as the websites that encourage this detestable conduct should be prosecuted to the fullest extent under the law.

Celebs suffered through this type of mugging in 2014 with Celebgate 1.0. In v1, thieves and many equally scumbaggy photo-sharers trampled over the privacy of Jennifer Lawrence, Kate Upton, Kirsten Dunst, Selena Gomez, Kim Kardashian, Vanessa Hudgens, Lea Michele, Winona Ryder, Hulk Hogan’s son and Hilary Duff, among dozens of other female celebrities.

The photos in this latest round were still up as of Thursday evening.

We’ve seen multiple men convicted and given jail time over prying open the Gmail and iCloud accounts of Hollywood glitterati, but that sure didn’t stop Celebgate 2.0: in May, we saw the intimate photos of Emma Watson and Amanda Seyfried stolen and posted.

How to trip up the thieves

According to the FBI, the original Celebgate thefts were carried out by a ring of attackers who launched phishing and password-reset scams on celebrities’ iCloud and email accounts.

One of them, Edward Majerczyk, got to his victims by sending messages doctored to look like security notices from ISPs. Another Celebgate convict, Ryan Collins, chose to make his phishing messages look like they came from Apple or Google.

These guys’ pawing was persistent: the IP address of one of the Celebgate suspects, Emilio Herrera, was allegedly used to access about 572 unique iCloud accounts. The IP address went after some of those accounts numerous times: in total, somebody using it allegedly tried to access 572 iCloud accounts 3,263 times. Somebody at that IP address also allegedly tried to reset 1,987 unique iCloud account passwords approximately 4,980 times.

Some of the suspects used a password breaker tool to crack accounts: a tool that doesn’t require special tech skills to use. In fact, anybody can purchase one online and use it to download the contents of a victim’s iCloud account if they know his or her login credentials.

To get those credentials, crooks phish their targets, whether by email, text message or iMessage.

All of which points to how scams that seem as old as the hills – like phishing – are still very much a viable threat.

Anybody who owns an email account and a body they don’t want to see parading around the internet without their permission should be on the lookout, though telling the difference between legitimate and illegitimate messages can be tough.

Here are some ways to keep your private images from winding up in the thieves’ sweaty palms:

  • Don’t click on links in email and thus get your login credentials phished away. If you really think your ISP, for example, might be trying to contact you, instead of clicking on the email link, get in touch by typing in the URL for its website and contacting it via a phone number or email you find there.
  • Use strong passwords.
  • Lock down privacy settings on social media (here’s how to do it on Facebook, for example).
  • Don’t friend people you haven’t met on Facebook, and don’t share photos with people you don’t know and trust. For that matter, be careful of those who you consider your “friends”. One example of creeps posing as friends can be found on the creepshot sharing site Anon-IB, where users have posted images they say they took from Instagram feeds of “a friend”.
  • Use multifactor authentication (MFA) whenever possible. MFA means you need a one-time login code, as well as your username and password, every time you log in. That’s one more thing the scumbags need to figure out every time they try to phish you.
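Those one-time MFA codes are typically time-based one-time passwords (TOTP, RFC 6238): the code is derived from a shared secret and the current time, so a phished password alone isn’t enough to log in. As a minimal, stdlib-only sketch of how such a code is computed (not any particular vendor’s implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code: HMAC-SHA1 over the time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 gives "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds, a stolen code is stale almost immediately, which is exactly what makes it one more hurdle for the phishers.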


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/9idObu-SMLA/

Touchscreens ‘at risk from chip in the middle attack’, warn researchers

Using non-official (or even completely dodgy aftermarket) parts for do-it-yourself repairs has historically meant, at worst, accepting some risk over the remaining life of your appliance. But smartphones, with all their unfettered access to our lives, are a very tempting target for attackers, and new research shows that even the mall kiosks that replace shattered screens or batteries could potentially introduce new risks to our privacy.

The research, called “Shattered Trust: When Replacement Smartphone Components Attack”, by researchers at the Ben-Gurion University of the Negev in Israel, raises the possibility that someone with hardware know-how could cause serious harm to a smartphone owner who takes their phone in for repairs or tries to DIY with compromised parts.

Researchers published a proof of concept showing that an attacker who merely needs access to inexpensive hardware and a smartphone (say, one that’s in the shop for repairs) can replace the old touchscreen with one that’s malicious: in other words, one that has additional hardware sneakily installed to aid the attacker.

The malicious touchscreen could be used to intercept and record the phone’s unlock code, modify permissions and configurations, download malicious apps, redirect web browsers to phishing pages, and even attack device drivers to completely compromise the phone. They’re calling this a “chip-in-the-middle” attack, after the man-in-the-middle attack, in which an attacker intercepts web traffic to monitor and even redirect the browsing of an unwitting victim.

The chip-in-the-middle proof of concept isn’t exactly subtle in its current state: There are a lot of wires and chipsets hanging out of the opened phone assembly:

The proof-of-concept was modeled on an Android phone, so it’s not known if an Apple phone is as vulnerable to the same methods of attack. So — as far as we know — trying to use this attack vector isn’t yet viable in a subtle way.

However, the researchers argue that it’s only a matter of time and effort for someone to miniaturize the attack they’ve modeled, and it is absolutely possible for someone to fit the chipset into the tight spaces of a smartphone.

How big a problem could this be? From the paper itself:

Conservative estimates assume that there are about 2bn smartphones in circulation today. Assuming that 20% of these smartphones have undergone screen replacement, there are on the order of 400m smartphones with replacement screens in the world. An attack which compromises even a small fraction of these smartphones through malicious components will have a rank comparable to that of the largest PC-based botnets.
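The paper’s back-of-envelope estimate is easy to reproduce; note that the 0.1% compromise rate in the last line is our own illustrative assumption, not a figure from the paper:

```python
smartphones = 2_000_000_000         # conservative estimate of phones in circulation
replaced = int(smartphones * 0.20)  # assume 20% have had a screen replaced

print(f"{replaced:,} phones with replacement screens")   # 400,000,000

# Even a tiny fraction of those would rival the largest PC botnets
# (0.1% is a hypothetical rate chosen purely for illustration).
print(f"{int(replaced * 0.001):,} compromised at 0.1%")  # 400,000
```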

This is undoubtedly the argument that “walled garden” hardware and software advocates, like Apple, will continue to make, perhaps citing the CitM attack as a prime reason why they, the OEM (original equipment manufacturer), should have sole control over the hardware supply chain: stricter controls over hardware manufacturing and quality control, and dedicated diligence to prevent an outside force from manipulating hardware for devious means.

However, a key point raised in the research is that a number of hardware components made for smartphones, including the charging cables and touchscreen, are already not manufactured under the control of the phone vendor, and counterfeit parts abound for Apple and Google phones alike — meaning the genie is already out of the bottle on that front.

The research isn’t meant to stoke fears about third-party repairs; rather, it’s a call for increased diligence by phonemakers to recognize that compromised hardware has been and will continue to be a real possibility, and that a phone should not automatically assume that the hardware it is using is 100% trustworthy.

Instead, the researchers call for “hardware-based countermeasures”, including a proxy firewall to intercept attacks from the screen and protect the rest of the device. The paper says:

Placing this device on the motherboard means that it will not be affected by malicious component replacement.

If you want to see proofs of the concept for yourself, the researchers demonstrated several of the attacks on video.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/iVZBA7m9I9E/


‘Clever’ TapDance approach to web censorship that works at ISP level

Both China and India have been found to block websites sometimes. Don’t feel smug if you live outside of Asia: the American government may block websites in the future. The UK government has already talked about blocking websites that feature pornography of consenting adults, unless an adult Briton specifically asks to be able to access it.

Researchers from the University of Colorado at Boulder, Georgetown University Law Center, University of Michigan, and University of Illinois Urbana-Champaign have found a way to circumvent web censorship, but ISPs worldwide would need to implement their technology. Their refraction networking system is called TapDance.

One of the most common ways that people bypass region-based content blocking is by using third-party VPNs that operate in different countries. But governments and corporations can interfere by blocking access to a third-party VPN, or other kinds of internet proxies, such as Tor.

David Robinson is one of the researchers behind TapDance. The research team deployed a limited implementation of TapDance with the help of Psiphon, an application that helps people access the internet without censorship. Robinson wrote about TapDance on Medium:

For our trial, we built a high-performance implementation of the TapDance refraction networking scheme and deployed it on four ISP uplinks with an aggregate bandwidth of 100 gigabits per second. To reach end users, we partnered with Psiphon, a popular anti-censorship tool. For this trial, some Psiphon users received a specially updated version of the Psiphon client, which was configured to use TapDance instead of Psiphon’s other circumvention strategies. Over one week of operation, our deployment served more than 50,000 real users. The experience demonstrates that TapDance can be practically realized at ISP scale with good performance and at a reasonable cost, potentially paving the way for long-term, large-scale deployments of TapDance or other refraction networking schemes in the future.

Image courtesy of David Robinson and refraction.network

Here’s how TapDance works. When a client requests a blocked webpage, its traffic passes through a TapDance station at a participating ISP. The station passively inspects a copy of that traffic and stealthily injects new packets into it, over HTTPS.

A user’s TapDance client sends incomplete HTTPS requests to sites that aren’t blocked. Clients tag the ciphertext of the connecting packets in a way that the TapDance station can see but censorship mechanisms cannot. The reachable site won’t respond to the HTTPS requests because they’re incomplete. While the requests are in transit, the TapDance station impersonates the server and covertly exchanges data with the TapDance clients.
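The real tagging lives inside the TLS ciphertext, but the “incomplete request” trick itself is simple to picture: the client stops one line short of finishing its request, so the decoy server waits silently and an on-path station can answer in its place. A schematic sketch (`decoy.example` is a placeholder host, and actual TapDance operates inside an established TLS session rather than plaintext HTTP):

```python
# Build an HTTP request deliberately left incomplete: it never sends the
# blank line that terminates the header block, so the decoy server keeps
# waiting and never answers -- leaving room for an on-path TapDance
# station to reply in the server's place.
DECOY_HOST = "decoy.example"  # hypothetical unblocked decoy site

def incomplete_request(host):
    headers = [
        "GET / HTTP/1.1",
        f"Host: {host}",
        "Connection: keep-alive",
    ]
    # A complete request would end with "\r\n\r\n"; we stop one CRLF short.
    return ("\r\n".join(headers) + "\r\n").encode()

req = incomplete_request(DECOY_HOST)
print(req.endswith(b"\r\n\r\n"))  # False: the header block is never terminated
```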

The TapDance system has advantages that other refraction networking technologies lack. The researchers tested the technology with Merit Network and on the University of Colorado at Boulder’s internet infrastructure. About 50,000 users were involved, and the ISP upstream links peaked at a bandwidth of more than 55 Gbps. The researchers found that TapDance was less expensive to deploy than other refraction networking systems. That’s partly because TapDance uses infrastructure that ISPs already have deployed, including standard gateway routers and default network interfaces for packet injection.

EFF senior staff technologist Seth Schoen and EFF staff technologist Erica Portnoy think TapDance is promising:

This is an impressive accomplishment at the engineering level and also logistically. The researchers had to do a lot of work to turn this idea into reality at a real ISP on the real internet; an ISP environment has traditionally been a challenging setting for a change like this on both technical and political levels.

Refraction networking is quite different from previous anti-censorship techniques. Previous techniques like domain fronting do something similar at the application layer, often using a content-delivery network (CDN) as the hidden anti-censorship intermediary that grants access to the blocked content.

This new method works further down at the network level, using an ISP as the anti-censorship intermediary. Using a fundamental network component this way partly re-architects the internet itself to be more resistant to censorship.

That’s clever, and may also be harder to detect and to block. But on the other hand, it’s not something that the operator of an individual censored site can just go and do directly. Instead, it has to start with ISPs. So this technique also shifts the ability and responsibility for getting around network-based censorship.

The new technique is an improvement. The statistical methods that the censors may use are much less certain overall than figuring out which particular sites are being used as anti-censorship proxies, and blocking those, which is what’s currently possible. In refraction networking, there’s no simple list of sites that can be blocked, since any site whatsoever can be used as a decoy even without that site’s knowledge.

So the researchers know that TapDance works on a small scale. What they don’t know yet is whether or not TapDance can circumvent censorship mechanisms that are used by governments to block internet content if their technology is deployed to millions of users.

The TapDance research team hypothesizes that the technology might be able to circumvent mass government censorship methodologies because interfering with a refraction networking system as covert as TapDance would be impractically expensive. We’ll only know for certain if TapDance is tested with millions of users simultaneously.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/YzRITfmRs3M/


Identity theft at ‘epidemic’ levels, warn experts

Identity theft is running at “epidemic” levels in the UK, fraud prevention service Cifas has glumly announced.

Ever the bearer of bad news, the organisation has been saying the same thing for years. Since 2008 (when ID fraud fell after the credit crunch), reporting of the problem by Cifas’s 360 members across banking and financial services rose steadily from 77,000 cases in 2009 to 173,000 in 2016.

In the first six months of 2017, the figure reached 89,000, which implies a rise of around 9% for this year as a whole, or 500 individual identities stolen every single day in a crime that now makes up half of all fraud. Should ID theft breach 200,000 in 2018, nobody will be surprised.
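The “500 a day” figure follows directly from the half-year total:

```python
h1_cases = 89_000   # ID theft cases reported to Cifas, January-June 2017
days = 181          # days in the first half of 2017

print(round(h1_cases / days))  # -> 492, i.e. roughly 500 identities a day
```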

If a bad thing keeps getting worse, it’s probably worth asking why. A clue comes in the breakdown of the individual fraud types that make up ID theft as a whole.

It’s noticeable that the two biggest – opening bank accounts or applying for credit cards in someone’s name – have fallen.

In the first six months of 2017, card fraud fell 12% to 30,000 compared to the same period in 2016, while bank account fraud fell 14% to 25,000.

Meanwhile, other types of fraud boomed spectacularly, with fraudulent loans rising 54% to 11,500 cases, telecoms fraud up 61% to 9,000, and online retail up 56% to 5,000.

The odd crime of taking out insurance in someone’s name went from only 20 cases to more than 2,000, apparently because it is a simple way of accessing personal data to fuel more serious ID theft crimes.

These are amazing rises in mere months and suggest that as security becomes more stringent in one area, criminals shift attention to less well-defended parts of the system.

A second depressing conclusion is that ID theft is inextricably linked to the rise of online commerce, whose rapid growth it neatly tracks. If so, ID theft will continue to grow as this channel expands further.

Cifas is an organisation that represents the financial services industry so, naturally, it wants consumers to carry the can by paying attention to the amount of personal information they share online. Chief executive Simon Dukes comments:

These frauds are taking place almost exclusively online. The vast amounts of personal data that is available either online or through data breaches is only making it easier for the fraudster.

That’s not bad advice, but being cagey about one’s name, dates of birth and addresses isn’t easy in an online world that constantly demands this stuff even for relatively trivial services.

The deeper problem is that the online world depends on a notion of identity so crocked it would make the Victorians wince. It was bad enough when the world depended on birth certificates, passports and driving licences – these days, some online systems can be beaten simply by feeding them an individual’s personal data points.

Sounds far-fetched? Try applying for online credit and see how few checks are carried out in many cases.

Personal data is not identity and yet, too often today, it is taken to be. The financial services industry, for its part, is addicted to the flawed system of credit reports, ironically one of the first ways ID theft victims end up being “punished” for behaviour conducted in their name.

What to do – and not do

Be cagey about personal data on social media. Mentioning addresses, dates of birth and mobile numbers is now extremely risky. Profile pages are where criminals start.

Treat any online accounts containing personal data as precious. This means using decent passwords and every available security measure from multi-factor authentication to security software. And remember that dictionary attacks can beat a lot of apparently good passwords if they use common patterns of words or character substitutions.
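A toy illustration of why substitution patterns add so little: each base word expands into only a handful of “leetspeak” variants, all of which a dictionary attack can enumerate in a fraction of a second (the substitution table below is a small illustrative sample, not any particular cracking tool’s rule set):

```python
# A toy dictionary attack: common character substitutions ("leetspeak")
# expand each base word into the variants attackers routinely try.
from itertools import product

SUBS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def variants(word):
    """Yield every substitution variant of a lowercase word."""
    pools = [SUBS.get(c, c) for c in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

# "p@ssw0rd" looks clever but sits in the same tiny search space as "password"
print("p@ssw0rd" in set(variants("password")))  # True
print(sum(1 for _ in variants("password")))     # only 54 variants to try
```

A genuinely strong password is long and not built from a dictionary word at all, so there is no base word for rules like these to expand.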

UK citizens have a statutory right to see a copy of their credit report for a £2 ($3.50) fee. This might be worth checking annually for unusual activity.

If you’re contacted about a service you think borrowed your identity, contact your bank first rather than arguing about it with the company involved. Do not delay.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jkamZiPdjdw/

Identity theft at ‘epidemic’ levels, warn experts

Identity theft is running at “epidemic” levels in the UK, fraud prevention service Cifas has glumly announced.

Ever the bearer of bad news, the organisation has been saying the same thing for years. Since 2008 (when ID fraud fell after the credit crunch), reporting of the problem by Cifas’s 360 members across banking and financial services rose steadily from 77,000 cases in 2009 to 173,000 in 2016.

In the first six month of 2017, the figure reached 89,000, which implies a rise of around 9% for this year as a whole, or 500 individual identities stolen every single day in a crime that now makes up half of all fraud. Should ID theft breach 200,000 in 2018, nobody will be surprised.

If a bad thing keeps getting worse, it’s probably worth asking why. A clue comes in the breakdown of the individual fraud types that make up ID theft as a whole.

It’s noticeable that the two biggest – opening bank accounts or applying for credit cards in someone’s name – have fallen.

In the first six months of 2017, card fraud fell 12% to 30,000 compared to the same period in 2016, while bank account fraud fell 14% to 25,000.

Meanwhile, other types of fraud boomed spectacularly, with fraudulent loans rising 54% to 11,500 cases, telecoms fraud up 61% to 9,000, and online retail up 56% to 5,000.

The odd crime of taking out insurance in someone’s name went from only 20 cases to more than 2,000, apparently because it is a simple way of accessing personal data to fuel more serious ID theft crimes.

These are amazing rises in mere months and suggest that as security becomes more stringent in one area, criminals shift attention to less well-defended parts of the system.

A second depressing conclusion is that ID theft is inextricably linked to the rise of online commerce, whose rapid growth it neatly tracks. If so, ID theft it will continue to grow as this channel expands further.

Cifas is an organisation that represents the financial services industry so, naturally, it wants consumers to carry the can by paying attention to the amount of personal information they share online.  Comments chief executive Simon Dukes:

These frauds are taking place almost exclusively online. The vast amounts of personal data that is available either online or through data breaches is only making it easier for the fraudster.

That’s not bad advice, but being cagey about one’s name, dates of birth and addresses isn’t easy in an online world that constantly demands this stuff even for relatively trivial services.

The deeper problem is that the online world depends on a notion of identity so crocked it would make the Victorians wince. It was bad enough when the world depended on birth certificates, passports and driving licences – these days, some online systems can be beaten simply by feeding them an individual’s personal data points.

Sounds far-fetched? Try applying for online credit and see how few checks are carried out in many cases.

Personal data is not identity and yet, too often today, it is taken to be. The financial services industry, for its part, is addicted to the flawed system of credit reports, ironically one of the first ways ID theft victims end up being “punished” for behaviour conducted in their name.

What to do – and not do

Be cagey about personal data on social media. Mentioning addresses, dates of birth and mobile numbers is now extremely risky. Profile pages are where criminals start.

Treat any online accounts containing personal data as precious. This means using decent passwords and every available security measure from multi-factor authentication to security software. And remember that dictionary attacks can beat a lot of apparently good passwords if they use common patterns of words or character substitutions.
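To illustrate the point about dictionary attacks beating patterned passwords, here is a minimal sketch (the wordlist, substitution table and suffix list are illustrative, not any real cracking tool’s rules): a few common character substitutions and suffixes applied to a small dictionary are enough to reach passwords like “P@ssw0rd123”.

```python
# Sketch: why common "clever" substitutions don't help. A dictionary
# attack that mutates a small wordlist with typical leet-speak swaps
# and suffixes still finds such passwords quickly.
from itertools import product

# Illustrative substitution and suffix rules
LEET = {"a": ["a", "@"], "o": ["o", "0"], "s": ["s", "$"],
        "e": ["e", "3"], "i": ["i", "1"]}
SUFFIXES = ["", "1", "123", "!"]

def mutations(word):
    """Yield the word under every combination of substitutions,
    capitalisation and common suffixes."""
    pools = [LEET.get(c.lower(), [c]) for c in word]
    for combo in product(*pools):
        base = "".join(combo)
        for variant in (base, base.capitalize()):
            for suffix in SUFFIXES:
                yield variant + suffix

def crack(target, wordlist):
    """Return the matching guess if the target falls to the rules."""
    for word in wordlist:
        for guess in mutations(word):
            if guess == target:
                return guess
    return None

print(crack("P@ssw0rd123", ["password", "letmein", "dragon"]))  # → P@ssw0rd123
```

A genuinely random passphrase, by contrast, appears in no wordlist and survives every such rule set.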

UK citizens have a statutory right to see a copy of their credit report for a £2 ($3.50) fee. This might be worth checking annually for unusual activity.

If you’re contacted about a service that appears to have been taken out in your name, contact your bank first rather than arguing about it with the company involved. Do not delay.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jkamZiPdjdw/

Hackable flaw in connected cars is ‘unpatchable’, warn researchers

The news for the motoring public was bad enough a few weeks ago: a team of researchers had demonstrated yet another hackable flaw in connected vehicles – in the Controller Area Network (CAN) bus standard – that could enable a Denial of Service (DoS) attack on safety systems including brakes, airbags and power steering.

Kind of a big deal, since the CAN is essentially the brain of the car – it handles a vehicle’s internal communication system of electronic control units (ECUs), which, the researchers noted, “is driven by as much as 100,000,000 lines of code”.

And the news got worse this past week, with word that the flaw – which applies to virtually every modern car, not just a single brand or model – is unfixable. As Bleeping Computer put it, “this flaw is not a vulnerability in the classic meaning of the word … (It) is more of a CAN standard design choice that makes it unpatchable.” To patch it would require “changing how the CAN standard works at its lowest levels”.

Accomplishing a redesign that would eliminate the flaw, the researchers concluded in their paper, titled “A Stealth, Selective Link-Layer Denial-of-Service Attack Against Automotive Networks”, would take an entire generation of vehicles.

Which is yet another ominous reminder that security remains an afterthought in too many industries. Instead of “security by design”, the mentality is that it will always be possible to “bolt it on” later. Except, in this case, it’s not possible.

The researchers’ attack worked by overloading the CAN with error messages, to the point where it was

… made to go into the Bus Off state, and thus rendered inert/inoperable. This, in turn, can drastically affect the car’s performance to the point that it becomes dangerous and even fatal, especially when essential systems like the airbag system or the antilock braking system are deactivated.
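The “Bus Off” state the researchers mention comes from CAN’s standard fault-confinement rules: a node’s transmit error counter (TEC) jumps by 8 on each transmit error, decays by 1 on success, and the node disconnects itself from the bus once the counter reaches 256. The sketch below simulates those rules (class and method names are illustrative, not from the paper) to show how few forced errors it takes to silence a node.

```python
# Sketch of CAN fault confinement: repeated induced transmit errors
# drive the transmit error counter (TEC) past 255, forcing bus-off.
class CanNode:
    ERROR_PASSIVE = 128  # TEC threshold for error-passive state
    BUS_OFF = 256        # TEC threshold for bus-off state

    def __init__(self):
        self.tec = 0  # transmit error counter

    @property
    def state(self):
        if self.tec >= self.BUS_OFF:
            return "bus-off"        # node stops participating on the bus
        if self.tec >= self.ERROR_PASSIVE:
            return "error-passive"
        return "error-active"

    def transmit(self, error_induced):
        if self.state == "bus-off":
            return  # a bus-off node no longer transmits
        if error_induced:
            self.tec += 8                    # transmit error: +8
        else:
            self.tec = max(0, self.tec - 1)  # success: -1

node = CanNode()
for _ in range(32):                 # 32 * 8 = 256 — enough for bus-off
    node.transmit(error_induced=True)
print(node.state)  # → bus-off
```

Just 32 consecutive forced errors suffice, which is why an attacker who can corrupt a target ECU’s frames at the bit level can knock it off the bus in milliseconds.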

However, the Department of Homeland Security’s ICS-CERT noted in an alert about the flaw that the attack requires access to one of the vehicle’s local open ports.

Which has generated a fair amount of mockery about how dangerous this really is. A number of comments on the blog of security expert Bruce Schneier, who noted it this past week, said a hacker getting access to one of the ports in the interior of the car is about as likely as a passenger in the car grabbing the wheel – possible but highly improbable. One called it “a tempest in a thimble”.

But then another, with equal snark, noted that it might not be necessary to gain physical access to the vehicle, “if someone were daft enough to add wifi connectivity to CAN … or digital radio … or a mobile phone. But who would do such a thing?” he concluded, with links to stories here, here and here about all three being done.

Schneier said “we don’t know” whether attackers could attack remotely or would need physical access, but added, “my bet is on remote”.

One of the researchers, Andrea Palanca, said he and his colleagues believe remote attacks are possible. “Simply the lack of time and budget planned for the project impeded us from trying a remote version,” he said. And he contended that the risks from the CAN bus flaw are vastly more than “a tempest in a thimble”.

There are cars currently circulating on roads capable of safety-critical partially autonomous functionalities which entirely rely over their CAN buses availability, and whose abrupt and, most of all, unexpected disruption could lead to life-threatening situations – let alone should CAN bus be employed as a backbone for completely autonomous vehicles.

The hope of the research is to instill awareness over the important limits that this design-level vulnerability introduces to CAN bus adoption in such high-reliability demanding situations.

Another member of the research team, Federico Maggi, added that a malicious attacker getting physical access to the vehicle is not as far-fetched as it might have been years ago. “With current transportation trends such as ride-sharing, carpooling, and car renting, the scenario where many people can have local access to the same car is now more commonplace,” he wrote, adding, “A paradigm shift in terms of vehicle cybersecurity must happen.”

And if it does, all it will take is a generation to achieve.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nZEb_nkOCb4/


Hash of the Titan: How Google bakes security all the way into silicon

Google has unveiled more details about how security built into its custom silicon chips underpins the integrity of its servers and cloud-based services.

A blog post details how Google’s custom Titan chip provides a hardware-verified boot and end-to-end authenticated root of trust for the internet giant’s computing workhorses.

“We harden our architecture at multiple layers, with components that include Google-designed hardware, a Google-controlled firmware stack, Google-curated OS images, a Google-hardened hypervisor, as well as data center physical security and services,” the team of senior Google techies explain.

Titan is a secure, low-power microcontroller designed to meet Google’s hardware security requirements; it was first announced at Google Cloud Next ’17 back in March.

The chip continues a longer-running philosophy of building security into custom silicon for Google servers, previously covered by The Register back in January.

Titan is designed to ensure a machine boots from a known good state using verifiable code, providing a secure foundation for subsequent operations and all but eliminating the possibility of firmware-based rootkits or other similar nasties.

“Our [data center] machines boot a known firmware/software stack, cryptographically verify this stack and then gain (or fail to gain) access to resources on our network based on the status of that verification. Titan integrates with this process and offers additional layers of protection,” the Google team writes.

Secure boot typically relies on a combination of an authenticated boot firmware and boot loader along with digitally signed boot files. In addition, a secure element can provide private key storage and management. Titan then offers two extra security controls – remediation and first-instruction integrity.

Remediation offers a way to re-establish trust in cases where bugs in Titan firmware are found and patched. First-instruction integrity allows Google to identify the earliest code that runs on each machine’s startup cycle.
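The chain-of-trust idea behind this kind of verified boot can be sketched in a few lines. This is a conceptual illustration, not Google’s implementation: each stage holds the expected digest of the next stage and refuses to hand over control if the measurement differs (real systems verify digital signatures rather than bare hashes; the stage names and blobs here are invented).

```python
# Minimal sketch of a verified-boot chain: each stage is measured
# (hashed) and compared against a trusted value before it may run.
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Illustrative stage images
boot_rom_firmware = b"firmware-v1"
boot_loader = b"bootloader-v7"
kernel = b"kernel-v42"

# Expected digests, held by (or signed for) the previous stage
TRUSTED = {
    "firmware": digest(boot_rom_firmware),
    "bootloader": digest(boot_loader),
    "kernel": digest(kernel),
}

def boot(firmware, loader, kern):
    """Verify each stage in order; halt on the first mismatch."""
    for name, blob in [("firmware", firmware),
                       ("bootloader", loader),
                       ("kernel", kern)]:
        if digest(blob) != TRUSTED[name]:
            return f"halt: {name} failed verification"
    return "booted"

print(boot(boot_rom_firmware, boot_loader, kernel))         # → booted
print(boot(boot_rom_firmware, b"tampered-loader", kernel))  # → halt: bootloader failed verification
```

Because every stage is checked before it executes, a firmware-level rootkit would have to defeat the hardware root of trust itself rather than merely overwrite flash.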

Titan bundles several components: a secure application processor, a cryptographic co-processor, a hardware random number generator, a key hierarchy, embedded static RAM (SRAM), embedded flash and a read-only memory block.

In effect, Google is pushing verification of secure boot for its hardware all the way down the stack and onto bare-metal silicon. Google is taking a belt, braces and elasticated waistband approach to delivering secure boot – and it’s relying on in-house expertise rather than third parties to deliver this technology.

“[It’s] clearly worried about supply chain,” University of York techie Arthur Clune suggests.

As Clune notes, research presented at the recent Black Hat conference in Las Vegas (PDF) showed how firmware vulnerabilities might be used to plant software backdoors. Google acknowledges such outside interference as a risk it is trying to exclude.

Google designed Titan’s hardware logic in-house to reduce the chances of hardware backdoors. The Titan ecosystem ensures that production infrastructure boots securely using authorized and verifiable code.

The custom Titan chip and how it fits inside Google’s purpose-built server [source: Google]

In addition to enabling secure boot, Google has developed an end-to-end cryptographic identity system based on Titan that offers a root of trust for varied cryptographic operations in its data centers. The system’s strong identity gives Google a non-repudiable audit trail of any changes made to the system. Tamper-evident logging capabilities are there to help identify actions performed even by an insider with root access.
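One common way to make a log tamper-evident, sketched below purely as an illustration of the concept (Google’s actual design is not public in this detail), is hash chaining: each entry commits to the hash of the previous entry, so altering any earlier record invalidates every hash after it.

```python
# Sketch of tamper-evident logging via hash chaining: editing any
# entry breaks the chain of digests that follows it.
import hashlib

GENESIS = "0" * 64  # fixed starting hash

def chain(entries):
    """Build a log of (message, hash) pairs, each hash covering
    the previous hash plus the current message."""
    prev, out = GENESIS, []
    for msg in entries:
        h = hashlib.sha256((prev + msg).encode()).hexdigest()
        out.append((msg, h))
        prev = h
    return out

def verify(log):
    """Recompute the chain and confirm every stored hash matches."""
    prev = GENESIS
    for msg, h in log:
        if hashlib.sha256((prev + msg).encode()).hexdigest() != h:
            return False
        prev = h
    return True

log = chain(["root login", "config change", "service restart"])
print(verify(log))  # → True
log[0] = ("root login (edited)", log[0][1])  # insider edits the first entry
print(verify(log))  # → False
```

Even an insider with root access who rewrites one record cannot do so silently unless they can also recompute and replace the entire downstream chain, which an externally anchored head hash prevents.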

Titan provides a root of trust by enabling verification of the system firmware and software components as well as establishing a strong, hardware-rooted system identity, Google concludes. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/08/25/google_titan_security_silicon/