
2017 Has Broken the Record for Security Vulnerabilities

Some 40% of disclosed vulns as of Q3 are rated as severe, new Risk Based Security data shows.

2017 has already broken the record for the most security vulnerabilities – and that’s only as of the third quarter of this year.

Some 16,006 vulnerabilities were disclosed through September 30 – more than in all of 2016, which saw 15,832 – according to new data published today by Risk Based Security. The Q3 tally represents a 38% increase over Q3 2016 and, the firm says, includes 6,295 more security vulnerabilities than were reported in CVE and the National Vulnerability Database (NVD).

“Any security product or tool that relies on CVE/NVD is putting your organization at serious risk,” said Jake Kouns, CISO for Risk Based Security.

The firm’s new Q3 2017 VulnDB QuickView report shows that the number of severe vulnerabilities remains high, with nearly 40% scoring above 7.0 on the CVSSv2 scale. Public exploits also exist for 31.6% of the vulnerabilities disclosed this year.

See the full report here.

 


Article source: https://www.darkreading.com/threat-intelligence/2017-has-broken-the-record-for-security-vulnerabilities/d/d-id/1330410?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google researcher finds 79 Linux USB vulnerabilities

The Linux world learned last week that there is something surprisingly large and flaky at the heart of the platform’s kernel USB drivers.

It turns out they’re chock-full of security vulnerabilities. USB drivers might not be the first place in Linux that most people would think to look for vulnerabilities (or the coolest), but they turned out to be a rich hunting ground for Google researcher Andrey Konovalov all the same.

How big is the problem? It depends which subset of flaws you start with.

The headline list comprises 14 new flaws, each assigned its own CVE number, which Konovalov found using syzkaller, a kernel fuzzing tool created by fellow Google researcher Dmitry Vyukov.

Then there are an additional 65 vulnerabilities previously found in the same subsystem (eight of which have been assigned their own CVEs), to make a grand total of 79 reported by the Google man since last December.

As to the harm they could do if exploited in different versions of the kernel before v4.13.8 (which appeared in mid-October), he said something about the original 14 that probably applies across the board:

All of them can be triggered with a crafted malicious USB device in case an attacker has physical access to the machine.

This sounds reassuring: an attacker would have to be sitting in front of a vulnerable Linux computer and able to plug a USB device into it, and in most cases the effect of an exploit would be a crash or a denial of service.

Except that an attacker wouldn’t necessarily have to gain access to a target machine themselves; they would only need to fool somebody else into doing it for them – something that studies suggest users will do voluntarily if an attacker simply leaves enough USB sticks lying around.

These flaws aren’t going to bring the Internet to a standstill any time soon (and many were patched some weeks ago), but they’re still a tempting target for a specialist attacker to use as a stepping stone for something more serious, such as attacks on air-gapped systems.

The usual advice to stay on top of your updates applies.

Being in the Linux kernel, these flaws affect a lot of devices, although how many is difficult to say. There is a profusion of Linux distributions, plus Google’s Chrome OS, the welter of devices built on Linux that have a USB port, and of course Android (some Android smartphones and tablets use the USB subsystem to enable the ageing USB OTG interface, some don’t).

Seventy-nine vulnerabilities is a lot to find in only one part of the Linux kernel in a year but perhaps we shouldn’t be too hard on Linux itself. Finding bugs is better than not finding them, after all, and when USB support was added in 1999 it supported just two types of device: mice and keyboards. The number has expanded considerably since then.

That’s a lot of software for developers to keep up with. Konovalov’s dogged research into this area suggests they haven’t been.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/jwR0mc3vfVo/

ID theft puppet master convicted of huge tax refund scam

The kingpin of a far-reaching identity theft scheme that targeted 16- and 17-year-olds, prison inmates and US soldiers – including some deployed to Afghanistan – has finally been convicted after his case wound through the courts for years.

The scam, which dragged on for three years and netted $9 million worth of ill-gotten gains, involved names of fake tax-preparation businesses, tax-refund bank products such as blank-check stock, and mail-route addresses to send the fraudulent IRS refund checks to, provided courtesy of paid-off US postal workers.

The ID theft ring may have raked in a total of $9 million, but it used its victims’ identities to try to claw a whole lot more – $26 million – out of the US Internal Revenue Service (IRS).

The US Department of Justice (DOJ) on Thursday announced that the guy pulling the strings – William Anthony “Boo Boo” Gosha III, of Phenix City, Alabama – had been found guilty of one count of conspiracy, three counts of wire fraud, 22 counts of mail fraud and 25 counts of aggravated identity theft.

According to the DOJ, between November 2010 and December 2013, Gosha ran the huge identity theft ring with his co-conspirators, Tracy Mitchell, Keshia Lanier, and Tamika Floyd. His cronies were all previously convicted and sentenced to prison.

In fact, Tracy Mitchell and Tamika Floyd were two of eight defendants sentenced in a related stolen identity refund fraud (SIRF) case: the DOJ announced in August 2015 that eight defendants, including Mitchell and Floyd, were sentenced to serve a collective total of more than 31 years in prison for having filed over 8,800 tax returns with the Internal Revenue Service (IRS).

Gosha’s scheme started in November 2010, when he stole IDs from inmates of the Alabama Department of Corrections. He sent the IDs on to Lanier, who used them to file bogus tax refunds. Gosha and Lanier agreed to split the proceeds, the DOJ says.

Gosha also stole employee records from a company previously located in Columbus, Georgia.

In 2012, Lanier was hungry for more ripped-off identities, so she hit up Floyd, who worked at two Alabama state agencies: the Department of Public Health and the Department of Human Resources.

Both of those jobs gave Floyd access to the personal identifying information (PII) of individuals, including teenagers. Lanier specifically asked for PII of 16- and 17-year-olds. Floyd went along with it and forked over thousands of names.

After he got the names, Gosha recruited Mitchell and her family to help file the returns. Mitchell worked at a hospital in Fort Benning, Georgia, where she had access to the PII of military personnel, including soldiers who were deployed to Afghanistan. So into the grinder went the soldiers’ IDs, and out came the sausage of yet more fraudulent tax refunds.

In order to electronically file all these bogus tax returns, Gosha, Lanier and other co-conspirators applied for several Electronic Filing Identification Numbers (EFINs) with the IRS in the names of sham tax preparation businesses. An EFIN is a number the IRS assigns to tax preparers that are accepted into the federal/state e-file program. To become an authorized IRS e-file provider, tax preparers have to submit an application and undergo a screening process.

The DOJ didn’t explain how the crooks got their hands on valid EFINs for fake tax preparers, but last month the IRS warned tax preparers that cybercrooks are increasingly targeting tax pros for two things: taxpayer data, which lets them file fake tax returns, and EFINs, which let them pass themselves off as real tax preparers.

The crooks target tax pros with the same attacks they use on the rest of us. The IRS’ “Don’t Take the Bait” educational series has tips for avoiding getting targeted by spear-phishers, ransomware, and remote-access takeover attacks, for example.

With EFINs in hand, Gosha, Lanier, and their co-conspirators not only filed fake returns; they also managed to get the right kind of paper from banks to print out refund checks. The fraudsters then printed out the bogus refund checks using the blank check stock – until, that is, the banks figured out something was up and stopped their ability to print the checks.

That’s when Gosha’s gang got US Postal employees in on the scheme. They paid the employees for addresses from their postal routes, had refund checks mailed to those addresses and got the postal workers to intercept the checks for them. Gosha also funneled tax refunds into prepaid debit cards, had them sent to addresses he could access and used the prepaid cards to cash out the money.

But wait, there’s more! Besides that theft ring, Gosha had another stolen ID refund scheme going with Pamela Smith and others. In that one, which he worked from January 2010 until December 2013, Gosha sold the inmates’ IDs he stole from the Alabama Department of Corrections to Smith and others. Smith and others used the IDs to file returns that went after some $4.8 million in fake refunds, of which the IRS paid out about $1.85 million. Smith was previously convicted and sentenced to prison.

Gosha’s sentencing date hasn’t yet been set. He’s facing a maximum of 10 years for the conspiracy to file false claims, 20 years for each count of wire and mail fraud, and a mandatory minimum sentence of two years for the aggravated ID theft. He’ll also be looking at paying back the money.

Of course, we’ve all got to watch out for attempts to get our PII, which can be used to file fake tax returns in our names. But Gosha’s gang didn’t pick off individuals one by one by targeting them with spear-phishing email or the like. They didn’t have to. From the sounds of it, they had, and abused, privileged access to captive audiences: prisoners (a group that Gosha will soon be joining), soldiers in hospitals and people who have their data stored in state agency databases – a group otherwise known as “everyone.”


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/qykPjAnmXag/

Google study reveals how criminals break into Gmail accounts

Google, it’s fair to say, is no fan of relying on passwords to secure online accounts.

Reading the recent study the company commissioned on the causes of online account takeover from the University of California, Berkeley, it’s not hard to understand why.

The year-long analysis, which ran to March 2017, mostly confirms a lot of bad news that security experts could have guessed, starting with the staggering haul of stolen credentials – covering a wide range of online services – that appears to be circulating on the dark web.

After crawling blackhat forums and paste sites, the researchers traced 1.9 billion credentials to data breaches, 12.4 million to phishing kits, and 788,000 to keyloggers.

Based on the 751,000 Gmail users within this data, the company was able to work out that, for its users, phishing attacks are by far the most dangerous of the three:

We find that the risk of a full email takeover depends significantly on how attackers first acquire a victim’s (re-used) credentials. Using Google as a case study, we observe only 7% of victims in third party data breaches have their current Google password exposed, compared to 12% of keylogger victims and 25% of phishing victims.

But just having the password and user name (which can be changed) isn’t the whole explanation for the different success rates. It turns out that phishing attacks and keyloggers are further boosted by their tendency to grab data such as telephone numbers, geo-location data and IP addresses.

This makes it much harder for a company such as Google to detect rogue activity simply by looking at where someone appears to be logging in from, say, because this can be spoofed.

The warning:

While credential leaks may expose the largest number of passwords, phishing kits and keyloggers provide more flexibility to adapt to new account protections.

Which brings us back to the perennial angst of passwords.

The study confirmed that large numbers of passwords (including large numbers of terrible ones that appeared to have been poorly stored) are re-used, which means that someone breached in one service has often put multiple accounts at risk.

The researchers’ conclusion is that password-based authentication is dead in the water. Credentials are simply too easy to steal while users don’t make much effort to secure them. No amount of tinkering can save this model.

Enabling multi-factor authentication (MFA) would mitigate much of this, particularly phishing attacks, credential leaks and, to some extent, keylogging. And yet only a minority use it, even after they’ve been the victim of an attack:

Our own results indicate that less than 3.1% who fall victim to hijacking subsequently enable any form of two-factor authentication after recovering their account.

This suggests that people have either not heard of MFA, don’t know how to enable it or really don’t like it.

It makes you wonder why Google doesn’t simply make MFA mandatory and just get on with migrating people for their own good, as Apple appears to want to do.

An intriguing possibility is that companies such as Google might more regularly trawl the dark web for accounts that have been breached, resetting them as they are spotted.

Facebook are already known to do this and Google did it for every compromised Gmail account the researchers uncovered in this study, so it’s not far-fetched that this could happen in future.

Naked Security has written several times on the importance of MFA (including for Gmail) which we’d implore anyone not using it to read and act on.

Google also recently launched something called the Advanced Protection Program (APP) for Gmail users who see themselves as being at high risk of phishing attacks.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hHeP2YbrF94/

Privacy Pass protocol promises private perusing

Boffins have harnessed privacy-preserving crypto to create a browser extension that allows users to authenticate to services without being tracked.

The extension, Privacy Pass, offers people another way to authenticate themselves without having to repeatedly solve internet challenge-response tests like CAPTCHAs.

Alex Davidson, a PhD student at Royal Holloway, University of London, is one of a five-man team behind the extension, which he worked on while an intern at web security firm Cloudflare – the websites it protects support the extension.

“Privacy Pass aims to solve the problem of authenticating to services when the user seeks to preserve their anonymity,” he told The Register, adding that it’s most likely to benefit people who browse from shared IPs.

The extension allows users to generate a set of “signed” tokens from a service after a successful authentication attempt over HTTP, Davidson said.

“These tokens can be used as passes – allowing a means of authenticating to the same server in the future – instead of having to explicitly authenticate again; much like cookies are widely used now instead of having to log in over and over again.”

But, crucially, Privacy Pass also ensures that the service doesn’t recognise the user when they hand the pass back by making it cryptographically unlinkable.

How does it work?

The protocol uses a concept called a verifiable oblivious pseudorandom function (VOPRF) combined with a blind signing protocol, in which the server evaluates a function for the user without learning the real input or output.

When a user needs to authenticate to a service, Davidson said, Privacy Pass will first generate a set of elliptic curve points that are used as tokens.

These are then blinded by Privacy Pass – by secretly multiplying each token by some random number – and sent with an authentication attempt to the server.

The server then validates the attempt, signs the tokens by multiplying each by its own secret value, and returns them to the client.

The tokens are then unblinded by Privacy Pass, done by inverting the random multiplication, and are stored for the future.

When the user is asked to perform another authentication for that service, Privacy Pass creates a pass from an unspent token and sends that instead, giving the user quicker and easier access to the service.

“Since the blind is randomly generated by the client and never seen by the service, we ensure that the service cannot link a token that was signed to an unblinded token that is redeemed later,” Davidson wrote in a Medium post explaining the concept.

Davidson described the protocol as follows:

A VOPRF protocol allows a server with key x and a user with input y to evaluate F_x(y) for some PRF F, without the user learning x and the server learning y. This is the oblivious aspect of the construction. The verifiable aspect allows the user to verify that what is returned by the server is a valid pseudorandom output from the PRF. That is, to prevent the server from returning some input that is potentially not random.

In the Privacy Pass protocol, a user will present y along with F_x(y), and a service with x can verify that the user has received a valid F_x(y) in the past.
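To make the round trip concrete, here is a minimal Python sketch of the blind/sign/unblind arithmetic described above. It substitutes a tiny multiplicative group modulo a prime for the elliptic-curve group the real extension uses, and the hash-to-group step and parameter sizes are purely illustrative – a toy model of the idea, not Cloudflare’s implementation.

```python
# Toy model of Privacy Pass's blind evaluation (VOPRF) step. A tiny
# multiplicative subgroup mod a prime stands in for the elliptic-curve
# group used in practice; sizes and names are illustrative only.
import hashlib
import secrets

p, q, g = 23, 11, 2                     # g generates a subgroup of prime order q mod p

def hash_to_group(data: bytes) -> int:
    """Toy stand-in for hashing a token seed to a group element.
    (Real implementations hash to a curve point with an unknown
    discrete log; exponentiating g is good enough for this sketch.)"""
    e = int.from_bytes(hashlib.sha256(data).digest(), "big") % q
    return pow(g, e or 1, p)

# --- client: create a token and blind it ---------------------------------
seed = secrets.token_bytes(16)
T = hash_to_group(seed)                 # the token, a group element
r = secrets.randbelow(q - 1) + 1        # random blinding factor
blinded = pow(T, r, p)                  # what the server actually sees

# --- server: "sign" the blinded token with its secret key x --------------
x = secrets.randbelow(q - 1) + 1        # server's long-term secret key
signed_blinded = pow(blinded, x, p)     # = T^(r*x), still blinded

# --- client: unblind to recover F_x(T) = T^x ------------------------------
r_inv = pow(r, -1, q)                   # inverse of the blind, mod the group order (Python 3.8+)
signed = pow(signed_blinded, r_inv, p)  # = T^x, unlinkable to `blinded`

# Sanity check: the unblinded value equals a direct evaluation of F_x.
assert signed == pow(T, x, p)
```

Because the server only ever sees the blinded value, it has no way to connect the token it signed with the unblinded token the client later redeems.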

As well as blinding, the team also introduced verifiable key consistency to ensure users couldn’t be identified.

It uses a batched non-interactive, zero-knowledge proof that allows the service to prove that all clients are served outputs from the VOPRF using the same key x.

This is crucial: a service that used a unique key pair per client could link future pass redemptions by analysing which key had been used to compute the VOPRF.
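The same toy group can illustrate the underlying primitive – a Chaum-Pedersen-style proof of discrete-log equality for a single token; the batched proof Privacy Pass uses extends the idea across many tokens at once. Again, this is a simplified sketch rather than the extension’s actual code.

```python
# Toy Chaum-Pedersen proof that the same secret key x lies behind both the
# published commitment Y = g^x and a signed token w = t^x, without revealing x.
# Same stand-in group as the previous sketch; not production code.
import hashlib
import secrets

p, q, g = 23, 11, 2

def challenge(*elems: int) -> int:
    """Fiat-Shamir challenge derived from the proof transcript."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q - 1) + 1             # server's secret key
Y = pow(g, x, p)                             # public commitment to x

t = pow(g, secrets.randbelow(q - 1) + 1, p)  # a blinded token (group element)
w = pow(t, x, p)                             # the server's signature on it

# Prover (server): show log_g(Y) == log_t(w) without revealing x.
k = secrets.randbelow(q - 1) + 1
A, B = pow(g, k, p), pow(t, k, p)
c = challenge(g, Y, t, w, A, B)
s = (k - c * x) % q

# Verifier (client): recompute the commitments and check the challenge matches.
A2 = (pow(g, s, p) * pow(Y, c, p)) % p
B2 = (pow(t, s, p) * pow(w, c, p)) % p
assert challenge(g, Y, t, w, A2, B2) == c    # only holds if the same x was used
```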

Davidson told The Reg that, to protect anonymity, the tokens don’t encode any data about the user or when they were generated, but acknowledged that this did make trading signed tokens possible.

“But we can avert this becoming too much of an issue by using regular key rotation on the server-side,” he said. “In the future, we may explore ways of encoding some data in the token to prevent this, without giving away any details about the user.”

An alternative to cookies?

Davidson said that, because Privacy Pass is agnostic to the authentication mechanism used, it can be built on top of existing frameworks.

“For example, we envisage that it could be used as an alternative method for signing into services without having to use authenticators that do not preserve privacy, such as cookies.”

This was particularly welcomed by privacy campaigners. “Cookie tracking is all too common, so methods to remove the need for it are a great idea,” said Open Rights Group director Jim Killock.

Killock added that it could have implications for age verification services. These are coming under the spotlight in the UK as the Digital Economy Act requires all porn sites to verify the age of users – bringing with it concerns over privacy and data security.

Pandora/Blake, porn-maker and civil liberties campaigner, echoed this hope.

“Privacy Pass doesn’t currently include a protocol for handling age verification itself, but if age verification services used this sort of zero knowledge proof then it would dramatically increase user privacy,” they said.

“If Privacy Pass does what it says it does, it indicates that anonymous authentication is possible, and that age verification providers have no excuse for creating protocols that needlessly see and retain user data, with potential harmful consequences for privacy.”

Experts have also welcomed the tech, with Alan Woodward, a security professor at the University of Surrey, saying that – although there are other ways of solving the problem – this was an “elegant” solution.

Software engineer Alec Muffet described it as “an awesome technology” but added: “We need to think about potential applications where it’s not just proving to the infrastructure/operators that you have a right to be there, but instead using it in a space with third, and fourth, parties.”

For its part, the Privacy Pass team has said that it views the protocol and extension as still being in beta, and it’s looking for new partners and support from the developer community – the code for the extension and a compatible server implementation are both open source. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/14/privacy_pass_protocol/

What the NFL Teaches Us about Fostering a Champion Security Team

Cybersecurity experts can learn how to do a better job by keeping a close eye on the gridiron.

Now that it’s NFL season, there’s a lot of wisdom to be gleaned from the football field for cybersecurity experts. How do all of the different positions on and off the field relate to your cybersecurity efforts? How can you correlate the value each position plays in your own squad and in building great policies and practices?

Here are the four ways that security pros can improve their play:

It All Starts with the Coach
The coach is arguably the most pivotal position on the team. Coaches dictate strategy, lead their players, work on trades and salary caps, and pore over new plays and rules. This is, of course, the CISO or CSO. The CISO must develop an overall strategy for your organization’s security, manage and lead the security staff, recruit talent, and budget for products and services, as well as understand the legal, regulatory, and compliance frameworks he or she must adhere to. And just like the coach, the CISO must be able to understand both offense and defense. As we look to a future in which companies may have the ability to “hack back,” offensive play may become a bigger priority for security teams.

The Quarterback Makes It All Happen
The CISO’s direct reports are your quarterbacks. Sure, in the NFL, it’s almost always a star QB and a couple of backups, and you may have a “star” manager in your organization who outshines your other quarterbacks. But just as in the NFL, the quarterbacks need to work tightly together to coordinate strategy and cover each other just in case. Quarterbacks have spent many sleepless nights in fear of a particularly potent pass rusher or blitz play they know they’ll see in their next game.

The security quarterbacks spend just as many sleepless nights thinking about a “hacker blitz” or a pass rusher swooping in past your organization’s line and getting the sack. Both the pass rusher and the sufficiently skilled attacker are unblockable forces in the “game” without the right visibility. The football quarterback must be able to see the rush coming and instantly figure out a way to get out of the rush. Your security quarterbacks need to be able to see into every corner of your infrastructure, every endpoint, every asset… all giving some sort of tell-tale sign that the blitz is coming.

Don’t Forget the Defensive Line
Flipping it the other way, and thinking of the offense as the attacker, we can’t forget the incredible value and critical role the defensive line plays in both the NFL and inside your security team. In football, those on the defensive line have one singular goal: to prevent the attacking side from scoring points.

Just as in football, your defensive line of analysts and security operations center (SOC) staffers are the first line to protect your network from being scored against. The defense in football must be ready at all times for deceptive tactics such as naked bootlegs, lateral passes, and other trick plays. For your SOC staff, they too must always be ready for trick plays: ransomware attacks that are designed to be a diversion against another attack that is designed to steal your valuable data; hundreds of false-positive alerts that draw skilled resources away from looking for that breach needle in the haystack; overloading one part of your security infrastructure in the hopes of overwhelming your defense staff so that something will get through undetected.

A recent Ponemon survey showed that the average organization spends 425 hours chasing down false positives. The same survey showed that the average enterprise spends almost $1.4 million annually dealing with those false positives. That’s a lot of defense time and money that could be better spent training, studying new plays, and practicing techniques.

What About the Fans?
Fans can make or break a team. A raucous home crowd in the NFL can add an unquantifiable positive to the home team. Remember Kansas City’s legendary noise levels or Seattle’s 12th man campaign? Much like the 12th man, an organization needs “fans” of its security efforts, including the rank-and-file employees you protect, your executive leadership team, and your shareholders, to buy in to your vision and strategy. If the organization’s employees feel as if they’re part of the solution, are not treated like second-class users, and believe they can come forward and report issues or incidents immediately without getting stomped on by your security staff, they’ll feel like they’re part of the team.

Your executive leadership team and shareholders must also buy in to your overall security vision. They’re the part of the team that approves operational and capital investments in security, or pushes back on rapidly expanding and increasing security spend. If they don’t see the value, it can be difficult to get what you need from them when you need it.

At the end of the day, it’s important to remember that no team, NFL or security, wins with a single star. You could have the world’s greatest quarterback/manager, or an All Star defensive line/analyst, or a wizard of a coach/CISO. But they can’t do it alone. No team wins on Sunday on the back of one single position. And just as in the NFL, it takes a well-oiled security machine to win games in the security gridiron. You need to see the whole field, read plays, work together, and stop the attacking side before they find their way into the end zone.

Related Content:
10 Mistakes End Users Make That Drive Security Managers Crazy
Why Common Sense Is Not so Common in Security: 20 Answers
How Law Firms Can Make Information Security a Higher Priority


Richard Henderson is global security strategist at Absolute, where he is responsible for spotting trends, watching industries and creating ideas. He has nearly two decades of experience and involvement in the global hacker community and discovering new trends and activities …

Article source: https://www.darkreading.com/operations/what-the-nfl-teaches-us-about-fostering-a-champion-security-team/a/d-id/1330396?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Companies Blindly Believe They’ve Locked Down Users’ Mobile Use

IT security teams may be in for a surprise about their mobile exposure as the GDPR compliance deadline approaches, according to a new survey.

Despite 61% of security and IT executives believing they have limited their users’ access to company resources on their mobile devices, some 64% of employees acknowledge they can access enterprise customer, partner, and employee data from those devices, new data shows.

Some 58% of employees say they can retrieve customers’ personally identifiable information (PII) from their mobile devices as well, according to a new report released today from Lookout.

The “Finding GDPR Noncompliance in a Mobile First World” report surveyed 2,062 US and UK respondents to assess where they stand as the May 25 deadline approaches for compliance with the European Union’s General Data Protection Regulation (GDPR).

GDPR requires organizations holding EU citizens’ PII to adhere to new rules on how they handle this information and to inform citizens affected by data breaches, or face a hefty fine of up to 4% of a company’s annual global revenue.

“It was really surprising how companies thought they had locked their employees from accessing this data on their phones, but 64% of employees said they could access it. This awareness gap is significant,” says Aaron Cockerill, Lookout’s chief strategy officer.

The fact that a majority of security and IT executives believe they have limited their employees’ mobile access to PII may explain why only 16% plan to expand their compliance strategy to include mobile devices, Cockerill surmises.

It’s also interesting to note that while a minority of executives plan to increase their mobile device compliance strategies, a whopping 84% of executives agree any personal data accessed via employees’ mobile devices could put their organization at risk of falling out of GDPR compliance.

The survey found that 73% of employees use the same mobile phone for both work and personal use.

Another risk to PII stems from employees accessing such data while connected to potentially risky Wi-Fi networks, which could expose it to man-in-the-middle attacks and theft, notes Cockerill. The survey results show that 68% of US employees connect to such networks on the go, he says.

Additionally, 48% of US employees acknowledge they download applications from sources other than app stores Google Play and Apple’s App Store, which runs counter to advice security experts give users to steer clear of risky, unofficial app stores or sources.

Overall, 63% of survey respondents note they download apps outside of what their companies offer, in order to aid them in doing their jobs.

According to the survey, employees via their mobile devices actually have access to:

  • corporate contacts (80%)
  • work calendar (88%)
  • enterprise apps (85%)
  • corporate messaging (81%)
  • MFA/stored credentials (77%)
  • administrative tools (66%)

“Employees are just trying to do their jobs. This is not malicious,” Cockerill says. “The reason for this study is to show companies they have this problem. Technology does exist to find out what their employees have installed and where the information is going.”


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/companies-blindly-believe-theyve-locked-down-users-mobile-use/d/d-id/1330421?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Enterprise Physical Security Drives IoT Adoption

The vast majority of respondents to a new survey are deploying IoT technologies for building safety in the form of security cameras.

Enterprises are adopting Internet of Things devices to improve operational processes and cut costs, but the number one reason is for physical security, according to a survey released today.

Based on responses from 400 IT professionals at US, Canada, and UK enterprises, 32% of respondents point to a need for increased physical security as the top IoT adoption driver, according to the State of IoT 2017-2018 report. Other reasons for adoption cited by participants include improved operational processes (23%), reduced operational costs (21%), and simplified management (20%).

IoT Gets Physical

The Spiceworks research, which was commissioned by Cradlepoint, found the vast majority of survey respondents already use IoT technologies, with 71% deploying them for building security that largely comes in the form of security cameras.

“People have different ideas of what is an IoT device. It can be a security camera, motion detector sensors, or an RFID tag on a hanger in a retail store,” says Ken Hosac, vice president of IoT business development at Cradlepoint. “These are block and tackle IoT projects.”

Hosac says defensive technologies that keep a building or merchandise secure are not only a top motivator for enterprises to deploy IoT devices, but also among the easiest things to implement.

Mirai Considerations

But as IoT devices become more ubiquitous, so does the potential threat of a cyber attack on these devices. That notion is not lost on enterprises, with 40% of survey respondents listing cybersecurity as a top concern, according to the report.

Mirai, for example, commandeered vulnerable IoT devices, such as security cameras, to launch botnet-enabled DDoS attacks last year and inspired a number of copycat IoT botnets like Persirai.

Hosac advises enterprises to avoid placing IoT devices on their existing networks. Instead he recommends creating a new, separate network for IoT devices, or a software-defined perimeter network with a virtual network overlay on top of an existing network.

“Enterprises think they can buy antivirus software they use on their desktops and laptops and put it on their IoT devices. But with IoT there is a mix of different types of security that is needed,” he says, noting some devices are closed systems where it is impossible to even send a software update to the device.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/mobile/enterprise-physical-security-drives-iot-adoption/d/d-id/1330425?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook’s ex-president: we exploited “vulnerability in human psychology”

Who needs to wander into the back alleys of the Dark Web to buy drugs? You can self-medicate for free on Facebook, in broad daylight, clear of legal encumbrance or the criminal element.

The drug of choice, of course, is the little hits of dopamine we get from social validation. In an interview with Axios, ex-president of Facebook Sean Parker said that the main goal from the start has been to get and keep people’s attention, and if hacking people’s psychological vulnerabilities is the way to do it, well, then hacking people’s brains is exactly what they’re happy to do.

From the interview, a video of which you can see here:

The thought process that went into building [social media] applications, Facebook being the first of them … was all about: ‘How do we consume as much of your time and conscious attention as possible?’

That means that we needed to sort of give you a little dopamine hit every once in a while because someone liked or commented on a photo or a post or whatever … It’s a social validation feedback loop … You’re exploiting a vulnerability in human psychology … The inventors, creators — it’s me, it’s Mark [Zuckerberg], it’s Kevin Systrom on Instagram, it’s all of these people — understood this consciously. And we did it anyway.

Parker, 38, is now a billionaire as well as the founder and chair of the Parker Institute for Cancer Immunotherapy. He made the remarks on Wednesday during an Axios event at the National Constitution Center in Philadelphia, where he spoke about accelerating cancer innovation.

… and about Facebook founders’ unbridled lust to turn us into pod people:

When Facebook was getting going, I had these people who would come up to me and they would say, ‘I’m not on social media.’ And I would say, ‘OK. You know, you will be.’ And then they would say, ‘No, no, no. I value my real-life interactions. I value the moment. I value presence. I value intimacy.’ And I would say, … ‘We’ll get you eventually.’

As Axios reports, Parker said that he’s become “something of a conscientious objector” on social media. That means he doesn’t shy away from expressing a bit of what you might call buyer’s remorse with regards to what Facebook and other social media have done to our humanity.

I don’t know if I really understood the consequences of what I was saying, because [of] the unintended consequences of a network when it grows to a billion or 2 billion people and … it literally changes your relationship with society, with each other … It probably interferes with productivity in weird ways. God only knows what it’s doing to our children’s brains.

Is this part of a bigger trend happening in Silicon Valley of people questioning the behavior that delivered them into the luxury of having a conscience?

Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman have both implemented measures into their lives, and their devices, to curb their social media dependence. Rosenstein is also urging the tech industry to start addressing the role social media plays in our lives now because “we may be the last generation that can remember life before.”

But, it’s not just Facebook employees taking a step back and assessing the industry.

Tristan Harris, a former Google employee and “the closest thing Silicon Valley has to a conscience”, says the influence of tech companies is altering our democracy and eroding our ability to hold conversations and maintain relationships with others.

Loren Brichter, designer of the pull-to-refresh tool first seen in the Twitter app, also admits that the social media platform and the tool he created are addictive.

These are not good things. When I was working on them, it was not something I was mature enough to think about. I’m not saying I’m mature now, but I’m a little bit more mature, and I regret the downsides.

Perhaps such pangs of conscience aren’t surprising – we’re all inclined to regret the stupid things we did in our youth – but Parker and others in his tax bracket should enjoy a long, long life in which to ponder it all, given that they can afford much better health care than the rest of us stiffs:

Because I’m a billionaire, I’m going to have access to better health care so … I’m going to be like 160 and I’m going to be part of this, like, class of immortal overlords.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/nOgt9sCkQD8/

Sure, Face ID is neat, but it cannot replace a good old fashioned passcode

Apple’s iPhone X is one of several technologies bringing facial biometrics into the mainstream. It seems to have everything bar a heat scanner; the TrueDepth camera projects an impressive-sounding 30,000 infrared dots on to your phiz, scanning every blackhead in minute 3D detail.

The company claims some impressive figures, and it isn’t the only one touting facial recognition as a mainstream solution. Others include Microsoft, with Windows Hello, and Google, with the Trusted Face technology it released in Android Lollipop. Just how secure are these technologies, and should we rely on them?

There are two metrics that matter when discussing facial recognition systems. The first of these is the false acceptance rate (FAR), which describes how often a device matches the wrong face to the face it has on record. Its converse is the false rejection rate (FRR), which is how often it fails to recognise the correct face.

Matt Lewis, technical research director at security firm NCC Group, has spent a lot of time trying to fool facial recognition systems. He explains that an increase in one error rate decreases the other. The place where they intersect is called the Equal Error Rate.

The FRR might cause some inconvenience if it stopped you logging into your phone or workstation, or prevented you from getting into a building. But a false acceptance could be catastrophic if it permitted access by the wrong person. Perhaps that’s why facial recognition analysts and vendors tend to talk about accuracy primarily in terms of the FAR.


Lewis categorises three levels of security based on the FAR. A 1:100 FAR would be described as low security (you’d only have to pass a phone around to 100 people and have them scan their faces for one of them to successfully log in). Medium security would be 1:10,000 users, while 1:1,000,000 would pass his high-security threshold.
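Those ratios are easier to feel with a quick back-of-envelope calculation. The snippet below is a simplistic model assuming independent attempts at a fixed FAR (not anything from Lewis’s research); it shows the chance that at least one of a crowd of random users gets falsely accepted:

```python
# Probability that at least one of n random users triggers a false acceptance,
# assuming independent attempts at a fixed false acceptance rate (FAR).
def p_any_false_accept(far: float, n_users: int) -> float:
    return 1.0 - (1.0 - far) ** n_users

for label, far in [("low (1:100)", 1e-2),
                   ("medium (1:10,000)", 1e-4),
                   ("high (1:1,000,000)", 1e-6)]:
    print(f"{label:>18}: 100 users -> {p_any_false_accept(far, 100):.2%}, "
          f"10,000 users -> {p_any_false_accept(far, 10_000):.2%}")
```

At the 1:100 level, a crowd of 100 faces gives better-than-even odds of a break-in; at 1:1,000,000 the same crowd barely registers.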

The iPhone X seemed to suffer from a false rejection event at its first public demo, when the first attempt to unlock it didn’t work. Apple later blamed this apparent false rejection on the device doing exactly what it was supposed to. The iPhone X requires a passcode after five unsuccessful Face ID authentication attempts, and various people backstage had been messing about trying to authenticate with it, the firm said.

As for the FAR, Apple’s security guide on Face ID claims a 1:1,000,000 FAR, making it a high-security device according to Lewis’s metrics and 20 times as accurate on the FAR side as its Touch ID fingerprint system, which Apple rates at 1:50,000.

One in a million is the average FAR, but what happens when people deliberately try to fool the system by copying someone’s face and then using it to trigger a false acceptance?

There have been successful attempts to trigger false acceptances on facial recognition systems in the past. Lewis should know, because he engineered one of them.

He used three images of his face – front and both sides – taken on an iPhone 5s to produce a 3D image of his mug, and from there, a $299 full-colour resin mask. He then waved it at both Android Trusted Face and Windows Hello.

Trusted Face is apparently too trusting, because it happily authenticated him. This didn’t surprise him because Google’s guidance says that its facial recognition isn’t as secure as a PIN (why use it then?). Windows Hello was more surprising because the system uses an infra-red camera for more accurate facial scanning, and machine learning to refine its understanding of what you look like.

He worked with Microsoft to get to the bottom of this. Redmond decided that it had been too liberal in choosing samples that helped its facial recognition algorithm learn more about a user’s face. After using repeated facial scans to get better at recognising you, its algorithm effectively got too lax, looking at a Matt-like mask and effectively saying: “Oh, you’ll do.”

Microsoft has since tightened up its approach, and later versions of the algorithm don’t suffer from the same problem, said Lewis’s white paper.

For every successful false acceptance attack on a facial recognition system, designers will come up with an enhancement to the recognition algorithm that thwarts it. You’re trying to use a photo to spoof a system? Fine, we’ll create a system that scans your face in 3D. You’re using a mask? OK, here’s a liveness detector that looks for motion and blinking.

Then researchers will typically come back with a counter-hack. For example, researchers at the University of North Carolina developed an attack (PDF) that modelled colour 3D representations of faces from social media photos in virtual reality that could then be animated.

“The implication was that such spoofing attacks on existing systems could be performed simply by exploiting a user’s public persona, rather than hacking the authentication software (in code or in credential files), itself,” UNC researcher True Price told us.


There have been other attacks on facial recognition systems. For example, researchers at CMU (PDF) successfully triggered false acceptance and rejection on some systems by printing out eyeglasses with different visual characteristics.

Vulnerable to triplets

So how does Apple’s iPhone X hold up? We’re not on Apple’s friends list when it comes to getting review products, but The Wall Street Journal apparently is. They got fondling privileges and tested it in four scenarios: everyday use, using a photograph, using a mask, and using both fraternal and identical twins or triplets. The bad news: identical triplet kids fooled the system (but then Apple explicitly says that the probability of a match for twins is “different” in its security guide, and suggests using a passcode). The good news: in all other scenarios, including masks, Face ID did what was intended. Apparently those 30,000 infra-red dots really do mean something.

So, it’s game over for attackers who aren’t identical siblings, then? Don’t be daft. Security never was and never will be absolute. It’s a question of quantifiable risk, but the odds are shifting in the defenders’ favour.

“We have come far enough to make spoofing difficult but not impossible,” Lewis says. Not only are the cameras and learning algorithms getting better, but most of the facial recognition is embedded in the endpoint, meaning that you’d have to get physical access to it rather than phish your way into someone’s cloud account, for example. “The risk is going to drop much lower naturally by virtue of how we typically use facial recognition within end-user devices as well.”

Does that mean that facial recognition is driving down the cybersecurity poverty line, enabling more people to get high-security protection as a baseline? And if so, shouldn’t we all rush out and use it?

There’s one big argument against, according to 451 Research analyst Garrett Bekker. “Compromise,” he says. If someone does compromise a facial recognition system – either by stealing the biometric information created during enrolment or by finding a way to fool the system – then you’re stuffed. They have something that you can’t change.

It’s a constant worry, argues Lewis. “Biometrics are always at risk of copying because they’re not secret. That aspect will never go away,” he says.

You might be able to pilfer naked celebrity pics from iCloud, but you won’t be stealing face data from there. The iPhone X stores the biometric data taken during enrolment locally on a secure enclave – effectively Apple’s version of the trusted platform module – and it doesn’t leave the phone.

The prospects get far worse when people do start storing biometric data centrally, warns Merritt Maxim, principal analyst serving security and risk professionals at Forrester Research.

“We’ve already seen some examples of that in the US government OPM breach,” Maxim adds. Some of the stolen data was said to have included fingerprint data used as part of background checks.

This raises some legal questions around storing biometric data for public and private sector organisations alike.

“Under the GDPR [European Union’s General Data Protection Regulation] that’s coming into force next year, there are specific provisions in there around biometric data and the storage and capture of that,” says Lewis. “There are going to be a lot of systems that fall foul of that regulation.” If you store and subsequently lose someone’s biometric face data, the fines could be significant.


So how can you prevent a game-over scenario if your face data goes walkies? There are answers. They just might not be the answers that the typical consumer is looking for.

“The only real solutions there are to use multi-factor authentication, so you have to use your face and a PIN and a token to get a stronger binding of the individual,” Lewis says. But that’s a step backwards, and detracts from the convenience that consumer-facing authentication tech is looking for.

There have been some attempts to handle that. One concept, cancellable biometrics, effectively distorts the biometric image in a repeatable way. If the biometric image is compromised, the authenticating party can change the distortion process, invalidating the stored biometric data and reissuing a new version. This all seems largely academic so far, though.
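A hypothetical Python sketch of that idea is below. It assumes an exact-match template purely for brevity, whereas real cancellable-biometric schemes have to tolerate the natural noise between two captures of the same face, so treat it as an illustration of the revocation step only.

```python
# Idea behind cancellable biometrics: store only a keyed, repeatable
# distortion of the template. If the stored value leaks, rotate the key and
# re-enrol; the underlying face never has to change. Toy exact-match version.
import hashlib
import hmac
import secrets

def distort(template: bytes, key: bytes) -> bytes:
    """Keyed, repeatable, one-way distortion of a biometric template."""
    return hmac.new(key, template, hashlib.sha256).digest()

template = b"face-feature-vector-v1"    # hypothetical enrolment template
key_v1 = secrets.token_bytes(32)
stored = distort(template, key_v1)      # all the verifier ever keeps

# Later authentication: re-derive from a fresh capture and compare.
assert hmac.compare_digest(distort(template, key_v1), stored)

# Compromise: rotate the key and re-enrol; the leaked value is now useless.
key_v2 = secrets.token_bytes(32)
stored = distort(template, key_v2)
```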

Some facial recognition systems seem a lot more secure than using a PIN or password, while others are provably less so. As with any other cybersecurity mechanism, defence in depth is the best approach, and in this scenario two authentication methods in unison will be more effective than one. Three, more effective than two. As ever with facial recognition and all biometrics, it’ll be a case of keeping ahead of the criminals. ®

Bootnote

It should be noted that infosec firm Bkav claims to have hacked the iPhone X’s Face ID using a 3D mask built for a measly $150.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/11/14/is_facial_recognition_good_enough/