
45% of Businesses Say Employees Are Biggest Security Risk

The most common cloud security worries remain the same, with unauthorized access and malware infiltrations topping concerns.

Despite facing mostly external attacks, nearly half (45%) of businesses believe their greatest security risk comes from their own employees, according to the 2018 Netwrix Cloud Security Report. Blame falls on IT staff (39%) and business users (33%) as much as, or more than, on cloud providers (33%).

Common cloud security concerns are consistent across respondents, who represent 853 organizations. The top three are the risk of unauthorized access (69%), the risk of malware infiltration (50%), and the inability to monitor employee activity in the cloud (39%).

Cloud security will continue to be an issue as most businesses plan to move more data to the cloud and begin storing sensitive data in cloud environments. The bulk of this will be customer (50%), employee (45%), and financial (37%) data. Part of the problem will be getting executives on board: only 66% of respondents have upper-level support for cloud security projects.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/cloud/45--of-businesses-say-employees-are-biggest-security-risk/d/d-id/1330879?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Less than 10% of Gmail Users Employ Two-Factor Authentication

Google software engineer reveals lack of user adoption for stronger authentication.

A Google software engineer told attendees of the Usenix Enigma conference in Santa Clara, Calif., this week that under 10% of active Google accounts have enabled two-factor authentication.

Google first rolled out 2FA for Gmail in 2011, but comments by Google’s Grzegorz Milka (reported by The Register) reflect the common security dilemma of convenience trumping stronger security. 

Milka reportedly told The Register that usability is the reason Google has not made 2FA a mandatory feature for Gmail accounts. “It’s about how many people would we drive out if we force them to use additional security,” he said.

See the full report here.


Article source: https://www.darkreading.com/cloud/less-than-10--of-gmail-users-employ-two-factor-authentication/d/d-id/1330880?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Meltdown, Spectre Patches, Performance & My Neighbor’s Sports Car

When a flaw in the engine of a data center server makes it run more like a Yugo than a Porsche, it’s the lawyers who will benefit.

As I consider potential impacts from Meltdown and Spectre, what strikes me most is not the typical cybersecurity risks, reputational impacts, and operational hits, but the legal exposure. In the coming weeks and months, we will see lawsuits against the chip manufacturers, operating system providers, and OEM manufacturers whose devices house these chips and are the point of contact between the user and the chipset.

Surprisingly, it was my neighbor’s sports car, not the industry’s evaluation of and response to the chip vulnerabilities, that led me to focus on the legal issues. When my neighbor showed me his new Porsche, it made me think about engineering, performance, and speed, and about the expectations we bring to purchasing decisions. A person who buys a high-performance vehicle has certain expectations about speed, acceleration, and craftsmanship. For a sports car, the engine is the most critical component; it is, really, what the car is built around.

If I buy an $800,000 Porsche that is advertised to hit 60 mph in 2.2 seconds, I expect it to perform reliably and consistently at that level. If I am then advised that the engine needs a system upgrade because of dangerous combustion timing, and that the upgrade cuts the vehicle’s performance by 30%, I have to question my purchase: has the car been degraded in a way that is irrecoverable, and is it still enjoyable to own?

Degraded Performance?
There are many similarities between my sports car analogy and the performance hits that may occur after applying patches or other firmware/system changes to mitigate the effects of Meltdown and Spectre on various processors. When consumers and businesses make purchasing decisions for computers, data center infrastructure, or cloud services, the operations teams focus on architecting systems to run in the most efficient manner, with the highest operational delivery specifications, and in a secure fashion.  

If processors that used to run, for example, on a laptop at 3.4GHz now run at 2.4GHz in bench tests, then the overall performance and productivity of teams may suffer, and the computing platform becomes less robust. If server architecture in a data center or cloud instance was purchased and specified to run at a particular clock speed, transaction flow, or number of simultaneous user sessions, and those figures are now worse, the end customer may notice.

Both of these scenarios of degraded processor speed may interfere with employees’ ability to perform their job functions (think engineers, number crunching, and graphics), consumers’ enjoyment of their newly delivered holiday gift, and production capabilities for websites that have high transaction volume and user utilization. In these cases, the processor still exists and is still working, but it has been degraded in a manner that may affect the overall value of the technology device, business function, or customer appreciation and continued use of the product or service.

Legal Issues
In the days ahead, CISOs will be examining the mitigating controls they can implement to decrease risks to their environments and customers. Chief operating officers will want to stay abreast of performance issues, operational degradations, and customer issues. Similarly, lawyers and contract and procurement officers will start to ask questions. Legal experts will seek information on what they contracted for in their purchase or lease of equipment or services and what they are now receiving in terms of promised speed and system utilization.

To the extent there is a delta between what was purchased and what is now in operation, lawyers may seek a reduction in price, new equipment, or indemnification for affected customers going forward. In many instances these discussions will be held quietly, but we can expect a new round of contract claims, tort claims, and (one of my favorite claims from the early days of CAN-SPAM litigation) trespass to chattels. This last claim has been around for hundreds of years and appears in lawsuits when the property still exists but its use is being blocked, impaired, or degraded. When property quality, condition, or value has been impaired, one may have a claim for trespass to chattels.

In the coming months we will have to examine more closely what the true performance effects are and whether they are material. We will have to examine what types of remuneration might be possible if the Porsche is indeed now operating like a Yugo. But no matter what, we must patch and secure this fundamental building block of all our technological devices.


Dr. Chris Pierson is the founder and CEO of Binary Sun Cyber Risk Advisors. He is a globally recognized cybersecurity expert and entrepreneur who holds several cybersecurity patents. He serves on the Department of Homeland Security’s Data Privacy Integrity Advisory Committee …

Article source: https://www.darkreading.com/vulnerabilities---threats/meltdown-spectre-patches-performance-and-my-neighbors-sports-car/a/d-id/1330863?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Twitter will email 677,775 users who engaged with Russian election trolls

Did you ever engage on Twitter with Jenna Abrams, the divisive alt-right blogger also known as the fabrication of a Russian troll farm? No? Perhaps then one of her 2,752 troll buddies?

Not sure if you got an earful from Russian trolls? Say, from accounts that sound like local news agencies, such as @DailyNewsDenver or @DallasTopNews, or from one of the seven accounts with the name “Trump” in it? How about @NewYorkDem?

Don’t sweat it if you can’t remember. Twitter’s going to give you a heads-up if you did engage with those and other accounts associated with Russia’s meddling in the 2016 US presidential election.

Twitter announced on Friday that it’s emailing notifications to 677,775 users in the US: that’s how many people it says followed one of the accounts created by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA).

The number includes those of us who retweeted or liked a tweet from the accounts during the election period. The accounts have already been suspended, Twitter said, meaning that the relevant content is no longer publicly available on the platform.

In October, top officials from Facebook, Google, and Twitter told the Senate Judiciary Committee that Russian state actors had carried out a disinformation campaign on their platforms. Twitter at the time released a 65-page list of 2,752 now-deactivated accounts that it identified as being tied to Russia’s troll farm.

As part of an ongoing review, Twitter has identified an additional 1,062 accounts associated with the IRA. The new results don’t change Twitter’s earlier conclusion, though – that content coming from automated Russia-based accounts represented “a very small fraction of the overall activity” on Twitter in the ten-week period preceding the 2016 election.

The recently discovered 1,062 IRA accounts have been suspended for Terms of Service violations, primarily spam. All but a few that were restored to legitimate users remain suspended.

At the behest of congressional investigators, Twitter is sharing the account names with Congress, it said. Twitter mentioned only the account handles, not the content of the tweets in question, which may mean it doesn’t intend to share the banned accounts’ content with Congress.

The tally is now up to 3,814 identified IRA-linked accounts that posted 175,993 tweets, approximately 8.4% of which were election-related, during that pre-election 10-week period.

How much of an impact did those 175,993 tweets make? It’s tough to say. As Ars Technica points out, the messages range from idle banter to content intended to be divisive. Although the tweets are no longer public, Twitter published a variety in its post on Friday.

Besides accounts associated with the IRA, Twitter has also found what is believed to be automated, election-related activity originating out of Russia during the election period. It’s identified 13,512 additional accounts, for a total of 50,258 automated accounts identified as Russian-linked and tweeting election-related content during the election period, representing approximately two one-hundredths of a percent (0.016%) of the total accounts on Twitter at the time. Twitter didn’t say whether there would be email notifications sent to those who’ve engaged with these additional accounts.

Twitter’s post outlines the ways it’s trying to get better at detecting and blocking suspicious accounts. It’s currently detecting and blocking approximately 523,000 suspicious logins per day that it believes are automatically generated.

Last month, its systems identified and challenged more than 6.4 million suspicious accounts globally per week – a 60% increase over rates from October 2017. Twitter says it’s also developed new techniques for identifying malicious automation (such as near-instantaneous replies to tweets, non-random tweet timing, and coordinated engagement). It’s also improved the phone verification process and introduced new challenges, including reCAPTCHAs, to validate that a human is in control of an account.
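Twitter doesn’t publish its detection logic, but the signals it names (near-instantaneous replies, non-random tweet timing) lend themselves to simple heuristics. A purely illustrative sketch, with invented thresholds:

```python
import statistics

def looks_automated(reply_delays_s, instant_cutoff=2.0, jitter_floor=1.0):
    """Flag an account whose reply behavior matches two of the signals
    Twitter describes: near-instant replies and suspiciously regular
    (low-jitter) timing. All thresholds are invented for illustration."""
    if not reply_delays_s:
        return False
    # Fraction of replies posted faster than a human plausibly types
    near_instant = sum(d < instant_cutoff for d in reply_delays_s) / len(reply_delays_s)
    # Very low variance in delays suggests scripted, non-random timing
    too_regular = (len(reply_delays_s) > 1
                   and statistics.stdev(reply_delays_s) < jitter_floor)
    return near_instant > 0.5 or too_regular

print(looks_automated([0.4, 0.6, 0.5, 0.7]))      # fast, regular replies: bot-like
print(looks_automated([45.0, 3.1, 600.0, 12.0]))  # slow, irregular mix: human-like
```

Real systems combine many more signals (coordinated engagement across accounts, for one), but the shape is the same: score behavior against what humans plausibly do.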

Twitter says these are its plans for 2018:

  • Investing further in machine-learning capabilities that help detect and mitigate the effect on users of fake, coordinated, and automated account activity.
  • Limiting the ability of users to perform coordinated actions across multiple accounts in TweetDeck and via the Twitter API.
  • Continuing the expansion of its new developer onboarding process to better manage the use cases for developers building on Twitter’s API. This, Twitter says, will help improve how it enforces policies on restricted uses of developer products, including rules on the appropriate use of bots and automation.

It’s also planning these steps specifically to prepare for the upcoming 2018 mid-term elections:

  • Verify major party candidates for all statewide and federal elective offices, and major national party accounts, as a hedge against impersonation.
  • Maintain open lines of communication to federal and state election officials to quickly escalate issues that arise.
  • Address escalations of account issues with respect to violations of Twitter rules or applicable laws.
  • Continually improve and apply anti-spam technology to address networks of malicious automation targeting election-related matters.
  • Monitor trends and spikes in conversations relating to the 2018 elections for potential manipulation activity.

Twitter is likely to be in make-it-up-to-Congress mode at this point. Last Wednesday, in a Congressional hearing titled “Terrorism and Social Media: Is Big Tech Doing Enough?”, it was supposed to give evidence on the steps it’s taking to combat the spread of extremist propaganda over the internet.

Facebook, Google and Twitter were required to do a bit of homework as preparation: they were supposed to present responses to a series of detailed written questions from the committee. Facebook and Google did so, but Twitter’s reply was nowhere to be found, days after the deadline.

Well, that’s a letdown, said Senator Mark Warner, the ranking Democrat on the Committee, who was quoted by new media site Axios:

Facebook and Google met the deadline, and [with] voluminous amounts of information; Twitter did not. I’m disappointed in Twitter.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/vGKVyyaBIdQ/

How a teen used social engineering to take on the FBI and CIA

Of all the adversaries facing the US in cyberspace, there is one that the FBI and CIA often seem to struggle to contain.

It’s not a nation state hacking group such as Fancy Bear, APT 1 or Lazarus Group, but a group whose resourcefulness, determination, and ability to think creatively can prove to be every bit as big a handful – teenagers.

Teen hacking is as old as the hills of course and yet real-world examples keep coming, particularly court cases involving young males from the UK and US.

The latest relates to the aptly named Kane Gamble, who last October pleaded guilty to leading the ‘Crackas With Attitude’ (CWA) group, which launched a series of innovative attacks on senior US government figures between June 2015 and his arrest in February 2016.

At last week’s sentencing hearing at the Old Bailey in London, the court heard how Gamble (then 15) first targeted then-CIA director John Brennan, accessing his email and iCloud accounts and making hoax calls to his home phone number.

Next on the list were then-FBI deputy director Mark Giuliano, special FBI agent Amy Hess, secretary of homeland security Jeh Johnson, deputy national security adviser Avril Haines, and senior science and technology adviser, John Holdren – to name only a few.

Motivated by politics, Gamble was said to have leaked documents from Brennan’s email account, as well as 3,500 names, email addresses and contact numbers for US police and military personnel in a file stolen from Giuliano.

He listened to numerous voicemails, sent text messages from Jeh Johnson’s phone, and even remotely accessed his internet-connected TV to post the message “I own you.”

What stands out is not only the campaign’s success but a disarmingly simple MO that holds a big warning for organisations everywhere.

Far from using advanced hacking, Gamble simply phoned up help desks for broadband services and utilities using public numbers, convincing staff they were speaking to the target as a way of gaining access or resetting accounts.

The security that should have stopped the group – answering personal security questions – didn’t.

As prosecuting QC John Lloyd-Jones put it:

The group incorrectly have been referred to as hackers. The group in fact used something known as social engineering, which involves socially manipulating people – call centres or help desks – into performing acts or divulging confidential information.

If a few teens can talk their way into the accounts of high-profile targets such as the head of the CIA, what chance would the average organisation or citizen stand?  It’s a chink in the armour of authentication every organisation should assess.

Two Sophos experts recently spoke about the threat of social engineering in a Facebook Live chat. It’s worth a watch to learn more about the problem, and find out how to fight back against social engineers.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/rhtCYhs0GjY/

Gas pump malware tricks customers into paying for more than they pump

Russian authorities have uncovered a massive fraud ring that installed malicious software at gas pumps making customers think they were getting more fuel than they were. In fact they were pumping up to 7% less than they were being charged for, according to Russian news source Rosbalt.

On Saturday, Russia’s Federal Security Service (FSB) arrested the alleged mastermind, Denis Zaev (identified as Denis Zayev by some outlets), in Stavropol, Russia, on charges that he created several software programs designed to swindle gas customers.

An unidentified source in law enforcement told Rosbalt that this is one of the largest such frauds detected by the FSB. The malware was discovered at dozens of gas stations, where customers were getting ripped off without noticing a thing:

A giant scam covered almost the entire south of Russia, [where the viruses] were found in dozens of gas stations in the Stavropol Territory, Adygea, Krasnodar Territory, Kalmykia, a number of republics of the North Caucasus, etc. A whole network was built to steal fuel from ordinary citizens.

The source said that Zaev is believed to have developed and created several of these programs. It was a unique product, the source said: the malware couldn’t be detected, be it by oil company control services that continually inspect filling stations or by employees of the Ministry of Internal Affairs.

At any rate, after creating his “perfect” malware, Zaev reportedly began to offer it to gas station employees, the FSB said. Sometimes he simply sold the software; sometimes he also took his own share of the stolen funds.

His alleged profits were worth hundreds of millions of rubles; 1m rubles is worth about US$17,700.

The malware caused the gas pumps, cash registers and back-end systems to display false data. It was also able to cover its tracks.

It worked like so: every morning, employees would come up with a pretext to leave one of a station’s reservoirs empty – for example, under the pretense of cleaning. When a customer bought gas, the program automatically shortchanged the customer of between 3% and 7% of the gas purchased. But the gas pump itself would show that the entire volume of purchased gas had been pumped into the tank. The stolen gasoline was automatically sent to the tank that the attendants had left empty that morning.
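The accounting behind the scheme is simple to model. A sketch (the 5% skim rate is picked from the reported 3-7% range; everything else is illustrative):

```python
SKIM_RATE = 0.05  # reported skim was between 3% and 7%; 5% chosen for illustration

def dispense(litres_paid_for):
    """Model the reported scheme: the pump displays the full volume,
    delivers less, and the difference goes to the pre-emptied tank."""
    delivered = litres_paid_for * (1 - SKIM_RATE)
    diverted = litres_paid_for - delivered
    displayed = litres_paid_for  # display, receipt, and back office all show the full amount
    return displayed, delivered, diverted

displayed, delivered, diverted = dispense(40.0)
print(f"display: {displayed} L, delivered: {delivered:.1f} L, diverted: {diverted:.1f} L")
```

The fraud works precisely because every record a customer or inspector can see (the `displayed` figure) agrees with what was paid, not with what was pumped.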

This isn’t the first time we’ve seen crooks targeting gas stations.

A few years back, we saw a spate of Bluetooth-enabled, banking-data-gobbling skimmers installed at gas stations in the Southern US.

Eventually, 13 alleged thieves were charged with forging bank cards using details pinged via Bluetooth to nearby crooks from devices that were impossible for gas-buying customers to detect, given that the skimmers were installed internally.

We’ve also seen more analog skimmers attached to ATMs, such as the crudely glued-on card catchers that leave thieves hanging around the machine, pretending to look innocent, as they wait to snatch the cards after victims give up on ever getting them back.

True, the Bluetooth skimmer was installed internally, making it tougher to spot than the glued-on kludge of a card catcher. It still presented a problem for the thieves, though: using Bluetooth meant the skimmer still relied on the thieves hanging around nearby, given the limited range of this wireless technology. It also meant that anybody else using Bluetooth in the vicinity could get an eyeful of “Oooo, payment card details up for grabs!”

Last year, New York City police also started to see a new sort of skimmer on gas pumps that cuts the Bluetooth tie, instead relying on wireless GSM text messages to get card details to crooks anywhere in the world.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/KEhnRHzTHNA/

We’re cutting F-35 costs, honest, insists jet-builder Lockheed Martin

Lockheed Martin aims to knock 14 per cent off the cost of Britain’s F-35B fighter jets over the next couple of years, the firm’s director of business development told The Register.

At a press event in London last week, Steve Over told us: “We’re currently negotiating [production] lots 12, 13 and 14 to get costs down. Lot 10 [the batch currently being delivered] each aircraft is about $122m, which I think is about £90m at current rates. This is not the low point.”

F-35s are ordered by a US government agency, the F-35 Joint Project Office, which formally places the orders with the company on behalf of both the American armed forces and foreign customers including the UK.

While production lots 12-14 have already been awarded to Lockheed Martin, officials are still haggling over the price of each supersonic stealth fighter. The F-35B will be the only fighter jet capable of flying from Britain’s Queen Elizabeth-class aircraft carriers.

What makes the jet so expensive (by comparison, a Eurofighter Typhoon, as flown by the RAF, comes in at around £57m/$80m) is what Over described as “sensor fusion”, which consists of using the jet’s Multifunction Advanced Data Link (MADL) to network its sensors together. This is a new, and separate, system from the NATO-standard Link 16 communications gear used by what Lockheed brands “fourth-generation” fighter jets. Naturally, Lockheed having dreamed up the term, theirs is the world’s only “fifth-generation” jet, though it also carries Link 16 comms gear for talking to Stone Age military aviators.

“The increasing volume of orders helps drive costs out of the system,” Over said.

Also at the press event were some of the handful of British companies building actual chunks of the F-35 itself. These included Cobham, which makes the critical ball joint used in the F-35B’s air-to-air refuelling probe. As the firm explained to El Reg, anyone can make a pipe and bolt it to an aircraft – but their unique weak link is designed to shear and cut off fuel flow only “under very specific loads”, such as a pilot in an emergency needing to pull away from the tanker ASAP.

Along with Cobham was Bagshot-based Sterling Dynamics, which has to date supplied 300 sets of controls for cockpit simulators being bought by F-35 customers.

An air-to-air refuelling probe for the F-35 as made by Cobham

An air-to-air refuelling probe for the F-35. Cobham makes the purple knobbly bit, which is the weak link that breaks in emergencies

These firms were joined by EU missile company MBDA, which has various outposts spread around Britain (only British F-35s will carry MBDA missiles, we were told), plus, inevitably, UK-based offshoots of American companies.

Honeywell UK integrates some of the Onboard Oxygen Generation Systems components, though these, company reps told us, are actually manufactured in France. In response to our questions, they added that these are not the same units causing airflow problems for some US pilots; that assertion does not appear to hold up.

On Board Oxygen Generation System for the F-35 jet, made by Honeywell

The Honeywell OBOGS

US firm Moog, which makes the hydraulic systems used for powering the F-35’s controls, does some of the work at what older British readers will know as the former Dowty Boulton Paul business. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/23/f35_price_will_reduce_claims_exec/

Optimus multi-prime is the new rule as OpenSSL transforms crypto policies again

OpenSSL’s maintainers have put the squeeze on insecure ciphers with a raft of changes to how the project operates.

The changes were announced here following an OpenSSL management committee (OMC) meeting in London.

The cryptography policy changes include making sure insecure configurations aren’t enabled by default, but by compile-time switches, and “multi-prime RSA” will enforce a maximum number of prime factors by default.

The OMC decided that it must be possible to disable new algorithms at compile time, and that new crypto algorithms should only interface with OpenSSL via its EVP (digital EnVeloPe library) API.

In future, any new crypto algorithm will need to be backed by a national or international standards body, and all ciphers will need to be specified at run-time to be enabled in the TLS layer.
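Multi-prime RSA builds the modulus from three or more primes rather than the usual two, which speeds up private-key operations but weakens security if too many (and therefore smaller) primes are used; hence the cap. A toy sketch of the three-prime case, with deliberately tiny primes for illustration only:

```python
from math import lcm

# Three-prime RSA with toy primes; real keys use primes hundreds of digits long.
p, q, r = 61, 53, 71
n = p * q * r                      # multi-prime modulus
lam = lcm(p - 1, q - 1, r - 1)     # Carmichael function of n
e = 17                             # public exponent, coprime to lam
d = pow(e, -1, lam)                # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg    # decryption recovers the message
```

Encryption and decryption look identical to two-prime RSA; the difference is that the private-key holder can exploit the extra factor for a faster CRT-based decryption, and an attacker has smaller primes to hunt for. This sketch illustrates the math, not OpenSSL’s implementation.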

Beyond the crypto-policy changes, the OMC has made a number of other changes to the project.

The most noticeable will be the end of the openssl-dev mailing list, partly because there was overlap between posts to that list and to openssl-users, and partly because the OMC wants GitHub to be the primary channel for developer discussions.

For policy communications, there’s a new mailing list, openssl-project, “for discussions about the governance and policies of OpenSSL”. Anyone can sign on, but only the OMC and committers can post to it.

There will also be a renewed effort to reduce OpenSSL’s technical debt, including cleaning up old tickets, and refactoring code. “The recent addition of the PACKET and WPACKET API’s in the libssl make the code much more clear, and also avoid hand-coded packet processing bugs,” the post stated.

The project’s release cadence will change to weekly, on Tuesday, unless there’s a severe vulnerability with known exploits.

The OMC says TLS 1.3 remains its highest-priority roadmap item (just as soon as the IETF finally signs off on the standard), and after that, the effort will turn to FIPS compliance. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/23/openssl_project_changes/

Electronic voting box makers try to get gear stripped from eBay and out of hackers’ hands

Shmoocon: Vendor intimidation, default passwords, official state seals for sale. Yes, we’re talking about computer-powered election machines.

The organizers of last year’s DEF CON Voting Village – a corner of the annual infosec conference where peeps easily hacked into electronic ballot boxes – are preparing for a similar penetration-testing session at this year’s event in August.

There are some hurdles to clear, though.

Speaking at the Shmoocon conference in the US capital last week, Finnish programmer and village organizer Harri Hursti said the team was having trouble obtaining voting machines to compromise for this year’s hackfest, in part because manufacturers weren’t keen to sell kit that could expose their failings.

In some cases, the box makers sent letters to people flogging election systems on eBay, claiming selling the hardware was illegal, which isn’t true. His team is still scouring the web for voting gear.

“One e-cycling company had 1,300 voting machines for sale, which it acquired when the ceiling of the warehouse in which they were being stored collapsed,” Hursti said. “We found the company had already sold 400 of the machines, in some cases back to counties for voting duties.”

One of the machines was duly bought for the hacking competition. The seller is also touting packets of 25 official election machine seals for the state of Michigan for less than $5.

“You’d think you could only buy these if you had a government ID and were in the state of Michigan,” Hursti said. “But no, anyone can buy these.”

Hursti is pretty well known for finding ways into voting machines, and for showing how an ordinary poll worker could meddle with systems.

RTFM

Meanwhile at Shmoocon, we learned that Margaret MacAlpine, founding partner at Nordic Innovation Labs and another member of the DEF CON Voting Village team, found complete lists of the default admin passwords for electronic ballot boxes in their training manuals.

In one tome, election officials are instructed not to change the default password and, if someone already had, to reset it back to the default. This manual covered machines used to count 18 per cent of the votes in US elections, we were told.

SAVE our souls

The sad state of security in America’s voting infrastructure has worried politicians, and in October the bipartisan Securing America’s Voting Equipment (SAVE) Act was introduced. The legislation, if passed, would require election machines to be audited and officials to be trained to deal with the latest credible security threats.

Voting village organizer Matt Blaze, an associate professor of computer and information science at the University of Pennsylvania, said the proposed law was “a beautiful piece of legislation” and should be supported. Given the intransigence within Congress, however, it may be a while before it gets through.

But it is needed, he argued, as there was already evidence that Russian hackers had been busy attacking election systems – not the voting machines themselves but the computers used to house voter rolls and tabulate the results.

“We’ll find out how much hacking went on in the history books, assuming they are allowed to be written in the future,” he told Shmoocon attendees. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/23/electronic_voting_machine_update/

Uber hit with criticism of “useless” two-factor authentication

Uber is in the computer security news again, this time over allegations that its 2FA is no good.

ZDNet, for instance, doesn’t mince its words at all, leading with the headline, “Uber ignores security bug that makes its two-factor authentication useless.”

Two-factor authentication, or 2FA, is also known as 2SV, short for two-step verification.

It’s an increasingly common security procedure that aims to protect your online accounts against password-stealing cybercrooks.

When you log in, you have to put in your usual password, which typically doesn’t change very often, plus an additional login code, which is different every time.

These one-time login codes are typically sent to you via SMS (text message) or voicemail, or calculated by a secure app that runs on your mobile phone.

The “second factor” is therefore usually your phone; the “second step” is figuring out and confirming the one-off password that crooks can’t predict in advance.

It’s not a perfect solution, but it does make it much harder for a crook who has just bought stolen usernames and passwords on the Dark Web: your password alone isn’t enough to raid your account.
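The one-time codes produced by authenticator apps are typically generated with the HOTP and TOTP algorithms (RFC 4226 and RFC 6238): an HMAC over a counter, or over the current 30-second time window, truncated to six digits. A minimal sketch:

```python
import base64, hashlib, hmac, struct, time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238: HOTP over the current time window, so the code changes
    every `interval` seconds, giving the 'different every time' property."""
    key = base64.b32decode(secret_b32)
    return hotp(key, int(time.time()) // interval)

# RFC 4226 test vectors for the shared secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the server and your phone share the secret and the clock, both can compute the same code independently; a crook with only your password cannot.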

What’s the story?

So, why is Uber’s 2FA supposedly “useless”?

According to ZDNet, an Indian security researcher has convinced the site that he can bypass Uber’s 2FA, reducing your security to what it was before 2FA was introduced.

So, when would you expect to see a 2FA prompt?

We’re not Uber users, but some of our colleagues are, and as far as we can tell, Uber doesn’t have an option to force an additional 2FA check every time you log in.

Apparently, Uber automatically activates 2FA only when it thinks the risk warrants it.

This approach works because fraudulent logins frequently stand out from regular logins: they come from a different country; a different ISP; a new browser; an unusual operating system; and so on.
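Uber hasn’t published how its risk logic works, but a risk-based check of this sort can be sketched as a simple scoring function over login attributes. Everything below, from the field names to the threshold, is illustrative guesswork on our part, not Uber’s actual code:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    country: str
    isp: str
    browser: str
    os: str

def requires_2fa(attempt, last_known, threshold=2):
    """Count how many attributes differ from the last known-good login;
    at or above the threshold, escalate to a 2FA challenge."""
    score = sum([
        attempt.country != last_known.country,   # new country: strong signal
        attempt.isp != last_known.isp,
        attempt.browser != last_known.browser,
        attempt.os != last_known.os,
    ])
    return score >= threshold

usual = LoginAttempt("GB", "ExampleNet", "Firefox", "Windows")
odd = LoginAttempt("VN", "OtherISP", "Chrome", "Windows")
print(requires_2fa(odd, usual))    # looks fraudulent: challenge it
print(requires_2fa(usual, usual))  # looks routine: let it through
```

A real system would weight the signals differently and add many more of them, but the shape is the same: routine logins sail through, anomalous ones get challenged.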

In a few tests here at Sophos HQ in Oxfordshire, England (ironically, Uber isn’t licensed to operate in Oxford, but that is a story for another time), we were able to provoke Uber’s 2FA prompts easily enough.

For example, we were asked for a one-time code after forcing a password reset via the mobile app.

We also tried logging in to the mobile app and then connecting via a regular browser from a laptop, whereupon we hit the 2FA system, too.

Once Uber “knew” about the laptop, the Uber servers didn’t ask for 2FA codes again when we logged back in from the same computer.

Is 2FA worth it?

Uber’s “part-time” approach to 2FA seems rather self-defeating: if 2FA is worth doing, surely it’s worth doing all the time?

Unfortunately, in real life, 2FA is not as popular as you might expect: Google, for example, recently lamented that the 2FA takeup rate amongst Gmail users is still below 10%.

In other words, fewer than 10% of Gmail users have turned the feature on.

Reasons for spurning 2FA include: I don’t trust Google with my phone number; it’s too much hassle; I get locked out every time I leave my phone at home; no or poor mobile coverage in my area; nothing worth hiding anyway.

In short: convenience before security.

Uber’s approach therefore takes a middle ground common to many online services, such as Google’s CAPTCHA system: try to avoid any extra customer-facing security shenanigans whenever possible.

Simply put: there’s a school of thought that it’s better to have everyone using 2FA some of the time, ideally when it’s most worth it, than to have most people not using it at all because of its perceived problems.

Is it useless?

If you could work out what triggers a “part-time” 2FA system, and could reliably trick it into misidentifying you as a low-risk login every time, you might reasonably claim that the 2FA system concerned was useless.

But in this instance, ZDNet admitted that “in some cases the bug would work, and in others the bug would fail, with nothing obvious to determine why.”

In other words, even if the effectiveness of Uber’s 2FA is less than expected, it doesn’t sound as though it’s strictly useless.

You’d also like to think that Uber deliberately doesn’t keep the when-to-activate logic in its 2FA system static, in order to keep the crooks on their toes.

(Of course, Uber infamously tried to hush up a recent data breach by paying off hackers under the guise of a bug bounty, and sacked its Chief Security Officer during the fallout, so just how proactive its security practices are remains to be seen.)

For all that Uber has done plenty to attract well-merited criticism in the past, we’re not sure that calling its 2FA “useless” on the basis of a bug that can’t reliably be reproduced is entirely fair…

…though if we were Uber, we’d make some tweaks anyway, such as the one we suggest at the end of the article.

What to do?

Suppressing 2FA doesn’t give cybercrooks a free lunch: they still need to crack your regular password, so:

  • Make sure you’ve changed your Uber password since the company’s recent breach notification.
  • Pick a proper password for every account, including your Uber account.
  • Don’t share passwords between accounts, or even use similar passwords that differ by a few letters that denote each account.
  • Consider using a password manager so you get high-quality passwords and don’t have to commit them all to memory.
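On the last point, a password manager’s core job, generating long random passwords you never have to remember, can be approximated with Python’s standard `secrets` module. The character set and length here are just sensible defaults, not a recommendation from Uber or anyone else:

```python
import secrets
import string

def make_password(length=20):
    """Generate a random password from letters, digits and punctuation,
    drawn from a cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per account: never reused, never merely "similar"
passwords = {site: make_password() for site in ("uber", "email", "bank")}
```

Note the use of `secrets.choice` rather than `random.choice`: the `random` module is predictable by design and unsuitable for passwords.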

Having said all that, we do have a suggestion for Uber, and for any other online services with “part-time” 2FA that only kicks in from time to time:

  • Offer an “always on” option for 2FA.

Here’s our message to Uber, and anyone else out there with what you might call “part-time 2FA”…

…there is an important minority of users out there who favour security over convenience, and who would be happy to turn 2FA on permanently, so don’t be afraid to let them lead the way!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/sufdSbenWKI/