STE WILLIAMS

‘George’ the Most Popular Password That’s a Name

A new study of stolen passwords reflects the consequences of password overload.

The most common type of password is a name, and the most common name password is George, according to a new analysis of compromised credentials found in the Dark Web.

ID Agent, a Kaseya company, found that names account for nearly 37% of password types per 1,000 records, followed by words (16.1%) and easy-to-remember keystroke patterns (8.7%). The findings were pulled from a random sample of more than 1 billion pilfered credentials in the past 12 months.

Passwords on average were 7.7 characters in length; the most popular word password was sunshine, and abcd1234 was the most common keystroke-pattern password.

The lame state of password creation doesn’t really come as a surprise. “Passwords are often deeply personal expressions of oneself, with the goal of making them easier to remember. However, remembering which extension is becoming increasingly difficult in our hyper-digital daily lives,” the company wrote in a post on the data. “In fact, it is estimated that the average US adult has between 90 and 135 different applications that require a set of credentials (a username and/or email address and password combination) for access.”

Read more here.

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “7 Steps to IoT Security in 2020.”

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/attacks-breaches/george-the-most-popular-password-thats-a-name/d/d-id/1336939?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

What It’s Like to Be a CISO: Check Point Security Leader Weighs In

Jony Fischbein shares the concerns and practices that are top-of-mind in his daily work leading security at Check Point Software.

Check Point Software CISO Jony Fischbein has a lot on his plate. Like many CISOs, he juggles the security of multiple corporate departments with thousands of employees, all of whom possess different personalities, security requirements, and potential risk factors.

“A lot of these departments … they want to drive to the same place, but they have different needs,” said Fischbein in a keynote at this week’s CPX 360 conference in New Orleans. Each day he is tasked with making decisions to secure these departments and each of their employees, while also tackling his overall goal and greatest challenge as a CISO: enabling business processes.

Tackling this challenge starts with addressing human-based issues. “People are the biggest asset and the biggest weakness in any organization,” Fischbein said. “Engage them wisely.”

This means knowing how employees can aid your defenses and, more importantly, knowing which people you need to protect against. The first group includes overmotivated employees. “These employees will do stuff because they just want to promote the business,” he explained, but they often do this by downloading tools and applications not sanctioned by the IT department. “Shadow IT,” the use of software without the business’ consent, presents security issues.

While eager employees pose a risk, unhappy ones are considerably more dangerous. “These are the No. 1 people who will hurt the company,” Fischbein added. Angry workers who are motivated to cause damage can use their access to steal contacts and code and expose internal data. “These problems are relevant to everyone,” he said, noting that for every 1,000 employees, chances are five to 15 are unhappy. They may face penalties, he continued, but many unhappy employees forget about the contracts they signed when they started the job.

Cybercriminals and nation-states are the other two groups causing concern for Fischbein. As an example, he cited recent concerns of retaliation and potential cyberattacks from Iran in early January. “We have to immediately make sure our SOC was up-to-date,” he said of the response. “All IP addresses from Iran are going to be immediately blocked, no questions asked.”

The talk dove into two examples of how CISOs can help enable business processes. First, he said, is embracing the cloud and supporting the business’ ability to use it. In the past year, Check Point’s IT teams have worked in cloud environments and developed directly on them. One of its accounts is forbidden from being exposed to the Internet; if something is accidentally exposed, a mitigation the team introduced logs the incident and sends it to the SOC.

“The No. 1 topic that I believe is the reason for hacks or breaches in the cloud is misconfiguration,” said Fischbein.

Understanding security incidents is a second example of how the CISO can support the business. It’s essential to treat incidents well and thoroughly, said Fischbein, and it’s equally important to not be surprised or panic when a breach hits. Be sure you know which teams will be involved in response and the steps they will take in investigating and mitigating the threat.

“What is key during the incident is to try to [record] lessons learned during that incident,” he emphasized. “A month later you will not remember what happened.”

Fischbein also spoke to the use of automation, which he believes will allow security teams to survive the challenges of today and the future. “All security pros, such as myself, have to open the gates to third-party solutions. We have an automated process to vet the new technologies we will connect to our systems, so [they] will be rapid and secure.”

With respect to Check Point’s own product line, he called himself “customer zero” for all of the company’s tools.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/what-its-like-to-be-a-ciso-check-point-security-leader-weighs-in/d/d-id/1336940?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

UN hacked via unpatched SharePoint server

The UN suffered a major data breach last year after it failed to patch a Microsoft SharePoint server, it emerged this week. Then it failed to tell anyone, even though it produced a damning internal report.

The news emerged after an anonymous IT employee leaked the information to The New Humanitarian, which is a UN-founded publication that became independent in 2015 to report on the global aid community. According to the outlet, internal UN staffers announced the compromise on 30 August 2019, explaining that the “entire domain” was probably compromised by an attacker who was lurking on the UN’s networks.

A confidential report sent to the publication without permission by a UN IT official revealed that the cyberattack had started in mid-July last year. The hackers had compromised dozens of servers including those in its highly sensitive human rights operation, along with its human resources department.

Stéphane Dujarric, spokesperson for the UN Secretary-General, explained to media in a briefing on Tuesday:

Attempts to attack the UN IT infrastructure happen often. The attribution of any IT attack remains very fuzzy and uncertain. So, we are not able to pinpoint to any specific potential attacker, but it was, from all accounts, a well‑resourced attack.

The Associated Press (AP), which has seen the report, said that system logs had been meticulously cleared during the attack.

The hackers targeted a total of 42 servers, compromising the Active Directory domains of UN offices in Geneva, Vienna, and at the Office of the High Commissioner for Human Rights, although an official told the AP that nothing at the latter location was compromised. The three hacked locations employ around 4,000 staff. Geneva was the hardest hit, with 33 hacked servers, according to The New Humanitarian.

The attackers likely got in through an anti-corruption tracker at the UN Office of Drugs and Crime, reports said. The entry point was CVE-2019-0604, a remote code execution vulnerability in Microsoft’s SharePoint collaboration software that enabled an attacker to run arbitrary code. Microsoft patched the flaw in February 2019, but the UN hadn’t applied the fix.

News of the breach follows an IT audit in 2018 that revealed significant problems with the UN’s technology systems. The audit found that 223 servers at the secretariat were operating with obsolete or unsupported technology such as Windows 2000 servers on legacy networks as of March 2018. They were not centrally managed. The audit also complained of fragmented issue tracking and couldn’t confirm that a network segmentation project had been completed.

Most damning is the fact that the organisation had shifted to self-certification for website and web application security, leaving it up to individual offices to confirm that they had applied updates to web-based systems. Of 37 offices, only 9 responded. Of those, only 3 reported full compliance with all policies. Only one of the 1,462 UN websites had been checked by an external cybersecurity team.

In a commercial setting, GDPR could well kick in here. However, as UN officials have said when apologising for other data breaches in the past, they consider UN agencies to be above such things.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/RonIAIVbyIQ/

Serious Security – How ‘special case’ code blew a hole in OpenSMTPD

If there’s one open source project with an unashamedly clear focus on security, it’s the OpenBSD operating system.

In its own words, its efforts “emphasize portability, standardization, correctness, proactive security and integrated cryptography.”

Indeed, numerous sub-projects under the OpenBSD umbrella have become well-known cybersecurity names in their own right, notably OpenSSH – which ships with almost every Linux distribution and, since Windows 10, with Windows – and LibreSSL.

There’s also OpenSMTPD, a mail server that aims to allow “ordinary machines to exchange emails with other systems speaking the SMTP protocol”, for example to let you run a mail server of your own instead of relying on cloud services like Gmail or Outlook.com.

Well, if you do use OpenSMTPD, you need to make sure you’re not vulnerable to a recently disclosed bug that could let a crook take over your server simply by sending an email containing evil commands.

Being security-conscious doesn’t stop the OpenBSD project from writing buggy code…

…but it has made the core team very quick at responding when bugs are reported, which is what happened in this case.

1988 calling!

The bug itself brings back memories of the infamous Internet Worm from way back in 1988, when a programmer called Robert Morris – ironically, the son of a government cryptographic researcher called Robert Morris – unleashed an auto-spreading computer virus that quickly swamped the then-fledgling internet.

One of the self-spreading tricks used by Morris was to exploit a “feature” in the Sendmail software – one that was not supposed to be used in real life, only for debugging – that allowed him to embed system commands inside the text of an email.

When the email was received by the server, it would essentially be launched as a program, instead of processed and delivered as a message.

This new OpenSMTPD bug, denoted CVE-2020-7247, was found by cybersecurity company Qualys, and gives cybercriminals a similar sort of attack lever to Morris’s worm.

In fact, when Qualys coders developed and published a Proof of Concept (PoC) to demonstrate the exploitability of this bug, they admitted that they “drew inspiration from the Morris worm”.

How the bug works

OpenSMTPD allows you to specify a command that it will use to handle the mail that it receives, whether that’s email coming in from outside or messages that you’re queuing up for delivery to other servers.

Like many Unix programs, it uses the system’s command shell /bin/sh to spawn your command of choice, passing along the email address details as parameters.

As you probably know, “shelling out” to user-specified commands is risky, because the shell treats some characters in its list of parameters in a special way.

You can try this for yourself, for example by sending the commands below to a Unix shell.

(The option -c means “run what follows as a command” and the text echo inside the command string is itself a command that means “print the message that follows”.)

/bin/sh -c 'echo duck@example.com'
/bin/sh -c 'echo duck@example.com more text'
/bin/sh -c 'echo duck@example.com;echo more text'

You’d probably, and reasonably, expect to see the following output:

duck@example.com
duck@example.com more text
duck@example.com;echo more text

But you don’t – instead, you see:

duck@example.com
duck@example.com more text
duck@example.com
more text

The reason is that the semicolon character (;) in the last line tells the shell to split the line into two commands and run them one after the other.

So the shell doesn’t print out ;echo more text at the end of the third line.

Instead it acts as though you had done this…

/bin/sh -c 'echo duck@example.com'
/bin/sh -c 'echo duck@example.com more text'
/bin/sh -c 'echo duck@example.com'
/bin/sh -c 'echo more text'

…which is not the same thing at all!
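That shell behaviour is why careful programs avoid the shell entirely when spawning sub-commands. Here is a rough sketch in Python (the address string is purely illustrative, and this is not how OpenSMTPD itself works) comparing the two approaches:

```python
import subprocess

# A hypothetical address string containing a shell metacharacter.
addr = "duck@example.com;echo more text"

# Risky: the string travels through /bin/sh, so the ';' splits it
# into two separate commands, just like in the examples above.
shell_result = subprocess.run(
    ["/bin/sh", "-c", "echo " + addr],
    capture_output=True, text=True,
)

# Safer: the address is passed as a single argv element, so the
# shell never sees it and the ';' is just an ordinary character.
safe_result = subprocess.run(
    ["echo", addr],
    capture_output=True, text=True,
)

print(shell_result.stdout)  # two lines: the command was split at ';'
print(safe_result.stdout)   # one line: the address stays intact
```

Passing the address as a single argv element means the semicolon is never interpreted; it only has special meaning when the string is handed to /bin/sh.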

How the bug came about

OpenSMTPD does try to stop dangerous characters such as semicolons from leaking into the commands it generates, by checking both the username part (duck in our example above) and the domain part (example.com in our example) of any email address you specify as the sender or the receiver of any message.

In pseudocode, it’s something along these lines:

if the username is dodgy or the domain is dodgy then
   reject the message
end

But things are never quite that simple: usernames and domains that are totally blank obviously fail the dodginess test, yet sometimes need to be allowed.

When issues like this come along, programmers often need to describe this sort of ‘special case’ logic in their code, and wherever there’s an exception, there’s a risk that a security bypass might be introduced.

The OpenSMTPD code actually ended up like this:

if the username is dodgy or the domain is dodgy then
   -- allow 'dodginess' if it's caused by the fact that the address
   -- is completely blank, because that's a special case
   if both the username and the domain are blank then
      allow it   -- WHY NOT CHECK THIS FIRST IF IT'S SPECIAL?
   end 
   -- a missing username is useless, so don't allow that
   if just the username is missing then
      reject it
   end
   -- but a missing domain name is OK, because it means 'use the default'
   if just the domain is missing then
      use the default domain name 
      allow it    -- OOPS! THE CODE CAN GET HERE *EVEN IF WE
                  -- ALREADY KNOW THE USERNAME IS DODGY*
   end
  
   reject the message
end

You can see the problem above, namely that two special cases for accepting dodgy data were handled inside the very “if” statement that was there to reject dodgy addresses.

The end result is that a blank domain name is enough to get a message accepted even though that message already failed the username safety check!
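The flawed flow can be sketched in runnable Python. Everything here is illustrative: the function names, the “dodginess” test, and the default domain are assumptions for demonstration, not OpenSMTPD’s actual code:

```python
import re

DEFAULT_DOMAIN = "example.com"   # hypothetical default domain

def is_dodgy(text):
    # Blank addresses "fail the dodginess test" too, as described above,
    # as do shell metacharacters (an illustrative, incomplete list).
    return text == "" or re.search(r"[;|&$`]", text) is not None

def buggy_validate(username, domain):
    if is_dodgy(username) or is_dodgy(domain):
        # special case: completely blank address
        if username == "" and domain == "":
            return (True, username, domain)
        # a missing username is useless, so reject it
        if username == "":
            return (False, None, None)
        # OOPS: reachable even when the *username* is dodgy!
        if domain == "":
            return (True, username, DEFAULT_DOMAIN)
        return (False, None, None)
    return (True, username, domain)

# A dodgy username is rejected when a domain is present...
print(buggy_validate(";evil;", "example.com"))   # (False, None, None)
# ...but a blank domain lets the very same username through:
print(buggy_validate(";evil;", ""))              # (True, ';evil;', 'example.com')
```

The “OOPS” branch accepts the message using only the fact that the domain is blank, even though the outer test has already flagged the username as dangerous.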

You’re supposed to address SMTP messages like this…

MAIL FROM:<duck@example.com>
RCPT TO:<duck@example.com>

…but the Qualys researchers figured out that they could trick the software into running commands of their own by saying something like this…

MAIL FROM:<;command line of their choice;>
RCPT TO:<;another unexpected command;>

…instead.

The “usernames” above are ;command line of their choice; and ;another unexpected command;, both of which are clearly dodgy and dangerous.

Ironically, even though OpenSMTPD correctly detects those text strings as dangerous, the rogue data gets allowed through to the command shell anyway because it’s not followed by a domain name.

How it was fixed

The new code is much easier to follow, and gets the special cases out of the way first, so the “if” statements that deal with rejecting messages don’t have sub-clauses that revoke that rejection:

if both the username and the domain are missing then
   -- a very specific special case tested first
   allow it
end

if the username is missing or dodgy then
   -- blank or dodgy usernames *must* fail up front
   reject it
end

if the domain is missing then
   -- if we get here, the username is OK, so we
   -- can safely fall back to the default domain
   use the default domain name
end

if the domain name is dodgy then
   -- if we get here, we do have a domain because
   -- it's either the one specified or the default,
   -- so now we can check if it's dodgy
   reject it
end

-- and at this point, we have:
-- a valid, non-empty username
-- a valid, non-empty domain (perhaps the default) 
-- so we can...

accept it
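A Python sketch of the reordered logic (the names, the dodginess test, and the default domain are illustrative assumptions, not the actual OpenSMTPD patch) shows why the bypass no longer works:

```python
import re

DEFAULT_DOMAIN = "example.com"   # hypothetical default domain

def is_dodgy(text):
    # Illustrative check for shell metacharacters.
    return re.search(r"[;|&$`]", text) is not None

def fixed_validate(username, domain):
    # The one special case, dealt with first.
    if username == "" and domain == "":
        return (True, username, domain)
    # Blank or dodgy usernames *must* fail up front.
    if username == "" or is_dodgy(username):
        return (False, None, None)
    # The username is known good, so the default domain is safe to apply.
    if domain == "":
        domain = DEFAULT_DOMAIN
    if is_dodgy(domain):
        return (False, None, None)
    # A valid, non-empty username and a valid, non-empty domain.
    return (True, username, domain)

# A dodgy username with a blank domain is now rejected outright:
print(fixed_validate(";evil;", ""))   # (False, None, None)
print(fixed_validate("duck", ""))     # (True, 'duck', 'example.com')
```

Because the username check now runs unconditionally before any branch that accepts the message, a dodgy username can no longer ride in on the back of a blank domain.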

What to do?

This bug is dangerous because, by default, OpenSMTPD listens for local mail that’s being sent out.

When mail is received locally, the server uses the root (superuser account) to deal with it, so anyone who’s already logged in can use this bug to “promote” themselves to root.

That’s an elevation of privilege (EoP) vulnerability.

But if you are using OpenSMTPD to accept mail from outsiders, then the bug is worse because users who don’t even have accounts on your system, let alone who aren’t logged in, can run commands on your server just by transmitting a sneakily-formatted email.

That’s a remote code execution (RCE) vulnerability.

Therefore:

  • If you have a vulnerable version of OpenSMTPD, patch it now. The fix was delivered rapidly, so do yourself the favour of applying it rapidly, too. The patch arrived in OpenSMTPD 6.6.2 (6.6.2p1 if you are using the so-called Portable source code intended for use on operating systems other than OpenBSD itself).
  • Watch out when programming for special cases. If you are a coder, don’t be in too much of a hurry to “fix” problems handling unusual or unexpected data. Try to get the special cases out of the way first so you don’t end up with code that’s supposed to block errors but has numerous exceptions that cause the error to be ignored. The clearer your code, the easier it is to review and the more likely it is to be correct.
  • Minimise your use of the root account. Sometimes, you can’t avoid it, for example if you need to access system files or reconfigure privileged services. But always be even more cautious than usual when preparing data to be handed up to a program that will run as root.
  • Avoid running sub-programs via the shell if you can. Sometimes, for reasons of flexibility – like here, when you might want to hand off from OpenSMTPD to a script of your own – you have little choice. But always be even more cautious than usual when preparing data to be passed into a shell script, because of those dangerous “special characters”.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/4YZhTzLrMmk/

China’s Winnti hackers (apparently): Forget the money, let’s get political and start targeting Hong Kong students for protest info

A Chinese hacking crew which had previously been focusing on industrial and commercial attacks has now involved itself in efforts to suppress protests in Hong Kong.

Researchers at security shop ESET say the Winnti Group, a hacking operation believed to be backed by the Chinese government, has begun targeting the networks and accounts of at least five universities in Hong Kong. Active malware infections were found at two of the schools in November of last year and ESET believes three others have since been targeted by the hackers.

The aim of these intrusions, ESET believes, is to gather intelligence and disrupt protests by students at those universities, as Hong Kong continues to deal with civil unrest between pro-democracy protesters and the mainland government.

According to the ESET team, the Winnti hackers have been using their namesake malware trojan – first documented back in 2013 – to get into the university PCs and drop a backdoor called ShadowPad. From there, it is believed the hackers comb the infected machines for information relating to the ongoing protests.


“ShadowPad is a multi-modular backdoor and, by default, every keystroke is recorded using the Keylogger module,” explained Mathieu Tartare, the lead researcher for the ESET team studying the attack.

“The use of this module by default indicates that the attackers are interested in stealing information from the victims’ machines. In contrast, the variants we described in our earlier whitepaper didn’t even have that module embedded.”

The protester attacks are a departure from what the Winnti hackers usually focus on. Previously, the Chinese crew had devoted itself to financial and intellectual property heists, targeting online gaming companies and supply chain operators in the pharmaceutical, aviation, telecoms, and software markets.

The group has gained some notoriety for its use of custom-built, sophisticated malware. The Winnti crew was among the first to make use of stolen certificates to evade security software. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2020/01/31/winnti_hackers_students/

Embracing a Prevention Mindset to Protect Critical Infrastructure

A zero-trust, prevention-first approach is necessary to keep us safe, now and going forward.

In the TV series Mr. Robot, Elliot Alderson, a gifted cybersecurity engineer by day, moonlights as a vigilante hacktivist for the “fsociety” group, which conspires to topple corporate America by canceling the debt records of every citizen.

In this doomsday scenario, cyber anarchists aim to disrupt the financial infrastructure that supports the global economy as a means to bring about their ideological political goals. Beyond this dramatic metaphor lies a sobering truth: Our world is interconnected to such a degree that the notion of critical infrastructure has evolved beyond what we have traditionally classified as such.

While power plants, chemical factories, and government agencies rightfully deserve the “critical” designation, there are scores of other industries on which these critical infrastructure organizations depend; if those industries were knocked out of commission by a well-orchestrated targeted attack, the organizations themselves would cease to function properly.

To reduce risk and thrive in this age of unpredictable and targeted attacks, critical infrastructure organizations must take a more expansive view of the critical infrastructure ecosystem, commit to making cybersecurity training a priority for employees at every level of the organization, and embrace a holistic zero-trust approach that prioritizes prevention strategies over reactive detection methods.

Mitigating Cyber-Risk with Training and Awareness
In February 2019, employees of the Fort Collins Loveland Water District and South Fort Collins Sanitation District in Colorado were hit by a ransomware attack that locked them out of their computers — for the second time in two years. In September 2019, Kudankulam Nuclear Power Plant, the largest nuclear plant in India, was breached in a malware attack, and in November 2019, criminals shut down computers at Mexican oil giant Pemex in exchange for a $5 million ransom. The US experienced its first attack on a power grid in March 2019, when the North American Electric Reliability Corp. (NERC) reported that grid operations were disrupted in a “cyber event” that lasted nearly 12 hours.

As public and private enterprises look to new cybersecurity solutions to mitigate the risks, global cybersecurity spending is expected to grow to $133.8 billion by 2022, according to International Data Corporation. The White House’s 2020 budget alone includes more than $17.4 billion for cybersecurity-related activities, a 5% increase over 2019. However, we’ll need to do more than throw money at the issue.

The problem lies in the fact that critical infrastructure sectors have become increasingly attractive targets — both for nation-states engaged in geopolitical campaigns as well as profit-motivated criminal syndicates. That’s largely due to the fact that much of our nation’s critical infrastructure is built upon a tangle of legacy industrial control systems that were intentionally designed as closed, air-gapped systems.

But perhaps the greatest vulnerability is the human element. While many of these companies address supply chain risks by certifying the cybersecurity practices of their partners, basic security awareness and training often lags behind other industries. Threat actors, regardless of their motivation, are like water flowing in a riverbed: They will always choose the path of least resistance.

A Shift in Mindset: From Detection to Prevention
As we enter the next decade, executive leadership for critical infrastructure organizations must take a hard look at their existing IT systems, their security practices, and, most importantly, their attitudes toward how they approach cybersecurity.

And because threats can now come from anywhere, any piece of connected technology must be treated as potentially malicious. This is the essence of a zero-trust, prevention-first mentality, one in which trust is never implied and the legitimacy of every file, every device, and every network connection is always questioned.

All employees — whether executives, engineers, or accountants — must develop a deeper appreciation that any interaction with technology can open a door to a potential cyberattack. It’s imperative that critical infrastructure organizations prioritize cybersecurity training for all employees, emphasizing that every person who interacts with technology also plays an important role in protecting mission critical infrastructure.

To prepare for the increasing sophistication and frequency of cyberattacks on critical infrastructure sectors, the burden will rest on the shoulders of executive leadership, who must take the lead in ensuring that all employees, regardless of their role or responsibility, are aware that any interaction with technology has the potential to unleash the next Stuxnet, or worse.

As we move into this new decade, there are more unknowns than knowns. While critical infrastructure security leaders can’t predict and prepare for every attack scenario, they must at least acknowledge that the threat landscape has shifted and that a prevention-first, zero-trust approach is necessary to keep us all safe, this year and beyond.


Benny Czarny is the Founder and CEO of OPSWAT, a leading cybersecurity firm with over 1,000 customers, 200 employees, and 8 offices worldwide. Founded with a personal investment in 2002 to offer a unique, market-driven approach to security application design and development, …

Article source: https://www.darkreading.com/vulnerabilities---threats/embracing-a-prevention-mindset-to-protect-critical-infrastructure-/a/d-id/1336907?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AppSec Concerns Drove 61% of Businesses to Change Applications

According to new Dark Reading research, some respondents have even left behind commercial off-the-shelf software and migrated to open source or homegrown in-house applications.

The marketplace is beginning to pinch the software industry for application security failings and complications, according to a new Dark Reading study.

Sixty-one percent of respondents to the survey, released today, stated that security concerns about one application have caused them to migrate to an alternative. Twenty-seven percent swapped one commercial off-the-shelf (COTS) application for another. Others migrated over to a COTS solution, leaving behind either open source (6%) or in-house developed (16%) tools. However, 12% dropped their commercial software altogether, in favor of either open source or in-house developed apps.

Why the changes? Some of the reasons are familiar.

For example, internal dev teams may be poorly trained on secure coding and be liable to run into business conflicts with their counterparts in the security department. When asked to name the biggest risk to appsec, the No. 1 answer was “developers untrained in security,” cited by 38% of respondents. This worry persists despite a majority of respondents giving positive reviews of the relationship between these two teams.  

And, of course, commercial software vendors’ security records vary widely; while one may have a large dedicated security team, supported by a bug-bounty program and a reliable process for issuing patches and updates, another may have none of those things and leave bugs unfixed for years. Similarly, open source communities vary in the kind of support they provide.      

Some reasons, however, are less familiar. Recently, new security challenges have arisen to further complicate the choices that businesses make about applications.

For example, this year the United States, citing national security concerns, prohibited the use of technologies from Chinese tech giant Huawei, as well as surveillance technologies from other Chinese companies. In 2017, the administration ordered the removal of Kaspersky Lab cybersecurity tools from all federal systems for similar reasons.

Meanwhile, attacks exploiting vulnerabilities in open source code libraries have increased — and while that might initially make open source applications appear less attractive, these components are also frequently used by internal development teams and commercial software vendors alike. Fortunately, most respondents have a process in place to repair vulnerabilities in open source software components, though 21% admit to being “completely at a loss.”

Read the complete report, “How Enterprises Are Developing and Maintaining Secure Applications,” here.   


Sara Peters is Senior Editor at Dark Reading and formerly the editor-in-chief of Enterprise Efficiency. Prior to that she was senior editor for the Computer Security Institute, writing and speaking about virtualization, identity management, cybersecurity law, and a myriad …

Article source: https://www.darkreading.com/edge/theedge/appsec-concerns-drove-61--of-businesses-to-change-applications/b/d-id/1336934?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook to pay $550m to settle face-tagging suit

A class-action lawsuit against Facebook for scanning a user’s face in photos and offering tagging suggestions looks like it’s finally done churning through the courts.

The upshot: it will pay $550 million to settle the suit, Facebook disclosed in its quarterly earnings report on Wednesday.

Filed in 2015, plaintiffs had claimed that the platform violated the strictest biometric privacy law in the land – Illinois’s Biometric Information Privacy Act (BIPA) – with its tag suggestions tool.

Facebook started using that tool in 2015 to automatically recognize people’s faces in photos and suggest to their friends that they tag them. It did so without users’ permission and without telling them how long it would hang on to their biometrics, the suit contended, squirreling faceprints away in what Facebook has claimed is the largest privately held database of facial recognition data in the world.

In September 2019, Facebook said that it was dumping tag suggestions in favor of the multi-purpose “face recognition” setting, which it made available to all users, along with an opt-out option.

The New York Times referred to the $550 million hit as a “rounding error” for Facebook, which reported that revenue rose 25% to $21 billion in the fourth quarter, compared with a year earlier, while profit increased 7% to $7.3 billion.

Jay Edelson, a lawyer for the Facebook users named in the facial recognition class action, told the Times that the settlement underscored the importance of strong privacy legislation:

From people who are passionate about gun rights to those who care about women’s reproductive issues, the right to participate in society anonymously is something that we cannot afford to lose.

Facebook got off easy. BIPA requires companies to get written permission before collecting a person’s biometrics, be they fingerprints, facial scans or other identifying biological characteristics. It also gives Illinois residents the right to sue companies for up to $5,000 per violation: a fine that could potentially add up to billions of dollars in payouts for tech companies that don’t settle and go on to lose lawsuits filed under the legislation.
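The scale of that exposure is easy to see with a back-of-the-envelope calculation. The class size below is hypothetical, picked only to show how quickly BIPA’s per-violation cap compounds:

```python
# Back-of-the-envelope BIPA exposure: the statute allows up to $5,000
# per violation, so even one violation per class member adds up fast.
per_violation_cap = 5_000            # BIPA's statutory maximum
hypothetical_class_size = 1_000_000  # illustrative only

exposure = per_violation_cap * hypothetical_class_size
print(f"${exposure:,}")  # $5,000,000,000
```

With a million class members and one violation each, the ceiling is already $5 billion — roughly ten times what Facebook actually paid.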

Facebook has fought this lawsuit tooth and nail. In 2016, it tried – and failed – to wriggle out of it, saying that its user agreement stipulates that California law would govern any disputes with the company. Besides, Facebook said in its motion, BIPA doesn’t apply to Facebook’s facial tagging suggestions for photos.

The judge’s response: nope, squared. Going by Illinois law is just fine, and of course BIPA covers faceprints, like it covers all biometrics.

After backlash from Canadian and EU citizens and regulators, Facebook in 2012 had turned off its first incarnation of the tag suggestion feature in Europe and deleted the user-identifying data it already held.

The US has long trailed the EU when it comes to beating Facebook’s facial recognition into submission. However, last year, the country did a bit of catchup when the Federal Trade Commission (FTC) fined Facebook $5 billion for losing control of users’ data.

As part of the new 20-year settlement order, Facebook agreed to delete any existing facial recognition templates and to provide “clear and conspicuous notice” about any new facial recognition uses. The FTC’s order requires Facebook to give clear notice of how it uses facial recognition data and requires that it get consumers’ express consent before “putting that data to a materially different use.”

In September 2019, when Facebook ditched tag suggestions, it introduced face recognition designed to deliver an actual, bona fide opt-in choice for using our faceprints. And if you don’t yet know how to turn it off or on, here’s how:

How to turn face recognition on or off

In Facebook, go to Settings > Privacy Settings. Under ‘Privacy’, tap Face recognition and select Yes or No next to the prompt ‘Do you want Facebook to be able to recognise you in photos and videos?’


Latest Naked Security podcast

LISTEN NOW


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/-vpu7yQz3I0/

Financial tech firms disagree on ban of customer data screen-scraping

For years, financial technology (fintech) companies have used screen-scraping to retrieve customers’ financial data with their consent. Think lenders, financial management apps, personal finance dashboards, and accounting products doing useful things: your budgeting app, say, will use screen-scraping to get at the incoming and outgoing transactions in your bank account, using the information to power its analysis…

…putting your privacy, passcode and other security information in danger of getting lost along the way.
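The mechanics can be sketched in a few lines. Everything below is hypothetical – the credentials, the HTML layout, the field names – but it shows the two things that matter: the third party holds the customer’s actual banking password, and it recovers transactions by parsing whatever HTML the bank happens to serve:

```python
from html.parser import HTMLParser

# Hypothetical: the aggregator logs in to the bank's website with the
# customer's real credentials -- the same ones the customer uses --
# then fetches account pages and parses transactions out of the HTML.
CUSTOMER_CREDS = {"username": "jane.doe", "password": "hunter2"}  # shared in full

SAMPLE_PAGE = """
<table id="transactions">
  <tr><td class="desc">Coffee shop</td><td class="amount">-4.50</td></tr>
  <tr><td class="desc">Salary</td><td class="amount">2500.00</td></tr>
</table>
"""

class TransactionScraper(HTMLParser):
    """Pulls (description, amount) pairs out of the bank's HTML."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._field = None
        self._current = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "td" and cls in ("desc", "amount"):
            self._field = cls  # remember which cell we're inside

    def handle_data(self, data):
        if self._field:
            self._current.append(data.strip())
            self._field = None
            if len(self._current) == 2:  # one full (desc, amount) row
                desc, amount = self._current
                self.rows.append((desc, float(amount)))
                self._current = []

scraper = TransactionScraper()
scraper.feed(SAMPLE_PAGE)
print(scraper.rows)  # [('Coffee shop', -4.5), ('Salary', 2500.0)]
```

The fragility is visible too: if the bank changes its markup, the scraper breaks – and the shared credentials grant full account access, not just read access to a transaction list.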

Because of those potential dangers to people’s privacy and data, many in fintech are urging the Australian government to follow in the footsteps of the European Union (EU) and to ban screen-scraping. But the call is far from unanimous, with some saying that smaller companies just can’t afford the alternatives to get at customer data.

On Thursday, representatives of companies in the fintech industry met with Australia’s Senate Committee of Financial Technology and Regulatory Technology to chime in.

As ZDNet reports, one of the calls for a ban came from Lisa Schutz, founding director of The Regtech Association and CEO of Verifier, who said that her company could use screen-scraping, but it’s chosen not to. That’s because they don’t want to step on her customers’ toes, privacy-wise, she said. Instead, Verifier abides by the 12 principles of Australia’s Privacy Act to access data: the “long way to get the right outcome,” she said, but worth it:

It comes back to what is the 2050 Australia that we want to live in.

The question of banning screen-scraping has arisen thanks to the UK’s Open Banking initiative – a new, more secure way for consumers, including small businesses, to share information. It’s created a standardized way to share data and collect customer consent.

It’s an important security upgrade: one that means that, unlike with screen-scraping, passwords aren’t shared with third-party fintech service providers.
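The difference can be sketched with hypothetical code: under an Open Banking-style flow, the fintech holds a consented, scoped, expiring token issued by the bank, rather than the password itself. The function names and scope strings below are illustrative, not any real API:

```python
import secrets
import time

# Hypothetical sketch of the Open Banking idea: the bank authenticates
# the customer itself, then hands the fintech a scoped, expiring token.
# The customer's password never leaves the bank.
def issue_token(scopes, ttl_seconds):
    return {
        "token": secrets.token_urlsafe(16),
        "scopes": set(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def allow(grant, scope):
    """The bank's API checks scope and expiry on every call."""
    return scope in grant["scopes"] and time.time() < grant["expires_at"]

# Customer consents to read-only transaction access for 90 days.
grant = issue_token(["transactions:read"], ttl_seconds=90 * 24 * 3600)

print(allow(grant, "transactions:read"))  # True: budgeting app can read
print(allow(grant, "payments:write"))     # False: it cannot move money
```

Contrast that with screen-scraping, where the shared password can do anything the customer can, indefinitely, until the customer changes it.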

Some in the fintech industry want to ban screen-scraping outright, but not all. In fact, some argue, the only other option is to develop APIs – a prohibitively expensive proposition for the companies, some of which are pretty small.

Astrid Raetze, general counsel for one of those small companies – Raiz Invest – said that you’ve got the banks on one hand, demanding that screen-scraping be banned, while on the other hand, you’ve got fintechs that aren’t affiliated with banks that have no other alternative but to develop APIs under open banking to access data.

That would entail a lot of resources that they don’t have, she said:

[What it] doesn’t take into consideration is the disparity of resources between the two camps.

If you switch on open banking and turn off screen-scraping […] what you will do is hamstring the fintech industry.

Raetze said that if her company were forced to develop APIs because of a ban on screen-scraping, it would be looking at development costs estimated at a minimum of AU$1 million to AU$2 million, plus 6-12 months to complete.

But, the committee asked her, how can she confidently claim that screen-scraping puts customers and their data at “no risk”?

Because our security is solid and there are no transactions taking place, she said:

We have the same level security and we do not transact on your account, so there is no risk to you.

Another from the pro-screen-scraping camp was Luke Howes, managing director of Illion, who said that a ban on screen-scraping would be “simplistic and misguided”.

I have never seen, in six years, any consumer harm, because it’s safe. Banning it will cripple millions of users and businesses who rely on it. If you ban it, you’ll send an industry back five or 10 years.

But just because smaller fintech startups haven’t bungled data yet doesn’t mean they won’t, the big banks have been saying for years. Jim Routh – MassMutual chief information security officer, former CISO for Aetna, and former global head of application and mobile security for JPMorgan Chase – said back in 2014 in a conversation with American Banker:

Protecting credentials isn’t necessarily high on their priorities.

…a problem, he said, that’s worsened by data aggregators that collect marketing data, such as the device a consumer is using, to understand their behaviors across channels.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/p33_Z7ZunMU/

US Interior Dept extends drone grounding over foreign hacking fears

Now can’t be an easy time to be a professional drone pilot working for the US Department of the Interior (DOI).

After years of enthusiastic expansion, in November 2019 the agency announced the temporary grounding of its fleet of Unmanned Aircraft Systems (UAS) over hacking fears unnamed sources claimed were connected to their manufacture in China or use of Chinese parts.

This week, the DOI doubled down on that order, with Secretary of the Interior David Bernhardt signing a follow-up that will keep the agency’s drones on the ground for another 30 days until a more in-depth security review is completed.

It’s not clear what prompted the need for additional checks beyond a sense of caution. The statement simply noted:

In certain circumstances, information collected during UAS missions has the potential to be valuable to foreign entities, organizations, and governments.

Grounding drones for another month would give the agency time to carry out a cybersecurity assessment to make sure this can’t happen, it continued.

Until the issue is resolved, the only DOI drone flights allowed will be those connected to emergencies – monitoring wildfires and floods, both uses that underscore the importance of drones to the agency’s work.

Investigating drone cybersecurity sounds like a good idea even if how the agency might go about this remains open to speculation.

Drone Utopia

In a separate development last November, the US Department of Justice (DOJ) recommended that drones used by government departments be subjected to a thorough security assessment before use. The latest order is explicit that it’s the foreign dimension the agency is worried about when it specifies:

UAS manufactured by designated foreign-owned companies or UAS with designated foreign-manufactured component.

Easier said than done. In common with almost any other product one might think of, drones are built from a complex mix of hardware and software from across the world.

Much of it might come from China, but not all of it. And even the stuff that doesn’t might involve supply chains that lead who knows where. What’s certain is that many components will not be designed or manufactured in the US.

One answer might be to certify platforms in the same way the US Government does for other types of hardware. However, doing this for a relatively small fleet of drones used by one department would inevitably make them a lot more expensive and less likely to keep up with innovation.

The alternative is for the US to repurpose specialised drone platforms used by the US military, but that could be beyond the budget of a department as small as the DOI.

The practical reality is that while engineers can peer at the software code used by drones, achieving absolute certainty about their underlying design is probably Utopian.

More achievable might be to take a leaf from mainstream cybersecurity and develop or adopt an open source platform which could be studied by the wider security community for security issues.

While complex proprietary technologies such as 5G equipment don’t lend themselves to this approach, drones are another matter.

The DOI seems unlikely to scrap or permanently ground its current drone fleet. At some point they will start flying again. But the hiatus is the perfect moment to reassess the flawed ‘fly and hope’ security approach that has shaped current drone use.



Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/WAwnXRjPMI8/