
The Night Before ‘Breachmas’

What does identity management have to do with Charles Dickens’ classic ‘A Christmas Carol’? A lot more than you think.

In Charles Dickens’ A Christmas Carol, Ebenezer Scrooge — played by Michael Caine in the best version, The Muppet Christmas Carol — is visited by three ghosts who foretell his future based on his past and current actions. Since Scrooge is such a coldhearted person, his future is … grim.

Photo Credit: Buena Vista Pictures

There’s an interesting parallel here: An individual’s cybersecurity hygiene can also predict the cybersecurity future of an entire enterprise. Whether that future is grim or great depends on whether security teams take the lead in correcting earlier, unsafe Internet habits.

The Ghost of Passwords Past
It’s almost 2020: Have you deleted your MySpace profile? If not, it’s worth a visit, no matter how cringey the experience might be. While obsolete social media pages may be a source of nostalgia for individuals, they’re a jackpot for attackers who mine old sites for information that can be used to answer security questions. What was the model of your first car? Check Tumblr. Who was your first crush? Check Friendster. What’s a likely password? Check your AOL Instant Messenger name. If that information is there for you, it’s also likely there for employees across your entire organization.

A savvy attacker could trigger a “forgot password?” flow and change a team member’s password simply by entering security answers discovered by perusing that person’s Internet presence. There’s also an exceptional amount of information lingering about each of us in old forums, sites, and social media. That’s nothing short of chilling.

The Ghost of Passwords Present
There’s another component to this digital pillaging: reusing passwords. Enterprises spend untold amounts of money hardening their digital infrastructure, but all that security can be undone with valid credentials. Is the password you’re currently using similar to passwords you used in high school? Possibly. Count how many employees are currently using logins across your organization and then consider how many of them are likely reusing the same password from app to app. That number is higher than you may realize. Even the most security-minded of us are guilty of reusing passwords in the interest of saving time and frustration.

Old passwords can be bought for pennies on the Dark Web, but they can also be found by cleverly infiltrating old websites that don’t have today’s security. It’s unlikely LiveJournal, for instance, has the same security as Cisco. That means an employee’s old login can be determined fairly easily, and an attacker can try that login and variations of it to attempt logging into an enterprise system. The implications of that are downright haunting. According to a study from the Ponemon Institute, a negligent employee costs the organization $283,281 per incident. Worse, attackers may not even make their presence known, choosing instead to repeatedly log in with legitimate credentials and silently leech information for years at a time.

The Ghost of Passwords Future
When the attackers are finally discovered, the results can be disastrous. Consider the Flipboard breach, for instance, which could have affected over 100 million users (the extent isn’t yet known). The breach was blamed on poor cyber hygiene. Users reused their passwords on numerous sites and systems, and an attacker likely obtained a user’s password from an account with weaker security. Then, it was simply a matter of using credential stuffing to automate the attack process and enter passwords into a variety of sites until one worked.

That’s not the only example. Reusing passwords that have been involved in previous breaches results in still more compromises, as with the 44 million Microsoft and Azure AD accounts found earlier this month to be using credentials exposed in prior breaches. It’s a practical reality that an employee’s old Yahoo login could be the very thing to take down a system guarding millions of customers’ sensitive information.

Outsmarting the Ghosts
First, scrub your Internet presence. Delete old social media accounts and omit personal information from LinkedIn and other current social media.

Next, start changing passwords. Make sure they’re completely different from any former passwords. In fact, don’t tie them to any facet of your life at all. For instance, resist the temptation to use your dog’s name.
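One practical aid here – a minimal sketch, not any vendor’s official tooling – is to screen new passwords against known-breach corpora such as Have I Been Pwned’s Pwned Passwords service. Its range API uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine, and the suffix matching happens locally. A rough Java example (assumes Java 11+ for java.net.http; the class and method names are ours):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PwnedPasswordCheck {
   // Returns how many times the password appears in known breaches (0 = not found)
   static long breachCount(String password) throws Exception {
      // SHA-1 the password; only the first 5 hex characters are ever sent out
      byte[] digest = MessageDigest.getInstance("SHA-1")
            .digest(password.getBytes(StandardCharsets.UTF_8));
      StringBuilder sb = new StringBuilder();
      for (byte b : digest) sb.append(String.format("%02X", b));
      String prefix = sb.substring(0, 5), suffix = sb.substring(5);

      HttpResponse<String> resp = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create(
                  "https://api.pwnedpasswords.com/range/" + prefix)).build(),
            HttpResponse.BodyHandlers.ofString());

      // Each response line is "SUFFIX:COUNT"; we match our suffix locally
      for (String line : resp.body().split("\r?\n")) {
         String[] parts = line.split(":");
         if (parts[0].equalsIgnoreCase(suffix)) return Long.parseLong(parts[1].trim());
      }
      return 0;
   }

   public static void main(String[] args) throws Exception {
      System.out.println(breachCount("correct horse battery staple"));
   }
}

If the count comes back nonzero, that password (or a variant someone else also chose) is already circulating in cracking dictionaries and shouldn’t be used anywhere.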

Finally, get your employees to do the same. Cybersecurity hygiene starts with cybersecurity education: If people understand the reason why they’re being asked to be so diligent about making unique, strong passwords, they’ll be much more likely to comply. And while you can’t expect them to delete their old MySpace account, you can make them aware of the dangers of leaving their personal information in the open.

In A Christmas Carol, Scrooge learns from his past mistakes and mends his ways, resulting in a happy Christmas and a hopeful future. May we all learn from our past Internet selves and herald a brighter, more secure Internet of tomorrow.


Matt Davey is the COO (Chief Operations Optimist) at 1Password, a password manager that secures identities and sensitive data for enterprises and their employees. In a previous life working with agencies and financial companies, Matt has seen first-hand how important security …

Article source: https://www.darkreading.com/attacks-breaches/the-night-before-breachmas/a/d-id/1336643?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

To protect data and code in the age of hybrid cloud, you can always turn to Intel SGX

Sponsored Data and code are the lifeblood of digital organisations, and increasingly they are shared with others in order to achieve specific business goals. As such, data and code must be protected no matter where the workloads run, be they in on-premises data centers, on remote cloud servers, or at the edge of the network.

Take medical images processed in the cloud, for example: they must remain encrypted throughout processing for security and privacy. Banks need to share insights into financial data without sharing the underlying confidential data itself. Other organisations may want to process data using artificial intelligence and machine learning while keeping secret the algorithms that turn that data into useful analysis.

While encrypting data at rest or in transit is commonplace, encrypting sensitive data while it is actively in-use in memory is the latest, and possibly most challenging, step on the way to a fully encrypted data lifecycle.

New security model army

One new security model that is growing increasingly popular as a way of protecting data in use is confidential computing. This model uses hardware protections to isolate sensitive data.

Confidential computing changes how code and data are processed at the hardware level and changes the structure of applications. Using the confidential computing model, encrypted data can be processed in the hardware without being exposed to the rest of the system.

A crucial part of that is Intel® Software Guard Extensions (Intel® SGX). Introduced for client platforms in 2015 and brought to the data center in 2017, it was developed as a means of protecting the confidentiality and integrity of code and data. It does this by creating encrypted enclaves that help safeguard information and code whilst in use. This year, Intel® submitted the SGX software development kit (SDK) to the Linux Foundation’s new Confidential Computing Consortium to help secure data in applications and the cloud.

Don’t trust me, trust the extensions

To protect data in use, applications can employ something called Trusted Execution Environments (TEEs) running inside a processor. The fundamental principle here is hardware isolation between that TEE – where only trusted code is executed on selected data – and the host device’s operating environment. Within a TEE, data is safely decrypted, processed, and re-encrypted. TEEs also provide for the secure execution of authorised software, known as trusted applications or TAs, and protect the execution of authenticated code.

To keep data safe, TEEs use a secure area of memory and the processor that is isolated from the rest of a system’s software stack. Only trusted TAs are allowed to run inside this environment, a system that is cryptographically enforced. Applications using a TEE can be divided into a trusted part (the TA) and an untrusted part (the rest of the application that runs as normal), allowing the developer great control over the exact portions of data needing advanced protections.

Unpacking Intel SGX

The goal of the Confidential Computing Consortium is to establish common, open-source standards and tools for the development of TEEs and TAs.

This is where Intel® has stepped in with Intel® SGX. It offers hardware-based memory encryption that isolates specific application code and data in memory. It works by allowing developers to create TEEs in hardware. This application-layer TEE can be used to help protect the confidentiality and integrity of customer data and code while it’s processed in the public cloud, encrypt enterprise blockchain payloads, enable machine learning across data sources, significantly scale key management solutions, and much more.

This technology helps minimise the attack surface of applications by setting aside parts of the hardware that are private and that are reserved exclusively for the code and data. This protects against direct assaults on the executing code or the data that are stored in memory.

To achieve this, Intel® SGX can put application code and data into hardened enclaves or trusted execution modules – encrypted memory areas inside an application’s address space. Code in the enclave is trusted as it cannot be altered by other apps or malware.

Intel® SGX provides a group of security-related instructions, built into the company’s Intel® Core™ and Xeon® processors. Intel provides a software development kit as a foundation for low-level access to the feature set with higher-level libraries that open it up to other cloud-optimized development languages.

Partition this

Any number of enclaves can be created to support distributed architectures. Some or all parts of the application can be run inside an enclave.

Code and data are designed to remain encrypted even if the operating system, other applications, or the cloud stack have been compromised by hackers. This data remains safe even if an attacker has full execution control over the platform outside the enclave.

Should an enclave be somehow modified by malicious software, the CPU will detect it and won’t load the application. Any attempt to access the enclave memory is denied by the processor, even attempts made by privileged users. This detection stops encrypted code and data in the enclave from being exposed.

Where might enterprise developers use Intel® SGX? A couple of specific scenarios spring to mind. Key management is one, with enclaves used in the process of managing cryptographic keys and providing HSM-like functionality. Developers can enhance the privacy of analytics workloads, as Intel® SGX will let you isolate the multi-party joint computation of sensitive data. Finally, there’s digital wallets with secure enclaves able to help protect financial payments and transactions. There are more areas, but this is just a sampler.

Separate – and secure

Intel® SGX enables applications to be significantly more secure in today’s world of distributed computing because it provides a higher level of isolation and attestation for program code, data and IP. That’s going to be important for a range of applications from machine learning to media streaming, and it means stronger protection for financial data, healthcare information, and user smartphone privacy, whether the workload runs on-prem, in hybrid cloud, or on the periphery of the IoT world.

Sponsored by Intel®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/24/intel_data_security_hybrid_cloud/

Santa and the Zero-Trust Model: A Christmas Story

How would the world’s most generous elf operate in a world of zero-trust security? A group of cybersecurity experts lets us know.

(image by olly, via Adobe Stock)

On Christmas Eve, snow will fall, Yule logs will blaze, visions of sugarplums will dance in children’s heads, and in the eyes of zero-trust experts, countless security breaches will happen in homes around the world.

Zero-trust security has blanketed IT like the snow Bing Crosby sang about. Based on the idea of maintaining strict access controls and not trusting anyone or any component by default — even those already inside the network perimeter — zero trust seeks to prevent intrusion wherever possible and minimize the damage from intrusions that do occur.

Each Christmas Eve, though, a party we’ve never met and know only by reputation enters our homes and leaves packages. The question Dark Reading put to security experts is whether this “Santa Claus” can be made compliant with the requirements of zero-trust security — or whether modern security might mean the end of children’s dreams.

“For far too many years, we’ve given carte blanche to Santa Claus to ignore basic security best practices — not to mention safety issues bringing potential carcinogens with him down the chimney,” says Willy Leichter, vice president at Virsec. “Simply saying we ‘trust’ the big guy is dangerous and naïve.”

“Santa’s visit has been invited, typically, by one of the junior members of the household. This junior staffer is likely to have also given Santa a list of items that can be used to bribe his way through security,” points out Kevin Sheu, vice president of product marketing at Vectra.

This reality makes it likely, experts say, that Santa Claus will be able to make his way through the outer perimeter, so the focus shifts to minimizing potential damage. How might that work when it comes to the jolly, ol’ elf?

Background Basics
“First and foremost, Santa needs a background check before we go any further,” says Tyler Reguly, manager of security research and development at Tripwire. “I want to know everything about where this magical elf that makes it around the globe in 24 hours has been. I want to know everything about him.”

As if getting deep background on a possibly imaginary individual weren’t enough of a challenge, the required knowledge doesn’t stop with Santa himself. Reguly points out that Santa seems to have an extensive supply chain, and that the supply chain and support staff should come under scrutiny as well. That means Mrs. Claus, the sly Elf on the Shelf, and the elves at the North Pole manufacturing and shipping facility must all be accounted for.

When it comes to Santa authentication, Sheu points out that zero trust’s evolution means a simple one-time event might not be enough. Instead, he says, it’s a one-time decision followed by long-term monitoring to make sure the authorization remains appropriate. After all, the Santa authenticated at the North Pole might or might not be the Santa who shows up on our roof — and not everyone is willing to outsource the interim security to NORAD’s Santa Tracker.

Santa Supply Chain
Other experts brought up the fact that Santa himself is only the most visible end of a very long supply chain. “Do we know that Santa has effectively assessed the reliability of his elves and of the production process?” asks Bob Maley, CSO of NormShield. “Have the reindeer been trained to land on the roof safely?”

Maley suggests that the level of supply chain verification can be subject to consideration of just how critical the risk is, and points out that, historically, the risk of Santa-inflicted damage is low. Still, that doesn’t mean Santa should necessarily be given free rein within the household.

“There’s got to be some clear communication of who’s arriving and an announcement of who he is, with confirmation that he is who he says he is before he even lands,” says Reguly. “And then, assuming you have a chimney, I think the next step, of course, has to be authentication at the chimney.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/edge/theedge/santa-and-the-zero-trust-model-a-christmas-story/b/d-id/1336684?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Serious Security: The decade-ending “Y2K bug” that wasn’t

A curious Naked Security reader alerted us to what they thought might be a “Y2K-like bug” in Java’s date handling.

The cause of the alarm was a Twitter thread that started with a headline tweet saying, “PSA: TIS THE SEASON TO CHECK YOUR FORMATTERS, PEOPLE.”

As @NmVAson points out, the problem comes when you ask the Java DateTimeFormatter library to tell you the current YYYY, a commonly-used programmers’ abbreviation meaning “the year expressed as four digits”.

For example, when programmers abbreviate the world’s commonly used date formats, they often use a format string to denote the layout they want, something like this:

Layout                      Format string    Example
------------------------    -------------    ----------
US style (Dec 29, 2019)     MM/DD/YYYY       12/29/2019
Euro style (29 Dec 2019)    DD/MM/YYYY       29/12/2019
RFC 3339 (2019-12-29)       YYYY-MM-DD       2019-12-29

In fact, many programming languages provide code libraries that help you to print out dates using format strings like those above, so that you can automatically adapt the output of your software to suit each user’s personal preference.

The problem here is that there are many different, and incompatible, date-handling libraries, from the system’s own strftime() all the way to Java’s all-singing, all-dancing DateTimeFormatter.

The abovementioned Java library, amongst others, lets you conveniently format dates with the three strings shown above, giving you plausible results like this:

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class CarefulWithThatDateEugene {
   private static void tryit(int Y, int M, int D, String pat) {
      DateTimeFormatter fmt = DateTimeFormatter.ofPattern(pat);
      LocalDate         dat = LocalDate.of(Y,M,D);
      String            str = fmt.format(dat);
      System.out.printf("Y=%04d M=%02d D=%02d " +
                        "formatted with " +
                        ""%s" - %sn",Y,M,D,pat,str);
   }
   public static void main(String[] args){
      tryit(2020,01,20,"MM/DD/YYYY");
      tryit(2020,01,21,"DD/MM/YYYY");
      tryit(2020,01,22,"YYYY-MM-DD");
   }
}

//---------------

Y=2020 M=01 D=20 formatted with "MM/DD/YYYY" - 01/20/2020
Y=2020 M=01 D=21 formatted with "DD/MM/YYYY" - 21/01/2020
Y=2020 M=01 D=22 formatted with "YYYY-MM-DD" - 2020-01-22

So far, so good!

But if you try this in the middle of the year, you get:

Y=2020 M=05 D=17 formatted with "MM/DD/YYYY" - 05/138/2020
Y=2020 M=05 D=18 formatted with "DD/MM/YYYY" - 139/05/2020
Y=2020 M=05 D=19 formatted with "YYYY-MM-DD" - 2020-05-140

An easily spotted bug

What?!?

Note the weird day numbers that are way greater than 31, even though the longest months in the year only have 31 days.

This should get you scrambling back to the documentation, or at least to your favourite search engine, where a cursory glance will reveal that the abbreviation DD actually means day of the year rather than day of the month.

So DD and dd only produce the same answer in January, after which the day of the year goes to 32 while the day of the month resets to 01 for the first of February. (To be clear, on 31 December the day of the year is 365, or 366 in a leap year, while the day of the month is 31.)

In other words, even cursory testing of dates outside January will show up this format-string bug, so not many people make it.

What you want is the format string dd, as follows:

Y=2020 M=05 D=17 formatted with "MM/dd/YYYY" - 05/17/2020
Y=2020 M=05 D=18 formatted with "dd/MM/YYYY" - 18/05/2020
Y=2020 M=05 D=19 formatted with "YYYY-MM-dd" - 2020-05-19
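If you want to check the two pattern letters side by side, here’s a minimal standalone sketch (the class name is ours; in java.time patterns, upper-case D is day-of-year and lower-case d is day-of-month):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DayOfYearVersusDayOfMonth {
   public static void main(String[] args) {
      LocalDate midMay = LocalDate.of(2020, 5, 17);
      // 'D' is day-of-year: 31+29+31+30+17 = 138 in the leap year 2020
      System.out.println(DateTimeFormatter.ofPattern("D").format(midMay));  // prints 138
      // 'd' is day-of-month: what date layouts almost always want
      System.out.println(DateTimeFormatter.ofPattern("dd").format(midMay)); // prints 17
   }
}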

A bug that’s harder to spot

Except that you’re still wrong, because YYYY doesn’t mean “the Christian-era year in four digits”.

That’s denoted, in the Java library (and other full-fat date libraries, too) as the lower-case text string yyyy.

In contrast, YYYY denotes what’s known as the week-based year, something that accountants rely on to avoid splitting weeks – and thus the company payroll – between two different years.

Inconveniently for farmers, priests, astronomers and businesspeople alike, the solar year doesn’t divide exactly into days, and therefore doesn’t divide neatly into weeks or months. (The lunar month isn’t commensurate with the solar year, either, to make things even more complicated.)

As every bookkeeper knows, there aren’t exactly 52 weeks in a year, because there are always one or two days left over at the end.

That’s a consequence of the fact that there are 365 (or 366) days in a year (or a leap year); that there are 7 days in a week; and that 365/7 = 52 remainder 1 (or 366/7 = 52 remainder 2).

For accounting convenience, therefore, it’s conventional to treat some years as having 52 full weeks, and others as having 53 weeks, which keeps things like weekly revenue plans and weekly payroll evened out in the long run.

In other words, in some years, “payroll week 01” actually starts just before New Year’s Day; in other years, it doesn’t start until a few days into the first week of the New Year.

There’s a standard for that

There’s a standard for that, defined in the ISO-8601 calendar system, described in Java’s documentation as “the modern civil calendar system used today in most of the world.”

ISO-8601 makes some assumptions, including that:

  • The first day of every week is Monday.
  • The week that is split at the end of the year is assigned to the year in which more than half of the days of that week occur.

The second assumption seems reasonable, because it means that you always have more payroll days in the correct year than in the wrong one.

For 2015, for example, there were four days left over after week 52, so the first three days of 2016 were “sucked back” into the payroll year 2015:

Sun 2015-12-27  - Payroll week 52 of 2015

Mon 2015-12-28  - Payroll week 53 of 2015 
Tue 2015-12-29  - Payroll week 53 of 2015 
Wed 2015-12-30  - Payroll week 53 of 2015
Thu 2015-12-31  - Payroll week 53 of 2015 
-------------NEW YEAR---------------------
Fri 2016-01-01  - Payroll week 53 of 2015
Sat 2016-01-02  - Payroll week 53 of 2015
Sun 2016-01-03  - Payroll week 53 of 2015

Mon 2016-01-04  - Payroll week 01 of 2016

But in 2025, it’s the other way round, with just three days left over at the end of 2025 that get “shoved forwards” into the payroll year 2026:

Sun 2025-12-28  - Payroll week 52 of 2025 
 
Mon 2025-12-29  - Payroll week 01 of 2026 
Tue 2025-12-30  - Payroll week 01 of 2026 
Wed 2025-12-31  - Payroll week 01 of 2026 
-------------NEW YEAR---------------------
Thu 2026-01-01  - Payroll week 01 of 2026 
Fri 2026-01-02  - Payroll week 01 of 2026 
Sat 2026-01-03  - Payroll week 01 of 2026 
Sun 2026-01-04  - Payroll week 01 of 2026
 
Mon 2026-01-05  - Payroll week 02 of 2026 

Big date bug ahead!

Can you see where this is going?

If you’ve got a date format string like MM/dd/YYYY or YYYY-MM-dd at any point in any software in which you are using ISO-8601 date formatting libraries…

…you will inevitably encounter bugs that print out dates with the wrong year, either at the end of one year or the start of the next, except in years when New Year’s Day is on a Monday.

(When 31 December is on a Sunday and 01 January is on a Monday, the ISO-8601 “week splitting” process works neatly, with 0 days at the end of the year left over.)

Here are the dud dates you’d see for 2018:

Y=2018 M=12 D=30 formatted with "YYYY-MM-dd" - 2018-12-30  +correct+
Y=2018 M=12 D=31 formatted with "YYYY-MM-dd" - 2019-12-31  *WRONG* (one year ahead)
-------------------------------NEW YEAR------------------------------
Y=2019 M=01 D=01 formatted with "YYYY-MM-dd" - 2019-01-01  +correct+

For 2019:

Y=2019 M=12 D=29 formatted with "YYYY-MM-dd" - 2019-12-29  +correct+
Y=2019 M=12 D=30 formatted with "YYYY-MM-dd" - 2020-12-30  *WRONG* (one year ahead)
Y=2019 M=12 D=31 formatted with "YYYY-MM-dd" - 2020-12-31  *WRONG* (one year ahead)
-------------------------------NEW YEAR------------------------------
Y=2020 M=01 D=01 formatted with "YYYY-MM-dd" - 2020-01-01  +correct+

For 2020:

Y=2020 M=12 D=31 formatted with "YYYY-MM-dd" - 2020-12-31  +correct+
-------------------------------NEW YEAR------------------------------
Y=2021 M=01 D=01 formatted with "YYYY-MM-dd" - 2020-01-01  *WRONG* (one year behind)
Y=2021 M=01 D=02 formatted with "YYYY-MM-dd" - 2020-01-02  *WRONG* (one year behind)
Y=2021 M=01 D=03 formatted with "YYYY-MM-dd" - 2020-01-03  *WRONG* (one year behind)
Y=2021 M=01 D=04 formatted with "YYYY-MM-dd" - 2021-01-04  +correct+
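You don’t have to wait for a year-end to see this happening: java.time exposes the ISO week-based year directly via the standard IsoFields class, so a few lines of code (a quick sketch, class name ours; strictly, the Y pattern letter uses the locale’s week definition, which matches ISO-8601 in most European locales) will show the mismatch at any year boundary:

import java.time.LocalDate;
import java.time.temporal.IsoFields;

public class WeekBasedYearPeek {
   public static void main(String[] args) {
      // Walk across the 2020/2021 boundary, where YYYY lags a year behind
      for (LocalDate d = LocalDate.of(2020, 12, 30);
           !d.isAfter(LocalDate.of(2021, 1, 5));
           d = d.plusDays(1)) {
         System.out.printf("%s  calendar year (yyyy) = %d  week-based year (YYYY) = %d%n",
                           d, d.getYear(), d.get(IsoFields.WEEK_BASED_YEAR));
      }
   }
}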

If the Twitter thread started by @NmVAson is anything to go by, quite a few programmers still seem to be making this sort of mistake, which implies that they’re not testing their code very well.

As we mentioned above, writing DD by mistake instead of dd seems to be an unusual bug, presumably because the error shows up for roughly 90% of the year (every day after 31 January), and stands out dramatically for more than 70% of the year thanks to the three-digit days.

Admittedly, writing YYYY by mistake instead of yyyy produces bugs on fewer than 1% of the dates in a year, and not at all in one year every seven, but even with a 1% error rate, there’s not really any excuse for failing to spot that you have made this blunder.

You could probably excuse not catching a one-in-2³² error as bad luck; you might even get away with “bad luck” to excuse an error rate of one-in-a-million…

… but at 1% (especially when those one-percenters are right at the proverbial year-end), you really ought not to be letting this sort of bug escape your notice.

What to do?

If you are a programmer or a project manager responsible for code that handles dates – and anything that does any sort of logging almost certainly needs to do just that – then please make sure that you:

  • Don’t make assumptions. Just because upper-case YYYY denotes the calendar year in some places doesn’t mean it always does.
  • Read the full manual, or RTFM for short. Sadly, TFM for ISO-8601 is almost absurdly complicated, but that should be your problem, not your users’ problem – with power comes responsibility.
  • Review your code properly. Remember, the reviewer needs to RTFM as well.
  • Test your code thoroughly. Anyone with ISO-8601 YYYY bugs really doesn’t have a good and varied test set, given that this bug shows up around the end of six years in every seven. A cheap year-boundary spot check like the sketch below would have caught it.
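For what it’s worth, here’s the sort of throwaway check we have in mind – a sketch, not a substitute for a real test suite, and the class name is ours – that hammers the formatter on both sides of every New Year:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class YearBoundarySpotCheck {
   public static void main(String[] args) {
      // Swap "yyyy" for "YYYY" here and the check fails on the first bad year
      DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd");
      for (int year = 2015; year <= 2035; year++) {
         LocalDate dec31 = LocalDate.of(year, 12, 31);
         LocalDate jan01 = LocalDate.of(year + 1, 1, 1);
         // The formatted year must match the calendar year on both sides
         if (!fmt.format(dec31).startsWith(year + "-")
               || !fmt.format(jan01).startsWith((year + 1) + "-")) {
            throw new AssertionError("year-boundary bug near " + dec31);
         }
      }
      System.out.println("yyyy-MM-dd survives every year boundary tested");
   }
}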

We take calendars for granted, but their design and use is a mixture of art and science that has been going on for millennia, and still requires great care and attention.

Especially in software!


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/hCkMBFmx46A/

Mastercard Announces Plan to Purchase RiskRecon

The acquisition is expected to close in the first quarter of 2020.

Mastercard has announced that it will purchase RiskRecon, a company that builds security products and services on an artificial intelligence and data analytics platform.

In the statement announcing the planned acquisition, Mastercard said that it will use RiskRecon’s technology to complement its existing technology to protect financial institutions, merchants, and governments.

Financial terms were not disclosed for the purchase agreement, which is expected to close in the first quarter of 2020.

For more, read here.



Article source: https://www.darkreading.com/risk/mastercard-announces-plan-to-purchase-riskrecon/d/d-id/1336694?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Citrix Urges Firms to Harden Configurations After Flaw Report

A vulnerability in two of the company’s appliances opens 80,000 networks up for exploitation.

A vulnerability in two network appliances made by Citrix and used by an estimated 80,000 companies worldwide could be exploited to allow an attacker to gain access to a firm’s local network from the internet, according to advisories published today. 

The vulnerability (CVE-2019-19781), which affects the Citrix Application Delivery Controller and Citrix Gateway, allows an unauthenticated attacker to run arbitrary code on the appliances, according to Citrix’s advisory on the issue. While few details of the vulnerability have been released, Citrix did document several mitigation steps that will protect users but has not yet released a patch.

Because it is so easy to exploit and does not require authentication, the vulnerability is of the highest criticality, says Mikhail Klyuchnikov, a Web-application security specialist at vulnerability assessment firm Positive Technologies and one of the three researchers credited with finding the issue.

“It’s really easy to exploit, [and] it’s very reliable,” Klyuchnikov says. “[We don’t] know if it is being used in the wild.”

Citrix appliances are often used as gateways for application load balancing and remote access. Judging from the mitigation steps, the Citrix issue appears to affect the virtual private networking component of the appliances’ software. 

Of the 80,000 companies in 158 countries potentially at risk, the plurality — 38% — are based in the United States. An additional 9% are in Germany, 6% in the United Kingdom, 5% in the Netherlands, and 4% in Australia.

“Citrix applications are widely used in corporate networks,” said Dmitry Serebryannikov, director of the security audit department with Positive Technologies, in a statement. “This includes their use for providing terminal access of employees to internal company applications from any device via the Internet. Considering the high risk brought by the discovered vulnerability, and how widespread Citrix software is in the business community, we recommend information security professionals take immediate steps to mitigate the threat.”

Positive Technologies reported the vulnerability to Citrix in early December, according to the firm. Citrix responded quickly with risk mitigation measures, the company said. An attack can be completed in less than a minute, and some Citrix products have been vulnerable for more than five years, Positive Technologies stated.

The appliances, many sold under the NetScaler brand, are a common way to gain remote access to networks or applications, Klyuchnikov says.

“Using Citrix NetScaler to access the internal network is common practice because this software has the ability to implement SSL VPN features,” he says. “This feature, for example, can be used to access the corporate network by employees who work remotely.”

In its advisory on the vulnerability, security firm Symantec recommends companies block external access at the edge of the network and use intrusion detection systems to monitor links that need to be accessible. 

“If global access isn’t needed, filter access to the affected computer at the network boundary,” Symantec stated. “Restricting access to only trusted computers and networks might greatly reduce the likelihood of successful exploits.”

This is not the first time Citrix has had to deal with a serious security weakness. In March, the FBI notified the company that attackers had breached its network and downloaded business documents.

With the latest security vulnerability, two other security experts — Gianlorenzo Cipparrone and Miguel Gonzalez of online betting service Paddy Power Betfair plc — are credited with the discovery of the issue. 

Citrix did not respond to an e-mail requesting comment.


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline …

Article source: https://www.darkreading.com/vulnerabilities---threats/citrix-urges-firms-to-harden-configurations-after-flaw-report/d/d-id/1336695?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Facebook will stop mining contacts with your 2FA number

Did you know that when you use your phone to authenticate your Facebook login, the company feeds the number into its friend suggestions feature? Neither did most other people until the social media giant told Reuters about it this week.

Facebook operates a two-factor authentication (2FA) system that lets users add a second authentication channel to their account. Instead of relying solely on a username and password, they can also set their account to require a login code from a third-party authentication app, or a code sent via SMS text message to their phone.

It’s the phone number part that’s a problem.

Facebook clearly likes to use as much of your personal data as it feels it can, and that includes the phone number linked to your 2FA setting. A study by researchers at Princeton and Northeastern universities released in May 2018 found that the company had been using these 2FA phone numbers to serve advertisements. What’s worse is that you couldn’t register for the 2FA service without a phone number until Facebook changed its policy in May 2018.

When it fined Facebook $5bn in July 2019, the FTC also made it promise not to do that anymore. The 20-year settlement order that the Commission submitted said that Facebook:

[…] shall not use for the purpose of serving advertisements, or share with any Covered Third Party for such purpose, any telephone number that Respondent has identified through its source tagging system as being obtained from a User prior to the effective date of this Order for the specific purpose of enabling an account security feature designed to protect against unauthorized account access (i.e., two-factor authentication, password recovery, and login alerts).

So it stopped. So far, so good. But in an interview with Reuters, Facebook’s chief privacy officer Michel Protti explained that the company had also been feeding those numbers into its ‘people you may know’ feature, which suggests friends for you to connect with on the platform.

This is all part of a wide-ranging effort to improve the company’s privacy, Protti told Reuters. How safe does it make you feel? A lot of people will have had no idea that it was using people’s 2FA details in this way. You can file this little gem under “you were doing what, now?”

Facebook will flip the off-switch on that data usage over the next few months, beginning in Ecuador, Ethiopia, Pakistan, Libya and Cambodia next week and going global next year.

Reuters said that if you’ve already given the social media platform your number as part of the 2FA service then the change won’t be retroactive – you’ll have to go into your settings manually, delete your number, and enter it again.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/8FHvy_UlUI0/

Congress passes anti-robocall bill

A bill to punish robocallers has finished its passage through Congress and is expected to become law any day now.

US phones now get 200m robocalls each day, and lawmakers have had enough. They hope that the Pallone-Thune Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act currently on its way to the President’s desk will put a stop to it.

The bill, introduced in the Senate on 16 January 2019, already passed a Senate vote in May before going to the House of Representatives, where it finally passed with amendments in early December. That sent it back to the Senate, which passed it with a final vote, meaning that once the President signs it, it will be law.

What will this Act do to stop robocallers from polluting US phones with timeshare offers, payday loan scams and other predatory messages? The headline is the $10,000 penalty per violation that the Federal Communications Commission (FCC) could impose on them. It would also force carriers to use a call authentication framework called Secure Telephone Identity Revisited and Signature-based Handling of Asserted information using toKENs (STIR/SHAKEN).

These two protocols use digital certificates to ensure that the number showing up on your phone really is the number calling you. Without that technology, it’s simple for scammers to pretend they’re calling from any number. This is a partial solution, though, as it only identifies legitimate callers, not scammers. Neither can it trace those bad actors when they do call.
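Under the hood, the originating carrier attaches a signed token – a “PASSporT”, per RFCs 8225 and 8588 – to the call’s SIP signalling, and the terminating carrier checks the signature against the originating carrier’s certificate before deciding how much to trust the caller ID. Here’s a rough sketch of the claims such a token carries; the numbers and identifiers below are invented for illustration, and the signing step is omitted:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PassportPeek {
   public static void main(String[] args) {
      // Hypothetical SHAKEN PASSporT claims, the part the originating carrier signs
      String claims = "{\"attest\":\"A\","                   // 'A' = full attestation
                    + "\"orig\":{\"tn\":\"12025550100\"},"   // asserted calling number
                    + "\"dest\":{\"tn\":[\"12025550199\"]}," // called number
                    + "\"iat\":1577836800,"                  // issued-at timestamp
                    + "\"origid\":\"hypothetical-id\"}";     // carrier-assigned origination ID

      // In SIP signalling this travels base64url-encoded, JWT-style
      String encoded = Base64.getUrlEncoder().withoutPadding()
            .encodeToString(claims.getBytes(StandardCharsets.UTF_8));
      System.out.println(encoded);

      // The terminating side decodes it, then verifies the ES256 signature (not shown)
      System.out.println(new String(Base64.getUrlDecoder().decode(encoded),
            StandardCharsets.UTF_8));
   }
}

The attest claim is the part that matters for enforcement: “A” means the carrier knows the customer and vouches for the number, while “B” and “C” signal progressively weaker attestation.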

To cope with the tracking problem, the Act does provide for a consortium of private companies that will focus on tracing robocalls back to their senders. The Commission can take action against voice carriers that don’t participate.

The law will also force the FCC to review its policies on how carriers sell lists of telephone numbers, examining registration and compliance obligations to avoid bad actors getting hold of number lists. It will also require the Commission to create a database of disconnected telephone numbers, to stop companies calling their new owners.

Alongside general nuisance robocallers, the law singles out some specific attacks and victims for special treatment. It explicitly calls upon the FCC to tackle one-ring scams, in which a bot calls long enough for the victim’s phone to ring just once. The curious victim then returns the call using the number that showed up on caller ID, incurring charges. The act also establishes a hospital robocall protection group to tackle the rising problem of robocalls that jam hospital lines.

The real question is whether the FCC will step up to the plate and use the law to tackle robocallers. It already imposes stiff penalties, sometimes reaching into the millions of dollars, but isn’t too keen on collecting them. A March 2019 Wall Street Journal investigation into the Commission’s follow-through on fines found that it had collected just $6,790 of the $208m in penalties that it imposed on robocallers since 2015.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/udpJFCXhb-g/

Smartphone location data can be used to identify and track anyone

In today’s smartphone economy, hiding your location has become a major challenge.

At any moment, someone knows where you are, or have been, and they might even be able to work out where you will go next.

The work of government? Google? Advertising companies? Or perhaps Facebook, which this week was hauled up by US Senators who think the company is tracking smartphone users’ locations despite having apparently promised not to?

While these might be tracking your location, according to the New York Times Privacy Project it’s the entities nobody has heard of that should perhaps worry us more.

Its researchers know this, they say, because earlier this year the NYT’s Privacy Project got its hands on a large data set leaked to it by unnamed sources at a location data company.

The data contains 50 billion location pings generated by the smartphones of 12 million Americans in cities including New York, Washington, San Francisco, and Los Angeles during 2016 and 2017.

This looks like a first. To date, almost all that is known about how location data is collected and used is based on the capabilities of the technology and inferences made from the business models of the companies concerned.

The research demonstrates what’s really going on, in new detail. One way to understand it is to view the visuals generated by the NYT to explore the deeper patterns the data can be coaxed into revealing.

For instance, the activity map showing a “senior Defense Department official and his wife” as they attended the Women’s March in 2017.

In another example, the researchers were able to pick out a random smartphone spotted in Central Park and track the owner’s entire movement history across New York over a period of up to two years.

Following the movements of defence officials, police officers, lawyers? No problem. They even traced smartphone pings from workers inside the Pentagon.

As the NYT’s reporters put it: “One search turned up more than a dozen people visiting the Playboy Mansion, some overnight. Without much effort we spotted visitors to the estates of Johnny Depp, Tiger Woods and Arnold Schwarzenegger, connecting the devices’ owners to the residences indefinitely.”

Peek-a-boo

Takeaway number one is that very few people realise how much can be inferred about someone, including people that hackers might be very interested in, simply by noticing the location of their smartphone through time.

A second is that this data doesn’t appear to be as anonymous as companies claim. Indeed, if this evidence holds across large numbers of smartphone users, one might even conclude that the anonymity defence is somewhere between evasion and an outright lie.

The company whose data was studied isn’t named but the NYT offers a list of companies which it says are in the same location data business. The data is collected using GPS, but also via other sources such as Bluetooth beacons (small sensors hidden in stores and malls) and, presumably, Wi-Fi and cell base station proximity.

Some companies say they don’t sell data, but because there’s little regulation standing in the way beyond privacy policies few people read, it’s impossible to be sure – the data is privately collected and, in the US at least, private property.

Short of turning off a smartphone for long periods, or not carrying one, it’s probably impossible to stop all tracking, even using the facilities provided in mobile operating systems.

Similarly, turning off GPS, Wi-Fi, mobile data, Bluetooth, and NFC isn’t exactly going to make someone’s smartphone more useful.

But before people get to that stage, they need to know that location data has become the sort of problem that could define the next 10 years of privacy arguments.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/wo6BHh--3_s/

Emirati ‘surveillance app’ ToTok promoted by Huawei as Apple punts it from store

A popular UAE messaging app has been reportedly used by the country’s government to spy on its population. This app, called ToTok, passed all the usual Google Play and Apple App Store checks. Huawei even promoted it via social media.

On its Huawei Mobile Services MENA Facebook page, which has over 1.8 million likes, the Chinese mobile handset maker ostensibly endorsed it via a glowing post with the hashtag “#AppsMustHave.”

“Stay connected anywhere, anytime with ToTok Messenger,” it said in both English and Arabic. “ToTok will provide you with unlimited calls, whether voice or video calling all are FREE. Download NOW.”


The post directs users to ToTok’s download page on the Huawei App Gallery. At the time of writing, ToTok was still available to download from Huawei’s fledgling app marketplace.

Earlier today, Apple announced it had removed ToTok from the App Store. Although Google has yet to formally announce it has removed ToTok from the Play Store, the app didn’t appear in any searches when we checked this morning, which suggests Mountain View has already taken action.

To be fair, there is no reason to suspect foul play on the part of Huawei. ToTok took great pains to appear to be a legitimate mobile application. There was never anything obvious that would lead someone to suspect it was a tool for state-sponsored mass surveillance.

That said, it’s a painful reminder that endorsing any product is not without an element of risk, particularly for purveyors of application marketplaces.

The New York Times broke the story, and assisting the paper with its investigation into ToTok was Patrick Wardle, a former NSA employee, and current security researcher at Jamf. He published a technical analysis of the app, which showed that it was largely a re-badged version of YeeCall — an existing messaging platform — rather than a bespoke new product.

By delving into the code, Wardle found ToTok was configured to run continuously in the background. Via its permissions, it had access to the microphone, location, and camera. While these are required for ToTok’s legitimate functionality, they also could be used to remotely spy on an individual.

Wardle also raised doubts about the existence of the developer, Breej Holding Ltd, which he believes to be a front company for the Abu Dhabi-based digital intelligence firm Dark Matter, and highlighted a bevy of suspicious reviews designed to raise ToTok’s profile.

ToTok regularly ranked among the most popular apps within the United Arab Emirates, and was gradually building an international cadre of users. According to SensorTower, it had over 600,000 downloads across iOS and Android during November.

This popularity is presumably due to the local government’s policy of banning most VoIP services — including Skype and WhatsApp calls. It also forced local network providers to block VPNs, which allow users to circumvent internet restrictions.

By removing the international competition, the Emirates was able to swoop in with its own domestically approved alternative. This quickly found a fertile market.

The Register has asked Huawei for comment. If we hear back from it, we’ll update this post. ®


Article source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/12/23/huawei_promoted_emirati_app_totok/