STE WILLIAMS

Train up to navigate the diverse, chaotic cyber security landscape at SANS Munich

Promo High-profile attacks on critical industrial control systems show how important it has become to protect your organisation, or else face a turbulent future. Malware delivered by ever more creative methods can find its way onto plant floors, encrypting critical files or wiping them altogether.

For security professionals charged with protecting their organisations against such incidents, training company SANS Institute is staging its annual ICS Europe Summit on June 24-29 in Munich, Germany.

The event offers a series of informative talks by leading figures in the ICS sector:

ICS down! It’s Go time: Christopher Robinson, Principal Consultant for industrial control systems at security software firm Cylance, outlines the challenges and pitfalls an ICS incident response team might encounter.

OT security requirements vs real-life stories: Łukasz Maciejewski, Security Manager at IT consultancy Accenture, discusses how companies’ focus on speed of implementation can lead to risk negligence. The talk includes real-life examples of how security is weakened for the sake of functionality and shows how to marry the two.

Extending an IT security operations centre to include critical systems: Markus Braendle, Head of Cyber at Airbus, presents a real-world case study of Airbus as an asset owner.

Using ICS/SCADA honeypots the right way: Fake devices or networks, known as honeypots, have been around for decades, but few asset owners use them. Mikael Vingaard, Preparedness Manager at Denmark’s state-owned gas supplier Energinet, demonstrates the value of honeypots in industrial networks and offers guidance on deploying them.
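For a flavour of what such a honeypot does, here is a minimal illustrative sketch in Python: a fake listener on the Modbus/TCP port that records who connects and what they send. The port choice and logging are assumptions for the demo, not the tooling discussed in the talk; production deployments typically use purpose-built honeypots and careful network placement.

# Minimal ICS honeypot sketch: a fake Modbus/TCP listener that logs contact attempts.
# Illustrative only; binding port 502 requires elevated privileges.
import socket
import datetime

LISTEN_PORT = 502  # Modbus/TCP; an assumption chosen for the demo

def run_honeypot(port: int = LISTEN_PORT) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    print(f"honeypot listening on tcp/{port}")
    while True:
        conn, addr = srv.accept()
        conn.settimeout(5)
        try:
            data = conn.recv(1024)
        except socket.timeout:
            data = b""
        # Any traffic here is suspicious by definition: no legitimate client
        # should be talking to a device that does not really exist.
        print(f"{datetime.datetime.utcnow().isoformat()} contact from "
              f"{addr[0]}:{addr[1]} bytes={data.hex()}")
        conn.close()

if __name__ == "__main__":
    run_honeypot()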

Extend your learning further by pairing summit attendance with SANS ICS training. The summit can be combined with any of the three ICS courses on offer:

ICS/SCADA security essentials: A foundational set of standardised skills for industrial cybersecurity professionals.

Essentials for NERC critical infrastructure protection: Learn from the North American Electric Reliability Corporation’s critical infrastructure protection standards.

ICS active defence and incident response: A hands-on approach to monitoring and responding to malware threatening ICS, such as Stuxnet, Havex and BlackEnergy2.


More information about the event and registration details are available here.


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/29/navigate_the_diverse_and_chaotic_cyber_security_landscape/

Malware Makes Itself at Home in Set-Top Boxes

Low-cost boxes that promise free TV streaming services often come complete with malware, according to a new study.

“Free” can be an almost irresistible lure for consumers. When criminals use that lure in the form of television set-top boxes that promise free access to premium channels and services, they can entice many consumers into a trap that takes log-in credentials, private information, and financial data in return for the latest binge-worthy programming.

A new report from the Digital Citizens Alliance (DCA) says about 12 million people are using devices that promise illicit access to streaming services. The hackers taking advantage of those users follow a classic pattern in their activity. “[They] bait consumers with offers of free content, infect those that take the bait with malware, and steal vital personal information,” the report states.

According to Tom Galvin, executive director of DCA, the problem is not with the hardware of the set-top boxes, many of which are used to host legitimate applications for accessing streaming services. The issue is with the applications that a criminal can load onto the hardware before it’s delivered to the consumer.

These boxes, known as “Kodi boxes” for the Android open-source media player that serves as the software hub inside the device, are sold on Craigslist, Amazon, eBay, and local for-sale websites. “Sandvine has estimated that almost 10% of the homes in North America are using a [Kodi box],” Galvin says. “They figure about 70% of those devices are configured to access unlicensed content.” And it’s in the apps that allow unlicensed access that the study found malware.

“At least 40% of the apps we evaluated were infected,” says Timber Wolfe, owner of Dark Wolfe Consulting, which worked with DCA on the study. Wolfe executed malware found on one box in a controlled environment and then reverse-engineered the software. The first thing the software did was contact a server and update itself to a new version.

“As soon as it updated, it started exhibiting bad behavior,” Wolfe says. “It looked for free file shares on my network, uploaded that data to a server, and immediately stole my Wi-Fi credentials,” he explains.

Ultimately, the malware uploaded 1.5 terabytes of data from an available file share while it continuously looked for other file shares and unprotected devices on the network. In addition, it specifically looked for other malware.

“It’s called ‘port knocking’,” Wolfe says. “It was knocking on my Western Digital 4100, looking specifically for another malware family to open up a port and talk to it.”
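Port knocking itself is a simple idea: a client taps a secret sequence of closed ports, and a cooperating listener that observes the right sequence then opens a real service port. The sketch below illustrates only the knocking side; the ports and target address are made-up values for illustration, not indicators from the study.

# Port-knocking client sketch. The connection attempts are expected to fail;
# the point is that a cooperating listener sees the sequence in its logs.
import socket

KNOCK_SEQUENCE = [7000, 8000, 9000]   # hypothetical secret sequence
TARGET = "192.0.2.10"                 # RFC 5737 documentation address

def knock(host: str, ports: list[int]) -> None:
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))   # normally refused or timed out
        except OSError:
            pass
        finally:
            s.close()

if __name__ == "__main__":
    knock(TARGET, KNOCK_SEQUENCE)
    print("knock sequence sent; a cooperating peer would now open its service port")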

One of the great dangers of these schemes, beyond the threat to consumer privacy, is that the network-attached devices people use at home and then take to work become attack surfaces hackers can use to get into the corporate network, Galvin says.

Because the boxes themselves aren’t illegal, protection from the malware they carry is complicated. “It’s a combination of consumer awareness, law enforcement, and making sure that those who have sensitive information are aware of the risks here,” Galvin says.

That consumer awareness may be the most important point. “When they put this machine on the network, they have allowed the hacker to bypass most of their security,” Galvin says. “They’ve just escorted the hacker behind the firewall.”


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and …

Article source: https://www.darkreading.com/malware-makes-itself-at-home-in-set-top-boxes/d/d-id/1334550?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Slack Warns of Big, Bad Dangers in SEC Filing

A filing prior to an IPO lists nation-state dangers to Slack’s services and customers as a risk for investors.

In an SEC filing published today, Slack has warned potential investors that it is a target for attacks from “sophisticated organized crime, nation-state, and nation-state supported actors.”

The details come as part of an S-1 form filed prior to the company’s initial public offering. In a section titled “Risk Factors,” Slack meticulously lists, as all companies going through this process must, all the things that could go wrong and lower the price of its stock. But the 3,469 words that make up the cybersecurity portion of that section read like a compendium of factors that can cause sleepless nights for CISOs.

The sophisticated nation-state threat is listed alongside “normal” cyber risks from garden-variety hackers, criminals, and disgruntled individuals.

For more, read here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/slack-warns-of-big-bad-dangers-in-sec-filing/d/d-id/1334553?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How to Build a Cloud Security Model

Security experts point to seven crucial steps companies should be taking as they move data and processes to cloud environments.

(Image: Bruce Jones - stock.adobe.com)

More and more businesses are deploying applications, operations, and infrastructure to cloud environments, but many don’t take the necessary steps to operate and secure them properly.

“It’s not impossible to securely operate in a single-cloud or multicloud environment,” says Robert LaMagna-Reiter, CISO at First National Technology Solutions (FNTS). But cloud deployment should be strategized with input from business and security executives. After all, the decision to operate in the cloud is largely driven by business trends and expectations.

One of these drivers is digital transformation. “There is a driving force, regardless of industry, to act faster, respond to customers quicker, improve internal and external user experience, and differentiate yourself from the competition,” LaMagna-Reiter says. Flexibility is the biggest factor, he adds, as employees and consumers want access to robust solutions that can be updated quickly.

Economic and financial drivers also play a role, with organizations moving to subscription models and shifting from capital to operational expenditures. However, many view the cloud as a means to cut costs – one of many misconceptions that should be clarified, says Yaron Levi, CISO at Blue Cross and Blue Shield of Kansas City and research fellow at the Cloud Security Alliance.

“Now you have a big chunk of companies that are moving to the cloud and not necessarily for the right reasons,” he says, adding that in addition to saving money, some feel they won’t have to worry about security in the cloud. “It’s not always cheaper. Not all clouds are created equal.”

[Hear Robert LaMagna-Reiter, CISO at First National Technology Solutions, present Building a Cloud Security and Operating Model at the Cybersecurity Crash Course at Interop 2019 next month.]

People often think about security in the sense of, “I put in AWS, so we’re secure,” he adds. This isn’t the case: Amazon Web Services provides the underlying fabric, and it is up to users to make sure what they build on it is secure.
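To make that shared-responsibility point concrete, here is a hedged posture-check sketch using boto3: AWS supplies the storage service, but whether a bucket is world-readable is the customer's configuration. It assumes credentials are already set up; a real check would also inspect public-access-block settings and handle per-bucket permission errors.

# Flag S3 buckets whose ACL grants access to everyone (a common misconfiguration).
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def publicly_readable_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                flagged.append(bucket["Name"])
                break
    return flagged

if __name__ == "__main__":
    for name in publicly_readable_buckets():
        print(f"bucket open to the world: {name}")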

Most companies don’t understand cloud posture, let alone cloud security, LaMagna-Reiter says. You also have to think about threats that could potentially affect in-house systems and the mitigations to put in place. Gary Marsden, senior director of data protection services at Gemalto, points to shadow IT as an example. He describes a bank that found it had 2,000 cloud accounts spread across multiple vendors, most of which it didn’t know about. Six months later it had detected 5,000 additional cloud accounts, bringing the total to 7,000, most of which were not IT-approved.

“That’s a dynamic we’re going to see more and more of going forward,” he says.

Threat planning is just one step businesses should be taking as they move operations to the cloud. Here, cloud security experts outline crucial steps to include in building a cloud security model, and what should be kept in mind before and after deployment. Any tips you’d add to the list? Feel free to add them in the Comments.

 

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial …

Article source: https://www.darkreading.com/cloud/how-to-build-a-cloud-security-model/d/d-id/1334552?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Go Medieval to Keep OT Safe

When it comes to operational technology and industrial control systems, make sure you’re the lord of all you survey.

Digital transformation has dramatically changed the world of industrial control systems (ICS) and operational technology (OT), which have now joined the online world through direct factory-floor connections to the Internet.

Production floors are exposed to an entire spectrum of nonstop cyber threats — even hostile code from the 1990s — that we in the manufacturing community see continually.

Attacks are constant, they can come at any time from anywhere in the world, and our governments and regulators cannot protect us. When planning our tactical approach to risk mitigation at our company, we sometimes liken our situation to that of an independent city-state in medieval times. Enemies are everywhere, no one can be trusted, and we are on our own.

This metaphorical model also aptly describes the state-of-the-art cybersecurity we deployed, which provides defense in depth across our entire OT infrastructure, one that previously was unprotected in almost every layer.

To better understand how to combat current cyber threats, let’s explore tried-and-true lessons from these age-old medieval tactics.

Advanced Walls, Layered Fortifications, and Lockdowns
Castles of old layered their defenses, too, using moats, multiple perimeters, and even keeps for last-ditch protection. Gates tightly controlled who came in and who got out.

Similarly, each production system in our OT world must be separated from the rest of the world by multilayer virtual walls and air gaps, with entrances and exits via approved gates only. Elements in the IT security stack that help achieve this include next-gen firewalls and network access control (NAC); however, these generally do not natively protect ICS components.

It is necessary to integrate them with an ICS cybersecurity solution that discovers and monitors Industrial Internet of Things (IIoT) and ICS/OT devices and speaks their languages, such as Modbus and DNP3, and recognizes specialized ICS devices, such as programmable logic controllers and human-machine interfaces.
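To make the “speaks their languages” point concrete, here is a hedged sketch of the kind of parsing such a solution performs: Modbus/TCP frames carry a fixed MBAP header, so a few lines of code can flag write operations from hosts that have no business reprogramming a PLC. The function-code list and example frame are illustrative assumptions; a deployed monitor works from live capture with asset whitelists and full protocol inspection.

# Parse the Modbus/TCP MBAP header and flag write operations.
import struct

WRITE_FUNCTIONS = {5, 6, 15, 16, 22, 23}  # coil/register write function codes

def parse_mbap(payload: bytes) -> dict:
    if len(payload) < 8:
        raise ValueError("too short for a Modbus/TCP frame")
    tid, proto, length, unit, func = struct.unpack(">HHHBB", payload[:8])
    return {
        "transaction_id": tid,
        "protocol_id": proto,      # 0 for Modbus
        "length": length,
        "unit_id": unit,
        "function_code": func,
        "is_write": func in WRITE_FUNCTIONS,
    }

if __name__ == "__main__":
    # Synthetic example frame: write single register (function 6) on unit 1.
    frame = bytes.fromhex("000100000006010600010003")
    info = parse_mbap(frame)
    print(info)
    if info["is_write"]:
        print("ALERT: write operation seen; check the source against the whitelist")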

The gate for today’s networks is the NAC. All communications, whether incoming or outgoing, must undergo checks to ensure the communication is legitimate and not hostile. Source and destination addresses are checked and behavior patterns analyzed. Anomalies and suspicious data exfiltration can be blocked.

Installing walls and gates is critical to protecting the vulnerable production system, but no walls are 100% foolproof and hostile code may still break in. The same walls that protect must therefore also be able to lock down and quarantine the city at a moment’s notice, so that nothing goes in or out, limiting the harm to the wider country and society.

Spy Network, Hunting, and Assassins
Information brought by spies can have great value and can determine the outcome of a campaign. In the past, internal and external espionage networks formed an extensive intelligence infrastructure that helped defend the kingdom.

The parallel today is that listening to OT production networks is critical. The ability to understand their languages — different from those used in IT systems — and simultaneously hunt for any unusual activity requires the use of a purpose-built platform designed for the unique protocols, devices, and behaviors of OT networks.

Unusual activity, or anomalies, must generate alerts in a control center, or even trigger real-time mitigation through integration with a security information and event management (SIEM) system, NAC, or firewall.
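As one hedged illustration of that integration, an OT monitor could forward anomalies as syslog messages in a CEF-style format that most SIEMs can ingest. The collector address, event fields and severity below are placeholders, not any particular product's schema.

# Send an OT anomaly alert to a SIEM collector over syslog (UDP by default).
import logging
import logging.handlers

SIEM_HOST = "siem.example.local"   # placeholder collector address
SIEM_PORT = 514

def build_alert_logger() -> logging.Logger:
    logger = logging.getLogger("ot-anomaly")
    logger.setLevel(logging.WARNING)
    handler = logging.handlers.SysLogHandler(address=(SIEM_HOST, SIEM_PORT))
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(message)s"))
    logger.addHandler(handler)
    return logger

if __name__ == "__main__":
    log = build_alert_logger()
    # Example anomaly: an unexpected Modbus write from an engineering laptop.
    log.warning(
        "CEF:0|ExampleOT|Monitor|1.0|100|Unexpected ICS write|7|"
        "src=10.1.20.15 dst=10.1.30.7 proto=modbus func=16"
    )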

Citizens’ Awareness
Clever people have long used deception and others’ gullibility in warfare. During the Crusades, for example, medieval knights handed over a well-fortified Syrian castle when they were presented with a forged letter purportedly from the Hospitallers’ Grand Master ordering them to surrender. Similarly, today’s cyber enemies exploit human nature to bypass sophisticated cyber mechanisms. Constant training is essential to keep employees aware and alert. Addressing this weakness requires an awareness program for everyone, plus a round-the-clock hotline for when suspicion arises on the production floor.

The Royal Court
In the past, threats to the city-state were always met with a keen sense of urgency. Information was passed to the ruler and his or her insiders in the royal court. Decisions were taken and commands for action handed down.

Of course, that all pales in comparison to the volume of information, complexity, and real-time scale of today’s cybersecurity decision-making support systems.

What hasn’t changed is the need to deliver all the information and alerts coming from the countermeasures and monitoring systems above into a nerve center, where information from the production floor is processed in seconds, analyzed, and classified as normal or abnormal activity.

The decision-makers in these situations must have clear protocols on how to address various scenarios, and must be armed with the tools to take action.

Think Layered Fortifications
In all layers of this defense-in-depth discussion, presented using the medieval city-state as a metaphor, the unique needs of the IIoT/ICS environment must be addressed. A comprehensive approach incorporating layered fortifications, ICS-aware continuous monitoring, and employee awareness is essential for strategic risk mitigation, and to ensure safe and continuous operation of production facilities. 


Ilan Abadi joined Teva Pharmaceutical Industries in May 2012 as Global CISO. In his current role, Ilan is in charge of establishing cybersecurity strategy and structure and managing ongoing cyber activities, including current and future security threats. Among his …

Article source: https://www.darkreading.com/vulnerabilities---threats/go-medieval-to-keep-ot-safe/a/d-id/1334490?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cops can try suspect’s fingers on locked iPhones found at crime scene

In January, a Northern California federal judge ruled that police can’t force suspects to unlock their phones with biometrics, even with a warrant, because it amounts to the same type of self-incrimination as being forced to hand over your passcode.

Now, Law360 has uncovered a search warrant that says the opposite: in the document, issued on 18 April, Massachusetts federal district judge Judith Dein gave agents from the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) the right to press a suspect’s fingers onto any iPhone found in his Cambridge apartment that law enforcement believes he has used, in order to unlock the devices with Touch ID.

The suspect, Robert Brito-Pina, is accused of gun trafficking. He has a prior conviction, which makes it illegal for him to possess a gun.

It’s unclear whether the search has been executed yet, but the ATF has until 2 May to search Brito-Pina’s apartment. The warrant covers any records, receipts, all mobile phones, and all of their content, including text messages, email, apps, internet history, voicemails, photographs or videos relating to the acquisition of firearms or ammunition made since 1 July 2018.

In the warrant, ATF special agent Robert Jacobsen goes into great detail about a web of illegal, interstate gun trafficking that allegedly led to Brito-Pina. ATF agents have to get into any phones that he may have used, he said, given that there is only a limited window in which iPhones can be unlocked with Touch ID before they require the passcode.

Attempting to unlock the relevant Apple device[s] is necessary because the government may not otherwise be able to access the data contained on those devices for the purpose of executing the requested search warrants.

For some reason, the warrant specifies that it doesn’t apply to computers in the apartment: agents won’t seize or search any computers they find/found.

The agent said that Brito-Pina’s phone is likely to contain a lot of evidence. In fact, the investigation that led agents to Brito-Pina was in large part enabled by information gleaned from other people’s phones, including text messages, drop-off locations stored in the Waze navigation app, and photos of illegal guns taken by people on their own phones – often featuring them posing with the guns.

He referred to what agents say they found on another suspect’s phone:

Collins communicated with Brito-Pina via cell phone regarding the sales and purchases of firearms.

This is yet another volley in the back-and-forth of courts’ interpretation of the Fifth Amendment and the debate between compelling suspects to use “what they are” (i.e., forced use of their bodies) vs. “what they know” (i.e., forcing suspects to unlock their brains to get at their passcodes).

The earlier decision from California denied issuance of a warrant to police who were investigating alleged extortion in Oakland, California. The suspects allegedly used Facebook Messenger to threaten a man with the release of an embarrassing video unless he coughed up money.

Over the years, many cases have fallen on opposite sides of the question of whether finger-forcing is legal. As we wrote about the California case that found compelled testimony, be it turning over a passcode or swiping a finger, to be against the Fifth Amendment, there’s no guarantee that other courts will choose to apply this most recent forced-biometrics-is-OK ruling in Massachusetts.

In this Massachusetts decision, the unlawful trade in weapons – allegedly by a man convicted of assault, at that – is very serious. So is the potentially unconstitutional act of forcing biometric device unlock.

But Judge Dein’s decision doesn’t open any doors to forcing finger unlock onto any random person nearby the search site. She specifies that ATF agents may search the contents of any mobile phones, not by forcing just anybody’s fingers onto a Touch ID sensor, but by pressing Brito-Pina’s fingers to unlock the device.

This doesn’t sound like an earth-shattering decision: rather, it sounds like yet another round in an ongoing debate wherein courts are trying to keep up with developing technology. Readers, your thoughts?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/cmnP0z4GJgg/

Microsoft drops password expiration from Windows 10 security

What is it about a secure password that makes us think it’s secure?

Traditionally, for businesses it’s been things like complexity, minimum length, avoiding known bad passwords, and how often passwords are changed to counter the possibility of undetected compromise.

And yet, recently, the last of those orthodoxies – password expiration – has started to crumble.

In 2016, the influential US National Institute of Standards and Technology (NIST) broke with generations of received wisdom by recommending that scheduled password change should be dropped from the list of good practice on the basis it now does more harm than good.

This week, the mighty Microsoft joined them in no uncertain terms in a blog explaining the company’s security baselines for the forthcoming Windows 10 version 1903, due in May. Microsoft’s Aaron Margosis didn’t mince his words:

Periodic password expiration is an ancient and obsolete mitigation of very low value, and we don’t believe it’s worthwhile for our baseline to enforce any specific value.

Windows baselines aren’t just a set of recommendations written down somewhere that nobody reads: they define how the world’s most popular business OS should be secured by businesses, and they matter.

At the last count, Windows 10 had 3,000 of them (including many not related to security) implemented as Group Policy Objects. Having these parameters predefined means IT staff don’t have to configure everything from scratch, and it helps with the ordeal of compliance.

If NIST downgrading the importance of password expiration was a big marker, Microsoft doing the same signals that change is coming in the real world.

Why password expiration doesn’t help

At first glance, password expiration sounds sensible because, as numerous security compromises demonstrate, passwords today are often stolen and abused long before their owners realise.

Logically, then, changing them on a schedule should minimise the risk by reducing the length of possible compromise to a defined period of weeks or months.

In the consumer space, it’s become such an accepted part of security that password managers urge users to update their passwords regularly and offer mechanisms to automate this for big internet sites.

The problem is that this can have unintended consequences, which can render the effort worthless. As Microsoft’s Margosis writes:

When humans are forced to change their passwords, too often they’ll make a small and predictable alteration to their existing passwords, and/or forget their new passwords.

In effect, users are not really changing their passwords, just tweaking them so they’re easier to remember. In the worst-case scenario, this might include using the same tweaked password across multiple sites, a habit that fuels credential-stuffing attacks.
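The “small and predictable alteration” problem is easy to demonstrate with a toy check (not anything Microsoft ships): strip the trailing counter and compare what is left, and “Summer2018!” rotated to “Summer2019!” turns out to be barely a change at all.

# Toy check for whether a "new" password is just a tweak of the old one.
import re
from difflib import SequenceMatcher

def core(pw: str) -> str:
    # Drop trailing digits and punctuation, the usual place a counter goes.
    return re.sub(r"[\d\W_]+$", "", pw).lower()

def looks_like_a_tweak(old: str, new: str, threshold: float = 0.8) -> bool:
    if core(old) == core(new):
        return True
    return SequenceMatcher(None, old.lower(), new.lower()).ratio() >= threshold

if __name__ == "__main__":
    print(looks_like_a_tweak("Summer2018!", "Summer2019!"))   # True
    print(looks_like_a_tweak("Summer2018!", "h7#pQz!vR2mW"))  # False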

Notice that although Microsoft no longer recommends a specific password expiration value, there’s nothing to stop organisations implementing one if they want to.

It could be that Microsoft’s angst over its baseline is really asking a deeper question – should baselines and the endless compliance that follows in their wake be that important anyway? Margosis again:

Removing a low-value setting from our baseline and not compensating with something else in the baseline does not mean we are lowering security standards. It simply reinforces that security cannot be achieved entirely with baselines.

As recent announcements from Microsoft have made clear, everyone would do better to move to more sophisticated forms of authentication.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/DTyn-s4R7AA/

Fingerprint glitch in passports swapped left and right hands

True, we accidentally swapped fingerprints for Danish citizens’ left and right hands on their passports, but it probably won’t cause much grief for these 228,000 people, said the head of Kube Data, which encoded the biometric data on the passports’ microprocessors.

The Copenhagen Post quoted Jonathan Jørgensen:

It’s difficult to imagine that this will give citizens much of a headache. It’s only the state police [Rigspolitiet] that has access to the encryption key to where the error is found, and many affected citizens have probably travelled with their passports without any problems.

According to the local news outlet, the fingerprint errors were discovered, by chance, in 2017 by a citizen. The mistake occurs in passports issued between 2014 and 2017.

Denmark introduced biometric passports in 2011, containing digital photos, fingerprints and signatures. The purpose is to fend off identity theft and passport forgery, as well as to fight a roster of other crimes:

The decision to introduce fingerprints in passports has been made at central level in the EU as part of the combat against terrorism, human trade, human trafficking, illegal immigration and other transnational crime. With the new biometric passport Danish citizens are secured the possibility to travel to countries which in the future will demand this type of passport for entry.

Police are looking into whether or not the quarter-million affected passports will need to be replaced. If they do, who’s going to pay for it? They’re discussing that with Kube Data, reportedly trying to make sure that the cost of passport replacement doesn’t come out of Danish citizens’ pockets.

We’ve heard from at least one think tank that has called the stuttering rollout of digital identity a mess. But aside from governments’ unsteady rollout, what about the security of the passports themselves?

As of mid-2018, e-passports had been rolled out in more than 150 countries, according to one vendor. Gemalto says that the future will bring e-passports that will soon store travel information such as eVisas and entry/exit stamps. Plus, with the upcoming version 2 of the logical data structure protocol (LDS2), they’ll move from read-only to read and write.

In other words, they’ll store even more than biometrics.

Let’s hope that Kube Data’s optimism about this left-to-right fingerprint swap is warranted. It’s a bit unnerving to think that errors can creep into the encrypted biometric data itself, an essential piece of keeping all that information trustworthy.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Fv7QQptifWM/

NSA asks to end mass phone surveillance

The National Security Agency (NSA) has asked to end its mass phone surveillance program because the work involved outweighs its intelligence value, according to reports this week.

Sources told the Wall Street Journal that the NSA has recommended that the White House terminate its call data records (CDR) program. The logistics of operating it aren’t worth the intelligence that it provides, they said.

The NSA’s clandestine phone records gathering program dates back to the introduction of the Patriot Act in 2001, shortly after the 9/11 attacks on the US. Section 215 of the Act enabled the US intelligence community to collect extensive information.

Shortly afterwards, President George Bush authorized the warrantless collection of data about international telephone calls and emails, and the NSA began collecting data under a program called Stellar Wind.

In 2006, a class action suit targeted Verizon, BellSouth and AT&T, alleging that they handed over call records to the NSA. In 2013, Edward Snowden publicly revealed documents detailing the Stellar Wind program. The American Civil Liberties Union (ACLU) sued then-director of national intelligence James Clapper to stop the bulk metadata collection program for violations of the First and Fourth Amendments.

The ACLU won its case on appeal in May 2015, just before Section 215 expired under a sunset clause. Congress passed the Freedom Act a day after the clause expired, sustaining Section 215 but with new restrictions. One of these included the retention of bulk call metadata by the phone companies. The NSA would have to query it using specific selectors to limit the number of records gathered. This contrasts with the previous practice, in which the NSA collected and held call data records itself.

This legislation seems to have made the bulk data collection program less useful. The WSJ cites one government official saying that:

The candle is not worth the flame.

The CDR program has come under increasing political pressure in recent months. In March, Senators Ron Wyden and Rand Paul introduced the Ending Mass Collection of Americans’ Phone Records Act of 2019, which aims to eliminate the program. Wyden said at the time:

The NSA’s sprawling phone records dragnet was born in secrecy, defended with lies and never stopped a single terrorist attack. Even after Congress acted in 2015, the program collected over half a billion phone records in a single year. It’s time, finally, to put a stake in the heart of this unnecessary government surveillance program and start to restore some of Americans’ liberties.

The pair were concerned by a public statement by the NSA in June 2018, in which it admitted to collecting some call data records that it was not authorized to receive, due to “technical irregularities”.

The senators wrote a letter to the NSA’s inspector general detailing their concerns in August 2018.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/UAqJj5KZCIY/

There’s NordVPN odd about this, right? Infosec types concerned over strange app traffic

Weird things are afoot with NordVPN’s app and the traffic it generates – Reg readers have spotted it contacting strange domains in the same way compromised machines talk to botnets’ command-and-control servers.

Although NordVPN has told us this is expected behaviour by the app and is intended as a counter-blocking mechanism, the company’s explanation has shifted a number of times.

It began after Reg reader Dan became confused when his office network’s security products started alerting on traffic from one infrequent visitor’s laptop. On looking at the logs, our reader saw it was talking to these “garbage” domains:

f5d599a39d02caef1984e95fdc606f838893ffc5[dot]com
8d46980d994cc618aeed127df1b5c86d8acd86ce[dot]xyz
10bdc75ab2f0486f008dbdd8f1b0a38d7399598e[dot]xyz

Further scratching of heads led to infosec bod Ryan Niemes’ personal blog, where he had written about exactly the same odd traffic. Except Niemes had noticed something else too: these domains weren’t owned by anybody. So he bought them and spun up an EC2 instance to log what was coming in.

“Fast-forward a few hours,” he wrote, “I ran a netstat command and saw a crapload of connections to 443. So, I registered a Letsencrypt certificate [and] watched my logs start to fill up.”
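What Niemes describes amounts to a sinkhole: point the newly registered domain at a box you control, terminate TLS with a Let’s Encrypt certificate, and log whatever the clients send. A minimal sketch with Python’s standard library is below; the certificate paths are assumptions, binding port 443 needs elevated privileges, and a real sinkhole would log far more carefully.

# Minimal HTTPS sinkhole: log the request line and User-Agent, answer 404.
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

CERT = "/etc/letsencrypt/live/example.xyz/fullchain.pem"  # assumed path
KEY = "/etc/letsencrypt/live/example.xyz/privkey.pem"     # assumed path

class SinkholeHandler(BaseHTTPRequestHandler):
    def _log_and_404(self) -> None:
        print(f"{self.client_address[0]} {self.command} {self.path} "
              f"UA={self.headers.get('User-Agent')}")
        self.send_response(404)
        self.end_headers()

    do_GET = _log_and_404
    do_POST = _log_and_404

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 443), SinkholeHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(CERT, KEY)
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()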

Niemes responsibly disclosed his findings to NordVPN’s security team, which thanked him and said it would update its apps to stop the oddness. The firm also offered him three years’ free subscription as a thank you.

El Reg spoke to Niemes and he told us that after the update was deployed (he installed it on a test device), incoming connections were still being made from clients with “NordVPN” in their user-agent string.

Niemes saw a number of API calls within the HTTPS-encrypted traffic hitting his new domains, including:

GET /v1/users/services HTTP/1.1
GET /v1/users/current HTTP/1.1
GET /v1/servers?filters[servers.load][$gt]=85fields[servers.id]limit=5114 HTTP/1.1
GET /v1/servers?fields[servers.status]limit=1filters%5Bservers.id%5D=939653 HTTP/1.1
GET /v1/servers?fields%5Bservers.created_at%5D=fields%5Bservers.groups.id%5D=fields%5Bservers.groups.title%5D=fields%5Bservers.groups.type.identifier%5D=fields%5Bservers.hostname%5D=fields%5Bservers.id%5D=fields%5Bservers.load%5D=fields%5Bservers.locations%5D=fields%5Bservers.name%5D=fields%5Bservers.specifications%5D=fields%5Bservers.station%5D=fields%5Bservers.technologies.identifier%5D=filters%5Bservers.status%5D=onlinelimit=5114 HTTP/1.1
GET /v1/servers/count HTTP/1.1
GET /v1/helpers/ips/insights HTTP/1.1
GET /v1/plans?filters[plans.active]=1filters[plans.type]=android_sideload HTTP/1.1
GET /v1/helpers/hosts/metadata HTTP/1.1

“The POST I’m seeing is concerning because there’s a field called renewtoken which appears to be unique,” he told The Register. As well as the user-agent string, the inbound requests also disclosed app version, host operating system build and the user’s IPv4 address.

It’s an anti-censorship mechanism. Honest

NordVPN spokeswoman Laura Tyrell first told us: “I would like to assure you that we have not observed any irregular behavior that could in any way support the theory of our applications being compromised by a malicious actor.”

She added: “Such domains are used as an important part of our workaround in environments and countries with heavy internet restrictions. To prevent such requests from contacting the domains which aren’t owned by us, we have modified our URI scheme. All URLs are being validated, so the problem as such will never occur. It is also important to note that no sensitive data is being sent or received through these addresses.”

This was obviously bunkum and we said so. Tyrell then replied: “Once URL is generated, we send a call to validate it and only when URL is validated we proceed with the communication.”

Among the other things Niemes had previously showed us was this sample of an incoming request from a NordVPN-using Android device:

--1c721304-A--
[23/Apr/2019:15:00:11 +0000] XL8oe@Cs4AQkZiAuc0uRFgAAAG8 [00.00.00.00 - IP address] 47522 [xxx.yyy.zzz.aaa – user IP address]
--1c721304-B--
POST /v1/users/tokens/renew HTTP/1.1
User-Agent: NordApp android (playstore/3.10.1) Android 9
Content-Type: application/x-www-form-urlencoded
Content-Length: 75
Host: f5d599a39d02caef1984e95fdc606f838893ffc5.xyz
Connection: Keep-Alive
Accept-Encoding: gzip

--1c721304-C--
renewToken=3a76c968108386e8adc64e973dc3d[random obfuscation by El Reg]34463cc8b83a4cdaf9c
--1c721304-F--
HTTP/1.1 404 Not Found
Content-Length: 219
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1

Yup, plenty of unique user information there – and that gzip string looks rather like the client is expecting to receive a payload from the server. Curiouser and curiouser.

“While the information did not contain user credentials, it can still be considered sensitive. In theory, the tokens can be used by a third party to gain unauthorized access to our service,” conceded Tyrell. “However, none of this information could have been used to intercept the users’ traffic or to tie an individual to their specific internet activity.”

NordVPN has been in the news before over allegations that its userbase could be turned into a botnet, something it addressed in a blog post last year. Among other things, the company said it had been a victim of a smear campaign by rival VPN operators.

This latest weirdness is being picked up by security monitoring products and concerned sysadmins, and the company’s explanations appear to be shifting every time it is presented with detailed evidence.

Reg reader Dan spotted a new domain in his logs yesterday morning, https://wutlk3t9mybdz[dot]info/, which appears as a 404 page with a prominent link to NordVPN’s website. He commented to us: “If this was legitimate, they’d effectively be exposing their authentication method. I feel like they’re aware people are digging into them, so they’ve thrown this up to appear legitimate.”

Could be innocent keep-alive heartbeat traffic

Max Heinemeyer, infosec biz Darktrace’s director of threat hunting, told The Register: “We’ve seen it quite a lot. We don’t know what it’s for, but it looks like it tries to hide. Sensible for a VPN trying to cut around censorship!”

He added that it looks on the face of it like botnet traffic, highlighting some of the common features the mystery NordVPN traffic has with typical botnet C2 streams:

“The domains look DGA-generated… they’re using suspicious TLDs, dot-xyz, something we have from other botnets. Then we see domains are using Let’s Encrypt [it wasn’t clear if Heinemeyer had looked at one of Niemes’ domains], something which is also used by cybercriminals because it’s easy. Repeated connections to the same domain looks like command-and-control traffic; it looks and smells like command-and-control traffic, but it’s actually [likely to be] keep-alive traffic.”
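Heinemeyer’s “looks DGA-generated” observation can be approximated with a crude triage heuristic: flag hostnames whose first label is a long hex string or has unusually high character entropy. The thresholds and TLD list below are assumptions for illustration, not a detection engine.

# Crude DGA-likeness triage for domains pulled from proxy or DNS logs.
import math
from collections import Counter

SUSPICIOUS_TLDS = {"xyz", "top", "info"}  # illustrative list only

def entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_dga(domain: str) -> bool:
    parts = domain.lower().split(".")
    label, tld = parts[0], parts[-1]
    hex_like = len(label) >= 20 and all(ch in "0123456789abcdef" for ch in label)
    return hex_like or entropy(label) > 3.5 or (len(label) > 25 and tld in SUSPICIOUS_TLDS)

if __name__ == "__main__":
    for d in ["f5d599a39d02caef1984e95fdc606f838893ffc5.com",
              "10bdc75ab2f0486f008dbdd8f1b0a38d7399598e.xyz",
              "theregister.co.uk"]:
        print(d, "suspicious" if looks_dga(d) else "ok")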

“We’ve seen NordVPN usage reported in at least 188 cases last year,” he continued, adding that this isn’t the only instance Darktrace has seen of VPN apps sending odd traffic around: “We’ve also seen PIA make odd connections to random IPs. In their case it was random UDP connections on port 8888.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/04/26/nordvpn_strange_traffic_domains/