STE WILLIAMS

New Locky Ransomware Strain Emerges

The latest version goes by the .asasin extension and collects information such as the victim’s operating system and IP address.

Locky authors have again retooled the highly persistent ransomware campaign with a new strain that performs reconnaissance on victims’ computers and goes by a new file extension name, PhishMe reports today.

The latest Locky strain, which began appearing on Oct. 11 and goes by the .asasin extension, is collecting information on users’ computers such as the operating system and IP address, says Brendan Griffin, PhishMe threat intelligence manager.

“The information it’s collecting is nothing too personally identifiable, but it gives the actors a rough idea of information about the computer, and attackers never do things without a purpose,” Griffin observes.

Although the intent of Locky’s reconnaissance isn’t fully clear, its ability to collect information on infected Windows versions could help its authors determine which OS version is the most susceptible to its attacks, says Griffin.

Collected IP address information, which reveals the geographic location of a computer, is helping to set the stage for a new twist with Locky. Victims are hit with either a Locky ransomware attack or banking Trojan TrickBot, depending on their geographic location.
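The recon-then-branch flow described above can be sketched in a few lines. This is purely illustrative, not Locky’s actual code: the function names and the country list are invented for the example, and a real dropper would learn the external IP and geolocation from its command-and-control traffic.

```python
# Illustrative sketch only -- NOT Locky's code. Shows the kind of rough,
# non-personal fingerprint the article describes, and a geo-based branch
# between two payloads. Country codes below are assumptions for the demo.
import platform

def host_fingerprint(external_ip):
    """Collect a coarse machine profile: OS, OS release, external IP."""
    return {
        "os": platform.system(),           # e.g. "Windows"
        "os_release": platform.release(),  # e.g. "10"
        "ip": external_ip,                 # normally learned server-side
    }

def choose_payload(country_code):
    """Geo-targeting: banking trojan for some regions, ransomware elsewhere."""
    BANKING_TROJAN_TARGETS = {"GB", "AU"}  # invented list for illustration
    return "trickbot" if country_code in BANKING_TROJAN_TARGETS else "locky"

profile = host_fingerprint("203.0.113.7")
print(profile["os"], choose_payload("GB"), choose_payload("DE"))
```

The point is how little the attacker needs: a two-field fingerprint and one `if` statement are enough to tailor the payload per victim.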

Locky’s Muted Threat

The latest Locky strain uses a .asasin extension, a move that could be designed to intimidate victims into paying the ransom, Griffin surmises. “It could be a muted threat, or a form of new branding to get their name out there again,” he notes.

Since Locky first emerged in February 2016, it has undergone nearly a dozen changes to its file extension name with each new strain, Griffin estimates. Some of its previous strains included extensions .ykcol, .lukitus, and .thor, Griffin says.

Despite this most recent name change, Griffin says it is still apparent that this ransomware strain is Locky. Tell-tale signs that Locky continues to lurk within this strain include the way it runs its encryption process to lock down victims’ data, the structure of its ransom note, and the payment method it demands of its victims.

“Combine those attributes and behaviors and we’re talking about the same animal,” says Griffin.

Locky is considered one of the most persistent and destructive ransomware campaigns, due to the prolific ransomware samples its authors churn out. Locky’s operators, believed to be a group called Dungeon Spider, work with other actors to distribute the malicious payloads via botnets and cleverly crafted phishing campaigns, but over the course of the last year law enforcement agencies have disrupted these distribution mechanisms, says Adam Meyers, vice president of intelligence at CrowdStrike.

While some agencies characterize Locky as launching a wave of periodic forceful attacks and then going dormant, Meyers suspects Locky’s authors are rolling out new ransomware variants and allowing Locky to fall into the background until the new experiments don’t pan out. Then they bring back the old standby Locky.

In May, for example, the Jaff ransomware family emerged in force but it wasn’t until researchers released a decryption tool for Jaff in June that the ransomware went away.

“All of a sudden, when that happened, Locky popped up. Jaff may have been a replacement for Locky but when that did not work, Locky returned,” Meyers says, noting similar timing patterns with other ransomware variants during Locky’s existence that lead him to believe Locky has been ever-present since it emerged in 2016.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s …

Article source: https://www.darkreading.com/attacks-breaches/new-locky-ransomware-strain-emerges-/d/d-id/1330168?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

US-CERT study predicts machine learning, transport systems to become security risks

Carnegie Mellon University’s Software Engineering Institute has nominated transport systems, machine learning, and smart robots as needing better cyber-security risk and threat analysis.

That advice comes in the institute’s third Emerging Technology Domains Risk Survey, a project it has handled for the US Department of Homeland Security’s US-CERT since 2015. The surveys are cumulative, meaning any emerging technologies noted are in addition to those recommended for scrutiny in previous surveys. In other words, previously noted concerns are still live; it’s not like phishing and firewall security should be forgotten about just because the latest study focuses on AI and transport stuff.

The report “helps US-CERT identify vulnerabilities, promote good security practices, and understand vulnerability risk,” we’re told.

The institute’s CERT Coordination Centre (CERT/CC) sees machine learning as a potential security quagmire: it expects aggressive adoption in the medium term, but the technology’s use cases are legion, making it difficult to monitor from a security point of view. In its survey, published this month, the team stated:

“Characteristics of interest likely include big data applications dealing with sensitive information, security products whose efficacy depends on effective anomaly detection, and learning sensors that inform actions in physical reality (such as in self-driving vehicles).”
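The survey’s point about “security products whose efficacy depends on effective anomaly detection” is easy to make concrete with a toy detector. The sketch below is an illustration of the idea, not any vendor’s product: everything hinges on the statistical model and the threshold chosen, which is exactly where such products succeed or fail.

```python
# A toy anomaly detector: flag samples far from the mean. The data and
# thresholds are arbitrary illustrations. Note how sensitive the result is
# to the cutoff -- at 3 standard deviations the spike below goes unflagged,
# because the outlier itself inflates the standard deviation.
from statistics import mean, stdev

def anomalies(samples, threshold=2.0):
    """Return samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

traffic = [100, 102, 98, 101, 99, 500]  # requests/minute, one obvious spike
print(anomalies(traffic))        # flags the 500-request spike
print(anomalies(traffic, 3.0))   # stricter cutoff misses it entirely
```

That second call is the survey’s worry in miniature: a detector that looks reasonable on paper can quietly miss the very events it exists to catch.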

In its assessment of transport, the survey worries about long-term interconnectedness.

“Future intelligent transport systems will provide communications and data between connected and autonomous cars and trucks, road infrastructure, other types of vehicles, and even pedestrians and bicyclists,” the report notes.

Road transport will, the report predicts, also become increasingly integrated with public transport – so, for example, train or bus dispatches could be ramped up to relieve road congestion:

“A miscommunication in the system, whether accidental or intentional, could lead to numerous traffic accidents, causing property damage, injury, and possibly death … A compromise of a city-wide system could lead to a massive traffic jam or other major event.”

As for smart robots – a categorisation designed to distinguish this application from today’s more common robots performing a strict and limited set of actions – CERT/CC has a host of fears if risks aren’t managed.

They’re likely to carry familiar software and network vulnerabilities into their workplaces, the survey says, adding the following observations:

It is not difficult to imagine the financial, operational, and safety impact of shutting down or modifying the behaviour of manufacturing robots; delivery drones; service-oriented or military humanoid robots; industrial controllers; or … robotic surgeons.

Are you depressed enough? If not, the survey also gives you the chance to read up on the risks of the Blockchain, IoT mesh networks, robotic surgery, smart buildings, and virtual personal assistants. The full report [PDF] has much more, in case you need something scary to read before bed. ®

Sponsored:
The Joy and Pain of Buying IT – Have Your Say

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/19/cert_cc_threat_survey/

EU: No encryption backdoors but, eh, let’s help each other crack that crypto, oui? Ja?

The European Commission has proposed that member states help each other break into encrypted devices by sharing expertise around the bloc.

In an attempt to tackle the rise of citizens using encryption and its effects on solving crimes, the commission decided to sidestep the well-worn, and well-ridiculed, path of demanding decryption backdoors in the stuff we all use.

Instead, the plans set out in its antiterrorism measures on Wednesday take a more collegiate approach – by offering member states more support when they actually get their hands on an encrypted device.

“The commission’s position is very clear – we are not in favour of so-called backdoors, the utilisation of systemic vulnerabilities, because it weakens the overall security of our cyberspace, which we rely upon,” security commissioner Julian King told a press briefing.

“We’re trying to move beyond a sometimes sterile debate between backdoors or no backdoors, and address some of the concrete law enforcement challenges. For instance, when [a member state] gets a device, how do they get information that might be encrypted on the device?”


How exactly… we don’t know. Maybe someone has an RSA-cracking supercomputer up their sleeve they’re keeping secret. Maybe someone’s particularly good with a soldering iron and can read off keys from extracted flash memory chips.

What we do know is that the thrust of the plan boils down to asking member states to help each other by sharing their knowledge on dealing with encryption, and creating an observatory to keep an eye on the latest tricks of the trade.

Share the wealth

“Some member states are more equipped technically to do that [extract information from a seized device] than others,” King said.

“We want to make sure no member state is at a disadvantage, by sharing the tech expertise among the member states and reinforcing the support that Europol can offer.”

It’s hard to fault the idea of sharing expertise – indeed, security researchers The Register contacted said it was a sensible suggestion – and the commission is probably by now aware it’s onto a losing bet if it trots out the tired idea of simply banning or scuttling encryption.

Instead, as Alan Woodward, security professor at the University of Surrey, England, put it: “What they can do is try to level the playing field by ensuring that all member states have access to the latest tools and techniques that might help when encryption is encountered.”

But he added: “This doesn’t mean decryption will be any easier than it is at present for the best equipped. As recent experience has shown, some of the commonly used encryption can be remarkably resistant to analysis.”
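Woodward’s “remarkably resistant to analysis” is, if anything, an understatement where brute force is concerned. A back-of-envelope calculation shows why no amount of shared member-state kit changes the arithmetic for a modern cipher such as AES-256 (the guess rate below is a deliberately generous hypothetical):

```python
# Why brute-forcing well-implemented modern crypto is a non-starter:
# even at a billion billion guesses per second, a 256-bit keyspace
# takes on the order of 10^51 years to exhaust.
guesses_per_second = 10**18           # generous, hypothetical rate
keyspace = 2**256                     # number of possible AES-256 keys
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e} years to exhaust the keyspace")
```

Which is why, in practice, investigators go after implementation flaws, seized keys, and endpoints rather than the ciphers themselves.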

There is also the question of whether law-enforcement agencies will be happy to share their knowledge.

Thomas Rid, professor of strategic studies at Johns Hopkins University in the US, said that, although it was a sensible suggestion, it was possible “the bigger states would be extremely reluctant to share that kind of capability, because it is so fragile.”

Rid added that, overall, “public key encryption is practically saving the internet from itself,” and that it was disappointing for governments to “treat this most crucial technology as a problem.”

Data slurping measures due out next year

Elsewhere in the commission’s antiterror proposals, it confirmed that measures governing access to “electronic evidence” will be published in 2018.

This, said King, would “ensure law enforcement can get access to information, encrypted or not, when it’s held elsewhere – another member state, another jurisdiction, or in the cloud.”

The commission’s Eurospeak-filled proposals also included a smidge more funding for training investigators – a mere €500,000 from the ISF-police fund in 2018 – and support to boost Europe’s decryption capabilities.

These measures were first discussed back in June, and at the time The Reg was told talks had focused on possible “production orders” that would require technology companies based in one member state to hand over data when it is requested by cops in another. A more extreme proposal, that would allow police to copy data directly from the cloud, was also floated.

Another idea was to oblige member states holding information on a terrorist suspect to share that data on Europe’s border intelligence exchange, the Schengen information system.

“I hope that they will agree that this autumn,” King told us.

Europe earlier warned that if the world’s tech giants did not make enough progress in removing extremist content as soon as possible from the web, the commission had left itself room to legislate against the internet corps – and this will be reviewed at the start of next year. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/19/eu_crypto_cracking/

Stealth web crypto-cash miner Coin Hive back to the drawing board as blockers move in

Malwarebytes has had enough of Coin Hive’s alt-currency-generating browser-side code, and is now automatically blocking it.

The biz joins ad-block plugins in preventing Coin Hive’s Monero-crafting JavaScript from running in webpages, using visitors’ electricity and hardware to mine new money. Coin Hive is a legit outfit, and its mining code is supposed to be embedded in pages to earn revenue for site owners as an alternative to annoying ads. However, this freely available tool has been abused.

Malwarebytes said coinhive.com, which hosts the mining software, was the second-most-commonly blocked domain by its users, with 130 million users expressing their disdain for the technology.

“We do not claim that Coin Hive is malicious, or even necessarily a bad idea,” noted Adam Kujawa, director of Malwarebytes Labs, on Wednesday. “The concept of allowing folks to opt-in for an alternative to advertising, which has been plagued by everything from fake news to malvertising, is a noble one. The execution of it is another story.”

Kujawa pointed out that loads of websites are now running Coin Hive’s JavaScript to make a fast buck – either openly, or surreptitiously like the Pirate Bay, or after hackers injected the code, as in the cases of Politifact.com and Showtime’s websites.


Although Coin Hive encourages web devs to alert visitors that mining is happening in the background in their browsers, and to throttle the calculations to a sensible rate, there’s no requirement for site admins and hackers to display any warning, nor to put the brakes on coin processing. And that means folks aren’t aware of the secret number-crunching until their CPUs are jammed with activity and their batteries are dying faster than an unnamed Star Trek red shirt.
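The throttling Coin Hive encourages amounts to capping the miner’s duty cycle: do a slice of work, then sleep so only a fraction of the CPU is used. The sketch below mimics that idea in Python – it is a stand-in for the concept, not Coin Hive’s JavaScript API, and the hash loop is just a counter:

```python
# Duty-cycle throttling sketch: run a busy loop but stay active for only
# `cpu_fraction` of each time slice, sleeping the rest. This is the idea
# behind "throttle the calculations to a sensible rate", not real mining.
import time

def mine(duration_s, cpu_fraction=0.3, slice_s=0.01):
    """Busy-loop for `duration_s` seconds at roughly `cpu_fraction` CPU load."""
    rounds = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        work_until = time.monotonic() + slice_s * cpu_fraction
        while time.monotonic() < work_until:
            rounds += 1                            # stand-in for a hash round
        time.sleep(slice_s * (1 - cpu_fraction))   # yield the CPU
    return rounds

print(mine(0.1), "rounds at ~30% duty cycle")
```

An unthrottled miner is the same loop with the `sleep` removed – which is exactly the configuration that pegs CPUs and drains batteries.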

The smackdown by Malwarebytes and ad-blockers isn’t surprising – and we note Fortinet also treats Coin Hive’s site as dangerous – but neither is it total. Users can still configure the Malwarebytes software to allow Coin Hive code to work with a few simple steps.

The response from Coin Hive was surprising. No whining or prevaricating: the developers told The Register they’re totally fine with the ban.

“We can’t blame them,” the team said, noting that this isn’t the first time blocking of its software has come up. “Spotting a crypto-miner on a website is surprisingly difficult. It’s easy for technical people to check in the browser’s developer console, but it’s not obvious at all for the average user.”

Instead of complaining, the Coin Hive team already has a solution. Its new code, released this week and called AuthedMine, is similar to the previous cryptocurrency miner but with one crucial addition – a user consent page.

“AuthedMine enforces an explicit opt-in from the end user to run the miner,” the team said in a blog post on Monday.

“We have gone through great lengths to ensure that our implementation of the opt-in cannot be circumvented and we pledge that it will stay this way. The AuthedMine miner will never start without the user’s consent.”

Coin Hive also said that it is working with translators and has now written the permission page in 46 languages, with more to come. It described the response from website owners and users as “very positive,” and said it hopes web devs using the original Monero web miner will transfer to AuthedMine.

With such a good replacement, websites that don’t make the switch should expect to be called out by their users, The Reg hopes. Drop us a line if you come across a website unfairly using your computer for its benefit. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/19/malwarebytes_blocking_coin_hive_browser_cryptocurrency_miner_after_user_revolt/

6 cybersecurity predictions (that might actually come true)

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is “Today’s predictions for tomorrow’s internet”.

And that presented us with a bit of a problem.

At Naked Security we’re big fans of NCSAM but we aren’t fans of predictions. Or at least not the popular, blue sky kind that sees every glitch, failure and fumble as a sign of the impending digital Pearl Harbour. So we decided to support week three of NCSAM with some predictions but we’re doing it our way – by taking the “tomorrow’s internet” part literally.

We asked a number of people working in different technical roles at Sophos where they’re actually planning to spend some of their time and energy in the next six months.

So here are our “from the trenches” predictions that reflect what people are actually preparing for. We’re preparing for them to come true; maybe you should too.

1. More file-less attacks

Principal Threat Researcher 2, Fraser Howard:

To date, file-less attacks have been fairly isolated, but they seem to be growing in prominence (Poweliks, Angler for a bit, Kovter and, more recently, Powmet). This is a natural response to the widespread deployment of machine learning.

I also expect to see a rise in PowerShell abuse.

2. Smarter fuzzing for everyone

Senior Security Analyst 2, Stephen Edwards:

I’m expecting the sophistication of fuzzing to improve significantly. Fuzzing can be used to automatically create billions of ‘stupid’ tests, and the next challenge is to make those tests smarter by informing the test-creation process with knowledge about how a program works.

Automatic exploration of code is hard though.

Hybrid techniques try to balance the speed of stupid tests with the efficiency of smarter ones, while avoiding getting lost in too many choices.

A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques will be combined and made public.
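For readers who haven’t met fuzzing, the “stupid tests” end of the spectrum fits in a few lines: randomly corrupt a known-good input and see whether the target crashes. The sketch below is a minimal illustration with a deliberately planted bug (`fragile_parser` is invented for the demo); the smarter, coverage-guided fuzzers Stephen describes additionally keep mutants that reach new code paths.

```python
# Minimal mutation fuzzer: flip random bytes of a seed input and record
# any input that makes the target raise. Everything here is a toy built
# for illustration, including the deliberately brittle parser at the end.
import random

def mutate(data: bytes, rng) -> bytes:
    buf = bytearray(data)
    for _ in range(rng.randint(1, 3)):          # 1-3 random byte overwrites
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, rounds=1000, rng=None):
    """Return the mutated inputs that crashed `target`."""
    rng = rng or random.Random(0)               # seeded for reproducibility
    crashes = []
    for _ in range(rounds):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception:
            crashes.append(case)
    return crashes

def fragile_parser(data: bytes):
    if data[:1] != b"h":                        # the planted bug: a brittle
        raise ValueError("bad magic byte")      # header check

found = fuzz(fragile_parser, b"hello")
print(len(found), "crashing inputs found")
```

Informing `mutate` with knowledge of the input format and the program’s branches, rather than flipping bytes blindly, is precisely the “smarter tests” leap the prediction is about.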

Stephen provided such a long and detailed response to our question we published it as a full article too. It’s called Is security on the verge of a fuzzing breakthrough?

3. Ask who and what, not where

Cybersecurity Specialist, Mark Lanczak-Faulds:

Traditionally, security focuses on the domain as a whole. As we look to blur the boundaries of a traditional network and the internet, what matters are the identities and assets residing within the domain.

We need to determine risk based on identity and the assets associated with that identity. When you trigger an alert accounting for those factors, you know what’s at stake and can act proportionately and swiftly.

4. Focus on exploit mitigation

Sophos Security Specialist, Greg Iddon:

Patching is no longer something you can save until after change freeze or a rainy day.

I think that in the next six to twelve months, implementing exploit mitigation – protection against the abuse of known or unknown bugs and vulnerabilities, and the underlying way attackers take advantage of these bugs and vulnerabilities – is going to be key to staying ahead.

What concerns me most is that there is a swathe of new vendors who are only focussing on detection of Portable Executable (PE) files, touting machine learning as the be-all and end-all of endpoint security. This simply isn’t true.

Don’t get me wrong, machine learning is great, but it’s just a single layer in what must be a multi-layered approach to security.

5. Ransomware repurposed

Global Escalation Support Engineer, Peter Mackenzie:

Based on some trends we’re seeing now I think we could see a shift in the way that ransomware is used.

Unlike most other malware, ransomware is noisy and scary – it doesn’t work unless you know you’ve got it, and it has to make you feel afraid. As security tools get better at dealing with ransomware, some attackers are using that noisiness as a technique for hiding something else, or as a last resort after making money off you another way using, say, keyloggers or cryptocurrency miners.

Once you’ve removed the noisy ransomware infection it’s easy to think you’ve cleaned your system. What you need to ask is “why did it detonate now?” and “what else was, or still is, running on the computer where we found the ransomware?”.

6. Data is a liability, not an asset

Senior Cybersecurity Director, Ross McKerchar:

I expect to spend a lot of time in the next 6 months deleting unnecessary data and generally being very careful about what we store and where. It’s a defence in depth measure – the less you store the less you have to lose.

This applies across entire companies but, probably more importantly, on exposed assets such as web servers too. They should only have access to the minimum amount of data they need and nothing more. Why does a web server need to have access to someone’s SSN, for example? You may need it for other reasons, or your web server may need to collect an SSN once, but does it need to keep it?
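One concrete way to act on the “keep it only if you must” question: if an application only ever needs to check that an SSN matches a previously collected one, it can store a salted, slow hash instead of the raw value. The sketch below is one possible approach, not a compliance recommendation – and note the caveat in the comments that the SSN space is small, so hashing raises the attacker’s cost without making the data safe to leak.

```python
# Data-minimisation sketch: keep a salted PBKDF2 hash of an SSN for later
# matching, and discard the raw value. Caveat: SSNs come from a space of
# roughly a billion values, so a leaked hash+salt is still brute-forceable;
# the iteration count only raises the cost. Illustration, not advice.
import hashlib
import hmac
import os

def tokenize(ssn: str, salt: bytes) -> bytes:
    # PBKDF2 with many iterations slows offline guessing.
    return hashlib.pbkdf2_hmac("sha256", ssn.encode(), salt, 100_000)

salt = os.urandom(16)
stored = tokenize("078-05-1120", salt)   # keep hash + salt; discard the SSN

def matches(candidate: str) -> bool:
    # Constant-time comparison avoids leaking match progress via timing.
    return hmac.compare_digest(stored, tokenize(candidate, salt))

print(matches("078-05-1120"), matches("123-45-6789"))
```

The bigger win, as the prediction says, is not storing the value at all: a hash you never kept cannot be cracked.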

That’s enough from us, we’d love to read your predictions in the comments below.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/5Al4KurxIgA/

IRS tax bods tell Americans to chill out about Equifax

The United States Internal Revenue Service has said that citizens affected by the Equifax breach need not panic, because it probably didn’t reveal anything that hasn’t already been stolen and the agency has tooled up to deal with fraudulent tax claims.

Commissioner John Koskinen, discussing whether the breach would interfere with tax collection, told journalists “a significant percent of those taxpayers already had their information in the hands of criminals”, according to a report of a Q&A session after a speech at the Service’s “Security Summit”.

In his prepared remarks, the commissioner said “We’ve seen the number of identity theft-related tax returns fall by about two-thirds since 2015. Over the past two years, fewer false returns have entered the system, fewer fraudulent refunds have been issued and fewer taxpayers have reported to the IRS that they were victims of identity theft. This dramatic decline helped prevent hundreds of thousands of taxpayers from facing the challenge of dealing with identity theft issues.”

But that still leaves as many as 100 million individuals at risk of Equifax-sourced data giving them problems beyond the IRS. Koskinen added that Americans should assume their data is in criminal hands and act accordingly.

As we reported at the time of the mega-breach, not everything Equifax knew about Americans was leaked: “only the names, social security numbers, birth dates, addresses and, in some instances, driver’s license numbers of 143 million Americans”.

It later emerged that the patching error that left the credit reporting company trouserless was common, with estimates that as many as 50,000 organisations downloaded still-vulnerable Apache Struts 2 packages after the software was patched against CVE-2017-5638.
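Per the Apache Struts S2-045 advisory, the fixed releases for CVE-2017-5638 were 2.3.32 and 2.5.10.1 – so the check those 50,000 organisations evidently skipped is a simple version comparison. Below is a hypothetical sketch of such a check (the helper names are invented; a real deployment would extract the `struts2-core` version from its build files first):

```python
# Flag struts2-core versions that predate the CVE-2017-5638 fixes
# (2.3.32 on the 2.3 line, 2.5.10.1 on the 2.5 line, per advisory S2-045).
# Conservatively treats everything below the fixed release as suspect.
def version_tuple(v: str):
    return tuple(int(p) for p in v.split("."))

def vulnerable_to_5638(version: str) -> bool:
    v = version_tuple(version)
    if v[:2] == (2, 3):
        return v < version_tuple("2.3.32")
    if v[:2] == (2, 5):
        return v < version_tuple("2.5.10.1")
    return False   # other lines are outside the advisory's affected range

for v in ["2.3.31", "2.3.32", "2.5.10", "2.5.10.1"]:
    print(v, "VULNERABLE" if vulnerable_to_5638(v) else "ok")
```

Python’s tuple comparison handles the mixed-length version numbers correctly: `(2, 5, 10) < (2, 5, 10, 1)` is true, so 2.5.10 is rightly flagged.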

Koskinen promised taxpayers the IRS wouldn’t end up on the breach list, given how much “sensitive personal information has fallen into the hands of criminals recently”.

The Register decided a reality test was in order, and asked Troy Hunt (who maintains the HaveIBeenPwned database of breached accounts) whether Koskinen’s remarks ring true.

“I think that would be just under one-third of the population … it may be fractionally on the high side,” Hunt said.

However, any general statement that “what’s technically called a sh*tload” of Americans were already pwned is “probably accurate”. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/18/internal_revenue_service_tells_americans_their_data_was_probably_stolen_before_equifax_lost_it/

You’re doing open source wrong, Microsoft tsk-tsk-tsks at Google: Chrome security fixes made public too early

A few weeks ago, Google paid Microsoft $7,500 after Redmond’s security gurus found, exploited and reported a vulnerability in the Chrome browser – a flaw that would allow malicious webpages to run malware on PCs.

Now Microsoft isn’t entirely happy with the way Google handled it, and having been schooled a few times on security by the web giant, the Windows goliath has taken the opportunity to turn the tables and do a little finger wagging of its own.

As it turns out, the Chrome bug is pretty interesting. The Microsoft Offensive Security Research team fired up its internal ExprGen fuzzer, normally used to hunt for vulnerabilities in Edge’s Chakra JavaScript engine, and pointed it at Google’s browser. The Redmond gang found that they could reliably crash Chrome’s V8 JavaScript engine, but couldn’t work out what the exact issue was.

They found that the Chrome programming cockup appeared in code dynamically generated by V8’s just-in-time compiler, but only when, on a 64-bit Intel system, the processor’s rax register was zero and used as a base pointer. This wasn’t good news, because it looked like a classic null-dereference bug – rax had been set to null but used anyway – which is a pain to exploit because today’s operating systems forbid access at and near address zero.

The issue was traced to a memory slot being used before it was initialized with a valid pointer, and the team found it could spray enough values over memory to fill in the slot with their own pointer. The team then found a way to exploit this to read and write as they pleased in memory. This arbitrary access was, as usual, the bridge the gang needed to place their own code in memory and then change a function pointer to that code, so it is executed by the browser. Now they had control of Chrome from data injected by a webpage: straight-up remote-code execution, and a ticket to compromising the browser and potentially the underlying system.
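The core trick – an uninitialized slot inheriting whatever a spray left behind – can be shown with a toy model. This is a deliberately simplified simulation of the concept, not V8’s allocator: `ToyHeap` is invented for the demo, and memory is just a Python list.

```python
# Toy model of "spray then read uninitialized": an allocator that hands
# out memory without clearing it, so values an attacker wrote earlier
# linger in a slot the victim later reads before initialising it.
class ToyHeap:
    def __init__(self, size):
        self.cells = [0] * size
        self.next = 0

    def alloc(self, n):
        """Return the start index of n cells WITHOUT zeroing them (the toy bug)."""
        start, self.next = self.next, self.next + n
        return start

heap = ToyHeap(64)

# Attacker sprays a chosen "pointer" value over everything it can write...
spray = heap.alloc(64)
for i in range(64):
    heap.cells[spray + i] = 0x41414141
heap.next = 0            # ...and the allocator later reuses that memory.

# Victim allocates a slot, forgets to initialise it, then uses its contents.
slot = heap.alloc(1)
pointer = heap.cells[slot]   # stale, attacker-controlled value
print(hex(pointer))
```

In the real exploit that attacker-controlled value stood in for a pointer, which is what upgraded a null-dereference-looking crash into arbitrary read/write.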

You can read the full, highly detailed, explanation here.

Google fixed the issue within days of being alerted to the bug by Microsoft, and paid a bug bounty to the researchers, along with another $8,337 for other uncovered blunders. And the team may have been tempted to go for dinner and lots of drinks, but instead donated the dosh to charity. But while the problem was easy enough to fix, it was what happened next that had the Microsofties raising their eyebrows.

The team sent its bug report to Chrome engineers on September 14 and it was acknowledged and fixed within a week. The fix was pushed out to the public Chrome GitHub source code repository days before new builds featuring the security patch were released to the world. This delay between security fixes appearing in the GitHub repo and updated binaries going out to the public, Redmond felt, poses a real danger.

Eagle-eyed miscreants watching the GitHub repo can spot fixes applied publicly in the Chrome source code, and develop and deploy malware exploiting these bugs before people get a chance to download and install corrected versions of the browser. During that window, users’ Chrome installations are vulnerable.
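The window exists because security fixes are easy to spot in a public commit stream. A crude sketch of what such a watcher might do is below – the keyword list is an illustrative guess, and the commit messages are invented; a real tool would pull `git log` from the repository rather than a hard-coded list:

```python
# Crude "patch-gap" watcher: scan commit messages for phrases that tend
# to mark quiet security fixes. Keywords and sample log are illustrative.
SUSPICIOUS = ("overflow", "use-after-free", "uninitialized", "security",
              "bounds check", "regression test")

def flag_commits(log):
    """Return commit messages that look like silent security fixes."""
    return [msg for msg in log
            if any(k in msg.lower() for k in SUSPICIOUS)]

log = [
    "Update DEPS",
    "Fix missing bounds check in typed array copy",
    "Roll clang to r305732",
]
print(flag_commits(log))
```

As Microsoft notes below, an accompanying regression test makes the giveaway even starker: it hands the attacker a ready-made trigger for the bug.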

For example, the above V8 hole was fixed publicly in the source code here, and Chrome was updated and released three days later. Microsoft gave another example, though: this private security bug report with an accompanying public patch. The code wasn’t released as a stable build until a month later.

On Wednesday this week, Microsoft team member Jordan Rabet said:

Servicing security fixes is an important part of the process and, to Google’s credit, their turnaround was impressive: the [V8 engine] bug fix was committed just four days after the initial report, and the fixed build was released three days after that. However, it’s important to note that the source code for the fix was made available publicly on Github before being pushed to customers. Although the fix for this issue does not immediately give away the underlying vulnerability, other cases can be less subtle.

Case in point, this security bug tracker item was also kept private at the time, but the public fix made the vulnerability obvious, especially as it came with a regression test.

This can be expected of an open source project, but it is problematic when the vulnerabilities are made known to attackers ahead of the patches being made available. In this specific case, the stable channel of Chrome remained vulnerable for nearly a month after that commit was pushed to git. That is more than enough time for an attacker to exploit it.

Somewhat primly, Rabet noted that Microsoft’s own Chakra JavaScript engine is open source, and Redmond would never release a flaw report before it was fixed for just this reason.

“Some Microsoft Edge components, such as Chakra, are also open source. Because we believe that it’s important to ship fixes to customers before making them public knowledge, we only update the Chakra git repository after the patch has shipped,” said Rabet.

“Our strategies may differ, but we believe in collaborating across the security industry in order to help protect customers. This includes disclosing vulnerabilities to vendors through Coordinated Vulnerability Disclosure (CVD), and partnering throughout the process of delivering security fixes.”

Back in old Blighty, we’d call that a score draw, Google. The advertising giant did not respond to a request for comment. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2017/10/19/microsoft_google_security_chrome/

Oracle Fixes 20 Remotely Exploitable Java SE Vulns

Quarterly update for October is the smallest of the year: only 252 flaws to fix! Oracle advises customers to apply patches ‘without delay.’

Oracle this week urged administrators to apply security patches to their systems more quickly even as it increased their burden with a set of fresh fixes for another 252 vulnerabilities across products including Oracle Database Server and Java SE.

Tuesday’s critical patch update (CPU) is Oracle’s first since news of the Equifax breach and of serious vulnerabilities in the widely used WPA2 WiFi security protocol, and separately, in numerous products featuring a particular crypto chipset from Infineon.

In its patch availability announcement, Oracle did not specifically call out these incidents as heightening the urgency for organizations to apply its CPUs more promptly. Instead, as it usually does, Oracle more generally cautioned customers about periodic reports it receives of intruders successfully breaking into organizations by exploiting vulnerabilities for which the company has already issued patches.

“Oracle therefore strongly recommends that customers remain on actively supported versions and apply Critical Patch Update fixes without delay,” the company noted.

Big as October’s CPU is, it is actually smaller than Oracle’s last one in July when the company announced fixes for 310 flaws and the one before in April that involved patches for 300 vulnerabilities.

Commenting on the security update, application security vendor Waratek said the CPU contains fixes for bugs in the Java Virtual Machine and five additional components in Oracle’s Database Server. Two of the patched flaws are remotely exploitable without the need for any credentials.

Oracle’s October CPU also patches 22 Java SE vulnerabilities, says Chris Goettl, product manager at Ivanti. “Twenty of these may be remotely exploited without requiring authentication,” Goettl says.

“What this means is an attacker with a foothold in your environment just needs to be able to resolve a system with one of these vulnerabilities exposed and they would not even need to have a credential to exploit it.”

Unlike Microsoft’s monthly updates, Oracle releases its security patches once every three months, so the October update is the last one of the year. So far in 2017, Oracle has fixed a total of 79 vulnerabilities in Java SE, more than double the 37 it addressed last year, Waratek noted. The sharp increase suggests that Oracle is paying more attention to finding flaws in Java SE. But it also underscores the growing risk that the Java platform poses for organizations, the vendor observed in its commentary.

“While smaller than recent CPUs, there are very important updates included in this critical patch such as patches that fix the serialization flaws,” Waratek security architect Apostolos Giannakidis said in the guide. “This CPU is not backwards compatible for specific cryptographic classes. If security teams are not mindful, applying the CPU risks breaking the application.”

James Lee, executive vice president and chief marketing officer at Waratek, says the key takeaway here is that patching quickly is vital. “The bad guys have been banging away since the CPU was released looking for the flaws that can be remotely exploited,” he notes.

The Oracle CPU is an all-or-nothing patch, so an organization has to apply everything at once, he adds. Between configuration, coding and testing, it can take weeks or months for an organization to fully deploy such updates. “So these don’t tend to be fast fixes,” Lee says.

Patch teams are already under tremendous pressure dealing with new and previously released patches from Oracle, Microsoft, IBM and others while ensuring their organizations are protected against the Apache Struts vulnerability that felled Equifax. Patch teams also have their hands full ensuring their organizations are protected against the WPA2 flaw and the factorization bug in the Infineon chipset, Lee notes. “When you layer legacy software on top of that, the teams charged with keeping apps safe are overwhelmed with the task.”


Join Dark Reading LIVE for two days of practical cyber defense discussions. Learn from the industry’s most knowledgeable IT security experts. Check out the INsecurity agenda here.

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/oracle-fixes-20-remotely-exploitable-java-se-vulns-/d/d-id/1330166?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Malicious Minecraft Apps on Google Android Could Turn Devices into Bots

New ‘Sockbot’ malware has ‘highly flexible proxy topology’ that might be leveraged for a variety of nefarious purposes.

Eight “skin apps” for changing the appearance of video-game characters in the Minecraft Pocket Edition for Android are wrapped in new malware that Symantec researchers have dubbed “Sockbot.” Although the malware is currently being used to drive app users to ads, Sockbot’s “highly flexible proxy topology” could also turn infected devices into bots, researchers say, and use those devices to launch distributed denial-of-service attacks and exploit a variety of network-based vulnerabilities.

Sockbot is thus named because its command-and-control server requests that the app open a socket using SOCKS and wait for a connection from a specified IP address on a specified port. As researchers explain: “A connection arrives from the specified IP address on the specified port, and a command to connect to a target server is issued. The app connects to the requested target server and receives a list of ads and associated metadata (ad type, screen size name). Using this same SOCKS proxy mechanism, the app is commanded to connect to an ad server and launch ad requests.”
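The SOCKS detail matters because it lets the operator pick the target after infection. Here is a minimal sketch of the SOCKS5 CONNECT wire format (RFC 1928) that such a proxy relies on; this is an illustration of the protocol itself, not Sockbot’s code:

```python
import ipaddress
import struct

SOCKS_VERSION = 5
CMD_CONNECT = 1
ATYP_IPV4 = 1

def build_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request for an IPv4 target (RFC 1928)."""
    header = struct.pack("!BBBB", SOCKS_VERSION, CMD_CONNECT, 0, ATYP_IPV4)
    return header + ipaddress.IPv4Address(host).packed + struct.pack("!H", port)

def parse_connect_request(data: bytes) -> tuple[str, int]:
    """Recover the target host and port from a SOCKS5 CONNECT request."""
    ver, cmd, _rsv, atyp = struct.unpack("!BBBB", data[:4])
    if ver != SOCKS_VERSION or cmd != CMD_CONNECT or atyp != ATYP_IPV4:
        raise ValueError("unsupported SOCKS request")
    host = str(ipaddress.IPv4Address(data[4:8]))
    (port,) = struct.unpack("!H", data[8:10])
    return host, port
```

Because the request names an arbitrary host and port, a device that obediently opens such connections can be steered at ad servers today and at any other target tomorrow, which is why researchers flag the topology as flexible.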

The apps, developed by “FunBaster,” are all available on Google Play, Android’s official app store. Symantec estimates that between 600,000 and 2.6 million devices have been exposed.

Read more in Symantec’s blog today

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/mobile/malicious-minecraft-apps-on-google-android-could-turn-devices-into-bots/d/d-id/1330161?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 cybersecurity predictions (that might actually come true)

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is Today’s predictions for tomorrow’s internet.

And that presented us with a bit of a problem.

At Naked Security we’re big fans of NCSAM but we aren’t fans of predictions. Or at least not the popular, blue sky kind that sees every glitch, failure and fumble as a sign of the impending digital Pearl Harbour. So we decided to support week three of NCSAM with some predictions but we’re doing it our way – by taking the “tomorrow’s internet” part literally.

We asked a number of people working in different technical roles at Sophos where they’re actually planning to spend some of their time and energy in the next six months.

So here are our “from the trenches” predictions that reflect what people are actually preparing for. We’re preparing for them to come true; maybe you should too.

1. More file-less attacks

Principal Threat Researcher 2, Fraser Howard:

To date, file-less attacks have been fairly isolated, but they seem to be growing in prominence (Poweliks, Angler for a bit, Kovter and more recently Powmet). This is a natural response to the widespread deployment of machine learning.

I also expect to see a rise in PowerShell abuse.
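As an illustration of what PowerShell abuse can look like to a defender, here is a rough sketch of a command-line heuristic. The indicator list is ours and deliberately minimal; real detection logic weighs far more signals than this:

```python
import re

# Illustrative indicators only: flags commonly abused in malicious
# PowerShell invocations (encoded payloads, hidden windows, policy bypass,
# download-and-execute one-liners).
SUSPICIOUS_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"-enc(odedcommand)?\b",                     # base64-encoded payload
        r"-w(indowstyle)?\s+hidden",                 # hidden console window
        r"-ep\s+bypass|-executionpolicy\s+bypass",   # execution policy bypass
        r"downloadstring|iex\b|invoke-expression",   # download-and-run
    )
]

def score_command_line(cmdline: str) -> int:
    """Count how many suspicious indicators appear in a command line."""
    return sum(1 for pat in SUSPICIOUS_PATTERNS if pat.search(cmdline))
```

A benign admin command scores zero, while a hidden-window invocation carrying an encoded payload trips multiple indicators at once.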

2. Smarter fuzzing for everyone

Senior Security Analyst 2, Stephen Edwards:

I’m expecting the sophistication of fuzzing to improve significantly. Fuzzing can be used to automatically create billions of ‘stupid’ tests, and the next challenge is to make those tests smarter by informing the test creation process with knowledge about how a program works.

Automatic exploration of code is hard though.

Hybrid techniques try to balance the speed of stupid tests with the efficiency of smarter ones, while avoiding getting lost in too many choices.

A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques will be combined and made public.
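The interplay of ‘stupid’ tests and smarter feedback can be sketched in miniature. The toy target and one-byte mutator below are our own illustration of coverage-guided fuzzing, not any particular tool: mutants that reach new branches are kept and mutated further, which is how cheap random tests gradually chain through nested checks:

```python
import random

def target(data: bytes) -> set[str]:
    """Toy program under test: returns the set of branches the input reached."""
    branches = {"start"}
    if data[:1] == b"F":
        branches.add("magic-1")
        if data[1:2] == b"U":
            branches.add("magic-2")
            if data[2:3] == b"Z":
                branches.add("magic-3")
    return branches

def fuzz(seed: bytes, rounds: int = 50000, rng=None) -> set[str]:
    """Minimal coverage-guided loop: keep mutants that hit new branches."""
    rng = rng or random.Random(0)
    corpus, seen = [seed], set(target(seed))
    for _ in range(rounds):
        parent = bytearray(rng.choice(corpus))
        parent[rng.randrange(len(parent))] = rng.randrange(256)  # one-byte flip
        hit = target(bytes(parent))
        if not hit <= seen:          # new coverage: keep this input as a parent
            seen |= hit
            corpus.append(bytes(parent))
    return seen
```

Pure random input would almost never guess a multi-byte magic value, but keeping each partial success as a new starting point lets the fuzzer climb the nested conditions one byte at a time.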

Stephen provided such a long and detailed response to our question we published it as a full article too. It’s called Is security on the verge of a fuzzing breakthrough?

3. Ask who and what, not where

Cyber Security Specialist, Mark Lanczak-Faulds:

Traditionally, security focuses on the domain as a whole. As we look to blur the boundaries of a traditional network and the internet, what matters are the identities and assets residing within the domain.

We need to determine risk based on identity and the assets associated with that identity. When you trigger an alert accounting for those factors, you know what’s at stake and can act proportionately and swiftly.
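As a sketch of what risk scoring along those lines might look like, here is a toy model; the weight tables and categories are hypothetical, purely to show raw alert severity being scaled by who triggered it and what it touched:

```python
from dataclasses import dataclass

# Hypothetical weights for illustration; a real model would be tuned
# per organization and draw on many more identity and asset attributes.
PRIVILEGE_WEIGHT = {"user": 1, "admin": 3, "domain-admin": 5}
ASSET_WEIGHT = {"workstation": 1, "file-server": 3, "domain-controller": 5}

@dataclass
class Alert:
    identity_privilege: str   # e.g. "admin"
    asset_class: str          # e.g. "domain-controller"
    base_severity: int        # severity from the detection itself, 1-5

def risk_score(alert: Alert) -> int:
    """Scale raw alert severity by the identity and the asset involved."""
    return (alert.base_severity
            * PRIVILEGE_WEIGHT.get(alert.identity_privilege, 1)
            * ASSET_WEIGHT.get(alert.asset_class, 1))
```

The same detection fired by a domain admin on a domain controller scores far higher than on an ordinary user’s workstation, which is the proportionate response the prediction argues for.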

4. Focus on exploit mitigation

Sophos Security Specialist, Greg Iddon:

Patching is no longer something you can save until after change freeze or a rainy day.

I think that in the next six to twelve months, implementing exploit mitigation – protection against the abuse of known or unknown bugs and vulnerabilities, and the underlying way attackers take advantage of these bugs and vulnerabilities – is going to be key to staying ahead.

What concerns me most is that there is a swathe of new vendors who are only focussing on detection of Portable Executable (PE) files, touting machine learning as the be-all and end-all of endpoint security. This simply isn’t true.

Don’t get me wrong, machine learning is great, but it’s just a single layer in what must be a multi-layered approach to security.
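To make the layering point concrete, here is a deliberately simplified sketch (the thresholds and layer names are invented for illustration) of why no single layer, machine learning included, should carry the verdict alone:

```python
def layered_verdict(ml_score: float,
                    signature_hit: bool,
                    exploit_mitigation_trip: bool) -> str:
    """Combine independent detection layers into one verdict.

    Deterministic layers (signatures, exploit mitigation) convict outright;
    the ML layer convicts alone only at high confidence, and routes
    mid-confidence cases to review instead of silently allowing them.
    """
    if signature_hit or exploit_mitigation_trip:
        return "block"
    if ml_score >= 0.9:
        return "block"
    if ml_score >= 0.5:
        return "quarantine-and-review"
    return "allow"
```

An exploit-mitigation trip blocks even when the ML model sees nothing, and a mid-range ML score triggers review rather than a verdict: each layer covers blind spots in the others.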

5. Ransomware repurposed

Global Escalation Support Engineer, Peter Mackenzie:

Based on some trends we’re seeing now I think we could see a shift in the way that ransomware is used.

Unlike most other malware, ransomware is noisy and scary – it doesn’t work unless you know you’ve got it, and it has to make you feel afraid. As security tools get better at dealing with ransomware, some attackers are using that noisiness as a technique for hiding something else, or as a last resort after making money off you another way using, say, keyloggers or cryptocurrency miners.

Once you’ve removed the noisy ransomware infection it’s easy to think you’ve cleaned your system. What you need to ask is “why did it detonate now?” and “what else was, or still is, running on the computer where we found the ransomware?”.
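As a small, hypothetical example of that follow-up question, a triage pass over still-running processes might start by flagging anything launched from directories droppers favour; the directory list here is illustrative only:

```python
# Illustrative, not exhaustive: writable locations commonly abused by
# droppers to stage secondary payloads on Windows.
SUSPICIOUS_DIRS = ("\\appdata\\local\\temp\\", "\\programdata\\", "\\users\\public\\")

def triage(processes):
    """Given (name, image path) pairs of still-running processes,
    flag the ones launched from locations commonly abused by droppers."""
    return [name for name, path in processes
            if any(d in path.lower() for d in SUSPICIOUS_DIRS)]
```

A pass like this will not prove the ransomware was a decoy, but it turns “what else is still running?” from a rhetorical question into a checklist.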

That’s enough from us, we’d love to read your predictions in the comments below.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/MdLAbCvx2hk/