STE WILLIAMS

Good news, everyone: Ransomware declining. Bad news: Miscreants are turning to crypto-mining on infected PCs

For the past few years, ransomware has been a bane of computer users. These software nasties infect PCs, scramble files, and demand payment in cryptocurrency to restore the documents.

Those cryptocurrencies are a right faff to get hold of and transfer to miscreants at short notice. And there’s no guarantee crooks will hand over the decryption key. And antivirus and operating systems are getting much, much better at blocking ransomware infections. As such, people either don’t pay up, or don’t need to.


It’s therefore no wonder that criminals are cutting out the middle person – the human victim – and infecting machines with remote-controlled malware that quietly mines alt-coins and slips the digital dosh back to its masters.

A single hijacked box can typically mine about 25 cents of Monero a day. Multiply that over tens of thousands of machines, and it adds up to a nice little earner. According to security researchers, criminals are shifting from coining it with ransomware to raking it in directly with stealthy miners.

A Monero miner that’s injected by hackers into Windows-powered computers using the NSA’s stolen and leaked EternalBlue exploit has netted its overlords $2.8m to $3.6m, according to infosec biz Proofpoint today.

This remote-controlled network of mining bots, dubbed Smominru, is churning out roughly 24 XMR ($8,500) a day, we’re told. The botnet has press-ganged something like 526,000 Windows boxes, which are mostly servers and mostly in Russia, India, and Taiwan. Two dozen or so computers crawl the internet for vulnerable devices and hijack them using EternalBlue, which attacks Windows network file-sharing services.

At the end of last year, Panda Security also found a similar EternalBlue-wielding Monero mining network, dubbed WannaMine after the WannaCry ransomware that famously used the NSA’s exploit.

You’re havin’ a bubble, mate

Cisco’s Talos security team has also seen a marked increase in covert cryptocurrency miner installations. The top five digital-coin-crafting operations found by Talos have been similarly making serious bank – up to $330,000 a year in just one case. The rewards are potentially even greater considering the bubble online currency prices are still in.

“The number of ways adversaries are delivering miners to end users is staggering. It is reminiscent of the explosion of ransomware we saw several years ago,” the Talos team said on Wednesday. “This is indicative of a major shift in the types of payloads adversaries are trying to deliver. It helps show that the effectiveness of ransomware as a payload is limited.”

The thing about ransomware is that it’s easy for security tools to detect and block: just look out for programs that start working their way through filesystems to encrypt the contents of documents. No normal everyday application behaves like that.
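One crude version of that heuristic is an entropy check: encrypted output is statistically close to random, so a process that suddenly rewrites many files with near-maximum-entropy contents is suspect. A minimal sketch, where the 7.5 bits-per-byte threshold is an illustrative assumption rather than a figure from any real product:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for random/encrypted data, much lower for text."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Plain documents rarely exceed ~6 bits/byte; ciphertext sits near 8.
    return shannon_entropy(data) > threshold
```

A real monitor would sample recently modified files and alert when many of them flip from low to high entropy in a short window; compressed formats like ZIP and JPEG also score high, so production tools whitelist those.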

Currency crafting software, on the other hand, doesn’t do anything obviously weird apart from consuming some processor time. Monero mining is ideal for covert crooks because it doesn’t require a lot of processing oomph, meaning it can be done in the background with the victim none the wiser.

The miner has to contact an outside server to transfer out its coins, though, which a network admin can detect. Some mining code – particularly Coinhive’s widespread JavaScript – is stopped by antivirus packages and ad-blocking tools, so mineware will have to disguise itself or use fresh cryptographic routines to avoid detection.
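On the network side, one simple admin-side check is flagging outbound connections to ports commonly used by Stratum mining pools. A sketch of the idea, with an illustrative (not exhaustive) port list:

```python
# Ports commonly associated with Stratum mining pools; illustrative only.
SUSPECT_PORTS = {3333, 4444, 5555, 7777, 14444}

def flag_mining_connections(connections):
    """connections: iterable of (remote_ip, remote_port) tuples, e.g.
    scraped from netflow records or netstat output. Returns the subset
    worth a closer look."""
    return [(ip, port) for ip, port in connections if port in SUSPECT_PORTS]
```

Real deployments also match known pool hostnames and TLS SNI, since a miner can just as easily talk to its pool over port 443.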

To get a miner onto a computer, the victim is typically tricked into opening that old chestnut: a booby-trapped Word or other Office document. When opened, the malicious file downloads the mining software from online storage and sets it to work.

Exploit toolkits are also getting mining code to inject into infiltrated systems. The Talos team reported that the RIG exploit kit now has a miner on offer, and one miscreant was pulling in $85 a day using the system, which may not sound like much but adds up to $31,000 a year, tax free.

The only sign that a miner is installed is an increased CPU load on the infected machine, and the occasional transfer of coinage out of the system. Miscreants can configure their malware to send back mined coins daily, but that increases the chance of detection. Leave it too long between deposits, however, and all that sneaky coinage could be lost if the infection is spotted.

Not all miners are as smart as others. The Talos team found one inept CPU-cycle thief who was installing open-source mining code called NiceHash Miner, which is on GitHub. The crook forgot to change the default settings in the app, meaning that any coinage mined went to the software’s developer, not the idiot sticking it on other people’s systems.

Talos recommends enterprises scan their systems for undercover miners and strip them out as soon as possible. ®

Sponsored:
Minds Mastering Machines – Call for papers now open

Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/02/01/monero_mining_malware/

Lazarus Group, Fancy Bear Most Active Threat Groups in 2017

Lazarus, believed to operate out of North Korea, and Fancy Bear, believed to operate out of Russia, were the most referenced threat actor groups in last year’s cyberattacks.

The busiest threat actor groups of 2017 were Sofacy (otherwise known as Fancy Bear or APT28) and the Lazarus Group, security experts report. As these groups ramped up activity, threat actors operating out of China became quiet.

Analysts at AlienVault leveraged data from its Open Threat Exchange (OTX) threat intelligence sharing platform to take a broad look at threat patterns from last year. They found the most frequently referenced threat group in 2017 was Sofacy.

Ten years ago, Sofacy primarily targeted NATO and defense ministries. Over the past three years its operations have expanded to target businesses, individuals, and elections in the United States and France. Leaked information from the US government, and an official report from the German government, indicate the threat group is associated with Russian military intelligence.

The second most active group was Lazarus, which is believed to operate out of North Korea (or Democratic People’s Republic of Korea, DPRK).

“In the past, security researchers thought DPRK cyber adversaries were unsophisticated compared to more traditional nation-state adversary groups, like China or Russia,” says Dmitri Alperovitch, cofounder and CTO at CrowdStrike.

“However, the North Korea regime has invested significant resources in training and development in recent years and their cyber capabilities have matured significantly as a result.” Alperovitch points out that in 2017, cyber operations were linked to DPRK almost monthly. Lazarus was linked to WannaCry and has hacked into banks and cryptocurrency exchanges.

CrowdStrike found Lazarus comprises four groups: Silent Chollima, Stardust Chollima, Labyrinth Chollima, and Ricochet Chollima. Most adversaries focus on targeted attacks or cyberespionage; DPRK threat actors aren’t as particular. While they primarily focused on South Korean targets in 2017, they have been known to hit organizations in other regions.

What usually motivates these groups? John Bambenek, manager of threat systems at Fidelis Cybersecurity, says financial gain is often a driver. “You’re dealing with organized crime, in essence,” he explains. “There’s a payday at the end of it.”

Attackers, specifically those in North Korea, have begun turning to cryptocurrency. More are targeting consumer devices and leveraging their computing power to mine crypto. “For a nation that is highly sanctioned with currency requirements, Bitcoin and its related cousins provided great means to capitalize,” Bambenek points out.

The goals of nation-state threat actors will vary from group to group. Those looking for money could target cryptocurrency exchanges while those seeking to disrupt election cycles could target social media to spread disinformation. “It depends on the geopolitical circumstances,” he says.

Why Chinese threat groups fell silent

AlienVault’s data shows Stone Panda, also known as APT10 or CloudHopper, came in tenth place for 2017 activity. This is the highest-ranked group operating out of China, and AlienVault threat engineer Chris Doman notes its ranking “would have been very different three years ago.”

The last year saw a significant decrease in the number of targeted attacks from China-based threat groups against Western businesses. While this followed political pressure and agreements to stop activity, it’s also possible their attacks have become tougher to detect. CloudHopper is known to hit targets by compromising major IT service providers, a method that’s difficult for vendors and government agencies to detect.

“We may continue to see reported activity from groups in China drop further,” Doman writes, adding that UPS (also known as Boyusec or APT3) switched from Western to domestic targets.

What should you worry about?

Alperovitch warns businesses to worry about the danger North Korean threat groups pose to their brands and networks. “These adversaries have demonstrated a degree of unpredictability about what they may try to do next,” he says. “It is important for organizations to continually hunt their systems for potential intrusions and swiftly remediate before any damage is done.”

Bambenek acknowledges the potential for ICS-based attacks, which he says will be a growing area of focus for threat groups. “Someone will take a utility hostage for ransom,” he says. “With Triton getting published to GitHub, we’ve drastically lowered the bar for ICS attacks.”


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University. View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/lazarus-group-fancy-bear-most-active-threat-groups-in-2017/d/d-id/1330954?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Cloud Least-Privilege Function Goes Live

Custom Roles for Cloud IAM now available in production from Google.

Google today announced that its custom roles feature for its Cloud IAM (Identity and Access Management) is now available in production after a beta phase that began last October.

The new least-privilege feature lets organizations carve out specific, custom access for users, ensuring they can reach only the resources, and perform only the functions, they need for their jobs.

“IT security aims to ensure the right people have access to the right resources and use them in the right ways. Making sure those are the only things that can happen is the ‘principle of least privilege,’ a cornerstone of enterprise security policy,” Google product manager Rohit Khare and engineering manager Pradeep Madhavarapu wrote in a post about the new offering.

While Google Cloud already provides predefined roles such as owner and cloud storage viewer, for example, the custom roles feature allows more specific and targeted user roles.
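Custom roles are created from a list of individual IAM permissions. A hypothetical example using the `gcloud` CLI, where the project ID, role ID, title, and permission list are all illustrative:

```shell
# Hypothetical: a read-only log-viewer role for project "my-project".
# Role ID, title, and permissions are illustrative placeholders.
gcloud iam roles create logReadOnly \
    --project=my-project \
    --title="Log Read Only" \
    --permissions=logging.logEntries.list,logging.logs.list \
    --stage=GA
```

Users granted only this role can list log entries but cannot, say, delete logs or touch compute resources, which is the least-privilege principle in practice.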

Read more here about Custom roles for Cloud IAM.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/google-cloud-least-privilege-function-goes-live/d/d-id/1330957?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Johnny Hacker hauls out NSA-crafted Server Message Block exploits, revamps ’em

Hackers* have improved the reliability and potency of Server Message Block (SMB) exploits used to carry out the hard-hitting NotPetya ransomware attack last year.

EternalBlue, EternalSynergy, EternalRomance and EternalChampion formed part of the arsenal of NSA-developed hacking tools that were leaked by the Shadow Brokers group before they were used (in part) to mount the devastating NotPetya cyber attack.

The exploits – linked to the CVE-2017-0143 and CVE-2017-0146 Microsoft vulnerabilities – have been “rewritten and stabilised” to affect operating systems from Windows 2000 up to and including Server 2016 edition, Heimdal Security warns. These beefed-up exploits can be used to push arbitrary code on vulnerable systems targeted with specially crafted messages to the Microsoft SMB servers.

“Instead of going for injecting a shellcode into a target system and taking control over it, attackers will try to overwrite the SMB connection session structures to gain admin rights over the system,” Heimdal said.

“After that, the exploit module will drop to disk (or use a PowerShell command), explains zerosum0x0, and then copy directly to the hard drive.”

Worse yet, the revamped exploits could have worm-like self-replicating abilities, meaning any infection could spread far more quickly.

The development makes patching older server-based systems an even higher priority. Those still relying on Windows 2000 Server need to either disable or firewall inbound SMB traffic, since there’s no patch.
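For boxes that can’t be patched, blocking inbound SMB at the host firewall is straightforward. A hypothetical Windows rule (the rule name is arbitrary; run from an elevated prompt, and note this will also break legitimate file sharing to the machine):

```shell
:: Block inbound SMB (TCP 445) on an unpatchable host.
netsh advfirewall firewall add rule name="Block inbound SMB" ^
    dir=in action=block protocol=TCP localport=445
```

Better still is blocking TCP 445 at the network perimeter as well, so internet-facing scanners like Smominru’s never reach the service at all.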

Heimdal’s call to patch is backed by an earlier warning by security researcher Kevin Beaumont, who warned that the revamped exploits can be used to push ransomware, trojans or other nasties onto vulnerable Windows systems instead of simply crashing them and causing the infamous Blue Screen of Death.

The crashing, rather than spreading, effect limited the impact of the WannaCry outbreak, which partly relied on the EternalBlue exploit.

Patches that address the vulnerabilities are already available in the shape of updates from MS17-010 onwards. ®

*Including those who are not named John, Johnnie, Janelle or Jonah


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/31/wannacry_smb_exploit_beefed_up/

Terror law expert to UK.gov: Why backdoors when there’s so much other data to slurp?

Secure end-to-end encrypted comms is a desirable technology that governments should stop trying to break, especially as there’s other information to slurp up on crims, UK politicians were told this week.

Blighty’s former independent reviewer of terrorism legislation, David Anderson, told the House of Commons Home Affairs Committee on Tuesday that there are plenty of sources of intelligence for law enforcement to get their hands on, rather than banging the drum for backdoors in communications.

In what has now become a frustratingly standard question from politicians about tech companies’ role in the war on terror, Anderson was asked if he thought the state would ever get access to encrypted messages for security purposes.

“No,” he replied. “Because end-to-end encryption is not only a fact of life, it is, on balance, a desirable fact of life. Any of us who do our banking online, for example, are very grateful for end-to-end encryption.”

The debate, Anderson continued, was sometimes wrongly “portrayed in very black and white terms, as if the world is going dark and because of end-to-end encryption we’re all doomed”.

He argued that although the loss of information the state can gather from the content of someone’s communications is “very significant”, it is tempered by the mass of other data it can slurp from elsewhere.

“I mean who would have thought 30 years ago you could track somebody’s movements all around London by Oyster card? And you don’t even need the Oyster anymore, because you can get the location data from the phone company. It’s almost as good as having someone on their tail the whole time.”


He said that the most striking of these measures are those contained in the controversial Investigatory Powers Act, which allow public authorities to gain access to 12 months’ worth of a person’s internet connection records from their provider.

“The more people spend their lives online, the more revealing that behaviour becomes,” Anderson said.

But although Anderson may not share the government’s magical thinking when it comes to backdoors, he does believe tech firms could be doing more to help tackle terrorism.

This includes being more cooperative in helping governments hack into physical devices (we think he’s looking at you, Apple).

Anderson also expressed surprise at admissions from social media companies – also made to the Home Affairs Committee – that they were only actively searching for content from one proscribed organisation (ISIS).

Referring to terrorist videos coming up in the first line of searches, Anderson appeared sceptical of YouTube’s efforts too. “This from the master of search engine optimisation,” he said. “If I may say so, I would advise you [the committee] to keep up the pressure.”

He also questioned whether a “West Coast” attitude to free speech meant companies were less responsive to other states’ opinion on what needs to be taken down – but said he’d rather firms “recognise they’re working in a global environment” so they didn’t end up with a “heavy-handed approach” to regulation.


When pressed on what he thought of taxes, fines or other punitive measures – for instance those recently imposed by Germany – Anderson said that was one possible way, but increased transparency was another.

“If we as a state are effectively outsourcing these Ofcom-like functions to these private operators, surely we need to see not just their terms of service but the internal guidelines they’re applying when they decide to take these down,” he said.

He also pointed out that although companies might say they have 5,000 people looking at content, they won’t say exactly where they are – “in which case they might all be [in Germany] because that’s where the fines are”.

But, Anderson cautioned, the issue of regulating the internet and big business would be the biggest global legislative issue of the next decade.

“Anyone who thinks they’ve got easy answers has a long way to go and a lot more thinking to do.”


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/31/backdoors_uk_home_affairs_committee/

New click-to-hack tool: One script to exploit them all and in the darkness TCP bind them

Python code has emerged that automatically searches for vulnerable devices online using Shodan.io – and then uses Metasploit’s database of exploits to potentially hijack the computers and gadgets.

We’re surprised it took this long.

The software, posted publicly on GitHub this week by someone calling themselves Vector, is called AutoSploit. As its name suggests, it makes mass hacking exceedingly easy. After collecting targets via the Shodan search engine – an API key is required – the Python 2.7 script attempts to run Metasploit modules against them.

Metasploit is an open-source penetration testing tool: it is a database of snippets of code that exploit security flaws in software and other products to extract information from systems or open a remote control panel to the devices so they can be commanded from afar. Shodan allows you to search the public internet for computers, servers, industrial equipment, webcams, and other devices, revealing their open ports and potentially exploitable services.

[Screenshot] At your fingertips … The AutoSploit tool

“The available Metasploit modules have been selected to facilitate Remote Code Execution and to attempt to gain Reverse TCP Shells and/or Meterpreter sessions,” the GitHub repository explains.

Because automated attacks of this sort could bring legal trouble, the repo also includes a warning that running the code from a machine easily traceable to you “might not be the best idea from an OPSEC standpoint.”

Other security industry types contend this isn’t the best idea in general.

S’kiddies

“There is no need to release this,” said Richard Bejtlich, founder of Tao Security, via Twitter. “The tie to Shodan puts it over the edge. There is no legitimate reason to put mass exploitation of public systems within the reach of script kiddies. Just because you can do something doesn’t make it wise to do so. This will end in tears.”

At the same time, there may be some value in explicitly connecting the dots between vulnerability scanning and vulnerability exploitation. The exercise makes it clear that automation defeats security through obscurity.

Vector, reached via Twitter, told The Register that the code has been received fairly well in the security community.

“I have seen comments critical of the tool for sure as well, but what they say can be said for every other attack tool that implements automation to some end,” Vector said.

“As with anything, it can be used for good or bad,” the security researcher added. “The responsibility is with the person using it. I am not going to play gatekeeper to information. I believe information should be free and I am a fan of open source in general.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/31/auto_hacking_tool/

Data Encryption: 4 Common Pitfalls


To maximize encryption effectiveness you must minimize adverse effects in network performance and complexity. Here’s how.

Employing data encryption is a no-brainer, as it supports the defense-in-depth strategy that organizations must embrace to stop bad actors from accessing sensitive network files. However, outside of the extra layers of protection data encryption can provide, there are also tradeoffs in network performance and complexity that might arise when organizations aren’t approaching encryption thoughtfully. Here are four pitfalls to avoid as you begin encrypting content.

Pitfall One: Proprietary Algorithms
It may seem counterintuitive to the way many effective security strategies are designed and implemented, but relying on standardized algorithms to encrypt sensitive data is actually safer for organizations than tasking their own IT staff or developers with crafting a unique encryption algorithm or even authentication system. The reason for this is that cryptography is its own specialization that requires an advanced degree of scientific and mathematical precision. While specific individuals from in-house security teams may have this highly specialized set of skills, dedicated cryptographers have devoted their sole attention to crafting industry-standard algorithms like IDEA 128-bit and ARC4 128-bit – more attention than an IT generalist or cross-functional developer could devote given the wealth of other projects in their purview.

Pitfall Two: Full Disk Encryption
While it is essential to ensure that data is encrypted while at rest and in motion, considerations must be made for the systems that manage that encryption.

Full-disk encryption, for instance, is designed to prevent access to sensitive data if a device or its hard drive(s) are removed. But when the device is on and a user is logged in, the sensitive data is available to anyone with a session – including bad actors who may have a backdoor into the system. In a roundabout way, this highlights the challenges of key management: no matter how strong the crypto, if the key that returns the content to plaintext is available to adversaries, it’s game over.

Pitfall Three: Regulatory Compliance
Across most industries, rules regarding data collection, sovereignty, and storage are extensive and usually mandated by legislation at the local and federal level. While regulations like HIPAA, PCI, CJIS, and CIPA go far in detailing the costs of noncompliance, they are less instructive in telling businesses how to achieve compliance. In fact, many of these regulations don’t mention data encryption at all, even though encryption can prevent many of the most egregious violations from taking place. These laws may represent a good starting point for mapping out a security strategy, but teams need to be diligent about going beyond the standard “checklist” of protocols and standards many of these mandates provide.

Pitfall Four: Decryption Key Storage
Even after teams have gone to the trouble of extensively encrypting their data, many developers make the mistake of storing the decryption key within the very database they are hoping to protect. After all, encryption is meant to protect data even after bad actors have infiltrated the database. If the key to decrypt all that data is hiding “under the doormat” right on the other side of the network gateway, all those encryption efforts are basically worthless.

As a result, many teams are exploring “Key Encryption Key,” “Master Encryption Key” and “Master Signing Key” encryptions that they store elsewhere to protect enterprise data – a step that may seem excessive to some, but provides an all-important level of assurance that minor missteps don’t curtail major security operations.
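The structure behind a key-encryption key is envelope encryption: the data key that encrypts the database is stored only in wrapped form, and the KEK that unwraps it lives elsewhere (an HSM or cloud KMS). A toy sketch of the shape of the scheme, where the XOR “wrap” is a stand-in for a real algorithm such as AES key wrap (RFC 3394) and must never be used as actual crypto:

```python
import secrets

def wrap_key(data_key: bytes, kek: bytes) -> bytes:
    """Toy stand-in for AES key wrap: XOR under the key-encryption key.
    Illustrates the data flow only; use a real KMS/AES-KW in practice."""
    assert len(data_key) == len(kek)
    return bytes(a ^ b for a, b in zip(data_key, kek))

unwrap_key = wrap_key  # XOR is its own inverse

kek = secrets.token_bytes(32)          # held in an HSM/KMS, never in the DB
data_key = secrets.token_bytes(32)     # encrypts the actual records
stored_blob = wrap_key(data_key, kek)  # only this ever sits near the data
assert unwrap_key(stored_blob, kek) == data_key
```

The payoff is that an attacker who dumps the database gets ciphertext plus a wrapped key, neither of which is usable without a separate compromise of the key store.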

Joe Cosmano has over 15 years of leadership and hands-on technical experience in roles including Senior Systems and Network Engineer and cybersecurity expert. Prior to iboss, he held positions with Atlantic Net, as engineering director overseeing a large team of engineers and … View Full Bio

Article source: https://www.darkreading.com/partner-perspectives/iboss/data-encryption-4-common-pitfalls/a/d-id/1330918?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

5 Questions to Ask about Machine Learning

Marketing hyperbole often exceeds reality. Here are questions you should ask before buying.

How tired are we of “artificial intelligence” and “machine learning” being sprinkled like pixie dust on every product being hawked by vendors? The challenge for cybersecurity professionals is to see through the fog and figure out what’s real and what’s just marketing hyperbole.

Often, marketing hyperbole exceeds the reality. Notoriously, Tesla’s Autopilot sensors can be fooled in certain edge conditions, iPhone X can be fooled to unlock a phone by a doppelganger, and Apple’s Siri isn’t very good at taking directions. Even the winning team in the DARPA Cyber Grand Challenge lost spectacularly to actual hackers at the DEFCON conference following its win against other machines at Black Hat.

Machine learning is built on recursive algorithms and mathematics, making the concept itself difficult for many to comprehend. So how can buyers and practitioners decipher what’s “real” machine learning technology from marketing spin and, just as importantly, what is effective versus what is not?

The five questions below go to the heart of how well a particular machine learning approach performs in detecting attacks, regardless of which particular algorithm it uses.

1. That detection rate you quote in your marketing materials is impressive, but what’s the corresponding false-positive rate?

The false-positive rate is the flip side of detection rates. False positives and true detection rates go hand in hand. In fact, a system can be tuned to optimize false positives or true detections to acceptable levels. The receiver operating characteristic (ROC) is a curve that shows the relation between true detections versus false positives. Pick a false-positive rate on the curve and you’ll see the corresponding true detection rate of the algorithm. If a vendor can’t or won’t show you a ROC curve for its system, you can bet it hasn’t done proper machine learning research, or the results are not something it would brag about.
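A ROC curve is simple enough to compute yourself: sweep the decision threshold across the classifier’s scores and record the (false-positive rate, true-positive rate) pair at each step. A minimal sketch (it ignores tied scores, which a production implementation would handle):

```python
def roc_points(scores, labels):
    """labels: 1 = attack, 0 = benign. Returns (fpr, tpr) pairs, one per
    threshold, sweeping from strictest to loosest."""
    pos = sum(labels)
    neg = len(labels) - pos
    pairs = sorted(zip(scores, labels), reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

Pick the point whose false-positive rate your operations team can actually tolerate; the paired true-positive rate is the detection figure the vendor should be quoting.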

2. How often does your model need updating, and how much does your model’s accuracy drop off between updates?

Just as important as detection and false-positive rates is the ability of the model to age well. Machine learning models age as the data they were trained on becomes obsolete. How well a model generalizes beyond its training data can be measured by its decay rate: the rate at which performance declines as that data ages. A good machine learning model ages slowly, which in practice means it will not need replacing often. For comparison, traditional signature-based models need updating daily; a good machine learning model only needs to be replaced once every few months. The decay rate is heavily influenced by the training data: a diverse training set leads to a stable model, while a narrow training set ages out very fast.

3. Does your machine learning algorithm make decisions in real time?

Depending on your application, you can use machine learning for retrospective forensic analysis or for inline blocking — that is, blocking attacks as they occur in real time. If used for inline blocking, the approach needs to operate in real time, typically measured in milliseconds. In general, this rules out online lookups because of round-trip times from the cloud. Real-time performance requires a compact model able to run on-premises in the device’s memory. Asking the real-time performance of the model is one way of figuring out whether the model is compact enough to block attacks in real time. 

4. What is your training set?

The most overlooked important attribute in machine learning is the training set. The performance of a machine learning algorithm depends on the quality of the training set. Good, curated training sets (robust to change, reflective of real-world conditions, and diverse) are hard to acquire, but they are incredibly important for effective performance. If the data the model is trained on is not representative of the threats you will face, then performance on your network will suffer regardless of how the model was tested. Models tested on narrow data sets will have misleading performance results.

5. How well does your machine learning system scale?

The good and bad news for machine learning in security is that there is a massive amount of data on which to train. Machine learning algorithms typically require those massive amounts of data to properly learn the phenomena they are trying to detect. That’s the good news. The bad news is the models must be able to scale to Internet-sized databases that change continuously. Understanding how much data an algorithm is trained on gives an indication of its scalability. Understanding the footprint of the model gives an indication of its ability to compactly represent and process Internet-scale databases.

As you can see, for a machine learning approach to be successful, it must do the following:

  • Have high detection rates and low false positives on known and unknown attacks, with a published ROC curve.
  • Be trained on a robust training set that is representative of real-world threats.
  • Continue to deliver high performance for months after each update.
  • Provide real-time performance (threat blocking) without consuming large amounts of system resources such as memory and disk.
  • Scale reliably, without using more memory or losing performance, even as the training set increases.

Next time you talk to a company that claims to use machine learning in its products, be sure to get answers to these questions.


Anup Ghosh is Chief Strategist, Next-Gen Endpoint, at Sophos. Ghosh was previously Founder and CEO at Invincea until Invincea was acquired by Sophos in March 2017. Prior to founding Invincea, he was a Program Manager at the Defense Advanced Research Projects Agency (DARPA). … View Full Bio

Article source: https://www.darkreading.com/operations/5-questions-to-ask-about-machine-learning/a/d-id/1330930?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

700,000 Bad Apps Deleted from Google Play in 2017

Google rejected 99% of apps with abusive content before anyone could install them, according to a 2017 security recap.

Google took down 700,000 apps from Google Play in 2017 because they violated the store’s policies. This marks a 70% increase over the number of apps removed in 2016, reports Google Play product manager Andrew Ahn in a blog post on 2017 security measures.

Ahn says 99% of malicious apps were identified and rejected before anyone could install them. Improvements in detection models helped find apps containing malware or inappropriate content, as well as threat actors and abusive developer networks. Google Play took down 100,000 bad developers in 2017 and made it difficult for them to create new accounts.

Examples of bad apps that were removed include copycats, which try to deceive users by disguising themselves as famous apps. More apps were flagged for content, including pornography, extreme violence, hate, and illegal activities. Potentially harmful applications, which had a 50% lower install rate in 2017, are designed to phish users’ data, act as Trojans, or conduct SMS fraud.

Read more details here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/application-security/700000-bad-apps-deleted-from-google-play-in-2017/d/d-id/1330945?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

IoT Botnets by the Numbers

IoT devices are a botherder’s dream attack vector.

Image Source: Adobe Stock

Even before Mirai burst onto the scene a year-and-a-half ago, security experts had been warning anyone who listened about how juicy Internet of things (IoT) devices were looking to criminal botnet herders. Proliferating faster than black t-shirts at a security conference, IoT sensors have spread throughout our personal and business lives inside cameras, automobiles, TVs, refrigerators, wearable technology, and more.

They offer the perfect combination of variables for attackers seeking an ideal botnet node: ubiquity, connectivity, poor default settings, rampant software vulnerability – and utter forgetability. Once these devices are deployed, they’re rarely patched or even monitored. So it was only a matter of time before cybercriminals started harvesting them for botnet operations.

Mirai offered one of the first large-scale implementations of IoT botnets, and since its inception in late 2016 the attacks have been relentless.

Here is a rundown of some of the most relevant stats around IoT botnet attacks.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/perimeter/iot-botnets-by-the-numbers/d/d-id/1330924?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple