STE WILLIAMS

Facebook settles after 14-year-old sues over nude image reposting

This is the argument that Facebook tried to make in the case of a nude photo of a 14-year-old girl that was repeatedly published on a “shame page”: yes, Facebook said, that photo was published, but every time it was reported, we took it down.

Her lawyers’ response: why was reposting possible at all, given that there’s technology that can assign hashes to known child abuse imagery and prevent it from being reposted?

That’s a good question, and it may well have helped the Northern Ireland teenager and her legal team prevail in out-of-court negotiations with Facebook.

On Tuesday, the BBC reported that the girl, who can’t be named, has agreed to a confidential settlement with Facebook that included her legal costs.

The teen also sued the man who posted the image in 2014 and 2016, claiming that he obtained the photo through blackmail. Before the settlement, Facebook had been facing claims of misuse of private information, negligence and breach of the Data Protection Act.

This is what her lawyer told the High Court in Belfast on Tuesday, according to the BBC:

I’m very happy to be able to inform Your Lordship that the case has been settled.

I’m happy too. I’ll be happier when the alleged sextortionist is brought to justice. And I’m extremely happy that this case, or at least cases like it, undoubtedly pushed Facebook into adopting what sounds like photo hashing in order to stop this type of abuse.

In November 2017, Facebook asked people to upload their nude photos if they were concerned about revenge porn. It didn’t give many details at the time, but it sounded like it was planning to use hashes of our nude images, just like law enforcement uses hashes of known child abuse imagery.

A hash is created by feeding a photo into a hashing function. What comes out the other end is a digital fingerprint that looks like a short jumble of letters and numbers. You can’t turn the hash back into the photo, but the same photo, or identical copies of it, will always create the same hash.

So, a hash of your most intimate picture is no more revealing than this:

48008908c31b9c8f8ba6bf2a4a283f29c15309b1
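For readers curious how such a fingerprint is produced, here is a minimal Python sketch using the standard library. (Note that real image-matching systems use robust perceptual hashes such as PhotoDNA rather than a plain cryptographic hash, which changes completely if even a single pixel changes.)

```python
import hashlib

def file_hash(path: str) -> str:
    """Return the SHA-1 hex digest of a file, read in chunks so that
    large images don't need to fit in memory at once."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Identical bytes always produce the identical fingerprint,
# and the fingerprint reveals nothing about the image itself.
digest = hashlib.sha1(b"any image bytes").hexdigest()
assert digest == hashlib.sha1(b"any image bytes").hexdigest()
assert len(digest) == 40  # 160 bits, like the example above
```

A service holding a list of such hashes can then reject a re-uploaded copy by comparing fingerprints, without ever storing the image itself.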

Since 2008, the National Center for Missing & Exploited Children (NCMEC) has made available to ISPs a list of hash values for known child sexual abuse images, which enables companies to check large volumes of files for matches without having to keep copies of the offending images or to actually pry open people’s private messages.

The hash function originally used to create unique file identifiers was MD5, but Microsoft has since donated its own PhotoDNA technology to the effort.

PhotoDNA creates a unique signature for an image by converting it to black and white, resizing it, and breaking it into a grid. In each grid cell, the technology finds a histogram of intensity gradients or edges from which it derives its so-called DNA. Images with similar DNA can then be matched.
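PhotoDNA itself is proprietary, but the shrink-to-a-grid idea can be illustrated with a toy “average hash” in pure Python. This is a simplification for illustration only, not Microsoft’s algorithm:

```python
def average_hash(pixels, grid=8):
    """Toy perceptual hash: shrink a grayscale image (a 2D list of
    brightness values) to a grid x grid matrix of cell averages, then
    emit one bit per cell: 1 if the cell is brighter than the overall
    mean. Similar images produce similar bit strings."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            vals = [pixels[y][x] for y in ys for x in xs]
            cells.append(sum(vals) / len(vals))
    mean = sum(cells) / len(cells)
    return "".join("1" if c > mean else "0" for c in cells)

def hamming(a, b):
    """Number of differing bits; a small distance means the two
    images are probably the same picture."""
    return sum(x != y for x, y in zip(a, b))
```

Two images whose bit strings differ in only a few positions are likely the same picture, even after resizing or uniform brightness changes, which is why this style of matching survives simple edits that would defeat a cryptographic hash.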

Given that the amount of data in the DNA is small, large data sets can be scanned quickly, enabling companies including Microsoft, Google, Verizon, Twitter, Facebook and Yahoo to find needles in haystacks and sniff out illegal child abuse imagery. It works even if the images have been resized or cropped.

Why so much detail on hashing? Because there was a lot of victim-blaming when the girl’s case first came to light. Hashing technology seems to be a far more productive approach than blaming victimized children who are under the age of consent for getting talked into nude photos.

It’s shocking to think of a 14-year-old being subjected to sextortion, but kids even younger – we’ve heard of victims as young as 11 – have fallen prey to revenge porn.

When it comes to keeping your kids safe online, there are tools that can help. These include parental controls that let you set your children’s privacy settings, control whether they can install new apps, enforce ratings restrictions on what they can buy on iTunes, and even limit what types of apps they can use.

We’ve got more tips to keep your kids safe online here.

And if you’re not even sure what your kids are up to online, this could help.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/Cjl2YZcUwVU/

FBI director says ‘unbreakable encryption is a public safety issue’

Meet the new year, same as the old year – at least when it comes to the debate over encryption.

Top law enforcement officials – this week it was FBI director Christopher Wray – are still contending that it is possible to give government (and only government) “back-door” access to the encrypted digital devices of alleged criminals, without jeopardizing the encryption of every other device on the planet.

And the response from privacy advocates and encryption experts remains that this is magical thinking – at the level of asserting that if there is just enough good-faith cooperation between the tech sector and law enforcement, it will be possible for pigs to fly.

Wray “picked up where he left off last year,” as the Register put it, in a speech this week at the International Conference on Cyber Security, held at Fordham University in New York.

He made the same arguments he had made a month earlier, on 7 December, before the House Judiciary Committee, that selective encryption access is possible without jeopardizing its effectiveness for everybody, and that the need for it is beyond urgent, to protect innocent citizens from criminals and terrorists who are using encrypted devices to “go dark.”

And those arguments pick up where his predecessor, James Comey, left off. They also amount to something of a tag-team approach with Deputy Attorney General Rod Rosenstein, who we reported made the same arguments multiple times last year.

By now, they are familiar. First up are frightening statistics – the agency has been unable to crack thousands of devices. Wray said that in 2017 the number was 7,775 – more than half of the devices the agency sought to access with “lawful authority to do so.”

Second, even though the metadata from those devices – “transactional” information about calls, texts and messages – is available, that is not nearly enough.

…for purposes of prosecuting terrorists and criminals, words can be evidence, while mere association between subjects isn’t evidence.

Third, that the world has changed in the last decade to the point that terrorists, child predators, nation states and others can operate almost freely under the cloak of unbreakable encryption. This, Wray said, is “a major public safety issue.”

This problem impacts our investigations across the board – human trafficking, counterterrorism, counterintelligence, gangs, organized crime, child exploitation, and cyber.

And fourth, that what he, Rosenstein and Comey have all called “responsible encryption,” is possible – that the makers of encrypted devices can create a secret key to defeat it (that they don’t even have to give to the government!) which can then be used when law enforcement comes bearing a locked phone and a warrant.

Wray’s example was the chat and messaging platform Symphony, used by major banks, in large part because it guarantees “data deletion.” He said bank regulators were concerned that this could hamper their investigations of Wall Street.

So, he said, four New York banks reached an agreement:

They agreed to keep a copy of all e-communications sent to or from them through Symphony for seven years. The banks also agreed to store duplicate copies of the decryption keys for their messages with independent custodians who aren’t controlled by the banks. So the data in Symphony was still secure and encrypted – but also accessible to regulators, so they could do their jobs.
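The article doesn’t detail Symphony’s actual escrow mechanism, but the custodian idea (no single party holds a usable copy of the decryption key) can be sketched with a simple XOR split. This is a toy illustration under that assumption, not the banks’ implementation; real deployments use dedicated key-management systems:

```python
import secrets

def split_key(key: bytes):
    """Split a key into two XOR shares. Either share alone is
    indistinguishable from random bytes; both custodians must
    cooperate to reconstruct the key."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)            # the message decryption key
custodian_1, custodian_2 = split_key(key)
assert recombine(custodian_1, custodian_2) == key
```

The data stays encrypted at rest; a regulator with a court order can ask both custodians for their shares, but neither custodian alone can read anything.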

Wray insisted this does not mean that the FBI is seeking a back door, “which I understand to mean some type of secret, insecure means of access.”

But that is exactly what civil liberties and encryption experts say he is seeking. Bruce Schneier, CTO of IBM Resilient Systems and an encryption expert, has said many times that it is absurd to think that encryption can “work well unless there is a certain piece of paper (a warrant) sitting nearby, in which case it should not work.”

Mathematically, of course, this is ridiculous. You don’t get an option where the FBI can break encryption but organized crime can’t. It’s not available technologically.

Schneier has likened the selective encryption key argument to poisoning all the food in a restaurant – you might poison a terrorist, but you would also poison every other innocent person eating there.

And Tim Cushing, writing last month in Techdirt after Wray’s appearance before the House Judiciary Committee, contends that the number of locked phones held by the FBI is meaningless because, “there’s no context provided by the FBI, nor will there ever be.”

The FBI is unwilling to divulge how many accessed phones are dead ends and how many cases it closes despite the presence of a locked device.

Not to mention, as numerous critics have indeed highlighted, that the government’s record of securing its own crucial data is, well, spotty at best. There are the continuing dumps by Wikileaks of hacking tools held by the CIA and NSA (and not shared with the private sector). There was the catastrophic hack of the federal Office of Personnel Management (OPM) several years ago that exposed the personal information of about 22 million current and former federal employees.

As the FBI acknowledged, it was only able to defeat the encryption on a phone used by one of the terrorist shooters in the December 2015 San Bernardino attack with the help of a “third party,” reportedly the Israeli mobile forensics firm Cellebrite, which it is said to have paid $900,000.

The FBI also has a department called the National Domestic Communications Assistance Center (NDCAC), which provides technical assistance to local law enforcement agencies. One of its goals is to make tools like Cellebrite’s services “more widely available” to state and local law enforcement.

But the fundamental issue will likely be resolved only through legislative action or a court decision. Rosenstein said last fall that he would like to see the matter come to the courts.

I want our prosecutors to know that, if there’s a case where they believe they have an appropriate need for information and there is a legal avenue to get it, they should not be reluctant to pursue it. I wouldn’t say we’re searching for a case. I’d say we’re receptive, if a case arises, that we would litigate.


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/H9gGVGeOm8w/

Everything running smoothly at the plant? *Whips out mobile phone* Wait. Nooo…

The security of mobile apps that tie in with Supervisory Control and Data Acquisition (SCADA) systems has deteriorated over the last two-and-a-half years, according to new research.

A team of boffins from IOActive and IoT security startup Embedi said they had discovered 147 vulnerabilities in 34 of the most popular Android mobile apps for SCADA systems.

Mobile applications are increasingly being used in conjunction with SCADA systems. The researchers warned these apps are “riddled with vulnerabilities that could have dire consequences on SCADA systems that operate industrial control systems”.

If successfully exploited, the vulnerabilities could allow attackers to disrupt industrial processes or compromise industrial network infrastructure.

How mobile apps fit into modern industrial control system architectures [source: IOActive white paper]

Code-tampering vulns found in 94% of sampled apps

The 34 Android applications tested were randomly selected from the Google Play Store.

The research focused on testing software and hardware, using backend fuzzing and reverse engineering. The team successfully uncovered security vulnerabilities ranging from insecure data storage and insecure communication to insecure cryptography and code-tampering risks.

The research revealed the top five security weaknesses were: code tampering (94 per cent of apps), insecure authorisation (59 per cent of apps), reverse engineering (53 per cent of apps), insecure data storage (47 per cent of apps) and insecure communication (38 per cent of apps).

The same team of researchers found 50 vulnerabilities across 20 Android apps in 2015. The rise to 147 vulnerabilities in 34 apps represents an increase from an average of 2.5 to roughly 4.3 vulnerabilities per app.

Technical details of the research will be released by Alexander Bolshev, IOActive security consultant, and Ivan Yushkevich, information security auditor for Embedi, in a new paper “SCADA and Mobile Security in the Internet of Things Era”.

Bolshev explained: “It’s important to note that attackers don’t need to have physical access to the smartphone to leverage the vulnerabilities, and they don’t need to directly target ICS [Industrial Control Systems] control applications either. If the smartphone users download a malicious application of any type on the device, that application can then attack the vulnerable application used for ICS software and hardware.

“What this results in is attackers using mobile apps to attack other apps,” he added.

Yushkevich added: “Developers need to keep in mind that applications like these are basically gateways to mission-critical ICS systems. It’s important that application developers embrace secure coding best practices to protect their applications and systems from dangerous and costly attacks.”

Yushkevich said the team tested only Google Play apps in order that it could “compare the results of this research with those of the previous research in 2015”.

He said of the threats that could occur as a result of these vulnerabilities: “Some of the revealed vulnerabilities are the client-side ones. For example, SQL injections may be used to disrupt the operation of a device.

“To exploit most of the described vulnerabilities, a hacker has to simply intercept traffic and get to the same network segment a victim is in. So, the SQL injection vulnerability is an exception here.”
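The SQL injection risk mentioned above comes down to splicing untrusted input directly into query text. A minimal sqlite3 sketch (with a hypothetical sensor table, not taken from the apps tested) shows the difference parameterization makes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (name TEXT, reading REAL)")
conn.execute("INSERT INTO sensors VALUES ('pump1', 3.2)")

user_input = "pump1' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input becomes part of the SQL text itself.
unsafe = "SELECT * FROM sensors WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())   # matches every row

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute("SELECT * FROM sensors WHERE name = ?", (user_input,))
print(safe.fetchall())                   # no sensor has that literal name
```

An app that builds queries by string formatting is exploitable even when the attacker never touches the device directly, which is the point Yushkevich makes about client-side flaws.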

IOActive and Embedi have informed the impacted vendors of the findings and are coordinating with a number of them to ensure fixes are in place. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/11/scada_mobile/

Vulnerable Mobile Apps: The Next ICS/SCADA Cyber Threat

Researchers find nearly 150 vulnerabilities in SCADA mobile apps downloadable from Google Play.

As if ICS/SCADA networks weren’t a juicy enough target, now those networks face a new generation of threats via mobile apps.

Researchers Alexander Bolshev, a security consultant with IOActive, and Ivan Yushkevich, information security auditor for Embedi, randomly selected 34 Android mobile apps from the Google Play store, from third-party developers and well-known ICS/SCADA vendors, to check for security vulnerabilities. They found 147 security flaws that could be exploited to disrupt or sabotage an industrial process or network infrastructure.

The pair in 2015 had conducted a similar but more cursory study of 20 mobile apps, where they rooted out 50 security weaknesses. They decided to revisit their research this time but at a deeper level, with more rigorous testing of software and hardware, conducting back-end fuzzing and reverse-engineering, and mapping their findings to OWASP’s Top 10 Mobile Security Risks.

“They tore them [the apps] apart looking for bugs, and compared the bugs to the previous” research, says Jason Larsen, principal security consultant at IOActive. “The rate of bugs had increased over the past three years. You’d think with higher quality software, the bug rate would go down, but it went up.”

Some 59% of the apps had insecure authorization controls and 47% employed insecure data storage. “About one-third had problems with insecure communications, and either lacked encryption or had incorrect implementations of encryption,” Bolshev says. “This is pretty scary.”

Attackers could exploit the flaws in several ways, according to the researchers. First, if an attacker had physical access to the mobile device and app, he or she could extract the SD card, for example, and embed an exploit on the card and then reinsert it into the device. “They would need just one or two minutes to extract the card … and put it back. Most apps store data insecurely, and there’s no data integrity or strong encryption,” he says.

Second, an attacker could wage a man-in-the-middle attack between the mobile app and the back-end system. “Thirty-eight percent of the apps have insecure communications. So if an attacker could somehow [perform] man-in-the-middle between the app and backend, it could compromise the app,” Bolshev says.

A rogue WiFi access point or a compromised VPN channel could be used to perform such an attack, according to the research, or an attacker could compromise the application itself. An attacker could alter a SCADA operator’s view of a pressure gauge, for example. “They could show an invalid picture of the system” status, Bolshev explains. “It could [be altered] to show there’s a problem when there isn’t,” which could result in physical or monetary damage to the plant.

Android in the Plant?

To date, most mobile ICS/SCADA apps deployed in plants are trials or have limited functions, Larsen says.

If running Android apps in a sensitive ICS/SCADA environment seems counterintuitive security-wise, consider the business side of the equation. Part of the motivation for going to mobile apps is pure economic pressure.

“Overall there is an active push by manufacturers and other industrial controls users to be more efficient and to reduce headcount costs. As such, there is a motivation by the users and the ICS vendors to build applications that allow for remote access to ICS systems/components, respond to alarms, etc.,” says Ernie Hayden, founder and principal of 443 Consulting LLC. That has meant pressure to push apps to market without proper security assessment and evaluation, he notes.

“Hence, and sadly, vulnerabilities are discovered after the remote devices are installed and used in the field,” Hayden says.

ICS/SCADA mobile app vendors don’t have the proper policies and procedures in place for secure mobile software development given the market pressures to crank out the apps, according to IOActive’s Larsen. “Most [mobile apps] are being outsourced and they don’t have that rigor in it yet. In general, code is getting worse and not better.”

The researchers did not disclose which apps contained which vulns, and say they alerted app vendors whose products were affected. Among the apps vendors whose software the researchers tested were BACmove, Cybrotech, IDEA-Teknik, Schneider Electric SE, ICONICS, Siemens AG, and TeslaSCADA.

Bolshev declined to reveal any specifics on what they found or not in specific vendor apps but says: “If a vendor is taking care of overall security, it also takes care of its mobile app security from what we saw.”

While most of the solution lies with app developers upping their secure development game, the researchers say ICS/SCADA plants need to carefully deploy mobile apps. “I’d recommend if you want to integrate mobile into OT, pen-test it” first, Bolshev says. “Then you can make the decision to integrate it or not.”

Larsen says mobile apps will become more mainstream in industrial networks in the next few years. “Everyone tried to fight WiFi on laptops, and now everyone has it,” he says, and mobile apps are also inevitable in those networks.


Kelly Jackson Higgins is Executive Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise …

Article source: https://www.darkreading.com/endpoint/privacy/vulnerable-mobile-apps-the-next-ics-scada-cyber-threat/d/d-id/1330801?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Facebook Security Questions Are no Substitute for MFA


If identity is established based on one thing you know and one thing you have, the latter should not also be a thing you know because in the sharing economy, we share everything.

If you’ve been on Facebook any length of time, you’ve probably scrolled by one of those “let’s get to know each other” status updates. These seemingly innocuous exhortations to share information make my teeth itch.

It’s not because I’m not into sharing with friends, or divulging sometimes quite personal information. It’s because this data is increasingly part of the security equation that “protects” even more sensitive personal data.

Yes, the scare quotes are necessary in this case.

I present as Exhibit A this screen capture of a fairly well-known cloud app which recently updated its security questions. It appears scarily like those lists you see on Facebook, lists that are shared and re-shared no matter how many times you might offer a kind, cautioning word against them.

My favorite color, by the way, is black. Or at least it will be until something darker comes along.

While marginally better than asking for personal information that is just as easily discovered on the Web — your mother’s maiden name, where you were born (my mother claims it was in a barn, based on my habit of leaving doors open as a child), what high school you graduated from — the fact remains that these questions are useless for verifying identity.

Seriously, how many colors are there? And how many of us share the same love of one of those limited choices?

The answers to these questions aren’t that hard to guess in case a quick search doesn’t turn them up. Because, while we’re great at sharing, we aren’t so great at managing the admittedly confusing privacy settings on social media, and some things are a matter of public record. Check an obituary sometime. You’ll quickly find not only my maiden name, but my mother’s maiden name and the names of all my siblings and their children and … See, it’s not that hard to find information if you know where to look. An upcoming breach trends research report by F5 Labs studies data breaches over the past decade and concludes that “there have been so many breaches that attacker databases are enrichened to the point where they can impersonate an individual and answer secret questions to get direct access to accounts without ever having to work through the impacted party.”

It is also true that passwords are not enough. Credential stuffing is a real threat, and the upcoming F5 Labs breach trends report discovered that 33% of the breaches started with identity attacks, of which phishing was the primary root cause. Many of the malicious URLs clicked on, or malware files opened, in phishing attacks collect credentials, which are then sold and used to gain illicit access to corporate systems.

Security questions are used as a secondary source of identity verification. They simulate the second factor in a multi-factor authentication (MFA) strategy, but they do so with stunning inadequacy. MFA is based on the premise that identity is established by one thing you know and one thing you have. The latter should not also be a thing you know, because in the sharing economy, we share everything — whether we should or not.

MFA is a good idea. It’s not always convenient (we use it extensively at F5, so I say that as a user), but it is safer. And that’s the point. Because it’s really hard to duplicate a one-time password from an isolated key, but it’s pretty easy to figure out my pet’s name, thanks to Facebook, Twitter and others.
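The “one-time password from an isolated key” is typically TOTP, standardized in RFC 6238. A minimal standard-library sketch shows why the codes can’t be guessed from anything you’ve shared online:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s
assert totp(b"12345678901234567890", at=59, digits=8) == "94287082"
```

The code changes every 30 seconds and depends on a secret that lives only on the token or phone; no amount of scrolling through someone’s Facebook answers will reproduce it.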

So, if your implementation of MFA uses Facebook favorite lists or any other security questions as a second form of authentication, you need to rethink your strategy.

Get the latest application threat intelligence from F5 Labs.

Lori MacVittie is the principal technical evangelist for cloud computing, cloud and application security, and application delivery and is responsible for education and evangelism across F5’s entire product suite. MacVittie has extensive development and technical architecture …

Article source: https://www.darkreading.com/partner-perspectives/f5/why-facebook-security-questions-are-no-substitute-for-mfa-/a/d-id/1330755?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

AI in Cybersecurity: Where We Stand & Where We Need to Go

How security practitioners can incorporate expert knowledge into machine learning algorithms that reveal security insights, safeguard data, and keep attackers out.

With the omnipresence of the term artificial intelligence (AI) and the increased popularity of deep learning, a lot of security practitioners are being lured into believing that these approaches are the magic silver bullet we have been waiting for to solve all of our security challenges. But deep learning — or any other machine learning (ML) approach — is just a tool. And it’s not a tool we should use on its own. We need to incorporate expert knowledge for the algorithms to reveal actual security insights.

Before continuing this post, I will stop using the term artificial intelligence and revert to the term machine learning. We don’t have AI – or, to be precise, artificial general intelligence (AGI) – yet, so let’s not distract ourselves with these false concepts.

Where do we stand today with AI — excuse me, machine learning — in cybersecurity? We first need to look at our goals: to make a broad statement, we are trying to use ML to identify malicious behavior or malicious entities; call them hackers, attackers, malware, unwanted behavior, etc. In other words, it comes down to finding anomalies. Beware: one of the biggest challenges in finding anomalies is defining what is “normal.” For example, can you define what is normal behavior for your laptop, day in, day out? Don’t forget that new application you downloaded recently. How do you differentiate its download from one triggered by an attacker? In abstract terms, only a subset of statistical anomalies contains interesting security events.

Applying Machine Learning to Security
Within machine learning, we can look at two categories of approaches: supervised and unsupervised. Supervised ML is great at classifying data — for example, learning whether something is “good” or “bad.” To do so, these approaches need large collections of training data to learn what these classes of data look like. Supervised algorithms learn the properties of the training data and then apply the acquired knowledge to classify new, previously unknown data. Unsupervised ML is well suited to making large data sets easier to analyze and understand. Unfortunately, it is not well suited to finding anomalies.

Let’s take a more detailed look at the different groups of ML algorithms, starting with the supervised case.

Supervised Machine Learning
Supervised ML is where machine learning has made the biggest impact in cybersecurity. The two poster-child use cases are malware identification and spam detection. Today’s approaches to malware identification have benefited greatly from deep learning, which has helped drop false positive rates to very low numbers while reducing false negative rates at the same time. The reason malware identification works so well is the availability of millions of labeled samples (both malware and benign applications) as training data. These samples allow us to train deep belief networks extremely well. Spam detection is a very similar problem, in the sense that we have a lot of training data with which to train our algorithms to differentiate spam from legitimate emails.
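As a toy illustration of the supervised setup (labeled examples in, classifier out), here is a minimal naive Bayes spam filter in pure Python. It is a teaching sketch with made-up training data, nowhere near the deep-learning systems described above:

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: iterable of (text, label). Count words per class."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the higher log-probability,
    using Laplace (add-one) smoothing for unseen words."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, -math.inf
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("win free bitcoin now", "spam"),
    ("cheap pills free offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("quarterly report attached", "ham"),
]
counts, totals = train(training)
```

The essential property is the same as in production systems: the classifier is only as good as the volume and quality of its labeled training data, which is exactly what most other security domains lack.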

Where we don’t have great training data is in most other areas — for example, in the realm of detecting attacks from network traffic. We have tried for almost two decades to come up with good training data sets for these problems, but we still do not have a suitable one. Without one, we cannot train our algorithms. In addition, there are other problems such as the inability to deterministically label data, the challenges associated with cleaning data, or understanding the semantics of a data record.

Unsupervised Machine Learning
Unsupervised approaches are great for data exploration. They can be used to reduce the number of dimensions or fields of data to look at (dimensionality reduction) or to group records together (clustering and association rules). However, these algorithms are of limited use when it comes to identifying anomalies or attacks. Clustering could be interesting for finding anomalies: maybe we can find ways to cluster “normal” and “abnormal” entities, such as users or devices? It turns out the fundamental problem that makes this hard is that clustering in security suffers from a lack of good distance functions and from the limited “explainability” of the clusters. You can find more information about the challenge with distance functions and explainability in this blog post.

Context and Expert Knowledge
In addition to the already mentioned challenges for identifying anomalies with ML, there are significant building blocks we need to add. The first one is context. Context is anything that helps us better understand the types of the entities (devices, applications, and users) present in the data. Context for devices includes things like a device’s role, its location, its owner, etc. For example, rather than looking at network traffic logs in isolation, we need to add context to make sense of the data. Knowing which machines represent DNS servers on the network helps understand which of them should be responding to DNS queries. A non-DNS server that is responding to DNS requests could be a sign of an attack.
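The DNS example boils down to joining observed data against a table of context. A sketch with hypothetical hostnames and flow records:

```python
# Hypothetical flow records: (source_host, source_port, protocol).
flows = [
    ("dns1.corp.local", 53, "udp"),
    ("dns2.corp.local", 53, "udp"),
    ("laptop-042", 53, "udp"),       # a workstation answering on port 53
    ("web1.corp.local", 443, "tcp"),
]

# Context: the hosts that are *supposed* to serve DNS on this network.
known_dns_servers = {"dns1.corp.local", "dns2.corp.local"}

# A purely statistical view sees three hosts talking on port 53;
# only the context tells us which one deserves an analyst's attention.
suspicious = [
    src for src, port, proto in flows
    if port == 53 and src not in known_dns_servers
]
print(suspicious)  # ['laptop-042']
```

The algorithm here is trivial; all of the detection power comes from the context table, which is the point.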

In addition to context, we need to build systems and algorithms with expert knowledge. This is very different from throwing an algorithm at the wall and seeing what yields anything potentially useful. One of the interesting approaches in the area of knowledge capture that I would love to see get more attention is Bayesian belief networks. Anyone done anything interesting with those (in security)? Please share in the comments below.

Rather than trying to use algorithms to solve really hard problems, we should also consider building systems that help make security analysts more effective. Visualization is a great candidate in this area. Instead of having analysts look at thousands of rows of data, they can look at visual representations of the data that unlock a deeper understanding in a very short amount of time. In addition, visualization is a great tool for verifying and understanding the results of ML algorithms.

This visualization of 100GB of network traffic over a period of one week is used by experts to determine 'normal' behavior and identify potential anomalies.

In the ancient practice of Zen, koans are a tool, a stepping stone, on the way to enlightenment. ML is just such a tool: you have to learn how to apply and use it in order to come to new understanding and find attackers in your systems.


Raffael Marty is one of the world’s most recognized authorities on security data analytics and visualization. Raffy is the CEO and founder of pixlcloud, a next generation visual analytics platform. With a 15-year track record in the big data and log analysis space, at …

Article source: https://www.darkreading.com/threat-intelligence/ai-in-cybersecurity-where-we-stand-and-where-we-need-to-go/a/d-id/1330787?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

RIG EK Remains Top of Heap, Turns to Cryptomining

Popular exploit kit turns its sights to drive-by cryptomining in what security researchers believe will be a trend to follow in 2018.

Even after a precipitous drop in activity last quarter, security researchers say that the RIG Exploit Kit (RIG EK) still leads the pack in overall malicious campaigns. Some have found that the crooks are expanding their moneymaking horizons, using RIG to take advantage of the craze that is inflating the market for Bitcoin and other cryptocurrencies. The exploit kit is being used in a new malicious campaign to distribute coin miners through drive-by downloads, which researchers say likely signals another wide-scale evolution in the cybercriminal enterprise.

“There isn’t a day that goes by without a headline about yet another massive spike in Bitcoin valuation, or a story about someone mortgaging their house to purchase the hardware required to become a serious cryptocurrency miner,” writes Jérôme Segura, lead malware intelligence analyst for Malwarebytes Labs in a new report this week. “As cryptocurrencies become more and more popular, we can only expect to see an increase in malicious coin miners, driven by the prospect of financial gains and increased anonymity.”

According to Segura, the bad guys are leveraging RIG in a new campaign called Ngay that distributes droppers containing one or more coin miner malware for cryptocurrencies like Monero and Electroneum. While some might write off these kinds of exploit kit payloads as less risky than a banking Trojan, Segura says their long-term impact is still serious.

“Not only can existing malware download additional payloads over the course of time, but the illicit gains from cryptomining contribute to financing the criminal ecosystem, costing billions of dollars in losses,” he says.

Overall, RIG remains one of the most prevalent exploit kits to distribute any kind of malicious payload online, not just coin miners. According to a report out today by Zscaler, this leading position was maintained in spite of a pretty sizable drop in activity last quarter. 

“We saw an approximate drop in weekly activity of 63% between October and November 2017. RIG EK has been active at about the same volume of activity into January 2018 since the end of October,” Derek Gooley, senior security researcher for Zscaler, told Dark Reading. “RIG maintained a fairly constant level of activity throughout the summer (of 2017), which is what made this recent drop of observed activity stand out.”

The Ngay campaign is not necessarily the first to have RIG EK or other exploit kits distribute coin miners, but it does offer an anecdotal touchstone for where researchers expect things to go in the next year.  

“Cryptocurrency mining payloads delivered by exploit kits are becoming increasingly common,” Gooley wrote in his report. “Earlier this fall we observed a one-off RIG campaign that used a different malicious redirect structure than the common RIG campaigns to deliver the exploit kit. This campaign infected victims with the Dofoil Trojan, which then installed the malicious BitCoinMiner cryptocurrency mining tool.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/rig-ek-remains-top-of-heap-turns-to-cryptomining/d/d-id/1330804?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Ohio coder accused of infecting Macs, PCs with webcam, browser spyware for 13 years

A computer programmer has been accused of hacking, committing identity theft, and creating child pornography after allegedly developing custom malware to take control of thousands of computers.

Phillip Durachinsky, 28, of North Royalton, Ohio, USA, was indicted on Wednesday on 16 separate charges relating to the alleged creation of malware dubbed Fruitfly, which could commandeer infected macOS and Windows PC systems. Prosecutors claim Durachinsky used the code to spy on thousands of people in a campaign that started in 2003, when he was just a teenager.

“For more than 13 years, Phillip Durachinsky allegedly infected with malware the computers of thousands of Americans and stole their most personal data and communications,” said Acting Assistant Attorney General John Cronan.

“This case is an example of the Justice Department’s continued efforts to hold accountable cybercriminals who invade the privacy of others and exploit technology for their own ends.”

According to court documents [PDF] filed in Ohio, Durachinsky created the malware to harvest keystrokes and snoop on web browser activity on infected systems. It also allowed the operator to watch and listen in on the victims via their webcams and microphones, and otherwise take full control of infected machines.

Prosecutors claimed the malware was configured to activate when the user of a compromised computer typed in search terms related to pornography. The Feds said he slurped millions of images from his victims’ cameras. It sounds as though the spyware would deliberately surveil people – kids and adults – as they browsed the web, particularly if they were looking at smut, and beam the pictures back to Durachinsky, allegedly.


“This defendant is alleged to have spent more than a decade spying on people across the country and accessing their personal information,” said First Assistant US Attorney David Sierleja.

The Fruitfly malware had computer security researchers puzzled for some time. The code was an interesting mix of very old and new coding styles. One suspicion was that it was state-sponsored malware, another that it was an espionage tool.

US prosecutors claimed they got involved after the malware cropped up multiple times on the servers of Case Western Reserve University. This led to an investigation and the arrest of Durachinsky. The FBI said the software nasty was later found on the US Department of Energy network, as well as in a police department and various schools.

“Durachinsky is alleged to have utilized his sophisticated cyber skills with ill intent, compromising numerous systems and individual computers,” said Special Agent in Charge Stephen Anthony.

“The FBI would like to commend the compromised entities that brought this to the attention of law enforcement authorities. It is this kind of collaboration that has enabled authorities to bring this cyber hacker to justice.” ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2018/01/11/ohio_fruitfly_malware_charges/

CISOs’ No. 1 Concern in 2018: The Talent Gap

Survey finds ‘lack of competent in-house staff’ outranks all other forms of cybersecurity worry, from data breaches to ransomware attacks.

The top concern among CISOs for 2018 falls outside the typical realm of attacks and employee negligence, according to findings released this week in a Ponemon Institute survey.

The top concern: “lack of competent in-house staff.”

“I am not surprised that this was a leading concern – it is consistent with what we have been hearing as a critical need and gap in the market. However, being the leading concern was somewhat surprising if you follow what’s typically the most reported consequences of the staffing situation: breaches and cyberattacks,” says Lee Kirschbaum, senior vice president and head of product, marketing, and alliances at Opus, which commissioned the report.

Larry Ponemon, author of the report, says he also was surprised by the finding, adding that typically data breaches, ineffective security tools, or some other technical aspect of guarding security tops the concerns list. Workforce issues are usually somewhere in the middle, he says.

According to the survey of 612 chief information officers and IT security pros, these are the top five threats that worry them most in 2018:

  • 70%:  lack of competent in-house staff
  • 67%:  data breach
  • 59%:  cyberattack
  • 54%:  inability to reduce employee negligence
  • 48%:  ransomware

A majority of survey respondents, 65%, also believe attackers will be successful in duping employees into falling for a phishing scam that results in the pilfering of credentials – even more so than they believe the organization will suffer a data breach or cyberattack.

“It is one of the oldest forms of cyberattacks, dating back to the 1990s, and one of the most widespread and easier forms of attacks,” Kirschbaum says. “It targets one of the weakest links – the human factor – and focuses on human behavior to encourage individuals to discuss sensitive information.”

Challenging technologies for IT security professionals in 2018 include IoT devices, 60%; mobile devices, 54%; and cloud technology, 50%, according to survey respondents.

Over the last year or two, CISOs have been increasingly talking about how to secure IoT devices and the challenges they pose, Ponemon says. Their questions have ranged from how to encrypt a smart lightbulb to whether IoT security should rest on the company or the manufacturer, he notes.

Gloom and Doom

CISOs exhibited a general sense of gloom in their survey responses, says Ponemon.

“Maybe security people are stoic. They don’t see 2018 as a year for improvement, and that security risks are becoming a greater problem,” notes Ponemon.

The survey found 67% of respondents believe their organizations are more likely to fall victim to a data breach or cyberattack in the New Year.

And the majority of respondents expect breaches and attacks to stem from inadequate in-house expertise (65%); an inability to guard sensitive and confidential data from unauthorized access (59%); an inability to keep pace with sophisticated attackers (56%); and a failure to control third parties’ use of the company’s sensitive data (51%), according to the survey.

“The sheer volume of information, ranging from threat intelligence to third-party assessments, continues to increase,” Kirschbaum says. “In an environment with increasing risks from new threats, new disruptive technologies, and legacy systems that continue to demand attention, companies are simply unable to bring on enough qualified staff to keep up.”

Despite all the talk of an IT security labor shortage, survey respondents appear relatively optimistic that improvements may be on the horizon in 2018. According to the survey, 61% of respondents believe they could see staffing improvements in 2018. That coincides with other research that Ponemon Institute is involved with, Ponemon says.

Four years ago, a Ponemon survey found 40% of IT security respondents complained that job openings went unfilled because they could not find candidates, but that figure has since dropped to 32% based on a follow-up survey this year, Ponemon says.

Despite potential staffing improvements, CISOs and other IT security professionals foresee stress in the New Year.

Source: Ponemon Institute Survey and Opus

“Overall, threats are multiplying, CISOs are having trouble finding in-house resources to keep up – and above all, are worried about threats they have limited control over, like the billions of new devices in the Internet of Things, each bringing with them potential new security threats and the always unpredictable element of human behavior,” Kirschbaum says.


Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET’s … View Full Bio

Article source: https://www.darkreading.com/vulnerabilities---threats/cisos-no-1-concern-in-2018-the-talent-gap/d/d-id/1330800?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Year’s #sophospuzzle crossword 2017/2018 – solution and winners!


🥇 Donal (@donalhunt), Cork, Ireland @ 2017-12-29T23:24Z
🥈 James Hodgkinson (Yaleman), Brisbane, QLD, Australia @ 2017-12-30T00:48Z
🥉 Rhett Hooper, Salt Lake City, UT, USA @ 2017-12-30T00:54Z

Andrew King, Darwin, NT, Australia
Melissa R, Mississippi, USA
Peter K, Hungary
Ian O’Neill, Leighton Buzzard, Bedfordshire, UK
William Steinka, Connecticut, USA
Presian Yankulov (@presianbg), Sofia, Bulgaria
Andrew Yeomans, Tring, Hertfordshire, UK
David Nason, Chapel Hill, NC, USA
Jenn L, Vancouver, BC, Canada
Jason Haar (@jhaar), Christchurch, New Zealand
James Beckett (@hackery), UK
Jorrit Kronjee, The Netherlands
Beverly P, Texas, USA
Monica G, Oregon, USA
John P, France
Richard Cardona (@richardcardona), Texas, USA
Phil Rhea, UK
Malcolm (@obelix6320), Bristol, UK
David McKenzie (@davewj), Glasgow, UK
Ramesh Babu, Kerala, India
Kenny P, Eau Claire, WI, USA


Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/17GDq65oNNc/