STE WILLIAMS

Old RAT, New Moves: Adwind Hides in Java Commands to Target Windows

The Adwind remote access Trojan conceals malicious activity in Java commands to slip past threat intelligence tools and steal user data.

The Adwind jRAT, a remote access Trojan known for targeting login credentials and other data, is adopting new tactics as its operators aim to better conceal malicious activity. Its actors exploit common Java functionality to steal information while evading defensive security tools.

Adwind, related to AlienSpy and also known as Frutas, Unrecom, Sockrat, and JSocket, is a known cross-platform RAT that has been targeting businesses since 2013. It’s capable of stealing credentials, system information, and cryptographic keys, as well as keylogging, taking screenshots, and transferring files. This jRAT typically uses phishing emails, infected software, or malicious websites to target a range of platforms including Windows, Linux, and macOS.

A new variant is focused on Windows machines and common Windows applications such as Explorer and Outlook, report researchers at Menlo Security, who detected it about four months ago. Adwind is now going after Chromium-based browsers, including newer entrants such as Brave. Menlo Security researcher Krishnan Subramanian says the pivot to Windows was a logical move for Adwind’s operators: while the jRAT was platform-agnostic, most of its victims ran Windows.

The latest jRAT variant uses Java to take control of a victim’s machine and collect its data. It’s specifically after login credentials, says Subramanian, who notes this particular variant has been actively targeting industries like financial services, where login credentials are valuable.

This malware arrives in a JAR file concealed in a link inside a phishing email or downloaded from a legitimate site serving up unsecured third-party content. Researchers also noticed infections coming from outdated and illegitimate WordPress sites, noting the latter delivery technique is growing popular among cybercriminals capitalizing on vulnerabilities in the publishing platform.

Adwind jRAT arrives in a malicious JAR file, with malware hidden under layers of obfuscation. The initial JAR decrypts itself, kicking off a set of processes that ends with the RAT initializing contact with the command-and-control (C2) server. Adwind then decrypts a file containing a list of C2 server IP addresses. It chooses one, and an encrypted request is made via TCP port 80 to load another set of JAR files. These activate the jRAT, which becomes functional and can respond to C2 requests to access and send credentials from the browser to a remote server. Credentials can be from banking websites or business apps, so long as they’re from a Windows browser or app.

Hidden in Plain Sight
This variant of Adwind stays hidden by acting like any other Java command. Millions of Java commands flow in and out of an enterprise network, and threat intelligence tools have little to no heuristics to use for creating a static rule or signature that will detect the initial JAR payload. There is nothing suspicious about its appearance or behavior; on the surface, it seems normal.

“Malware that takes advantage of common Java functionality is notoriously difficult to detect or detonate in a sandbox for the simple fact that Java is so common on the Web,” Subramanian writes in a blog post. As he explains, efforts to block or limit Java on the Web would have far-reaching consequences. It’s a nonstarter for those relying on rich web apps or SaaS platforms.

There is one way the Adwind jRAT stands out: Most Java commands don’t view and send stolen credentials to a remote server, Subramanian says. This behavior will eventually show itself.
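Subramanian's closing observation suggests a behavioral rather than signature-based approach to detection. A minimal sketch of that idea in Python (the event format, process names, and allowlist below are hypothetical, not taken from Menlo's tooling or any particular EDR product):

```python
# Hypothetical behavioral check: flag Java processes that open outbound
# connections to hosts not on an approved list. Event fields and the
# allowlist are illustrative assumptions, not a real product schema.

APPROVED_HOSTS = {"repo1.maven.org", "updates.example-corp.internal"}

def flag_suspicious(events):
    """Return events where a Java process contacts an unapproved host."""
    suspicious = []
    for ev in events:
        if (ev["process"].endswith(("java", "java.exe", "javaw.exe"))
                and ev["direction"] == "outbound"
                and ev["dest_host"] not in APPROVED_HOSTS):
            suspicious.append(ev)
    return suspicious

events = [
    {"process": "javaw.exe", "direction": "outbound", "dest_host": "185.0.2.10"},
    {"process": "java", "direction": "outbound", "dest_host": "repo1.maven.org"},
]
print([e["dest_host"] for e in flag_suspicious(events)])  # ['185.0.2.10']
```

A real deployment would baseline per-host behavior rather than rely on a static allowlist, but the principle is the same: the anomaly is the destination, not the Java command itself.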

Related Content:

This free, all-day online conference offers a look at the latest tools, strategies, and best practices for protecting your organization’s most sensitive data. Click here for more information and to register.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/operations/old-rat-new-moves-adwind-hides-in-java-commands-to-target-windows/d/d-id/1336205?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Running on Intel? If you want security, disable hyper-threading, says Linux kernel maintainer

Linux kernel dev Greg Kroah-Hartman reckons Intel Simultaneous Multithreading (SMT) – also known as hyper-threading – should be disabled for security due to MDS (Microarchitectural Data Sampling) bugs.

Kroah-Hartman, who was speaking at the Open Source summit in Lyons, has opened up on the subject before. “I gave a talk last year about Spectre and how Linux reacted to it,” he told The Reg. “And then this year it’s about things found since the last talk. It’s more and more of the same types of problems.

“These problems are going to be with us for a long time; they’re not going away.”

There is another issue, though. “People didn’t realise how we do security updates, the whole CVE mess, and the best practices we need to have. Linux isn’t less secure or more secure than anything else. The problem is: these are bugs in the chips. We fix them in time, we just have to make sure that everybody updates.”


Kroah-Hartman explained to attendees how these vulnerabilities “exploit bugs in the hardware when the chip is trying to look into the future”.

He added: “MDS is where one program can read another program’s data. That’s a bad thing when you are running in a shared environment such as cloud computing, even between browser tabs.

“You can cross virtual machine boundaries with a lot of this. MDS exploits the fact that CPUs are hyper-threaded, with multiple cores on the same die that share caches. When you share caches, you can detect what the other CPU core was doing.”

Open BSD was right, he said. “A year ago they said disable hyper-threading, there’s going to be lots of problems here. They chose security over performance at an earlier stage than anyone else. Disable hyper-threading. That’s the only way you can solve some of these issues. We are slowing down your workloads. Sorry.”
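For readers who want to follow that advice, recent Linux kernels (roughly 4.19 onward, where the SMT hotplug control landed) expose SMT state through sysfs. A small illustrative sketch; the parsing helper is an assumption of this example, not kernel tooling:

```python
# Check (and, as root, disable) SMT via the Linux sysfs interface.
# Assumes a kernel that exposes /sys/devices/system/cpu/smt/control;
# this is an illustrative sketch, not an official tool.

SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

def parse_smt_state(raw: str) -> bool:
    """Return True if SMT is active, based on the control file contents."""
    return raw.strip() == "on"

def smt_active(path: str = SMT_CONTROL) -> bool:
    try:
        with open(path) as f:
            return parse_smt_state(f.read())
    except FileNotFoundError:
        return False  # kernel too old, or SMT not supported on this CPU

# Disabling is a privileged write, equivalent to:
#   echo off > /sys/devices/system/cpu/smt/control
print(parse_smt_state("on\n"))   # True
print(parse_smt_state("off\n"))  # False
```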

Kroah-Hartman, a kernel maintainer, described some post-Spectre MDS examples, like RIDL, Fallout and ZombieLoad. “You can steal data across applications. You can steal data across virtual machines. And across ‘secure enclaves’, which is really funny. Inside Intel chips there is something called SGX [Software Guard Extensions] where you can run code that nobody else can see, it’s really porous. In the kernel we fix this by flushing buffers every time we switch context. It solves the problem.”

But then there’s the performance hit. “Flushing buffers takes time. Every single one of these mitigations, to solve hardware bugs, slows down your machine.”

The extent of the slowdown depends on the workload. If it is IO-bound, you may hardly notice. But Kroah-Hartman builds kernels. “I see a slowdown of about 20 per cent. That’s real. As kernel developers we fight for a 1 per cent, 2 per cent speed increase. Put these security things in, and we go back like a year in performance. It’s sad.”

Kroah-Hartman dispelled the idea that an issue like Spectre has a single fix. “We are still fixing Spectre 1.0 issues [almost] two years later. It’s taken a couple of thousand patches over [almost] two years. Always take the latest kernel and always take the latest BIOS update.”

The CVE database of security issues is irrelevant when it comes to the Linux kernel, he said. “CVEs mean nothing, for the kernel. Very few CVEs ever get assigned for the kernel. I’m fixing 20 patches a day, I could create a CVE to each one of them, I was told not to because it would burn the world down,” he said.

“If you’re not using a supported distro, or a stable long-term kernel, you have an insecure system. It’s that simple. All those embedded devices out there, that are not updated, totally easy to break. If you are running in a secure environment and you trust your applications and you trust your users then get the speed back. Otherwise, running in a shared environment, running untrusted code, you need to be secure.”

Is AMD safer than Intel? “All the issues that came out this year, were reported not to be an issue on AMD,” he told us. Would he enable SMT on AMD? “As of today, that is still a safe option from everything I know. Yes.”

Are the MDS vulnerabilities being actively exploited by malware? “They’re not that hard to exploit,” Kroah-Hartman said. “The research has proved how to do it. The hard part is, you can’t tell if somebody is exploiting it. But it is a known problem, you can reproduce it yourself. The ZombieLoad guys have a great demo. It’s a real issue, you need to fix it.”

SUSE queue?

As it happens, next up for The Register at the Open Source Summit was SUSE EMEA CTO Gerald Pfeifer. Naturally, we asked him whether SUSE ships with hyper-threading on or off by default.

“On,” he said. “Greg K-H? He’s right. Ultimately every customer needs to decide, because there is a cost associated with it. But from a technical perspective he’s right.”

Imagine, he said, you were Google. “Making that switch means one, two, three more data centres. I’m not arguing leave it on. All I’m saying is, it’s not an easy choice. Because someone is going to yell at you if something takes longer.”

So there you have it. If you’re running on Intel, but want to be secure: best practice is to disable hyper-threading and keep your BIOS and kernel up to date. In reality, though, many factors conspire against that best practice being achieved. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/29/running_on_intel_disable_hyper_threading_says_linux_kernel_maintainer/

Chrome devs tell world that DNS over HTTPS won’t open the floodgates of hell

Chrome devs have had a little rant about “misinformation”, repeating that DNS-over-HTTPS (DoH) will be supported but won’t necessarily be automatically used in upcoming builds of the browser.

In a blog post published last night, Google’s Chrome product manager insisted it was not going to “force users to change their DNS provider” after building the technology into Chrome 78, released last week.

The blurb comes as part of Google’s effort to convince hostile police agencies and legislators around the world that DNS-over-HTTPS (DoH) won’t result in ordinary people’s internet usage being completely shielded from the ability of state agencies and ISPs to monitor and police them – the snoops will just have to work harder to eavesdrop on folks. In contrast, Mozilla, maker of Firefox, has vowed to press on and redirect users’ DNS queries to its preferred host, Cloudflare, if it is so enabled.

Google said last night that Chrome’s DoH feature will operate by checking whether the user’s DNS provider – typically their ISP – is on a Google list of participating DoH providers. This, so far, small list includes Google’s own DNS service, OpenDNS, Cloudflare, and a few others. If the netizen’s provider is on the list, the query is routed to that DoH server, and if not, then their DNS queries continue over an unencrypted connection, just as they do today.
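The provider-matching behavior Google describes can be sketched as a simple lookup with a plain-DNS fallback. This illustrates the stated policy only; it is not Chrome's actual code, and the provider table below is abbreviated and illustrative:

```python
# Sketch of the "same-provider auto-upgrade" policy described above: if the
# user's configured resolver maps to a known DoH endpoint, upgrade the
# transport; otherwise leave DNS behavior unchanged. The mapping here is
# illustrative, not Chrome's real table.

DOH_PROVIDERS = {
    "8.8.8.8": "https://dns.google/dns-query",               # Google Public DNS
    "1.1.1.1": "https://cloudflare-dns.com/dns-query",       # Cloudflare
    "208.67.222.222": "https://doh.opendns.com/dns-query",   # OpenDNS
}

def resolve_transport(configured_resolver: str):
    """Return ("doh", url) when an upgrade is possible, else ("plain", ip)."""
    doh_url = DOH_PROVIDERS.get(configured_resolver)
    if doh_url:
        return ("doh", doh_url)
    return ("plain", configured_resolver)  # behavior unchanged, as today

print(resolve_transport("1.1.1.1"))
print(resolve_transport("192.0.2.53"))  # e.g. an ISP resolver not on the list
```

The key design point, and the one Google is leaning on with regulators, is the fallback branch: users whose resolver is not on the list see no change at all.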

“We are optimistic about the opportunities DoH offers for improving user privacy and security, but we also understand the importance of DNS and that there could be implementation concerns we haven’t foreseen,” simpered the Chocolate Factory in its blog post. It might as well have said: “Please, regulators, don’t ban or bugger about with this.”

It added: “We’re taking an incremental approach with this experiment, and our current plan is to enable DoH support for just 1% of our users, provided that they are already using a DoH compliant DNS provider. This will allow Google and DoH providers to test the performance and reliability of DoH. We’ll also monitor feedback from our users and from other stakeholders, including ISPs.”


In addition, to keep corporate admins sweet and not allow enterprise end-users to bypass carefully honed corporate web access policies, Google added: “Most managed Chrome deployments such as schools and enterprises are excluded from the experiment by default. We also offer policies for administrators to control the feature.”

Paul Vixie, Farsight Security CEO and a contributor to the design of the DNS protocol, who last month warned DoH could limit network admins’ autonomy, opined on Twitter last night that Mozilla should “do DoH in Firefox the way Google is doing it in Chrome”.

DNS lookups essentially translate the domain name you type into your browser – say, theregister.co.uk – into a machine-readable format so internet servers can fetch you your IT news and daily fix of cat videos. At the moment those queries are unencrypted, and while this makes them theoretically vulnerable to eavesdropping, filtering, and tampering, in practice the world keeps turning without too many problems.
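The encrypted counterpart to those plain queries is defined in RFC 8484: the same binary DNS message, base64url-encoded into a DoH GET request. A minimal sketch (the endpoint is illustrative, and no network call is made):

```python
import base64
import struct

# Build a minimal DNS query for an A record and encode it for a DoH GET
# request per RFC 8484 (base64url, padding stripped). Sketch only; no
# request is actually sent.

def build_dns_query(name: str) -> bytes:
    """Return a wire-format DNS query: header, QNAME, QTYPE=A, QCLASS=IN."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID 0, RD set, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)

def doh_get_url(endpoint: str, name: str) -> str:
    dns = base64.urlsafe_b64encode(build_dns_query(name)).rstrip(b"=").decode()
    return f"{endpoint}?dns={dns}"

print(doh_get_url("https://cloudflare-dns.com/dns-query", "theregister.co.uk"))
```

Because the whole exchange rides over HTTPS on port 443, it is indistinguishable on the wire from ordinary web traffic, which is precisely what worries the surveillance-minded governments discussed below.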

Countries such as the UK set great store by surveilling users’ DNS queries. In the context of Google and Mozilla’s DoH proposals, the most useful tool available to state agencies is the ability to order domestic DNS server operators to sinkhole certain results, such as those leading to child abuse material. This is how the Internet Watch Foundation’s blacklist operates.

To head off the UK’s notoriously technophobic civil service and government ministers, Mozilla agreed not to make DoH a default option for British users – though a few mouse clicks is all it takes to turn it on. Americans will eventually default to sending all their DNS queries to Cloudflare, however.

In addition to preventing users from accessing content that upsets local authorities, ISPs also use their own DNS servers to implement things like parental controls, antivirus and general online safety, helping keep users away from compromised websites. This is a useful thing at a time when increasingly large proportions of ISPs’ userbases have no idea about basic online security precautions and don’t really care enough to learn about them. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/10/29/chrome_dns_https/

Why It’s Imperative to Bridge the IT & OT Cultural Divide

As industrial enterprises face the disruptive forces of an increasingly connected world, these two cultures must learn to coexist.

We hear it all the time from security marketers and evangelists alike. “Information technology and operational technology are converging!” It’s a simplistic way of characterizing what is a highly complex web of digital transformations affecting a broad range of industries, from manufacturing to energy to real estate.

But the statement is only half true. IT and OT are converging from a technology perspective, but the two disciplines are lagging from a governance and management perspective.

When I first stepped in as the chief technology officer of New Jersey, a veteran of the state’s enterprise IT agency gave me a simple piece of advice. Having served decades in government, he had learned one indisputable truth: “Technology is the easy part; culture is the hard part.”

As I speak with chief information security officers (CISOs), security operations center (SOC) analysts, and plant engineers in the course of my work, I can’t help but relate those words to industrial enterprises facing the disruptive forces of an increasingly connected world. As IT and OT technologies converge, their respective people and processes remain separated by different professional and intellectual cultures. This needs to change.

Perhaps the most obvious cultural divide between the two disciplines is how each thinks about risk. On the IT side, risk is largely calculated in the context of security. This means that consequences are often measured in terms of data loss, reputational harm, and legal or regulatory liability. On the OT side, risk is all about safety. The consequences range from plant downtime and the associated profit loss to physical damage and personal injury. 

In addition, IT and OT personnel might as well speak different languages: OT practitioners deal in obscure and often vendor-proprietary protocols, while IT professionals speak an almost universal vernacular with a growing bias toward open source technology. This language barrier hinders cross-functional collaboration and perpetuates siloed cultures — both of which are incompatible with mitigating the cyber-risks of converging IT and OT systems.

Finally, there’s the issue of leadership. Most organizations still have not decisively adapted their organizational structures to address this new normal. While more and more CISOs are gaining responsibility for OT security, many enterprises are still federated in their governance structure. This perpetuates the institutional divides between IT and OT, and also contributes to redundancy in both technology investments and human resources. 

All of these should make us ask this important question: What do we do about it?

The first step is to converge your IT and OT people. Make them sit together, eat with each other, and go to happy hour together. It sounds straightforward because it is. There shouldn’t be any daylight between these teams. Their respective networks are colliding into one network and the organizational structure must mirror this change, which may take some forcing.

Yes, IT and OT folks are different breeds, but at the end of the day, they’re more likely than not to unite around common interests — especially when they all share a common boss. As a side benefit, converging IT and OT teams will naturally break down the language barrier between the two groups.

Step two, a slightly more complicated stage, is to converge your IT and OT processes. Doing so will require an independent third party to harmonize the two disciplines’ different risk calculations. This independent third party can be the enterprise SOC, acting as a fusion center of sorts for both IT and OT security. By assuming a technology-agnostic monitoring posture, the SOC can translate IT to OT and vice versa, applying universal standards to managing both IT and OT cyber-risk. Other processes will naturally flow from this example, be it vulnerability management, change and configuration management, or incident response.

These steps are a simple starting point, with much more to consider as both technologies converge and grow. But the cost of doing nothing — leaving OT to outpace IT — will likely push your organization’s overall security risk to prohibitive levels.

Related Content:

Check out The Edge, Dark Reading’s new section for features, threat data, and in-depth perspectives. Today’s top story: “Is Voting by Mobile App a Better Security Option or Just ‘A Bad Idea’?”

Dave Weinstein is the chief security officer of Claroty. Prior to joining Claroty, he served as the chief technology officer for the State of New Jersey, where he served in the Governor’s cabinet and led the state’s IT infrastructure agency. Prior to his appointment as CTO he … View Full Bio

Article source: https://www.darkreading.com/operations/why-its-imperative-to-bridge-the-it-and-ot-cultural-divide/a/d-id/1336173?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Cybersecurity Trumps Political, Reputational Concerns for Companies

The average company has seen its risk increase, with cybersecurity topping the list of business threats, followed by damage to reputation and financial risks, a report finds.

Companies are increasingly worried about risks in the current economy, and while reputational, financial, and political risks continue to concern corporate managers, cybersecurity has become the top worry for companies, according to a survey conducted by IT governance group ISACA.

According to its annual “State of Enterprise Risk Management” report, ISACA found that 29% of the 4,625 risk managers polled identify cybersecurity as the top threat to their business, while 15% consider reputational risks and 13% name financial dangers as most critical. The average risk manager expects their risk to increase slightly in 2020, with the share of risk managers considering cybersecurity to be most critical climbing to 33%.

Overall, the steady advance of technology — both in the corporate infrastructure and in the tools used to attack that infrastructure — has left risk managers concerned that they are not keeping up with cybersecurity threats, says Rob Clyde, director at ISACA.

“You have a dedicated group of adversaries that are trying to attack you, and they are using new technologies,” he says. “Training alone is not going to solve those problems because the attacks have become so sophisticated.”

The survey underscores that companies need to continue to develop their risk management capabilities. While the fundamentals of risk management, such as assessment and identification, are widely adopted, many domain-specific processes continue to be poorly understood. While only 24% of risk managers consider defining and assessing compliance and legal risk to be difficult, 41% find cybersecurity assessment difficult and 46% consider political-risk assessment difficult, according to the report.

It would not be “appropriate for every enterprise to work toward the highest maturity level for every risk management step,” according to the report. But companies could benefit “with moving away from ad-hoc processes toward a workmanlike, systematic, documented and repeatable methodology for risk management.”

New technologies and changes in current IT pose the most significant cybersecurity challenge for companies, according to the report. The largest share of respondents — 64% — identify technological advancement as a significant challenge for their company. Other challenges include the changing threat landscape, a lack of skilled cybersecurity workers, and workers missing the necessary skills for cybersecurity. 

The technology with which most companies continue to struggle is not even new: the cloud. Seven in 10 companies identify the cloud as a technology that is increasing their risk, followed by 34% identifying the Internet of Things and a quarter pointing to machine learning and artificial intelligence.

The fact that an “old” technology is still causing headaches for companies is telling, Clyde says. “It’s not like it just came out yesterday — it is something that enterprises have gotten to know for quite a while,” he says. “I’m a little bit concerned.”

Yet cloud is also a solution to security problems for many companies, he adds. “Cloud providers are improving many measures of security for businesses, especially small businesses,” he says.

Reputational and political risks stymie many companies. At least half of firms have difficulty measuring the reputational and political risks that could potentially affect their business, according to the study. Unsurprisingly, with a lack of measurement comes an inability to manage the problems as well: nearly as many find mitigating the risks of political or reputational damage difficult or worse.

“Political risk is often outside of a company’s control,” Clyde says. “It can be really tough to plan for and mitigate.”

While most companies focus on awareness training as a fundamental part of managing risk, companies should pursue that training throughout the company, not just among senior management, ISACA’s report stated. In addition, training workers with domain-specific skills is important, especially with cybersecurity, which is already suffering from a scarcity in resources and knowledgeable workers, Clyde says.

“The knowledge needed to deal with certain types of risk and certain types of security needs to be embedded in the company itself,” he says. “Organizations need to cross-train people — find someone in an adjacent field and train them so that they can cross over.”


Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT’s Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline … View Full Bio

Article source: https://www.darkreading.com/risk/cybersecurity-trumps-political-reputational-concerns-for-companies/d/d-id/1336200?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Who Made the List Of 2019’s Nastiest Malware?

This year’s compilation features well-known ransomware, botnet, and cryptomining software.

Just in time for Halloween comes Webroot’s list of the nastiest malware of 2019, filled with attacks and exploits that include ransomware, phishing attacks, botnets, cryptomining, and cryptojacking.

Among the mentions, Emotet, Trickbot, and Ryuk are cited as “the most frightening ransomware triple threat,” according to researchers. The “top offenders” in the malware category are Emotet, Trickbot, and Dridex. Meanwhile, Hidden Bee and Retadup are noted for creating cryptojacking havoc.

Webroot’s selections are based on malware that delivered the greatest number of malicious payloads or caused the most damage to victims.

Read more here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/attacks-breaches/who-made-the-list-of-2019s-nastiest-malware/d/d-id/1336201?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Google Cloud Adds New Security Management Tools to G Suite

Desktop devices that log into G Suite will have device management enabled by default, streamlining processes for IT admins.

Google Cloud today debuted a new release of security and identity management tools for G Suite in an effort to give enterprise IT administrators more streamlined control over devices.  

Desktop devices that log into G Suite will have fundamental device management enabled by default. When a user logs in to G Suite through any browser on a Windows, Mac, Chrome, or Linux machine, it will automatically be registered with endpoint management. Users don’t have to install agents or profiles on the device. Admins can view device type, operating system, first sync time, and last sync time via the admin console. They can also sign users out from a device.

The idea behind fundamental device management is to give admins a clearer picture of all machines accessing corporate data and help them make security and policy decisions about how to manage enterprise devices. Through the console, admins can identify which devices need operating system updates or remotely log someone out if a computer is lost or stolen.

Today’s update also gives admins the ability to filter for devices that don’t have endpoint verification, which can help them identify which are accessing corporate data without it. Google Cloud points out this can aid in the deployment of context-aware access control, which relies on endpoint verification and lets admins create granular control policies for apps based on factors like user identity, location, device security status, and IP address. Context-aware access for G Suite is now generally available for G Suite Enterprise and G Suite Enterprise for Education.
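The filtering workflow described above amounts to selecting inventory records that lack endpoint verification. A toy sketch in Python; the field names are hypothetical and do not reflect the actual Admin SDK schema:

```python
# Illustrative admin-side filter: from a device inventory, list devices
# accessing corporate data without endpoint verification. Field names are
# hypothetical assumptions, not the real G Suite API schema.

devices = [
    {"id": "dev-1", "os": "Windows", "endpoint_verified": True,  "last_sync": "2019-10-28"},
    {"id": "dev-2", "os": "macOS",   "endpoint_verified": False, "last_sync": "2019-10-29"},
    {"id": "dev-3", "os": "Linux",   "endpoint_verified": False, "last_sync": "2019-10-25"},
]

def unverified(inventory):
    """Return IDs of devices without endpoint verification."""
    return [d["id"] for d in inventory if not d["endpoint_verified"]]

print(unverified(devices))  # ['dev-2', 'dev-3']
```

In practice an admin would run this kind of filter in the console rather than in code, but the output is the same: a worklist of devices to bring under context-aware access policies.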

Fundamental device management begins rolling out today. An extended rollout could take longer than 15 days for feature visibility; Google says it could take up to six months to reach all domains.

Read more details here.


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

Article source: https://www.darkreading.com/cloud/google-cloud-adds-new-security-management-tools-to-g-suite/d/d-id/1336202?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Why Cloud-Native Applications Need Cloud-Native Security

Today’s developers and the enterprises they work for must prioritize security in order to reap the speed and feature benefits these applications and new architectures provide.

The rise of cloud-native application architectures — and the deployment speeds they enable — is forcing many organizations to prioritize new features and functionality over security. However, compromises that make security a low priority expose these organizations to greater risk.

Recent discoveries of API vulnerabilities and the commonplace nature of generic container configurations, for example, have combined to make modern applications highly susceptible to attacks. Enterprises cannot afford to let application security be an afterthought.

Unlike traditional applications, which consisted of a single workload and required additional resources to ensure speed, today’s cloud-native applications are mostly microservices-based. There are probably as many perceptions of the exact definition of the word “microservice” as there are development teams working on modern applications. Typically, however, these applications are broken up in such a way that each individual microservice can scale independently of the others.

If you need the application to go faster, you add more instances of the microservice that is currently acting as a bottleneck. This approach works well — except for the high likelihood of risk exposure by human error.
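The scale-the-bottleneck pattern can be sketched in a few lines. The thresholds, metrics, and service names here are illustrative; real autoscalers (e.g. Kubernetes' Horizontal Pod Autoscaler) use richer signals:

```python
# Sketch of the scaling approach described above: add a replica only to the
# service that is currently the bottleneck (highest utilization), and only
# if it is above a threshold. All values are illustrative.

def scale_bottleneck(utilization: dict, replicas: dict, threshold: float = 0.8):
    """Return a new replica map with one more instance of the busiest
    service, if its utilization exceeds the threshold."""
    service, load = max(utilization.items(), key=lambda kv: kv[1])
    new_replicas = dict(replicas)
    if load > threshold:
        new_replicas[service] += 1
    return new_replicas

util = {"auth": 0.45, "catalog": 0.92, "checkout": 0.60}
print(scale_bottleneck(util, {"auth": 2, "catalog": 3, "checkout": 2}))
# catalog gains a replica: {'auth': 2, 'catalog': 4, 'checkout': 2}
```

Note what the article goes on to say: the fragility here is rarely in the scaling arithmetic itself, but in the humans configuring and sourcing each of those replicated services.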

Cloud-Native Vulnerability
It is bound to happen. Human beings, especially when working quickly to meet deadlines, make bad judgment calls. Despite warnings, employees will continue to copy and paste blindly from Stack Exchange, make microservices out of random applications found on GitHub, and even automate these microservices to regularly pull code from a repository maintained by an unknown and only questionably trustworthy third party that the developer in question has never met, or even conversed with.

Even in instances when all code is written in-house, removing the risk of third-party actors, the distributed nature of a microservices-based application means that each component can be “owned” by a different team. Communication barriers between teams can lead to all sorts of problems, among them a lack of coordination regarding testing, quality assurance, and even the resolution of vulnerabilities in the application.

A single cloud-native application can consist of thousands of workloads spread across multiple infrastructures. There can be individual microservices in on-premises data centers, multiple public clouds, edge data centers, and, eventually, in network locations we have yet to develop.

Each developer — and each team of developers — knows how to solve different problems. What they work on determines their focus and shapes their experience. Even if every team were to somehow make their own piece of the larger application “secure,” from an internal code perspective, that microservice needs to communicate with others, and that communication is a point of vulnerability. 

The bad guys — and even paying customers — are well known for doing things to applications that developers simply didn’t anticipate, often exposing vulnerabilities in implementation and execution that aren’t visible in a simple code review. In addition, each infrastructure that applications can run on has a different security model, with different controls to be learned. Every difference adds scope for further implementation vulnerabilities.

This all sounds super scary. But cloud-native applications evolved for a reason. They solve very real problems and are not going away, creating a serious need to secure them. So, what can we do?

Learn, Adapt, Implement
We might call an assemblage of thousands of interoperating workloads a single application, but that doesn’t mean it is one. A cloud-native “application” is, in fact, a whole bunch of individual applications stitched together with automation and orchestration — and a demographically disproportionate amount of caffeine.

Each and every microservice template (from which the multitude of instances are spawned) needs to be treated like its own application when it comes to patching and code sourcing. It must be regularly updated, its code must come only from known-good places, and any changes in code should be tested before being allowed into production. That includes changes made to third-party repositories.

And each microservice instance — or, in the worst case, each group of similar instances on a single host or pod — also needs to be treated like an application to ensure security. Data that flows in and out needs to be analyzed, baselined, and monitored for unexpected deviations.
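As a minimal sketch of that baseline-and-monitor idea (the names and numbers are hypothetical; a real deployment would use a telemetry pipeline rather than a hand-rolled statistical check):

```python
import statistics

def is_anomalous(baseline: list, observed: float, threshold: float = 3.0) -> bool:
    """Flag a measurement (e.g., requests/sec into a microservice instance)
    that deviates more than `threshold` standard deviations from baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) > threshold * stdev

# Baseline traffic into one microservice instance, sampled per minute.
baseline_rps = [101.0, 98.0, 103.0, 99.0, 100.0, 97.0, 102.0]

print(is_anomalous(baseline_rps, 100.0))  # normal traffic
print(is_anomalous(baseline_rps, 450.0))  # sudden spike worth investigating
```

The same check applies equally to data flowing out of an instance, which is where compromised microservices often reveal themselves.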

Dependency management and copy data management applications can help with the template herding, but securing running instances means existing network security defenses — firewalls, advanced threat protection, command-and-control sensing, and so forth — all need to get smaller. They need to fit in containers and be able to run alongside microservices. They need to be as easy to automate and orchestrate as the microservices they defend.

The important part here, however, is that there needs to be more than just the bare-bones firewall offered at the edge of every virtual data center a public cloud provides. Just as lateral movement can occur when an application in a traditional, on-premises data center is compromised, lateral movement can occur within an application (or at least within the portions of the application that live in that virtual data center) when one of its microservices is compromised.

Today’s cloud-native developers and the enterprises they work for need to prioritize security in order to reap the speed and feature benefits these applications and new architectures provide.


Trevor Pott is a Product Marketing Director at Juniper Networks. Trevor has more than 20 years of experience as a systems and network administrator. From DOS administration to cloud native information security, Trevor has deep security knowledge and a career that matches the … View Full Bio

Article source: https://www.darkreading.com/cloud/why-cloud-native-applications-need-cloud-native-security/a/d-id/1336187?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

The Real Reasons Why the C-Suite Isn’t Complying with Security

Is the C-suite really that bad at following security policy? Or is it a case of mixed messages and misunderstanding?

Douglas Graham, chief security officer at globalization services provider Lionbridge, says it’s time to put the rumors to rest: C-suite executives are getting a bad rap for refusing to comply with security policies. In his experience, their so-called failure to fall in line is often a case of simple misunderstanding.

“At a prior workplace, I once asked my CEO why he’d refused to do security awareness training. He told me he’d never said that at all and that he felt that there should be no exceptions to the training. As it turns out, someone else had made the decision for him,” Graham recalls. “Across my career, I’ve found many C-suite executives are not fully aware when they aren’t in compliance; rather, others often make decisions for them based on what they think the executive might or might not tolerate.”

New research from security vendor Bitdefender on C-level executives’ willingness to follow security protocol is not encouraging. Its “Hacked Off!” report surveyed more than 6,000 infosec professionals globally, 57% of whom said key executives are the ones least likely to comply with a company’s cybersecurity policy.

But why? Other research finds security is an increasing priority across all levels at most organizations. For example, a study earlier this year from Radware found cybersecurity was recognized as a key business driver by the C-suite, with 98% of C-suite executives noting they have some management responsibility for security.

It’s also well-known that CEOs and other top execs, with their influence and exposure to critical data, are seen as targets. According to Verizon’s “2019 Data Breach Investigations Report,”  C-level executives are being increasingly and proactively targeted by social breaches for financial gain. And senior executives are 12 times more likely to be the target of social-engineered attacks.

Clearly, an understanding of the need to be careful and risk-averse is sitting with the C-suite. So why are they getting a reputation for being bad at compliance?

“It’s time to take a look at the controls or the culture,” Graham says. “CISOs need to work with the C-suite and other key influencers to explain the reason behind the controls and not just demand compliance for compliance sake, even if that takes more time.”  

High-Ups Require Different Levels of Control
John Pescatore, a veteran analyst and professional in the security industry, has seen the issue evolve over the years. Currently the director of emerging security trends at SANS, he says one of the most common reasons executives don’t comply with security policies is that they need security controls designed specifically for them.

“Too often, security policy has been one-size-fits-all – the same for the CEO as for her secretary,” Pescatore says. “This makes no sense. Never has. There are many areas in corporate policy where executives have additional privileges and accommodations compared with the average employee, and security policies need to do the same.”

Pescatore points to the example of a security policy that prohibited the use of BlackBerry devices several years ago and, in more recent years, iPhones. A solid case could easily be made for giving executives secure mobile devices, and it made sense to find a way to configure those devices securely for them. But in many places, that didn’t happen. So executives simply ignored the directive not to use their devices for work – and did it anyway.

“Too many security teams fall back on, ‘Well, we told them not to do that’ rather than focus on developing security architectures and controls that can enable those executives to securely meet the demands of their jobs,” Pescatore says.

Spell Out the Risk With $$$
Want to get executives to pay attention to how much their lack of compliance might cost? Give them a breakdown of the cost of a breach, says John Gelinne, managing director, advisory for Deloitte Cyber Risk Services.

“CISOs need to hit the other C-suite members in the pocketbook,” Gelinne says. “Taken one step further, the financial impact associated with the exploitation of an executive by an adversary can be calculated through evolving cyber-risk quantification-modeling techniques. Cyber-risk modelling can illustrate, in financial terms, the broad business impacts a cyberattack can have — from the time an incident has been discovered through the long-term recovery process — all as a result of a single executive, exploited by a single adversary and a single point in time. By looking realistically at potential costs, business leaders can see the direct impact of how their actions can hit them, and their shareholders, in the pocketbook.”  
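A drastically simplified sketch of that kind of cyber-risk quantification is a Monte Carlo model: assume an annual probability that a single executive is compromised and a range of business-impact costs, then simulate many years to estimate an expected annual loss. All of the inputs below are hypothetical placeholders, not figures from Deloitte:

```python
import random
import statistics

def simulate_annual_loss(prob_exec_compromised: float,
                         loss_low: float, loss_high: float,
                         trials: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo sketch: each trial asks whether one executive is
    compromised this year and, if so, draws a business-impact cost from
    a triangular distribution between loss_low and loss_high."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        if rng.random() < prob_exec_compromised:
            losses.append(rng.triangular(loss_low, loss_high))
        else:
            losses.append(0.0)
    return statistics.fmean(losses)

# Hypothetical inputs: 5% annual chance of compromise, $0.5M-$20M impact.
expected = simulate_annual_loss(0.05, 0.5e6, 20e6)
print(f"Expected annual loss: ${expected:,.0f}")
```

Expressing the risk as a dollar figure, rather than a severity color, is what makes this framing land with the rest of the C-suite.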

Gelinne also recommends training specific to the C-suite. This can help them understand how — and why — they will be targeted with spear-phishing and whaling attacks over email. When it comes to executives, criminals are often willing to be very patient in order to pull off the long con in the hope of a very big payoff.

Is It All Just a Big Misunderstanding?
Pescatore also notes that a lot of the buzz around the C-suite’s bad attitude about compliance is misplaced.

“When I do briefings to boards of directors, I always ask, ‘How many of you use an iPhone or Android phone for your business?’ and these days it is pretty much 100% of them. Then I ask, ‘How many of you use the fingerprint or facial recognition to open your phone?’ Typically, 80% to 90% [do], yet most security teams will say, ‘We can’t get the executives to use strong authentication.'”

So maybe it’s time for the security team to have a change of heart about executives. A lot of evidence shows those at the C-level really do care about security. Ultimately, Pescatore says, it’s about marketing security in a positive way to get C-suite buy-in. And many CISOs are already doing that, he says.

“[I have a] lot of positive examples out there of CISOs strong at communicating, selling, and enabling business,” he says. “Those are probably in the 41% that were not blamed [for lack of compliance] in the Bitdefender survey.”

(Image: pictworks via Adobe Stock)


Joan Goodchild is a veteran journalist, editor, and writer who has been covering security for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online. View Full Bio

Article source: https://www.darkreading.com/edge/theedge/the-real-reasons-why-the-c-suite-isnt-complying-with-security/b/d-id/1336204?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

New Facebook AI fools facial recognition

Facebook is simultaneously embroiled in privacy struggles over its use of facial recognition, working to spread the technology far and wide, and coming up with ways to flummox it so it can’t match an image of a person to one stored in image databases.

On Sunday, Facebook Research published a paper proposing a method for using a machine learning system for de-identification of individuals in videos by subtly distorting face images so they’re still recognizable to humans, but not to machines.

Other companies have done similar things with still images, but this is the first technology that works on video to thwart state-of-the-art facial recognition systems.

The researchers demonstrated it in action with before-and-after videos of celebrity faces that many of us will recognize but that automatic facial recognition (AFR) systems can’t identify.

This, from the holder of the world’s biggest face database?

Why would Facebook do this, when it’s been so keen to push facial recognition throughout its products, from photo tag suggestions on to patent filings that describe things like recognizing people in the grocery store checkout lines so the platform can automatically send a receipt?

An approach that’s resulted in bans on facial recognition in Europe and Canada, and at least one $5 billion class-action lawsuit?

Facebook last month turned off the default setting for tag suggestions – the feature that automatically recognizes your friends’ faces in photos and suggests name tags for them – while also expanding facial recognition to all new users.

In the de-identification paper, researchers from Facebook and Tel Aviv University said the need for this type of artificial intelligence (AI) technology has been precipitated by the current state of the art in the adoption and evolution of facial recognition. That state of the art is a mess, given the growing number of governments that use it and other AI to surveil their citizens, and the abuse of the technology to produce deep fakes, which adds to the confusion over what’s real and what’s fake news.

From the paper:

Face recognition can lead to loss of privacy and face replacement technology may be misused to create misleading videos.

Recent world events concerning the advances in, and abuse of, face recognition technology invoke the need to understand methods that successfully deal with deidentification. Our contribution is the only one suitable for video, including live video, and presents quality that far surpasses the literature methods.

VentureBeat spoke with one of the researchers, Facebook AI Research engineer and Tel Aviv University professor Lior Wolf, who said the AFR fooler works by pairing an adversarial autoencoder with a classifier network.

It enables fully automatic video modification at high frame rates, “maximally decorrelating” the subject’s identity while leaving the rest of the image unchanged and natural looking: that includes the subject’s pose and expression and the video’s illumination. In fact, the researchers said, humans often recognize identities by nonfacial cues, including hair, gender and ethnicity. Therefore, their AI leaves those identifiers alone and instead shifts parts of the image in a way that the researchers say is almost impossible for humans to pick up on.

This could be used to create video that can be posted anonymously, Wolf said:

So the autoencoder is such that it tries to make life harder for the facial recognition network, and it is actually a general technique that can also be used if you want to generate a way to mask somebody’s, say, voice or online behavior or any other type of identifiable information that you want to remove.

It’s comparable to how face-swapping apps work: the de-identification AI uses an encoder-decoder architecture to generate both a mask and an image. To train the system, an image of a person’s face is distorted by rotating or scaling it and then fed into the encoder. The decoder outputs an image that’s compared with the initial, undistorted image. The more obfuscation, the less natural looking the face.
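That training loop can be illustrated with a deliberately trivial toy, not taken from the paper: here the “image” is three pixel values, the “distortion” is a fixed 0.8x scaling, and the entire encoder-decoder is collapsed into a single learnable gain. The point is only the shape of the loop: distort, reconstruct, compare with the undistorted original, update.

```python
original = [0.2, 0.5, 0.9]               # the undistorted "image"
distorted = [p * 0.8 for p in original]  # fixed scaling applied before encoding

w = 0.0   # the entire toy "encoder-decoder": output = w * pixel
lr = 1.0  # learning rate

for _ in range(200):
    # gradient of the mean-squared reconstruction error vs. the original
    grad = sum(2 * (w * x - y) * x for x, y in zip(distorted, original)) / len(original)
    w -= lr * grad

print(round(w, 3))  # the model learns to undo the 0.8x distortion (w -> 1.25)
```

In the real system, of course, the reconstruction target is deliberately decorrelated from the subject’s identity rather than matched to it exactly; this toy only shows the distort-then-reconstruct training mechanic.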

A Facebook spokesperson told VentureBeat that the company currently has no plans to apply this AFR-roadblock technology to any of Facebook’s apps but that methods like this could enable public speech that remains recognizable to people but not to AI systems.

Where could this come in handy?

I can think of at least two scenarios in which face de-identification would come in handy when it comes to government use of facial recognition technology. It might have the potential to replace the facial recognition-enhanced police bodycams that recently got outlawed statewide (for three years) in California. The states of Oregon and New Hampshire already had similar laws on the books, as do various US cities.

It could also conceivably help the forgetful who fail to scrub the faceprints off their agency’s files when privacy experts come calling with their Freedom of Information Act (FOIA) requests, as happened when the New York Police Department (NYPD) handed over non-redacted files in April… and then had to ask to get them back.

Whether those are pluses or minuses vis-a-vis privacy rights is a worthy discussion. But one case comes to mind in which use of face de-identification technology could conceivably lead to undeniable privacy harm: namely, the theft of women’s images to create deep fake porn.

These days, you can churn those things out fast and cheap, with little to no programming skills, thanks to open-sourcing of code and commodification of tools and platforms. For all the women, particularly celebrities, whose likenesses have been stolen and used in porn without their permission, there are no quick, easy-to-use, affordable tools to spot their faceprints and identify deep fakes as machine-generated.

Would de-identification technology make it even tougher to find out when you’ve been unwillingly cast in nonconsensual porn? Is there any reason why deep-fake creators couldn’t, or wouldn’t, get their hands on this so they can keep cashing in on their fakery work?

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/98QOXSevUPY/