STE WILLIAMS

7 Stats That Show What It Takes to Run a Modern SOC

An inside look at staffing levels, budget allocation, outsourcing habits, and the metrics used by security operations centers (SOCs).

Image Source: Adobe Stock (Gorodenkoff)

As the nerve center for most cybersecurity programs, the security operations center (SOC) can make or break an organization’s ability to detect, analyze, and respond to incidents in a timely fashion. According to a new study from SANS Institute, today’s SOCs are treading water when it comes to making progress on maturing their practices and improving their technical capabilities. Experts say that may not be such a bad thing considering how quickly the threats and the tech stacks they monitor are expanding and changing.

“Going strictly by the numbers, not much changed for SOC managers from 2018 to 2019,” wrote Chris Crowley and John Pescatore in the SANS 2019 SOC Survey report. “However, just staying in place against these powerful currents is impressive, considering the rapid movement of critical business applications to cloud-based services, growing business use of ‘smart’ technologies driving higher levels of heterogeneous technology, and the overall difficulties across the technology world in attracting employees.”

Dark Reading explores the statistics from this study, as well as a recent State of the SOC report from Exabeam, to get some understanding about what it takes to run a SOC today and some of the major challenges security teams face in getting the most out of their SOC investments.

 

Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. View Full Bio

Article source: https://www.darkreading.com/threat-intelligence/7-stats-that-show-what-it-takes-to-run-a-modern-soc/d/d-id/1335306?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Answer These 9 Questions to Determine if Your Data Is Safe

Data protection regulations are only going to grow tighter. Make sure you’re keeping the customer’s best interests in mind.

Since the EU’s General Data Protection Regulation went into effect, California and New York have passed the California Consumer Privacy Act (CCPA) and the Stop Hacks and Improve Electronic Data Security (SHIELD) Act, respectively. Another 12 states currently have data protection legislation moving toward approval, and that number is expected to grow.

As more disparate legislation is introduced across the US, what organizations must do to avoid costly regulatory fines will only become more complicated. Answer these questions, and you’ll sleep a little better at night. Those that have a plan of attack or are already executing on these guidelines should feel confident that their enterprise is keeping the customer’s best interests in mind.

● Do you incorporate “privacy and security by design” in your environment?
Privacy and security by design are methodologies based on proactively incorporating privacy and data protection from the very beginning. The approach follows seven foundational principles for embedding privacy into processes across your IT and business environments. Advocating privacy and security early in the design process for specific technologies, operations, architectures, and networks ensures you are building a mature process throughout the design life cycle.

● Is sensitive data encrypted during transit and at rest?
Encryption keys are vital to the protection of transactions and stored data. Key management should be deployed at a level commensurate with the critical function those keys serve. I strongly recommend that encryption keys be updated on a regular basis and stored separately from the data. Data is constantly being pushed and pulled, and protecting it as it moves across boundaries requires strong encryption both at rest and in transit.
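As a rough illustration of keeping keys separate from the data and rotating them, here is a minimal Python sketch using the open source cryptography library's Fernet primitives. The environment-variable key names are assumptions made for the example; a production deployment would typically pull keys from a vault or key management service.

```python
# Minimal sketch: encrypting data at rest with a key kept separate from the data.
# Assumes the third-party "cryptography" package (pip install cryptography) and that
# keys are supplied out of band (env var, vault, or KMS) rather than stored on disk
# next to the ciphertext. DATA_KEY_CURRENT / DATA_KEY_PREVIOUS are hypothetical names.
import os
from cryptography.fernet import Fernet, MultiFernet

def load_keys() -> MultiFernet:
    """Load current and previous keys so old ciphertext stays readable after rotation."""
    current = Fernet(os.environ["DATA_KEY_CURRENT"])
    previous = os.environ.get("DATA_KEY_PREVIOUS")
    keys = [current] + ([Fernet(previous)] if previous else [])
    return MultiFernet(keys)

keys = load_keys()
ciphertext = keys.encrypt(b"customer record: 000-00-0000")  # data at rest
plaintext = keys.decrypt(ciphertext)                         # authorized read
rotated = keys.rotate(ciphertext)                            # re-encrypt under the newest key
```

New keys can be minted with Fernet.generate_key(); the MultiFernet wrapper is what lets old ciphertext remain readable while everything new is written under the latest key.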

● Is access to data on a need-to-know basis?
Data should always be classified as sensitive or nonsensitive and should be accessed only by authorized employees with a legitimate business reason. Role-based permissions and “need-to-know” restrictions will help protect your data, and nonshared usernames and passwords combined with multifactor authentication will verify each user. Furthermore, an access review should be conducted at least once per year to ensure the appropriate access is granted to the right people.
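A need-to-know check can be as simple as mapping roles to the data classifications they may touch and requiring MFA before granting access. The Python sketch below is illustrative only; the role names, classifications, and logging destination are invented for the example.

```python
# Minimal sketch of a role-based, need-to-know access check with an audit trail.
# Role and classification names are hypothetical, not any product's API.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

ROLE_PERMISSIONS = {
    "billing_analyst": {"nonsensitive", "payment"},
    "support_agent":   {"nonsensitive"},
    "dpo":             {"nonsensitive", "payment", "health"},
}

def can_access(role: str, data_classification: str, mfa_verified: bool) -> bool:
    """Allow access only if the role covers the classification and MFA succeeded."""
    allowed = data_classification in ROLE_PERMISSIONS.get(role, set())
    decision = allowed and mfa_verified
    logging.info("access_check role=%s class=%s mfa=%s allowed=%s at=%s",
                 role, data_classification, mfa_verified, decision,
                 datetime.now(timezone.utc).isoformat())
    return decision

print(can_access("support_agent", "payment", mfa_verified=True))  # False: role lacks need-to-know
```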

● Do you have a disaster recovery and backup location?
Having a disaster recovery (DR) and backup environment is a must in today’s digital world. DR and business continuity (BC) plans must be in place, and all relevant personnel should be apprised of their roles. DR and BC plans should be tested on an annual basis, followed by lessons learned. Separating your production and backup locations by a few hundred miles will ensure greater data security in the event of a natural or man-made disaster.

● Are vulnerability, risk, penetration, and other audit assessments conducted?
Assessments should be conducted continuously throughout the year, focused on the information systems and operational areas within your environment. It’s important to conduct these assessments on all assets, both internal and external. Your analysis should follow five steps (a simple scoring sketch appears after the list):

  • Identify and prioritize assets
  • Identify threats
  • Identify vulnerabilities
  • Analyze controls
  • Understand the likelihood of an incident and the impact a threat could have on your systems
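The sketch below shows one hedged way those five steps can roll up into a single residual-risk figure per asset. The assets, threats, likelihoods, and dollar impacts are invented for illustration and are not benchmarks.

```python
# Illustrative sketch of the five steps above as a simple scoring pass.
assets = [
    # (asset, threat, vulnerability present?, control strength 0-1, likelihood 0-1, impact in $)
    ("customer-db",   "SQL injection",       True,  0.6, 0.30, 500_000),
    ("vpn-gateway",   "credential stuffing", True,  0.4, 0.50, 120_000),
    ("intranet-wiki", "defacement",          False, 0.8, 0.05,   5_000),
]

def residual_risk(vuln_present, control_strength, likelihood, impact):
    """Residual risk = likelihood reduced by controls, times impact; zero if no vulnerability."""
    if not vuln_present:
        return 0.0
    return likelihood * (1 - control_strength) * impact

# Rank assets by residual risk so remediation effort goes where it matters most.
for name, threat, vuln, control, likelihood, impact in sorted(
        assets, key=lambda a: -residual_risk(*a[2:])):
    print(f"{name:15s} {threat:20s} residual risk approx ${residual_risk(vuln, control, likelihood, impact):,.0f}")
```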

● Is a process in place to delete or destroy data?
Whoever is handling your data should have a data retention schedule. Building out a schedule will ensure you are deleting data within the scoped time period. After you’ve defined the retention schedule and you understand what can be deleted, follow security best practices for properly deleting and destroying data. Following industry standards from bodies such as the National Institute of Standards and Technology (NIST) will ensure your employees know how and when to destroy and delete data; any method that conforms to the NIST 800-88 guidelines for media sanitization should be approved for use.
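A retention schedule is easy to make machine-checkable. The Python sketch below flags records that have outlived their scheduled window; the record types and retention periods are assumptions for the example, and the actual destruction step should follow your NIST SP 800-88 sanitization procedure.

```python
# Minimal sketch of enforcing a data retention schedule.
# Record types and retention periods are illustrative assumptions.
from datetime import date, timedelta
from typing import Optional

RETENTION_DAYS = {
    "web_logs":        90,
    "support_tickets": 365 * 2,
    "billing_records": 365 * 7,
}

def expired(record_type: str, created: date, today: Optional[date] = None) -> bool:
    """True when a record has outlived its scheduled retention period."""
    today = today or date.today()
    return (today - created) > timedelta(days=RETENTION_DAYS[record_type])

# Example: a two-year-old web log entry is past its 90-day window and should be purged.
print(expired("web_logs", date.today() - timedelta(days=730)))  # True
```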

● Do you have an established incident response team and data breach plan?
Your enterprise should have robust incident response (IR) and data breach plans in place, and both should be tested annually. It should be the IR team’s responsibility to manage the IR process, defend against attacks and prevent further damage when an incident does occur, implement improvements that prevent attacks from recurring, and report the outcome of any security incidents.

Your internal plan should be developed based on industry leaders and cover these three phases:

  • Phase I: Detection, assessment, and triage.
  • Phase II: Containment, evidence collection, analysis and investigation, and mitigation.
  • Phase III: Remediation, recovery, and post-mortem. Notifying customers in a timely manner of a breach is mandatory, and this should be spelled out in your agreement.

● Are you logging security events?
Logging should be enabled in order to establish a sufficient audit trail for all access to sensitive data. Logging should be performed at the application level, too. Automated audit trails should be implemented to reconstruct system events, and they should be secured so they cannot be altered in any way. File integrity monitoring should be used to ensure you are maintaining the confidentiality, integrity, and availability of all customer data.
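File integrity monitoring, at its simplest, is a baseline of cryptographic hashes compared against later snapshots. The sketch below illustrates the idea in Python; the watched paths are placeholders, and a real deployment would rely on a dedicated FIM or SIEM tool feeding a tamper-resistant log store.

```python
# Minimal file integrity monitoring sketch: baseline SHA-256 hashes of sensitive files
# and report anything that changed. The watched paths are example assumptions.
import hashlib
import json
import pathlib

WATCHED = [pathlib.Path("/etc/passwd"), pathlib.Path("/var/www/app/config.yml")]

def snapshot(paths):
    """Return {path: sha256} for every watched file that exists."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in paths if p.is_file()}

def diff(baseline: dict, current: dict) -> list:
    """List files whose hash changed, appeared, or disappeared since the baseline."""
    return [p for p in set(baseline) | set(current)
            if baseline.get(p) != current.get(p)]

baseline = snapshot(WATCHED)
# ... later, from a scheduled job, compare a fresh snapshot against the baseline ...
changed = diff(baseline, snapshot(WATCHED))
print(json.dumps({"integrity_violations": changed}))
```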

● Are you keeping your privacy policy up to date?
Your enterprise needs to be up-front about the information it’s collecting. You should closely follow the latest security and privacy regulations to avoid legal issues. Your privacy policies must be available to all customers if your organization is collecting any data about them (e.g., IP addresses, location, etc.). Drafting your privacy policy should involve all major stakeholders, including the legal, marketing, and security teams.


Chad Cragle has professionally practiced IT security now for over a decade. He started as an IT security auditor and has a proven track record of leading audits and requirement-gathering efforts for several businesses and IT units. Chad is proficient in threat and … View Full Bio

Article source: https://www.darkreading.com/endpoint/answer-these-9-questions-to-determine-if-your-data-is-safe-/a/d-id/1335307?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Security Training That Keeps Up with Modern Development

Black Hat USA speakers to discuss what it will take to ‘shift knowledge left’ to build up a corps of security-savvy software engineers.

Software development is undergoing fundamental changes that are reshaping the application attack surface. Modern software teams are moving faster, and their development patterns are shifting dramatically. Developers favor assembling microservices and open source components into smaller applications, which are then bound together with more API integrations and abstraction layers than ever. This is contributing to new kinds of vulnerabilities, as well as recombined — and sometimes more toxic — versions of the same old types of vulnerabilities that application security teams have been dealing with for years.

Security experts and development professionals generally agree that the only way the industry will solve the appsec problem is through more effective security training of developers. The trouble is that even before the advent of the modern DevOps and continuous integration/continuous delivery (CI/CD) movements, developer training ranged from nonexistent to only nominally effective. Now, with things changing so quickly, the traditional appsec training regimes organizations have in place are out of step with the tools and processes engineering teams actually use.

“Traditional developer security training has never been as effective as it needs to be; take a look at the OWASP Top 10 to see that we are still suffering from the same common issues as a decade ago,” says Fletcher Heisler, CEO and founder of Hunter2, on the trouble with traditional developer security training. “We give the same generic advice in our training while development frameworks have continued to evolve.”

Heisler is teaming up with Mark Stanislav, head of security engineering for Duo Security, now part of Cisco, to present at Black Hat on August 8 on how organizations need to change in order to equip their developers with the knowledge they need to build more secure software.

“It’s often the case that generic education modalities — like recorded lectures and quizzes — are used for secure software development when that approach is fundamentally at odds with how software engineers generally learn: by writing code,” Stanislav says.

Furthermore, as Heisler suggests, the content itself is often not in line with how the developer’s fast-evolving set of tools actually work. 

“Secure code training should focus on how best to make use of modern web development tool sets,” Heisler says, explaining that, for example, most modern web frameworks include some default protections against SQL injection. “The tools are already out there, but they’re evolving rapidly, and we need to give developers practice writing modern secure code in their frameworks of choice.” 
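To make the SQL injection point concrete, here is a small example using Python's built-in sqlite3 module: the parameterized form binds user input as data, while the commented-out string-concatenation form would let the payload rewrite the query. The table, column, and input values are invented for the demo.

```python
# Hedged illustration of "default protections": parameterized queries keep
# attacker-supplied input out of the SQL text; string concatenation invites injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"          # hostile input

# Vulnerable pattern: input spliced directly into the query string.
# rows = conn.execute(f"SELECT email FROM users WHERE name = '{user_input}'").fetchall()

# Safe pattern: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the injection payload matches no user
```

Most modern web frameworks and ORMs produce the safe form by default, which is exactly the kind of framework-specific behavior Heisler argues training should cover.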

On the flip side, while many newer developer tools and methods do add opportunities for enhanced security, others do not come with prebuilt security guardrails, leading developers to sometimes assume something is secure when it is not. That means security teams and engineering leads have a responsibility to stay on top of the new risks and communicate them.

“Modern application development is adding more abstraction to what a developer writes, to what it does,” Stanislav says. “Engineers often believe a feature provides native security where it does not, leading to an assumption of creating safe code when in practice, it is vulnerable.” 

Similarly, they should be providing enough of a security baseline education that developers can start sniffing out risks on their own, Stanislav explains.

“Educating software engineers at deeper levels will allow them to use their expertise to better discern where risk may exist, enabling them to be more defensive in development and lead to empowered work that yields better net security results,” he says.

In order to get there, Stanislav and Heisler will suggest that security professionals tasked with developer training start to question the content and teaching methods of the past. This skeptical view should dig deep — perhaps even going so far as to question the industry’s reliance on OWASP Top 10 as a teaching aid and safety checklist.

“The OWASP Top 10 serves a distinct purpose, which is to provide a two- to three-year lagging indicator of web application security. In practice, however, much of the industry has co-opted this resource as a de facto standard and often checklist of what issues must be fixed,” Stanislav explains. “That has led to a reduction in awareness of the many vulnerability types not explicitly highlighted by the OWASP Top 10.”

In order to help the community improve the way it trains developers, Stanislav and Heisler will be releasing at their talk a free training platform meant to be shared among development teams that includes interactive training labs designed to help engineers practice exploiting and patching up modern web applications in their framework of choice. 

“Deploying new code five times a day means you can’t wait for an annual training on outdated topics. Instead, we need to be sharing the latest, up-to-the-day information on best practices and secure coding techniques,” Heisler says. “That is what we hope to accomplish in releasing an open platform where teams can easily share their latest findings in interactive labs.”


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.  View Full Bio

Article source: https://www.darkreading.com/application-security/security-training-that-keeps-up-with-modern-development/d/d-id/1335343?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Johannesburg Ransomware Attack Leaves Residents in the Dark

The virus affected the network, applications, and databases at City Power, which delivers electricity to the South African financial hub.

Johannesburg’s City Power, the municipal entity delivering power to the South African financial hub, was hit with a ransomware attack that encrypted its network, databases, and applications.

The attack struck Thursday morning and prevented residents from buying electricity, uploading invoices, or accessing the City Power website. Officials said it also affected response time to logged calls, as some of the internal systems to dispatch and order material were slowed down.

“Ransomware virus is known globally to be operated by syndicates seeking to solicit money,” the City of Joburg tweeted after the attack. “We want to assure residents of Johannesburg that City Power systems were able to proactively intercept this and managed to deal with it quickly.” The city, which owns City Power, notes there was no personal data compromised in the attack.

Johannesburg implemented temporary measures to help those affected. Suppliers seeking to submit invoices were told to bring them to City Power offices; customers were asked to log calls on their cellphones using the mobile site, as they couldn’t access the utility’s website. Residents called a local radio station to say the attack had left them without power, Reuters reports.

At the time of the attack, City Power spokesperson Isaac Mangena said to News24 that cold weather could lead to unplanned outages, as the electrical system overloads with higher demand. Plans were in place to deal with unplanned outages, he added; City Power had sent more technicians to regions of the city where unplanned, repeated outages frequently occur.

City Power and Johannesburg officials have been regularly posting updates to both entities’ Twitter accounts; the City of Joburg most recently reported most of the IT applications and network affected by the attack “have been cleaned up and restored.”

Johannesburg joins a growing number of cities targeted with ransomware as criminals take aim at municipalities around the world. Other victims include Baltimore, Atlanta, and Riviera Beach, Florida. While security experts typically recommend not paying ransom — and US mayors have committed to follow their advice — unprepared victims may have no choice. Riviera Beach recently paid $600,000 to its attackers, a decision that could potentially have “far-reaching consequences,” said Ilia Kolochenko, founder and CEO of security company ImmuniWeb.

Kolochenko anticipates attacks like these will continue. “Cities, and especially their infrastructure sites, are usually a low-hanging fruit for unscrupulous cyber gangs,” he says. “These victims will almost inevitably pay the ransom as all other avenues are either unreliable or too expensive.” What’s more, he adds, cryptocurrencies can’t be traced back to the attackers; as a result, most get away with it.

Cybercriminals are taking the time to profile and target entities that are more likely to pay more money, says Matt Walmsley, Vectra’s director of EMEA. City Power was an appealing target: The broad scope of disruption to its databases and other software, affecting most of its applications and networks, suggests the ransomware was able to quickly spread throughout the organization.

“The disruption to their services, as well as consumer backlash, will further compound the pressure on City Power’s IT and security teams to rapidly restore systems to a known good condition from backups, or chance of paying the ransom,” Walmsley explains.

Kolochenko also notes the risk of dangerous ransomware attacks will grow unless governments develop and enforce security regulations to protect their cities. Humans feel very real effects of ransomware in incidents like these: Following the City Power attack, Twitter posts reflected the struggles of individuals and families who found themselves without power. Future incidents could affect airports, for example, and other components of critical infrastructure.


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance Technology, where she covered financial … View Full Bio

Article source: https://www.darkreading.com/endpoint/johannesburg-ransomware-attack-leaves-residents-in-the-dark/d/d-id/1335344?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

You can probably be identified from your anonymized data

If you thought that removing identifying information from a database of sensitive personal records was enough to retain privacy, it’s time to think again. A study published this week asserts that it’s even easier to re-identify information than we first thought.

The idea of de-identifying (anonymizing) data has been around for a while. It involves removing sensitive information like names and exact addresses from databases so that you can still analyse the data without identifying specific people. Article 28 of the EU’s GDPR recommends it as a way of reducing the risk to sensitive records.

The study, released in Nature Communications, calls all that into question. Its authors at the Université catholique de Louvain (Belgium) and at Imperial College London (UK) say that it’s easy to re-identify a high percentage of people in de-identified data sets.

Furthermore, the researchers challenge a key assumption among organizations that de-identify data, which is that releasing a subset of a data sample makes it much harder to re-identify data with confidence.

The conventional wisdom goes like this: Let’s say you’re an organization in charge of people’s sensitive data. You want to make that data public so that crowdsourced researchers can crunch the numbers and find patterns in it, but you want to stay compliant with privacy rules.

So, you release only a small sample of a large data set – say, 1,000 of 100,000 people. The data contains a postal code, birth date, and the results of a cancer treatment.

An employer might search that data set and find just one record matching one of its own employees. “Aha!” they would say. “Now we know that our employee has been getting cancer treatment. So much for your privacy!”

You’d counter that there might be other people with the same birth date and postcode in the rest of the data that you hadn’t released. This gives you plausible deniability: the employer can’t be sure that the person in the data set is John Smith.

According to the researchers’ paper, that’s no longer true:

Our paper shows how the likelihood of a specific individual to have been correctly re-identified can be estimated with high accuracy even when the anonymized dataset is heavily incomplete.

The reason is that the more pieces of individual information a data set contains about you (say, the number of people you live with, the colour of the car you drive or whether you have a pet) the less likely it is that there’s another person with those characteristics. Gather enough pieces of information, and it turns out that you’re a uniquely special flower after all. They said:

Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes.

The researchers wrote a machine learning program and trained it on incomplete data sets to test their theory out. They used 210 demographic and survey data sets, and were able to identify people with a high degree of confidence, even in subsets representing just 1% of the data or less.
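To see the intuition behind that uniqueness argument, here is a toy Python sketch, not the researchers' model, that builds a synthetic population and counts how many people share a target's attribute values as more attributes are combined. The attribute choices and population size are assumptions purely for illustration.

```python
# Toy sketch of why plausible deniability shrinks as attributes pile up.
import random
random.seed(1)

population = [
    {"zip": random.choice(range(100)),            # ~100 postal codes
     "birth_year": random.choice(range(1950, 2005)),
     "gender": random.choice("MF"),
     "cars": random.choice(range(4)),
     "pets": random.choice([True, False])}
    for _ in range(100_000)
]

target = population[0]

def matches(attrs):
    """How many people share the target's values on the given attributes?"""
    return sum(all(p[a] == target[a] for a in attrs) for p in population)

for attrs in (["zip"],
              ["zip", "birth_year"],
              ["zip", "birth_year", "gender"],
              ["zip", "birth_year", "gender", "cars", "pets"]):
    print(len(attrs), "attributes ->", matches(attrs), "matching people")
```

With one attribute, hundreds of people match; by the time a handful of attributes are combined, the candidate pool typically collapses to one, which is the effect the paper quantifies at scale.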

The result led them to question the whole de-identification concept:

Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.

We’ve known for a while that people with good statistical chops can re-identify anonymized data sets. For example, Netflix released a de-identified data set of its users’ viewing habits in 2006, and researcher Arvind Narayanan then identified people in it.

What this latest research proves is that it’s even easier than we thought to reconstruct people’s identities, even when only a tiny subset of the data is released. When it comes to de-identification, it suggests that it might be time to go back to the drawing board.

The researchers have created an online tool that lets you check to see how identifiable you might be given your own characteristics.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/K0vDv3n7Irg/

New York City moves to protect citizens’ location data

New York City is considering a law that could stop cellphone carriers and smartphone app vendors from selling their location data.

The bill would ban anyone from collecting and sharing location data from mobile phones in the city, imposing harsh penalties for violators. It says:

It is unlawful for a mobile application developer or a telecommunications carrier to share a customer’s location data where such location data was collected while the customer’s mobile communications device were physically present in the city.

Anyone violating the law would face fines of up to $1,000 for each user’s data, up to $10,000 per day.

Sales of location data are rife in the US. It comes from at least two sources: apps that gather it from the phone, and wireless carriers who continuously monitor phones’ locations. Both have been found sharing the location information with data aggregation companies who can then make it available to third parties.

In December, the New York Times reported on a database of a million New Yorkers’ phones that updated their location as frequently as every two seconds. At least 75 companies received precise location data from apps whose users enable location services on their phones to get location-specific information like weather reports, it said.

The Times tested 20 popular apps and found that 17 of them shared location data with third parties. Only four of them told users during the permissions process that their data could be used for advertising.

That’s not all such data might be used for. A month later, Motherboard reporter Joseph Cox explained how he had purchased data on a phone’s current location from a shady dealer (with the target’s consent). He reported that some companies selling this data aren’t that worried about who they sell it to or how it’s used.

Another study from Guardian Firewall found dozens of iOS apps that slurped location histories from the phones and sent them to data monetisation firms. In many cases, those apps also sent ongoing location data.

Many of these apps bury detail about their data sharing in lengthy privacy policies or license agreements.

The problem has privacy advocates concerned. Last week, the Electronic Frontier Foundation (EFF) sued AT&T and two data aggregators on behalf of Californian customers to stop them from letting companies access user location data. In a statement responding to New York City’s proposed law, it told us:

We’re glad to see proposals that would give users, not tech companies or app makers, control over who gets to see their location data. Users need strong privacy protection laws at the local, state, and national level to prevent the sharing and sale of their data without consent.

State law that would curb companies’ ability to sell personal data is already underway. The New York Privacy Act, proposed in May by state senator Kevin Thomas, would allow people to personally sue companies that mishandle their data. However, the law is only a proposal right now, and a similar clause was removed from the California Consumer Privacy Act before it reached the governor’s desk.

A spokesperson for New York City Council said that it didn’t want to wait for state legislators:

The bill failed to pass this session and therefore it is not yet law and we have no idea as to when it might become law. We have jurisdiction within the City of New York and so we will proceed with our bill.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/EEMeIRvhNwI/

Trends

How to Create Smarter Risk Assessments

Executives and directors need quantitative measurements – such as likelihood of loss and hard-dollar financial impact – to make more informed decisions about security risks.

You wouldn’t set foot in Sweden and start speaking Swahili — so why would you use the language of bits and bytes in a boardroom full of executives to discuss cyber-risk?

Like anywhere, CISOs and security professionals have to learn (and master) the language of the C-suite. And where risk is concerned, just presenting directors with a qualitative tool like a heat map to depict the organization’s current cyber-risk isn’t going to cut it anymore. The nature of digital business, not to mention unrelenting headlines of hacks, ransomware, and phishing incidents, has sensitized executives beyond the security basics of malware and firewalls.

“It used to be, ‘Tell us how bad it is,’ but now it’s more a case of, ‘We’re giving you money … we need to know what we’re getting in return,'” says Nick Sanna, CEO of RiskLens, a risk management software vendor.

Sanna adds that directors and executives face more requests to assess risk in financial terms, including from the Securities and Exchange Commission.

Because qualitative measures won’t cut it like they used to (so long, traffic signal graphics!), organizations are either embracing or being pushed toward measuring risk along two axes: likelihood and potential impact. These are the two essential metrics for any risk calculation, cyber or otherwise.

By moving from qualitative to quantitative risk assessment, the organization also helps itself create a guide for action. “How much risk do we have? Are we doing too much or too little? What does it take for us to stay out of trouble? These are basic questions, but they are the things you want to know as a business owner,” Sanna explains.

Risk management that relies on likelihood and financial impact should lead organizations and their stewards to better decision-making, Sanna adds.
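As a concrete illustration of that likelihood-times-impact arithmetic, the sketch below computes annualized loss expectancy (ALE = annual rate of occurrence x single loss expectancy) for a few invented scenarios; the figures are placeholders, not benchmarks from any framework.

```python
# Hedged sketch of the likelihood-times-impact math behind quantitative risk.
# Scenario names and dollar figures are made up for illustration.
scenarios = [
    # (scenario, expected incidents per year, loss per incident in $)
    ("ransomware outage",        0.2, 2_000_000),
    ("phishing-led wire fraud",  1.5,    80_000),
    ("lost or stolen laptop",    4.0,    12_000),
]

for name, aro, sle in scenarios:
    ale = aro * sle
    print(f"{name:25s} ALE = ${ale:,.0f} per year")

# A control costing $150,000/year that halves the ransomware likelihood saves
# 0.1 * $2,000,000 = $200,000 in expected annual loss -- a defensible spend in board terms.
```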

And for large organizations and Fortune 500 companies, it’s likely they’re also tracking other types of risk (strategic, reputational, legal) within the organization. So tying in other risk measurements with cyber-risk makes good sense, if only to have everyone using similar models, methods, and/or lexicon for risk management, according to Fred Kwong, CISO for Delta Dental Plans Association.

Kwong looks at risk management through a slightly different filter, using three categories to help measure the organization’s cyber-risk: operational risk (availability of systems), risk to the organization’s data, and reputation risk, also known as risk to the brand.

Kwong points to other risk criteria that peers and colleagues use. Perhaps best known among these are the NIST risk management resources, cited by many as a basic compliance checklist. There’s also the Center for Internet Security’s Risk Assessment Methodology (RAM), created by Halock Security Labs. And generating consistent buzz is the risk framework from the Factor Analysis of Information Risk Institute (FAIR), which, by most accounts, comes closest to delivering on the quantitative risk approach advocated by Kwong and Sanna (who’s also president of the FAIR Institute).

“All these models boil down to what the risk is to the organization,” Kwong says. “They also help us with how to track and measure that risk so our leaders have the data points they need to make the best decisions” about managing that risk, he adds.

Kwong cautions against equating compliance with risk mitigation – think Payment Card Industry Data Security Standard (PCI DSS), Health Insurance Portability and Accountability Act (HIPAA), or the Federal Information Security Management Act (FISMA), for example.

“Many risk mitigation plans are built against HIPAA standards, but that’s not an answer to the risk question,” he explains. “These frameworks may help mitigate risk, but they don’t really manage risk or measure impact and likelihood.”

As most security professionals know, raising the spectre of noncompliance has been a great way to get funding for a pet project.

“No one wants to hear they’re going to get fined by regulators or not considered trustworthy,” Kwong says. But there’s more work involved in risk management than simply being PCI-compliant, he adds.

(Image: jozefmicic via Adobe Stock)

 

Terry Sweeney is a Los Angeles-based writer and editor who has covered technology, networking, and security for more than 20 years. He was part of the team that started Dark Reading and has been a contributor to The Washington Post, Crain’s New York Business, Red Herring, … View Full Bio

Article source: https://www.darkreading.com/edge/theedge/how-to-create-smarter-risk-assessments/b/d-id/1335179?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Android Malware ‘Triada’ Most Active on Telco Networks

Google in May disclosed that several Android devices had been shipped pre-installed with the RAT.

New research into the impact of Triada, a sophisticated remote access Trojan that was recently found pre-installed on numerous Android devices, has shown that more than 15% of telecom companies globally have infected devices running on their network.

Security ratings firm BitSight, which has been tracking the malware since May, recently gathered telemetry from command-and-control domains that Triada-infected devices have been communicating with.

BitSight also looked for telemetry related to PrizeRAT – a remote access Trojan that like Triada was found pre-installed on some low-cost Android devices last year – and for data pertaining to Ztorg, a lightweight Trojan that in the past has been associated with Triada.

The research showed that all three malware strains impact networks in the telecommunications sector far more than any other industry. BitSight’s data suggested that more than 25% of telecommunications networks have PrizeRAT-infected devices running on them, more than 20% have Ztorg-infected devices, and 15.5% have Android systems with Triada installed. In many cases, the devices were infected via shifty applications downloaded from unapproved third-party Android app stores.

BitSight found devices infected with at least one of the three mobile malware strains are present on other networks as well, though on a much smaller scale.

More than 5% of education networks, for instance, contain Android devices that are infected with PrizeRAT, and about 0.6% of them have Triada-infected Android phones and tablets. BitSight found that Ztorg had a stronger presence on networks belonging to companies in the utilities, government, and retail sectors — with 1.27%, 1.4%, and 0.79% of organizations impacted, respectively.

The reason for the relatively heavy presence of infected systems on telecom and education sector networks is likely because many of the devices belong to consumers and students. “The telecommunications and education industry sectors often do not enforce security controls on devices communicating through their networks since they primarily offer transit services to their customers, or students in the case of the latter,” says Dan Dahlberg, a security researcher at BitSight.

Devices infected with Triada and PrizeRAT include those running Android 9.0 and other recent versions of the operating system. Ztorg has generally infected devices running Android 5.1.1 and older, according to the company.

Tiago Pereira, a security researcher at BitSight, says these malware strains pose a risk to organizations that are storing and processing data on mobile devices. “These malware families place the data on those devices at risk and may give the same level of access as the device owner to corporate assets,” he says. “Organizations should consider this as a real attack vector and deploy countermeasures to it.”

A Sophisticated Threat

Google in May disclosed that it had learned of Triada being pre-installed on some Android devices via a system image backdoor. According to Google, a company named Yehuo or Blazefire likely supplied the infected system image to some Android equipment manufacturers during the production process in a supply chain attack.

Researchers have known about Triada for some time. In a July 2016 blog, Kaspersky Lab described the mobile malware as one of the most sophisticated of its kind the company had encountered. Triada works by modifying Zygote, a core process in the Android OS, so that it becomes part of every application on the infected device, the company noted. At the time, Symantec said Triada had the potential to impact a high number of Android devices.

As far back as July 2017, antivirus firm Dr. Web had warned about Triada being built into the firmware of several Android devices. Dr. Web, Google, and others have described Triada as capable of penetrating processes on all running Android applications and of downloading and running additional malicious payloads.

Initially at least, the malware’s purpose was to install apps for displaying ads on an infected device for ad fraud purposes. But Triada is modular and can be easily repurposed for other malicious purposes, the vendors have warned. The only way to get rid of it from systems on which it is pre-installed is to upgrade the firmware.

Ztorg, meanwhile, is a lightweight Trojan that attackers have been using to download Triada onto Android devices. The relatively high presence of Ztorg on utility, government, and retail networks could spell potential trouble for organizations in those sectors.

According to BitSight, most organizations that have been impacted by Triada so far have a security rating of 400 or lower, meaning their security practices lag well behind industry standards. Metrics that are used to calculate an organization’s risk scores include the currency of operating systems and browsers, and the number of services that are exposed externally on corporate networks.

“Previous studies we’ve done also demonstrated that organizations with a rating of 400 or lower were five times more likely to experience a publicly disclosed data breach than companies with a 700 or higher,” Dahlberg says.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year … View Full Bio

Article source: https://www.darkreading.com/mobile/android-malware-triada-most-active-on-telco-networks/d/d-id/1335337?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Sanctions-hit Russian developers fingered for crafting ‘Monokle’ Android snoopware

A Russian software developer, currently under American sanctions for its purported role in the Kremlin’s interference with the 2016 US elections, is now selling spyware to governments.

Researchers at security house Lookout today reported [PDF] that St Petersburg-based Special Technology Centre (STC) is developing and maintaining a commercial spyware tool known as Monokle.

STC was among the Russian businesses hit last year with US economic sanctions for their supporting role in the GRU’s election-meddling efforts. The Russian tech firm is said to have its hands in a number of fields, including UAVs, radio equipment and, now, surveillance software.

Monokle targets Android devices by being added as a hidden payload in seemingly legitimate apps like Google Play or Skype. Most recently, Lookout says, the malware has used the names and icons of apps popular in Syria and the Caucasus regions in an effort to keep an eye on groups in those areas.

Once installed, Monokle launches a wide range of surveillance tools, including remote-access backdoors, certificate installers to allow for man-in-the-middle attacks, and functions that gather the personal data of the target.

“While most of its functionality is typical of a mobile surveillanceware, Monokle is unique in that it uses existing methods in novel ways in order to be extremely effective at data exfiltration, even without root access,” Lookout notes.

“Among other things, Monokle makes extensive use of the Android accessibility services to exfiltrate data from third party applications and uses predictive-text dictionaries to get a sense of the topics of interest to a target. Monokle will also attempt to record the screen during a screen unlock event so as to compromise a user’s PIN, pattern or password.”


Monokle is also just the tip of the iceberg for STC’s mobile operations. Lookout’s team said it found evidence that an iOS version of the spyware is in development along with a set of security tools STC is pitching to governments alongside the spyware.

“According to our research, although STC has never publicly marketed their Android security suite, it is clear that STC is producing this software and that it is intended for government customers,” Lookout said.

“Multiple Android developer positions have been advertised by STC on popular Russian job search sites in St Petersburg and Moscow. The positions require both Android and iOS experience and advertise working on a native antivirus solution for Android.”

The dual offerings make sense enough. Who better to protect you from targeted, government-backed malware than a company that develops it on the side? ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/07/24/monokle_android_snoopware/