
The ‘Twitterverse’ Is Not the Security Community

The drama on social media belies the incredible role models and the job, training, and networking opportunities found in the real world of traditional cybersecurity.

When you look at social media, you get the impression that there is a serious shortage of security professionals — and at the same time, you can’t imagine why anyone would want to work in cybersecurity in the first place. Judging by Twitter, it seems as if security professionals are frequently at each other’s throats and, with few prominent role models, it’s a terrible place for women. All you have to do is look at last year’s DerbyCon controversy for proof of how dysfunctional the community can appear online.

But here’s a not-so-secret secret: The large majority of security professionals don’t have time for the drama on social media, and social media is not representative of the profession as a whole.

The security professionals who post on Twitter frequently make valuable contributions to the security community as a whole. However, they are not representative of their million or so industry colleagues who go to work and then go back to their families and real life when work is over, and rarely get on social media.

These people don’t necessarily go to hacker cons; they do their security-related jobs and then leave their work in the office. They work for government agencies such as the NSA, Cyber Command, DISA, the FBI, military commands, NIST, etc. Then there are government contractors, the security teams at major banks made up of hundreds and sometimes thousands of people, professionals who work at the Big Four consulting firms with thousands of security professionals each, and thousands of people who perform security work at telecommunications companies such as Verizon and AT&T, on top of the hundreds of thousands of people working in other government agencies around the world. Tech companies such as Cisco, IBM, HP, and others have thousands of security employees as well. I’d guess relatively few of these people are active on social media.

The reason I want to highlight the difference is that I recently read a post from a woman on Twitter who said she saw no comparable female role models in the industry who would be making $200,000 per year. I saw one or two people mention someone, but what surprised me was the complete void in recognition of some of the most notable women in the profession — women who are not active on Twitter but active within the security community as a whole.

Quick examples that come immediately to mind include:

  • Myrna Soto, former CISO, Comcast, and now a venture capitalist
  • Renee Guttmann, CISO, Campbell Soup Company, and previously Coca-Cola, Time Warner, and Royal Caribbean
  • Dawn Cappelli, CISO, Rockwell Automation
  • Jennifer Minella, Chair, (ISC)2 Board of Directors, and VP, Carolina Advanced Digital
  • Terry Grafenstine, former Chair, ISACA Board of Directors and Managing Director at Deloitte
  • Shelley Westman, Partner, EY
  • Mischel Kwon, CEO, MKACyber, and former director, US-CERT
  • Sandra Toms, General Manager, RSA Conference
  • Illena Armstrong, VP, SC Media
  • Rhonda MacLean, former CISO, Bank of America and Barclays, and board member to several cybersecurity companies
  • Dr. Chenxi Wang, venture capitalist
  • Dr. Cynthia Irvine, Distinguished Professor at the Naval Postgraduate School, who trained a generation of military cybersecurity professionals
  • Dr. Dorothy Denning, an icon of the field, who people refer to as the “Mother of Computer Security”

To be sure, women such as Katie Moussouris and Parisa Tabriz, accomplished and active in both social media and in the larger community, serve as outstanding role models. Yet even with the visibility of these prominent women, social media represents only a subset of the cyber universe. Furthermore, what people who rely solely on Twitter as their window to cybersecurity are missing is not just incredible role models, but also job opportunities, training opportunities, professional networking opportunities, and volunteer opportunities.

Don’t get me wrong. I personally get a great deal out of social media and have developed many friendships with people I know only through that forum. But I also remain active in the mainstream, including traditional conferences; local ISSA, (ISC)2, ISACA, and InfraGard chapter meetings; vendor events; and the online forums associated with these groups. The bottom line is that it is to everyone’s benefit that people within the hacker and traditional cybersecurity communities look beyond their own echo chambers as much as possible.


Ira Winkler is president of Secure Mentem and author of Advanced Persistent Security.

Article source: https://www.darkreading.com/careers-and-people/the-twitterverse-is-not-the-security-community-/a/d-id/1334260?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

GAO Finds Deficiencies in Systems for Handling National Debt

IT systems at the Bureau of the Fiscal Service and the Federal Reserve Bank show vulnerabilities that could leave them open to exploitation and breach.

Two related reports from the Government Accountability Office (GAO) point out significant issues with the IT systems involved in managing and servicing more than $22 trillion in federal debt. The reports — one a GAO financial audit of the Bureau of the Fiscal Service, and the other a management report on the Federal Reserve — each conclude that there are problems with IT configuration and control systems. And while those problems have not yet resulted in system breaches, according to the reports, they’re worthy of immediate attention and remediation.

In the financial audit of the Bureau of the Fiscal Service, the auditors underscore the importance of a significant deficiency in information system controls. “These general control deficiencies increase the risk of unauthorized access to, modification of, or disclosure of sensitive data and programs and disruption of critical operations,” they say. In particular, the audit cites the lack of least-privilege access as a security concern for these systems.

The audit also lists broad suggestions for remediation, though it doesn’t express much optimism that those steps will be implemented. “We continued to identify instances in which known information system vulnerabilities were not being remediated on a timely basis. We also continued to identify instances in which implemented configuration settings were not effectively monitored against baseline security requirements,” the auditors say.

Federal Reserve Banks are charged with implementing many of the practical steps of dealing with the national debt. Although the GAO didn’t audit the banks, it did conduct a management review looking at issues similar to those raised in the Bureau of the Fiscal Service audit. The GAO issued a pair of reports, one limited to staff and the board of governors, and the other a high-level public report.

The public management report points out deficiencies in configuration management. “These new and continuing control deficiencies increase the risk of unauthorized access to, modification of, or disclosure of sensitive data and programs,” it says.

Though the deficiencies have not resulted in breaches, the response has not come through improvements to the technology and its use. Rather, the GAO found the deficiencies were “mitigated primarily by Fiscal Service’s compensating management and reconciliation controls to detect potential misstatements of the Schedule of Federal Debt.”

Each report points out that some of the issues identified in previous audits and reports have been mitigated, while others remain to be dealt with.

Read more here and here.

 

 


Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/vulnerabilities---threats/gao-finds-deficiencies-in-systems-for-handling-national-debt/d/d-id/1334266?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Threat Hunting 101: Not Mission Impossible for the Resource-Challenged

How small and medium-sized businesses can leverage native features of the operating system and freely available, high-quality hunting resources to overcome financial limitations.

Threat hunting is considered an essential part of modern cybersecurity operations. There are numerous benefits to this type of activity, such as the proactive identification of threat actors in your environment, a potential reduction in the dwell time of adversaries in your network, and the identification and resolution of benign but significant issues that can improve overall enterprise IT operations.

For resource-challenged organizations, threat hunting is often deemed mission impossible. This is not true. The benefits of threat hunting can be realized by deploying a methodical approach.

The first consideration before implementing threat hunting is the maturity of your security operations. While the thought of moving from a passive, detection-oriented posture to active hunting is an exciting prospect for many teams of defenders, many organizations would likely be better off first improving their configuration management, patch management, and vulnerability management efforts, which are often challenging even for organizations with dedicated teams in those functional areas.

These important processes are typically less mature in smaller organizations. As a result, small or resource-constrained security teams would be well served to review their operational maturity against available information security guidance, such as the Center for Internet Security Critical Security Controls (CSC). The CSC helps organizations prioritize security control implementation, and while threat hunting is one of the items in the CSC, it is recommended only after organizations have achieved a specific level of security operations maturity.

Onward to Threat Hunting
After demonstrating effective asset management, patch management, and vulnerability management, an organization is better positioned to spin up threat-hunting efforts. Looking at the IT systems of a typical small or medium-sized organization, we often find Windows desktops in an Active Directory domain. It is also likely that there is some sort of cloud presence, most likely in the software-as-a-service area. It’s in this type of environment that we can explore how some native capabilities can be leveraged to support threat-hunting efforts.

The lifeblood of threat hunting is security data. This will typically be in the form of logs, whether network-related data such as firewall or web proxy logs, or endpoint data from the operating system or the endpoint security suite. To hunt effectively, data from client endpoints must be collected centrally. Given the prevalence of client-side attacks such as phishing, and the observed pattern of attackers using compromised clients to persist and move around the environment, the critical data related to adversary activity will often exist only in the endpoint logs.

While the collection of endpoint logs may conjure visions of expensive SIEM solutions or yet another agent to be deployed and managed, this data collection problem has a straightforward solution. A native Windows feature, Windows Event Forwarding, can be leveraged to solve the problem of endpoint log collection, at a cost that will make management happy: zero. Managed via Group Policy, endpoints can be configured to push event data to a central collector server. For many small organizations, a single server, or even an existing low-activity server, would be suitable for this data collection role.

Once this data from endpoints is centrally collected, the hunting can begin. Ideally, this data would be further consolidated into an existing SIEM or log aggregation solution to facilitate searching and correlation. However, if that option does not exist, there are projects that provide PowerShell scripts to perform analysis of this forwarded log data. One example is the DeepBlueCLI project authored by Eric Conrad, freely available on his GitHub profile. This series of scripts provides a basic set of threat-hunting capabilities by looking for evidence of malicious behavior in the endpoint log data.
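
To make that concrete, here is a minimal, hypothetical sketch of a first hunting pass over centrally collected endpoint events. It is not DeepBlueCLI; it simply assumes the forwarded events have been exported to a single XML file and flags process-creation events (Event ID 4688) whose command lines contain strings commonly seen in malicious PowerShell activity. The file name, field names, and keyword list are illustrative assumptions, and Python stands in here for the PowerShell tooling most Windows shops would actually use.

    # Minimal threat-hunting sketch: flag suspicious process-creation events in
    # Windows event data exported to XML from a central collector. Illustrative only.
    import xml.etree.ElementTree as ET

    # Namespace used by Windows event XML
    NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

    # Command-line fragments that often show up in malicious PowerShell use
    SUSPICIOUS = ["-encodedcommand", "downloadstring", "invoke-expression",
                  "frombase64string"]

    def hunt(xml_path):
        """Return (computer, command line) pairs for suspicious process creations."""
        findings = []
        root = ET.parse(xml_path).getroot()   # assumes events were exported under one root element
        for event in root.findall("e:Event", NS):
            event_id = event.findtext("e:System/e:EventID", default="", namespaces=NS)
            if event_id != "4688":            # 4688 = process creation (command-line auditing enabled)
                continue
            computer = event.findtext("e:System/e:Computer", default="?", namespaces=NS)
            data = {d.get("Name"): (d.text or "")
                    for d in event.findall("e:EventData/e:Data", NS)}
            cmdline = data.get("CommandLine", "").lower()
            if any(marker in cmdline for marker in SUSPICIOUS):
                findings.append((computer, cmdline))
        return findings

    if __name__ == "__main__":
        for host, cmd in hunt("forwarded_events.xml"):   # hypothetical export file
            print(f"[!] {host}: {cmd}")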

Log Data and Other Indicators
If the capability does exist to roll this data into some sort of log correlation tool, then the flexibility to hunt for different indicators expands greatly. A resource such as the MITRE ATT&CK framework can be used to look for specific elements from various stages of the attack life cycle, providing a wealth of indicators to use as the basis of the hunting process. From an endpoint perspective, this methodology provides a ready way to get started with endpoint hunting in Windows event logs. However, Windows event logs are not the only source of data that can support threat-hunting efforts.

Network-based data is another essential source that can be used to find indications of adversary activity in your environment. Data such as NetFlow, firewall logs, proxy logs, DNS logs, and DHCP logs can all play a role in threat hunting. Collecting these log types may be more difficult in some environments because of access issues, the performance overhead associated with generating the logs, and limits on the ability to analyze each data source effectively. Even so, these sources can provide a wealth of information that can reveal signs of threat actors, such as unusual user-agent strings, unusual data volumes or destination IP addresses, unusual client-to-client data flows for specific protocols such as SMB, odd hostnames or MAC addresses in DHCP leases, and indications of domain generation algorithms (DGAs) in certificate names and URLs. These are only a few of the indicators that can be used for threat hunting, but they can be highly effective components of a successful hunt.
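
To illustrate the last of those indicators, the sketch below scores DNS query names with two rough heuristics, character entropy and label length, that often separate DGA-style domains from human-chosen ones. The thresholds, allowlist, and sample queries are assumptions for the example; a real hunt would tune them against local baseline traffic.

    # Rough DGA-hunting heuristic over a list of DNS query names.
    # High Shannon entropy plus a long second-level label is a common
    # (imperfect) signal of algorithmically generated domains.
    import math
    from collections import Counter

    ENTROPY_THRESHOLD = 3.5   # assumed cutoff; tune against your own baseline
    LENGTH_THRESHOLD = 15     # assumed minimum label length worth flagging
    ALLOWLIST = {"amazonaws.com", "akamaiedge.net"}  # example known-good suffixes

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        total = len(s)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_like_dga(domain: str) -> bool:
        if any(domain.endswith(suffix) for suffix in ALLOWLIST):
            return False
        labels = domain.lower().rstrip(".").split(".")
        if len(labels) < 2:
            return False
        sld = labels[-2]   # second-level label, e.g. "xkq7h2..." in xkq7h2....net
        return len(sld) >= LENGTH_THRESHOLD and shannon_entropy(sld) >= ENTROPY_THRESHOLD

    # Example usage: feed in query names pulled from DNS logs
    queries = ["www.example.com", "xkq7h2f9zpl3m8vtw1.net", "mail.google.com"]
    for q in queries:
        if looks_like_dga(q):
            print(f"[?] possible DGA domain: {q}")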

Threat hunting can be effectively performed in smaller or resource-constrained environments. Leveraging native features of the operating system and using freely available, high-quality hunting resources can overcome financial limitations. Staffing constraints may be more difficult to address, but prioritization of security tasks based upon the highest value that they offer to the organization will drive the time that can be allocated to threat hunting.

If you wish to learn more about threat hunting, David Mashburn will be giving a talk on this subject, “Hunting Highs and Lows: Misadventures in Threat Hunting,” at SANS Pittsburgh in July, or you can research these concepts online.


David Mashburn is a SANS Certified instructor and an IT Security Manager for a global non-profit organization in the Washington, D.C. area. He has experience working as an IT security professional for several civilian federal agencies, and over 15 years of experience in IT. …

Article source: https://www.darkreading.com/vulnerabilities---threats/threat-hunting-101-not-mission-impossible-for-the-resource-challenged/a/d-id/1334252?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

DragonEx exchange hacked, smoking ashes being raked over

The DragonEx cryptocurrency exchange announced that it was hacked in the small hours of Sunday morning.

It’s managed to retrieve some of its customers’ funds; it’s got the address for a Bittrex account that gobbled up at least some of the loot; it’s asking for its “fellow exchange” to freeze that account; it’s got cyber-cops from Estonia, Thailand, Singapore and Hong Kong on the case; and so please, everybody, just go away for a week and stop clamoring for your money back.

We don’t know how much is gone, but we swear, we’ll make good on this, said the DragonEx team – and the team at every other looted exchange ever, except, that is, for the exchange that promised (almost) nothing when it exit-scammed.

In its official Telegram account, DragonEx promised:

For the loss caused to our users, DragonEx will take the responsibility no matter what.

DragonEx first took its platform offline on Sunday (apparently at the time it was first discovering the breach) saying that it was upgrading its system. Later that day, it announced that it was “still working on system maintenance,” before finally disclosing on Monday that it had been hacked. From Monday’s Telegram announcement:

Part of the assets were retrieved back, and we will do our best to retrieve back the rest of stolen assets.

Joanne Long, an admin of that official DragonEx Telegram account, said that the team has been able to identify where the stolen funds have wandered off to. Namely, some of them turned up in an account at Bittrex, which is a US-based cryptocurrency exchange headquartered in Seattle, Washington.

It could be that some of the funds have been restored because DragonEx asked fellow exchange Bittrex to freeze the wallets that got stuffed with the stolen funds.

As it is, DragonEx identified a list of 20 cryptocurrency accounts used by the hackers to move the stolen funds from the exchange, and the company told investors that it’s cooperating with other exchanges to recover users’ funds.

On that list are what Coindesk says are the top five cryptocurrencies by market capitalization: bitcoin (BTC), ether (ETH), XRP, litecoin (LTC) and EOS, as well as the tether stablecoin (USDT), represented by six destination addresses.

How trustworthy is/was DragonEx?

Here’s hoping that investors eventually get their money back. If not, are there lessons to be learned?

You could look at reviews, but I wouldn’t bet my Frappuccino money on the results. One review of DragonEx gave it a security score of B- (which is quite good, as far as exchanges go) based on Mozilla’s Observatory website scanner… a rating which, as of Tuesday, had dropped to an F.

Unfortunately, all of this leads us to reiterate what we’ve said before, and it isn’t particularly comforting. Namely, a cryptocurrency exchange is Just Another Website and therefore unaffected by the magical un-crackability of cryptocurrency crypto.

Cryptocurrency exchanges are websites where such currencies are bought, sold and stored. For Bitcoin and its ilk, they’re a soft and vulnerable underbelly. Like “the cloud,” an “exchange” is just another name for “somebody else’s computer.” You know next to nothing about the quality of that computer, or the ethics of the person operating it.

DragonEx was around for seven years before it got hacked. That’s 49 in dog years and an epoch in exchange years. Let’s hope this dragon rises from the ashes. And yes, that’s a phoenix metaphor, but what matters is that it (hopefully) manages to cough up everybody’s funds.

On the optimistic side, its admins are sticking around and answering questions: that’s a good sign, given all of the exchanges that have gone up in smoke, their teams quietly slipping out the fire exits and leaving investors scratching their heads and shaking their fists.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/uPrE5BtM1Bg/

Preinstalled Android apps are harvesting and sharing your data

When you buy a brand-new smartphone, there’s that precious moment just after you take it out of the box when it is shiny and clean, unsullied by dirty software that could endanger your data. Or so you thought. New research reveals that the bloatware preinstalled on many new Android phones could do far more than simply chew up your storage.

Many Android phones ship with software that has been pre-installed by the smartphone vendor. Researchers at IMDEA Networks Institute, Universidad Carlos III de Madrid, Stony Brook University, and ICSI scanned the firmware on the devices of more than 2,700 consenting Android users around the world, creating a dataset of 82,501 pre-installed Android apps.

Many of these apps spied on their users, according to the research paper, accessing highly personal information. The researchers said:

According to our flow analysis, these results give the impression that personal data collection and dissemination (regardless of the purpose or consent) is not only pervasive but also comes pre-installed.

What data are these apps collecting?

Not only did preinstalled applications harvest geolocation information, personal email, phone call metadata and contacts, but some of them even monitored which applications users installed and opened. In many cases, personal information was funneled straight back to advertising companies.

Many of these preinstalled apps gather and communicate information using custom permissions, granted by the smartphone vendor or mobile network operator, which enable them to perform actions that regular applications cannot.

Examples included preinstalled Facebook packages, some of which were unavailable on the regular Google Play store. These automatically downloaded other Facebook software such as Instagram, the researchers said. They also found Chinese applications exposing Baidu’s geolocation information, which could be used to locate users without their permission.

The researchers’ analysis suggests that many of these apps may be using custom permissions like these to harvest and exchange information as part of pre-defined data-exchange agreements between companies:

These actors have privileged access to system resources through their presence in pre-installed apps and embedded third-party libraries. Potential partnerships and deals – made behind closed doors between stakeholders – may have made user data a commodity before users purchase their devices or decide to install software of their own

The paper singled out the people doing digital deals behind your back as smartphone vendors, mobile network operators, analytics services and online services companies. We recently wrote about the apps that secretly share information with Facebook.

The researchers also found malware libraries embedded in some preinstalled software. One such library, called Rootnik, has the ability to gain root access to a device, leak personally identifiable information, and install additional apps. The researchers added:

According to existing AV reports, the range of behaviors that such samples exhibit encompass banking fraud, sending SMS to premium numbers or subscribing to services, silently installing additional apps, visiting links, and showing ads.

How do these apps make their way onto Android phones?

There are several contributing factors. The first is that Google allows third-party companies to package and preinstall whatever applications they see fit onto their own versions of Android. In many cases, that process is far from transparent, the paper warned.

The second, compounding problem is that many of the apps that make it through this process are self-signed. Mobile applications are supposed to prove their legitimacy by using digital certificates, but many developers simply create their own. It’s a bit like giving your own name as a reference when applying for a job.
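
For the curious, here is a small, hypothetical illustration of what “self-signed” means, using the third-party cryptography package for Python: a certificate whose issuer is identical to its subject was vouched for by no one but its own creator. It is a deliberate simplification (a full check would also validate the signature and the chain), and the file name is a placeholder.

    # Illustrative check: a self-signed certificate names itself as its own issuer.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography import x509

    def is_self_signed(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        # If issuer == subject, no independent party vouched for this identity --
        # the developer effectively listed themselves as their own reference.
        return cert.issuer == cert.subject

    with open("app_signing_cert.pem", "rb") as f:   # hypothetical exported certificate
        print("self-signed" if is_self_signed(f.read()) else "issued by a separate CA")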

Some of these apps also use third-party libraries, which may contain their own security or privacy issues. By granting custom permissions to an app, a smartphone vendor is also granting the same permissions to any third-party library that is piggybacking on it.

All of which is to say that if you buy an Android phone, you may well be getting more than you signed up for.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/ITHy5Jj_sJc/

Facebook’s Whitehat Settings lets bug-hunters dial back app security

What if the security controls added by Facebook to make it harder for snoopers and ne’er-do-wells to attack the company’s servers…

…makes things harder for researchers who are trying to hunt for bugs legitimately?

That’s what’s been happening, bug hunters have told Facebook via its Whitehat survey.

Nearly all Facebook-owned apps do as much as they can to stop tricks such as Man-in-the-Middle (MitM) attacks, which could allow rogues in your local coffee shop to spy on you, but this also makes it tough for ethical hackers and security researchers to intercept and analyze network traffic to find server-side security vulnerabilities.

That’s why Facebook decided to help them out by giving them Researcher Settings so they can dial back their connection security and pretend that it’s still 2009.

Facebook’s Whitehat Settings

Facebook’s Bug Bounty program announced on Friday that it’s implemented what it’s calling Whitehat Settings.

These “backed off” connection settings will help security researchers analyze network traffic on Facebook, Messenger and Instagram Android applications – on their own accounts, that is.

In other words, these less secure settings don’t affect other people using Facebook, and don’t let researchers spy on traffic that isn’t theirs to start with.

The new settings allow researchers to run Facebook’s mobile apps in “watch what happens” mode by:

  • Disabling Facebook’s TLS 1.3 support
  • Enabling proxying for Platform API requests
  • Using user-installed security certificates

Facebook recommends that, to make sure the settings show up in each mobile app, researchers sign out of each app, close it, then re-open it and sign in again.

The sign in process will fetch the new configuration and setting updates you have just made. You only need to do this once, or whenever you make changes to these settings.

Keep in mind that these settings reduce the security available for apps. That’s why the social media platform advises researchers to turn off the settings when they’re not bug-hunting:

For the security of your account, we advise turning these settings off when not testing our platform to find Whitehat bug bounty vulnerabilities.

At least one security researcher thinks the Whitehat settings are a “cool idea.”

As Naked Security’s own Paul Ducklin put it:

Facebook is helping security researchers have their cake and eat it, too. By default, you’re protected against other people sniffing out your network traffic, which stops them seeing what data you’re sending to Facebook. But now you can carefully snoop on yourself when you need to, so you can see how Facebook is sending your data. That’s good for security, privacy and transparency.

How to put on your Whitehat

Facebook’s new Whitehat settings aren’t visible by default. Rather, bug hunters have to explicitly turn them on, which you can do here.

You can also get setup instructions, including video tutorials, on this help page.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/N8E1i2JpfE0/

Yeah, you better, you… you better tell us how you’re misusing people’s data, privacy watchdog suggests to US telcos

The US Federal Trade Commission has asked seven American providers of mobile broadband service to provide details about how they deal with customer and device data.

The trade watchdog issued an order [PDF] on Tuesday directing AT&T, AT&T Mobility, Comcast (Xfinity), Google Fiber, T-Mobile US, Verizon, and Cellco Partnership, aka Verizon Wireless, to detail their privacy policies, procedures, and practices.

Recent revelations of cellular network giants selling subscriber location data to pretty much anyone who vaguely looked like a cop or bounty hunter prompted 15 senators in January to ask America’s broadband overseer, the Federal Communications Commission, to investigate the practice, and it has grudgingly agreed to do so.

Now the FTC, derided for years by critics as a regulatory lapdog, has stepped in, too, “in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content.”

The agency suggests it will rely on its remit of going after unfair and deceptive trade practices to punish companies that say one thing then do another, perhaps unaware that industry behavior tends to be disclosed in the lengthy, ambiguous privacy policies that no one actually reads.

This is the same agency that the Electronic Privacy Information Center (EPIC) slammed on Tuesday in a letter [PDF] to the House Oversight Committee for a hearing on strengthening the oversight powers of consumer-focused agencies like the FTC and Consumer Protection Bureau.

In conjunction with the hearing, the Government Accountability Office said in a report [PDF] that the FTC and CPB lack the tools to deal with poor data handling related to incidents like the exposure of 145 million consumer records at credit bureau Equifax in 2017.

Give us the tools and we, er, won’t do the job

But as EPIC put it, the issue for the FTC is not lack of tools but lack of will. Data protection challenges, said EPIC president Marc Rotenberg and policy director Caitriona Fitzgerald, “will not be solved by granting the FTC more authority: the agency has failed to use the authority it already has.”


Oblivious to EPIC’s request that House members support the creation of a dedicated data protection agency, as seen in Europe, the FTC set out on a quest to find out more about the above-mentioned mobile firms and their ad habits. After more than a decade of smartphone data shenanigans, perhaps the time seemed right.

The agency asked for information about: the categories of personal data collected from consumers and their devices; the ways such data gets gathered; whether the data gets shared with third parties; and the internal policies governing data access and retention.

It also wants to know about: how companies provide notice and consent for data processing; how data gets aggregated, anonymized and processed; whether consumers are given choices about data handling; whether declining to accept data collection leads to degraded service; and whether companies provide a way to access, correct, or delete personal data.

Don’t expect meaningful action soon. A year ago, the FTC, following pressure from EPIC, said it would investigate Facebook for potential violations of the 2011 consent decree the social ad biz struck with the regulator. If you listen and remain as still as a US regulator, you can hear the silence of companies not shaking in their boots. ®


Article source: http://go.theregister.com/feed/www.theregister.co.uk/2019/03/27/ftc_moble_data/

Insurers Collaborate on Cybersecurity Ratings

A group of insurers will base rates and terms on whether customers purchase technology that has earned a stamp of approval.

It’s in the best interest of insurance companies to have their customers protected from cybersecurity losses. That, in a nutshell, is why a number of global insurers are collaborating on a rating system for cybersecurity products.

According to The Wall Street Journal, Marsh & McLennan, a professional services company specializing in risk and insurance, will evaluate enterprise cybersecurity technology in a program called “Cyber Catalyst.” The article states, “Marsh will collate scores from participating insurers, which will individually size up the offerings, and identify the products and services considered effective in reducing cyber risk.”

Companies that choose security products from among the approved selection may find themselves qualified for improved insurance terms and conditions. Insurers already signed up to participate include Allianz SE, AXA SA, Axis Capital Holdings Ltd, Beazley PLC, CFC Underwriting Ltd., Munich Re, Sompo International, and Zurich Insurance Group AG.

Read more here.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article.

Article source: https://www.darkreading.com/risk/insurers-collaborate-on-cybersecurity-ratings/d/d-id/1334258?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

Small Businesses Turn to Managed Service Providers for Security

The average cost of a cyberattack at an SMB is $54,650, a new study shows.

Nearly 90% of small- to midsized businesses (SMBs) would consider hiring a new managed services provider (MSP) if it offered the right cybersecurity solution, and nearly half would pay at least 20% more for the right security solution from a new MSP.

“We found that SMBs understand that they are not well protected,” says Brian Downey, senior director of security product management for Continuum, which collected data from 850 SMBs in the United States, United Kingdom, France, Germany, and Belgium, for its new SMB security report. “And … they are willing to pay more for the right service and would even trade out an existing MSP.”

Nearly 25% of SMBs have already changed MSPs in the aftermath of a cyberattack, he says.

Some 77% of SMBs anticipate that at least half of their cybersecurity needs will be outsourced in five years – and 78% are planning to invest more in cybersecurity in the next 12 months. Downey says SMBs understand what a breach can do to their businesses, and that the average total cost of cybersecurity attacks experienced in the past two years can vary by company size.

For example, the average cost of an attack for a company with 10 to 24 employees runs $38,437. That number increases to $70,357 for companies with 500 to 1,000 employees. The average cost across all the SMBs surveyed was $54,650.

“There’s a growing recognition among SMBs that a security event can impact customer loyalty,” Downey says. “In fact, we found that 89% of SMBs surveyed saw security as one of their top five business priorities. As consumer awareness of security has increased following the 2016 election, SMB owners realized they need to focus on security so they can protect their companies.”

Martha Vazquez, senior research analyst for infrastructure services at IDC, says there’s no question SMBs are becoming more security-aware. “SMBs are still looking at price, but they wouldn’t go with a Secureworks or IBM” because they are out of their price range, she says. “However, they will pay for providers that offer more advanced threat detection and analytics. They’re looking at the security talent the providers have, plus the 24×7 support.” 

Related Content:

 

 

Join Dark Reading LIVE for two cybersecurity summits at Interop 2019. Learn from the industry’s most knowledgeable IT security experts. Check out the Interop agenda here.

Steve Zurier has more than 30 years of journalism and publishing experience, most of the last 24 of which were spent covering networking and security technology. Steve is based in Columbia, Md.

Article source: https://www.darkreading.com/cloud/small-businesses-turn-to-managed-service-providers-for-security/d/d-id/1334259?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple

ASUS ‘ShadowHammer’ Attack Underscores Trusted Third-Party Risks

Taiwanese computer maker says it has fixed issue that allowed attackers to distribute malware via company’s automatic software update mechanism.

News this week about attackers compromising an automatic software update mechanism at ASUS to distribute malware to targeted victims has refocused attention on best practices for addressing risks to organizations from trusted vendor and channel partner relationships.

ASUS on Tuesday announced that it has addressed an issue with a version of its Live Update utility that last year allowed an attacker to distribute malware disguised as legitimate software updates to some customers of the company’s notebook computers.

The Taiwanese computer maker said it has also implemented “multiple security verification mechanisms” to prevent attackers from manipulating the company’s automatic software update mechanisms in the future.

It also released a diagnostic tool that customers of its notebook computers can use to quickly determine if their systems were infected via the compromised Live Update utility. ASUS is urging customers with infected systems to back up their files and restore the operating system to factory settings.

ASUS announced these new measures less than one day after Kaspersky Lab released a report describing how a threat group called Barium had planted poisoned, digitally signed files on ASUS’ software update servers and pushed them out as legitimate firmware and software updates.

The so-called ShadowHammer attacks happened between June and November of last year and impacted ASUS notebook customers that had enabled Live Update, a utility that automatically searches for and installs new software and firmware updates from ASUS.

Kaspersky researchers described the ShadowHammer campaign as potentially impacting over one million ASUS devices, although the attackers seemed specifically interested in just 600 or so of them. In a blog Tuesday, security vendor Avira said it had observed at least 438,000 ASUS devices on which the initial malware installer was executed.

ASUS itself downplayed the scope of the attack. In its statement Tuesday, the company did not say how many of its notebook computers might have been impacted, noting only that a “very small number” of specific user groups were targeted in the attack.

The ASUS attack is similar to other incidents in recent years in which attackers have managed to distribute malware tools to targeted victims by embedding malicious code in trusted software products. A 2017 incident involving Avast’s CCleaner software and another the same year in which attackers inserted malware dubbed ShadowPad into a product from NetSarang Computer are two relatively recent examples.

Such attacks are hard to detect and stop because they take advantage of the trusted relationships organizations have with software vendors, suppliers, and other channel partners. In ASUS’ case, the challenge was complicated by the fact that the attackers signed their malware using a legitimate ASUS digital certificate.

Here are key best practices to protect against this type of attack:

1. Identify and Monitor High-Risk Vendors

Tech companies that issue remote patches and remote updates to customers are big targets for attackers because of their broad trusted relationships with customers, says Jake Olcott, vice president at BitSight.

“As a risk management best practice, organizations must identify their most high-risk vendors, include security performance requirements in contracts with those suppliers, and monitor the cyber posture of those suppliers on an ongoing basis,” he says.

The challenge is that such an assessment and monitoring process can be extremely time-consuming, he says. But simply turning a blind eye to this risk altogether can have detrimental consequences, he notes.

2. Know What to Look for and Monitor

“When performing due diligence, you’re not auditing every line of a vendor’s code,” says Mike Jordan, senior director at The Shared Assessments Program, an organization focused on third-party risk mitigation. Rather, look for indications of practices that should catch problems like the one at ASUS, Jordan says.

Make sure you can identify whether the vendor is following secure coding practices and reviews during and after development, especially if the vendor’s software can cause a high degree of harm, he advises.

Verify whether the vendor has adopted threat modeling practices, because that’s one of the most effective ways to gain assurance about the vendor’s security habits. “Not all software vendors do this, but those who want to be considered reliable and secure partners should.”

3. Review and Prioritize

Automatic software update mechanisms are not all equally risky. So organizations should first prioritize the software that could have the most serious impact if attacked in the way that happened with ASUS, says Jordan.

“What the software does and where the updates go are important when determining its risk profile,” he says. With the ShadowHammer attack, the software updater utility went to about a million different computers, he notes. That gave the threat actors a huge attack surface to go after by replacing just one file.

“In an organization that uses a lot of computers from one vendor, that vendor should be much riskier than one whose software is used on only one computer in the organization,” Jordan says. “However, if that one computer has the crown jewels on it, you’d want to prioritize it.”

4. Trust But Verify

To mitigate risk from software updates, verify that the file you are installing is the file the vendor intended, says Colin Little, senior threat analyst at Centripetal Networks. “A lot of popular software development companies will post the expected file hash of the package” when making the update available for download, he says.

The goal is to give recipients a way to verify that the file hash of the file they downloaded is the same as the expected value. Any change in the package would change the hash value.
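
As a simplified illustration of that advice, the snippet below computes the SHA-256 hash of a downloaded package and compares it with the value the vendor is assumed to have published out of band; the file name and expected hash are placeholders.

    # Verify a downloaded update against a vendor-published SHA-256 hash.
    # The file path and expected hash below are placeholders for illustration.
    import hashlib

    EXPECTED_SHA256 = "paste-the-vendor-published-hash-here"

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large installers don't have to fit in memory
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    actual = sha256_of("vendor_update_package.exe")
    if actual.lower() == EXPECTED_SHA256.lower():
        print("Hash matches the published value; package integrity checks out.")
    else:
        print(f"MISMATCH: got {actual}; do not install this package.")

As the next paragraph notes, this check only helps when the published hash comes from a channel the attacker does not also control.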

While security experts consider such hash comparisons an extra precaution, they do not work all the time. With the ASUS attacks, for instance, comparing the new files to the legitimate update using hash values would have been of little use, since the attackers replaced the legitimate updates on the server with their own, says Mark Orlando, CTO, cyber protection solutions at Raytheon.

5. Monitor Your Own Code-Signing Processes

Code-signing certificates are fundamental to establishing trust and are therefore a coveted commodity for attackers. For cybercriminals, such certificates provide a way to make malware seem trustworthy and therefore undetectable to threat detection systems.

Unfortunately, at many organizations the responsibility for protecting code-signing processes lies mostly with developers who are not prepared to defend these assets, says Kevin Bocek, vice president of security strategy and threat intelligence at Venafi. “Security teams must know where code-signing is being used; you can’t secure what you don’t know about,” Bocek says.

“Second, everything throughout the software delivery pipeline must be secured and continuously monitored,” he notes. This includes approval, use of keys and auditing of signing operations.

“It’s not good enough to just place a code signing key in an HSM or upload code to the cloud for signing.”


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year …

Article source: https://www.darkreading.com/attacks-breaches/asus-shadowhammer-attack-underscores-trusted-third-party-risks/d/d-id/1334261?_mc=rss_x_drr_edt_aud_dr_x_x-rss-simple