New Study Calls Common Risk Figure into Question

Many risk models use a commonly quoted number — $150 per record — to estimate the cost of an incident. A new study from the Cyentia Institute says misusing that number means that estimates are almost never accurate.

It’s one thing to know your organization’s level of cyber-risk. It’s a step farther along the maturity path to be able to quantify that risk. But if you don’t know where your risk ranks in relation to the risks that other organizations face, you may still be operating in a partial information vacuum. That’s the premise of a new study that looked at the numbers behind industrywide risk and reached some conclusions that many may find provocative.

The Information Risk Insights Study (IRIS), conducted by the Cyentia Institute, is intended to help business risk managers build better models for risk, and to use those models to make better decisions for managing cyber-risk. David Severski, senior data scientist at Cyentia and the principal author of the IRIS report, says that the point isn’t to have more data, but to use available data more effectively. “We’re never going to have perfect information, but we can use information that we have available to make a better decision rather than just a finger-in-the-wind type of analysis there,” he says.

Severski says that one example of using information more effectively would be to use industry-scale information if risk data on a specific company isn’t available. “If I have very little information about my organization or about a vendor that I’m working with, for instance, I can use the information that’s in the IRIS study to start the risk conversation,” he explains. He says that knowing the market area of the company and its size can allow a starting point for conversations involving the frequency and size of loss.

With information in hand on industry averages, Severski says, discussions can continue about whether the particular organization is better or worse than average, and what any available data says about the possibility of changing risk levels.

Size Matters
The averages for companies of different sizes are among the report findings that surprised Wade Baker, partner at the Cyentia Institute. “I was surprised that the likelihood that a Fortune 1000 firm would have an incident is about 1 in 4, or 25% in a given year,” he says. He found that likelihood to be much higher than he expected. An equal surprise on the flip side came from their study of small and midsize businesses.

“We found a 2% likelihood of an incident in any given year among small and medium businesses,” Baker says. Those percentages don’t say anything, though, about the impact an incident can have on the organization.

“If you’re a really large organization, with high revenue, when you have a breach, you stand to lose more just on a sheer dollars standpoint than when a small business is compromised,” he says. But when Cyentia researchers analyzed the incident data as a proportion of revenues, they found that the incident cost was well under 1% of annual revenues for a typical breach for a large corporation.

The news is quite different for small companies. “It’s a quarter of annual revenues for the typical breach for a small and medium enterprise. And I mean, that’s just shocking,” Baker says. The figure is shocking not simply because of the size of the loss, but because it indicates that a key number frequently used by risk managers and analysts may be quite wrong.

One Size Fits None
The “2019 Cost of a Data Breach Report” by IBM Security and the Ponemon Institute shows that data breaches cost, on average, $150 per record involved. That number is frequently used (and, Severski says, commonly misused) to estimate incident costs in risk analysis. In IRIS, Severski writes, “A single cost-per-record metric simply doesn’t work and shouldn’t be used. It underestimates the cost of smaller events and (vastly) overestimates large events.”

As an example, Severski mentions a group that published a figure of $5 trillion in losses from misconfigured clouds. It is, Severski says, a patently ridiculous number that comes from multiplying 33 billion exposed records by $150. And the effect of errors like that is, he explains, huge.
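
To make the arithmetic concrete, here is a minimal Python sketch of the flat cost-per-record calculation the report criticizes. The 33 billion exposed records and the $150-per-record average are the figures cited above; the small-incident example is purely illustrative.

```python
# Sketch of the flat cost-per-record arithmetic criticized in the IRIS report.
# The $150 figure is the average from the 2019 Cost of a Data Breach Report.

COST_PER_RECORD = 150  # USD per exposed record

def naive_breach_cost(records_exposed: int) -> int:
    """Estimate breach cost by multiplying exposed records by a flat per-record cost."""
    return records_exposed * COST_PER_RECORD

# Reproducing the headline figure Severski calls patently ridiculous:
# 33 billion exposed records x $150 per record is roughly $5 trillion.
print(f"${naive_breach_cost(33_000_000_000):,}")  # $4,950,000,000,000

# The formula scales linearly, so it also implies that a 500-record incident
# (an illustrative number) costs only $75,000, ignoring fixed response costs.
print(f"${naive_breach_cost(500):,}")             # $75,000
```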

When Severski plotted the projected costs of historical breaches against the known actual costs, he found that the projections matched reality far less often than statistical modeling would expect. The total error came to more than $1.7 trillion, a figure that exceeded the actual losses themselves.
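
The aggregate-error comparison Severski describes can be sketched in a few lines. The breach records below are hypothetical stand-ins, since the article does not reproduce the underlying loss data that the IRIS analysis used.

```python
# Hedged sketch of the projected-versus-actual comparison described above.
# Each tuple is (records exposed, actual reported loss in USD); the values
# here are hypothetical -- the IRIS analysis used real historical loss data.
COST_PER_RECORD = 150

breaches = [
    (2_000, 250_000),
    (5_000_000, 3_500_000),
    (150_000_000, 30_000_000),
]

total_error = sum(abs(records * COST_PER_RECORD - actual)
                  for records, actual in breaches)
total_actual = sum(actual for _, actual in breaches)

print(f"Total projection error: ${total_error:,}")
print(f"Total actual losses:    ${total_actual:,}")
# In the IRIS analysis, the aggregate error topped $1.7 trillion, which was
# more than the actual losses themselves.
```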

As a result, Severski says that a table of probabilities, with the number of records (from 10 to 1 billion) on the X axis and total loss amounts (from $10,000 to $1 billion) on the Y axis, offers a far more accurate way to use available data to build risk models.
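
The article does not reproduce the table itself, but the structure it describes can be sketched as a simple two-axis lookup. The bucket edges below follow the ranges quoted above, and the probability cells are left as placeholders rather than filled with invented values.

```python
# Illustrative skeleton of the two-axis probability table the report recommends:
# record-count buckets on one axis, total-loss buckets on the other, with the
# probability of each combination in the cells. Bucket edges mirror the ranges
# described above; the probabilities themselves would come from the IRIS data
# and are left as None placeholders here.

RECORD_EDGES = [10 ** e for e in range(1, 10)]  # 10 records up to 1 billion
LOSS_EDGES = [10 ** e for e in range(4, 10)]    # $10,000 up to $1 billion

def bucket_index(value: float, edges: list) -> int:
    """Return the index of the log-scale bucket a value falls into."""
    for i, edge in enumerate(edges):
        if value <= edge:
            return i
    return len(edges) - 1  # anything larger lands in the top bucket

# probability_table[loss_bucket][record_bucket]; to be filled from published data
probability_table = [[None] * len(RECORD_EDGES) for _ in LOSS_EDGES]

def loss_probability(records: int, loss_usd: float):
    """Look up the tabulated probability for a given record count and loss size."""
    row = bucket_index(loss_usd, LOSS_EDGES)
    col = bucket_index(records, RECORD_EDGES)
    return probability_table[row][col]

# Example: a 100,000-record event with a $2 million loss maps to one cell.
print(bucket_index(100_000, RECORD_EDGES), bucket_index(2_000_000, LOSS_EDGES))
```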

A Trusted Voice
Asked why cybersecurity professionals should care about the accuracy of historical numbers, Baker says the answer depends on the company those professionals serve. For those in large enterprises, he says, it’s all about being seen as a reliable source of information for the board of directors. “A wildly overestimated view of the potential impacts of these cyber events will lead to wildly overspending to mitigate them, which will lose the confidence of the board in the long run. And we’ll lose the ability to have a real discussion and be taken seriously,” Baker explains.

On the other hand, “if you’re a small organization, you can quickly look at this and say, OK, how worried should I be about this particular topic for publicly disclosed breaches for my organization, and maybe you stop there,” he says, because that level of information would allow the company to decide whether to spend money on mitigating risk or launching a new product.

The key, Baker says, is understanding that no matter how much we want simple answers, risk isn’t a one-size-fits-all matter. Putting that understanding into action will, he says, allow organizations of all sizes to make better decisions about how to address the risks they face.


