In the late 1990s and early 2000s, a concept called “Return on Security Investment” (ROSI) was bandied about. Borrowing from the common business term Return on Investment (ROI), in which the return on a particular investment (capital, personnel, training, etc.) can be quantified, the cybersecurity industry attempted to quantify a return on security investment.

Fundamentally, the primary failing of this concept is that it is mathematically impossible (or at least approaches mathematical impossibility) to quantify an event “not occurring.” In short, if a company has zero security events that impact it deleteriously in a given year, was the $5 million security expenditure appropriate? Should it have been less, since no security event caused a loss? If the company experienced an event, was the return on the investment then the difference between the expenditure and the overall losses from the incident? The concept simply did not work; it was mathematically flawed.

Fast forward to 2021, and companies are once again fixated on quantifying cyber risk and, more importantly, cybersecurity exposure. The question is similar: can companies accurately quantify cybersecurity risks today?

This is a complex question, but to attempt an answer it is first important to have working definitions of several terms.

**Risk** is an artificial construct that can be expressed as a function of the likelihood of an adverse event occurring (often given as a statistical probability) and the impact should the event be realized (in business, and for the purposes of this article, expressed in monetary terms). In short, R = f(P, I).

**Probability** refers to the extent to which something is probable; the likelihood of something happening. It can be either quantified (in which case it is deterministic) or qualified, in which case it refers to the belief that something will happen (non-deterministic). Frequentist probability models quantify risk, while conditional probability models qualify risk using subjective interpretations. There is an ongoing debate among statisticians and probabilists as to which model more accurately predicts real-life events.

**Security** is a very important concept that can be defined simply as the implementation of controls commensurate with the identified risks.
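The R = f(P, I) definition above can be sketched in a few lines, under the common simplification that f is a simple product of likelihood and impact (an assumption for illustration; f need not take this form):

```python
# Minimal sketch of risk as likelihood x impact, assuming f(P, I) = P * I.
def risk(probability: float, impact_dollars: float) -> float:
    """Expected loss for a single adverse event."""
    return probability * impact_dollars

# Illustrative, made-up numbers: a 2% annual chance of a $5,000,000 loss.
annual_risk = risk(0.02, 5_000_000)
print(f"${annual_risk:,.0f}")  # $100,000
```

This expected-loss form only makes sense when P can actually be estimated, which is precisely what the rest of the article puts in question for security events.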

Understanding the above, we can use a real-world example to illustrate the failings of attempting to quantify cybersecurity risks using traditional models built on frequentist probability theory.

Suppose for a moment that you find natural gas on your property and decide to build a natural gas well. Being concerned for the environment and the safety of your workers, you want to ensure that the well is engineered correctly against a failure that could release gases and have deleterious impacts on people and the environment. One primary component of the well is the “Mark Ie Main Actuation Recumbent Key valve” (Mark-Ie MARK).

The manufacturer states that the Mark-Ie MARK has a mean failure rate (MFR) of 1 in 2 million actuations, with a failure causing catastrophic destruction of the well. This means the valve could fail on the first actuation or never fail in its service life; however, given a large enough population of valves, there will be a regression to the mean in which 1 out of every 2 million actuations fails catastrophically. Using the probability of failure and the impact (catastrophic), one can quite easily quantify the risk. This model is deterministic, meaning that if it were run an infinite number of times the output would always be the same. There is a known probability of failure (1 in 2 million) and a known impact (total loss of $X). You talk to the engineers and design redundant systems to reduce the likelihood of failure to an acceptable level. The failure is predictable and the controls are easily identified. (It should be noted that this model accounts ONLY for loss of the well and not other impacts.)
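The valve's deterministic risk can be quantified in a few lines. This is a sketch: the 1-in-2-million MFR comes from the example above, but the duty cycle and well value are assumed numbers for illustration.

```python
# Deterministic (frequentist) risk sketch for the hypothetical Mark-Ie MARK valve.
P_FAILURE_PER_ACTUATION = 1 / 2_000_000  # manufacturer's stated MFR
ACTUATIONS_PER_YEAR = 50_000             # assumed duty cycle
WELL_VALUE = 10_000_000                  # assumed total loss ($X) on failure

# Probability of at least one catastrophic failure in a year,
# treating each actuation as an independent trial.
p_annual_failure = 1 - (1 - P_FAILURE_PER_ACTUATION) ** ACTUATIONS_PER_YEAR
expected_annual_loss = p_annual_failure * WELL_VALUE

print(f"P(failure in a year) = {p_annual_failure:.4f}")
print(f"Expected annual loss = ${expected_annual_loss:,.0f}")
```

Nothing in this calculation depends on anyone's intent, which is exactly why the same arithmetic cannot be transplanted onto the security events discussed next.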

Now consider that you are concerned about a security event that could impact the well. Security events do not occur by happenstance; they are actions taken by people with some motivation. There are seven basic motivations for which an actor may take action against an asset. Some are intrinsic while others are extrinsic. The basic motivations are monetary gain, ideology (social or political motivations), espionage, war/defense, facilitation, nuisance/destruction, and revenge.

When considering why a person may take adverse action, there is little, if any, information available that would allow one to quantify with any degree of certainty the probability of an attack against the well. This is what is referred to as Knightian uncertainty, which simply means that there is no quantifiable knowledge about the occurrence of some event; it acknowledges a fundamental degree of ignorance and the unpredictability of events. The frequentist view of probability assigns an objective probability based upon a series of experiments under ideal conditions and cannot be used as a predictive mechanism under uncertainty. Where there is Knightian uncertainty, traditional (frequentist) probability models fail.

Suppose you read an article on quantifying security and decide to hire a company to help you quantify your security risk. The company arrives, runs some calculations on the likelihood of a security event such as a disgruntled employee or a competitor attacking your well, and produces a number representing your risk and expected losses. Can you believe their numbers? Within failure analysis the number would be believable as it pertains to the valve, since there is a predictable risk that can be evaluated and quantified.

Within the context of a security event, it is not: it is not feasible to predict human behavior with any degree of certainty (see Knightian uncertainty). Adding to the complexity of uncertain attacks and differing attacker motivations, there is also the common thread of attacker, or threat, adaptation. The Department of Homeland Security’s Security Lexicon defines Adaptive Threats as:

*“…threats intentionally caused by humans.”* It further states that Adaptive Threats are *“…caused by people that can change their behavior or characteristics in reaction to prevention, protection, response, and recovery measures taken.”*

The concept of threat adaptation is directly linked to the defense cycle. In short, as defenses improve, threat actors change their tactics and techniques to adapt to the changing controls. As threat actors improve their capabilities, defenders must in turn change their own protections. This cycle continues ad infinitum until there is a disruption.

Even if a company does make a claim about your current security state, it likely ignores the human element of rationality (people act in their own interests) and attackers’ ability and motivation to adapt to implemented controls. It also ignores social, cultural, and other events that change attackers’ motivations. Consider a movie studio several years ago. It likely felt it had implemented controls commensurate with its identified security risks. It did not count on producing a movie that cast a world leader in a negative light, changing the motivation paradigm and subjecting it to unpredicted attacks by a very sophisticated and motivated nation-state actor.

The fundamental failing of existing models of risk within the context of security is that they apply the classic, or frequentist, view of probability which, as noted above, cannot serve as a predictive mechanism under uncertainty. Even the Department of Homeland Security recognizes this when it states that frequentist models do not account for changes in the environment and cannot be applied to security issues such as adversarial human actors, which are considered adaptive threats.

Frequentist probabilities exist only in theory and can be considered only within the construct of a theoretical environment, as external conditions (such as the movie being produced in the example above) do not exist in such an environment. In a theoretical sense, the probability of flipping heads or tails on a coin is 50%. Is it really 50%? No. There is a very small, yet real, likelihood the coin could land on its edge. The coin could be unbalanced and land on heads more often than tails. In the real world, coins are not balanced perfectly, and environmental as well as other factors will influence the true odds of flipping heads or tails.

Conditional probability, by contrast, is a concept that builds on the aforementioned aspect of frequentist probabilities. Even frequentists, when assigning probabilities, are in effect assigning conditions to those probabilities. As Fenton and Neil explain in their book Bayesian Analysis for Risk Management, all probabilities assigned to an uncertain event are conditional in a given context. Any attempt to measure uncertainty inherently requires a subjective judgement about the conditions that affect the event. Understanding this point, it is appropriate to use Bayes’ theorem as a representation of conditional probability, written P(A|K), where K is the background knowledge or context that conditions the probability P of event A.

In contrast with objective, frequentist probability models of risk, the incorporation of knowledge and conditional probabilities is a fundamental aspect of subjective Bayesian probability theory. This is important to understand: frequentist views of probability measure the objective proportion of outcomes of experiments, whereas subjective probability models express a measure of belief in an outcome. Using the coin-flipping example, one could state a belief that, given the weight of the coin, the prevailing wind, and other factors, the probability of landing on heads is 55%. After weighing the coin and obtaining new knowledge, that belief, and therefore the probability, may be adjusted.
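The belief updating described above can be sketched with a Beta-Binomial model, a standard conjugate approach for revising a subjective estimate of a coin's bias. The prior and the observed flips here are illustrative assumptions, not data from the article:

```python
# Subjective Bayesian update of a belief about a coin's heads probability.
# A Beta(a, b) prior encodes the initial belief; observed flips update it.

# Prior belief: roughly 55% heads, from inspecting the coin (assumed).
a, b = 5.5, 4.5                      # Beta prior with mean a / (a + b) = 0.55

# New evidence: 100 observed flips (made-up data).
heads, tails = 48, 52

# Conjugate update: the posterior is Beta(a + heads, b + tails).
a_post, b_post = a + heads, b + tails
posterior_mean = a_post / (a_post + b_post)

print(f"Prior belief P(heads)   = {a / (a + b):.3f}")
print(f"Updated belief P(heads) = {posterior_mean:.3f}")
```

The point is not the specific numbers but the mechanism: the probability is a revisable measure of belief, not a fixed property waiting to be measured by repeated trials.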

Consider the following. Chris and Erin have run a race every day for 29 days and have a final race tomorrow. Erin, being taller, a better athlete, and younger, has won 20 of the 29 races (68.97%), and Chris has won 9 of the 29 (31.03%). Who would you bet on to win tomorrow? If you said Erin, that makes sense given what you know. Now consider that every time Chris won and Erin lost, it was raining, and it is going to rain tomorrow. Would you change your bet? If you said “yes,” you are applying conditional probability theory. Once you have more information, you can update your beliefs, and therefore the probability, to account for it.

So what is the answer to security risk quantification? That is a great question that lends itself to considerable debate. It is suggested that conditional probability models (Bayesian models, for example) lend themselves to more accurate evaluation of cybersecurity risks than frequentist models, which do not account for the changing environment and the variable, unpredictable threats facing organizations today. By applying conditional probability models, companies can arguably gain a more complete, and over time more accurate, analysis of their cybersecurity risks.