
Two approaches to managing risk

In the previous post we adopted a view on risk grounded in decision theory: risk represents the basis on which rational decisions are made. We also aligned with ISO 31000 by accepting their definition of risk as the effect of uncertainty on objectives.

The ultimate common mode failure would be a failure of the risk management process itself. A weak risk management approach is effectively the biggest risk in the organization. — Douglas W. Hubbard, “The Failure of Risk Management”, second edition

Below we will discuss a common problem in risk management practice and make an argument for what we believe is the right approach forward. But since this series is intended for readers who may not be fluent in risk management, we first need to provide an overview of how risk management activities relate to each other.

Risk management overview

While risk management frameworks differ in specific aspects, almost all share a common view on the high-level concepts involved. The following diagram (inspired by Open FAIR™) illustrates how risk management activities relate:

Risk management landscape

Risk governance provides the overall steering for the rest of the management process. In it, leadership sets the organisation’s thresholds for what is acceptable and oversees whether the risk management process is functioning as intended. Governance does not manage risks directly; it sets the rules and ensures they are followed. Those involved in governing risk are typically the ones held accountable to the outside world.

Risk management operates within the framework set by governance. It is the ongoing process of understanding, deciding on, and responding to risks, as well as reporting to those responsible for governance, monitoring whether the responses are working, and occasionally re-evaluating identified risks.

Risk assessment is part of the management process and groups the analytical steps: identifying what risks exist, analysing how likely they are and what impact they could have, and evaluating whether the resulting level of risk is acceptable given the thresholds set by governance. It is common for people to use “risk assessment” and “risk analysis” interchangeably, but the diagram above should make the distinction clear.

We will work through each of these areas in detail throughout this series. But before we do that, we need to look at an issue that underpins all the activities mentioned — the way risk is expressed and communicated. Most risk management frameworks are agnostic to it, but as we will see, it is critical for the overall success of a risk management system.

A red herring: qualitative vs. quantitative methods

Among the first decisions a risk practitioner needs to make when adopting a risk management framework is whether to express risk quantitatively or qualitatively.

Quantitative methods are rooted in statistics. When adopted, risk properties are expressed and analysed using well-established quantities such as probability of an event occurring and loss expressed in financial terms.
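As a minimal sketch of what a quantitative expression of risk looks like in practice, consider a single hypothetical risk stated as “a 10% annual probability of a loss somewhere between 50k and 500k”. The probability, loss range, and uniform loss distribution below are illustrative assumptions, not a recommended model:

```python
import random

random.seed(42)

# Illustrative assumptions for one risk (not real figures):
P_EVENT = 0.10                          # assumed annual probability of the event
LOSS_LOW, LOSS_HIGH = 50_000, 500_000   # assumed loss range if it occurs

def simulate_year() -> float:
    """Return the simulated financial loss for one year."""
    if random.random() < P_EVENT:
        # Loss magnitude modelled (for illustration) as uniform over the range.
        return random.uniform(LOSS_LOW, LOSS_HIGH)
    return 0.0

trials = [simulate_year() for _ in range(100_000)]
expected_annual_loss = sum(trials) / len(trials)
print(f"Expected annual loss: ~{expected_annual_loss:,.0f}")
```

Even this toy model yields a quantity (an expected annual loss around 27,500 under these assumptions) that can be compared against a budget, a control cost, or another risk — something a “Medium” label cannot do.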

Qualitative methods, on the other hand, use descriptive (nominal) or ranking (ordinal) scales to convey the properties of a risk. Most commonly, matrices with 3–7 steps in each dimension are used to rank or classify a risk’s likelihood and impact.

We consider this framing a red herring because it forces the discussion to be about methods, distracting from the real issue: what are we trying to achieve by adopting a risk management framework? Arguing whether to express risk quantitatively or qualitatively is like arguing whether to wear brown or black boots on our next hiking tour; if neither provides proper support for the route, the colour is irrelevant.

Risk matrices are worse than useless

Before we lose half our readers with that statement, let us qualify it: risk matrices are worse than useless if our goal is rational decision-making and due diligence.

Researchers and practitioners, such as Douglas W. Hubbard, on whose work we draw extensively in our practice, have dedicated several books to this topic. The core of the argument is that labels such as “Rare”, “High”, or “Low-Medium” are too vague to convey what an assessor actually believes about the properties of a risk.

Further, studies in psychology and behavioural economics, including those by Nobel laureate Daniel Kahneman, have shown that human judgement is subject to biases that systematically distort how we perceive and evaluate uncertainties. These biases affect quantitative estimates as well, but when an assessor’s judgement is masked behind a layer of vague labels, there is no way to objectively monitor — and therefore improve — the quality of the risk management system over time.

Even ISO, which otherwise accepts qualitative methods as a viable option, acknowledges:

Qualitative and semi-quantitative techniques can be used only to compare risks with other risks measured in the same way or with criteria expressed in the same terms. They cannot be used for directly combining or aggregating risks and they are very difficult to use in situations where there are both positive and negative consequences or when trade-offs are to be made between risks. — ISO/IEC 31010:2019

What ISO doesn’t state, however, is that measuring risks “in the same way” or with “criteria expressed in the same terms” is practically impossible the moment more than one person is involved in the process. Our “Medium” is almost certainly different from your “Medium”.
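This vagueness can be made concrete with a small sketch of what Hubbard calls range compression. The 5×5 bucket boundaries below are hypothetical, as are the two example risks; the point is that a matrix can assign the same cell to risks whose expected losses differ by an order of magnitude:

```python
# Hypothetical bucket cutoffs for a 5x5 risk matrix (illustration only).

def likelihood_bucket(p: float) -> int:
    """Map an annual probability to a 1-5 likelihood score."""
    cutoffs = [0.01, 0.05, 0.20, 0.50]
    return 1 + sum(p > c for c in cutoffs)

def impact_bucket(loss: float) -> int:
    """Map a financial loss to a 1-5 impact score."""
    cutoffs = [10_000, 100_000, 1_000_000, 10_000_000]
    return 1 + sum(loss > c for c in cutoffs)

# Two illustrative risks: (annual probability, loss if it occurs)
risk_a = (0.21, 110_000)     # expected loss ~23k
risk_b = (0.50, 1_000_000)   # expected loss 500k

score_a = (likelihood_bucket(risk_a[0]), impact_bucket(risk_a[1]))
score_b = (likelihood_bucket(risk_b[0]), impact_bucket(risk_b[1]))
print(score_a, score_b)  # both risks land in the same matrix cell
```

Both risks score (4, 3) on this matrix even though one carries roughly twenty times the expected loss of the other — and that is before two assessors disagree about what “Medium” means.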

Risk matrices are not just useless; they actively harm both the organisations that use them and the risk management field in general. If you engage with any community of people involved in enterprise or security risk management outside of formal settings, you will encounter a significant amount of cynicism about the practice. Many consider the whole exercise to be a corporate and regulatory charade. Even when adopted honestly, risk matrices inevitably end with people providing arbitrary input to arrive at a desired, already-decided output.

It’s not all doom and gloom for qualitative methods

You might have noticed that we began by discussing “qualitative methods” but then proceeded to criticise risk matrices specifically. This is neither because we are sloppy with terminology, nor because we have maliciously set up a straw man to attack. Where qualitative methods fail is in how risks are scored and communicated. Risk matrices dominate this area by a large margin, which is why we singled them out.

Information can be gathered from sources such as literature reviews, observations, and expert opinion … It is common to encounter problems where there is both data and subjective information. Bayesian analysis enables both types of information to be used in making decisions. — ISO/IEC 31010:2019

For the rest of this series, as in our practice, we adopt a subjective interpretation of probability: rather than the frequency or propensity of some phenomenon, probability is interpreted as a reasonable expectation representing a state of knowledge, or as a quantification of a personal belief.

Under this interpretation, both quantitative methods (such as formulas and algorithms) and qualitative methods (brainstorming, bow-tie analysis, causal mapping, and others) are useful, as long as they reduce the uncertainty regarding a risk at hand.
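The ISO quote above mentions Bayesian analysis as the bridge between data and subjective information. As a minimal sketch (our own construction, not taken from the standard), a Beta-Binomial model can combine an expert's prior belief about an annual incident probability with observed history; the prior parameters and incident counts below are hypothetical:

```python
# Expert belief: incidents happen "roughly 1 year in 10", encoded
# (for illustration) as a Beta(2, 18) prior over the annual probability.
alpha, beta = 2.0, 18.0
prior_mean = alpha / (alpha + beta)   # 0.10

# Observed data: 3 incident years out of the last 5.
incidents, years = 3, 5

# Beta-Binomial conjugate update: add successes and failures to the prior.
alpha += incidents
beta += years - incidents

posterior_mean = alpha / (alpha + beta)
print(f"prior: {prior_mean:.2f}, posterior: {posterior_mean:.2f}")
```

The posterior (0.20 here) sits between the expert's prior belief and the raw observed rate of 0.60, weighted by how much evidence each side carries — exactly the kind of principled blending of “data and subjective information” the quote describes.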

Common arguments against quantitative methods

Much of what follows overlaps with arguments that Hubbard addresses in his books. If you are interested in the details we strongly recommend his work — it is a valuable asset to any risk practitioner, even if you don’t fully agree with the arguments.

Quantitative methods are too difficult

Ⅰ: Maybe to some extent, but probably not as difficult as people think.

Ⅱ: Why do we expect rational decision-making to be easy?

To define and implement a methodology for quantitative risk analysis, one does need familiarity with probability theory, decision theory, and related fields. This is especially true for domains like insurance or pharmaceutical research where precise predictive modelling is required. But for the vast majority of domains where risk matrices are used today, an undergraduate-level familiarity with statistics is sufficient to get started. The users of the system need even less expertise, as long as their roles are facilitated by a competent person.

The more important question is whether it is truly easier to make a rational decision when risks are expressed qualitatively. Imagine you are about to undergo a serious surgery and you have narrowed your choice to two clinics, each offering a different method. Would you rather each told you the risk is “low” and recovery time “decent”, or gave you recovery time in days and complication rates by type?

If a decision is obvious and easy to make, you don’t need risk management to support it.

Quantitative techniques require high-quality data

Ⅰ: Do they really?

Ⅱ: How do qualitative methods compensate for the lack of high quality data?

This one comes straight from ISO/IEC 31010:2019 “Risk management — Risk assessment techniques”. The fact is, however, that decision science applies quantitative methods precisely because we don’t always have high-quality data. If we had such data we wouldn’t need probabilistic models. The decision criteria for such environments are straightforward and well within introductory-level material.

Some will argue that we have more data than most people think, which is true. But the more important question is: how do risk matrices avoid the need for high-quality data if they truly support decisions? Whatever methods one used to form a qualitative score can also be applied to construct a quantitative estimate. The latter would at least enable us to measure our accuracy over time.
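Measuring accuracy over time is only possible once likelihoods are stated as probabilities. A standard way to do this is the Brier score; the forecasts and outcomes below are hypothetical, but the scoring rule itself is standard:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better; 0.0 is perfect, 0.25 is what always guessing 50% earns.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: stated probabilities for four risk events
# and what actually happened. "High"/"Low" labels could not be scored at all.
forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes  = [1,   0,   1,   0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```

An assessor who records probabilities accumulates a scorable track record and can demonstrably improve; an assessor who records “Medium” cannot be evaluated even in principle.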

Some aspects of risk are intangible and cannot be measured

A couple of Hubbard’s books address exactly this topic. Our reinterpretation of the argument is as follows:

  1. There is no such thing as perfect measurement. Measurement is always an approximation and the objective is to reduce the uncertainty to acceptable levels.
  2. We measure things by making observations.
  3. We are concerned about things because they have observable manifestations we care about.
  4. Therefore things that we are concerned about can, in principle, be measured.

The real challenge is to do so with a satisfactory level of accuracy. But even if we can’t reach such a level, reduced uncertainty is better than making decisions blindly.
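How little observation it takes to reduce uncertainty is illustrated by what Hubbard calls the “rule of five”: for any continuous quantity, the chance that the population median lies between the smallest and largest of just five random samples is 93.75% (1 − 2 × 0.5⁵). The simulation below checks this empirically; the normal distribution is an arbitrary choice, since the result holds for any continuous distribution:

```python
import random

random.seed(0)

def median_captured(population_median: float) -> bool:
    """Draw 5 samples and check whether they bracket the true median."""
    samples = [random.gauss(0, 1) for _ in range(5)]
    return min(samples) <= population_median <= max(samples)

# The standard normal has median 0; any continuous distribution would do.
trials = 100_000
hits = sum(median_captured(0.0) for _ in range(trials))
rate = hits / trials
print(f"empirical: {rate:.4f}, theoretical: {1 - 2 * 0.5**5:.4f}")
```

Five observations will not settle a question, but going from total ignorance to a 93.75% interval is a substantial reduction in uncertainty at almost no cost.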

Take a common example of an intangible: business reputation. There are different reasons why someone might care about their reputation, but in business this almost always connects to gaining or losing market share. So instead of arguing about reputation in the abstract, one can observe more tangible indicators:

  • Have sales volumes changed?
  • Has the conversion rate from leads to customers shifted?
  • Is revenue growing in proportion to marketing investment?

None of these is a perfect proxy for reputation, but each is observable and measurable.

How can we tell what works?

If you base medicine on science, you cure people. If you base the design of planes on science, they fly. If you base the design of rockets on science, they reach the moon. — Richard Dawkins

We are not researchers and cannot claim to have a mountain of evidence on what works. Also, simply adopting quantitative methods does not automatically yield better results. The McNamara fallacy, for example, shows how otherwise sound methods can lead to bad results if misunderstood or poorly implemented.

What we do know, however, is that science’s principle of testing our hypotheses works and has consistently bettered our lives and society. Expressing risk quantitatively is a precondition for monitoring the accuracy of our predictions and improving our system over time.

Further reading

If you are convinced, or simply curious, here are resources to explore:

Hubbard’s Rapid Risk Audit is a simple Excel-based risk register that provides a quantitative view across a set of risks. It is intended as an example rather than a production tool, but it demonstrates that quantitative risk analysis does not require expensive tools.

Open FAIR™ is a free, open standard for risk analysis offering a detailed taxonomy and decomposition of risk. If you have a specific risk to analyse, it is the simplest way to get started, though it is scoped to individual risk analysis rather than end-to-end risk management. The FAIR Institute offers a broader set of resources around the standard, including training and community support.

If, by any chance, you have access to the Information Security Forum, their Quantitative Information Risk Analysis (QIRA) methodology and tooling is the most complete of the three from an end-to-end risk assessment perspective. A couple of years ago, when we last reviewed it, it was still under active development, but ISF is a serious organisation and we expect it has matured since.

Throughout this series we will also present our own approach to quantitative methods and provide free resources to support the different phases of the risk management process.

What’s next

Though a bit extensive and confrontational, this post has provided an overview of the landscape of risk management activities and argued that the meaningful distinction is not between qualitative and quantitative methods, but between vague and clear expression of risk. We made the case that risk matrices, despite their prevalence, actively undermine the objectives of risk management.

Next we will look at the role of risk governance and at one of the most frequently mentioned but rarely explained concepts in risk management: risk appetite.
