Risk, Randomness, Uncertainty and other Ambiguous Terms
Uncertainty versus Risk is a popular discussion topic among risk managers, especially after major risk management disasters. The debate can get really hairy and drift into deep philosophical territory about the nature of knowledge. Yet the value of having as clear a language toolkit as possible around these terms should not be underestimated. Practical risk management typically shuns deep excursions into the meaning of things, yet that is not quite compatible with the use of sophisticated methods and tools (such as a Risk Model) that assume an understanding of the scope and limitations of “knowledge”.
This article aims to collect and synthesize a number of approaches that people have advanced in distinguishing / clarifying uncertainty, ignorance and randomness in the context of risk management.
Clearing up The “Knightian” Confusion
Before we embark on a “taxonomy” of uncertainties it is worth trying to clear up (or at least put in context) what seems to be a case of simple mislabelling.
The distinction between the terms uncertainty and risk will bring to mind, for many readers, the work of Frank Knight. The University of Chicago economist distinguished in his Ph.D. dissertation (1921) between two types of randomness: one that is amenable to formal statistical analysis, which Knight called “Risk”, and another that is not, which he called “Uncertainty”.1
In modern usage this terminology and distinction is confusing. Today, a century later, there is a large contingent of people who would call themselves (formally or informally) Risk Managers. While the concept of Risk and Risk Management is definitely fraught with ambiguity, the art and science of Risk Management is certainly not restricted to quantifiable risks.
In an earlier post we emphasized the subjectivity of risk perception and put forward a simple definition of Risk that does not purport to place it on any particular point of the “uncertainty continuum”:
Risk is an uncertain future outcome that is unfavorable for a person or a collection of persons
The essence of Knight’s deliberations is still important (and we will expand on this below), but we will reserve the use of the term Risk as per the above definition, which is at once narrower (it concerns uncertainty that is negative / needs to be managed) and broader (it does not distinguish the precise nature of the uncertainty). In simple words, for us Risk is what preoccupies risk managers.
Taxonomies of Uncertainty
Having (hopefully) cleared that up, we can move on to the main topic, which is to discuss a taxonomy or classification of the different “types” of uncertainty. Once we do that we will go back to linking this discussion to issues that keep risk managers busy, such as risk taxonomies and model risk.
Revisiting and Completing The Rumsfeld Taxonomy
A thought framework that is very useful to get us started is what we might call the Rumsfeld Taxonomy of Uncertainty, namely the famous “Known Knowns” quote of United States Secretary of Defense Donald Rumsfeld in 2002:
“There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns; the ones we don’t know we don’t know.”
This implied taxonomy of uncertainty is very useful as it sets up a rudimentary theory of knowledge with the following elements:
- a subject (observer) perceiving an external system
- various external (observed) systems of different intrinsic nature
- various possible configurations of observer / observed, classified according to the fidelity by which the observer understands the observed
This explicit duality becomes a simple taxonomy after we assign a label “known/unknown” on both sides of the observer / observed pair:
- Known Knowns: Denotes a situation where the subject is aware of the true properties of an external system. This category can be associated with perfect knowledge, determinism and lack of any uncertainty.
- Unknown Knowns: The external system has knowable properties but the subject is not aware of them. The situation can in principle be remedied with further work (collecting and analysing data, evidence etc.).
- Known Unknowns: The subject is aware of their limited knowledge around some properties of an external system. This type can be associated with examples where the uncertainty can be parametrized and quantified. Yet further work will not completely eliminate the uncertainty.
- Unknown Unknowns: The subject is not aware of their lack of knowledge about some external system. Hence, in particular, they may be completely surprised by future events.
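The four quadrants above can be sketched as a small lookup structure. This is only an illustrative encoding (the axis names and labels are ours, not part of any formal scheme):

```python
# Illustrative encoding of the Rumsfeld-style taxonomy of uncertainty.
# Key: (subject is aware, system is knowable) -> quadrant label.
QUADRANTS = {
    (True, True): "Known Knowns: perfect knowledge, no uncertainty",
    (False, True): "Unknown Knowns: knowable, but the subject is unaware",
    (True, False): "Known Unknowns: quantifiable but irreducible uncertainty",
    (False, False): "Unknown Unknowns: the subject may be completely surprised",
}

def classify(subject_aware: bool, system_knowable: bool) -> str:
    """Return the taxonomy label for an observer / observed configuration."""
    return QUADRANTS[(subject_aware, system_knowable)]
```

For example, `classify(True, False)` returns the "Known Unknowns" label: the subject is aware that the system holds irreducible randomness.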
In the above matrix, the two most famous “quadrants” are the known unknowns and the unknown unknowns. In more formal terms they are recognized (since at least the start of the 20th century) as the difference between aleatory variability and epistemic uncertainty. We will dive into those terms in more detail. But before we do so it is worth mentioning that the overall structure of this organizational pattern has further merits towards risk management applications:
- It emphasizes that the work done by the subject in understanding a system is an important and non-trivial element. We will see this again in the more detailed schemes discussed below.
- It also emphasizes the role of the subject in defining who knows (or does not know) something about a system. In our definition of Risk the observing / measuring subject is at the same time the actor who defines which states of the system are actually undesirable. In other words, different actors will not only have different knowledge, but they may also have different concepts of what is an adverse or risky scenario.
Aleatory Variability versus Epistemic Uncertainty
Aleatory Variability is a long name for randomness in a system. Such randomness can have causes that are i) intrinsic to the system or ii) due to errors in our measurement of the system.
Alternative / related concepts to aleatory variability are thus:
- known unknowns
- quantifiable uncertainty
- observational, statistical or measurement error
Epistemic Uncertainty is, in contrast, lack of certainty around the conceptual model of a system. It is due, foremost, to limited knowledge. Epistemic uncertainty can be identified when fundamentally different models can plausibly be advanced to interpret / explain the same phenomenon.
Alternative / related names for epistemic uncertainty are thus:
- unknown unknowns
- unquantifiable uncertainty
- model risk / model error
- pretense of knowledge
We will dive deeper into the above starting with a more detailed taxonomy that has been proposed along these lines:
The Lo and Mueller Taxonomy of Uncertainty
Lo and Mueller2 proposed a more granular taxonomy of uncertainty, aiming to explain differences across a spectrum of intellectual pursuits from physics to biology to economics to philosophy and religion. Let us first summarize their classification:
- Level 1: Complete Certainty. All past and future states of the system are determined exactly if initial conditions are fixed and known
- Level 2: Risk without Uncertainty. This level of randomness is Knight’s (1921) definition of risk: randomness governed by a known probability distribution for a completely known set of outcomes
- Level 3: Fully Reducible Uncertainty. This is risk with a degree of uncertainty, an uncertainty due to unknown probabilities for a fully enumerated set of outcomes that we presume are still completely known. At this level, classical (frequentist) statistical inference must be added to probability theory as an appropriate tool for analysis
- Level 4: Partially Reducible Uncertainty. Situations in which there is a limit to what we can deduce about the underlying phenomena generating the data
- Level 5: Irreducible Uncertainty. Irreducible uncertainty refers to a state of total ignorance; ignorance that cannot be remedied by collecting more data, using more sophisticated methods of statistical inference or more powerful computers, or thinking harder and smarter.
Notes on the Lo-Mueller taxonomy
Lo and Mueller use the term Risk (Level 2) in the Knightian sense. As discussed already, this is confusing and we will refrain from doing so. A simple relabelling (“Risk” -> “Randomness”) clears things up and lets us focus on the essence of the classification.
In comparison with the “Rumsfeld Taxonomy”, the authors introduce an important technical tool / distinction: a system governed by a probability distribution that is known exactly, versus a system subject to imperfect (statistical) knowledge. This distinction reflects the more general split of Knowledge as Measurement or Theory. The dichotomy and interplay of these two manifestations of knowledge is very obvious in the physical sciences (experimental versus theoretical physics), but it is a useful paradigm in risk management as well3. Uncertainty can be an intrinsic aspect of the theory (our conceptual representation of a system) or associated with the measurement approach.
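The gap between a known distribution (Level 2) and one that must be inferred from data (Level 3) can be sketched in a few lines. The die example and sample sizes are our own illustration:

```python
import random

random.seed(42)

# Level 2 ("risk without uncertainty"): the distribution is known exactly,
# so probabilities are computed from theory, not estimated.
# For a fair die, P(roll >= 5) = 2/6 with no data required.
p_true = 2 / 6

# Level 3 (fully reducible uncertainty): the same die, but the analyst only
# observes outcomes; frequentist estimation recovers the unknown probability,
# with an error that shrinks as observations accumulate.
def estimate(n: int) -> float:
    rolls = [random.randint(1, 6) for _ in range(n)]
    return sum(r >= 5 for r in rolls) / n

err_small = abs(estimate(100) - p_true)      # noisy with little data
err_large = abs(estimate(100_000) - p_true)  # close to zero with much data
```

The uncertainty at Level 3 is "fully reducible" precisely because `err_large` can be driven arbitrarily close to zero by collecting more observations.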
Let us discuss some examples of the above “ladder” of certainty:
Simple Deterministic Theories
This concerns cases where the system is a classic and simple Dynamical system. Its behavior (evolution in time) is determined by (mathematical) rules that are easy to develop.
In physics, the motion of two bodies under their mutual gravitational attraction is a good example, but there are many more. In the social sciences (finance and economics) there is simply no system with such simple and knowable characteristics. Even simple assertions such as “the human economy takes place on the surface of a planet” are tenuous characterisations that can be invalidated in possible future states of the world (especially if Elon Musk gets his vision implemented!).
A faint analog of certainty in the domain of human affairs would be certain identities that are essentially mathematical tautologies, following from the definitions adopted to describe a system: for example, various balance sheet identities.
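Such an accounting identity holds by construction, whatever the numbers. A minimal sketch (the figures are hypothetical):

```python
# A balance sheet identity is a tautology of the definitions used to
# describe the system, not an empirical regularity. Figures are made up.
assets = {"cash": 120.0, "receivables": 80.0}
liabilities = {"debt": 150.0}

# Equity is defined residually, so the identity below cannot fail.
equity = sum(assets.values()) - sum(liabilities.values())

# Assets = Liabilities + Equity: certain, for any input values.
assert sum(assets.values()) == sum(liabilities.values()) + equity
```

This is "knowledge" of the weakest kind: it tells us nothing about the world beyond the bookkeeping conventions we chose.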
Deterministic but Complex Theories
In this situation the system is still reasonably described as a classic dynamical system, but its structure and interactions are sufficiently complex that its behavior (evolution in time) cannot easily be deduced. This is the domain of chaotic behavior, emergence and Ergodic theory.
In physics there are again many, many examples of this nature:
- Chaotic non-linear mechanical systems (starting with the simplest Double pendulum )
- Large-N systems in Statistical Physics describing aggregates of physical elements such as gases, liquids, solids etc
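Sensitive dependence on initial conditions, the hallmark of such systems, can be demonstrated with the logistic map, a far simpler stand-in for the double pendulum (the parameter r = 4 is a standard chaotic regime; the starting values are arbitrary):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a deterministic rule
# that nonetheless defeats long-run prediction in its chaotic regime.
def trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)  # perturb the initial condition by one part in a billion

# Despite perfect knowledge of the dynamics, the tiny perturbation is
# amplified until the two trajectories are macroscopically different.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The rule is fully deterministic (Level 1 in principle), yet in practice the uncertainty behaves like randomness: limited precision in the initial state destroys predictability.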
There are many Econophysics efforts to use the toolkit of complexity to understand economic systems. As of 2021 they are still considered heterodox economics.
Measurement, Statistical Errors and Inference
Up to this point in the taxonomy we discussed the intrinsic nature of systems. In reality all systems are only known through measurement. Measurement is the numerical quantification of the attributes of a system and the starting point towards building knowledge about the system (a model). It applies to any and all systems, irrespective of their complexity and intrinsic nature.
Informally, measurement introduces an additional layer or “fog of observational uncertainty” on top of whatever underlying intrinsic uncertainty there might be. For example, no deterministic physical law is ever observed directly, but always through a distorting lens that includes possible thermal fluctuations, the noise of the experimental apparatus etc., which we collectively term Observational error:
- In benign cases measurement errors can be assumed to be of a probabilistic nature that can be averaged away (IID errors). This averaging happens with the accumulation of sufficient observations. Random error is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the experimenter’s interpretation of the instrumental reading. Random errors show up as different results for ostensibly the same repeated measurement. They can be estimated by comparing multiple measurements, and reduced by averaging multiple measurements.
- It is entirely possible that our lens is “systematically” distorted: an intrinsic measurement bias (Systematic Error). Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation, or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable (non-random) direction. Systematic errors can sometimes be identified by applying the measurement process to a known system. If the cause of the systematic error can be identified, then it usually can be eliminated.
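The contrasting behavior of the two error types under averaging can be simulated directly. A minimal sketch (the true value, noise level and bias are invented for illustration):

```python
import random
import statistics

random.seed(7)

TRUE_VALUE = 10.0

def measure(bias: float = 0.0, noise: float = 0.5) -> float:
    """One reading: true value + systematic bias + random (IID) noise."""
    return TRUE_VALUE + bias + random.gauss(0.0, noise)

# Random error averages away as observations accumulate...
unbiased_mean = statistics.mean(measure() for _ in range(10_000))

# ...but a systematic error (here a miscalibrated instrument reading
# +0.3 too high) survives any amount of averaging.
biased_mean = statistics.mean(measure(bias=0.3) for _ in range(10_000))
```

Comparing `biased_mean` against a known reference system is exactly the calibration exercise described above: the residual offset reveals the bias, which can then be corrected.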
Statistical inference comes into play when we measure an intrinsically stochastic process. A canonical definition is that statistical inference is the process of using data analysis to infer properties of an underlying probability distribution.
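As a sketch of that definition, consider inferring the unknown bias of a coin from observed flips, attaching a normal-approximation 95% confidence interval to the estimate (the true probability and sample size are invented; the analyst is assumed not to know `TRUE_P`):

```python
import math
import random

random.seed(1)

# The "system" is an intrinsically stochastic process: a biased coin.
TRUE_P = 0.6  # hidden from the analyst
flips = [random.random() < TRUE_P for _ in range(5_000)]

# Inference: estimate the parameter of the underlying distribution...
p_hat = sum(flips) / len(flips)

# ...and quantify the remaining estimation uncertainty.
stderr = math.sqrt(p_hat * (1.0 - p_hat) / len(flips))
ci = (p_hat - 1.96 * stderr, p_hat + 1.96 * stderr)
```

The confidence interval narrows as data accumulates, which is what makes this kind of uncertainty "fully reducible" in the Lo-Mueller sense.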
Quantum Uncertainty and the Uncertainty Principle
As an aside, there is an interesting example in physics where a probabilistic model of a system and the measurement of system properties interfere in a non-trivial way. In Quantum mechanics, systems can only be described using probabilistic tools. This is quite an important difference from a physical perspective, in that quantum mechanical variability is considered intrinsic to the system, whereas complex but classical statistical systems are in principle of the deterministic type.
In terms of the Lo-Mueller taxonomy a quantum mechanical system is a Level 2 type of uncertainty: the probability distribution of future outcomes is completely described by Schroedinger’s equation. Yet in quantum mechanics measurement and knowledge of the system are famously in intrinsic conflict: the act of measurement disturbs the system, leading to the famous Uncertainty principle.
Keynes versus Hayek and the Pretense of Knowledge
The broad shape of the uncertainty taxonomy discussed up to this point is not the only take on uncertainty. While the “knowability” of large swathes of the empirical universe is not in doubt, when it comes to human affairs there is the Hayek critique to contend with. For Keynes, economic affairs are “knowable”: probability is more general than mathematical probability and is the hypothesis upon which it is reasonable for us to act (Keynes CW VIII: 339).
In Hayek’s theory, in contrast with Keynes, probability plays no role in guiding decision and action in conditions of limited knowledge.4 Hayek seems to implicitly accept Hume’s view: probability (and probable judgement too) is groundless, irrational and subjective, as are taste and passion. The Pretense of Knowledge is an expression and critique that was the title of Friedrich von Hayek’s Nobel Prize Lecture. In summary, it posits that the economic sciences are susceptible to a bias towards measurable information.
The evolution of scientific knowledge suggests that the question of what is knowable and what is not is open ended. History is replete with examples of over-confidence (assuming a system has been fully understood, only to discover fundamental new facets) and under-confidence (assuming something is forever beyond our comprehension and being surprised by what ingenuity, experimentation and systematic analysis can achieve).
Let us now draw some parallels with the needs of risk management. First, notice that we have purged the Knightian use of the term Risk, as per the initial paragraphs. Now that we have some sort of “map” of the uncertainty universe, how do Risk and Risk Management fit in?
Linkages: Taxonomies of Risk versus Taxonomies of Uncertainty
Let us first go back to the nitty-gritty of daily risk management: coping with the myriad risks a risk manager is facing. A Risk Taxonomy is the (typically hierarchical) categorization of risk types. In turn, a Risk Type is a classification label that is used to identify and characterise the variety of risk phenomena to which an individual or organization is exposed.
The Open Risk Taxonomy aims to be a holistic picture of risks facing an organization (in particular financial organizations) in support of holistic risk management. As discussed in detail in this post and this white paper on risk taxonomies we take a stylized view of a financial organization as a system that processes information, makes decisions and engages in contracting.
The “system” we aim to risk manage (the organization and its interaction with the social / economic environment) is very complex. The underlying dynamics and factors are poorly understood (ranging from e.g. Macroeconomic Factors to the detailed systems and processes of an organization that underpin its Operational Risk). Hence in risk management, “measurement” is always a very high-level exercise which, if taken too literally, may lead to extreme disappointment; see e.g. the debacle around the AMA Model.
The taxonomy of uncertainty shows up quite naturally in model risk considerations:
- Intrinsic Model Risk concerns types of epistemic uncertainty (multiple possible models or no possible model). It may manifest in various ways:
  - Inconclusive / inappropriate / incomplete variable selection in describing the system
  - Inadequate description of system dynamics
  - Different possible distributional assumptions
- Data Quality concerns measurement errors in the data and inputs on which models rely
Model Embedding Risk: Decision Making under Uncertainty
Model Embedding Risk is an additional layer of risk that emerges as a certain model is embedded in decision making.
More generally, a Model Risk Taxonomy denotes the categorization of different aspects and realizations of Model Risk into a consistent overall framework that may help with mitigating or otherwise managing this risk.
Knight, F. H. (1921), Risk, Uncertainty, and Profit. Boston, MA: Hart, Schaffner & Marx, Houghton Mifflin Company ↩︎
Andrew W. Lo and Mark T. Mueller, March 19, 2010, WARNING: Physics Envy May Be Hazardous To Your Wealth!, Preprint ↩︎
Francis X. Diebold, Neil A. Doherty, and Richard J. Herring University of Pennsylvania, June 2008, The Known, the Unknown, and the Unknowable in Financial Risk Management ↩︎
Anna Carabelli & Nicolò De Vecchi (1999), Where to draw the line? Keynes versus Hayek on Knowledge, ethics and economics, Journal of the History of Economic Thought, 6:2, 271-296 ↩︎