Types and importance of uncertainty quantification

By Tim Kinyanjui on August 12, 2021 - 5 Minute Read

In early 1961, around 1,200 armed Cuban exiles were given the go ahead by the American government to invade Cuba in an effort to topple its government.

The plan immediately fell apart and it never achieved its desired purpose. However, in ‘Bay of Pigs, Untold Story,’ Peter Wyden revealed that when President John F. Kennedy asked the US military to assess the invasion plan, their assessment was that it had a “fair chance” of success. This was a qualitative answer, which Kennedy took as a positive assessment of the invasion’s potential to succeed.

In fact, what the military chiefs meant by a “fair chance” was 1:3 odds of success. Had this quantification been made explicit to the president, he might well not have approved the plan, as the implied chance of failure was 75%.

What does this show us? That the quantification and proper communication of uncertainty is indeed key in helping all decision makers assess the potential impact of their strategy. This obviously applies in the political arena, but also in the world of business when it comes to making those all-important commercial decisions.

Business leaders know that making good (maybe optimal) decisions fast and at scale is extremely challenging, even under the best of circumstances. This is why there has been huge demand for, and growth of, a new category of enterprise AI software called Decision Intelligence.

Simply put, Decision Intelligence is the commercial application of AI to drive profit and growth. AI allows different technologies to work together to enable machines to sense, act and learn with a degree of intelligence, with machine learning (ML) algorithms being a subset of these technologies.

Machine learning models have been shown to achieve exceptional accuracy on many complex tasks, but they are not without their mistakes. Just like human decision making, AI models need to account for uncertainty in their decisions and, without doubt, a crucial indicator of trust in ML algorithms is their ability to quantify their confidence in a given prediction or decision. In the literature, uncertainty has classically been categorized by its source into two types: aleatoric uncertainty and epistemic uncertainty.

What is aleatoric uncertainty?

Aleatoric uncertainty refers to the notion of randomness, i.e. the variability in the outcome of an experiment that is due to inherent randomness. An example of this kind of uncertainty can be found in something as simple as a coin flip. The process that generates this data has a stochastic component that cannot be reduced by any additional information. That’s why it’s generally accepted that a coin flip (of course, the coin has to be fair) before the beginning of a football match is a fair mechanism to determine which side of the pitch each team should start on. It is impossible to predict the outcome of a flip by any purely deterministic process and, therefore, the uncertainty is generally considered to be irreducible.
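A quick simulation makes this concrete (a minimal sketch, not from the original post): observing more flips sharpens our estimate of the heads probability, but the outcome of the next individual flip remains exactly as unpredictable as before.

```python
import random

random.seed(42)

def flip() -> int:
    """One fair coin flip: 1 = heads, 0 = tails."""
    return random.randint(0, 1)

# More data sharpens our estimate of P(heads)...
for n in (100, 10_000, 1_000_000):
    p_hat = sum(flip() for _ in range(n)) / n
    print(f"n = {n:>9}: estimated P(heads) = {p_hat:.4f}")

# ...but no amount of data makes the *next* flip predictable: the
# per-flip variance p * (1 - p) = 0.25 is aleatoric and irreducible.
```

However large `n` grows, the best any model can do on a single future flip is a 50/50 guess.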

What is epistemic uncertainty?

The second type is epistemic uncertainty, which refers to uncertainty caused by a lack of knowledge. As opposed to uncertainty caused by randomness, epistemic uncertainty can (in principle) be reduced by additional information or data. For example, what does the Swahili word “kichwa” mean: head or tail? The possible answers are the same as in the coin-flipping example and you might be equally uncertain, but the uncertainty could be reduced, or even eliminated, if you know the Swahili language. (For those playing along at home, kichwa is Swahili for head!) Epistemic uncertainty is therefore considered to be reducible.
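The reducibility of epistemic uncertainty can also be sketched numerically (a hypothetical worked example, not from the original post): if we are uncertain about a coin’s bias p and model that belief as a Beta distribution, each observed flip adds a pseudo-count and the spread of our belief about p shrinks as data accumulates.

```python
import math

def beta_sd(a: float, b: float) -> float:
    """Standard deviation of a Beta(a, b) distribution over the bias p."""
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Uniform prior Beta(1, 1): maximal ignorance about the coin's bias.
a, b = 1.0, 1.0
print(f"prior uncertainty about p:      {beta_sd(a, b):.4f}")

# Observe 60 heads and 40 tails.
a, b = a + 60, b + 40
print(f"after 100 flips:                {beta_sd(a, b):.4f}")

# Observe a further 6,000 heads and 4,000 tails.
a, b = a + 6000, b + 4000
print(f"after 10,100 flips:             {beta_sd(a, b):.4f}")
```

Note the contrast with the coin-flip example above: here the data reduces our uncertainty about p itself, even though the randomness of each individual flip remains.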

These same types of uncertainty exist in machine learning and, given the steady increase in the relevance of applying ML to practical business problems, there is a need to explicitly quantify and communicate the uncertainty.


Uncertainty definitions in ML are context-specific

Having defined the two types of uncertainty, it may come as a surprise that the definitions are context-specific within machine learning. Let’s examine this with an example. Consider a binary classification problem characterized by one feature, x1, and denote the feature space in this first example as X. In Figure 1, the model has been trained to distinguish between two classes (pink and blue), and there exists class uncertainty where the two classes overlap (query point marked with a green ‘X’). No amount of extra data about x1 can tell us how to distinguish the two classes at the intersection. This can therefore be seen as aleatoric, or irreducible, uncertainty.

[Figure 1: class overlap on a single feature, x1]
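A toy simulation illustrates the overlap (the distributions here are hypothetical, chosen only to mimic Figure 1): even the optimal threshold classifier on x1 cannot push its error below the mass of the overlap, no matter how much training data it sees.

```python
import random

random.seed(7)

# Two classes whose single feature x1 overlaps:
# pink ~ N(0, 1), blue ~ N(1, 1), equal class priors.
def sample(n: int) -> list[tuple[float, str]]:
    data = [(random.gauss(0, 1), "pink") for _ in range(n)]
    data += [(random.gauss(1, 1), "blue") for _ in range(n)]
    return data

# The Bayes-optimal rule given only x1 is a threshold at the midpoint 0.5.
def bayes_predict(x1: float) -> str:
    return "blue" if x1 > 0.5 else "pink"

test_set = sample(50_000)
err = sum(bayes_predict(x) != y for x, y in test_set) / len(test_set)
print(f"error of the optimal 1-D classifier ≈ {err:.3f}")
```

The error stays pinned near the theoretical Bayes error of about 0.31 for these distributions: that residual error is the aleatoric uncertainty, as seen from feature space X.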

But what happens when we add an extra feature, x2, from feature space X’? It turns out that by embedding the data in a higher-dimensional space, accomplished by adding feature x2, the two classes become separable and the formerly irreducible uncertainty can now be resolved.


[Figure 2: the classes become separable once feature x2 is added]

In more general terms, embedding the data in a higher-dimensional space, e.g. from X to X’, can reduce aleatoric uncertainty. This, unfortunately, can lead to an increase in epistemic uncertainty, because there are more possible hypotheses (models) that can explain the data; one way to reduce this is to collect more data about x1 and x2.
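Extending the toy setup above (again with hypothetical distributions): suppose x1 overlaps between the classes exactly as before, but a second feature x2 separates them cleanly. A simple rule on x2 then resolves the uncertainty that looked irreducible in X, at the cost of needing data about both features to identify the right 2-D model.

```python
import random

random.seed(1)

# Hypothetical setup: in feature space X (x1 only) the classes overlap,
# but in X' = (x1, x2) they are cleanly separable by the sign of x2.
def sample2d(n: int) -> list[tuple[tuple[float, float], str]]:
    data = []
    for _ in range(n):
        label = random.choice(["pink", "blue"])
        x1 = random.gauss(0 if label == "pink" else 1, 1)      # overlapping
        x2 = random.gauss(-2 if label == "pink" else 2, 0.5)   # separable
        data.append(((x1, x2), label))
    return data

# With x2 available, a simple threshold rule resolves the former
# "irreducible" uncertainty.
def predict(x: tuple[float, float]) -> str:
    return "blue" if x[1] > 0 else "pink"

test_set = sample2d(10_000)
acc = sum(predict(x) == y for x, y in test_set) / len(test_set)
print(f"accuracy with feature x2 ≈ {acc:.3f}")
```

The accuracy is now close to 1, versus roughly 0.69 when only x1 was available; which uncertainty counts as “irreducible” has changed with the feature space.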

What the above example shows is that reducible and irreducible uncertainty should not be seen as universal notions. Instead, the context of the definitions should be considered: they only make unambiguous sense when defined within the confines of a given model of analysis.

In my next post, coming soon, I will discuss some of the methods used in uncertainty quantification in ML algorithms and how to communicate the uncertainty to decision makers.

Interested in a career in data science?

We're currently growing our world-class data science team as we continue to scale. Think you fit the bill? Check out our vacancies and apply online 👇
