Epistemic AI

Research Themes: Software Technology & Intelligent Systems


A TRL (Technology Readiness Level) is a measure of the maturity of a developing technology. When an innovative idea is discovered, it is often not directly suitable for application. Usually such a novel idea is subjected to further experimentation, testing and prototyping before it can be implemented. The image below shows how to read TRLs to categorise innovative ideas.


Summary of the project


One of the challenges for researchers applying machine learning algorithms in real-life settings is how these systems should deal with uncertainty. This project reconsiders one of the most fundamental assumptions behind most machine learning algorithms: how an algorithm should act when it does not know how to act, and moreover may not even know that it does not know, a situation that can turn into a black swan. The usual assumption is that there is a probability distribution, and this distribution guides the algorithm in the face of uncertainty. But what if the probability distribution is itself uncertain? The current approach forces the algorithm to pretend it is more certain than it really is: it assumes that all uncertainty is manifest in ‘known unknowns’ that can be captured by a probability distribution. A first remedy could be to train the algorithm on more data, yet in many real-life settings collecting more data will not close the uncertainty about the ‘unknown unknowns’.

Mathematical theories for this ‘second-order’ uncertainty have been developed, but they have hardly been applied in AI. The researcher aims to rewire several machine learning algorithms with these more advanced treatments of uncertainty. Through this rewiring, the same machine learning strategy can be tested on how it deals with different ways of not knowing. The overall objective is to create a new paradigm for next-generation artificial intelligence that provides worst-case guarantees on its predictions thanks to a proper modelling of real-world uncertainties.
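As a loose illustration of this idea (a minimal sketch, not the project's actual method), the toy example below contrasts the standard approach, where a single ‘best guess’ probability distribution drives the decision, with a second-order approach, where the distribution itself is only known to lie in a set of candidate distributions (a so-called credal set) and the decision minimises the worst-case expected loss over that set. All numbers are invented for illustration.

```python
# Toy sketch: decisions under first-order vs second-order uncertainty.
import numpy as np

# Loss of each action (rows) under each outcome (columns).
losses = np.array([
    [0.0, 10.0],  # action 0: cheap if outcome 0, costly if outcome 1
    [2.0,  2.0],  # action 1: moderate cost either way
])

# Standard approach: a single estimated distribution over outcomes.
p_hat = np.array([0.9, 0.1])
expected = losses @ p_hat                 # expected loss per action
best_single = int(np.argmin(expected))    # pick the lowest expected loss

# Second-order approach: we only know that the probability of outcome 1
# lies somewhere in [0.05, 0.4]; the extreme points of that interval
# span the credal set of candidate distributions.
credal_set = [np.array([1 - q, q]) for q in (0.05, 0.4)]
worst_case = np.max([losses @ p for p in credal_set], axis=0)
best_robust = int(np.argmin(worst_case))  # minimise the worst case

print(f"expected losses under p_hat: {expected}, chosen action: {best_single}")
print(f"worst-case losses over credal set: {worst_case}, chosen action: {best_robust}")
```

Here the best-guess distribution favours the risky action 0, while the worst-case criterion flips the choice to the safer action 1: keeping the second-order uncertainty explicit is what allows the worst-case guarantee that the project's summary describes.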

What's next?


Since the duration of the project is too short to integrate and test all combinations of uncertainty theories and forms of machine learning, one of the next steps will be further research into the untested combinations. Another next step is to apply these new algorithms in settings where AI is already being used, such as autonomous vehicles, to study the effect of working with a different notion of uncertainty. This might lead to epistemic uncertainty being treated in a principled way across AI.

With or Into AI?


Into

Dr Neil Yorke-Smith

KU Leuven (BE)

Oxford Brookes University (UK)

Faculties involved

  • EEMCS
  • 3mE

Additional information