Projects
Fundamental AI
Incorporating physical knowledge into Gaussian process regression
To combine data from different sources, it is important to know when to trust which source of information. This makes probabilistic modelling essential. Gaussian processes are among the most promising probabilistic methods for AI: they can be used to learn models from data when deriving those models from first principles is difficult. This PhD research project focuses on incorporating various forms of physical knowledge into Gaussian process regression. Several challenges must be overcome, such as scalability, dealing with noisy inputs to the Gaussian process, and recursive/online estimation. This fundamental research in AI has strong connections to the topics of the other three PhD projects, as well as to our ongoing work on indoor localisation, underwater localisation, drone navigation and satellite swarms.
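To make the idea concrete, the sketch below shows one common way of incorporating physical knowledge into Gaussian process regression: a physical model supplies the prior mean and the GP learns the residual between physics and data. The constant-acceleration "physics", the squared-exponential kernel and the fixed hyperparameters are illustrative assumptions, not the methods to be developed in this project.

```python
import numpy as np

# Minimal sketch: GP regression with a physics-based prior mean.
# Kernel choice, hyperparameters and the toy physical model are assumptions.

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def physics_mean(t, v0=2.0, a=-9.81):
    """Prior mean from first principles: constant-acceleration kinematics."""
    return v0 * t + 0.5 * a * t ** 2

def gp_posterior(t_train, y_train, t_test, noise_std=0.1):
    """Posterior mean/variance of the residual GP, added back to the physics."""
    K = rbf_kernel(t_train, t_train) + noise_std ** 2 * np.eye(len(t_train))
    K_s = rbf_kernel(t_train, t_test)
    K_ss = rbf_kernel(t_test, t_test)
    residual = y_train - physics_mean(t_train)      # the part physics misses
    alpha = np.linalg.solve(K, residual)
    mean = physics_mean(t_test) + K_s.T @ alpha     # physics + learned residual
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

t_train = np.linspace(0.0, 1.0, 20)
y_train = physics_mean(t_train) + 0.05 * np.sin(8 * t_train)   # unmodelled effect
y_train += 0.1 * np.random.default_rng(0).normal(size=t_train.shape)
mean, var = gp_posterior(t_train, y_train, np.linspace(0.0, 1.0, 50))
```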
Applied AI
Distributed PNT for inaccessible autonomous systems
How will we provide positioning, navigation and timing (PNT) without the Global Positioning System (GPS)? Imagine a swarm of 10+ satellites orbiting on the far side of the Moon, denied access to GPS and other human-made systems because the Moon shields their signals. Alternatively, consider a swarm of drones deployed for underground search-and-rescue missions, where GPS signals are heavily impeded and fixed infrastructure cannot be installed promptly. Along similar lines, imagine a network of autonomous vehicles driving through a tunnel without any references (or anchors) for navigation. To ensure safety and reliable data inference, all nodes must localize themselves in 3-D space over time and must be time-synchronized for the swarm to function coherently. Our goal in this project is to develop scalable AI-driven sensor platforms under distributed control frameworks, in which an anchorless swarm of mobile nodes cooperatively estimates its time-varying positions, corrects its time-varying clock errors, and estimates its orientation. The platform will use onboard inertial sensor measurements and exploit two-way communication with neighboring nodes to estimate the relative navigation parameters, without the need for an external reference such as GPS or other dedicated infrastructure.
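As a toy illustration of anchorless relative localization (not the algorithms to be developed in this project), the sketch below recovers the relative geometry of a small swarm from noisy pairwise ranges using classical multidimensional scaling. The node count, noise level and 3-D setting are assumptions, and the solution is inherently defined only up to a rigid rotation and translation.

```python
import numpy as np

# Minimal sketch: relative positions of an anchorless swarm from noisy
# pairwise ranges (as obtained, e.g., from two-way ranging), via classical MDS.

rng = np.random.default_rng(1)
true_pos = rng.uniform(-50.0, 50.0, size=(8, 3))            # 8 nodes in 3-D (assumed)

# Pairwise distances: ground truth plus ranging noise (assumed 0.1 m std).
D = np.linalg.norm(true_pos[:, None, :] - true_pos[None, :, :], axis=-1)
D += rng.normal(scale=0.1, size=D.shape)
D = 0.5 * (D + D.T)                                          # symmetrize
np.fill_diagonal(D, 0.0)

def classical_mds(D, dim=3):
    """Relative positions from a distance matrix via double centring."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                      # centring operator
    B = -0.5 * J @ (D ** 2) @ J                              # Gram matrix estimate
    eigval, eigvec = np.linalg.eigh(B)
    idx = np.argsort(eigval)[::-1][:dim]                     # top-dim components
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

rel_pos = classical_mds(D)     # positions in an arbitrary reference frame
```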
Combining fundamental and applied AI
Effective perception in autonomous systems
Autonomous systems, such as drones and self-driving vehicles, rely on perception algorithms to avoid obstacles and navigate their surroundings. With the increasing use of multiple sensors on these platforms, it is crucial to collectively extract meaningful perceptual cues from the various types of onboard sensors. However, multi-modal sensory data is typically high-dimensional, large in volume, and irregularly structured, which poses significant challenges for developing resilient and generalizable perception systems. Natural data inherently possesses rich regularities and structure, and recent studies have shown that exploiting these regularities improves generalizability and performance in various machine learning tasks. In this PhD project, we aim to leverage the inherent regularities in sensory data to enhance the effectiveness and reliability of perception in autonomous systems, and consequently to develop a unified framework for multi-modal machine perception.
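One minimal reading of "exploiting regularities", sketched below purely for illustration: place sensor measurements on a graph whose edges encode which features should agree (e.g. spatial or temporal neighbours), and use the graph Laplacian to regularize noisy features. The chain graph, feature dimensions and regularization weight are assumptions, not design choices of this project.

```python
import numpy as np

# Minimal sketch: denoising multi-modal features by exploiting known
# structure, encoded as a graph, via Laplacian (Tikhonov) regularization.

def laplacian(adjacency):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

def denoise_on_graph(noisy_features, adjacency, lam=2.0):
    """Tikhonov smoothing: argmin_x ||x - y||^2 + lam * x^T L x (per feature)."""
    L = laplacian(adjacency)
    n = adjacency.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, noisy_features)

# Chain graph over 6 measurement nodes (e.g. consecutive time steps),
# each carrying a 4-dimensional multi-modal feature vector (assumed sizes).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(2)
clean = np.outer(np.linspace(0.0, 1.0, 6), np.ones(4))   # slowly varying signal
noisy = clean + 0.3 * rng.normal(size=clean.shape)
smoothed = denoise_on_graph(noisy, A)
```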
Sensor AI for human motion estimation
The foundational principle of sensor AI is to use physics-based models where possible, and to use AI to learn the models, or parts of models, that physics alone cannot capture efficiently. This project focuses on human motion estimation, for which extensive physics-based models are available from the field of biomechanics. Although promising results have been shown in recent years, how to effectively combine sensor fusion with AI, or to integrate physical models with models learned from data, remains an open research question. To address this challenge, the project aims to introduce novel methods that tightly couple sensor fusion and AI. Rather than treating these components as distinct entities, the focus will be on integrating them so that all available information in the system is used and uncertainties are properly propagated. We plan to use this approach to improve estimates of human motion.
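The sketch below illustrates one way such a coupling could look (an assumption, not this project's method): a linear Kalman filter whose prediction step uses a physics-based constant-velocity model, and whose update step fuses both a gyroscope measurement and the output of a hypothetical learned model, treated as an extra measurement with its own assumed variance so that its uncertainty is propagated through the filter covariance.

```python
import numpy as np

# Minimal sketch: Kalman filter tracking one joint angle and its angular
# velocity. Physics provides the motion model; `learned_correction` is a
# hypothetical placeholder for a data-driven component.

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity joint-angle dynamics
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
H_gyro = np.array([[0.0, 1.0]])         # gyroscope observes angular velocity
R_gyro = np.array([[1e-2]])             # gyroscope noise (assumed)

def learned_correction(state):
    """Hypothetical learned model: returns an angle estimate and its variance."""
    return state[0], 5e-2               # placeholder output, assumed variance

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def kf_step(x, P, gyro_meas):
    # Predict with the physics-based motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Fuse the gyroscope measurement.
    x, P = kf_update(x, P, np.array([gyro_meas]), H_gyro, R_gyro)
    # Fuse the learned model's output as a pseudo-measurement on the angle.
    angle_est, var = learned_correction(x)
    x, P = kf_update(x, P, np.array([angle_est]),
                     np.array([[1.0, 0.0]]), np.array([[var]]))
    return x, P

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(3)
for _ in range(100):
    x, P = kf_step(x, P, gyro_meas=0.5 + 0.05 * rng.normal())
```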