For more information about any of the projects, don’t hesitate to get in touch!
Biomorphic design of aerial robots for cluttered environments
Bio-inspired design and morphology greatly impact aerial robots' manoeuvrability, aerodynamics, and endurance when deployed in obstacle-dense environments (urban areas, forests, and post-disaster sites). Different morphologies can facilitate navigation in certain areas or suit interaction with different objects, e.g., delicate objects vs. rigid walls, and morphology can also be adapted in flight in response to changing environments. Moving away from standard drone configurations, the student will investigate new shapes that are best suited to flight in cluttered environments. Moreover, the design variables will be optimized to support multiple behaviors, such as dynamic landing, gliding, and perching. This project equips the aerial platform with extensive actuation and mobility capabilities for morphing, thereby addressing ‘act’.
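As a toy illustration of optimizing design variables against multiple behaviors, the sketch below scores hypothetical morphology parameters (wing span and sweep) against gliding and perching with a weighted sum; the cost functions, variables, and grid are all illustrative assumptions, not the project's actual method, and a real study would use far richer models and optimizers.

```python
# Illustrative sketch only: weighted-sum trade-off over hypothetical
# morphology design variables, scored against two behaviours.
import itertools

def glide_score(span, sweep):
    # Hypothetical proxy: larger span helps gliding, high sweep hurts it.
    return span - 0.5 * sweep

def perch_score(span, sweep):
    # Hypothetical proxy: compact, swept shapes perch more easily.
    return sweep - 0.3 * span

def combined(span, sweep, weights=(0.5, 0.5)):
    # Scalarised multi-behaviour objective (weights are assumptions).
    return weights[0] * glide_score(span, sweep) + weights[1] * perch_score(span, sweep)

def best_morphology(spans, sweeps, weights=(0.5, 0.5)):
    # Exhaustive search over a small design grid; realistic studies would
    # use evolutionary or Bayesian optimisation over many more variables.
    return max(itertools.product(spans, sweeps),
               key=lambda d: combined(*d, weights))

span, sweep = best_morphology([0.3, 0.5, 0.8], [0.0, 0.2, 0.4])
```

In practice the behaviors conflict, so the interesting output is the Pareto front of designs rather than a single weighted optimum.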
Tactile-based control for interaction with aerial robots
Tactile feedback plays a key role in human manipulation skills: it ensures safe interaction with delicate targets, enables manipulation of soft, malleable objects, and improves accuracy and user experience. Likewise, drones interacting with the environment need a sense of touch to conduct tasks precisely and safely. Inspired by humans, the student will develop tactile-driven control of robotic tools mounted on the drone to favor safe navigation in cluttered environments, to tune the force output at the end-effector based on sensory input, and to enable behaviors such as dynamic interaction and real-time control. Example targets envisioned for this project are tree trunks and curved surfaces such as tunnels or pipes. This project addresses ‘sense, think & act’.
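A minimal sketch of the kind of tactile-driven force tuning described above: a proportional admittance law that commands an approach/retreat velocity from the error between a desired and a measured contact force. All gains, the spring contact model, and the units are hypothetical assumptions for illustration, not the project's controller.

```python
# Sketch (assumed gains/model): regulate end-effector contact force
# from a scalar tactile reading via a proportional admittance law.

def force_to_velocity(f_measured, f_desired, kp=0.01, v_max=0.05):
    # Push toward the target when contact is too light, retreat when it
    # is too firm; saturate the command for safety.
    v = kp * (f_desired - f_measured)
    return max(-v_max, min(v_max, v))

def simulate(f_desired=2.0, k=100.0, dt=0.02, steps=200):
    # Simulated contact: force grows with penetration depth (stiffness k).
    x = 0.0  # penetration depth (hypothetical units, metres)
    for _ in range(steps):
        f = k * max(x, 0.0)                       # simple spring model
        x += force_to_velocity(f, f_desired) * dt  # integrate command
    return k * max(x, 0.0)                        # final contact force

final_force = simulate()
```

The loop settles near the 2 N target because the admittance law drives the force error toward zero; on a real drone the same feedback would also have to reject the vehicle's own motion.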
Aerial obstacle avoidance using dynamic vision sensors
Biologically inspired dynamic vision sensors (or event cameras), which asynchronously respond to intensity changes at each pixel, have grown in popularity for vision-based robotics applications due to their very high temporal resolution and power efficiency. Building on the synergy between the speed and efficiency of event cameras and the low-power and low-latency requirements of lightweight aerial robots, this project will focus on the early detection and avoidance of obstacles in a drone's flight path. The student will develop algorithms to predict the distance to, and time of impact with, potential obstacles and to estimate obstacle-free routes using a stereo setup of two onboard event cameras. Such an approach is particularly suited to navigation in areas filled with fragile objects and adds an extra layer of safety to the robot. This project addresses ‘sense & think’.
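A simplified sketch of the stereo range and time-to-impact estimate mentioned above, assuming event correspondences between the two cameras have already been matched into a disparity. The focal length, baseline, and approach speed are hypothetical numbers; the project's actual algorithms would operate on asynchronous event streams.

```python
# Sketch (assumed calibration values): obstacle range and time to
# impact from stereo disparity at constant approach speed.

def depth_from_disparity(disparity_px, focal_px=200.0, baseline_m=0.1):
    # Standard stereo relation: Z = f * B / d.
    return focal_px * baseline_m / disparity_px

def time_to_impact(disparity_px, approach_speed_mps):
    # Seconds until the drone reaches the obstacle at constant speed.
    return depth_from_disparity(disparity_px) / approach_speed_mps

ttc = time_to_impact(disparity_px=10.0, approach_speed_mps=2.0)
```

Because event cameras report changes with microsecond timestamps, the disparity (and hence this estimate) can be refreshed far faster than a frame-based pipeline allows, which is what makes early avoidance feasible.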
Object detection using neuromorphic systems in drones
Neuromorphic systems have the potential to implement powerful computer vision algorithms at only a fraction of the power cost of modern deep learning models. Nevertheless, implementing biologically plausible spiking neural network models at large scale for realistic computer vision applications is still an under-explored research area. This work package will focus on real-time object detection on drones using neuromorphic computing for navigation in unknown environments. The student will develop a deep spiking network algorithm for object detection using standard RGB video input from an onboard camera and implement it on a state-of-the-art neuromorphic board. This project addresses ‘sense & think’.
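To make the spiking-network idea concrete, here is a toy leaky integrate-and-fire (LIF) update, the basic unit such networks are built from: neurons leak, integrate input current, and emit a binary spike (then reset) when a threshold is crossed. The decay, threshold, and inputs are illustrative assumptions, and real implementations would run on neuromorphic hardware rather than in plain Python.

```python
# Toy LIF layer sketch (assumed decay/threshold values).

def lif_step(v, input_current, decay=0.9, threshold=1.0):
    # Leak and integrate, then fire-and-reset where the threshold is hit.
    v = [decay * vi + ii for vi, ii in zip(v, input_current)]
    spikes = [1 if vi >= threshold else 0 for vi in v]
    v = [0.0 if s else vi for vi, s in zip(v, spikes)]
    return v, spikes

# Drive two neurons for three timesteps and count their spikes;
# only the strongly driven neuron crosses threshold.
v, totals = [0.0, 0.0], [0, 0]
for _ in range(3):
    v, spikes = lif_step(v, [0.6, 0.2])
    totals = [t + s for t, s in zip(totals, spikes)]
```

The power advantage comes from this event-driven behavior: a neuron that stays below threshold produces no spikes and therefore no downstream computation.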
Graph representations for efficient processing of visual event data
Event-based cameras capture compressed videos as sequences of "illumination change" events, allowing for high temporal resolution and low power consumption. However, many competitive algorithms first convert the events into frame-based videos and then apply standard computer vision methods such as CNNs. Consequently, these methods neglect the sparse nature of the events, resulting in less efficient algorithms with high latency. An alternative approach is to utilize graph neural networks (GNNs), which enable direct processing of events as point clouds thanks to their capability of handling irregularly structured data. In particular, the focus of this project is to make complex computer vision tasks such as object detection computationally efficient by proposing novel online spatiotemporal GNN techniques. This project addresses ‘sense & think’.
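A minimal sketch of the graph-construction step underlying such GNN pipelines: each event (x, y, t) becomes a node, and edges link events that are close in both space and time, preserving sparsity instead of densifying into frames. The radii below are hypothetical, the search is brute-force for clarity, and a real pipeline would build the graph incrementally and pass it to a spatiotemporal GNN.

```python
# Sketch (assumed thresholds): turn a batch of events into a
# spatiotemporal graph, with edges between nearby events.

def build_event_graph(events, r_space=2.0, r_time=0.01):
    # events: list of (x_px, y_px, t_seconds) tuples.
    edges = []
    for i, (xi, yi, ti) in enumerate(events):
        for j, (xj, yj, tj) in enumerate(events):
            if i < j:
                close_space = (xi - xj) ** 2 + (yi - yj) ** 2 <= r_space ** 2
                close_time = abs(ti - tj) <= r_time
                if close_space and close_time:
                    edges.append((i, j))
    return edges

# Four events: two nearby in space and time, one far away, one too late.
events = [(10, 10, 0.000), (11, 10, 0.002), (50, 50, 0.003), (11, 11, 0.020)]
edges = build_event_graph(events)
```

Processing this graph directly keeps the cost proportional to the number of events rather than the number of pixels, which is the efficiency argument the project builds on.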