Developments in sensing technologies have created new possibilities to capture the real world on a large scale, in 3D, and in high detail, and to infer semantic information about what each object or surface represents (e.g. a tree, a roof surface, terrain). Such information is of great value for gaining insight into the current state of an urban environment.
The aim of the 3DUU lab is to develop novel methods and techniques to automatically identify and model real-world objects in 3D for large-scale urban environments. These objects range from large structures such as buildings down to small objects such as trees, lampposts, and traffic signs.
In our research, we combine data from different sources, such as aerial images and vehicle-mounted laser scanners. Using artificial intelligence, our goal is to “teach” and “collaborate with” computers to automatically process and interpret large amounts of urban data, and to rapidly reconstruct, visualize, and annotate them to support design and planning. The semantically rich urban models produced by our techniques can, for example, serve as input for simulations that evaluate the impact of different scenarios related to flooding, traffic, energy, wind, pollution, noise, etc. These models are also a key input for mapping, localization, and traffic-scene analysis in intelligent vehicle research.