Wendelin Böhmer

I studied computer science and received my PhD from the Technical University of Berlin, after which I worked as a postdoc at the University of Oxford. This September I started as an assistant professor in the Algorithmics group within the Department of Software Technology here at the Delft University of Technology.

My research interests lie at the intersection of inductive and deductive reasoning in Artificial Intelligence (AI). Traditionally, deductive approaches like Operations Research (OR) dominated AI, but over the last two decades, inductive approaches like Machine Learning (ML) have captured both the name AI and the majority of public attention. While these new techniques address one of the major underlying flaws of deductive reasoning, namely the mismatch between model and reality, they come with their own blind spots. This is nowhere more visible than in Reinforcement Learning (RL), which inductively learns to interact with an unknown and possibly non-deterministic environment. Over the last 10 years, this paradigm has set world records in learning to play a variety of computer and board games, and is generally considered one of the most promising paths to general AI. At the same time, however, RL violates some of the most basic assumptions that make ML so successful in practical applications, most notably that training examples are independent and identically distributed. These discrepancies lie at the heart of inductive reasoning: generalization from examples. In RL, these examples are interactions with the environment, which, by the very nature of interactivity, change with the agent's behavior and are limited to the exact circumstances encountered during training. Not only do methods developed for ML cope poorly with these challenges, the learned solutions also go against the core competency of inductive reasoning: adaptation to reality.
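To make this concrete, here is a minimal sketch of the RL interaction loop: a tabular Q-learning agent on a toy chain environment. All names (`ChainEnv`, `train`) and all numbers are illustrative, not from any particular library; the point is that the data the agent learns from is generated by its own, changing behavior.

```python
import random

class ChainEnv:
    """A 5-state chain: step left or right, reward only in the last state."""
    def __init__(self, n_states=5):
        self.n_states = n_states
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):                     # action: 0 = left, 1 = right
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.n_states - 1, self.state + move))
        reward = 1.0 if self.state == self.n_states - 1 else 0.0
        return self.state, reward, reward > 0   # (next state, reward, done)

def train(episodes=200, alpha=0.1, gamma=0.9, epsilon=0.2):
    env = ChainEnv()
    q = [[0.0, 0.0] for _ in range(env.n_states)]   # tabular Q-values
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 100:             # cap episode length
            steps += 1
            # The training data depends on the agent's own behavior:
            # a different policy would visit different states entirely.
            if random.random() < epsilon or q[s][0] == q[s][1]:
                a = random.randrange(2)
            else:
                a = q[s].index(max(q[s]))
            s2, r, done = env.step(a)
            # Temporal-difference update of the visited state-action pair.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

print(train())
```

Unlike a supervised dataset, the transitions seen here are neither independent nor identically distributed: early in training the agent mostly wanders near the start of the chain, and the data distribution shifts as its policy improves.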

It is my belief that inductive reasoning alone will not allow us to progress in AI. Instead of treating the agent as a black box, which miraculously transforms complex input patterns into sensible interactions with the environment, we should aim for more "imaginative" agents. These agents should use deductive reasoning based on inner abstractions, models and beliefs, which both simplify reality and are constantly tested against it. There are several paths towards such a goal: structural constraints, auxiliary tasks, meta-learning and distributed reasoning. While the core competency of imaginative agents, the contextualization of learned knowledge, remains elusive, it is my goal to approach this question from many different angles until one clears the way towards a more general theory that allows us to construct software agents and autonomous robots that can be released into the wild.
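As one illustration of this idea, the following Dyna-style sketch (in the spirit of Sutton's classic Dyna-Q, not a method from my own work) gives an agent an inner model that is updated by real experience and then used for imagined, deductive value updates. It assumes a deterministic environment and can be run with the `ChainEnv` from the sketch above.

```python
import random

def dyna_q(env, episodes=100, imagined=10, alpha=0.1, gamma=0.9, epsilon=0.2):
    q = {}                        # inner value estimates (beliefs)
    model = {}                    # learned inner model: (s, a) -> (r, s')
    def qv(s):
        return q.setdefault(s, [0.0, 0.0])
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 100:
            steps += 1
            vals = qv(s)
            if random.random() < epsilon or vals[0] == vals[1]:
                a = random.randrange(2)
            else:
                a = vals.index(max(vals))
            s2, r, done = env.step(a)
            # Real experience both refines the values and tests/updates
            # the inner model against reality.
            model[(s, a)] = (r, s2)
            qv(s)[a] += alpha * (r + gamma * max(qv(s2)) - qv(s)[a])
            # "Imagination": deductive value updates replayed from the
            # learned model, without touching the real environment.
            for _ in range(imagined):
                (ms, ma), (mr, ms2) = random.choice(list(model.items()))
                qv(ms)[ma] += alpha * (mr + gamma * max(qv(ms2)) - qv(ms)[ma])
            s = s2
    return q
```

Even in this toy form, the imagined updates let the agent squeeze far more value out of each real interaction, which is exactly where an inner model that simplifies reality pays off.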

I am approaching these lofty goals within the framework of Deep Reinforcement Learning, which uses neural networks for approximate inductive reasoning. My current research focuses on structural constraints like Graph Neural Networks, adaptive constraints like Attention Architectures, uncertainty reduction with methods like Ensemble Estimates, and distributed reasoning like Multi-Agent Reinforcement Learning. However, I am generally interested in the entirety of RL, and I am always on the lookout for interesting applications of these techniques, in particular in robotics and OR. If you are working in any adjacent field and are interested in a collaboration, please don't hesitate to contact me at <j.w.bohmer@tudelft.nl>. I have more than 10 years of experience in this field and am happy to share my knowledge with you.
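As a taste of the uncertainty-reduction theme, here is a minimal sketch of ensemble estimates: several bootstrapped regressors (plain least-squares polynomials standing in for the neural networks used in deep RL) disagree most where they have seen no data, and an agent can use that disagreement to direct exploration. The toy function and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)                 # training inputs
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)

def fit_member(x, y, degree=5):
    idx = rng.integers(0, x.size, x.size)        # bootstrap resample
    return np.polyfit(x[idx], y[idx], degree)

ensemble = [fit_member(x, y) for _ in range(10)]

x_test = np.array([0.0, 0.9, 2.5])               # 2.5 lies far outside the data
preds = np.stack([np.polyval(w, x_test) for w in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)
for xi, m, s in zip(x_test, mean, std):
    # High std flags inputs the ensemble knows little about.
    print(f"x={xi:+.1f}  prediction={m:+.2f}  uncertainty={s:.2f}")
```

Inside the training range the members agree closely; at the extrapolated point their predictions diverge, and that spread is the epistemic uncertainty signal.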

Dr. J. W. Böhmer

EEMCS, Algorithmics

P.O. Box 5031, 2600 GA Delft
The Netherlands