Representation learning for acting and planning

Prof. Hector Geffner (RWTH Aachen University, Linköping University)

20 October 14:00-15:30 | Pulse-Hall 4, 33.AT0.200 | https://collegerama.tudelft.nl/Mediasite/Channel/eemcs-cs-distinguished-speaker-lectures-cs-dsl/watch/1be845c35c614e14ae07acb38d8e8bf51d


Recent progress in deep learning and deep reinforcement learning (DRL) has been truly remarkable, yet two problems remain: structural policy generalization and policy reuse. The first is about getting policies that generalize in a reliable way; the second is about getting policies that can be reused and combined in a flexible, goal-oriented manner. The two problems are studied in DRL, but only experimentally, and the results are neither crisp nor clear. In our work, we have tackled these problems in a slightly different way, separating what is to be learned from how it is to be learned. For this, we have developed languages for expressing general policies and methods for learning them using combinatorial and DRL approaches. We have also developed languages and methods for expressing and learning general subgoal structures (sketches) and hierarchical policies, which exploit the notion of problem width, a measure developed for bounding the complexity of classical planning problems.
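
As a rough illustration of what a "general policy" in this sense looks like (a sketch, not code from the talk), the snippet below encodes the classic Blocksworld example of achieving clear(x): a policy given as rules over two features, H (holding a block) and n (number of blocks above x), in the spirit of Bonet and Geffner's feature-based policies. The state representation (`holding`, `above`) and function names are assumptions made for the example.

```python
def features(state, x):
    """Hypothetical feature extractor: state is a dict with 'holding'
    (a block or None) and 'above' (block -> list of blocks stacked on it)."""
    H = state["holding"] is not None        # H: holding some block?
    n = len(state["above"][x])              # n: number of blocks above x
    return H, n

def general_policy(state, x):
    """Select the next abstract action from the feature values alone.
    Rules:  {not H, n > 0} -> pick up the topmost block above x
            {H}            -> put the held block on the table"""
    H, n = features(state, x)
    if not H and n > 0:
        return ("pickup", state["above"][x][-1])
    if H:
        return ("put-on-table", state["holding"])
    return None                              # not H and n == 0: x is clear
```

The same two rules solve the task for any number of blocks because they refer only to the feature values H and n, not to the states of a particular instance; this is the kind of structural generalization the abstract refers to.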

This is joint work with Blai Bonet, Simon Stahlberg, Dominik Drexler, and other members of the RLeap team.