ELLIS Delft Talk: From Generalization to Reliability in Reinforcement Learning
4 May 2021, 16:00
Reinforcement learning (RL) has demonstrated exceptional potential for controlling machines in complex environments, but has so far seen little real-world deployment. This is mainly because learned control policies are not yet reliable enough for potential applications such as autonomous driving and autonomous factories. Neural networks can generalize well within the training distribution, but often fail in critical moments once agents leave the lab. I will present my recent work on in-distribution generalization in deep RL, in particular in multi-task and multi-agent RL. Our latest findings indicate a significant advantage of attention and graph neural networks in multi-task learning, and I will try to explain this in the context of generalization by parameter sharing.
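To make "generalization by parameter sharing" concrete, here is a minimal numpy sketch (not the speaker's actual architecture; all dimensions and names are hypothetical): a single shared encoder feeds small per-task policy heads, so experience from every task trains the same encoder weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
obs_dim, hid_dim, n_actions, n_tasks = 8, 16, 4, 3

# Shared encoder weights: reused by every task (parameter sharing).
W_shared = rng.normal(scale=0.1, size=(obs_dim, hid_dim))

# One small task-specific output head per task.
heads = [rng.normal(scale=0.1, size=(hid_dim, n_actions))
         for _ in range(n_tasks)]

def policy_logits(obs, task_id):
    """Action logits: shared encoding followed by a task-specific head."""
    h = np.tanh(obs @ W_shared)   # representation shared across all tasks
    return h @ heads[task_id]     # task-specific mapping to actions

obs = rng.normal(size=obs_dim)
logits = [policy_logits(obs, t) for t in range(n_tasks)]
```

Because `W_shared` appears in every task's policy, gradient updates from any one task reshape the representation used by all of them; an attention or graph network could play the role of this shared encoder.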
Lastly, I will discuss my future plans to create attention networks that can control their own epistemic uncertainty. I hope this will enable out-of-distribution generalization in deep RL, which I believe is an essential step toward learning reliable, and therefore practically applicable, control.
To join this event, please contact Frans Oliehoek.