Thesis defence C. Wang: robot learning
20 June 2017 15:00 - Location: Aula, TU Delft - By: Webredactie
Robot Learning of Affordances: Towards Developing Assistive Household Robots. Promotor: Prof.dr. R. Babuska (3mE).
Learning capability is essential for service robots to develop useful manipulation skills and solve household tasks. It is useful for robots to learn object affordances, which capture the potential effects of actions on objects. This information is task-independent and can be used to select actions for solving a variety of tasks. In this dissertation, we are interested in the efficient learning and use of affordances by robots. Learning and use are considered together rather than as two independent stages: robots have to cope with changing environments and must therefore learn affordances whenever necessary. Efficiency also matters, because affordance learning through embodied interaction with objects is typically time-consuming, as enough training data must be collected. Continuous action spaces offer infinitely many action choices, which makes data collection even harder. Moreover, it is inefficient for the robot to learn every object from scratch; the robot needs to reuse relevant past experience to improve its current task performance.
This dissertation aims to improve the efficiency of affordance learning and use. We have considered three learning mechanisms that speed up data collection and task solving.
First, we have proposed on-line learning of affordances, which enables on-line data collection whenever the robot applies an effective action to an object.
At the same time, the learned affordances can be used to avoid undesired actions within the reinforcement learning framework in which goal-directed tasks are formulated.
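The idea of using learned affordances to filter out undesired actions can be sketched as follows. This is an illustrative toy example, not the dissertation's implementation: the affordance model, state encoding, and action set are all assumptions made for the sketch.

```python
import random

def affordance_effective(state, action):
    # Stub affordance model (illustrative): pushing (action 0) is only
    # predicted to have an effect when the object is reachable (state > 0).
    return action != 0 or state > 0

def select_action(state, q_values, actions, epsilon=0.1, rng=random):
    # Keep only actions the affordance model predicts to be effective,
    # then choose among them epsilon-greedily using the Q-values.
    feasible = [a for a in actions if affordance_effective(state, a)]
    if not feasible:
        feasible = list(actions)  # fall back if everything is masked
    if rng.random() < epsilon:
        return rng.choice(feasible)
    return max(feasible, key=lambda a: q_values.get((state, a), 0.0))
```

Masking actions this way shrinks the effective action set the reinforcement learner has to explore, which is one way affordance knowledge can speed up goal-directed learning.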
Second, we have proposed active learning of affordances, which speeds up data collection through active exploration in continuous action spaces. Affordance models are learned to predict action effects in continuous spaces, while the prediction error serves as the reward signal for updating the action selection policy. Third, we have proposed transfer learning of affordances, which reuses the learned affordances of relevant objects to speed up the learning of a new task. The robot decides by itself not only whether transfer learning should take place, but also how to adjust its action selection strategy.
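A minimal sketch of the active-learning idea, under assumed simplifications: the continuous action space is [0, 1), the affordance model is a per-bin running mean of observed effects, and the per-bin prediction error acts as an intrinsic reward that drives exploration toward poorly modelled regions. All class and method names are illustrative.

```python
import random

class ActiveExplorer:
    """Toy active learner: sample actions where the model's error is largest."""

    def __init__(self, bins=10):
        self.bins = bins
        self.model_sum = [0.0] * bins   # running sums of observed effects
        self.model_n = [0] * bins       # observation counts per bin
        # Optimistic initial error so unexplored regions are tried first.
        self.err = [float("inf")] * bins

    def _bin(self, action):
        return min(int(action * self.bins), self.bins - 1)

    def predict(self, action):
        b = self._bin(action)
        return self.model_sum[b] / self.model_n[b] if self.model_n[b] else 0.0

    def select_action(self, rng=random):
        # Explore the region with the largest current prediction error.
        b = max(range(self.bins), key=lambda i: self.err[i])
        return (b + rng.random()) / self.bins

    def observe(self, action, effect):
        b = self._bin(action)
        error = abs(effect - self.predict(action))  # intrinsic reward
        self.model_sum[b] += effect
        self.model_n[b] += 1
        self.err[b] = error
        return error
```

Because high-error regions are revisited and well-modelled regions are not, the data collection concentrates where the affordance model still has something to learn.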
We have demonstrated through real-world experiments with the humanoid robot NAO that the proposed affordance learning methods are more efficient than previous approaches in the literature.
Finally, we have proposed an agent-based robot control architecture that facilitates affordance learning and reasoning at different cognitive levels. In contrast to affordance learning, which takes place at the sub-symbolic level through embodied robot interaction with objects, reasoning takes place at a higher symbolic level. The two levels interact closely: affordance learning is controlled by the cognitive layer, while the affordance knowledge stored in the cognitive layer is grounded in the robot's own sensorimotor experience. The agent autonomously decides when to switch affordance learning on or off. This approach is efficient for task execution, because it is not necessary to spend time on affordance learning when the available affordance knowledge already suffices to solve the task. The proposed architecture enhances the robot's ability to solve complex real-world tasks.
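The switching behaviour described above can be sketched in a few lines. This is a hypothetical simplification: the knowledge store, the coverage check, and the `learn_fn` callback are all assumptions made for illustration, not the architecture's actual interfaces.

```python
class CognitiveLayer:
    """Toy symbolic layer: triggers sub-symbolic learning only when needed."""

    def __init__(self):
        # Symbolic affordance knowledge: (object, action) -> expected effect.
        self.knowledge = {}

    def knows(self, obj, action):
        return (obj, action) in self.knowledge

    def execute(self, obj, action, learn_fn):
        if not self.knows(obj, action):
            # Knowledge is insufficient: switch affordance learning on.
            # learn_fn stands in for time-consuming embodied interaction,
            # grounding the stored knowledge in sensorimotor experience.
            self.knowledge[(obj, action)] = learn_fn(obj, action)
        # Knowledge suffices: act without spending time on learning.
        return self.knowledge[(obj, action)]
```

The point of the sketch is the control flow: the costly learning routine runs at most once per (object, action) pair, and subsequent task executions reuse the grounded knowledge directly.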
Theses by TU Delft PhD students are available in the TU Delft Repository, the university's digital archive of publications. A thesis becomes available within a few weeks after the defence.