The key to AI innovation: human-AI interaction

Claudia Hauff

Frans Oliehoek

TU Delft does a lot of fundamental research in the field of AI. Key themes include improving smart systems for information retrieval and decision-making in complex environments. Researchers Claudia Hauff and Frans Oliehoek talk about the challenges in their field.

Dr Claudia Hauff, associate professor, leads the Lambda-Lab in Delft. One of its key areas of focus is information retrieval: research into strategies for optimising how information is collected. For example, Hauff is attempting to improve search results through interaction between the user and a smart search system. Hauff: “Think Siri or Google Voice, but better. Systems that learn from consecutive searches what the user really needs and do better the next time.”

Training data

 

The smart systems work based on machine learning: algorithms that can learn from training data (see box). Hauff mainly uses one type: deep learning. But finding enough data can be a challenge, Hauff explains. “Data from online search engines can teach us how people search and we can use that to train smart systems. But it’s not public data.” For this reason, Hauff works with a different source of data, closer to home. TU Delft offers more than 80 accessible online courses with over two million participants, and Hauff runs interventions in them. Hauff: “For example, we test how behaviour changes when we put questions to participants after a lesson, or if we show them other participants’ results. That provides input for our deep-learning systems.”
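To give a rough sense of what “learning from training data” means in practice, the sketch below trains a small neural network on made-up data. It is only an illustration, not the Lambda-Lab’s actual pipeline: the features (videos watched, quiz attempts, forum posts) and the label (passing the next quiz) are hypothetical stand-ins for the kind of learner-behaviour signals an online course could provide.

```python
# Minimal sketch of supervised machine learning, not the Lambda-Lab's code.
# Features and labels are hypothetical stand-ins for online-course behaviour.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: videos watched, quiz attempts, forum posts (standardised).
X = rng.normal(size=(n, 3))
# Hypothetical label: did the participant pass the next quiz?
y = (X @ np.array([0.8, 0.5, 0.2]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The model never sees the rule that generated the labels; it only sees examples, which is exactly what makes large amounts of training data so valuable.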

 

Traffic simulations

 

Interactions are also key to associate professor Dr Frans Oliehoek’s research, but in this case between AI systems and humans, and between smart systems themselves. Oliehoek is in charge of the INFLUENCE project, which explores interactive learning and decision-making in situations with uncertainties. Examples include traffic simulations involving several smart systems. Oliehoek is also part of ELLIS Delft (see box). “Hauff and I have a shared goal: to support human-system interaction.”

In simulations, existing smart systems can already enable a self-driving car to make a decision at a junction. But the algorithms that Oliehoek is developing are aimed at a larger scale. “This is not about data from a junction, but all the traffic data in a city. Thousands of variables. That’s what we aim to be able to manage.” Oliehoek is using reinforcement learning (see box) to teach smart systems to make a series of decisions, enabling them to think in more abstract terms. Oliehoek: “As a result, self-driving cars can deal with uncertainties, such as the effect of rain on road holding. Or anticipate other smart systems. The point of the simulations is to test the fundamental principles.”
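To make the idea of reinforcement learning concrete, here is a deliberately tiny sketch, not the INFLUENCE project’s code: a tabular Q-learning agent that repeatedly decides whether to keep or switch the green phase at a single toy junction. The queue model, rewards and parameters are invented purely for illustration.

```python
# Minimal sketch of reinforcement learning on a toy junction (illustrative only).
import random

N_QUEUE = 5                      # discretised queue length on the waiting road
ACTIONS = [0, 1]                 # 0 = keep current green phase, 1 = switch phase
alpha, gamma, epsilon = 0.1, 0.95, 0.1

Q = {(q, a): 0.0 for q in range(N_QUEUE) for a in ACTIONS}

def step(queue, action):
    """Toy dynamics: switching clears the queue, keeping lets it grow."""
    if action == 1:
        return 0, -1                              # small cost for switching
    next_queue = min(queue + 1, N_QUEUE - 1)
    return next_queue, -next_queue                # waiting cars are penalised

queue = 0
for _ in range(20000):
    # epsilon-greedy: mostly exploit current knowledge, occasionally explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(queue, a)])
    next_queue, reward = step(queue, action)
    best_next = max(Q[(next_queue, a)] for a in ACTIONS)
    Q[(queue, action)] += alpha * (reward + gamma * best_next - Q[(queue, action)])
    queue = next_queue

for q in range(N_QUEUE):
    print("queue", q, "-> best action:", max(ACTIONS, key=lambda a: Q[(q, a)]))
```

The agent learns a series of decisions from trial and error rather than from labelled examples; scaling this up from one junction to thousands of interacting variables across a city is what Oliehoek’s research targets.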

 

Balancing

 

Scaling up is the main challenge. More specifically: teaching systems to strike a balance between using existing knowledge (e.g. following the rule “stay in lane”) and exploring for new knowledge (trying out something new and learning from it). In complex simulations, a system also has to deal with the expectations of other systems. Oliehoek: “We’re developing different strategies to teach systems how to deal with considerations such as long-term versus short-term results.”
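This exploration–exploitation balance can be shown with a textbook toy problem (again a sketch, not Oliehoek’s actual method): an agent choosing between three options whose payoffs it does not know. Never exploring locks it into its first guess; exploring a small fraction of the time lets it discover the better option.

```python
# Illustrative exploration vs. exploitation on a simple multi-armed bandit.
import random

true_means = [0.2, 0.5, 0.8]     # hypothetical payoff of each option

def run(epsilon, steps=5000):
    estimates = [0.0] * len(true_means)
    counts = [0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:                 # explore: try something new
            arm = random.randrange(len(true_means))
        else:                                         # exploit: use what we know
            arm = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = random.gauss(true_means[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean
        total += reward
    return total / steps

print("never explore:          ", run(epsilon=0.0))
print("explore 10% of the time:", run(epsilon=0.1))
```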

Fundamental research of this kind provides breakthroughs and proofs of concept for learning AI systems in complex interactive settings such as robots and self-driving cars. In this respect, Delft is way ahead of, for example, tech companies. Oliehoek: “Many tech companies also do AI research, but only a few focus on human-AI interaction. When they do, it’s for games such as Go. Delft is strong in thinking about how AI systems can be used socially and ethically.”
