Hybrid intelligence stimulates both human intelligence and the development of the artificial variant. Catholijn Jonker is a strong advocate of this new vision. In other words: the computer as a sheepdog for humans.

The question of what artificial intelligence is and what problems it should solve has shifted over time. In the past, people were far better and faster at calculating than any machine. Since then, the problem of calculation has been completely solved by computers. But, quite aside from this, the mainstream approach in artificial intelligence (AI) has always focused on autonomous systems that can replace people and human activities. “However, I see much more potential in intelligent systems that collaborate with humans, can adapt to changing circumstances and explain their actions. The idea is that you make much greater progress by combining artificial intelligence with human intelligence. This enables both of them, human and computer, to grow. Compare it to a sheepdog that accentuates the sheep-herding capacities of a shepherd. We call that hybrid intelligence”, explains Catholijn Jonker, Professor of Interactive Intelligence at the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS).

Gravitation programme

This radical basic idea was last year awarded funding by the Gravitation programme, financed by the Ministry of Education, Culture and Science (OCW). This funding will enable leading researchers to spend ten years working on fundamental research and collaboration. The research proposal Hybrid Intelligence: Augmenting human intellect was awarded a total of €19 million. Together with colleagues at Amsterdam’s Vrije Universiteit (the main applicant), the University of Amsterdam, and the universities of Leiden, Utrecht and Groningen, Jonker and her Delft-based Interactive Intelligence department will set to work developing theory and methods for intelligent systems that collaborate with humans. The aim is for these systems to enhance our strengths and compensate for our weaknesses.

Jonker: “In the faculty of EEMCS, our main focus is on the dialogue between humans and machines, in which they each help each other by shining their own light on the shared challenge. This means having the capability to explain your thoughts to the other, and the social intelligence and ability to learn from that dialogue.”

In other words, it is essentially about building the bond between the computer and people, especially for the longer term. “Currently, for example, coaching systems work for a few weeks at most. But the interaction in a hybrid intelligence system needs to be able to adapt over a long period; and it needs to remain engaging and enthralling. This means that the development of hybrid intelligence will very much depend on knowledge from other disciplines, such as psychology.”

This is why it is particularly useful that Jonker not only works at TU Delft (since 2006) but also at Leiden University. Since 2017, she has also been a part-time professor at the Leiden Institute for Advanced Computer Science (LIACS). “This enables me to see connections with other disciplines, in a way that’s not possible at TU Delft. Within hybrid intelligence, these external connections are absolutely essential.”

Computer as co-author

Potential areas of application for hybrid intelligence include healthcare, education and science. “A good future example could be in my work as a scientist. You could imagine a scientist having a hybrid intelligent agent as a research assistant. This assistant would start by searching for what else has been published about a particular research subject, or checking whether your specific hypothesis has already been tested. The next step would be to involve this kind of system in thinking about your research strategy, or even brainstorming with you about the subject. The ultimate consequence of this would be that the agent would need to be credited as co-author of the scientific paper. But that’s still quite some way off, by the way.”

“I would be so bold as to say that this new approach makes us quite unique in the world of artificial intelligence. Although other research programmes have touched on this vision, they haven’t done it in the same way as we’re doing. This also makes it difficult. Perhaps you could compare us to Baron van Münchhausen: we’ll need to pull ourselves out of the swamp by the hair.”

Ethics

If all goes well, the Gravitation programme will not only deliver new scientific insights and applications, but will also play a crucial role in the debate about artificial intelligence and policy relating to ethics around AI. This subject of ethics touches on an essential point for Jonker: “For example, I’m one of the founders of the Delft Design for Values institute but I’m also involved in that other programme that received Gravitation funding: Ethics of socially disruptive technologies (SDTs).” New technological advances, such as robotics, synthetic biology, nanomedicine, molecular biology and neurotechnology, but also artificial intelligence, all have potential to bring about major changes to day-to-day life, culturally, socially and economically. But they also raise complex moral issues that call for ethical reflection.

“It basically comes down to not doing everything just because you can, which is an easy trap to fall into, especially in technology. If you don’t think carefully all the time, you eventually have no idea what it is you’re doing. And once you’ve introduced a new technology, it’s very difficult to remove it from the system. Just think about a problem like asbestos and how long we’ve been dealing with it.”

Pocket Negotiator

One area affected by an ethical issue was the development of the pocket negotiator: an AI system to support human negotiators. “I could have opted to equip this negotiator with the ability to recommend lying a little; after all, that could have been useful strategically, but I chose not to incorporate that option.”

The pocket negotiator is a good example of an AI development that has enjoyed years of success. Jonker was awarded a Vici grant in 2007 for the project. Since then, a spin-off company (WeGain) has developed around the technology and various customers are showing an interest. “It works in two ways. On the one hand, the pocket negotiator helps you think about your preferences for potential outcomes in the negotiation. On the other hand, it’s now becoming possible to actually ask the pocket negotiator for advice during the negotiation itself.”

Emotions

Ethical considerations are always an issue in artificial intelligence, even in the very long term. “For example, I sometimes think about the following dilemma: what happens if we keep making everything easier for ourselves through the development of AI? Would it make us increasingly lazier and would that ultimately become part of our DNA?”

In any case, the subject of artificial intelligence evokes all kinds of emotions, especially among the general public. One recurring fear is that artificial intelligence could make enormous numbers of jobs surplus to requirements, resulting in very high levels of unemployment. What does Jonker think about that, as a scientist? “I think you need to draw a distinction. Are you thinking about individual cases or the fate of humanity in general? For humanity as a whole, I actually think there’s nothing new under the sun, apart perhaps from the pace at which changes are now happening. Jobs will certainly disappear, but others will replace them. That’s what has always happened throughout history. I think that the rise of artificial intelligence will certainly have painful consequences in many individual cases, and we need to be vigilant about that. What will the people who lose their jobs be able to do? AI might actually be able to help in answering that question.”
