Smart software assistant knows what your colleagues would do

By Agaath Diemel

Insure or not? Operate or not? Fraud or not? Many organisations have to make a lot of fast decisions that can have considerable impact. Spin-off Councyl helps them to digitalise their expertise in order to support this decision-making. For this, co-founder Caspar Chorus uses his twenty-plus years of experience in research on choice modelling.

You could hardly conceive of a more relevant example: an intensive care specialist who has to decide in the middle of the night whether a Covid-19 patient should be admitted to Intensive Care. “Ideally a doctor would like to consult with colleagues on this, but the timing or time constraints may make this impossible. So they often make very lonely choices,” Caspar Chorus, Professor of Choice Behaviour, tells us. But it is not only hospitals that face pressure in decision-making. “Last year the Immigration and Naturalisation Service had to pay €65 million in fines because people were kept on waiting lists too long. Many organisations need help to become quicker and more efficient.”

Rule-based systems and unfathomable AI


Decision-support systems are nothing new. Until recently there were two types. One of these works according to fixed decision rules. “To stick with the ICU example, a system of this kind looks at certain threshold values for age or BMI, etc. In practice, however, a doctor will always have to weigh many factors, so that a slightly elevated BMI, for example, need not be the deciding factor at all.” At the other end of the spectrum is the newest generation of AI systems. “These can use machine learning to discover all kinds of connections in large data sets, such as millions of patient files. But whether a specific decision is then based on the condition of the patient or on an irrelevant combination of factors remains unfathomable.”

Councyl finds the happy medium between rigid rule-based systems and unfathomable AI. Rather than big data, Councyl works with expert data. “Our principle is that people find it difficult to put their expertise into words; they are unconsciously competent, as it is known. We make this knowledge explicit.” This is done on the basis of experiments. “We generate hypothetical decision scenarios and get experts to evaluate them. In the case of the ICU, we use hypothetical patients. You can think up millions of these, but we can extract the 20 or 30 that tell us what we want to know about the implicit decision rules that experts use.”
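For readers curious how such an elicitation could work in practice, here is a minimal sketch in Python. Everything in it is an illustrative assumption, not Councyl's actual method or data: hypothetical scenarios are reduced to (age, BMI, frailty) triples, a stand-in function plays the role of the unconsciously competent expert, and a simple binary logit is fitted by gradient ascent to make the implicit decision weights explicit.

```python
import math
import random

random.seed(0)

def expert_decision(age, bmi, frailty):
    # Stand-in for the human expert: a hidden trade-off the
    # experiment tries to recover (unknown to the analyst).
    return 1 if 4.0 - 0.04 * age - 0.05 * bmi - 0.8 * frailty > 0 else 0

# A few dozen hypothetical patients, as in the 20-30 scenarios mentioned.
scenarios = [(random.uniform(20, 90), random.uniform(18, 45), random.uniform(0, 5))
             for _ in range(30)]
labels = [expert_decision(*s) for s in scenarios]

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Fit a binary logit by plain gradient ascent on normalised attributes,
# turning the expert's implicit rules into explicit weights.
features = [[1.0, age / 100.0, bmi / 50.0, fr / 5.0] for age, bmi, fr in scenarios]
weights = [0.0] * 4  # intercept + three attribute weights
for _ in range(3000):
    grad = [0.0] * 4
    for x, y in zip(features, labels):
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
        for j in range(4):
            grad[j] += (y - p) * x[j] / len(features)
    weights = [w + 0.5 * g for w, g in zip(weights, grad)]

# How often the fitted model reproduces the expert's own choices.
predicted = [1 if sigmoid(sum(w * xi for w, xi in zip(weights, x))) > 0.5 else 0
             for x in features]
accuracy = sum(p == y for p, y in zip(predicted, labels)) / len(labels)
print(round(accuracy, 2), [round(w, 2) for w in weights])
```

The signs and relative sizes of the fitted weights are what "making knowledge explicit" amounts to here: they reveal which factors weigh against admission and how strongly, even though the expert never stated them.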

Identifying expert knowledge


For this Councyl uses techniques from econometric market research. “Choice experiments of this kind, built around smart scenarios, were initially conceived to chart the preferences of large groups of people, for use in developing new products, for example. We are not predicting what type of iPhone someone will buy, but what percentage of colleagues would make a certain decision and why. We are taking this method, which I have been working on for years, and turning it 90 degrees so we can use it to identify the knowledge of small groups of experts. We are using an old hammer in a new way. Mathematically it works almost the same, but the interpretation and the goal are completely new,” explains Chorus. Chorus has spent more than twenty years exploring choice modelling. He became very well known for the regret-based model he developed, in contrast to the traditional utility-based model. His hypothesis was that people do not always base their choices on what maximises utility, but make decisions in the hope of avoiding regret afterwards. The regret-based model is now part of the most widely used econometric software packages. Nowadays he also examines the moral aspects of choice behaviour.
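The contrast between the two model families can be shown in a few lines of Python. This is a sketch under invented numbers: three alternatives described by two attributes, with illustrative weights. The utility model scores each alternative by a weighted sum; the regret model, following Chorus's formulation, sums over every competitor and attribute a term that grows when the competitor is better on that attribute.

```python
import math

# Three alternatives described by two attributes (both scaled so higher
# is better); weights are illustrative, not estimated from data.
alternatives = {"A": [0.9, 0.2], "B": [0.5, 0.5], "C": [0.2, 0.9]}
betas = [1.0, 1.0]

def utility_probs(alts, betas):
    # Classic random-utility logit: utility is a weighted sum of attributes.
    v = {k: sum(b * x for b, x in zip(betas, xs)) for k, xs in alts.items()}
    denom = sum(math.exp(u) for u in v.values())
    return {k: math.exp(u) / denom for k, u in v.items()}

def regret_probs(alts, betas):
    # Random regret minimisation: regret of an alternative accumulates
    # ln(1 + exp(beta * (x_other - x_own))) over competitors and attributes,
    # so being beaten on any attribute is penalised.
    r = {}
    for k, xs in alts.items():
        regret = 0.0
        for k2, xs2 in alts.items():
            if k2 == k:
                continue
            for b, x_own, x_other in zip(betas, xs, xs2):
                regret += math.log(1.0 + math.exp(b * (x_other - x_own)))
        r[k] = regret
    denom = sum(math.exp(-ri) for ri in r.values())
    return {k: math.exp(-ri) / denom for k, ri in r.items()}

pu = utility_probs(alternatives, betas)
pr = regret_probs(alternatives, betas)
print(pu, pr)
```

Running this shows the characteristic "compromise effect" of the regret model: alternative B, which is never far behind on any attribute, gets a larger choice share under regret minimisation than under utility maximisation, even though its weighted sum is the lowest.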

Digital assistant and human expert deciding together

Councyl uses the expert data it obtains as a basis to customise its software assistant to fit each organisation, whether it be a hospital, a mortgage provider or a social benefits provider. “We can capture experts’ knowledge at domain, company or national level.” The degree of obligation of the advice may vary. “For the example of the incoming patient in the ICU, the system shows what percentage of colleagues would behave in a certain way and why. We also show the counter-arguments. So the doctor receives a no-obligations online recommendation that is well supported. But any number of situations are conceivable. Take operating a power station, where the system could take over certain choices automatically, for example if a minimum of nine out of ten experts would agree that a valve needs replacing. And between these two extremes is a grey area where the digital assistant and the human expert could decide together.”
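The three regimes described here, from no-obligation advice to full automation, amount to routing on the predicted expert consensus. A minimal sketch, with thresholds that are illustrative rather than Councyl's:

```python
# Route a decision based on the predicted fraction of colleagues who
# would choose "yes". Thresholds are invented for illustration.
def route(consensus_share, automate_at=0.9, advise_at=0.6):
    if consensus_share >= automate_at or consensus_share <= 1 - automate_at:
        return "automatic"        # e.g. at least nine out of ten experts agree
    if consensus_share >= advise_at or consensus_share <= 1 - advise_at:
        return "recommendation"   # no-obligation advice with counter-arguments
    return "joint decision"       # grey area: human and assistant decide together

print(route(0.95), route(0.7), route(0.55))  # → automatic recommendation joint decision
```

Note that the routing is symmetric: an overwhelming consensus to decline (say, 5%) is just as automatable as an overwhelming consensus to accept.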

I can publish about this and show my fellow specialists how things can be done differently, but sometimes it takes a long time to get the science into practice.

Caspar Chorus, Full Professor & Head of the Department of Engineering Systems and Services

Inner workings


Once it is taken into use, the software assistant continues to learn. “Interaction arises between the digital assistant and the human expert, each inputting their own choices. This feeds new knowledge into the system, not through extra decision rules, but through the heads of experts, who keep acquiring new knowledge themselves. This is really a technological innovation in my field,” says Chorus. “We saw it with Covid-19 patients in the ICU. Doctors have growing knowledge and now take account of different criteria than they did a year ago, or attach different importance to these criteria. A manager can then take a look at the inner workings of the system, to see how the choices develop. This promotes a continuous dialogue on what is decided and why. I believe that in this way we can increase not only the efficiency and speed of decision-making, but also the quality and fairness of decisions.”
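One simple way to implement such a feedback loop, sketched here under invented assumptions (window size, threshold and the `DriftMonitor` name are all hypothetical), is to log the assistant's advice alongside the expert's actual choice and treat a rising disagreement rate as a signal that the experts' implicit criteria have shifted and the model should be re-elicited:

```python
from collections import deque

class DriftMonitor:
    """Tracks recent disagreement between assistant advice and expert choices."""

    def __init__(self, window=50, threshold=0.3):
        self.recent = deque(maxlen=window)  # rolling log of disagreements
        self.threshold = threshold

    def log(self, assistant_choice, expert_choice):
        self.recent.append(assistant_choice != expert_choice)

    def needs_reelicitation(self):
        # Only judge once a full window of decisions has been observed.
        if len(self.recent) < self.recent.maxlen:
            return False
        return sum(self.recent) / len(self.recent) > self.threshold

monitor = DriftMonitor(window=10, threshold=0.3)
# Experts start deviating from the assistant, as when ICU criteria evolved.
for assistant, expert in [(1, 1)] * 6 + [(1, 0)] * 4:
    monitor.log(assistant, expert)
print(monitor.needs_reelicitation())  # → True (4/10 disagreement > 0.3)
```

A manager inspecting this log sees not just that choices are drifting, but on which decisions, which is the raw material for the continuous dialogue described above.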

The human measure


There is a real need for this. “Take the Dutch childcare benefits scandal, for instance. People were sometimes mercilessly let down by the system if they earned just a few euros too much, because a stringent rule-based system was used. The human measure was lacking.” And the opaque System Risk Indication (SyRI) algorithm, which the government used to track down benefits fraud, was banned by the court in 2020 because it violated fundamental human rights. “As a start-up, Councyl has been able to benefit from such scandals, yet we also see a certain reticence towards the use of AI in general. But when we enter into dialogue with governmental bodies or companies, they soon see that we actually avoid the black box and retain the human measure.” Councyl has already helped some ten paying customers to their satisfaction, and the start-up is now looking for investors.

Innovation born from irritation


It was precisely his irritation about scandals like these that motivated him towards innovation. “In recent years we've been hearing a lot about how AI is going to take over the world, but there is also growing dissatisfaction with black-box systems. So much so that organisations are even turning away from them. Before you know it, the baby will be thrown out with the bath water. A shame, because there is so much potential in AI that would then never get a chance,” says Chorus. “I realised that my methods, which I was using for a completely different goal, should be able to obviate the drawbacks of AI. I can publish about this and show my fellow specialists how things can be done differently, but sometimes it takes a long time to get the science into practice. And there’s no time to lose; we must digitalise. We'll do it ourselves, I thought back then.”

It is a big misunderstanding that the black box will become white if we just continue to do enough research.

Caspar Chorus, Full Professor & Head of the Department of Engineering Systems and Services

Delft Enterprises


It was with this idea in mind that Chorus approached Delft Enterprises, the holding company for TU Delft spinoffs, where he was given a lot of help and advice by director Paul Althuis and investment manager Mathijs Heutinck. Together with programme manager Hubert Linssen and entrepreneur Nicolaas Heyning, he founded Councyl. Following a hectic start-up period, he now works for Councyl one day a week from TU Delft. “Delft Enterprises invested in us and continues to support us in word and deed. My own shares are held in a Trust Office Foundation, because as a TU Delft employee I cannot wear two hats,” he tells us. Making good arrangements from the start, about IP among other things, allows Councyl to build on TU Delft research. In turn, practical knowledge flows back into the university. “Our customers are impressed by the university's excellent reputation. And it is easy for us to attract good graduates who are keen to work for an AI spin-off. Right now, for example, they are working to further develop our user interface. This is another of our innovations, because the existing methods do not come in user-friendly software packages.”


The future

Chorus sees a rosy future for Councyl. “I recently took part in a panel discussion on digitalising the tasks of judges. The fear is that with AI, prejudices can creep in there too. But you can turn that around. Working in the Councyl way can help you discover your implicit reasoning. So if ethnicity plays a role in the severity of the sentence, for example, you can take extra care here. I do believe that ‘good’ AI can make people more consciously competent.” But this is not true of all AI: “It is a big misunderstanding that the black box will become white if we just continue to do enough research. There are some 100 million relations in a neural network, just like in our brains. That's what makes it so good at predicting the weather, or the stock market. But the very basis for making this kind of prediction is a fundamental property of that kind of AI: it cannot be captured in rules or models. This black box is just getting blacker.”