In our age of technology, algorithms have become more complex and more influential than ever before. And yet, because of their omnipresence, we are much less aware of them. We only notice their existence when they stop doing what they were intended to do. What exactly is an algorithm, and what is its impact?

What is an algorithm?

Many people associate algorithms with a computer solving a complex mathematical problem. In essence, however, an algorithm is little more than a finite set of instructions for achieving a certain goal. A recipe for preparing a tasty meal also fits this description, although the term algorithm is indeed more often used in connection with numbers and computers. In general, these algorithms process information. Mathematics is important, for example for translating a real-world problem into a model, which in turn determines the so-called solution space. The algorithm is a smart way to search this space for the best possible solution. A good algorithm is both correct and efficient. Efficiency means that the solution is reached in as few steps – with as little computing power – as possible. Thanks to an algorithm, the user does not need to understand the problem in order to solve it. The inventor of the algorithm, on the other hand, needs to understand it thoroughly (see frame).

The role of computers

A computer program is the implementation of an algorithm. Multiple computer programs can lead to the same solution, but they may differ in, for example, the number of calculations needed or the amount of storage space required. In the early days of the computer age, computational power was very limited, and the science of algorithms focussed on discovering and implementing algorithms for problems that were relatively easy to solve, such as finding the shortest route. Thanks to the enormous increase in computing power and in the fundamental knowledge of algorithms, the focus has shifted towards problems that were previously unsolvable because of their sheer magnitude. Where the need for fast execution of algorithms used to be a driving force behind the development of computers, nowadays powerful computers and the availability of huge digital datasets drive the development of new algorithms. Machine Learning currently attracts most of the attention.
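
As an illustration, the sketch below shows one classic way to compute a shortest route, Dijkstra's algorithm, in Python. The small road network and its distances are invented purely for this example.

import heapq

def shortest_path_length(graph, start, goal):
    """Dijkstra's algorithm: length of the shortest route from start to goal.

    graph maps each node to a list of (neighbour, distance) pairs.
    """
    queue = [(0, start)]          # (distance travelled so far, node)
    visited = set()
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if node in visited:
            continue
        visited.add(node)
        for neighbour, step in graph[node]:
            if neighbour not in visited:
                heapq.heappush(queue, (dist + step, neighbour))
    return None  # no route exists

# A small, invented road network (distances in km).
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("A", 5), ("D", 1)],
    "C": [("A", 2), ("D", 7)],
    "D": [("B", 1), ("C", 7)],
}
print(shortest_path_length(roads, "A", "D"))  # 6, via A -> B -> D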

Machine Learning

In Machine Learning, a kind of artificial intelligence, it is no longer human understanding that powers the algorithm. Rather, the algorithm itself uses a large dataset to determine a relationship between input and output. The role of humans is limited to gathering a correct and sufficiently large dataset for training the algorithm, designing the underlying artificial neural network (in the case of the highly popular Deep Learning approach), and deciding whether or not to release the trained network into real-world operation. Machine Learning has led to breakthroughs in bioinformatics and in the fields of speech and image recognition. Machine Learning networks outperform humans in certain tasks, but they do not think like humans. In fact, their inner workings are so complex that it is often impossible for a human to determine how and why such a network reaches a certain decision. This elusiveness can give them a somewhat spooky quality, especially as their influence on our daily lives keeps increasing.
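
As a highly simplified, hypothetical sketch of this idea (not any specific system mentioned here), the Python fragment below lets a tiny model infer the relationship between input and output from example data by gradient descent; the data and parameters are invented.

# Minimal sketch: the algorithm itself infers the relationship y = w*x + b
# from example data, instead of a human programming that relationship.
inputs  = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [1.0, 3.0, 5.0, 7.0, 9.0]   # hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # model parameters, initially arbitrary
learning_rate = 0.01

for step in range(5000):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(inputs, outputs)) / len(inputs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(inputs, outputs)) / len(inputs)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0: the learned relationship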

Finite applicability

As a consumer you often don’t realise when an algorithm impacts your life: when your phone automatically adjusts its screen brightness, for example, but also when the train schedule is adapted because of a disruption. In the latter case, an algorithm might instruct trains not to stop at your tiny station for a few hours. It is therefore important to be aware of the circumstances under which an algorithm stops performing as intended. For certain algorithms, such as Euclid’s (see frame), it can be proven mathematically that they will always function correctly. For other algorithms, such as the one adjusting the train schedule, this only holds as long as the underlying model accurately represents reality. The removal of a train track may necessitate a different algorithm, or at least a new optimisation using the existing one. Deep Learning neural networks, like virtually all kinds of artificial intelligence, can find very good solutions, but it is not possible to prove mathematically that these are optimal. Their training datasets may, moreover, be biased (all training pictures containing cats also contain furniture, say), resulting in unexpected network behaviour.

New insights

Applying existing (combinations of) algorithms to real-world problems almost always results in suboptimal solutions: either they are too slow, or the underlying model has to be over-simplified and the solution is no longer accurate enough. A researcher therefore implements smart improvements to build a suitable algorithmic solution. These algorithmic insights often have significance beyond the scope of the problem just solved.

An optimal solution that isn't fair is not optimal to me.

De Weerdt

A fair energy transition

As associate professor in the Algorithmics group at TU Delft, Matthijs de Weerdt employs algorithms to solve problems involving multiple parties. In his current research, he develops algorithms for ensuring efficient and fair use of the electricity network. This is of special importance now that supply and demand are less and less in sync due to the transition towards renewable energy. “You cannot generate solar power at night, or wind power on a calm day,” he says. “It helps if we only charge our electric car and turn on our electric boiler at moments of sufficient supply.” Furthermore, there are capacity constraints within the electricity network. In theory, this can all be addressed by offering a financial incentive to parties that would like to supply or use electricity at peak moments. The size of the incentive depends on how strong a need each party expresses. “This only works when the optimal solution is found, each time,” De Weerdt explains. “Otherwise, users can abuse the system by expressing a fake need.” Because an optimal solution is required, De Weerdt does not use Machine Learning. He also insists that the solution be a fair one. “It is not fair to always cut off the same neighbourhood or wind farm in case of capacity issues. This may be the best solution for all users as a group, but for me it is not an optimal solution.” In the long run, thanks to his algorithms, fewer gas- and coal-fired power stations may be needed to handle network imbalances. His research is of both practical and scientific value. “When modelling a real-world problem, we often encounter new challenges, such as a fair distribution. We then take the structure of these problems into account to build faster and better solutions,” he explains. “This way, we build new algorithms containing components of existing algorithms. We then show that our new approach may be more widely applicable.”
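
The tension between a group-optimal and a fair solution can be made concrete with a deliberately tiny, invented example. The Python sketch below is not De Weerdt's actual model; it merely enumerates all curtailment plans for two hypothetical neighbourhoods and compares the cheapest plan with the cheapest plan that also shares the burden.

from itertools import product

# Toy illustration (not De Weerdt's actual model): in each of four peak hours,
# one of two neighbourhoods must be curtailed. Curtailing B happens to be more
# expensive than curtailing A.
cost = {"A": 1, "B": 2}
hours = range(4)

def total_cost(plan):
    return sum(cost[neigh] for neigh in plan)

def max_burden(plan):
    return max(plan.count("A"), plan.count("B"))

plans = list(product("AB", repeat=len(hours)))

cheapest = min(plans, key=total_cost)
fair = min((p for p in plans if max_burden(p) <= 2), key=total_cost)

print(cheapest, total_cost(cheapest))  # ('A','A','A','A') 4 -> A always pays
print(fair, total_cost(fair))          # ('A','A','B','B') 6 -> burden is shared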

Optimal spread of ambulance services

With many medical professionals in her family, Theresia van Essen’s childhood dream was to become a surgeon. But it is through algorithmics that she now helps to achieve optimal medical care. As assistant professor in the Optimization group at TU Delft, she investigates, for example, the optimal spread of emergency ambulance services. “Healthcare is expensive, so any improvement in services, without increasing cost, is more than welcome,” Van Essen says. Thanks to increased computing power, this problem can now be addressed at a much more detailed level. “In the Netherlands, our level of detail is down to the numeric part of the postal codes. We take into account the current spread of emergency ambulances and their historic response times, for example. The number of incidents in a region turned out to be a determining factor.” The starting point of her research is to find an optimal solution at the highest level of detail. “If finding the optimal solution takes too long, I work on algorithms that approach that optimal solution in much less time. One could also simplify the model, of course, for example by assuming average response times that are constant throughout the day. But an optimal solution for a simplified model is not optimal for the real problem.” Thanks to her algorithms, an additional dynamic ambulance post has been introduced in the Utrecht area for a number of hours each day. As a result, the region now complies with the applicable regulations regarding maximum response times. The additional ambulance post is unlikely to be cancelled anytime soon. “The number of incidents in a region only varies slowly over the course of years,” Van Essen explains. “It is, however, still advisable to rerun the algorithm each year, to see whether a substantial improvement in emergency ambulance care can be achieved after all.”
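
As a toy illustration of this kind of coverage problem (not Van Essen's actual model), the Python sketch below tries every pair of hypothetical ambulance posts and picks the pair that covers the most historical incidents within a response-time limit; all travel times and incident counts are invented.

from itertools import combinations

# Toy sketch: choose 2 out of 4 candidate posts so that as many incidents as
# possible are reachable within 12 minutes. All numbers are invented.
travel_time = {
    "post1": {"north": 5,  "east": 14, "south": 20, "west": 9},
    "post2": {"north": 16, "east": 6,  "south": 11, "west": 22},
    "post3": {"north": 10, "east": 18, "south": 7,  "west": 15},
    "post4": {"north": 25, "east": 9,  "south": 13, "west": 6},
}
incidents = {"north": 40, "east": 25, "south": 30, "west": 20}  # incidents per year
LIMIT = 12  # maximum allowed response time in minutes

def covered_incidents(posts):
    """Incidents in regions reachable within LIMIT from at least one chosen post."""
    return sum(count for region, count in incidents.items()
               if any(travel_time[p][region] <= LIMIT for p in posts))

best = max(combinations(travel_time, 2), key=covered_incidents)
print(best, covered_incidents(best))  # ('post1', 'post2') covers all 115 incidents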

Thanks to increased computing power, we can use much more detail when optimising the spread of emergency ambulances.

Van Essen

Our aim is to stimulate diverse behaviour in the user.

Sullivan

Diversity in recommendations

Emily Sullivan is a postdoc in the Web Information Systems group at TU Delft, where she works on improving so-called recommender systems. These systems generate recommendations based, for example, on the movies you have watched or the news articles you have read on a website. “We try to increase the diversity in these recommendations,” Sullivan says. “If the recommendations are too similar to your preferences, you may end up in the well-known filter bubble. But if they are too dissimilar, you will dismiss them out of hand. It is important for recommendations to be diverse, but not too diverse. If a certain user is known to only read short news articles, you may want to recommend short news articles about slightly different topics.” Sullivan uses existing Natural Language Processing software, a kind of artificial intelligence, to extract linguistic features from texts. “In our research, we use various methods to weigh these features in order to reorder the list of recommendations. Our ultimate aim is to stimulate diverse behaviour in the user.” Another part of the research is focussed on the best way to present the resulting list of recommendations to the user. The recommendations may, for example, be accompanied by an explanation such as: ‘If you are interested in a more in-depth understanding of this topic, you may like these articles’. “In our group, we are not so much interested in the commercial application of this research,” Sullivan says. “Rather, we want the recommender system to represent the norms and values of both the user and the company.”
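
A minimal sketch of such a re-ranking step, assuming a greedy trade-off between relevance and redundancy (in the spirit of maximal marginal relevance, not the group's actual system), could look as follows in Python; the article features and user profile are invented.

# Toy re-ranking sketch: greedily pick articles that match the user's interests
# but are not too similar to the articles already recommended.

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

# Invented linguistic/topic features per article, and the user's reading profile.
articles = {
    "politics_short": [1.0, 0.0, 0.2],
    "politics_long":  [0.9, 0.1, 0.9],
    "sports_short":   [0.1, 1.0, 0.2],
    "science_short":  [0.3, 0.4, 0.1],
}
user_profile = [0.8, 0.2, 0.2]   # mostly reads short political articles
LAMBDA = 0.7                      # 1.0 = pure relevance, 0.0 = pure diversity

selected = []
candidates = dict(articles)
while candidates:
    def score(name):
        relevance = similarity(articles[name], user_profile)
        redundancy = max((similarity(articles[name], articles[s]) for s in selected),
                         default=0.0)
        return LAMBDA * relevance - (1 - LAMBDA) * redundancy
    best = max(candidates, key=score)
    selected.append(best)
    del candidates[best]

print(selected)   # ordering balances relevance with diversity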

Optimal radiotherapy

Peter Bosman is a part-time professor in Evolutionary Algorithms at TU Delft and a senior researcher in Life Sciences and Health at the Centrum Wiskunde & Informatica (Centre for Mathematics and Informatics) in Amsterdam. Evolutionary algorithms are his specialty, both their fundamental structure and their practical application. He applies them, for example, in brachytherapy, a cancer treatment in which radiation is applied from within the patient. “It is often not possible to deliver the amount of radiation to the tumour that is required to cure the patient, while also limiting the amount of radiation to surrounding healthy organs in order to limit the risk of treatment-related complications,” Bosman explains. “Using our algorithms, the treating radiation oncologist gains a unique understanding of how these competing requirements can be balanced for each particular patient. Being the expert, they can then select the optimal radiation treatment plan for each patient, based on these insights.” It is important for the selected treatment plan to be technically deliverable and for it to be an accurate representation of reality. “We basically refrain from applying simplifications to the underlying model,” says Bosman. To find a suitable solution in limited time, the evolutionary algorithm tries several possible solutions at once. Through a process mimicking natural selection, the algorithm combines the good features of these solutions into ever better solutions. Close collaboration with radiation oncologists is important for the results to be useful in the clinic. “Our understanding of their underlying goals in the radiation treatment allows us to build an algorithm that calculates the desired collection of applicable treatment plans,” Bosman says. “These plans are sometimes radically different from what they are used to. It prompts them to reconsider which treatment plan qualities they find most important.”

Evolutionary algorithms are a kind of artificial intelligence introduced in the 1970s. Nowadays it is relatively easy for anyone to apply them, but, Bosman stresses, “they are certainly not a cure-all. Thanks to our fundamental understanding of these algorithms, we are much more likely to find a good solution. For me, it is essential to combine fundamental research with its practical application.”
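
For readers curious about the basic mechanics, the Python sketch below shows a minimal evolutionary algorithm on an invented toy problem; it is not Bosman's treatment-planning software, but it illustrates the selection, recombination and mutation steps described above.

import random

# Minimal evolutionary-algorithm sketch: evolve a list of 20 numbers towards a
# toy target by repeatedly selecting, recombining and mutating candidate solutions.
random.seed(1)
TARGET = [0.5] * 20

def fitness(candidate):
    """Higher is better: negative squared distance to the toy target."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def crossover(parent_a, parent_b):
    """Combine features of two good solutions: take each value from either parent."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(candidate, rate=0.1, step=0.1):
    """Randomly perturb some values to keep exploring the solution space."""
    return [c + random.uniform(-step, step) if random.random() < rate else c
            for c in candidate]

population = [[random.uniform(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(200):
    # Natural-selection step: keep the better half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Refill the population with mutated recombinations of the survivors.
    children = [mutate(crossover(*random.sample(survivors, 2))) for _ in range(15)]
    population = survivors + children

# Best fitness: much closer to 0 than in the random starting population.
print(round(fitness(max(population, key=fitness)), 4))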

We provide radiation oncologists with a unique understanding of the competing requirements in the radiation treatment of cancer.

Bosman


Euclid's algorithm

One of the oldest known (mathematical) algorithms, described by Euclid around 300 BC, finds the greatest common divisor of two numbers. The algorithm shows great understanding on the part of its inventor, is easy for the user to apply, and works as follows:

  1. Subtract the smaller number from the larger number.
  2. Repeat step 1 for the difference and the smaller of the two previous numbers, until you reach 0 (zero). The last number you subtracted is the greatest common divisor.

For the numbers 49 and 21, the steps are:

49 – 21 = 28,

28 – 21 = 7,

21 – 7 = 14,

14 – 7 = 7,

7 – 7 = 0.

The greatest common divisor of 49 and 21 is the number 7.
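
For readers who prefer code, a minimal Python version of this subtraction-based procedure could look as follows:

def gcd(a, b):
    """Greatest common divisor via Euclid's subtraction-based algorithm."""
    while a != b:
        # Subtract the smaller number from the larger one and keep both numbers.
        if a > b:
            a = a - b
        else:
            b = b - a
    return a  # once both numbers are equal, the next subtraction would give 0

print(gcd(49, 21))  # prints 7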
