The political dimension of algorithmic error

News - 23 February 2023

It is no surprise when an algorithm produces an erroneous output. In certain cases, such errors lead to harm, as when a facial recognition algorithm misidentifies a person, leading to their wrongful detention. When such mistakes occur, there is a recurring impulse to somehow ‘fix’ the algorithm, to minimize the probability of error through technical and social interventions. Yet while the potential for algorithmic error seems inescapable, it is striking how rarely such mistakes take us beyond thinking of the algorithm itself as a decision-making device.

This seems to be because the very notion of an algorithmic error is situated within the technical horizon of contemporary machine learning. To see this, it is useful to consider how algorithmic errors have been conceived throughout history. Théo Lepage-Richer traces the history of the notion of adversariality from the writings of Norbert Wiener and early models of neural networks to the contemporary literature on machine learning. Among other things, Lepage-Richer astutely demonstrates that in these domains algorithmic error does not mark the limits of what can be known through the algorithm; rather, it becomes a temporary setback, a piece of information that can be used to train the algorithm to be ‘better’ at its function.
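To see the mechanics this point rests on, consider a minimal sketch of error-driven learning, using a toy linear model; the setup is illustrative, not drawn from Lepage-Richer’s text. The model’s error is never a verdict on the model: it is precisely the quantity whose gradient produces the next update.

```python
import numpy as np

# Illustrative sketch: fitting y = 2x + 1 by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=100)

w, b = 0.0, 0.0   # parameters, initialised arbitrarily
lr = 0.1          # learning rate

for step in range(200):
    pred = w * x + b
    error = pred - y              # the model's error on each point
    loss = np.mean(error ** 2)    # mean squared error
    # The error is not a verdict but an update signal: its gradient
    # tells the model how to become 'better' at its function.
    w -= lr * 2.0 * np.mean(error * x)
    b -= lr * 2.0 * np.mean(error)

print(f"final loss={loss:.4f}, learned w={w:.2f}, b={b:.2f}")  # near w=2, b=1
```

Each erroneous output enters the loop as fuel: the larger the error, the stronger the corrective step.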

From this history of science and technology, one can carefully draw an important political implication. If it is important to us as a society to question not only particular algorithms but the algorithm as such as a way of addressing problems, then targeting erroneous outputs has political limits. Addressing errors politically, in order to delineate what an algorithm should not do, is limited precisely because in the domain of machine learning such errors are rendered as an inexhaustible source of potential for technical improvement.

Consider, for example, Liam Young and Jacob Jonas’s art project Choreographic Camouflage. In response to algorithms for gait recognition and skeletal modeling, the artists created a performance in which dancers move in unpredictable ways to ‘trick’ the algorithm. Such a project, however, seems to operate within the very technical horizon of neural network algorithms, for which error is not so much a failure as an opportunity for improvement. Seen this way, unpredictable movements aimed at maximizing error are, in the end, an excellent way to improve the algorithm, as the sketch below makes concrete. A political response that aims to address the role of algorithms in society at large therefore cannot rely solely on their fallibility to make claims about the circumstances of their deployment.
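Here is a minimal sketch of that dynamic, using a toy logistic-regression ‘recognizer’ as a stand-in for gait recognition; everything in it is an illustrative assumption, not a description of the artwork or of any deployed system. Inputs are perturbed in the direction that maximizes the model’s error, a fast-gradient-sign-style step, and the resulting ‘camouflage’ is then folded back into the training set:

```python
import numpy as np

# Toy 'recognizer': logistic regression on 2-D features (illustrative
# stand-in for a far more complex gait-recognition model).
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def predict(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

def train(X, y, w, lr=0.5, steps=300):
    for _ in range(steps):
        p = predict(X, w)
        w = w - lr * X.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

w = train(X, y, np.zeros(2))

# 'Camouflage': perturb each input in the direction that maximises the
# model's error (a fast-gradient-sign-style step).
eps = 0.5
grad_x = np.outer(predict(X, w) - y, w)   # d(loss)/d(input)
X_adv = X + eps * np.sign(grad_x)

err = np.mean((predict(X_adv, w) > 0.5) != y)
print(f"error on camouflaged inputs: {err:.0%}")   # the trick works...

# ...but once labelled, the same inputs are ordinary training data.
# Appending them is the textbook adversarial-training recipe:
w = train(np.vstack([X, X_adv]), np.concatenate([y, y]), w)
```

The fold-back is a single line: once labelled, the error-maximizing inputs are indistinguishable from ordinary training data, which is precisely why maximizing error alone offers no durable political leverage.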