The ethical concerns associated with Large Language Models

News - 12 July 2023 - Webredactie

The Economist published an article on the future of generative AI models on April 19th, outlining some of the risks and ethical concerns associated with Large Language Models. While the article discussed some of the well-known risks of AI-driven technology in general, it failed to adequately address the distinctive problem that generative AI poses: manipulation at scale.

In a letter to the editor, published in the online version of The Economist, Dr. Michael Klenk stresses the unique potential of the new wave of generative AI to enable manipulation at scale: www.economist.com/letters/2023/05/11/letters-to-the-editor. Michael is one of the DDEC researchers, focusing on moral values and on AI & manipulation. Please find his response to the article below.

“You did a fine job outlining the risks and general ethical concerns associated with large language models (“How generative models could go wrong”, April 22nd). However, the new wave of generative AI is unique for its potential to enable manipulative influence at scale. The problem with that goes much deeper than easing the spread of misleading or false information. Effective influence is a prized goal for nefarious and benevolent actors alike, from governments and corporations to alleged Nigerian princes in dire straits.

To influence effectively, you need to understand what makes people tick and then tailor your approach. What makes generative AI unique is that it removes the bottleneck of having to craft that tailored influence yourself. Highly persuasive messages, images, and videos are available at the click of a button. Such persuasive influence easily degenerates into manipulation when it does not contribute to understanding and successful inquiry.

It is a welcome sign that institutions like the EU are pondering plans to explicitly outlaw manipulation by AI. For that regulation to have bite, however, more attention must be paid to the question of how to avoid manipulation and design for influence that aids understanding. Otherwise, I worry that before a “stochastic parrot” can attain superhuman intelligence, it will do much harm by amplifying (wittingly or unwittingly) manipulation at scale. Although we are not being turned into paper clips just yet, we must ask both quotidian and quintessentially human questions about how generative AI can contribute to good, meaningful influence.”