Artificial Intelligence and the Future of Content Moderation
This PhD project will address Artificial Intelligence (AI) in relation to questions of justice and the future of work by looking closely at the socio-legal and technical dimensions of Content Moderation (CM). CM is a timely example of AI and human effort combining to safeguard online spaces: the digital manifestation of speech can have real-world consequences for society. Because of the mass of content uploaded every second and the frequent illegality of the content itself, CM is a rich field for future research. If platforms are to follow the law, they need to introduce mechanisms that enable them to do so, covering their automated systems, their Terms of Service (ToS) and their human moderators' working conditions. The research will focus on large online platforms and their regulation in the EU through the Digital Services Act (DSA). It will answer the overarching question: How do large online platforms build socio-legal systems to create legally compliant and socially acceptable online spaces? In doing so, this research project will provide a methodology to test CM processes and offer design recommendations for large online platforms in line with the DSA.
Process and Procedure
To better understand CM and its status quo on large online platforms, this research will provide an overview of processes, practices and procedures for CM, describing automated CM practices (systems supporting initial moderation, e.g. through AI support) as well as human moderation practices (human moderation teams, education, systems in use, user interfaces, etc.). It will include a description of different forms of large online platforms, their users and their specificities for CM, while considering the variety of forms of content to moderate. Additionally, this project will shed light on details such as the structure of Community Standards (CS), CS accuracy (according to publicly available data), the possible moderation actions, the time taken to reach a decision, internal versus external CM, levels of automated CM, and the user interfaces of CM systems. Together with a selection of the legally relevant norms and processes, this project will illustrate the complexity of the CM process.
Method and Moderation
This research provides a new empirical-legal method to audit large online platforms against DSA provisions. The methodology includes testing moderation decisions for compliance with the DSA by answering the research question: What statistical methods and sampling techniques exist to measure the legality of platform content? Building on standard practice for testing the legal compliance of CM decisions in line with Art. 34 and Art. 37 DSA, this research aims to add new qualitative and quantitative methods for researching online platforms.
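As a hypothetical illustration of the kind of quantitative method this research question points to, the sketch below estimates the prevalence of non-compliant content from a simple random sample of moderation decisions and reports a Wilson score confidence interval. The function name and the sample figures are assumptions chosen for illustration, not results from any platform audit.

```python
import math


def wilson_interval(violations: int, sample_size: int, z: float = 1.96):
    """95% Wilson score interval for the proportion of non-compliant items.

    The Wilson interval is a common choice for prevalence estimates because
    it behaves well for small proportions, which is typical when auditing
    for illegal content.
    """
    if sample_size <= 0:
        raise ValueError("sample_size must be positive")
    p = violations / sample_size
    denom = 1 + z**2 / sample_size
    centre = (p + z**2 / (2 * sample_size)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / sample_size + z**2 / (4 * sample_size**2)
    )
    return max(0.0, centre - margin), min(1.0, centre + margin)


# Hypothetical audit: 12 of 1,000 randomly sampled moderation
# decisions are judged non-compliant by legal reviewers.
low, high = wilson_interval(12, 1000)
print(f"Estimated prevalence: 1.2%, 95% CI [{low:.2%}, {high:.2%}]")
```

A real audit in the spirit of Art. 34/37 DSA would additionally need a defensible sampling frame (e.g. stratification by content type or language) and an inter-rater reliability check on the legal labels; the interval above only quantifies sampling error.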
From interaction primitives to a design space for human-AI interactions
The goal of this project is to develop a design space for human-AI interactions which provides information about the design and technical aspects of given interaction patterns. The proposed approach uses a set of interaction primitives as design materials for more complex interactions, including explainable and human-in-the-loop AI methods. The goal is to define such a design space both as a set of design and implementation guidelines (Cheat Sheets), as well as a prototype software tool to design and implement interaction patterns, e.g., how to negotiate with a model.
Developing interactions between AI and people of little power that foster justice in work
This project aims to explore how design approaches such as speculative design and more-than-human design can help generate interactions between AI and people in the context of work that enhance justice. Justice will be approached from a feminist perspective, focusing on power to understand how the use of AI can change and manage power structures at work, and how this is facilitated through the interactions between people and AI. The project will primarily deal with cases of work where greater power discrepancies are found, such as remote work, AI-managed work and work under extensive surveillance.
Processes to design Justice-oriented AI artifacts for Remote Work
Both the techno-solutionist mainstream and techno-pessimistic schools of thought consider digital artifacts to be more or less ontologically secure and fixed. However, these artifacts evolve through continuous and consequential encounters with other entities through societal dynamics. AI-based artifacts, especially given their learning nature, assemble fluid configurations with other entities in their environment and, as a result, constantly form new, unknown, and changing values in particular societal contexts, for better or worse. They would be better interpreted as intrinsically value-laden but also constantly value-creating through interactions with each other, humans, and societal structures.
Justice and resistance in the workplace
This project looks into the context of information workers and how they struggle with online visibility and impression management. While justice, as an important lens for making sense of the impacts of digital artifacts, is mainly manifested as embedding fairness, here, in keeping with the non-deterministic nature of AI artifacts, justice is seen as enabling processes designed for the active overcoming of algorithmic oppression and for resistance against the societal structures that weave constraints against some into the fabric of everyday work.
Futures of justice-oriented remote work
Building upon Experiential Futures as a method of inquiry for future exploration and Causal Layered Analysis as a poststructuralist Futures lens, this project seeks to design disruptive ways through which workers can actively reconfigure AI-enabled work artifacts to overcome imbalances of power within their digital workplace. The focus of this practice-led research will be to enable critical alternative futures by “unifying the future of work” and “suspending disbelief” about desirable changes toward justice-oriented AI artifacts of remote work.
Translating the concept of justice into the design and creation of AI systems
This PhD project aims to explore the methods through which designers can bring core components of justice into the specification and creation of current and future collaborative technologies, more specifically work-oriented AI systems. Three key concepts need to be addressed in this research: AI, justice, and work.
The first step is choosing a working definition for each of these three terms and understanding their conceptual issues and nuances. The second step is to analyse the intersections between these terms in conceptual pairs. From doing so, we can start to discern the complexities of this issue and chart clearer paths of research.
Once the concepts have been defined and the underlying questions about their relations have been addressed, we can start working towards a framework of justice for the design of AI systems for work. We need to find a way to include two categories of AI-related work: work needed to produce AI and work impacted by the existence of AI. The two categories are related in a feedback loop, because the existence of AI will in turn affect the way AI is produced. Also, from the perspective of this project, our objective as justice-seekers must be twofold: on the one hand, we need to look backwards at justice (assessing blame for current injustices, making reparations…), and on the other hand, we need to prevent future injustices from taking place (effectively tracking the decision-making process behind AI decisions…).
Finally, this project will look towards creating a framework that can be applied to the design and production of AI systems. It is therefore expected to establish collaborations between TPM and IDE researchers to create models and prototypes that could be translated into large-scale projects, addressing the specifics of particular AI applications.