PhD on Detecting and Mitigating Bias in Social Interactions through Generative AI

Fully funded PhD position on developing and applying generative AI to detect and mitigate bias in online and offline social interactions.
 
The TU Delft Faculty of Industrial Design Engineering, Department of Sustainable Design Engineering, is seeking a highly motivated PhD candidate to work on developing and applying generative AI to detect and mitigate bias (e.g., gender and racial bias) in online and offline social interactions.
 
The candidate will work in an exciting interdisciplinary environment, collaborating with experts in artificial intelligence, design, and social sciences within the Feminist Generative AI Lab.
  
Feminist Generative AI Lab
 
The Feminist Generative AI Lab is a joint research Lab based at TU Delft and Erasmus University Rotterdam. The Lab pioneers research at the intersection of generative AI, design, and data feminism. Recognizing the recent developments in generative AI technology and their widespread impact, the Lab addresses the ethical challenges and opportunities inherent in this transformative field through a feminist theoretical lens. A feminist approach to technology goes beyond gender: broadly understood, it is an approach that aims for less dominating alternatives, thinks beyond binary oppositions, and embraces pluralism and difference.
 
Recent advances in generative AI technology, marked by increased accessibility and broad applicability across diverse contexts, have the potential to revolutionize societal structures, professions, activities, and human interactions. This brings forth exciting opportunities alongside significant ethical concerns. Such concerns arise from the lack of diversity in gender, race, and cultural narratives throughout the development and application of AI. These tendencies, intentionally or unintentionally, perpetuate and reinforce systemic discrimination and inequality, and prioritize profit over societal well-being, justice, and inclusion. The adoption of generative AI technologies at scale can propagate and amplify these risks, necessitating urgent exploration of the implications, pitfalls, and potentials of generative AI from an ethical standpoint.
 
Our research Lab applies feminist AI principles to the design, development, and deployment of generative AI solutions to support empowerment, inclusion, and justice across diverse domains. We examine and address aspects like intersectionality, hierarchies, and power structures within generative AI by employing methods such as participation, pluralism, and embodiment. This approach entails investigating various strategies to integrate AI, design, and data feminism, leveraging a multidisciplinary environment that merges expertise and methods from design, social sciences, and artificial intelligence. Together, we strive to chart a path towards a more ethical, inclusive, and socially just technological landscape.

Vacancy
 
We invite applications for a fully funded PhD position in Design. The PhD candidate will be responsible for investigating the technical feasibility, effectiveness, and societal implications of designing generative AI applications that detect and mitigate bias (e.g., gender and racial bias) in both online and offline social interactions, with the goal of promoting awareness and fostering more balanced, equitable, and inclusive communication practices.

Research Context

This project applies the principles of data feminism to the realm of AI for social good, focusing on the awareness and mitigation of gender and racial bias in both online and offline social interactions. It aims to actively address issues such as biased, discriminatory, or offensive content in conversations and online spaces, and to counteract them through innovative solutions that harness the potential of generative AI. By addressing the complexities of bias in various social settings, this research reflects on the opportunity to develop generative AI-powered solutions that align with, and foster, principles of justice and inclusivity.
In response to concerns about the risk of bias in existing generative AI models, researchers are actively developing de-biasing methods. This study investigates the use of de-biased, purpose-built generative AI models to raise real-time awareness of gender and racial bias in social interactions and, ultimately, to mitigate it effectively.
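To give a flavour of what "real-time awareness" could mean in practice, the sketch below shows a minimal bias-flagging loop built on an off-the-shelf text classifier. It is purely illustrative and not part of the project plan: the model name, threshold, and example messages are placeholder assumptions, and the PhD candidate would instead work with de-biased, custom-trained generative models as described above.

```python
# Illustrative sketch only: a minimal real-time "bias awareness" loop using an
# off-the-shelf text classifier. All names below (model, threshold, messages)
# are placeholder assumptions, not prescribed by the project.
from transformers import pipeline

MODEL_NAME = "unitary/toxic-bert"  # placeholder; the project would use a de-biased, custom-trained model

classifier = pipeline("text-classification", model=MODEL_NAME)

def flag_message(message: str, threshold: float = 0.5) -> str:
    """Return a lightweight nudge if a message is likely biased or offensive."""
    prediction = classifier(message)[0]  # e.g. {"label": "toxic", "score": 0.93}
    if prediction["score"] >= threshold:
        return f"This message may come across as {prediction['label']}; consider rephrasing."
    return "No concerns detected."

# Example: screening messages as they are written in a chat-style interface
for msg in ["Great work, everyone!", "She only got the job because she's a woman."]:
    print(f"{msg!r} -> {flag_message(msg)}")
```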

Methodologies and Activities

The successful candidate will adopt methodologies and tools based on, among others, the training or customization of generative AI models, participatory design, value-sensitive design, experience and/or critical design, and mixed methods. They are expected to engage in the following activities:
•    Co-develop (i.e., design and train) generative AI models that consider various dimensions of diversity and inclusivity, by applying the concept of intersectionality;
•    Co-design (real and/or critical) solutions and artifacts powered by the trained generative AI model to be adopted in online and offline human-human interactions;
•    Develop interfaces that promote awareness and understanding of biases in real-time social interactions;
•    Prototype the designed solutions and deploy them in controlled environments to assess their effectiveness in detecting and addressing biases;
•    Conduct evaluations of the systems' user experience, acceptability, performance, and helpfulness in promoting more inclusive and equitable communication.
 
Qualifications
 
•    Master's degree in Design, Human-Computer Interaction, Computer Science, or a related field;
•    A strong background in human-centered design, interaction design, user experience design, human-computer interaction, or related fields;
•    Programming skills (e.g., Python and frameworks such as TensorFlow or PyTorch) and experience in training (generative) AI models, or a willingness to learn;
•    Strong analytical and problem-solving skills;
•    Strong communication skills and an ability to collaborate across disciplines;
•    Proficiency in human-centered design tools and methodologies;
•    Excellent scientific writing skills are a plus;
•    A commitment to social justice issues.

This PhD position offers a unique opportunity to contribute to cutting-edge research at the intersection of design, feminism, and artificial intelligence, with the potential to shape the future of inclusive and just technological landscapes. We encourage applicants from diverse backgrounds and experiences to apply.

For more information about this vacancy, please contact Dr. Sara Colombo (sara.colombo@tudelft.nl).

Applicants should send their CV, a cover letter (1-2 pages) outlining their qualifications and motivations, and names and contact details of two referees, who may be approached by the selection committee. If applicable, they can attach a sample of scientific writing (e.g., an academic publication) and a design portfolio. 

Are you interested in this vacancy? Please apply before 18 February 2024 via this link and upload your motivation letter and CV.