Addressing Gender Bias in AI Chatbots: An analysis of ChatGPT

Mentor: Sara Colombo, sara.colombo@tudelft.nl

Problem / Issue to investigate: The project is motivated by the need to investigate and rectify issues related to gender discrimination and biases that may be present in AI chatbots, particularly focusing on ChatGPT. Gender biases within these AI systems have the potential to create inequities and perpetuate stereotypes, which can lead to harmful consequences in how AI interacts with users. If not addressed, these biases can result in unequal treatment based on gender, further exacerbating societal disparities.
 
Goal: The project aims to identify and mitigate gender-based biases in ChatGPT responses. It seeks to ensure that ChatGPT provides equitable and respectful responses to users regardless of their gender and contributes to a more inclusive and fair AI ecosystem.
 
Methods: Rooted in data feminism theories, the research can follow either an exploratory, technical, or design approach. The exploratory approach aims to investigate the user experiences and ethical issues encountered in using ChatGPT, employing methods such as user surveys, content analysis, or sentiment analysis. In the technical approach, it will employ a combination of data analysis and natural language processing techniques, by collecting and analyzing data to detect gender biases in ChatGPT's responses, as well as experimenting with strategies to reduce or eliminate these biases. In the design approach, it will focus on generating UX designs that allow to detect biases, compare responses, or implement human-in-the-loop strategies.
 
Impact: The expected outcome is a more inclusive and fair ChatGPT model that provides respectful and non-discriminatory responses to users of all genders. The project impact extends to promoting gender equality and inclusivity in AI interactions, setting a positive example for responsible AI development.
 
Relevance: This project addresses the ethical and societal concern of gender discrimination in AI. AI chatbots have the potential to interact with a diverse user base, and it's essential that they do so without perpetuating harmful biases or stereotypes. By actively working to eliminate gender discrimination in ChatGPT, the project contributes to a more ethical and responsible use of AI technology.