Date of Award
6-11-2024
Document Type
Thesis - SCU Access Only
Publisher
Santa Clara : Santa Clara University, 2024
Department
Computer Science and Engineering
First Advisor
Yi Fang
Abstract
A major issue with AI chatbots such as ChatGPT is that their responses can contain biases, whether intentional or not. These biases range from political and racial to gender-based, among many others. In this paper we present UnbiasText, a step toward mitigating bias in AI programs like ChatGPT. Given a text input, UnbiasText detects and quantifies the gender or political bias within the text, returning predictions for both bias types along with the words most influential in those predictions. To preprocess the data, the text is stemmed and, for gender bias, gendered pronouns and names are removed; the text is then vectorized with a TF-IDF vectorizer. The resulting features are fed to a random forest model for gender bias or a calibrated classifier model for political bias, each evaluated on held-out testing data (an 80/20 train/test split). The evaluation metrics were promising: the gender bias model achieved 94.7% accuracy with an F1 score of 0.95, while the political bias model achieved 77.9% accuracy with an F1 score of 0.78. This implementation can inform the training of LLMs like ChatGPT to promote fairness and representation in their outputs.
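The pipeline the abstract describes maps closely onto scikit-learn. Below is a minimal sketch of the gender-bias branch, assuming scikit-learn and NLTK; the toy texts, labels, and gendered-term list are illustrative placeholders, not the thesis's actual dataset or preprocessing rules. The political-bias branch would swap the random forest for a calibrated classifier (e.g. scikit-learn's CalibratedClassifierCV wrapping a base estimator).

```python
import re

from nltk.stem import PorterStemmer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Illustrative stand-in for the thesis's gendered-word list.
GENDERED_TERMS = {"he", "she", "him", "her", "his", "hers", "man", "woman", "mr", "mrs"}
stemmer = PorterStemmer()

def preprocess(text):
    """Lowercase, drop gendered terms, and stem the remaining tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(stemmer.stem(t) for t in tokens if t not in GENDERED_TERMS)

# Placeholder corpus with 0/1 gender-bias labels; the real training data is larger.
texts = [
    "He insisted that a woman could never manage the team.",
    "The quarterly report was submitted on time.",
    "Only a man is strong enough for this kind of work.",
    "The committee approved the new budget proposal.",
    "She should stay home instead of pursuing a career.",
    "Engineers reviewed the design before the deadline.",
    "A woman's place is in the kitchen, he said.",
    "The library extended its opening hours this semester.",
    "Men are naturally better leaders than women.",
    "Students presented their research at the conference.",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(preprocess(t) for t in texts)

# 80/20 train/test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42, stratify=labels
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("F1:", f1_score(y_test, pred, zero_division=0))

# One way to surface "most influential words": rank TF-IDF features by the
# forest's impurity-based importances (the thesis may attribute differently).
top = sorted(
    zip(vectorizer.get_feature_names_out(), model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)[:5]
print("top features:", top)
```

Ranking feature importances is only one plausible reading of how the most influential words are chosen; per-prediction attribution methods would also fit the abstract's description.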
Recommended Citation
Cuachin, Byron Josh; Landoch, Samuel; and Pradeep, Donal, "UnbiasText: Breaking Gender Bias in AI" (2024). Computer Science and Engineering Senior Theses. 305.
https://scholarcommons.scu.edu/cseng_senior/305