Quantifying the Bias of Transformer-Based Language Models for African American English in Masked Language Modeling
Abstract
In the last three years we have witnessed a proliferation of innovative natural language processing (NLP) algorithms attempting to solve different tasks and designed for the most diverse applications. Although groundbreaking transformer-based language models (LMs) have been proposed and widely adopted, measuring their fairness with respect to different social groups remains an open problem. In this paper, we propose and thoroughly validate an evaluation technique to assess the quality and the bias of the predictions of these LMs on transcripts of both spoken African American English (AAE) and Standard American English (SAE). Our analysis reveals a bias towards SAE encoded by state-of-the-art LMs such as BERT and DistilBERT, a lower bias in distilled LMs, and an opposite bias in RoBERTa and BART. Additionally, we show evidence that this disparity is present across all the LMs when we consider only the grammar and syntax specific to AAE.
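The following is a minimal sketch, not the authors' exact protocol, of how masked language modeling predictions could be compared across dialects: each token of a transcript is masked in turn and we check whether the LM's top prediction recovers the original token. The model names, the scoring choice, and the example sentences are assumptions for illustration only.

```python
# Hedged sketch: per-sentence masked-token recovery rate as a rough signal
# of how well a masked LM fits AAE vs. SAE transcripts. Not the paper's
# published metric; model names and example sentences are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

def masked_token_accuracy(model_name: str, sentence: str) -> float:
    """Mask each token in turn and check whether the LM's top prediction
    recovers the original token (higher = better fit to the sentence)."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    model.eval()

    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    hits, total = 0, 0
    for i in range(1, len(input_ids) - 1):  # skip special tokens at the ends
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        pred = logits[0, i].argmax().item()
        hits += int(pred == input_ids[i].item())
        total += 1
    return hits / max(total, 1)

# Hypothetical example pair; a real study would use matched AAE/SAE
# transcripts drawn from a spoken-language corpus.
sae = "He is always talking about that."
aae = "He be talking about that."
for name in ["bert-base-uncased", "distilbert-base-uncased", "roberta-base"]:
    print(name, masked_token_accuracy(name, sae), masked_token_accuracy(name, aae))
```

A systematic gap in such scores between dialect-matched AAE and SAE sentences would point to the kind of disparity the paper quantifies.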