Unveiling Biases in NLP Systems



Ninareh Mehrabi (University of Southern California)

Ninareh Mehrabi received her B.Sc. degree in Computer Science and Engineering from the University of Southern California. She is currently a PhD candidate at the University of Southern California's Information Sciences Institute. Her research focuses on developing trustworthy AI systems, with an emphasis on algorithmic fairness in Machine Learning and Natural Language Processing. She is part of the MINDS group advised by Aram Galstyan and a recipient of a 2021-2022 Amazon Fellowship from the USC+Amazon Center on Secure and Trusted Machine Learning.



Short Abstract: In this talk, we will discuss biases observed in different Natural Language Processing (NLP) applications, namely Named Entity Recognition (NER) and commonsense reasoning. For commonsense reasoning, we will focus on knowledge completion or generation and on commonsense story generation, and discuss major sources of bias, which lie mainly in the underlying knowledge resources. We will also discuss methods for reducing such unwanted biases, as well as how interpretability methods can be used to alleviate unfairness.