Practical Individual Fairness



Mikhail Yurochkin (IBM Research)

Mikhail is a Research Staff Member at IBM Research and the MIT-IBM Watson AI Lab in Cambridge, Massachusetts. His research interests include model fusion and federated learning, algorithmic fairness, applications of optimal transport in machine learning, and Bayesian (nonparametric) modeling and inference. Before joining IBM, he completed a Ph.D. in Statistics at the University of Michigan, where he worked with Long Nguyen.



Short Abstract: Individual Fairness (IF) is an intuitive and desirable notion of fairness: we want ML models to treat similar individuals similarly, that is, to be fair to every person. For example, two resumes that differ only in the applicant's name and gender pronouns should be treated similarly by the model. Despite this intuition, training ML/AI models that abide by this rule, both in theory and in practice, poses several challenges. In this talk, I will introduce a pipeline for achieving individual fairness. In practice, the starting point is typically an existing trained model, so the first step is to audit this model for IF violations. I will then describe a simple IF post-processing approach that can be applied to any pre-trained model. The final step is to re-train the model to satisfy IF. All individual fairness methods require a fair metric that quantifies the similarity of individuals; this metric can be learned from data in various ways depending on the application.
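To make the auditing step concrete, below is a minimal sketch in the spirit of the resume example above: it flags inputs whose model score changes substantially when only names and gender pronouns are swapped. The word swaps, the `score` callable, and the tolerance are illustrative assumptions for this sketch, not the specific method presented in the talk.

    # Minimal sketch of an IF audit: does a resume classifier change its score
    # when only names and gender pronouns change? All names below (SWAPS, score,
    # tol) are illustrative assumptions, not the talk's actual implementation.

    SWAPS = {
        "he": "she", "she": "he",
        "him": "her", "her": "him",
        "his": "hers", "hers": "his",
        "John": "Jane", "Jane": "John",
    }

    def swap_gendered_terms(text):
        """Return a counterfactual resume with names/pronouns swapped."""
        return " ".join(SWAPS.get(tok, tok) for tok in text.split())

    def audit_individual_fairness(score, resumes, tol=0.05):
        """Flag resumes whose score shifts by more than `tol` under the swap.

        `score` is any callable mapping a resume string to a number, e.g. a
        wrapped model.predict_proba. A large gap is an IF violation: two
        individuals who are identical up to name/pronouns are not being
        treated similarly by the model.
        """
        violations = []
        for text in resumes:
            gap = abs(score(text) - score(swap_gendered_terms(text)))
            if gap > tol:
                violations.append((text, gap))
        return violations

In this sketch the counterfactual pair plays the role of "similar individuals" under a simple fair metric that ignores names and pronouns; richer fair metrics, as discussed in the talk, can be learned from data.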