Bias and Fairness in Large Language Models: A Survey

Authors

  • Isabel Orlanes Gallegos, Stanford University, http://orcid.org/0000-0002-4872-6447
  • Ryan A. Rossi, Adobe Research
  • Joe Barrow, Pattern Data
  • Md Mehrab Tanjim, Adobe Research
  • Sungchul Kim, Adobe Research
  • Franck Dernoncourt, Adobe Research
  • Tong Yu, Adobe Research
  • Ruiyi Zhang, Adobe Research
  • Nesreen K. Ahmed, Intel Labs

Abstract

Rapid advancements in large language models (LLMs) have enabled the processing, understanding, and generation of human-like text, with increasing integration into systems that touch our social sphere. Despite this success, these models can learn, perpetuate, and amplify harmful social biases. In this paper, we present a comprehensive survey of bias evaluation and mitigation techniques for LLMs. We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing, defining distinct facets of harm and introducing several desiderata to operationalize fairness for LLMs. We then unify the literature by proposing three intuitive taxonomies, two for bias evaluation, namely metrics and datasets, and one for mitigation. Our first taxonomy of metrics for bias evaluation disambiguates the relationship between metrics and evaluation datasets, and organizes metrics by the different levels at which they operate in a model: embeddings, probabilities, and generated text. Our second taxonomy of datasets for bias evaluation categorizes datasets by their structure as counterfactual inputs or prompts, and identifies the targeted harms and social groups; we also release a consolidation of publicly available datasets for improved access. Our third taxonomy of techniques for bias mitigation classifies methods by their intervention during pre-processing, in-training, intra-processing, and post-processing, with granular subcategories that elucidate research trends. Finally, we identify open problems and challenges for future work. Synthesizing a wide range of recent research, we aim to provide a clear guide to the existing literature that empowers researchers and practitioners to better understand and prevent the propagation of bias in LLMs.

Author Biographies

  • Isabel Orlanes Gallegos, Stanford University

    Isabel O. Gallegos is a Ph.D. student in Computer Science at Stanford University. She researches algorithmic fairness to interrogate the role of artificial intelligence in equitable decision-making. Isabel has been awarded two patents, has received national computing awards for her work, and is a recipient of the Hertz Fellowship, NSF Graduate Research Fellowship, Knight-Hennessy Scholarship, and GEM Fellowship.

  • Ryan A. Rossi, Adobe Research

    Ryan is a Senior Research Scientist in machine learning at Adobe Research. He earned his Ph.D. and M.S. in Computer Science at Purdue University. His research lies in the field of machine learning and spans the theory, algorithms, and applications of large, complex relational (network/graph) data arising from social and physical phenomena.

  • Joe Barrow, Pattern Data

    Joe is an NLP Research Scientist at Pattern Data; he was previously at Adobe Research, working out of College Park, MD. He earned his Ph.D. from the University of Maryland. His research interests include improving evaluation and document collection understanding.

  • Md Mehrab Tanjim, Adobe Research

    Mehrab is a Research Scientist at Adobe Research. He received his Ph.D. and M.Sc. from the Department of Computer Science and Engineering at the University of California, San Diego, where his research primarily focused on bias and fairness, especially detecting and mitigating biases in image-generative tasks such as text-to-image generation, image-to-image translation, and text-based image editing. His current research focuses on LLMs and multimodal generative models.

  • Sungchul Kim, Adobe Research

    Sungchul is a Senior Research Scientist at Adobe Research, based in San Jose. He specializes in predictive analytics and data mining, with a particular focus on graph mining across a wide range of real-world applications, including considerations of fairness and bias. Recently, his work has expanded to encompass large language models (LLMs) tailored for enterprise use cases.

  • Franck Dernoncourt, Adobe Research

    Franck is an NLP Senior Research Scientist at Adobe Research in Seattle. He received his Ph.D. from MIT. His research interests include neural networks, NLP, and, more recently, LLMs. He has published on social bias issues in NLP at EMNLP Findings and LREC.

  • Tong Yu, Adobe Research

    Tong is a Research Scientist at Adobe Research. He received his Ph.D. from the Department of Electrical and Computer Engineering at Carnegie Mellon University. His current research focuses on LLMs, generative models, and reinforcement learning, with applications in conversational recommender systems and dialog systems.

  • Ruiyi Zhang, Adobe Research

    Ruiyi is a Research Scientist at Adobe Research. His research interests include (interactive) machine learning, reinforcement learning, and NLP, especially their intersection. He obtained his Ph.D. from the Department of Computer Science at Duke University.

  • Nesreen K. Ahmed, Intel Labs

    Nesreen is a senior member of the research staff at Intel Labs. She received her Ph.D. from the Computer Science Department at Purdue University. Her research lies in the field of large-scale machine learning and spans the theory and algorithms of graphs, statistical machine learning methods, and their applications in social and information networks.

Published

2024-11-10