🏛️ Responsible AI

About This Research Theme

Over the past few decades, we have made significant strides toward building an equitable society for diverse groups of people, and we cannot let the integration of AI into society undo that progress. Instead, we want AI to embody and promote essential societal values such as fairness and diversity. Many past examples have shown how difficult it is to build ethical AI systems; in particular, discriminatory prediction algorithms that inherit society's biases and stereotypes have become a hotly debated issue. As AI finds its way into areas we might not have foreseen, subtler forms of ethical concern will continue to emerge beyond such overt discrimination, making the development of ethical AI not only a pressing task today but also a problem that will persist throughout our future with AI. Our lab's goal is to identify the ever-evolving ethical challenges presented by AI-powered systems and to develop mathematical frameworks and practical tools that can effectively quantify and mitigate these issues.
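To make "quantify" concrete, here is a purely illustrative sketch (not a description of our lab's framework) of one common starting point in the fairness literature: the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. The function name and data below are hypothetical.

```python
# Illustrative sketch: demographic parity difference as a simple fairness metric.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 group membership (e.g., a protected attribute)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Hypothetical example: predictions for 8 individuals from two groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, grp))  # 0.5 -> a large disparity
```

A value near 0 indicates similar treatment of the two groups under this particular criterion; in practice, which fairness metric is appropriate depends on the application, which is part of what this research theme investigates.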

👩🏻‍🔬 Projects in this Theme

People: Jack Shoemaker, Harold Mo, Rohan Bhatt, Youngseok Yoon, Dainong Hu, Cheryl Stanley, Yao Qin, Arjun Nichani, Iain Weissburg

📚 Selected Publications

  • Alghamdi, Wael, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalak, Shahab Asoodeh, and Flavio Calmon. "Beyond Adult and COMPAS: Fair multi-class prediction via information projection." Advances in Neural Information Processing Systems 35 (2022).
  • Jeong, Haewon, Hao Wang, and Flavio P. Calmon. "Fairness without imputation: A decision tree approach for fair prediction with missing values." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 9, pp. 9558-9566 (2022).
  • Jeong, Haewon, Michael D. Wu, Nilanjana Dasgupta, Muriel Médard, and Flavio Calmon. "Who Gets the Benefit of the Doubt? Racial Bias in Machine Learning Algorithms Applied to Secondary School Math Education." AIED (2022).