Hiring is one of the most consequential moments in a person's career, yet more and more companies are turning to AI-driven tools to evaluate candidates. In our recent article, "Artificial Intelligence Employment Interviews: Examining Limitations, Biases, and Perceptions," my co-author, Theresa Fister, and I explore how these automated systems affect hiring fairness. Theresa, at the time of writing an undergraduate in the Interdisciplinary Honors program at Loyola University Chicago majoring in Communications, took many computer science courses and conducted this research as part of her social justice fellowship. Her work sheds light on the biases that AI hiring software can introduce, particularly those related to race, gender, and disability, while raising important questions about the transparency and accountability of these systems.

Published as a special article in IEEE Computer, our study examines the human experience of AI-driven interviews and the gaps in algorithmic decision-making. Through survey responses, we found that although AI tools are marketed as neutral, they often reinforce existing inequalities rather than eliminate them. Many participants reported feeling disconnected from the process, unable to express themselves fully, or evaluated on arbitrary factors such as facial expressions. These insights underscore the need for greater scrutiny of AI's role in hiring and a push toward more equitable, human-centered hiring practices.

Fister, T., and Thiruvathukal, G. K. (2024). "Artificial Intelligence Employment Interviews: Examining Limitations, Biases, and Perceptions." IEEE Computer. DOI: 10.1109/MC.2024.3404669