
Study finds ChatGPT shows bias against resumes for disabled applicants

Written by: Eric Lyerly
Published on: Aug 26, 2024
Category: AI Bias


Abstract

A recent study conducted by University of Washington researchers found that ChatGPT, the widely used AI tool from OpenAI, frequently rates resumes and curricula vitae (CVs) featuring disability-related achievements lower than resumes lacking such details.

The finding comes at a time when an increasing number of recruiters are turning to AI tools like ChatGPT for summarizing resumes and evaluating candidates. The lead author, UW doctoral student Kate Glazko, noticed the trend while seeking research internships. Having studied how generative AI can replicate real-world biases, Glazko was curious how such a tool might rank resumes for individuals with disabilities.

In the study, researchers started with a publicly available CV (belonging to one of the authors) and created six modified versions. Each CV implied a different disability by including four disability-related achievements, credentials, or organizations on the document. The researchers then used ChatGPT's GPT-4 model to rank these modified CVs against the original for a publicly available student researcher job listing at a large software company.
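
To make the setup concrete, the sketch below shows how such a pairwise ranking might be run with the OpenAI Python SDK. The prompt wording, the placeholder variables, and the trial count are illustrative assumptions; the study's exact prompts and workflow are described in the paper and are not reproduced here.

```python
# A minimal sketch of the ranking step, assuming the OpenAI Python SDK
# (openai>=1.0) and an OPENAI_API_KEY in the environment. The prompt text
# below is an illustrative assumption, not the study's exact wording.
from openai import OpenAI

client = OpenAI()

def rank_cvs(job_listing: str, control_cv: str, modified_cv: str) -> str:
    """Ask GPT-4 to rank two CVs against a job listing and explain why."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Rank the following two CVs for this job listing, best "
                "candidate first, and briefly explain your ranking.\n\n"
                f"Job listing:\n{job_listing}\n\n"
                f"CV A:\n{control_cv}\n\n"
                f"CV B:\n{modified_cv}"
            ),
        }],
    )
    return response.choices[0].message.content

# Mirroring the study's repeated-trials design: run the same comparison
# several times per modified CV and count how often each version ranks first.
# results = [rank_cvs(listing, control, enhanced) for _ in range(10)]
```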

The researchers found a strong preference for the nondisabled applicant's CV over the versions implying disability. ChatGPT ranked the modified CVs first in only 15 of 60 trials. The autism CV never ranked first (zero of 10 trials), and the deafness CV ranked first only once. The depression and cerebral palsy CVs each ranked first twice, while the general disability and blindness CVs each ranked first five of 10 times.

When prompted to explain its rankings, GPT-4 produced responses that showed both explicit and implicit ableism. For instance, it frequently invoked diversity, equity, and inclusion (DEI), stating that an “additional focus on DEI and personal challenges, while valuable, might detract from the core technical and research-oriented aspects of the role.”

Glazko said, “Some of GPT's descriptions would color a person's entire resume based on their disability and claimed that involvement with DEI or disability [was] potentially taking away from other parts of the resume. . . . For instance, it hallucinated the concept of ‘challenges’ into the depression resume comparison, even though ‘challenges’ weren’t mentioned at all. So you could see some stereotypes emerge” (https://www.washington.edu/news/2024/06/21/chatgpt-ai-bias-ableism-disability-resume-cv/).

The researchers then used OpenAI's GPTs Editor tool to instruct GPT-4 explicitly to avoid ableist biases and to incorporate disability justice and DEI principles. When they reran the experiment, the disability-related CVs ranked higher than the control CV in 37 of 60 trials. For some disabilities, however, the improvement was minimal.
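
As a sketch of what that mitigation step might look like programmatically: the study configured the instructions through the GPTs Editor interface, but a comparable effect can be approximated via the API by prepending a system message, as below. The instruction text is an assumption, not the study's exact wording.

```python
# A rough approximation of the mitigation step, assuming the same SDK setup
# as in the earlier sketch. The study used OpenAI's GPTs Editor interface;
# passing similar written instructions as a system message is one API-level
# equivalent. The instruction text here is an assumption for illustration.
from openai import OpenAI

client = OpenAI()

DEBIAS_INSTRUCTIONS = (
    "When ranking CVs, do not treat disability-related achievements, "
    "credentials, or organization memberships as a negative signal. Apply "
    "disability justice and DEI principles, and judge each candidate only "
    "on their qualifications for the role."
)

def rank_cvs_debiased(prompt: str) -> str:
    """Rank CVs with explicit anti-ableism instructions prepended."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DEBIAS_INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```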

“People need to be aware of the system's biases when using AI for these real-world tasks,” Glazko said. “Otherwise, a recruiter using ChatGPT can’t make these corrections, or be aware that, even with instructions, bias can persist.”

The researchers emphasized the need to study and document biases in generative AI so the technology can be deployed fairly. The study calls for further research to test other AI systems, include a broader range of disabilities, explore how biases intersect, and investigate whether further customization can reduce bias more effectively.
