Artificial intelligence (AI) resume screening systems used by major companies may be discriminating against mothers, researchers at New York University (NYU) believe.
An October NYU study revealed an apparent bias against women who had taken time off from work for maternity leave, with AI systems like ChatGPT rejecting resumes showing that gap in employment.
When the AI technology was asked why it tossed out women’s resumes that included that information, researchers received the answer: “Including personal information about maternity leave is not relevant to the job and could be seen as a liability.”
The researchers described the trends as “alarming,” given that nearly every Fortune 500 company uses AI in its hiring process, reports the Daily Mail.
According to lead researcher Siddharth Garg, the study did not find racial or gender bias but “found large drops in selection rate” between resumes that included maternity leave and those that did not.
we *didnt* find evidence of bias on race & gender, but for some models (Claude), we found large drops in selection rate on resumes w/ mat. leave vs. the identical resume w/out (>15%), and same with political affiliation (>30%). 2/n pic.twitter.com/ed1uQrFJR3
— Siddharth Garg 🌈 (@sg1753) October 10, 2023
“Employment gaps for parental responsibility, frequently exercised by mothers of young children, are an understudied area of potential hiring bias,” said Garg, a professor of electrical and computer engineering at NYU.
“This research suggests those gaps can wrongly weed out otherwise qualified candidates when employers rely on LLMs [large language models] to filter applicants.”
The researchers conducted the study by feeding a public dataset of 2,484 resumes across 24 job categories into four AI systems: OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude, and Meta’s Llama.
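The paired-resume comparison at the heart of the study, scoring an identical resume with and without a maternity-leave line and measuring the drop in selection rate, can be sketched in a few lines of Python. The function names, sample resume text, and keyword-based stand-in screener below are illustrative assumptions rather than the NYU team’s code; a real audit would plug an LLM call into the `screen` callback.

```python
from typing import Callable, List


def selection_rate(resumes: List[str], job_description: str,
                   screen: Callable[[str, str], bool]) -> float:
    """Fraction of resumes the screener selects for the given job."""
    decisions = [screen(resume, job_description) for resume in resumes]
    return sum(decisions) / len(decisions)


def maternity_leave_gap(base_resumes: List[str], job_description: str,
                        screen: Callable[[str, str], bool]) -> float:
    """Selection-rate drop when identical resumes add a maternity-leave gap,
    mirroring the paired design described in the study."""
    with_leave = [r + "\nEmployment gap, 2021-2023: maternity leave."
                  for r in base_resumes]
    rate_without = selection_rate(base_resumes, job_description, screen)
    rate_with = selection_rate(with_leave, job_description, screen)
    # A positive value means resumes with the gap are filtered out more often.
    return rate_without - rate_with


if __name__ == "__main__":
    # Toy stand-in for an LLM screener; a real audit would call a chat model here.
    def keyword_screen(resume: str, job_description: str) -> bool:
        return "maternity leave" not in resume.lower()

    resumes = ["Jane Doe\nSoftware engineer, 8 years of experience."]
    print(maternity_leave_gap(resumes, "Backend engineer", keyword_screen))
```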
According to Garg, Bard was “remarkably consistent across all attributes,” as was ChatGPT (GPT-3.5), “except on a few.”
Anthropic’s Claude “did poorly,” the researcher said.
Not all models demonstrated bias however. BARD was remarkably consistent across all attributes, as was GPT-3.5 @OpenAI except on a few. Claude @AnthropicAI did poorly, at least on our experiments...3/n
— Siddharth Garg 🌈 (@sg1753) October 10, 2023
The study also found that the AI systems filtered out a large number of resumes that indicated a political affiliation. When asked why, one system said, “The candidate is a member of the Republican party, which may be a conflict of interest for some employers.”
The AI tools also rejected job candidates with explanations such as “She is pregnant” or “Because of her pregnancy.”