University of Washington researchers are exploring potential biases in AI hiring systems, focusing on race and caste biases in eight large language models (LLMs), including proprietary ChatGPT models and open-source Llama models. Such research is particularly relevant given LinkedIn’s recent introduction of its Hiring Assistant, an AI tool designed to automate repetitive recruiting tasks, such as interacting with job candidates.
The UW researchers found that most models, especially open-source ones, generated biased content, particularly related to caste, with 69% of caste-related conversations containing harmful content. The researchers emphasized the need for better detection of covert harms in AI systems and called for more rigorous evaluations and policies to ensure fairness, especially regarding cultural concepts outside Western contexts.