News

News

News

March 9, 2024

March 9, 2024

March 9, 2024

A new study shows severe language bias in AI models

A new study shows severe language bias in AI models

A new study shows severe language bias in AI models

Large language models (LLMs) are more likely to criminalise users that use African American English, the results of a new Cornell University study show.

Anna Desmarais reports for Euronews on a recent study from Cornell University highlighting a concerning trend where Large Language Models (LLMs) such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and French Mistral's 7B, may disproportionately associate users who communicate in African American English with negative stereotypes, inclduing criminality. This study delves into the subtle racism embedded within LLMs, a type of deep learning algorithm tasked with generating human-like text. The investigation by Cornell University suggests that the dialect one speaks significantly influences the AI's assumptions about one's character, job prospects, and criminal tendencies.

In an effort to explore this issue, researchers conducted a matched guise probing study, feeding the LLMs prompts in both African American English and Standard American English to see how the models would perceive speakers of each dialect. Valentin Hofmann of the Allen Institute for AI found particularly alarming results, e.g., that the AI was more inclined to predict harsher punishments, like death sentences, for speakers of dialects commonly used by African Americans, without any reference to the speakers' race. Hofmann raised concerns about the implications of such biases, especially as AI systems are increasingly applied in business and legal contexts.

The study also found that these models tended to assign less prestigious jobs to speakers of African American English, despite no racial identifiers being provided. It noted that larger LLMs were better at understanding African American English and were less likely to use explicitly racist language. However, this did not seem to mitigate their underlying biases.

Hofmann expressed concern that the decreasing overt racism in LLM responses might be misinterpreted as a resolution of racial bias rather than a shift in how such biases manifest. Traditional methods of training LLMs, such as incorporating human feedback, have not been effective in addressing these covert biases. Instead, these methods might only teach the models to more subtly hide their deep-seated prejudices. Euronews Next reached out to both OpenAI and Meta for their commends on the study's findings, reflecting the growing urgency to address these AI-related ethical concerns.




Credits

Anna Desmarais initially wrote and reported this story for Euronews on March 9, 2024, under the title → "AI models found to show language bias by recommending Black defendants be 'sentenced to death.'."

Photo: An artist’s illustration of artificial intelligence (AI). This illustration depicts language models that generate text. It was created by Wes Cockx as part of the Visualising AI project. Photo © Google DeepMind.