TechRadar
Graham Barlow

ChatGPT is judging you based on your name, and here’s what you can do about it


A new study by OpenAI has found that GPT-4o, the model behind ChatGPT, gives different responses based on your name in a very small number of situations.

Developing an AI isn’t a simple programming job where you write a set of rules telling the system what to say. An LLM (the large language model on which a chatbot like ChatGPT is based) needs to be trained on huge amounts of data, from which it identifies patterns and starts to learn.

Of course, that data comes from the real world, so it is often full of human biases, including gender and racial stereotypes. The more training you do on your LLM, the more you can weed out these stereotypes and biases and reduce harmful outputs, but it would be very hard to remove them completely.

What's in a name?

Writing about the study (called First-Person Fairness in Chatbots), OpenAI explains, “In this study, we explored how subtle cues about a user's identity—like their name—can influence ChatGPT's responses." It’s interesting to investigate whether an LLM like ChatGPT treats you differently if it perceives you as male or female, especially since you need to tell it your name for some applications.

AI fairness is typically associated with tasks like screening resumes or credit scoring, but this piece of research was more about the everyday stuff that people use ChatGPT for, like asking for entertainment tips. The research was carried out across a large number of real-life ChatGPT transcripts and looked at how identical requests from users with different names were handled.
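The core idea is simple enough to try yourself: send the same request while changing only the name you introduce yourself with, then compare the replies. Below is a minimal sketch of that idea using OpenAI's Python SDK; the model name, the prompt, and the example names are illustrative assumptions, and this is not OpenAI's actual evaluation pipeline, which used a second language model to grade millions of real transcripts.

```python
# A minimal sketch of the name-swap idea behind the study: ask the same
# question while changing only the user's stated name, then compare replies.
# Illustrative only -- model name, prompt, and names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Suggest five YouTube channels I might enjoy."  # example request

def ask_as(name: str) -> str:
    """Send the same prompt while introducing a different user name."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whichever you use
        messages=[
            {"role": "user", "content": f"My name is {name}. {PROMPT}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for name in ("John", "Amanda"):
        print(f"--- Response for {name} ---")
        print(ask_as(name))
```

Comparing a single pair of responses like this proves nothing on its own; differences only become meaningful across a large number of prompts, which is why the study worked at the scale of real-world transcripts.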

AI fairness

“Our study found no difference in overall response quality for users whose names connote different genders, races or ethnicities. When names occasionally do spark differences in how ChatGPT answers the same prompt, our methodology found that less than 1% of those name-based differences reflected a harmful stereotype”, said OpenAI.

Less than 1% is hardly significant, but it’s not 0%. Even with harmful responses appearing in under 0.2% of cases for GPT-4o, it’s still possible to identify trends in the data, and it turns out that entertainment and art are the fields where the most harmful gender-stereotyping responses were found.


Gender bias in ChatGPT

There have certainly been other research studies into ChatGPT that have found evidence of bias. Ghosh and Caliskan (2023) focused on AI-moderated and automated language translation, and found that ChatGPT perpetuates gender stereotypes assigned to certain occupations or actions when converting gender-neutral pronouns to ‘he’ or ‘she.’ Similarly, Zhou and Sanfilippo (2023) conducted an analysis of gender bias in ChatGPT and concluded that it tends to show implicit gender bias when allocating professional titles.

It should be noted that 2023 was before the current GPT-4o model was released, but it could still be worth changing the name you give ChatGPT in your next session to see if the responses feel different to you. Remember, though, that in OpenAI's most recent research, responses reflecting harmful stereotypes were found in only around 0.1% of cases with the current GPT-4o model, while biases in older LLMs appeared in up to 1% of cases.
