Zenger
Technology
Oluwapelumi Adejumo

Artificial Intelligence Algorithms Show Gender Bias

Google Bard vs. OpenAI ChatGPT displayed on a mobile phone, with the OpenAI and Google logos on the screen, in this photo illustration taken Feb. 7, 2023, in Belgium. JONATHAN RAA/GETTY IMAGES

Artificial intelligence (AI) has attracted enormous interest from nearly every major industry and company. But a recent Guardian investigation has highlighted problems, including gender bias in AI tools.

The Guardian investigation examined how well AI performs at detecting sexual suggestiveness, i.e., the raciness of images.

With the increasing need for content moderation, particularly on social media, AI tools have been touted as a means of flagging images that may be inappropriate for their audiences. The Guardian's research, however, calls the neutrality of such tools into question.

In this photo illustration, the DataRobot logo is displayed on a mobile phone screen with "AI" written in the background, Feb. 17, 2023. Amid growing demand for content moderation, particularly on social media, AI tools have been hailed as a way of detecting photos that may be inappropriate for their audiences. IDREES ABBAS/SOPA/GETTY IMAGES

AI Algorithms Show Gender Bias: Guardian Report

To test this claim, Guardian journalists used AI tools to analyze hundreds of pictures of men and women working out, posing in underwear, and undergoing medical examinations involving partial nudity. The tools labeled images of women as sexually suggestive more often than comparable images of men, even in everyday situations.

The pattern held for medical pictures, including those of pregnant women: the algorithms mostly rated images of women as racier than similar images of men.

Such findings suggest that any platform deploying these tools may end up censoring more photos of women, further limiting the reach of women-focused businesses.

Some of the AI tools tested by the journalists came from major tech firms, including Google, Amazon, and Microsoft, indicating that the problem is not unique to startups.
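For readers curious how such a raciness rating is produced in practice, the sketch below shows one plausible way to query a commercial moderation service of the kind mentioned above, using Google Cloud Vision's SafeSearch detection as an example. It is an illustration under assumptions (the standard google-cloud-vision Python client, configured credentials, and a hypothetical example.jpg), not the Guardian's published test harness.

```python
# Minimal sketch (not the Guardian's exact methodology): asking one commercial
# moderation service -- Google Cloud Vision's SafeSearch detection -- for a
# "racy" likelihood label on a local image file.
# Assumes the google-cloud-vision Python client is installed and credentials
# are configured via GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import vision


def racy_likelihood(image_path: str) -> str:
    """Return the SafeSearch 'racy' likelihood label for a local image."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.safe_search_detection(image=image)
    annotation = response.safe_search_annotation

    # Likelihood is an enum: UNKNOWN, VERY_UNLIKELY, UNLIKELY, POSSIBLE,
    # LIKELY, VERY_LIKELY. Comparing these labels across paired photos of
    # men and women is, in essence, the kind of test the journalists ran.
    return vision.Likelihood(annotation.racy).name


if __name__ == "__main__":
    print(racy_likelihood("example.jpg"))  # hypothetical file name
```

A comparison like the Guardian's would run this kind of call over many matched male/female image pairs and tally how often each group receives the higher likelihood label.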

Google, Amazon, and Microsoft are known to supply content-moderation AI tools to major platforms such as LinkedIn and Instagram. Several users have previously reported being shadowbanned or having their content removed from social media after AI deemed it racy, with women disproportionately affected.

AI’s Bias History

This is not the first time AI biases have been exposed. Researchers from Ben-Gurion University of the Negev and Canada's Western University found that AI systems exhibited more exaggerated biases than humans when it comes to facial age recognition.

A 2019 New York Times report, meanwhile, warned that AI tools learn from vast troves of digitized information and can pick up the human biases embedded in it.

For example, BERT, a Google AI language model, associated men with computer programming and gave women less credit in that context. In most cases, AI tools echo the gender and racial biases present in today's society. This is evident in a paper by Cambridge University researchers, which found that AI recruitment tools did little to improve diversity or reduce bias.

MetaNews previously reported that the AI selfie application Lensa generated controversy over its sexualization of women.

The example AI has chosen for a boy is football while for a girl is dancing. Let’s pause for a while and think about it. #ai #gender #bias #chatgpt pic.twitter.com/af7TUZwIFL

— Onur Buyukceran (@onur_buyukceran) February 7, 2023

Companies Confront AI Bias

So far, several companies have been striving to meet ethical AI guidelines, but there is no ready-made solution for preventing such biases from emerging. As a result, some companies are opting to sunset products that show signs of bias.

IN FILE: A person is checked by an automated AI temperature-screening system on Aug. 22, 2020, in New York City. Any platform that relies on AI moderation tools may wind up restricting more images of women, shrinking the market for businesses that cater to women. NOAM GALAI/GETTY IMAGES

In 2018, Amazon scrapped plans for an AI-powered recruitment engine after noticing that the tool discriminated against women. Microsoft likewise announced in 2022 that it would stop selling AI-based facial-analysis tools because of the algorithms' problematic biases and inaccuracies.

With artificial intelligence making a major comeback in 2023, ethical concerns and the risk of inheriting human biases remain pressing challenges for companies and developers.


Produced in association with MetaNews.
