Fortune
Gabby Shacknai

Beauty A.I. has inherent racial bias—but it doesn't have to

(Credit: Getty Images)

Joy Buolamwini’s idea seemed simple. For a class project during graduate school at MIT, she wanted to create a mirror that would inspire her every day by projecting digital images of her heroes onto her face. But when she started using the basic facial recognition software needed to program the mirror, she ran into an unexpected issue: it couldn’t detect her face. Unsure of what was wrong, Buolamwini had a few friends and colleagues test the software on themselves, but it recognized each and every one of them without fail.

Suddenly, the problem became clear: when the grad student reached for a white mask and put it on, a face was instantly detected. The facial recognition A.I. couldn’t pick up on her dark skin.

The experience stuck with Buolamwini and inspired her to research the matter. “I had some questions,” she recalls. “Was this just my face, or are there other things at play?” The grad student began investigating skin type and gender bias in commercial A.I. from companies like Amazon, Google, Microsoft, and IBM, eventually writing her thesis on the subject, and she discovered a troubling pattern. These systems performed better on light-skinned faces than on dark-skinned ones, Buolamwini found: while error rates for lighter-skinned men were less than 1%, they were over 30% for darker-skinned women.
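
Audits like Buolamwini’s come down to disaggregated evaluation: instead of reporting a single overall error rate, accuracy is broken out per demographic subgroup so that gaps become visible. A minimal sketch of that bookkeeping, using hypothetical group labels and toy data rather than her actual benchmark, might look like this:

```python
# Disaggregated evaluation: error rates reported per subgroup instead of as
# one overall number. Group labels and records below are illustrative toys,
# not Buolamwini's actual benchmark data.
from collections import defaultdict

def error_rates_by_group(results):
    """results: iterable of (group, correct) pairs, e.g. ("darker_female", False)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        errors[group] += not correct
    return {group: errors[group] / totals[group] for group in totals}

# Toy data shaped like the gap the thesis reported: under 1% error for
# lighter-skinned men, over 30% for darker-skinned women.
sample = ([("lighter_male", True)] * 199 + [("lighter_male", False)]
          + [("darker_female", True)] * 130 + [("darker_female", False)] * 70)

for group, rate in sorted(error_rates_by_group(sample).items()):
    print(f"{group}: {rate:.1%} error")  # darker_female: 35.0%, lighter_male: 0.5%
```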

At the time, usage of A.I. was rising rapidly, and every industry and sector was beginning to embrace its capabilities; even then, it was obvious to Buolamwini that this was only the beginning. “This problem became urgent to me because I was seeing how A.I. was used in more and more parts of life—who gets hired, who gets fired, who gets access to a loan,” she explains. “Opportunities were being governed by algorithmic gatekeepers, and that meant oftentimes, these gatekeepers were choking opportunity on the basis of race and the basis of gender.”

After finishing grad school, Buolamwini decided to continue her research on A.I.’s racial bias and quickly realized that much of it stemmed from the non-diverse datasets and imagery used by a disproportionately white, male tech workforce to train A.I. and inform its algorithms.

By 2018, major publications like the New York Times had started shining a light on her findings, forcing tech companies to pay attention. Even as tech-world players went on the defensive and obscured their own involvement, the problem became glaring for many consumers and brands looking to use A.I., and for those who had experienced it firsthand, it felt like there was finally an explanation.

“This is absolutely something that I’ve experienced as a Black American moving through the world,” says Dr. Ellis Monk, associate professor of sociology at Harvard University. He’s encountered cameras that won’t take his photo in certain lighting, automatic hand dryers that can’t detect his hand, and even search results showing only white babies when he looked up “cute babies.” “You just notice that a lot of technologies take for granted that they work for everyone, and in reality, they just kind of ignore your existence, which can feel very dehumanizing.”

Dr. Monk, who has been researching skin tone stratification and colorism for over a decade, is well acquainted with the discrimination based on skin tone that has been widespread in the United States since the era of slavery.

“Even though people talk about racial inequality and racism, there’s a lot of heterogeneity in differences in and across these census categories that we tend to use all the time—Black, Asian, Latinx, white, et cetera—and these differences aren’t necessarily picked up very easily if we just stay at the level of these broad census categories, which lump everyone together regardless of their phenotypical appearance,” he says. “But what my research shows is that almost everything that we talk about when we think of racial inequality—from the education system to how we deal with police and judges to mental and physical health, wages, income, everything we can think of—is actually based in skin tone inequality or skin tone stratification. So, there are incredible amounts of life outcomes related to the lightness or darkness of someone’s skin.”

With colorism so deeply ingrained in American society, Dr. Monk says it’s only natural that it would extend to technologies programmed by Americans. “When we think about transitioning into the world of tech, the same things that are being marginalized and ignored by the conversations we have around racial inequality in the U.S.—skin tone and colorism—are also being marginalized and ignored in the tech world,” he explains. “People historically haven’t tested their products across different racial categories, which certainly includes the skin tone aspects of computer-vision technologies.”

As a result, from the very outset, A.I. products are not made with the intention that they will work well for everyone. “If you’re not intentional about designing your products to work well across the entire skin tone continuum and rigorously testing to make sure that’s the case, then you’re going to have these huge issues in technology,” the Harvard professor adds.

Dr. Monk believes that the growing adoption of A.I., particularly by non-tech industries, has helped shine a light on the technological shortcomings surrounding colorism—but more importantly, it’s brought attention to the underlying issue: colorism as a whole. He thinks that if this is considered and addressed, remedying A.I.’s racial bias and changing the dynamics on which it operates is entirely possible. And it’s with that in mind that Dr. Monk launched a partnership with Google earlier this year.

The collaboration came about after people working in responsible A.I. at Google reached out to Dr. Monk a few years ago to discuss his research on skin tone bias and machine learning. They soon learned about a 10-point skin tone scale that the sociology professor had designed and been using in his own work and research, one shown to be significantly more inclusive than the Fitzpatrick Scale, the six-type industry standard for decades, and as inclusive as a 40-point scale.

“What the scale enables us to do is make sure that we’re measuring skin tone well so that we have data and analysis that speak to these forms of inequality and can begin to have a more robust, and frankly, more honest, discussion about how race matters in the U.S. and beyond,” Dr. Monk says.

Google announced in May that it would release the Monk Skin Tone Scale and integrate it across its platforms to improve representation in imagery and to evaluate how well its products or features work across skin tones. It also hopes that doing so will usher in change across A.I., well beyond the bounds of Google, whereby all kinds of A.I.-powered products and services are built with more representative datasets and can therefore break away from the racial bias that has long dominated the technology.
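
Concretely, adopting such a scale means annotating evaluation data with one of the Monk Skin Tone (MST) scale’s ten buckets and requiring product metrics to be reported per bucket, not just in aggregate. The sketch below illustrates that pattern; the metric, sample data, and quality floor are assumptions for illustration, not Google’s actual release criteria.

```python
# Per-bucket reporting: compute a quality metric separately for each of the
# ten MST buckets and flag any bucket that is untested or below a floor.
# The 0.90 floor and the sample scores are invented for illustration.
from statistics import mean

def report_by_mst_bucket(samples, floor=0.90):
    """samples: iterable of (mst_bucket, score) with mst_bucket in 1..10."""
    by_bucket = {bucket: [] for bucket in range(1, 11)}
    for bucket, score in samples:
        by_bucket[bucket].append(score)
    for bucket, scores in by_bucket.items():
        if not scores:
            print(f"MST {bucket:2d}: untested  <-- gap in the eval set")
        else:
            avg = mean(scores)
            flag = "" if avg >= floor else "  <-- below floor"
            print(f"MST {bucket:2d}: n={len(scores):3d}  mean={avg:.3f}{flag}")

report_by_mst_bucket([(1, 0.97), (1, 0.95), (6, 0.93), (10, 0.81), (10, 0.84)])
```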

Dr. Monk believes that his partnership with Google is a testament to the possibility of correcting the historical wrongs present in A.I., but he points out that it wouldn’t have to come to correction if things were done right to begin with. “A lot of the time, there’s such a rush to be the first to do something that it can supersede the kind of caution that we need to take whenever we introduce any form of this technology into society,” he says. “What I would say is that there probably needs to be a lot more caution about launching these technologies in the first place, so it’s not just about mitigating the things that are already out there and trying to fix them.”

And while that kind of thinking may not yet be the norm, some younger players in the A.I. space have made an effort to address and remedy racial bias from the start. One such company is leading A.I. provider Perfect Corp., whose products have been licensed by countless beauty and fashion brands, including Estée Lauder, Neutrogena, and Target, and by several tech companies, like Meta and Snap. Unlike some of the tech companies that came onto the scene before there was any awareness of A.I.’s racial bias, execs at Perfect Corp. feel a sense of responsibility to create technologies that work for everyone, regardless of skin tone.

“Inclusivity across the complete range of skin tones was a priority from the initial conception of the technology and one that helped to direct the development of our tools,” says Wayne Liu, the chief growth officer of Perfect Corp. The company, which was founded by Alice Chang, a woman of color, was aware of A.I.’s limitations from the beginning, so it worked to find solutions before going to market.

“We developed advanced technologies, like advanced auto-adjust settings for adaptive lighting and angles, in order to ensure an inclusive and accurate experience that incorporated the complete range of skin tones,” Liu explains.
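
Liu doesn’t specify the underlying methods, but a classic building block for this kind of lighting adaptation is automatic white balance. The gray-world correction below is offered purely as a generic illustration of the idea, not as Perfect Corp.’s actual technique:

```python
# Gray-world white balance: assume the scene's average color should be neutral
# gray, and rescale each channel so the channel means agree. A textbook
# lighting-normalization step, shown as a generic illustration only; this is
# not Perfect Corp.'s pipeline.
import numpy as np

def gray_world_balance(image):
    """image: H x W x 3 float array in [0, 1]; returns a white-balanced copy."""
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(image * gains, 0.0, 1.0)
```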

But Perfect Corp. knew that as a provider of A.I.-powered products to other brands, navigating the technology’s deficiencies didn’t stop with its team, so the company made a point of also working with its brand partners to ensure that any racial biases were addressed in the development phase. “The widespread and accurate application of our A.I. solutions as it applies to all consumers is essential to the success of our tools and solutions, and necessary in order for brands and consumers to depend on this type of technology as a utility to aid them in their purchase decisions,” Liu adds.

Several years after launching its A.I. Shade Finder and its A.I. Skin Analysis tools, Perfect Corp. has remained true to its initial goal of inclusion. Its technology boasts 95% test-retest reliability and continues to match or surpass human shade-matching and skin analysis. Even with these myriad efforts and consistently impressive results, however, Liu knows that, despite Perfect Corp.’s name, no company is perfect and there will always be room for improvement. He and his colleagues feel that feedback and adaptability are essential to the growth of their technology and to the industry as a whole.
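
Test-retest reliability is a simple agreement statistic: run the same person through the tool twice and count how often the two results match. A minimal sketch, with made-up shade labels:

```python
# Test-retest reliability as percentage agreement between two runs of the
# same measurement. The shade labels below are invented for illustration.
def test_retest_agreement(pairs):
    """pairs: list of (first_run, second_run) shade labels for the same user."""
    return sum(a == b for a, b in pairs) / len(pairs)

runs = [("N20", "N20"), ("W35", "W35"), ("C40", "W35"), ("N20", "N20")]
print(f"agreement: {test_retest_agreement(runs):.0%}")  # 75% on this toy data
```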

“It’s critical that we listen to all feedback, both from brand partners and retailers, and that which we observe from evolving consumer behaviors, in order to continue developing and delivering technology that aids in the consumer shopping journey,” he says. “A.I. is an experience for all, not an experience for most, and the success of the technology as a true tool to aid in the consumer shopping experience is dependent on its accuracy and ability to work for all consumers, not just a segment of them.”
