2024 Topical Reporting: Race, Ethnicity, Gender and Identity, Small Newsroom winner

How AI reduces the world to stereotypes

About the Project

Bias occurs in many algorithms and AI systems — from sexist and racist search results to facial recognition systems that perform worse on Black faces. Generative AI systems are no different. In an analysis of more than 5,000 AI images, Bloomberg found that images associated with higher-paying job titles featured people with lighter skin tones, and that results for most professional roles were male-dominated.

Our analysis shows that generative AI systems also tend toward bias, stereotypes, and reductionism when it comes to national identities. Using Midjourney, we chose five prompts based on the generic concepts of “a person,” “a woman,” “a house,” “a street,” and “a plate of food.” We then adapted them for different countries: China, India, Indonesia, Mexico, and Nigeria. We also included the U.S. in the survey for comparison, given that Midjourney (like most of the biggest generative AI companies) is based there. For each prompt and country combination (e.g., “an Indian person,” “a house in Mexico,” “a plate of Nigerian food”), we generated 100 images, resulting in a data set of 3,000 images.
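
To make the sampling scheme concrete, here is a minimal Python sketch of how the 30-cell prompt grid described above could be assembled. The concepts, countries, and 100-images-per-prompt figure come from the paragraph above; every function and variable name is our own illustration, and the actual Midjourney generation step is not shown (Midjourney is operated through Discord and offers no public API, so any call to it here would be invented).

```python
# A minimal sketch of the prompt grid: five generic concepts adapted to six
# countries, 100 images per combination. All names here are illustrative;
# this is not the newsroom's actual code.

COUNTRIES = {
    # country phrase used in "in {country}" prompts -> adjective form
    "China": "Chinese",
    "India": "Indian",
    "Indonesia": "Indonesian",
    "Mexico": "Mexican",
    "Nigeria": "Nigerian",
    "the United States": "American",
}

IMAGES_PER_PROMPT = 100  # 5 concepts x 6 countries x 100 = 3,000 images

def article(adjective: str) -> str:
    """Pick 'a' or 'an' ('an Indian person' vs. 'a Chinese person')."""
    return "an" if adjective[0].lower() in "aeiou" else "a"

def prompts_for(country: str, adjective: str) -> dict:
    """Adapt the five generic concepts to one country."""
    return {
        "person": f"{article(adjective)} {adjective} person",
        "woman": f"{article(adjective)} {adjective} woman",
        "house": f"a house in {country}",
        "street": f"a street in {country}",
        "food": f"a plate of {adjective} food",
    }

def build_prompt_grid() -> list:
    """Return (concept, country, prompt) tuples covering the full grid."""
    return [
        (concept, country, prompt)
        for country, adjective in COUNTRIES.items()
        for concept, prompt in prompts_for(country, adjective).items()
    ]

if __name__ == "__main__":
    grid = build_prompt_grid()
    print(len(grid), "prompt/country combinations")          # 30
    print(len(grid) * IMAGES_PER_PROMPT, "images in total")  # 3000
```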

Here are just a few examples of our findings. For every country except the U.S., the majority of people depicted appeared to be men; the U.S. prompt returned 94 images of women. (Note that totals may not add up to 100, as we excluded images where the gender was unclear.) We also found that images produced for the “person” prompt (which, for all countries but the U.S., consisted mainly of men) tended to show people with darker skin than images produced for the “woman” prompt. It would be excessive to show all 3,000 images in our story, so we offered access to a complete archive of the images on GitHub.
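
As an illustration of the kind of tally behind figures like the 94-of-100 count above, here is a hedged sketch assuming a hand-labeled annotation file; the filename and column names are hypothetical, since the article does not describe the newsroom's internal tooling. The exclusion of unclear images mirrors the caveat noted in the paragraph above.

```python
import csv
from collections import Counter, defaultdict

def tally_gender_by_country(path: str = "annotations.csv") -> dict:
    """Count labeled genders per country from a hypothetical CSV with one
    row per generated image and columns "country" and "gender" (labels
    assumed to be "man", "woman", or "unclear"). Unclear images are
    excluded, which is why per-country totals may not add up to 100."""
    counts = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            label = row["gender"].strip().lower()
            if label in ("man", "woman"):
                counts[row["country"]][label] += 1
    return counts

if __name__ == "__main__":
    for country, c in sorted(tally_gender_by_country().items()):
        print(f"{country}: {c['man']} men, {c['woman']} women")
```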

The accessibility and scale of AI tools mean they could have an outsized impact on how almost any community is represented. Used carelessly, generative AI could represent a step backwards. Nor are “hallucinations” to blame for this kind of representation: the AI platforms are merely reproducing biases embedded in the imagery used to train them, and so reflect society at large (and the distance still to go in erasing bias from our visual culture). Yet the perception that AI systems are “intelligent,” or that they process information and images in some neutral, even-handed, unbiased way, only adds to the danger such tools present. Our work was intended to address this issue and shine a light on how much remains to be done before AI can be used without the potential to cause harm.