Hold the Sombrero: How Hispanic Agencies Address Racial Bias in AI
When the multicultural agency Lerma prompted the generative artificial intelligence tool Midjourney to generate an image of a Hispanic or Latinx man at work, the results were troubling. The team consistently received stereotypical images featuring a male figure with a prominent mustache and wearing a sombrero.
“And no matter how we regenerated prompts hoping to get different results, we ended up getting the same result,” said Pedro Lerma, the agency’s founder and CEO.
Similarly, the agency Alma used Midjourney to present a proof of concept for its client Rockstar Energy Drink, previewing what a campaign targeting a multicultural audience would look like. That, too, led to problems.
“The tool would occasionally produce images featuring individuals with blonde hair when we requested images of women,” said Mike Sotelo, vp of digital at Alma. “Multicultural audiences typically don’t have blonde hair.”
And when it sought images of a Latinx man, Alma likewise received stereotypical cultural pictures, such as a Latinx individual wearing a mariachi hat while eating a taco.
“That’s what we don’t want and that is what happens quite a bit,” added Sotelo.
These are just a few examples of the racial biases stemming from generative AI, in both image and text-based tools, that are becoming a bigger concern for Hispanic and broader multicultural agencies. The problem is particularly prominent in client-facing projects, where creating accurate visual references for storyboards is a crucial part of the work. Even when agencies refine their prompts and image requirements, the results are either distorted or lack diverse visual output altogether.
The use of AI tools is only expected to grow: major corporations are expected to generate approximately 30% of their marketing content with generative AI tools by 2025, according to Gartner. The technology’s struggle to accurately represent Latinx culture undermines the progress made toward inclusive representation in marketing.
The problem with changing prompts
To counteract inaccurate image generation, agencies like Alma use highly specific prompts, down to details such as hair color. However, refining prompts doesn’t always work.
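As a rough illustration of the kind of prompt refinement described above, here is a minimal sketch using an off-the-shelf Stable Diffusion pipeline from Hugging Face’s diffusers library as a stand-in for the tools the agencies mention. The model ID, prompt wording and parameters are assumptions for illustration, not the agencies’ actual workflow.

```python
# A minimal sketch of prompt refinement with a text-to-image model.
# Model ID, prompts and settings are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A vague prompt tends to fall back on the dominant patterns in the model's training data.
vague = pipe(prompt="a Latina woman at work").images[0]

# A more specific prompt pins down attributes (hair color, setting, styling)
# that the model might otherwise default to incorrectly.
specific = pipe(
    prompt=(
        "a Latina woman with dark brown hair, working at a laptop in a modern office, "
        "natural lighting, photorealistic"
    ),
    negative_prompt="blonde hair, costume, stereotype",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

vague.save("vague.png")
specific.save("specific.png")
```

Even with that level of specificity, as the agencies note, results can remain distorted or fall back on stereotypes.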
Earlier this year, while working on an internal project to create an AI-generated music video, Lerma encountered several cultural stereotypes in the representation of human models. The problem occurred across several AI tools, including Midjourney, Stable Diffusion and Adobe’s Firefly.
“We found that the tools were deficient in trying to represent who we are as an agency,” said Lerma.
In an attempt to rectify these issues, Lerma tried editing the text further in Midjourney for the music video, adding details like location, visual composition, atmosphere, inclusion of the camera, and body language. As a result, the images took on a distorted and unnatural appearance, with some showing humans with missing body parts.
In another case, when responding to the prompt of “three Hispanic males standing in the middle of a street,” the AI tool generated an image of three people closely resembling each other, lacking the diversity of features within specific groups that the agency had intended to represent.
“It’s important for [AI] platforms to consider looking at consultants, such as marketing planners or even anthropologists, to provide input onto these tools to try and avoid those biases from happening,” said Alma’s Sotelo.
Addressing these biases
Lerma has instead built and trained its own open-source AI model, LERM@NOS, released on the Hugging Face platform, which helps the agency’s creative teams deliver client-facing work such as storyboard compositions without running into biased results.
The agency ran a photoshoot involving its diverse employees, capturing them from various angles and under different lighting setups. In total, the agency generated a dataset of nearly 18,000 variations of facial features, which served as the training data for LERM@NOS.
As a result, images from LERM@NOS do not attribute any single facial feature to an individual based on ethnicity or background. Throughout the training process, the agency refrained from giving the model explicit directives or categorizations regarding the ethnicity of the people in the training images; the only instruction provided was that it was being shown images of “people.” The model learns from those images and continues to be trained on feedback to refine its output and address any inaccuracies or biases.
“It consistently adapts and diversifies its output with each request, ensuring that it avoids any rigid associations between facial features and a particular person’s identity,” said Pranav Kumar, digital project manager at Lerma.
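For readers curious how a model could be trained on in-house photography with only a generic “people” label, here is a minimal sketch using Hugging Face’s datasets and diffusers libraries. It is not Lerma’s actual pipeline; the base model, directory name, caption and hyperparameters are all assumptions made for illustration.

```python
# A minimal sketch (not Lerma's pipeline) of fine-tuning a text-to-image model
# on an in-house photo set with only a generic "people" caption.
import torch
import torch.nn.functional as F
from datasets import load_dataset
from diffusers import StableDiffusionPipeline, DDPMScheduler
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
base_model = "runwayml/stable-diffusion-v1-5"  # assumed base model

# Load a base text-to-image pipeline and pull out its components.
pipe = StableDiffusionPipeline.from_pretrained(base_model).to(device)
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_pretrained(base_model, subfolder="scheduler")

# Only the denoising UNet is fine-tuned; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.train()

# "photos/" stands in for the agency-style photoshoot images; per the article,
# no ethnicity labels are attached, only that these are images of people.
dataset = load_dataset("imagefolder", data_dir="photos/", split="train")
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)
caption = "a photo of people"  # generic caption, the model's only instruction

for example in dataset:
    pixel_values = preprocess(example["image"].convert("RGB")).unsqueeze(0).to(device)

    with torch.no_grad():
        # Encode the photo into latent space and the caption into text embeddings.
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
        input_ids = tokenizer(caption, return_tensors="pt").input_ids.to(device)
        encoder_hidden_states = text_encoder(input_ids)[0]

    # Add noise at a random timestep and train the UNet to predict that noise.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because the captions carry no ethnicity labels, any later feedback-driven corrections would have to come from reviewing the generated images themselves, which mirrors the iterative refinement the agency describes.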