Hold the Sombrero: How Hispanic Agencies Address Racial Bias in AI




When the multicultural agency Lerma prompted the generative artificial intelligence tool Midjourney to generate an image of a Hispanic or Latinx man at work, the results were troubling. The team consistently received stereotypical images featuring a male figure with a prominent mustache and wearing a sombrero.

“And no matter how we regenerated prompts hoping to get different results, we ended up getting the same result,” said Pedro Lerma, the agency’s founder and CEO.

Similarly, agency Alma used Midjourney to present a proof of concept for its client, Rockstar Energy Drink, previewing what a campaign targeting a multicultural audience would look like. That, too, led to problems.

“The tool would occasionally produce images featuring individuals with blonde hair when we requested images of women,” said Mike Sotelo, vp of digital at Alma. “Multicultural audiences typically don’t have blonde hair.”

And when seeking images of a Latinx man, Alma, too, received stereotypical depictions, such as a Latinx individual wearing a mariachi hat while eating a taco.

“That’s what we don’t want and that is what happens quite a bit,” added Sotelo.

These are just a few examples of the racial biases in generative AI, both image- and text-based tools, that are becoming a bigger concern for Hispanic and broader multicultural agencies. The problem is particularly prominent in client-facing projects, where creating accurate visual references for storyboards is a crucial part of the work. Even when agencies refine their prompts and image requirements, the results are either distorted or lack diverse visual output altogether.

The use of AI tools is only expected to grow. Major corporations are expected to generate approximately 30% of their marketing content with generative AI tools by 2025, according to Gartner. The struggle to accurately represent Latinx culture undermines the progress made toward inclusive representation in marketing.

The problem with changing prompts

To counteract inaccurate image generation, agencies like Alma use highly specific prompts, including specifying hair color. However, refining prompts doesn’t always work.

Earlier this year, while working on an internal project creating an AI-generated music video, Lerma encountered several cultural stereotypes in the representation of human models. The problem occurred across several AI tools, including Midjourney, Stable Diffusion and Adobe’s Firefly.

Refined prompts led to distorted images from AI tools. Lerma

“We found that the tools were deficient in trying to represent who we are as an agency,” said Lerma.

To rectify these issues, Lerma edited the prompt text further in Midjourney for the music video, adding details like location, visual composition, atmosphere, camera details and body language. As a result, the images took on a distorted and unnatural appearance, with some showing humans with missing body parts.
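As a rough illustration of what this kind of prompt refinement looks like in practice, here is a minimal sketch using Stable Diffusion, one of the open-source tools Lerma tried, via Hugging Face’s diffusers library. The model id, attribute list and negative prompt are assumptions for illustration, not the agencies’ actual setup, and as Lerma found, even this level of specificity doesn’t guarantee unbiased or anatomically correct output.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative setup: the model id and attributes below are assumptions,
# not the exact workflow Lerma or Alma used.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compose a highly specific prompt from discrete attributes, mirroring
# the kinds of details the agencies added: location, composition,
# atmosphere, camera and body language.
attributes = {
    "subject": "a Hispanic man at work in a creative studio",
    "location": "modern office with large windows",
    "composition": "medium shot, rule of thirds",
    "atmosphere": "warm natural light, candid documentary mood",
    "camera": "35mm lens, shallow depth of field",
    "body_language": "relaxed posture, mid-conversation gesture",
}
prompt = ", ".join(attributes.values())

# A negative prompt steers generation away from unwanted elements --
# here, the stereotypes and anatomical glitches described in the article.
image = pipe(
    prompt,
    negative_prompt="sombrero, mariachi hat, distorted anatomy, missing limbs",
    num_inference_steps=30,
).images[0]
image.save("storyboard_frame.png")
```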

In another case, when responding to the prompt of “three Hispanic males standing in the middle of a street,” the AI tool generated an image of three people closely resembling each other, lacking the diversity of features within specific groups that the agency had intended to represent.

“It’s important for [AI] platforms to consider looking at consultants, such as marketing planners or even anthropologists, to provide input into these tools to try and avoid those biases from happening,” said Alma’s Sotelo.

AI tools sometimes lack the diversity of features within specific groups, leading to people closely resembling each other. Lerma

Addressing these biases

Lerma has instead built and trained its own open-source AI model, LERM@NOS, released on the Hugging Face platform, which helps the agency’s creative teams deliver client-facing work such as storyboard compositions without running into biased results.

The agency ran a photoshoot involving its diverse employees, capturing them from various angles and under different lighting setups. In total, it generated a dataset of nearly 18,000 variations of facial features, which served as the training data for LERM@NOS.

Lerma built its own AI model by capturing its diverse employees from various angles and under different lighting setups. Lerma Agency
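The article doesn’t detail Lerma’s data pipeline, but a minimal sketch of how such a photoshoot could become a training set, assuming a flat folder of exported frames and Hugging Face’s datasets library, might look like this; the folder name and caption are hypothetical:

```python
from datasets import load_dataset

# Hypothetical layout: a single folder of exported photoshoot frames.
# The "imagefolder" builder scans it into a dataset of PIL images with
# no ethnicity labels or demographic categories attached.
dataset = load_dataset("imagefolder", data_dir="photoshoot_frames", split="train")

# Per the article, the only instruction given during training was that
# the model was seeing "people," so every example gets the same generic
# caption instead of demographic metadata.
dataset = dataset.map(lambda example: {"text": "a photo of people"})

print(dataset.num_rows)    # ~18,000 facial-feature variations in Lerma's case
print(dataset[0]["text"])  # "a photo of people"
```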

Consequently, images from LERM@NOS do not attribute any single facial feature to a specific individual based on ethnicity or background. Throughout the training process, the agency refrained from offering explicit directives or categorizations regarding the ethnicity of the images used in the model; the only instruction provided was that the model was being exposed to images of “people.” The model learns from those images and continues to be trained on feedback to refine its output and address any inaccuracies or biases.

“It consistently adapts and diversifies its output with each request, ensuring that it avoids any rigid associations between facial features and a particular person’s identity,” said Pranav Kumar, digital project manager at Lerma.
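The article says LERM@NOS is open source on Hugging Face but doesn’t give its repository path or base architecture. Assuming a Stable Diffusion-compatible checkpoint, loading it and regenerating the earlier problem prompt might look like the following; the repo id is a placeholder, not the model’s real address:

```python
import torch
from diffusers import StableDiffusionPipeline

# "lerma/lermanos" is a placeholder repo id -- the article doesn't
# publish the model's actual Hugging Face path or base architecture.
pipe = StableDiffusionPipeline.from_pretrained(
    "lerma/lermanos", torch_dtype=torch.float16
).to("cuda")

# The prompt that earlier returned three near-identical faces; a model
# trained on varied, unlabeled faces should produce visibly distinct
# individuals, and different ones again on each regeneration.
frames = pipe(
    "three Hispanic males standing in the middle of a street",
    num_images_per_prompt=3,
).images
for i, frame in enumerate(frames):
    frame.save(f"storyboard_{i}.png")
```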



https://www.adweek.com/programmatic/how-hispanic-agencies-address-racial-bias-in-ai/