After AI-generated porn report, Washington Lottery pulls down interactive web app

A user of the Washington Lottery’s “Test Drive a Win” website says it used AI to generate (the unredacted version of) this image with her face on a topless body.

The Washington State Lottery has taken down a promotional AI-powered web app after a local mother reported that the site generated an image with her face on the body of a topless woman.

The lottery’s “Test Drive a Win” website was designed to help visitors visualize various dream vacations they could pay for with their theoretical lottery winnings. The site included the ability to upload a headshot that would be integrated into an AI-generated tableau of what you might look like on that vacation.

But Megan (last name not given), a 50-year-old from the Olympia suburb of Tumwater, told conservative Seattle radio host Jason Rantz that the image of her “swim with the sharks” dream vacation on the website showed her face atop a woman sitting on a bed with her breasts exposed. The background of the AI-generated image seems to show the bed in some sort of aquarium, complete with fish floating through the air and sprawling undersea flora sitting awkwardly behind the pillows.

The corner of the image features the Washington Lottery logo.

“Our tax dollars are paying for that! I was completely shocked. It’s disturbing to say the least,” Megan told Rantz. “I also think whoever was responsible for it should be fired.”

“We don’t want something like this purported event to happen again”

The non-functional “Test Drive a Win” website as it appeared Thursday.

In a statement provided to Ars Technica, a Washington Lottery spokesperson said that the lottery “worked closely with the developers of the AI platform to establish strict parameters to govern image creation.” Despite this, the spokesperson said they were notified earlier this week that “a single user of the AI platform was purportedly provided an image that did not adhere to those guidelines.”

Though the spokesperson said the site generated “thousands” of inoffensive images over more than a month of operation, “one purported user is too many and as a result we have shut down the site” as of Tuesday, the spokesperson said.

The spokesperson did not respond to specific questions about which AI models or third-party vendors may have been used to create the site, or about the specific safeguards that were crafted in an attempt to prevent results like the one reported by Megan.

Speaking to Rantz, a lottery spokesperson said the organization had “agreed to a comprehensive set of rules” for the site’s AI images, “including that people in images be fully clothed.” Following the report of the topless image, the spokesperson said they “had the developers check all the parameters for the platform.” And while they were “comfortable with the settings,” the spokesperson told Rantz they “chose to take down the site out of an abundance of caution, as we don’t want something like this purported event to happen again.”

Not a quick fix?

On his radio show, Rantz expressed surprise that the lottery couldn’t keep the site operational after rejiggering the AI’s safety settings. “In my head I was thinking, well, presumably once they heard about this they went back to the backend guidelines and just made sure it said, ‘Hey, no breasts, no full-frontal nudity,’ those kinds of things, and then they fixed it, and then they went on with their day,” Rantz said.

But it might not be that simple to effectively rein in the endless variety of visual output an AI model can generate. While models like Stable Diffusion and DALL-E have filters in place to prevent the generation of sexual or violent images, researchers have found that those models still respond to problematic prompts by generating images that an image classifier judges “unsafe” a significant minority of the time. Malicious users can also use prompt-engineering tricks to get around these built-in safeguards in popular text-based image-generation models.
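To see why blunt guardrails of the sort Rantz imagined ("no breasts, no full-frontal nudity") are easy to evade, consider a minimal sketch of a keyword-blocklist prompt filter. Everything here is illustrative, not code from any real lottery vendor or image-generation product:

```python
# A naive keyword blocklist, the kind of "strict parameter" a vendor
# might layer in front of an image model. Illustrative only.
BLOCKED_TERMS = {"nude", "topless", "nsfw", "violence"}

def prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocklisted word."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

# A direct request is caught...
print(prompt_allowed("a topless woman on a bed"))
# ...but an indirect phrasing slips right past the word list,
# which is why keyword filters alone can't guarantee safe output.
print(prompt_allowed("a woman wearing nothing, swimming with sharks"))
```

The second prompt sails through because no single blocked word appears, even though the intent is the same; and note the model can also produce unsafe images from prompts that were never unsafe to begin with, which no input-side filter catches. That is why production systems typically add an output-side classifier on the generated image itself, and why even that combination fails a minority of the time.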

We’ve seen these kinds of AI image-safety issues blow back on major corporations, too, as when Facebook’s AI sticker generator put weapons in the hands of children’s cartoon characters. More recently, a Microsoft engineer publicly accused the company’s Copilot image-generation tool of randomly creating violent and sexual imagery even after the team was warned of the issue.

The Washington Lottery’s AI issue comes a week after a report found a New York City government chatbot confabulating incorrect advice about city laws and regulations. “It’s wrong in some areas and we gotta fix it,” New York City Mayor Eric Adams said this week. “Any time you use technology, you need to put it in the real environment to iron out the kinks. You can’t live in a lab. You can’t stay in a lab forever.”