Authenticating AI-generated content, or watermarking, has gained traction these last few months.
In July, companies including OpenAI, Google and Meta made voluntary commitments to the White House to implement guardrails to help make invisible watermarking safer and more transparent. Meta’s Instagram appears to be testing new notices to identify content created or modified by AI. In June, Publicis Groupe joined the Coalition for Content Provenance and Authenticity (C2PA) and is working on the wide adoption of digital watermarks.
Analyzing the history of content, or its provenance, helps brands and creators implement safety measures, build audience trust, and ensure owners are fairly compensated for their work.
“For our brand advertisers, that provenance ensures that the content brands use has a clear chain of ownership,” said Ray Lansigan, evp, corporate strategy and solutions, Publicis Digital Experience. Tech and media companies, including Adobe, Microsoft, BBC, Sony and Intel are also part of the coalition.
This comes as almost 9 in 10 Americans want AI-generated content to be labeled as such, according to a Greenough Pulse survey of over 2,000 adults. Despite the logic in identifying AI-generated content, watermarks pose key challenges, especially the ease with which they can be removed.
Visible or invisible watermarks
Content such as images carries embedded metadata: text information stored inside the file that records details like how the image was created, or where and when a photo was taken.
Recently, some tech companies have added AI-specific metadata watermarks to their products to distinguish human-produced content from content created with generative tools such as large language models (LLMs) like ChatGPT.
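To make the mechanics concrete, here is a minimal sketch of how a text label can be embedded in a file's metadata, using PNG text (tEXt) chunks as the example. This is an illustration of metadata embedding in general, not any specific company's watermarking scheme, and the "AI-Generated" keyword is a hypothetical label chosen for the example.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png() -> bytes:
    """Create a minimal valid 1x1 grayscale PNG."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def add_text(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt metadata chunk just before the closing IEND chunk."""
    body = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    iend = png.rindex(b"IEND") - 4  # back up over the 4-byte length field
    return png[:iend] + chunk(b"tEXt", body) + png[iend:]

def read_text(png: bytes) -> dict:
    """Walk the chunk list and collect all tEXt key/value pairs."""
    out, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return out

labeled = add_text(make_png(), "AI-Generated", "true")
print(read_text(labeled))  # the label survives in the file's metadata
```

Note that the same walk-and-rebuild logic also shows the weakness discussed below: anyone who re-encodes the image, or simply copies every chunk except tEXt, strips the label without altering a single pixel.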
“A motivation for [tech giants] is they don’t want to use their content to feed back into the next generation of their LLM model,” said Chirag Shah, professor in the Information School at the University of Washington. “This creates a feedback loop within the model. In the long run, it’s costly and creates a siloed view of the world.”
Adobe’s content credentials tool—a free, open-source technology that serves as a digital “nutrition label” for content—tracks images edited by generative AI. Content produced using Adobe’s generative AI tools, like Photoshop Generative Fill, contains metadata that indicates whether the artwork created is partially or wholly AI-generated. The information for digital content stays with the file wherever it’s published or stored and can be accessed by anyone.