“It is our intent to build Generative AI in a way that enables customers to monetize their talents,” a company spokesperson said. “We are developing a compensation model for Stock contributors.”
Earlier this year, Microsoft announced media provenance capabilities for Bing Image Creator and Microsoft Designer. These let people verify whether an image or video was generated by AI. The technology, according to a Microsoft blog post, uses cryptographic methods to mark and sign AI-generated content with metadata about its origin.
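The core idea behind such provenance systems is to bind origin metadata to a hash of the content, so that any edit to either invalidates the record. Real deployments (such as the C2PA standard) use certificate-based public-key signatures and standardized manifest formats; the sketch below is only a simplified illustration of the concept, using a hypothetical shared HMAC key in place of a certificate:

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; real provenance systems
# such as C2PA use public-key certificates, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_content(image_bytes: bytes, origin: str) -> dict:
    """Build a provenance manifest: origin metadata plus a signature
    that binds the metadata to a hash of the content."""
    manifest = {
        "origin": origin,  # e.g. the name of the generating tool
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any change to the image bytes
    or the metadata makes verification fail."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    if claimed.get("content_hash") != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

image = b"\x89PNG...stand-in image bytes"
manifest = sign_content(image, "AI image generator")
print(verify_content(image, manifest))            # True: content untouched
print(verify_content(image + b"edit", manifest))  # False: content altered
```

The design mirrors the property the article describes: a verifier can confirm an untampered file's stated origin, but the manifest offers no protection once it is stripped away, which is the limitation discussed below.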
Meanwhile, some companies employ visible watermarks, like OpenAI's DALL-E, which adds rainbow-like strips to its images. Watermarking text-based content from ChatGPT is also being tested. And the C2PA is developing a user interface to display content provenance when someone hovers over a piece of content.
Future limitations
The biggest challenge with watermarks, whether visible or embedded in metadata, is that they can easily be removed.
“It’s possible to create tools that could completely remove the watermark, although they don’t exist today,” said Tom Goldstein, a professor at the University of Maryland. “The question is what quality degradation occurs when you try to remove the watermark.”
According to Sam Gregory, executive director of the human rights organization Witness, people might assume that any content without a watermark is less reliable, possibly false, or not generated by AI.