OpenAI working on new AI image detection tools

OpenAI has added a new tool to detect if an image was made with its DALL-E AI image generator, as well as new watermarking methods to more clearly flag content it generates.

In a blog post, OpenAI announced that it has begun developing new provenance methods to track content and verify whether it was AI-generated. These include a new image detection classifier that uses AI to determine whether an image was AI-generated, as well as a tamper-resistant watermark that can tag content like audio with invisible signals.
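OpenAI hasn't published how its audio watermark works, but a classic way to tag audio with an invisible signal is additive spread-spectrum watermarking: mix in a low-amplitude, keyed pseudorandom carrier that a detector holding the key can later find by correlation. The Python sketch below is purely illustrative; the seed, strength value, and function names are all assumptions, not OpenAI's method.

```python
# Illustrative spread-spectrum audio watermark, NOT OpenAI's technique.
import numpy as np

def embed(audio: np.ndarray, key_seed: int = 42, strength: float = 0.005) -> np.ndarray:
    """Add a keyed pseudorandom carrier at low (inaudible) amplitude."""
    carrier = np.random.default_rng(key_seed).standard_normal(audio.shape)
    return audio + strength * carrier

def detect(audio: np.ndarray, key_seed: int = 42) -> float:
    """Correlate with the keyed carrier: ~strength if marked, ~0 if not."""
    carrier = np.random.default_rng(key_seed).standard_normal(audio.shape)
    return float(audio @ carrier / len(audio))

if __name__ == "__main__":
    clean = np.random.default_rng(0).standard_normal(48_000) * 0.1  # fake 1 s clip
    marked = embed(clean)
    print(f"clean:  {detect(clean):+.5f}")   # near zero
    print(f"marked: {detect(marked):+.5f}")  # near 0.005, the embed strength
```

Tamper resistance in schemes like this comes from spreading the signal across every sample, so edits that leave most of the audio intact also leave most of the correlation intact.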

The classifier predicts the likelihood that a picture was created by DALL-E 3. OpenAI claims the classifier works even if the image is cropped, compressed, or has its saturation changed. While the tool can detect images made with DALL-E 3 with around 98 percent accuracy, it performs far worse at identifying content from other AI models, flagging only 5 to 10 percent of pictures from other image generators, such as Midjourney.
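OpenAI hasn't released the classifier or described its internals, so treat the sketch below as a generic illustration of that robustness claim: the `detect` function is a hypothetical stand-in for the real model, and the perturbations mirror the three transformations the company says the tool tolerates.

```python
# Hypothetical robustness check: does the detector's score survive the
# edits the article mentions (cropping, compression, saturation changes)?
import io
from PIL import Image, ImageEnhance

def detect(img: Image.Image) -> float:
    """Stand-in for OpenAI's classifier: returns P(made by DALL-E 3)."""
    return 0.98  # fixed placeholder score, for illustration only

def perturbations(img: Image.Image):
    """Yield the three edits the article says the classifier tolerates."""
    w, h = img.size
    yield "crop", img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=30)  # heavy compression
    buf.seek(0)
    yield "jpeg-q30", Image.open(buf)
    yield "desaturated", ImageEnhance.Color(img).enhance(0.3)

if __name__ == "__main__":
    original = Image.new("RGB", (512, 512), "gray")  # stand-in image
    baseline = detect(original)
    for name, variant in perturbations(original):
        print(f"{name}: score {detect(variant):.2f} (baseline {baseline:.2f})")
```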

OpenAI previously began adding content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to image metadata. Content credentials are essentially watermarks that include information about who owns an image and how it was created. OpenAI, along with companies like Microsoft and Adobe, is a member of C2PA, and this month it also joined the coalition's steering committee.
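Content credentials are, at bottom, signed provenance metadata bound to a specific asset. Real C2PA manifests are binary (JUMBF) structures signed with X.509 certificates; the toy sketch below swaps that for HMAC-signed JSON just to show the core idea of a verifiable claim, so every name in it is invented for illustration.

```python
# Toy content credential: a signed claim binding a generator name to an
# image hash. Real C2PA uses certificate-based signatures, not an HMAC key.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def make_credential(image_bytes: bytes, generator: str) -> dict:
    """Build a toy signed claim for the given asset."""
    claim = {
        "claim_generator": generator,  # e.g. "DALL-E 3"
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_credential(image_bytes: bytes, claim: dict) -> bool:
    """Check the signature and that the claim matches this exact image."""
    claim = dict(claim)  # avoid mutating the caller's copy
    sig = claim.pop("signature", "")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    untampered = claim["asset_hash"] == hashlib.sha256(image_bytes).hexdigest()
    return hmac.compare_digest(sig, expected) and untampered

if __name__ == "__main__":
    img = b"fake image bytes"
    cred = make_credential(img, "DALL-E 3")
    print(verify_credential(img, cred))         # True: claim matches the image
    print(verify_credential(img + b"!", cred))  # False: the image was altered
```

The point of the signature is that the claim and the asset hash must verify together, so editing either the image or the metadata invalidates the credential.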

Both the image classifier and the audio watermarking signal are still being refined, and OpenAI says it needs feedback from users to gauge their effectiveness. Researchers and nonprofit journalism groups can test the image detection classifier by applying for access through OpenAI's research access platform.

OpenAI has been working on detecting AI-generated content for years. In 2023, however, it had to shut down its AI text classifier, a tool meant to identify AI-written text, because of its consistently low accuracy.

https://www.theverge.com/2024/5/7/24151482/openai-image-detection-ai-watermarking-audio