On Tuesday, Adobe added a new tool to its Photoshop beta called “Generative Fill,” which uses cloud-based image synthesis to fill selected areas of an image with new AI-generated content based on a text description. Powered by Adobe Firefly, Generative Fill works similarly to a technique called “inpainting” found in image synthesis models like DALL-E and Stable Diffusion since last year.
At the core of Generative Fill is Adobe Firefly, Adobe’s custom image-synthesis model. A deep learning AI model, Firefly has been trained on millions of images in Adobe’s stock library to associate certain imagery with text descriptions of it. Now that Firefly is part of Photoshop, people can type in what they want to see (e.g., “a clown on a computer monitor”), and Firefly will synthesize several options for the user to choose from. Generative Fill uses a well-known AI technique called “inpainting” to produce context-aware results that blend synthesized imagery seamlessly into an existing image.
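For the curious, here is a minimal sketch of the general inpainting technique using the open source Stable Diffusion model via Hugging Face’s diffusers library. This illustrates the underlying idea, not Adobe’s proprietary, cloud-hosted Firefly pipeline; the file names and prompt are placeholders.

```python
# Inpainting sketch with an open source model -- not Adobe's Firefly pipeline.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a publicly available inpainting checkpoint.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("scene.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to be regenerated.
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a clown on a computer monitor",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("filled.png")
```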
To use Generative Fill, users select an area of an existing image they want to modify. After they make a selection, a “Contextual Task Bar” pops up that lets them type a description of what they want to see generated in that area. Photoshop sends this data to Adobe’s servers for processing, then returns the results in the app. Once generation finishes, the user can pick from several generated variations or create more to browse through.
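Adobe hasn’t published the wire protocol behind that round trip, but conceptually it looks something like the sketch below. The endpoint, payload fields, and auth token are all hypothetical, invented here purely to illustrate the select-prompt-generate flow.

```python
# Purely illustrative sketch of the select-prompt-generate round trip.
# The endpoint, payload fields, and auth token are hypothetical; Adobe
# has not published Generative Fill's actual wire protocol.
import base64
import requests

def generative_fill(image_png: bytes, mask_png: bytes, prompt: str) -> list[bytes]:
    payload = {
        "image": base64.b64encode(image_png).decode(),  # full image
        "mask": base64.b64encode(mask_png).decode(),    # selected area
        "prompt": prompt,                               # text description
        "num_variations": 3,                            # options to browse
    }
    resp = requests.post(
        "https://example.invalid/v1/generative-fill",   # placeholder URL
        json=payload,
        headers={"Authorization": "Bearer <token>"},
        timeout=120,
    )
    resp.raise_for_status()
    return [base64.b64decode(v) for v in resp.json()["variations"]]
```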
When used, the Generative Fill tool creates a new “Generative Layer,” allowing non-destructive alterations of image content, such as additions, extensions, or removals, driven by text prompts. The generated content automatically matches the perspective, lighting, and style of the surrounding image.
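“Non-destructive” means the original pixels are never modified: the generated content lives on its own layer and is composited at render time, so it can be toggled, reordered, or deleted later. Here is a conceptual sketch of that idea; the class and field names are illustrative, not Photoshop’s internals.

```python
# Conceptual sketch of non-destructive editing: generated pixels live on a
# separate layer and are composited at render time, so the original image
# is never modified. Names here are illustrative, not Photoshop's internals.
from dataclasses import dataclass, field
from PIL import Image

@dataclass
class GenerativeLayer:
    content: Image.Image      # synthesized pixels
    mask: Image.Image         # where they apply ("L" mode, white = visible)
    prompt: str               # the text prompt that produced them
    visible: bool = True

@dataclass
class Document:
    background: Image.Image
    layers: list[GenerativeLayer] = field(default_factory=list)

    def composite(self) -> Image.Image:
        out = self.background.copy()  # the background stays untouched
        for layer in self.layers:
            if layer.visible:
                out.paste(layer.content, (0, 0), layer.mask)
        return out
```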
Generative Fill isn’t the only AI-powered feature added to the Photoshop beta. Firefly also enables Photoshop to remove parts of an image entirely, erase objects from a scene, or extend an image’s dimensions by generating new content around the existing picture, an AI technique known as “outpainting.”
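Outpainting can be built on the same inpainting machinery shown earlier: pad the canvas, mask the newly added border, and let the model fill it in. Here is a rough sketch of that preparation step under those assumptions; again, this mirrors the generic technique, not Firefly itself.

```python
# Outpainting sketch: enlarge the canvas and mask the new border region
# so an inpainting model can generate the surrounding content.
from PIL import Image

def prepare_outpaint(img: Image.Image, pad: int = 128):
    w, h = img.size
    # New, larger canvas with the original image centered.
    canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "gray")
    canvas.paste(img, (pad, pad))
    # Mask: white where new content should be generated (the border).
    mask = Image.new("L", canvas.size, 255)
    mask.paste(0, (pad, pad, pad + w, pad + h))
    return canvas, mask

canvas, mask = prepare_outpaint(Image.open("scene.png").convert("RGB"))
# canvas and mask can then be fed to the inpainting pipeline shown earlier:
# pipe(prompt="surrounding forest", image=canvas, mask_image=mask)
```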
These features have been available in OpenAI’s DALL-E 2 image generator and editor since August of last year (and in various homebrew releases of Stable Diffusion since around the same time), so Adobe is just now catching up by adding them to its flagship image editor. Admittedly, that’s a fast turnaround for a company that might have a huge liability target painted on its back over harmful or socially stigmatized content generation, the use of artists’ images as training data, and the potential to power propaganda or disinformation.
Along those lines, Adobe not only blocks prompts containing certain copyrighted, violent, and sexual keywords but also relies on terms of use that restrict people from generating “abusive, illegal, or confidential” content.
Also, with Generative Fill’s ability to easily warp the apparent media reality of a photo (admittedly, something Photoshop has been doing since its inception), Adobe is doubling down on its Content Authenticity Initiative, which uses Content Credentials to add metadata to generated files that help track their provenance.
Generative Fill in the Photoshop beta app is currently available to all Creative Cloud members with a Photoshop subscription or trial through the “Beta apps” section of the Creative Cloud app. It is not yet available for commercial use, not accessible to individuals under 18, not available in China, and currently supports English-only text prompts. Adobe plans to make Generative Fill available to all Photoshop customers by the end of the year so that anyone can make yard clowns with ease.
If you aren’t a Creative Cloud subscriber, you can also try Generative Fill for free through a web-based tool on the Adobe Firefly website after signing in with an Adobe login. Adobe recently removed the waitlist from its Firefly beta.
https://arstechnica.com/?p=1941261