Text-to-image AI tools such as DALL-E 2 and Stable Diffusion are trained on images scraped off the internet and used without attribution. The artists' information is unavailable and the process remains largely anonymous, which has led artists to accuse these platforms of stealing their artwork.
Enter Stable Attribution, which aims to restore the human element behind AI-generated art. It reverse-engineers an AI image to surface the training images that were used to create it, and lets users credit the artists whose work contributed to the result. Rooted in concerns around copyright and infringement, the tool helps safeguard owners' rights while building a repository of artists' works.
Stable Attribution was created by Chroma, a startup that builds machine-learning tooling for AI applications. The application was built predominantly by Jeffrey Huber, alongside Anton Troynikov and others. Upload an image produced by a generative application such as Stable Diffusion, and Stable Attribution surfaces the source images that were used to generate it.
In the example below, Stable Diffusion created an image from the text prompt, “Stormtroopers having ice cream in Central Park”. Upon uploading that image to Stable Attribution, the source images used to create it appear, with an option to attribute each picture via a link to its artist.


Stable Attribution works with models whose training data is public, such as Stable Diffusion, which is trained on the open LAION dataset. A tool like DALL-E 2 by OpenAI, by contrast, does not expose its training dataset, which blocks Stable Attribution from indexing it. The application is still learning from the datasets it has. Version 1 of the algorithm deconstructs an AI image by matching it against the most similar images in the datasets it has indexed.
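Conceptually, this kind of matching can be sketched with off-the-shelf tools. The snippet below is a rough illustration of similarity-based attribution, assuming a CLIP-style encoder and cosine-similarity ranking; the model name, file paths, and dataset slice are placeholder assumptions for illustration, not Stable Attribution's actual pipeline.

```python
# A minimal sketch of similarity matching over training images: embed the
# generated image and candidate training images with a CLIP model, then rank
# candidates by cosine similarity. All file names here are hypothetical, and
# the choice of encoder is an assumption; Stable Attribution's real
# implementation is not public.

from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP model that maps images into a shared embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical paths standing in for a slice of the training data
# (e.g., images referenced by the LAION dataset).
candidate_paths = ["train_001.jpg", "train_002.jpg", "train_003.jpg"]

generated_emb = model.encode(Image.open("generated.png"), convert_to_tensor=True)
candidate_embs = model.encode(
    [Image.open(p) for p in candidate_paths], convert_to_tensor=True
)

# Rank training images by visual similarity to the generated image; the top
# hits are the ones a tool like this would surface as likely "source" images.
scores = util.cos_sim(generated_emb, candidate_embs)[0]
for path, score in sorted(
    zip(candidate_paths, scores.tolist()), key=lambda x: x[1], reverse=True
):
    print(f"{path}: similarity {score:.3f}")
```

In practice, a system operating at LAION's scale would precompute the embeddings and query them with an approximate nearest-neighbor index rather than scoring candidates one by one, but the ranking principle is the same.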
Because it is still learning, the repository is limited, and the images it surfaces are simply the most visually similar and presumably most influential ones. Beyond addressing artists' copyright concerns, the application has limited use in its current version; unless new features are added, Stable Attribution remains little more than an image-deconstruction tool.