With AI being integrated into almost every business, Adobe has released Photoshop version 22.0, which comes with a host of AI-powered features. These include a Sky Replacement tool and improved AI edge selection; however, the star of the show is the suite of image-editing tools that Adobe calls “neural filters.”
According to the company’s website, Neural Filters is a new workspace in Photoshop with a library of filters that drastically reduces difficult manual work: edits that previously took hours can now be done in a few clicks using machine learning powered by Adobe Sensei. These filters empower users to apply non-destructive, generative effects and explore creative ideas in seconds. Once applied, they can improve an image by generating new, contextual pixels that are not present in the original.
Maria Yap, Adobe’s vice president of digital imaging, told the media that this is why Photoshop can be considered the world’s most advanced AI application: it makes it possible to create things in images that weren’t there before.
To power these effects, Adobe is leveraging generative adversarial networks (GANs), a machine learning technique for generating visual imagery. Some of this processing is done locally, while some is done in the cloud, depending on the computational requirements of each tool, but the neural filters take only a few seconds to apply their changes.
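The article describes GANs only at a high level, and Adobe’s actual models are not public. As a minimal illustration of the adversarial idea, the toy sketch below pits a one-parameter-family generator against a logistic discriminator on 1-D Gaussian data instead of images; all names, hyperparameters, and the hand-derived gradients are illustrative assumptions, not Adobe’s implementation.

```python
import numpy as np

# Toy GAN sketch (illustrative, not Adobe's implementation):
# a generator learns to produce samples the discriminator cannot
# distinguish from real data drawn from N(4, 1).

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data: samples from a Gaussian centred at 4
    return rng.normal(4.0, 1.0, size=n)

# Generator: affine map of noise, g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator: logistic classifier, d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push d(real) -> 1, d(fake) -> 0 ---
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # Hand-derived gradients of the binary cross-entropy loss
    grad_w = np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update (non-saturating loss): push d(fake) -> 1 ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    dL_dx = -(1 - p_fake) * w  # d(-log d(g(z))) / d g(z)
    a -= lr * np.mean(dL_dx * z)
    b -= lr * np.mean(dL_dx)

# After training, the generator's output distribution should have
# drifted toward the real data's mean of 4.
samples = a * rng.normal(size=1000) + b
gen_mean = float(np.mean(samples))
```

In Adobe’s case the generators are deep convolutional networks producing image pixels rather than scalars, but the adversarial training loop follows the same pattern.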
These new tools deliver fast, good-quality results. However, the AI-powered edits aren’t flawless, and most professional retouchers make minor adjustments on top of these filters to get accurate results; even so, the filters speed up many editing tasks considerably.
To create the neural filter that retouches skin and removes blemishes, Adobe collected thousands of before-and-after shots edited by professional photographers and fed the data into its algorithms to make the filter as accurate as possible. Advancements like this make Adobe’s product one of the top photo-editing apps of the current era and also push AI research forward.
Adobe also collaborated with Nvidia to develop these neural filters and to reduce biases in the models. It added another 130,000 stock images to the training dataset to address biases such as those related to ethnicity and age. Users can also spot biases and submit feedback, along with an anonymised image, which Adobe can add to the training dataset. After retraining the model, the company applied high-resolution techniques from Photoshop to improve the quality of the output.
The filters fall into three groups: featured neural filters, which handle skin smoothing and style transfer; beta neural filters, which offer Smart Portrait, makeup transfer, depth-aware haze, colourise, super zoom and JPEG artefact removal; and future neural filters, planned for photo restoration, dust and scratch removal, noise reduction, face cleanup, photo-to-sketch, sketch-to-portrait, pencil artwork and face-to-caricature editing.
Check out how to use these neural filters here.