Amid AI-Generated Hoax, Adobe Introduces Firefly in Photoshop

Now, with text prompts, designers can add, extend, or remove content in Photoshop

Software giant Adobe has introduced Generative Fill in Photoshop, a feature that puts Firefly, its generative AI model, directly into the hands of designers. Now, with simple text prompts, designers can add, extend, or remove content from their creations in seconds. Though the integration opens up a world of possibilities, the news comes a day after a fake AI-generated image of an explosion at the Pentagon went viral, fooling some well-known media houses.

Generative Fill, currently available as a desktop beta app for Photoshop and as a module within the Firefly beta app, automatically adapts generated content to match the perspective, lighting, and style of the surrounding image. With its natural language interface, users can generate and edit content non-destructively, leaving the original image intact.

Launched just six weeks ago, Firefly has already been used to generate over 100 million assets, making it one of Adobe's most popular beta releases to date. Initially focused on image generation and text effects, Firefly, which is trained on Adobe Stock images, has since expanded to include vector recoloring and now Generative Fill.

Read: Adobe Firefly: Too Little, Too Late?

Ashley Still, Senior Vice President of Digital Media at Adobe, highlighted the transformative nature of the integration, stating, “By integrating Firefly directly into workflows as a creative co-pilot, Adobe is accelerating ideation, exploration, and production for all of our customers.”

Ethically Focused

Focusing on AI ethics, Adobe has developed Content Credentials to ensure transparency in content creation and data usage. Serving as digital “nutrition labels,” Content Credentials indicate whether content was human-created, AI-generated, or AI-edited. The approach aligns with Adobe's AI Ethics principles, enabling proper attribution and informed decision-making.

The attention to labelling images comes against the backdrop of the AI-generated hoax: an image purporting to show an explosion near the Pentagon in Washington DC surfaced on social media last night, briefly rattling the US stock market. Prominent Indian media houses, including Republic and News18, unwittingly disseminated the image, spreading the misinformation through their channels. Arlington police officials have dismissed the photo, calling it AI-generated.

Read: Republic, News 18 and Others Break Fake AI News


Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
