Microsoft recently unveiled ‘Visual ChatGPT’, a system that connects ChatGPT with a range of Visual Foundation Models (VFMs), including Transformers, ControlNet, and Stable Diffusion, enabling interaction with ChatGPT beyond language alone.
Users can send messages in chat and receive images in return, while the system injects a series of visual-model prompts that allow those images to be edited as well.
The code is available in the project’s GitHub repository.
The paper, ‘Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models’, observes that each visual model is an expert at one specific task with fixed inputs and outputs, and that ChatGPT has the analogous limitation of being trained on text alone. Combining them opens up far more flexible image generation and manipulation.

In order to bridge the gap between ChatGPT and VFMs, the paper proposes the use of a Prompt Manager that includes the following features:
- Explicitly inform ChatGPT about the capabilities of each VFM and specify the necessary input-output formats.
- Convert various types of visual information—such as png images, depth images, and mask matrices—into language format to aid ChatGPT’s understanding.
- Manage the histories, priorities, and conflicts of different VFMs.
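The three features above can be illustrated with a toy sketch. The class and method names below (`Tool`, `PromptManager`, `image_to_text`) are hypothetical stand-ins for illustration, not the paper's actual code: each VFM is registered with a plain-language description and an input-output contract, and images are converted to textual filename tokens so they can live inside a text-only conversation history.

```python
# Hypothetical sketch of a Prompt Manager; names are illustrative,
# not taken from the Visual ChatGPT codebase.
from dataclasses import dataclass, field


@dataclass
class Tool:
    name: str           # VFM name, e.g. "Stable Diffusion"
    description: str    # capability description shown to ChatGPT
    input_format: str   # e.g. "text prompt"
    output_format: str  # e.g. "image file path"


@dataclass
class PromptManager:
    tools: list = field(default_factory=list)

    def register(self, tool: Tool) -> None:
        self.tools.append(tool)

    def system_prompt(self) -> str:
        # Explicitly describe each VFM's capability and I/O contract,
        # so the language model can decide which tool to invoke.
        lines = ["You can call the following visual tools:"]
        for t in self.tools:
            lines.append(
                f"- {t.name}: {t.description} "
                f"(input: {t.input_format}, output: {t.output_format})"
            )
        return "\n".join(lines)

    @staticmethod
    def image_to_text(path: str) -> str:
        # Represent an image as a filename token, so it can be
        # referenced inside a purely textual conversation history.
        return f"[image: {path}]"


pm = PromptManager()
pm.register(Tool("Stable Diffusion", "generate an image from text",
                 "text prompt", "image file path"))
print(pm.system_prompt())
print(pm.image_to_text("image/abc123.png"))
```

Managing histories, priorities, and conflicts would extend this with ordering and bookkeeping over the registered tools; the core idea is simply that everything the VFMs produce is flattened into text the language model can read.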
By utilising the Prompt Manager, ChatGPT can effectively leverage VFMs and receive their feedback in an iterative manner until the users’ requirements are met or a concluding condition is reached.
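That iterative loop can be sketched in a few lines. Everything here is a stub for illustration: `fake_planner` stands in for ChatGPT deciding whether to call a tool or answer, and `edit_image` stands in for a real VFM. The loop keeps feeding tool results back until a final answer is produced or a step budget (the concluding condition) runs out.

```python
# Toy sketch of the iterative tool loop; the planner and tool here are
# stand-in stubs, not Visual ChatGPT's real components.

def fake_planner(history):
    # Stand-in for ChatGPT: call a tool once, then give a final answer.
    if not any(h.startswith("tool_result") for h in history):
        return {"action": "call_tool", "tool": "edit_image",
                "arg": "make the sky blue"}
    return {"action": "final_answer",
            "text": "Here is your edited image: [image: out.png]"}


def edit_image(instruction):
    # Stand-in VFM: pretend to edit an image and return its path.
    return "out.png"


TOOLS = {"edit_image": edit_image}


def run(user_request, max_steps=5):
    history = [f"user: {user_request}"]
    for _ in range(max_steps):              # concluding condition: step budget
        step = fake_planner(history)
        if step["action"] == "final_answer":
            return step["text"]             # user's requirement met
        result = TOOLS[step["tool"]](step["arg"])
        history.append(f"tool_result: {result}")  # feed VFM output back
    return "step limit reached"


print(run("turn the sky blue in my photo"))
```

The real system replaces the stubs with ChatGPT and actual VFMs, but the control flow is the same: plan, execute, observe, repeat.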
This lets users interact with ChatGPT through images, not just text. They can pose complex visual questions or request edits that require multiple AI models collaborating over several steps, and can ask for corrections and refinements to the results.