Google researchers recently introduced VideoPoet, a new large language model (LLM) for video generation. This model is designed to perform a range of tasks including text-to-video, image-to-video, video stylisation, video inpainting and outpainting, and video-to-audio conversion.
The introduction of VideoPoet addresses the challenge of creating coherent large motions in videos, a limitation in current video generation technologies.
This new model differentiates itself by integrating multiple video generation capabilities within a single LLM framework, in contrast to the segmented approach of existing models. It handles multiple modalities and is trained with modality-specific tokenizers, such as MAGVIT V2 for video and images and SoundStream for audio. This allows VideoPoet to perform diverse tasks, from animating still images to editing and stylising videos based on text inputs.
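The core idea behind this single-model design can be sketched in a few lines. The sketch below is purely illustrative (VideoPoet's implementation is not public): the function names, token-ID ranges, and `hash`-based stand-in tokenizers are assumptions, intended only to show how separate tokenizers could map each modality into one shared discrete vocabulary that a single autoregressive LLM models as a flat token stream.

```python
# Illustrative sketch only; not VideoPoet's actual code. The idea: per-modality
# tokenizers emit discrete token IDs in disjoint ranges of one shared vocabulary,
# so a single LLM can model text, video, and audio as one sequence.

def video_tokenize(frames):
    # Stand-in for a MAGVIT V2-style tokenizer: one discrete ID per frame.
    return [hash(f) % 1024 for f in frames]          # video IDs occupy 0-1023

def audio_tokenize(chunks):
    # Stand-in for a SoundStream-style codec, offset past the video range.
    return [1024 + hash(c) % 1024 for c in chunks]   # audio IDs occupy 1024-2047

def build_sequence(text_tokens, frames, chunks):
    # The LLM sees one flat token stream spanning all modalities.
    return text_tokens + video_tokenize(frames) + audio_tokenize(chunks)

seq = build_sequence([2048, 2049], ["frame0", "frame1"], ["chunk0"])
print(len(seq))  # 2 text + 2 video + 1 audio = 5 tokens
```

Because every modality lands in the same vocabulary, tasks like text-to-video or video-to-audio reduce to conditioning the same model on different token prefixes, rather than training a separate system per task.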
How it compares with others
In the evolving landscape of AI-generated video technology, VideoPoet emerges as a significant advancement, distinguishing itself from existing models such as Imagen Video, RunwayML, Stable Video Diffusion, Pika, and Alibaba Group's recent 'Animate Anyone' through its enhanced text fidelity and motion interestingness. The model outshines its counterparts by following text prompts more accurately and generating videos with more engaging motion.
A key point of comparison is zero-shot capability: like other contemporary models, VideoPoet excels at generating content from minimal input, such as a single text prompt or image, without task-specific training on that content.
However, where other models often struggle with large-motion coherence and produce artefacts, VideoPoet translates text prompts into video with greater accuracy, creating more dynamic and fluid results.
As with other Google announcements
Following the announcement of VideoPoet by Google Research on December 19, 2023, some scepticism has surfaced within the community regarding its practical application and effectiveness. While VideoPoet showcases advancements in text fidelity and motion interestingness, critics question its reliance on specific prompting techniques.
There are observations that the use of terms like “8k” in prompts, a trick from previous AI models like VQGAN + CLIP and Stable Diffusion, may be employed to artificially enhance photorealism, raising concerns about the model’s genuine capability.
Overall, while VideoPoet represents a significant step in video generation technology, its real-world application, effectiveness, and impact remain subjects of debate and speculation within the community.