Meta on Thursday announced the launch of Make-A-Video, a new AI system that lets users enter text prompts to generate high-quality video clips. Built on the company’s recent progress in generative technology research, the system could open new opportunities for creators and artists.
From paired text-image data, the AI system learns what the world looks like and how it is described; from unlabeled video footage, it learns how the world moves.
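The two-stage idea described above — appearance learned from paired text-image data, motion learned separately from unlabeled video — can be sketched as a toy. This is a minimal illustration only, assuming nothing about Meta’s actual architecture; every function name and data structure here is hypothetical, with "frames" reduced to small lists of numbers.

```python
# Hypothetical toy: appearance from text-image pairs, motion from unlabeled video.

def learn_appearance(pairs):
    # Stand-in for a text-to-image model: a lookup from prompt to a still frame.
    return {text: frame for text, frame in pairs}

def learn_motion(clips):
    # Stand-in for learned temporal dynamics: the average per-step pixel
    # change across unlabeled clips (no text involved at this stage).
    deltas = []
    for clip in clips:
        for prev, nxt in zip(clip, clip[1:]):
            deltas.append([b - a for a, b in zip(prev, nxt)])
    n = len(deltas)
    return [sum(d[i] for d in deltas) / n for i in range(len(deltas[0]))]

def make_a_video_toy(prompt, appearance, motion, steps=3):
    # Compose the two: start from the text-derived frame, then
    # repeatedly apply the learned motion to produce a clip.
    frame = appearance[prompt]
    video = [frame]
    for _ in range(steps):
        frame = [p + m for p, m in zip(frame, motion)]
        video.append(frame)
    return video
```

Running the toy with one text-image pair and one unlabeled two-step clip yields a four-frame "video" whose first frame comes from the text and whose later frames drift along the averaged motion.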
Generative AI research has played a vital role in pushing creative expression forward by giving users tools to create new content with ease. With just a few words or sentences, Make-A-Video can bring users’ imagination to life, generating one-of-a-kind videos with vivid colors, landscapes, and characters. The system can also take existing images and videos as input and create new, similar content from them.
Source: Meta, LinkedIn
As part of its continued commitment to open science, Meta has shared the details of the system in a research paper. The firm also plans to release a demo experience for its users.
Commenting on the new launch, Meta says, “Make-A-Video follows our announcement earlier this year of Make-A-Scene, a multimodal generative AI method that gives people more control over the AI generated content they create. With Make-A-Scene, we demonstrated how people can create photorealistic illustrations and storybook-quality art using words, lines of text, and freeform sketches.”
The company claims that the new generative AI system uses publicly available datasets, adding transparency to the research. By openly sharing the research and results with the community, Meta aims to use the feedback to refine its approach and shape a responsible AI framework for the emerging technology.
Meta has yet to disclose when and how Make-A-Video will become publicly available. To learn more, interested users can fill out the sign-up form.