
NVIDIA Builds Framework That Can Generate Motion Capture Animation Using Only Video Inputs

The researchers use AI to capture individual movements from a video input and convert them into digital avatars.


Researchers from NVIDIA, the Vector Institute and the University of Toronto have proposed a motion-capture method that uses only video input, improving on past motion-capture animation models. The new system does not require the expensive motion-capture hardware that earlier approaches relied on. The work is expected to make human motion synthesis more scalable, given the vast amount of video available online.

Existing methods need accurate motion-capture data for training, which is expensive to collect. With the new system, the researchers capture individual movements using AI solely from video input and translate them into a digital avatar. In the paper, the researchers introduce a framework that trains motion synthesis models from raw video pose estimations, without any motion-capture data. The framework also refines noisy pose estimates by enforcing physics constraints through contact-invariant optimisation, which includes computing contact forces. This optimisation yields corrected 3D poses and motions along with their corresponding contact forces, and the physically corrected motions significantly outperform prior work on pose estimation.

Source: NVIDIA
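To give a flavour of what physics-constrained refinement means, here is a minimal, hypothetical sketch for a single foot-joint height trajectory. It is not the paper's actual contact-invariant optimisation; it simply balances three toy penalties (stay close to the noisy estimates, keep velocities small, and do not penetrate the ground plane) with plain gradient descent. All function names and weights are illustrative assumptions.

```python
import numpy as np

def refine_foot_heights(noisy_y, iters=3000, lr=0.01,
                        w_data=1.0, w_smooth=2.0, w_contact=10.0):
    """Toy physics-based refinement of a foot-joint height trajectory.

    Three penalties, loosely mirroring the idea in the paper:
      - data:    stay close to the noisy per-frame estimates
      - smooth:  keep frame-to-frame velocity small
      - contact: never penetrate the ground plane y = 0
    """
    y = noisy_y.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * w_data * (y - noisy_y)          # data term
        d = np.diff(y)                               # per-frame velocities
        grad[1:] += 2.0 * w_smooth * d               # smoothness term
        grad[:-1] -= 2.0 * w_smooth * d
        grad += 2.0 * w_contact * np.minimum(y, 0.0) # penetration penalty
        y -= lr * grad
    return y

# Example: a trajectory that dips below the floor gets lifted and smoothed
noisy = np.array([0.20, 0.05, -0.10, -0.05, 0.10, 0.30])
refined = refine_foot_heights(noisy)
```

The actual framework optimises full 3D body poses and solves for contact forces jointly; this sketch only shows why a physics penalty pulls impossible poses (here, a foot below the floor) back toward plausible ones.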

The proposed framework can therefore train generative models of physically plausible human motion directly from monocular RGB videos, which are far more widely available than motion-capture data.

The researchers then train a time-series generative model on the refined poses to synthesise both future motion and contact forces. The results show significant gains in pose estimation from the physics-based refinement, as well as strong motion synthesis results from video.
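As a rough illustration of synthesising future motion from a time-series model, the sketch below fits a simple linear autoregressive predictor to a pose sequence and rolls it forward. This is an assumed stand-in: the paper's model is a learned neural generative model, not a least-squares fit, and the function names here are invented for the example.

```python
import numpy as np

def fit_motion_model(poses, k=3):
    """Toy time-series motion model: a linear autoregressive predictor
    mapping the previous k poses (flattened) to the next pose,
    fitted by ordinary least squares."""
    X = np.stack([poses[i:i + k].ravel() for i in range(len(poses) - k)])
    Y = poses[k:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def synthesise(seed_poses, W, k, steps):
    """Roll the fitted model forward to synthesise future motion."""
    out = [p for p in seed_poses[-k:]]
    for _ in range(steps):
        out.append(np.concatenate(out[-k:]) @ W)
    return np.array(out[k:])

# Example: learn a periodic 2-D "pose" signal and continue it
t = np.linspace(0, 4 * np.pi, 200)
poses = np.stack([np.sin(t), np.cos(t)], axis=1)
W = fit_motion_model(poses, k=3)
future = synthesise(poses, W, k=3, steps=10)
```

The key idea carried over from the paper is the rollout loop: once a model predicts the next pose from recent history, it can generate arbitrarily long future motion by feeding its own outputs back in.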

Such a framework is expected to bring people a step closer to working and playing inside virtual worlds, letting developers animate human motion more affordably and with a greater diversity of motions.


Meeta Ramnani
