If you’re an aspiring animator searching for a programme, you can look into Plask. Plask is an AI-driven, web-based 3D animation editor and motion capture tool. It includes the animation tools you need to record, edit, and animate your projects without ever leaving your browser. Plask’s most significant feature, however, is its AI-assisted ability to animate your characters using any video as a motion capture source. To capture keyframes, simply upload a video or record motion with any camera directly in Plask. The editor automatically rigs, retargets, and optimises the output. You can also record yourself with a webcam and receive a video.
This step lets the user take a video of a person, crop it, and extract the 3D motion, which Plask then applies to the character. To capture the motion accurately, follow these guidelines:
- Plask can only identify human poses in the video. Include the person’s entire body in every shot.
- The video must include only one individual; Plask does not currently support multiple characters.
- The person in the video should be clearly visible to improve motion accuracy.
- A horizontal camera angle is recommended; an angled video may affect the outcome.
- Record the video at 30 frames per second; other frame rates alter the timing of the captured motion.
- Supported video formats: MP4, MOV, and AVI.
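As an illustration, the pre-upload checks above could be expressed as a small validation helper. This is a hypothetical sketch of the guidelines, not Plask’s actual code; the function name and parameters are assumptions.

```python
# Hypothetical pre-upload checklist for a mocap clip, based on Plask's
# published capture guidelines. Illustrative only, not Plask's API.

SUPPORTED_FORMATS = {"mp4", "mov", "avi"}
RECOMMENDED_FPS = 30

def validate_clip(filename: str, fps: float, person_count: int,
                  full_body_visible: bool) -> list[str]:
    """Return a list of guideline violations (empty if the clip looks OK)."""
    problems = []
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in SUPPORTED_FORMATS:
        problems.append(f"unsupported format '{ext}' (use mp4, mov, or avi)")
    if fps != RECOMMENDED_FPS:
        problems.append(f"{fps} fps clip; 30 fps is expected, other rates alter timing")
    if person_count != 1:
        problems.append("exactly one person must appear in the video")
    if not full_body_visible:
        problems.append("the person's entire body should be in frame")
    return problems

print(validate_clip("dance.mp4", 30, 1, True))    # no violations
print(validate_clip("dance.webm", 24, 2, True))   # several violations
```

Running such a check before uploading would catch the most common causes of poor extraction (wrong format, wrong frame rate, more than one person in frame).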
Plask supports both manual and automatic retargeting. Automatic retargeting is performed based on the keywords and hierarchies shown in the image below; a modal will notify you if it fails. You can also retarget manually from the Retargeting tab in the Control panel while visualising the model. Plask motion capture is extracted based on 24 source bones.
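The keyword-and-hierarchy matching described above can be sketched roughly as follows. The keyword table, bone names, and matching rule here are invented for illustration; Plask’s actual retargeting logic is not public.

```python
# Illustrative keyword-based bone matching, loosely modelled on how an
# automatic retargeter might map source bone slots to a target skeleton.
# The keyword table and rig names below are assumptions, not Plask's.

KEYWORDS = {
    "hips": ["hips", "pelvis", "root"],
    "spine": ["spine", "chest"],
    "left_upper_arm": ["leftarm", "upperarm_l", "l_upperarm"],
    "right_upper_arm": ["rightarm", "upperarm_r", "r_upperarm"],
    "head": ["head"],
}

def normalise(name: str) -> str:
    # Strip common namespace prefixes (e.g. "mixamorig:") and punctuation.
    return name.split(":")[-1].replace("_", "").replace(" ", "").lower()

def auto_retarget(target_bones: list[str]) -> dict[str, str]:
    """Map each source bone slot to a target bone by keyword, or raise."""
    mapping = {}
    for slot, words in KEYWORDS.items():
        for bone in target_bones:
            if any(w.replace("_", "") in normalise(bone) for w in words):
                mapping[slot] = bone
                break
        else:
            # Matches the behaviour described above: when automatic
            # retargeting fails, the user is notified and can map manually.
            raise ValueError(f"automatic retargeting failed for '{slot}'")
    return mapping

rig = ["mixamorig:Hips", "mixamorig:Spine", "mixamorig:LeftArm",
       "mixamorig:RightArm", "mixamorig:Head"]
print(auto_retarget(rig))
```

A real retargeter would also walk the bone hierarchy (parent-child order) rather than matching names alone, which is why oddly named but well-structured rigs can still retarget automatically.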
Once done, users can easily export the file in FBX, GLB, and BVH formats.
AI-powered animation: an emerging field
Generative models, which can produce highly realistic results in areas such as semantic image and video synthesis, are among the most recent technological achievements. Using machine learning, Disney has simplified the process of designing and modelling 3D faces: its researchers have proposed a nonlinear 3D face-modelling system based on neural networks, which learns a network architecture that converts a face’s neutral 3D model into the desired facial expression. Meanwhile, the animation technology start-up Midas Interactive has already put similar ideas into practice. Jiayi Chong, a former technical director at Pixar, has developed Midas Creature, a tool that automates sophisticated 2D character animation: artists and designers tell the engine what to choreograph, and it figures out the movements for them.
Also read: Facial Motion Capture for Animation Using First Order Motion Model