Magic Pencil For Real: Meta AI’s Model To Bring Children’s Drawings To Life

Children’s drawings are often unique, inventive, and highly abstract, pushing adults to think differently to recognise the people and things they depict.

“I can vividly remember listening, as a child, to Harold and the Purple Crayon, a story about a boy with a magic crayon whose drawings came to life. This was something I would have wanted to exist when I was a child, and now, all these years later, I could help make it real,” said Jesse Smith, a postdoctoral researcher working with the Creativity Pillar of Meta AI. He was part of the team that built a first-of-its-kind AI-powered animation tool that can automatically animate children’s drawings of human figures. The whole process takes just a few minutes.

Why do we need this?

Children’s drawings are often unique, inventive, and highly abstract. A human, such as a parent or teacher, finds it easy to recognise and understand what such drawings depict, but for an AI, this illegibility poses a massive challenge. For example, even a state-of-the-art AI system trained to spot objects in photorealistic images can be confused by the fanciful and unconventional look of children’s drawings.

Smith mentioned that what surprised him was how difficult it is to get a model to predict a character segmentation good enough to enable animation. One plausible reason is that many characters are drawn in a hollow manner: part or all of the body is outlined by a stroke, but the interior is left unmarked. Because the inside and outside of the figure share the same colour and texture, these cues cannot be relied upon to infer which pixels belong to the character.


To this end, Meta has introduced an AI system that automatically animates children’s hand-drawn figures and brings them to life. A user simply uploads a picture to the prototype system, which then animates the figure to perform motions such as dancing, jumping, and skipping. The animations can be downloaded and shared with loved ones.

https://www.facebook.com/watch/?v=196558752689269

Object Detection for Children’s Drawings

As the first step, the researchers devised a mechanism to distinguish the human figure from the background and from other characters in the picture. To extract human-like characters from the drawings, the Meta AI researchers used the CNN-based object detection model Mask R-CNN, as implemented in Detectron2, with a ResNet-50+FPN backbone, fine-tuned to predict a single class, “human figure”. About 1,000 drawings were used to fine-tune the model.
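The article does not include Meta’s training script, but a minimal sketch of this kind of fine-tuning in Detectron2 might look as follows; the dataset registration, file paths, and solver settings below are illustrative placeholders, not Meta’s actual configuration.

# Sketch: fine-tuning Mask R-CNN (ResNet-50 + FPN) in Detectron2 to
# predict a single "human figure" class. Paths and hyperparameters
# are illustrative assumptions.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations of ~1,000 children's drawings.
register_coco_instances(
    "drawings_train", {}, "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("drawings_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # single class: "human figure"
cfg.SOLVER.IMS_PER_BATCH = 4         # small dataset, short schedule
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()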


After identifying and extracting human figures, the next step is masking: producing a segmentation mask that closely follows the contours of each figure and separates it from the rest of the scene. This mask is later used to create a mesh, which is eventually deformed to produce the animation.

The researchers developed a classical image-processing approach: they crop the image to the predicted bounding box of each detected character, apply adaptive thresholding and morphological closing operations, flood-fill from the box edges, and take the mask to be the largest polygon not touched by this flood fill.
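As a rough illustration, this pipeline could be rendered in OpenCV along the lines below; the kernel size and threshold parameters are guesses, not the paper’s values.

# Sketch of the classical masking pipeline described above, using OpenCV.
import cv2
import numpy as np

def extract_mask(image, box):
    """Crop to a detected box, threshold, close, and flood-fill from
    the edges; the figure mask is the largest region the fill never
    reaches."""
    x0, y0, x1, y1 = box
    crop = cv2.cvtColor(image[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)

    # Adaptive thresholding: dark pen strokes become foreground (255).
    fg = cv2.adaptiveThreshold(
        crop, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, blockSize=15, C=8)

    # Morphological closing bridges small gaps in the outline strokes.
    closed = cv2.morphologyEx(fg, cv2.MORPH_CLOSE,
                              np.ones((5, 5), np.uint8))

    # Flood-fill the background, starting from every border pixel.
    h, w = closed.shape
    fill = closed.copy()
    ffmask = np.zeros((h + 2, w + 2), np.uint8)
    for x in range(w):
        for y in (0, h - 1):
            if fill[y, x] == 0:
                cv2.floodFill(fill, ffmask, (x, y), 255)
    for y in range(h):
        for x in (0, w - 1):
            if fill[y, x] == 0:
                cv2.floodFill(fill, ffmask, (x, y), 255)

    # Pixels never reached by the fill lie inside the figure; keep the
    # largest connected component (strokes plus enclosed interior).
    inside = cv2.bitwise_not(fill) | closed
    n, labels, stats, _ = cv2.connectedComponentsWithStats(inside)
    if n <= 1:
        return inside
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255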

To identify the key points on the human figures, the researchers used AlphaPose, a model trained for human pose detection. Using the pose detector trained on the initial dataset, the researchers then created an internal tool that allows parents to upload and animate their children’s drawings.
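For illustration, the predicted joints might be wired into a simple bone hierarchy along these lines; the joint names and connectivity here are assumptions, not the exact skeleton Meta uses.

# Illustrative sketch: turning predicted 2D joint locations into bone
# segments for rigging. Joint names and topology are assumptions.
BONES = [
    ("hip", "torso"), ("torso", "neck"), ("neck", "head"),
    ("neck", "left_shoulder"), ("left_shoulder", "left_elbow"),
    ("left_elbow", "left_wrist"),
    ("neck", "right_shoulder"), ("right_shoulder", "right_elbow"),
    ("right_elbow", "right_wrist"),
    ("hip", "left_knee"), ("left_knee", "left_ankle"),
    ("hip", "right_knee"), ("right_knee", "right_ankle"),
]

def build_skeleton(keypoints):
    """keypoints: dict mapping joint name -> (x, y) predicted position.
    Returns a list of (parent_xy, child_xy) bone segments."""
    return [(keypoints[p], keypoints[c]) for p, c in BONES
            if p in keypoints and c in keypoints]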

For the final animation step, the researchers used the extracted mask to generate a mesh, textured it with the original drawing, and created a skeleton for the character from the predicted joint locations. They took advantage of the “twisted perspective” in which children tend to draw: considering the lower and upper body separately, they determine whether a motion is more recognisable from the front or the side view, project it onto a single 2D plane, and use it to drive the character. The results were validated through perceptual user studies run on Mechanical Turk.
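A toy NumPy sketch of the projection idea follows: for each half of the body, pick whichever 2D view (front or side) exhibits more motion, then drive the character in that plane. The axis conventions and the variance criterion are assumptions standing in for the paper’s “more recognisable” test.

# Toy sketch of the 'twisted perspective' retargeting idea: for the
# upper and lower body separately, pick whichever projection (front =
# XY plane, side = ZY plane) shows more motion, and animate in 2D.
import numpy as np

def project_motion(joints_3d):
    """joints_3d: array of shape (frames, joints, 3) of motion-capture
    positions. Returns (frames, joints, 2) trajectories in the chosen
    plane."""
    front = joints_3d[..., [0, 1]]  # X (left-right), Y (up)
    side = joints_3d[..., [2, 1]]   # Z (front-back), Y (up)
    # Proxy for "more recognisable": total in-plane motion variance.
    if front.var(axis=0).sum() >= side.var(axis=0).sum():
        return front
    return side

# Upper and lower body handled independently, e.g. (hypothetical ids):
# upper_2d = project_motion(mocap[:, UPPER_BODY_JOINT_IDS])
# lower_2d = project_motion(mocap[:, LOWER_BODY_JOINT_IDS])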

As we advance

The researchers believe this work could help apply more tailored motions to subcategories of figures, and that a more fine-grained analysis could increase an animation’s appeal. They also anticipate that an AI system could eventually create detailed animations for complex drawings.

Shraddha Goled
I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.

Our Upcoming Events

Conference, in-person (Bangalore)
Machine Learning Developers Summit (MLDS) 2023
19-20th Jan, 2023

Conference, in-person (Bangalore)
Rising 2023 | Women in Tech Conference
16-17th Mar, 2023

Conference, in-person (Bangalore)
Data Engineering Summit (DES) 2023
27-28th Apr, 2023

Conference, in-person (Bangalore)
MachineCon 2023
23rd Jun, 2023

Conference, in-person (Bangalore)
Cypher 2023
20-22nd Sep, 2023

3 Ways to Join our Community

Whatsapp group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our newsletter

Get the latest updates from AIM