Can AI Learn from Newborn Babies?

This is similar to Meta AI chief Yann LeCun’s vision of autonomous machine intelligence


Illustration by Nikhil Kumar

In an interesting discovery, data scientists at New York University have shown that AI models can glean insights from the cute babble of infants. While humans have long been recognised for specialised traits that support language acquisition, the study lends weight to the notion that AI, too, can learn effectively from minimal datasets.

“We ran exactly this experiment. We trained a neural net (which we call CVCL, related to CLIP by its use of a contrastive objective) on headcam video, which captured slices of what a child saw and heard from 6 to 25 months. It’s an unprecedented look at one child’s experience, but still, the data is limited: just 61 hours (transcribed) or about 1% of their waking hours,” said Wai Keen Vong, one of the researchers behind the ‘Grounded language acquisition through the eyes and ears of a single child’ study.
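The “contrastive objective” Vong mentions can be pictured with a short sketch. Below is a minimal, illustrative version of a CLIP-style symmetric contrastive loss over paired video frames and utterances; the encoder dimensions, batch size, and names are assumptions made for illustration, not details taken from the CVCL implementation.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss, the kind
# of objective CVCL is described as using. Dimensions and names are
# illustrative assumptions, not the authors' actual implementation.
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, utterance_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (frame, utterance) pairs.

    frame_emb, utterance_emb: (batch, dim) outputs of two encoders.
    Matching pairs sit on the diagonal of the similarity matrix; the
    loss pulls them together and pushes mismatched pairs apart.
    """
    frame_emb = F.normalize(frame_emb, dim=-1)
    utterance_emb = F.normalize(utterance_emb, dim=-1)
    logits = frame_emb @ utterance_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    loss_f = F.cross_entropy(logits, targets)      # frame -> utterance
    loss_u = F.cross_entropy(logits.t(), targets)  # utterance -> frame
    return (loss_f + loss_u) / 2

# Toy usage: random embeddings stand in for real encoder outputs
frames = torch.randn(32, 512)      # headcam-frame embeddings
utterances = torch.randn(32, 512)  # transcribed-speech embeddings
print(contrastive_loss(frames, utterances))
```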

This is similar to Meta AI chief Yann LeCun’s vision of autonomous machine intelligence. The Turing Award winner has long argued that teaching AI systems to observe like children might be the way forward to more intelligent systems. He has suggested that his ‘world model’ approach, which mirrors how the human brain works, might be the ideal path for AI systems to become intelligent.

Learning from a Kid’s Experiences

Despite working with limited data, the study showed that the AI model can learn word-referent associations from just tens to hundreds of examples. It can also generalise to new visual datasets and achieve multimodal alignment between what it sees and what it hears.
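To make “word-referent association” concrete: once words and images live in a shared embedding space, the test can be framed as picking, for a given word, the candidate image whose embedding lies closest. A minimal sketch, with random embeddings standing in for the trained encoders:

```python
# Sketch of zero-shot word-referent matching in a shared embedding
# space: given a word, choose the candidate image it is most similar
# to. Names and dimensions here are placeholders, not CVCL's API.
import torch
import torch.nn.functional as F

def pick_referent(word_emb, candidate_image_embs):
    """Return the index of the candidate image closest to the word.

    word_emb: (dim,) embedding of a single word, e.g. "ball".
    candidate_image_embs: (n, dim) embeddings of candidate referents.
    """
    word_emb = F.normalize(word_emb, dim=-1)
    candidates = F.normalize(candidate_image_embs, dim=-1)
    similarities = candidates @ word_emb   # cosine similarity per image
    return similarities.argmax().item()

# Toy example: pick the best of 4 candidate images for one word
word = torch.randn(512)
images = torch.randn(4, 512)
print(pick_referent(word, images))  # index of the chosen referent
```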

“Our findings address a classic long-standing debate in philosophy and cognitive science: What ingredients do children need to learn words? Given their everyday experience, do they (or any learner) need language-specific inductive biases or innate knowledge to get going? Or can joint representation and associative learning suffice? Our work shows that we can get more with just learning than commonly thought,” Vong added. 

Despite these advances, the current model, Child’s View for Contrastive Learning (CVCL), still falls short of a typical two-year-old’s vocabulary and word-learning abilities.

Several factors contribute to this gap, including CVCL’s lack of sensory experiences such as taste, touch, and smell, its passive learning approach compared to a child’s active engagement, and its absence of social cognition. 

Unlike children, CVCL doesn’t perceive desires, goals, or social cues, nor does it grasp that language serves as a means of fulfilling wants. 

Child’s Play, A Way Forward to More Intelligent Systems

Observing children has proven invaluable in advancing AI’s understanding of the physical world. Researchers at Google DeepMind noted that developmental psychologists, by studying infants’ innate knowledge of physics, have identified key physical concepts and devised methods like the violation-of-expectation paradigm to measure them.

Inspired by developmental psychology, the team created PLATO (Physics Learning through Auto-encoding and Tracking Objects). This model represents the world as evolving objects and makes predictions based on their interactions. 
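One way to picture how a violation-of-expectation signal is read off such a model: “surprise” is scored as the error between the predicted next object states and what is actually observed, so a physically impossible event produces a spike. The tiny predictor below is a toy stand-in under that assumption, not DeepMind’s architecture:

```python
# Toy sketch of violation-of-expectation scoring: surprise is the
# prediction error between expected and observed object states. The
# predictor is a stand-in, not PLATO's actual architecture.
import torch
import torch.nn as nn

class NextStatePredictor(nn.Module):
    """Per-object dynamics model: maps current states to predicted next states."""
    def __init__(self, state_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, object_states):        # (num_objects, state_dim)
        return self.net(object_states)

def surprise(predictor, states_t, states_t_plus_1):
    """Mean squared prediction error, used as the 'surprise' signal.

    For a trained predictor, an impossible event (e.g. an object
    teleporting) should score higher than a plausible continuation;
    this untrained toy only illustrates the mechanics.
    """
    with torch.no_grad():
        predicted = predictor(states_t)
    return torch.mean((predicted - states_t_plus_1) ** 2).item()

predictor = NextStatePredictor()
states_t = torch.randn(3, 8)                     # 3 tracked objects
plausible = states_t + 0.01 * torch.randn(3, 8)  # smooth continuation
impossible = torch.randn(3, 8)                   # discontinuous jump
print(surprise(predictor, states_t, plausible))
print(surprise(predictor, states_t, impossible))
```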

When trained on videos of simple physical interactions, PLATO surpassed other models that lacked object-based representations, underscoring the importance of this framework for intuitive physics learning.

PLATO demonstrated the ability to learn with as little as 28 hours of visual experience and could generalise to new stimuli without re-training. This work highlights the potential of child development research to inform the development of AI systems capable of understanding and navigating the complexities of the physical world.

AI Can Help a Child, Too!

In another groundbreaking innovation, researchers at the University of California, Los Angeles developed a new AI application, Chatterbaby, to interpret babies’ cries and provide insights into what they are trying to communicate. 

Dr. Ariana Anderson and her team trained the system on 2,000 audio samples of infant cries, using AI algorithms to distinguish between cries induced by hunger, pain, and irritation. The model could predict why a baby was crying with an accuracy of 90%.
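The article doesn’t detail Chatterbaby’s pipeline, but cry classification of this kind typically follows a standard recipe: extract spectral features (such as MFCCs) from each clip and fit a classifier over the cry labels. A hedged sketch of that generic recipe, with synthetic audio standing in for real recordings:

```python
# Generic sketch of cry classification: MFCC features plus a standard
# classifier. This illustrates the common recipe, not ChatterBaby's
# actual pipeline; the label set below is an assumption.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

LABELS = ["hunger", "pain", "fussy"]  # assumed label set

def cry_features(audio, sr=16000, n_mfcc=20):
    """Summarise a waveform as the mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Synthetic stand-ins for labelled cry recordings (1 second each)
rng = np.random.default_rng(0)
X = np.stack([cry_features(rng.standard_normal(16000).astype(np.float32))
              for _ in range(30)])
y = rng.integers(0, len(LABELS), size=30)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_clip = rng.standard_normal(16000).astype(np.float32)
print(LABELS[int(clf.predict([cry_features(new_clip)])[0])])
```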
