Facebook Introduces New Platform For Building Robots

Droidlet can be used to build embodied agents that recognise, react to, and navigate their surroundings.

Facebook has introduced Droidlet, an open-source, modular, heterogeneous embodied agent architecture. The platform can be used to build embodied agents that combine natural language processing, computer vision, and robotics, letting researchers build more intelligent real-world robots. It also simplifies the integration of a wide range of state-of-the-art machine learning algorithms with robotics, enabling rapid prototyping.

Droidlet

A Droidlet agent is made up of a collection of components, some heuristic and some learned, and the platform allows these components to be swapped out and customised. The software also gives researchers a debug dashboard and a markup interface for correcting errors and annotating data, along with robotics-focused features such as robot-specific wrappers, environments, and models.

The Droidlet platform not only draws on existing tools and advances in artificial intelligence, such as PyRobot, AllenNLP, and Detectron, but also connects these tools to provide a unified experience.
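For instance, a pretrained detector from one of these libraries could be wrapped behind a generic perception interface and swapped in as a single component. The sketch below is a hypothetical illustration using torchvision; the DetectorPerception class and perceive method are assumptions for illustration, not droidlet's actual API.

```python
# Hypothetical wrapper: plugging an off-the-shelf object detector into a
# Droidlet-style agent as one swappable perception component.
# Class and method names are illustrative, not the actual droidlet API.
import torch
import torchvision


class DetectorPerception:
    """Wraps a pretrained detector behind a generic perceive() call."""

    def __init__(self, score_threshold=0.5):
        self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        self.model.eval()
        self.score_threshold = score_threshold

    @torch.no_grad()
    def perceive(self, rgb_image):
        """rgb_image: float tensor of shape (3, H, W) with values in [0, 1]."""
        outputs = self.model([rgb_image])[0]
        detections = []
        for box, label, score in zip(outputs["boxes"], outputs["labels"], outputs["scores"]):
            if score >= self.score_threshold:
                detections.append({"label": int(label), "box": box.tolist(), "score": float(score)})
        return detections  # an agent would write these records into its memory system
```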


Droidlet lets researchers pair different computer vision or NLP models with their robots. They can also use Droidlet to carry out complex tasks either in the real world or in simulated environments such as Minecraft or Habitat.

According to the research paper, “Droidlet: modular, heterogenous, multi-modal agents”, the objective of the platform is to build intelligent agents that can learn continuously from their encounters with the real world.

The researchers hope that the Droidlet platform will help further research in areas including self-supervised learning, multi-modal learning, interactive learning, human-robot interaction, and lifelong learning.

Droidlet provides “batteries-included” systems for researchers and hobbyists: pre-trained object detection and pose estimation models process the robot’s observations and store them in its memory. To convert a natural language statement like “Go to the red chair” into a programme, the system activates a pre-trained neural semantic parser.
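As a rough illustration, the structured programme such a parser might produce for that command could look like the nested dictionary below; the schema shown is an assumption for illustration, not the exact output format of droidlet's parser.

```python
# Illustrative only: a plausible structured "programme" a neural semantic
# parser might emit for the command "Go to the red chair".
# The keys and values below are assumptions, not droidlet's exact schema.
parsed_command = {
    "dialogue_type": "COMMAND",
    "action": {
        "action_type": "MOVE",
        "reference_object": {
            "has_name": "chair",
            "has_colour": "red",
        },
    },
}

# The controller would then look up the matching object in the agent's memory
# and queue a lower-level navigation task toward its coordinates.
```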

 

What’s Droidlet made of? 

Droidlet comprises a collection of components, some based on heuristics and others trained via machine learning. A Droidlet agent is made up of the following elements, as sketched in the code below:

  • Memory system: An information store and retrieval system shared across the other modules. 
  • Perceptual modules: Modules that process data from the environment and write it into memory. 
  • Lower-level tasks: Tasks that directly affect the agent’s environment, such as “move three feet forward” or “place item in hand at given coordinates.”
  • Controller: Decides which tasks to execute based on the current state of the memory. 
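Putting these pieces together, the control loop they imply can be sketched roughly as follows; every class and method name here is a hypothetical stand-in rather than droidlet's actual interface.

```python
# A minimal, hypothetical sketch of a Droidlet-style agent loop.
# Memory, perception, controller, and task objects are illustrative stand-ins.


class Memory:
    """Information store shared by all modules."""

    def __init__(self):
        self.records = []

    def write(self, record):
        self.records.append(record)


class Agent:
    def __init__(self, memory, perception_modules, controller):
        self.memory = memory
        self.perception_modules = perception_modules  # e.g. detection, pose estimation
        self.controller = controller
        self.task_queue = []

    def step(self, observation):
        # 1. Perceptual modules turn raw observations into memory records.
        for module in self.perception_modules:
            for record in module.perceive(observation):
                self.memory.write(record)

        # 2. The controller inspects memory and decides which tasks to run.
        self.task_queue.extend(self.controller.choose_tasks(self.memory))

        # 3. Lower-level tasks act on the environment
        #    (e.g. "move three feet forward").
        if self.task_queue:
            task = self.task_queue.pop(0)
            task.execute()
```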

 

Summing up 

Each component of the platform can deliver robust and sophisticated results while operating independently. In addition, the Droidlet system is designed to grow even more capable over time, as users contribute new functions and capabilities built on additional sensory modalities or hardware setups. 

Meanwhile, robotics software development is becoming an increasingly large part of Facebook’s tech operations. The company recently teamed up with Carnegie Mellon and Berkeley to teach robots how to adjust to different environments in real time. It will be interesting to see how Facebook’s latest open-source platform works out for researchers in the coming days.

Ritika Sagar
Ritika Sagar is currently pursuing a PGD in Journalism from St. Xavier's, Mumbai. She is a journalist in the making who spends her time playing video games and analyzing the developments in the tech world.
