For this week’s practitioners series, Analytics India Magazine (AIM) got in touch with Murali Gopalakrishna, Head of Product Management for Autonomous Machines and General Manager for Robotics at NVIDIA, who also leads the company’s business development team for robots, drones, industrial IoT and enterprise collaboration products. In this interview, we discuss in detail the robotics solutions developed by NVIDIA and their significance.
AIM: Can you tell us about how NVIDIA is building robotics solutions to be used at scale?
Murali: Robotics algorithms can be broadly classified into (1) sensing/perception, (2) mobility (motion/path planning), and (3) robot control. All of these areas have seen significant innovation in recent years, with AI/deep learning playing an important role. With NVIDIA GPU-accelerated AI-at-the-edge computing platforms, manufacturers will be able to develop complex algorithms and deploy robotics at scale.
Robots have to sense, plan and act. To develop robots that are autonomous and efficient, developers have to accelerate algorithms across the complete stack. Algorithms such as object detection, pose estimation and depth estimation are used to perceive the environment, create a map of it and localise the robot within it. Algorithms such as free-space segmentation are used to plan an efficient path for the robot, while control algorithms determine the commands for the robot to follow the planned path. Advances in AI and GPU-accelerated computing are making all of these algorithms more accurate and faster, creating robots that are more capable and safer.
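The sense–plan–act loop described above can be sketched in a few lines of Python. This is a toy illustration, not NVIDIA's stack: the hand-written occupancy grid stands in for the output of a perception model, breadth-first search stands in for the motion planner, and the command list stands in for the controller.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over the free cells of an occupancy grid.

    grid: 2D list where 0 = free space, 1 = obstacle (a stand-in for
    the output of a free-space segmentation model).
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:          # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

def path_to_commands(path):
    """Control step: turn the planned path into motion commands."""
    moves = {(1, 0): "down", (-1, 0): "up", (0, 1): "right", (0, -1): "left"}
    return [moves[(b[0] - a[0], b[1] - a[1])] for a, b in zip(path, path[1:])]

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces the robot around the right side
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
print(path_to_commands(path))
# → ['right', 'right', 'down', 'down', 'left', 'left']
```

A real planner would work in continuous space with kinematic constraints, but the structure — perceive a map, search it, emit commands — is the same.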
Ease of use and deployment have made the NVIDIA Jetson platform a logical choice for over half a million developers, researchers, and manufacturers building and deploying robots worldwide. We provide a full suite of tools and SDKs for developers and companies scaling robotics and automation applications:
- Open source packages for ROS/ROS2 (Human Pose Estimation, Accelerated AprilTags), Docker containers, CUDA library support and more.
- For training: NVIDIA Transfer Learning Toolkit (TLT) helps reduce the costs associated with large-scale data collection and labeling, and eliminates the burden of training AI/ML models from the ground up. This enables developers to build production-quality models from pre-trained ones faster, with no code. Auto Mixed Precision allows developers to train with half precision while maintaining the network accuracy achieved with single precision, enabling significantly faster training.
- For real-time inference: NVIDIA TensorRT is a high-performance deep learning inference SDK, including an inference optimizer and runtime, that delivers low latency and high throughput for inference applications.
- NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. It is an open source inference serving software that lets teams deploy trained AI models from any framework on any GPU or CPU-based infrastructure (cloud, data center, or edge).
- For perception: NVIDIA DeepStream SDK helps developers build and scale AI-powered Intelligent Video Analytics apps and services. DeepStream offers a multi-platform scalable framework with TLS security to deploy on the edge and connect to any cloud.
- NVIDIA Fleet Command is a hybrid-cloud platform for managing and scaling AI at the edge. From one control plane, anyone with a browser and internet connection can deploy applications, update software over the air, and monitor the health of each location.
- NVIDIA Jarvis is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.
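Of the training features listed above, the Auto Mixed Precision point is easy to demonstrate in isolation. The snippet below is a minimal NumPy sketch of the loss-scaling idea behind mixed-precision training, not TLT's actual implementation; the gradient value and scale factor are illustrative assumptions.

```python
import numpy as np

def fp16_grad(loss_scale, true_grad=1e-8):
    """Cast a tiny gradient to float16, with and without loss scaling.

    true_grad is a hypothetical full-precision gradient small enough
    to underflow to zero in half precision. Scaling the loss (and thus
    the gradients) before the cast, then unscaling in full precision,
    preserves it -- the core trick of mixed-precision training.
    """
    scaled = np.float16(true_grad * loss_scale)  # cast happens here
    return np.float32(scaled) / loss_scale       # unscale in full precision

print(fp16_grad(1.0))     # 0.0 -- the gradient underflowed in float16
print(fp16_grad(1024.0))  # ~1e-8 -- loss scaling preserved it
```

Frameworks automate the choice of `loss_scale` dynamically, raising it when gradients survive and backing off on overflow.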
AIM: What is the scope of these solutions?
Murali: Powerful GPU-based AI-at-the-edge computing, along with a full spectrum of sensors, is widely deployed in the field today. Fueled by AI and DL, the sensor technologies that power perception for real-time decision-making have revolutionised several areas of robotics, including navigation, visual recognition and object manipulation.
Today’s AI-enabled robots perform myriad tasks and functions, allowing them to work as “cobots” in close collaboration with humans in complex environments, including warehouses, retail stores, hospitals and industrial settings, as well as in our homes. AI and DL continue to play a significant role in the programming of robots, speeding development time for roboticists and helping advance these systems from single functionality to multi-functionality.
And there’s no arguing the pandemic accelerated the need and urgency for robotics deployment, especially in healthcare, logistics, manufacturing and retail.
- Healthcare: To minimise contact and offset shortages of staff and resources, robots have found invaluable use in medicine and supply delivery, patient monitoring, medical procedures, temperature detection, and UV disinfection of public and private spaces.
- Logistics: From pick-and-place to last-mile delivery, robots have clearly become indispensable with the ever-increasing need for efficiency across the supply chain and e-commerce.
- Manufacturing: Manufacturers are using AI/DL to create the factory of the future, leveraging robots and cobots for no-touch manufacturing and enabling zero downtime to increase productivity and efficiency.
- Retail: From cleaning, inventory and safety (temperature detection, mask detection, social distancing) to shelf-scanning and self-checkout, robots are transforming the shopping experience.
We have a large customer base in a diverse set of industries such as agriculture, manufacturing, healthcare and logistics (e.g., John Deere in agriculture and Komatsu in construction). Most last-mile delivery robots use NVIDIA technology (Postmates, JD-X, Cainiao, etc.).
AIM: Tell us about NVIDIA Isaac Sim.
Murali: NVIDIA created the Isaac robotics platform, including the Isaac Sim application built on the NVIDIA Omniverse platform, for simulating and training robots in virtual environments before deploying them in the real world. NVIDIA Omniverse is the underlying foundation for all our simulators, including the Isaac platform. We’ve added many features in our latest Isaac Sim open beta release, including ROS/ROS2 compatibility and multi-camera support, as well as enhanced synthetic data generation and domain randomization capabilities, which are important for generating datasets to train perception models for AI-based robots.
Simulation technology like Isaac Sim on Omniverse can be used at every stage: designing and developing the mechanical robot, training the robot in navigation and behavior, and finally testing it in a “digital twin”, an accurate, photorealistic simulated environment, before deploying it in the real world.
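The domain randomization capability mentioned above can be illustrated abstractly: each training render draws scene parameters from broad distributions, so a perception model trained on the synthetic images does not overfit to one look of the simulated world. The parameter names and ranges below are hypothetical, for illustration only, and are not Isaac Sim's API.

```python
import random

def randomized_scene(rng):
    """Draw one hypothetical set of randomized rendering parameters.

    Each call varies lighting, texture and pose so that no two
    synthetic training images share the same appearance.
    """
    return {
        "light_intensity": rng.uniform(0.2, 1.5),                  # dim to bright
        "light_color": [rng.uniform(0.8, 1.0) for _ in range(3)],  # RGB tint
        "floor_texture": rng.choice(["concrete", "wood", "tile"]),
        "object_yaw_deg": rng.uniform(0.0, 360.0),
        "camera_height_m": rng.uniform(1.0, 2.5),
    }

rng = random.Random(0)  # seeded for a reproducible dataset
dataset_params = [randomized_scene(rng) for _ in range(1000)]
print(sorted({p["floor_texture"] for p in dataset_params}))
```

In a real pipeline, each parameter set would drive one render plus automatically generated ground-truth labels (boxes, masks, depth), which is what makes synthetic data cheap to produce at scale.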
AIM: What are the current challenges and what does the future hold for robotics?
Murali: One of the most interesting areas of development is cobots, which can be deployed in areas where robots have not been used thus far. Traditionally, robots on factory floors posed safety risks and were deemed too dangerous to work alongside humans, so these machines were typically placed in isolated environments or caged. Enter cobots. Though designed to work in close proximity with humans, cobots faced several challenges, such as limited capabilities and an inability to think, which put a damper on their widespread adoption.
But now, thanks to advancements in AI, which bring intelligence to cobots, we’re seeing these systems make real-time decisions that ensure safety in the factory of the future while maintaining and optimizing productivity. This includes training a cobot to perceive the environment around it and adapt accordingly, allowing it to reduce its speed, adjust its force, detect changing working conditions, or even shut down safely before it interferes with a human in its proximity. By leveraging the power of AI, coupled with changes in cobot design (softer materials, new types of joints, removal of sharp edges, etc.), we’re seeing the emergence of applications and use cases that were not previously feasible, such as robots in commercial kitchens.
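The speed-reduction behavior described above can be sketched as a simple proximity policy: full speed far from people, a ramp-down as a person approaches, and a full stop inside a keep-out distance. The thresholds and the linear ramp are illustrative assumptions, not a certified safety function; in practice the distance would come from a perception model and the limits from a safety standard.

```python
def safe_speed(distance_m, max_speed=1.0, stop_dist=0.5, slow_dist=2.0):
    """Scale a cobot's commanded speed by proximity to a detected person.

    distance_m: distance to the nearest detected human, in metres
                (hypothetically supplied by the perception stack).
    Returns the allowed speed as a fraction of max_speed.
    """
    if distance_m <= stop_dist:
        return 0.0                      # person too close: full stop
    if distance_m >= slow_dist:
        return max_speed                # nobody nearby: full speed
    # linear ramp between the stop and slow-down thresholds
    return max_speed * (distance_m - stop_dist) / (slow_dist - stop_dist)

for d in (3.0, 1.25, 0.3):
    print(f"{d} m -> speed {safe_speed(d):.2f}")
```

Running this prints full speed at 3.0 m, half speed at 1.25 m, and a stop at 0.3 m, mirroring the slow-down/shut-down behavior described in the answer.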
Robots are being taught what to do, and how to improve at complex tasks, in as little as a few hours or overnight (versus what used to take weeks or even months). AI techniques such as one-shot learning, transfer learning, imitation learning and reinforcement learning are no longer confined to research papers; many of these methods are in practical use today in real-world robotics deployments.
AIM: How do you see the Robotics landscape evolving in India?
Murali: Manufacturing is increasingly reliant on robotic production; the automotive industry is a prime example. Our collaboration with BMW begins with creating a digital twin of a future factory in Omniverse and laying out the entire robot-managed production line digitally, before committing to physical construction. Other industries benefiting from robotics include the industrial and nuclear power sectors: warehouse and inventory management, materials transportation, quality inspection and predictive maintenance in the former, and internal reactor inspection and emergency response in the latter.