The concept of autonomous machines dates back to medieval times, but research into the practical and potential use of robots began only in the 20th century. Today, numerous scholars, inventors, engineers, and technicians are working to develop machines that mimic human behaviour and manage tasks in a human-like fashion.
While artificial intelligence plays a crucial role in the development and advancement of robotics, the rise of general-purpose robots raises the question of whether robotics has begun to lag behind AI. Robotics is often conflated with industrial automation on one side and academic research on the other. While most high-end research robots have deep learning embedded in them, such as computer vision for object detection, feature extraction, and classification, industrial robots are only beginning to adopt mature camera-based object detection and classification.
The purpose of computer vision varies; one use is improving localisation and mapping by identifying features in the environment. This is the computational problem of continuously creating or updating a map of an unfamiliar environment while simultaneously keeping track of an agent's location within it. Static robots, meanwhile, typically use object detection for picking and segmentation, and classification for placing or stacking objects correctly.
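The detection-then-placement logic described above can be sketched in a few lines. This is a minimal illustration, not a real vision pipeline: the detector output format, class labels, and bin mapping below are all hypothetical placeholders.

```python
# A minimal sketch of how a static pick-and-place robot might turn
# classification results into actions. The detections, labels, and
# bin mapping are hypothetical, purely for illustration.

# Hypothetical detector output: (label, confidence, bounding box)
detections = [
    ("screw", 0.94, (12, 40, 30, 58)),
    ("washer", 0.55, (80, 22, 95, 37)),
]

# Each recognised class maps to a placement bin.
PLACEMENT_BINS = {"screw": "bin_A", "washer": "bin_B"}

def choose_actions(detections, min_confidence=0.6):
    """Keep only confident detections and map each to a pick/place action."""
    actions = []
    for label, conf, box in detections:
        if conf < min_confidence:
            continue  # too uncertain to pick safely
        bin_name = PLACEMENT_BINS.get(label)
        if bin_name is not None:
            actions.append({"pick_at": box, "place_in": bin_name})
    return actions

print(choose_actions(detections))
```

Here the low-confidence washer is skipped, so only the screw is picked and routed to its bin, mirroring how classification decides where an object is placed or stacked.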
Does Lack Of AI Make Robots Uncompetitive Compared To Human Factory Workers?
Human hands allow us to solve a wide range of tasks, and engineers have been attempting to replicate similar utility in robots for over 60 years. Companies have been designing separate grippers for each task, which points to a gap in robotics: a smart grasper that can recognise and pick varied objects efficiently. The modern manufacturing industry demands more flexibility and efficiency for robots to come close to replacing humans.
While many research teams across large tech companies and universities are working on robotic arms using simulation training and reinforcement learning for factory automation, there is a lot of room for improvement.
For instance, Amazon has adopted robotics in its warehouses in a constrained environment structured around performing specific tasks. Further, the Amazon Picking Challenge tasked roboticists with building a robot that could pick and stow 12 products from a shelf into a bag, with the aim of automating the company's warehousing processes.
There is also plenty of research on manufacturing dexterous robotic hands, but much of it is not yet suitable for industrial applications, which require robots to recognise many different objects and select the appropriate grasp. Consider, for instance, OpenAI's success in solving a Rubik's Cube with a robotic hand in a non-trivial setting.
How OpenAI Integrated AI With Robotics To Solve A Complex Problem
OpenAI used neural networks to solve a Rubik's Cube with a human-like robot hand. The networks were trained entirely in simulation, using reinforcement learning, coupled with a new technique invented by OpenAI called Automatic Domain Randomization (ADR). ADR helped the system handle situations it never witnessed during training, such as being shoved by a stuffed toy. This demonstrated that reinforcement learning can enable robots to deal with complex situations.
According to the OpenAI blog, researchers and engineers have worked for decades to develop general-purpose robotic hardware, but with only limited success, partly because of its high degrees of freedom. Building new hardware is itself a big challenge in robotics. Even in the case of the OpenAI robot hand, the hardware was not new; it had been around for about 15 years. The company stated that only the software approach was novel.
OpenAI had been trying to train the human-like robotic hand to solve the Rubik's Cube since 2017, largely on the assumption that training a robot hand to perform complex manipulation tasks can lay the foundation for general-purpose robots. While OpenAI could solve the Rubik's Cube in simulation within a few months, it took another year for the physical robot to manipulate the cube. Even then, it could solve the task only 20% of the time. The experiment was a success, but it demonstrated how complex and challenging it is to develop a robotic hand that could replace actual humans in various tasks.
AI Is Helping In Evolutionary Robotics, But There Are Challenges
While engineers and AI professionals are developing evolutionary robotics using reinforcement learning and learning classifier systems, it can take considerable time, resources, and hardware to bring these systems to reality. Researchers are using simulation techniques both to create better robots and to explore the nature of evolution itself.
Because the process often requires multiple generations of robots to be simulated, the technique is typically run in robot simulator software and then tested on real robots once the evolved algorithms are reliable enough. The problem with simulation training, however, is that even small discrepancies between the simulation and reality, such as the size of the cube's bezels or a few extra parts in the robotic hand, can significantly degrade a model's performance in the real world, even with extensive domain randomisation during training.
The above arguments suggest that while robotics has seen drastic improvement over the years, infusing AI into it still needs much more exploration. Even where AI has been used, there is ample scope for improvement.
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast called Simulated Reality- featuring tech leaders, AI experts, and innovative startups of India. Reach out at firstname.lastname@example.org