An AI-enabled robot pet, Moflin, with emotional capabilities and an ability to learn, won the Best of Innovation Award in Robotics at the Consumer Electronics Show (CES) 2021 last week.
Moflin, a furry pet that responds with cute sounds and movements, uses a nature-inspired algorithm to learn patterns from sensor inputs, giving it a ‘unique personality’ and ‘tremendous therapeutic benefits’, according to its manufacturer, Vanguard Industries.
Vanguard Industries plans to roll out the pet in March 2021. If you want to play with it, you can pledge ¥41,800 (around $400) to Moflin’s Kickstarter campaign.
Anthropomorphism in robots has become a hot topic in recent times. Let’s take a look at its positive and negative implications.
Positive Implications And Advantages
A recent study showed that the human brain reacts the same way when making eye contact with a humanoid robot as it does with another human, making a case for building service robots with eyes. As robots start working alongside humans in warehouses and on manufacturing floors, eye features will help ensure smooth human-robot interaction.
Robots will soon start working in public places, where they are likely to encounter abuse from humans. Packing robots with human or animal features, and adding character to their interactions, is an excellent strategy to ensure they receive fair treatment from the humans around them.
Like Moflin, more robots will make their way into our homes. Social robots have many potential applications, from acting as an antidote to loneliness to helping address mental health issues. Robots can be your Man Friday, hand-holding you through different stages of life.
Further, advances in psychological and social development research could be leveraged to deploy human-like robots in controlled environments.
Ethical Concerns And Threats
Past research has described social robots as the next milestone in ‘cultural simulation’. The study denounces the use of anthropomorphism to create social bonds between humans and robots. Implicit in this criticism is the conviction that anthropomorphic projections correspond to false beliefs, and such beliefs can have serious consequences. Imagine a child who starts believing that a robotic caretaker actually cares for her.
The proliferation of anthropomorphic robots encourages people to believe AI has advanced further than it actually has, making them wary of cyborgs without evidence to justify such fears. Tech titans like Elon Musk stoking these fears does not help matters either.
Further, many theorists think that focusing on human-like AI is a hindrance to the progress of AI; instead, the field should concentrate on mindless intelligence or a ‘generic’ form of intelligence. Focusing on human-level AI also diverts attention from AI’s current and more fundamental problems, such as the discrimination that results from algorithmic biases.
Anthropomorphising robots also leaves us with an unreliable way of testing the intelligence of the AI systems deployed in them. In the best case, this obscures both AI’s actual achievements and how far it has to go to produce genuinely intelligent machines; in the worst case, it leads researchers to make false claims.
Lastly, people’s liking for a robot increases the more human-like it becomes, but with a caveat: humans grow uncomfortable when the resemblance gets too close. This phenomenon is called the ‘uncanny valley’.
A research paper argued that social robots and anthropomorphic technologies are entirely unethical and pull people away from social relationships. The study makes a case for eliminating the anthropomorphising of computational technologies.
On the flip side, recent research takes a different position in this debate, claiming that not all social robots fall under the same umbrella: not only are they used for different applications, but there are also different forms of anthropomorphism.
While a social robot could falsely make kids feel it cares for them, it could also help autistic kids develop social skills. Issues concerning the illusion of reciprocal caring remain relevant even in the second case, and so the study proposes ‘synthetic ethics’. Synthetic ethics does not exclude traditional ethical questions but reframes them within a research perspective that views social robots as a means to empower our relationships.
Social robots are inevitable, and those serving good causes, such as helping autistic kids, should be encouraged. Condemning all social robots wholesale is therefore pointless. Instead, we should forge strategies to enable human-robot symbiosis with enough checks and balances.