When the Chief ordered his subordinates to take Johnny Five to ‘stolen goods’, he protested indignantly, “I’m not stolen goods”. As he was arrested, he asked in despair, “But hath not a robot eyes? Hath not a robot hands, organs, dimensions, senses, affections, passions? If you prick us, do we not bleed?”
As this iconic scene from ‘Short Circuit 2’ (1988) illustrates, when we discuss robots and the possibility of them becoming ‘conscious’, we run into a profound philosophical difficulty: should these “beings” have any claim to moral and legal standing?
Much of the philosophy of rights is ill-equipped to deal with the case of artificial intelligence. Most claims for rights, whether human or animal, are focused on the question of consciousness.
Unfortunately, there is no concrete understanding of what defines ‘consciousness’. Some believe it is immaterial, while others believe it is a state of matter, like gas or liquid. Regardless of the precise definition, humans have an intuitive knowledge of consciousness because we experience it.
In the contemporary understanding, consciousness entitles beings to rights because it brings an awareness of sensations such as pain and suffering in response to external stimuli. However, robots don’t ‘suffer’, at least not in the way a conscious being does. Without the experience of pain or pleasure, there are no preferences, which renders the notion of rights meaningless.
Human rights are deeply tied to our own consciousness. For instance, we dislike pain because our brains evolved to keep us alive and well. From that shared experience, we came up with rights that protect us from infringements that cause us pain. Even more abstract rights like ‘freedom’ or ‘equality’ are rooted in the way our brains are wired to detect what is ‘fair’ and ‘unfair’.
But what if we programmed robots to experience pain and emotions? To be able to choose justice over injustice and pleasure over pain, while also being conscious of making these choices? Would that make them ‘human’? More importantly, would that be the sole criterion that grants them rights?
This line of thinking highlights the need to define another slippery concept: the robot.
What is a robot?
AI and Robot Ethics scholar Dr David J. Gunkel believes that our contemporary understanding of robots arises from fiction and not scientific facts, calling it ‘science fiction prototyping’.
“Unlike artificial intelligence, which is the product of an academic conference in the mid-1950s, the word robot actually is the product of science fiction. It comes to us from Karel Čapek in his 1920 hit play, R.U.R., or Rossum’s Universal Robots. And he used the word robot, which is derived from the Czech word “robota,” or forced labour. And since this time you can see robots have dominated science fiction. They’re all over the place from Star Trek to Star Wars to Westworld”
Dr David J. Gunkel, scholar in AI and robot ethics.
He further emphasises that the main advantage of science fiction prototyping is that it allows non-experts to understand what is in play, interpret what this technology entails, and grapple with some of the major questions that need probing and resolution.
However, such representations also undermine the efforts of engineers, AI scientists and roboticists, who constantly struggle against fiction-fuelled expectations that real-world research cannot live up to. It’s a double-edged sword.
George A. Bekey, American roboticist and Professor Emeritus of Computer Science, Electrical Engineering and Biomedical Engineering, University of Southern California, defines a robot as a device that senses, thinks and acts. But, that’s a rather broad definition—almost too broad because lots of technologies could then be considered robots.
“Given the versatility and wide availability of robots for a wide variety of applications in today’s world, it will be difficult to nail down one single definition for a robot”
Dr Karthik Ramesh, VP–Head International Markets and Innovation at Emids.
He believes that, even without human-like consciousness, robots can accomplish tasks beyond human capacity. An excellent contemporary example is the James Webb Space Telescope, observing distant planets and galaxies from deep space in the search for signs of life. From a purely technical point of view, however, robots are merely the sum of their parts.
Such a vast spectrum of definitions allows anything from a thermostat to a smartphone to Tesla’s upcoming humanoid to be deemed a robot. But logically, they are all very different devices, serving a range of purposes.
Gunkel believes that the understanding of the term ‘robot’ is going to change as the context around it evolves and more importantly, as our own experiences with technology transform in time.
“It is essentially one of those terms that are pregnant with ambiguity, but I think that offers us the opportunity to get more specific and to talk about things in a much more [specific] context”
Dr David J. Gunkel, scholar in AI and Robot Ethics
Anthropomorphizing robots
Regardless of how we define robots, our assumptions about them take root in Tinseltown and then evolve, owing to our innate ability to anthropomorphize.
For instance, when former Google engineer Blake Lemoine published his conversations with Google’s LaMDA and claimed on Twitter that the AI was sentient, the internet erupted with a range of reactions. We witnessed a very similar reaction to Sophia, a social humanoid robot, when she remarked about destroying humans at SXSW 2016.
“As humans, we tend to anthropomorphize. So the question we need to ask is, is the behaviour that we see truly intelligent? AI can fool some of the people all of the time and all of the people some of the time, but that does not make it sentient or intelligent”
Dr Oren Etzioni, CEO, AI2.
In contrast, Dr Ramesh concurs with Alan Turing and Barrington Bayley, who believed that the understanding of consciousness could be stretched to include inanimate objects, and that a difference in the nature of consciousness alone cannot justify the exclusion of robots.
“Cultures like [the] Japanese have already imbibed reverence for robots as human equivalents such as ‘monk robots’ beyond seeing them in their robot cages. Many researchers established that beliefs in animism have no impact or correlation to thinking of a robot as having a soul. As robots become more pervasive not just in official and commercial spaces but also in our personal homes and spaces, humans are expecting more human-like interactions and the need for “social” robots. So, anthropomorphism has been subconsciously accepted in human interactions where, for example, a bot is associated with a particular gender as well,” says Dr Karthik Ramesh, VP–Head International Markets and Innovation, Emids.
In focus—Social Robots
‘Social robots’ can be defined as robots that interact with humans and with each other in a socially acceptable fashion, convey intention in a manner perceptible to humans, and are empowered to offer solutions to fellow agents, be they human or robot.
Furhat Robotics, a social robotics and conversational AI company, describes social robots as “the next major user interface, that are typically designed based on the oldest user interface we as humans know—the face”.
When Furhat Robotics built its first robot, the company aimed for an intuitive, interactive robot capable of emulating social interaction between humans. It wanted the robot to be able to impersonate different characters, both to increase its usability and to distinguish it from other ‘fixed personality’ robots.
So when we think of rights, are we considering such robots—those with an uncanny resemblance to human appearance, behaviour and capability?
Dr David Gunkel believes that the research and development of robot technology has important implications for human life, and that attempts must be made to better understand what such technological advancements may mean for our collective existence in the future.
“. . . I think we’ve come to the point where we realise that research and development that is not in touch with the social consequences and an understanding of what this is going to do for us and to us is irresponsible. And that responsible development of technology has become sort of the watchword. We want to make sure not only are we devising these brand new toys and tools and everything else, but we are thinking about what they mean for us and how they will affect [us].” — Dr David J. Gunkel, scholar in AI and Robot Ethics.
Human rights for robots?
Raging debates on the current state of human rights in the world surround us all. The selective application of universal tenets of humanity is a collective concern. What then is the relevance of debates about robot rights?
Dr Karthik Ramesh says, “While this is not yet a matter of grave concern, this will become a pertinent topic in coming times as more of our ‘human’ worlds get inundated with robots [and] autonomous vehicles.”
“The need for a common consensus or framework across cultures, geographies and even human perceptions of how a robot would be perceived is a must before any pre-determination of whether robot rights are required or not. With the increase in accessibility to robots across the world and its super intelligence growing in quantum leaps, it will not be far away when a robot may actually be equivalent to a human in terms of real-time decision-making and function. However, without feelings, a concept of soul or consciousness and the existence of DNA, some purists [may] argue even on the need for consideration of robot rights.”
Dr Karthik Ramesh
Both sides are important
K-2SO: I can blend in.
I’m an Imperial droid.
The city is under Imperial occupation.
Jyn Erso: Half the people here wanna reprogram you. The other half wanna put a hole in your head.
Rogue One: A Star Wars Story (2016)
Tech companies developing interactive technology like robots often gatekeep how such technology interacts with the public.
Dr David Gunkel believes that such practices should be energetically discouraged and emphasises the importance of citizen participation in decision-making when it comes to technological advancements that may impact their everyday lives.
“You have people who are vocal advocates for AI ethics as a way of helping curb the sort of capitalist accumulation of power that is happening in big tech. And then the big tech people who are like, you know, we don’t want regulation or we want limited regulation so that we evolve this technology and implement it in ways that we think is going to serve the public interest. And this is just good democracy. This kind of argument is just what happens when democratic citizens get involved in the shape and direction of their own destinies. And I think it’s actually a good thing”, says Gunkel.
He further elaborates—“I think the real question we have is who has the power here to make decisions and implement these things. Power asymmetries are very crucial to recognize and to begin to do something about, because we’re not all equal in this conversation. And I think getting to the point where we can rely on our governments to create a more equitable exchange of ideas and concerns and interests, I think is going to be to everyone’s benefit.”
Rights are here and so is the invasion
According to Dr Karthik Ramesh, robot rights would imply granting any machine, regardless of its level of intelligence, a legal, innate claim to life, liberty and moral values, much like a human being. For instance, the humanoid robot Sophia was granted Saudi Arabian citizenship in 2017.
In yet another instance, in Pennsylvania, U.S., autonomous delivery robots are allowed to manoeuvre on sidewalks and roadways and are now technically considered “pedestrians”.
According to Axios, there are now about a dozen U.S. jurisdictions, including Pennsylvania, Idaho, Virginia, Florida, Wisconsin and Washington, D.C., where it is legal for personal delivery robots to share the streets with people. In these places, personal delivery robots are granted the same rights and responsibilities as human pedestrians.
However, it is noteworthy that, in granting this status and the associated rights and responsibilities to personal delivery robots, these jurisdictions make no attempt to resolve, or even address, the larger questions surrounding robot moral standing or robot/AI personhood. They merely provide a legal framework for integrating these particular devices onto city streets and align that integration with existing legal practice.

“So this is what the robot invasion looks like. It doesn’t take place as it looks in science fiction, with the robots revolting against their human masters and attacking us and rising up in revolution; it’s going to be very mundane. It looks less like Terminator. It looks more like a very boring episode of Law & Order”, remarks Dr David Gunkel.