Jacy Reese Anthis’ book ‘The End of Animal Farming’ and the landmark Transformer paper came out around the same time in late 2017. It had been about five years since AlexNet’s ImageNet win put deep learning on the map, and progress was starting to accelerate. Anthis and his team had both the funding and the impetus to start focusing on AI, and that has been his research focus ever since.
In an exclusive interview with Analytics India Magazine, Anthis, the co-founder of the Sentience Institute, shared his views on the state of affairs in rights for AI. “Our interest has always been the expansion of humanity’s moral circle. So we started with a broad mandate to focus on non-human intelligence,” he began.
Defining AI Rights
Anthis recently published a detailed piece on why ‘We need an AI rights movement’ even though currently “they have no inner life, no happiness or suffering, at least no more than an insect”. But it may not be long before they do, the PhD fellow at the University of Chicago believes.
“There are a lot of different conceptions of rights in the philosophical literature. For example, rights come alongside responsibilities. For most adult humans, this is how we conceive of it. The fact that you have the right to free expression goes alongside the responsibility to participate in a democracy or in another government process. For other humans, such as children or developmentally disabled people, and for non-humans, these conceptions of rights don’t work as well,” he said while defining AI rights.
He also explained that rights need to be tailored to the interests of the sentient being in question. “For example, some animals are used for labour, but some of them we keep around in our homes as companions, and they still have a right to be free from abuse and exploitation. Similarly, AI have their own suite of capabilities, responsibilities and roles in society that their rights need to be built around. Fundamentally, it starts from the fact that they’re sentient beings, and therefore they have interests that need to be protected,” he said.

Drawing Parallels
Opinions on the subject vary greatly. Gary Marcus, an active voice in AI, responded to the idea of AI rights. He tweeted, “People arguing for AI rights based on complex text processing algorithms need to ask whether they would assign the same rights to calculators, smart watches, and the internet.”
In a follow-up tweet on March 25, 2023, Marcus added: “‘I don’t quite get how it works’ + ‘it surprises me’ ≠ it could maybe be sentient if I squint.”
Marcus is not the only one to draw parallels between AI and calculators. An image of maths professors protesting against calculators has recently been doing the rounds on the internet. But there is a fundamental difference: the analogy is unfair, as a calculator performs arithmetic and logic operations with far greater accuracy and speed than the human mind, and the resistance to it targeted not electronics research as such but its deployment, and that too only in schools.
Read: Stop Confusing Calculators with GPT-4
Explaining the reason behind the AI rights movement, Anthis said, “For moral progress, it’s very important that we start early.” He pointed to the many atrocities throughout history that people were not prepared for, such as slavery and the beginning of environmental degradation.
“It’s incredibly hard to predict the future of AI in detail, but there are many doing a good job of forecasting, in broad strokes, what it will look like overall, even if models today have absolutely zero emotions, zero sentience, zero mental capacities of these sorts,” he added.
There is a long history of people in the field failing to predict advances, and of sceptics being proven wrong. Anthis recalled how Douglas Hofstadter admitted that his view in the 1990s and earlier was that once AI solved chess, it would mark the pinnacle of expression and intelligence. Then Deep Blue defeated Garry Kasparov in 1997, and expectations of what an AI could do, and what it would mean, had to be re-evaluated.
Criteria for Rights
Living things are incredibly diverse, from bacteria to primates, and the rights of AIs would need to cover a similarly wide diversity. The notions would also extend to non-sentient AI, like the language models we work with today.
Kate Darling, a research scientist at the MIT Media Lab, has examined in her research how the sight of a robot dog being abused by humans is something “we’re very uncomfortable with”.
“When we do have diverse sentient systems in the future, some of them might be subservient, powerless, like animals today. We can have AIs that can fully participate in the political process and might have many responsibilities that we assign to humans. We need an AI Bill of rights that would consider that full spectrum,” Anthis said.
On the question of which philosophical framework the rights should be based on, Anthis said rights as a legal notion can be built on any of the common philosophical frameworks. “Above all, ‘do no harm’ can be an important foundation.”
The End Goal
The American social scientist said there is an important discussion happening right now in the social science of AI around automation and augmentation. Citing Erik Brynjolfsson’s ‘Turing Trap’, he explained the idea that we get baited into trying to recreate ourselves in the machine, when that is not what these machines are best at.
If we focus only on the human level, machine intelligence merely does exactly what we do, and that won’t fully unlock the potential of these systems.
The end game is a safe form of AGI, Anthis believes. “But unfortunately, there have been many signs that we’re not headed towards that goal, but instead an unsafe outcome. I worry a lot about people who just want to build more and more powerful systems without trying to bake notions of alignment and safety into their research,” he concluded.