
What Is AI Called In Your Mother Tongue?

“I think that any future where we have to rely on companies like Google to do translation work for us is not a future where I see a justice-focused or rights-focused kind of translation”


Over the last few years, the conversation around emerging technologies like AI and machine learning has grown massively. However, this conversation remains largely confined to the research and developer community. The general public, which is at the receiving end, is largely left out. This is mainly because there has been very little effort to give such technologies cultural and linguistic context. To take an example, most of us might be unaware of what AI is called in our local tongue or, worse, there might not be any local term for AI to begin with.

Recognising this problem, Asvatha Babu, a PhD candidate at the School of Communication at American University, has been studying the interaction between technology, media, governance, and the public. Analytics India Magazine caught up with Asvatha for a detailed conversation.


Edited excerpts from the interview:

AIM: What are the challenges of translating technological terms/concepts into a local cultural context?

Asvatha Babu: I am a PhD candidate at the School of Communication at American University; I did my master's at the same university, with a focus on cybersecurity and technology policy. I worked briefly on blockchain and its use in creating social impact (for example, its possible use in the Aadhaar system). A series of events and deeper research got me interested in facial recognition technology (FRT).

My current dissertation work is focused on FRT and its proposed use in solving large social and humanitarian problems. As part of my dissertation work, I have interacted with the police to understand how they use these technologies (currently, in the context of Tamil Nadu). In addition, I also try to understand the police's attitude towards this technology and whether it is really easing their work or adding to their burden.

Another aspect of my research is to understand society's general attitude towards FRT: what people think and say about it. So when we say FRT, we culturally construct multiple things under this umbrella. In pursuit of understanding the cultural implications of FRT, I am studying the media coverage around this tech. For example, I have realised that even though the Tamil Nadu police have been using FRT for the last three years, we don't really have a name for it in the local language (Tamil).

There are so many ways to translate the term facial recognition to convey the purpose of the technology, and that is what journalists do when reporting on FRT in the local language. Because of this, there is a tendency to talk about FRT in a more functional way (what it does and why it is required). This approach is a double-edged sword. On the one hand, such translation makes FRT appear highly contextualised, which is good. On the other, my study shows that this approach also leads to the media reporting on FRT from the police's or authorities' perspective. This means there is less focus on critical analysis of such a tool.

AIM: What can be done to improve the understanding of technologies like FRT, especially through proper and contextual translations?

Asvatha Babu: A lot more attention needs to be paid to the process of translation. There are a lot of really good research organisations, civil society organisations and think tanks working on facial recognition and surveillance in India, algorithmic surveillance, digital rights and data privacy. But there is a lack of focus on grassroots-level translation. How the technology gets translated, or how it gets locally constructed within a specific language, is important to study. So, as a first step, I think we need to pay more attention to that.

As a second step, there needs to be more engagement between language studies scholars, digital rights scholars, the people who study the impact of these technologies, and social justice and rights-focused think tanks. And there needs to be more engagement between all of these people and the engineering community. Currently, translation software is created by bigger organisations like Google and used by authorities like the police. There is a lack of critical voices. We're missing the whole perspective that needs to be present for people to understand the technology for what it is and for the media to cover it more comprehensively.

AIM: How good is software like Google Translate when it comes to grassroots-level tech translation?

Asvatha Babu: I think that any future where we have to rely on companies like Google to do translation work for us is not a future where I see a justice-focused or rights-focused kind of translation, because their translation services are built with the aim of being used more and being more profitable as products.

Consider, for instance, the indigenous Maori people of New Zealand, who are fighting to keep their language alive without the interference of big conglomerates like Google. They understand that once the tech giants gain access to the language, they (the tribe) will lose all sense of contractual ownership of it. Companies like Google, Microsoft or IBM don't care about the language as such; they're interested in having it in their arsenal, so that more people have to rely on these companies. So, the tribe is now building its own process of automation. There are alternate ways of automating the translation process that don't have to rely on these big tech companies motivated solely by profit.
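
This last point can be made concrete. Open-source toolkits already make it technically feasible for a language community to train, host, and run its own translation model on hardware it controls, so that neither the text nor the model weights pass through a big tech company. Below is a minimal sketch in Python using Hugging Face's open-source transformers library; the model identifier is hypothetical, standing in for a checkpoint a community has trained and governs itself.

from transformers import pipeline  # open-source NLP toolkit

# Hypothetical model ID: a checkpoint trained, hosted, and governed by the
# language community itself rather than by a commercial translation service.
MODEL_ID = "community-org/en-ta-translation"

# Loading and running the model locally means the text being translated
# never leaves hardware the community controls.
translator = pipeline("translation", model=MODEL_ID)

result = translator("The police use facial recognition technology.")
print(result[0]["translation_text"])

Because inference happens locally, the community also decides how contested technical terms such as "facial recognition" are rendered in its language, rather than inheriting whatever a commercial service defaults to.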

AIM: Given the risks of surveillance and privacy breaches that technologies like FRT pose, activists believe that they should be completely banned. What is your take?

Asvatha Babu: Yes, this is one of the major concerns about the increasing prevalence of technologies like FRT. 

Surveillance has always existed; people have always watched other people, especially in the context of, say, public health or prisons. The notion that people need to be monitored or watched for public safety and security is not new. What has changed in the last century is the development of large scale information storage and processing technologies, and the entry of private companies with business models targeted at making a profit out of this information. This has made surveillance simultaneously more sophisticated and mundane.
In addition, we are living in a society where there is a large and growing power asymmetry between different groups: the government and the citizens, the rich and the poor, the police and the public. Developing more advanced and pervasive tools of surveillance in such a society would likely further exacerbate that asymmetry and directly harm those who are vulnerable. So, really, any amount of this surveillance is too much.

The often-repeated claim by the authorities and those who develop such technologies is that they make life easier and more convenient. The questions that must be addressed here are: whose life is made better? To what end? And at what cost? Also, at the current stage of development of these technologies, whether they actually enable the police to serve the public better and increase public safety and security is a whole other question.

These technologies cannot be “undeveloped”. They already exist and are gaining momentum worldwide, but they can be regulated. Calls for bans are effective in reminding us that technology is not unstoppable or uncontrollable. Dangerous technologies have been taken or regulated out of circulation before and can be again. But at the very least, we need to be thoughtful about what tools we're using to achieve what purposes. Not only do we (developers, media, researchers, and authorities) need to ensure that people have adequate and well-rounded information about these technologies; we also need to make sure that those made most vulnerable by them are included in making decisions about them.


Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.