This is the sixth article in our weekly Expert’s Opinion series, where we talk to academics who study AI and other emerging technologies and their impact on society and the world.
This week, we spoke to Dr Urvashi Aneja, Co-Founder and Director of Tandem Research, whose current research focuses on the societal impact of algorithmic decision-making systems in India and the governance frameworks around them.
Dr Aneja is an Associate Fellow with the Asia Pacific Programme at Chatham House, UK, and a T-20 Task Force member for the Future of Work in G20 countries. In the past, she has served on Indian government committees on Frontier Technology and Skills for a Digital India. She has also led knowledge sessions for members of parliament on the use of AI in India.
Analytics India Magazine caught up with Dr Aneja to get insights into her recent research on the ‘Promise and Peril of Big Tech in India’.
AIM: What are some of the implications and threats of Big Tech, based on the characteristics (market power, informational gateway, privacy, sovereign interests) you use to define it in your paper?
Dr Aneja: In our report, we argue that four conceptual markers characterise Big Tech: data-centric business models; network effects; the provision of essential market and informational infrastructure; and civic roles or functions. We also think Big Tech is better understood as a concept rather than a static set of companies: new companies may enter this category just as existing ones may drop out of it.
The four points that you mention in your question are four areas in which we see Big Tech companies impacting India’s digital economy and society. Across these four areas, we see Big Tech companies bringing both benefits and harms.
For example, in the case of market power, Big Tech companies provide digital infrastructure for other businesses, promote R&D and innovation, and have the capacity to represent industry perspectives in policy discussions. But they also engage in anti-competitive and monopolistic practices: they are able to use their position as both platform providers and platform participants to privilege their own products and services and to enter new product markets.
Similarly, with regard to the informational gateway point, most Indian internet users rely on one or more Big Tech platforms to access information, communicate, and participate in political and social life. But this also gives them inordinate influence over the exercise of the constitutional right to free speech. Through their algorithms, which curate and amplify news and information, they wield immense gatekeeping power.
Big Tech companies augment state capacity by providing digital infrastructure, using data for social good, and enabling the state to communicate with underserved populations. But the intersection of Big Tech companies’ interests and State functions raises concerns around democratic accountability and sovereign independence, the health of domestic markets, law enforcement, and equitable taxation.
The impact of Big Tech companies also differs between India and industrialised economies. Big Tech companies fill important gaps in market and state infrastructure and are an essential part of India’s development story. This makes the trade-offs and policy choices different for developing countries like India. Big Tech companies also play important civic roles in India, but there are limitations to the extent that Indian policymakers and the public can hold them accountable.
We want to find ways to preserve those benefits and also address those threats.
AIM: What are the steps taken by the Indian government to address the Big Tech threats, and how effective have these steps and their implementation been?
Dr Aneja: For the most part, Indian policymakers have been fairly reactive. We need to move from a reactive to an anticipatory strategy around Big Tech.
Three measures can be used to address the market power of Big Tech companies. First, competition policy needs to be updated to account for control over data and network effects. Platform neutrality should also be mandated, so that Big Tech platforms cannot unfairly discriminate against other businesses using their platforms. Platform interoperability can further enable consumer choice and reduce the weight of network effects. There has been some discussion on these issues, and a few legal battles, but we are yet to see concrete steps from competition regulators to reconsider competition issues for a digital economy. One of the motivations behind recent policy discussions around non-personal data is to rein in the power of Big Tech. But I don’t think this is the appropriate solution: it can create new domestic monopolies, stifle smaller players in the Indian market, and enable unwarranted access to data for the state and private players. The conversation seems driven by a set of ideological positions rather than an evidence-based diagnosis of the challenge of Big Tech.
With regard to social media, the conversation has become mired in political fighting between parties; the recent WSJ report on Facebook is a good example. Conversations around encryption and social media identity verification similarly misdiagnose the problem. We need to focus on a proper diagnosis: Big Tech platforms are not neutral publishers of content but curators and editors. Big Tech social media platforms should thus be held to the same ethical standards as legacy media. Algorithmic accountability mechanisms are also needed to identify, assess and penalise harmful algorithmic amplification.
AIM: Is data the key to addressing the threats Big Tech monopolies pose?
Dr Aneja: Yes and no. On the one hand, breaking up the data monopolies that Big Tech actors hold is very important, and there are concrete ways to do that. One of the things we need to do is start thinking of Big Tech companies as infrastructure providers for our digital economy; as infrastructure providers, they shouldn’t be allowed to own both the infrastructure and the services provided on it. But these monopolies have been enabled by pretty old-school techniques as well: heavy spending on lobbying and the acquisition of companies and talent also help build monopolies. Hence, not all of it is a data conversation.
AIM: With such a power imbalance between individuals or smaller communities and Big Tech, what kind of redressal infrastructure is needed to hold Big Tech accountable?
Dr Aneja: There are several mechanisms that can address this. To hold Big Tech accountable for its influence on markets, we need much better transparency around the type of data being collected and used, along with algorithmic transparency mechanisms. We should be able to audit the algorithms Big Tech companies use and make sure they are not giving preferential treatment to certain actors on their platforms. At the same time, we should insist on algorithms that do not amplify harmful content.
Secondly, on privacy, we have to move from a consent-based framework to an accountability framework. We need to start placing greater responsibility on those who collect data.
Thirdly, we cannot regulate Big Tech unless we have the institutional capacity to do so, which is one of India’s biggest challenges.
Lastly, Big Tech companies don’t have their key decision-makers sitting in India, and you need them here to hold them accountable. If you look at some of the big protests within Big Tech, like the Google employee walkouts, you don’t see that kind of presence in India, either in numbers or in decision-makers, to mount a pushback. The accountability issue is a very complicated one.
AIM: What are the factors to consider while forming data stewardship models in India?
Dr Aneja: Firstly, when we look for these solutions, we always assume that more data is good. But when we think about why certain policies fail, it is not always because we did not have enough data. We need to ask where we actually need data, and whether data is really the missing piece of the puzzle in many of these cases.
There is a lot of enthusiasm around data stewardship models or data trusts, and there is a lot of potential there. Still, most of it comes down to institutional design and institutional trust, and that trust cannot be a product of data alone. We have been told, for instance, that Aadhaar is working well, but there is enough evidence that it is also failing people. Hence, that trust is broken, and unless it is re-established, no stewardship model is going to fix it.
There is a lot of work to be done on how to design these institutions. If we do not look at the political economy of these data stewardship models and how they work in practice, we will miss opportunities for good design.
AIM: How can we incentivise Big Tech to invest in projects that focus on the social front? Would their approach change with the right incentives?
Dr Aneja: Why should they change? They are private companies, and profits are their bottom line. They say they are doing tech for social good, but that’s just a legitimating device that allows them to do whatever they want. Let companies be companies and make profits. It is not their job to do social good. They also don’t have the democratic legitimacy to do that.
These companies are not social enterprises; they are private companies. Hence, instead of checking whether these companies do social good or promote democracy, we have to ensure that they do not harm healthy markets, people, and democracy. This has to be the focus of regulation.