Timnit Gebru has launched the Distributed Artificial Intelligence Research (DAIR) Institute, an independent, community-driven organisation that aims to counter Big Tech’s influence on the research and development of AI. Gebru has long spoken about big tech’s excessive (and often unregulated) sway over the AI landscape; she hopes to create an independent space where researchers from varied backgrounds can come together and set an agenda for AI research that is rooted in their communities and experiences.
Gebru is one of the most respected voices in the field of AI ethics. Last year, Gebru, who was then co-lead of Google’s Ethical AI team, was ousted from the company over a paper she had co-authored on large language models and their adverse effects.
“AI needs to be brought back down to earth. It has been elevated to a superhuman level, which leads us to believe it is inevitable and beyond our control. When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity,” said Gebru.
The new institute is currently a project of Code for Science and Society and has received $3 million in funding from the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Kapor Center and the Open Society Foundations, among others. With DAIR, funders and founders hope to build a field of public interest technology — harnessing the power of emerging technologies for the public good — and, in doing so, advance the movement towards inclusive and equitable technology. DAIR eventually plans to establish itself as a standalone non-profit organisation.
DAIR will develop use cases for AI that are unlikely to emerge anywhere else (read: big tech companies), hoping to inspire others to take the technology in a new direction. One of the institute’s first projects is a public dataset of aerial imagery of South Africa, used to examine whether and how apartheid is still etched into land use. A preliminary analysis found that most of the vacant land developed between 2011 and 2017 in a densely populated region once restricted to non-white and poor people has been converted into wealthy residential areas. DAIR will soon publish a paper on this project, which will mark its debut in the academic AI research community at the NeurIPS conference.
Need for Independent Research in AI
Researchers like Gebru have been calling for AI research to be freed from the clutches of big tech. These companies exert enormous influence and power over the field, since AI underpins some of their most popular products, such as Google’s search engine and Amazon’s Alexa. To that end, they routinely publish influential research papers, fund important conferences, build data centres for large-scale AI research, and hire top researchers in the field. Studies have shown that the majority of tenure-track faculty at four top universities have received backing from big tech companies.
A 2019 report revealed that Google had poured more than $250 million into academia since 2005. Similarly, Samsung pumped $1.5 billion into Korean research institutions through a funding programme launched in 2013. Funding from these large conglomerates relieves researchers of financial burdens and can also serve as a pathway to a full-fledged industry career.
But all this comes at a cost. Such arrangements create serious conflicts of interest, and the funded research projects must usually relate to the sponsoring company’s business interests.