
Meta Needs You in Its Generative AI Gambit

Meta is quietly experimenting with deliberative democratic programs involving thousands of people around the world - but why?


Meta wants your help with its generative AI initiatives and is going all out with community programs to get it. In November last year, Meta announced that it would run Community Forums as a way to help the company make decisions about its technologies. Allowing a diverse group of people to discuss issues and offer perspectives and recommendations, Meta believes, would ‘improve the quality of governance’. Meta’s focus at the time was the metaverse.

Last month, Meta and its collaborator Stanford University released the results of the first global deliberative poll, which involved 6,300 people from 32 countries and nine regions around the world. The participants spent hours in conversation in online group sessions and interacted with non-Meta experts about the issues under discussion. The topic: moderation and monitoring systems for bullying and harassment in the metaverse. After months of experimentation, ironically, the metaverse is no longer relevant. However, 82% of participants recommended that Meta use the same deliberative democracy format for future decisions, and Meta has decided to follow a similar process for its generative AI technology.

Humans in the Loop 

Quite literally, keeping people in the loop for decision-making is Meta’s new model. Last month, the company launched a Community Forum on Generative AI with the goal of gathering feedback on what people would ‘want to see reflected in new AI technologies’. Meta says it wants to incorporate input from the public and from experts into product and policy decisions around generative AI, and it claims to be actively working with academics, researchers and community leaders. But why the push?

Having faced plenty of flak in the past for capturing user information and breaching data privacy on its social media platforms (Facebook and Instagram), Mark Zuckerberg is probably pulling a reverse move by appearing to hand people control over formulating the next step.

Meta has also been a founding member of Partnership on AI, a non-profit community, since 2016, where it works with industry experts, organisations, media and others to address concerns about the future of AI and to formulate the ‘right ethical boundaries’. Ironically, Meta’s recently launched microblogging platform, Threads, forces users to grant access to personal information on their phones in order to use the app.

Not The Best Approach 

The human-feedback system that Meta is experimenting with does come with limitations. How much of people’s feedback is sound, and how much of it can actually be implemented in the system, is questionable. In the pilot community program for mitigating bullying in the metaverse, participants were not aligned on punishing users involved in repeated bullying and harassment. For instance, removing members-only spaces that saw repeated bullying had only 43% support.

Furthermore, the participants had no interaction with the decision makers, i.e. Meta employees, which made the process feel like a simple survey or a data-gathering experiment rather than a democratic exercise.

In Others I Trust

With countless conversations surfacing around AI safety guidelines and the need for universal regulatory policies, every major tech company claims to be working towards them. Meta is no exception in following another tech company’s lead: OpenAI. Meta is trying hard to catch up with OpenAI while speeding ahead in the open-source LLM race. Tracing the reigning chatbot maker’s path, Meta appears to be adopting the same democratic decision-making approach that OpenAI is pursuing.

OpenAI announced $1 million in grants to fund experiments in democratising AI rules and addressing AI safety mishaps. The company also announced another million for its cybersecurity grant program, for the creation and advancement of AI-powered cybersecurity tools and technologies. In other words, a program in which people can help create or fix the company’s security framework.

While the move can be critically viewed as a tactic to keep the government from interfering with the company’s plans for AI regulation, or even as a way to seem like a responsible company working ‘for the people’, big tech is slowly adopting the democratic route.

Recently, Anthropic spoke about how it would improve constitutional AI by talking to ordinary people and not just experts. DeepMind, too, recently released a paper that investigates how international institutions can help manage and mitigate AI risks. In one of the complementary models the company proposed, an AI safety project would bring together researchers and engineers to access advanced AI models for research.

Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity for all things startups, businesses, and AI technologies ensures that there’s always a fresh and insightful perspective to her reporting.