
ChatGPT Craves Human Expertise

Looks like RLHF was not enough human interaction for OpenAI’s ChatGPT, which now craves more

OpenAI recently announced a program to fund experiments in democratising AI rules, promising grants worth $1 million to those who contribute best to solving its safety problems. While this may be the biggest offering by any tech company to ‘fix’ an ongoing crisis, one might wonder if it is the next tactic by the famously closed-door OpenAI to come across as a company handing more power to the people. In other words, you are to blame if something goes wrong with ChatGPT.

Thanks to ChatGPT, AI is now shrouded in calls for regulation, with the whole affair going as far as equating it to the threat of nuclear war, as tech leaders rush to sign a collaborative statement showing that everyone agrees on the AI threat. With the government stepping in to find ways to regulate AI through discussions and Senate hearings, and troubled parent Sam Altman agreeing to an agency for regulating AI, the next obvious step is to come up with a whole plan for it.

From releasing an elaborate blog on the governance of superintelligence a few days after the Senate hearing, to setting up democratisation plans for AI regulation, OpenAI is moving way ahead, probably even surpassing the government, to build a plan that would ultimately give the company unsurpassable power. ChatGPT is being portrayed as the ideal child, leaving the other AI kids in despair and driving their creators/parents to follow OpenAI’s path, for the greater good of humanity and AI-kind. This brings us to question whether this is the right path to take in the first place.

United, we rule

A couple of weeks ago, at an AI Forward event in San Francisco, OpenAI co-founder Greg Brockman spoke about how the company is working on a democratic approach to formulating AI regulations. He compared the approach to a Wikipedia-like model, where people with varied perspectives work together to reach a consensus on an entry. He also said the company does not want to write rules for everyone, and will instead pursue a ‘democratic decision-making’ process involving different kinds of people to shape the future of AI.

Unsurprisingly, within three days of the event, OpenAI officially announced the plan for ‘democratic inputs to AI.’    

OpenAI has cleverly passed the work of AI regulation on to the people, freeing up the company to focus on its next expansion plans and perhaps more advanced GPT models. By gathering feedback from a wide community, the company’s ‘open-source’ approach doubles as a way to collect user input and improve its systems. Considering that ChatGPT now allows users to block their data from being used to train the GPT model, the current democratisation plan will help recover some of that user feedback. The company also mentioned in its announcement that it wants to ‘learn from these experiments’ and use them for a more ‘global and ambitious process going forward.’

Distraction is the best medicine 

Looks like by democratising the process of formulating AI rules, the company is creating more chaos and distraction, and slowly washing its hands of future repercussions. To preempt future restrictions that may not be in its interest, OpenAI announced this grant to proactively come up with a plan that could later even be suggested to governments or other bodies as a framework. With certain countries and organisations banning the chatbot, a framework made by the people and a larger community in a way transfers ownership of the risks from the company to the wider audience. Either way, OpenAI will be unaffected and will emerge the winner.

Obsessed with humans

Looks like RLHF was not enough human interaction for OpenAI’s ChatGPT, and now the company craves more. With limited or no access to user data after releasing the option to turn off ‘chat history’, OpenAI continues to find ways to work without forgoing the user feedback it needs to further train and improve its model.


Vandana Nair

With a rare blend of engineering, MBA, and journalism degrees, Vandana Nair brings a unique combination of technical know-how, business acumen, and storytelling skills to the table. Her insatiable curiosity for all things startups, business, and AI technologies ensures there is always a fresh and insightful perspective to her reporting.
