“Development by smart and well-resourced people behind closed doors can make great things, but development in the open by a huge community of people is just a more effective and equitable mode of development,” said Colin Raffel, a faculty researcher at Hugging Face and professor at UNC Chapel Hill, in an exclusive interview with Analytics India Magazine.
After finishing his PhD in 2016, Raffel joined Google Brain’s AI Residency Programme. He initially worked on models for sequences, and later on semi-supervised learning, transfer learning, and deep generative models. “At a broad level, I always aim to do work that impacts the real world and publicly release everything to make technology applicable and available to as many people as possible,” Raffel said.
Most of Raffel’s work involves making it possible to use valuable technologies with fewer resources and less expertise. The most recent example is Petals, which can run large language models like the one behind ChatGPT by pooling Internet users’ resources. With Petals, one can donate hardware power to handle a portion of a text-generation workload and team up with others to complete larger tasks.
He believes the AI community would see more rapid progress, and gain a better understanding of the capabilities and limitations of such models, if they were developed by a large community of stakeholders “instead of letting companies like OpenAI decide how the model should be updated and improved”. This resonates with the backdrop of recent studies of LLM limitations, many of which have remained unresolved since the release of GPT-2.
Raffel worries that little is known about whether OpenAI’s recent ChatGPT is continually improving. “There are relatively fewer examples of models that we are aware of that are getting better and better over time,” he added, noting that this is uncommon in the research community.
“ChatGPT (built on GPT-3.5) is exciting because it puts technology in the hands of everyone. In some ways it represents a meaningful and exciting step forward in terms of the capabilities of the best language models. But there were language models out there before that were either locked behind an API or just not released at all,” said Raffel, without naming them. Some of the popular alternatives include BLOOM, Gopher, and so on.
The Cat’s Out of The Bag
Stable Diffusion had a similar effect to ChatGPT in the sense that comparable technology had existed, to some extent, behind closed doors. However, as soon as it got into people’s hands, incredible applications and integrations were developed, along with several worrisome ones. Raffel said the research being done is far ahead of such releases, so, to researchers like him, making these models public makes only a small difference.
On the flip side, the exciting part is updating the public’s perception of what is possible, and watching what these models can achieve when put in the hands of all kinds of different people.
AGI Misses the Point
Raffel believes a generally intelligent system would cover most ML problems, but to him, AGI misses the point a little. There will always be tasks a computer can do much faster and better than a person, but that we wouldn’t necessarily want, trust, or need a computer to do.
“I don’t care so much about human-level intelligence across all tasks humans can do. However, I do care about figuring out which problems can’t be solved by humans and building a system to automate as many tasks or all of them as possible,” he opined.
Bringing the importance of data into the picture, Raffel said that, unfortunately, a lot of widely available data is not that useful for what researchers want to do. Better-tailored data could get researchers further in terms of what they want models to do.