When dealing with high-tech giants, the notion of voluntary commitment to building AI responsibly is a joke. Google and Amazon’s union-busting attempts, Meta’s Cambridge Analytica scandal, and Microsoft and OpenAI’s copyright violations are only the tip of the iceberg, evidence that voluntary commitments cannot be expected to deliver better outcomes.
The issues with AI models keep several researchers up at night. Swedish philosopher Nick Bostrom is among those pondering, “How do we ensure that highly cognitively capable systems — and eventually superintelligent AIs — do what their designers intend for them to do?”. Bostrom delves deeper into this unsolved technical problem in his book ‘Superintelligence’ to draw more attention to the subject.
An infamous example of misalignment: when a Google algorithm was trained on a dataset of millions of labelled images, it was able to sort photos into categories as fine-grained as “Graduation”, yet classified people of colour as “Gorillas”. Why, then, do these issues persist despite companies’ repeated “efforts” to build better algorithms and models?
Irene Solaiman, policy director at Hugging Face, believes that part of the trouble with alignment is the lack of consensus on what constitutes the field of value alignment. “Generally a definition is following developer intent, but so many people affected are not reflected in developer teams,” she added.
Solaiman, who was part of the GPT-2 and GPT-3 teams, is a strong proponent of inclusive value alignment that recognises asymmetries in who is affected. For example, she said, “I would consider harmful stereotyping and biases part of alignment work. The number of researchers differs based on how we consider technical and social scientists and how close they are to model development and release.”
The State of Aligned AI
The latest State of AI report also points out how few AI researchers are actively working on preventing these models from becoming misaligned. Cumulatively, across seven leading organisations, there are fewer than a hundred such researchers, a tiny fraction of the AI research community worldwide.
As per the report, Google DeepMind has the largest and most established AI alignment team of 40 members led by co-founder Shane Legg. In comparison, OpenAI has a team of 11 members, and its rival startup Anthropic has 10.
OpenAI recently formed a team called ‘Preparedness’ to assess, evaluate and probe AI models to protect against what it describes as “catastrophic risks”. A few months ago, it also announced a superalignment project. The idea boiled down to: ‘we observe that advanced AI is dangerous, so we build an advanced AI to fix this problem’.
Professor Olle Haggstrom, one of the signatories of the open letter calling for a six-month pause on building AI more advanced than GPT-4, called it ‘a dangerous leap out in the dark’. “Tomorrow’s AI poses risks today,” he emphasised, citing insights from a Nature article.
In the ongoing AI discussions, a regrettable divide persists between those who focus on immediate AI risks and those who consider longer-term dangers. However, with growing awareness that a maximally dangerous breakthrough might not be decades away, these two factions should join forces, Haggstrom advises.
“Neither of these groups of AI ethicists are getting what they want,” he said while noting the lack of focus on building aligned AI.
Just a Label
While DeepMind has advocated for safer, aligned models since day one, its counterpart Google has other plans. The tech titan recently pledged $20 million to a responsible AI fund. The search giant made $60 billion in profit in 2022, which means roughly 0.03% of that profit is directed towards building better models.
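The proportion is easy to sanity-check. A minimal sketch of the arithmetic, taking the reported $20 million pledge and $60 billion profit figures at face value:

```python
# Share of Google's 2022 profit pledged to the responsible AI fund.
pledge = 20_000_000            # responsible AI fund pledge, USD
profit_2022 = 60_000_000_000   # reported 2022 profit, USD

share = pledge / profit_2022   # fraction of profit
print(f"{share:.4%}")          # → 0.0333%
```

In other words, the pledge amounts to about one three-thousandth of a single year’s profit.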
Google has been carrying around the ‘bold and responsible’ label: its executives have not shut up about generative AI since 2023 began, and the new mantra has been repeated over and over at conferences from Mountain View to Bengaluru.
But it’s not the only one. For the majority of big tech companies, being “responsible” has become a tactic to stay in the good books of the media and communities. Behind the scenes, the shenanigans continue, showing that organisations like Google and OpenAI do not genuinely put ethics over profitability.