What do big tech companies such as Microsoft, Google and OpenAI have in common? Their ability to masquerade under the label of ‘consciously working towards AI safety’. What’s concerning is that each of these companies has offered its own version of AI safety and spoken about it at great length, yet all of it amounts to exactly what it sounds like – just ‘claims’.
So ‘unclear AI safety measures’ – a factor that should have differentiated the big players in the AI chatbot race – has instead become their common binding thread. With the apparent slowdown in releasing more capable models, whether these companies will now genuinely focus on safety, or merely claim to, remains an open question.
Recently, in an interview with CBS, Sundar Pichai spoke about some of the safety measures Alphabet has in place. By including a ‘Google it’ button in Bard, the company believes it can address hallucinations, and by adding safety filters, it aims to control hate speech and bias.
Perspective API, a free API that uses machine learning to identify toxic text, can be applied while training chatbots to reduce toxic output. Google’s PaLM API, the generative AI support in Vertex AI, and Bard are said to include Perspective, or tools built on it, to screen either the user’s input or the model’s output.
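As a rough illustration of how such a filter might sit in a chatbot pipeline, the sketch below builds a Perspective API request body and gates a response on its toxicity score. The endpoint URL and request/response shapes follow Perspective’s public documentation; the 0.8 threshold and the mocked response are illustrative assumptions, and a real deployment would POST the request with an API key.

```python
# Sketch: screening chatbot output with a Perspective API TOXICITY score.
# Endpoint and payload shape per Perspective's public docs; the threshold
# and the mocked response below are illustrative assumptions.

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_request(text: str) -> dict:
    """Request body asking Perspective to score TOXICITY for `text`."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def is_safe(analysis: dict, threshold: float = 0.8) -> bool:
    """True if the TOXICITY summary score falls below `threshold`."""
    score = analysis["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score < threshold

# Mocked API response, so the example runs without a network call:
mock_response = {
    "attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.12}}}
}
print(is_safe(mock_response))  # prints True: low score passes the filter
```

In practice the same check could run twice – once on the user’s prompt before it reaches the model, and once on the model’s reply before it reaches the user – which matches the “input or output” screening described above.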
Pichai also spoke about aligning AI systems with human values, arguing that society, not the company alone, should figure this out, with the involvement of professionals such as social scientists, ethicists, and philosophers.
Conversely, he also spoke about the ambiguous nature of AI technology, referring to the ‘black box’ – the aspect of a model one “doesn’t fully understand”, where it is unclear why it responded in a certain way or “why it got it wrong”. Experts, however, attribute the black box to the non-intuitive decisions made by intermediate neurons on the way to a network’s final output. With models such as GPT-3 and LLaMA now widely studied, AI is no longer being treated as a complete mystery.
Given how reluctant big tech is to discuss any concrete measures for AI safety, the question lurking behind these tall claims is whether there are, in fact, steps anyone can take to address these issues.
In an exclusive interview with AIM, Stephen Wolfram, founder and CEO of Wolfram Research, pointed to the ambiguity around how one can formulate the general principles required for AIs to follow. He believes that a single set of principles governing the world in which AI operates would be disastrous. Wolfram also said that because ethics is a human construct, it gets complicated and cannot be automated.
While the companies may be competing with each other in the AI race, when it comes to the policies and rules governing AI safety in their models, each has its own set of measures – none of them concrete or convincing.
AI Ethics and Safety Drama Unfolds
Adding to the rising hype around AI ethics and safety, companies actively claim to focus on it. Yet, given the uncomfortable fact that Microsoft and Google have both recently fired their AI ethics teams, how they intend to work towards safety remains shrouded in mystery. Last month, Microsoft fired its AI ethics and society team – the very people who had played a critical role in upholding the company’s AI principles. The irony is that at the launch of Bing with ChatGPT integration, Satya Nadella spoke about “being clear-eyed about the unintended consequences of any new technology”, and the following month the team was gone.
Google, too, has courted controversy in the past for firing Timnit Gebru, one of the top leaders of its Ethical AI team, after she pointed out flaws in its AI systems. A string of other exits from the ethics team followed.
Previously, in an interview with AIM, Dan Hendrycks, who heads the Center for AI Safety, spoke about how big tech is not behaving responsibly and expressed concern that companies such as Google lack a safety team. “There are machine learning communities of thousands of people who work on ‘capabilities,’ but very few who work on ‘safety’,” said Hendrycks.
Given the uncertainty of AI technology and the dangers surrounding it, including data misuse and toxicity, companies claim they are still working towards improving their models. OpenAI has made more than three statements in the span of two weeks, all about how focused it is on bringing “safety” to its systems. None of those statements or interviews concretely described the active changes it would make to the system; they merely grazed the issue of safety. Sam Altman even announced that GPT-5 is not in the works, and that he would focus on the safety issues of the current GPT-4 model, which had been “totally left out”.