With the rising backlash against the rapid growth of AI models, OpenAI's elaborate blog post on AI safety, released yesterday, couldn't have been timed any better. However, the company's statement only scratches the surface without addressing the real problems emerging with ChatGPT. The company sticks to its principle of "building increasingly safe AI systems" without actually explaining the "how".
This is bitterly disappointing, vacuous, PR window-dressing. You don't even mention the existential risks from AI that are the central concern of many citizens, technologists, AI researchers, & AI industry leaders, including your own CEO @sama. @OpenAI is betraying its…
— Geoffrey Miller (@primalpoly) April 5, 2023
Contradictory Claims?
There is an elaborate explanation of how OpenAI is working on safety. In the statement, OpenAI said that after GPT-4 was trained, the company spent over six months working across the organisation to make it "safer and more aligned" before releasing it. Interestingly, when asked about AI alignment on Lex Fridman's podcast last month, Sam Altman admitted that they have "not yet discovered a way to align a super powerful system".
Here's OpenAI gaslighting about AI safety today, continuing a scary pattern. It's a post called "Our approach to AI safety" that shows no acknowledgement that many observers are worried they're elevating humanity's near-term existential risk. https://t.co/ptUzzdNY9h
— Liron Shapira (@liron) April 5, 2023
Regulations
OpenAI supports being subjected to "rigorous safety evaluations" and says it will engage with governments to arrive at the "best form" of regulation. With Sam Altman's world tour, planned for the coming months, to engage with users and probably fraternise with various government officials, the announcement can also be construed as a peace offering. The speculation gains weight when one considers how Italy banned ChatGPT over an alleged breach of Europe's privacy regulations.
What will change?
OpenAI claims that it uses data to make its models helpful "for people", not for selling services, advertising, or building profiles. Its LLMs are trained on publicly available information (data up to 2021). However, it remains unclear what the company plans to do with the new data that users feed into its chatbot, including sensitive and confidential information.
Whether these claims push organisations to use OpenAI's products, such as ChatGPT, more widely remains to be seen. The company's privacy stance has been under scrutiny of late. After the recent goof-up at Samsung, where employees entered confidential data such as program source code and internal meeting notes into ChatGPT, the risk of what becomes of such data is unknown. The string of incidents impelled Samsung Semiconductor to develop an in-house AI tool for internal use to steer clear of any future sensitive-data leaks.
OpenAI talks about improving factual accuracy by building on user feedback. GPT-4 is said to be 40% more likely to produce factual responses than GPT-3.5. However, hallucinations and biases, the most common problems with chatbots, have not been fully addressed in the statement. There is only a mention that the company has "much more work" to do to reduce hallucinations and to educate the public about the limitations of the tool.
Incorrect information generated by the chatbot is slowly landing OpenAI in trouble. Australian mayor Brian Hood is preparing to file a defamation case against OpenAI over ChatGPT's false claims that he served prison time for bribery.
this whole thread is worth reading. and it's chilling. the complete pollution of the information ecosphere that I have been warning about has begun. https://t.co/bz9Jsq4fjU
— Gary Marcus (@GaryMarcus) April 5, 2023
Gary Marcus, in a podcast interview with Meghan McCarty Carino, said, "ChatGPT is a very unpredictable beast", comparing it to a bull in a china shop: powerful, reckless, and untameable.
Children’s Protection?
With privacy concerns rising, children are not left out either. OpenAI announced measures to counter the exploitation and abuse of children, claiming to have taken important steps to reduce content that harms them. It further mentions that efforts are being taken to "minimise the potential for our models to generate content that harms children". However, the company has not clarified which "models" will minimise this abuse, or how.
OpenAI has implemented rules whereby, if a user uploads any image related to child sexual abuse, the platform will block it and report it to the relevant authorities. However, it remains to be seen whether the system can be manipulated to exploit the image functionality, similar to ChatGPT's alter ego, DAN.
OpenAI requires users to be at least 18 years old, or above 13 with parental approval, but the verification options to enforce this are still a work in progress.
OpenAI did not address the bigger concerns of societal and existential risks. Publishing a statement on AI safety without delving into how it intends to achieve these measures reflects poorly on the company. Overall, with gaps and nothing spectacularly new, OpenAI's statement on AI safety reads much like a washout.