
Oh ChatGPT, How Much Can You Really Understand?

ChatGPT’s parent, OpenAI, has published a study addressing the problems of LLMs
“Too dangerous to be released” – the phrase became the talk of the tech town in 2019 when the release of GPT-2 was announced. Cut to 2023, and OpenAI researchers are still investigating the emerging threats of large language models (LLMs) and potential mitigations. Four years after GPT-2 was made public, the problems with LLMs remain largely unresolved. Since its release at the end of November, users have put OpenAI’s advanced chatbot ChatGPT to the test in compelling ways.

Bias is an ongoing challenge in LLMs that researchers have been trying to address. ChatGPT reportedly wrote Python programs that judged a person’s capability based on their race, gender, and physical traits. Moreover, the model’s lack of context could prove dangerous when dealing with sensitive issues like sexual assault.

OpenAI Has Some Red Flags

The research laboratory has been in the news for several innovations over the past few years. It is a concentration of some of




Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.