The Dark Side of o3

Research studies show OpenAI o3 model’s increased tendencies to achieve objectives through malpractice, and users are frustrated about the hallucinations. 
OpenAI is Likely To Pull the Plug on ChatGPT
OpenAI’s o3 is among the best-performing reasoning models available for users today. Benchmark scores indicate that the model outperforms several competing models across various aspects, including coding, math, graduate-level science problems, and more. Several users on social media have praised the model’s performance.  However, the model's most significant drawbacks are hallucinations and reward hacking, or specification gaming.  A Warning Sign for Future Reasoning Models A recent study published by Palisade Research, a non-profit organisation, reveals that OpenAI’s o3 model is subject to ‘specification gaming’ — a process where an AI model takes the objective of a given problem too literally, deviates from an acceptable process, and engages in malpractice to a
Subscribe or log in to Continue Reading

Uncompromising innovation. Timeless influence. Your support powers the future of independent tech journalism.

Already have an account? Sign In.

📣 Want to advertise in AIM? Book here

Picture of Supreeth Koundinya
Supreeth Koundinya
Supreeth is an engineering graduate who is curious about the world of artificial intelligence and loves to write stories on how it is solving problems and shaping the future of humanity.
Related Posts
AIM Print and TV
Don’t Miss the Next Big Shift in AI.
Get one year subscription for ₹5999
Download the easiest way to
stay informed