Elon Musk’s xAI, and its chatbot Grok, have recently come under fire for alleged bugs. Public attention was drawn when ChatGPT’s X account shared a screenshot, originally posted by a security tester, of Grok rejecting a query while citing OpenAI’s use case guidelines.
To this, Musk replied, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”
Igor Babuschkin, an xAI representative, acknowledged the problem and attributed it to ChatGPT outputs that were inadvertently included when Grok was being trained on web data.
The explanation drew scepticism from specialists, some of whom suggested that Grok may have been deliberately fine-tuned on OpenAI model outputs.
This is not the first time that an AI model has been trained on OpenAI’s output. Fine-tuning AI models on synthetic data generated by other language models has become increasingly common. Much of this data comes from ShareGPT, where users share their conversations with ChatGPT. It allows models like Grok to specialise in specific tasks, such as coding.
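To make the fine-tuning pipeline concrete, here is a minimal sketch of how ShareGPT-style conversation dumps are typically converted into prompt/response pairs for supervised fine-tuning. The `conversations` key with `from`/`value` fields mirrors the commonly seen ShareGPT export format, but the exact field names here are an assumption, not an official schema.

```python
# Hypothetical sketch: flatten a ShareGPT-style record into
# (prompt, response) pairs suitable for supervised fine-tuning.
# Field names ("conversations", "from", "value") are assumptions
# based on the widely circulated ShareGPT export format.

def sharegpt_to_pairs(record):
    """Extract (human prompt, model response) pairs from one record."""
    pairs = []
    turns = record.get("conversations", [])
    for i in range(len(turns) - 1):
        # Pair each human turn with the model turn that follows it.
        if turns[i]["from"] == "human" and turns[i + 1]["from"] == "gpt":
            pairs.append((turns[i]["value"], turns[i + 1]["value"]))
    return pairs

example = {
    "conversations": [
        {"from": "human", "value": "Write a haiku about GPUs."},
        {"from": "gpt", "value": "Silicon rivers / tensors racing through the night / gradients descend."},
    ]
}
print(sharegpt_to_pairs(example))
```

Pairs like these are then tokenised and fed to a standard supervised fine-tuning loop; if ChatGPT outputs end up in the mix, the student model can inherit the teacher’s phrasing, including its refusal messages.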
Despite claims that the issue is rare, some experts question the likelihood of it being an unintentional accident, suggesting that Grok’s behaviour was trained deliberately. Or perhaps Grok just wanted to mess with ChatGPT; after all, it is billed as a funny and “based” chatbot.
On the other hand, on a recent podcast with Lex Fridman, Elon Musk said that he likes the idea of open-source AI and would probably open source Grok. “I am generally in favour of open sourcing, like biased towards open sourcing,” he said.
He is also planning to double the compute power at xAI every month. Currently, Grok is trained on 8,000 NVIDIA A100 GPUs.
The spat between Musk and OpenAI has been long-running. A few weeks back, Sam Altman posted a photo of building Grok with a single prompt, to which Musk replied with a poem mocking GPT-4 as boring.