Every time a new technology emerges, some people shower it with love, some hate it, and some style themselves as 'experts'. In 2010, when big data became a thing, the buzzword drew valid scepticism but also invalid cynicism.
Drawing a similar comparison, Vin Vashishta, founder of V Squared, who also hosts data science lessons on his YouTube channel and website, recently posted on LinkedIn about the copycats, snake oil sellers, and doomsayers of the data science field: people looking to make a quick buck by copying the real leaders, falsely claiming to use AI, or downplaying the field and calling it fraudulent.
“If they take over, data science will experience what NFTs and crypto did last year. What happens next is up to us,” said Vashishta, emphasising that it is critical to call out the copycats, point out the snake oil, and deny the doomsayers.
Vashishta’s analogy also explains a lot about the recent rise of ChatGPT clones. Though ChatGPT is publicly available to use, it has not been open-sourced by OpenAI. GPT-3.5, the model family the conversational AI is built on, is available through an API for developers to build their own chatbots, but those chatbots are not anything close to ChatGPT.
Even then, many developers have found ways to put ChatGPT in their menu bars or rig it up as an AI-assistant tool on WhatsApp. The latter turned out to be a gimmick: the developer, Daniel Gross, also the co-founder of ‘Cue’, was using a Go library to access WhatsApp on his phone while running ChatGPT in his browser.
The list of copycats goes on. Tech influencer Varun Mayya released ‘God In A Box’, claiming that it runs ChatGPT on WhatsApp. In truth, he built a WhatsApp chatbot using GPT-3.5, the same underlying model that powers ChatGPT. A product built on the architecture offered by OpenAI is not thereby ChatGPT, the product developed by OpenAI.
Bizarre as this is, Mayya even charges a fee of $9 per month to run his programme on WhatsApp. But ask the two chatbots similar questions and the results differ, with the WhatsApp ‘version’ falling short in several respects.
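The gap between such a ‘clone’ and the real product is easy to see in code. Below is an illustrative sketch, not any actual product’s code: the function and variable names are hypothetical, while the endpoint and payload fields match OpenAI’s public completions API as it stood at the time. A GPT-3.5 ‘clone’ boils down to flattening the chat into a prompt and making one API call, with none of ChatGPT’s conversation handling, safety tuning, or product layer.

```python
# Hypothetical sketch of what a GPT-3.5-based "ChatGPT clone" does under
# the hood. Endpoint and field names follow OpenAI's public completions
# API; build_completion_request is an illustrative name, not a real API.

OPENAI_COMPLETIONS_URL = "https://api.openai.com/v1/completions"

def build_completion_request(history, user_message, model="text-davinci-003"):
    """Flatten a chat history into one prompt and build the JSON payload
    a clone would POST to OpenAI. ChatGPT itself does far more than this:
    moderation, conversation state, RLHF-tuned models, and so on."""
    prompt = ""
    for speaker, text in history:
        prompt += f"{speaker}: {text}\n"
    prompt += f"User: {user_message}\nAssistant:"
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": 256,
        "temperature": 0.7,
        "stop": ["User:"],  # stop the model from writing the user's next turn
    }

payload = build_completion_request(
    [("User", "Hi"), ("Assistant", "Hello! How can I help?")],
    "Write a haiku about data science.",
)
# A real clone would now send `payload` with its owner's API key, e.g.:
# requests.post(OPENAI_COMPLETIONS_URL, json=payload,
#               headers={"Authorization": f"Bearer {API_KEY}"})
print(payload["prompt"])
```

Everything distinctive about ChatGPT lives on OpenAI’s side of that request, which is why the wrappers fall short of the original.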
Recently, another ChatGPT imposter app, called “ChatGPT Chat GPT AI With GPT-3”, which somehow looked like the real deal to many Apple users, also gained popularity. It is currently the second most popular productivity app on the App Store and charges a $7.99 weekly or $49.99 annual subscription.
Snake Oil Sellers
Using ChatGPT or building products on OpenAI’s architecture is one thing; claiming to be an expert in it is another. A growing number of people, including professors, CEOs, and advisors, now claim to be ‘ChatGPT consultants’ and even list it on their LinkedIn profiles.
These armchair experts claim to “leverage the power of NLP and AI for powering operations of businesses and provide customised solutions for automating processes”. The snake oil seller analogy fits perfectly here too, with people claiming to use ChatGPT to solve business problems and integrate it into websites.
That is not all: a search on ‘Fiverr’, the online marketplace for finding and hiring freelancers, also returns ‘ChatGPT services’. Freelancers there post ads touting themselves as experts in GPT-3, ChatGPT, Jasper, and whatnot. While some make grand claims like linking GPT-3 to the customer’s website, others charge money to “open an account” on ChatGPT.
The Doomsayers
Bashing and banning ChatGPT on ethical grounds is another new trend. Recently, schools, colleges, and education boards have expressed apprehension about the technology being used in exams. This comes in the backdrop of the developer platform Stack Overflow banning chatbot-generated answers because they were frequently inaccurate.
Ethical concerns aside, the doomsayers of AI, who present themselves as experts, express deep hostility towards the likes of ChatGPT and other AI innovations. This astounding trend of dodgy ChatGPT and AI experts goes some way to explaining what real experts like Yann LeCun mean when they call them out. When Meta’s Galactica was infamously taken down, people began asking whether that is the fate of all large language models that hallucinate and produce inaccurate results.
LeCun explains that this is precisely why companies like Google and Meta are reluctant to make their products publicly available. AI sceptics seize on a genuine problem with the models and stretch it to the point of calling AI redundant and fraudulent. The same happened when a developer claimed that LaMDA was sentient, setting off the whole charade of ‘AI is taking over the world’ versus ‘AI is stupid’, and now the same is happening with ChatGPT.
But not releasing models is not as big a threat as the rise of fraudulent and misleading people in the field. Vashishta suggests that the focus, in the end, should be on minimising the noise from these three types of hypesters, promoting credible researchers, and solving real problems. “We can eliminate the space for these people who downplay the importance and growth of the field.”