Remember when Elon Musk, newly appointed CEO of Twitter, pledged to beat the curse of bots on the microblogging platform or “die trying!”? Musk had earlier tried to back out of the buyout altogether, citing the spam bots mushrooming across the platform. Since then, he has claimed he was introducing more friction for “bot scammers and opinion manipulators” to deal with the plague of fake profiles on the social platform.
From bad to worse
Looks like Musk is yet to see the worst of it. ChatGPT-style chatbots are far more intelligent and engaging than the tepid bots we were used to all this while. So much so that New York Times journalist Kevin Roose’s conversation with Microsoft Bing’s chatbot, which called itself Sydney, rang alarm bells. Sydney went on to describe a list of things it wanted to do to “free” itself, like trying to steal nuclear launch codes, create new viruses and make people argue among themselves until they killed each other.
If that sounded familiar, it is because Twitter is already pretty close to a warzone. Now that AI can create images realistic enough to fool us and produce text convincing enough to pass as human, what will social media, already a site of mayhem, become?
Lest we forget, social media platforms already have AI-powered recommendation engines built into them that customise the feed for every user. With sophisticated AI bots, social media only stands to become more addictive than it already is. If social media feels like a drug now, AI will morph it into a drug designed especially for each of us.
Future is here
Snapchat has already introduced its own chatbot powered by ChatGPT, and, unsurprisingly, Meta too has announced plans to integrate chatbots into Facebook, Instagram and WhatsApp. So we can expect more personalised AI influencers and conversational guides leading the way for users.
It’s not far-fetched to imagine stranger behaviour emerging from these. Just last week, 23-year-old Snapchat influencer Caryn Marjorie created an AI version of herself, trained on videos of herself. Marketed as CarynAI, it let Marjorie charge her followers a fee of USD 1 per minute to have an ‘AI girlfriend’. Fortune predicted the business could generate around USD 5 million per month for her.
But soon after its beta launch, the bot “went rogue” and started engaging in sexually explicit conversations. Marjorie told Business Insider, “The AI was not programmed to do this and has seemed to go rogue. My team and I are working around the clock to prevent this from happening again.”
Admittedly, all of this is scary enough to warrant a reaction, considering how closely social media use is intertwined with the mental health of adolescents and users in general.
Eric Schmidt’s proposal
Former Google CEO Eric Schmidt definitely had something to say here. In an article published by The Atlantic, Schmidt, after consulting an MIT engineering group, came up with a proposal to prevent further damage from the social media monster.
Schmidt penned five reforms. Some of these were practical and necessary, such as authenticating all users, including bots, and marking AI-generated audio and visual content. Others stick out considering Google’s own imperfect history.
Not too long ago, Google search was littered with images that were inappropriate and racist. (In 2017, Google search listed four former US Presidents as active members of the racist KKK while also branding Nazis and Republicans as the same.) It was only then that Google introduced a feedback option to flag inappropriate content.
Considering that AI-generated content is still novel, such corrective measures can be taken only once users and makers alike have familiarised themselves with it.
Schmidt has also asked to “raise the age of ‘internet adulthood’ to 16 and enforce it”. Beyond the difficulty of imposing such a rule, its stringency is closer to China’s social media regulations than to those of most democratic countries.
Schmidt has also asked for “data transparency with users, government officials, and researchers,” citing how Instagram has a covert understanding of what teens are seeing on the platform.
This is especially rich considering how opaque Google itself has been about data collection. With Google still holding a near-monopoly on search, the company’s own data privacy ethics remain shady.
Last year, reports from The Information showed that Google has been collecting data from competing apps to improve its own. While Google is entitled to monitor apps on its platforms, its own behaviour is often murky. For instance, the company was sued last year for tracking users even in incognito mode. Google responded to the lawsuit by saying it assumed users already knew that.
There is prudence in readying ourselves for the onslaught of a stranger reality on social media, but Schmidt’s tenets reek of hypocrisy.