Google is Officially Killing the Internet with AI

While Google has long preached ‘helpful content written by people, for people, in search results’, its recent actions suggest otherwise

For almost three decades, Google has been diligently doling out information on demand to netizens about anything and everything under the sun, all in real time. The curiosity market has undergone a paradigm shift since the rise of AI tools that generate content for, and on behalf of, humans. While Google has long preached ‘helpful content written by people, for people, in search results’, its recent actions indicate otherwise.

Behind users’ backs, the company quietly rewrote its own rules to acknowledge the rise of AI-generated content on the internet. In the latest iteration of the company’s ‘Helpful Content Update’, the phrase “written by people” has been replaced by a statement that the search giant constantly monitors “content created for people” to rank sites on its search engine.

The linguistic pivot shows that the company does recognise the significant impact AI tools have had on content creation. Despite earlier declarations of its intention to distinguish between AI- and human-authored content, with this move the company appears to be contradicting its own stance on the AI-generated material now omnipresent on the internet.


Yesterday, 404 Media’s Emanuel Maiberg pointed out that the first picture that pops up when you search “tank man” on Google is no longer the iconic photograph of the unidentified Chinese man who stood in protest in front of the tanks leaving Tiananmen Square, but a fake, AI-generated selfie recreating the historic scene.

Not on the Same Page 

AI-powered tools hallucinating is hardly a novel concept; that these models make things up is inevitable. Since these chatbots rehash content found on the internet, the possibility of them churning out false information is very real. Hence, AI bots double-checking “facts” against their own previously generated content does not look like a good idea for the near future.

Google is one of the leading players fighting this phenomenon. At the I/O conference, the company’s executives announced plans to take significant steps to identify and contextualise AI content surfaced in Search. While measures like watermarking and embedded metadata aim to ensure transparency and enable users to differentiate between AI-generated and authentic images, they can only be applied to images, as there is no obvious way to watermark AI-generated text.
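To make the image-only limitation concrete, here is a minimal, illustrative sketch of metadata-based provenance labelling in plain Python. This is not Google’s actual implementation (which reportedly pairs metadata with imperceptible watermarks), and the `ai_generated` key is a made-up name; real provenance schemes use standardised IPTC or C2PA fields. PNG files are a sequence of typed chunks, so a label can ride along in a standard tEXt chunk:

```python
# Illustrative sketch only: label a PNG as AI-generated via a tEXt chunk.
# The "ai_generated" key is hypothetical; real schemes use IPTC/C2PA fields.
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialise one PNG chunk: 4-byte length, type, data, CRC-32."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A 1x1 greyscale PNG built from scratch (signature, IHDR, IDAT, IEND)."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x80"))  # filter byte + pixel
    iend = png_chunk(b"IEND", b"")
    return sig + ihdr + idat + iend

def tag_ai_generated(png: bytes) -> bytes:
    """Insert a provenance tEXt chunk right after the signature and IHDR."""
    text = png_chunk(b"tEXt", b"ai_generated\x00true")
    split = 8 + 25  # 8-byte signature + 25-byte IHDR chunk (4+4+13+4)
    return png[:split] + text + png[split:]

def is_tagged(png: bytes) -> bool:
    """Walk the chunk list and look for our provenance tEXt entry."""
    pos = 8  # skip the PNG signature
    while pos + 8 <= len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt" and png[pos + 8:pos + 8 + length] == b"ai_generated\x00true":
            return True
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return False
```

The obvious weakness, and the reason watermarking is pursued alongside metadata, is that such a tag survives only until someone re-encodes or screenshots the image; and plain text has no comparable container to carry a label at all.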

Owning its messiah complex, Google yesterday introduced a handful of notable features to its AI chatbot Bard, including a way to cross-reference its answers through the “Google it” button. The button, which previously let users explore topics related to Bard’s answer on Google, now evaluates whether Bard’s answers align with or contradict information found through Google Search.

Even more concerning is Bard’s newfound responsibility to fact-check its own AI-generated outputs against Google’s search results, a circular exercise that reduces, rather than improves, the chances of an error-free response.

Incoming: Internet Collapse

While Google is busy updating its “transparent” policies behind closed doors and allowing Search to be flooded with unfiltered AI data, future AI models, including Bard, will be trained on that very data. Hence, the risk of these models being trained on unfiltered, spam-ridden datasets only increases.

As the boundaries of AI replication blur, the looming question is: what happens when AI-generated content proliferates across the internet and becomes the primary source for AI model training? The ominous answer: an impending digital collapse.

Google recently issued a statement asserting its commitment to fortifying Search results against spam, emphasising that employing AI-generated content to manipulate Search rankings is a violation of its spam policies. But the latest updates tell a different tale.

Standards continue to evolve as AI proliferates. For now, Google appears steadfast in pushing AI advancements forward by every means possible, be it by updating policies or by adding features to Bard, the company’s sole contender against OpenAI’s ChatGPT.

The company seems to be grappling with the duality of AI-generated content, caught between championing its potential and safeguarding its search results. As Google navigates this slippery slope, its moves hold the potential to make or mar the AI-infused digital frontier of the future.

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
