YouTube Joins The Irresponsible AI Club 

A change in long-standing policies is inevitable, but big tech does whatever floats its boat

Yesterday, video-sharing platform YouTube published a blog post stating, “All content uploaded to YouTube is subject to our Community Guidelines—regardless of how it’s generated—but we also know that AI will introduce new risks and will require new approaches. We’re in the early stages of our work, and will continue to evolve our approach as we learn more.”

YouTube has announced that in the coming months, the platform will introduce updates that inform viewers when the content they’re seeing is synthetic. “Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company added, which means the onus is on content creators instead of the platform.

The company has also taken it upon itself to remove, through its privacy request process, AI-generated content that impersonates an individual or music that mimics an artist’s voice or style.


Concentration of Power

In March, YouTube revised its advertising policy, allowing content creators to monetize material containing a moderate dose of profanity. The adjustment followed creators voicing discontent with YouTube’s stringent profanity policy, deeming it overly restrictive and a barrier to ad monetization.

Curiously, just a few months prior, in November 2022, YouTube had updated its advertiser-friendly content guidelines, explicitly barring the use of swear words in the initial seven seconds of a video. Videos commencing with explicit language, like the f-word, risked ineligibility for ad revenue. Even videos with moderate profanity throughout faced restrictions on ad earnings.

The policy hurt its earnings, leading YouTube to make an about-turn.

Before YouTube picked the ‘Responsible AI’ page out of big tech’s playbook, it had already made a deal with the devil.

The Irresponsible AI Club

YouTube is neither the first nor the only platform to issue this monotonous, sort-of-mandatory statement while changing its policies for its own benefit, all in the name of responsibility.

Google has also been carrying the ‘Bold and Responsible’ placard around for a while now. Yet, behind the curtain, it has been updating its privacy policies to suit its advertising business. To get back on top in the AI race, Google is pushing out new products without much oversight.

Even though Google says it values ethics, its actions suggest otherwise. Its ethics team has been in disarray for the past two years, and the company has recently been tangled up in a couple of legal battles.

OpenAI is dealing with a string of legal issues too. In January, Microsoft disbanded its team focused on ethics and society. Under new owner Elon Musk, Twitter cut more than half of its workforce, including its small ethical AI team. In March, Amazon-owned Twitch also let go of its ethical AI team.

Time and again, it has been pointed out that voluntary commitments from big tech companies offer nothing but an illusion of security. Even the 2023 State of AI report highlighted the shortage of researchers working on AI alignment across some of the biggest AI research labs.

A change in the long-standing policies of companies is inevitable. But over the past year, the companies that consider themselves leaders of the AI race, Google and OpenAI to name a few, have been doing whatever floats their boat. How long will this go on?

Tasmia Ansari
Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.
