YouTube has announced that content creators who wish to earn revenue on its platform must now disclose any use of generative AI in their videos. The move is aimed at curbing the spread of misinformation, such as deepfakes.
Under the new guidelines, starting next month, YouTubers must indicate whether their videos contain AI-generated imagery or any other form of AI-manipulated content.
Failure to comply with this disclosure requirement could lead to demonetization, and repeated offenses may result in more severe consequences, including video removal, account suspension, or expulsion from the YouTube Partner Program.
YouTube’s decision underscores the potential for AI to be misused in creating content that can deceive viewers, such as making it appear as though someone said or did something they never actually did.
To address this, videos containing generative AI content will also carry a label reading: “Altered or synthetic content. Sounds or visuals were altered or generated digitally.”
The platform has clarified that even content created using YouTube’s own generative AI products and features will be subject to this labeling.
YouTube has gone a step further by allowing users and music partners to request the removal of AI-generated content they consider inaccurate, or that simulates an identifiable individual’s likeness or voice.
This policy revision reflects YouTube’s commitment to maintaining transparency and trust within its community while navigating the challenges posed by emerging AI technologies.
As the digital landscape continues to evolve, YouTube’s proactive measures aim to ensure that its platform remains a space for authentic and reliable content creation.