YouTube introduces new rules for labelling AI-generated videos
YouTube has released a tool requiring creators to clearly label the parts of their content that are generated by AI.
Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users.
When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn’t do, alters footage of a real place or event, or depicts a realistic-looking scene that didn’t actually occur.
The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing. Online safety experts have raised alarms that the spread of AI-generated content could confuse and mislead users across the internet, especially ahead of elections in the United States and elsewhere in 2024.
YouTube creators will be required to identify when their videos contain AI-generated or otherwise manipulated content that appears realistic — so that YouTube can attach a label for viewers — and could face consequences if they repeatedly fail to add the disclosure.
The platform first announced the update in the fall as part of a larger rollout of new AI policies.