On March 18, 2024, YouTube introduced a new way for creators to be upfront about the content they share, especially when it dips into the realm of artificial intelligence. The update requires video makers to check a box during the upload process, indicating whether their videos contain AI-generated or synthetic material that looks convincingly real.
This includes content where a person appears to say or do something they never actually did, manipulated footage of real events, or entirely fabricated scenarios made to look authentic.
YouTube's move to require creators to disclose AI-altered content follows its November 2023 announcement, which set the stage for more transparent content sharing.
The policy distinguishes between different types of content, applying stricter rules to music while offering more flexibility elsewhere. For example, AI-generated music that could potentially infringe on an artist's rights, such as a deepfake rendition of a Drake song, can be removed at the request of the artist's label.
Yet, for the average person featured in a deepfake, removal is a more involved process that may require submitting a privacy request form.
A Conversation About AI-Generated Content
The reliance on creators to label their content truthfully sparks a conversation about the effectiveness of the honor system in an era where AI-generated content is becoming increasingly sophisticated. YouTube has mentioned it's exploring tools to detect such content, though current technology struggles with accuracy.
Moreover, YouTube has positioned itself to add disclosure labels to videos when creators fail to do so themselves, particularly when the content could mislead viewers. The new requirement also comes just days after the EU adopted a landmark law governing AI.
The EU's Artificial Intelligence Act
Parallel to YouTube's policy, the European Union has taken a monumental step by adopting the Artificial Intelligence Act on March 13, 2024. The landmark legislation, approved by a majority in the European Parliament, aims to safeguard fundamental rights and foster innovation by setting standards for AI use.
The act outlines clear prohibitions against certain AI applications that pose threats to citizens' rights, like indiscriminate facial recognition and emotion tracking in schools or workplaces.
The EU's approach to AI regulation emphasizes protecting democracy, the rule of law, and personal freedoms while encouraging technological advancement. High-risk AI systems, such as those used in law enforcement or critical infrastructure, will now have to adhere to stringent requirements, including transparency, accuracy, and human oversight.
Additionally, the legislation introduces measures to support innovation, ensuring small and medium-sized enterprises have the resources to develop AI responsibly.
Is the US Next?
While the US still lags on state-by-state privacy law enactment, and the laws that do exist fall far short of the stringent consumer protections the EU's GDPR offers (nonprofits, for example, are exempt from California's CCPA), Pennsylvania is already considering regulations similar to the EU's AI Act.
As AI is increasingly integrated into our digital lives, and cybercriminals and bad actors use AI nefariously, initiatives like YouTube's content labeling and the EU's Artificial Intelligence Act represent critical steps toward balancing innovation with ethical considerations.
These developments highlight the importance of transparency and accountability in the digital age, ensuring that advancements in AI technology enhance, rather than undermine, our societal values.