TRUEPIC BLOG

The critical role of interoperable content transparency and provenance in AI policy

Dome of the US Capitol building with a Content Credential displaying "camera-captured."


Mounir Ibrahim

October 2024

In recent years, governments, industry, and civil society have increasingly focused on addressing the challenges posed by AI. A primary concern has been AI’s potential to accelerate the erosion of facts and truth, further undermining our information ecosystem. While early efforts were marked by uncertainty, the importance of transparency in online content has now become central to AI policy discussions. Key areas of focus include provenance (secure metadata), watermarking, and content authenticity, reflecting a more sophisticated legislative approach to AI. The Coalition for Content Provenance and Authenticity (C2PA) interoperable specification is expected to be pivotal in shaping best practices and future legislation.

AI-induced panic

The power of AI became evident when ChatGPT, Stability AI, and several text-to-image generator platforms were introduced to the public in 2022. This sea change in content creation technology triggered significant concerns among legislators and policymakers, who feared that society would struggle to distinguish between authentic and synthetic content. AI usage surged from approximately 7 million to over 100 million users worldwide within two years, raising additional concerns that we may be entering a post-fact era where deception and fraud could proliferate unchecked. 

Initial legislative responses were reactive and occasionally uninformed. For instance, countries such as Italy and China attempted to ban certain AI applications like ChatGPT to slow adoption, while others worked to adapt existing laws to address AI. These early efforts focused on broad, one-size-fits-all solutions, with some legislators demanding binary AI detection services that would immediately identify and remove fake or AI-generated content. However, given the sophistication of today's AI tools, these methods were quickly deemed impractical, as detecting AI-generated content in real time and at scale is probabilistic at best. As a result, the conversation has started to evolve toward more practical and scalable solutions.

Interoperable transparency policy for policymakers

Legislators are now embracing a shift that industry experts began to advocate for in 2021 – focusing on verifying what is authentic rather than trying to detect what is fake. This approach toward transparency and authenticity reflects a more pragmatic and tangible strategy for addressing the challenges posed by AI and image deception. The goal is not to suppress AI-generated content but to empower users with the information they need to differentiate between authentic and manipulated material. This enables content consumers to make more informed decisions about what they see and hear online, reducing the likelihood that hyper-realistic or fraudulent content will mislead them. The Partnership on AI (PAI) refers to this as "Disclosure," which has been identified as a best practice for AI but is useful for all digital content.

Provenance and watermarking: Essential tools for transparency

Provenance, also known as Content Credentials, uses cryptographically secured metadata to verify the origins of a piece of content. Content Credentials align with the C2PA open specification, ensuring the metadata remains intact as content moves across different platforms, websites, and devices, making it a necessity for modern internet transparency. Major online platforms such as LinkedIn, YouTube, Meta, and TikTok have all announced implementations of, or intent to implement, Content Credentials at various speeds and levels. For example, Truepic recently uploaded the first authentic video onto YouTube, which identified and labeled it as such – a powerful example of interoperability at play. For policymakers, this is key to tackling AI-related risks and preventing deceptive content online. The specification is under review by ISO technical committee TC 171/SC 2 and is anticipated to become a global standard by 2025, which will help accelerate its adoption.
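To make the idea of cryptographically secured metadata concrete, here is a deliberately simplified sketch. The real C2PA specification binds claims to content using certificate-based signatures (X.509 credentials and COSE signing), not the shared-key HMAC used below; the key, field names, and claims here are hypothetical stand-ins chosen only to show the core mechanism – any edit to the content or its metadata breaks the seal.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # hypothetical stand-in for a real signing certificate

def create_manifest(content: bytes, claims: dict) -> dict:
    """Bind claims (e.g. origin, capture device) to a hash of the content bytes."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the signature and content hash; tampering breaks either check."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and body["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"\x89PNG...raw image bytes..."  # placeholder content
m = create_manifest(photo, {"assertion": "camera-captured"})
print(verify_manifest(photo, m))         # True: content and metadata intact
print(verify_manifest(photo + b"x", m))  # False: edited content no longer matches
```

Because the check is purely cryptographic, any platform that implements the same open specification can verify the manifest – which is what makes interoperability, rather than any one vendor's decoder, the linchpin.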

First authentic video uploaded to YouTube with the Truepic Lens camera on October 15, 2024

Watermarking is another important approach, as it embeds invisible information into digital content, which can offer additional security and traceability. Watermarks are generally thought to be harder to remove than Content Credentials and can embed any information, including creation and origin details. However, watermarking on its own lacks interoperability, as proprietary decoders – often controlled by single entities – are required to access the embedded information. This limited access presents a challenge, as it could allow manipulation by bad actors while restricting broader transparency efforts. When combined with provenance, however, watermarking can enhance the overall robustness of digital content authenticity.
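The paragraph above describes watermarking as invisible information embedded in the content itself. Production watermarks use robust, often frequency-domain techniques designed to survive compression and editing; the toy sketch below instead uses simple least-significant-bit embedding on a made-up pixel list, purely to illustrate the two properties discussed: the mark is imperceptible (each pixel changes by at most one brightness level), and it is unreadable without a matching decoder.

```python
def embed_watermark(pixels: list[int], message: str) -> list[int]:
    """Hide each bit of the message in the least-significant bit of a pixel value."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this message")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return out

def extract_watermark(pixels: list[int], length: int) -> str:
    """Read the LSBs back and reassemble the hidden bytes (the 'decoder')."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(length)
    )
    return data.decode()

image = list(range(64))              # toy 8x8 grayscale image
marked = embed_watermark(image, "c2pa")
print(extract_watermark(marked, 4))  # prints "c2pa"
```

Note that extraction requires knowing the embedding scheme and message length – a stand-in for the proprietary decoders the paragraph describes. Pairing a watermark like this with an openly verifiable provenance manifest is what gives the combined approach its robustness.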

Despite their challenges, provenance and watermarking remain the most promising approaches for increasing transparency online and mitigating the downsides of AI. Misunderstandings persist among policymakers and media, who often use "watermarking" as a catch-all term for digital transparency. Without a clear distinction, legislation may inadvertently prioritize proprietary systems that undermine true transparency. For AI content verification to succeed, interoperability must be core, ensuring broad, platform-agnostic digital transparency.

A surge in transparency initiatives 

There has been a significant 75% increase¹ in the number of legislative and industry initiatives aimed at ensuring digital content authenticity through provenance, watermarking, and related measures. Such initiatives are not limited to any one region – they are being developed across 17 jurisdictions, including the United States, the European Union, China, India, Brazil, Australia, and numerous U.S. states. The growing momentum behind these efforts reflects a global consensus that transparency and authenticity in digital content are critical for combating the rise of misinformation, fraud, and AI-generated deception.

Key legislative actions and global initiatives

Several key events and initiatives have served as catalysts for the growing emphasis on transparency and provenance in AI regulation. Here are some of the highlights: 

U.S. federal level

In 2023, the White House AI Voluntary Commitments laid the groundwork for industry players to adopt responsible AI practices, including the use of provenance and other authenticity measures. In addition, the White House’s Executive Order (E.O.) on AI underscored the importance of interoperable standards for traceable and tamper-evident content provenance. The E.O. instructs federal agencies to “develop effective labeling and content provenance mechanisms so that Americans are able to determine when content is generated using AI and when it is not.”  The directive served as an important catalyst, encouraging tech-driven initiatives like the AI Election Accord, where major technology companies committed to supporting election security through voluntary guardrails and best practices. Federal agencies such as the National Institute of Standards and Technology (NIST) have been actively examining transparency mechanisms through various forums and have already issued a draft report on technology approaches to digital content transparency. 

In Congress, bipartisan efforts around transparency and authenticity have been introduced into the National Defense Authorization Act (NDAA) and other bills, such as the Deepfake Task Force Act. The Senate AI Insight Forum highlighted the importance of provenance throughout its thorough and multi-month examination of AI and its implications. The Bipartisan Senate AI Working Group, comprising Majority Leader Chuck Schumer (D-NY), Senator Mike Rounds (R-SD), Senator Martin Heinrich (D-NM), and Senator Todd Young (R-IN), recommended "developing legislation that incentivizes providers of software products using generative AI and hardware products such as cameras and microphones to provide content provenance information and to consider the need for legislation that requires or incentivizes online platforms to maintain access to that content provenance information."

Senate AI Insight Forum on Transparency & Explainability and Intellectual Property & Copyright, November 29, 2023

International level

On the international stage, the European Union's AI Act, particularly Article 50, advances the conversation around transparency by outlining requirements for AI-generated content. It emphasizes technologies that provide clear information about content origins, thereby fostering transparency in digital media. The act is notable in that it imposes significant fines and compliance requirements on AI companies and platforms. The UK's Online Safety Act (OSA) creates the framework by which transparency and authenticity will become foundational for future legislation in the UK. Australia's Voluntary AI Safety Standard and proposed mandatory guidelines specifically cite transparency and the use of emerging standards associated with the C2PA as a best practice: "for relevant AI systems, consider implementing or obtaining systems that comply with the Coalition for Content Provenance and Authenticity (C2PA) Technical Specification."

China's Generative AI law is wide-ranging, but Article 9 mandates transparency markers on AI outputs to help users distinguish between AI-generated and human-created content. Brazil, Colombia, Uruguay, and many other countries are implementing similar measures. The United Nations General Assembly passed a resolution urging member states to adopt transparency measures in their AI regulation efforts, signaling widespread global recognition of the issue.

U.S. state level

U.S. states are among the most forward-thinking and innovative legislative bodies working through how to enhance online transparency. One noteworthy example is California's AB 3211, a thoughtful piece of legislation that introduces interoperable standards to guide the development of the content authenticity industry. Though this bill was held in the state's legislature, it serves as a model for future content provenance initiatives, as it emphasizes the need for clear and enforceable standards that enable transparency across the internet. Further, California Governor Newsom signed at least three other laws to increase transparency in AI outputs and models, particularly for election-related content. In New York, lawmakers are considering bills that would apply interoperable content provenance, such as the C2PA specification, to AI-generated content, with the option to extend it to non-AI content throughout the state, including government use. Other states, such as Utah, have already passed legislation mandating the use of disclosure and provenance in online advertisements.

The path forward for AI and content transparency

Governments and organizations worldwide are beginning to understand that interoperable standards, like those proposed by the C2PA, are essential for creating a healthier information ecosystem. Through initiatives prioritizing provenance, watermarking, and transparency, we are moving toward a future where the authenticity of digital content can be trusted, mitigating the risks posed by AI-generated deception. With the continued collaboration of governments, industries, and multilateral organizations, the promise of a transparent and trustworthy digital ecosystem is within reach. In the coming years, whether through mandates or industry-led best practices, digital content must incorporate disclosures or nutrition labels to help us make more informed decisions. These disclosures must also be interoperable, able to flow from one website or platform to another and from one smartphone to the next. Without that critical feature, fragmented information ecosystems and the erosion of facts will prevail.

---
¹ Based on research performed by Global Counsel, September 2024
