Truepic Blog
Unpacking Content Provenance in the AI Elections Accord

Last week, Truepic signed the AI Elections Accord alongside 19 other tech companies, including Microsoft, Google, Meta, Adobe, and TikTok. I wanted to take a moment to unpack its significance and why widespread tech adoption of content provenance is highlighted as one of the seven core goals in the Accord.

This year, billions of people will be exercising their right to vote in elections worldwide. At the same time, deceptive AI-generated content poses a notable risk: bad actors are using synthetic media to mimic public figures and distort reality online to mislead or sway voters. We’ve already seen this dynamic play out with synthetic videos of political candidates in the US and abroad. Without greater transparency online about what is synthetic and what is not, we will also likely see the Liar’s Dividend, in which genuine content is dismissed as fake, used to undermine the credibility of all media.

To mitigate these challenges, content provenance is an essential piece of the puzzle, helping to reduce the ambiguity around what is authentic and what is not. Content provenance can tell us how something was created (generated by AI, captured on a smartphone, etc.) and what significant changes (like AI erasing or cropping) it has undergone since creation.

C2PA Content Credentials are content provenance in action, and in the last several weeks, we’ve seen growing support for them across the tech ecosystem: Google joined Truepic, Microsoft, Adobe, and others on the C2PA’s steering committee, and OpenAI integrated Content Credentials into DALL·E 3 outputs and announced it will also integrate C2PA for Sora (should the new text-to-video model be made publicly available). These are notable strides for C2PA Content Credentials and toward a more transparent internet.

As a founding member and leading implementer of C2PA, Truepic powers the most secure, robust implementations of C2PA Content Credentials for diverse enterprise partners around the world, such as Microsoft, Qualcomm, and Hugging Face, as well as social impact organizations like Ballotpedia that work to secure elections. Our technology works for both disclosing synthetic content and safeguarding authentic content. C2PA Content Credentials provide a tamper-evident signal that lets tech platforms and viewers know whether what they are looking at is AI-generated or a real photo or video. For example, viewers can see if an image of a political candidate was genuinely captured by a camera or generated by AI.
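To make "tamper-evident" concrete, here is a minimal conceptual sketch of the idea behind a provenance manifest: a claim about how content was created is bound to the exact bytes via a cryptographic hash and then signed, so any edit to the pixels or the claim is detectable. This is an illustration only, not the actual C2PA specification, which uses certificate-based signatures and a standardized embedded manifest format; the key, field names, and structure below are invented for the example.

```python
import hashlib
import hmac
import json

# Stand-in for a real certificate-based signing key (assumption for this sketch).
SIGNING_KEY = b"demo-signing-key"

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to the exact content via a hash, then sign it."""
    claim = {
        "generator": generator,  # e.g. "smartphone camera" or "AI model"
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the content or the claim breaks the hash or the signature."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

image = b"demo pixel data"
manifest = make_manifest(image, "smartphone camera")
print(verify_manifest(image, manifest))           # True: untouched content
print(verify_manifest(image + b"x", manifest))    # False: edit is flagged
```

The design point is that the signal is tamper-evident rather than tamper-proof: a manifest can be stripped or an image re-edited, but it cannot be silently altered while still verifying.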

The momentum we’ve seen across the tech ecosystem over the last several weeks is encouraging, but we must sustain it. In a digital age where many of our decisions are based on digital content, transparency about the source and authenticity of images, videos, and audio clips is paramount.
