Governmental Trends on Content Provenance

The Coalition for Content Provenance and Authenticity (C2PA) has helped raise awareness of the potential of digital content provenance as a counterweight to image and audio fabrication and synthesis. In 2021, governments around the world recognized this potential and began introducing and passing legislation on digital content provenance and its utility for increasing transparency online. We expect this trend to continue into 2022. Here are some notable developments:

  • United States: Senators Gary Peters (D-MI) and Rob Portman (R-OH) introduced the Deepfake Task Force Act of 2021 to establish the National Deepfake and Digital Provenance Task Force, which would explore how the development and deployment of provenance standards could help reduce the proliferation of disinformation and digital content forgeries. Senator Portman introduced the bill on July 29th, 2021. The bipartisan act would assist the Department of Homeland Security (DHS) in countering deepfake technology. The task force would be chaired by DHS and composed of experts from academia, government, civil society, and industry. It would be charged with exploring how digital content provenance could reduce the spread of deepfakes, developing tools for content creators to authenticate content and its origin, and improving the ability of civil society and industry leaders to relay information about the source of deepfakes. The bill was unanimously reported out of committee in August 2021 and awaits a Senate floor vote.

  • UK: The UK Centre for Data Ethics and Innovation (CDEI) cited digital content provenance and the emerging open standards from the C2PA as examples of how transparency can help platforms address misinformation. The CDEI issued its report to the UK Government in August 2021. The CDEI is a government expert body focused on the trustworthy use of data and AI, with a team of specialists whose expertise spans data policy, public engagement, and computer science, supported by an advisory board in delivering trustworthy approaches to data and AI governance across the UK. As an independent advisory body, it brings people together from across sectors to shape recommendations for the government that support responsible innovation and help build trust.

  • Australia: The Australian Code of Practice on Disinformation and Misinformation was signed by tech companies in response to the government’s request for a framework that provides safeguards against the harms associated with such content and empowers users to make better-informed decisions about digital content. To date, the code has been adopted by Adobe, Apple, Facebook, Google, Microsoft, Redbubble, TikTok, and Twitter. All participating companies commit to protecting Australians from harm caused by online disinformation and misinformation through a range of measures that reduce its spread. They also commit to releasing an annual transparency report to help improve understanding of misinformation and disinformation over time. The first set of reports was published on May 22nd, 2021, and is publicly available.
