Enabling ethical AI research through provenance disclosure
Truepic worked with researchers from the Affective Computing group at MIT Media Lab to add secure, cryptographic provenance to over 30 deepfake videos.
Secure disclosure for ethical AI research
Safe, scalable research is critical to understanding the impacts of generative AI and synthetic media. As this field of research grows, content transparency can help researchers inform participants and attach lasting attribution to synthetic content. Cryptography-backed disclosure and attribution reduce the risk that synthetic media is taken out of context after it is used in academic research.
Sign each image and video
Each of the more than 30 deepfake videos produced for the MIT Media Lab study was individually signed with secure, cryptographic provenance.
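The mechanics of this step can be illustrated with the open tooling that exists around the C2PA standard. Below is a minimal sketch, not Truepic's actual pipeline, that batch-signs video files using the open-source c2patool CLI from the Content Authenticity Initiative. The directory layout, the manifest fields, and the "ResearchProvenanceDemo" generator string are illustrative assumptions; a real deployment would sign with a properly issued certificate rather than c2patool's bundled test credentials.

```python
"""Sketch: batch-sign deepfake videos with C2PA provenance via c2patool.

Assumes the open-source `c2patool` CLI is installed and on PATH.
File names and manifest fields here are illustrative, not the
actual manifests used in the study.
"""
import json
import subprocess
from pathlib import Path

# Illustrative manifest: discloses that each video is AI-generated.
# Without a configured signing certificate, c2patool signs with its
# bundled test credentials, which are not suitable for production.
MANIFEST = {
    "claim_generator": "ResearchProvenanceDemo/0.1",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC term for fully model-generated (synthetic) media
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}


def sign_all(video_dir: str, out_dir: str) -> None:
    """Embed the disclosure manifest into every .mp4 in video_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    manifest_path = out / "manifest.json"
    manifest_path.write_text(json.dumps(MANIFEST, indent=2))
    for video in sorted(Path(video_dir).glob("*.mp4")):
        # c2patool <input> -m <manifest.json> -o <output>
        # (a force/overwrite flag may be needed if the output exists)
        subprocess.run(
            ["c2patool", str(video), "-m", str(manifest_path), "-o", str(out / video.name)],
            check=True,
        )


if __name__ == "__main__":
    sign_all("deepfakes", "signed")
```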
Display provenance details
The research team used provenance to debrief participants in their study and ensure these synthetic media files were traceable back to their institution.
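For a debrief of this kind, the embedded manifest can be read back out of each signed file. A minimal sketch, assuming c2patool is installed; run with no flags it prints a file's manifest store as JSON, though the exact report fields may vary between versions:

```python
"""Sketch: read back embedded provenance for a participant debrief."""
import json
import subprocess


def describe(signed_video: str) -> None:
    """Print who signed the file and which assertions it carries."""
    report = subprocess.run(
        ["c2patool", signed_video],
        capture_output=True, text=True, check=True,
    )
    store = json.loads(report.stdout)
    # The active manifest is the most recent claim made about the file.
    manifest = store["manifests"][store["active_manifest"]]
    print("Signed by tool:", manifest.get("claim_generator"))
    for assertion in manifest.get("assertions", []):
        print("Assertion:", assertion.get("label"))


if __name__ == "__main__":
    describe("signed/demo_deepfake.mp4")  # illustrative path
```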
Transparency for a growing field of study and innovation
Using the C2PA open standard, the researchers were able to disclose and attribute the AI-generated videos they had shown to participants during the experiment. Correct attribution, sealed into each video file, helps reduce the risk that videos used in an experiment will be taken out of context.
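In C2PA terms, that attribution travels as an assertion inside the signed manifest. Below is a sketch of what such an assertion might look like, using the standard's `stds.schema-org.CreativeWork` label for schema.org metadata; the organization name is a placeholder, not the study's actual manifest content.

```python
# Sketch of an attribution assertion that could be added to a signing
# manifest like the one shown earlier. The author entry is illustrative.
ATTRIBUTION_ASSERTION = {
    "label": "stds.schema-org.CreativeWork",
    "data": {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "author": [
            {"@type": "Organization", "name": "Example Research Lab"}  # placeholder
        ],
    },
}

# Appending it to the manifest before signing seals the attribution
# into each video file:
# MANIFEST["assertions"].append(ATTRIBUTION_ASSERTION)
```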
Content Transparency
A first-of-its-kind example. A growing best practice.
Informing participants is crucial to ethical research, especially research that involves hyperrealistic, often deceptive synthetic content.