Enabling ethical AI research through provenance disclosure
OUR PURPOSE
Secure disclosure for ethical AI research
Safe, scalable research is critical to understanding the impacts of generative AI and synthetic media. As this field of research grows, content transparency can help researchers inform participants and add lasting attribution to synthetic content. This kind of cryptography-powered disclosure and attribution reduces the risk of synthetic media being taken out of context after it is used in academic research.
Sign each image and video
Truepic worked with researchers from the Affective Computing group at MIT Media Lab to add secure, cryptographic provenance to over 30 deepfake videos.
Display provenance details
The research team used provenance to debrief participants in their study and ensure these synthetic media files were traceable back to their institution.
C2PA STANDARD
Transparency for a growing field of study and innovation
HOW IT WORKS
A first-of-its-kind example. A growing best practice.
Integrate and display Content Credentials on your platform. With Truepic, you can embed, sign, and showcase verified media, ensuring transparency and trust in every piece of content you present.
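Conceptually, a Content Credential binds a cryptographic hash of a media file to a signed manifest of claims that travels with the content, so any later edit to the file or its claims is detectable. The sketch below is a simplified illustration of that idea only, not the C2PA format itself: real C2PA manifests use X.509 certificates and COSE signatures rather than a shared-secret HMAC, and the function names (`sign_asset`, `verify_asset`) are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for a signer's private key; a real C2PA signer uses
# certificate-based COSE signatures, not a shared-secret HMAC.
SECRET_KEY = b"demo-signing-key"

def sign_asset(media_bytes: bytes, claims: dict) -> dict:
    """Bind a SHA-256 hash of the asset to a set of claims, then sign both."""
    manifest = {
        "asset_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. {"disclosure": "AI-generated for research"}
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_asset(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute hash and signature; any change to asset or claims fails."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned["asset_sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

video = b"...synthetic video bytes..."
m = sign_asset(video, {"disclosure": "AI-generated for research"})
print(verify_asset(video, m))         # True: asset and claims intact
print(verify_asset(video + b"x", m))  # False: tampered asset fails
```

The two-part check mirrors the provenance model described above: the hash ties the manifest to one exact file, and the signature ties the claims to the signing institution, which is what lets a research team trace a synthetic video back to its origin.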