Identifying AI-Generated Content from Open-Source Models
A community demonstration of C2PA signing at point-of-generation, in partnership with Hugging Face and Steg.AI.
Navigating the Challenge of AI-Generated Content
Distinguishing human-created from machine-generated media has become increasingly difficult as AI-generated content grows in quality. The stakes are highest when images depict events that never occurred, or when they deliberately blur the line between fiction and reality. Addressing this evolving landscape is essential in today’s media environment.
In a demo on Hugging Face Spaces, we combined C2PA signing, backed by Truepic’s certificate authority and CLI, with invisible watermarking provided by Steg.AI.
Showcasing the Power of Generative AI
We set out to show that generative AI platforms can empower creators to seamlessly embed Content Credentials in their content, clearly marking it as computer-generated from the moment of creation.
Additionally, we wanted the Space to demonstrate recovery: when an image’s Content Credentials are stripped, for example by re-encoding or screenshotting, a signed copy of the original can still be retrieved. This retrieval is made possible by an invisible watermark embedded in the image, adding an extra layer of security and authenticity to the creative process.
Shared Success: Securing Image Integrity
This demo represents a significant stride in fostering transparency and accountability in the digital landscape. Beyond technological advancements, it emphasizes the importance of ethics, accountability, and a transparent digital future. While larger tech companies address these concerns, this demonstration provides a clear and accessible path for smaller enterprises and individual developers.
Content Credentials and watermarking methods harmonize to form a resilient approach to transparently disclosing computer-generated images.
Integrating signing with C2PA into your server-side workflow, particularly right after image generation, is a seamless process. If you’re curious about the ease of implementation, feel free to reach out.
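As a sketch of what point-of-generation signing involves, the snippet below builds a minimal C2PA-style manifest declaring an image as trained algorithmic media, using the IPTC digital source type vocabulary that Content Credentials use for AI-generated content. The JSON shape follows the convention of the open-source c2patool; the cryptographic steps (hashing the asset, binding the claim, and countersigning with a CA-issued certificate) are performed by the signing tool and certificate authority and are not shown here.

```python
import json

# Minimal C2PA-style manifest marking an image as AI-generated.
# "trainedAlgorithmicMedia" is the IPTC NewsCodes term for fully
# machine-generated content; the c2pa.actions assertion records how
# the asset came to be.

def build_manifest(model_name: str) -> dict:
    return {
        "claim_generator": model_name,
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            "action": "c2pa.created",
                            "digitalSourceType": (
                                "http://cv.iptc.org/newscodes/"
                                "digitalsourcetype/trainedAlgorithmicMedia"
                            ),
                        }
                    ]
                },
            }
        ],
    }


manifest = build_manifest("stable-diffusion-demo")
print(json.dumps(manifest, indent=2))
```

A server-side workflow would write this manifest to disk and hand it to the signing CLI immediately after the model returns the image, so the credential is attached before the file ever leaves the server.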
Labeling Open-Source AI
Labeling content from open-source models as AI-generated fosters a culture of transparency and accountability in communal software development.
Our Partners
Informing the design of content attribution solutions
“As generative AI is seeing growing adoption and helping create images that are becoming more difficult to distinguish from their training data, helping users better manage and annotate these new types of media will be essential to fostering more responsible and reliable development of the technology.”
Machine Learning & Society Lead
“We’re seeing broad support for watermarking AI-generated content from US and EU government leaders and private industry. We are excited to be partnered with Truepic and Hugging Face to deliver the first publicly usable forensic watermarking solution in a package that truly raises the bar for responsible AI.”
Dr. Eric Wengrowski
Co-Founder & CEO
You May Also Like
Learn about our other collaborations
What happens if real is actually fake? Truepic and the acclaimed production studio Revel.ai partnered to produce the world’s first authenticated deepfake video, built with transparent engineering.
MIT Media Lab
Enabling ethical AI research through provenance disclosure: Truepic worked with researchers from the Affective Computing group at MIT Media Lab to add secure, cryptographic provenance to over 30 deepfake videos.
Interested in collaborating? We’d love to hear from you.
About Hugging Face
Hugging Face is the leading open-source, community-driven AI platform, providing tools that enable users to build, train, deploy, and explore machine learning models and datasets.
About Steg.AI
Steg.AI builds state-of-the-art forensic watermarking software. Leading brands trust the company’s AI-powered watermarks to protect and authenticate their digital libraries. Steg.AI is venture-backed and supported by the United States National Science Foundation (NSF).