Provenance is the future of media on the internet. And that future begins today

Sherif Hanna | VP of R&D @ Truepic | Aug 03, 2020 | 7 min read

Truepic is a collaborator in the Content Authenticity Initiative (CAI), a gathering of technology, news, academic, and advocacy organizations working together to create an open standard for provenance-based content authentication. Today, the CAI is releasing a white paper that outlines our proposal for a system architecture for content authentication, the principles that guided the selection of that architecture, and the next steps needed to turn it into an open standard. You can read the white paper here.


Truepic was founded on the core thesis that provenance is the most reliable way to establish the integrity of the data contained in photo and video files. We pioneered Controlled Capture technology, which authenticates the pixel contents, date, time, and location of a photo or video from when the capture button is pressed. Controlled Capture photos and videos have been taken in over 150 countries, and the technology is used every day by advocacy organizations, non-profits, and enterprises of all sizes.

This is why we were incredibly excited when Adobe, Twitter, and the New York Times announced the Content Authenticity Initiative last fall and why we joined enthusiastically and contributed heavily to its work over the past several months. The CAI represents the first industry-wide recognition of the soundness of the provenance approach as an alternative to the detection approach.

Last December, Truepic CEO Jeff McGregor wrote a blog post outlining why we believe that forensic detection will never be adequate for restoring trust in visual media, especially at scale. Since then, the results of the Deepfake Detection Challenge (DFDC) have only bolstered this argument. Despite worldwide competition and a large prize purse, the winning DFDC algorithm achieved an accuracy of only 65%, slightly better than a coin toss. That accuracy would not be useful at scale, and even setting aside the fact that synthetic media detectors and generators will forever be locked in an arms race, the desire for a binary outcome of “real” vs. “fake” is structurally problematic: an algorithm is incapable of assessing intent or making a proper value judgment. A properly informed person, on the other hand, can.

Objectives and Guiding Principles

The CAI is taking a different approach: empower people with trustworthy information about the digital content they’re consuming, established by a transparent, unbroken chain of custody from the content’s origin, so that they can make informed value judgments for themselves.

As one can imagine, there are many possible ways to achieve that goal. So how did the CAI decide which one to pursue? We considered potential real-world scenarios based on our collective experience and aimed to create an architecture designed with the following principles in mind, among others:

  1. Minimize harm and reduce the potential for misuse
  2. Make privacy the foundational principle of the system. For example, attaching the creator’s identity to content is an opt-in choice, not a mandatory feature.
  3. Allow for creative freedom without passing value judgments as to whether the edits are benign or not.
  4. Make all information about the lineage of the content available transparently to the viewers.
  5. Design for maximum scalability and security.

You can read more details on these guiding principles in Section 2 of the white paper.

The Architecture

You can think of a CAI asset (e.g., a photo or a video) as a file that has a collection of statements of fact, called assertions in CAI parlance. The assertions are bundled together into a cryptographically signed data structure called a claim embedded in the file. The claims are created, signed, and embedded in the file by a Claim Recorder.

To allow richer, more detailed information to be stored about an asset, assertions can optionally be stored in the cloud, with a pointer to them embedded in the claim inside the file.
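To make the asset → assertion → claim relationship concrete, here is a minimal sketch in Python. The field names, labels, and digest choice are illustrative assumptions, not the white paper's actual wire format; the point is only that each assertion is hashed individually, so a claim can reference it inline or via a cloud pointer.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Digest helper; the actual algorithm is left to the eventual standard."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical assertions: statements of fact about the asset.
assertions = [
    {"label": "capture.time", "data": {"utc": "2020-08-03T12:00:00Z"}},
    {"label": "capture.location", "data": {"lat": 32.72, "lon": -117.16}},
]

# Each assertion is hashed individually, so the claim can carry either the
# full assertion or just its hash plus a pointer to a cloud-stored copy.
hashed_assertions = [
    {
        "label": a["label"],
        "hash": sha256_hex(json.dumps(a["data"], sort_keys=True).encode()),
        "url": None,  # optional pointer to a richer cloud-stored assertion
    }
    for a in assertions
]

# The claim bundles the assertion hashes; a Claim Recorder would sign this
# structure and embed it in the asset file.
claim = {
    "assertions": hashed_assertions,
    "claim_hash": sha256_hex(
        json.dumps(hashed_assertions, sort_keys=True).encode()
    ),
}
print(claim["claim_hash"])
```

Because every assertion's hash is sealed into the claim, altering any assertion after the fact, whether stored inline or in the cloud, would break the hash comparison.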

In the CAI system, Claim Recorders will exist in smartphone camera hardware (e.g., Truepic firmware running inside the secure enclave of a mobile SoC), in camera apps (e.g., Truepic Vision mobile apps), in image editing tools (e.g., Adobe Photoshop), and in the media ingestion pipelines or content management systems of online platforms (e.g., Twitter).

Please note that there is no mandate for Claim Recorders to use distributed ledgers such as a public or private blockchain to store hashes or digital signatures. The claims, their hashes, and the hashes of the assertions they contain are all embedded in the files themselves.

Claim Recorders sign the claims using private keys attested to by certificate authorities that are part of the CAI Trust List, the root of trust in the CAI system. This means the keys are not actually issued to individuals and are not tied to individual identities but rather to the Claim Recorder firmware or software in CAI-compliant cameras, editing tools, or media processing pipelines. This is intended to guard the privacy of creators if they wish to remain anonymous.
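The sign-and-verify step above can be sketched as follows. In the real system the Claim Recorder would use an asymmetric private key attested by a certificate authority on the CAI Trust List; here a stdlib HMAC stands in for the signature operation purely for illustration, and the key name is hypothetical.

```python
import hashlib
import hmac

# Hypothetical recorder-level key. In the CAI system this would be an
# asymmetric private key tied to the Claim Recorder firmware or software,
# not to any individual, preserving creator anonymity.
RECORDER_KEY = b"claim-recorder-firmware-key"

def sign_claim(claim_bytes: bytes, key: bytes = RECORDER_KEY) -> str:
    """Produce a signature over the serialized claim (HMAC as a stand-in)."""
    return hmac.new(key, claim_bytes, hashlib.sha256).hexdigest()

def verify_claim(claim_bytes: bytes, signature: str,
                 key: bytes = RECORDER_KEY) -> bool:
    """Detect whether the claim was altered after signing (tamper evidence)."""
    return hmac.compare_digest(sign_claim(claim_bytes, key), signature)

claim_bytes = b'{"assertions": [...]}'
sig = sign_claim(claim_bytes)
assert verify_claim(claim_bytes, sig)           # untouched claim verifies
assert not verify_claim(claim_bytes + b"x", sig)  # any modification fails
```

Note that the signature proves the claim came from an attested Claim Recorder and was not altered afterward; it deliberately says nothing about who pressed the capture button.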

Truepic is going a step further. When technically viable, we intend to use one-time private/public keypairs that form the basis of hash-based signature schemes once standardized as part of NIST’s post-quantum effort. This would provide the ultimate cryptographic protection for photographers and videographers who wish to remain anonymous: no two CAI-compliant Truepic assets would be signed by the same private key, even if they come from the same device or app. Until hash-based signature schemes are standardized and ready for adoption, Truepic will use more common keys and signature algorithms, limiting how long each key can be used to achieve a similar effect.

If the creator or editor wants attribution for their CAI asset, their identity can be embedded in the asset file as an assertion which is then cryptographically sealed into the file.

To facilitate both creative and privacy-protective edits, we conceived of a predecessor → transform → successor model for CAI assets. A predecessor is an asset that may itself have an earlier predecessor (e.g., an export of a photo which was edited in Adobe Photoshop) or may be an original asset with no predecessors (e.g., a Truepic captured on secure smartphone hardware).

All transforms made in a CAI-compliant tool to a predecessor, which results in the successor, are recorded and embedded in the successor asset by the tool’s Claim Recorder. A transform may be an assertion of edits made to the predecessor (e.g., crop, resize, healing brush), a redaction of a prior assertion (e.g., removing the precise GPS coordinates for privacy protection), or the addition of a new assertion (e.g., the identity of the editor using the tool).

In the new claim generated for the successor asset, the Claim Recorder not only records every passed-through, new, or redacted assertion, but also makes an explicit reference to the predecessor claim and carries it forward into the successor asset. This way, a complete genealogy of the asset and its predecessors is available in the file itself.
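The predecessor → transform → successor model can be sketched as a chain of claims, each carrying its predecessor forward. The helper names and assertion labels below are illustrative assumptions, not the standard's actual schema.

```python
import hashlib
import json

def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_claim(assertions, predecessor_claim=None):
    """Build a claim for an asset, carrying any predecessor claim forward."""
    return {
        "assertions": assertions,
        "predecessor": predecessor_claim,  # full genealogy travels in the file
        "hash": digest({
            "assertions": assertions,
            "pred": predecessor_claim["hash"] if predecessor_claim else None,
        }),
    }

# Original asset: a secure capture with no predecessors.
original = make_claim([
    {"label": "capture.time", "data": "2020-08-03T12:00:00Z"},
    {"label": "capture.location", "data": {"lat": 32.72, "lon": -117.16}},
])

# Successor asset: an edit that crops the image and redacts the GPS
# assertion. The redaction itself is recorded, not silently hidden.
edited = make_claim([
    {"label": "edit.crop", "data": {"w": 800, "h": 600}},
    {"label": "redaction", "data": "capture.location"},
], predecessor_claim=original)

# Walking the chain recovers the complete genealogy of the asset.
chain = []
claim = edited
while claim:
    chain.append(claim["hash"])
    claim = claim["predecessor"]
print(len(chain))  # → 2: the original capture plus one edit
```

Because each successor claim commits to the hash of its predecessor, rewriting any earlier step in the history would invalidate every claim that follows it.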

The fundamental objective here is transparency: no edit, addition, or redaction of assertions from an asset is forbidden, but each action is recorded and cryptographically sealed. This way, the information can be made available to the viewer to understand the history and genealogy of the content they’re viewing. The challenge of displaying this wealth of information to the viewers in a helpful rather than overwhelming manner is a key work item for the CAI.

You can read more about the system architecture and some sample workflows in Sections 4 and 5 of the white paper.

Next Steps

The CAI will pursue a two-track approach: formalizing working groups that will tackle the standardization of different aspects of the system (including support for additional content types and file formats) and a prototyping effort to put this emerging architecture to practice.

For Truepic, we view the validity and the integrity of the assertions sealed into CAI-compliant photos and videos to be of the utmost importance. To that end, we continue to work with Qualcomm Technologies to build a secure image capture system and a CAI-compliant Claim Recorder in the Trusted Execution Environment (TEE) of the Qualcomm Snapdragon 865 Mobile Platform. The resulting CAI-compliant photos would feature assertions whose contents were captured and encoded securely, fortifying them against potentially malicious software running on the device and guaranteeing the highest possible level of data integrity. We aim to showcase the results of this effort in the next few months and share what we learn with the wider community.

At Truepic, we’ve always held the conviction that the rapid proliferation of synthetic media and the erosion of trust in photos and videos imply that it makes more sense to prove what’s real than to try to detect what’s fake. Today’s white paper release paves the way for an ecosystem-wide actualization of that thesis. We are proud to be working with Adobe, Twitter, the New York Times, WITNESS, and the rest of the members of the Content Authenticity Initiative on this incredibly important effort, and we hope that it will bear fruit in the form of a collaborative and thoughtful open standard that’s widely adopted and genuinely useful.

Qualcomm Snapdragon is a product of Qualcomm Technologies, Inc. or its subsidiaries.