It’s been just over two years since researchers at Berkeley found that humans have a 50/50 chance of being able to tell whether headshots were AI-generated or human. Since then, we have seen and heard a wide range of hyperrealistic synthetic images, video, and audio that unmistakably enter the ‘Uncanny Valley’ – that unsettling place where something that is not human – like a deepfake – appears so convincingly human that we can no longer trust our perception of reality.

In this nebulous digital world, we need technologies to help us navigate the authenticity of digital content. Two technological approaches have emerged: provenance and detection.

Provenance, a robust solution that begins at the point of origin.

Provenance has long been used in the art world to authenticate the origin of artistic works by documenting how, when, and where something is made. As the use of generative AI grows, we expect the sheer quantity of synthetic content online to rapidly outpace and ‘crowd out’ authentic content. Knowing what is human-made versus computer-generated will be paramount for trust on the internet. Provenance offers a proactive solution to this challenge: establish the origin of content as early as possible in the value chain of digital media and display that origin to viewers through popular publishing platforms, including news and social media.

Digital content provenance works by cryptographically sealing the known origin of a file into its metadata. The most widely accepted approach to media provenance has been popularized by the C2PA standards body and is implemented as Content Credentials. These credentials, once sealed, are viewable when content is published so that we can see, for example, whether an image was generated by a computer (artificial intelligence) or captured by a traditional camera. Having a unified and standardized way to establish and display provenance information benefits consumers by letting them know what they are interacting with online, much like nutrition labels provide us with information about what is in the food we consume every day.

Digital content provenance is a tamper-evident, proactive measure to establish and disclose the origin and history of digital content. Similar to a protective seal on a consumer product, if the provenance ‘seal’ on a piece of digital content is intact, you can know that the content has not changed since the seal was applied; if it has changed, the broken seal makes that evident. For decisions of high consequence, this proactive guarantee is often imperative. The cryptographic assurance that an image has been unchanged since it was captured or created eliminates some uncertainty for decision makers evaluating that content. Truepic has worked with hundreds of enterprise customers to help them capture tens of millions of authentic images and videos with verified provenance that are used to make critical business decisions. Truepic is also working with AI companies, hardware providers, and others to help integrate digital content provenance into their systems for an even wider distribution of Content Credentials.
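To make the ‘seal’ analogy concrete, here is a minimal sketch of how a cryptographic seal over an image and a claim about its origin can be created and later checked. It is illustrative only: the key handling, helper names, and claim fields are assumptions, and the real C2PA format embeds signed manifests in the file rather than using a detached signature.

```python
# Minimal sketch of a tamper-evident "seal": hash the image bytes together with a
# claim about its origin, sign that digest, and verify it later. Illustrative only.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def seal(image_bytes: bytes, claim: dict, private_key: Ed25519PrivateKey) -> bytes:
    """Sign a digest of the image and its origin claim."""
    payload = hashlib.sha256(image_bytes + json.dumps(claim, sort_keys=True).encode()).digest()
    return private_key.sign(payload)


def is_intact(image_bytes: bytes, claim: dict, signature: bytes, public_key) -> bool:
    """Return True only if neither the image nor the claim changed since sealing."""
    payload = hashlib.sha256(image_bytes + json.dumps(claim, sort_keys=True).encode()).digest()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()
image = b"example image bytes"  # stand-in for the real file contents
claim = {"generator": "camera", "captured_at": "2024-02-20T12:00:00Z"}

sig = seal(image, claim, key)
print(is_intact(image, claim, sig, key.public_key()))            # True: seal intact
print(is_intact(image + b"edit", claim, sig, key.public_key()))  # False: seal broken
```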

Content Credentials are designed to be interoperable so that when digital content inevitably travels between compliant tools and platforms, the content’s provenance stays intact. That said, the most significant drawback to the provenance approach is that these new provenance technologies and standards have to be implemented at critical junctures in the content value chain, most notably the point of content creation. Interoperability allows for significant scaling of the provenance approach, but scaling also requires broad adoption across the tech ecosystem such that consumers start seeing Content Credentials on a daily basis.

Detection, a frictionless solution that can be applied to any content.

Detection of AI-generated content, on the other hand, is probabilistic and is applied reactively. It gives decision-makers the probability that something was or was not AI-generated based on pattern analysis. For example, a detection tool might tell us that there is an 80% likelihood (plus or minus a margin of error) that something was AI-generated. 80% is high, but it is also far from a guarantee.

Detection can be an extremely helpful directional indicator, especially if content of unknown origin is circulating online without Content Credentials and quick action needs to be taken to determine the nature of content. One of the primary benefits of detection systems is that they do not require broad, ecosystem-level adoption to be useful. Rather, they can be deployed as a point solution, often via APIs or on-prem systems, and applied only to content that is called into question. This yields immediate benefit to decision makers who are looking to perform a probabilistic analysis on the authenticity of content.
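As a sketch of what that ‘point solution’ might look like in practice, the snippet below sends a single questionable file to a detection service and acts on the returned probability. The endpoint, field names, and threshold are hypothetical placeholders, not any real product’s API.

```python
# Sketch of detection as a point solution: one API call per questionable file.
# The endpoint and response fields below are hypothetical.
import requests

DETECTION_ENDPOINT = "https://detector.example.com/v1/analyze"  # hypothetical URL


def likely_ai_generated(path: str, threshold: float = 0.8) -> bool:
    with open(path, "rb") as f:
        resp = requests.post(DETECTION_ENDPOINT, files={"media": f}, timeout=30)
    resp.raise_for_status()
    score = resp.json()["ai_probability"]  # e.g. 0.80, plus or minus a margin of error
    # The score is directional, not a guarantee; keep a human in the loop
    # for high-consequence decisions.
    return score >= threshold


if likely_ai_generated("viral_image.jpg"):
    print("Flag for manual review: likely AI-generated")
else:
    print("No strong signal of AI generation")
```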

Where detection falls short is on certainty. Because of the editable and ephemeral nature of metadata in image, video, and audio file formats, detection can never provide a definitive answer about the origin of content, or whether data has been manipulated along the way. For example, simply changing the time and date on your smartphone and capturing an image imprints a “Camera Original” time and date that are incorrect. Detection systems are not, and never will be, capable of catching this type of “cheapfake.” And, as AI systems progress in quality, it will be increasingly difficult for detection approaches to keep up. This dynamic is further fueled by the substantial commercial incentives behind highly sophisticated, widely accessible content generation, which attract far more capital investment than detection does.

Only individual decision makers can know what is right for them and their use case, but there are instances when misidentifying something as AI-generated could be quite harmful. Even if a detection tool were accurate 99% of the time at determining whether content was AI-generated, that 1% gap at internet scale is massive. Experts estimate that in 2023 alone, 15 billion images were generated by AI. When all it takes is one convincingly fabricated image to degrade trust, 150 million undetected images is a pretty unsettling number.
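The scale argument above is simple arithmetic, shown here as a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "1% at internet scale" point.
images_generated_2023 = 15_000_000_000  # expert estimate cited above
detector_accuracy = 0.99                # even a very strong detector
missed = images_generated_2023 * (1 - detector_accuracy)
print(f"{missed:,.0f} images misclassified")  # 150,000,000
```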

Provenance + Detection, better together.

The reality is that we need both content provenance and detection technologies to mitigate the risks of synthetic content online today. Only provenance provides cryptographic assurance about how a piece of content came to be, but not all media has provenance. While provenance continues to become more widely adopted, detection can provide an interim solution to help decision makers evaluate digital content that does not have provenance.

Provenance is becoming a proactive best practice across many content creation tools. For example, now when you use DALL·E 3, a Content Credential is automatically and proactively sealed into the metadata of each media file to show that the image was created using DALL·E 3. If that Content Credential is stripped (deliberately or inadvertently) from the file, detection will then have a role to play.
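Here is a minimal sketch of that ‘better together’ workflow: trust an intact Content Credential when one is present, and fall back to probabilistic detection when it has been stripped or was never attached. Both helper functions are hypothetical placeholders, not Truepic’s or the C2PA’s actual tooling.

```python
# Sketch of a provenance-first, detection-as-fallback triage flow.
from typing import Optional


def read_content_credential(path: str) -> Optional[dict]:
    """Return the file's provenance manifest if present and intact, else None.
    Placeholder: a real implementation would use a C2PA-aware reader or SDK."""
    return None  # pretend the credential was stripped


def detection_score(path: str) -> float:
    """Return a 0-1 probability that the content is AI-generated.
    Placeholder: a real implementation would call a detection model or API."""
    return 0.8


def triage(path: str) -> str:
    credential = read_content_credential(path)
    if credential is not None:
        # Deterministic answer: the sealed origin tells us how the content was made.
        return f"provenance: created by {credential.get('generator', 'unknown')}"
    # No credential: fall back to a directional, probabilistic signal.
    score = detection_score(path)
    return f"detection: {score:.0%} likelihood of AI generation (not a guarantee)"


print(triage("viral_image.jpg"))
```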

Adoption of Content Credentials will continue to grow given the need for transparency into the origin of content. And detection tools will continue to add value as the ecosystem moves through waves of provenance adoption. Together, these two technologies can help decision makers better understand the authenticity of content as we enter a completely new era for digital media.

Last week, Truepic signed the AI Elections Accord alongside 19 other tech companies, including Microsoft, Google, Meta, Adobe, TikTok, and others. I wanted to take a moment to unpack its significance and why widespread tech adoption of content provenance is highlighted as one of the 7 core goals in the Accord.

This year, billions of people will be exercising their right to vote in elections worldwide. At the same time, deceptive AI-generated content poses a notable risk: bad actors are using synthetic media to mimic public figures and distort reality online to mislead or sway voters. We’ve already seen this dynamic play out with synthetic videos of political candidates in the US and abroad. Without greater transparency online about what is synthetic and what is not, we will also likely see the Liar’s Dividend being used to undermine the credibility of all media.

To mitigate these challenges, content provenance is an essential piece of the puzzle that helps to eliminate the ambiguity around what is authentic and what is not. Content provenance can tell us how something was created (generated by AI, captured on a smartphone, etc.) and what significant changes (like AI erasing, cropping, etc.) it has undergone since creation.

C2PA Content Credentials are content provenance in action, and in the last several weeks, we’ve seen growing support for them across the tech ecosystem: Google joined Truepic, Microsoft, Adobe, and others on the C2PA’s steering committee, and OpenAI integrated Content Credentials into DALL·E 3 outputs and announced it will also integrate C2PA for Sora (should the new text-to-video model be made publicly available). These are notable strides for C2PA Content Credentials and toward a more transparent internet.

As a founding member and leading implementer of C2PA, Truepic powers the most secure, robust implementations of C2PA Content Credentials for diverse enterprise partners around the world, like Microsoft, Qualcomm, Hugging Face, and social impact organizations, such as Ballotpedia, working to secure elections. Our technology works for both disclosing synthetic content and safeguarding authentic content. C2PA Content Credentials provide a tamper-evident signal for tech platforms and viewers to know if what they are looking at is AI-generated or a real photo or video. For example, viewers can see if an image of a political candidate was genuinely captured by a camera or generated by AI.

The momentum we’ve seen across the tech ecosystem over the last several weeks is encouraging, but we must sustain it. In a digital age where many of our decisions are based on digital content, transparency about the source and authenticity of images, videos, and audio clips is paramount.

By Jeff McGregor, CEO

Today, Google joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), the body behind an interoperable open technical standard, founded by organizations including Truepic, Microsoft, Adobe, the BBC, and others to standardize how transparent and authentic digital content can travel online.

With Meta’s announcement earlier this week that it will label AI-generated images on Facebook and Instagram based on signals like C2PA Content Credentials, and OpenAI’s integration of C2PA to sign outputs from DALL·E, Google joining the Coalition is the latest in a string of critical developments for the content authenticity industry. In the last year, Qualcomm and Truepic announced breakthrough C2PA capability in chipsets; Leica announced it will use C2PA Content Credentials in its M11-P camera; and Stability AI’s API, Microsoft Bing’s Image Generator, Adobe Firefly, and Truepic’s Hugging Face Space have all incorporated C2PA Content Credentials to disclose AI-generated content. In a moment when AI-generated content is more prevalent than ever before, these are huge strides for the authenticity industry toward building a healthier information ecosystem.

As one of the world’s largest technology providers, Google joining the C2PA marks a significant moment for the global movement toward greater transparency in digital content. C2PA Content Credentials are intentionally designed to be open (any platform or tool can adopt them) and interoperable (ingested, read, and displayed across tools and platforms) because digital content has a dynamic lifecycle online, moving rapidly between different tools and platforms. Tamper-evident C2PA Content Credentials provide a clear line of sight into how digital content was created and edited, so content consumers and distributors can understand if what they are looking at is AI-generated, captured on camera, or somewhere in between. As more platforms and tools like Google and OpenAI join the C2PA, Content Credentials will become more resilient and readable signals across the digital platforms and tools we use every day, driving greater transparency into how the content we consume was created and/or edited.

Google joining C2PA today is a critical step toward a more transparent internet. While there is still much work ahead, I am confident that growing C2PA adoption across software and hardware providers will drive material, positive changes online. Truepic will continue to power secure, enterprise-grade C2PA implementations, so that authentic, transparent, credentialed digital content will be easier to create, share, and display around the world.

By Jeff McGregor & Judd Heape 

Generative Artificial Intelligence (Gen AI) is here to stay. It is already estimated that more than 65% of people born between 1981 and 2012 regularly use Gen AI. As Gen AI becomes a central part of our societies and economies, every industry and government will develop policies to address this reality. Like all digital innovation, these breakthrough capabilities will be a staple on mobile devices. We will soon live in a world where any smartphone user will be able to generate hyper-realistic, synthetic content to share, send, or post anywhere online. Given that approximately 85% of the global population has access to a smartphone – a number that continues to increase – the scale and impact of these capabilities will undeniably reshape the internet.

We are in the midst of a dramatic shift in our information ecosystem, where the ability to distinguish between authentically created and AI-created content is a necessity. “Real or AI?” will become the fundamental question of content consumers. In this evolving digital landscape, transparency and authenticity are the foundations required to maintain the integrity of our shared information ecosystem. Simply put, transparency is necessary to foster trust, ensure accuracy, and safeguard against the spread of deception and fraud, and it all starts on mobile devices.

Breakthrough: Transparency and Authenticity on Chip

Qualcomm and Truepic have been working tirelessly over the course of many years to prepare for this reality. Today, at Qualcomm’s Snapdragon Summit, we announced the world’s first chipset to power transparency and authenticity in digital content across smartphones worldwide. The Snapdragon® 8 Gen 3 Mobile Platform, unveiled featuring Truepic’s unique technology, helps chart a path to a more transparent future. This first-of-its-kind chipset will enable any device to securely sign an authentic original image or generate synthetic media with full transparency, right from the smartphone. This marks a watershed moment for the ethical and transparent use of Generative AI.

How Does it Work?

This breakthrough is made possible by embedding Truepic’s technology into the Snapdragon’s Trusted Execution Environment to ensure that the original details of the media are verified and cryptographically sealed upon creation. By leveraging Truepic’s technology in firmware, applications on a supported device can add Content Credentials to any image output – synthetic or authentic. Content Credentials are image details that have been digitally signed according to an open technical specification developed and maintained by the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry standards development organization Truepic helped found. Content Credentials, which can be added to any Gen AI output from a device powered by the Snapdragon chipset, make it clear if the content has been altered since its creation.

As synthetic media proliferates, the value and utility of authentic media – created by light – will also increase. Truepic and Qualcomm went a step further to ensure you can differentiate between the two. Our teams have also enabled a process that takes advantage of the Snapdragon’s Trusted Execution Environment to produce a transparent image with Content Credentials applied directly as the photo is captured. Therefore, media created on a device can be sent to any C2PA-compliant website, platform, phone, or browser and display its origin and history. Further, media can be edited on device or moved to a compliant editing platform, like Adobe’s Creative Suite, to maintain full transparency of edits/changes.
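To illustrate why that edit history stays trustworthy, here is a simplified sketch of a tamper-evident chain of signed claims: each step signs its own claim plus a hash of the previous entry, so later alterations are detectable. The structure and field names are illustrative assumptions; the real C2PA manifest format is more involved.

```python
# Illustrative sketch of a tamper-evident edit history (not the C2PA manifest format).
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def append_entry(chain: list, claim: dict, key: Ed25519PrivateKey) -> None:
    """Append a signed claim that also commits to the previous entry's hash."""
    prev_hash = (
        hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
        if chain else ""
    )
    entry = {"claim": claim, "prev": prev_hash}
    # Sign the entry, then attach the signature to it.
    entry["sig"] = key.sign(json.dumps(entry, sort_keys=True).encode()).hex()
    chain.append(entry)


device_key = Ed25519PrivateKey.generate()  # e.g. a key protected by the phone's secure hardware
editor_key = Ed25519PrivateKey.generate()  # e.g. a key held by an editing tool

history: list = []
append_entry(history, {"action": "captured", "device": "smartphone camera"}, device_key)
append_entry(history, {"action": "edited", "tool": "photo editor", "change": "crop"}, editor_key)
print(json.dumps(history, indent=2))
```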

Significance

As technologists, government, academics, and institutions work together to produce a healthier information ecosystem, most agree that transparency in digital content is critical. Transparency, authenticity, and provenance are supported by legislators, forums, and government officials around the world, most notably in the White House’s voluntary commitments, the EU’s AI Act, and by thought leaders like the Partnership on AI.

In an effort to support these goals, Qualcomm and Truepic believe device-level transparency is fundamental, as the majority of digital content – synthetic or authentic – will come directly from smartphones. Truepic and Qualcomm are proud to lead the way through the combined innovations atop the Snapdragon Mobile Platform. OEMs will be able to leverage these capabilities in the chipset’s second wave of production in mid-2024.

Jeff McGregor is the CEO of Truepic 

Judd Heape is the VP of Product Management for Camera, Computer Vision and Video Technology at Qualcomm

Mallory Lindahl is a Master’s degree candidate at Carnegie Mellon University and a Communications and Public Affairs intern at Truepic. 

I recently explored the interoperability of C2PA Content Credentials between Truepic’s secure capture software development kit (SDK) Lens, and Adobe Creative Cloud. Both Truepic and Adobe are founding members of the C2PA, which provides an open technical specification for greater transparency and authenticity in digital content using Content Credentials.

What are Content Credentials?

Content Credentials can show when a piece of content was captured or produced, what program was used to produce it, and what edits have been made to it since creation. They are a form of digital signature, powered by cryptography, that lets creators document their editing process, make notes for later projects, and establish the authenticity of their produced media. Content Credentials also provide transparency as to whether something was AI-generated and if it has been edited using AI.

The image above, captured using Truepic’s Lens SDK, has Content Credentials, displayed in the upper right-hand corner of the image when you hover over the icon.
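As a rough picture of the kinds of information a Content Credential can surface to a viewer, here is an illustrative data structure. The field names are simplified placeholders and do not reflect the actual C2PA schema.

```python
# Illustrative sketch of what a Content Credential can convey to a viewer.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class EditAction:
    tool: str       # e.g. "Adobe Photoshop"
    action: str     # e.g. "generative fill", "crop"
    used_ai: bool


@dataclass
class ContentCredential:
    produced_with: str                      # e.g. "smartphone camera" or "image generator"
    captured_or_created_at: str             # ISO 8601 timestamp
    ai_generated: bool
    edits: List[EditAction] = field(default_factory=list)
    signer: Optional[str] = None            # who cryptographically signed the credential


credential = ContentCredential(
    produced_with="smartphone camera",
    captured_or_created_at="2023-07-21T14:02:00Z",
    ai_generated=False,
    edits=[EditAction(tool="Adobe Photoshop", action="generative fill", used_ai=True)],
    signer="capture app",
)
print(credential)
```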

Because the C2PA is an open standard, it allows different technologies to interoperate, including Truepic’s Lens and Adobe Creative Cloud. Interoperability means that Content Credentials are preserved when jumping between compatible tools and platforms. 

To demonstrate how the open standard works across Lens and Creative Cloud, I edited an image multiple times and documented the process below. 

Step 1: Capture an Authentic Photo with Truepic Lens

I used Lens to take the original photo for this project. Lens assures and preserves the authenticity of images and videos from the instant they’re captured by verifying and securing the metadata – the details of where, when, and how the original content was captured.

This is the original photo I took with Truepic Lens at the annual Picklesburgh festival in Pittsburgh, PA. The metadata, secured and verified using cryptography, is documented on the right-hand side of the image, showing the exact date, time, and location of the photo. With Content Credentials, viewers can confirm that this is a real photo. 

Step 2: Turn on Content Credentials in Adobe Photoshop

Next, I opened the image in Adobe Photoshop. Before starting the editing process, I turned on Content Credentials by selecting “Content Credentials (Beta)” under the Window menu at the top of my screen. To view the Content Credentials throughout the editing process, I selected the Content Credentials icon in the Photoshop toolbar. This step is crucial for any creator who wants to document and disclose their editing process.

How to turn on and preview Content Credentials in your Adobe Photoshop project.

The “i” symbol to the left of your tab will indicate if Content Credentials have successfully been enabled. The platform gives users the option to learn more about Content Credentials before enabling, and users can turn them off at any time. The C2PA standard is opt-in, so Content Credentials will never be attached to your work without your consent.

Step 3: Alter the Image Using Photoshop’s Generative Fill Feature

My goal with this image was to change it from an authenticated photo of the annual Picklesburgh festival in Pittsburgh, PA to a composite image of a protest, using AI and other editing tools in Photoshop. Photoshop’s ‘Generative Fill’ allowed me to significantly alter this photo in minutes.

After selecting the parts of the photo I wanted to change, I input a detailed text prompt into the generative fill text box to generate the desired edits. I removed the giant pickle, filled in the skyline, added sunglasses to a man in the front, and inserted generic signs and flags throughout the crowd to imitate a protest. Because I enabled Content Credentials before making these changes, each of these AI-powered alterations was documented and saved as part of the new composite image’s Content Credentials. 

Photoshop’s Generative Fill tool allows users to describe the image they want to generate with text prompts. It will generate the desired image in the selected portion of the image.

Using the other editing tools in Photoshop, I made a few more changes to make the image look more realistic. I used blur tools and color matching to keep the signs out of focus, so they looked like they fit into the real photo’s lighting and layers. This all took me less than an hour. When I was finished, I downloaded the composite image as a PNG.

Step 4: Display and Verify Content Credentials

Here is the final product with Content Credentials attached and displayed using Truepic’s Display Library.

Truepic’s Display shows the full Content Credentials for a given piece of media. In this case, it includes time, date, and location of the original photo, along with any modifications that have been made to the picture. You can also see the original image used to create this new image.

The Content Authenticity Initiative’s open source Verify tool is another option for viewing Content Credentials on a given media file. The Verify tool also shows time, date, and location and notifies the user that AI tools have been used to enhance the image.

The Content Authenticity Initiative’s Verify tool is another way to see whether an image has Content Credentials.

Without Content Credentials disclosed to the viewer, someone viewing this image on social media could easily think they are looking at a recent protest in downtown Pittsburgh. But with Content Credentials, they can see that what they are looking at has been heavily edited. Powered by the C2PA open standard, Content Credentials provide tamper-evident transparency that interoperates across tools, like Truepic’s SDK and Adobe’s creative suite. 

Gallery: From Authentic Capture to Composite Edit

From left to right: The original capture authenticated with Truepic’s Lens SDK, a second version to demonstrate what the edits would look like with no Content Credentials attached, and the third and final version, a composite edit using Adobe Photoshop with Content Credentials attached. 

While zero trust is a cornerstone of cybersecurity, have you ever thought about it in the context of media? As AI-generated content spreads across our digital ecosystem, we won’t be able to tell the difference between a genuine photo taken with a camera and a synthetic AI-generated image. We need more information to verify the origins and history of media, also known as provenance. Truepic is focused on just that: In 2016, Truepic was founded to restore authenticity and transparency to online digital content. Truepic’s successful digital platform gives organizations across various sectors, from major insurance companies to nonprofits, the means to securely and seamlessly collect 100% authentic, verified digital images and videos. Truepic’s new iOS and Android Software Development Kit (SDK), Lens, allows any company to integrate Truepic’s secure image capture technology into their existing mobile applications. Lens has won numerous awards since its launch, including a 2023 CSO50 Award for being a risk management and security leader.

Truepic combines its proprietary methods for capturing authentic images with conformance to an interoperable, open specification it helped co-create. Lens cryptographically secures the provenance information collected from every image into the file using the Coalition for Content Provenance and Authenticity (C2PA) specification, an open industry standard for content provenance. The specification quickly evolved to require PKI in its trust model. This prompted Truepic to stand up the world’s first purpose-built C2PA certificate authority, which is integrated into Lens. Truepic adopted Keyfactor’s EJBCA and SignServer to create a highly scalable and reliable signing infrastructure.
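To show why that PKI trust model matters, here is a minimal, self-contained sketch: the certificate that signs a Content Credential must chain back to a certificate authority the verifier trusts. The names are placeholders, and real C2PA validation also involves trust lists, timestamps, and revocation checks, so treat this as a conceptual illustration rather than Truepic’s implementation.

```python
# Toy PKI example: issue a CA certificate and a signer certificate, then check
# that the signer chains to the trusted CA. Requires the "cryptography" package
# (verify_directly_issued_by needs version 40 or newer).
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def make_name(common_name: str) -> x509.Name:
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])


def issue_cert(subject: str, issuer: str, public_key, signing_key) -> x509.Certificate:
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(make_name(subject))
        .issuer_name(make_name(issuer))
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(signing_key, hashes.SHA256())
    )


ca_key = ec.generate_private_key(ec.SECP256R1())
signer_key = ec.generate_private_key(ec.SECP256R1())

ca_cert = issue_cert("Example Provenance CA", "Example Provenance CA", ca_key.public_key(), ca_key)
signer_cert = issue_cert("Example Capture Device", "Example Provenance CA", signer_key.public_key(), ca_key)

# Raises an exception if the signer certificate was not issued by this CA.
signer_cert.verify_directly_issued_by(ca_cert)
print("Signer certificate chains to the trusted CA")
```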

Truepic’s powerful tools enable organizations to easily collect trusted, authenticated content to drive better decisions for them and their customers. The Lens SDK enables applications to record and display where and how content is created. It uses cryptography to protect media from tampering before it reaches its intended recipients, proving that the media has not been manipulated. Should editing be necessary, it also supports signing details about the changes into the file, creating a history over time without breaking the cryptographic chain of custody. When this information is displayed alongside the media, consumers are properly informed about what they see – no more guessing needed.

Learn more about Truepic Lens, here: https://truepic.com/truepic-lens/

Generative AI and deepfake software have steadily advanced over the years and can now deliver realistic results, blurring the line between fact and fiction. Deepfake technology has been around for years and has been used to create music videos, nonconsensual pornography, and even political deception. The New York Times recently reported on synthetic media being weaponized in disinformation campaigns. According to a Graphika report, pro-Chinese disinformation actors used commercially available deepfake technology to generate videos of two fictitious characters. The AI-generated avatars were positioned as news anchors for a fabricated media outlet called “Wolf News.” The deepfake videos circulated on Facebook and Twitter before being removed. This marks one of the most high-profile manifestations of the weaponization of deepfakes for disinformation. Ease of use and access has made dissemination scalable.

The aforementioned videos were part of a pro-China misinformation campaign dubbed “spamouflage.” In these campaigns, political spam accounts plant content online and use other accounts to amplify it across various social media platforms. In this particular instance, one of the fictitious avatars, Anna, made a passionate speech in a robotic monotone voice supporting Burkina Faso’s new government. Facebook disabled an account connected to the pro-China deepfake videos after being contacted by The New York Times. Researchers found the use of deepfake technology more noteworthy than the actual impact of the videos, which were not seen by a large audience. As deepfake technology advances, it is believed that next-level deepfakes will be increasingly hard to detect, making them difficult to verify as real or fake.

Experts predict that by 2025, 90% of digital content online will be synthetic or generated by AI. Once a niche security concern, deepfake and synthetic media technologies have become increasingly sophisticated, efficient, and easy to access. Deepfake videos are well on their way to becoming hyperrealistic and indistinguishable from real footage. As these technologies advance, so does the risk of deepfake fraud. 

According to the Identity Theft Resource Center (ITRC)’s 17th Annual Data Breach Report, the number of data compromises in 2022 was 1,802, just 60 events short of the all-time high set in 2021. The potential for financial manipulation through deepfakes is real and could lead to severe losses. Fabricated or synthesized audio clips can be used to access accounts and sensitive personal and financial data. Deepfakes make impersonation easy and convincing and can be used to manipulate markets and behavior by propagating fake news or fabricated audio and video of executives and industry leaders.

Recently, a photo of a U.S. diplomat hunting in Pakistan with a dead markhor, the country’s national animal, went viral on Twitter. The post sparked an outcry among online users, who viewed it as an offensive display of power. Geo TV, a Pakistani news outlet, fact-checked the Tweet and concluded that the photo was fabricated, possibly synthetically.

This trend is only growing, and synthetic media is already being used for various forms of deception. In the past year, deepfake technology was used to make a synthetic version of Elon Musk as part of a cryptocurrency scam. In another instance, scammers deepfaked the Chief Communications Officer at Binance, Patrick Hillmann. Project teams were tricked into believing Hillmann was conducting meetings regarding opportunities to have tokens listed on the Binance platform. In June 2022, the Federal Bureau of Investigation (FBI) warned of deepfakes being used to interview for technology roles. The FBI has received increasing complaints that cybercriminals are using Personally Identifiable Information (PII) and deepfakes to apply for remote positions.

Deepfakes present various challenges that need to be addressed for society to avoid falling victim to fraudulent activities. Companies must stay vigilant, adopting best practices to help mitigate deepfake threats such as fraud and phishing attacks and to protect their brands. Advanced authentication and identity verification measures should be implemented to reduce the risk of such attacks.

Generative AI (Gen AI) is going mainstream. From ChatGPT’s explosive rise to AI-created avatars popping up across social media, AI is set to change the digital landscape. Generative AI refers to artificial intelligence that can generate digital content with minimal human prompting. Enter a quick text prompt and Gen AI tools like ChatGPT and DALL-E 2 will produce hyperrealistic content. While AI is well on its way to transforming businesses and revolutionizing industries, it raises many ethical questions.

Generative AI can be used to increase productivity, optimize tasks, reduce costs, and create new growth opportunities, but it has been known to produce biased or otherwise offensive outputs, which could lead to public backlash if released in an uncontrolled environment. Lack of transparency across digital ecosystems makes content difficult to trace, attribute, and identify, even for the savviest internet users. Additionally, Gen AI is trained on large datasets collected from artists, writers, academics, and everyday internet users without their knowledge or consent. Privacy and IP concerns about how the training data for Gen AI models have been collected are starting to surface.

On January 14, 2023, a group of artists filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt, alleging that these Generative AI companies are infringing on the rights of artists and other creators under the guise of alleged artificial intelligence. In another notable AI legal battle, Getty Images is suing the creators of Stable Diffusion for scraping images from its website. The adoption of Generative AI is also predicted to complicate the use of video evidence in legal procedures. WilmerHale’s Matthew Ferraro and Brent Gurney recently explained that the ease of synthetic media creation increases the risk of falsified evidence and makes it more likely for parties to challenge its integrity.

As artists and creators begin to navigate the Generative AI world, digital content authenticity (DCA) and provenance are solutions that can help them protect and document ownership of their content transparently and at scale. The Coalition for Content Provenance and Authenticity (C2PA) has developed a provenance-based standard to help address the issues of trust and authenticity online. The open technical standard provides publishers, creators, and consumers with opt-in ways to document and trace the origin and authenticity of original and synthetic content.

Nearly six months ago, Trusted Future launched with expert insights on the year to come in visual media. Dr. Eric Horvitz and Dr. Hany Farid both warned that the sophistication and democratization of deepfake technologies, “paired with the ubiquity of social media,” pose dangerous threats to business and society. Author Nina Schick opined that “It will also be a breakthrough year for synthetic video.” Those warnings proved prescient.

Soon after, visual and synthetic media was weaponized to produce false and misleading narratives in Ukraine. Everything from poorly made deepfakes with fabricated claims to cheapfakes of the Ukrainian President littered social media. The inability to discern real from fake has prompted major news outlets to create guides for their readers on how to spot fakes. Examples include Reuters Fact Check and The Washington Post (see here).

Issues around visual trust, authenticity, and transparency go well beyond the conflict zone. The problem is multifaceted and affects both business and society. Today it is at the forefront of our daily conversation. Former President Barack Obama – in his high-profile speech at Stanford University in April – noted that the implications of image deception, “for our entire social order are frightening and profound.” Soon after, Elon Musk’s reported purchase of Twitter reignited debates on trust, transparency in user-generated content, and speech protections.

Business is only starting to understand how trust in visual media will affect operations, products, and consumers. Dove created a campaign, through its Dove Self-Esteem Project, to illustrate how social media platforms are flooded with toxic beauty advice and manipulated imagery. The powerful campaign helps drive home the negative effects of untrustworthy visual media. According to Psychology Today, social media consumption can contribute to body dissatisfaction in both adults and children, and social media platforms that focus on visual images can be worse for body image than those that do not.

Government too has been active. The United States, along with 60 global partners, launched the Declaration for the Future of the Internet, with one of its primary themes being “Trust in the Digital Ecosystem.” Additionally, the European Union adopted the Digital Services Act (DSA) in part to establish transparency and a clear accountability framework for online platforms to better protect consumers and their fundamental rights online. One of the key goals of the act, for society at large, is the mitigation of systemic risks such as manipulation or disinformation, as online services can be misused, with manipulative algorithms amplifying the spread of disinformation for malicious purposes.

The elevation and increased velocity of the debate and discussion of online trust and transparency marks the start of a move toward a more transparent internet.

The good news is consumers increasingly want sustainable products. Simon-Kucher & Partners studied over 10,000 people across 17 countries and found that 86% have shifted purchasing towards more sustainability in the past 5 years. The Economist Intelligence Unit and WWF found a 71% increase in online searches for sustainable goods between 2015 and 2020. The Edelman Trust Barometer found that 86% of customers expect brands to take action beyond their business. Importantly, these studies found trends in both developing and developed countries.

The world’s big companies from tech to apparel have noticed, some aiming to be net-zero within 20 years. All this positive action, sadly, is still open to fraud. Most companies self-report their numbers and self-audit their practices. When regulated, some companies find creative ways to cheat. Volkswagen famously designed engines to reduce emissions only during an EPA test. Some even bribe regulators to pass inspections, as in high-profile coal mining cases.

Even when a company acts in good faith, the bigger issue is the global supply chain. The US EPA estimates that for many products, 90% of emissions are from suppliers – the parts, materials, transportation, and embedded energy. 

Companies have less control of their suppliers and even less control of their supplier’s supplier. This has led to high-profile scandals for the world’s top brands.

The key question becomes: how can private industry build faith, trust, and transparency in the environmental impact of global supply chains? Companies, NGOs, and governments are collaborating to establish standards, but those can be substandard and ambiguous to end consumers. ESG accounting practices, which have historically focused on a company’s direct impact, are evolving to report these supply chain, or “Scope 3,” impacts.

But for consumers, investors, and regulators to have real faith in these supply chain metrics, they need the evidence to be verifiable. They need ways to confirm that a bill of sale for green steel is authentic, that a power bill for solar was not photoshopped, and so on. There is also a demand to go beyond a regulator’s spot check, which might only assure that the factory was clean on the day of inspection.

Digital content provenance and image authentication technology gives each stakeholder a way to continuously monitor a factory and gives consumers elevated transparency. Imagine an environmental manager in private industry verifying that green materials arrived at a small supplier’s factory, that waste was not dumped into a nearby river, and that workers were treated well. Imagine that organization, say Apple or a major clothing manufacturer, being able to provide that info, for each individual product, to every customer, investor, and regulator.

If trust tech can increase our faith in the carbon impact of our economic choices, this faith can align consumers, producers, investors, and regulators to each do their part. Honest information is the key to a healthy planet.

Shouvik Banerjee is the Founder and CEO of AverPoint, a news product that promotes media literacy to counter disinformation. He spent 10 years in clean energy, including the Obama Administration, SolarCity, and McKinsey.

Matthew F. Ferraro

The increased use of rudimentary deception techniques in visual imagery and the growth of hyperreal synthetic media raise several significant and developing legal issues. Whether related to their positive uses for entertainment or accessibility, or their misuse for abuse and deception, image alteration and deception pose evolving questions for attorneys, legislators, and businesses. Here is an overview of some of these considerations:

Cheapfakes: Crudely edited, mislabeled, or decontextualized imagery, audio, and video, commonly called “cheapfakes,” are the most common form of manipulated media online today. Experience shows that it does not require cutting-edge synthetic media generation or alteration to propel false narratives. The surge of cheapfakes will only serve to deepen the distrust many feel toward all media, leading to a growth in the “liar’s dividend.” That is the benefit malefactors can draw from being able to dismiss authentic media as fake, because the public will be primed to doubt the veracity of all inconvenient evidence. These challenges will only grow with the democratization and greater believability of advanced synthetic media.

Deepfakes: The advent of artificial intelligence (AI)-generated synthetic media, “deepfakes,” creates new and controversial challenges that legislators, attorneys, and businesses must now grapple with, such as:

Ownership. Who owns a deepfake?  It is often an open question.  The source data that is fed into the AI generator to create synthetic images may belong to one or more people, affording the rightsholders copyright claims in the generated media.  Determining when the use of underlying source data to create a deepfake is “fair use” or when the output is sufficiently “transformed” from the source imagery that it is no longer covered by copyright law will vary case-by-case.  In the meantime, businesses that want to use deepfakes for commercial purposes will need to consider the provenance of source data, secure appropriate licenses, and address similar intellectual-property implications of their use.

Deepfake-Specific Laws. Legislators around the country have moved with notable speed to legislate around deepfakes.  So far, seven states have passed laws that bar deepfakes of some kind.  Congress has passed and the President has signed four laws related to deepfakes.  About thirty bills on deepfakes in roughly twelve states and Congress are under consideration. 

The laws that have been adopted fall into basically four categories: (1) banning most nonconsensual deepfake pornography, (2) barring many deepfakes of political candidates distributed before an election, (3) regulating the digital likenesses of certain individuals after death, and (4) directing reports and research on deepfake technology and potential countermeasures. 

Traditional Laws. Run-of-the-mill laws that protect a person’s reputation and image can also be applied to the new world of manipulated media.  For example, many states recognize a legal claim of “false light,” where a person can bring a lawsuit when something highly offensive is falsely implied to be true about them.  This standard could govern claims involving a victim’s face that was manipulated by AI to create explicit or offensive content.

Courtroom Evidence. In the recent trial of a January 6 rioter, the defendant’s attorney asked an FBI agent on the stand if an incriminating video of the defendant had been checked to see if it was a “deepfake.” The witness said the video showed no signs of manipulation or falsity. The mere line of questioning in such a high-profile case raises important broader questions for litigants about how media evidence will be treated in the courtroom going forward—evidence that, in the past, would usually have been accepted as trustworthy without question.

As Stanford’s Riana Pfefferkorn has observed, trial lawyers will increasingly need to consider how and when to introduce or attempt to exclude video evidence; judges will have to rule on those motions; experts may be called to verify (or challenge) the veracity of media; and jurors will have to decide for themselves if evidence can be accepted as true or may be the result of manipulation.  And, critically, lawyers will need to reflect on their ethical duties to submit to courts only authentic evidence, while not baselessly stoking jurors’ skepticism for advantage by claiming without good reason that evidence is manipulated. 

The proliferation of deepfakes will occasion what author Nina Schick calls a “social media revolution.”  This burgeoning upheaval has already impacted the legal world, with much more to follow.  Attorneys, businesses, technologists, and society at large will need to work together to address these issues in this new era.

Matthew F. Ferraro (@MatthewFFerraro), a former U.S. intelligence officer, is a visiting fellow at the National Security Institute at George Mason University and a counsel at Wilmer Cutler Pickering Hale & Dorr.

The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history, or provenance, of digital media content. On January 26, the C2PA held a groundbreaking event announcing and previewing the 1.0 version of the end-to-end open technical standard for certifying the source and history of media content. The event – Possibilities & Opportunities for the Future of the Internet in the Deepfake and Disinformation Era – featured policymakers, academics, and industry leaders weighing in on the future of responsible digital media creation, publication, and sharing. Here are highlights:

  • Dr. Eric Horvitz, Chief Scientific Officer at Microsoft, opened the event by noting that restoring trust in digital content is an ambitious goal that will require diverse perspectives and participation. Horvitz emphasized that it is encouraging to see governments, research institutions, and others begin to understand and embrace the provenance approach.
  • US Senator Rob Portman (R-OH) and US Senator Gary Peters (D-MI) delivered video messages explaining their work to introduce the bipartisan Deepfakes Task Force Act to help counter the malicious use of synthetic media with digital content provenance. Both Senators emphasized that synthetic media and the erosion of trust online represent significant national security and economic challenges.
  • Dana Rao, General Counsel and EVP at Adobe, hosted a fireside chat with White House Senior Policy Advisor for Technology Strategy, Lindsay Gorman. Gorman stressed that disinformation is a national security challenge in addition to a societal one and noted that without technologies that protect and authenticate information, authoritarians can rewrite history and avoid being held accountable.
  • Author Nina Schick moderated a panel featuring industry experts including Dr. Matt Turek, Program Manager at DARPA, Dr. Hany Farid, Professor at UC Berkeley, Sam Gregory, Program Director at WITNESS, and Todd O’Boyle, Senior Manager of Public Policy at Twitter. The panel discussed the current challenges to identifying image fraud and deception online and the possibilities, challenges, and utility of digital content provenance.
  • The C2PA offered a first-ever preview of the v1.0 technical specification with a short video that can be seen here. The preview was followed by a Q&A panel with experts from C2PA member companies – Adobe, BBC, Microsoft, and Truepic.
  • The event wrapped up with closing comments from Jamie Angus, Senior Controller of News Output and Commissioning at the BBC, who emphasized that the solutions the C2PA is working on reinforce the trust that news brands need to have with their audiences and help protect journalists operating in difficult or dangerous environments.

Click here to view the event.

We’ve taken an enormous step forward on this journey. Truepic has launched our iOS and Android Software Development Kit (SDK), called Truepic Lens. This new product allows any mobile application to integrate our award-winning secure camera technology directly into its own app.

The result? Any mobile app can now capture and display the industry’s most trusted digital photo. Even better — every photo captured through Lens conforms to the Coalition for Content Provenance and Authenticity’s (C2PA) open internet specification, launched at a special event on January 26th. At the event, we previewed the entire system in action, from capture to display.

Read the full article here

The Coalition for Content Provenance and Authenticity (C2PA) has helped raise awareness of the potential of digital content provenance as a counterweight to image and audio fabrication and synthesis. In 2021, various governments around the world recognized this potential and began introducing and passing legislation regarding digital content provenance and its utility to increase transparency online. We expect this trend to continue into 2022. Here are some notable pieces of legislation:

  • United States: Senators Gary Peters (D-MI) and Rob Portman (R-OH) introduced the Deepfakes Task Force Act (2021) to establish the National Deepfake and Digital Provenance Task Force, which will explore how the development and deployment of provenance standards could assist with reducing the proliferation of disinformation and digital content forgeries. Senator Portman introduced the bill on July 29th, 2021. The bipartisan bill will assist the Department of Homeland Security (DHS) in countering deepfake technology. The task force would be chaired by DHS and composed of experts from academia, government, civil society, and industry. It would be charged with exploring how digital content provenance could assist in reducing the spread of deepfakes, developing tools for content creators to authenticate content and its origin, and improving the ability of civil society and industry leaders to relay information about the source of deepfakes. This bill was unanimously reported out of committee in August 2021 and awaits a Senate floor vote.

  • UK: The UK Centre for Data Ethics and Innovation (CDEI) cited digital content provenance and emerging open standards from the C2PA as examples of how transparency can support platforms in dealing with misinformation. The CDEI issued the report to the UK Government in August 2021. The CDEI is a government expert body focused on the trustworthy use of data and AI, with a team of specialists whose expertise spans data policy, public engagement, and computer science. It is supported by an independent advisory board that brings people together from across sectors to shape recommendations for the government that support responsible innovation and help build trust in data and AI governance across the UK.

  • Australia: The Australian Code of Practice on Disinformation and Misinformation was signed by tech companies in response to the government’s request for a framework that provides safeguards against harm from such content and empowers users to make better-informed decisions about digital content. To date, the code has been adopted by Adobe, Apple, Facebook, Google, Microsoft, Redbubble, TikTok, and Twitter. All participating companies commit to protecting Australians against harm from online disinformation and misinformation using a range of measures that reduce its spread. Participating companies also commit to releasing an annual transparency report that will help improve understanding of misinformation and disinformation over time. The first set of reports was published on May 22nd, 2021, and is available here.
