As part of its effort to fight deepfakes, a coalition of tech companies co-led by Adobe has finalized the details for a standard way to verify how a photo or video was captured and to document any subsequent edits.
Why it matters: It’s increasingly easy to create a video or image that looks legitimate but has been altered to completely change its meaning.
The latest: Adobe and its partners — including Microsoft, Arm, Intel, Truepic and the BBC — will announce Wednesday that they have finalized version 1.0 of their standard for digital content provenance. That means companies can implement the approach without worrying that the specification will change and break their products.
How it works: The new standard, developed over the last year, records when and where an image or video was first created, as well as any changes that have been made to it since.
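The approach can be sketched as a toy provenance manifest: a record tied to the file's cryptographic hash, listing its origin and a chain of edits. The field names and structure below are illustrative assumptions for this sketch, not the actual specification, which defines signed claims and exact schemas.

```python
import hashlib
import json


def make_manifest(asset_bytes, creator, captured_at):
    """Build a hypothetical provenance manifest recording the asset's
    hash, who created it, and when. Field names are illustrative only."""
    return {
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "captured_at": captured_at,
        "edits": [],
    }


def record_edit(manifest, old_bytes, new_bytes, action):
    """Append an edit entry linking the pre- and post-edit hashes,
    so a viewer can trace what changed and in what order."""
    manifest["edits"].append({
        "action": action,
        "before": hashlib.sha256(old_bytes).hexdigest(),
        "after": hashlib.sha256(new_bytes).hexdigest(),
    })
    return manifest


# Example: a "photo" is captured, then cropped in an editor.
original = b"raw pixel data"
cropped = b"cropped pixel data"
manifest = make_manifest(original, "Example Photographer", "2022-01-26T12:00:00Z")
record_edit(manifest, original, cropped, "crop")
print(json.dumps(manifest, indent=2))
```

A real implementation would cryptographically sign each claim so the history itself cannot be quietly rewritten; this sketch only shows the shape of the data a viewer could inspect.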
“The speed with which this was developed and implemented was pretty much unprecedented,” Adobe’s Andy Parsons told Axios, saying it reflects the need to fight a growing misinformation problem, as well as to give content creators a way to prove their work is indeed their own.
Last year Adobe showed how the technology could work within Photoshop, using a not-yet-final version of the specification.
Adobe plans to release the code it used via open source so that others can include authentication data in their own apps.
Between the lines: The next part of the work is educating the public on what authentication means, since an image or video can go through this system and still be significantly altered. The standard guarantees only that viewers will be able to see what has been changed.
What’s next: Adobe wants to add an option to authenticate audio-only recordings as well.