Truepic Blog

Quick Take: Experts Now Suggest 90% of Digital Content Will Be Synthetic by 2025

Experts predict that by 2025, 90% of digital content online will be synthetic or generated by AI. Once a niche security concern, deepfake and synthetic media technologies have become increasingly sophisticated, efficient, and easy to access. Deepfake videos are well on their way to becoming hyperrealistic and indistinguishable from real footage. As these technologies advance, so does the risk of deepfake fraud. 

According to the Identity Theft Resource Center (ITRC)’s 17th Annual Data Breach Report, the number of data compromises in 2022 was 1,802, just 60 events short of the all-time high set in 2021. The potential for financial manipulation through deepfakes is real and could lead to severe losses. Fabricated or synthesized audio clips can be used to access accounts and sensitive personal and financial data. Deepfakes make impersonation easy and convincing, and they can be used to manipulate markets and behavior by propagating fake news or fabricated audio and video of executives and industry leaders. 

Recently, a photo of a U.S. diplomat hunting in Pakistan with a dead markhor, the country’s national animal, went viral on Twitter. The post sparked an outcry among online users who viewed it as an offensive display of power. Geo TV, a Pakistani news outlet, fact-checked the Tweet and concluded that the photo was fabricated, possibly synthetically.

This trend is not only growing; synthetic media is already being used for various forms of deception. In the past year, deepfake technology was used to make a synthetic version of Elon Musk as part of a cryptocurrency scam. In another instance, scammers deepfaked the Chief Communications Officer at Binance, Patrick Hillmann. Project teams were tricked into believing Hillmann was conducting meetings regarding opportunities to have tokens listed on the Binance platform. In June 2022, the Federal Bureau of Investigation (FBI) warned of deepfakes being used in interviews for remote technology roles. The FBI has received increasing complaints that cybercriminals are using Personally Identifiable Information (PII) and deepfakes to apply for remote positions. 

Deepfakes present various challenges that must be addressed if society is to avoid falling victim to fraudulent activities. Companies must stay vigilant and adopt best practices to mitigate deepfake threats such as fraud and phishing attacks, and to protect their brands. Advanced authentication and identity verification measures should be implemented to reduce the risk of such attacks.
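To make the idea of media authentication concrete, here is a minimal illustrative sketch, not Truepic's actual product or method, of one building block: signing a media file's bytes at capture time so that any later edit is detectable. The key, function names, and sample bytes are all hypothetical; real provenance systems use asymmetric (public-key) signatures and standardized manifests rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret for illustration only; production systems
# would use asymmetric key pairs provisioned at capture time.
SECRET_KEY = b"example-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce an HMAC-SHA256 tag over the raw media bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Return True only if the media bytes are unmodified since signing."""
    expected = sign_media(media_bytes)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)

# Hypothetical media payload: any edit, even one byte, breaks verification.
original = b"\x89PNG...raw image bytes..."
tag = sign_media(original)

print(verify_media(original, tag))         # untouched media verifies
print(verify_media(original + b"x", tag))  # tampered media fails
```

The point of the sketch is that authenticity is established when the content is created, not inferred after the fact by trying to spot visual artifacts, which becomes unreliable as synthetic media grows more realistic.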

Request more information