This month the FBI issued an official notification warning private industry of the "almost certain" use of synthetic content for fraud. The notice explained that such threats will likely emerge as extensions of spearphishing campaigns or social engineering attacks. It also described a new and emerging threat known as "Business Identity Compromise" (BIC), which leverages Generative Adversarial Networks (GANs) to create synthetic corporate personas or emulate existing employees. In these scenarios, according to the notice, the likely goal will be to inflict "financial and reputational" harm on victim businesses and organizations.
This warning expands the important discussion around deepfakes and synthetic media. To date, much of the public discourse around deepfakes has focused on non-consensual pornography and on national, global, and societal security threats, which are very real and concerning. But this warning demonstrates that businesses must understand the threats that synthetic media poses to private industry as well. As the FBI notes, this is a near-term threat scenario (12–18 months), and it is fair to assume it could lead to billions of dollars in theft, corporate espionage, and significant reputational harm to victim businesses. The notification comes the same month the FBI also announced a 20% increase in reported online financial crime, with losses of more than $4 billion. Businesses need to prepare, as bad actors will evolve and begin deploying malicious synthetic media to further defraud businesses, consumers, and industries online.
One can argue that businesses and financial markets are even more susceptible to the adverse effects of visual fabrications today because of the digitized nature of the modern economy. COVID-19 only accelerated the digitization of commerce, with online spending increasing by $183 billion in 2020 alone. According to the Adobe Digital Economy Index, Americans are on track to spend over $1 trillion online by 2022. This pre-existing trend, compounded by the global pandemic, forced private industry to shift operations, products, and services to digital platforms, which also opened new threat vectors for the deployment of weaponized synthetic media. If business operations are run via digital interaction and media, how can we ensure that sound decision-making is based on real, authentic media?
Unfortunately, there are no silver-bullet solutions for instantly proving media true or false, and there won't be in the near future. The rapid advancement of AI has created a race between bad actors and those working on detection techniques, and the sad reality is that as new ways to detect deepfakes are identified, bad actors adjust course to beat them. University of California San Diego researchers recently showed that existing deepfake detectors could be beaten 78–99% of the time, depending on how the detection model was trained.