Truepic Blog

Quick Take: Generative AI’s Most Recent Ethical Challenge

Generative AI (Gen AI) is going mainstream. From ChatGPT’s explosive rise to AI-created avatars popping up across social media, AI is set to change the digital landscape. Generative AI refers to artificial intelligence that can generate digital content with minimal human prompting: enter a quick text prompt and Gen AI tools like ChatGPT and DALL-E 2 will produce hyperrealistic content. While AI is well on its way to transforming businesses and revolutionizing industries, it raises many ethical questions.

Generative AI can be used to increase productivity, optimize tasks, reduce costs, and create new growth opportunities, but it has been known to produce biased or otherwise offensive outputs, which could lead to public backlash if released in an uncontrolled environment. A lack of transparency across digital ecosystems makes content difficult to trace, attribute, and identify, even for the savviest internet users. Additionally, Gen AI is trained on large datasets collected from artists, writers, academics, and everyday internet users without their knowledge or consent, and privacy and IP concerns about how that training data was collected are starting to surface.

On January 14, 2023, a group of artists filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt, alleging that these Generative AI companies are infringing on the rights of artists and other creators under the guise of artificial intelligence. In another notable AI legal battle, Getty Images is suing the creators of Stable Diffusion for scraping images from its website. The adoption of Generative AI is also predicted to complicate the use of video evidence in legal proceedings. WilmerHale’s Matthew Ferraro and Brent Gurney recently explained that the ease of synthetic media creation increases the risk of falsified evidence and makes it more likely for parties to challenge its integrity.

As artists and creators begin to navigate the Generative AI world, digital content authenticity (DCA) and provenance are solutions that can help them protect and document ownership of their content transparently and at scale. The Coalition for Content Provenance and Authenticity (C2PA) has developed a provenance-based standard to help address the issues of trust and authenticity online. The open technical standard provides publishers, creators, and consumers with opt-in ways to document and trace the origin and authenticity of original and synthetic content.
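The core idea behind provenance can be made concrete with a toy sketch: bind a cryptographic hash of an asset to claims about who created it and with what tool, so anyone can later detect tampering. This is a simplified stand-in for what C2PA manifests do, not the actual C2PA format; the field names and functions below are invented for illustration, and real C2PA manifests are standardized, cryptographically signed structures embedded in the asset itself.

```python
# Illustrative sketch only: a toy provenance record, NOT the real C2PA
# manifest format. Field names here are invented for clarity.
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(asset_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind a content hash to claims about who made the asset and how."""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "generator_tool": tool,  # e.g. a Gen AI model name, for synthetic content
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_asset(asset_bytes: bytes, record: dict) -> bool:
    """Anyone can later re-hash the asset and compare against the record."""
    return hashlib.sha256(asset_bytes).hexdigest() == record["asset_sha256"]

image = b"...raw image bytes..."
record = make_provenance_record(image, creator="Jane Artist", tool="example-model")
print(json.dumps(record, indent=2))
print(verify_asset(image, record))         # True: asset unchanged
print(verify_asset(image + b"x", record))  # False: asset was altered
```

In the real standard, the record is signed by the creator or capture device and travels with the content, which is what lets downstream consumers trace origin without trusting the channel the file arrived through.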
