Shortly after rumors leaked of former President Donald Trump’s impending indictment in March, images purporting to show his arrest appeared online. These images looked like news photos. However, they were fake and created by a generative artificial intelligence (“AI”) system. Generative AI – in the form of image generators like DALL-E, Midjourney and Stable Diffusion, and text generators such as Bard, ChatGPT, Chinchilla and LLaMA – has exploded in the public sphere. By combining clever machine-learning algorithms with billions of pieces of human-generated content, these systems can create an eerily realistic image from a caption, synthesize a speech in President Joe Biden’s (or Kim Kardashian’s) voice, replace one person’s likeness with another in a video, generate images of the Pope in a puffer jacket, or write a coherent 800-word op-ed from a title prompt.

Even in these early days, generative AI is capable of creating highly realistic content, with the average person unable to reliably distinguish an image of a real person from an AI-generated person. Although audio and video have not yet fully passed through the uncanny valley – images or models of people that are unsettling because they are close to but not quite realistic – they are likely to soon. When this happens, and it is all but guaranteed to, it will become increasingly easy to distort reality.

In this new world, it will be a snap to generate a video of a CEO saying her company’s profits are down 20 percent, which could lead to billions of dollars in lost market value, or to generate a video of a world leader threatening military action, which could trigger a geopolitical crisis, or to insert the likeness of anyone into a sexually explicit video.

Advances in generative AI will soon mean that fake but visually convincing content will proliferate online, leading to an even messier information ecosystem. A secondary consequence is that detractors will be able to easily dismiss as fake actual video evidence of everything from police violence and human rights violations to a world leader burning top-secret documents. As society stares down the barrel of what is almost certainly just the beginning of these advances in generative AI, there are reasonable and technologically feasible interventions that can be used to help mitigate these abuses. One key method comes in the form of watermarking.

Watermarks to Prove Provenance

There is a long history of marking documents and other items to prove their authenticity, indicate ownership and counter counterfeiting. Today, Getty Images, a massive image archive, adds a visible watermark to all digital images in its catalog. This allows customers to freely browse images while protecting Getty’s assets. (The company’s watermarks have come up in its copyright and trademark lawsuit against Stability AI.) Imperceptible digital watermarks are also used for digital rights management. A watermark can be added to a digital image by, for example, tweaking every 10th image pixel so that its color (typically a number in the range 0 to 255) is even-valued. Because this pixel tweaking is so minor, the watermark is imperceptible, and because this periodic pattern is unlikely to occur naturally and can easily be verified, it can be used to establish an image’s provenance.

Even medium-resolution images contain millions of pixels, which means that additional information can be embedded into the watermark, including a unique identifier that encodes the generating software and a unique user ID. This same type of imperceptible watermark can be applied to audio and video.
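To make the pixel-parity idea concrete, here is a minimal sketch in Python using NumPy. It hides a short bit payload, which could encode the generating software and a user ID as described above, by nudging every 10th pixel value to be even or odd. The stride and payload format are illustrative assumptions rather than any deployed scheme, and this simple version is not robust to the manipulations discussed next.

```python
import numpy as np

STRIDE = 10  # tweak every 10th pixel, as in the example above (illustrative choice)

def embed(image: np.ndarray, payload_bits: list[int]) -> np.ndarray:
    """Encode each payload bit in the parity of every 10th pixel value."""
    marked = image.astype(np.int32).reshape(-1)
    for i, bit in enumerate(payload_bits):
        idx = i * STRIDE
        if idx >= marked.size:
            break
        if marked[idx] % 2 != bit:
            # Nudge the value by one intensity level: imperceptible in an 8-bit image.
            marked[idx] += 1 if marked[idx] < 255 else -1
    return marked.reshape(image.shape).astype(np.uint8)

def extract(image: np.ndarray, n_bits: int) -> list[int]:
    """Recover the payload from the parity of the marked pixels."""
    flat = image.reshape(-1)
    return [int(flat[i * STRIDE]) % 2 for i in range(n_bits)]

# Example: embed an 8-bit identifier into a random grayscale image.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
watermarked = embed(image, [1, 0, 1, 1, 0, 0, 1, 0])
assert extract(watermarked, 8) == [1, 0, 1, 1, 0, 0, 1, 0]
```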

The ideal watermark is imperceptible and also resilient to simple manipulations like cropping, resizing, color adjustment and conversion between digital formats. Although the pixel color watermark example is not resilient because the color values can be changed, many watermarking strategies have been proposed that are robust – though not impervious – to attempts to remove them.

Watermarks & AI

These watermarks can be baked into the generative AI systems by watermarking all the training data, after which the generated content will contain the same watermark. This baked-in approach to watermarks is attractive because it means that generative AI tools can be open-sourced – as the image generator Stable Diffusion is – without concerns that a watermarking process could be removed from the image generator’s software. Stable Diffusion has a watermarking function, but because it is open source, anyone can simply remove that part of the code. At the same time, OpenAI is experimenting with a system to watermark ChatGPT’s creations. Characters in a paragraph cannot, of course, be tweaked like a pixel value, so text watermarking takes on a different form. 

Text-based generative AI works by producing the most reasonable next word in a sentence. For example, starting with the sentence fragment “an AI system can…,” ChatGPT will predict that the next word should be “learn,” “predict” or “understand.” Associated with each of these words is a probability corresponding to the likelihood of each word appearing next in the sentence. ChatGPT learned these probabilities from the large body of text it was trained on. Generated text can be watermarked by secretly tagging a subset of words and then biasing the selection of a word to be a synonymous tagged word. For example, the tagged word “comprehend” can be used instead of “understand.” By periodically biasing word selection in this way, a body of text is watermarked based on a particular distribution of tagged words. This approach will not work for short tweets but is generally effective with text of 800 or more words, depending on the specific watermark details.
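A highly simplified version of this word-tagging idea can be sketched in Python. The keyed hash used to tag words, the tiny synonym table and the bias parameter are all illustrative assumptions; a real system would bias the model’s token probabilities during generation rather than edit finished text, but the detection principle, counting how often tagged words appear, is the same.

```python
import hashlib
import random

SECRET_KEY = b"demo-watermark-key"  # hypothetical key known only to the provider
SYNONYMS = {"understand": "comprehend", "big": "large", "fast": "rapid"}  # toy table

def is_tagged(word: str) -> bool:
    """Deterministically tag roughly half of all words using a keyed hash."""
    digest = hashlib.sha256(SECRET_KEY + word.lower().encode()).digest()
    return digest[0] % 2 == 0

def watermark(words: list[str], bias: float = 0.8) -> list[str]:
    """Swap untagged words for tagged synonyms with probability `bias`."""
    out = []
    for word in words:
        alt = SYNONYMS.get(word.lower())
        if alt and not is_tagged(word) and is_tagged(alt) and random.random() < bias:
            out.append(alt)
        else:
            out.append(word)
    return out

def tagged_fraction(words: list[str]) -> float:
    """Watermarked text skews well above the roughly 0.5 expected by chance."""
    return sum(is_tagged(w) for w in words) / max(len(words), 1)
```

As the article notes, the statistical signal only becomes reliable over longer passages; in a short tweet, the tagged fraction of ordinary text fluctuates too much to be distinguishable from a watermark.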

Generative AI systems can watermark all their content, allowing for easier downstream identification and, if necessary, intervention. If the industry will not do this voluntarily, lawmakers could pass regulation to enforce this rule. Unscrupulous people will, of course, not comply with these standards. But, if the major online gatekeepers – Apple and Google app stores, Amazon, Google, Microsoft cloud services and GitHub – enforce these rules by banning noncompliant software, the harm will be significantly reduced.

Signing authentic content

Tackling the problem from the other end, a similar approach could be adopted to authenticate original audiovisual recordings at the point of capture. A specialized camera app could cryptographically sign the content as it is recorded. There is no way to tamper with this signature without leaving evidence of the attempt. The signature is then stored on a centralized list of trusted signatures. Although this approach does not apply to text, audiovisual content can then be verified as human-generated. The Coalition for Content Provenance and Authenticity (“C2PA”), a collaborative effort to create a standard for authenticating media, recently released an open specification to support this approach.
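The cryptographic core of this idea can be illustrated in a few lines of Python with the `cryptography` package. This is only a sketch of signing and verifying raw media bytes with an Ed25519 key pair; the actual C2PA specification defines a much richer manifest format, and a real camera app would keep the private key in secure hardware rather than generating it in software as done here.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Illustrative key pair; in practice the private key would live in the device's secure enclave.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_capture(media_bytes: bytes) -> bytes:
    """Sign the recording at the moment of capture."""
    return private_key.sign(media_bytes)

def verify_capture(media_bytes: bytes, signature: bytes) -> bool:
    """Any later change to the bytes makes verification fail."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

recording = b"...raw sensor data..."
signature = sign_capture(recording)
assert verify_capture(recording, signature)
assert not verify_capture(recording + b"tampered", signature)
```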

With major institutions including Adobe, Microsoft, Intel, BBC and many others joining this effort, the C2PA is well positioned to produce effective and widely deployed authentication technology.

The combined signing and watermarking of human-generated and AI-generated content will not prevent all forms of abuse, but it will provide some measure of protection. Any safeguards will have to be continually adapted and refined as adversaries find novel ways to weaponize the latest technologies. In the same way that society has been fighting a decades-long battle against other cyber threats like spam, malware and phishing, we should prepare ourselves for an equally protracted battle to defend against various forms of abuse perpetrated using generative AI.


Hany Farid is a Professor of Computer Science at the University of California, Berkeley. (This article was initially published by The Conversation.)

Type “Teddy bears working on new artificial intelligence research on the moon in the 1980s” into any of the recently released text-to-image artificial intelligence (“AI”) image generators, and after just a few seconds the sophisticated software will produce an eerily pertinent image. Seemingly bound only by your imagination, this trend in synthetic media has delighted many, inspired others, and struck fear in some. Google, research firm OpenAI and AI vendor Stability AI have each developed text-to-image generators powerful enough that some observers are questioning whether in the future people will be able to trust the photographic record.

Although their digital precursor dates back to 1997, the first synthetic images splashed onto the scene just five years ago. In their original incarnation, so-called generative adversarial networks (“GANs”) were the most common technique for synthesizing images of people, cats, landscapes and anything else. A GAN consists of two main parts: a generator and a discriminator. Each is a type of large neural network, which is a set of interconnected processors roughly analogous to neurons.

Tasked with synthesizing an image of a person, the AI generator starts with a random assortment of pixels and passes this image to the discriminator, which determines if it can distinguish the generated image from real faces. If it can, the discriminator provides feedback to the generator, which modifies some pixels and tries again. These two systems are pitted against each other in an adversarial loop. Eventually the discriminator is incapable of distinguishing the generated image from real images.
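The adversarial loop just described can be sketched in a few lines of PyTorch. The tiny fully connected networks, the 28-by-28 image size and the learning rates below are placeholder assumptions; real GANs use far larger convolutional architectures, but the back-and-forth between generator and discriminator is the same.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy sizes for illustration
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)

    # Discriminator: learn to tell real images apart from generated ones.
    fake_images = G(noise).detach()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: adjust its output so the discriminator labels it "real".
    g_loss = loss_fn(D(G(noise)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```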

Text-to-image

Just as people were starting to grapple with the consequences of GAN-generated deepfakes – including videos that show someone doing or saying something they didn’t – a new player emerged on the scene: text-to-image deepfakes. In this latest incarnation, a model is trained on a massive set of images, each captioned with a short text description. During training, each image is progressively corrupted until only visual noise remains, and a neural network is trained to reverse this corruption. Repeating this process hundreds of millions of times, the model learns how to convert pure noise into a coherent image that matches any caption.
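A toy version of that corrupt-and-reverse training objective looks like the following PyTorch sketch. The noise schedule, the number of steps and the `denoiser` network’s interface are placeholder assumptions standing in for the very large text-conditioned models described here; the point is only to show how an image is pushed toward pure noise and the network is trained to undo that corruption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STEPS = 1000
betas = torch.linspace(1e-4, 0.02, STEPS)        # how much noise each step adds
alphas_cum = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal remaining at step t

def training_loss(denoiser: nn.Module, images: torch.Tensor, caption_emb: torch.Tensor):
    """One training step: corrupt images to a random noise level, predict the noise."""
    t = torch.randint(0, STEPS, (images.size(0),))
    noise = torch.randn_like(images)
    a = alphas_cum[t].view(-1, 1, 1, 1)
    # Forward process: blend the clean image with noise; at large t it is nearly pure noise.
    noisy = a.sqrt() * images + (1 - a).sqrt() * noise
    # The network, conditioned on the caption embedding, learns to reverse the corruption.
    predicted_noise = denoiser(noisy, t, caption_emb)
    return F.mse_loss(predicted_noise, noise)
```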

While GANs are only capable of creating an image of a general category, text-to-image synthesis engines are more powerful. They can create nearly any image, including images that depict people and objects in specific and complex interactions, for instance “The president of the United States burning classified documents while sitting around a bonfire on the beach during sunset.” OpenAI’s text-to-image generator, DALL-E, took the internet by storm when it was unveiled on Jan. 5, 2021. A beta version of the tool was made available to 1 million users on July 20, 2022. Users around the world have found seemingly endless ways to prompt DALL-E, yielding delightful, bizarre and fantastical imagery.

A wide range of people, from computer scientists to legal scholars and regulators, however, have pondered the potential misuses of the technology. Deepfakes have already been used to create nonconsensual pornography, commit small- and large-scale fraud, and fuel disinformation campaigns. These even more powerful AI image generators could add jet fuel to these misuses.

Three AI generators, three different approaches

Aware of the potential abuses, Google declined to release its text-to-image technology. OpenAI took a more open, and yet still cautious, approach when it initially released its technology to only a few thousand users (myself included). They also placed guardrails on allowable text prompts, including no nudity, hate, violence or identifiable persons. Over time, OpenAI has expanded access, lowered some guardrails and added more features, including the ability to semantically modify and edit real photographs.

Stability AI took yet a different approach, opting for a full release of its Stable Diffusion model with no guardrails on what can be synthesized. In response to concerns about potential abuse, the company’s founder, Emad Mostaque, said, “Ultimately, it’s peoples’ responsibility as to whether they are ethical, moral and legal in how they operate this technology.”

Nevertheless, the second version of Stable Diffusion removed the ability to render NSFW content and images of children because some users had created child abuse images. In response to accusations of censorship, Mostaque pointed out that because Stable Diffusion is open source, users are free to add these features back at their discretion.

The genie is out of the bottle

Regardless of what you think of Google’s or OpenAI’s approach, Stability AI made their decisions largely irrelevant. Shortly after Stability AI’s open-source announcement, OpenAI lowered their guardrails on generating images of recognizable people. When it comes to this type of shared technology, society is at the mercy of the lowest common denominator – in this case, Stability AI.

Stability AI boasts that its open approach wrests powerful AI technology away from the few, placing it in the hands of the many. Chances are, few would be so quick to celebrate an infectious disease researcher publishing the formula for a deadly airborne virus created from kitchen ingredients while arguing that this information should be widely available. Image synthesis does not, of course, pose the same direct threat, but the continued erosion of trust has serious consequences, ranging from people’s confidence in election outcomes to how society responds to a global pandemic and climate change.

Moving forward, technologists will need to consider both the upsides and downsides of their technologies and build mitigation strategies before predictable harms occur. Researchers will have to continue to develop forensic techniques to distinguish real images from fakes. Regulators are going to have to start taking more seriously how these technologies are being weaponized against individuals, societies and democracies. And everyone is going to have to learn how to become more discerning and critical about how they consume information online.


Hany Farid is a Professor of Computer Science at the University of California, Berkeley. (This article was initially published by The Conversation.)