Is imitation the sincerest form of flattery, or is it theft? Perhaps it comes down to the imitator. Text-to-image artificial intelligence (“AI”) systems, such as DALL-E 2, Midjourney, and Stable Diffusion, are trained on huge amounts of image data, including art, scraped from the web. As a result, these platforms often generate outputs that resemble (and may infringe) real artists’ work and style – albeit without the artists’ authorization. It is safe to say that artists are not impressed. To further complicate things, although intellectual property law guards against the misappropriation of individual works of art, that protection does not extend to an individual artist’s style.
Against this background, it is becoming difficult for artists to promote their work online without contributing infinitesimally to the creative capacity of generative AI. Many are now asking if it is possible to compensate creatives whose art is used in this way. One approach – courtesy of photo licensing service Shutterstock – goes some way towards addressing the issue.
Old contributor model, meet computer vision
Media content licensing services, such as Shutterstock, take contributions from photographers and artists and make them available for third parties to license. In these cases, the commercial interests of the licensor, licensee, and creative are straightforward. Customers pay to license an image, and a portion of this payment (in Shutterstock’s case 15-40 percent) goes to the creative who provided the intellectual property. The intellectual property issues in this scenario are equally clear-cut: if someone uses a Shutterstock image without a license, or for a purpose that falls outside its terms, it is a clear breach of the rights of the copyright holder (i.e., the photographer or other artist). However, Shutterstock’s terms of service also allow it to pursue a new way to generate income from intellectual property. Its current contributors’ site has a large focus on computer vision, which it defines as: “A scientific discipline that seeks to develop techniques to help computers ‘see’ and understand the content of digital images such as photographs and videos.”
Computer vision is not new. Have you ever told a website you are not a robot and identified some warped text or pictures of bicycles? If so, you have been actively training AI-run computer vision algorithms. Now, computer vision is allowing Shutterstock to create what it calls an “ethically sourced, totally clean, and extremely inclusive” AI image generator.
What makes Shutterstock’s approach ‘ethical’?
An immense amount of work goes into classifying millions of images to train the large generative models behind AI image generators, such as DALL-E 2, Midjourney, and Stable Diffusion. But services, such as Shutterstock, are uniquely positioned to do this, as they have access to high-quality images from some two million contributors, all of which are described in some level of detail. It is the perfect recipe for training such a model. (Getty makes a similar claim in the lawsuit it is waging against Stability AI, asserting that its images are “highly desirable” for use in training artificial intelligence programs, such as Stable Diffusion, “because of their high quality, and because they are accompanied by content-specific, detailed captions and rich metadata.”)
These models are essentially vast multidimensional neural networks; the network is fed training data, which it uses to create data points that combine visual and conceptual information. The more information there is, the more data points the network can create and link up.
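The idea of data points that combine visual and conceptual information can be made concrete with a toy sketch. Real systems learn embeddings with hundreds or thousands of dimensions from millions of image-caption pairs; the tiny hand-made vectors below are purely illustrative, and the concepts and values are invented for the example.

```python
# Illustrative sketch only: these 3-dimensional vectors are hypothetical.
# Real models learn far larger embeddings; the point is simply that concepts
# become points in a space, and related concepts sit close together.
import math

embeddings = {
    "cat":    [0.90, 0.10, 0.20],
    "kitten": [0.85, 0.15, 0.25],
    "bridge": [0.10, 0.90, 0.30],
}

def cosine_similarity(a, b):
    """Measure how close two data points sit in the learned space (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Related concepts score near 1; unrelated concepts score much lower.
print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))
print(cosine_similarity(embeddings["cat"], embeddings["bridge"]))
```

This is also why more training data helps: each new labeled image nudges these points, densifying the web of associations the network can draw on.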
This distinction between a collection of images and a constellation of abstract data points lies at the heart of the issue of compensating creatives whose work is used to train generative AI. Even in the case where a system has learned to associate a very specific image with a label, there is no meaningful way to draw a clear line from that training image to the outputs. In other words, we cannot really see what the systems measure or how they “understand” the concepts they learn.
Shutterstock’s solution is to compensate every contributor whose work is made available to a commercial partner for computer vision training. It describes the approach on its site: “We have established a Shutterstock Contributor Fund, which will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock’s library. Additionally, Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool.”
The amount of money that goes into the Shutterstock Contributor Fund will be proportional to the value of the dataset deal Shutterstock makes. But, of course, the fund will be split among a large proportion of Shutterstock’s contributors. Whatever equation Shutterstock develops to determine the fund’s size, it is worth remembering that any compensation is not the same as fair compensation. As such, Shutterstock’s model sets the stage for new debates about value and fairness. Arguably the most important debates will focus on how much specific individuals contributed to the “knowledge” gleaned by a trained neural network. There is not (and may never be) a way to measure this accurately.
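A back-of-the-envelope calculation shows why the split matters. Shutterstock has not published its formula, so every number below is an assumption – the fund share, the deal value, and the flat equal split (actual payouts could well be weighted by how often a contributor's work was used):

```python
# Hypothetical numbers only: Shutterstock has not disclosed its formula.
# This sketch shows how a large dataset deal can still yield a small
# per-contributor payout once the fund is divided up.
fund_share = 0.20         # assumed: 20% of the deal value goes into the fund
deal_value = 5_000_000    # assumed: a $5M dataset licensing deal
contributors = 2_000_000  # roughly the contributor count the article cites

fund = deal_value * fund_share
payout_per_contributor = fund / contributors  # assumes a flat equal split
print(f"${payout_per_contributor:.2f} per contributor")  # $0.50
```

Under these assumptions, a multimillion-dollar deal translates into cents per contributor – which is exactly why "any compensation" and "fair compensation" are not the same thing.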
Shutterstock has promised to give contributors a choice to opt out of future dataset deals. Its terms make it the first business of its type to address the ethics of providing contributors’ works for training generative AI (and other computer-vision-related uses). It offers what is perhaps the simplest solution yet to a highly fraught dilemma.
Time will tell if contributors themselves consider this approach fair. Intellectual property law may also evolve to help establish contributors’ rights, so it could be that Shutterstock is trying to get ahead of the curve. Either way, we can expect more give and take before everyone is happy.
Brendan Paul Murphy is a Lecturer in Digital Media at CQUniversity Australia. (This article was initially published by The Conversation.)