The developer behind a generative AI-powered app that enables users to “swap” faces with “well-known individuals and fictional characters” is looking to escape a proposed class action lawsuit being waged against it in a California federal court. According to the motion to dismiss that it filed on May 31, NeoCortext, Inc., the company behind Reface, claims that the suit that Kyland Young lodged against it in April should be tossed out on the basis that the reality TV personality not only fails to adequately plead a right of publicity claim, but that even if he could, the claim is preempted by the Copyright Act and barred by the First Amendment.

For some background: Young filed suit against NeoCortext, accusing the company behind the deepfake app of running afoul of California’s right of publicity law by enabling users to swap faces with famous figures – albeit without ever receiving authorization from those well-known individuals to use their likenesses. In his complaint, Young asserts that NeoCortext has “commercially exploit[ed] his and thousands of other actors, musicians, athletes, celebrities, and other well-known individuals’ names, voices, photographs, or likenesses to sell paid subscriptions to its smartphone application, Reface, without their permission,” thereby giving rise to a single cause of action under the California Right of Publicity Statute (Cal. Civ. Code § 3344).

Examples of face swaps from Reface

Fast forward to May 31 and NeoCortext is angling to get Young’s complaint dismissed on three key grounds … 

Preemption – The Reface developer claims that Young has brought “a copyright infringement action masquerading as a right of publicity case” in an attempt to avoid the fact that “as one of many performers in [the] shows [in which he appears], Young almost certainly does not own the copyrights in the shows or photo stills from them,” the latter of which NeoCortext uses to enable those who use its app to “swap” faces and create “deepfake” imagery. (The core of Young’s right of publicity claim, NeoCortext contends, is that it “used photographs and videos of him from the CBS television program, Big Brother, in the free version of its Reface app.”) “Faced with that inconvenient fact, [Young] instead claims [that NeoCortext] has used his likeness without his consent.”

The problem, per NeoCortext, is that “where a right of publicity claim is based entirely on display, reproduction, or modification of a copyrighted work, like an episode of a TV show, the Copyright Act preempts the claim.” In furtherance of this argument, NeoCortext asserts that Young’s right of publicity claim centers on “rights that are equivalent to those protected by copyright law,” as Young “does not identify any use of his name, voice, photograph, or likeness independent of [its] use of the copyrighted photos or videos in which [he] is depicted.” Instead, he claims that NeoCortext violated his right of publicity by “displaying photos he appears [in] … in its online database” and “allowing end users to ‘generate a new watermarked image or video where the individual depicted in the catalogue has his or her face swapped’ with the face that was uploaded by the free user.”

As such, Young’s claim “presumes that Reface displays an expressive work – his photo or clips from Big Brother – and allows users to create and distribute derivative works from that work without his permission, both of which are exclusive rights under the copyright law.” 

First Amendment – “Even if copyright did not preempt [Young’s] claim, the First Amendment bars it,” NeoCortext maintains, asserting that by way of his right of publicity claim, Young is “seek[ing] to quash the creative efforts of Reface users.” However, “because the uses of likenesses were purely to enable users to create their own unique, sometimes humorous and absurd expressions, the First Amendment protects the use.” NeoCortext claims that the language in Young’s complaint “acknowledges” the “transformative purpose” of the Reface app, “alleging that the app ‘uses a [generative] AI algorithm to allow users to swap faces with actors, musicians, athletes, celebrities, and/or other well-known individuals.’” And since “the very purpose of Reface is to transform a photo or video in which [Young’s] (or others’) image appears into a new work in which [his] face does not appear,” NeoCortext asserts that its “display of the original work is a necessary precursor to this transformative process.”

Failure to Plead a Prima Facie Claim – Finally, NeoCortext asserts that Young falls short of pleading a prima facie right of publicity claim under California law, which requires a plaintiff to allege that the defendant “knowingly uses [his] name, voice, signature, photograph, or likeness” for advertising purposes. Specifically, NeoCortext argues that Young “does not allege that [it] knowingly used [his] name, photographs, and likeness, or that his name was even used in the first instance … nor does [he] sufficiently allege that its use of a watermark [on images created by the free version of the Reface app] constitutes advertising.”

On the knowledge front, NeoCortext asserts that “nothing in the complaint indicates that it knew that photos and GIFs containing [Young’s] face were included in the database,” which consists of a “library of movie and show clips and images from online sources, such as mybestgif.com, tenor.com, Google Video, and Bing Video.” In the same vein, Young also fails to sufficiently plead that it uses his name in the Reface app, per NeoCortext, which states that the complaint “does not allege that users can search for [Young] by name, or whether video clips in which he appears are returned by a search for ‘Big Brother.’” And even if users could search for him by name, Young’s complaint “fails to allege that the search is of data maintained by [NeoCortext] instead of the third party sources of photos and video clips like Tenor and Google video.”

With the foregoing in mind, NeoCortext requests that the court grant its motion to dismiss Young’s complaint. 

And an anti-SLAPP motion to strike … Not finished there, NeoCortext has also lodged a motion to strike Young’s right of publicity claim under California’s anti-SLAPP statute on the basis that: (1) the display of images of celebrities and other public figures in Reface consists of statements made in a public forum in connection with issues of public interest; (2) Young cannot demonstrate a probability of prevailing on his right of publicity claim (for the reasons set out in NeoCortext’s motion to dismiss); and (3) Young fails to plead a prima facie violation of his right of publicity.

THE BIGGER PICTURE: Young’s lawsuit is one of the latest to center on the outputs of generative AI – joining cases like the ones that have been lodged against Stability AI and co. In light of the rush by companies to adopt AI, Young’s case “may foretell the emergence of a new breed of class action litigation brought about by artificial intelligence,” Covington & Burling’s Zach Glasser wrote in a recent note, stating that “intellectual property class actions are traditionally relatively rare, but generative AI makes it possible to create a large number of potentially infringing – but not identical – new works with one technology.”

When a single technology is involved, he asserts that “plaintiffs like Young may allege that there are enough common questions to make a class action appropriate,” while “defense lawyers may come to rely on core intellectual property doctrines and defenses – e.g., substantial similarity, fair use, and the First Amendment – in opposing class certification in ways they have not before.” 

More broadly, other name, image or likeness (“NIL”) issues will almost certainly come about as a result of the rise of generative AI. The Reface app appears to consciously use celebrity images in order to enable users to swap their faces with such famed figures. However, given that the models that drive generative AI outputs are trained on sizable quantities of data from the web (including images in many cases), this raises “the possibility that some user prompts may cause the output to include the NIL of a celebrity,” according to Sheppard Mullin’s James Gatto, who states that “responsible companies are taking proactive steps to minimize the likelihood that their generative AI tools inadvertently violate the right of publicity.” Some examples of these steps include “attempting to filter out celebrity images from those used to train the generative AI models and filtering prompts to prevent users from requesting outputs that are directed to celebrity-based NIL.”

The case is Kyland Young v. NeoCortext, Inc., 2:23-cv-02496 (C.D. Cal.).

OpenAI CEO Sam Altman urged lawmakers to consider how they could regulate artificial intelligence (“AI”) during his Senate testimony on May 16. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting, but what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks. Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly. 

Altman’s suggestions have highlighted important issues, but they do not provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

An agency to regulate AI?

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated hiring tools pose different risks than, for example, the use of AI in spam filters.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks. Federal agencies – such as the Equal Employment Opportunity Commission and the Federal Trade Commission – have already issued guidelines on some of the risks inherent in AI. Beyond those, the Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it is meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills, such as the Algorithmic Accountability Act. That would have the effect of imposing accountability in much the same way as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations, such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

Licensing auditors, not companies

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice, and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices. Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example. Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy, and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It is also important to recognize that greater data accountability and transparency may impose new restrictions on organizations. 

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards. Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

AI monopolies?

What was also missing in Altman’s testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models. Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.

Proving technology firms’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. The most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.


Anjana Susarla is a Professor of Information Systems at Michigan State University. (This article was initially published by The Conversation.)

The rush to deploy powerful new generative artificial intelligence (“AI”) technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing and using these technologies implement AI “ethically” (i.e., in a manner that falls in line with a larger environmental, social, and corporate governance (“ESG”) framework aimed at managing risks). But what, exactly, does that mean? The straightforward answer would be to align a business’s operations with one or more of the dozens of sets of AI ethics principles that governments, multi-stakeholder groups, and academics have produced. But that is easier said than done. 

Two years of interviews with and surveys of AI ethics professionals across a range of sectors, which were aimed at understanding how they sought to achieve ethical AI – and what they might be missing – revealed that pursuing AI ethics on the ground is less about mapping ethical principles onto corporate actions than it is about implementing management structures and processes that enable an organization to spot and mitigate threats. This is likely to be disappointing news for organizations looking for unambiguous guidance that avoids gray areas, and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.

Grappling with ethical uncertainties

Our study centered on those responsible for managing AI ethics issues at major companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers, whose titles ranged from privacy officer and privacy counsel to one that was new at the time but increasingly common today: Data ethics officer. Conversations with these AI ethics managers produced four main takeaways. 

First, along with its many benefits, business use of AI poses substantial risks, and the companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality, and labor displacement. In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon’s employees were men. The tool accordingly learned to reject female candidates. Unable to fix the problem, Amazon ultimately had to scrap the project. Generative AI raises additional worries about misinformation and hate speech at large scale and infringement of intellectual property.

Second, companies that pursue ethical AI do so largely for strategic reasons. They want to sustain trust among customers, business partners, and employees. And they want to preempt or prepare for emerging regulations. The Facebook-Cambridge Analytica scandal – in which Cambridge Analytica used Facebook user data, shared without consent, to infer the users’ psychological types and target them with manipulative political ads – showed that the unethical use of advanced analytics can eviscerate a company’s reputation or even, as in the case of Cambridge Analytica itself, bring it down. The companies we spoke to wanted to be viewed, instead, as responsible stewards of people’s data.

The challenge that AI ethics managers faced was figuring out how best to achieve “ethical AI.” They looked first to AI ethics principles, particularly those rooted in bioethics or human rights principles, but found them insufficient. It was not just that there are many competing sets of principles. It was that justice, fairness, beneficence, autonomy, and other such principles are contested and subject to interpretation and can conflict with one another. 

This led to our third takeaway: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. “We stopped after 34 pages of questions,” the manager said.

Fourth, professionals grappling with ethical uncertainties turned to organizational structures and procedures to arrive at judgments about what to do. Some of these were clearly inadequate. But others, while still largely in development, were more helpful. These included: (1) Hiring an AI ethics officer to build and oversee the program; (2) establishing an internal AI ethics committee to weigh and decide hard issues; (3) crafting data ethics checklists and requiring front-line data scientists to fill them out; (4) reaching out to academics, former regulators and advocates for alternative perspectives; (5) conducting algorithmic impact assessments of the type already in use in their ESG and privacy governance frameworks. 

Ethics as responsible decision-making

The key idea that emerged from our study is this: Companies seeking to use AI ethically as part of a larger ESG structure should not expect to discover a simple set of principles that delivers correct answers from an all-knowing, God’s-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions end up being imperfect. 

In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles. This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles – though they remain part of the story – and more on adopting decision-making structures and processes to ensure that they consider the impacts, viewpoints, and public expectations that should inform their business decisions.

Ultimately, laws and regulations will need to provide substantive benchmarks for organizations to aim for. But the structures and processes of responsible decision-making and an ESG-minded approach to AI are a place to start and should, over time, help to build the knowledge needed to craft protective and workable substantive legal standards. Indeed, the emerging law and policy of AI focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using these systems to make hiring decisions. Members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance and other such consequential decisions. 

These laws emphasize processes that address in advance AI’s many threats. 

Some of the developers of generative AI have taken a very different approach. Sam Altman, the CEO of OpenAI, initially explained that, in releasing ChatGPT to the public, the company sought to give the chatbot “enough exposure to the real world that you find some of the misuse cases you wouldn’t have thought of so that you can build better tools.” To us, that is not responsible AI. It is treating human beings as guinea pigs in a risky experiment. Altman’s call at a May 2023 Senate hearing for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. Maintaining public trust, and avoiding harm to society, will require companies to face up more fully to their responsibilities.


Dennis Hirsch is a Professor of Law and Computer Science, and the Director of the Program on Data and Governance at The Ohio State University. 

Piers Norris Turner is an Associate Professor of Philosophy & PPE Coordinator and the Director of the Center for Ethics and Human Values at The Ohio State University. 

This article was initially published by The Conversation.

Companies across industries have been investing in and making use of machine learning/artificial intelligence (“AI”) to varying extents for years, with some of the most immediate use cases in retail being AI-driven personalized product recommendations, optimized pricing, inventory monitoring, and automatic product cataloguing, the latter of which was teased by Farfetch several years ago. So, while AI is not a new type of technology, the implementation of AI – and in particular, generative AI – seems to have reached a fever pitch in recent months, as tech titans like Google, Meta, and Microsoft have made headlines with their AI-centric initiatives/investments, and more businesses in the retail segment (and beyond) have started looking to utilize – or at least, experiment with – AI-powered tools that stand to help them be “more productive, get to market faster, and serve customers better.”

Against this background, we take a running look at which companies are using AI across their businesses (and when), and what major players in the retail space are saying about the rise of machine learning and how they are putting this tech to work for them …

May 24 – Myntra

Myntra announced that it is using ChatGPT to assist customers in searching for products on its fashion e-commerce platform. “We are arguably the first fashion, beauty and lifestyle platform, globally, to roll out this feature to the entire customer base at this scale,” said Raghu Krishnananda, chief product and technology officer at Flipkart-owned Myntra. He noted that “this latest innovation will empower our customers to express their fashion needs to Myntra in an intuitive manner and allow them to choose looks from over 2 million styles.” The Bengaluru, India-headquartered company revealed that the “MyFashion GPT” tech – which was developed by an in-house team at Myntra using ChatGPT 3.5 – enables Myntra shoppers to search for specific fashion products by typing text in a manner resembling natural speech. The ChatGPT response is then processed by Myntra’s search ecosystem to show curated lists of products. 
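For readers curious about the mechanics, a pipeline of the sort Myntra describes – a conversational query rewritten by a large language model into search terms that the platform’s existing search system can act on – might look something like the minimal sketch below. The model, prompt, and function names are illustrative assumptions on our part, not Myntra’s actual implementation.

```python
# Minimal sketch: turn a shopper's natural-language request into catalog search
# keywords via an LLM, then hand those keywords to the platform's existing search.
# Model, prompt, and function names are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def extract_search_terms(user_query: str) -> list[str]:
    """Ask the LLM to convert a conversational fashion query into search keywords."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": (
                "Convert the shopper's request into a JSON array of concise "
                "product search keywords. Respond with the JSON array only."
            )},
            {"role": "user", "content": user_query},
        ],
    )
    # Assumes the model follows the instruction and returns a valid JSON array.
    return json.loads(response.choices[0].message.content)

def search_catalog(keywords: list[str]) -> list[dict]:
    """Placeholder for the retailer's own search ecosystem (e.g., a product index)."""
    ...

# Example: a conversational request becomes keyword queries against the catalog.
terms = extract_search_terms("I need an outfit for a beach wedding in Goa next month")
products = search_catalog(terms)
```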

May 18 – Farfetch

In a Q1 2023 earnings call on May 18, Farfetch founder and CEO José Neves spoke at some length about how the e-commerce company has been making use of AI in recent years and how it plans to do so in the future. Primarily, Neves stated that the “recent developments that large language models are bringing to the field of AI” are among the areas that Farfetch management is “most excited” about. With Farfetch having been “active in this space for many years now,” he said that Farfetch’s “longstanding partnership with Microsoft has opened up the opportunity to access the most advanced version of ChatGPT, and our tech teams have been developing several concrete applications of ChatGPT for the luxury space,” which could be “a significant development for Farfetch.”

Neves further claimed that “large language models open up areas like search and discoverability and storytelling of our brand catalog to provide a much easier to use hyper personalized interface for our luxury customers,” noting that at the same time large language models also offer “added potential applications to augment the productivity and quality of providing customer service and creating product descriptions, for example.”

To date, Neves said that Farfetch has been “laser focused … [on] deploying AI and machine learning algorithms to personalization, [which] is one of the vectors of growth and the opportunities we see in the marketplace.” Going forward, he stated that he is “very excited by the near-term prospects of rolling out consumer facing applications of these new AI developments,” and is looking forward to “opportunities for Farfetch platform solutions to collaborate with brands in developing AI applications to enhance their own digital channels.”

May 10 – Adore Me/Victoria’s Secret 

Google revealed at its I/O conference that Adore Me and its owner Victoria’s Secret are among the companies utilizing its artificial intelligence products, with Victoria’s Secret using AI in Google Docs to draft ad copy.

May 4 – Shopify 

During its Q1 2023 earnings call on May 4, Shopify President Harley Finkelstein revealed that the e-commerce software provider launched a new AI tool called Shopify Magic, which assists merchants in drafting language for their product descriptions. “Just list a few details about your product or keywords you want to rank for in search engines, and the tool will automatically generate a product description for you,” the company stated in a recent release. The launch followed Shopify’s rollout of “Shop at AI,” which Finkelstein described as “the coolest shopping concierge on the planet, whereby you as a consumer … can browse through hundreds of millions of products and you can say things like, ‘I want to have a barbecue and here’s the theme’ and it will suggest great products, and you can buy it right in line right through the shopping concierge.”
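As a rough illustration of how a tool of the Shopify Magic variety could work under the hood – a few product details and target keywords folded into a prompt, with an LLM drafting the copy – consider the hedged sketch below. The prompt wording, model, and parameters are our own assumptions, not Shopify’s actual implementation.

```python
# Rough sketch: generate a product description from a few details and SEO keywords.
# Prompt, model, and parameters are illustrative assumptions, not Shopify's own code.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def draft_product_description(details: str, seo_keywords: list[str]) -> str:
    prompt = (
        "Write a short, engaging e-commerce product description.\n"
        f"Product details: {details}\n"
        f"Work these search keywords in naturally: {', '.join(seo_keywords)}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # allow some creative variation in the copy
    )
    return response.choices[0].message.content

print(draft_product_description(
    "Organic cotton crewneck t-shirt, relaxed fit, available in five colors",
    ["organic cotton t-shirt", "sustainable basics"],
))
```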

Reflecting on Shopify’s adoption of AI more broadly, Finkelstein stated, “I think we are very fortunate to be amongst the companies with the best chances of using AI to help our customers, our merchants. And that’s how we think about the usage of AI here. How do we integrate it into the tools that help us build and ship better products to our merchants. You’re already seeing that in certain areas of Shopify.”   

Apr 20 – Valentino

Valentino and chat platform GameOn Technology announced a partnership that will see the Italian fashion brand incorporate AI-powered chat into its Spring and Summer global activation, “Unboxing Valentino.” GameOn’s intelligent chat platform uses select elements of GPT technology to power authentic conversational experiences, enabling Valentino to create dynamic social interactions. For example, customers can type utterances like “shipping,” “inspire me,” or even “style icon quiz” within the Valentino app to receive assistance on anything from customer care to product discovery. The parties said in a statement that the consumer interactions “will leverage GPT technology, within guardrails set by GameOn, to reduce transaction risk for Valentino and drive instantaneous, accurate feedback for customers while still driving a personalized and safe experience.”

Apr 19 – Zalando

In April, Zalando revealed the impending launch of a beta version of a fashion assistant powered by ChatGPT across its app and web platforms. The Berlin-based fashion e-commerce platform said that the chatbot will enable it to “unlock the potential of generative AI to enhance the experience of discovering and shopping for fashion online.”

With the new fashion assistant, Zalando says that “customers will be able to navigate through Zalando’s assortment using their own words or fashion terms, making the process more intuitive and natural. For example, if a customer asks, ‘What should I wear for a wedding in Santorini in July?’, Zalando’s fashion assistant is able to understand that this is a formal event, what the weather is in Santorini in July, and therefore, provide a written explanation with recommendations for clothing based on that input. This could be combined in the future with customer preferences, such as brands they follow and products available in their sizes, to deliver a more personalized selection of products.”

Mar 30 – KNXT

KNXT, the innovation arm of Gucci-owner Kering, unveiled the “first AI personal shopper leveraging OpenAI’s ChatGPT” in order to “reinvent the way we shop online.” According to KNXT, the generative AI bot – called /madeline – provides consumers with a way to avoid the “endless scrolling” that comes with e-commerce via a new way “to find the perfect luxury pieces from prestigious houses.”

Mar 27 – Walmart 

Walmart is hardly a new adopter of AI, but its senior vice president of tech strategy and commercialization, Anshu Bhardwaj, nonetheless shed light on how the retail behemoth is using AI and machine learning now. She told CNBC that, among other things, Walmart has “trained its algorithms to discern the different brands and their inventory positions, taking into account how much light there is or how deep the shelf is, with more than 95 percent accuracy. When a product gets to a pre-determined level, the stock room is automatically alerted so that the item is always available.”

“This is how we close the loop. We never want to be out of stock on any item,” Bhardwaj said, noting that AI is also powering the Walmart shopping app. “For example, if a customer orders Pampers on the app, it can now recognize when this customer last ordered the product and whether the size is still appropriate.” 
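The restocking logic Bhardwaj describes – a vision system estimates on-shelf counts, and anything at or below a pre-determined level triggers a stockroom alert – boils down to something like the simplified sketch below. The data structures and alert function are illustrative assumptions; Walmart’s actual systems are, of course, far more elaborate.

```python
# Simplified sketch of threshold-based restocking alerts. A computer-vision model
# (not shown) is assumed to supply the estimated on-shelf unit counts.
from dataclasses import dataclass

@dataclass
class ShelfReading:
    sku: str
    brand: str
    estimated_units: int    # count inferred from shelf images by the vision model
    reorder_threshold: int  # pre-determined level at which the stock room is alerted

def find_items_to_restock(readings: list[ShelfReading]) -> list[ShelfReading]:
    """Return items whose detected shelf count has fallen to the reorder point."""
    return [r for r in readings if r.estimated_units <= r.reorder_threshold]

def alert_stock_room(items: list[ShelfReading]) -> None:
    for item in items:
        print(f"Restock needed: {item.brand} {item.sku} "
              f"({item.estimated_units} left, threshold {item.reorder_threshold})")

readings = [
    ShelfReading("PAMPERS-M-58", "Pampers", estimated_units=4, reorder_threshold=10),
    ShelfReading("TIDE-POD-42", "Tide", estimated_units=25, reorder_threshold=8),
]
alert_stock_room(find_items_to_restock(readings))
```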

Mar 27 – Levi’s 

Denim-maker Levi Strauss announced in March that it is planning to test uses of AI-generated models on its e-commerce channels in partnership with Lalaland.ai, a digital fashion studio that creates realistic AI-generated fashion models. Since “most products advertised on the Levi’s app or website can only be viewed on a single clothing model,” according to The Verge, Levi’s has touted the adoption of AI models as a way to introduce more diversity into the marketing of its products. The company stated, “Lalaland.ai’s technology, and AI more broadly, can potentially assist us by allowing us to publish more images of our products on a range of body types more quickly.”

Mar 17 – Secoo

Chinese luxury e-commerce platform SECOO announced in March that it will combine the advantages of OpenAI’s new GPT-4 tech and Baidu’s ERNIE Bot, China’s answer to ChatGPT, to better “understand [its] users’ needs, improve its intelligent marketing capabilities, [and] explore more intelligent luxury goods marketing models.” SECOO said that it is angling to use the AI chatbot combo to “complete product recommendations, selling point explanations, discount promotions, and generate visual images and videos,” ultimately helping to reduce costs.

Feb 9 – Tapestry 

Coach, Kate Spade, and Stuart Weitzman owner Tapestry Inc.’s CEO Joanne Crevoiserat said in a Q2 earnings call in February that the company is “leverag[ing] new data analytics capabilities to optimize our product allocation processes, such as utilizing artificial intelligence to forecast customer demand, and better position inventory and stores.” The result: “An increase of inventory availability and help to ensure our product was in the right place at the right time, as we match supply with demand to help deliver superior customer experiences.”

This article was initially published on April 15 and has been updated accordingly.

Generative artificial intelligence (“AI”) is transforming the way people learn and create. Used correctly, this technology has the potential to create content, products, and experiences that were once unimaginable. However, its rapid advancement has raised legal concerns, including issues of copyright infringement, data privacy, and liability – challenges that are not limited to any one type of business, as generative AI tools can be used across many settings and industries. Making use of these new tools can come at a steep price if AI use runs afoul of legal requirements. With this in mind, it is worth considering what steps companies can take to leverage the power of generative AI while actively mitigating the associated legal risks.

AI and Copyright Infringement

The ability of generative AI to produce original content, such as music, images, and text, has created new challenges in intellectual property (“IP”) law. Companies must ensure that their use of AI-generated content does not infringe on the rights of copyright holders, and it is currently unclear to what extent the output of such models is protected by copyright. To mitigate these risks, companies should carefully evaluate the use cases of generative AI and consider using dedicated AI models trained on data that is legally obtained with appropriate licenses in place. Lawsuits have already been filed, in which the plaintiffs allege that the use of images generated by AI models infringes the copyrights in the images contained in the training data. 

Companies using content created by AI tools should consider establishing guidelines for the use of such AI-generated content, especially since such output may not be protected by copyright everywhere. This can present an issue, especially if the output is crucial to the company’s product, since it will be harder to take legal action against copycats and counterfeiters. The law is still developing on this point and the outcome may differ across jurisdictions. In the European Union, for instance, a copyrightable work generally needs to be the (human) author’s intellectual creation, a condition that is not met by AI. At the same time, the U.S. Copyright Office has issued guidelines stating that the output of generative AI tools is generally not protected, whereas copyright law in the United Kingdom potentially does protect computer-generated works where there was no human involvement, although this area is under review.

Data Privacy and Security

Data privacy is a critical issue when training, developing, and using AI tools. Generative AI models carry high risks because of the vast amount of data used to train them. There is a risk that personal data used to train these models was not used lawfully or could be reverse engineered by asking the AI the right questions, creating both privacy and security risks. As such, any business developing or using generative AI will need to ensure that it is doing so in compliance with local laws, such as the General Data Protection Regulation (“GDPR”) in the EU and the UK GDPR in the UK.

The first step on this front is to identify whether personal data (which is defined widely to include information relating to an identified or identifiable natural person) is being used at all. In the event that personal data is used for development, this should be for a specific purpose and under a specific legal basis. The personal data will need to be used in line with legal principles, and special consideration will need to be given to how individuals could exercise their data rights. For example, would it be possible to provide any individual with access to information about them?

When using AI to create outputs, these should be monitored for any potential data leakages that could amount to a data breach. For instance, the fact that an individual has published information about themselves on social media does not necessarily mean it is legal to use that information for other purposes, such as creating a report about potential customers to target in an advertising campaign.
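In practice, output monitoring of this kind often starts with something as simple as screening AI-generated text for obvious personal-data patterns before it is released or stored. The sketch below is a bare-bones example of that idea; the patterns and handling are illustrative assumptions, and a real compliance program would go considerably further (named-entity detection, human review workflows, logging, and so on).

```python
# Bare-bones sketch: screen AI-generated text for obvious personal-data patterns
# (emails, phone numbers) before it is released or stored. Patterns are illustrative
# assumptions; real monitoring would be far more comprehensive.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d ().-]{7,}\d"),
}

def screen_output(text: str) -> dict[str, list[str]]:
    """Return any personal-data-like strings found in an AI-generated output."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

output = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567 about the offer."
findings = screen_output(output)
if findings:
    # Flag for human review rather than releasing the output automatically.
    print("Potential personal data detected:", findings)
```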

Contracts and Confidentiality

Before implementing or permitting the use of any generative AI tool, companies should also check the terms under which the tool is provided. These terms may restrict how the output can be used or give the provider of the tool broad rights in anything used as a prompt or other input. This is particularly important if tools are used to translate, summarize, or modify internal documents, which, aside from containing personal data, may also include information that the company would rather keep proprietary or confidential. Uploading such information to a third-party service could breach non-disclosure agreements and trigger serious liability risks.

AI and Sector-Specific Regulation

Beyond generally applicable laws, international businesses should be aware that specific legislation covering the use of AI is being developed in the EU. The current draft legislation creates obligations for companies based on the risk that the AI creates. Where AI is used in a high-risk scenario, the providers and users of these systems will need to do more to meet compliance requirements (while some applications are deemed to pose an unacceptable risk). In contrast, the UK has recently put out a white paper stating that AI will not have specific regulation; instead, it will be up to sector-specific regulators. How generative AI falls within either of these frameworks will depend on the context in which it is used. Therefore, any business planning to use generative AI to offer international products or services should consider EU and UK legal stances early in development to mitigate the risks of potential fines or the requirement to redevelop that product or service.

THE BOTTOM LINE: Generative AI offers tremendous potential for companies to innovate, streamline, and increase their efficiency. However, businesses must be diligent in addressing the legal risks associated with the technology. By implementing, monitoring, and enforcing policies based on the guidelines outlined above, companies can harness the power of generative AI while mitigating potential legal pitfalls.


Felix Hilgert is a Partner at Osborne Clarke, where he focuses on technology and video games, and helping North American companies expand and succeed abroad. 

Emily Barwell is an associate on Osborne Clarke’s U.S. team who specializes in data protection and technology contracts.