Shein’s RICO Lawsuit: A Look at the Role of Responsible AI

Image: Shein

December 11, 2023 - By Ridwaan Boda, Waldo Steyn, Shaaista Tayob

This summer, three independent designers filed a lawsuit against Shein in a federal court in California, alleging that the China-founded, Singapore-based ultra-fast fashion company sold exact copies of their works, thereby infringing their copyrights and violating the U.S. Racketeer Influenced and Corrupt Organizations (“RICO”) Act. Originally put in place to target organized crime, the RICO Act also provides for civil action to be taken against “racketeering,” which includes certain acts relating to criminal infringement of copyright. The trio of artists further alleged in their complaint that Shein has a “secret” algorithm that it utilizes to manipulate market data and search results and to unfairly drive out competitors, leading to monopolistic practices. That algorithm “could not work without generating the kinds of exact copies that can greatly damage an independent designer’s career,” the plaintiffs asserted. They also argued that “Shein’s artificial intelligence (‘AI’) is smart enough to misappropriate the pieces with the greatest commercial potential.”

The headline-making case is important, as it will provide a glimpse into the stance that courts may take in regulating AI going forward, and it will help shape recommendations regarding the ethical use of AI systems. For example, the plaintiffs claim that Shein’s algorithms have been programmed to generate false or misleading information on the Shein app regarding product popularity, customer reviews, or pricing trends. By artificially inflating its own performance metrics and suppressing negative feedback, Shein could have created a skewed perception of its products’ desirability and quality.

Such manipulation of market data could have severe implications, including deceiving consumers into making purchasing decisions based on inaccurate or biased information. This not only undermines consumer trust, but also hampers the ability of competitors to compete on a level playing field. By allegedly distorting market data, Shein’s AI algorithms could sway customers’ purchasing decisions, potentially giving the company an unfair advantage.

Against that background, the use of AI algorithms for manipulating market data highlights the potential risks and challenges associated with the deployment of advanced technologies, which some commentators argue necessitates the creation of Responsible AI-use regulation. Responsible AI refers to the framework of principles and practices aimed at ensuring the fair and ethical use of AI technologies. By integrating responsible AI practices, organizations can proactively minimize the risk of legal controversies such as the current Shein lawsuit. Actions that organizations can take include … 

Governance: A company’s board needs to ensure that proper structures and safeguards are put in place to support the adoption of Responsible AI. These may include Centers of Excellence, dedicated task teams, and/or other structures focused on ensuring that AI is adopted responsibly, in keeping with the values and culture of the company, and on mitigating legal, technical, and financial risk;

Policy implementation: A sound policy for the adoption of Responsible AI needs to be implemented. Such a policy would include not only mechanisms to mitigate legal, technical, and financial risk, but also measures to ensure that ethical boundaries have been established based on the company’s own value system;

Training: Companies should ensure that staff are trained at various levels and that training is adapted to the role each staff member plays in the company’s AI initiatives. For example: (i) legal and technical teams should be trained not only on the legal and technical risks of AI adoption, but also on AI ethics and financial risks; and (ii) the board of directors needs to be trained on both ethical and legal considerations in order to establish a culture of Responsible AI;

Contracting: Because companies will rely on third-party service providers to deploy AI solutions, they should establish sound contracting standards to mitigate the risk of a supplier providing tools and/or solutions that give rise to claims while escaping liability under restrictive liability provisions. The usual due diligence in supplier selection should also be applied;

Ethical impact assessments: Although not mandatory, these assessments are a useful tool for ensuring that any projects undertaken, or AI adopted, comply with the company’s policies and applicable laws;

Ethical reviews: As part of this, companies may wish to establish a distinct AI ethics review board, which would also approve projects based on the ethical impact assessments undertaken;

Pioneering industry initiatives or codes of conduct: Leading companies may wish to pioneer the adoption of industry-accepted codes of conduct, including obtaining approvals from regulatory authorities, such as the Information Regulator; and

Auditing and monitoring: As with any compliance initiative, boards should ensure that proper resources are dedicated to ensuring compliance with the interventions adopted, as well as to dealing with violations of company policies.

The Shein RICO lawsuit serves as a wake-up call for organizations to adopt responsible AI practices and address the potential legal pitfalls associated with advanced algorithms. By adhering to ethical frameworks and regulations, implementing robust data governance, conducting continuous testing and monitoring, and fostering collaboration and accountability, organizations can mitigate the risks of legal controversies arising from AI technologies, especially in the absence of regulation.


Ridwaan Boda is an Executive in ENSafrica’s Corporate Commercial practice, where he heads up the Technology, Media, and Telecommunications team.

Waldo Steyn is an Executive in ENSafrica’s Intellectual Property practice, where he heads up the commercial IP team.

Shaaista Tayob is an Associate in ENSafrica’s Corporate Commercial practice, specializing in information technology, data protection, privacy, and cybersecurity. 
