AI Technology Regulations: A Look at the Global Landscape

Parties engaged in artificial intelligence (“AI”) technology projects should be mindful of the regulatory landscape and the changes taking place within it. For example, the European Commission adopted its proposal for a regulation laying down harmonized rules on artificial intelligence – the “AI Act” – in April 2021. The proposal aims to provide AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI, robotics, and related technologies. Fast forward to December 2022, and the Council of the European Union reached agreement on its common position on a draft version of the AI Act. The text will now be debated and discussed by EU governments, the Commission, and the European Parliament once the Parliament agrees its own common position.

Despite that agreement at the EU level, there have been disagreements among key political groups, in particular over how the law classifies AI systems as “high risk” – many groups are keen to ensure that only truly high-risk use cases are included in the list of high-risk scenarios (contained in Annex III of the draft text). They are also seeking contractual freedom to allocate responsibility among the various operators along the value chain, and to avoid overlapping or competing obligations with existing legislation. The result of these disagreements is that the full parliamentary vote is now likely to be delayed until April 2023 at the earliest.

The current draft text seeks to distinguish AI from simpler software systems by defining AI as systems developed through machine learning approaches and logic- and knowledge-based approaches. It looks to prohibit certain AI practices (such as the use of AI for social scoring) and will create obligations and duties for those operating “high risk” applications. The proposed rules will also deal with enforcement after AI systems are placed on the market and will provide a governance structure at the European and national level. Once an AI system is on the market, designated authorities will carry out market surveillance, while providers will be subject to a post-market monitoring system and will have to report serious incidents and malfunctions.

The U.S.: A Voluntary Set of Standards

While not a regulator, the U.S. National Institute of Standards and Technology released version 1.0 of its AI Risk Management Framework on January 26, 2023, a voluntary set of standards intended to address risks in the design and use of AI products, services, and systems. Meanwhile, the EU-U.S. Trade and Technology Council (TTC) Joint Roadmap for Trustworthy AI and Risk Management was published in December 2022 “to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI by the EU and the U.S. in order to advance a shared interest in supporting international standardization efforts and promoting trustworthy AI on the basis of a shared dedication to democratic values and human rights.” The roadmap aims to “take practical steps to advance trustworthy AI and uphold a shared commitment to the Organisation for Economic Co-operation and Development Recommendation on AI.”

Regulations for AI in the UK

The United Kingdom is currently far from adopting a single regulatory framework for AI. In October 2022, the Department for Digital, Culture, Media & Sport and the Office for Artificial Intelligence launched a survey to understand the UK’s AI sector and how it is growing. This followed the July 2022 AI regulation policy paper and the September 2021 National AI Strategy; a key theme throughout these has been the government’s pro-innovation approach to regulating AI. Also in October 2022, an inquiry into the governance of AI was launched, and the public sessions of the inquiry began in January 2023. The UK Government is expected to publish a white paper on AI governance later this year setting out its position on the possible risks and harms that AI technologies may bring and the regulatory solutions.

One challenge identified in the January 2023 public sessions is the lack of a standard international definition of AI, with doubt expressed that a unifying definition will emerge. Governance may, therefore, be based on applications rather than on a universal conception of AI. On the “black box” and lack-of-explainability problem, discussion has centered on regulating the design of the algorithms that produce black box AI models, rather than trying to inspect the models to see how they produce their outputs. The reality for UK businesses using AI is that a less centralized approach will mean dealing with multiple regulators, including the communications regulator Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency.

In addition, the Data Protection and Digital Information Bill – which was laid before the UK Parliament in July 2022 – also includes measures on AI. The reasoning behind this sector-led approach seems to be that sector-specific regulators understand the context in which AI is being deployed within their own sectors and the kinds of harms that can occur. They also have the best understanding of the existing rules and requirements that are in place, and therefore of what may need to be built on or where future AI regulations may be needed.

However, in the January public sessions it was also acknowledged that while there is a tremendous amount of guidance, regulation, and standards out there (some of it overlapping), there are also gaps. These overlaps and gaps suggest a need for a mapping exercise and an allocated body to oversee it, such as the Office for AI, which can convene the right regulators to look at how to plug those gaps in a coherent and coordinated way.

In the meantime, the AI Standards Hub – an interactive online platform also launched in October 2022 – aims to help UK organizations navigate the evolving landscape of AI standardization and related policy developments, as well as channel the UK’s contribution to the development of international standards for AI. The UK will also look to other international initiatives, such as Singapore’s “AI Verify,” an AI governance testing framework and toolkit that allows industry, through a series of technical tests and process checks, to demonstrate its deployment of responsible AI directly to government.

A significant number of tech companies and other businesses will be looking to use AI technologies, and many of them will be contracting with overseas businesses. Managing regulatory risk will be challenging given the lack of alignment between regimes. It will, therefore, fall to the individual parties to a project to develop practices that enable them to comply with the relevant national frameworks.


Helen Armstrong is a Partner at international law firm RPC, where she specializes in resolution of complex technology and commercial disputes.

Ricky Cella is a Senior Associate on the IP & Technology team at RPC.
