Is the United Kingdom Getting AI Regulation Right?


June 12, 2023 - By Asress Adimi Gikay, TFL

The latest generation of artificial intelligence (“AI”), such as ChatGPT and Google’s conversational AI model Bard, is expected to revolutionize the way we live and work. These AI technologies could significantly improve education, healthcare, transport and welfare, but there are downsides, too: rampant infringement, jobs automated out of existence, surveillance abuses, and discrimination, including in healthcare and policing. Against this background, there is general agreement that AI needs to be subject to regulation, and governments across the globe are actively taking steps to draft – and in some early-mover cases, such as China, implement – legislation.

The European Union, for example, has proposed a risk-based approach, classifying AI systems according to the potential harms they pose. Meanwhile, the United Kingdom is proposing a different, pro-business approach. This year, the UK government published a white paper unveiling how it intends to regulate AI, with an emphasis on flexibility in order to avoid stifling innovation. The document favors voluntary compliance, with five principles meant to tackle AI risks. Strict enforcement of these principles by regulators could be added later if required. But is such an approach too lenient given the risks?

Crucial Components of an AI Framework

The UK approach differs from the EU’s risk-based regulation. The EU’s proposed AI Act prohibits certain AI uses, such as live facial recognition technology, where people shown on a camera feed are compared against police “watch lists,” in public spaces. The EU approach also creates stringent standards for so-called high-risk AI systems, including systems used to evaluate job applications, student admissions, and eligibility for loans and public services.

Meanwhile, the UK approach to AI regulation has three crucial components …

First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than implementing new AI-centered legislation. Second, five general principles – each consisting of several components – would be applied by regulators in conjunction with existing laws. These principles are: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. During initial implementation, regulators would not be legally required to enforce the principles; a statute imposing these obligations would be enacted later, if considered necessary. Organizations would, therefore, be expected to comply with the principles voluntarily in the first instance. Third, regulators could adapt the five principles to the subjects they cover, with support from a central coordinating body. So, there will not be a single enforcement authority.

Promising Approach to AI Regulation?

The UK’s regime is promising for a few reasons. Primarily, it promises to use evidence about AI in its correct context, rather than applying an example from one area to another inappropriately. Beyond that, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. And finally, there are advantages to its decentralized approach: if a single regulatory organization were to underperform, AI use would be affected across the board, whereas spreading oversight among several regulators avoids that risk.

How would it use evidence about AI? As AI’s risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a specific use of AI could be appropriated to propose drastic and inappropriate regulatory solutions. For instance, some U.S.-based internet companies use algorithms to determine a person’s sex based on facial features. These algorithms performed poorly when presented with photos of darker-skinned women. This finding has been cited in support of a ban on law enforcement use of face recognition technology in the UK. However, the two areas are quite different, and problems with gender classification do not imply a similar issue with facial recognition in law enforcement. These U.S. gender algorithms operate under relatively lower legal standards, whereas face recognition used by UK law enforcement undergoes rigorous testing and is deployed under strict legal requirements.

Another advantage of the UK approach is its adaptability. It can be difficult to predict potential risks, particularly with AI that could be put to uses its developers never foresaw, and with machine learning systems whose performance changes over time. The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities would be spread between different organizations. Centralizing AI oversight under a single national regulator could lead to inefficient enforcement; regulators with expertise in specific areas, such as transportation/aviation or financial markets, are arguably better suited to regulate the use of AI within their fields.

This decentralized approach could minimize the effects of corruption, regulators becoming preoccupied with concerns other than the public interest, and differing approaches to enforcement. It also avoids a single point of enforcement failure.

AI Enforcement and Coordination

Some businesses could – and inevitably, will – resist established AI standards. So, if (and when) regulators are granted enforcement powers, they should be able to levy fines where appropriate. At the same time, the public should also have the right to seek compensation for harms caused by AI systems. Enforcement need not undermine flexibility; regulators can still tighten or loosen standards as required. However, the UK framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, resulting in overlaps. For example, transport, insurance, and data protection authorities could all issue conflicting guidelines for self-driving cars. To tackle this, the UK’s white paper suggests establishing a central body that would ensure the harmonious implementation of guidance. It is vital to compel the different regulators to consult this organization rather than leaving the decision up to them.

The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country’s position as a leader in the area, the framework must be aligned with regulation elsewhere, especially the EU. Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the UK’s system of regulation for this transformative technology.


Asress Adimi Gikay is a Senior Lecturer in AI, Disruptive Innovation, and Law at Brunel University London. (This article was initially published by The Conversation.)
