With New Executive Order, President Biden Looks to Regulate AI

November 1, 2023 - By Toby Walsh

On Monday, U.S. President Joe Biden released a wide-ranging and ambitious executive order on artificial intelligence (“AI”) – catapulting the U.S. to the front of conversations about regulating AI. In doing so, the U.S. is leap-frogging other nations in the race to govern AI. Europe previously led the way with its AI Act, which was passed by the European Parliament in June 2023 but will not take full effect until 2025. Biden’s executive order is a grab bag of initiatives for regulating AI – some of them good, others seemingly half-baked. It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through intermediate harms like job losses, to longer-term harms, including the much-disputed existential threat AI may pose to humans.

In the U.S., lawmakers have been slow to pass significant regulation of big tech companies, and this executive order is likely an attempt both to sidestep an often-deadlocked Congress and to kick-start action. For example, the order calls on Congress to pass bipartisan data privacy legislation.

The executive order, which will reportedly be implemented over the next three months to one year, covers eight areas: (1) safety and security standards for AI; (2) privacy protections; (3) equity and civil rights; (4) consumer rights; (5) jobs; (6) innovation and competition; (7) international leadership; and (8) AI governance. 

On one hand, the order addresses many concerns raised by academics and the public. For example, one of its directives is to issue official guidance on how AI-generated content may be watermarked to reduce the risk from deepfakes. It also requires companies developing AI models to prove they are safe before they can be rolled out for wider use. President Biden said “that means companies must tell the government about the large-scale AI systems they are developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people.”

AI’s potentially disastrous use in warfare

At the same time, the order fails to address a number of pressing issues. For instance, it does not directly address how to deal with AI-powered autonomous weapons, a vexing topic that was under discussion over the past two weeks at the General Assembly of the United Nations. This concern should not be ignored. The Pentagon is developing swarms of low-cost autonomous drones as part of its recently announced Replicator program, and Ukraine has developed homegrown AI-powered attack drones that can identify and attack Russian forces without human intervention.

And what about protecting elections from AI-powered weapons of mass persuasion? A number of outlets have reported on how the recent election in Slovakia may have been influenced by deepfakes. Many experts are also concerned about the misuse of AI in the upcoming U.S. presidential election. 

Unless strict controls are implemented, we risk living in an age where nothing we see or hear online can be trusted. If this sounds like an exaggeration, consider that the U.S. Republican Party has already released a campaign advert that appears to be entirely generated by AI.

Missed opportunities

Many of the initiatives in the executive order could and should be replicated elsewhere. As the order requires, we should provide guidance to landlords, government programs, and government contractors on how to ensure AI algorithms are not being used to discriminate against individuals. We should likewise address algorithmic discrimination in the criminal justice system, where AI is increasingly being used in high-stakes settings, including sentencing, parole and probation, pre-trial release and detention, risk assessments, surveillance, and predictive policing.

Perhaps the most controversial aspect of the executive order is its treatment of the potential harms of the most powerful so-called “frontier” AI models. Some experts believe these models – “highly capable foundation models” being developed by companies such as OpenAI, Google, and Anthropic – pose an existential threat to humanity. Others believe that such concerns are overblown and might distract from more immediate harms, such as misinformation, infringement, and inequity, that are already hurting society.

Biden’s order invokes extraordinary war powers (specifically, the Defense Production Act of 1950, introduced during the Korean War) to require companies to notify the federal government when training such frontier models. It also requires that they share the results of “red-team” safety tests, in which internal hackers attack the software to probe it for bugs and vulnerabilities.

It is going to be difficult, and perhaps impossible, to police the development of frontier models. Among other things, the above directives will not stop companies from developing such models overseas, where the U.S. government has limited power. The open-source community can also develop them in a distributed fashion – one that makes the tech world “borderless.”

With the foregoing in mind, the executive order will likely have the greatest impact on the government itself, and how it goes about using AI, rather than on businesses. Nevertheless, it is a welcome piece of action. UK Prime Minister Rishi Sunak’s AI Safety Summit, taking place over the next two days, now looks to be something of a diplomatic talk fest in comparison.


Toby Walsh is a Professor of AI and a Research Group Leader at UNSW Sydney. (This article was initially published by The Conversation.)
