On Monday, U.S. President Joe Biden released a wide-ranging and ambitious executive order on artificial intelligence (“AI”) – catapulting the U.S. to the front of conversations about regulating AI. In doing so, the U.S. is leap-frogging other states in the race to govern AI. Europe previously led the way with its AI Act, which was passed by the European Parliament in June 2023 but will not take full effect until 2025. Biden’s executive order is a grab bag of initiatives for regulating AI – some of them good, others seemingly half-baked. It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through intermediate harms like job losses, to longer-term harms, including the much-disputed existential threat AI may pose to humans.

In the U.S., lawmakers have been slow to pass significant regulation of big tech companies, and this executive order is likely an attempt both to sidestep an often-deadlocked Congress and to kick-start action. For example, the order calls on Congress to pass bipartisan data privacy legislation.

The executive order, which will reportedly be implemented over the next three months to one year, covers eight areas: (1) safety and security standards for AI; (2) privacy protections; (3) equity and civil rights; (4) consumer rights; (5) jobs; (6) innovation and competition; (7) international leadership; and (8) AI governance. 

On one hand, the order covers many concerns raised by academics and the public. For example, one of its directives is to issue official guidance on how AI-generated content may be watermarked to reduce the risk from deepfakes. It also requires companies developing AI models to prove they are safe before they can be rolled out for wider use. President Biden said “that means companies must tell the government about the large-scale AI systems they are developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people.”

AI’s potentially disastrous use in warfare

At the same time, the order fails to address a number of pressing issues. For instance, it does not directly address how to deal with AI robots, a vexing topic that was under discussion over the past two weeks at the General Assembly of the United Nations. This concern should not be ignored. The Pentagon is developing swarms of low-cost autonomous drones as part of its recently announced Replicator program. Similarly, Ukraine has developed homegrown AI-powered attack drones that can identify and attack Russian forces without human intervention.

And what about protecting elections from AI-powered weapons of mass persuasion? A number of outlets have reported on how the recent election in Slovakia may have been influenced by deepfakes. Many experts are also concerned about the misuse of AI in the upcoming U.S. presidential election. 

Unless strict controls are implemented, we risk living in an age where nothing you see or hear online can be trusted. If this sounds like an exaggeration, consider that the U.S. Republican Party has already released a campaign advert that appears to be entirely generated by AI.

Missed opportunities

Many of the initiatives in the executive order could and should be replicated elsewhere. As the order requires, we should provide guidance to landlords, government programs and government contractors on how to ensure AI algorithms are not used to discriminate against individuals. We should also, as the order does, address algorithmic discrimination in the criminal justice system, where AI is increasingly being used in high-stakes settings, including sentencing, parole and probation, pre-trial release and detention, risk assessments, surveillance, and predictive policing, to name a few.

Perhaps the most controversial aspect of the executive order is the part that addresses the potential harms of the most powerful so-called “frontier” AI models. Some experts believe these models – “highly capable foundation models” being developed by companies such as OpenAI, Google and Anthropic – pose an existential threat to humanity. Others believe such concerns are overblown and might distract from more immediate harms, such as misinformation, copyright infringement and inequity, that are already hurting society.

Biden’s order invokes extraordinary war powers (specifically the Defense Production Act of 1950, introduced during the Korean War) to require companies to notify the federal government when training such frontier models. It also requires them to share the results of “red-team” safety tests, in which internal hackers attack the software to probe for bugs and vulnerabilities.

It is going to be difficult, and perhaps impossible, to police the development of frontier models. Among other things, the above directives will not stop companies from developing such models overseas, where the U.S. government has limited power. The open-source community can also develop them in a distributed fashion – one that makes the tech world “borderless.”

With the foregoing in mind, the executive order will likely have its greatest impact on the government itself, and how it goes about using AI, rather than on businesses. Nevertheless, it is a welcome piece of action. UK Prime Minister Rishi Sunak’s AI Safety Summit, taking place over the next two days, now looks somewhat like a diplomatic talkfest in comparison.


Toby Walsh is a Professor of AI and a Research Group Leader at UNSW Sydney. (This article was initially published by The Conversation.)

Today, if you want to find a good moving company, you might ask your favorite search engine – Google, Bing, or DuckDuckGo perhaps – for some advice. After wading past half a page of adverts, you get a load of links to articles about moving companies. You click on one of the links and finally read about how to pick a good one. But not for much longer. In a major reveal this week, Google announced plans to add its latest artificial intelligence (“AI”) chatbot, called Bard and built on its LaMDA language model, to the Google search engine. Ask Bard how to find a good moving company, and it will reply almost immediately with a logical eight-step plan: starting with reading reviews and getting quotes, and ending with taking up references. No more wading through pages of links; the answer is immediate.

Microsoft responded swiftly to Google, saying it would incorporate the ChatGPT chatbot into its search engine, Bing. It was only recently that Microsoft announced it would invest $10 billion in OpenAI, the company behind ChatGPT, on top of a previous investment of a reported $1 billion in 2019. ChatGPT has already been added to Microsoft’s Teams software. You can expect it to turn up soon in Word, where it will write paragraphs for you. In Outlook it will compose entire emails, and in PowerPoint it will help you prepare slides for your next talk. Not to be outdone, Chinese web giant Baidu has also sprung into action. It recently announced its latest chatbot would be released in March. Baidu’s chatbot will reportedly have 50 percent more parameters than ChatGPT and will be bilingual. The company’s share price jumped 15 percent in response.

AI-driven search

Google, along with the other tech giants, has been using AI in search for many years already. AI algorithms, for example, order the search results Google returns. The difference now is that instead of searching based on the words you type, these new search engines will try to “understand” your question. And instead of sending you links, they will try to answer the question, too. But the new chatbot technology is far from perfect. ChatGPT sometimes just makes stuff up. Chatbots can also be tricked into saying things that are inappropriate, offensive, or illegal – although researchers are working hard to reduce this.
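To make the contrast concrete, here is a minimal sketch of the difference between matching keywords and matching meaning. It uses an openly available sentence-embedding model purely for illustration; it is not the system Google or Microsoft actually deploy in their search engines, and the model name and example texts below are assumptions made for the sketch.

```python
# A toy illustration of "semantic" search: score documents by how close their
# meaning is to the question, rather than by shared keywords.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open model, illustrative only

query = "How do I pick a good moving company?"
documents = [
    "Checklist for hiring movers: read reviews, get several quotes, take up references.",
    "Moving averages are a common tool for analysing trends in stock prices.",
]

# Simple keyword overlap would rate both documents as relevant (both contain "moving").
# Embedding similarity captures that only the first one actually answers the question.
scores = util.cos_sim(model.encode(query), model.encode(documents))[0]
for doc, score in sorted(zip(documents, scores), key=lambda x: -float(x[1])):
    print(f"{float(score):.2f}  {doc}")
```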

For Google, the New York Times has described this not just as an AI race but as a race to survive. When ChatGPT first came out late last year, alarm bells rang for the search giant. Google’s founders, Larry Page and Sergey Brin, returned from their outside activities to oversee the response. Advertising revenue from Google Search results contributes about three-quarters of the $283 billion annual revenue of Alphabet, Google’s parent company. If people start using AI chatbots to answer their questions rather than Google Search, what will happen to that income? Even if Google users stick with Google but get their answers directly from Bard, how will Google make money when no links are being clicked anymore?

Microsoft may see this as an opportunity for its search engine, Bing, to overtake Google. It’s not out of the question that it will. In the 1990s, before Google came out, I was very happy with AltaVista – the best search engine of the day. But I quickly jumped ship as soon as a better search experience arrived.

The artificial intelligence arms race

Google had previously not made its LaMDA chatbot available to the public due to concerns about it being misused or misunderstood. Indeed, it famously fired one of its engineers, Blake Lemoine, after he claimed LaMDA was sentient.

There are a host of risks associated with big tech’s rush to cement the future of AI search. For one, if tech companies won’t make as much money from selling links, what new income streams will they create? Will they try to sell information gleaned from our interactions with search chatbots? And what about people who will use these chatbots for base purposes? They may be perfect for writing personalized and persuasive messages to scam unsuspecting users – or to flood social media with conspiracy theories. Not to mention we have already seen ChatGPT do a good job of answering most homework questions. For now, public schools in New South Wales, Queensland, Victoria, Western Australia and Tasmania have banned its use to prevent cheating – but it seems unlikely they could (or should) ban access to Google or Bing.

When Apple launched the Macintosh, it was the start of a revolution. Rather than typing cryptic instructions, we could just point and click on a screen. That revolution continued with the launch of Apple’s iPhone – an interface that shrank computers and the web into the palm of our hand. Perhaps the biggest impact from AI-driven search tools will be on how we interact with the myriad ever-smarter devices in our lives. We will stop pointing, clicking, and touching, and will instead start having entire conversations with our devices. We can only speculate on what this might mean in the longer term. But, for better or worse, how we interact with computers is about to change.


Toby Walsh is a Professor of AI and Research Group Leader at UNSW Sydney. (This article was initially published by The Conversation.)

ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released in December, and in its first five days hit a million users. It is being used so much that its servers have reached capacity several times. OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There is even a Chrome extension that lets you do both, side by side.) But what if we never know the secret sauce behind ChatGPT’s capabilities? The chatbot takes advantage of a number of technical advances published in open scientific literature in the past couple of decades. But any innovations unique to it are secret. OpenAI could well be trying to build a technical and business moat to keep others out.

What it can (and can’t) do

ChatGPT is very capable. Want a haiku on chatbots? It can do that. How about a joke about chatbots? No problem. ChatGPT can do many other tricks. It can write computer code to a user’s specification, draft business letters or rental contracts, compose homework essays and even pass university exams. Ultimately, ChatGPT is a bit like autocomplete on your phone. Your phone’s autocomplete is trained on a dictionary of words, so it can complete the word you are typing. ChatGPT is trained on pretty much all of the web, and can therefore complete whole sentences – or even whole paragraphs. However, it does not understand what it is saying, just which words are most likely to come next.
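To make the autocomplete analogy concrete, the short sketch below asks an openly available language model (GPT-2, standing in here for ChatGPT’s far larger, proprietary model) which words are most likely to come next. The prompt is made up for the example, and note that the model simply ranks plausible continuations; nothing in it checks whether they are true.

```python
# Next-word prediction with a small open model: the same basic trick ChatGPT
# scales up, using GPT-2 as a freely available stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "To choose a good moving company, you should first"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # a score for every word in the vocabulary

probs = torch.softmax(logits[0, -1], dim=-1)  # probabilities for the *next* token only
top = torch.topk(probs, 5)

# Print the five most likely continuations and their probabilities.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {float(p):.3f}")
```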

In the past, advances in AI have been accompanied by peer-reviewed literature. In 2018, for example, when the Google Brain team developed the BERT neural network on which many modern natural language processing systems are based (ChatGPT builds on the same underlying transformer architecture), the methods were published in peer-reviewed scientific papers and the code was open-sourced. And in 2021, DeepMind’s AlphaFold 2, its protein-folding software, was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.

Following the release of ChatGPT, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced. To understand why ChatGPT could be kept secret, you have to understand a little about the company behind it. OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” AI in a way that “benefits humanity as a whole.” Elon Musk, Peter Thiel and other leading tech figures pledged $1 billion towards its goals. Their thinking was that we could not trust for-profit companies to develop increasingly capable AI that aligned with humanity’s prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.

In 2019, OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a $1 billion investment from Microsoft so it could scale and compete with the tech giants. It seems money may have gotten in the way of OpenAI’s initial plans for openness.

Profiting from users

On top of this, OpenAI appears to be using feedback from users to filter out the fake answers ChatGPT hallucinates. According to its blog, OpenAI initially used reinforcement learning in ChatGPT to downrank fake and/or problematic answers using a costly hand-constructed training set. But ChatGPT now appears to be tuned by feedback from its more than a million users. I imagine this sort of human feedback would be prohibitively expensive to acquire any other way. We are now facing the prospect of a significant advance in AI using methods that are not described in the scientific literature and with datasets restricted to a company that appears to be open only in name.
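As a rough sketch of how that feedback can be put to work (OpenAI has not released its training code, so the toy scorer and hand-written preference pairs below are entirely illustrative), one standard recipe is to train a “reward model” on pairs of answers where a human preferred one over the other, and then use it to downrank poor answers or as the reward signal for further fine-tuning.

```python
# Toy reward model trained from pairwise human preferences (a Bradley-Terry style
# objective). Everything here is illustrative: real systems use a large language
# model as the scorer and far more human judgements.
import torch
import torch.nn as nn

def featurize(text: str, dim: int = 256) -> torch.Tensor:
    """Crude hashed bag-of-words vector, standing in for a real language model."""
    v = torch.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

reward_model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Each pair: (answer the human preferred, answer the human rejected).
preferences = [
    ("The capital of Australia is Canberra.", "The capital of Australia is Sydney."),
    ("I'm not sure; please check a reliable source.", "Here is a made-up statistic: 73.4%."),
]

for step in range(200):
    loss = torch.zeros(1)
    for preferred, rejected in preferences:
        r_good = reward_model(featurize(preferred))
        r_bad = reward_model(featurize(rejected))
        # Push the preferred answer's score above the rejected one's.
        loss = loss - torch.log(torch.sigmoid(r_good - r_bad))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained scorer can now rank candidate answers: higher score, more preferred.
print(float(reward_model(featurize("The capital of Australia is Canberra."))))
```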

Where next?

In the past decade, AI’s rapid advance has been in large part due to openness by academics and businesses alike. All the major AI tools we have are open-sourced. But in the race to develop more capable AI, that may be ending. If openness in AI dwindles, we may see advances in this field slow down as a result. We may also see new monopolies develop. 

And if history is anything to go by, we know a lack of transparency is a trigger for bad behavior in tech spaces. So, while we go on to laud (or critique) ChatGPT, we should not overlook the circumstances in which it has come to us. Unless we are careful, the very thing that seems to mark the golden age of AI may in fact mark its end.


Toby Walsh is a Professor of AI and Research Group Leader at UNSW Sydney. (This article was initially published by The Conversation.)