Who will govern AI? The race of nations to regulate artificial intelligence

Artificial intelligence (AI) is a very broad term: it can refer to many activities undertaken by computing machines, with or without human intervention. Our familiarity with AI technologies depends largely on where they play a role in our lives, for example in facial recognition tools, chatbots, photo editing software or self-driving cars.

The term “artificial intelligence” is also evocative of tech giants – Google, Meta, Alibaba, Baidu – and emerging players – OpenAI and Anthropic, among others. While governments come to mind less easily, they are the ones who shape the rules under which AI systems operate.

Since 2016, tech-forward regions and nations in Europe, Asia-Pacific and North America have implemented regulations targeting artificial intelligence. Other nations, such as Australia [editor's note: where the authors of this article work], are lagging behind and are still studying the possibility of adopting such rules.

There are currently more than 1,600 public policies and strategies on AI around the world. The European Union, China, the United States and the United Kingdom have emerged as leading figures in AI development and governance, as underscored by the international AI Safety Summit held in the United Kingdom in early November.

Accelerating AI regulation

AI regulatory efforts began to accelerate in April 2021, when the EU proposed an initial regulatory framework called the AI Act. These rules aim to set obligations for providers and users based on the risks associated with different AI technologies.

While the European AI Act was still pending, China moved ahead with its own AI regulations. In Chinese media, policymakers have spoken of their desire to be first movers and to provide global leadership in AI development and governance.

While the EU has taken a comprehensive approach, China has regulated specific aspects of AI one after another. These range from “algorithmic recommendations” (such as those used by platforms like YouTube) to image and voice synthesis, technologies used to generate “deepfakes”, and generative AI.

Chinese AI governance will be supplemented by further regulations yet to come. This iterative process allows regulators to build bureaucratic know-how and regulatory capacity, and leaves flexibility to enact new legislation in the face of emerging risks.

A warning for the United States?

Progress on Chinese AI regulations may have been a wake-up call for the United States. In April, an influential lawmaker, Chuck Schumer, said his country should not “allow China to lead on innovation or write the rules of the road” when it comes to AI.

On October 30, 2023, the White House issued an executive order on safe, secure and trustworthy AI. The order attempts to address very broad questions of equity and civil rights, while also focusing on specific applications of the technology.

Alongside the dominant players, countries with growing IT sectors, such as Japan, Taiwan, Brazil, Italy, Sri Lanka and India, have also sought to implement defensive strategies to mitigate potential risks associated with widespread AI integration.

These global AI regulations reflect a race against foreign influence. Geopolitically, the United States competes with China, whether economically or militarily. The EU emphasizes establishing its own digital sovereignty and strives to be independent of the United States.

At the national level, these regulations can be seen as favoring large incumbent technology companies over emerging competitors. This is because it is often costly to comply with legislation, requiring resources that small businesses may lack.

Alphabet, Meta and Tesla have supported calls for AI regulation. At the same time, Alphabet-owned Google, like Amazon, has invested billions in Anthropic, a competitor of OpenAI, while xAI, founded by Tesla boss Elon Musk, has just launched its first product, a chatbot called Grok.

A shared vision

The European AI Act, China's AI regulations and the White House executive order show that the countries involved share common interests. Together, they set the stage for the Bletchley Declaration, published on 1 November, in which 28 countries, including the United States, the United Kingdom, China, Australia and several EU members [editor's note: including France, as well as the European Union itself], committed to cooperating on AI safety.

Countries or regions consider that AI contributes to their economic development, their national security, and their international leadership. Despite the recognized risks, all jurisdictions are working to support AI development and innovation.

By 2026, global spending on AI-centric systems could exceed US$300 billion, according to one estimate. By 2032, according to a Bloomberg report, the generative AI market alone could be worth US$1.3 trillion.

Such figures, along with the benefits that technology companies, governments and consulting firms claim AI will bring, tend to dominate media coverage of AI. Critical voices are often sidelined.

Divergent interests

Beyond economic promises, countries are also turning to AI systems for defense, cybersecurity and military applications.

At the AI Safety Summit in the UK, international tensions were evident. While China signed the Bletchley Declaration on the first day of the summit, it was excluded from public events on the second day.

One point of disagreement is China's social credit system, which operates with little transparency. The EU's AI Act considers social scoring systems of this type to pose an unacceptable risk.

The United States views China's investments in AI as a threat to its national and economic security, particularly in terms of cyberattacks and disinformation campaigns. These tensions are, of course, likely to hamper global collaboration on binding AI regulations.

The limits of current rules

Existing AI regulations also have significant limitations. For example, there is no clear and common definition across jurisdictions of different types of AI technologies.

Current legal definitions of AI tend to be very broad, raising concerns about their practical applicability, as the regulations accordingly cover a wide range of systems that pose different risks and might merit different treatment.

Likewise, many regulations fail to clearly define the notions of risk, safety, transparency, fairness and non-discrimination, which makes it difficult to ensure precise legal compliance.

We are also seeing local jurisdictions initiate their own regulations within national frameworks, to address particular concerns and to balance AI regulation with economic development.

California, for example, has introduced two bills aimed at regulating AI in employment. Shanghai has proposed a system for grading, managing and supervising AI development at the municipal level.

However, narrowly defining AI technologies, as China has done, poses the risk that companies will find ways to circumvent the rules.

The way forward

Sets of “best practices” for AI governance are emerging from local and national jurisdictions and transnational organizations, guided by groups such as the UN's AI advisory body and the United States' National Institute of Standards and Technology. The forms of governance that exist in the United Kingdom, the United States, Europe and, to a lesser extent, China are likely to serve as a framework for global governance.

Global collaboration on AI governance will be underpinned by ethical consensus and, more importantly, national and geopolitical interests.

Fan Yang, Research Fellow, Melbourne Law School and the ARC Centre of Excellence for Automated Decision-Making and Society, The University of Melbourne, and Ausma Bernot, Postdoctoral Research Fellow, Australian Graduate School of Policing and Security, Charles Sturt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The opinions expressed in this article do not necessarily reflect those of InfoChrétienne.

Image credit: Creative Commons / Flickr
