Artificial Intelligence and Global Governance
The State of AI
Artificial intelligence is one of the most powerful and consequential technologies humanity has produced. In the years ahead, it will enable medical advances that improve billions of lives. It will dramatically improve data analytics and allow the synthesis of vast quantities of information in ways that were previously impossible.
With access to effectively the entire catalog of content humanity has generated since the dawn of history, it will be able to create new, immersive content that captures our attention like nothing before it.
In just a few months, OpenAI’s public release of ChatGPT, a chatbot built on a large language model (LLM), has set off an arms race among the biggest players in the tech industry. Google, Microsoft, and Apple are training models of ever greater sophistication and integrating them into their consumer software.
Soon, Microsoft Office, Apple OS apps, and Google’s search engine will all use AI, largely replacing catalogs of web links with content tailor-made for the end user. This generative technology is where AI’s greatest danger lies, for now at least.
We are now in the midst of something like a technological singularity, in which exponential improvements in technology are outstripping our ability to understand and regulate them. A technology that has been publicly available for only a few months has taken over policy discourse and caused tremendous anxiety over its impact on the job market.
Jobs once thought safe (“learn to code”) are now on the chopping block. Ironically, it is the truck drivers who still have a career path and the programmers who look to be in trouble. This very unpredictability will be a major source of social and political upheaval for years to come, and we are only beginning the conversation about how to regulate the technology.
At the risk of being overwhelmed by the whirl of events, I’m going to attempt a contribution to this conversation.
We have not reached the point of so-called artificial general intelligence (AGI): a system that mimics human intelligence in its applicability to a wide range of circumstances and its capacity for creative problem-solving in novel situations.
AI’s intelligence, particularly that of LLMs, is a very narrow sort of intelligence. It involves absorbing vast amounts of data, encoding statistical associations, and regurgitating that information in response to a specific query. It cannot be said to be acting truly creatively; it is a data-synthesis engine. It makes nothing new, only variations on existing themes created by humans.
Generative AI and Its Consequences
This specific type of AI, known as generative artificial intelligence (GAI), is what I want to focus on. There are both benign and malignant uses of the technology. Creators in the arts can use these tools to conjure music, literature, animations, and visual art as if by magic, with nothing more than the spoken word. A rich and robust world of immersive artwork is on its way.
It can also be wielded to destructive and self-centered ends. It will empower criminals, terrorists, and rogue states to code malware, distort financial markets, shape public opinion, and generally cause chaos with ease. If you thought misinformation was a problem before, imagine a proliferation of sophisticated audio and video deepfakes created with a few clicks of the mouse.
It can be used by states to produce the most intrusive forms of surveillance yet known, including advanced facial recognition. Cyber warfare will take place at the level of individual psychological manipulation as our attention is filtered more and more through the digital world. Propaganda will be tailored to each person; imagine ad micro-targeting, but with content that integrates the entirety of the preferences revealed by your internet activity.
There are also unintended consequences and disruptions. Job displacement in the millions will happen quickly, and workers will be thrown into a constant flux of re-education simply to keep up. The reality is, we don’t know which jobs it will come for, in what timeframe, and in what order. We know only that it’s lurking, waiting.
The addictive, immersive nature of new AI tools will impose costs on human well-being. Interactions with LLMs will replace many interactions with living, breathing humans, and may cause intense, perhaps irreversible, social disruptions. We have already seen this dynamic play out in the rising incidence of mental health problems among the young. Generative AI will deepen and broaden the trend.
How Should We Respond?
The solution to such a large problem requires clear definitions and flexible, creative thinking. Part of the problem we run into is a lack of clarity in language.
The very phrase “artificial intelligence” is misleading, as it implies a human-like mind with agency pursuing its own goals. The technology we are discussing is not that. When we talk about generative AI, we mean a neural network implementing a statistical model: it forms associations among the data it is fed and produces a result in response to a specific language prompt.
It cannot prompt itself, and it must work from information created and supplied by humans. How the model forms its associations is something of a black box, but at bottom we know it is simply a model that takes inputs and produces outputs mechanically.
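To make that concrete, here is a minimal sketch in Python of what “statistical association” means at its simplest. The probability table below is invented purely for illustration; a real LLM learns billions of parameters from training data, but the shape of the operation, a prompt in and a statistically likely continuation out, is the same.

```python
import random

# Invented "learned" associations for illustration only:
# each context maps to a probability distribution over next words.
model = {
    "the cat": {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    "cat sat": {"on": 0.9, "quietly": 0.1},
}

def next_word(context: str) -> str:
    """Sample the next word from the model's distribution for this context."""
    dist = model[context]
    words = list(dist.keys())
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

# Prompt in, statistically likely continuation out; nothing more.
prompt = "the cat"
print(prompt, next_word(prompt))  # e.g. "the cat sat"
```

The model wants nothing, plans nothing, and never prompts itself; it simply maps inputs to probable outputs.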
Additionally, these models are specific programs, and each instance of them runs on specific servers. The computing power they require is a limiting factor in their geographic spread. Only massive multinational technology companies and states have the resources and technical knowledge to maintain them. The rest of us simply make requests to these models over the internet.
The fear that small, rogue actors will be able to run their own instances of these programs is based on misconceptions about how they work. That is why specific, targeted interventions should suffice, rather than the blanket pause on the technology that was recently proposed.
Right now, the tech companies developing and deploying these LLMs are the regulators. They are the only entities with the technical know-how to pull it off. But the honor system is not a sustainable long-term strategy for a technology this disruptive. Companies that exist to make a profit (not a bad thing!) cannot be trusted to protect the interests of humans, states, and the global system.
Alongside climate change, regulating AI is among the most pressing issues for global governance. It requires a coordinated response among nations, both domestically and internationally, and the cooperation of the great powers, namely the United States and China. Countries with dramatically different systems of government will have to agree to a common regulatory framework at some level. That is a heavy lift.
We run into something of a prisoner’s dilemma concerning international cooperation on AI. As discussed above, there are military uses for generative AI, and states are reluctant to give up a potentially game-changing tool on the mere promise that other states will do the same. But just as we built an international non-proliferation regime for nuclear weapons, we may have to do the same for artificial intelligence.
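The structure of that dilemma can be made explicit with a toy payoff matrix. The numbers below are invented for illustration, not estimates of anything real:

```python
# Toy payoff matrix for the AI arms-race dilemma. Each pair gives
# (State A's payoff, State B's payoff) when each state chooses to
# "restrain" (forgo military generative AI) or "deploy" (pursue it).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: best shared outcome
    ("restrain", "deploy"):   (0, 5),  # A alone restrains: A is exploited
    ("deploy",   "restrain"): (5, 0),  # B alone restrains: B is exploited
    ("deploy",   "deploy"):   (1, 1),  # arms race: both are worse off
}

# Whichever move B makes, A scores higher by deploying (5 > 3, 1 > 0),
# so both sides "rationally" deploy even though mutual restraint would
# leave both better off.
for b_move in ("restrain", "deploy"):
    a_restrain = payoffs[("restrain", b_move)][0]
    a_deploy = payoffs[("deploy", b_move)][0]
    print(f"If B plays {b_move}: A gets {a_restrain} restraining, {a_deploy} deploying")
```

Deployment dominates for each state acting alone, which is exactly why the escape route runs through verifiable mutual agreements rather than unilateral restraint.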
The United States has taken a hands-off approach that maximizes the technology’s economic impact and its own technological advantage. China has regulated the technology aggressively, out of concern that it could undermine the government and in order to deploy it for population control. The EU has taken a mixed approach focused on mitigating the negative social impacts of AI.
Global Governance of AI
These differing internal approaches aside, there must be a common framework for dealing with international impacts and interstate conflict. AI bots should be strictly limited as tools of statecraft. The use of deepfakes and other generative tools to wage psychological warfare on the populations of other countries must be banned, much as chemical weapons are banned by international convention.
Of course, given the vast potential benefits of AI usage, a non-proliferation regime should focus on equitable access and banning malign uses rather than on preventing other countries from acquiring the technology. This is for two reasons:

1. There are legitimate, non-destructive uses of AI, in contrast to nuclear weapons, which are purely destructive.
2. The internet is transnational, so keeping access to AI siloed off by national boundaries is not realistic.
Additionally, given the rapidly growing gap in capabilities between countries with AI and countries without, such a regime would create a massive source of tension, making international cooperation much more difficult and concentrating global governance in the hands of a technological elite based in a few countries.
In addition to broad restrictions on military use cases, we must create international political institutions specifically focused on AI, with study and working groups that can improve our understanding of the policy implications of AI and craft specialized approaches that can inform political agreements. This also creates a base of knowledge outside major tech companies that is democratically accountable.
From here, we can develop new approaches and respond to innovations in AI as they arise. Perhaps through these efforts, and the effort to tackle climate change, we can begin creating a truly global system of governance that does not leave the world’s common problems to an anarchic state system. Or maybe it will just stop AI bots. Either result is good.
The major diplomatic breakthroughs required for this will have to come through the cooperation of the world’s two most powerful states, which are increasingly at odds: China and the United States. Given historical precedent, we have reason for optimism.
Even at the height of the Cold War, the United States and the Soviet Union managed to cooperate on regulating nuclear weapons and their proliferation. And that was in a world far more thoroughly divided than today’s, in which the great powers remain deeply interconnected.
However, the two Cold War rivals continued to build up their stockpiles until relations thawed during détente and into the Reagan era. The moral of the historical lesson is that powerful countries that do not entirely trust each other can still cooperate on issues where they share common interests.
Conclusion
Moving forward, we need to recognize the massive disruptive potential of generative AI and its awesome benefits. While different countries will take different approaches to internal regulation, we need to agree on a baseline of restrictions on using this technology in military applications. At the same time, we need to put systems in place to study the impact of AI and develop policy proposals for global governance informed by experts and broadly accountable to the political system.
We don’t know for certain where generative AI is going, or what its ultimate impact will be. The first task is creating a framework for global governance. Then we must fill in the details and, hopefully, harness the power of this technology for the general welfare of people globally while mitigating its potential to produce disaster.