More countries with frontier models makes AI doom less likely
so Europe should have an AI initiative, to save the world
This article is in two parts.
The first part looks at an AI arms race through a game-theoretical lens and comes to some conclusions.
The second part takes those conclusions and makes some practical suggestions.
Part I: a simplified model of an AI arms race
Imagine a world on the brink of creating artificial superintelligence (ASI). ASI would transform the economy of any state that had it, and make that state more militarily powerful than anyone else: it would rule the world. The speed of an ASI-led economic takeoff is such that there is a big first-mover advantage.
I will model it thus:
the current situation: 0 points
winning the AI arms race: +100 points
losing the AI arms race: -100 points

A country (or rather its leadership, since they are the ones making the decisions) regards winning the AI arms race as +100 points. Losing is -100 points; and if no one wins or loses, everyone gets 0 points.
Now imagine there are two countries (A and B) in the arms race. They are well-calibrated (i.e. their estimates of the probability that they will win are accurate).
A thinks there is a 60% chance they’ll win and B thinks they have a 40% chance. These add up to 100%, as someone is bound to win.
Should A and B agree to a treaty banning development of ASI? If there is such a treaty, the outcome for both players is 0.
If there is no treaty, the expected outcome for B is 0.4*100 + 0.6*(-100) = -20. So B has a negative expected outcome from not agreeing to a treaty, and so B wants to sign one.
But the expected outcome for A is 0.6*100 + 0.4*(-100) = 20. So A expects to do better without a treaty than with one, because they’ll probably win the arms race. If A is a rational actor, they won’t sign the treaty.
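To make the arithmetic concrete, here is a minimal sketch of this calculation in Python (the function and constant names are my own illustration, not part of any formal model):

```python
# Payoffs from the model: win = +100, lose = -100, treaty (no race) = 0.
WIN, LOSE, TREATY = 100, -100, 0

def expected_utility(p_win: float) -> float:
    """Expected utility of continuing the arms race, given a probability of winning."""
    return p_win * WIN + (1 - p_win) * LOSE

print(expected_utility(0.6))  # 20.0  -> A prefers racing to the treaty's 0
print(expected_utility(0.4))  # -20.0 -> B prefers the treaty
```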
Adding another player
Let’s add another player, C. C’s effort to win the arms race, which is entirely separate from A’s or B’s programmes, has a 25% chance of winning; this reduces A’s chance to 45% and B’s to 30%.
The expected utilities for the three players, in the event the arms race continues, are now:
Player   P(win)   Expected utility
======   ======   ================
A        45%      -10
B        30%      -40
C        25%      -50

Thus all three players now have a negative expected utility if the arms race continues, because all of them have a P(win) of less than 50%. If they are rational actors, they will all agree to an arms control treaty.
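The same sketch as before, extended to the three-player case (again, purely illustrative):

```python
# The same expected-utility calculation, applied to the three-player case.
def expected_utility(p_win: float) -> float:
    """Expected utility of racing: win = +100, lose = -100."""
    return p_win * 100 + (1 - p_win) * (-100)

for player, p_win in {"A": 0.45, "B": 0.30, "C": 0.25}.items():
    print(f"{player}: {expected_utility(p_win):+.0f}")
# A: -10, B: -40, C: -50 -- all negative, so all three prefer the treaty's 0.
```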
In general, the more players there are in a winner-takes-all situation, the less likely it is that a particular one of them will win.
Adding P(doom) to our model
So far our model doesn’t take account of the possibility that an ASI will kill everyone: it doesn’t love us, it doesn’t hate us, but we are using resources it can use for something else.
We’ll call the probability of AI killing everyone P(doom), and give that outcome a very low utility of -1000. The higher P(doom) is, the less attractive continuing the arms race is, so rational actors would be more willing to go for an arms control treaty.
For example, if P(doom) is 9.1%, then even if A thinks they have a 100% chance of winning the arms race, their expected utility is (1-0.091)*100 + 0.091*(-1000) = -0.1, which is negative, and thus they would rationally choose an arms control treaty instead.
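Here is a sketch of this extended model, including the break-even P(doom) implied by the payoffs above (the -1000 utility for doom is the assumption just stated):

```python
# Adding P(doom): with probability p_doom the race ends in doom (utility -1000);
# otherwise it is the win/lose lottery from before.
WIN, LOSE, DOOM = 100, -100, -1000

def eu_with_doom(p_win: float, p_doom: float) -> float:
    """Expected utility of racing when the race itself may end in doom."""
    return (1 - p_doom) * (p_win * WIN + (1 - p_win) * LOSE) + p_doom * DOOM

# Even a guaranteed winner (p_win = 1) is deterred once p_doom exceeds
# WIN / (WIN - DOOM) = 100 / 1100:
print(WIN / (WIN - DOOM))        # 0.0909... (about 9.1%)
print(eu_with_doom(1.0, 0.091))  # about -0.1, matching the calculation above
```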
Conclusions
If there are three major players in an AI arms race, then because each one has a lower probability of winning, it is easier for them all to agree to discontinue the arms race.
The higher P(doom) is perceived as being, the more likely it is that each player will want to pause ASI development, as they won’t want humanity to go extinct.
Part II: practical suggestions
The two countries in the AI arms race are the United States of America and the People’s Republic of China.
The most likely third player to enter the race would be Europe, because:
Europe collectively has a large economy: adding together all EU countries and all European countries that are in NATO, it amounts to about 20% of world GDP.
while Europe is many small countries rather than one big one, it does see itself as having a common culture and identity, and it has common coordinating institutions such as the EU
Europe should enter the AI arena
Outside of the USA and China, Europe is the largest economy. I have argued above that a third player in an AI arms race makes an AI arms control treaty more likely. This in turn makes it more likely that humans will survive.
Therefore Europe should have a major AI initiative, aimed at:
developing frontier models leading to ASI
doing lots of AI safety research
inviting countries around the world to join the initiative, and push for AI arms control
The purpose of developing frontier models is to make it more likely that neither America nor China will win an AI arms race, and thus more likely that they will agree to an arms control treaty. Europe needs to make it clear to everyone that it is developing frontier models as a bargaining chip to get a treaty pausing advanced AI until it can be properly aligned.
When I say “Europe” I mean either the EU or a new organisation set up to do it.
Europe should furthermore invite lots of countries from around the world to join its AI initiative. Many will want to do so, because:
If either China or the USA wins, they will effectively rule the world, something other countries don’t want.
China has hostile relations with many countries in Asia, particularly Japan, South Korea, Taiwan, Vietnam, the Philippines and India.
The USA’s current leader, Donald Trump, has managed to antagonise many countries, including major US allies such as Canada and Australia.
Europe would be in a good position to push for an AI arms control treaty
In discussing the technical enforceability of an AI pause, Pause AI notes:
Luckily for us, the compute supply chain has various chokepoints. The hardware needed to train the biggest models (specialized GPUs) is being produced by just 1 to 3 companies
There are multiple monopolies in the supply chain for AI training hardware:
ASML is the only company that produces EUV lithography machines
TSMC is the only company that can manufacture the most advanced chips
Nvidia is the only company that designs the most advanced GPUs
ASML is a European (Dutch) company, so Europe can use that as leverage.
TSMC is a Taiwanese company. If Europe got Taiwan to back the AI initiative, that would mean both ASML and TSMC were controlled by the AI initiative, which would give it (and the organisation controlling it) further leverage.
What incentives could Europe supply to get Taiwan to be part of the AI initiative? Taiwan is not recognised as a country by the UN or by most countries, and is under perpetual threat of invasion from China. So Europe could offer:
a mutual defence treaty
security guarantees such as European soldiers stationed in Taiwan plus a nuclear deterrent
enhanced access to the EU market
EU membership
Should the EU be running this, or a new organisation?
One problem with the EU is that it is largely run on consensus, requiring unanimity for major decisions. This means that decision making can be glacially slow, which is far from ideal. I have in the past suggested a European Military Alliance which would overcome these defects.
What is needed is an organisation that can make decisions quickly, including decisions on new members joining.
A military alliance and an AI initiative are different kinds of things, so if a European Military Alliance is created, it might not be the best organisation to run the AI initiative; but maybe it would be good to have it in the background in case either China or the USA turns nasty.
What would an ideal AI arms control treaty look like?
Pause AI’s proposal is to ban AIs from being trained if they are above certain limits:
Only allow training of general AI systems if their safety can be guaranteed.
By general AI models, we mean models that are either 1) larger than 10^12 parameters, 2) have more than 10^25 FLOPs used for training or 3) capabilities that are expected to exceed a score of 86% on the MMLU benchmark. Note that this does not target narrow AI systems, like image recognition used for diagnosing cancer.
It may be possible that the AI alignment problem is never solved - it may be unsolvable. In that case, we should never allow training of such systems.
I do think a lot of money and effort should be put into AI alignment research -- this should be a major international effort.
In terms of parameter count: OpenAI do not disclose the number of parameters in their models, but 10^12 is roughly the upper end of size estimates for OpenAI’s GPT-4 model, or the lower end for the GPT-5 model. It’s also about 4 times the 236 billion parameters in DeepSeek’s V2.5 model.
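For concreteness, here is a minimal sketch of what checking a model against these proposed limits might look like. The function and constant names are my own illustration; a real verification regime would of course involve far more than comparing three numbers.

```python
# Pause AI's proposed limits, as quoted above. A model counts as "general AI"
# if it exceeds any one of the three thresholds.
MAX_PARAMETERS = 10**12        # 1 trillion parameters
MAX_TRAINING_FLOPS = 10**25    # total training compute
MAX_MMLU = 0.86                # expected score on the MMLU benchmark

def is_general_ai(parameters: int, training_flops: float, expected_mmlu: float) -> bool:
    return (parameters > MAX_PARAMETERS
            or training_flops > MAX_TRAINING_FLOPS
            or expected_mmlu > MAX_MMLU)

# Illustrative inputs only (the FLOPs and MMLU figures here are made up):
print(is_general_ai(236 * 10**9, 1e24, 0.80))  # False: under all three limits
print(is_general_ai(2 * 10**12, 1e24, 0.80))   # True: over the parameter limit
```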
Further reading
The AI 2027 report, which discusses AI advances as an arms race
Pause AI hopes to pause AI until the alignment problem is solved
My proposal for a European Military Alliance
This article is an expansion of the ideas in my previous article Will Trump or Xi rule the universe forever?


