The singularity is coming. Will the universe be ruled forever by the uploaded mind of Donald Trump, Xi Jinping or someone similar? I think something like this might happen.
I also think it would be undesirable. So I suggest steps Europe could take to prevent it, and more generally to maximise the chance of a good singularity happening.
In a previous post I said the singularity would very likely happen by 2050. This was based on the AI 2027 report, which predicts AGI in 2027 and ASI in 2030. You should read both the post and the report before reading this article, but to summarise, I said:
AI will become good enough to write programs (It can kinda do this already, but not properly). Since the AI is itself a program, once this happens it can create a better version of itself, which can then create an even better version, etc. [Thus] within 10 years (at the latest) of AI being able to do programming, artificial superintelligence (ASI), better than all humans at everything, will come into being. [...] I think it will probably happen in the first half of the century, i.e. before 2050.
The singularity could happen earlier -- in the 2030s for example.
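To make the recursive-improvement argument in the quoted passage concrete, here is a toy model in Python. The improvement factor, generation time and cut-off below are purely illustrative assumptions, not forecasts:

```python
# Toy model of recursive self-improvement: each generation of the AI builds a
# successor that is somewhat better at AI research than itself. All numbers
# here are made-up assumptions, for illustration only.

capability = 1.0           # 1.0 = roughly "a top human AI researcher"
improvement_per_gen = 1.5  # assumed capability gain per self-rewrite
months_per_gen = 6         # assumed time for each generation to build the next

months = 0
while capability < 1000:   # arbitrary "vastly superhuman" threshold
    capability *= improvement_per_gen
    months += months_per_gen

print(f"~{months / 12:.0f} years to pass the threshold under these assumptions")
# -> ~9 years, i.e. inside the "within 10 years of AI being able to do
#    programming" window quoted above.
```

Change the assumed numbers and the date moves, but the shape is the point: improvements compound, so capability grows geometrically rather than linearly.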
I also said AI alignment is important:
[If] the AI is aligned to human values and desires, we get paradise; but if it isn't, we go extinct.
On further thought I now think this is overoptimistic: AI alignment is necessary to get a good singularity, but it is not sufficient. This is because we need to care whose human values the AI is aligned with.
Immortal leaders ruling forever
For example, if AI is aligned to the values of (i.e. to be loyal to) the Chinese government, it will do what they want. After the singularity, there will be a massive increase in technological capability, so mind uploading will probably be possible, and the general secretary of the Chinese Communist Party may decide to upload their mind, become immortal, and rule the universe forever.
Similarly, if ASI is developed in the USA, perhaps the CEO of an AI company becomes immortal ruler of the universe. Or maybe the US president. Futurama imagined Richard Nixon's disembodied head living forever:
I imagine the same thing, but with Trump, immortal and living inside a computer:
He's still writing rambling bombastic tweets:
I’m the GREATEST Universal Emperor EVER, folks! Immortal, unstoppable, and ruling the ENTIRE cosmos -- nobody does it better than me, believe me! The Martians LOVE me, the Venusians are begging for my leadership, and those little green guys? They’re chanting “TRUMP FOREVER!” Even the penguins love me, after I cut their tariffs! I’ve got the best head in a jar, people say it’s TREMENDOUS, glowing neon like a billion stars -- nobody glows better than me! The Fake News Black Holes are saying I can’t rule forever, but I’ve been here for 10,000 years, and I’m just getting started! Sad! #TrumpTheImmortal #RulerOfTheUniverse
Is mind uploading possible?
Uploading one's mind to run in a computer system is of course not possible now.
It could in principle be done by reading the weights and connections of all the neurons in someone's brain and then emulating them in a computer. (Perhaps the reading could be done destructively, e.g. by freezing the brain, cutting it into thin slices, making 3-D images of the slices and piecing them together.) In any case, this is a hard problem, but you can imagine that if you had a billion Einstein-level geniuses working on it for 100 years, significant progress could be made.
But each instance of an ASI will be an Einstein-level genius. And it may well be able to think 100 (or more) times faster than humans, since human neurons compute very slowly compared to computer circuits. So if you had a billion instances of an ASI, you could have 100 years' worth of research done in less than a year. (Of course, there may well be bottlenecks, because science experiments don't just involve thinking; they have to interact with the physical world.)
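As a quick back-of-envelope check of that claim (the instance count and speed-up factor are the assumptions from the paragraph above, not predictions):

```python
# Back-of-envelope version of the "billion geniuses" argument above.
# Both inputs are illustrative assumptions.

num_instances = 1_000_000_000  # assumed number of Einstein-level ASI instances
speed_multiplier = 100         # assumed thinking speed relative to a human

# Genius-research-years produced per calendar year of wall-clock time.
research_years_per_year = num_instances * speed_multiplier
print(f"{research_years_per_year:.0e} genius-years of research per calendar year")
# -> 1e+11: the same total as "a billion Einstein-level geniuses working for
#    100 years", delivered in a single calendar year (ignoring the
#    physical-experiment bottlenecks mentioned above).
```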
I therefore think it's entirely possible (though obviously not certain!) that within a year or so of ASI existing, mind uploading will be possible.
Europe is lagging behind USA and China
If you look at the top AI models, they are all American or Chinese, as this chart from Maximilian Moehring points out:
(DeepSeek and Alibaba are Chinese, the rest American.)
Moehring goes on to say:
The United States just committed $500 billion to Project Stargate. To put this in perspective, that's more money than the entire EU plans to spend on all digital transformation projects over the next decade. This isn't just an investment – it's a declaration of intent.
they understand what Europe seems to have forgotten: AGI isn't just another technology – it's the key to ruling the digital world.
AGI promises an even more dramatic revolution [than electricity]. But here's the crucial difference: while electricity's benefits spread globally, AGI might not be so democratically shared.
Imagine a world where the most powerful AI systems are controlled by a single nation or region. Imagine being locked out of breakthrough technologies because you're on the wrong side of a digital iron curtain.
What side of history will Europe choose to be on?
On Project Stargate, the BBC writes:
The creator of ChatGPT, OpenAI, is teaming up with another US tech giant, a Japanese investment firm and an Emirati sovereign wealth fund to build $500bn (£405bn) of artificial intelligence (AI) infrastructure in the United States.
The new company, called The Stargate Project, was announced at the White House by President Donald Trump who billed it "the largest AI infrastructure project by far in history" and said it would help keep "the future of technology" in the US.
How Europe can catch up
Europe should build its own AI endeavour. This would include:
infrastructure such as AI data centres
producing advanced AI models, with particular emphasis on making AI good at programming
lots of research on AI alignment, because unaligned ASI means we're all dead
This would be funded by the EU, by countries (in and out of Europe) who want to take part, and by the private sector.
Why the emphasis on an AI being good at programming? Two reasons:
(1) The AI is itself a program, so an AI that's good at programming can speed up AI research.
(2) Programming is the sort of domain in which it ought to be relatively easy to train an AI to excel: it is a digital, symbolic activity of perfect information, much like the game of Go, where the program AlphaGo Zero went from knowing nothing but the rules to superhuman ability in a few days. (A sketch of what this looks like for programming follows below.)
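To illustrate why programming is such a convenient training domain, here is a minimal sketch of an automatically checkable reward signal: generate a candidate solution, run it against unit tests, and score it by whether they pass. This is only an illustration of the general idea; the function and example below are hypothetical, not any lab's actual training pipeline.

```python
import subprocess
import sys
import tempfile
import textwrap

def programming_reward(candidate_code: str, tests: str) -> float:
    """Hypothetical reward: 1.0 if the candidate passes all the unit tests,
    0.0 otherwise. No human judgement is needed to grade it, which is what
    makes programming (like Go) cheap to train on at scale."""
    program = textwrap.dedent(candidate_code) + "\n" + textwrap.dedent(tests)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        return 0.0  # infinite loops and the like score zero
    return 1.0 if result.returncode == 0 else 0.0

# Toy example: a model-written solution and the tests that grade it.
candidate = """
def add(a, b):
    return a + b
"""
tests = """
assert add(2, 2) == 4
assert add(-1, 1) == 0
"""
print(programming_reward(candidate, tests))  # -> 1.0
```

The key property is that the reward comes straight from running the code, just as AlphaGo Zero's reward came straight from the rules of Go; no human labelling is required.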
ASML gives Europe leverage
There is only one company in the world capable of making the extreme ultraviolet (EUV) lithography machines needed to manufacture advanced chips: the European company ASML. In other words, Europe controls the sole supplier of the machines that make the chips that power large AI systems.
This ought to give the Europeans some advantage/leverage in the AI race.
The largest manufacturer of the GPU chips used in AI systems is the Taiwanese company TSMC (Taiwan Semiconductor Manufacturing Company). One way Europe could use ASML for leverage is to offer Taiwan and its chip manufacturers a deal in which Europe provides:
TSMC and other Taiwanese companies get priority access to ASML technologies
Taiwan gets security and economic guarantees from Europe (e.g. a nuclear deterrent, European troops on its soil, EU membership)
Money. Everyone likes money, especially in combination with other things they greatly desire, such as needed technologies and security.
And in return:
Europe and the European AI initiative get priority access to Taiwanese/TSMC GPU chips
Saving the world
The AI 2027 forecast suggests that AI advances will have geopolitical consequences, as advanced AI may upset the balance of power:
Finally, diplomats consider what an “AI arms control” treaty might look like. If AI progress threatened to overturn nuclear deterrence, could America and China avoid nuclear war? If someone found evidence of AIs going rogue, could the two countries halt research until they better understood the threat? How could such an agreement be monitored and enforced?
In a world where only America and China have advanced AI, any treaty regarding it will have them as the only two players. But if Europe (and interested countries outside Europe) have their own advanced AI, they can get a seat at the table too. Having one's own advanced AI model is thus the table stakes for taking part in negotiations on worldwide AI governance. Since the future of the entire human species rests on this, it is rather important.
And if the European AI system is really good at alignment, it may well be that it becomes the basis of a worldwide consensus model, because leaders in all countries don't want humans to go extinct.
In the AI 2027 scenario, the USA and China are the only two players, and both countries see AI advances as an arms race. In this situation, one or both of them may be tempted to skimp on alignment (telling themselves the AI will be aligned, because that's what they want to believe) in the race to beat their competitor.
I cannot stress enough that AI alignment is really, really important. Humanity has one chance to get this right, and if we fail everything that we value dies.
Further reading
the AI 2027 report