According to the AI 2027 forecast (via Astral Codex Ten), out-of-control artificial superintelligence might kill all humans in 2030:
For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Robots scan the victims’ brains, placing copies in memory for future study or revival.
The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives.
If AI does kill us all, it almost certainly won't be in the form of metallic humanoid robots, since that would be inefficient. Instead, a biological agent, as in the scenario, is much more likely.
About AI 2027
The AI 2027 forecast is an attempt by some AI experts and thinkers (Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean) to predict the pace at which AI might develop.
In the scenario presented, AGI happens in 2027, and then things could go two ways:
if AI is aligned with human values, we get a paradise by the early 2030s: no war, and everyone comfortably off by today's standards.
if AI isn't aligned, it kills us. It doesn't hate us, but we are made of atoms it can use for something else.
Scott Alexander explains that it probably won't happen that quickly:
Do we really think things will move this fast? No, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 [...] because it would be annoying to change. Other members of the team have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
Here's a video of Dwarkesh Patel discussing AI 2027 with Scott Alexander and Daniel Kokotajlo.
What will happen
This is my (obviously highly speculative) best guess as to what will happen with AI.
Artificial general intelligence (AGI), as good as the average human at a wide range of cognitive tasks, will be created.
Some time after that (maybe a year, at most a decade), AI will become good enough to write programs (it can kinda do this already, but not properly). Since the AI is itself a program, once this happens it can create a better version of itself, which can then create an even better version, and so on. As I. J. Good explained it in 1965:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
So within 10 years (at the latest) of AI being able to do programming, artificial superintelligence (ASI), better than all humans at everything, will come into being. If you had asked me 20 years ago when this would happen, I would have said probably during the second half of the century. I now think it will probably happen in the first half of the century, i.e. before 2050.
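To make the explosion arithmetic concrete, here's a toy model of my own (not from Good or the AI 2027 team): assume each self-improvement cycle multiplies capability by a fixed factor, and that smarter systems finish the next cycle proportionally faster. The GAIN and FIRST_CYCLE numbers below are made up purely for illustration.

```python
# Toy model of an intelligence explosion (illustrative only).
# Assumptions (mine, not from the AI 2027 scenario):
#   - each self-improvement cycle multiplies capability by GAIN
#   - a system with capability c finishes a cycle in FIRST_CYCLE / c months,
#     i.e. smarter systems improve themselves faster

GAIN = 1.5          # capability multiplier per cycle (assumed)
FIRST_CYCLE = 12.0  # months the first self-improvement takes (assumed)

capability = 1.0    # 1.0 = roughly "average human programmer"
elapsed = 0.0       # months since the first cycle began

for cycle in range(1, 21):
    elapsed += FIRST_CYCLE / capability  # this cycle's duration
    capability *= GAIN
    print(f"cycle {cycle:2d}: capability {capability:8.1f}x, "
          f"elapsed {elapsed:5.1f} months")
```

With these made-up numbers the cycle times form a geometric series (12 + 8 + 5.3 + ...) that converges to 36 months, so capability grows without bound in about three years, comfortably inside the 10-year window above. The whole argument hinges on the multiplier staying above 1: if each generation improves its successor less than the last did, growth fizzles out instead of exploding.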
As for what happens then, it depends on whether the AI is aligned with human values and desires: if it is, we get paradise; if it isn't, we go extinct.
Thus, AI alignment is rather important.
How likely is AI to be aligned? I would put the probability at somewhere between 5% and 95%; I can't be more certain than that, and if you don’t find that scary, you’ve not been paying attention.