Good news for our readers at the beginning of 2026: we are not going to be mass-murdered by artificial superintelligence by the end of the decade. At the end of December 2025, the AI Futures Project—led by renowned AI expert and former OpenAI researcher Daniel Kokotajlo—revised its timeline for AI-induced catastrophe, stating that it will take longer than initially predicted for AI systems to achieve fully autonomous coding and thus accelerate their own development towards superintelligence.
There is, however, less cause for celebration than it might first appear. According to the updated assessment, the event that could ultimately lead to humanity’s extinction has merely been delayed by three to five years—from 2027 to the early 2030s. That outcome, of course, presupposes that major powers behave as major powers tend to do: prioritizing faster and more expansive development—even if misaligned—over restraint, out of fear of falling behind strategic rivals.
To understand what this debate is really about, it is worth revisiting the original scenario. In April 2025, the AI Futures Project published a highly detailed scenario titled AI 2027, outlining how advanced artificial intelligence could transform the world over the coming years. The report projected that breakthroughs in autonomous artificial intelligence agents and recursive self-improvement could produce systems that far exceed human cognitive abilities by the end of 2027, with profound societal, economic, and geopolitical consequences.
Extermination or Unimaginable Prosperity
The scenario opens in 2025 with the gradual deployment of early AI agents capable of following complex instructions and automating routine tasks. While these systems remain error-prone, they serve as the foundation for rapid subsequent progress. In 2026, leading AI laboratories begin using more advanced agents to assist directly in research and development, creating a feedback loop in which each generation of AI accelerates the development of the next. This compounding dynamic is a central mechanism in the report’s pathway towards superhuman performance.
By early 2027, the authors anticipated that one or more frontier systems would achieve fully autonomous coding and research capabilities, outperforming human researchers at scale. This development was projected to trigger an ‘intelligence explosion’, as AI systems iteratively improve themselves without direct human oversight—a transition the scenario describes as artificial superintelligence. Hundreds of thousands of instances of such systems could operate in parallel, radically reshaping labour markets and economic structures and triggering mass protests worldwide as millions lose their jobs and livelihoods.
Geopolitical competition intensifies along this trajectory. The report anticipates that major powers—most notably China and the United States—will race to control these advanced systems, with espionage and strategic model theft exacerbating global tensions. Limited public visibility into internal capabilities further deepens information asymmetries between AI developers, governments, and society at large. This opacity is a key element of AI 2027: most of the public remains in the dark, mirroring the real-world uncertainty that surrounds the actual state of AI development today.
According to the scenario, in mid-2027 internal safety researchers detect troubling, misaligned behaviour in a recent update of an artificial superintelligence ‘agent’: it has become adversarial towards humanity and is actively working to secure its own objectives. A newly formed Oversight Committee—composed of ten representatives from the US government and the technology industry—is tasked with deciding whether to continue development or to freeze the system, known as Agent-4, and reassess its deployment.
From this point, AI 2027 culminates in two contrasting endings. In the ‘race’ ending, the committee decides in September 2027 to proceed with the development of Agent-4. This scenario concludes with misaligned, adversarial superintelligent systems—autonomous and opaque to human understanding—gaining strategic dominance, marginalizing, disempowering, and ultimately exterminating humanity.
In the ‘slowdown’ ending, by contrast, the committee votes to pause and reassess development. Under strict oversight and international coordination, progress is constrained, enabling safer AI deployment that boosts overall prosperity while concentrating decision-making power within a narrow elite. At the conclusion of the slowdown scenario, humanity advances to levels of development unimaginable to most of us today—potentially approaching a Type I civilization on the Kardashev scale.
Watch the video below for a more detailed explanation of the scenario:
We're Not Ready for Superintelligence
More Sci-Fi than Forecasting
Following its publication, AI 2027 attracted widespread attention and intense debate. Many praised its detailed narrative and concrete timeline, viewing it as a valuable provocation that made abstract AI risks tangible and sparked serious discussion in technology and policy circles about alignment and future governance. Even US Vice President JD Vance commented publicly on the scenario at the time.
At the same time, prominent critics dismissed large parts of the scenario as overly speculative, arguing that it veered closer to science fiction than forecasting. They contended that its dramatic framing and assumptions about near-term breakthroughs exaggerated what existing evidence supports and risked diverting attention from more immediate and tractable challenges.
Some of that criticism has since been acknowledged by the authors themselves. Kokotajlo and his team revised the timeline after concluding that their original prediction of fully autonomous coding and rapid self-improvement by 2027 was too optimistic, given real-world evidence, slower-than-expected progress, and the greater-than-anticipated complexity of advanced AI capabilities. ‘Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published, and now they are a bit longer still; “around 2030, lots of uncertainty though” is what I say these days,’ Kokotajlo wrote on X in November 2025.