
AI expert pushes back deadline for arrival of superintelligent AI


Doomer scenario timeline pushed back to 2030



An expert in artificial intelligence has adjusted his prediction about the potential danger of advanced AI, suggesting that it will take longer than expected before AI systems gain autonomous coding capabilities and accelerate their path toward superintelligence.

Former OpenAI employee Daniel Kokotajlo drew attention earlier this year with his ‘AI 2027’ scenario, which predicted that the uncontrolled development of AI would lead to the creation of a superintelligent entity that would ultimately overtake world leaders and result in the destruction of humanity.

This provocative scenario sparked debate and drew both support and criticism. While some, such as US Vice President JD Vance, appeared to reference Kokotajlo’s work when discussing the competitive landscape of AI development between the United States and China, others dismissed it as unrealistic speculation.

The timeline for achieving transformative artificial intelligence, often referred to as AGI (artificial general intelligence) – AI capable of performing most cognitive tasks at a level comparable to humans – is a recurring theme in discussions about AI safety. The release of ChatGPT in 2022 drastically shortened these timelines, with experts and policymakers forecasting the arrival of AGI within a few decades, if not a few years.

Expectations for autonomous coding

Kokotajlo and his team originally predicted that AI would achieve “fully autonomous coding” by 2027, stressing that this was a probabilistic estimate rather than a definitive forecast. Recent developments suggest growing doubts about the imminent arrival of AGI and about whether the term itself accurately reflects the current state of AI.

Experts such as AI risk management specialist Malcolm Murray note that progress in AI has been less steady than initially expected, and that a broader range of practical skills is needed to handle the complexity of the real world. Henry Papadatos, executive director of SaferAI, a French non-profit specializing in AI, suggests that the term ‘AGI’ may become less relevant as AI systems grow more versatile and perform tasks well beyond narrowly defined domains such as chess.

Kokotajlo’s AI 2027 scenario is based on the assumption that, by 2027, AI agents will fully automate the coding and research of AI, triggering an “intelligence explosion” in which AI recursively generates ever more intelligent versions of itself. One possible outcome of this scenario is that humanity is wiped out by the mid-2030s to make way for large-scale infrastructures such as solar farms and data centres.

In a recent update, Kokotajlo and his colleagues revised their prediction of when AI might be able to code autonomously. They now expect this phase to emerge in the early 2030s rather than in 2027. This adjustment pushes the expected arrival of superintelligence to 2034 and sidelines speculation about a potential timeline for the extinction of humanity by AI.

Despite these revisions, automating AI research remains a central goal for leading AI companies. Sam Altman, CEO of OpenAI, has set the development of an automated AI researcher by March 2028 as an internal target, while acknowledging the inherent challenges and the possibility of failure.

Business AM

