
A global coalition calls for slowing the race toward artificial superintelligence


On Wednesday, October 22, 2025, more than 800 influential figures from science, politics, technology, and even the entertainment world issued a joint warning.

Their message is clear: the development of artificial superintelligence, a technology that would surpass human abilities, must be halted until safety guarantees and a global consensus are in place.

The initiative, led by the Future of Life Institute, a US organization dedicated to studying technological risks, highlights growing concerns about progress seen as both promising and perilous.

Superintelligence on the horizon, an uncertain future?

The call comes as AI progress accelerates at a dizzying pace. Sam Altman, CEO of OpenAI and a leading figure in the field, said at a September event hosted by the Axel Springer group that superintelligence could become a reality within five years.

An AI that surpasses human intelligence could revolutionize medicine, education, and industry, but it could also slip beyond any control, with potentially catastrophic consequences.

A diverse coalition united by a common cause

What makes this call particularly striking is the diversity of the signatories. Among them are leading scientists such as:

  • Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics,
  • Stuart Russell, computer science expert at the University of California, Berkeley,
  • Yoshua Bengio, professor at the University of Montreal,

all three recognized for their major contributions to AI.

Tech entrepreneurs have also joined the call, among them:

  • Steve Wozniak, co-founder of Apple,
  • Richard Branson, head of the Virgin Group.

The political world is also represented, with figures such as:

  • Steve Bannon, former adviser to Donald Trump,
  • Susan Rice, former National Security Advisor under Barack Obama.

Even religious figures, such as Paolo Benanti, an AI expert advising the Vatican, and celebrities, including singer will.i.am as well as Prince Harry and Meghan Markle, have voiced their support.

A clear message: caution above all

The initiative, relayed by the Future of Life Institute, stresses the need for a moratorium. “We are calling for a halt to the development of superintelligence until there is scientific consensus that it can be built in a controlled and safe way, and until there is public support,” the organization states.

This plea is built around a central idea: humanity must move forward cautiously when faced with a technology whose implications remain largely unpredictable.

This mobilization raises the question of collective responsibility in the face of an innovation that is redefining the boundaries of what it means to be human. As labs around the world, from OpenAI to xAI, push the limits of artificial intelligence, the call invites a global reflection: should development slow down so that its risks can be better anticipated? The answer, according to these 800-plus signatories, is a resounding yes.

Source: superintelligence-statement.org

Glen