Harry and Meghan among 800 public figures calling for AI ‘superintelligence’ ban
Prince Harry and Meghan Markle have joined leading scientists, Nobel laureates, and other public figures in calling for a global ban on developing superintelligent artificial intelligence (AI) until it can be proven safe.
In a joint statement organized by the Future of Life Institute (FLI), the Duke and Duchess of Sussex are among a group of high-profile signatories demanding a “prohibition on the development of superintelligence” — AI systems that could surpass human intelligence across all cognitive tasks, News.Az reports, citing foreign media.
The letter urges that the ban remain in place until there is broad scientific consensus that such systems can be developed “safely and controllably,” with strong public backing.
Prominent figures including AI pioneers Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, entrepreneur Richard Branson, former US national security adviser Susan Rice, former Irish president Mary Robinson, and British author Stephen Fry have also signed the statement.
Nobel laureates Frank Wilczek, John C. Mather, and Daron Acemoğlu are among other supporters, as is Beatrice Fihn, who accepted the 2017 Nobel Peace Prize on behalf of the International Campaign to Abolish Nuclear Weapons.
The FLI, known for its advocacy on AI safety, previously called for a six-month pause on training the most advanced AI systems in 2023, following the launch of ChatGPT, which sparked global debate over AI’s risks and potential.
Tech leaders such as Mark Zuckerberg have recently claimed that superintelligent AI is “now in sight.” Experts caution, however, that such claims may reflect competition among tech giants rather than genuine scientific progress.
The FLI warns that uncontrolled AI advancement could lead to mass job losses, erosion of civil liberties, national security threats, and even pose existential risks to humanity if systems become uncontrollable.
A recent FLI poll found that 75% of Americans support strict AI regulation, and 60% believe superhuman AI should not be created until it is proven safe.
Major AI firms including OpenAI and Google are already pursuing artificial general intelligence (AGI) — a step below superintelligence — which some experts say could still present serious risks if left unregulated.