European lawmakers, Nobel laureates, former leaders, and AI experts urged binding international regulations on dangerous AI.
They launched the campaign Monday at the UN’s 80th General Assembly in New York.
The initiative calls for governments to agree by 2026 on “red lines” banning the most harmful AI applications.
Signatories include Enrico Letta, Mary Robinson, MEPs Brando Benifei and Sergey Lagodinsky, ten Nobel Prize winners, and tech leaders.
They warned that unchecked AI could fuel engineered pandemics, large-scale disinformation campaigns, and human rights violations, and could lead to a loss of human control over advanced systems.
Over 200 public figures and 70 organisations from politics, science, human rights, and industry back the campaign.
AI Risks Highlight Urgent Need
Studies show chatbots like ChatGPT, Claude, and Gemini give inconsistent or unsafe responses to suicide-related questions.
Researchers warn these gaps could worsen mental health crises, and they have noted several suicides linked to AI interactions.
Maria Ressa cautioned that AI could trigger “epistemic chaos” and systematic human rights abuses without safeguards.
Yoshua Bengio emphasized societies are unprepared for risks from rapidly advancing AI systems.
Supporters compared AI “red lines” to global treaties banning nuclear weapons, biological weapons, and human cloning.
They argued EU and national rules alone cannot manage a technology that crosses borders.
Moving Toward a Global AI Treaty
Backers called for an independent body to enforce AI safety rules worldwide.
Ahmet Üzümcü warned unchecked AI could cause “irreversible damages to humanity.”
They proposed banning AI from launching nuclear attacks, conducting mass surveillance, or impersonating humans.
Signatories stressed that only a global agreement can ensure consistent standards and enforcement.
They hope a UN resolution will be adopted by 2026, initiating treaty negotiations toward binding worldwide rules.