Draft:ControlAI
Non-profit organisation mitigating risks from advanced artificial intelligence
ControlAI is a non-profit organisation dedicated to mitigating the existential risks posed by advanced artificial intelligence (AI). Founded by Andrea Miotti, the organisation campaigns for a global prohibition on the development of artificial superintelligence (ASI) to ensure that humanity remains in control of its future.
History and mission
ControlAI was founded by Andrea Miotti, who previously served as Head of Strategy and Governance at the AI safety startup Conjecture.[1] The organisation operates in the United Kingdom, the United States, Canada, and Germany. Its primary objective is to prevent the development of artificial superintelligence (AI systems more capable than all of humanity combined), which ControlAI and a number of leading AI scientists warn could lead to human extinction. To this end, ControlAI advocates for binding global regulations, runs public awareness campaigns, and engages directly with policymakers, helping thousands of citizens take civic action.[2]
Political engagement
United Kingdom
ControlAI has been highly active in UK politics, briefing over 150 cross-party parliamentarians as well as the Prime Minister's office, to which it has also presented a draft AI bill. In late 2025, more than 100 UK lawmakers backed ControlAI's campaign calling for strict AI regulation to prevent extinction risks.[3][4] This lobbying occurred amid criticism of the UK's delayed AI policy and calls for the Science and Technology Secretary, Liz Kendall, to deliver robust oversight.[5][6][7][8]
In January 2026, the House of Lords held multiple debates, supported by preparatory briefings, on topics championed by ControlAI, including "AI Systems: Risks" on 8 January and "Superintelligent AI" on 29 January. The accompanying documents questioned whether the development of autonomous and superintelligent AI systems should be halted.[9][10][11][12]
Canada
ControlAI has engaged heavily with the Parliament of Canada. In early 2026, CEO Andrea Miotti and ControlAI’s Canadian representative, Samuel Buteau, provided expert testimony to parliamentary committees regarding the existential risks of advanced AI models.[13][14]
United States
The organisation has monitored and warned against the international race toward artificial general intelligence (AGI), particularly between the United States and China.[15] ControlAI has specifically highlighted risks associated with President Donald Trump's "Genesis Mission", an executive initiative aimed at deploying AI across the US scientific ecosystem to counter Chinese influence.[16]
Public campaigns and media presence
ControlAI and its leadership are frequent contributors to public discourse on AI safety. Miotti has authored numerous op-eds, including one in TIME calling for a global movement to prohibit superintelligent AI and one in The Progressive arguing against the integration of AI into nuclear command and warfare.[17][18] The organisation has also supported calls for a moratorium on superintelligence research.[19]
ControlAI has frequently pointed to the resignations of top AI researchers who quit with public warnings as evidence of the industry's perilous trajectory.[20]
Media outlets have frequently featured ControlAI's commentary on the societal impacts of AI, including concerns about job displacement,[21] international regulatory perspectives,[22] and the broader implications of AI failures and biases.[23] Miotti has represented ControlAI in interviews on platforms such as BBC Newshour and BBC Radio.[24][25]
