P(doom)

Probability of existentially catastrophic outcomes in AI

In AI safety, P(doom) is the probability of existentially catastrophic outcomes (so-called "doomsday scenarios") as a result of artificial intelligence.[1][2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial intelligence.[3]

Originating as a shorthand for communication in the rationalist community and among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[4] and Yoshua Bengio[5] began to warn of the risks of AI.[6] In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years. The mean value from the responses was 14.4%, with a median value of 5%.[7]
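The gap between the survey's mean (14.4%) and median (5%) indicates a right-skewed distribution: most respondents gave low estimates, while a minority of very high estimates pulled the mean up. A minimal sketch with hypothetical, illustrative responses (not the actual 2023 survey data) shows the effect:

```python
import statistics

# Hypothetical probability estimates in percent, chosen to illustrate skew;
# these are NOT the real survey responses.
responses = [0, 1, 2, 5, 5, 5, 10, 20, 80, 95]

# A few large outliers dominate the mean but leave the median untouched,
# which is how a mean well above the median can arise.
mean = statistics.mean(responses)      # 22.3
median = statistics.median(responses)  # 5.0
print(f"mean={mean}%, median={median}%")
```

This is why summaries of such surveys often report both statistics: the median reflects the "typical" respondent, while the mean is sensitive to the tail of high estimates.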

Notable P(doom) values

Name P(doom) Notes
Dario Amodei 10–25%[8] CEO of Anthropic
Marc Andreessen 0%[9] Co-founder of venture capital firm Andreessen Horowitz
Sam Altman >0%[10] CEO of OpenAI
Yoshua Bengio 50%[3][Note 1] Computer scientist, scientific director of the Montreal Institute for Learning Algorithms, and the most-cited living scientist
Grady Booch c. 0%[1][Note 2] American software engineer
Vitalik Buterin 12%[11] Co-founder of Ethereum
Paul Christiano 50%[12] Head of research at the US AI Safety Institute
Andrew Critch 85%[13] Founder of the Center for Applied Rationality
Lex Fridman 10%[14] American computer scientist and host of Lex Fridman Podcast
Demis Hassabis >0%[15] Co-founder and CEO of Google DeepMind and Isomorphic Labs and 2024 Nobel Prize laureate in Chemistry
Dan Hendrycks >80%[1][Note 3] Director of Center for AI Safety
Geoffrey Hinton 10–20% (all-things-considered); >50% (independent impression)[16] "Godfather of AI" and 2024 Nobel Prize laureate in Physics
Holden Karnofsky 50%[17] Executive Director of Open Philanthropy
Lina Khan c. 15%[6] Former chair of the Federal Trade Commission
Daniel Kokotajlo 70–80%[18] AI researcher and founder of AI Futures Project, formerly of OpenAI
Connor Leahy 90%+[19] German-American AI researcher and co-founder of EleutherAI
Yann LeCun <0.01%[20][Note 4] Chief AI Scientist at Meta
Shane Legg c. 5–50%[21] Co-founder and Chief AGI Scientist of Google DeepMind
Jan Leike 10–90%[1] AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI
Benjamin Mann 0–10%[22] Co-founder of Anthropic
Emad Mostaque 50%[23] Co-founder of Stability AI
Zvi Mowshowitz 70%[24] Writer on artificial intelligence, director on the board of the Center for Applied Rationality, former competitive Magic: The Gathering player
Elon Musk c. 10–30%[25] Businessman and CEO of X, Tesla, and SpaceX
Casey Newton 5%[1] American technology journalist
Toby Ord 10%[26] Australian philosopher and author of The Precipice
Emmett Shear 5–50%[6] Co-founder of Twitch and former interim CEO of OpenAI
Nate Silver 5–10%[27] Statistician, founder of FiveThirtyEight
Max Tegmark >90%[28] Swedish-American physicist, machine learning researcher, and author, best known for theorising the mathematical universe hypothesis and co-founding the Future of Life Institute.
Roman Yampolskiy 99.9%–99.999999%[29][30][Note 5] Latvian computer scientist, formerly a research advisor of the Machine Intelligence Research Institute, and an AI safety fellow of the Foresight Institute
Eliezer Yudkowsky >95%[1] Founder of the Machine Intelligence Research Institute, author of If Anyone Builds It, Everyone Dies.

Criticism

There has been some debate about the usefulness of P(doom) as a term, in part due to the lack of clarity about whether or not a given prediction is conditional on the existence of artificial general intelligence, the time frame, and the precise meaning of "doom".[6][31]

In 2024, Australian rock band King Gizzard & the Lizard Wizard launched their new label, named p(doom) Records.[32]

Notes

  1. Based on an estimated "50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale."
  2. Equivalent to "P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me)".
  3. Up from ~20% 2 years prior.
  4. "Less likely than an asteroid wiping us out".
  5. Within the next 100 years.
