Classification of global catastrophic risks connected with artificial intelligence

  • Original Article
  • Published in AI & SOCIETY

Abstract

A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, distinct types of possible catastrophe dominate. Our classification demonstrates that the field of AI risks is diverse and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, (1) before it starts self-improvement, (2) during its takeoff, when it uses various instruments to escape its initial confinement, or (3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned or feature flawed friendliness. AI could also halt at later stages of its development, due either to technical glitches or to ontological problems. Overall, we identified several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no single simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.



Acknowledgements

We would like to thank Roman Yampolskiy and Seth Baum for the interesting ideas they contributed to this article. This article represents the views of the authors and does not necessarily represent the views of the Global Catastrophic Risk Institute or the Alliance to Feed the Earth in Disasters. No external sources of funding were used for this work.

Author information

Corresponding author

Correspondence to Alexey Turchin.


About this article

Cite this article

Turchin, A., Denkenberger, D. Classification of global catastrophic risks connected with artificial intelligence. AI & Soc 35, 147–163 (2020). https://doi.org/10.1007/s00146-018-0845-5
