TY - BOOK
ID - 64931059
TI - Artificial Superintelligence: Coordination & Strategy
AU - Yampolskiy, Roman
AU - Duettmann, Allison
PY - 2020
SN - 3039218549
SN - 3039218557
PB - MDPI - Multidisciplinary Digital Publishing Institute
DB - UniCat
KW - strategic oversight
KW - multi-agent systems
KW - autonomous distributed system
KW - artificial superintelligence
KW - safe for design
KW - adaptive learning systems
KW - explainable AI
KW - ethics
KW - scenario mapping
KW - typologies of AI policy
KW - artificial intelligence
KW - design for values
KW - distributed goals management
KW - scenario analysis
KW - Goodhart’s Law
KW - specification gaming
KW - AI Thinking
KW - VSD
KW - AI
KW - human-in-the-loop
KW - value sensitive design
KW - future-ready
KW - forecasting AI behavior
KW - AI arms race
KW - AI alignment
KW - blockchain
KW - artilects
KW - policy making on AI
KW - distributed ledger
KW - AI risk
KW - Bayesian networks
KW - artificial intelligence safety
KW - conflict
KW - AI welfare science
KW - moral and ethical behavior
KW - scenario network mapping
KW - policymaking process
KW - human-centric reasoning
KW - antispeciesism
KW - AI forecasting
KW - transformative AI
KW - ASILOMAR
KW - judgmental distillation mapping
KW - terraforming
KW - pedagogical motif
KW - AI welfare policies
KW - superintelligence
KW - artificial general intelligence
KW - supermorality
KW - AI value alignment
KW - AGI
KW - predictive optimization
KW - AI safety
KW - technological singularity
KW - machine learning
KW - holistic forecasting framework
KW - simulations
KW - existential risk
KW - technology forecasting
KW - AI governance
KW - sentiocentrism
KW - AI containment
UR - https://www.unicat.be/uniCat?func=search&query=sysid:64931059
AB - Attention in the AI safety community has increasingly come to include strategic considerations of coordination between relevant actors in the fields of AI and AI safety, in addition to the steadily growing body of work on the technical aspects of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination among those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map, informing those working on AI coordination about other promising efforts. While this book focuses on coordination in AI safety, coordination is also important to most other known existential risks (e.g., biotechnology risks), as well as to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by fostering an overall civilizational framework that increases robustness, resiliency, and antifragility.
ER -