If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All, Har...
US $25.96
Approx. CHF 20.76
Condition:
Like New
A book that looks new but has been read. The cover shows no visible wear. For hardcovers, the dust jacket is included (if applicable). All pages are intact, with no creased or torn pages and no underlining, highlighting, or notes in the text or margins. The inside cover may show minimal wear. Very minimal wear overall. See the seller's listing for full details and a description of any imperfections.
2 available / 13 sold
Shipping:
Free USPS Media Mail™.
Located in: Jessup, Maryland, USA
Delivery:
Estimated between Tue, Oct 21 and Mon, Oct 27 to 94104 if payment is received today
Returns:
14-day returns. Buyer pays for return shipping. If you use an eBay shipping label, its cost will be deducted from your refund.
Payments:
Shop with confidence
The seller assumes all responsibility for this listing.
eBay item number: 357564680146
Item specifics
- Condition: Like New
- Book Title: If Anyone Builds It, Everyone Dies : Why Superhuman AI Would Kill
- ISBN: 9780316595643
About this product

Product Identifiers
- Publisher: Little Brown & Company
- ISBN-10: 0316595640
- ISBN-13: 9780316595643
- eBay Product ID (ePID): 27075653312

Product Key Features
- Number of Pages: 272 Pages
- Language: English
- Publication Name: If Anyone Builds It, Everyone Dies : Why Superhuman AI Would Kill Us All
- Publication Year: 2025
- Subject: Intelligence (AI) & Semantics, Public Policy / Science & Technology Policy
- Type: Textbook
- Subject Area: Political Science, Computers
- Format: Hardcover

Dimensions
- Item Weight: 16.4 Oz
- Item Length: 9.6 in
- Item Width: 6.4 in

Additional Product Features
- Intended Audience: Trade
Reviews
- "The definitive book about how to take on 'humanity's final boss'--the hard-to-resist urge to develop superintelligent machines--and live to tell the tale." --Jaan Tallinn, philanthropist, cofounder of the Center for the Study of Existential Risk, and cofounder of Skype
- "A clarion call...Everyone with an interest in the future has a duty to read what [Yudkowsky] and Soares have to say." --The Guardian
- "A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike." --Scott Alexander, founder, Astral Codex Ten
- "A serious book in every respect. In Yudkowsky and Soares's chilling analysis, a super-empowered AI will have no need for humanity and ample capacity to eliminate us. If Anyone Builds It, Everyone Dies is an eloquent and urgent plea for us to step back from the brink of self-annihilation." --Fiona Hill, former senior director, White House National Security Council
- "Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous." --Emmett Shear, former interim CEO of OpenAI
- "If Anyone Builds It, Everyone Dies isn't just a wake-up call; it's a fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches: unchecked superhuman AI poses an existential threat. It's a sobering reminder that humanity's future depends on what we do right now." --Mark Ruffalo, actor
- "Essential reading for policymakers, journalists, researchers, and the general public. A masterfully written and groundbreaking text, If Anyone Builds It, Everyone Dies provides an important starting point for discussing AI at all levels." --Bart Selman, professor of computer science, Cornell University
- "Once only the realm of sci-fi, superintelligence is almost at our doorstep. We don't know for sure what is going to happen when it arrives, but I'm glad we at least have this book raising the tough questions that need to be asked while the rest of the industry buries its head in the sand." --Liv Boeree, philanthropist and poker champion
- "Everyone should read this book. There's a 70% chance that you--yes, you reading this right now--will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance." --Daniel Kokotajlo, OpenAI whistleblower and executive director, AI Futures Project
- "If Anyone Builds It, Everyone Dies is an urgent, well-reported and persuasive warning about the grave danger humanity faces from reckless AI development." --Alex Winter, actor and filmmaker
- "Fascinating and downright frightening...argues that AI companies' unchecked charge toward superhuman AI will be disastrous, lays out some theoretical scenarios detailing how it could lead to our annihilation and suggests what might be done to change our doomed trajectory...[Yudkowsky and Soares] make a pretty convincing case that we are playing with fire." --AARP
- "[Yudkowsky and Soares's] diagnosis of AI's potential pitfalls evinces a sustained engagement with the subject...they have a commendable willingness to call BS on big Silicon Valley names, accusing Elon Musk and Yann LeCun, Meta AI's chief scientist, of downplaying real risks." --San Francisco Chronicle
- "The most important book I've read for years: I want to bring it to every political and corporate leader in the world and stand over them until they've read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster." --Stephen Fry, actor
- "Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong." --Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge
- "If you want to be able to assess the risk posed by AI, you will need to understand the worst-case scenario. This book is an exceptionally lucid and rigorous account of how very wrong humankind's quest for a general AI could go. You have been warned!" --Christopher Clark, Regius Professor of History, University of Cambridge
- "The best no-nonsense, simple explanation of the AI risk problem I've ever read." --Yishan Wong, former CEO of Reddit
- "A fire alarm for anyone shaping the future. Whether one agrees with its conclusions or not, the book demands serious attention and reflection." --Booklist (starred review)
- "The authors present in clear and simple terms the dangers inherent in 'superintelligent' artificial brains that are 'grown, not crafted' by computer scientists. A quick and worthwhile read for anyone who wants to understand and participate in the ongoing debate about whether and how to regulate AI." --Joan Feigenbaum, Grace Murray Hopper Professor of Computer Science, Yale University
- "You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact." --Grimes, musician
- "This book outlines a thought-provoking scenario of how the emerging risks of AI could drastically transform the world. Exploring these possibilities helps surface critical risks and questions we cannot collectively afford to overlook." --Yoshua Bengio, Full Professor, Université de Montréal; Co-President and Scientific Director, LawZero; Founder and Scientific Advisor, Mila - Quebec AI Institute
- "A shocking book that captures the insanity and hubris of efforts to create thinking machines that could kill us all. But it's not over yet. As the authors insist: 'where there's life, there's hope.'" --Dorothy Sue Cobble, Distinguished Professor Emerita, Labor Studies, Rutgers University
- "A stark and urgent warning delivered with credibility, clarity, and conviction, this provocative book challenges technologists, policymakers, and citizens alike to confront the existential risks of artificial intelligence before it's too late. Essential reading for anyone who cares about the future." --Emma Sky, senior fellow, Yale Jackson School of Global Affairs
- "If Anyone Builds It, Everyone Dies is a sharp and sobering read. As someone who has spent years pushing for responsible AI policy, I found it to be an essential warning about what's at stake if we get this wrong. Yudkowsky and Soares make the case with clarity, urgency, and heart." --Joely Fisher, National Secretary-Treasurer, SAG-AFTRA
- "The most important book of the decade. This captivating page-turner, from two of today's clearest thinkers, reveals that the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking." --Max Tegmark, author of Life 3.0: Being Human in the Age of AI
- "If we build superintelligent machines without guardrails, we're not just risking jobs or art, we're risking everything. This book doesn't exaggerate. It tells the truth. If we don't act, we may not get another chance." --Frances Fisher, actor
- "A.I. is coming, whether we want it or not. It's too late to stop it, but not too late to keep this handy survival guide close and start demanding real guardrails before the edges start to fray." --Patton Oswalt, actor
- "An incredibly serious issue that merits--really demands--our attention. You don't have to agree with the prediction or prescriptions in this book, nor do you have to be tech or AI savvy, to find it fascinating, accessible, and thought-provoking." --Suzanne Spaulding, former undersecretary, Department of Homeland Security
- "If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can." --Tim Urban, cofounder, Wait But Why
- "If Anyone Builds It, Everyone Dies makes a compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action." --Jon Wolfsthal, former special assistant to the president for national security affairs
Synopsis
INSTANT NEW YORK TIMES BESTSELLER

The scramble to create superhuman AI has put us on the path to extinction--but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

"May prove to be the most important book of our time."--Tim Urban, Wait But Why

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter--Eliezer Yudkowsky and Nate Soares--have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us--and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

"The best no-nonsense, simple explanation of the AI risk problem I've ever read."--Yishan Wong, former CEO of Reddit
Seller's item description
About this seller
Great Book Prices Store
97.5% positive feedback • 1.4M items sold
Registered as a business seller
Seller feedback (397,849)
This item (4)
All items (397,849)
- Automated feedback from eBay - Feedback left by buyer. Past month. Order delivered on time with no issues
- Automated feedback from eBay - Feedback left by buyer. Past month. Order delivered on time with no issues
- e***s (1004) - Feedback left by buyer. Past month. Verified purchase. "the book itself was fine, but the contents was a little over my head."
- Automated feedback from eBay - Feedback left by buyer. Past month. Order delivered on time with no issues
- Automated feedback from eBay - Feedback left by buyer. Past month. Order delivered on time with no issues
- Automated feedback from eBay - Feedback left by buyer. Past month. Order delivered on time with no issues