A New York Times bestseller

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful – possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast track of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom’s work nothing less than a reconceptualization of the essential task of our time.
Nick Bostrom
Oxford University Press; Reprint edition (May 1, 2016)
390 pages
English
978-0198739838
File Size: 82 MB
Available File Formats: PDF, AZW3, DOCX, EPUB, MOBI, TXT, Kindle audiobook, or Audio CD (several formats can be converted into one another)
Language: English, French, Italian, Spanish, German, Chinese
“I highly recommend this book.” –Bill Gates

“Terribly important. Groundbreaking. Extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines – engineering, natural sciences, medicine, social sciences and philosophy – into a comprehensible whole. If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson’s Silent Spring from 1962, or ever.” –Olle Häggström, Professor of Mathematical Statistics

“Nick Bostrom’s excellent book Superintelligence is the best thing I’ve seen on this topic. It is well worth a read.” –Sam Altman, President of Y Combinator and Co-Chairman of OpenAI

“Worth reading. We need to be super careful with AI. Potentially more dangerous than nukes.” –Elon Musk, Founder of SpaceX and Tesla

“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.” –Stuart Russell, Professor of Computer Science, University of California, Berkeley

“This superb analysis by one of the world’s clearest thinkers tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?” –Professor Max Tegmark, MIT

“Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking.” –The Economist

“There is no doubting the force of [Bostrom’s] arguments. The problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.” –Clive Cookson, Financial Times

“Those disposed to dismiss an ‘AI takeover’ as science fiction may think again after reading this original and well-argued book.” –Martin Rees, Past President, Royal Society

“Every intelligent person should read it.” –Nils Nilsson, Artificial Intelligence Pioneer, Stanford University

About the Author

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he leads the Future of Humanity Institute as its founding director. (The FHI is a multidisciplinary university research center; it is also home to the Center for the Governance of Artificial Intelligence and to teams working on AI safety, biosecurity, macrostrategy, and various other technological and foundational questions.) He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. Bostrom’s widely influential work, which traverses philosophy, science, ethics, and technology, has illuminated the links between our present actions and long-term global outcomes, thereby casting a new light on the human condition. He is a recipient of a Eugene R. Gannon Award and has been listed twice on Foreign Policy’s Top 100 Global Thinkers list. He was included on Prospect’s World Thinkers list as the youngest person in the top 15.
His writings have been translated into 28 languages, and there have been more than 100 translations and reprints of his works. He is a repeat TED speaker and has done more than 2,000 interviews with television, radio, and print media. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the doom and gloom of his Swedish roots.
The title looked good, and so did the cover. Some of the reviewers are impressive. I work in large IT systems, AI, robotics, and devices. I found this book to be too much “What if this happened…” Watch the movies I, Robot, Transformers, Terminator, and 1984, then summarize them, and you have this book. The book’s point is that we have to watch out for the time (now?) when humans create an intelligent system that dwarfs us and takes over. I didn’t read anything new that most sci-fi movies haven’t covered related to technology ethics. However, the way this is written, there is so much what-if and speculation that reading it becomes tiresome. Sorry, didn’t enjoy the book.
Artificial general intelligence is a ghost story. Why would such an intrinsically human thing we can’t define but call “intelligence” emerge in machines? This fiction stems from an intrinsic belief that improvements to technology move us along some kind of “spectrum of intelligence.” There’s no basis for it.

Artificial intelligence is a fraudulent hoax – or in the best cases it’s a hyped-up buzzword that confuses and deceives. The much better, more precise term would instead usually be machine learning – which is genuinely powerful and everyone oughta be excited about it.

It’s time for the term AI to be “terminated”!

Eric Siegel, Ph.D., Author, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die
Hand-wavy, high-level descriptions filled with unsubstantiated “axioms” and a thin layer of mathematical logic that was laughably glued together. The book had about 75 pages of valid content and 325 pages of blatantly overly verbose treatments of simplistic tenets. I have to say the editor should have taken out the glib generalities that were a substitute for thought and substance. In general it was embarrassingly overblown and overly speculative, with not a clue as to how modern software and semiconductor computer architecture development is conducted. A big disappointment; you should not have to fight to find bits of content that are worth your time.

There was quite a bit of content pulled directly from several movies: War Games, Westworld, 2001: A Space Odyssey, The Matrix, Eagle Eye, Saturn 3, The Terminator, I, Robot, and, most egregious, the Star Trek episode “Spock’s Brain”, which was actually voted the worst episode in all of Star Trek (reference the 2016 Star Trek convention in Las Vegas). Too bad there was not enough material lifted from “Colossus: The Forbin Project” (1970).
I picked this book up because I have a kid at Caltech majoring in AI programming and machine learning. He seems to see only upside, no real concerns about 99% of the population being put out of work, and what I believe is inadequate apprehension about what could go wrong. Mom is a huge fan of Stephen Hawking, and he was more than a bit apprehensive about the potential problems with self-learning machines. Most of the books and articles I have read on the topic are cursory or naive. Nick Bostrom’s book is fairly comprehensive and in depth. I am enjoying it as much as an excellent read in the philosophy of science as I am for his expanding the boundaries of the conversation, indeed broaching it in many areas. I honestly do not know whether he says everything which needs to be said, but he has clearly thought it through and done a good deal of exploring, consulting, conversing, and collaborating. It is far and away the best book I have read on the topic (though there are some good pieces in MIT Technology Review as well).

This is a book which is important and timely. We must seriously consider and weigh the potential for harm as well as good before creating a monster. While there may be areas which he has missed, I feel that when I read about a brute-force approach to building human-level AI by recreating a brain at the quantum level using Schrödinger’s equation, the man is clearly pushing the boundaries. If nothing else it is a very good start to an important conversation.

I picked this up because I was considering sending a copy to my son, but read it first because he is a busy guy and chooses his side reading carefully. There are books and articles I might mention or even recommend, and others I tell him not to waste his time on; this is one I will be sending him (though I would be very, very surprised if someone at Caltech did not broach …all of what is contained here). I will let him determine if it is redundant. It is well written and thorough, and also very approachable. He says in the prologue that overly technical sections may be skipped without sacrificing any meaning. I have not encountered one I needed to skip, and have, in fact, very much enjoyed the level of discourse.

Read it if you are in the field to make sure you are covering all the bases. Read it if you are a scientist, philosopher, or engineer to enjoy some very good writing. Read it if you are just encountering AI and want to quickly get up to speed on the issues. It is not only a book I would recommend, but have, to anyone who would listen 😉
The first chapter is an interesting, concise history of AI. The following chapters, though… I have to say that if anything, Bostrom’s writing reminds me of theology. It’s not lacking in rigor or references. Bostrom seems highly intelligent and well-read. The problem (for me) is rather that the main premise he starts with is one that I find less than credible. Most of the book boils down to “Let’s assume that there exists a superintelligence that can basically do whatever it wants, within the limits of the laws of physics. With this assumption in place, let’s then explore what consequences this could have in areas X, Y, and Z.” The best Bostrom can muster in defense of his premise that superintelligence will (likely) be realized (sometime in the future) are the results of various surveys of AI researchers about when they think human-level AI and superintelligence will be achieved. These surveys don’t yield any specific answer as to when human-level AI will be attained (or at least it’s not reported), and Bostrom is evasive as to what his own view is. However, Bostrom seems to think that if you don’t commit to any particular timeline on this question, you can assume that at some point human-level AI will be attained. Now, once human-level AI is achieved, it’ll be but a short step to superintelligence, says Bostrom. His argument as to why this transition period should be short is not too convincing. We are basically told that the newly developed human-level AI will soon engineer itself (don’t ask exactly how) to be so smart that it can do stuff we can’t even begin to comprehend (don’t ask how we can know this), so there’s really no point in trying to think about it in much detail. The AI Lord works in mysterious ways!

With these foundations laid down, Bostrom can then start his speculative tour-de-force that goes through various “existential risk” scenarios and the possibilities of preventing or mitigating them, the economics of AI/robot societies, and various ethical issues relating to AI. I found the chapters on risks and AI societies to be pure sci-fi with even less realism than “assume spherical cows”. The chapters on ethics and value acquisition did, however, contain some interesting discussion.

All in all, throughout the book I had an uneasy feeling that the author is trying to trick me with a philosophical sleight of hand. I don’t doubt Bostrom’s skills with probability calculations or formalizations, but the principle “garbage in – garbage out” applies to such tools as well. If one starts with implausible premises and assumptions, one will likely end up with implausible conclusions, no matter how rigorously the math is applied. Bostrom himself is very aware that his work isn’t taken seriously in many quarters, and at the end of the book he spends some time trying to justify it. He makes some self-congratulatory remarks to assure sympathetic readers that they are really smart, smarter than their critics (e.g. “[a]necdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution” [p. 376]), suggests that his own pet project is the best way forward in philosophy and should be favored over other approaches (“We could postpone work on some of the eternal questions for a little while […] in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors” [p. 315]), and ultimately claims that “reduction of existential risk” is humanity’s principal moral priority (p. 320). Whereas most people would probably think that concern for the competence of our successors should push us towards making sure that the education we provide is both of high quality and widely available and that our currently existing and future children are well fed and taken care of, and that concern for existential risk should push us to fund action against poverty, disease, and environmental degradation, Bostrom and his buddies at their “extreme end of the intelligence distribution” think this money would be better spent funding fellowships for philosophers and AI researchers working on the “control problem”. Because, if you really think about it, what are millions of actual human lives cut short by hunger or disease or social disarray worth, when in some possible future the lives of 10^58 human emulations could be at stake? That the very idea of these emulations currently exists only in Bostrom’s publications is no reason to ignore the enormous moral weight they should have in our moral reasoning!

Despite the criticism I’ve given above, the book isn’t necessarily an uninteresting read. As a work of speculative futurology (is there any other kind?) or informed armchair philosophy of technology, it’s not bad. But if you’re looking for an evaluation of the possibilities and risks of AI that starts from our current state of knowledge – no magic allowed! – then this is definitely not the book for you.
It was the persistent recommendations heard while listening to Sam Harris’ fine podcasts that eventually convinced me to read this book.

Nick Bostrom spells out the dangers we potentially face from a rogue, or uncontrolled, superintelligence unequivocally: we’re doomed, probably.

This is a detailed and interesting book, though 35% of it is footnotes, bibliography, and index. This should be a warning that it is not solely, or even primarily, aimed at soft-science readers. Interestingly, a working knowledge of philosophy is more valuable for unpacking the most utility from this book than knowledge of computer programming or science. But then, you are not going to get a book on the existential threat of Thomas the Tank Engine from a Professor in the Faculty of Philosophy at Oxford University. A good understanding of economic theory would also help any reader.

Bostrom lays out in detail the two main paths to machine superintelligence, whole brain emulation and seed AI, and then looks at the transition that would take place from smart narrow computing to super-computing and high machine intelligence.

At times the book is repetitive and keeps making the same point in slightly different scenarios. It was almost as if he were just cutting and shunting set phrases and terminology into slightly different ideas.

Overall it is an interesting and thought-provoking book at whatever level the reader interacts with it, though the text would have been improved by more concrete examples so the reader can better flesh out the theories. “Everything is vague to a degree you do not realise till you have tried to make it precise,” the book quotes.
This book goes well beyond its remit and stretches off into fanciful flights of whimsy far, far into the future. This completely put me off. It’s also terribly written. As a long-time popular science and philosophy reader, I found this book hard going; it bored me to tears and was largely uninsightful.
A clear, compelling review of the state of the art, potential pitfalls, and ways of approaching the immensely difficult task of maximising the chance that we’ll all enjoy the arrival of a superintelligence. An important book showcasing the work we collectively need to do BEFORE the fact. Given the enormity of what will likely be a one-time event, this is the position against which anyone involved in the development of AI must justify their approach, whether or not they are bound by the Official Secrets Act.

The one area in which I feel Nick Bostrom’s sense of balance wavers is in extrapolating humanity’s galactic endowment into an unlimited and eternal capture of the universe’s bounty. As Robert Zubrin lays out in his book
You can tell the author has done thorough work in studying and compiling all his thoughts and facts, but unfortunately the book ends up being really difficult to navigate for people who have a general interest in AI but lack an IT background and the appetite for all the technical details.
About Aaovo.com:
We are committed to sharing all kinds of e-books, learning resources, collections, reading notes, and impressions. The book resources on this site are collected, organized, and uploaded to cloud storage by users, in high-definition scanned and full-text versions. This site does not store the files themselves.
Description of file download format: (Note: this website is completely free)
The e-books shared on this site are all full versions, most of which have been manually proofread, with essentially no omissions. There may be multiple versions of a file; please download the format you need. If the version you need is not available, we recommend using a file format converter and reading the converted file. Scanned PDF, text PDF, EPUB, MOBI, TXT, DOCX, DOC, AZW3, ZIP, RAR, and other file formats can be opened and read normally with common readers.
Copyright Disclaimer:
This website does not store any files on its server. We only index and link to content provided by other websites. If any content infringes your copyright, please contact the content provider to have it deleted and send us an email; we will remove the relevant link or content immediately.
Download link description:
We usually use Dropbox, Microsoft OneDrive, and Google Drive to store files. We may also store backup files on other cloud content management platforms such as Amazon Cloud Drive, pCloud, MEGA, MediaFire, and Box, which are also great. You can choose whichever download link suits you.