The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
This book is a great tour of the intelligence problem we face.
It is quite opinionated, and as a first book in this space it may lead you to take too much of what Nick Bostrom says as consensus opinion. Regardless, he introduces and summarizes the main points well enough to give one a good understanding of the space. The subtitle is an apt description of the book: what superintelligence is and how we might get there, its problems and pitfalls, and thoughts and ideas on what we can do to try to avoid them.
The book also, in some ways, makes the implied argument that it is our moral imperative to work on this problem first!
Fancied Harari’s [b:Homo Deus|31138556|Homo Deus A History of Tomorrow|Yuval Noah Harari|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1468760805l/31138556.SY75.jpg|45087110]? If you’re up for a deeper conversation on the future of brain emulation and machine intelligence, be sure to read Superintelligence.
Nick Bostrom (1973) discusses the likelihood and risks of superintelligence – any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest – from a technological and philosophical point of view. Of course, even the director of the Future of Humanity Institute at the University of Oxford cannot predict the future (‘many of the points made in this book are probably wrong’), but he does set out interesting thoughts. As a non-technical layman, I found it difficult to assess the probability of certain scenarios, but they gave me some insight nonetheless.
There are different paths to enhance intelligence, such as brain emulation or biological cognition, but Bostrom is most keen on machine intelligence as the road to superintelligence; the possibility of a singularity runs like a thread through his narrative. Bostrom is quick to explain we should not try to anthropomorphise in any way, as too often happens in science fiction.
There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs.
The author then describes different scenarios in case a non-human superintelligence emerges. According to his orthogonality thesis, such an intelligence could have a huge range of goals. The next question would therefore be how a superintelligence could still be kept under control. Bostrom distinguishes between capability control methods and motivation selection methods. His confidence in the former kind is low, so he delves deeper into the latter. Science fiction readers will remember Isaac Asimov’s laws ([b:I, Robot|40226738|I, Robot (Robot, #0.1)|Isaac Asimov|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1536494104l/40226738.SY75.jpg|1796026]), as Bostrom does himself:
Embarrassingly for our species, Asimov’s laws remained state-of-the-art for over half a century: this despite obvious problems with the approach, some of which are explored in Asimov’s own writings (Asimov probably having formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories).
In the final chapters, Bostrom goes more into detail. I was not necessarily interested in his take on the future of our economy or labour market, but I certainly appreciated the author’s thoughts on the value-loading problem and the impersonal perspective, which includes considering future (unborn) generations. Bostrom’s ideas on the common good principle and the importance of collaboration bring to mind Mariana Mazzucato’s political economy ([b:Mission Economy|55742686|Mission Economy A Moonshot Guide to Changing Capitalism|Mariana Mazzucato|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1603166951l/55742686.SY75.jpg|73121548]).
Finally, Superintelligence offers a nice background for reading science fiction stories. I have a more technical understanding now of novels like [b:2001: A Space Odyssey|46026742|2001 A Space Odyssey (Space Odyssey, #1)|Arthur C. Clarke|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1558985212l/46026742.SY75.jpg|208362] and [b:Klara and the Sun|55111243|Klara and the Sun|Kazuo Ishiguro|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1622500451l/55111243.SY75.jpg|84460796].
Next on my list is Mustafa Suleyman’s [b:The Coming Wave|90590134|The Coming Wave Technology, Power, and the Twenty-first Century's Greatest Dilemma|Mustafa Suleyman|https://i.gr-assets.com/images/S/compressed.photo.goodreads.com/books/1685351813l/90590134.SY75.jpg|114865406].
[My review of the 12min summary]
This is a good look at the history and future of artificial intelligence. It is especially concerned with the possible ramifications of superintelligence, which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."
Once a superintelligent AI is created, will we be able to control it? Not if it's smarter than us. The dystopian futures of science fiction are cautionary tales. We read those books, watch those movies, then rush madly forward developing these AIs anyway. When the robots start their extermination campaign, we won't be able to say that we weren't warned.
Final Notes (quoted from 12min)
"Recommended by everyone from Bill Gates to Elon Musk, 'Superintelligence' is a really outstanding book that covers so much ground in its 400 densely populated pages, there are really just a few AI-related books you’ll need to read besides it.
"Moreover, it is a very timely book on a very timely subject. 'If this book gets the reception that it deserves,' wrote mathematician Olle Häggström in a review, 'it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever.'
"It’s your job to make sure Bostrom’s book will earn this standing."
An excellent overview of a still barely-explored landscape of concerns about runaway artificial intelligence. Bostrom covers many facets of this conundrum in a thorough way -- neither hiding nor hiding behind the uncertainties inevitable to such an investigation.
The idea itself and the implications are very sci-fi and futurist, but the reflection on it seems as much about the present nature of human society as it does about prediction.
Excellent analysis of the types of intelligence. The main point of the book is how we might get to superintelligence and what the dangers of greater-than-human artificial intelligence are. Although it seems to me superintelligence is very far away, the author seems to believe it is nearer.
In comparison to Gödel, Escher, Bach, another famous book on intelligence, this one is a highly systematic examination of the consequences of superintelligence, rather than of how intelligence works, which is (part of) the subject of GEB.