Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

A Big Picture look at the impacts of AI

I am hesitant to recommend that people read this book: if you already find the future scary and complex, be warned that this book explores some of the more extreme elements of that future. However, if you are feeling robust, it is well worth reading to get some idea of what may lie ahead.

In essence, the book explores the field of Artificial Intelligence (AI), and looks quite deeply into where it may evolve in the medium to longer term. One of Bostrom's key points is that we are naïve to think the eventual outcome of AI is merely matching human intelligence: if and when it gets that far, it will almost certainly have the ability to move well beyond human capability, into what he calls superintelligence. He also points out that we risk using our own limited human analogies to imagine what superintelligence might look like: rather than imagining that the relationship between superintelligence and good human intelligence is like that between a clever person and a fairly dim-witted person, he says it may be more like the difference between a very clever person and a worm. And we're the worm!

One thing which impressed me about this book is that it is pretty clear and readable, especially considering it deals with highly complex issues, some technical, some philosophical. He explores what the key components of superintelligence would need to be, and the various ways in which it might evolve. One focus of the book which I found surprising is its assumption that a single superintelligence would emerge and become pre-eminent globally: Bostrom spends much of the book exploring the doomsday scenario in which this sole dominant superintelligence could literally take over our world, and even outer space, destroying humanity in the process.

He clearly regards this as a serious threat, and it is worth noting that Bostrom is highly regarded by many of the leading figures in the US technology sector. In fact, the reason I bought the book was a feature in the Harvard Business School bulletin in which a leading Silicon Valley entrepreneur described Bostrom as the world's foremost expert on the future of AI.

The book provides a lot of useful information. For example, he collates a range of forecasts about when human-level machine intelligence will be attained: the consensus is a 50% probability by 2040, and a 90% probability by 2075. He also surveys the crunchy question of how long it would take for AI to move from human-level intelligence to superintelligence: the general view is that there is a 75% probability of this happening within a further 30 years.

One benefit of the book is that it will help you see more clearly what the components of human intelligence are in the full sense. For example, he offers a table of the key capabilities a machine would need in order to achieve superintelligence. Interestingly, these include social manipulation, i.e. the ability to persuade humans, and hacking, i.e. finding and exploiting security flaws in computer systems, which could of course include hacking its way out of the control systems which humans have put around it. An equally interesting exploration in this book is how to give a superintelligence moral values, and how those values would be chosen. Developers are already encoding some degree of values into AI, and the power this gives them will only grow.

Bostrom also provides a valuable, if not always encouraging, exploration of the likely impacts on humans of non-human superintelligence. Job displacement is a major threat. He believes there could be substantial overall economic growth as a result, but there are big questions as to whether this would translate into a better quality of life for most people, or into chronic under-employment.

A recent feature in The New Yorker observes that the Chinese government is far outspending the US on AI, "with benefits for espionage and security." The current Chinese five-year plan commits $150 billion to AI; the US government spent $1.2 billion in 2016. Who might win out on superintelligence?