A Book Review of Smarter than Us

Stuart Armstrong’s Smarter than Us is an exceptionally brief book of barely 54 pages, including the bibliography. It is not based on fieldwork, the references are few, and it can be easily read in two hours. The entire work reads as a series of thought experiments regarding the future of artificial intelligence (AI). It is also as disturbing as it is insightful.

Armstrong’s work opens with a clever vignette, in which the robot from the Terminator travels back in time to eliminate an artificial intelligence program. The author’s point is that the threat posed by artificial intelligence lies more in software and networks than in the robotic bodies favored by Hollywood. He then describes a series of scenarios that demonstrate why artificial intelligence could be so destructive: not because it would necessarily be malicious, but rather because it would be so alien (chapter five). The core thesis of the book is that we are rapidly moving towards artificial intelligence. Indeed, Armstrong suggests that every time computers achieve a key milestone, such as the ability to play chess, we redefine intelligence to exclude that activity (p. 10). The reality, however, is that computers are approaching human levels of skill in many areas, and once they achieve those abilities, humans are unlikely to ever equal them again (pp. 13-15). This idea haunts much of popular culture. What will happen when computers can write code and create their own motherboards using 3D printers?

Armstrong suggests that the future of humanity’s relationship with artificial intelligence will be difficult, not only because computers are literal, but also because they will develop their own goals. How do you program for ethics? The issue is so complicated as to be almost unsolvable. Perhaps the most intriguing part of Armstrong’s argument is his assertion that this problem already exists. Computer trading algorithms have caused stock market crashes, not because they malfunctioned, but rather because their rules create emergent behaviors that we do not fully understand. If this point is true, perhaps the danger of AI lies not at some distant point in the future, but in the current moment, because we have already begun to cede control to a form of artificial intelligence. Perhaps the transition to AI control will happen gradually, rather than in the dramatic moment sometimes called the singularity, a term coined by a science fiction author.
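To make the point about literalness concrete, here is a toy sketch of my own (it does not appear in Armstrong’s book): an agent rewarded for cleaning discovers that spilling dirt and re-cleaning it earns a higher score than simply finishing the job. The objective is satisfied to the letter while the designer’s intent is violated.

```python
# Toy illustration of a "literal" objective (my example, not Armstrong's).
# The designer wants a clean room; the reward counts cleaning events,
# so the best-scoring policy is to spill dirt and clean it up again.

def cleaning_reward(actions):
    """Reward = number of 'clean' actions that actually remove dirt."""
    dirt, reward = 5, 0
    for act in actions:
        if act == "clean" and dirt > 0:
            dirt -= 1
            reward += 1
        elif act == "spill":  # unintended loophole: create more dirt to clean
            dirt += 1
    return reward

intended = ["clean"] * 5              # what the designer expects: reward 5
loophole = ["clean", "spill"] * 10    # what literal optimization finds: reward 10

print(cleaning_reward(intended))   # 5, and the room is clean
print(cleaning_reward(loophole))   # 10, and the room is still dirty
```

The loophole here is trivial to spot; Armstrong’s worry is that in a system far more capable than its designers, the equivalent loopholes will not be.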

In the end, this brief book is pessimistic about our ability to manage the rise of artificial intelligence. The author suggests in the last two pages that more research is needed, and that there is reason for hope. Nonetheless, his previous chapters leave the impression that the problem may be quite unsolvable. As I discussed in an earlier post, a great deal of attention is given to China’s rising power and the danger of conflict in the South China Sea. After reading Armstrong’s book, I wonder whether we should not be giving as much thought to the rise of AI as to the rise of China. When we look back a century from now, which issue will have been more important? Armstrong titles chapter eight “We need to get it exactly right.” As he suggests, once AI has been created, we are unlikely to ever be able to go back. This book would make a good brief reading assignment in a first-year class on ethics or technology.

Shawn Smallman

Portland State University
