Apr 1, 2017

Thoughts on Superintelligence: Paths, Dangers, Strategies

Last week I finished up Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. Coming into the book, I'd often heard it described as the pessimist's response to Kurzweil's The Singularity is Near. I didn't really find that to be the case, though: Bostrom certainly paints some scary pictures of potential futures as artificial intelligence develops, but he doesn't deny the positive potential either. It's more that Bostrom felt the existing literature hadn't sufficiently treated the downsides (and strategies to mitigate them), so he set out to create that balance.

Getting through the read was a slog. It's probably one of the longest stretches of time (a couple of months) I've spent on a book and still finished it. It's a topic I'm really passionate about too, more's the pity. The reality is that it's hard to relate to the topics Bostrom digs into deeply enough to justify the level of detail he goes into. He spends 100 pages examining the different structures of AGI systems and their relative merits and downsides vis-à-vis their capabilities and their potential to destroy humanity. I would be fascinated by the blog post. 100 pages is tough.

That being said, I don't think Superintelligence is a bad book. In fact, I think it serves as a great handbook for establishing a baseline for practitioners' future efforts to address the risks of developing AGI. After an initial read-through, the book may have lasting value as a reference guide when developers and researchers dive into the actual development of these mitigation systems.

In all, I think Superintelligence is a must-read for any serious AI advocate. Most of the topics covered and the arguments presented won't be novel for someone who has spent time in the space, but the book provides a common language and frame of reference to drive future discussion. Relevant topics: superintelligence take-off scenarios, substrates (silicon, biological, swarm, etc.), types of systems, and the kinds of organizations that might pursue or achieve AGI.