Tag Archives: existential risks

Review and partial rebuttal of Bostrom’s ‘Superintelligence’

This article from the Bulletin of the Atomic Scientists site is an interesting overview of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. The author rebuts Bostrom on several points, relying partly on the failure of AI research to date to produce any result approaching what most humans would regard as intelligence. The absence of a recognizably intelligent artificial general intelligence so far is not, of course, proof that one can never exist. The author also takes issue with Bostrom’s (claimed) conflation of intelligence with inference ability, an assumption the author says AI researchers have found to be false.

Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and said:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast, and down such unpredictable pathways, that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate (gene replication), from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone’s AI creation has such a fail-safe, one that works?

What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies.