Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and said:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (needs better hardware, better programmers, technicians, etc.). It will take these from us, and just as we do following a different gene replication mandate, from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone’s AI creation has such a fail-safe — one that works?

What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies.


One thought on “Will self-improving AI inevitably lead to catastrophe?”

  1. @paul-w not all AI is self-improving, but that is the sort that prompts concerns over existential risks. Self-improving artificial general intelligence (AGI, or alternatively, GAI) appears to be at least 25 years off. (Not long, I know.) Current machine learning systems are specialized, like AlphaGo’s go-playing genius and the AIs that drive or fly vehicles. The latter sort is receiving heavy R&D investment because of its potential military, industrial, and consumer benefits. A car-driving AI is the output of a machine learning system. I’ve read that the versions installed in vehicles apply what they already know but don’t continue learning. I don’t believe that, particularly of Tesla’s driving system. Why would a commercial competitor cut itself off from such a huge training base? On the other hand, Elon Musk is one of the AI-must-be-kept-on-a-tight-leash guys. It’s possible Tesla vehicles collect, and Tesla R&D aggregates, real-world driving data to feed to its master AI learning system back at the lab. Then whatever is learned that’s of general value goes into the next driving software update; the cycle might look something like the sketch below.
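
     To make that concrete, here is a minimal Python sketch of the cycle I’m imagining. Every name in it is hypothetical; none of this is Tesla’s actual pipeline or API. The point is only the shape of the loop: cars log data while running a frozen model, the lab aggregates the logs and retrains centrally, and the new model ships back to the fleet as a software update.

```python
from dataclasses import dataclass

@dataclass
class DrivingLog:
    """Sensor frames paired with driving decisions, logged by one vehicle.
    (Hypothetical stand-in; real logs would be far richer.)"""
    vehicle_id: str
    frames: list

@dataclass
class MasterModel:
    """Stand-in for the central driving model trained back at the lab."""
    version: int = 1
    examples_seen: int = 0

def retrain(model: MasterModel, fleet_logs: list) -> MasterModel:
    # Aggregate the fleet's logs and produce the next model version.
    # A real pipeline would filter for what's "of general value";
    # here we just count training examples.
    new_examples = sum(len(log.frames) for log in fleet_logs)
    return MasterModel(version=model.version + 1,
                       examples_seen=model.examples_seen + new_examples)

def push_update(model: MasterModel) -> None:
    # Deploy frozen weights to the fleet: cars apply what the model
    # already knows, but do no learning of their own.
    print(f"Shipping driving model v{model.version} "
          f"(trained on {model.examples_seen} examples)")

# One turn of the hypothesized cycle: collect -> retrain centrally -> ship.
fleet_logs = [DrivingLog("car-001", frames=list(range(3))),
              DrivingLog("car-002", frames=list(range(5)))]
model = retrain(MasterModel(), fleet_logs)
push_update(model)  # -> Shipping driving model v2 (trained on 8 examples)
```

     The key detail is that learning happens only in retrain(), at the lab; the model deployed in the vehicles never changes between updates, which matches the "apply what they already know, but don't continue learning" claim.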
