Will self-improving AI inevitably lead to catastrophe?
Paul W sent the following TED Talk link and said:
If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different gene-replication mandate, from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone's AI creation has such a fail-safe, one that works?
What do you think? I also recommend Nick Bostrom's book, Superintelligence: Paths, Dangers, Strategies.
@Paul Watson not all AI is self-improving, but that is the sort that prompts concerns over existential risks. Self-improving artificial general intelligence (AGI, or alternatively, GAI) appears to be at least 25 years off. (Not long, I know.) Current machine learning systems are specialized, like AlphaGo's Go-playing genius and the AIs that drive or fly vehicles. The latter sort is receiving much R&D because of its potential military, industrial, and consumer benefits. A car-driving AI is the output of a machine learning system. I've read that the versions installed in vehicles apply what they already know but don't continue learning.…
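To make that train-then-freeze distinction concrete, here is a minimal sketch (in Python, with made-up names like DeployedModel, and not any vendor's actual pipeline): learning happens offline, where the parameters are updated, while the deployed copy only applies what it already learned.

```python
# Hypothetical illustration of "trained in the lab, frozen in the vehicle".
import random

def train(samples, epochs=500, lr=0.01):
    """Offline phase: fit a tiny linear model y ~ w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x   # parameters change only during this phase
            b -= lr * err
    return w, b

class DeployedModel:
    """The installed copy: parameters are fixed at load time, no update method."""
    def __init__(self, w, b):
        self._w, self._b = w, b

    def predict(self, x):
        return self._w * x + self._b   # inference only; it never improves itself

if __name__ == "__main__":
    # Toy data for y = 2x + 1 with a little noise.
    data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]
    w, b = train(data)             # happens in the data center
    model = DeployedModel(w, b)    # what ships in the product
    print(model.predict(4.0))      # applies what it knows, roughly 9
```

The self-improving AGI that worries Bostrom would be a system that keeps running something like the training loop on itself after deployment; today's shipped systems mostly look like the frozen DeployedModel above.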