Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and said:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate (gene replication), from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone's AI creation has such a fail-safe, one that actually works?

What do you think? I also recommend Nick Bostrom's book, Superintelligence: Paths, Dangers, Strategies.
