Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and wrote:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate of gene replication, from all other life forms. How do we program in a failsafe against that? And how do we make sure that everyone's AI creation has such a failsafe, one that actually works?

What do you think? I also recommend Nick Bostrom's book, Superintelligence: Paths, Dangers, Strategies.
