Here’s a useful artificial intelligence introductory lesson from an MIT course:
This NY Times article is worth your time if you are interested in AI, especially if you are still under the impression that AI has ossified or lost its way.
Here’s an interesting interview with an author whose book explains his concept of neurocapitalism, or cognitive capitalism: the result of the ongoing feedback between us and the increasingly penetrating technologies we adopt.
Paul W sent the following TED Talk link and said:
If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate of gene replication, from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone’s AI creation has such a fail-safe, one that works?
What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies.