Here’s an interesting interview with an author whose book explains his concept of neurocapitalism, or cognitive capitalism, which is the result of the ongoing feedback between us and the increasingly penetrating technologies we adopt.
Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that humans will undoubtedly pursue AI with increasing determination and resources. The potential risks range from economic and labor disruption to human extinction, making AI risk analysis and mitigation critical.
Specialized (narrow, shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets to include only information directly related to the goal. For example, a driving AI’s utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed limits, lane position, compliance with traffic signs and signals, collision avoidance, etc.).
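To make the idea of a human-specified utility function concrete, here is a minimal, purely illustrative sketch (not the code of any real driving system; all names and weights are hypothetical). It shows how a designer might combine reward for progress with weighted penalties for violating the curated constraints:

```python
# Illustrative only: a toy utility function for a driving AI.
# The weights are hypothetical; real systems use far richer objectives.

def driving_utility(progress, speed_error, lane_offset, violations, collisions):
    """Score a driving trajectory: reward progress toward the destination,
    penalize deviations from the human-specified operating parameters."""
    return (
        1.0 * progress          # distance covered toward the destination
        - 0.5 * speed_error     # deviation from the posted speed limit
        - 2.0 * lane_offset     # drift from the lane center
        - 10.0 * violations     # traffic-sign/signal infractions
        - 1000.0 * collisions   # collisions dominate every other term
    )

# Safe, steady driving scores far higher than a rule-breaking shortcut.
safe = driving_utility(progress=100, speed_error=0, lane_offset=0.1,
                       violations=0, collisions=0)
reckless = driving_utility(progress=120, speed_error=15, lane_offset=1.0,
                           violations=2, collisions=1)
print(safe > reckless)  # → True
```

The design point is that every term and weight here is chosen by humans; the narrow AI only optimizes the objective it is given.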
Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
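The "figures it out on its own" loop can be sketched with a standard reinforcement-learning technique (tabular Q-learning) on a toy game. This is an assumption-laden illustration, not the method of any specific AGI project: the game here is a five-cell corridor whose score increases only at the rightmost cell, and the agent is given no rules or curated data, only observations, actions, and the score signal:

```python
# Sketch of learning purely from game interaction (tabular Q-learning).
# The game, states, and hyperparameters are all hypothetical toy choices.
import random

random.seed(0)

N_STATES, ACTIONS = 5, (-1, +1)   # corridor cells; actions: move left/right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    """The 'game': returns (next_state, reward). Score rises only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit the best-known action; occasionally explore at random.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # Update the action-value estimate from experience alone.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = nxt

# The learned policy per cell: +1 means "move right" (toward the score).
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the agent has discovered the scoring behavior without ever being told the rules; the gap between this toy learner and a system that generalizes across arbitrary tasks is, of course, exactly what separates narrow AI from AGI.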
Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon we will not be able to cross. The concern arises from our uncertainty that superintelligent AIs will value us or our human objectives, or, if they do value us, that they will be able to translate that into actions that do not degrade our survival or quality of existence.
• Demis Hassabis on Google DeepMind and AGI (video, 14:05; best content starts at 3:40)
• Google DeepMind (AlphaGo) AGI (video, 13:44)
• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence
• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials
• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom
Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote more and deeper discussion. Those who want to attend but are not in the first 10 may elect to go on the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No”, which will allow the next person on the waiting list to attend. If the topic attracts a large wait list, we may schedule additional discussion.
Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.
Paul W sent the following TED Talk link and said:
If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate (gene replication), from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone’s AI creation has such a fail-safe, one that works?
What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies
NVIDIA, the company you may remember for its graphics cards, is now a leader in advanced chips for artificial intelligence. The company’s work in deep learning produced an AI that skillfully operated a car in all the normal driving conditions a New Jersey driver might encounter.
A NY Times article reports on research by Keith Stanovich and others finding that (a) intelligence and rationality are different qualities, (b) the two are only weakly positively correlated, and (c) one’s rationality, unlike one’s intelligence, can be improved through targeted training. Stanovich has also proposed a rationality quotient (RQ) and called for standardized tests to assess it.
Ten energetic folks met last night at Albuquerque’s North Domingo Baca Multigenerational Center to discuss the malleability of memory and its implications.
Research increasingly indicates that our memories are not explicit, unchanging recordings of the events they represent. Sensory-perceptual processes filter what is initially stored, and each time you recall a memory, it is modified. Counterintuitively, frequently recalled memories, especially those we compare with others’ tellings and with media representations, change the most over time.
Resources we had reviewed before the discussion included the following:
- How Reliable is Your Memory? (17 minutes)
- Memory Hackers (1-hour, if you don’t have an hour, view this excerpt)
- Extra resources: TED Talks Memory Playlist
The following questions guided our discussion:
- Are there memorable events you and others experienced when you were young that the others remember significantly differently than you do? Is your memory more accurate (less biased or altered) than theirs?
- Have you ever encountered evidence that one of your long-held memories was inaccurate? Can you share an example?
- What, if any, evolutionary value might there be to having a highly malleable memory?
- If illusory memories are so common, what implications might there be for
– criminal justice, eye-witness testimonies, etc.?
– personal relationships?
– self-perception (of current vs. remembered selves, for example)?
Welcome to the community site of the Albuquerque Brain, Mind, Consciousness and Artificial Intelligence (BMCAI) discussion group!