Metacognition, known unknowns, and the emergence of reflective identity

Paul Watson asks:

Will decent AI gain a sense of identity, e.g., by realizing what it knows and does not know? And, perhaps valuing the former, and maybe (optimistically?) developing a sense of wonder in connection with the latter; such wonder could lead to intrinsic desire to preserve conditions enabling continued learning? Anyway, answer is Yes, I think, as I tried arguing last night.

Paul suggests that this article about macaque metacognition may be relevant.

Relatedly, Paul says:

  1. A search for knowledge cannot proceed without a sense of what is known and unknown by the “self.” Must reach outside self for most new knowledge. Can create new knowledge internally too, once you have a rich model of reality, but good to know here too that you are creating new associations inside yourself, and question whether new outside knowledge should be sought to test tentative internally-generated conclusions.
  2. Self / Other is perhaps the most basic ontological category. Bacteria have it. Anything with a semipermeable membrane around it — a “filter.” Cannot seek knowledge without having at least an implicit sense that one is searching for information outside oneself. In a highly intelligent being, how long would that sense remain merely implicit?

What do you think?

18 October meeting topic – General AI: Opportunities and Risks

(Image: all possible minds)

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that humans will undoubtedly pursue AI with increasing determination and resources. Potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical.

Specialized (narrow and shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets to include only information directly related to the goal. For example, a driving AI’s utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed, staying within its lane, complying with traffic signs and signals, avoiding collisions, etc.).
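To make the idea of a human-specified utility function concrete, here is a minimal, purely illustrative Python sketch. The function name, parameters, and weights are all invented for this example; a real driving system would use far richer state, objectives, and learned components.

```python
# Hypothetical, hand-specified utility function of the kind described above:
# humans choose the objectives and their weights; the system only optimizes against them.

def driving_utility(progress_to_destination, speed_over_limit_kph,
                    lane_deviation_m, collisions):
    """Score one moment of driving; higher is better. All names and weights are invented."""
    score = 0.0
    score += 10.0 * progress_to_destination        # reward progress toward the destination
    score -= 5.0 * max(0.0, speed_over_limit_kph)  # penalize exceeding the speed limit
    score -= 2.0 * abs(lane_deviation_m)           # penalize drifting out of the lane
    score -= 1000.0 * collisions                   # collisions dominate everything else
    return score

# Example: good progress, slightly over the limit, nearly centered in lane, no collision.
print(driving_utility(progress_to_destination=0.8, speed_over_limit_kph=2.0,
                      lane_deviation_m=0.1, collisions=0))
```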

Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
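As a rough illustration of learning from nothing but a score signal, here is a small, self-contained sketch of tabular Q-learning on an invented toy “game.” Everything in it (the environment, the parameters) is assumed for the example; systems such as DeepMind’s Atari agents apply the same trial-and-error idea with deep networks over raw pixels rather than a lookup table.

```python
import random

N_STATES, ACTIONS = 5, [0, 1]   # toy "game": a line of 5 cells; 0 = step left, 1 = step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # learned value estimates

def step(state, action):
    """Advance the toy game; reaching the rightmost cell scores a point and ends the episode."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-valued action for a state, breaking ties randomly."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    state = 0
    for _ in range(20):
        # Explore occasionally; otherwise exploit current value estimates.
        action = random.choice(ACTIONS) if random.random() < 0.1 else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt
        if done:
            break

# Learned greedy action per state; non-terminal states should prefer 1 (move right).
print({s: greedy(s) for s in range(N_STATES)})
```

The agent is never told what the game is or given curated examples; it improves its play purely by acting, observing the score, and updating its value estimates.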

Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon we will not be able to cross. The concern arises from our uncertainty about whether superintelligent AIs will value us and our human objectives and, even if they do, whether they will be able to translate that into actions that do not degrade our survival or quality of existence.

Multimedia Resources

• Demis Hassabis on Google DeepMind and AGI (video, 14:05, best content starts at 3:40)

• Google DeepMind (AlphaGo) AGI (video, 13:44)

• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence

Print Resources

• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials

• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote fuller, deeper discussion. Those who want to attend but are not among the first 10 may elect to join the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No”, which allows the next person on the waiting list to attend. If the topic attracts a large waiting list, we may schedule an additional discussion.

Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.

Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and said:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do following a different gene-replication mandate, from all other life forms. How do we program in a fail-safe against that? How do we make sure that everyone’s AI creation has such a fail-safe, one that works?

What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies.

Intelligence and rationality are not strongly correlated

A NY Times article reports on research by Keith Stanovich and others finding that (a) intelligence and rationality are distinct qualities, (b) the two are only weakly positively correlated, and (c) rationality, unlike intelligence, can be improved through targeted training. Stanovich has also proposed a rationality quotient (RQ) and suggested that standardized tests be devised to assess it.

Read more: Clever Fools: Why a High IQ Doesn’t Mean You’re Smart

First BMCAI discussion a great success!

Ten energetic folks met last night at Albuquerque’s North Domingo Baca Multigenerational Center to discuss the malleability of memory and its implications.

Research increasingly indicates that our memories are not explicit, unchanging recordings of the events they represent. Sensory-perceptual processes filter what is initially stored, and each time you recall a memory, it is modified. Counterintuitively, frequently recalled memories, especially those we compare with others’ tellings and with media representations, change over time rather than becoming more fixed.

Resources we had reviewed before the discussion included the following:

Videos

Articles

The following questions guided our discussion:

  • Are there memorable events you and others experienced when you were young that the others remember significantly differently than you do? Is your memory more accurate (less biased or altered) than theirs?
  • Have you ever encountered evidence that one of your long-held memories was inaccurate? Can you share an example?
  • What, if any, evolutionary value might there be to having a highly malleable memory?
  • If illusory memories are so common, what implications might there be for
    – criminal justice, eyewitness testimony, etc.?
    – personal relationships?
    – self-perception (of current vs. remembered selves, for example)?

Albuquerque Brain, Mind, and Artificial Intelligence Discussion Group