
Mathematical field of topology reveals importance of ‘holes in brain’

New Scientist article: Applying the mathematical field of topology to brain science suggests that gaps in densely connected brain regions serve essential cognitive functions. The newly discovered, densely connected neural groups are characterized by a gap in the center, with one edge of the ring (cycle) being very thin. It’s speculated that this architecture evolved to enable the brain to better time and sequence the integration of information from different functional areas into a coherent pattern.
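To make the notion of a ‘hole’ concrete, here is a minimal sketch (my own illustration in Python with networkx, not the method used in the research): a group of nodes wired in a ring is connected all the way around yet encloses a gap, which shows up as a cycle in the graph, whereas a hub-and-spoke group encloses nothing.

```python
# Illustration only: a ring-shaped group encloses a gap, reported here as a
# nontrivial cycle; a star-shaped (hub-and-spoke) group does not.
import networkx as nx

ring = nx.cycle_graph(8)     # 8 nodes joined in a loop: one enclosed gap
star = nx.star_graph(7)      # 8 nodes joined to a hub: no enclosed gap

print(nx.cycle_basis(ring))  # one cycle running through all 8 nodes -- the hole
print(nx.cycle_basis(star))  # [] -- no cycles, hence no hole
```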

Aspects of the findings appear to support Edelman and Tononi’s (2000, p. 83) theory of neuronal group selection (TNGS, aka neural Darwinism).


Edelman, G.M. and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.

15 Nov 16 Discussion on Transhumanism

Good discussion that covered a lot of ground. I took away that none of us have signed on to be early adopters of brain augmentations, but some expect development of body and brain augmentations to continue and accelerate. We also considered bio-engineered and medical paths to significant improvements in life span, health, and cognitive capacity. I appreciated the ethical and value questions (Why pursue any of this? What would/must one give up to become transhuman? Will the health and lifespan enhancements be equally available to all? What could be the downsides of extremely extended lives?). Also, isn’t there considerable opportunity for smarter transhumans, along with AI tools, to vastly improve the lives of many people by finding ways to mitigate problems we’ve inherited (disease, etc.) and created (pollution, conflict, etc.)?

[Image: all possible minds]

18 October meeting topic – General AI: Opportunities and Risks

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that humans will no doubt pursue AI with increasing determination and resources. Potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical.

Specialized (narrow and shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets so they include only information directly related to the goal. For example, a driving AI’s utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed, staying in its lane, complying with traffic signs and signals, avoiding collisions, etc.).
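To give a rough feel for what a hand-specified utility function looks like, here is a toy sketch in Python (the field names and weights are invented for illustration and do not come from any real driving system):

```python
# Toy sketch of a hand-curated utility function for a driving AI.
# All field names and weights are invented; real systems are far richer.
def driving_utility(state: dict) -> float:
    score = 0.0
    score += 100.0 if state["reached_destination"] else 0.0  # primary goal
    score -= 50.0 * state["collisions"]                      # avoid collisions
    score -= 5.0 * state["lane_departures"]                  # stay within lane
    score -= 2.0 * state["speed_violations"]                 # respect speed limits
    score -= 10.0 * state["signal_violations"]               # obey signs and signals
    return score

trip = {"reached_destination": True, "collisions": 0,
        "lane_departures": 1, "speed_violations": 2, "signal_violations": 0}
print(driving_utility(trip))  # 91.0
```

The point is that every term in the score is chosen and weighted by humans in advance; the system optimizes against exactly what we wrote down.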

Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
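A minimal sketch of that kind of learning loop (a toy stand-in written for illustration, not DeepMind’s implementation): the only signal supplied from outside is the score, and the agent must work out a useful policy purely by interacting with the game.

```python
# Toy sketch of learning from a score signal alone (tabular Q-learning).
# GameEnv and all parameters are invented for illustration.
import random
from collections import defaultdict

class GameEnv:
    """Tiny game: the state is a position 0..4; reaching 4 scores a point."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                      # 0 = move left, 1 = move right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        reward = 1.0 if self.pos == 4 else 0.0   # the only signal we supply
        return self.pos, reward, self.pos == 4   # next state, reward, done

q = defaultdict(float)                           # action values learned from play
env, eps, alpha, gamma = GameEnv(), 0.1, 0.5, 0.9

def greedy(state):                               # best known action, ties broken randomly
    best = max(q[(state, 0)], q[(state, 1)])
    return random.choice([a for a in (0, 1) if q[(state, a)] == best])

for episode in range(300):
    state, done = env.reset(), False
    while not done:
        action = random.randrange(2) if random.random() < eps else greedy(state)
        nxt, reward, done = env.step(action)
        target = reward + gamma * max(q[(nxt, 0)], q[(nxt, 1)])
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt

print(greedy(0))  # after training this prints 1: the agent discovered 'go right' on its own
```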

Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon we will not be able to cross. The concerns arise from our uncertainty that superintelligent AIs will value us or our human objectives, or, if they do value us, that they will be able to translate that into actions that do not degrade our survival or quality of existence.

Multimedia Resources

• Demis Hassabis on Google DeepMind and AGI (video, 14:05; best content starts at 3:40)

• Google DeepMind (AlphaGo) AGI (video, 13:44)

• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence

Print Resources

• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials

• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote more and deeper discussion. Those who want to attend but are not among the first 10 may elect to go on the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No”, which allows the next person on the waiting list to attend. If the topic attracts a large waiting list, we may schedule an additional discussion.

Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.