Mathematical field of topology reveals importance of ‘holes in brain’

A New Scientist article reports that applying the mathematical field of topology to brain science suggests gaps in densely connected brain regions serve essential cognitive functions. The newly discovered, densely connected neural groups are characterized by a gap in the center, with one edge of the ring (cycle) being very thin. The article speculates that this architecture evolved to enable the brain to better time and sequence the integration of information from different functional areas into a coherent pattern.
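To make the topological language concrete: a ring of neural groups with no internal connections encloses a one-dimensional “hole” (a cycle). The sketch below is not from the article; it assumes Python with the networkx library and uses the standard identity that a graph’s number of independent cycles equals edges - vertices + connected components.

```python
# A toy connectivity graph shaped like the ring structures described in
# the article: five nodes joined in a cycle, enclosing a central gap.
import networkx as nx

G = nx.cycle_graph(5)

# For a graph, the number of independent cycles (one-dimensional
# "holes", the first Betti number) is E - V + C, where C is the number
# of connected components.
b1 = (G.number_of_edges()
      - G.number_of_nodes()
      + nx.number_connected_components(G))
print(f"One-dimensional holes: {b1}")  # -> 1
```

Adding a chord across the ring (e.g. `G.add_edge(0, 2)`) raises the count to 2; in the clique-complex treatment often used in such studies, densely interconnected cliques fill in holes, which is why a persistent central gap ringed by connections is notable.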

Aspects of the findings appear to support Edelman and Tononi’s (2000, p. 83) theory of neuronal group selection (TNGS, also known as neural Darwinism).


Edelman, G.M. and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.

About Mark H

Information technologist, knowledge management expert, and writer. Academic background in knowledge management, social and natural sciences, information technologies, learning, educational technologies, and philosophy. Married with one adult child who's married and has a teenage daughter.

2 thoughts on “Mathematical field of topology reveals importance of ‘holes in brain’”

  1. The brain topology “mind the gaps” article is a very good read.

    Probably the primary reason for the segregation of specialized information-processing units in the brain is to avoid confusing cross-talk, as stated in the article. As the article also briefly mentions, this separation makes it easier to control which brain areas are interacting at any given time. That in turn not only steers the direction of unconscious information processing but also greatly affects your moment-to-moment conscious reality, including felt sensations, felt emotions, and felt thoughts.

    Having limited and specific pathways linking the neuronal functional groups that could, in principle, get involved in constructing your conscious reality or affect the outcome of unconscious information-processing tasks makes the “inter-group” conference call easier to manage. And it must be managed, by default, by organs in the limbic system, which we know have massive ascending projections to all cortical areas, in ways that keep the cortex focused on solving fitness-limiting problems, including all manner of social navigation tactics and strategies.

    One more thing, and this is sparked by a comment Mark makes elsewhere in the “Finding the seat of consciousness” thread. Consciousness is a whole-brain process. So AI systems, to become increasingly self-aware, will probably need to integrate more and more information in the way Edelman calls “reentry”: the activity of every neuronal functional group on the conference call at any given moment affects the functioning of all, or at least many, of the other groups. But note again that human self-awareness transparently waxes and wanes in an adaptive fashion. Again, this is controlled by the limbic system, which has an obsessive handle on our dynamic hierarchy of reproductive needs. It adaptively modulates what we sense, feel, and think at any given moment to optimize our minds for solving reproductively relevant problems, according to both environmentally determined opportunity and problem severity vis-à-vis our expected lifetime inclusive fitness.

    I wonder what an AI system would be like if it were programmed to have “maximal and fair (unbiased) access” to everything it could sense, feel, and know in a given moment. As if it had no limbic system. To my mind, that would be a system fully capable of the kind of objectivity that “spiritual” humans feebly struggle toward through reflective / contemplative practices. It would typically have very good, perhaps what we should regard as Wise, answers to problems that we are often blocked from even momentarily considering, especially consciously. In a way, then, it would be God-like, or at least Guru-like, although why it would want to show us humans “The Way” I do not know….. We would seem like such hopeless fools….

  2. Interesting thoughts. I read alien-encounter sci-fi occasionally. One of the recurring notions is that any aliens capable of evading self-destruction and becoming galactic or universal would, by definition, be so alien to our limbic-driven ways of being that we would be unable to comprehend them. The same may apply to the sort of AI you’re envisioning. It seems it would need something like a limbic system to assign importance to information that’s more relevant to its utility functions (goals) so that it would give that information priority of attention. This would affect what the AI notices and remembers. Self-preservation (if that were a goal) would require effective risk perception and possibly something functionally equivalent to fear (though perhaps without the broader irrationality associated with our sort of fear). I’m very interested in how a mind with far fewer structural and cognitive biases would operate in comparison with the standard human mind.
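The priority-of-attention idea in the comment above can be made concrete. Below is a minimal sketch, assuming Python; the percepts, goal names, and weights are all hypothetical illustrations, not anything from the thread. It scores each incoming percept by its relevance to the agent’s current goals (its utility functions) and attends to the highest-scoring items first, the role the comment assigns to a limbic system.

```python
# A toy sketch of limbic-like prioritization: each percept is scored by
# its relevance to the agent's current goals, and attention goes to the
# highest-scoring items first. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    relevance: dict[str, float]  # goal name -> relevance in [0, 1]

def salience(p: Percept, goal_weights: dict[str, float]) -> float:
    # Weight each percept's goal relevance by how much the agent
    # currently cares about that goal; these weights play the part
    # the comment assigns to the limbic system.
    return sum(goal_weights.get(g, 0.0) * r for g, r in p.relevance.items())

goal_weights = {"self_preservation": 0.9, "curiosity": 0.3}
percepts = [
    Percept("distant humming noise", {"curiosity": 0.6}),
    Percept("fast-approaching object", {"self_preservation": 0.8, "curiosity": 0.2}),
]

# Attend to percepts in order of goal-weighted salience.
for p in sorted(percepts, key=lambda p: salience(p, goal_weights), reverse=True):
    print(f"{p.label}: {salience(p, goal_weights):.2f}")
```

Letting the goal weights shift with context (perceived threat, opportunity) would give the adaptive waxing and waning of awareness that both comments describe.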
