Mark H

  • It’s common for brain functions to be described in terms of digital computing, but this metaphor does not hold up in brain research. Unlike computers, in which hardware and software are separate, organic brains’ […]

    • This reminds me of the principal reason why the meme thing never really led anywhere. Memes are informational replicators that reside in minds/brains. By definition, they replicate via imitation. But when one brain receives a piece of information from another, that receipt process is nothing like copying information from one computer hard drive to another. Instead, a highly personalized representation of the original meme is created in the recipient. It is not like DNA replication. Memes just are not good replicators. — Paul

  • Mark H and Edward Berge are now friends 1 week, 6 days ago

  • An article considers the pros and cons of making the voice interactions of AI assistants more humanlike.

    The assumption that more human-like speech from AIs is naturally better may prove as […]

  • Confirmation bias is a human problem. It afflicts people across the entire range of political perspectives.

  • Another study adds weight to findings that mental health declines as Facebook usage increases. The effect is thought mainly to result from involuntary judgments we make about ourselves in comparison with others […]

  • New Scientist article: Applying the mathematical field of topology to brain science suggests gaps in densely connected brain regions serve essential cognitive functions. Newly discovered densely connected neural […]

    • The brain topology “mind the gaps” article is a very good read.

      Probably the primary reason for segregation of specialized information processing units in the brain is to avoid confusing cross-talk, as stated in the article. Also mentioned briefly in the article, this separation makes it easier to control which brain areas are interacting at any given time. This in turn not only controls the direction of unconscious information processing, but greatly affects your moment-to-moment conscious reality, including felt sensations, felt emotions, and felt thoughts.

      Having limited and specific pathways linking neuronal functional groups that could, in principle, get involved in constructing your conscious reality or affect the outcome of unconscious information processing tasks makes the “inter-group” conference call easier to manage. And it must be managed, by default, by organs in the limbic system, which we know have massive ascending projections to all cortical areas, in ways that keep the cortex focused on solving fitness-limiting problems, including all manner of social navigation tactics and strategies.

      One more thing, and this is sparked by a comment Mark makes elsewhere in the “Finding the seat of consciousness” thread. Consciousness is a whole-brain process. So AI systems, to become increasingly self-aware, probably will need to integrate more and more information, in a way that Edelman refers to as “reentry” – where the activity of every neuronal functional group on the conference call at any given moment affects the functioning of all or many other groups. But note again that human self-awareness transparently waxes and wanes in an adaptive fashion. Again, this is controlled by the limbic system, which has an obsessive handle on our dynamic hierarchy of reproductive needs. It adaptively modulates what we sense, feel and think at any given moment to optimize our minds to solve reproductively relevant problems according to both environmentally determined opportunity and problem severity vis-à-vis our expected lifetime inclusive fitness.

      I wonder what an AI system would be like that was programmed to have “maximal and fair (unbiased) access to everything it could sense, feel, and know in a given moment” – as if it had no limbic system. To my mind, that would be a system fully capable of the kind of objectivity that “spiritual” humans feebly struggle toward using reflective / contemplative practices. It typically would have very good, perhaps what we should regard as Wise, answers to problems that we are often blocked from even momentarily considering, especially consciously. In a way, then, it would be God-like, or at least Guru-like, although why it would want to show us humans “The Way” I do not know… We would seem like such hopeless fools…

    • Interesting thoughts. I read alien-encounter sci-fi occasionally. One of the recurring notions is that any aliens capable of evading self-destruction and becoming galactic or universal would, by definition, be so alien to our limbic-driven ways of being that we would be unable to comprehend them. The same may apply to the sort of AI you’re envisioning. It seems it would need something like a limbic system to assign importance to information that’s more relevant to its utility functions (goals) so that it would give that information priority of attention. This would affect what the AI notices and remembers. Self-preservation (if that were a goal) would require effective risk perception and possibly something functionally equivalent to fear (though perhaps without the broader irrationality associated with our sort of fear). I’m very interested in how a mind with far fewer structural and cognitive biases would operate in comparison with the standard human mind.

  • In preparation for the March meeting topic, Your Political Brain, please recommend any resources you have found particularly enlightening about why humans evolved political thinking. Also, please share references […]

  • Brain imaging research indicates some aspects of individual political orientation correlate significantly with the mass and activity of particular brain structures including the right amygdala and the insula. This […]

  • “When something is memorable, it tends to be the thing you think of first, and then it has an outsize influence on your […]

  • “Until recently, scientists had thought that most synapses of a similar type and in a similar location in the brain behaved in a similar fashion with respect to how experience induces plasticity,” Friedlander […]

  • New scientific findings support the idea that different humans’ brains store and recall story scenes the same way, rather than each person developing unique memory patterns about stories. Also, people generally do […]

  • Mark H wrote a new post, MIT AI Primer 6 months ago

    Here’s a useful artificial intelligence introductory lesson from an MIT course: 

  • Cognitive bias article of the day: How to Convince Someone When Facts Fail

    A concise, timely look at how worldview-driven cognitive dissonance leads people to double down on their misbeliefs in the face of […]

  • This NY Times article is worth your time if you are interested in AI – especially if you are still under the impression that AI has ossified or lost its way.

  • “MRI scans show that running may affect the structure and function of the brain in ways similar to complex tasks like playing a musical instrument”

  • Technology (in some labs, for now) enables gamers to see their brain activity while they play.
