Mark H

  • The Global Consciousness Project, Institute of Noetic Sciences (IONS, for which I was once Hawaii state coordinator) and Princeton Engineering Anomalies Research (PEAR) are collaborating to release a smart phone […]

  • A neuron that encircles the mouse brain emanates from the claustrum (an on/off switch for awareness) and has dense links with both brain hemispheres. Scientists including Francis Crick and Christof Koch have […]

  • Programmers of social media apps, and of an increasing range of other apps, apply neuroscience to program your brain.

  • In line with our July joint meeting with the NM Tech Council, I’m reading a fascinating book (Stealing Fire) on the variety of ways humans can experience states of flow (optimal states of consciousness and pe […]

  • It’s common for brain functions to be described in terms of digital computing, but this metaphor does not hold up in brain research. Unlike computers, in which hardware and software are separate, organic brains’ […]

    • This reminds me of the principal reason why the meme thing never really led anywhere. Memes are informational replicators that reside in minds/brains. By definition, they replicate via imitation. But when one brain receives a piece of information from another, that receipt process is nothing like copying information from one computer hard drive to another. Instead, a highly personalized representation of the original meme is created in the recipient. It is not like DNA replication. Memes just are not good replicators. — Paul

    • I also like this article by Zak Stein on a better metaphor: http://www.zakstein.org/your-mind-is-not-like-a-computer-its-like-an-ecosystem/

      • Zak Stein makes a compelling argument. It’s interesting that both the cybernetic and the ecosystem metaphors emerged from general systems thinking. When I was studying GST in the early 1980s, the models of mind were complex adaptive living systems. Living systems have characteristic functions, including information processing. The error is to make the linear sort of information processing associated with traditional computers the entire model. Also, organisms learn (in several senses, including unconscious conditioning and conceptual knowledge acquisition and modification), and exercising influence over learning outcomes can have meaningful parallels with programming. I agree with the author that the information-processing metaphor has been radically overextended (as metaphors tend to be), but there’s still value in thinking about how brains process information and how behaviors and concepts are formed (some aspects of which parallel “programming” in a general sense).

  • An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

    The assumption that more human-like speech from AIs is naturally better may prove as […]

  • Confirmation bias is a human problem. It afflicts people across the entire range of political perspectives.

  • Another study adds weight to findings that mental health declines as Facebook usage increases. The effect is thought mainly to result from involuntary judgments we make about ourselves in comparison with others […]

  • New Scientist article: Applying the mathematical field of topology to brain science suggests gaps in densely connected brain regions serve essential cognitive functions. Newly discovered densely connected neural […]

    • The brain topology “mind the gaps” article is a very good read.

      Probably the primary reason for segregation of specialized information processing units in the brain is to avoid confusing cross-talk, as stated in the article. Also mentioned briefly in the article, this separation makes it easier to control which brain areas are interacting at any given time. This in turn not only controls the direction of unconscious information processing, but greatly affects your moment-to-moment conscious reality, including felt sensations, felt emotions, and felt thoughts.

      Having limited and specific pathways linking neuronal functional groups that could, in principle, get involved in constructing your conscious reality or affect the outcome of unconscious information-processing tasks makes the “inter-group” conference call easier to manage. And it must be managed, by default, by organs in the limbic system, which we know have massive ascending projections to all cortical areas, in ways that keep the cortex focused on solving fitness-limiting problems, including all manner of social navigation tactics and strategies.

      One more thing, sparked by a comment Mark makes elsewhere in the “Finding the seat of consciousness” thread. Consciousness is a whole-brain process. So AI systems, to become increasingly self-aware, probably will need to integrate more and more information, in the way Edelman refers to as “reentry”, where the activity of every neuronal functional group on the conference call at any given moment affects the functioning of many or all of the other groups. But note again that human self-awareness transparently waxes and wanes in an adaptive fashion. Again, this is controlled by the limbic system, which has an obsessive handle on our dynamic hierarchy of reproductive needs. It adaptively modulates what we sense, feel, and think at any given moment to optimize our minds to solve reproductively relevant problems according to both environmentally determined opportunity and problem severity vis-à-vis our expected lifetime inclusive fitness.

      I wonder what an AI system would be like that was programmed to have “maximal and fair (unbiased) access to everything it could sense, feel, and know in a given moment.” As if it had no limbic system. To my mind, that would be a system fully capable of the kind of objectivity that “spiritual” humans feebly struggle toward using reflective / contemplative practices. It typically would have very good, perhaps what we should regard as Wise, answers to problems that we are often blocked from even momentarily considering, especially consciously. In a way, then, it would be God-like, or at least Guru-like, although why it would want to show us humans “The Way” I do not know… We would seem like such hopeless fools.

    • Interesting thoughts. I read alien-encounter sci-fi occasionally. One of the recurring notions is that any aliens capable of evading self-destruction and becoming galactic or universal would, by definition, be so alien to our limbic-driven ways of being that we would be unable to comprehend them. The same may apply to the sort of AI you’re envisioning. It seems it would need something like a limbic system to assign importance to information that’s more relevant to its utility functions (goals) so that it would give that information priority of attention. This would affect what the AI notices and remembers. Self-preservation (if that were a goal) would require effective risk perception and possibly something functionally equivalent to fear (though perhaps without the broader irrationality associated with our sort of fear). I’m very interested in how a mind with far fewer structural and cognitive biases would operate in comparison with the standard human mind.
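      The idea above, that an AI would need a limbic-like mechanism to assign importance to information relevant to its utility functions and give it priority of attention, can be sketched in miniature. Everything here (the goal names, the weights, the tagged "percepts") is hypothetical illustration, not any real AI framework: incoming items are scored by how strongly they touch the agent's goals, and only the most salient ones win attention.

```python
import heapq

# Hypothetical goal weights, standing in for a limbic system's dynamic
# hierarchy of needs. Names and values are illustrative only.
GOAL_WEIGHTS = {"self_preservation": 0.9, "energy": 0.6, "curiosity": 0.2}

def salience(goal_tags):
    """Importance of a percept = summed weights of the goals it touches."""
    return sum(GOAL_WEIGHTS.get(tag, 0.0) for tag in goal_tags)

def attend(percepts, k=2):
    """Return the k most goal-relevant percepts (priority of attention)."""
    return heapq.nlargest(k, percepts, key=lambda p: salience(p[1]))

# Toy percepts, each tagged with the goals it bears on.
percepts = [
    ("rustle in the grass", ["self_preservation"]),
    ("ripe fruit nearby", ["energy"]),
    ("odd cloud shape", ["curiosity"]),
]

print(attend(percepts))  # the threat and the food outrank the curiosity
```

A system with no such weighting, the "no limbic system" case described above, would amount to uniform weights: every percept equally salient, with nothing adaptively privileged or suppressed.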

  • In preparation for the March meeting topic, Your Political Brain, please recommend any resources you have found particularly enlightening about why humans evolved political thinking. Also, please share references […]

  • Brain imaging research indicates some aspects of individual political orientation correlate significantly with the mass and activity of particular brain structures including the right amygdala and the insula. This […]

  • https://www.wired.com/2017/02/cognitive-bias-president-trump-understands-better/ “When something is memorable, it tends to be the thing you think of first, and then it has an outsize influence on your […]

  • “Until recently, scientists had thought that most synapses of a similar type and in a similar location in the brain behaved in a similar fashion with respect to how experience induces plasticity,” Friedlander […]
