Paul Watson

  • Well, for me the “real problem” of consciousness is pinning down the selection pressures that built various sorts and qualities of consciousness across taxa, especially and including humans. That project will really help us elucidate the functional design of consciousness.

    But, that said, I dislike it when somebody from one discipline claims…[Read more]

  • Glad to know about this paper. I will probably use it in my Evolution of Religiosity & Human Coalitional Psychology course this coming spring semester. Knowing about the physical world may be confounded with analytical thinking style, which increases religious disbelief. (Let me know if you want related papers.) It would be very cool if knowledge about how…[Read more]

  • I am not rejecting or trying to minimize the importance of this line of thinking. Just saying we must be very careful. Human minds are designed to become mired in the woo-woo muck, especially when it comes to any aspect of self-understanding. Care is always called for. Best — Paul

  • I have read most of Dennett’s stuff and should read this as well. I’m a “critical fan” of his. So, I would be glad to read this book in the context of this group. — paul

  • I would be very happy to discuss this paper and related “embodied” self / mind materials. I do think one can go too far with it. It’s a slippery slope that can cause one to end up in the woo-woo muck. To make the most of this important perspective on mind, we must think hard about what natural selection would favor in a mind, remembering that natural selection…[Read more]

  • I regret that I’ll probably miss the next two meetings. I’ll be in Costa Rica in July, and Shrewsbury, UK, in August. Best to All, Paul

  • This reminds me of the principal reason why the meme thing never really led anywhere. Memes are informational replicators that reside in minds/brains. By definition, they replicate via imitation. But when one brain receives a piece of information from another, that receipt process is nothing like copying information from one computer hard drive to…[Read more]

  • The brain topology “mind the gaps” article is a very good read.

    Probably the primary reason for segregation of specialized information processing units in the brain is to avoid confusing cross-talk, as stated in the article. Also mentioned briefly in the article, this separation makes it easier to control which brain areas are interacting at…[Read more]

  • No, AI companies absolutely will NOT self-regulate. As with human genetic engineering enterprises, science-based governmental agencies with heavy enforcement powers (where are you, Jack Bauer? In custody, in Moscow, I guess) will be needed to shut down a plethora of overt and covert rogue operations, I believe. — Best, PJW

  • Yes. But, I think it is misleading to talk about a “seat of consciousness.” That’s just journalistic marketing. Consciousness arises from a dynamic set of neural processes that is pretty much a whole-brain affair. But for sure, parts of the limbic system and brainstem are heavily and necessarily involved, not just the modern human “higher”…[Read more]

  • All bodily capacities, including the most impressive, uniquely human cognitive and metacognitive ones, coevolve with regulatory mechanisms. Regulatory mechanisms operate unconsciously, and control the expression […]

    • Mark H replied 1 year ago

      Some of your terms are new to me, but I think I get the point. I’ll have to do some anatomical and neurological fact finding to fully follow. Are the regulatory/value systems those that unconsciously direct our attention, shape our perceptions, and (largely) determine our preferences and behaviors? When we talk about “limbic” functions, are we (generally) referring to affective/emotional processes? Clearly, any significant change in (post-)human nature will require a greater appreciation of our evolved state and means and mechanisms for exerting more deliberate control over our further development. It’s quite boggling.

      • Some clarifications (hopefully): “Value systems” (an Edelman term of art) are the limbic/unconscious regulatory parts of our brains that (1) constantly monitor our biological needs in relation to effectively and efficiently solving fitness-limiting reproductive problems and (2) correspondingly tightly control the activity of our “higher cognitive processes,” so that they stay on task, using their incredible power to solve those fitness-limiting problems. Unconscious and sometimes conscious emotions (the latter often adaptively warped) TOTALLY canalize, prune, and drive “higher cognition.” It feels like we are thinking out of the “subhuman / mechanically” utilitarian gene propagation box, but we are not. Please post this comment for me if you wish. — PJW

  • TED talk of possible interest:

    Comment I posted there:
    Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to […]

    • Mark H replied 1 year ago

      Learning to govern our own ‘evolution’ appears to be the only viable path to a future worth being part of. I look forward to viewing the video and discussing this further. Thanks for posting!

    • Excellent talk. The current situation (likely to persist for years) is that we are usually blind to the inner logic of AI algorithms. She points to evidence of poor-performing algorithms’ outputs being used in highly consequential decision scenarios, including judicial sentencing. Even when they perform well, we don’t know why. (But we can, presumably, tell whether they work well, at least in many areas.)

      “Purpose-designed AI systems feeding off of growing databases … could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival…” I agree human long-term survival in any state worth existing in will require remaking ourselves. I think another, earlier necessary application of AI would be to aid us in identifying the ways our innate and learned cognitive biases influence our values, goals and objectives, success criteria selection, and other factors that affect the details of what we might seek to change in our genes. For example, I suspect part of the basis for the common perception that engineering human nature must always be unethical stems from past abuses by authoritarians (so-called eugenicists). Understanding how what began as a concept for improving quality of life for all quickly became violently abusive will be necessary to avoid the wrong parties taking the lead, and to preclude well-intentioned parties causing unacceptable unintended outcomes.