Paul Watson

  • I regret that I’ll probably miss the next two meetings. I’ll be in Costa Rica in July and Shrewsbury, UK, in August. Best to all, Paul

  • This reminds me of the principal reason why the meme thing never really led anywhere. Memes are informational replicators that reside in minds/brains. By definition, they replicate via imitation. But when one brain receives a piece of information from another, that receipt process is nothing like copying information from one computer hard drive to…

  • The brain topology “mind the gaps” article is a very good read.

    Probably the primary reason for segregation of specialized information processing units in the brain is to avoid confusing cross-talk, as stated in the article. Also mentioned briefly in the article, this separation makes it easier to control which brain areas are interacting at…

  • No, AI companies absolutely will NOT self-regulate. The same goes for human genetic engineering enterprises: science-based governmental agencies with heavy enforcement powers (where are you, Jack Bauer? In custody in Moscow, I guess) will be needed to shut down a plethora of overt and covert rogue operations, I believe. — Best, PJW

  • Yes. But I think it is misleading to talk about a “seat of consciousness.” That’s just journalistic marketing. Consciousness arises from a dynamic set of neural processes that is pretty much a whole-brain affair. But for sure, parts of the limbic system and brainstem are heavily and necessarily involved, not just the modern human “higher”…

  • All bodily capacities, including the most impressive, uniquely human cognitive and metacognitive ones, coevolve with regulatory mechanisms. Regulatory mechanisms operate unconsciously, and control the expression […]

    • Some of your terms are new to me, but I think I get the point. I’ll have to do some anatomical and neurological fact-finding to fully follow. Are the regulatory/value systems those that unconsciously direct our attention, shape our perceptions, and (largely) determine our preferences and behaviors? When we talk about “limbic” functions, are we (generally) referring to affective/emotional processes? Clearly, any significant change in (post-)human nature will require a greater appreciation of our evolved state, as well as means and mechanisms for exerting more deliberate control over our further development. It’s quite boggling.

      • Some clarifications (hopefully): “Value systems” (an Edelman term of art) are the limbic/unconscious regulatory parts of our brains that (1) constantly monitor our biological needs in relation to effectively and efficiently solving fitness-limiting reproductive problems and (2) correspondingly tightly control the activity of our “higher cognitive processes,” so that they stay on task, using their incredible power to solve those fitness-limiting problems. Unconscious and sometimes conscious (often adaptively warped) emotions TOTALLY canalize, prune, and drive “higher cognition.” It feels like we are thinking outside the “subhuman / mechanically utilitarian” gene propagation box, but we are not. Please post this comment for me if you wish. — PJW

  • TED talk of possible interest:

    Comment I posted there:
    Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to […]

    • Learning to govern our own ‘evolution’ appears to be the only viable path to a future worth being part of. I look forward to viewing the video and discussing this further. Thanks for posting!

    • Excellent talk. The current situation (likely to persist for years) is that we are usually blind to the inner logic of AI algorithms. She points to evidence of poorly performing algorithms’ outputs being used in highly consequential decision scenarios, including judicial sentencing. Even when they perform well, we don’t know why. (But we can, presumably, tell whether they work well, at least in many areas.)

      “Purpose-designed AI systems feeding off of growing databases … could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival…” I agree that human long-term survival, in any state worth existing in, will require remaking ourselves. I think another, earlier necessary application of AI would be to aid us in identifying the ways our innate and learned cognitive biases influence our values, goals and objectives, success criteria selection, and other factors that affect the details of what we might seek to change in our genes. For example, I suspect part of the basis for the common perception that engineering human nature must always be unethical stems from past abuses by authoritarians (so-called eugenicists). Understanding how what began as a concept for improving quality of life for all quickly became violently abusive will be necessary to avoid the wrong parties taking the lead, and to preclude well-intentioned parties causing unacceptable unintended outcomes.