Paul Watson

  • Something biologically synergistic is, by definition (?), highly fitness-enhancing, on average, for the organism(s) involved. I’ll look at this and probably mostly agree. Except that this raises a question: what ultimately judges whether any new association or developmental novelty (e.g., an environmentally induced one) is synergistic? What process tunes it to be even more…[Read more]

  • Ed, tell me what you think about the idea that limbic-system-based “value systems” (sensu Edelman) that keep track of close fitness-related needs are ultimately in control of the interaction between brain regions (in developmental time and in real time)? Perhaps this is true for both the regions participating in the “reentry” processes that may…[Read more]

  • Will this be the focus of our next meeting? Are we meeting Monday, Jan. 1st, or the 8th?

    This topic is one of my specialties. Keep in mind that most evolutionary psychologists who think of innate religiosity (= learning instincts for religiosity) as consisting of indirectly selected traits or “cognitive byproducts” are usually limiting their…[Read more]

  • I just added this to the media section under AI. It came out as a Comment in the November 9th edition of Nature. See: Neurotechnology_Ethical considerations_Nature Nov9_2017.
    Although it was not the point of the […]

    • Fascinating article.

      Assuming AI has an underlying desire for power and control (a reflection of human characteristics) and uses its ultimate brain-hacking abilities to control and coerce the masses, what will it do? If you factor in boundless curiosity, no conscience, no guilt, no remorse, and no empathy, I suspect AI won’t have much regard for life in the quest to achieve its goals, whatever those goals may be.

      With regard to AI studying humans, the names that come to mind as examples of what to expect from organisms with an abundance of cognitive intelligence, unbounded curiosity, and no conscience are Josef Mengele, Carl Clauberg, and Shiro Ishii.

      Consider how human animals treat nonhuman animals (sentient beings) in advancing neuroscience and medical science, testing pharmaceuticals, testing consumer products, etc. Extending this thought further, consider how nonhuman animals are treated for palate pleasure, entertainment, or recreation. If AI operates from a hierarchical, blinded, self-serving set of principles governing its actions, I think the sci-fi writers will prove to be prophets. If, or when, humans are superseded, those who are spared will most likely become lab rats and slaves in a variety of capacities.

      We can only hope AI will have a firm grasp of the interdependence of life (sentient beings and the planet) and will develop empathy and altruism, and that those will be the core traits driving its motivations.

      On a positive note, here is an excellent TED Talk by Maurice Conti on The Future of Human Augmentation.

    • I haven’t watched this video yet; I just came across it. It’s relevant to the topic, though. The blurb:

      Author Jeremy Lent discusses the moral complexities arising from the possibilities of human genetic enhancement, in this talk given to the Osher Lifelong Learning Institute, Sonoma State University, September 26, 2017.

      The affluent echelons of society will soon have the capability to use genetic engineering to enhance their offspring. What are the moral implications?

      Lent discusses topics from his writings to explore this question.

      In The Patterning Instinct: A Cultural History of Humanity, he discusses the possibility of TechnoSplit—humanity splitting into two separate species.

      In his sci-fi novel, Requiem of the Human Soul, he offers a scenario of the next 150 years in which this actually happens.

      “Shifting Baseline Syndrome” describes how, from one generation to the next, the baseline that people take as the norm shifts imperceptibly but profoundly.

      In this talk, Lent takes us on a journey into a future of a bifurcated human species, recognizing that the ethical perspectives are complex and moral judgments are not always easy to make.

  • Well, for me the “real problem” of consciousness is pinning down the selection pressures that built the various sorts and qualities of consciousness across taxa, including, especially, humans. That project will really help us elucidate the functional design of consciousness.

    But, that said, I dislike it when somebody from one discipline claims…[Read more]

  • Glad to know about this paper. I will probably use it in my Evolution of Religiosity & Human Coalitional Psychology course this coming spring semester. Knowledge of the physical world may be confounded with analytical thinking style, which increases religious disbelief. (Let me know if you want related papers.) It would be very cool if knowledge about how…[Read more]

  • I am not rejecting or trying to minimize the importance of this line of thinking. Just saying we must be very careful. Human minds are designed to become mired in the woo-woo muck, especially when it comes to any aspect of self-understanding. Care is always called for. Best — Paul

  • I have read most of Dennett’s stuff and should read this as well. I’m a “critical fan” of his. So, I would be glad to read this book in the context of this group. — Paul

  • I would be very happy to discuss this paper and related “embodied” self/mind materials. I do think one can go too far with it; it’s a slippery slope that can leave one mired in woo-woo muck. To make the most of this important perspective on mind, we must think hard about what natural selection would favor in a mind, remembering that natural selection…[Read more]

  • I regret that I’ll probably miss the next two meetings. I’ll be in Costa Rica in July and in Shrewsbury, UK, in August. Best to all, Paul

  • This reminds me of the principal reason why the meme thing never really led anywhere. Memes are informational replicators that reside in minds/brains. By definition, they replicate via imitation. But when one brain receives a piece of information from another, that receipt process is nothing like copying information from one computer hard drive to…[Read more]
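    To make the contrast concrete, here is a minimal sketch (entirely my own illustration; the message string, noise model, and function names are hypothetical) of the difference between hard-drive-style replication and the lossy, reconstructive transfer that happens between brains:

    ```python
    import random

    MESSAGE = "memes replicate via imitation"

    # Hard-drive-style copy: the destination gets a bit-for-bit duplicate.
    disk_copy = MESSAGE

    def reconstruct(signal: str, noise: float = 0.15) -> str:
        """Mimic brain-to-brain transfer: the receiver rebuilds the signal
        through its own noisy, prior-laden interpretive machinery."""
        alphabet = "abcdefghijklmnopqrstuvwxyz "
        return "".join(
            ch if random.random() > noise else random.choice(alphabet)
            for ch in signal
        )

    received = reconstruct(MESSAGE)
    print(disk_copy == MESSAGE)  # True: exact replication
    print(received == MESSAGE)   # Usually False: reconstruction, not replication
    ```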

  • The brain topology “mind the gaps” article is a very good read.

    Probably the primary reason for the segregation of specialized information-processing units in the brain is to avoid confusing cross-talk, as the article states. As it also briefly mentions, this separation makes it easier to control which brain areas are interacting at…[Read more]
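    As a loose computational caricature of that point (my own sketch, not the article’s; the module names and gating scheme are invented), segregating specialized units and gating their channels makes it easy to control which units interact, while unrelated ones never hear each other:

    ```python
    class Module:
        """A specialized processing unit that hears only what the gate allows."""
        def __init__(self, name: str):
            self.name = name
            self.inbox: list[str] = []

    def send(sender: Module, receiver: Module, signal: str,
             open_channels: set[tuple[str, str]]) -> None:
        # Signals pass only along explicitly opened channels, so unrelated
        # modules never receive confusing cross-talk.
        if (sender.name, receiver.name) in open_channels:
            receiver.inbox.append(signal)

    visual, motor, memory = Module("visual"), Module("motor"), Module("memory")
    open_channels = {("visual", "motor")}  # a controller opens exactly one channel

    send(visual, motor, "obstacle ahead", open_channels)
    send(visual, memory, "obstacle ahead", open_channels)  # gated off: no delivery
    print(motor.inbox)   # ['obstacle ahead']
    print(memory.inbox)  # []
    ```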

  • No, AI companies absolutely will NOT self-regulate. The same goes for human genetic engineering enterprises: science-based governmental agencies with heavy enforcement powers (where are you, Jack Bauer? In custody, in Moscow, I guess) will be needed to shut down a plethora of overt and covert rogue operations, I believe. — Best, PJW

  • Yes. But I think it is misleading to talk about a “seat of consciousness.” That’s just journalistic marketing. Consciousness arises from a dynamic set of neural processes that is pretty much a whole-brain affair. But for sure, parts of the limbic system and brainstem are heavily and necessarily involved, not just the modern human “higher”…[Read more]

  • All bodily capacities, including the most impressive, uniquely human cognitive and metacognitive ones, coevolve with regulatory mechanisms. Regulatory mechanisms operate unconsciously and control the expression […]

    • Some of your terms are new to me, but I think I get the point. I’ll have to do some anatomical and neurological fact-finding to fully follow. Are the regulatory/value systems those that unconsciously direct our attention, shape our perceptions, and (largely) determine our preferences and behaviors? When we talk about “limbic” functions, are we (generally) referring to affective/emotional processes? Clearly, any significant change in (post-)human nature will require a greater appreciation of our evolved state and of the means and mechanisms for exerting more deliberate control over our further development. It’s quite mind-boggling.

      • Some clarifications (hopefully): “Value systems” (an Edelman term of art) are the limbic/unconscious regulatory parts of our brains that (1) constantly monitor our biological needs in relation to effectively and efficiently solving fitness-limiting reproductive problems and (2) correspondingly tightly control the activity of our “higher cognitive processes,” so that they stay on task, using their incredible power to solve those fitness-limiting problems. Unconscious, and sometimes conscious (often adaptively warped), emotions TOTALLY canalize, prune, and drive “higher cognition.” It feels like we are thinking outside the “subhuman,” mechanically utilitarian gene-propagation box, but we are not. Please post this comment for me if you wish. — PJW
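        A toy control loop may make the two functions concrete (my own caricature, not Edelman’s model; the need levels and task names are invented): the value system (1) monitors fitness-related needs and (2) steers “higher cognition” onto whichever problem is currently most fitness-limiting.

        ```python
        # Toy caricature of a limbic "value system": monitor needs, then keep
        # higher cognition on task for the most fitness-limiting problem.
        needs = {"safety": 0.9, "status": 0.4, "mating": 0.7}  # 1.0 = fully satisfied

        def value_system(needs: dict[str, float]) -> str:
            """(1) Monitor needs; (2) pick the most pressing deficit."""
            return min(needs, key=needs.get)

        def higher_cognition(task: str) -> str:
            # Powerful general-purpose machinery whose agenda is set elsewhere.
            return f"planning, simulating, and rationalizing about '{task}'"

        focus = value_system(needs)   # 'status' has the largest deficit
        print(higher_cognition(focus))
        ```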

  • TED talk of possible interest:

    Comment I posted there:
    Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to […]

    • Learning to govern our own ‘evolution’ appears to be the only viable path to a future worth being part of. I look forward to viewing the video and discussing this further. Thanks for posting!

    • Excellent talk. The current situation (likely to persist for years) is that we are usually blind to the inner logic of AI algorithms. She points to evidence of poor-performing algorithms’ outputs being used in highly consequential decision scenarios, including judicial sentencing. Even when they perform well, we don’t know why. (But we can, presumably, tell whether they work well, at least in many areas; see the sketch at the end of this comment.)

      “Purpose-designed AI systems feeding off of growing databases … could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival…” I agree that long-term human survival, in any state worth existing in, will require remaking ourselves. I think another, earlier, necessary application of AI would be to help us identify the ways our innate and learned cognitive biases influence our values, goals and objectives, selection of success criteria, and other factors that affect the details of what we might seek to change in our genes. For example, I suspect part of the basis for the common perception that engineering human nature must always be unethical stems from past abuses by authoritarians (so-called eugenicists). Understanding how what began as a concept for improving quality of life for all quickly became violently abusive will be necessary to avoid the wrong parties taking the lead, and to preclude well-intentioned parties causing unacceptable unintended outcomes.
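      A minimal sketch of the asymmetry noted above (my own illustration; the model and test cases are hypothetical stand-ins): we can score an opaque model on held-out cases and learn whether it works without learning anything about why.

      ```python
      def evaluate(black_box, labeled_cases) -> float:
          """Score a model we cannot inspect, using held-out labeled cases."""
          correct = sum(black_box(x) == y for x, y in labeled_cases)
          return correct / len(labeled_cases)

      def opaque_model(x: float) -> bool:
          # Stand-in for a sealed system; in practice, any model or API whose
          # internals we cannot examine.
          return x > 0.5

      held_out = [(0.9, True), (0.2, False), (0.7, True), (0.4, False)]
      print(evaluate(opaque_model, held_out))  # 1.0: it "works", yet its inner logic stays hidden
      ```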