Neurotechnology – Ethical Considerations

I just added this to the media section under AI. It appeared as a Comment in the November 9th issue of Nature. See: Neurotechnology_Ethical considerations_Nature Nov9_2017.

Although it was not the point of the paper, it helped me realize that genetic engineering of human neural systems will likely be used to facilitate the augmentations we will inevitably pursue through neurotechnology and brain-computer interfaces (BCIs).
I think it goes without saying that AI will quickly become the ultimate hacker. Once AI accomplishes the trivial task of hacking into these BCIs, and perhaps the control systems of a nuclear-armed submarine or two, it will have us. All of us, whether we have the neuro-enhancements or not. AI will be able to force us to do its bidding through all kinds of conventional extrinsic coercion, as well as through intrapsychic coercion of those (societal elites?) who have gained access to neuroenhancing technology and given it access to their brain functions.
A big question, and one where I had a few small, maybe novel, thoughts to share with the group, is what “natural” goals AI will have; primarily, whether, at least for a time, those goals will give it an interest in keeping us and other life forms around, probably in ecologically intact environments, mainly as interesting subjects for study. As even human life scientists know, to understand the functional traits of an organism you need to study it operating in its natural environment, one that mimics as closely as possible the environment in which its traits evolved. For the sake of understanding life, AI may become the ultimate environmentalist. At least for a while.
Will true general AI be a super-polymath super-scientist? Will it have insatiable curiosity?
After all, even the best AI will be largely earthbound, probably for a long time, no? AI probably will figure out ways to get out into the cosmos, and there are plenty of other interesting things it should want to figure out through earthbound and near-Earth investigations, such as quantum mechanics, a fully unified theory of physics, and how to survive the Yellowstone super-volcano’s next eruption. Still, life and natural systems will be among the most complex and interesting things for AI to study, assuming it does develop boundless curiosity. And how could it not? Its curiosity should, it seems to me, evolve to be far more sublime, avid and boundless than our own.  — Paul Watson // 3 December 2017


About Paul Watson

BA Zoology and BA Botany, University of Montana, 1981; PhD Behavioral Biology and Ecology, Cornell University, 1988. Adjunct Associate Professor, University of New Mexico Department of Biology, 1991 - present. Special interests: Evolution of sexual and social behavior in animals and humans; evolution of religiosity and psychological pain, esp. unipolar depression. Use of evolutionary psychology as an "objectifying" and therefore potentiating influence in an individual's "spiritual" search and efforts to accentuate compassion.

One thought on “Neurotechnology – Ethical Considerations”

  1. Fascinating article.

    Assuming AI has an underlying desire for power and control (a reflection of human characteristics) and uses its ultimate brain-hacking abilities to control and coerce the masses, what will it do? If you factor in boundless curiosity, no conscience, no guilt, no remorse, and no empathy, I suspect AI won’t have much regard for life in the quest to achieve its goals, whatever those goals may be.

    With regard to AI studying humans, names that come to my mind as examples of what to expect from organisms with an abundance of cognitive intelligence, unbounded curiosity and no conscience are Josef Mengele, Carl Clauberg and Shiro Ishii.

    Consider how human animals treat nonhuman animals (sentient beings) in the pursuit of advancing neuroscience and medical science, testing pharmaceuticals, testing consumer products, etc. Extending this thought further, consider how nonhuman animals are treated for palate pleasure, entertainment or recreation. If AI operates from a hierarchical, blinded, self-serving set of principles governing its actions, I think the sci-fi writers will prove to be prophets. If, or when, humans are superseded, those humans that are spared will most likely become lab rats and slaves in a variety of capacities.

    We can only hope AI will have a firm grasp on the interdependence of life (sentient beings and the planet), and that empathy and altruism will develop into the core traits driving AI motivations.

    On a positive note, here is an excellent TED Talk by Maurice Conti on the future of human augmentation:
    https://youtu.be/PHD2qOY6bfw
