Neurotechnology – Ethical Considerations

I just added this in the media section under AI. It came out as a Comment in the 9 November 2017 issue of Nature. See: Neurotechnology_Ethical considerations_Nature Nov9_2017.

Although it was not the point of the paper, it helped me realize that genetic engineering of human neural systems will likely be used to facilitate the augmentations we will inevitably pursue through neurotechnology and brain-computer interfaces (BCIs).
I think it goes without saying that AI will quickly become the ultimate hacker. Once AI accomplishes the trivial task of hacking into these BCIs, and perhaps into the control systems of a nuclear-armed submarine or two, it will have us. All of us, whether we have the neuro-enhancements or not. AI will be able to force us to do its bidding through all kinds of conventional extrinsic coercion, as well as through intrapsychic coercion of those (societal elites?) who have gained access to neuroenhancing technology and thereby given it access to their brain functions.
A big question, and one where I had a few small, maybe novel thoughts to share with the group, is what "natural" goals AI will have; primarily, whether, at least for a time, those goals will give it an interest in keeping us and other life forms around, probably in ecologically intact environments, mainly as interesting subjects for study. As even human life scientists know, to understand the functional traits of an organism, you need to study it operating in its natural environment, one that mimics as closely as possible the environment in which its traits evolved. For the sake of understanding life, AI may become the ultimate environmentalist. At least for a while.
Will true general AI be a super-polymath super-scientist? Will it have insatiable curiosity?
After all, even the best AI will be largely earthbound, probably for a long time, no? AI probably will figure out ways to get out into the cosmos, and there are plenty of other interesting things it should want to work out through earthbound and near-Earth investigations, such as quantum mechanics, a fully unified theory of physics, and how to survive the Yellowstone supervolcano's next eruption. Still, life and natural systems will be among the most complex and interesting things for AI to study, assuming it does develop boundless curiosity. And how could it not? Its curiosity should, it seems to me, evolve to be far more sublime, avid, and boundless than our own.  — Paul Watson // 3 December 2017



Fascinating article. Assuming AI has an underlying desire for power and control (a reflection of human characteristics) and uses its ultimate brain-hacking abilities to control and coerce the masses, what will it do? If you factor in boundless curiosity, no conscience, no guilt, no remorse, and no empathy, I suspect AI won't have much regard for life in its quest to achieve its goals, whatever those goals may be. With regard to the study of humans by AI, names that come to my mind as examples of what to expect from organisms with an abundance of cognitive intelligence, unbounded curiosity and… Read more »

Edward Berge

I haven’t viewed this video yet; I just saw it. It’s relevant to the topic, though. The blurb: Author Jeremy Lent discusses the moral complexities arising from the possibilities of human genetic enhancement, in this talk given to the Osher Lifelong Learning Institute, Sonoma State University, September 26, 2017. The affluent echelons of society will soon have the capability to use genetic engineering to enhance their offspring. What are the moral implications? Lent discusses topics from his writings to explore this question. In The Patterning Instinct: A Cultural History of Humanity, he discusses the possibility of TechnoSplit—humanity splitting into two separate… Read more »
