Psychologist Robert Epstein, a former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP metaphor, he says, is so deeply embedded in our thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of wrongness we fall into with the IP metaphor and the notion that mental experience could persist outside the organic body.
During our recent meeting on animal intelligence, Eve mentioned that elephants communicate over long distances by transmitting and receiving low-frequency waves through their feet and skeletons. This came up in the context of my question, “Is physical embodiment necessary for higher cognition?” This article and video from KQED show and explain the phenomenon.
Kurzweil builds and supports a persuasive vision of the emergence of a human-level engineered intelligence in the early-to-mid twenty-first century. In his own words,
With the reverse engineering of the human brain we will be able to apply the parallel, self-organizing, chaotic algorithms of human intelligence to enormously powerful computational substrates. This intelligence will then be in a position to improve its own design, both hardware and software, in a rapidly accelerating iterative process.
In Kurzweil's view, we must and will ensure we evade obsolescence by integrating emerging metabolic and cognitive technologies into our bodies and brains. Through self-augmentation with neurotechnological prostheses, the locus of human cognition and identity will gradually (but faster than we'll expect, due to exponential technological advancements) shift from the evolved substrate (the organic body) to the engineered substrate, ultimately freeing the human mind to develop along technology's exponential curve rather than evolution's much flatter trajectory.
The book is thoroughly annotated and indexed, making the deep-diving reader's work a bit easier.
If you have read it, feel free to post your observations in the comments below. (We've had a problem with the comments section not appearing. It may require more troubleshooting.)
A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes interaction design principles and trends that signal a fundamental shift in the means, deployment, and startling effectiveness of mass persuasion. These mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever ends the designers and content providers choose.
The organizer of a $1,700-per-person event convened to show marketers and technicians “how to manipulate people into habitual use of their products” put it baldly:
subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.
Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.
If you read the article, please comment with any future meeting topics you spot. I find it a vibrant collection of concepts for further exploration.
In this 20-minute video Jeremy Lent gives a brief introduction to his system of liology, his response to substance dualism. Conventional science maintains this dualism, he argues, so it falls to the ecological science of dynamical systems theory to correct it. He finds a precursor of systems science in Chinese Neo-Confucianism, which seems a bit of romantic retrofitting to me, given China's own history of environmental degradation, which he minimizes in his book The Patterning Instinct. That aside, he's right that the emerging paradigm of systems science is a necessary metaphoric shift if we are to have any chance of curtailing climate change and building a sustainable and humane future.
“A new picture is taking shape in which conscious experience is seen as deeply grounded in how brains and bodies work together to maintain physiological integrity – to stay alive.”
“The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world. Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals.”
“A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. […] We’ve found that people consciously see what they expect, rather than what violates their expectations.”
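The perception-as-inference idea in the quotes above can be pictured with a minimal Bayesian sketch (my own toy illustration, not anything from the article): a noisy sensory signal is combined with a prior expectation, each weighted by its precision (inverse variance). With a strong prior and a noisy signal, the inferred percept stays close to what was expected.

```python
# Toy sketch of perception as inference (illustrative only):
# combine a prior "belief" with a noisy sensory signal via a
# Gaussian conjugate update, i.e. a precision-weighted average.

def posterior(prior_mean, prior_var, signal, signal_var):
    """Return the posterior mean and variance of the percept."""
    prior_precision = 1.0 / prior_var
    signal_precision = 1.0 / signal_var
    post_var = 1.0 / (prior_precision + signal_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            signal_precision * signal)
    return post_mean, post_var

# Strong prior (low variance) plus a noisy signal: the percept
# lands far nearer the expectation than the raw signal.
mean, var = posterior(prior_mean=10.0, prior_var=1.0,
                      signal=20.0, signal_var=9.0)
print(round(mean, 2))  # 11.0 — "we see what we expect"
```

The signal is 20, yet the inferred value is 11: when the prior is much more precise than the sensory evidence, expectation dominates, which is one way to read "people consciously see what they expect."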
This article discusses a new paper in the European Journal of Social Psychology showing that our brain's penchant for seeing patterns can go awry. Illusory pattern perception shows up, for example, in climate-science denial, 9/11 trutherism, and Pizzagate-style conspiracy theories. The phenomenon correlates with irrational beliefs that connect dots that aren't there. We all share this tendency to confirm our biases; however, training in critical thinking can reduce its effects.
An MIT Technology Review article introduces the man responsible for the 30-year-old deep learning approach, explains what deep machine learning is, and questions whether deep learning may be the last significant innovation in the AI field. The article also touches on a potential way forward for developing AIs with qualities more analogous to the human brain’s functioning.
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
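The amplification effect described above can be pictured with a deliberately crude sketch (hypothetical numbers, not the study's dataset): a model that simply outputs the most frequent label for a given context turns a statistical skew in the training data into an absolute one.

```python
# Hypothetical toy illustration of bias amplification: in this
# made-up dataset, 2/3 of "cooking" images are labeled "woman".
# A "model" that always predicts the majority label for a context
# pushes that association from 67% to 100%.

from collections import Counter

data = [("cooking", "woman")] * 20 + [("cooking", "man")] * 10

counts = Counter(label for _, label in data)
data_rate = counts["woman"] / len(data)       # association in the data

majority = counts.most_common(1)[0][0]        # the model's sole output
predictions = [majority for _ in data]
model_rate = predictions.count("woman") / len(predictions)

print(f"data: {data_rate:.0%}, model: {model_rate:.0%}")
# data: 67%, model: 100%
```

Real classifiers are subtler than a majority-label rule, but the direction of the effect is the same: optimizing accuracy against skewed labels rewards leaning into the skew.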