Psychologist Robert Epstein, the former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP metaphor, he says, is so deeply embedded in thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of wrongness we get into with the IP metaphor and with the notion that mental experience could persist outside the organic body.
“How consciousness evolved and how consciousness has come to affect evolutionary processes are related issues. This is because biological consciousness–the only form of consciousness of which we are aware–is entailed by a particular, fairly sophisticated form of animal cognition, an open-ended ability to learn by association or, as we call it, ‘unlimited associative learning’ (UAL). Animals with UAL can assign value to novel, composite stimuli and action-sequences, remember them, and use what has been learned for subsequent (future), second-order, learning. In our work we argue that UAL is the evolutionary marker of minimal consciousness (of subjective experiencing) because if we reverse-engineer from this learning ability to the underlying system enabling it, this enabling system has all the properties and capacities that characterize consciousness. These include…”
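The second-order learning described in the quote can be illustrated with a toy delta-rule sketch: a compound stimulus acquires value by pairing with a primary reward, and a novel cue then acquires value by pairing with that already-valued compound. To be clear, the learning rule, names, and numbers below are my own illustrative assumptions, not the authors' UAL model.

```python
ALPHA = 0.5  # learning rate (assumed, for illustration only)

def pair(values, cue, outcome_value, alpha=ALPHA):
    """Delta-rule update: move the cue's value toward the outcome's value."""
    current = values.get(cue, 0.0)
    values[cue] = current + alpha * (outcome_value - current)

values = {}

# Phase 1: a novel, composite stimulus (light + tone) is repeatedly paired
# with a primary reward (value 1.0), so the compound itself acquires value.
for _ in range(10):
    pair(values, ("light", "tone"), 1.0)

# Phase 2: a new cue is paired with the now-valued compound, with no primary
# reward present -- it acquires value second-hand (second-order learning).
for _ in range(5):
    pair(values, "buzzer", values[("light", "tone")])
```

After phase 1 the compound's value approaches 1.0; after phase 2 the buzzer inherits most of that value despite never being paired with the reward directly, which is the "use what has been learned for subsequent, second-order, learning" step in miniature.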
“This recent study finally answers these questions by showing that volitionally controlling our respiration, even merely focusing on one’s breathing, yields additional access and synchrony between brain areas. This understanding may lead to greater focus, calmness, and emotional control.”
Excellent article by David Lane. Therein he goes into Edelman’s primary and higher-order consciousness. While acknowledging that natural selection has no purpose, he notes it is indeed ironic that we humans, with our self-aware higher consciousness that creates purpose, ended up at the top of the selection process. The downside is that this higher consciousness is a double-edged sword: it can make up stories that serve the purpose of giving us comfort but are not true. However, it also has the capacity, via the scientific method, to correct those stories with new insights and stories from empirical experiment; hence our superior ability to flourish. Nevertheless, the new stories are still based in natural selection versus supernatural causes. They are better, more accurate stories, open to revision pending further evidence. And they are indeed the result of our higher-order consciousness.
The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers. Many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, “Do split-brain patients have two minds?” Previous topics have included animal consciousness, hierarchical predictive coding and perception, AI ‘machinery,’ AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs adult learning, and brain-mapping initiatives.
Article here from Frontiers in Human Neuroscience, 2017, 11:51. The abstract (note my italicized highlighting):
“Neurofeedback is attracting renewed interest as a method to self-regulate one’s own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.”
This article was originally published at Aeon and has been republished under Creative Commons.
Cassandra woke up to the rays of the sun streaming through the slats on her blinds, cascading over her naked chest. She stretched, her breasts lifting with her arms as she greeted the sun. She rolled out of bed and put on a shirt, her nipples prominently showing through the thin fabric. She breasted boobily to the stairs, and titted downwards.
This particular hyperbolic gem has been doing the rounds on Tumblr for a while. It resurfaced in April 2018, in response to a viral Twitter challenge posed by the US podcaster Whitney Reynolds: women, describe yourself the way a male writer would.
The dare hit a sweet spot. Many could summon up passages from books containing terrible, sexualised descriptions of women. Some of us recalled Haruki Murakami, whose every novel can be summarised as: ‘Protagonist is an ordinary man, except lots of really beautiful women want to sleep with him.’ Others remembered J M Coetzee, and his variations on the plot: ‘Tenured male professor in English literature sleeps with beautiful female undergraduate.’ It was a way for us to joke about the fact that so much great literature was written by men who could express perfectly detailed visual descriptions of the female body, and yet possessed such an impoverished understanding of the female mind.
This is why the philosophical project of trying to map the contours of other minds needs a reality check. If other humans are beyond our comprehension, what hope is there for understanding the experience of animals, artificial intelligence or aliens?
I am a literature scholar. Over thousands of years of literary history, authors have tried and failed to convey an understanding of Others (with a capital ‘O’). Writing fiction is an exercise that stretches an author’s imagination to its limits. And fiction shows us, again and again, that our capacity to imagine other minds is extremely limited.
It took feminism and postcolonialism to point out that writers were systematically misrepresenting characters who weren’t like them. Male authors, it seems, still struggle to present convincing female characters a lot of the time. The same problem surfaces again when writers try to introduce a figure with a different ethnicity to their own, and fail spectacularly.
I mean, ‘coffee-coloured skin’? Do I really need to find out how much milk you take in the morning to know the ethnicity you have in mind? Writers who keep banging on with food metaphors to describe darker pigmentation show that they don’t appreciate what it’s like to inhabit such skin, nor to have such metaphors applied to it.
Conversely, we recently learnt that some publishers rejected the Korean-American author Leonard Chang’s novel The Lockpicker (2017) – for failing to cater to white readers’ lack of understanding of Korean-Americans. Chang gave ‘none of the details that separate Koreans and Korean-Americans from the rest of us’, one publisher’s letter said. ‘For example, in the scene when she looks into the mirror, you don’t show how she sees her slanted eyes …’ Any failure to understand a nonwhite character, it seems, was the fault of the nonwhite author.
Fiction shows us that nonhuman minds are equally beyond our grasp. Science fiction provides a massive range of the most fanciful depictions of interstellar space travel and communication – but anthropomorphism is rife. Extraterrestrial intelligent life is imagined as Little Green Men (or Little Yellow or Red Men when the author wants to make a particularly crude point about 20th-century geopolitics). Thus alien minds have been subject to the same projections and assumptions that authors have applied to human characters, when they fundamentally differ from the authors themselves.
For instance, let’s look at a meeting of human minds and alien minds. The Chinese science fiction author Liu Cixin is best known for his trilogy starting with The Three-Body Problem (2008). It appeared in English in 2014 and, in that edition, each book has footnotes – because there are some concepts that are simply not translatable from Chinese into English, and English readers need these footnotes to understand what motivates the characters. But there are also aliens in this trilogy. From a different solar system. Yet their motivations don’t need footnoting in translation.
Splendid as the trilogy is, I find that very curious. There is a linguistic-cultural barrier that prevents an understanding of the novel itself, on this planet. Imagine how many footnotes we’d need to really grapple with the motivations of extraterrestrial minds.
Our imaginings of artificial intelligence are similarly dominated by anthropomorphic fantasies. The most common depiction of AI conflates it with robots. AIs are metal men. And it doesn’t matter whether the press is reporting on swarm robots invented in Bristol or a report produced by the House of Lords: the press shall plaster their coverage with Terminator imagery. Unless the men imagining these intelligent robots want to have sex with them, in which case they’re metal women with boobily breasting metal cleavage – a trend spanning the filmic arts from Fritz Lang’s Metropolis (1927) to the contemporary TV series Westworld (2016-). The way that we imagine nonhumans in fiction reflects how little we, as humans, really get each other.
All this supports the idea that embodiment is central to the way we understand one another. The ridiculous situations in which authors miss the mark stem from the difference between the author’s own body and that of the character. It’s hard to imagine what it’s like to be someone else if we can’t feel it. So, much as I enjoyed seeing a woman in high heels outrun a T-Rex in Jurassic World (2015), I knew that the person who came up with that scene clearly has no conception of what it’s like to inhabit a female body, be it human or Tyrannosaurus.
Because stories can teach compassion and empathy, some people argue that we should let AIs read fiction in order to help them understand humans. But I disagree with the idea that compassion and empathy are based on a deep insight into other minds. Sure, some fiction attempts to get us to understand one another. But we don’t need any more than a glimpse of what it’s like to be someone else in order to empathise with them – and, hopefully, to not want to kill and destroy them.
As the US philosopher Thomas Nagel claimed in 1974, a human can’t know what it is like to be a bat, because they are fundamentally alien creatures: their sensory apparatus and their movements are utterly different from ours. But we can imagine ‘segments’, as Nagel wrote. This means that, despite our lack of understanding of bat minds, we can find ways to keep a bat from harm, or even nurse and raise an orphaned baby bat, as cute videos on the internet will show you.
The problem is that sometimes we don’t realise this segment is just a glimpse of something bigger. We don’t realise until a woman, a person of colour, or a dinosaur finds a way to point out the limits of our imagination, and the limits of our understanding. As long as other human minds are beyond our understanding, nonhuman ones certainly are, too.
Kanta Dihal is a postdoctoral research assistant and the research project coordinator of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
This article was originally published at Aeon and has been republished under Creative Commons.
Our discussions all, to some extent, relate to cognition. An important area of inquiry concerns whether some form of physical embodiment is required for a brain to support cognition in general and the self-aware sort of cognition we humans possess.
Philosophy In The Flesh: The Embodied Mind And Its Challenge To Western Thought, by George Lakoff and Mark Johnson. Please note, while the title includes “Philosophy,” we are not a philosophy group and the book and discussion will revolve around scientific concepts and implications, not spiritualistic or metaphysical ideas.
– Amazon (used copies in the $6 range, including shipping)
RSVP by email to email@example.com if you plan to attend our discussion on the afternoon of Saturday, November 3, 2018.
While our group enjoys socializing and will plan other events to that end, this meeting is for focused discussion among people who invest the time in advance to inform themselves on the topic. As a courtesy to those who will do their ‘homework,’ before the meeting please read and consider Part 1 (the first eight chapters) of the book. As you read, jot down your thoughts and questions on the book’s claims, supporting evidence, and implications for our core topics–brain, mind, and artificial intelligence. If you are not able to invest this effort prior to the meeting, please do not attend. Thank you for your understanding.
From David Barash, evolutionary biologist and professor of psychology at University of Washington.
“Brief explanatory excursion: it is a useful exercise to ask what brains are for. From an evolutionary perspective, brains evolved not simply to give us a more accurate view of the world, or merely to orchestrate our internal organs or coordinate our movements, or even our thoughts. Rather, brains exist because they maximise the reproductive success of the genes that helped create them and of the bodies in which they reside. To be adaptive, consciousness must be like that. Insofar as it has evolved via natural selection, consciousness must exist because brains that produced consciousness were evolutionarily favoured over those that did not. But why? One possibility is that consciousness gave its possessors the capacity to overrule the tyranny of pleasure and pain.”
“Even more intriguing than its use as a facilitator of impulse control is the possibility that consciousness evolved in the context of our social lives. Human societies privilege a kind of Machiavellian intelligence whereby success in competition and co-operation depends on our evolved ability to imagine another’s situation no less than our own. That isn’t so much out of intended benevolence (although this, too, could be the case) but because such leaps of the imagination allow us to maximise our own interests in the very complex landscape of human societies. Thus, consciousness is not only an unfolding story that we tell ourselves, moment by moment, about what we are doing, feeling and thinking. It also includes our efforts to interpret what other individuals are doing, feeling and thinking, as well as how those others are likely to perceive us in return. […] The more conscious our ancestors were, according to this argument, the more able they were to modify — to their own benefit — others’ impressions of them and, hence, their evolutionary success.”
“It therefore appears at present that human beings, although probably not unique in possessing Theory of Mind, are nonetheless unusual in the degree of its sophistication, specifically in the extent to which they can accurately model the minds of others. It seems highly likely that those who possessed an accurate Theory of Mind enjoyed an advantage when it came to modelling the intentions of others, an advantage that continues to this day, and was an active ingredient in the evolution of human consciousness. And it is at least possible that the more conscious you are, the more accurate is your Theory of Mind, since cognitive modellers should be more effective if they know, cognitively and self-consciously, not only what they are modelling, but that they are doing so.”
Albuquerque Brain, Mind, and Artificial Intelligence Discussion Group