I was reminded of the video below, and this longer examination of the ideas therein. Here’s the blurb from the latter:
“Divided Brain, Divided World explores the significance of the scientific fact that the two hemispheres of our brains have radically different ‘world views’. It argues that our failure to learn lessons from the crash, our continuing neglect of climate change, and the increase in mental health conditions may stem from a loss of perspective that we urgently need to regain.
“Divided Brain, Divided World examines how related issues are illuminated by the ideas developed in author and psychiatrist Iain McGilchrist’s critically acclaimed work: The Master and his Emissary. It features a dialogue between McGilchrist and Director of RSA’s Social Brain Centre, Dr Jonathan Rowson, which informed a workshop with policymakers, journalists and academics.
“This workshop led to a range of written reflections on the strength and significance of the ideas, including critique, clarification and illustrations of relevance in particular domains, including economics, behavioural economics, climate change, NGO campaigning, patent law, ethics, and art.”
Team Human by Douglas Rushkoff investigates the impacts of current and emerging technologies and digital culture on individuals and groups and seeks ways to evade or extract ourselves from their corrosive effects.
After you read the book, please post your thoughts as comments to this post or, if you prefer, as new posts. There are interviews and other resources about the book online. Feel free to recommend in the comments those you find meaningful. Also, the audiobook is available through the Albuquerque Public Library but may have a long wait queue (I’m aiming for a record number of ‘q’s in this sentence).
Please use the tag and/or category ‘Rushkoff’ in your new posts. Use any other tags or categories you want. To access categories and tags while composing a post, click ‘Document’ at the top of the options area on the right side of the editing page.
Any comments you add to this post should inherit the post’s categories and tags. Add any additional ones as you like.
Last, this site includes a book reviews app for registered site members. To use it, log in and select Review under the New menu.
From this piece located at the publications page of the International Computer Science Institute. “Mathematical models help describe reality, but only by ignoring its inherent integrity.” Computers work on binary logic and the world is full of ‘noise.’ Hence computers, and mathematical models for that matter, can only approximate reality by eliminating that noise.
“Can a bunch of bits represent reality exactly, in a way that can be controlled and predicted indefinitely? The answer is no, because nature is inherently chaotic, while a bunch of bits representing a program can never be so, by definition.”
Which leads us to ask: “Are our mathematical models just a desperate, failed attempt to de-noise an otherwise very confusing, extremely blurred reality?”
So yes, math and computers are quite useful as long as we keep the above in mind instead of assuming they reveal reality as it is. And as long as we also search for that noisy humanity in the spaces between binary logic, which will never be revealed by math or computers alone.
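A small, concrete illustration of the point (my own, not from the ICSI piece): even the humble decimal 0.1 has no exact binary representation, so a computer stores and computes with a nearby approximation.

```python
# Binary floating point cannot represent most decimal fractions exactly;
# even 0.1 is stored as the nearest available binary approximation.
from decimal import Decimal

# Show the exact value the machine actually stores for the literal 0.1
# (a long string of digits only approximately equal to one tenth).
print(Decimal(0.1))

# The accumulated representation error makes this comparison fail:
print(0.1 + 0.2 == 0.3)  # prints False
```

The bits are perfectly deterministic and predictable; it's the continuous quantity they stand in for that gets quietly discarded, which is the essay's point in miniature.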
“In this episode of Tech Effects, we explore the impact of music on the brain and body. From listening to music to performing it, WIRED’s Peter Rubin looks at how music can change our moods, why we get the chills, and how it can actually change pathways in our brains.”
For me the most interesting part came later in the video (10:20): when we improvise, we shut down the pre-frontal planning part of the brain and ‘just go with the flow’, and those are our most creative and innovative moments. This does depend, though, on having used the pre-frontal cortex to learn the techniques of music, getting them so ingrained in memory that we are then free to play with what we’ve programmed.
Ideally, automation would yield a Star Trek reality of increasing leisure and quality of choice and experience. Why hasn’t that happened? An article on Medium offers insight into why it is not occurring on any significant scale.
Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.
Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That ‘trackability’ quickly breaks down with the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than pausing to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.
The prisoner’s dilemma shows that we (most humans) would rather be in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility a competitor may increase her lead.
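The logic above can be sketched as a one-shot payoff matrix. This is a minimal illustration with hypothetical payoff values (mine, not the article's): each player either ‘rests’ (enjoys the leisure dividend) or ‘sprints’ (reinvests it in competition).

```python
# Hypothetical payoffs for the leisure-dividend dilemma.
# Key: (my_choice, rival_choice) -> my payoff.
PAYOFFS = {
    ("rest",   "rest"):   3,  # both savor the dividend: mutual benefit
    ("rest",   "sprint"): 0,  # I relax while the rival pulls ahead: worst for me
    ("sprint", "rest"):   5,  # I pull ahead: the tempting best case
    ("sprint", "sprint"): 1,  # arms race: both forfeit the dividend
}

def best_response(rival_choice):
    """Return the choice that maximizes my payoff against a fixed rival move."""
    return max(["rest", "sprint"],
               key=lambda mine: PAYOFFS[(mine, rival_choice)])

# Sprinting strictly dominates: it is my best response either way...
assert best_response("rest") == "sprint"
assert best_response("sprint") == "sprint"
# ...yet mutual sprinting (1, 1) leaves both worse off than mutual rest (3, 3).
```

With these numbers, rational self-interest drives both players into the neck-and-neck race even though both would prefer the mutual-rest outcome, which is exactly the trap the post describes.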
Any strategy that’s so endemic must have evolutionary roots. Thoughts?
Psychologist Robert Epstein, former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP (information processing) metaphor, he says, is so deeply embedded in thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of wrongness we get into with the IP metaphor and the notion that mental experience could persist outside the organic body.
This article was originally published at Aeon and has been republished under Creative Commons.
Cassandra woke up to the rays of the sun streaming through the slats on her blinds, cascading over her naked chest. She stretched, her breasts lifting with her arms as she greeted the sun. She rolled out of bed and put on a shirt, her nipples prominently showing through the thin fabric. She breasted boobily to the stairs, and titted downwards.
This particular hyperbolic gem has been doing the rounds on Tumblr for a while. It resurfaced in April 2018, in response to a viral Twitter challenge posed by the US podcaster Whitney Reynolds: women, describe yourself the way a male writer would.
The dare hit a sweet spot. Many could summon up passages from books containing terrible, sexualised descriptions of women. Some of us recalled Haruki Murakami, whose every novel can be summarised as: ‘Protagonist is an ordinary man, except lots of really beautiful women want to sleep with him.’ Others remembered J M Coetzee, and his variations on the plot: ‘Tenured male professor in English literature sleeps with beautiful female undergraduate.’ It was a way for us to joke about the fact that so much great literature was written by men who could express perfectly detailed visual descriptions of the female body, and yet possessed such an impoverished understanding of the female mind.
This is why the philosophical project of trying to map the contours of other minds needs a reality check. If other humans are beyond our comprehension, what hope is there for understanding the experience of animals, artificial intelligence or aliens?
I am a literature scholar. Over thousands of years of literary history, authors have tried and failed to convey an understanding of Others (with a capital ‘O’). Writing fiction is an exercise that stretches an author’s imagination to its limits. And fiction shows us, again and again, that our capacity to imagine other minds is extremely limited.
It took feminism and postcolonialism to point out that writers were systematically misrepresenting characters who weren’t like them. Male authors, it seems, still struggle to present convincing female characters a lot of the time. The same problem surfaces again when writers try to introduce a figure with a different ethnicity to their own, and fail spectacularly.
I mean, ‘coffee-coloured skin’? Do I really need to find out how much milk you take in the morning to know the ethnicity you have in mind? Writers who keep banging on with food metaphors to describe darker pigmentation show that they don’t appreciate what it’s like to inhabit such skin, nor to have such metaphors applied to it.
Conversely, we recently learnt that some publishers rejected the Korean-American author Leonard Chang’s novel The Lockpicker (2017) – for failing to cater to white readers’ lack of understanding of Korean-Americans. Chang gave ‘none of the details that separate Koreans and Korean-Americans from the rest of us’, one publisher’s letter said. ‘For example, in the scene when she looks into the mirror, you don’t show how she sees her slanted eyes …’ Any failure to understand a nonwhite character, it seems, was the fault of the nonwhite author.
Fiction shows us that nonhuman minds are equally beyond our grasp. Science fiction provides a massive range of the most fanciful depictions of interstellar space travel and communication – but anthropomorphism is rife. Extraterrestrial intelligent life is imagined as Little Green Men (or Little Yellow or Red Men when the author wants to make a particularly crude point about 20th-century geopolitics). Thus alien minds have been subject to the same projections and assumptions that authors have applied to human characters, when they fundamentally differ from the authors themselves.
For instance, let’s look at a meeting of human minds and alien minds. The Chinese science fiction author Liu Cixin is best known for his trilogy starting with The Three-Body Problem (2008). It appeared in English in 2014 and, in that edition, each book has footnotes – because there are some concepts that are simply not translatable from Chinese into English, and English readers need these footnotes to understand what motivates the characters. But there are also aliens in this trilogy. From a different solar system. Yet their motivations don’t need footnoting in translation.
Splendid as the trilogy is, I find that very curious. There is a linguistic-cultural barrier that prevents an understanding of the novel itself, on this planet. Imagine how many footnotes we’d need to really grapple with the motivations of extraterrestrial minds.
Our imaginings of artificial intelligence are similarly dominated by anthropomorphic fantasies. The most common depiction of AI conflates it with robots. AIs are metal men. And it doesn’t matter whether the press is reporting on swarm robots invented in Bristol or a report produced by the House of Lords: the press shall plaster their coverage with Terminator imagery. Unless the men imagining these intelligent robots want to have sex with them, in which case they’re metal women with boobily breasting metal cleavage – a trend spanning the filmic arts from Fritz Lang’s Metropolis (1927) to the contemporary TV series Westworld (2016-). The way that we imagine nonhumans in fiction reflects how little we, as humans, really get each other.
All this supports the idea that embodiment is central to the way we understand one another. The ridiculous situations in which authors miss the mark stem from the difference between the author’s own body and that of the character. It’s hard to imagine what it’s like to be someone else if we can’t feel it. So, much as I enjoyed seeing a woman in high heels outrun a T-Rex in Jurassic World (2015), I knew that the person who came up with that scene clearly has no conception of what it’s like to inhabit a female body, be it human or Tyrannosaurus.
Because stories can teach compassion and empathy, some people argue that we should let AIs read fiction in order to help them understand humans. But I disagree with the idea that compassion and empathy are based on a deep insight into other minds. Sure, some fiction attempts to get us to understand one another. But we don’t need any more than a glimpse of what it’s like to be someone else in order to empathise with them – and, hopefully, to not want to kill and destroy them.
As the US philosopher Thomas Nagel claimed in 1974, a human can’t know what it is like to be a bat, because they are fundamentally alien creatures: their sensory apparatus and their movements are utterly different from ours. But we can imagine ‘segments’, as Nagel wrote. This means that, despite our lack of understanding of bat minds, we can find ways to keep a bat from harm, or even nurse and raise an orphaned baby bat, as cute videos on the internet will show you.
The problem is that sometimes we don’t realise that this segment is just a glimpse of something bigger. We don’t realise it until a woman, a person of colour, or a dinosaur finds a way to point out the limits of our imagination, and the limits of our understanding. As long as other human minds are beyond our understanding, nonhuman ones certainly are, too.
Kanta Dihal is a postdoctoral research assistant and the research project coordinator of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
This article was originally published at Aeon and has been republished under Creative Commons.
“We are the ones that create human nature by inculcating cooperation and care over selfishness and power.”
The view you express, Ed, contesting Harari’s claim in Homo Deus, seems to edge up closely to the “pre-modern” standard social science model of human nature, i.e., that it is almost solely a product of culture, with no or minimal influence from naturally selected genes and the very fancy naturally selected epigenetic mechanisms for gene regulation. It is the idea that, mentally, we are pretty much born a blank slate. That is demonstrably wrong. There is a deep and mighty pan-cultural, species-typical human nature that impacts all our intrapsychic life and behavior. It is designed to be impacted only in very specific and limited, biologically fitness-enhancing ways by local cultural influences. Harari is correct, at least in the sense that our basic nature values truth only contingently, that is, only to the extent that it increases our power to generate greater lifetime inclusive fitness.
Yet, and here is where you and I can find, IMO, great and expansive common ground, natural selection in our species created a mind designed to compete in complex multi-partner, multi-currency socioeconomic bargaining, and thus for status (i.e., power), with great acumen, during an ongoing intraspecific arms race with other humans, including close social partners, over the last several hundred thousand years. Importantly, non-trivial metacognition and mentalization (theory of mind) capacities evolved as part of our package of competitive cognitive capacities; these can be used to evaluate, predict, and manipulate others, and to observe and study ourselves. Imaginative capacities and an ability to believe deeply in both fantasy and evidence also evolved to allow us to cohabit “adaptively subjective dreamworlds” (ASDs) that hold human groups together. The US Constitution, for example, is a written-down, very dear and pretty darn auspicious ASD.
Natural selection has zero foresight. This is the only reason we have any chance of beginning to alter how our minds operate. Down the road, once some leaders develop the capacity to make good decisions about how to genetically modify ourselves to be more compassionate and sustainable, probably with the help of evolutionary psychology, a massive program of intentional genetic evolution may be what’s really necessary to get us through our current very dangerous technological adolescence.
Robust, transparent (nonconscious), sly and clever neurological regulatory mechanisms assuredly have evolved to more or less (denoting very slight individual variation in brain development) lock us into making effective and efficient (i.e., powerful) use of our outstanding cognitive abilities to maximize lifetime gene propagation, whether we know this is what we are up to or not.
Yet, this same program of natural selection, epiphenomenally, gave all or most of us the potential — almost always hard won and seldom truly accessed — to employ evolutionarily novel intrapsychic maneuvers, learned from our most sophisticated ancestors, to weaken or “get ahead of” the above-mentioned regulatory mechanisms. Here I am referring to introspective techniques that help us see our own mental operations more objectively, not techniques that just lead to relaxation or greater happiness. This unnaturally objectified seeing can happen in real time (best) or during reflection upon past events (dicey).
An analogy, accidentally constructed by the Wachowskis (?), for using the introspective techniques I’m referring to is vividly given in “The Matrix” trilogy, when Morpheus and his team, eventually especially Neo, purposely send their minds into the matrix via skillful intrapsychic hacking procedures. They are not going in there to sunbathe… even though that would be nice. They cannot. The regulatory mechanisms already in place are quite, albeit imperfectly, adaptive in real time. They have the ability to learn. They are seldom far behind, and their prime mandate is to encapsulate or literally destroy the complex neural circuits (i.e., symbolized by Matrix characters like Trinity, Morpheus, Mouse, and Cypher) that may collaborate to enable biologically subversive attempts at gaining deep objective self-knowledge. These regulatory mechanisms are key to biologically adaptive neurodevelopment, and they are extraordinarily resourceful and ruthless. They may be limbically based, but any part of the brain can be recruited to help them fulfill their mission, as was “The Matrix” character Cypher.
My own mind largely has been ruined, I feel, by engaging in this process. A lot of my essential “freedom circuitry” has been repeatedly hammered. But, I still believe success is possible for some, particularly if they can learn from the mistakes and rare successes of others. Call it faith in consciousness.
A new analogy has hit me. We are born into a cognitive-emotional prison cell full of delights as well as sources of suffering. (As per astute Buddhist teachings, it’s really all suffering.) But we may notice that hanging from the ceiling, outside the cell bars but more or less within reach, there are various sets of shiny keys. Usually, one of them opens our cell door. Other keys in the set open additional doors spread throughout an unknown intrapsychic labyrinth. Opening some of those doors triggers an instant alarm, others a delayed alarm, maybe others no alarm at all, especially if the key is inserted and turned correctly. Some sets of keys open doors that lead to traps and cul-de-sacs. You can easily end up in a seemingly nicer jail cell. Or a worse one. Perhaps you can end up in enticing cells, but with no keys hanging outside the bars. It may be hard to tell if one has progressed in any meaningful way.
A legitimate teacher, or cultural tradition, and/or a modern scientific tradition may help us learn something of the labyrinth, and which set of keys to pick that lead to real freedom, or at least time-limited degrees of it. We can learn to go farther and farther. But the prison is larger and more complex than we typically can conceive, especially anywhere near to our starting position, and especially if we try to do so alone.
Perhaps the best path is right around a nearby intrapsychic corner. But if anyone tells you so, beware. — Paul
PS: I’ll try to post this on our web site, since it took a couple hours to write, and may have some value for our upcoming discussion(s).
By Fox et al. (2018), Annals of the New York Academy of Sciences, 12 May, pp. 1–27. The abstract:
“Despite increasing scientific interest in self-generated thought—mental content largely independent of the immediate environment—there has yet to be any comprehensive synthesis of the subjective experience and neural correlates of affect in these forms of thinking. Here, we aim to develop an integrated affective neuroscience encompassing many forms of self-generated thought—normal and pathological, moderate and excessive, in waking and in sleep. In synthesizing existing literature on this topic, we reveal consistent findings pertaining to the prevalence, valence, and variability of emotion in self-generated thought, and highlight how these factors might interact with self-generated thought to influence general well-being. We integrate these psychological findings with recent neuroimaging research, bringing attention to the neural correlates of affect in self-generated thought. We show that affect in self-generated thought is prevalent, positively biased, highly variable (both within and across individuals), and consistently recruits many brain areas implicated in emotional processing, including the orbitofrontal cortex, amygdala, insula, and medial prefrontal cortex. Many factors modulate these typical psychological and neural patterns, however; the emerging affective neuroscience of self-generated thought must endeavor to link brain function and subjective experience in both everyday self-generated thought as well as its dysfunctions in mental illness.”