Category Archives: psychology

Running on escalators

Ideally, automation would yield a Star Trek reality of increasing leisure and richer choices and experiences. Why isn’t this our experience, on any significant scale? An article on Medium offers some insight.

Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.

Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That tracking quickly breaks down amid the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than to pause to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.

The prisoner’s dilemma shows that we (most humans) would rather be in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility a competitor may increase her lead.
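The logic above is just the dominance argument of the one-shot prisoner’s dilemma. A minimal sketch, with illustrative payoff numbers of my own (not from the article), where ‘rest’ means savoring the leisure dividend and ‘sprint’ means reinvesting it in competition:

```python
# One-shot prisoner's dilemma over the leisure dividend.
# Payoff values are illustrative assumptions, not from the article.
PAYOFFS = {
    ('rest', 'rest'):     (3, 3),  # both savor the dividend
    ('rest', 'sprint'):   (0, 5),  # the sprinter pulls ahead
    ('sprint', 'rest'):   (5, 0),
    ('sprint', 'sprint'): (1, 1),  # the grueling neck-and-neck race
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(['rest', 'sprint'],
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever the competitor does, sprinting pays better for me...
assert best_response('rest') == 'sprint'
assert best_response('sprint') == 'sprint'
# ...so both sprint and each gets 1, though mutual rest would give each 3.
```

Sprinting strictly dominates either way, so both players land on the worst mutual outcome: exactly the forfeited leisure dividend described above.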

Any strategy that’s so endemic must have evolutionary roots. Thoughts?

The info processing (IP) metaphor of the brain is wrong

Psychologist Robert Epstein, the former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP metaphor, he says, is so deeply embedded in thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of wrongness we reach with the IP metaphor and with the notion that mental experience could persist outside the organic body.

The Empty Brain (Aeon article with audio)

Test determines approximate year of death

Age-at-death forecasting – A new test predicts when a person will die. It is currently accurate to within a few years and is improving. What psychological impact might knowing your approximate time of death (± 6 months) have on otherwise healthy people? Does existing research with terminally ill or very old persons shed light on this? What would the social and political implications be? What if a ‘death-clock’ reading became required for certain jobs (elected positions, astronauts, roles requiring expensive training and education, etc.) or decisions (whom to marry or parent children with, whether to adopt, whether to relocate, how to invest and manage one’s finances, etc.)?

Applying artificial intelligence for social good

This McKinsey article is an excellent overview of this more extensive article (3 MB PDF) enumerating the ways in which varieties of deep learning can improve existence. Worth a look.

The articles cover the following:

  • Mapping AI use cases to domains of social good
  • AI capabilities that can be used for social good
  • Overcoming bottlenecks, especially around data and talent
  • Risks to be managed
  • Scaling up the use of AI for social good

Neural Correlates of Post-Conventional Moral Reasoning

The abstract from this article:

“Going back to Kohlberg, moral development research affirms that people progress through different stages of moral reasoning as cognitive abilities mature. Individuals at a lower level of moral reasoning judge moral issues mainly based on self-interest (personal interests schema) or based on adherence to laws and rules (maintaining norms schema), whereas individuals at the post-conventional level judge moral issues based on deeper principles and shared ideals. However, the extent to which moral development is reflected in structural brain architecture remains unknown. To investigate this question, we used voxel-based morphometry and examined the brain structure in a sample of 67 Master of Business Administration (MBA) students. Subjects completed the Defining Issues Test (DIT-2) which measures moral development in terms of cognitive schema preference. Results demonstrate that subjects at the post-conventional level of moral reasoning were characterized by increased gray matter volume in the ventromedial prefrontal cortex and subgenual anterior cingulate cortex, compared with subjects at a lower level of moral reasoning. Our findings support an important role for both cognitive and emotional processes in moral reasoning and provide first evidence for individual differences in brain structure according to the stages of moral reasoning first proposed by Kohlberg decades ago.”

Gibbs on Haidt’s righteous mind

From Gibbs’ Moral Development and Reality:

“Haidt’s new synthesis leads to recognition of at least three serious limitations: descriptive inadequacy or negative skew; unwarranted exclusion or studied avoidance of prescriptive implications; and moral relativism” (33). He then goes into detail on those inadequacies. From the section on moral relativism:

“Haidt’s (2012) sentiment that liberals and conservatives should share meals and narratives and ‘get along’ is helpful, but missing is any call for rational dialogue or moral progress. Nor did Haidt appeal to ‘the right’ (consistency, reversibility, etc.), objective accuracy, or cognitive development. […] As noted, Haidt even likened moral judgments to diversely shaped babblings or tastes. […] Yet if ethical judgments ‘are nothing but the outflow’ of subjective affects, of esthetic feelings or sensory tastes, then ‘it would be as inappropriate to criticize ethical judgment as it would be to criticize gastronomic preferences.’ Given such analogies, what happens to moral objectivity? […]

“In the twenty-first century, the relativist tide has returned; we must swim against it as did Kohlberg and Piaget in their eras. Now, as then, we cannot afford the moral paralysis of a moral psychology that reduces development to enculturation or socialization. Fundamentally, we cannot afford a relativistic moral psychology whose functionalist evolutionary perspective encompasses pragmatic success, advantage, or utility, but not progress, consistency, or truth” (37).

Informative neuroscience presentations at NYU Center for Mind, Brain & Consciousness

The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers, and many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, “Do split-brain patients have two minds?” Previous topics addressed animal consciousness, hierarchical predictive coding and perception, AI ‘machinery,’ AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs adult learning, and brain-mapping initiatives.

News startups aim to improve public discourse

A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.

Can we understand other minds? Novels and stories say: no

by Kanta Dihal

This article was originally published at Aeon and has been republished under Creative Commons.

Cassandra woke up to the rays of the sun streaming through the slats on her blinds, cascading over her naked chest. She stretched, her breasts lifting with her arms as she greeted the sun. She rolled out of bed and put on a shirt, her nipples prominently showing through the thin fabric. She breasted boobily to the stairs, and titted downwards.

This particular hyperbolic gem has been doing the rounds on Tumblr for a while. It resurfaced in April 2018, in response to a viral Twitter challenge posed by the US podcaster Whitney Reynolds: women, describe yourself the way a male writer would.

The dare hit a sweet spot. Many could summon up passages from books containing terrible, sexualised descriptions of women. Some of us recalled Haruki Murakami, whose every novel can be summarised as: ‘Protagonist is an ordinary man, except lots of really beautiful women want to sleep with him.’ Others remembered J M Coetzee, and his variations on the plot: ‘Tenured male professor in English literature sleeps with beautiful female undergraduate.’ It was a way for us to joke about the fact that so much great literature was written by men who could express perfectly detailed visual descriptions of the female body, and yet possessed such an impoverished understanding of the female mind.

This is why the philosophical project of trying to map the contours of other minds needs a reality check. If other humans are beyond our comprehension, what hope is there for understanding the experience of animals, artificial intelligence or aliens?

I am a literature scholar. Over thousands of years of literary history, authors have tried and failed to convey an understanding of Others (with a capital ‘O’). Writing fiction is an exercise that stretches an author’s imagination to its limits. And fiction shows us, again and again, that our capacity to imagine other minds is extremely limited.

It took feminism and postcolonialism to point out that writers were systematically misrepresenting characters who weren’t like them. Male authors, it seems, still struggle to present convincing female characters a lot of the time. The same problem surfaces again when writers try to introduce a figure with a different ethnicity to their own, and fail spectacularly.

I mean, ‘coffee-coloured skin’? Do I really need to find out how much milk you take in the morning to know the ethnicity you have in mind? Writers who keep banging on with food metaphors to describe darker pigmentation show that they don’t appreciate what it’s like to inhabit such skin, nor to have such metaphors applied to it.

Conversely, we recently learnt that some publishers rejected the Korean-American author Leonard Chang’s novel The Lockpicker (2017) – for failing to cater to white readers’ lack of understanding of Korean-Americans. Chang gave ‘none of the details that separate Koreans and Korean-Americans from the rest of us’, one publisher’s letter said. ‘For example, in the scene when she looks into the mirror, you don’t show how she sees her slanted eyes …’ Any failure to understand a nonwhite character, it seems, was the fault of the nonwhite author.

Fiction shows us that nonhuman minds are equally beyond our grasp. Science fiction provides a massive range of the most fanciful depictions of interstellar space travel and communication – but anthropomorphism is rife. Extraterrestrial intelligent life is imagined as Little Green Men (or Little Yellow or Red Men when the author wants to make a particularly crude point about 20th-century geopolitics). Thus alien minds have been subject to the same projections and assumptions that authors have applied to human characters, when they fundamentally differ from the authors themselves.

For instance, let’s look at a meeting of human minds and alien minds. The Chinese science fiction author Liu Cixin is best known for his trilogy starting with The Three-Body Problem (2008). It appeared in English in 2014 and, in that edition, each book has footnotes – because there are some concepts that are simply not translatable from Chinese into English, and English readers need these footnotes to understand what motivates the characters. But there are also aliens in this trilogy. From a different solar system. Yet their motivations don’t need footnoting in translation.

Splendid as the trilogy is, I find that very curious. There is a linguistic-cultural barrier that prevents an understanding of the novel itself, on this planet. Imagine how many footnotes we’d need to really grapple with the motivations of extraterrestrial minds.

Our imaginings of artificial intelligence are similarly dominated by anthropomorphic fantasies. The most common depiction of AI conflates it with robots. AIs are metal men. And it doesn’t matter whether the press is reporting on swarm robots invented in Bristol or a report produced by the House of Lords: the press shall plaster their coverage with Terminator imagery. Unless the men imagining these intelligent robots want to have sex with them, in which case they’re metal women with boobily breasting metal cleavage – a trend spanning the filmic arts from Fritz Lang’s Metropolis (1927) to the contemporary TV series Westworld (2016-). The way that we imagine nonhumans in fiction reflects how little we, as humans, really get each other.

All this supports the idea that embodiment is central to the way we understand one another. The ridiculous situations in which authors miss the mark stem from the difference between the author’s own body and that of the character. It’s hard to imagine what it’s like to be someone else if we can’t feel it. So, much as I enjoyed seeing a woman in high heels outrun a T-Rex in Jurassic World (2015), I knew that the person who came up with that scene clearly has no conception of what it’s like to inhabit a female body, be it human or Tyrannosaurus.

Because stories can teach compassion and empathy, some people argue that we should let AIs read fiction in order to help them understand humans. But I disagree with the idea that compassion and empathy are based on a deep insight into other minds. Sure, some fiction attempts to get us to understand one another. But we don’t need any more than a glimpse of what it’s like to be someone else in order to empathise with them – and, hopefully, to not want to kill and destroy them.

As the US philosopher Thomas Nagel claimed in 1974, a human can’t know what it is like to be a bat, because they are fundamentally alien creatures: their sensory apparatus and their movements are utterly different from ours. But we can imagine ‘segments’, as Nagel wrote. This means that, despite our lack of understanding of bat minds, we can find ways to keep a bat from harm, or even nurse and raise an orphaned baby bat, as cute videos on the internet will show you.

The problem is that sometimes we don’t realise that this segment is just a glimpse of something bigger. We don’t realise until a woman, a person of colour, or a dinosaur finds a way to point out the limits of our imagination, and the limits of our understanding. As long as other human minds are beyond our understanding, nonhuman ones certainly are, too.

Kanta Dihal is a postdoctoral research assistant and the research project coordinator of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.


Schurger et al. debunk Libet

From their paper in Trends in Cognitive Sciences, 20(2), 2016.

“Now a series of new developments has begun to unravel what we thought we knew about the brain activity preceding spontaneous voluntary movements (SVMs). The main new revelation is that the apparent build-up of this activity, up until about 200 ms pre-movement, may reflect the ebb and flow of background neuronal noise, rather than the outcome of a specific neuronal event corresponding to a ‘decision’ to initiate movement.”
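Schurger and colleagues model the pre-movement build-up as a leaky stochastic accumulator: autocorrelated noise drifts around until it happens to cross a threshold, at which point movement is initiated. Here is a minimal sketch of that idea; all parameter values are my own illustrative assumptions, not the paper’s:

```python
import random

def leaky_accumulator(drift=0.05, leak=0.1, noise=0.3, threshold=1.0,
                      dt=0.01, max_steps=100_000, seed=1):
    """Leaky stochastic accumulator in the spirit of Schurger's model:
    'movement' is initiated when autocorrelated noise happens to cross
    a threshold. Parameter values are illustrative, not the paper's."""
    rng = random.Random(seed)
    x = 0.0
    trajectory = []
    for _ in range(max_steps):
        # Leaky integration of a constant drift plus Gaussian noise.
        x += (drift - leak * x) * dt + noise * rng.gauss(0, 1) * (dt ** 0.5)
        trajectory.append(x)
        if x >= threshold:  # threshold crossing = movement initiation
            break
    return trajectory

traj = leaky_accumulator()
# Averaging many such trajectories time-locked to the crossing yields a
# gradual 'build-up' even though no discrete decision event produced it.
```

The point of the model is that averaging trials backwards from the threshold crossing selects for moments when the noise happened to be trending upward, which reproduces a readiness-potential-like ramp without any dedicated ‘decision’ signal.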