Mark suggested this book as a future group reading and discussion, and I agree. Rushkoff gives a very brief summary of his new book on the topic in the TED talk below. It opens with tech billionaires' main concern: where do I build my bunker for the end of the world? So what happened to the idyllic utopias we thought tech was working toward, a collaborative commons of humanity? The tech boom became all about betting on stocks and grabbing as much money as possible for me, myself, and I, while repressing what makes us human. The motto became: “Human beings are the problem and technology is the solution.” Rushkoff is not kind to the transhumanist notion of AI replacing humanity either, a consequence of that motto. He advises that we embed human values into the tech so that it serves us rather than the reverse.
Ideally, automation would yield a Star Trek reality of increasing leisure and quality of choice and experience. Why isn’t this our experience? An article on Medium offers insight into why this is not occurring on any significant scale.
Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.
Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That ‘traceability’ quickly breaks down with the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than to pause to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.
The prisoner’s dilemma shows that we (most humans) would rather be in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility that a competitor may increase her lead.
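The dynamic described above is the standard one-shot prisoner's dilemma. A minimal sketch in Python (with purely illustrative payoff numbers, not taken from the article) shows why "sprint ahead" dominates "relax" for each player, stranding both at the mutually worse outcome:

```python
# One-shot prisoner's dilemma framed as the 'leisure dividend' choice.
# Payoff values are illustrative assumptions, not from the article.
# Each player either relaxes (cooperates) or sprints ahead (defects).

PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("relax", "relax"):   (3, 3),  # both enjoy the leisure dividend
    ("relax", "sprint"):  (0, 5),  # I relax, my rival pulls ahead
    ("sprint", "relax"):  (5, 0),  # I pull ahead while my rival relaxes
    ("sprint", "sprint"): (1, 1),  # grueling race; the dividend is forfeited
}

def best_response(their_move):
    """Return the move that maximizes my payoff, given the rival's move."""
    return max(("relax", "sprint"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Whatever the rival does, sprinting pays more for me...
assert best_response("relax") == "sprint"   # 5 beats 3
assert best_response("sprint") == "sprint"  # 1 beats 0
# ...so both players sprint and land on the (1, 1) outcome,
# worse for each than the (3, 3) of mutually relaxing.
```

In the iterated version of the game, strategies like tit-for-tat can sustain mutual relaxing, but only as long as each player can track the other's past moves — exactly the 'traceability' that breaks down with vast numbers of competitors.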
Any strategy that’s so endemic must have evolutionary roots. Thoughts?
Age-at-death forecasting – A new test predicts when a person will die. It’s currently accurate within a few years and is getting more accurate. What psychological impact might knowing your approximate (± 6 months) time of death have on otherwise healthy people? Does existing research with terminally ill or very old persons shed light on this? What would the social and political implications be? What if a ‘death-clock’ reading became required for certain jobs (elected positions, astronauts, roles requiring expensive training and education, etc.) or decisions (whom to marry or parent children with, whether to adopt, whether to relocate, how to invest and manage one’s finances, etc.)?
Lent makes many of the points we raised in our discussion of Harari’s book Homo Deus. Lent said:
“Apparently unwittingly, Harari himself perpetuates unacknowledged fictions that he relies on as foundations for his own version of reality. Given his enormous sway as a public intellectual, Harari risks causing considerable harm by perpetuating these fictions. Like the traditional religious dogmas that he mocks, his own implicit stories wield great influence over the global power elite as long as they remain unacknowledged. I invite Harari to examine them here. By recognizing them as the myths they actually are, he could potentially transform his own ability to help shape humanity’s future.”
I will list only the bullet-point fictions below; see the link for the details:
1. Nature is a machine.
2. There is no alternative.
3. Life is meaningless so it’s best to do nothing.
4. Humanity’s future is a spectator sport.
The articles cover the following:
- Mapping AI use cases to domains of social good
- AI capabilities that can be used for social good
- Overcoming bottlenecks, especially around data and talent
- Risks to be managed
- Scaling up the use of AI for social good
The abstract from this article:
“Going back to Kohlberg, moral development research affirms that people progress through different stages of moral reasoning as cognitive abilities mature. Individuals at a lower level of moral reasoning judge moral issues mainly based on self-interest (personal interests schema) or based on adherence to laws and rules (maintaining norms schema), whereas individuals at the post-conventional level judge moral issues based on deeper principles and shared ideals. However, the extent to which moral development is reflected in structural brain architecture remains unknown. To investigate this question, we used voxel-based morphometry and examined the brain structure in a sample of 67 Master of Business Administration (MBA) students. Subjects completed the Defining Issues Test (DIT-2) which measures moral development in terms of cognitive schema preference. Results demonstrate that subjects at the post-conventional level of moral reasoning were characterized by increased gray matter volume in the ventromedial prefrontal cortex and subgenual anterior cingulate cortex, compared with subjects at a lower level of moral reasoning. Our findings support an important role for both cognitive and emotional processes in moral reasoning and provide first evidence for individual differences in brain structure according to the stages of moral reasoning first proposed by Kohlberg decades ago.”
Speaking of metaphors, here is an article by David Sloan Wilson. Some excerpts:
“[Adam] Smith was critical of Mandeville and presented a more nuanced view of human nature in his Theory of Moral Sentiments (1759), but modern economic and political discourse is not about nuance. Rational choice theory takes the invisible hand metaphor literally by trying to explain the length and breadth of human behavior on the basis of individual utility maximization, which is fancy talk for the narrow pursuit of self-interest.”
“The collapse of our economy for lack of regulation was preceded by the collapse of rational choice theory. It became clear that the single minimalistic principle of self-interest could not explain the length and breadth of human behavior. Economists started to conduct experiments to discover the actual preferences that drive human behavior. […] Actual human preferences are all about regulation. […] Once the capacity for regulation is provided in the form of rewards and punishments that can be implemented at low cost, cooperation rises to high levels.”
“Functioning as large cooperative groups is not natural. Large human groups scarcely existed until the advent of agriculture a mere 10 thousand years ago. This means that new cultural constructions are required that interface with our genetically evolved psychology for human society to function adaptively at a large scale.”
“Theories and metaphors are the cultural equivalent of genes. They influence our behaviors, which have consequences in the real world. Mother nature practices tough love. When a theory or a metaphor leads to inappropriate behaviors, we suffer the consequences at scales small and large. To change our behaviors, we need to change our theories and metaphors.”
“New theories are not good enough, however. We also need to change the metaphors that guide behavior in everyday life to avoid the disastrous consequences of our current metaphor-guided behaviors. That is why the metaphor of the invisible hand should be declared dead. Let there be no more talk of unfettered competition as a moral virtue. Cooperative social life requires regulation. Regulation comes naturally for small human groups but must be constructed for large human groups. Some forms of regulation will work well and others will work poorly. We can argue at length about smart vs. dumb regulation but the concept of no regulation should be forever laid to rest.”
The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers. Many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, “Do split-brain patients have two minds?” Previous topics addressed animal consciousness, hierarchical predictive coding and perception, AI ‘machinery,’ AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs. adult learning, and brain-mapping initiatives.
A Scientific American interview with famed primatologist and evolutionary theorist Sarah Blaffer Hrdy.
“If we really want to raise Darwin’s consciousness we need to expand evolutionary perspectives to include the Darwinian selection pressures on mothers and on infants. So much of our human narrative is about selection pressures but, when you stop to think and parse the hypotheses, they’re really about selection pressures on males: hunting hypotheses or lethal intergroup conflict hypotheses to explain human brains. Well, does that mean that females don’t have brains?”
An open-access book by Giorgio Griziotti is here. Technical book for you techies. The blurb:
“Technological change is ridden with conflicts, bifurcations and unexpected developments. Neurocapitalism takes us on an extraordinarily original journey through the effects that cutting-edge technology has on cultural, anthropological, socio-economic and political dynamics. Today, neurocapitalism shapes the technological production of the commons, transforming them into tools for commercialization, automatic control, and crisis management. But all is not lost: in highlighting the growing role of General Intellect’s autonomous and cooperative production through the development of the commons and alternative and antagonistic uses of new technologies, Giorgio Griziotti proposes new ideas for the organization of the multitudes of the new millennium.”