This article is relevant to our recent discussions and to Zak Stein's suggestion (see Edward's recent post) that great destabilizing events open gaps in which new structures can supplant older, disintegrating systems, with inherent risks and opportunities.
- What is humanity’s situation with respect to surviving long-term with a good quality of life? (Frame the core opportunities and obstacles.)
- What attributes of our evolved, experientially programmed brains contribute to this situation? (What are the potential leverage points for positive change within our body-brain-mind system?)
- What courses of research and action (including currently available systems, tools, and practices and current and possible lines of R&D) have the potential to improve our (and the planetary life system’s) near- and long-term prospects?
Following is a list of (only some of!) the resources some of us have consumed and discussed online, in emails, or face-to-face in 2019. Sample a few to jog your thoughts and provoke deeper dives. Please add your own additional references in the comments below this post. For each, give a short (one line is fine) description, if possible.
- In The Age of AI (Frontline video – about 2 hours)
- Cognitive aspects of interactive technology use
- The origins and evolutionary effects of consciousness
- Damasio on consciousness
- Range: Why Generalists Triumph in a Specialized World
- Team Human by Rushkoff
- Life 3.0 (video interview – about 1 hr 23 min)
- Life 3.0 synopsis
- Eric Brynjolfsson and Max Tegmark on ‘Life 3.0’ – exponential change (video – about 1 hr)
- Zero marginal cost society – collective commons
- Syntegration – key to innovation
- Storytelling as adaptive collective sensemaking
- How does music affect the brain?
- The neuroscience of creativity
- Is the info processing (IP) metaphor of the brain wrong?
- Intra-species evolutionary arms race drove brainpower leaps
- Evolutionary theory: Fringe or central to psychological science
- Climate change and social transformations
- Algorithm, not talent or merit, determines wealth distribution
- 2019 ‘best’ year on record for humans
- Influence of capitalism on well-being
- Does altruism exist?
- Networks thinking themselves (video – about 1 hour)
- Did ability to enter trance states enable formation of human society?
- Cultural evolution
- Free, Fair and Alive: The insurgent power of the commons
- New scientific model can predict moral and political development
- Do our models get in the way?
- 40-year update on meme theory
- Beyond free will: The embodied emergence of conscious agency
- How the internet is affecting your brain
- Ideas of Stuart Kauffman
- This link shows 12 positive benefits of meditation supported by scientific studies
- Part of the collective commons transformation is how humanity has become a hybrid cyborg with the machine, meaning the personal computer and an internet connection. This has fundamentally changed our nature to one of a mass-communicated collaborative commons. The Frontiers ebook covers the tech side of that development, whereas Rifkin writes about the more social side.
- Heart-rate variability and social coherence
- How cooperatives are driving the new economy
- Yuval Noah Harari Is Worried About Our Souls
- The Age of Entanglement
- Fungi as a new model for cooperation and communication?
- The landscape of 21st century science
- The collective computation of reality in nature and society (among other great SFI resources)
- Brain tunes itself to criticality, maximizing information processing
- Evolved biocultural beings
- Editorial: Evolutionary Theory: Fringe or Central to Psychological Science
- From computers to cultivation: reconceptualizing evolutionary psychology
- Evolved computers with culture. Commentary: From computers to cultivation: reconceptualizing evolutionary psychology
- Information-Processing and Embodied, Embedded, Enactive Cognition Part 1
- Frontiers – Peer-reviewed, free-access scientific journals
- Divided brain, divided world (video – about 11 mins)
- Is the power law really all that?
- Scale-free networks are rare
- Consciousness in humanoid robots
- Journal: Human Arenas
- SFI: InterPlanetary Round Table Discussion: Our Future in Space (Neal Stephenson and others)
- Thinking devices – imitation, mind-reading, language and others – are neither hard-wired nor designed by genetic evolution
- EU Ethics guidelines for trustworthy AI
- The neural and cognitive foundations of math
- AI will never conquer humanity
- The agency of cognitive artifacts
- Neuroscience: Deep breathing changes your brain
Since this came up in our book discussion of Range yesterday, here is something relevant from this article. It's interesting how the salience network mediates between and integrates two networks that normally alternate (when one is on, the other is off), and how it is the connections between networks that seem to do the trick, akin to the book's description of how those with range make analogous connections between ideas and domains.
“Three of these distinct brain networks — the default mode, the executive control network and the salience network — have been identified by Dr Beaty and colleagues as being associated with creativity.
“The default mode network is activated when people are relaxed and their mind is wandering to different topics or experiences, associated with remembering past experiences, thinking about possible future experience and daydreaming.
“The executive control network comes into play when you need to pay close attention and focus on something in the environment. It comes online when we have to focus our attention and cognitive resources on more demanding tasks that require us to hone our attention and manage multiple things in our mind at one time, directing the content of our thoughts.
“The salience network plays a significant role in detecting and filtering important — or salient — information. It’s called salience because it helps us to pick up on salient information in the environment or internally. Interestingly, the default mode and the executive control networks don’t typically work together — when one network is activated, the other tends to be deactivated. One thing that we think the salience network might be doing is switching between an idea-generation mode, which is more of a default process, and the idea-evaluation mode, which is more of a control way of thinking. […] More creative people tended to have more network connections.”
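The switching behavior described above can be caricatured in code. This is purely my own toy analogy (the class and method names are invented, not from the research), showing the salience network as a switch between two mutually exclusive modes: idea generation (default mode network) and idea evaluation (executive control network).

```python
# Toy analogy only, not a neuroscientific model: the salience network as a
# switch between two mutually exclusive brain networks.

class CreativeBrain:
    def __init__(self):
        # At rest, the default mode network (mind-wandering, idea
        # generation) is active.
        self.active = "default_mode"

    def salience_switch(self, salient_input: bool) -> str:
        # Salient information triggers a switch to focused evaluation
        # (executive control); its absence lets the mind wander back to
        # generation. Only one network is active at a time.
        self.active = "executive_control" if salient_input else "default_mode"
        return self.active

brain = CreativeBrain()
print(brain.salience_switch(True))   # prints executive_control
print(brain.salience_switch(False))  # prints default_mode
```

The mutual exclusivity is what makes the mediating switch necessary; the finding that more creative people have more connections between networks suggests the real picture is richer than this binary toggle.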
The articles cover the following:
- Mapping AI use cases to domains of social good
- AI capabilities that can be used for social good
- Overcoming bottlenecks, especially around data and talent
- Risks to be managed
- Scaling up the use of AI for social good
The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers. Many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, "Do split-brain patients have two minds?" Previous topics addressed animal consciousness, hierarchical predictive coding and perception, AI 'machinery,' AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs adult learning, and brain-mapping initiatives.
A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.
by Kanta Dihal
This article was originally published at Aeon and has been republished under Creative Commons.
Cassandra woke up to the rays of the sun streaming through the slats on her blinds, cascading over her naked chest. She stretched, her breasts lifting with her arms as she greeted the sun. She rolled out of bed and put on a shirt, her nipples prominently showing through the thin fabric. She breasted boobily to the stairs, and titted downwards.
This particular hyperbolic gem has been doing the rounds on Tumblr for a while. It resurfaced in April 2018, in response to a viral Twitter challenge posed by the US podcaster Whitney Reynolds: women, describe yourself the way a male writer would.
The dare hit a sweet spot. Many could summon up passages from books containing terrible, sexualised descriptions of women. Some of us recalled Haruki Murakami, whose every novel can be summarised as: ‘Protagonist is an ordinary man, except lots of really beautiful women want to sleep with him.’ Others remembered J M Coetzee, and his variations on the plot: ‘Tenured male professor in English literature sleeps with beautiful female undergraduate.’ It was a way for us to joke about the fact that so much great literature was written by men who could express perfectly detailed visual descriptions of the female body, and yet possessed such an impoverished understanding of the female mind.
This is why the philosophical project of trying to map the contours of other minds needs a reality check. If other humans are beyond our comprehension, what hope is there for understanding the experience of animals, artificial intelligence or aliens?
I am a literature scholar. Over thousands of years of literary history, authors have tried and failed to convey an understanding of Others (with a capital ‘O’). Writing fiction is an exercise that stretches an author’s imagination to its limits. And fiction shows us, again and again, that our capacity to imagine other minds is extremely limited.
It took feminism and postcolonialism to point out that writers were systematically misrepresenting characters who weren’t like them. Male authors, it seems, still struggle to present convincing female characters a lot of the time. The same problem surfaces again when writers try to introduce a figure with a different ethnicity to their own, and fail spectacularly.
I mean, ‘coffee-coloured skin’? Do I really need to find out how much milk you take in the morning to know the ethnicity you have in mind? Writers who keep banging on with food metaphors to describe darker pigmentation show that they don’t appreciate what it’s like to inhabit such skin, nor to have such metaphors applied to it.
Conversely, we recently learnt that some publishers rejected the Korean-American author Leonard Chang’s novel The Lockpicker (2017) – for failing to cater to white readers’ lack of understanding of Korean-Americans. Chang gave ‘none of the details that separate Koreans and Korean-Americans from the rest of us’, one publisher’s letter said. ‘For example, in the scene when she looks into the mirror, you don’t show how she sees her slanted eyes …’ Any failure to understand a nonwhite character, it seems, was the fault of the nonwhite author.
Fiction shows us that nonhuman minds are equally beyond our grasp. Science fiction provides a massive range of the most fanciful depictions of interstellar space travel and communication – but anthropomorphism is rife. Extraterrestrial intelligent life is imagined as Little Green Men (or Little Yellow or Red Men when the author wants to make a particularly crude point about 20th-century geopolitics). Thus alien minds have been subject to the same projections and assumptions that authors have applied to human characters, when they fundamentally differ from the authors themselves.
For instance, let’s look at a meeting of human minds and alien minds. The Chinese science fiction author Liu Cixin is best known for his trilogy starting with The Three-Body Problem (2008). It appeared in English in 2014 and, in that edition, each book has footnotes – because there are some concepts that are simply not translatable from Chinese into English, and English readers need these footnotes to understand what motivates the characters. But there are also aliens in this trilogy. From a different solar system. Yet their motivations don’t need footnoting in translation.
Splendid as the trilogy is, I find that very curious. There is a linguistic-cultural barrier that prevents an understanding of the novel itself, on this planet. Imagine how many footnotes we’d need to really grapple with the motivations of extraterrestrial minds.
Our imaginings of artificial intelligence are similarly dominated by anthropomorphic fantasies. The most common depiction of AI conflates it with robots. AIs are metal men. And it doesn’t matter whether the press is reporting on swarm robots invented in Bristol or a report produced by the House of Lords: the press shall plaster their coverage with Terminator imagery. Unless the men imagining these intelligent robots want to have sex with them, in which case they’re metal women with boobily breasting metal cleavage – a trend spanning the filmic arts from Fritz Lang’s Metropolis (1927) to the contemporary TV series Westworld (2016-). The way that we imagine nonhumans in fiction reflects how little we, as humans, really get each other.
All this supports the idea that embodiment is central to the way we understand one another. The ridiculous situations in which authors miss the mark stem from the difference between the author’s own body and that of the character. It’s hard to imagine what it’s like to be someone else if we can’t feel it. So, much as I enjoyed seeing a woman in high heels outrun a T-Rex in Jurassic World (2015), I knew that the person who came up with that scene clearly has no conception of what it’s like to inhabit a female body, be it human or Tyrannosaurus.
Because stories can teach compassion and empathy, some people argue that we should let AIs read fiction in order to help them understand humans. But I disagree with the idea that compassion and empathy are based on a deep insight into other minds. Sure, some fiction attempts to get us to understand one another. But we don’t need any more than a glimpse of what it’s like to be someone else in order to empathise with them – and, hopefully, to not want to kill and destroy them.
As the US philosopher Thomas Nagel claimed in 1974, a human can’t know what it is like to be a bat, because they are fundamentally alien creatures: their sensory apparatus and their movements are utterly different from ours. But we can imagine ‘segments’, as Nagel wrote. This means that, despite our lack of understanding of bat minds, we can find ways to keep a bat from harm, or even nurse and raise an orphaned baby bat, as cute videos on the internet will show you.
The problem is that sometimes we don't realise that this segment is just a glimpse of something bigger. We don't realise until a woman, a person of colour, or a dinosaur finds a way to point out the limits of our imagination, and the limits of our understanding. As long as other human minds are beyond our understanding, nonhuman ones certainly are, too.
This article was originally published at Aeon and has been republished under Creative Commons.
There are some open source and free textbooks online that are good background resources for our group. British Columbia's BCcampus OpenEd is a good entry point for discovering free digital textbooks written by higher education faculty. For example, the Social Psychology title touches on several topics we often hit on, such as in-/out-group dynamics. Another good source is OpenStax.
If you find other good resources, especially at low or no cost, please share their links.
An improvement to the Neural Simulation Tool (NEST) algorithm, the primary tool of the Human Brain Project, expanded the scope of brain neural data management (for simulations) from the current 1% of discrete neurons (about the number in the cerebellum) to 10%. As supercomputing capacity increases, the NEST algorithm should scale to store 100% of BCI-derived or simulated neural data in the near term. The algorithm achieves its massive efficiency boost by eliminating the need to explicitly store as much data about each neuron's state.
Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
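The memory payoff of exploiting sparsity is easy to illustrate with back-of-the-envelope arithmetic. The sketch below is illustrative only (it is not NEST's actual data structure, and the neuron and synapse counts are hypothetical): because each neuron has a bounded number of incoming connections, per-neuron adjacency lists grow linearly with network size, while a dense connectivity matrix grows quadratically.

```python
# Illustrative sketch (not NEST's real implementation): why sparse storage
# matters at brain scale. With n neurons and k incoming synapses per neuron,
# a dense n x n connectivity matrix wastes memory once k << n.

def dense_entries(n: int) -> int:
    """Entries in a dense n x n connectivity matrix."""
    return n * n

def sparse_entries(n: int, k: int) -> int:
    """Entries in per-neuron adjacency lists, k incoming synapses each."""
    return n * k

# Hypothetical numbers: 10 million neurons, 10,000 incoming synapses each.
n, k = 10_000_000, 10_000
print(dense_entries(n) // sparse_entries(n, k))  # prints 1000
```

Since k stays roughly fixed while n grows toward brain scale, the dense/sparse ratio (n/k) keeps widening, which is why simulation software targeting the full brain must exploit sparsity rather than merely buy bigger machines.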
During our next discussion meeting, we'll explore the status, future potential, and human implications of neuroprostheses, particularly brain-computer interfaces. If you are local to Albuquerque, check our Meetup announcement to join or RSVP. The announcement text follows.
What are neuroprostheses? How are they used now and what may the future hold for technology-enhanced sensation, motor control, communications, cognition, and other human processes?
Resources (please review before the meeting)
• New Brain-Computer Interface Technology (video, 18 m)
• Imagining the Future: The Transformation of Humanity (video, 19 m)
• The Berlin Brain-Computer Interface: Progress Beyond Communication and Control (research article, access with a free Frontiers account)
• The Elephant in the Mirror: Bridging the Brain’s Explanatory Gap of Consciousness (research article)
Other resources (recommend your own in the comments!)
• DARPA implant (planned) with up to 1 million neural connections (short article)
Extra Challenge: As you review the resources, think of possible implications from the perspectives of the other topics we’ve recently discussed:
• the dilemma of so much of human opinion and action deriving from non-conscious sources
• questions surrounding what it means to ‘be human’ and what values we place on our notions of humanness (e.g., individuality and social participation, privacy, ‘self-determination’ (or the illusion thereof), organic versus technologically enhanced cognition, etc.)