The articles cover the following:
- Mapping AI use cases to domains of social good
- AI capabilities that can be used for social good
- Overcoming bottlenecks, especially around data and talent
- Risks to be managed
- Scaling up the use of AI for social good
The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers. Many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, “Do split-brain patients have two minds?” Previous topics addressed animal consciousness, hierarchical predictive coding and perception, AI ‘machinery,’ AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs. adult learning, and brain-mapping initiatives.
A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.
by Kanta Dihal
This article was originally published at Aeon and has been republished under Creative Commons.
Cassandra woke up to the rays of the sun streaming through the slats on her blinds, cascading over her naked chest. She stretched, her breasts lifting with her arms as she greeted the sun. She rolled out of bed and put on a shirt, her nipples prominently showing through the thin fabric. She breasted boobily to the stairs, and titted downwards.
This particular hyperbolic gem has been doing the rounds on Tumblr for a while. It resurfaced in April 2018, in response to a viral Twitter challenge posed by the US podcaster Whitney Reynolds: women, describe yourself the way a male writer would.
The dare hit a sweet spot. Many could summon up passages from books containing terrible, sexualised descriptions of women. Some of us recalled Haruki Murakami, whose every novel can be summarised as: ‘Protagonist is an ordinary man, except lots of really beautiful women want to sleep with him.’ Others remembered J M Coetzee, and his variations on the plot: ‘Tenured male professor in English literature sleeps with beautiful female undergraduate.’ It was a way for us to joke about the fact that so much great literature was written by men who could express perfectly detailed visual descriptions of the female body, and yet possessed such an impoverished understanding of the female mind.
This is why the philosophical project of trying to map the contours of other minds needs a reality check. If other humans are beyond our comprehension, what hope is there for understanding the experience of animals, artificial intelligence or aliens?
I am a literature scholar. Over thousands of years of literary history, authors have tried and failed to convey an understanding of Others (with a capital ‘O’). Writing fiction is an exercise that stretches an author’s imagination to its limits. And fiction shows us, again and again, that our capacity to imagine other minds is extremely limited.
It took feminism and postcolonialism to point out that writers were systematically misrepresenting characters who weren’t like them. Male authors, it seems, still struggle to present convincing female characters a lot of the time. The same problem surfaces again when writers try to introduce a figure with a different ethnicity to their own, and fail spectacularly.
I mean, ‘coffee-coloured skin’? Do I really need to find out how much milk you take in the morning to know the ethnicity you have in mind? Writers who keep banging on with food metaphors to describe darker pigmentation show that they don’t appreciate what it’s like to inhabit such skin, nor to have such metaphors applied to it.
Conversely, we recently learnt that some publishers rejected the Korean-American author Leonard Chang’s novel The Lockpicker (2017) – for failing to cater to white readers’ lack of understanding of Korean-Americans. Chang gave ‘none of the details that separate Koreans and Korean-Americans from the rest of us’, one publisher’s letter said. ‘For example, in the scene when she looks into the mirror, you don’t show how she sees her slanted eyes …’ Any failure to understand a nonwhite character, it seems, was the fault of the nonwhite author.
Fiction shows us that nonhuman minds are equally beyond our grasp. Science fiction provides a massive range of the most fanciful depictions of interstellar space travel and communication – but anthropomorphism is rife. Extraterrestrial intelligent life is imagined as Little Green Men (or Little Yellow or Red Men when the author wants to make a particularly crude point about 20th-century geopolitics). Thus alien minds have been subject to the same projections and assumptions that authors have applied to human characters, when they fundamentally differ from the authors themselves.
For instance, let’s look at a meeting of human minds and alien minds. The Chinese science fiction author Liu Cixin is best known for his trilogy starting with The Three-Body Problem (2008). It appeared in English in 2014 and, in that edition, each book has footnotes – because there are some concepts that are simply not translatable from Chinese into English, and English readers need these footnotes to understand what motivates the characters. But there are also aliens in this trilogy. From a different solar system. Yet their motivations don’t need footnoting in translation.
Splendid as the trilogy is, I find that very curious. There is a linguistic-cultural barrier that prevents an understanding of the novel itself, on this planet. Imagine how many footnotes we’d need to really grapple with the motivations of extraterrestrial minds.
Our imaginings of artificial intelligence are similarly dominated by anthropomorphic fantasies. The most common depiction of AI conflates it with robots. AIs are metal men. And it doesn’t matter whether the press is reporting on swarm robots invented in Bristol or a report produced by the House of Lords: the press shall plaster their coverage with Terminator imagery. Unless the men imagining these intelligent robots want to have sex with them, in which case they’re metal women with boobily breasting metal cleavage – a trend spanning the filmic arts from Fritz Lang’s Metropolis (1927) to the contemporary TV series Westworld (2016-). The way that we imagine nonhumans in fiction reflects how little we, as humans, really get each other.
All this supports the idea that embodiment is central to the way we understand one another. The ridiculous situations in which authors miss the mark stem from the difference between the author’s own body and that of the character. It’s hard to imagine what it’s like to be someone else if we can’t feel it. So, much as I enjoyed seeing a woman in high heels outrun a T-Rex in Jurassic World (2015), I knew that the person who came up with that scene clearly has no conception of what it’s like to inhabit a female body, be it human or Tyrannosaurus.
Because stories can teach compassion and empathy, some people argue that we should let AIs read fiction in order to help them understand humans. But I disagree with the idea that compassion and empathy are based on a deep insight into other minds. Sure, some fiction attempts to get us to understand one another. But we don’t need any more than a glimpse of what it’s like to be someone else in order to empathise with them – and, hopefully, to not want to kill and destroy them.
As the US philosopher Thomas Nagel claimed in 1974, a human can’t know what it is like to be a bat, because they are fundamentally alien creatures: their sensory apparatus and their movements are utterly different from ours. But we can imagine ‘segments’, as Nagel wrote. This means that, despite our lack of understanding of bat minds, we can find ways to keep a bat from harm, or even nurse and raise an orphaned baby bat, as cute videos on the internet will show you.
The problem is that sometimes we don’t realise that this segment is just a glimpse of something bigger. We don’t realise until a woman, a person of colour, or a dinosaur finds a way to point out the limits of our imagination, and the limits of our understanding. As long as other human minds are beyond our understanding, nonhuman ones certainly are, too.
This article was originally published at Aeon and has been republished under Creative Commons.
There are some open-source and free textbooks online that are good background resources for our group. British Columbia’s BCcampus OpenEd is a good entry point for discovering free digital textbooks written by higher-education faculty. For example, the Social Psychology title touches on several topics we often hit on, such as in-/out-group dynamics. Another good source is OpenStax.
If you find other good resources, especially at low or no cost, please share their links.
An improvement to the Neural Simulation Tool (NEST), the primary simulation tool of the Human Brain Project, expanded the scale of neuron-resolution brain simulations from the current 1% of neurons (about the number in the cerebellum) to 10%. As supercomputing capacity increases, the NEST algorithm should scale to store 100% of BCI-derived or simulated neural data in the near term. The algorithm achieves its massive efficiency boost by eliminating the need for every compute node to explicitly store data about every neuron in the network.
Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
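The memory argument behind the two-tier approach is easy to see with back-of-the-envelope numbers. The sketch below (my own toy illustration, not NEST’s actual data structures) contrasts a dense connection table, where storage grows with the square of the network size, against storing only each neuron’s realized targets, which is what sparsity makes possible at brain scale:

```python
# Toy illustration of why sparse connection storage matters:
# at brain scale a neuron has ~10^4 targets out of ~10^10 neurons,
# so a dense table is overwhelmingly empty.

n_neurons = 100_000   # scaled-down network for illustration
k_targets = 100       # fixed out-degree per neuron (sparse connectivity)

# Dense representation: one byte per possible connection (n x n table).
dense_bytes = n_neurons * n_neurons

# Sparse representation: only the k realized target indices per neuron,
# stored as 4-byte integers.
sparse_bytes = n_neurons * k_targets * 4

print(f"dense:  {dense_bytes / 1e9:.1f} GB")    # 10.0 GB
print(f"sparse: {sparse_bytes / 1e6:.1f} MB")   # 40.0 MB
print(f"savings: {dense_bytes // sparse_bytes}x")
```

The gap widens quadratically with network size, which is why dense bookkeeping per compute node becomes the binding constraint long before raw compute does.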
During our next discussion meeting, we’ll explore the status, future potential, and human implications of neuroprostheses, particularly brain-computer interfaces. If you are local to Albuquerque, check our Meetup announcement to join or RSVP. The announcement text follows.
What are neuroprostheses? How are they used now and what may the future hold for technology-enhanced sensation, motor control, communications, cognition, and other human processes?
Resources (please review before the meeting)
• New Brain-Computer Interface Technology (video, 18 m)
• Imagining the Future: The Transformation of Humanity (video, 19 m)
• The Berlin Brain-Computer Interface: Progress Beyond Communication and Control (research article, access with a free Frontiers account)
• The Elephant in the Mirror: Bridging the Brain’s Explanatory Gap of Consciousness (research article)
Other resources (recommend your own in the comments!)
• DARPA implant (planned) with up to 1 million neural connections (short article)
Extra Challenge: As you review the resources, think of possible implications from the perspectives of the other topics we’ve recently discussed:
• the dilemma of so much of human opinion and action deriving from non-conscious sources
• questions surrounding what it means to ‘be human’ and what values we place on our notions of humanness (e.g., individuality and social participation, privacy, ‘self-determination’ (or the illusion thereof), organic versus technologically enhanced cognition, etc.)
I’ve found some thought-provoking answers on the Q&A social media site, Quora. Follow the link to a perceptive and helpful answer to, “Can a person be able to objectively identify exactly when and how their thinking processes are being affected by cognitive biases?”
The author provides some practical (if exhausting) recommendations that, if followed even in part by a third to half of people (my guesstimate), could well collapse the adversarial culture in our country.
Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, introduces a framework for defining types of life based on the degree of design control that sensing, self-replicating entities have over their own ‘hardware’ (physical forms) and ‘software’ (“all the algorithms and knowledge that you use to process the information from your senses and decide what to do”).
It’s a relatively non-academic read and well worth the effort for anyone interested in the potential to design the next major forms of ‘Life’ to transcend many of the physical and cognitive constraints that have us now on the brink of self-destruction. Tegmark’s forecast is optimistic.
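Tegmark’s three stages can be summarised in a few lines of code. The sketch below is my own toy rendering of the scheme, not anything from the book: a life form’s stage depends on whether it can redesign its own ‘software’ and ‘hardware’.

```python
from dataclasses import dataclass

# Toy rendering of Tegmark's Life 1.0 / 2.0 / 3.0 scheme.
# Stage is determined by the degree of design control an entity
# has over its own software and hardware.

@dataclass
class LifeForm:
    name: str
    designs_software: bool  # can it redesign its learned algorithms/knowledge?
    designs_hardware: bool  # can it redesign its physical substrate?

    @property
    def stage(self) -> str:
        if self.designs_software and self.designs_hardware:
            return "Life 3.0"  # designs both (hypothetical future AI)
        if self.designs_software:
            return "Life 2.0"  # learns/designs its software (humans)
        return "Life 1.0"      # both fixed by evolution (e.g., bacteria)

for lf in (LifeForm("bacterium", False, False),
           LifeForm("human", True, False),
           LifeForm("future AI", True, True)):
    print(lf.name, "->", lf.stage)
```

The interesting cases in the book sit at the boundaries, e.g., humans edging toward Life 2.1 via prosthetics and implants.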
If you’ve read the book, please share your observations and questions in the comments below this article. (If you are not a member and would like to be able to comment, send your preferred email address to email@example.com. Please provide a concise description of your interests relevant to our site. Links to relevant books and articles will be accepted. No other advertising or unrelated comments will be accepted and submitters may be banned.)
Edward has posted some great thoughts and resources on embodied cognition (EC). I stumbled on an interesting line of thinking within the EC literature. I find contextualist, connectivist approaches compelling in their ability to address complex systems such as life and (possibly) consciousness. Wild systems theory (WST) “conceptualizes organisms as multi-scale self-sustaining embodiments of the phylogenetic, cultural, social, and developmental contexts in which they emerged and in which they sustain themselves. Such self-sustaining embodiments of context are naturally and necessarily about the multi-scale contexts they embody. As a result, meaning (i.e., content) is constitutive of what they are. This approach to content overcomes the computationalist need for representation while simultaneously satisfying the ecological penchant for multi-scale contingent interactions.”1 While I find WST fascinating, I’m unclear on whether it has been or can be assessed empirically. What do you think? Is WST shackled to philosophy?
Can one person know another’s mental state? Physicalists focus on how each of us develops a theory of mind (TOM) about each of the other people we observe. TOM is a theory because it is based on assumptions we make about others’ mental states by observing their behaviors. It is not based on any direct reading or measurement of internal processes. In its extreme, the physicalist view asserts that subjective experience and consciousness itself are merely emergent epiphenomena and not fundamentally real.
EC theorists often describe emergent or epiphenomenal subjective properties such as emotions and conscious experiences “in terms of complex, multi-scale, causal dynamics among objective phenomena such as neurons, brains, bodies, and worlds.” Emotions, experiences, and meanings are seen to emerge from, be caused by or identical with, or be informational aspects of objective phenomena. Further, many EC proponents regard subjective properties as “logically unnecessary to the scientific description.” Some EC theorists conceive of the non-epiphenomenal reality of experience in a complex-systems framework and define experience in terms of relational properties. In Gibson’s (1966) concept of affordances, organisms perceive behavioral possibilities in other organisms and in their environment. An affordance is a perceived relationship (often in terms of utility), such as how an organism might use something, say a potential mate, prey/food, or a tool. Meaning arises from “bi-directional aboutness” between an organism and what it perceives or interacts with. Meaning is about relationship.
(A very good, easy read on meaning arising from relationships is the book Learning How to Learn, by Novak and Gowin. In short, it’s the connecting/relating words such as is, contains, produces, consumes, etc., that enable meaningful concepts to be created in minds via language that clarifies context.)
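The relational reading of affordances can be made concrete in a few lines. This is my own toy sketch, not Gibson’s formalism: graspability is neither in the object nor in the organism alone, but in the relation between them.

```python
from dataclasses import dataclass

@dataclass
class Organism:
    grip_span_cm: float  # how wide an object this organism can grasp

@dataclass
class Thing:
    width_cm: float

def affords_grasping(organism: Organism, thing: Thing) -> bool:
    # The affordance is a relation: the same object affords grasping
    # to one organism but not to another.
    return thing.width_cm <= organism.grip_span_cm

child = Organism(grip_span_cm=6.0)
adult = Organism(grip_span_cm=10.0)
mug = Thing(width_cm=8.0)

print(affords_grasping(child, mug))  # False: the mug is too wide for the child
print(affords_grasping(adult, mug))  # True: the same mug is graspable for the adult
```

Note that no property of the mug alone, nor of either hand alone, settles the question; only the pairing does, which is the sense in which the meaning is relational.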
Affordances and relationality at one level of organization and analysis carve out a non-epiphenomenal beachhead but do not banish epiphenomena from that or other levels. There’s a consideration of intrinsic, non-relational properties (perhaps mass) versus relational properties (such as weight). But again, level/scale of analysis matters (“mass emerges from a particle’s interaction with the Higgs field” and is thus relational after all) and some take this line of thinking to a logical end where there is no fundamental reality.
In WST, “all properties are constituted of and by their relations with context. As a result, all properties are inherently meaningful because they are naturally and necessarily about the contexts within which they persist. From this perspective, meaning is ubiquitous. In short, reality is inherently meaningful.”
1. Jordan, J. S. (2017). Wild Systems Theory: Overcoming the Computational-Ecological Divide via Self-Sustaining Systems. Available from: https://www.researchgate.net/publication/228570467_Wild_Systems_Theory_Overcoming_the_Computational-Ecological_Divide_via_Self-Sustaining_Systems [accessed Nov 9, 2017].
BMAI members repository copy (PDF): https://albuquirky.net/download/277/embodied-grounded-cognition/449/wild-systems-theory_bodies-are-meaning.pdf