BMAI friends. The following ramble is my first cut at making sense of the grave role racial (and other) bias is playing in the world today. This was prompted by comments I see daily from my family and friends on social media. Thinking about the great lack of self- and group-awareness many of the commenters display, I turned my scope inward. How do my own innate, evolved biases slant me to take my group’s and my own privileges for granted and make invalid assumptions about those I perceive (subconsciously or explicitly) to be ‘the other’? I put this forward to start a discussion and hope you will contribute your own insights and references. Feel free to post comments or even insert questions, comments, or new text directly into my text. Of course, you can create your own new posts as well. Thanks.
In preparation for the March meeting topic, Your Political Brain, please recommend any resources you have found particularly enlightening about why humans evolved political thinking. Also, please share references about how brain functions lead to political perceptions. I’m assuming political perceptions result from more fundamental cognitive orientations, and that those arise in part from one’s genetics and in part from environment (during development and afterward).
Let’s use the following description from Wikipedia:
Politics is the process of making decisions applying to all members of each group. More narrowly, it refers to achieving and exercising positions of governance— organized control over a human community, particularly a state. Furthermore, politics is the study or practice of the distribution of power and resources within a given community (this is usually a hierarchically organized population) as well as the interrelationship(s) between communities. (Wikipedia)
This description places political thinking in the realm of the brain’s/mind’s social processing.
Following are some candidate resources for our discussion preparation:
- The Republican Brain (video, 21:45 – Chris Mooney, Jonathan Haidt, Chris Hayes)
- Chris Hedges’ review of Haidt’s book, The Righteous Mind
- George Lakoff’s cognitive science perspective
- Brain differences between liberals and conservatives (magazine article)
- The origin of politics: an evolutionary theory of political behavior (academic article)
- Authoritarianism (Wikipedia)
Brain imaging research indicates some aspects of individual political orientation correlate significantly with the mass and activity of particular brain structures including the right amygdala and the insula. This correlation may derive in part from genetics, but is also influenced by environment and behavior.
“[T]here’s a critical nuance here. Schreiber thinks the current research suggests not only that having a particular brain influences your political views, but also that having a particular political view influences and changes your brain. The causal arrow seems likely to run in both directions—which would make sense in light of what we know about the plasticity of the brain. Simply by living our lives, we change our brains. Our political affiliations, and the lifestyles that go along with them, probably condition many such changes.”
Thanks to member Edward for recommending this article: http://www.motherjones.com/politics/2013/02/brain-difference-democrats-republicans
In a similar vein, Bob Altemeyer conducted and reported on some seminal social science research and theory on political dispositions. See http://home.cc.umanitoba.ca/~altemey/. Note the free book link on the left.
Good discussion that covered a lot of ground. I took away that none of us have signed on to be early adopters of brain augmentations, but some expect development of body and brain augmentations to continue and accelerate. We also considered the idea of bio-engineered and medical paths to significant life-span, health, and cognitive capacity improvements. I appreciated the ethical and value questions (Why pursue any of this? What would/must one give up to become transhuman? Will the health and lifespan enhancements be equally available to all? What could be the downsides of extremely extended lives?) Also, isn’t there considerable opportunity for smarter transhumans, along with AI tools, to vastly improve the lives of many people by finding ways to mitigate problems we’ve inherited (disease, etc.) and created (pollution, conflict, etc.)?
TED talk of possible interest:
Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to develop a very mission-specific AI system to help us figure out, using emerging genetic editing technologies (e.g., CRISPR), how best to tweak the (most likely species-typical) genes currently constraining our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. This is something that natural selection, our past and current psycho-eugenicist, will never do (it cannot), and something that our current genetic endowment will never allow cultural processes or social engineering approaches to accomplish on their own. Purpose-designed AI systems feeding off of growing databases of intra-genomic dynamics and gene-environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival, in a morally optimal way (i.e., fewest mistakes due to unexpected gene-gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side-effects) as well as in a maximally effective and efficient way. Come together, teams of AI scientists and geneticists! We need to grab our collective pan-cultural intrapsychic fate away from the dark hands of natural selection, and AI can probably help. END
Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. Its potential benefits are so desirable that humans will undoubtedly pursue AI with increasing determination and resources. Potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical.
Specialized (narrow and shallow-to-deep) AI, such as Siri, OK Google, Watson, and vehicle-driving systems, acquires pattern recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the items in the training data sets to include only information directly related to the goal. For example, a driving AI’s utility functions involve getting the vehicle to a destination while keeping the vehicle within various parameters (speed, staying within lane, complying with traffic signs and signals, avoiding collisions, etc.).
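To make the idea of a human-provided utility function concrete, here is a minimal illustrative sketch. The weights, field names, and thresholds are all hypothetical choices for this example, not taken from any real driving system; the point is only that the designers, not the AI, decide what counts as “good” behavior.

```python
# Toy utility function for a driving AI. Higher scores are better.
# All weights and state fields below are hypothetical, for illustration only.
def driving_utility(state):
    score = 0.0
    # Reward progress toward the destination (fraction from 0 to 1).
    score += 10.0 * state["progress_to_destination"]
    # Penalize exceeding the speed limit, in proportion to the excess.
    if state["speed"] > state["speed_limit"]:
        score -= 5.0 * (state["speed"] - state["speed_limit"])
    # Penalize drifting out of the lane.
    if not state["within_lane"]:
        score -= 20.0
    # Heavily penalize collision risk (estimated probability from 0 to 1).
    score -= 100.0 * state["collision_risk"]
    return score

# A compliant state should outscore a faster but riskier one.
safe = {"progress_to_destination": 0.8, "speed": 55, "speed_limit": 65,
        "within_lane": True, "collision_risk": 0.0}
risky = {"progress_to_destination": 0.9, "speed": 80, "speed_limit": 65,
         "within_lane": False, "collision_risk": 0.7}
assert driving_utility(safe) > driving_utility(risky)
```

The system’s job is then to choose actions that keep this score high; everything about what the score rewards and punishes was curated by humans in advance.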
Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
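The game-playing example above can be sketched with a classic reinforcement-learning loop. This is a deliberately tiny illustration (tabular Q-learning on a one-dimensional “game,” with made-up parameters), not the architecture of any actual AGI system, but it shows the key property described: the agent is given only the state and a score signal, and discovers a winning policy on its own.

```python
import random

random.seed(0)  # deterministic for reproducibility

# Tiny "game": cells 0..4 on a line; reaching cell 4 scores a point.
GOAL, N_STATES = 4, 5
ACTIONS = [-1, +1]                      # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # illustrative learning parameters

# Q-table: estimated value of taking each action in each state.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(300):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # environment transition
        r = 1.0 if s2 == GOAL else 0.0          # the only feedback: the score
        # Q-learning update: adjust the estimate toward the observed outcome.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action from each non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

No one told the agent that moving right is good; the value of the winning move propagates backward through the Q-table from the score alone, which is the same principle (at vastly larger scale, with deep networks replacing the table) behind score-maximizing game agents.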
Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon we will not be able to see across. The concerns arise from our uncertainty that superintelligent AIs will value us or our human objectives, or, if they do value us, that they will be able to translate that into actions that do not degrade our survival or quality of existence.
• Demis Hassabis on Google Deep Mind and AGI (video, 14:05, best content starts at 3:40)
• Google Deep Mind (Alpha Go) AGI (video, 13:44)
• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence
• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials
• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom
Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote more and deeper discussion. Those who want to attend but are not in the first 10 may elect to go on the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No”, which will allow the next person on the waiting list to attend. If the topic attracts a large wait list, we may schedule additional discussion.
Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.