I know, to free will or not to free will, that is the hackneyed question debated in philosophical circles since we learned how to talk. But here’s a cognitive neuroscientist’s research on “how neuronal code underlies top-down mental causation.” It’s a long video, over 2 hours, and I have yet to complete it. Here is Peter Tse’s CV. Here is his book on the topic. Here is a good summary of Tse’s work on the topic.
I’m integrating a file-sharing capability into this site. For both files and posts, I would like to implement a hierarchy of topical categories. A structured set of terms (a taxonomy) will make it easier for us to categorize new content and find existing content. If you are aware of existing taxonomies we might borrow from, please provide links in comments to this post. I propose we start with a relatively high-level taxonomy of categories (limited to two or three levels) and use less-formal tags for highly specific and infrequently used labels. If we need to amend or grow the taxonomy of categories later, we can easily do so.
In case you were not aware, web content platforms like the one this site is built on (WordPress) use two methods for labeling and organizing content items.
The more formal method is a hierarchy of pre-determined categories. When creating posts or uploading files or media, authors select relevant categories from a list. A category hierarchy might include the following, for example:
- genetic engineering
- natural selection
  - group selection
The content author can choose any or all of the relevant categories but would usually select at least the lowest (most deeply nested) category in the hierarchy. Once content is associated with categories, search tools and grouped, sorted, and filtered views can make topical content much easier to find.
The informal method is tagging (also called folksonomy). Authors associate terms with their content in a more ad hoc way. Tags usually display under a web article’s title and in interactive tag clouds like the one on the right side of our site’s pages.
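The difference between the two methods can be sketched in a few lines of code. This is a minimal illustration, not this site’s actual schema: the category names, the `Post` structure, and the tag values are all invented for the example. The key point is that categories form a tree that can be walked, while tags are just a flat set.

```python
from dataclasses import dataclass, field

# Formal method: a pre-determined category hierarchy (a tree).
@dataclass
class Category:
    name: str
    children: list = field(default_factory=list)

    def path_to(self, target, trail=()):
        """Return the path from this node down to `target`, or None."""
        trail = trail + (self.name,)
        if self.name == target:
            return trail
        for child in self.children:
            found = child.path_to(target, trail)
            if found:
                return found
        return None

# An illustrative two-level taxonomy (names are placeholders)
taxonomy = Category("biology", [
    Category("natural selection", [Category("group selection")]),
    Category("genetic engineering"),
])

# Informal method: ad hoc tags (folksonomy) attached to a content item
@dataclass
class Post:
    title: str
    categories: list  # chosen from the taxonomy
    tags: set         # free-form labels

post = Post(
    title="Group selection debate",
    categories=["natural selection", "group selection"],
    tags={"multilevel selection", "altruism"},
)

# Finding content by category can exploit the hierarchy's structure;
# tags support only flat matching.
print(taxonomy.path_to("group selection"))
# ('biology', 'natural selection', 'group selection')
```

Because the taxonomy is a tree, a search for “natural selection” can automatically include posts filed under “group selection”; a tag search has no such structure to exploit.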
Some taxonomies we could consider:
- Wikipedia Neuroscience topics
- Wikipedia Artificial Intelligence topics
- Wikipedia Evolutionary Biology topics
- Wikipedia Psychology topics
- Brain Science Podcast categories
- Society for Science & The Public topics
Thanks in advance for your suggestions.
Frontiers in Human Neuroscience, 2017; 11: 126. Some excerpts:
“In this article we suggest the idea that the processing of self-referential stimuli in cortical midline structures (CMS) may represent an important part of the conscious self, which may be supplemented by an unconscious part of the self that has been called an ‘embodied mind’ (Varela et al., 1991), which relies on other brain structures.”
“When we describe the self as structure and organization we understand it as a system. But the concept of the embodied self states that the self or cognition is not an activity of the mind alone, but is distributed across the entire situation including mind, body, environment (e.g., Beer, 1995), thereby pointing to an embodied and situated self.”
“Furthermore, we argue that through embodiment the self is also embedded in the environment. This means that our self is not isolated but intrinsically social. […] Hence, the self should not be understood as an entity located somewhere in the brain, isolated from both the body and the environment. In contrast, the self can be seen as a brain-based neurosocial structure and organization, always linked to the environment (or the social sphere) via embodiment and embeddedness.”
It occurred to me that memes are a lot like frames as Lakoff describes them. Lakoff has done extensive cognitive scientific work on schemas, metaphors and frames. Check out this lengthy article in Frontiers in Human Neuroscience, 2014; 8: 958, “Mapping the brain’s metaphor circuitry.” Even though they don’t relate this to the concept of memes, there are some striking similarities. E.g.:
“Reddy had found that the abstract concepts of communication and ideas are understood via a conceptual metaphor: Ideas Are Objects; Language Is a Container for Idea-Objects; Communication Is Sending Idea-Objects in Language-Containers.”
Here is Daniel Dennett’s Google talk on his new book, From Bacteria to Bach and Back: The Evolution of Minds. The blurb:
“How did we come to have minds? For centuries, this question has intrigued psychologists, physicists, poets, and philosophers, who have wondered how the human mind developed its unrivaled ability to create, imagine, and explain. Disciples of Darwin have long aspired to explain how consciousness, language, and culture could have appeared through natural selection, blazing promising trails that tend, however, to end in confusion and controversy. Even though our understanding of the inner workings of proteins, neurons, and DNA is deeper than ever before, the matter of how our minds came to be has largely remained a mystery. That is now changing, says Daniel C. Dennett. In From Bacteria to Bach and Back, his most comprehensive exploration of evolutionary thinking yet, he builds on ideas from computer science and biology to show how a comprehending mind could in fact have arisen from a mindless process of natural selection. Part philosophical whodunit, part bold scientific conjecture, this landmark work enlarges themes that have sustained Dennett’s legendary career at the forefront of philosophical thought.”
Several of us met on Labor Day with the goal of identifying topics for at least five future monthly meetings. (Thanks, Dave N, for hosting!) Being the overachievers we are, we pushed beyond the goal. Following are the resulting topics, which will each have its own article on this site where we can begin organizing references for the discussion:
- sex-related influences on emotional memory
- gross and subtle brain differences (e.g., “walls of the third ventricle – sexual nuclei”)
- “Are there gender-based brain differences that influence differences in perceptions and experience?”
- epigenetic factors (may need an overview of epigenetics)
- embodied cognition
- computational grounded cognition (possibly the overview and lead-in topic)
- neuro-reductionist theory vs. enacted theory of mind
- “Could embodied cognition influence brain differences?” (Whoever suggested this, please clarify.)
- brain-gut connection (relates to embodied cognition, but can stand on its own as a topic)
- behavioral priming (one or multiple discussions)
- neuroscience of empathy – effects on the brain, including on neuroplasticity
- comparative effects of various meditative practices on the brain
- comparative effects of various psychedelics on the brain
- effects of childhood poverty on the brain
If I missed anything, please edit the list (I used HTML in the ‘Text’ view to get sub-bullets). If you’re worried about the formatting, you can email your edits to email@example.com and Mark will post your changes.
An article in Wired cites two studies that show carb-free diets improved the memories and extended the lives of lab mice. While there are many DIY human experiments underway, scientific trials are needed to clarify the effects of ketogenic diets on people.
From this article, which first describes progress in grounded cognition theories and then discusses how they should be applied to robotics and artificial intelligence. Some excerpts:
“Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition.”
“According to grounded theories, cognition is supported by modal representations and associated mechanisms for their processing (e.g., situated simulations), rather than amodal representations, transductions, and abstract rule systems. Recent computational models of sensory processing can be used to study the grounding of internal representations in sensorimotor modalities; for example, generative models show that useful representations can self-organize through unsupervised learning (Hinton, 2007). However, modalities are usually not isolated but form integrated and multimodal assemblies, plausibly in association areas or ‘convergence zones’ (Damasio, 1989; Simmons and Barsalou, 2003).”
“An important challenge is explaining how abstract concepts and symbolic capabilities can be constructed from grounded categorical representations, situated simulations and embodied processes. It has been suggested that abstract concepts could be based principally on interoceptive, meta-cognitive and affective states (Barsalou, 2008) and that selective attention and categorical memory integration are essential for creating a symbolic system (Barsalou, 2003).”
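The “convergence zones” idea in the excerpts above can be caricatured in a few lines of code. This is a toy Hebbian sketch of my own, not a model from the article: the binary patterns and the outer-product association rule are illustrative assumptions. It shows the core mechanism the excerpt describes, namely a shared associative structure binding two modal representations so that activating one reinstates the other.

```python
import numpy as np

# Toy "convergence zone": an associative matrix binding a representation
# in one modality (visual) to one in another (motor), so that presenting
# the visual pattern alone reinstates the motor pattern.
visual = np.array([1.0, 0.0, 1.0, 0.0])  # active visual units
motor = np.array([0.0, 1.0, 1.0, 0.0])   # associated motor units

# Hebbian binding: units that fire together are wired together
W = np.outer(motor, visual)

# "Situated simulation": the visual input alone recalls the bound motor
# pattern (thresholded at zero)
recalled_motor = (W @ visual > 0).astype(float)

print(recalled_motor)  # [0. 1. 1. 0.] — matches the stored motor pattern
```

Nothing here is amodal: the “representation” is just a coupling between two modal patterns, which is roughly the intuition behind grounding cognition in multimodal assemblies.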
Here is Thompson’s talk on the topic. As a dancer and martial artist, as well as a student of embodied cognition, I find this talk particularly relevant. I’ve been saying since forever that these arts are meditative disciplines in themselves, and that one doesn’t necessarily need the sitting-still sort of meditation to achieve meta-cognition.
Having done both kinds, my anecdotal report is that both sitting and moving meditation induce meta-cognition. But there are as yet no studies on movement meditation to confirm it. That’s part of what Thompson is complaining about, and he encourages scientific meditation researchers to start investigating.
Around 14:20 he says that research has shown that perception differs when one initiates movement versus when one is passively moved. He did not directly compare perception during movement to perception while completely still, so I’m not sure about those differences.
At 18:20 he reiterates a point made elsewhere, that individual meta-cognition is an internalized form of social cognition, a point I used in the paper on collective enlightenment. He then brings in Vygotsky’s work along this line, which differs from Piaget’s. In our paper I also brought in Habermas’ use of Mead in this regard. For reference, also see Edwards’ 3-part series at Integral World on the depth of the exteriors.
At 23:40 is an important point to my initial inquiry about comparing sitting and moving meditation: “If two cognitive systems include different cognitive practices, the two systems can have different cognitive properties, even when the neural network activations are the same.”
At 30:20 Thompson said that attention has no specific location in the brain but is the whole embodied subject. Attention isn’t a particular process or even a collection of processes, but a mode in which processes are related. I’m reminded of this discussion on amodal and supramodal processing, although that is limited to the brain and not the brain/mind/body/environment enaction Thompson discusses.
Finishing the talk, he reiterates the need to extend scientific meditation research to the movement arts. From the above he seems to suggest that movement meditation, while perhaps activating the same brain areas, means something very different via its enaction than sitting meditation does, so the two forms are not the same meta-cognitive experience.
Having done both kinds, I find moving meditation activates and refines the spatial-temporal bodily image schema in a way that sitting meditation does not. In so doing, it literally gives multiple views of objects within an immediate field of attention, thereby opening to multiple points of view rather than the fixed point of reference in sitting.
However, the attention in sitting meditation, while opening to whatever arises, be it a sound or a thought, or even while focusing on one object, is still within a fixed center or perspective: the notion of a bare attention that theoretically has no center or ego reference. But that rests on the assumption that bare attention itself is beyond reference or perspective, while moving meditation’s sort of bare attention makes no such assumption, given its ever-shifting physical perspective. It seems that sitting meditation is literally fixated, while moving meditation is multi-perspectival with no fixed center.
Just some biased ruminations that are sure to fire up the sitters! Have at it.
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
“Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.”
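The amplification effect described above can be illustrated with a toy example. The numbers are invented, not the study’s data, and the “model” is deliberately naive: a predictor that always outputs the most frequent label for a context turns a 2-to-1 association in the training set into a 100% association in its output.

```python
from collections import Counter

# Invented training data: labels attached to photos of cooking scenes.
# 2/3 show women, 1/3 show men — a biased but not absolute association.
training_labels = ["woman"] * 40 + ["man"] * 20

counts = Counter(training_labels)
train_share = counts["woman"] / len(training_labels)  # ~0.67

# A naive model: for any cooking scene, output the most common
# training label for that context. This maximizes per-example accuracy
# on the biased data.
def predict(scene):
    return counts.most_common(1)[0][0]

predictions = [predict(s) for s in range(100)]
pred_share = predictions.count("woman") / len(predictions)

print(f"training association: {train_share:.0%}")  # 67%
print(f"model's association: {pred_share:.0%}")    # 100%
```

Real image-labeling models are far more sophisticated, but the underlying pressure is the same: when a correlation in the training data helps accuracy, the model leans on it, and its outputs can exhibit the bias more strongly than the data does.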