Recall the anterior cingulate cortex’s (ACC) role in meditative states from the last post. This neuroscience article of the above name claims that “self-awareness is a pivotal component of conscious experience. It is correlated with a paralimbic network of medial prefrontal/anterior cingulate and medial parietal/posterior cingulate cortical ‘hubs’ and associated regions. Electromagnetic and transmitter manipulation have demonstrated that the network is not an epiphenomenon but instrumental in generation of self-awareness.”
Concerning meditation and this brain network: “The new understanding of the physiology and pathophysiology of self-awareness outlined in Section 4 may lead to the application of unconventional therapeutical strategies to increase dopaminergic activity and to improve paralimbic interaction. These strategies include relaxation meditation like yoga nidra or mindfulness meditation, which in independent studies have been shown to increase dopaminergic tone and induce growth in paralimbic structures.”
eNeuro, 10 March 2017, 4(2). This might be neuroscientific evidence for my speculations on the syntegration of consciousness states and stages via meditative discipline. To be determined. The abstract:
“Unraveling how brain regions communicate is crucial for understanding how the brain processes external and internal information. Neuronal oscillations within and across brain regions have been proposed to play a crucial role in this process. Two main hypotheses have been suggested for routing of information based on oscillations, namely communication through coherence and gating by inhibition. Here, we propose a framework unifying these two hypotheses that is based on recent empirical findings. We discuss a theory in which communication between two regions is established by phase synchronization of oscillations at lower frequencies, which serve as temporal reference frame for information carried by higher frequency activity. Our framework, consistent with numerous recent empirical findings, posits that cross-frequency interactions are essential for understanding how large-scale cognitive and perceptual networks operate.”
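The scheme the abstract sketches, in which slower oscillations provide a temporal reference frame for faster activity, is often quantified as phase-amplitude coupling. Here is a minimal toy illustration (not the authors’ method): a synthetic “gamma” signal whose amplitude is locked to a “theta” phase, scored with a Canolty-style modulation index. All frequencies and parameters are invented for illustration.

```python
import numpy as np

# Toy phase-amplitude coupling (PAC): a fast "gamma" oscillation whose
# amplitude depends on the phase of a slow "theta" rhythm.
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)    # 5 seconds
theta_phase = 2 * np.pi * 6 * t            # 6 Hz carrier phase
gamma_amp = 1 + 0.8 * np.cos(theta_phase)  # amplitude locked to theta phase
signal = gamma_amp * np.sin(2 * np.pi * 60 * t)  # 60 Hz gamma riding on theta

# Canolty-style modulation index: length of the mean amplitude-weighted
# phase vector. Near 0 with no coupling, larger when amplitude tracks phase.
envelope = gamma_amp           # known analytically here; normally via Hilbert
mi = np.abs(np.mean(envelope * np.exp(1j * theta_phase))) / np.mean(envelope)

# Uncoupled control: constant-amplitude gamma gives an index near 0.
mi_null = np.abs(np.mean(np.ones_like(t) * np.exp(1j * theta_phase)))
print(f"coupled MI:   {mi:.3f}")
print(f"uncoupled MI: {mi_null:.3f}")
```

The coupled signal scores around 0.4 while the control sits near zero, which is the signature the unified framework predicts when a low-frequency phase is organizing high-frequency information.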
And it implies an event horizon of the human brain. That’s a mouthful of a title, new in NeuroQuantology (15:3, September 2017). The abstract follows, also a brainful. This will take some reading and digesting, provided I have the requisite capacity to understand it (which remains to be seen).
“Our brain is not a ‘stand alone’ information processing organ: it acts as a central part of our integral nervous system with recurrent information exchange with the entire organism and the cosmos. In this study, the brain is conceived to be embedded in a holographic structured field that interacts with resonant sensitive structures in the various cell types in our body. In order to explain earlier reported ultra-rapid brain responses and effective operation of the meta-stable neural system, a field-receptive mental workspace is proposed to be communicating with the brain. Our integral nervous system is seen as a dedicated neural transmission and multi-cavity network that, in a non-dual manner, interacts with the proposed supervening meta-cognitive domain. Among others, it is integrating discrete patterns of eigen-frequencies of photonic/solitonic waves, thereby continuously updating a time-symmetric global memory space of the individual. Its toroidal organization allows the coupling of gravitational, dark energy, zero-point energy field (ZPE) as well as earth magnetic fields energies and transmits wave information into brain tissue, that thereby is instrumental in high speed conscious and sub-conscious information processing. We propose that the supposed field-receptive workspace, in a mutual interaction with the whole nervous system, generates self-consciousness and is conceived as operating from a 4th spatial dimension (hyper-sphere). Its functional structure is adequately defined by the geometry of the torus, that is envisioned as a basic unit (operator) of space-time. The latter is instrumental in collecting the pattern of discrete soliton frequencies that provided an algorithm for coherent life processes, as earlier identified by us. It is postulated that consciousness in the entire universe arises through, scale invariant, nested toroidal coupling of various energy fields, that may include quantum error correction. 
In the brain of the human species, this takes the form of the proposed holographic workspace, that collects active information in a ‘brain event horizon,’ representing an internal and fully integral model of the self. This brain-supervening workspace is equipped to convert integrated coherent wave energies into attractor type/standing waves that guide the related cortical template to a higher coordination of reflection and action as well as network synchronicity, as required for conscious states. In relation to its scale-invariant global character, we find support for a universal information matrix, that was extensively described earlier, as a supposed implicate order as well as in a spectrum of space-time theories in current physics. The presence of a field-receptive resonant workspace, associated with, but not reducible to, our brain, may provide an interpretation framework for widely reported, but poorly understood transpersonal conscious states and algorithmic origin of life. It also points out the deep connection of mankind with the cosmos and our major responsibility for the future of our planet.”
See this article. A few excerpts:
“A new picture is taking shape in which conscious experience is seen as deeply grounded in how brains and bodies work together to maintain physiological integrity – to stay alive.”
“The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world. Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals.”
“A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. […] We’ve found that people consciously see what they expect, rather than what violates their expectations.”
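The inference the second excerpt describes, combining indeterminate sensory signals with prior beliefs, is in its simplest form Bayesian updating. A minimal numeric sketch (hypotheses and all numbers are invented for illustration):

```python
# Minimal Bayesian-perception sketch: the "brain" combines a prior belief
# about the world with a noisy sensory likelihood to form a posterior
# hypothesis about what caused the signal. Numbers are illustrative only.

prior = {"cat": 0.7, "raccoon": 0.3}       # prior expectation
likelihood = {"cat": 0.2, "raccoon": 0.6}  # P(ambiguous signal | hypothesis)

unnorm = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnorm.values())
posterior = {h: p / evidence for h, p in unnorm.items()}

for h, p in posterior.items():
    print(f"P({h} | signal) = {p:.4f}")
```

Here the evidence is strong enough to overturn the prior (raccoon wins at about 0.56); with a weaker likelihood, the prior would dominate and we would consciously “see what we expect,” as the excerpt puts it.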
An AI system can now isolate individuals’ voices from other environmental noise, including other voices. Such a system has many potential uses, both benign and nefarious. The ability to untangle signals from noise, and to identify which signals come from which sources, is improving rapidly. The approach should also apply to other kinds of signals, not only sounds.
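Many voice-isolation systems work by estimating a time-frequency mask that keeps the target speaker’s energy and suppresses the rest. A toy sketch of the masking idea (not the article’s actual system): the “voices” here are pure tones, so the ideal mask is known in advance, whereas real systems learn it from data.

```python
import numpy as np

# Mask-based source separation in miniature: mix two "voices" (tones),
# then recover the target by keeping only its time-frequency bins.
fs = 8000
t = np.arange(0, 1, 1 / fs)
target = np.sin(2 * np.pi * 440 * t)        # "target speaker"
interferer = np.sin(2 * np.pi * 1000 * t)   # "background voice"
mixture = target + interferer

spec = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), 1 / fs)
mask = np.abs(freqs - 440) < 50             # keep only bins near the target
recovered = np.fft.irfft(spec * mask, n=len(mixture))

# Relative reconstruction error vs. the clean target signal.
err = np.mean((recovered - target) ** 2) / np.mean(target ** 2)
print(f"relative reconstruction error: {err:.6f}")
```

Real speech overlaps in frequency far more than two tones do, which is why the hard part is learning which bins belong to which speaker; the masking step itself is this simple.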
I know, to free will or not to free will, that is the hackneyed question debated in philosophical circles since we learned how to talk. But here’s a cognitive neuroscientist’s research on “how neuronal code underlies top-down mental causation.” It’s a long video, over 2 hours, and I have yet to complete it. Here is Peter Tse’s CV. Here is his book on the topic. Here is a good summary of Tse’s work on the topic.
It occurred to me that memes are a lot like frames as Lakoff describes them. Lakoff has done extensive cognitive scientific work on schemas, metaphors and frames. Check out this lengthy article in Frontiers in Human Neuroscience, 2014; 8: 958, “Mapping the brain’s metaphor circuitry.” Even though they don’t relate this to the concept of memes, there are some striking similarities. E.g.:
“Reddy had found that the abstract concepts of communication and ideas are understood via a conceptual metaphor: Ideas Are Objects; Language Is a Container for Idea-Objects; Communication Is Sending Idea-Objects in Language-Containers.”
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
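The amplification mechanism is easy to see in miniature: a model that resolves ambiguity by always predicting the majority association turns a 66% correlation in the data into a 100% correlation in its output. A toy sketch with invented numbers, not the study’s actual model or dataset:

```python
# Toy bias amplification: "cooking" images labeled "woman" 66% of the time,
# and a classifier that, lacking other cues, always predicts the majority
# gender for the activity. Its predictions are 100% "woman" -- a stronger
# association than the data contained. All numbers are illustrative.
dataset = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

def woman_rate(pairs):
    return sum(1 for _, g in pairs if g == "woman") / len(pairs)

def majority_model(activity, pairs):
    # Predict the majority gender for the activity, every time.
    return "woman" if woman_rate(pairs) > 0.5 else "man"

predictions = [(a, majority_model(a, dataset)) for a, _ in dataset]
print(f"association in data:        {woman_rate(dataset):.2f}")
print(f"association in predictions: {woman_rate(predictions):.2f}")
```

Real classifiers are subtler than this majority rule, but the same pressure applies: when features are ambiguous, leaning on the dataset’s correlations improves accuracy while sharpening the bias.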
In a few previous posts I posted articles on new scientific research questioning some of Piaget’s original premises. This Wikipedia article discusses those neo-Piagetians who have taken into account the more recent science. Also see this article that discusses some of the neo-Piagetians but then focuses on Kurt Fischer’s work.
Caltech researchers have identified the brain mechanisms that enable primates to quickly identify specific faces. In a feat of efficiency, surprisingly few feature-recognition neurons are involved in a process that may be able to distinguish among billions of faces. Each neuron in the facial-recognition system specializes in noticing one feature, such as the width of the part in the observed person’s hair. If the person is bald or has no part, the part-width-recognizing neuron remains silent. A small number of such specialized-recognizer neurons feed their outputs to other layers (patches) that integrate a higher-level pattern (e.g., hair pattern), and these integrate at yet higher levels until there is a total face pattern. This process occurs nearly instantaneously and works regardless of the view angle (as long as some facial features are visible). Also, by cataloging which neurons perform which functions and then mapping these to a relatively small set of composite faces, researchers were able to tell which face a macaque (monkey) was looking at.
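The decoding result rests on an axis-coding idea: each face is a point in a low-dimensional feature space, each neuron’s firing is roughly a linear projection of those features, so identity can be read back out with a linear decoder. A toy sketch of that logic (dimensions, noise level, and the random projections are all invented for illustration, not the study’s recorded data):

```python
import numpy as np

# Axis coding in miniature: neurons respond linearly to face features,
# and a linear decoder recovers which face the population was "viewing."
rng = np.random.default_rng(0)
n_features, n_neurons, n_faces = 25, 100, 200

faces = rng.normal(size=(n_faces, n_features))   # face feature vectors
axes = rng.normal(size=(n_features, n_neurons))  # each neuron's preferred axis
responses = faces @ axes + 0.1 * rng.normal(size=(n_faces, n_neurons))

# Linear decoding: least-squares map from population response to features.
decoder, *_ = np.linalg.lstsq(responses, faces, rcond=None)
decoded = responses @ decoder

# Identify the viewed face: nearest neighbor in feature space.
probe = 17
dists = np.linalg.norm(faces - decoded[probe], axis=1)
print(f"decoded face {np.argmin(dists)} (true: {probe})")
```

The striking part of the actual finding is how few neurons such a linear code needs; here 100 model neurons cleanly identify one face out of 200.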
These findings seem to correlate closely with Ray Kurzweil’s (a director of engineering at Google) pattern-recognition theory of mind.
Scientific American article
BMCAI library file (site members only)