What is humanity’s situation with respect to surviving long-term with a good quality of life? (Frame the core opportunities and obstacles.)
What attributes of our evolved, experientially programmed brains contribute to this situation? (What are the potential leverage points for positive change within our body-brain-mind system?)
What courses of research and action (including currently available systems, tools, and practices and current and possible lines of R&D) have the potential to improve our (and the planetary life system’s) near- and long-term prospects?
Following is a list (only some!) of the resources some of us have consumed and discussed online, in emails, or face-to-face in 2019. Sample a few to jog your thoughts and provoke deeper dives. Please add your own references in the comments below this post, each with a short (one line is fine) description if possible.
This link describes 12 benefits of meditation that are supported by scientific studies.
Part of the collective-commons transformation is that humanity has become a hybrid cyborg with the machine, meaning the personal computer plus an internet connection. This has fundamentally changed our nature into that of a mass-communicating collaborative commons. The Frontiers ebook covers the technological side of that development, whereas Rifkin writes about the more social side.
Jordan Hall of the Neurohacker Collective on decentralized collective intelligence. Sounds a lot like how our group works: our collaborations create something greater than our individual contributions, even though the latter are part and parcel of the process. What happens when we node thyself.
Humans have some intentional control over our brains (and minds and bodies) and focused breathing is one of those control mechanisms.
“This recent study finally answers these questions by showing that volitionally controlling our respiration, even merely focusing on one’s breathing, yields additional access and synchrony between brain areas. This understanding may lead to greater control, focus, calmness, and emotional control.”
Musk said his neuroscience company, Neuralink, has about 85 engineers, “the highest per capita intelligence” group he has ever assembled, with the mission of building a hard drive for your brain.
“The long-term aspiration with Neuralink would be to achieve a symbiosis with artificial intelligence.”
Wait. What? “To achieve a sort of democratization of intelligence, such that it is not monopolistically held in a purely digital form by governments and large corporations.”
The above is the title of a new, free Frontiers book subtitled “Bridging separate evolutionary paradigms.” I thought it would be of interest to this group. It can be found here, after scrolling down. From the Introduction:
“The nervous system is the product of biological evolution and is shaped by the interplay between extrinsic factors determining the ecology of animals, and by intrinsic processes that dictate the developmental rules that give rise to adult functional structures. This special topic is oriented to develop an integrative view from behavior and ecology to neurodevelopmental processes. We address questions such as how do sensory systems evolve according to ecological conditions? How do neural networks organize to generate adaptive behavior? How do cognition and brain connectivity evolve? What are the developmental mechanisms that give rise to functional adaptation? Accordingly, the book is divided into three sections, (i) Evolution of sensorimotor systems; (ii) Cognitive computations and neural circuits, and (iii) Development and brain evolution. We hope that this initiative will support an interdisciplinary program that addresses the nervous system as a unified organ, subject to both functional and developmental constraints, where the final outcome results from a compromise between different parameters rather than from several single variables acting independently of each other.”
Article here from Frontiers in Human Neuroscience, 2017, 11:51. The abstract (note my italicized highlighting):
“Neurofeedback is attracting renewed interest as a method to self-regulate one’s own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.”
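To make the core signal-processing step of such a protocol concrete, here is a minimal sketch of frontal-midline theta (4–8 Hz) up-regulation feedback. Everything here is illustrative and hypothetical, not from the article: the sampling rate, the synthetic signal, the reward threshold, and the naive DFT are all stand-ins for a real EEG pipeline.

```python
import math
import random

FS = 250  # assumed sampling rate in Hz (typical for consumer EEG)

def band_power(signal, f_lo, f_hi, fs=FS):
    """Naive DFT band power: sum |X(f)|^2 over frequency bins in [f_lo, f_hi]."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

def synthetic_eeg(theta_amp, n=FS):
    """One second of fake EEG: a 6 Hz theta component, a 20 Hz beta component, noise."""
    random.seed(0)  # fixed seed so both calls see identical noise
    return [theta_amp * math.sin(2 * math.pi * 6 * i / FS)
            + 0.5 * math.sin(2 * math.pi * 20 * i / FS)
            + random.gauss(0, 0.1)
            for i in range(n)]

baseline = band_power(synthetic_eeg(theta_amp=1.0), 4, 8)
trained = band_power(synthetic_eeg(theta_amp=1.5), 4, 8)  # simulated up-regulation
# The feedback signal rewards the trainee when theta power rises above baseline.
feedback = "reward" if trained > 1.2 * baseline else "no reward"
print(feedback)  # → reward
```

A real protocol would use an artifact-rejected, windowed power estimate (e.g., Welch's method) updated continuously, but the loop is the same: estimate band power, compare to a personalized threshold, feed back a reward signal.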
In the field of intuition it is widely accepted that problem solving proceeds in a more or less graded fashion from problem formulation to problem solution as previously encoded information is activated by clues to coherence. The resulting pattern of activation differentially sensitizes a person to new information that is pertinent for the solution. Eventually, the continuous (and rapid) build-up of coherent information is sufficient to cross a threshold of awareness or noticing. Accordingly, implicitly acquired knowledge and experience play an important role because their content is assumed to be non-consciously and gradually activated in memory from clues in the environment that initiate an automatic spreading of activation. These assumptions are summarized in what has been known as the continuity model of intuition.
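The continuity model's "automatic spreading of activation" can be sketched in a few lines. In this toy model (the network, edge weights, decay rate, and threshold are all invented for illustration), activation spreads from an environmental clue and builds gradually until nodes cross an awareness threshold:

```python
# Hypothetical semantic network: node names and edge weights are made up.
GRAPH = {
    "candle": {"wax": 0.6, "flame": 0.8},
    "flame": {"fire": 0.9, "light": 0.7},
    "wax": {"soft": 0.5},
    "fire": {"light": 0.6},
    "light": {},
    "soft": {},
}

THRESHOLD = 0.5  # awareness/noticing threshold
DECAY = 0.8      # fraction of activation passed along per hop

def spread(clues, steps):
    """Spread activation outward from clue nodes for a number of steps."""
    act = {node: 0.0 for node in GRAPH}
    for c in clues:
        act[c] = 1.0
    for _ in range(steps):
        new = dict(act)
        for node, a in act.items():
            for nbr, w in GRAPH[node].items():
                new[nbr] += a * w * DECAY  # graded, initially sub-threshold activation
        act = new
    return act

# Activation builds up step by step; more nodes cross the threshold over time.
for s in (1, 2, 3):
    act = spread(["candle"], steps=s)
    noticed = sorted(n for n, a in act.items() if a >= THRESHOLD)
    print(s, noticed)
```

The point of the sketch is the graded build-up: after one step only the clue's nearest associates are "noticed," while more remote concepts (here, "fire" and "light") sit below threshold until further activation accumulates.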
In contrast, the current literature on insight problem solving favors a discontinuity model. In particular, insight is linked to processes that restructure the mental representation of a problem. It is assumed that prior knowledge and inappropriate assumptions result in self-imposed constraints that establish a biased representation of the problem and thus prevent a solution. Consequently, a discontinuity model suggests that the first intuitive apprehension of the problem leads to an impasse, which has to be overcome by relaxing these constraints before a solution can be found.
Until now, there has been neither theoretical discussion nor empirical investigation of the continuity/discontinuity distinction. Our open research questions include the following:
1. Are continuity and discontinuity different sides of the same coin, distinguishing different stages within a continuous solution process, or do they stand for mutually exclusive processes?
2. If intuition is seen as a “coherence-building mechanism”, is it conceivable to describe the different stages within insight problem solving as coherence-changing processes?
3. What are the underlying neuro-cognitive mechanisms that enable the search for coherence and the change of coherence (representational change), respectively? Both processes might go beyond a simple spreading-activation account.
4. How do recombination and the generation of novel solutions fit into the intuitive framework?
5. Could the application of Darwinian principles help inform us about the principles underlying both processes?
Listen to this 3-part podcast entitled “Solving the generator problems of existence” with Daniel Schmachtenberger, co-founder of the Neurohacker Collective and founder of Emergence Project. A few brief excerpts from the blurb follow. See the link to listen if you feel it’s to your taste and passes the smell test. Had to get all the senses in there.
“In order to avoid extinction, we have to come up with different systems altogether, and replace rivalry with anti-rivalry. One of the ways to do that is moving from ownership of goods towards access to shared common resources. […] He also proposes a new system of governance which would allow groups of people that have different goals and values to come to decisions together on various issues. […He] argues that it is not the most competitive ecosystem that makes it through, but the most self-stabilizing one.”
“The biosphere is a complex self-regulating system. It is also a closed-loop system, meaning that once a component stops serving its function, it gets recycled and reincorporated back into the system. In contrast, the systems humans have created are complicated, open loop systems. They are neither self-organizing nor self-repairing. Complex systems, which come from evolution, are anti-fragile. Complicated systems, designed by humans, are fragile. Complicated open-loop systems are the second generator function of existential risks.”
“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss.”
“We showed that we could tap into a patient’s own memory content, reinforce it and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.”
An improvement to the Neural Simulation Tool (NEST), the primary simulation code of the Human Brain Project, expanded the scope of neural data management for simulations from the previous 1% of discrete neurons (about the number in the cerebellum) to 10%. As supercomputing capacity increases, the NEST algorithm can scale within the near term to store 100% of BCI-derived or simulated neural data. It achieves its massive efficiency boost by eliminating the need for every compute node to explicitly store state data for every neuron in the network.
Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
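A toy sketch of the memory argument (this is not NEST's actual data structure; the network size, rank count, in-degree, and round-robin placement are all invented): in a "dense" scheme every compute node reserves a slot for every global neuron, so per-node memory grows with total network size, while a sparse scheme stores entries only for source neurons that actually have a target on that node, so memory tracks local connectivity instead:

```python
import random

random.seed(1)
N_NEURONS = 1_000  # global neuron count (stand-in for billions)
N_RANKS = 10       # compute nodes in the machine
K = 5              # incoming connections per neuron (fixed in-degree)

# Dense scheme: every rank keeps a slot for every global neuron,
# even neurons with no targets on that rank -> memory ~ N_RANKS * N_NEURONS.
dense = [[None] * N_NEURONS for _ in range(N_RANKS)]

# Sparse scheme: each rank keeps a dict entry only for source neurons
# that have at least one local target -> memory ~ local connections.
sparse = [dict() for _ in range(N_RANKS)]
for target in range(N_NEURONS):
    rank = target % N_RANKS  # round-robin neuron placement across ranks
    for _ in range(K):
        source = random.randrange(N_NEURONS)  # random presynaptic partner
        sparse[rank].setdefault(source, []).append(target)

dense_slots = sum(len(r) for r in dense)
sparse_slots = sum(len(r) for r in sparse)
print(dense_slots, sparse_slots)
```

Because each neuron's in-degree is fixed while the network grows, the fraction of global neurons with targets on any one rank shrinks as more ranks are added, which is the sparsity the abstract says brain-scale simulation code must exploit.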