An informative video on this process. Ofttimes we need to descend into hell before we can ascend into a new life. This seems to be the overall pattern of human development: at each stage we must go through a spiraling process of dissolution and reorganization. Hence we are far more than twice-born; we are born anew at each stage. It seems, though, that the further we go in this process, the greater the risks and rewards.
Speaking of which, the inaugural issue of PhiMiSci will take up this very topic:
“The inaugural issue of PhiMiSci will be a Special Topic on Radical Disruptions of Self-Consciousness (see the Manifesto of the Selfless Minds workshop). The call for papers for this Special Topic was closed on May 1. Submissions are currently under review. The guest editors of this Special Topic are Thomas Metzinger (Mainz) & Raphaël Millière (Oxford). The expected publication date of this Special Topic is late 2019.”
In his new book, Range: Why Generalists Triumph in a Specialized World, David J. Epstein investigates the significant advantages of generalized cognitive skills for success in a complex world. We’ve heard and read much praise for narrow expertise in both humans and AIs (Watson, AlphaGo, etc.). In both humans and AIs, however, narrow+deep expertise does not translate into adaptiveness when reality presents novel challenges, as it constantly does.
As you ingest this highly readable, non-technical book, please add your observations to the comments below.
Here’s an interesting infographic of the main concepts and thinkers in complexity science across time. Notice that S. Kauffman is slotted into the 1980s column, suggesting the graphic depicts when influential thinkers first made their marks.
Ebook from Frontiers in Science. From the lead article:
“Readers of this volume will notice a sharp demarcation between descriptions of traditional Evolutionary Psychology, which several authors (Barret et al.; Stotz; Stulp et al.) have presented as indistinguishable from the information processing approach, and newer conceptualizations of EP. Indeed one of the major themes running through several of the contributions (Burke; Barret et al.; Stephen; Stotz; Stulp et al.) concerns the appropriate conceptualization of EP itself, with the Santa Barbara school of massive modularity (made famous by John Tooby and Leda Cosmides) receiving the most scrutiny. As Barret et al. and Stotz describe, early conceptualizations of EP embraced the notion of massive modularity of mind. Individual modules were presumed to act as evolved computers, sensitive to domain specific information and processing it in adaptive ways. Framed in this manner, EP fits well within even a very strict definition of a computational theory of mind and could hardly be seen as the source of an alternative meta-theoretical approach to understanding brain and behavior.
“It may not be appropriate, however, to view either the computational theory of mind or the field of EP so narrowly. As Klasios argues, many evolutionary psychologists adopt a more generic notion of computation, one that commits more to the abstract representation and manipulation of information, rather than to digital computation in its literal sense (although see also Bryant). EP too, is no longer wed to notions of massive modularity (Stephen), with the majority of research in the field motivated by consideration of first principles of evolutionary theory and is neither constrained nor informed by assumptions of massive modularity or domain specific mechanisms (Burke). With these considerations in mind, Klasios and Bryant both argue that computation is still the most profitable account of the mind and is able to accommodate both evolutionary and e-cognition (extended, embodied approaches which place emphasis on the role played by the whole organism and its environment in the decision-making process, rather than simply the brain) perspectives, that favor notions of neural adaptations that are “complex, widely distributed, and highly diffuse” (Klasios) over the more strictly isolated mental modules supposed by massive modularity.”
“The idea that humans have cognitive instincts is a cornerstone of evolutionary psychology, pioneered by Leda Cosmides, John Tooby and Steven Pinker in the 1990s. […] This all seems plausible and intuitive, doesn’t it? The trouble is, the evidence behind it is dubious. In fact, if we look closely, it’s apparent that evolutionary psychology is due for an overhaul. Rather than hard-wired cognitive instincts, our heads are much more likely to be populated by cognitive gadgets, tinkered and toyed with over successive generations. Culture is responsible not just for the grist of the mind – what we do and make – but for fabricating its mills, the very way the mind works.”
“The evidence for cognitive instincts is now so weak that we need a whole new way of capturing what’s distinctive about the human mind. The founders of evolutionary psychology were right when they said that the secret of our success is computational mechanisms – thinking machines – specialised for particular tasks. But these devices, including imitation, mind-reading, language and many others, are not hard-wired. Nor were they designed by genetic evolution. Rather, humans’ thinking machines are built in childhood through social interaction, and were fashioned by cultural, not genetic, evolution. What makes our minds unique are not cognitive instincts but cognitive gadgets.”
“The mind of a newborn human baby is not a blank slate. Like other animals, we are born with – we genetically inherit – a huge range of abilities and assumptions about the world. We’re endowed with capacities to memorise sequences, to control our impulses, to learn associations between events, and to hold several things in mind while we work on them. […] These skills and beliefs are part of the ‘genetic starter kit’ for mature human cognition. They are crucial because they direct our attention to other people, and act as cranes in the construction of new thinking machines. But they are not blueprints for Big Special cognitive mechanisms such as imitation, mind-reading and language.”
“To be fair, evolutionary psychology did something crucially important. It showed that viewing the mind as a kind of software running on the brain’s hardware can advance our understanding of the origins of human cognition. Now it’s time to take a further step: to recognise that our distinctively human apps have been created by cultural, not genetic, evolution.”
If you are familiar with complex systems theorist Dr. Stuart Kauffman’s ideas you know he covers a broad range of disciplines and concepts, many in considerable depth, and with a keen eye for isomorphic and integrative principles. If you peruse some of his writings and other communications, please share with us how you see Kauffman’s ideas informing our focal interests: brain, mind, intelligence (organic and inorganic), and self-aware consciousness.
Do you find Kauffman’s ideas well supported by empirical research? Which are more scientific and which, if any, more philosophical? What intrigues, provokes, or inspires you? Do any of his perspectives or claims help you better orient or understand your own interests in our focal topics?
Following are a few reference links to get the conversation going. Please add your own in the comments to this post. If you are a member and have a lot to say on a related topic, please create a new post, tag it with ‘Stuart Kauffman,’ and create a link to your post in the comments to this post.
“We propose ‘multi-level evolution’, a bottom-up automatic process that designs robots across multiple levels and niches them to tasks and environmental conditions. Multi-level evolution concurrently explores constituent molecular and material building blocks, as well as their possible assemblies into specialized morphological and sensorimotor configurations. Multi-level evolution provides a route to fully harness a recent explosion in available candidate materials and ongoing advances in rapid manufacturing processes.”
Good quick summary of some of Deacon’s ideas. Deacon: “We need to stop thinking about hierarchic evolution in simple Darwinian terms. We need to think about it both in terms of selection and the loss of selection or the reduction of selection. And that maybe it’s the reduction of selection that’s responsible for the most interesting features” (9:40).
Ideally, automation would yield a Star Trek reality of increasing leisure and quality of choice and experience. Why isn’t this our experience? An article on Medium offers insight into why this is not occurring on any significant scale.
Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.
Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That mutual tracking quickly breaks down given the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than to pause to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.
The prisoner’s dilemma shows that we (most humans) would rather be locked in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility that a competitor might increase her lead.
Any strategy that’s so endemic must have evolutionary roots. Thoughts?
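The dynamic described above can be sketched as a one-shot prisoner’s dilemma. Here is a minimal Python illustration; the payoff numbers are my own illustrative assumptions, not figures from the article:

```python
# One-shot prisoner's dilemma over the 'leisure dividend'.
# Payoff numbers are illustrative assumptions, not taken from the article.

# PAYOFFS[(my_move, rival_move)] = my payoff
PAYOFFS = {
    ("savor", "savor"): 3,   # both pause to enjoy the leisure dividend
    ("savor", "sprint"): 0,  # I relax while my rival pulls ahead
    ("sprint", "savor"): 5,  # I pull ahead while my rival relaxes
    ("sprint", "sprint"): 1, # the grueling neck-and-neck race
}

def best_response(rival_move):
    """The move that maximizes my payoff, given the rival's move."""
    return max(("savor", "sprint"), key=lambda m: PAYOFFS[(m, rival_move)])

# Sprinting dominates: it is the best response whether the rival savors
# or sprints, so both players sprint -- even though mutual savoring
# would leave each of them better off (3 > 1).
assert best_response("savor") == "sprint"
assert best_response("sprint") == "sprint"
```

Iterating the game, with each player able to track the other’s past moves, is what can stabilize cooperation; the article’s point is that with vast numbers of competitors that tracking breaks down, collapsing the situation back to the one-shot equilibrium of mutual sprinting.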
“How consciousness evolved and how consciousness has come to affect evolutionary processes are related issues. This is because biological consciousness–the only form of consciousness of which we are aware–is entailed by a particular, fairly sophisticated form of animal cognition, an open-ended ability to learn by association or, as we call it, ‘unlimited associative learning’ (UAL). Animals with UAL can assign value to novel, composite stimuli and action-sequences, remember them, and use what has been learned for subsequent (future), second-order, learning. In our work we argue that UAL is the evolutionary marker of minimal consciousness (of subjective experiencing) because if we reverse-engineer from this learning ability to the underlying system enabling it, this enabling system has all the properties and capacities that characterize consciousness. These include…”