I linked to this article for our recent discussion of brain networks. The abstract is below.
“Here, we organize different definitions of scale-free networks and construct a severe test of their empirical prevalence using state-of-the-art statistical tools applied to nearly 1000 social, biological, technological, transportation, and information networks. Across these networks, we find robust evidence that strongly scale-free structure is empirically rare, while for most networks, log-normal distributions fit the data as well or better than power laws.”
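The paper's actual pipeline is the Clauset-Shalizi-Newman method, but the core idea of the comparison can be sketched in a few lines: fit both a power law and a log-normal to the same degree-like data by maximum likelihood and compare log-likelihoods. Everything below (the synthetic data, the use of `scipy.stats`) is illustrative, not the authors' code.

```python
# Hedged sketch: compare power-law vs. log-normal fits by log-likelihood.
# The data here are synthetic (drawn from a log-normal), so the log-normal
# fit should win -- mirroring the paper's finding for most real networks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=1.0, size=5000) + 1  # degree-like values >= 1

# Maximum-likelihood fits of each candidate distribution.
ln_shape, ln_loc, ln_scale = stats.lognorm.fit(data, floc=0)
pl_b, pl_loc, pl_scale = stats.pareto.fit(data, floc=0)  # Pareto = power law

ll_lognorm = np.sum(stats.lognorm.logpdf(data, ln_shape, ln_loc, ln_scale))
ll_pareto = np.sum(stats.pareto.logpdf(data, pl_b, pl_loc, pl_scale))

print(f"log-likelihood, log-normal: {ll_lognorm:.1f}")
print(f"log-likelihood, power law:  {ll_pareto:.1f}")
```

The real test in the paper is more careful (tail cutoffs, likelihood-ratio tests with significance corrections), but the direction of the comparison is the same.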
Mark suggested this book as a future group reading and discussion, and I agree. Rushkoff gives a very brief summary of his new book on the topic in the TED talk below. It starts with tech billionaires’ main concern being: where do I build my bunker for the end of the world? So what happened to the idyllic utopias we thought tech was working toward, a collaborative commons of humanity? The tech boom became all about betting on stocks and getting as much money as possible for me, myself, and I while repressing what makes us human. The motto became: “Human beings are the problem and technology is the solution.” Rushkoff is not very kind to the transhumanist notion of AI replacing humanity either, a consequence of that motto. He advises that we embed human values into the tech so that it serves us rather than the reverse.
Reich explains that narrative is necessary to give belief systems structure. Just telling the truth is not enough without the right story. He breaks down the four major stories Americans have operated within: the triumphant individual, the benevolent community, the mob at the gates, and the rot at the top. All four can be told with the truth or with lies. Reich gives examples and shows how the Dems abandoned some of these stories while the Repugs maintained the negative versions. So how do progressives regain the truth of these four stories? Hint: Sanders, AOC, and their ilk are doing exactly that.
Ideally, automation would yield a Star Trek reality of increasing leisure and quality of choice and experience. Why isn’t this our experience? An article on Medium offers insight into why this is not occurring on any significant scale.
Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.
Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That ‘traceability’ quickly breaks down with the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than to pause to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.
The prisoner’s dilemma shows that we (most humans) would rather be in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility a competitor may increase her lead.
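The logic above is just the standard dominance argument of the prisoner's dilemma. Here is a toy sketch of it with the moves relabeled for the leisure-dividend story; the payoff numbers and names are illustrative, not from the article.

```python
# Toy prisoner's dilemma: each worker chooses to "relax" (cooperate,
# enjoying the leisure dividend) or "sprint" (defect, reinvesting it
# in the competition).
PAYOFFS = {  # (my move, their move) -> my payoff
    ("relax", "relax"): 3,   # both savor the leisure dividend
    ("relax", "sprint"): 0,  # I relax, they pull ahead
    ("sprint", "relax"): 5,  # I pull ahead
    ("sprint", "sprint"): 1, # grueling neck-and-neck race
}

def best_response(their_move):
    """My payoff-maximizing move, given their move."""
    return max(("relax", "sprint"), key=lambda m: PAYOFFS[(m, their_move)])

# Sprinting dominates: it is the best response whatever the other does...
assert best_response("relax") == "sprint"
assert best_response("sprint") == "sprint"

# ...so both sprint and each earns 1, though mutual relaxing would pay 3.
print("mutual sprint:", PAYOFFS[("sprint", "sprint")],
      " mutual relax:", PAYOFFS[("relax", "relax")])
```

The dominance check is exactly the "each assumes the other will sprint" reasoning: whatever the competitor does, sprinting pays more, so both sprint and both end up worse off than if both had relaxed.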
Any strategy that’s so endemic must have evolutionary roots. Thoughts?
Psychologist Robert Epstein, the former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP (information-processing) metaphor, he says, is so deeply embedded in thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of wrongness we get into with the IP metaphor and the notion that mental experience could persist outside the organic body.
Underlying our tech vision is a gnostic belief system of leaving the body behind, as it is an inferior biological system thwarting our evolution. Hence all the goals of downloading our supposed consciousness into a machine. It’s an anti-human and anti-environment religion that has no concern for either, imagining that tech is our ultimate savior.
And ironically enough, it’s a belief system that teamed up with the US human potential movement at Esalen. What started as an embodiment-based human potential program, with practices geared toward integrating our minds with our bodies and the environment, got sidetracked by this glorious evolution beyond all that messy material and biological stuff.
And then there’s this religion’s devil’s bargain with our social media companies, like Facebook and Google, which use tech merely as a means of manipulating us for their own capitalistic purposes. Apparently it has been accepted that there is no alternative to capitalism, since capitalism likewise assumes that humanity is strictly utilitarian and self-interested, with human beings themselves just algorithmic computations determined by an equally algorithmic ‘natural’ selection. If tech can do all that better, then what’s all the fuss?
An interesting take on the agency of artifacts in light of the discussion of memes and temes. From Sinha, S. (2015). “Language and other artifacts: Socio-cultural dynamics of niche construction.” Frontiers in Psychology.
“If (as I have argued) symbolic cognitive artifacts have the effect of changing both world and mind, is it enough to think of them as mere ‘tools’ for the realization of human deliberative intention, or are they themselves agents? This question would be effectively precluded by some definitions of agency […] In emphasizing the distinction, and contrasting agents with artifacts, it fails to engage with the complex network of mediation of distinctly human, social agency by artifactual means. It is precisely the importance of this network for both cognitive and social theory that Latour highlights by introducing the concept of ‘interobjectivity.’ […] Symbolic cognitive artifacts are not just repositories, they are also agents of change. […] We can argue that the agency is (at least until now) ultimately dependent on human agency, without which artifactual agency would neither exist nor have effect. But it would be wrong to think of artifactual agency as merely derivative.”
Age-at-death forecasting – A new test predicts when a person will die. It’s currently accurate to within a few years and is getting more accurate. What psychological impacts might knowing your approximate time of death (say, to within ± 6 months) have on otherwise healthy people? Does existing research with terminally ill or very old persons shed light on this? What would the social and political implications be? What if a ‘death-clock’ reading became required for certain jobs (elected positions, astronauts, roles requiring expensive training and education, etc.) or decisions (whom to marry or parent children with, whether to adopt, whether to relocate, how to invest and manage one’s finances, etc.)?
“How consciousness evolved and how consciousness has come to affect evolutionary processes are related issues. This is because biological consciousness–the only form of consciousness of which we are aware–is entailed by a particular, fairly sophisticated form of animal cognition, an open-ended ability to learn by association or, as we call it, ‘unlimited associative learning’ (UAL). Animals with UAL can assign value to novel, composite stimuli and action-sequences, remember them, and use what has been learned for subsequent (future), second-order, learning. In our work we argue that UAL is the evolutionary marker of minimal consciousness (of subjective experiencing) because if we reverse-engineer from this learning ability to the underlying system enabling it, this enabling system has all the properties and capacities that characterize consciousness. These include…”
See the link for more.
Albuquerque Brain, Mind, and Artificial Intelligence Discussion Group