All posts by Edward Berge

ai will never conquer humanity

From this piece, located on the publications page of the International Computer Science Institute: “Mathematical models help describe reality, but only by ignoring its inherent integrity.” Computers work on binary logic, and the world is full of ‘noise.’ Hence computers, and mathematical models for that matter, can only approximate reality by eliminating that noise.

“Can a bunch of bits represent reality exactly, in a way that can be controlled and predicted indefinitely? The answer is no, because nature is inherently chaotic, while a bunch of bits representing a program can never be so, by definition.”

Which leads us to ask: “Are our mathematical models just a desperate, failed attempt to de-noise an otherwise very confusing, extremely blurred reality?”

So yes, math and computers are quite useful as long as we keep the above in mind instead of assuming they reveal reality as it is. And as long as we also search for that noisy humanity in the spaces between binary logic, which will never be revealed by math or computers alone.
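
To make the chaos point concrete, here is a minimal sketch of my own (not from the ICSI piece): two runs of the chaotic logistic map whose starting values differ by roughly one part in 10^16, about the limit of what a 64-bit float can even represent, end up on unrelated trajectories within a few dozen steps.

```python
# A minimal sketch (mine, not from the ICSI piece) of why a finite bunch of
# bits cannot track a chaotic process indefinitely: the rounding forced by a
# binary representation is itself 'noise' that the dynamics amplify.

def logistic_map(x, r=4.0, steps=80):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        trajectory.append(x)
    return trajectory

# Two starting values differing by ~1e-16, near the resolution of a 64-bit float.
a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-16)

for step in (0, 20, 40, 60, 80):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By roughly step 55 the two runs are fully decorrelated: a difference far below
# the model's precision has grown to dominate the entire trajectory.
```

The point is not that the computation is wrong, only that no finite-precision model can keep such a system pinned down for long.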

the evolution of synergy

Good quick summary of some of Deacon’s ideas. Deacon: “We need to stop thinking about hierarchic evolution in simple Darwinian terms. We need to think about it both in terms of selection and the loss of selection or the reduction of selection. And that maybe it’s the reduction of selection that’s responsible for the most interesting features” (9:40).

how does music affect the brain?

The blurb:

“In this episode of Tech Effects, we explore the impact of music on the brain and body. From listening to music to performing it, WIRED’s Peter Rubin looks at how music can change our moods, why we get the chills, and how it can actually change pathways in our brains.”

For me the most interesting part came later in the video (10:20): how, when we improvise, we shut down the pre-frontal planning part of the brain and ‘just go with the flow,’ which is when we are at our most creative and innovative. This does, though, depend on having used the pre-frontal cortex to learn the techniques of music and get them so ingrained in memory that we are then free to play with what we’ve programmed.

scale-free networks are rare

I linked to this article for our recent discussion of brain networks. The abstract is below.

“Here, we organize different definitions of scale-free networks and construct a severe test of their empirical prevalence using state-of-the-art statistical tools applied to nearly 1000 social, biological, technological, transportation, and information networks. Across these networks, we find robust evidence that strongly scale-free structure is empirically rare, while for most networks, log-normal distributions fit the data as well or better than power laws.”
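
For a hands-on sense of the kind of comparison the abstract describes, here is a rough sketch (my own, using the networkx and powerlaw Python packages, not the authors' actual pipeline) that fits a power law and a log-normal to a degree sequence and reports which one the data favor:

```python
# Rough sketch of the power-law vs. log-normal comparison the paper describes,
# not the authors' pipeline. Requires the third-party networkx and powerlaw packages.
import networkx as nx
import powerlaw

# A preferential-attachment graph, the textbook generator of 'scale-free' degree sequences.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
degrees = [d for _, d in G.degree() if d > 0]

fit = powerlaw.Fit(degrees, discrete=True)

# Log-likelihood ratio R: positive favors the power law, negative the log-normal;
# p estimates how significant that preference is.
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"alpha = {fit.power_law.alpha:.2f}, xmin = {fit.power_law.xmin}")
print(f"power law vs log-normal: R = {R:.2f}, p = {p:.3f}")
```

Run on real social or biological networks rather than a synthetic generator, this kind of test is what leads the authors to conclude that strongly scale-free structure is rare and that log-normals often fit as well or better.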

Rushkoff: Team Human

Mark suggested this book as a future group reading and discussion, and I agree. Rushkoff provides a very brief summary of his new book on the topic in the TED talk below. It starts with tech billionaires’ main concern: where do I build my bunker at the end of the world? So what happened to the idyllic utopias we thought tech was working toward, a collaborative commons of humanity? The tech boom became all about betting on stocks and getting as much money as possible for me, myself, and I, while repressing what makes us human. The motto became: “Human beings are the problem and technology is the solution.” Rushkoff is not very kind to the transhumanist notion of AI replacing humanity either, a consequence of that motto. He advises that we embed human values into the tech so that it serves us rather than the reverse.

The real American story

Reich explains that narrative is necessary to provide a structure to belief systems; just telling the truth is not enough without the right story. He breaks down the four major stories Americans have operated within: the triumphant individual, the benevolent community, the mob at the gates, and the rot at the top. All four can be told with the truth or with lies. Reich provides examples and shows how the Dems abandoned some of these stories, while the Repugs maintained the negative versions. So how do progressives reclaim the truth of these four stories? Hint: Sanders, AOC, and their ilk are doing exactly that.

Rushkoff: The anti-human religion of Silicon Valley

Underlying our tech vision is a gnostic belief system of leaving the body behind, as it is an inferior biological system thwarting our evolution. Hence the goal of uploading our supposed consciousness into a machine. It’s an anti-human and anti-environment religion that has no concern for either, imagining that tech is our ultimate savior.

And ironically enough, it’s a belief system that teamed up with the US human potential movement at Esalen. What started as an embodiment-based human potential program, with practices geared toward integrating our minds with our bodies and the environment, got sidetracked by this glorious evolution beyond all that messy material and biological stuff.

And then there’s the devil’s bargain this religion has struck with our social media and tech platforms, like Facebook and Google, which use tech merely as a means of manipulating us for their own capitalistic purposes. Apparently it has been accepted that there is no alternative to capitalism, since capitalism likewise assumes that humanity is strictly utilitarian and self-interested, our choices just algorithmic computations determined by an equally algorithmic ‘natural’ selection. If tech can do all that better, then what’s all the fuss?

The agency of objects

An interesting take on the agency of artifacts in light of the discussion of memes and temes. From Sinha, S. (2015). “Language and other artifacts: Socio-cultural dynamics of niche construction.” Frontiers in Psychology.

“If (as I have argued) symbolic cognitive artifacts have the effect of changing both world and mind, is it enough to think of them as mere ‘tools’ for the realization of human deliberative intention, or are they themselves agents? This question would be effectively precluded by some definitions of agency […] In emphasizing the distinction, and contrasting agents with artifacts, it fails to engage with the complex network of mediation of distinctly human, social agency by artifactual means. It is precisely the importance of this network for both cognitive and social theory that Latour highlights by introducing the concept of ‘interobjectivity.’ […] Symbolic cognitive artifacts are not just repositories, they are also agents of change. […] We can argue that the agency is (at least until now) ultimately dependent on human agency, without which artifactual agency would neither exist nor have effect. But it would be wrong to think of artifactual agency as merely derivative.”