Category Archives: algorithmic bias

Rushkoff: We humans are things to the IoT

Meaning the Internet of Things. From Rushkoff:

“The algorithms directing these bots and chips patiently try one technique after another to manipulate our behavior until they get the results they have been programmed to deliver. These techniques haven’t all been prewritten by coders. Rather, the algorithms randomly try new combinations of colors, pitches, tones, and phraseology until one works. They then share this information with the other bots on the network for them to try on other humans. Each one of us is not just up against whichever algorithm is attempting to control us, but up against them all. If plants bind energy, animals bind space, and humans bind time, then what do networked algorithms bind? They bind us. On the internet of things, we the people.”

The empty brain

Article by Robert Epstein. He begins by noting the various metaphors we’ve used through the ages to describe the workings of our mind/brain: clay infused with spirit; the hydraulic model; springs and gears; and now the information processor (IP). While Epstein claims we can get to a real model without metaphor, what he offers instead is the embodied mind in direct interaction with the world. But that too is a metaphor, for we cannot escape using them to frame our minds, or anything else for that matter. His bottom line, with which I agree, is that the IP model is outdated: our mind/brains do not process and store information like a computer, and it’s time to move on to the interactive mind/brain/body/environment metaphor, what we could just call the ecological metaphor. As a species we do seem to be making progress in our understanding, and this appears to be our next best guess.

Robert Epstein is a senior research psychologist at the American Institute for Behavioral Research and Technology in California. He is the author of 15 books, and the former editor-in-chief of Psychology Today.

Book: Range: Why Generalists Triumph in a Specialized World

In his new book, Range: Why Generalists Triumph in a Specialized World, David J. Epstein investigates the significant advantages of generalized cognitive skills for success in a complex world. We’ve heard and read much praise for narrow expertise in both humans and AIs (Watson, AlphaGo, etc.). In both humans and AIs, however, narrow, deep expertise does not translate into adaptiveness when reality presents novel challenges, as it constantly does.

As you ingest this highly readable, non-technical book, please add your observations to the comments below. 

AI will never conquer humanity

From this piece located at the publications page of the International Computer Science Institute. “Mathematical models help describe reality, but only by ignoring its inherent integrity.” Computers work on binary logic and the world is full of ‘noise.’ Hence computers, and mathematical models for that matter, can only approximate reality by eliminating that noise.

“Can a bunch of bits represent reality exactly, in a way that can be controlled and predicted indefinitely? The answer is no, because nature is inherently chaotic, while a bunch of bits representing a program can never be so, by definition.”

Which leads us to ask: “Are our mathematical models just a desperate, failed attempt to de-noise an otherwise very confusing, extremely blurred reality?”

So yes, math and computers are quite useful as long as we keep the above in mind instead of assuming they reveal reality as it is. And as long as we also search for that noisy humanity in the spaces between binary logic, which will never be revealed by math or computers alone.
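The claim that “a bunch of bits” cannot track a chaotic reality indefinitely can be illustrated in a few lines. The sketch below (my own illustration, not from the linked piece) iterates the logistic map, a textbook chaotic system, from two starting points that differ by one part in a trillion, roughly the scale of representation error in a finite-precision machine:

```python
# Two trajectories of the chaotic logistic map, x -> r*x*(1-x),
# started from initial conditions differing by one part in 10^12.
r = 4.0  # parameter value in the fully chaotic regime

def logistic_orbit(x0, steps):
    """Iterate the logistic map `steps` times from x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.2, 50)
b = logistic_orbit(0.2 + 1e-12, 50)

# The initially negligible difference grows exponentially each step,
# so after 50 steps the two "predictions" no longer agree at all.
print(abs(a - b))
```

Any finite encoding must round the initial condition somewhere, and in a chaotic system that rounding error compounds until the model and the world part ways, which is the article’s point in miniature.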

Rushkoff: The anti-human religion of Silicon Valley

Underlying our tech vision is a gnostic belief system of leaving the body behind, as if it were an inferior biological system thwarting our evolution. Hence all the projects aimed at uploading our supposed consciousness into a machine. It’s an anti-human and anti-environment religion that has no concern for either, imagining that tech is our ultimate savior.

And ironically enough, it’s a belief system that teamed up with the US human potential movement at Esalen. What started as an embodiment-based human potential program, with practices geared toward integrating our minds with our bodies and the environment, got sidetracked by this glorious evolution beyond all that messy material and biological stuff.

And then there’s this religion’s devil’s bargain with our social media companies, like Facebook and Google, which use tech merely as a means of manipulating us for their own capitalistic purposes. Apparently it has been accepted that there is no alternative to capitalism, since capitalism likewise assumes that humanity is strictly utilitarian and self-interested, mere algorithmic computation driven by an equally algorithmic ‘natural’ selection. If tech can do all that better, then what’s all the fuss?

Lent responds to Harari

Lent makes many of the points we raised in our discussion of Harari’s book Homo Deus. Lent writes:

“Apparently unwittingly, Harari himself perpetuates unacknowledged fictions that he relies on as foundations for his own version of reality. Given his enormous sway as a public intellectual, Harari risks causing considerable harm by perpetuating these fictions. Like the traditional religious dogmas that he mocks, his own implicit stories wield great influence over the global power elite as long as they remain unacknowledged. I invite Harari to examine them here. By recognizing them as the myths they actually are, he could potentially transform his own ability to help shape humanity’s future.”

I will list only the bullet-point fictions below; see the link for the details:

1. Nature is a machine.
2. There is no alternative.
3. Life is meaningless so it’s best to do nothing.
4. Humanity’s future is a spectator sport.

Applying artificial intelligence for social good

This McKinsey article is an excellent overview of this more extensive article (3 MB PDF) enumerating the ways in which varieties of deep learning can improve existence. Worth a look.

The articles cover the following:

  • Mapping AI use cases to domains of social good
  • AI capabilities that can be used for social good
  • Overcoming bottlenecks, especially around data and talent
  • Risks to be managed
  • Scaling up the use of AI for social good

News startups aim to improve public discourse

A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.

Neurocapitalism: Technological Mediation and Vanishing Lines

Open-access book by Giorgio Griziotti is here. A technical book for you techies. The blurb:

“Technological change is ridden with conflicts, bifurcations and unexpected developments. Neurocapitalism takes us on an extraordinarily original journey through the effects that cutting-edge technology has on cultural, anthropological, socio-economic and political dynamics. Today, neurocapitalism shapes the technological production of the commons, transforming them into tools for commercialization, automatic control, and crisis management. But all is not lost: in highlighting the growing role of General Intellect’s autonomous and cooperative production through the development of the commons and alternative and antagonistic uses of new technologies, Giorgio Griziotti proposes new ideas for the organization of the multitudes of the new millennium.”