From a piece on the publications page of the International Computer Science Institute: “Mathematical models help describe reality, but only by ignoring its inherent integrity.” Computers run on binary logic, while the world is full of ‘noise.’ Hence computers, and mathematical models for that matter, can only approximate reality by eliminating that noise.
“Can a bunch of bits represent reality exactly, in a way that can be controlled and predicted indefinitely? The answer is no, because nature is inherently chaotic, while a bunch of bits representing a program can never be so, by definition.”
Which leads us to ask: “Are our mathematical models just a desperate, failed attempt to de-noise an otherwise very confusing, extremely blurred reality?”
So yes, math and computers are quite useful as long as we keep the above in mind instead of assuming they reveal reality as it is. And as long as we also search for that noisy humanity in the spaces between binary logic, which will never be revealed by math or computers alone.
Underlying our tech vision is a gnostic belief system of leaving the body behind, as if it were an inferior biological system thwarting our evolution. Hence all the projects for uploading our supposed consciousness into a machine. It’s an anti-human and anti-environment religion that has no concern for either, imagining that tech is our ultimate savior.
And ironically enough, it’s a belief system that teamed up with the US human potential movement at Esalen. What started as an embodiment-based human potential program, with practices geared at integrating our minds with our bodies and the environment, got sidetracked by this glorious evolution beyond all that messy material and biological stuff.
And then there’s the devil’s bargain this religion struck with our social media companies, like Facebook and Google, which use tech merely as a means of manipulating us for their own capitalistic purposes. Apparently it has been accepted that there is no alternative to capitalism, which likewise assumes that humanity is strictly utilitarian and self-interested, our choices being just algorithmic computations determined by an equally algorithmic ‘natural’ selection. And since tech can do all that better than we can, what’s all the fuss?
Lent makes many of the points we had in our discussion of Harari’s book Homo Deus. Lent said:
“Apparently unwittingly, Harari himself perpetuates unacknowledged fictions that he relies on as foundations for his own version of reality. Given his enormous sway as a public intellectual, Harari risks causing considerable harm by perpetuating these fictions. Like the traditional religious dogmas that he mocks, his own implicit stories wield great influence over the global power elite as long as they remain unacknowledged. I invite Harari to examine them here. By recognizing them as the myths they actually are, he could potentially transform his own ability to help shape humanity’s future.”
I will only list the bullet point fictions below. See the link for the details:
1. Nature is a machine.
2. There is no alternative.
3. Life is meaningless so it’s best to do nothing.
4. Humanity’s future is a spectator sport.
A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.
The open-access book by Giorgio Griziotti is here. It’s a technical book for you techies. The blurb:
“Technological change is ridden with conflicts, bifurcations and unexpected developments. Neurocapitalism takes us on an extraordinarily original journey through the effects that cutting-edge technology has on cultural, anthropological, socio-economic and political dynamics. Today, neurocapitalism shapes the technological production of the commons, transforming them into tools for commercialization, automatic control, and crisis management. But all is not lost: in highlighting the growing role of General Intellect’s autonomous and cooperative production through the development of the commons and alternative and antagonistic uses of new technologies, Giorgio Griziotti proposes new ideas for the organization of the multitudes of the new millennium.”
“Its goal: to research and brainstorm new legal and moral rules for artificial intelligence and other technologies built on complex algorithms.[…] ‘Companies are building technology that will have very, very significant impacts on our lives,’ says HLS Clinical Professor Christopher Bavitz, faculty co-director of the Berkman Klein Center and another leader of the AI initiative’s research. ‘They are raising issues that can only be addressed if you have lawyers, computer scientists, ethicists, economists and business folks working together.'”
A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes interaction design principles and trends that signal a fundamental shift in the means, deployment, and startling effectiveness of mass persuasion. These mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever ends the designers and content providers choose.
The organizer of a $1,700-per-person event, convened to show marketers and technicians “how to manipulate people into habitual use of their products,” put it baldly:
subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.
Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.
If you read the article, please comment on any future meeting topics you detect. I find it a vibrant collection of concepts for further exploration.
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
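The amplification effect described above can be made concrete with a small sketch. The numbers and labels here are purely illustrative (not taken from the study): we compare how often “cooking” co-occurs with “woman” in hypothetical training labels versus in a model’s predictions, the same kind of before/after comparison the researchers used.

```python
from collections import Counter

# Hypothetical labeled photo dataset: (agent_gender, activity) pairs.
# A roughly 2:1 skew toward (woman, cooking) stands in for the dataset
# bias described above; the counts are illustrative only.
training_labels = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34

def cooking_gender_ratio(pairs):
    """Fraction of 'cooking' examples whose agent is labeled 'woman'."""
    counts = Counter(gender for gender, activity in pairs
                     if activity == "cooking")
    return counts["woman"] / (counts["woman"] + counts["man"])

# A model trained on skewed data may lean on gender as a shortcut
# feature, predicting "woman" for cooking scenes even more often than
# the training ratio warrants. These predictions are invented to show
# the effect, not real model output.
predictions = [("woman", "cooking")] * 84 + [("man", "cooking")] * 16

train_bias = cooking_gender_ratio(training_labels)   # 0.66
predicted_bias = cooking_gender_ratio(predictions)   # 0.84
print(f"training bias:  {train_bias:.2f}")
print(f"predicted bias: {predicted_bias:.2f}")
print(f"amplification:  {predicted_bias - train_bias:+.2f}")
```

When the predicted ratio exceeds the training ratio, the model hasn’t merely learned the dataset’s bias, it has exaggerated it, which is exactly the amplification the article warns about.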