Category Archives: algorithmic bias

A dive into the black waters under the surface of persuasive design

A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes interaction design principles and trends that signal a fundamental shift in the means, deployment, and startling effectiveness of mass persuasion. These mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever ends the designers and content providers choose.

The organizer of a $1,700-per-person event, convened to show marketers and technicians “how to manipulate people into habitual use of their products,” put it baldly:

subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
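The “varying the rewards” technique the quote mentions is a variable-ratio reinforcement schedule. Here is a toy sketch in Python of that pattern (the function name and probability are illustrative, not from the article): an action that pays off only unpredictably, rather than every time, which is the slot-machine dynamic behind compulsive feed-checking.

```python
import random

def check_feed(reward_probability=0.3):
    """One check of the feed; it pays off only sometimes (variable ratio)."""
    return random.random() < reward_probability

random.seed(0)
# Twenty checks: the unpredictable mix of hits and misses is the point.
checks = ["hit" if check_feed() else "miss" for _ in range(20)]
print(" ".join(checks))
```

Because the payoff is unpredictable, there is no obvious stopping point, which is exactly what makes the schedule so effective at holding attention.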

Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.

Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”,  … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.

If you read the article, please comment on any future meeting topics you spot in it. I find it a rich collection of concepts for further exploration.

Gender role bias in AI algorithms

Should it surprise us that human biases find their way into human-designed AI algorithms trained on datasets of human artifacts?

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
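To make “amplified” concrete, here is a minimal sketch (the activity and all counts are made up for illustration) of the kind of measurement behind such findings: compare how skewed an activity’s gender association is in the training labels with how skewed it is in the model’s predictions.

```python
# Sketch of quantifying bias amplification with hypothetical counts.
# A positive difference means the model made the association stronger
# than it was in the training data.

def bias_toward_women(counts):
    """Fraction of images labeled 'woman' for one activity."""
    return counts["woman"] / (counts["woman"] + counts["man"])

# Hypothetical label counts for the activity "cooking".
training_labels = {"woman": 66, "man": 34}    # 66% women in training labels
model_predictions = {"woman": 84, "man": 16}  # model predicts "woman" 84% of the time

b_train = bias_toward_women(training_labels)
b_pred = bias_toward_women(model_predictions)
print(f"training bias:  {b_train:.2f}")
print(f"predicted bias: {b_pred:.2f}")
print(f"amplification:  {b_pred - b_train:+.2f}")  # +0.18: bias amplified
```

In this toy example a 66/34 skew in the labels becomes an 84/16 skew in the predictions: the model has not merely mirrored the bias, it has sharpened it.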

TED Talk and PJW Comment

TED talk of possible interest:

Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about now, before it is too late. Let’s collaborate massively to develop a mission-specific AI system to help us figure out, using emerging gene-editing technologies (e.g., CRISPR), how best to tweak the (most likely species-typical) genes currently constraining our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. This is something natural selection, our past and present psycho-eugenicist, will never do (it cannot), and something our current genetic endowment will never allow cultural processes or social-engineering approaches to accomplish. Purpose-designed AI systems feeding on growing databases of intra-genomic dynamics and gene-environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves, our only hope for survival, in a way that is morally optimal (i.e., fewest mistakes due to unexpected gene-gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side effects) as well as maximally effective and efficient. Come together, teams of AI scientists and geneticists! We need to wrest our collective pan-cultural intrapsychic fate from the dark hands of natural selection, and AI can probably help. END