Category Archives: artificial intelligence

A dive into the black waters under the surface of persuasive design

A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes interaction design principles and trends that signal a fundamental shift in the means, deployment, and startling effectiveness of mass persuasion. These mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever ends the designers and content providers choose.

The organizer of a $1,700-per-person event, convened to show marketers and technicians “how to manipulate people into habitual use of their products,” put it baldly:

subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
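The “varying the rewards” trick the quote describes is, in behavioral terms, a variable-ratio reinforcement schedule: payoffs arrive at unpredictable intervals, which sustains compulsive checking far better than predictable rewards do. A minimal sketch of the idea (the function name and numbers are illustrative, not from the article):

```python
import random

def make_variable_reward(mean_ratio, seed=0):
    """Grant a reward on average once per `mean_ratio` actions, at
    unpredictable intervals -- the variable-ratio schedule that slot
    machines and feed refreshes use to create "a craving"."""
    rng = random.Random(seed)
    def act():
        # Each action independently has a 1/mean_ratio chance of a payoff,
        # so the user can never predict which check will be rewarded.
        return rng.random() < 1.0 / mean_ratio
    return act

act = make_variable_reward(mean_ratio=5)
rewards = sum(act() for _ in range(10_000))
# roughly one reward per five actions, but never on a fixed schedule
```

The unpredictability is the point: a fixed every-fifth-action reward lets the user stop between payoffs, while a variable schedule keeps the next check always potentially the winning one.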

Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.

Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.

If you read the article, please comment on any future meeting topics you spot. I find it a rich collection of concepts for further exploration.

We have the wrong paradigm for the complex adaptive system we are part of

This very rich, conversational thought piece asks if we, as participant designers within a complex adaptive ecology, can envision and act on a better paradigm than the ones that propel us toward mono-currency and monoculture.

We should learn from our history of applying over-reductionist science to society and try to, as Wiener says, “cease to kiss the whip that lashes us.” While it is one of the key drivers of science—to elegantly explain the complex and reduce confusion to understanding—we must also remember what Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” We need to embrace the unknowability—the irreducibility—of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar with.

In order to effectively respond to the significant scientific challenges of our times, I believe we must view the world as many interconnected, complex, self-adaptive systems across scales and dimensions that are unknowable and largely inseparable from the observer and the designer. In other words, we are participants in multiple evolutionary systems with different fitness landscapes at different scales, from our microbes to our individual identities to society and our species. Individuals themselves are systems composed of systems of systems, such as the cells in our bodies that behave more like system-level designers than we do.

Joichi Ito

Book review – Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark

Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence, introduces a framework for defining types of life based on the degree of design control that sensing, self-replicating entities have over their own ‘hardware’ (physical forms) and ‘software’ (“all the algorithms and knowledge that you use to process the information from your senses and decide what to do”).

It’s a relatively non-academic read and well worth the effort for anyone interested in the potential to design the next major forms of ‘Life’ to transcend many of the physical and cognitive constraints that have us now on the brink of self-destruction. Tegmark’s forecast is optimistic.

Your brain on AI-powered, immersive, virtual reality social networks

Kevin Kelly, a co-founder of Wired magazine, forecasts virtual reality (VR) becoming our primary social environment within five years. VR experiences will become increasingly interactive, both physically and socially, and our brains will process VR sensations as real.

The price of this novelty is all your data, historical and biometric, and with it will come more advertising than ever. What begins as a new dimension of fun may be the end of privacy.

AI more advanced than what now keeps people addicted to social media and search platforms will attract and retain social VR participants. How will personal and group cognition and behavior change when VR becomes more compelling than ‘legacy reality’?

See Kelly’s 5-minute talk.

Deep clustering machine learning enables AI to distinguish individual voices in a crowd

An AI system can now isolate an individual’s voice from other environmental noise, including other voices. Such a system has many potential uses, both benign and nefarious. The ability to untangle signals from noise, and to identify which signals come from which sources, is improving rapidly, and the approach should apply to other kinds of signals as well, not only sound.
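Deep clustering works by training a network to embed each time-frequency bin of a mixed-audio spectrogram so that bins dominated by the same speaker land close together; ordinary clustering of those embeddings then yields per-speaker masks. A toy sketch of that final clustering step, with hand-made 2-D points standing in for the learned embeddings (all values here are hypothetical):

```python
import random

def sq_dist(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(points, iters=20, seed=1):
    """Tiny 2-cluster k-means: assign each embedding to the nearest of
    two centroids, recompute the centroids, and repeat."""
    rng = random.Random(seed)
    centers = rng.sample(points, 2)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if sq_dist(p, centers[0]) <= sq_dist(p, centers[1]) else 1
                  for p in points]
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centers[k] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

# Hypothetical embeddings: bins dominated by speaker A cluster near (0, 0),
# bins dominated by speaker B near (5, 5).
bins = [(0.1, 0.2), (0.3, -0.1), (-0.2, 0.0),   # speaker A
        (5.1, 4.9), (4.8, 5.2), (5.0, 5.1)]     # speaker B
labels = kmeans2(bins)
# Bins sharing a label form one speaker's mask over the spectrogram.
```

In the real system the embeddings come from a trained network and there is one point per time-frequency bin of the spectrogram; the masks selected by each cluster are applied to the mixture to reconstruct each speaker’s audio.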

State of AI progress

An MIT Technology Review article introduces the man responsible for the 30-year-old deep learning approach, explains what deep machine learning is, and questions whether deep learning may be the last significant innovation in the AI field. The article also touches on a potential way forward for developing AIs with qualities more analogous to the human brain’s functioning.

Computational grounded cognition

This article first describes the progress in grounded cognition theories, then discusses how they should be applied to robotics and artificial intelligence. Some excerpts:

“Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The development and expression of cognition is constrained by the embodiment of cognitive agents and various contextual factors (physical and social) in which they are immersed. The grounded framework has received numerous empirical confirmations. Still, there are very few explicit computational models that implement grounding in sensory, motor and affective processes as intrinsic to cognition, and demonstrate that grounded theories can mechanistically implement higher cognitive abilities. We propose a new alliance between grounded cognition and computational modeling toward a novel multidisciplinary enterprise: Computational Grounded Cognition. We clarify the defining features of this novel approach and emphasize the importance of using the methodology of Cognitive Robotics, which permits simultaneous consideration of multiple aspects of grounding, embodiment, and situatedness, showing how they constrain the development and expression of cognition.”

“According to grounded theories, cognition is supported by modal representations and associated mechanisms for their processing (e.g., situated simulations), rather than amodal representations, transductions, and abstract rule systems. Recent computational models of sensory processing can be used to study the grounding of internal representations in sensorimotor modalities; for example, generative models show that useful representations can self-organize through unsupervised learning (Hinton, 2007). However, modalities are usually not isolated but form integrated and multimodal assemblies, plausibly in association areas or ‘convergence zones’” (Damasio, 1989; Simmons and Barsalou, 2003).

“An important challenge is explaining how abstract concepts and symbolic capabilities can be constructed from grounded categorical representations, situated simulations and embodied processes. It has been suggested that abstract concepts could be based principally on interoceptive, meta-cognitive and affective states (Barsalou, 2008) and that selective attention and categorical memory integration are essential for creating a symbolic system” (Barsalou, 2003).

Gender role bias in AI algorithms

Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?

Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
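One way to quantify the amplification the excerpt describes is to compare an activity-gender association rate in the training labels with the same rate in the model’s predictions. A sketch with made-up counts (the 67% → 84% gap mirrors the kind of shift reported for “cooking”, but these exact lists are illustrative, not the study’s data):

```python
def association_ratio(pairs, activity, gender):
    """Fraction of examples labeled `activity` that are also labeled `gender`."""
    genders = [g for a, g in pairs if a == activity]
    return genders.count(gender) / len(genders)

# Illustrative counts: a training set where 67 of 100 "cooking" images
# show women, and model predictions where that rises to 84 of 100.
train = [("cooking", "woman")] * 67 + [("cooking", "man")] * 33
preds = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

amplification = (association_ratio(preds, "cooking", "woman")
                 - association_ratio(train, "cooking", "woman"))
# 0.84 - 0.67 = 0.17: the model strengthens the dataset's existing association
```

A model that merely mirrored its training data would score an amplification near zero; a positive gap means the learned association is stronger than the one in the data it was trained on.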