Tag Archives: metaphors

The info processing (IP) metaphor of the brain is wrong

Psychologist Robert Epstein, former editor of Psychology Today, challenges anyone to show the brain processing information or data. The IP metaphor, he says, is so deeply embedded in our thinking about thinking that it prevents us from learning how the brain really works. Epstein also takes on popular luminaries, including Ray Kurzweil and Henry Markram, seeing both as exemplifying the extremes of error the IP metaphor invites, along with the notion that mental experience could persist outside the organic body.

The Empty Brain (Aeon article with audio)

News startups aim to improve public discourse

A Nieman Reports article highlights four startups seeking to improve public discourse. Let's hope efforts along these lines accelerate and succeed in producing positive outcomes.

Book discussion event on embodied cognition

Our discussions all, to some extent, relate to cognition. An important area of inquiry concerns whether some form of physical embodiment is required for a brain to support cognition in general and the self-aware sort of cognition we humans possess.

THE BOOK

Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, by George Lakoff and Mark Johnson. Please note that while the title includes "Philosophy," we are not a philosophy group; the book and discussion will revolve around scientific concepts and implications, not spiritualistic or metaphysical ideas.

Amazon (used copies in the $6 range, including shipping)

eBook (free PDF)

RSVP TO ATTEND

RSVP by email to cogniphile@albuquirky.net if you plan to attend our discussion on the afternoon of Saturday, November 3, 2018.

YOUR PREPARATION

While our group enjoys socializing and will plan other events to that end, this meeting is for focused discussion among people who invest the time in advance to inform themselves on the topic. As a courtesy to those who do their 'homework,' please read and consider Part 1 (the first eight chapters) of the book before the meeting. As you read, jot down your thoughts and questions on the book's claims, supporting evidence, and implications for our core topics: brain, mind, and artificial intelligence. If you are not able to invest this effort prior to the meeting, please do not attend. Thank you for your understanding.

If you are a visual, systematic learner, try creating a concept map of the book's core ideas.

RELATED RESOURCES

Please see the related resource links in the comments to this post. You can also search this site's other relevant posts using the category and tag 'embodied cognition.'

THE LOCATION

The location will be in the vicinity of UNM on Central Ave. When you RSVP to cogniphile@albuquirky.net, you will be sent the address.

A dive into the black waters under the surface of persuasive design

A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes interaction design principles and trends that signal a fundamental shift in the means, deployment, and startling effectiveness of mass persuasion. These mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever end the designers and content providers choose.

The organizer of a $1,700-per-person event convened to show marketers and technicians "how to manipulate people into habitual use of their products" put it baldly:

subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
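
To make the "varying the rewards" mechanism concrete, here is a minimal Python sketch of a variable-ratio reward schedule, the same intermittent-reinforcement pattern behind slot machines and pull-to-refresh feeds. The probability value and function name are illustrative assumptions, not anything taken from the article.

    import random

    def refresh_feed(hit_probability=0.3):
        # Illustrative assumption: roughly one refresh in three pays off.
        # The unpredictability, not the reward itself, is what builds the
        # compulsive checking habit the article describes.
        if random.random() < hit_probability:
            return "novel post"  # intermittent reward
        return None              # nothing new; the user tries again

    # Ten pulls of the lever: payoffs arrive on no fixed schedule.
    print([refresh_feed() for _ in range(10)])

Because the payoff is unpredictable, each empty refresh still primes the next one; a fixed, predictable schedule would extinguish the habit far faster.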

Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.

Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”,  … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.

If you read the article, please comment on any potential future meeting topics you spot. I find it a rich collection of concepts for further exploration.

Check out Ed Berge’s blog

We’ve come to appreciate Ed Berge’s thoughtful posts on consciousness, metaphorical thinking, etc. Check out his fun, informative blog, Proactive Progressive Propagation. (Where I work, that would definitely become ‘P3.’)

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more humanlike speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient with computers. When designing interfaces between humans and machines, should we minimize the demands placed on users to learn about the system they're interacting with? That seems to have been Alan Kay's assumption when he and his colleagues at Xerox PARC developed the desktop interface in the early 1970s.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. As a personal example, someone dear to me grew up helping her mother, an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. "Oh, it's just like in the real world!" (Chalk one up for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one's Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later came another metaphor collapse, when she clicked a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, is another matter altogether: an innovation that multiplied the value I found in computing.

At the other end of the complexity spectrum is machine-level code. There would be no general computing today if we all had to speak to computers in their fundamental language of ones and zeros. That hasn't stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding "speech synthesis markup tags" to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user's intent and emotional state, though the apparent understanding is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
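
For a sense of what this hand-tuning looks like, here is a minimal Python sketch using W3C-standard SSML (Speech Synthesis Markup Language), the kind of markup the article describes. The specific pause length and prosody values are illustrative assumptions, not examples taken from the article.

    # Hypothetical SSML markup: <break> inserts a humanlike pause, and
    # <prosody> slows and lowers the voice. The values are illustrative only.
    ssml = (
        "<speak>"
        "Hmm, <break time='300ms'/> I couldn't find that. "
        "<prosody rate='95%' pitch='-2st'>Want me to try again?</prosody>"
        "</speak>"
    )

A designer passes a string like this to a text-to-speech engine in place of plain text, trading simplicity for a warmer, less robotic delivery.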

I suspect a colloquial mode could be more effective in certain kinds of interactions: deceiving a human into thinking she's speaking with another human; conducting virtual talk therapy; translating between languages in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; and so on.

In conclusion, operating systems, applications, and AIs are not humans. To use more complex automation systems effectively, we will have to meet them farther along the complexity continuum: still far from machine code, but at points that require much more of us as users.