Category Archives: learning

Book review – Life 3.0: Being Human in the Age of Artificial Intelligence, by Max Tegmark

Max Tegmark’s new book, Life 3.0: Being Human in the Age of Artificial Intelligence, introduces a framework for defining types of life based on the degree of design control that sensing, self-replicating entities have over their own ‘hardware’ (physical forms) and ‘software’ (“all the algorithms and knowledge that you use to process the information from your senses and decide what to do”).

It’s a relatively non-academic read and well worth the effort for anyone interested in the potential to design the next major forms of ‘Life’–forms that transcend many of the physical and cognitive constraints that now have us on the brink of self-destruction. Tegmark’s forecast is optimistic.

Wild systems theory (WST) – context and relationships make reality meaningful

Edward has posted some great thoughts and resources on embodied cognition (EC). I stumbled on some interesting information on a line of thinking within the EC literature. I find contextualist, connectivist approaches compelling in their ability to address complex systems such as life and (possibly) consciousness. Wild systems theory (WST) “conceptualizes organisms as multi-scale self-sustaining embodiments of the phylogenetic, cultural, social, and developmental contexts in which they emerged and in which they sustain themselves. Such self-sustaining embodiments of context are naturally and necessarily about the multi-scale contexts they embody. As a result, meaning (i.e., content) is constitutive of what they are. This approach to content overcomes the computationalist need for representation while simultaneously satisfying the ecological penchant for multi-scale contingent interactions.”

While I find WST fascinating, I’m unclear on whether it has been, or can be, assessed empirically. What do you think? Is WST shackled to philosophy?

Can one person know another’s mental state? Physicalists focus on how each of us develops a theory of mind (TOM) about each of the other people we observe. TOM is a theory because it is based on assumptions we make about others’ mental states by observing their behaviors. It is not based on any direct reading or measurement of internal processes. In its extreme, the physicalist view asserts that subjective experience and consciousness itself are merely emergent epiphenomena and not fundamentally real.

EC theorists often describe emergent or epiphenomenal subjective properties such as emotions and conscious experiences “in terms of complex, multi-scale, causal dynamics among objective phenomena such as neurons, brains, bodies, and worlds.” Emotions, experiences, and meanings are seen to emerge from, be caused by or identical with, or be informational aspects of objective phenomena. Further, many EC proponents regard subjective properties as “logically unnecessary to the scientific description.” Some EC theorists conceive of the non-epiphenomenal reality of experience in a complex-systems framework and define experience in terms of relational properties. In Gibson’s (1966) concept of affordances, organisms perceive behavioral possibilities in other organisms and in their environment. An affordance is a perceived relationship (often in terms of utility), such as how an organism might use something–say a potential mate, prey/food, or a tool. Meaning arises from “bi-directional aboutness” between an organism and what it perceives or interacts with. Meaning is about relationship.

(A very good, easy read on meaning arising from relationships is the book Learning How to Learn, by Novak and Gowin. In short, it’s the connecting/relating words such as is, contains, produces, consumes, etc., that enable meaningful concepts to be created in minds via language that clarifies context.)
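Novak and Gowin’s point–that the connecting words carry the meaning–can be sketched as a tiny labeled graph. The concepts and linking words below are hypothetical illustrations of mine, not examples from their book:

```python
# Toy concept map: concepts are nodes, and the linking/relating words
# ("contains", "produces", ...) label the edges. Each triple is one proposition.
propositions = [
    ("cell", "contains", "nucleus"),
    ("nucleus", "contains", "DNA"),
    ("plant", "produces", "oxygen"),
]

def describe(triples):
    """Render each proposition as a sentence; the linking word supplies the meaning."""
    return [f"{a} {link} {b}" for a, link, b in triples]

for sentence in describe(propositions):
    print(sentence)
```

Strip out the linking words and only a bag of disconnected nouns remains–the relations are what make the concepts meaningful.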

Affordances and relationality at one level of organization and analysis carve out a non-epiphenomenal beachhead but do not banish epiphenomena from that or other levels. There’s a consideration of intrinsic, non-relational properties (perhaps mass) versus relational properties (such as weight). But again, level/scale of analysis matters (“mass emerges from a particle’s interaction with the Higgs field” and is thus relational after all) and some take this line of thinking to a logical end where there is no fundamental reality.

In WST, “all properties are constituted of and by their relations with context. As a result, all properties are inherently meaningful because they are naturally and necessarily about the contexts within which they persist. From this perspective, meaning is ubiquitous. In short, reality is inherently meaningful.”

Jordan, J. S., Cialdella, V. T., Dayer, A., Langley, M. D., & Stillman, Z. (2017). Wild Bodies Don’t Need to Perceive, Detect, Capture, or Create Meaning: They ARE Meaning. Frontiers in Psychology, 8, 1149. Available from: https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01149/full [accessed Nov 09 2017]

BMAI members repository copy (PDF): https://albuquirky.net/download/277/embodied-grounded-cognition/449/wild-systems-theory_bodies-are-meaning.pdf

State of AI progress

An MIT Technology Review article introduces the man responsible for the 30-year-old deep learning approach, explains what deep machine learning is, and questions whether deep learning may be the last significant innovation in the AI field. The article also touches on a potential way forward for developing AIs with qualities more analogous to the human brain’s functioning.

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he designed the first desktop interface back in the early 1970s.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother–an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later, there was another metaphor collapse when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however–that’s another matter altogether–an innovation that multiplied the value I found in computing.

On the other end of the complexity spectrum would be machine-level code. There would be no general computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though this more natural intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
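For the curious, the kind of markup the article alludes to resembles the W3C’s Speech Synthesis Markup Language (SSML), which uses elements like prosody and timed breaks to shape delivery. Here is a minimal sketch that assembles such a snippet; the sentence, rate value, and pause length are my own illustrative choices, not examples from the article:

```python
# Build a minimal SSML fragment of the kind used to make synthesized
# speech sound more natural: a slowed sentence followed by a short pause.
def make_ssml(text: str, rate: str = "medium", pause_ms: int = 300) -> str:
    """Wrap text in SSML that adjusts speaking rate and inserts a pause."""
    return (
        "<speak>"
        f'<prosody rate="{rate}">{text}</prosody>'
        f'<break time="{pause_ms}ms"/>'
        "</speak>"
    )

print(make_ssml("Your meeting starts in ten minutes.", rate="slow"))
```

The human effort the article describes lies in deciding, utterance by utterance, where such pauses and rate changes should go–judgments the synthesis system cannot yet make on its own.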

I suspect a colloquial mode could be more effective in certain kinds of interactions: when attempting to convince a human she’s speaking with another human; in virtual talk therapy; when translating between languages in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; and so on.

In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.

Neuroplasticity at the neuron and synapse level – Neurons sort into functional networks

“Until recently, scientists had thought that most synapses of a similar type and in a similar location in the brain behaved in a similar fashion with respect to how experience induces plasticity,” Friedlander said. “In our work, however, we found dramatic differences in the plasticity response, even between neighboring synapses in response to identical activity experiences.”

“Individual neurons whose synapses are most likely to strengthen in response to a certain experience are more likely to connect to certain partner neurons, while those whose synapses weaken in response to a similar experience are more likely to connect to other partner neurons,” Friedlander said. “The neurons whose synapses do not change at all in response to that same experience are more likely to connect to yet other partner neurons, forming a more stable but non-plastic network.”
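Friedlander’s finding can be caricatured in a toy model: identical activity induces different plasticity at different synapses, and the synapses sort into functional groups by their response type. The three response types and the weight-update rules below are my own illustrative assumptions, not the study’s actual measurements:

```python
# Toy model: the same activity pulse strengthens some synapses, weakens
# others, and leaves a third group unchanged; we then sort by response.
def plasticity_response(kind: str, weight: float, activity: float) -> float:
    """Return the new synaptic weight after an identical activity pulse."""
    if kind == "potentiating":
        return weight + 0.1 * activity  # strengthens with experience
    if kind == "depressing":
        return weight - 0.1 * activity  # weakens with experience
    return weight                       # stable: no plasticity

# Nine synapses, three of each hypothetical response type, all starting equal.
synapses = [{"kind": k, "weight": 1.0}
            for k in ["potentiating", "depressing", "stable"] * 3]

# Apply the identical experience (activity = 1.0) to every synapse.
for s in synapses:
    s["weight"] = plasticity_response(s["kind"], s["weight"], activity=1.0)

# Sort into "functional networks" by how each synapse responded.
networks = {"strengthened": [], "weakened": [], "unchanged": []}
for s in synapses:
    if s["weight"] > 1.0:
        networks["strengthened"].append(s)
    elif s["weight"] < 1.0:
        networks["weakened"].append(s)
    else:
        networks["unchanged"].append(s)

print({name: len(group) for name, group in networks.items()})
# → {'strengthened': 3, 'weakened': 3, 'unchanged': 3}
```

The point of the caricature is the one in the quote: plasticity is not uniform across similar synapses, and the differing responses themselves partition neurons into distinct networks.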

Read more at: https://medicalxpress.com/news/2016-02-scientists-brain-plasticity-assorted-functional.html#jCp