Category Archives: complexity

Future discussion topic recommendations

Several of us met on Labor Day with the goal of identifying topics for at least five future monthly meetings. (Thanks, Dave N, for hosting!) Being the overachievers we are, we pushed beyond the goal. The resulting topics follow; each will have its own article on this site, where we can begin organizing references for the discussion:

  • sex-related influences on emotional memory
    • gross and subtle brain differences (e.g., “walls of the third ventricle – sexual nuclei”)
    • “Are there gender-based brain differences that influence differences in perceptions and experience?”
    • epigenetic factors (may need an overview of epigenetics)
  • embodied cognition
    • computational grounded cognition (possibly the overview and lead-in topic)
    • neuro-reductionist theory vs. enacted theory of mind
    • “Could embodied cognition influence brain differences?” (Whoever suggested this, please clarify.)
  • brain-gut connection (relates to embodied cognition, but can stand on its own as a topic)
  • behavioral priming (one or multiple discussions)
  • neuroscience of empathy – effects on the brain, including on neuroplasticity
  • comparative effects of various meditative practices on the brain
  • comparative effects of various psychedelics on the brain
  • effects of childhood poverty on the brain

If I missed anything, please edit the list (I used HTML in the ‘Text’ view to get sub-bullets). If you’re worried about the formatting, you can email your edits to and Mark will post your changes.

‘Entangled’ consciousness app approaching release

The Global Consciousness Project, the Institute of Noetic Sciences (IONS, for which I was once Hawaii state coordinator), and Princeton Engineering Anomalies Research (PEAR) are collaborating to release a smartphone app, Entangled, that aims to:

  • Monitor your mind’s influence on your physical environment
  • Let you take part in large-scale consciousness experiments
  • Support ongoing development of a “consciousness technology” platform for developers and artists
  • Monitor global consciousness data in real-time

Before you think I’ve gone off the deep end, let me explain that I gently stepped away from IONS after nearly 20 years because I did not see enough focus on or progress toward their stated goal: scientifically researching consciousness. I fully enjoyed their practice-oriented emphases on intuitive, embodied, mindful living, but while they remained ‘entangled’ in New Age phenomenalism and esoteric speculations, true scientific programs at many universities and research organizations have made steady, sometimes frustratingly slow progress (which is how science typically works). So, please don’t take this post as a tacit endorsement of any of the sponsoring organizations. They each raise interesting questions and do some work of scientific merit or promise, but (in my view) if your interest is in verifiable, repeatable, causally intelligible phenomena, you must stay vigilant against the unscientific chaff.

That said, the spike in non-random streams in random number generators immediately prior to the 9-11 atrocity remains one of the very few well-documented phenomena that could be taken to imply a correlation between a specific objective event and human transpersonal consciousness. In the view of the Global Consciousness Project, by collecting large samples of the right sorts of data, they can test their hypothesis that “Coherent consciousness creates order in the world. Subtle interactions link us with each other and the Earth.” As I understand it, they are extrapolating to the transpersonal level how an individual brain achieves coherent, self-aware states. Also, they would say we’re aware of the apparent precognitive 9-11 phenomenon because someone was collecting the relevant data that could then be recognized as correlated. The Entangled app aims to collect more of such data while also providing users real- or near-real-time feedback.
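The kind of deviation at issue can be made concrete with a toy statistic: score a bitstream from a random number generator against the fair-coin expectation. A minimal Python sketch (the z-score is a standard statistic; treating a single stream this way is my simplification of the GCP’s actual multi-device network analysis):

```python
import math

def z_score(bits):
    """Standard score of the ones-count against a fair-coin null hypothesis.

    For n fair bits, the expected ones-count is n/2 with standard
    deviation sqrt(n)/2; large |z| suggests a non-random stream.
    """
    n = len(bits)
    ones = sum(bits)
    expected = n / 2
    std = math.sqrt(n) / 2
    return (ones - expected) / std

# A perfectly balanced stream shows no deviation.
print(z_score([0, 1] * 5000))              # → 0.0

# A heavily biased stream stands far outside chance expectation.
print(z_score([1] * 6000 + [0] * 4000))    # → 20.0
```

The GCP aggregates such statistics across many hardware RNGs over time; a sustained, correlated deviation across independent devices is what their hypothesis predicts around globally significant events.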

If truly well-designed scientific research programs can show significant evidence of direct, entanglement-like correlations between objectively observable phenomena and consciousness (shown in brain functioning), I’ll be excited to learn about it. I think this is a monumental challenge.

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he designed the first desktop interface back in 1970.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother, an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear: “Oh, it’s just like in the real world!” (Chalk one up for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later came another metaphor collapse, when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, was another matter altogether: an innovation that multiplied the value I found in computing.

At the other end of the complexity spectrum is machine-level code. There would be no general computing today if we all had to speak to computers in their fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though this more natural intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human–machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
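The “speech synthesis markup tags” in question are most likely SSML, the W3C standard for annotating text-to-speech output. A fragment like the following shows the idea (the element names come from the SSML specification; the sentence itself is my illustration):

```xml
<speak>
  <!-- Slow the delivery slightly and raise pitch for a warmer tone -->
  <prosody rate="95%" pitch="+2st">Well,</prosody>
  <!-- A short pause mimics natural conversational rhythm -->
  <break time="300ms"/>
  I <emphasis level="moderate">think</emphasis> I found what you were looking for.
</speak>
```

None of this markup reflects any understanding by the system; the pauses and emphasis are hand-placed by a human author, which is exactly the illusion noted above.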

I suspect a colloquial mode could be more effective in certain kinds of interactions: when attempting to deceive a human into thinking she’s speaking with another human; in virtual talk therapy; when translating between languages in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; and so on.

In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.

Mathematical field of topology reveals importance of ‘holes in brain’

New Scientist article: Applying the mathematical field of topology to brain science suggests gaps in densely connected brain regions serve essential cognitive functions. Newly discovered densely connected neural groups are characterized by a gap in the center, with one edge of the ring (cycle) being very thin. It’s speculated that this architecture evolved to enable the brain to better time and sequence the integration of information from different functional areas into a coherent pattern.
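The topological notion in play, a “hole” as a cycle that is not filled in, has a simple graph-level analogue: the number of independent cycles of a graph (its first Betti number) is E − V + C, where C counts connected components. A toy computation in pure Python (illustrative only; the actual study applied algebraic topology to far richer structures than bare graphs):

```python
def independent_cycles(num_vertices, edges):
    """First Betti number b1 = E - V + C: the count of independent cycles ('holes')."""
    parent = list(range(num_vertices))

    def find(x):
        # Union-find with path halving, used to count connected components.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

# A 6-node ring, like the gapped neural groups described above, encloses one hole.
ring = [(i, (i + 1) % 6) for i in range(6)]
print(independent_cycles(6, ring))   # → 1

# A simple path (a tree) encloses nothing.
path = [(0, 1), (1, 2), (2, 3)]
print(independent_cycles(4, path))   # → 0
```

The researchers’ point is that such cavities are not wiring accidents: their persistence across densely connected regions suggests they do functional work.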

Aspects of the findings appear to support Edelman’s and Tononi’s (2000, p. 83) theory of neuronal group selection (TNGS, aka neural Darwinism).

Edelman, G.M. and Tononi, G. (2000). A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.

AI Creativity

Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity is unclear. Current systems can imitate existing artworks, but cannot generate truly original works. Human prompting and configuration are required.

Google’s Magenta project’s neural network learned from 4,500 pieces of music before creating the following simple tune (drum track overlaid by a human):

[Embedded audio: the Magenta-generated tune]

Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”

There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.

Original article:

See also:

TED Talk and PJW Comment

TED talk of possible interest:

Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to develop a very mission-specific AI system to help us figure out, using emerging genetic editing technologies (e.g., CRISPR), how best to tweak the (most likely species-typical) genes currently constraining our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. This is something that natural selection, our past and current psycho-eugenicist, will never do (it cannot), and our current genetic endowment will never allow cultural processes or social-engineering approaches to adequately transform us. Purpose-designed AI systems feeding off growing databases of intra-genomic dynamics and gene–environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival, in a morally optimal way (i.e., fewest mistakes due to unexpected gene–gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side effects) as well as a maximally effective and efficient one. Come together, teams of AI scientists and geneticists! We need to grab our collective pan-cultural intrapsychic fate away from the dark hands of natural selection, and AI can probably help. END