All posts by Mark H

About Mark H

Information technologist, knowledge management expert, and writer. Academic background in knowledge management, social and natural sciences, information technologies, learning, educational technologies, and philosophy. Married with one adult child who's married and has a teenage daughter.

Request for topic categories hierarchies

BMAI members,

I’m integrating a file-sharing capability into this site. For both the file library and posts, I would like to implement a hierarchy of topical categories. A structured set of terms (a taxonomy) will make it easier for us to categorize new content and find existing content. If you are aware of existing taxonomies we might borrow from, please provide links in comments to this post. I propose we start with a relatively high-level taxonomy of categories (limited to two or three levels) and use less-formal tags for highly specific and infrequently used labels. If we need to amend or grow the taxonomy of categories later, we can easily do so.

In case you were not aware, web content platforms like the one this site is built on (WordPress) use two methods for labeling and organizing content items.

The more formal method is a hierarchy of pre-determined categories. When creating posts or uploading files or media, authors select relevant categories from a list. A category hierarchy might include the following, for example:

  • biology
    • genetics
      • epigenetics
      • genetic engineering
      • inheritance
    • evolution
      • group selection
      • natural selection

The content author could choose any or all of the relevant categories but usually would select at least the lowest (most deeply nested) category in the hierarchy. Once content is associated with a category, search tools and grouped, sorted, and filtered views can improve the findability of topical content.

The informal method is tagging (also called folksonomy). Authors associate terms with their content in a more ad hoc way. Tags usually display under a web article’s title and in interactive tag clouds like the one on the right side of our site’s pages.
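To make the two methods concrete, here is a hypothetical sketch (not WordPress's actual implementation) of a category hierarchy represented as nested dictionaries, alongside free-form tags; the post titles, tags, and helper function are invented for illustration:

```python
# Hypothetical sketch: a category hierarchy as nested dicts, plus free-form
# tags, showing how a hierarchy supports structured queries.

CATEGORIES = {
    "biology": {
        "genetics": {"epigenetics": {}, "genetic engineering": {}, "inheritance": {}},
        "evolution": {"group selection": {}, "natural selection": {}},
    }
}

def path_to(term, tree, trail=()):
    """Return the full category path to `term`, e.g. biology > genetics > epigenetics."""
    for name, children in tree.items():
        here = trail + (name,)
        if name == term:
            return here
        found = path_to(term, children, here)
        if found:
            return found
    return None

# Invented example posts: a formal category plus informal tags on each.
posts = [
    {"title": "CRISPR basics", "category": "genetic engineering", "tags": {"CRISPR", "ethics"}},
    {"title": "Kin vs. group selection", "category": "group selection", "tags": {"altruism"}},
]

# Because categories nest, a query can find posts anywhere under "genetics" --
# something flat tags alone cannot express.
under_genetics = [p["title"] for p in posts
                  if "genetics" in (path_to(p["category"], CATEGORIES) or ())]
```

The design point is that the hierarchy lets a reader browse from broad to narrow, while tags remain a flat, ad hoc vocabulary for the long tail of specific labels.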

Some taxonomies we could consider:

Thanks in advance for your suggestions.

Future discussion topic recommendations

Several of us met on Labor Day with the goal of identifying topics for at least five future monthly meetings. (Thanks, Dave N, for hosting!) Being the overachievers we are, we pushed beyond the goal. Following are the resulting topics, each of which will have its own article on this site where we can begin organizing references for the discussion:

  • sex-related influences on emotional memory
    • gross and subtle brain differences (e.g., “walls of the third ventricle – sexual nuclei”)
    • “Are there gender-based brain differences that influence differences in perceptions and experience?”
    • epigenetic factors (may need an overview of epigenetics)
  • embodied cognition
    • computational grounded cognition (possibly the overview and lead-in topic)
    • neuro-reductionist theory vs. enacted theory of mind
    • “Could embodied cognition influence brain differences?” (Whoever suggested this, please clarify.)
  • brain-gut connection (relates to embodied cognition, but can stand on its own as a topic)
  • behavioral priming (one or multiple discussions)
  • neuroscience of empathy – effects on the brain, including on neuroplasticity
  • comparative effects of various meditative practices on the brain
  • comparative effects of various psychedelics on the brain
  • effects of childhood poverty on the brain

If I missed anything, please edit the list (I used HTML in the ‘Text’ view to get sub-bullets). If you’re worried about the formatting, you can email your edits to Mark, who will post your changes.

Gender role bias in AI algorithms

Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?

Machine-learning software trained on such datasets didn’t just mirror those biases; it amplified them. If a photo set generally associated women with cooking, software trained on those photos and their labels created an even stronger association.
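A toy illustration (not the cited study's actual method, and with invented numbers) of how amplification can happen: a model that simply predicts the majority label for each activity turns a partial correlation into a total one.

```python
# Toy illustration of bias amplification: a majority-label predictor
# turns a 66% association in the training data into a 100% association
# in its predictions. The 66/34 split is invented for the example.
from collections import Counter

train = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# The "model": always predict the most common gender label for cooking photos.
majority = Counter(gender for _, gender in train).most_common(1)[0][0]

# At prediction time, every cooking photo gets the majority label.
predictions = [majority for _ in range(100)]
amplified_rate = predictions.count("woman") / len(predictions)  # 1.0, up from 0.66
```

Real models are subtler than a majority vote, but the pressure is the same: optimizing accuracy against skewed data rewards leaning into the skew.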

‘Entangled’ consciousness app approaching release

The Global Consciousness Project, the Institute of Noetic Sciences (IONS, for which I was once Hawaii state coordinator), and Princeton Engineering Anomalies Research (PEAR) are collaborating to release a smartphone app, Entangled, that aims to

  • Monitor your mind’s influence on your physical environment
  • Let you take part in large-scale consciousness experiments
  • Support ongoing development of a “consciousness technology” platform for developers and artists
  • Monitor global consciousness data in real-time

Before you think I’ve gone off the deep end, let me explain that I gently stepped away from IONS after nearly 20 years because I did not see enough focus on or progress toward their stated goal—scientifically researching consciousness. I fully enjoyed their practice-oriented emphases on intuitive, embodied, mindful living, but while they remained ‘entangled’ in New Age phenomenalism and esoteric speculations, true scientific programs at many universities and research organizations have made steady, sometimes frustratingly slow progress (which is how science typically works). So, please don’t take this post as a tacit endorsement of any of the sponsoring organizations. They each raise interesting questions and do some work of scientific merit or promise, but (in my view) if your interest is in verifiable, repeatable, causally intelligible phenomena, you must stay vigilant against the unscientific chaff.

That said, the spike in non-random output from random number generators immediately prior to the 9-11 atrocity remains one of the very few well-documented phenomena that could be taken to imply a correlation between a specific objective event and human transpersonal consciousness. In the view of the Global Consciousness Project, by collecting large samples of the right sorts of data, they can test their hypothesis that “Coherent consciousness creates order in the world. Subtle interactions link us with each other and the Earth.” As I understand it, they are extrapolating to the transpersonal level how an individual brain achieves coherent, self-aware states. Also, they would say we’re aware of the apparent precognitive 9-11 phenomenon because someone was collecting the relevant data that could then be recognized as correlated. The Entangled app aims to collect more of such data while also providing users real- or near-real-time feedback.
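For readers curious what "testing for non-randomness" even means here, the following is a minimal sketch of the general kind of statistic such projects compute — pooling per-device z-scores with Stouffer's method — not the Global Consciousness Project's actual pipeline, and with simulated rather than real device data:

```python
# Hedged sketch: detecting deviation from chance across a network of
# random bit generators. Device counts and stream lengths are invented;
# the data here is ordinary pseudorandom bits, so no anomaly is expected.
import math
import random

random.seed(42)
N_BITS = 10_000  # trials per device; each fair bit has mean 0.5, variance 0.25

def device_z(bits):
    """Z-score of a bit stream's count of ones against the fair-coin null."""
    n = len(bits)
    return (sum(bits) - 0.5 * n) / math.sqrt(0.25 * n)

streams = [[random.getrandbits(1) for _ in range(N_BITS)] for _ in range(20)]
zs = [device_z(s) for s in streams]

# Stouffer's method: combine independent z-scores into one network-wide score.
stouffer_z = sum(zs) / math.sqrt(len(zs))
# Under the null hypothesis stouffer_z ~ N(0, 1); a sustained |Z| of 3 or
# more across many devices is the kind of result such projects report.
```

The hard part, of course, is not the arithmetic but ruling out mundane explanations (hardware drift, selection effects, multiple comparisons) before attributing any deviation to consciousness.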

If truly well-designed scientific research programs can show significant evidence of direct, entanglement-like correlations between objectively observable phenomena and consciousness (shown in brain functioning), I’ll be excited to learn about it. I think this is a monumental challenge.

Giant neuron found encircling and interconnecting mouse brain

A neuron that encircles the mouse brain emanates from the claustrum (a structure hypothesized to act as an on/off switch for awareness) and has dense links with both brain hemispheres. Scientists including Francis Crick and Christof Koch have speculated that the claustrum may play a role in enabling conscious thought. (Crick and Koch academic article)

We’ve frequently discussed how self-aware consciousness likely arises not from any single brain structure or signal, but from complex, recursive (reentrant), synchronized signaling among many structures organized into functional regions. (Did I get close to accurate there?) That a giant neuron provides another connection path among such regions can be taken to align with the reentrant signaling and coordination view of consciousness (à la Edelman and Tononi).

Embodied consciousness and the Flow Genome Project

In line with our July joint meeting with the NM Tech Council, I’m reading a fascinating book (Stealing Fire) on the variety of ways humans can experience states of flow (optimal states of consciousness and performance). The authors, Steven Kotler and Jamie Wheal, explain the significance of flow and introduce their Flow Dojo concept in the videos linked below. Methods for achieving flow are often grouped under the consciousness-hacking movement, also called brain hacking.

What is Flow (6+ minutes)

The Flow Dojo (4+ minutes)

All Flow Genome Project videos

Long (1 hour) interview by Jason Silva follows:

Computer metaphor not accurate for brain’s embodied cognition

It’s common for brain functions to be described in terms of digital computing, but this metaphor does not hold up in brain research. Unlike computers, in which hardware and software are separate, organic brains’ structures embody memories and brain functions. Form and function are entangled.

Rather than finding brains to work like computers, we are beginning to design computers (artificial intelligence systems) to work more like brains.

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

A linked article considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he helped design the first desktop interface at Xerox PARC in the early 1970s.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother, an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one up for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later, there was another metaphor collapse when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, is another matter altogether: an innovation that multiplied the value I found in computing.

On the other end of the complexity spectrum would be machine-level code. There would be no general computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity: the human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced in the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to cause synthesized voices of AI systems to sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though this more natural intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
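The markup tags the article alludes to are most likely SSML, the W3C Speech Synthesis Markup Language. As a hedged illustration (the phrasing and tuning values are invented), a hand-tuned response might be built like this — the inserted pauses and prosody shifts are exactly the hand-coded naturalness being described:

```python
# Illustrative SSML fragment (W3C Speech Synthesis Markup Language),
# built as a Python string. The filler word, pause length, and prosody
# values are invented examples of hand-tuning a synthetic voice.
ssml = (
    "<speak>"
    'Hmm, <break time="300ms"/> let me check that for you. '
    '<prosody rate="95%" pitch="-1st">One moment, please.</prosody>'
    "</speak>"
)
```

Note that none of this markup reflects any understanding by the system; it is presentation-layer polish applied on top of whatever the AI actually inferred.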

I suspect a colloquial mode could be more effective in certain kinds of interactions: deceiving a human into thinking she’s speaking with another human, virtual talk therapy, or translating from one language to another in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion.

In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.