
Computer metaphor not accurate for brain’s embodied cognition

It’s common for brain functions to be described in terms of digital computing, but this metaphor does not hold up in brain research. Unlike computers, in which hardware and software are separate, an organic brain’s physical structures embody both its memories and its functions. Form and function are entangled.

Rather than finding brains to work like computers, we are beginning to design computers–artificial intelligence systems–to work more like brains. 

https://www.wired.com/story/tech-metaphors-are-holding-back-brain-research/ 

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he designed the first desktop interface back in 1970.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother–an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later, there was another metaphor collapse when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however–that’s another matter altogether–an innovation that multiplied the value I found in computing.

On the other end of the complexity spectrum would be machine-level code. There would be no general computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity: the human brain organizes concepts by semantic association, so a well-chosen metaphor lets users map a new system onto what they already know. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though that understanding is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
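
The article doesn’t spell out the mechanics, but the standard vehicle for such hand-coded tags is the W3C’s Speech Synthesis Markup Language (SSML). Below is a minimal illustrative sketch in Python; the markup is genuine SSML, while the synthesize() call is a hypothetical stand-in for whatever TTS engine consumes it.

    # Hand-authored SSML: pauses, pitch shifts, and emphasis that make a
    # synthetic voice sound more conversational (and more "understanding"
    # than it actually is).
    ssml = """
    <speak>
      Well, <break time="300ms"/> let me think about that.
      <prosody rate="95%" pitch="-2st">Hmm.</prosody>
      I <emphasis level="moderate">do</emphasis> have an idea.
    </speak>
    """
    print(ssml)  # in practice, pass to a TTS engine, e.g., synthesize(ssml)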

I suspect a colloquial mode could be more effective in certain kinds of interactions: deceiving a human into thinking she’s speaking with another human; virtual talk therapy; translating between languages in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; and the like.

In conclusion, operating systems, applications, and AIs are not humans. To use complex automation systems effectively, we will have to meet them farther along the complexity continuum–still far from machine code, but at points that demand much more of us as users.

Excellent article on the history and recent advances in AI

This NY Times article is worth your time if you are interested in AI–especially if you are still under the impression that AI has ossified or lost its way.

AI Creativity

Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity is unclear. Current systems can imitate existing artworks, but cannot generate truly original works. Human prompting and configuration are required.

Google’s Magenta project’s neural network learned from 4,500 pieces of music before creating the following simple tune (drum track overlaid by a human):

[Audio clip embedded in the original post.]
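
For the technically curious, here is a minimal sketch of the underlying idea: treat a melody as a sequence of note tokens, train a recurrent network to predict the next note, then sample from it to generate a new tune. This is not Magenta’s actual code; it assumes PyTorch, a toy MIDI-pitch vocabulary, and illustrative layer sizes.

    # Illustrative next-note model (not Magenta's code): learn which note
    # tends to follow a given context, then sample new notes one at a time.
    import torch
    import torch.nn as nn

    VOCAB = 128  # assume MIDI pitch numbers as the token vocabulary

    class MelodyLSTM(nn.Module):
        def __init__(self, vocab=VOCAB, embed=64, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab, embed)
            self.lstm = nn.LSTM(embed, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab)

        def forward(self, tokens, state=None):
            out, state = self.lstm(self.embed(tokens), state)
            return self.head(out), state

    def sample(model, seed, length=32, temperature=1.0):
        """Generate `length` new notes by repeatedly sampling the model."""
        tokens, state = list(seed), None
        inp = torch.tensor([tokens])
        with torch.no_grad():
            for _ in range(length):
                logits, state = model(inp, state)
                probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
                nxt = torch.multinomial(probs, 1).item()
                tokens.append(nxt)
                inp = torch.tensor([[nxt]])
        return tokens

Trained on thousands of melodies, a model like this captures the statistical regularities of its corpus; whether that constitutes creativity is exactly the question this post raises.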

Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”

There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.

Original article: https://www.technologyreview.com/s/601642/ok-computer-write-me-a-song/

See also: https://www.technologyreview.com/s/600762/robot-art-raises-questions-about-human-creativity/

TED Talk and PJW Comment

TED talk of possible interest:

Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about now, before it is too late. Let’s massively collaborate to develop a mission-specific AI system to help us figure out, using emerging genetic-editing technologies (e.g., CRISPR), how best to tweak the species-typical genes that currently constrain our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. This is something that natural selection, our past and current psycho-eugenicist, will never do (it cannot), and something that our current genetic endowment will never allow cultural processes or social-engineering approaches to adequately accomplish. Purpose-designed AI systems feeding on growing databases of intra-genomic dynamics and gene-environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves–the only hope for our survival–in a morally optimal way (i.e., fewest mistakes due to unexpected gene-gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side effects) as well as in a maximally effective and efficient way. Come together, teams of AI scientists and geneticists! We need to take our collective pan-cultural intrapsychic fate out of the dark hands of natural selection, and AI can probably help. END

Metacognition, known unknowns, and emergence of reflective identity

Paul Watson asks:

Will decent AI gain a sense of identity, e.g., by realizing what it knows and does not know? And perhaps valuing the former, and maybe (optimistically?) developing a sense of wonder in connection with the latter? Such wonder could lead to an intrinsic desire to preserve the conditions that enable continued learning. Anyway, the answer is Yes, I think, as I tried arguing last night.

Paul suggests this article about macaque metacognition may be relevant.

Relatedly, Paul says:

  1. A search for knowledge cannot proceed without a sense of what is known and unknown by the “self.” One must reach outside the self for most new knowledge. New knowledge can also be created internally, once you have a rich model of reality, but there too it helps to know that you are creating new associations inside yourself, and to question whether outside knowledge should be sought to test tentative internally generated conclusions.
  2. Self / Other is perhaps the most basic ontological category. Bacteria have it. Anything with a semipermeable membrane around it — a “filter” — has it. One cannot seek knowledge without at least an implicit sense that one is searching for information outside oneself. In a highly intelligent being, how long would that sense remain merely implicit?
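
A concrete, if modest, analogue already exists in today’s systems: a model can report how uncertain its own predictions are, which is a crude form of knowing what it does not know. A minimal sketch (illustrative Python, with an arbitrary confidence threshold):

    # Toy "metacognition": flag inputs where a classifier's predictive
    # distribution has high entropy (i.e., the model is not confident).
    import numpy as np

    def entropy_bits(probs):
        """Shannon entropy of a probability vector, in bits."""
        p = np.asarray(probs, dtype=float)
        return float(-(p * np.log2(p + 1e-12)).sum())

    def self_report(probs, threshold=1.0):
        """Crude metacognitive report: 'known' only if the model is confident."""
        return "known" if entropy_bits(probs) < threshold else "unknown"

    print(self_report([0.97, 0.01, 0.01, 0.01]))  # -> known
    print(self_report([0.25, 0.25, 0.25, 0.25]))  # -> unknown (maximum entropy)

Whether such a signal could ever ground a genuine sense of wonder is, of course, the open question.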

What do you think?

[Image: all possible minds]

18 October meeting topic – General AI: Opportunities and Risks

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that there is no doubt humans will pursue AI with increasing determination and resources. The potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical.

Specialized (narrow and shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets to include only information directly related to the goal. For example, a driving AI’s utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed, staying within its lane, complying with traffic signs and signals, avoiding collisions, etc.).
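
As an illustration, here is what a hand-specified utility function for a driving agent might look like. The state fields and weights below are hypothetical, chosen only to show how human-curated goals get encoded.

    # Hypothetical utility (reward) function for a driving AI. Humans choose
    # the terms and weights; the system merely optimizes the resulting score.
    def driving_utility(state):
        score = 0.0
        score += 10.0 if state["reached_destination"] else 0.0
        score -= 1.0 * abs(state["speed"] - state["speed_limit"])  # speed deviation
        score -= 2.0 * abs(state["lane_offset_m"])                 # drift from lane center
        score -= 5.0 * state["signals_violated"]                   # signs and signals
        score -= 1000.0 * state["collisions"]                      # collisions dominate
        return score

    example = {"reached_destination": False, "speed": 45, "speed_limit": 40,
               "lane_offset_m": 0.3, "signals_violated": 0, "collisions": 0}
    print(driving_utility(example))  # -> -5.6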

Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
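
The canonical technique behind such game-playing agents is reinforcement learning. DeepMind’s Atari systems use deep networks over raw pixels; the tabular sketch below (with a hypothetical env interface) shows the same principle in miniature: the only human-supplied goal is the score.

    # Minimal Q-learning sketch: learn action values from score feedback
    # alone. `env` is a hypothetical interface with reset(), step(action)
    # returning (next_state, reward, done), and a list of legal actions.
    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        Q = defaultdict(float)  # (state, action) -> estimated value
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # Explore occasionally; otherwise exploit current estimates.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                best_next = max(Q[(next_state, a)] for a in env.actions)
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q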

Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity–a sort of knowledge horizon we’ll not be able to cross. The concerns arise from our uncertainty that superintelligent AIs will value us or our human objectives or–if they do value us–that they will be able to translate that into actions that do not degrade our survival or quality of existence.

Multimedia Resources

• Demis Hassabis on Google DeepMind and AGI (video, 14:05; best content starts at 3:40)

• Google DeepMind (AlphaGo) AGI (video, 13:44)

• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence

Print Resources

• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials

• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote deeper discussion. Those who want to attend but are not among the first 10 may elect to join the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No,” which allows the next person on the waiting list to attend. If the topic attracts a large waiting list, we may schedule an additional discussion.

Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.