This article is relevant to our recent discussions and to Zak Stein's suggestion (see Edward's recent post) that great destabilizing events open gaps in which new structures can supplant older, disintegrating systems, with all the inherent risks and opportunities.
In his new book, Range: Why Generalists Triumph in a Specialized World, David J. Epstein investigates the significant advantages of generalized cognitive skills for success in a complex world. We have heard and read much praise for narrow expertise in both humans and AIs (Watson, AlphaGo, etc.). In both humans and AIs, however, narrow-and-deep expertise does not translate into adaptiveness when reality presents novel challenges, as it constantly does.
As you ingest this highly readable, non-technical book, please add your observations to the comments below.
Here’s an interesting infographic of the main concepts and thinkers in complexity science across time. Notice S. Kauffman is slotted into the 1980s column, suggesting the graphic depicts when influential thinkers first made their marks.
If you are familiar with complex systems theorist Dr. Stuart Kauffman’s ideas you know he covers a broad range of disciplines and concepts, many in considerable depth, and with a keen eye for isomorphic and integrative principles. If you peruse some of his writings and other communications, please share with us how you see Kauffman’s ideas informing our focal interests: brain, mind, intelligence (organic and inorganic), and self-aware consciousness.
Do you find Kauffman’s ideas well supported by empirical research? Which are more scientific and which, if any, more philosophical? What intrigues, provokes, or inspires you? Do any of his perspectives or claims help you better orient or understand your own interests in our focal topics?
Following are a few reference links to get the conversation going. Please add your own in the comments to this post. If you are a member and have a lot to say on a related topic, please create a new post, tag it with ‘Stuart Kauffman,’ and create a link to your post in the comments to this post.
Misleading and sensationalist news personalities have ceased to be noteworthy; they are the norm in American mainstream media. Interviewers strive to oversimplify and reshape guests’ messages, tactics that interviewees who are good communicators can cast into sharp relief. Experts tend to present information in systemic, relational, and process terms that are no longer welcome in, or compatible with, the aims of popular media outlets.
A fascinating article in The Atlantic not only surfaces these tactics (which may have become habits more than deliberate interviewing methods) but highlights the challenge any expert or systems thinker faces when attempting to convey concepts of any complexity or nuance.
I also found the interviewee’s (a sociologist) points very interesting in themselves. For example,
Peterson (expert): There’s this idea that hierarchical structures are a sociological construct of the Western patriarchy. And that is so untrue that it’s almost unbelievable. I use the lobster as an example: We diverged from lobsters in evolutionary history about 350 million years ago. And lobsters exist in hierarchies. They have a nervous system attuned to the hierarchy. And that nervous system runs on serotonin just like ours. The nervous system of the lobster and the human being is so similar that anti-depressants work on lobsters. And it’s part of my attempt to demonstrate that the idea of hierarchy has absolutely nothing to do with sociocultural construction, which it doesn’t.
Newman (journalist): Let me get this straight. You’re saying that we should organize our societies along the lines of the lobsters?
It would be funny as an SNL skit, but as a supposed demonstration of professional journalism, it is a sad commentary on the state of affairs.
More in line with this group’s focus is Peterson’s point on the evolutionary reality of the hierarchical organization of species, including humans. Of course, this was not a moral or political statement, but a reference to neurochemical bases for perceptions and behaviors.
I appreciate that in our discussions we can press into more nuanced conceptual territories than Ms. Newman was willing to allow Dr. Peterson.
This very rich, conversational thought piece asks if we, as participant designers within a complex adaptive ecology, can envision and act on a better paradigm than the ones that propel us toward mono-currency and monoculture.
We should learn from our history of applying over-reductionist science to society and try to, as Wiener says, “cease to kiss the whip that lashes us.” While it is one of the key drivers of science—to elegantly explain the complex and reduce confusion to understanding—we must also remember what Albert Einstein said, “Everything should be made as simple as possible, but no simpler.” We need to embrace the unknowability—the irreducibility—of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar with.
In order to effectively respond to the significant scientific challenges of our times, I believe we must view the world as many interconnected, complex, self-adaptive systems across scales and dimensions that are unknowable and largely inseparable from the observer and the designer. In other words, we are participants in multiple evolutionary systems with different fitness landscapes at different scales, from our microbes to our individual identities to society and our species. Individuals themselves are systems composed of systems of systems, such as the cells in our bodies that behave more like system-level designers than we do.
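The idea of "different fitness landscapes at different scales" can be made concrete with Kauffman's own NK model, in which each of N binary loci contributes to fitness jointly with K others, and increasing K makes the landscape more rugged. The sketch below is a minimal toy implementation for illustration only; the function names and parameter choices are my own, not drawn from any text cited here.

```python
import random

def make_nk_landscape(n, k, seed=0):
    """Random NK landscape (after Kauffman): locus i's fitness
    contribution depends on its own allele plus its K neighbors."""
    rng = random.Random(seed)
    # Each locus interacts with itself and the next K loci (circular).
    neighbors = [[(i + j) % n for j in range(k + 1)] for i in range(n)]
    tables = [{} for _ in range(n)]  # lazily filled contribution tables

    def fitness(genome):
        total = 0.0
        for i in range(n):
            state = tuple(genome[j] for j in neighbors[i])
            if state not in tables[i]:
                tables[i][state] = rng.random()  # random value in [0, 1)
            total += tables[i][state]
        return total / n  # mean of per-locus contributions

    return fitness

# Adaptive walk: flip one bit at a time, keep improvements, stop at a
# local peak. Higher K yields a more rugged landscape with more,
# lower local peaks on which such walks get stuck.
f = make_nk_landscape(n=10, k=2, seed=42)
genome = [random.Random(1).randint(0, 1) for _ in range(10)]
improved = True
while improved:
    improved = False
    for i in range(10):
        flipped = genome[:]
        flipped[i] ^= 1
        if f(flipped) > f(genome):
            genome, improved = flipped, True
print(round(f(genome), 3))  # fitness of the local peak reached
```

The walk terminates precisely because each accepted flip strictly increases fitness over a finite space of genomes, which is the sense in which an evolving system at one scale "settles" on a landscape shaped by interactions at another.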
Max Tegmark’s book, Life 3.0: Being Human in the Age of Artificial Intelligence, introduces a framework for defining types of life based on the degree of design control that sensing, self-replicating entities have over their own ‘hardware’ (physical forms) and ‘software’ (“all the algorithms and knowledge that you use to process the information from your senses and decide what to do”).
It’s a relatively non-academic read and well worth the effort for anyone interested in the potential to design the next major forms of ‘Life’ to transcend many of the physical and cognitive constraints that have us now on the brink of self-destruction. Tegmark’s forecast is optimistic.
If you’ve read the book, please share your observations and questions in the comments below this article. (If you are not a member and would like to be able to comment, send your preferred email address to email@example.com. Please provide a concise description of your interests relevant to our site. Links to relevant books and articles will be accepted. No other advertising or unrelated comments will be accepted and submitters may be banned.)
An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.
The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he designed the first desktop interface in the early 1970s.
Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother–an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later, there was another metaphor collapse when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.
Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, is another matter altogether: an innovation that multiplied the value I found in computing.
At the other end of the complexity spectrum is machine-level code. There would be no general-purpose computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:
“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”
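The quote is glib, but the point about levels is easy to illustrate: the same trivial computation written once as a high-level statement and once as the raw bytes an x86-64 CPU actually consumes. The byte encodings below are standard x86-64, but the example itself is mine, chosen only to show why nobody should have to write at this level by hand.

```python
# The single high-level statement...
total = 2 + 3

# ...corresponds on x86-64 to machine bytes like these, which are
# opaque without an instruction-set reference manual:
machine_code = bytes([
    0xB8, 0x02, 0x00, 0x00, 0x00,  # mov eax, 2
    0x83, 0xC0, 0x03,              # add eax, 3
    0xC3,                          # ret
])
print(machine_code.hex())  # the "different internal states of hardware"
```

Every abstraction layer between these two forms, from assembler to compiler to interface metaphor, is a trade of fidelity to the machine for usability by the human, which is exactly the trade interaction designers make.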
Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.
The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates the appearance that the AI understands the user’s intent and emotional state, though this more natural-seeming intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
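For readers unfamiliar with these tags, the standard vocabulary is SSML (Speech Synthesis Markup Language), a W3C markup format supported in some form by most major voice platforms. The fragment below is a generic illustration of the kind of hand-tuning involved; the pause lengths, emphasis, and phrasing are invented for the example, not taken from the article.

```xml
<!-- A hand-tuned response: pauses, emphasis, and pacing added so the
     synthesized voice sounds conversational rather than flat -->
<speak>
  Hmm, <break time="300ms"/> I found a few options.
  The <emphasis level="moderate">closest</emphasis> one is
  <prosody rate="95%">about ten minutes away</prosody>.
</speak>
```

Note that every one of these tags is a human judgment call inserted by hand, which is why the naturalness they produce reflects the annotator's sense of the conversation, not the system's.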
I suspect a colloquial mode could be more effective in certain kinds of interactions: when attempting to deceive a human into thinking she’s speaking with another human; virtual talk therapy; when translating from one language to another in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; etc.
In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.