Category Archives: automation

2020-06-06 Check-in topics

Here are some of the topic references Scott, Paul, Edward, and Mark discussed during today’s check-in. If these provoke any thoughts, please feel free to reply in a comment below this article or by replying to all on the associated email message from Cogniphile.

Socio-economic and political:

  • Alternate social and economic system – https://centerforpartnership.org/the-partnership-system/
  • Dark Horse podcast (Weinstein) ep. 19 on co-presidency idea
  • How could a shift to voting on issues rather than representatives work? What are the potential challenges? How could it be better? (There’s not a lot of easily discoverable analysis on this.)
  • Perspective: Despite our challenges and structural societal issues, most people in the U.S. enjoy more security—i.e., most Americans don’t need to worry about being violently attacked or starving to death. I think we agreed on this general point. It in no way lessens the obvious needs for systemic improvements.

    I add an afternote, however: a succession of unfortunate events, especially if medical issues and their crippling expenses are involved, can quickly deplete the average American’s finances and put them on the streets. A homeless person’s capacity to be resourceful depends on their ability to carry and protect resources, which becomes much more difficult given the limited space in a car (or backpack) and increased exposure to crime. Social stigma becomes self-reinforcing, both for the homeless person and for those who encounter them. Nearly all doors close. ‘Structural invisibility’ results—‘society’ simply stops seeing them (or can only see them as choosing or deserving their situations), and predators take society’s disregard as open season on the homeless.

    So, while it is true the threshold of personal disaster is farther from the average American than from the average, say, Zimbabwean or Eritrean, once an American crosses that threshold it can certainly be a devastating and nearly intractable circumstance. There are many trap doors leading down and few ladders leading back up. Thoughts?

Entertainment we’ve enjoyed recently:

  • Edward: Killing Eve – Bored British intelligence agent, Eve, is overly interested in female assassins, their psychologies and their methods of killing. She is recruited by a secret division within MI6 chasing an international assassin who calls herself Villanelle. Eve crosses paths with Villanelle and discovers that members within both of their secret circles may be more interconnected than she is comfortable with. Both women begin to focus less on their initial missions in order to desperately learn more about the other.
  • Mark: Devs (FX network sci-fi thriller series) – Atmospherically dark and brooding exploration of the implications of a quantum computing system capable of peering into past and future. Also a meditation on two competing views in physics: deterministic interpretations of quantum mechanics (e.g., pilot-wave and ‘many worlds’) versus the indeterministic Copenhagen interpretation.
  • Scott: After Life (Ricky Gervais) – follows Tony, whose life is turned upside down after his wife dies from breast cancer. He contemplates suicide, but instead decides to live long enough to punish the world for his wife’s death by saying and doing whatever he wants.
  • Paul: Exhalation (book of short sci-fi stories) Ted Chiang

    Mark would like to base a few future discussions on the following stories:
    • The Lifecycle of Software Objects – “follows Ana Alvarado over a twenty-year period, during which she ‘raises’ an artificial intelligence from being essentially a digital pet to a human-equivalent mind.”
    • The Truth of Fact, the Truth of Feeling – A study in memory and meaning told from interwoven future and past stories. “a journalist observes how the world, his daughter, and he himself are affected by ‘Remem’, a form of lifelogging whose advanced search algorithms effectively grant its users eidetic memory of everything that ever happened to them, and the ability to perfectly and objectively share those memories. In a parallel narrative strand, a Tiv [African tribal] man is one of the first of his people to learn to read and write, and discovers that this may not be compatible with oral tradition.” (Wikipedia)
    • The Great Silence – Multimedia collaboration version here. An earthbound alien wonders about humanity’s fascination with missing space aliens and its lack of interest in the intelligences among us.
    • Omphalos – On an Earth where science has long since proven the planet is precisely as old as the Bible states, an anthropologist following the trail of a fake artifact stumbles onto a shattering discovery.
    • Anxiety is the Dizziness of Freedom (the title is a Kierkegaard quote) – “the ability to glimpse into alternate universes necessitates a radically new examination of the concepts of choice and free will.” (SFWA)

  • Scott: Who are some of your favorite fiction authors?


Winter 2020 discussion prompts

  • What is humanity’s situation with respect to surviving long-term with a good quality of life? (Frame the core opportunities and obstacles.)
  • What attributes of our evolved, experientially programmed brains contribute to this situation? (What are the potential leverage points for positive change within our body-brain-mind system?)
  • What courses of research and action (including currently available systems, tools, and practices and current and possible lines of R&D) have the potential to improve our (and the planetary life system’s) near- and long-term prospects?

Following is a list of (only some of!) the resources some of us have consumed and discussed online, in emails, or face-to-face in 2019. Sample a few to jog your thoughts and provoke deeper dives. Please add your own references in the comments below this post. For each, give a short description (one line is fine), if possible.

In the Age of AI

A documentary exploring how artificial intelligence is changing life as we know it — from jobs to privacy to a growing rivalry between the U.S. and China.

FRONTLINE investigates the promise and perils of AI and automation, tracing a new industrial revolution that will reshape and disrupt our world, and allow the emergence of a surveillance society.

Book: Team Human by Douglas Rushkoff

Team Human by Douglas Rushkoff investigates the impacts of current and emerging technologies and digital culture on individuals and groups and seeks ways to evade or extract ourselves from their corrosive effects.

After you read the book, please post your thoughts as comments to this post or, if you prefer, as new posts. There are interviews and other resources about the book online. Feel free to recommend in the comments those you find meaningful. Also, the audiobook is available through the Albuquerque Public Library but may have a long wait queue (I’m aiming for a record number of ‘q’s in this sentence).

Please use the tag and/or category ‘Rushkoff’ in your new posts. Use any other tags or categories you want. To access categories and tags while composing a post, click ‘Document’ at the top of the options area on the right side of the editing page.

How to add a category to a post in WordPress sites using the Gutenberg editor

Any comments you add to this post should inherit the post’s categories and tags. Add any additional ones as you like.

Last, this site includes a book reviews app for registered site members. To use it, log in and select Review under the New menu.

Starting a new book review

Running on escalators

Ideally, automation would yield a Star Trek reality of increasing leisure and quality of choice and experience. Why isn’t this our experience? An article on Medium offers insight into why this is not occurring on any significant scale.

Evolved behavioral strategies explained by the prisoner’s dilemma damn the majority of humans to a constant doubling down. We exchange the ‘leisure dividend’ (free time) granted by automation for opportunities to outcompete others.

Apparently, the sort of reciprocal social learning that could lead us to make healthy choices with our leisure opportunities depends on us and our competitors being able to mutually track our outcomes across consecutive iterations of the ‘game’. That ‘traceability’ quickly breaks down with the complexity inherent in vast numbers of competitors. When we conclude that any viable competitor may use her leisure dividend to further optimize her competitive position, rather than to pause to enjoy her life, we tend to do the same. Each assumes the other will sprint ahead and so chooses to sprint ahead. Both forfeit the opportunity to savor the leisure dividend.

The prisoner’s dilemma shows that we (most humans) would rather be in a grueling neck-and-neck race toward an invisible, receding finish line than permit the possibility a competitor may increase her lead.
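The payoff logic above can be sketched as a one-shot prisoner’s dilemma. A minimal Python sketch follows; the payoff numbers are illustrative assumptions, not taken from the Medium article. Here ‘savor’ means enjoying the leisure dividend and ‘sprint’ means reinvesting it in competition.

```python
# Illustrative payoff matrix (assumed values, not from the article):
# (my_choice, rival_choice) -> (my_payoff, rival_payoff)
PAYOFFS = {
    ("savor", "savor"): (3, 3),    # both enjoy their free time
    ("savor", "sprint"): (0, 5),   # rival pulls ahead while I relax
    ("sprint", "savor"): (5, 0),   # I pull ahead while rival relaxes
    ("sprint", "sprint"): (1, 1),  # grueling neck-and-neck race
}

def best_response(rival_choice: str) -> str:
    """Return my payoff-maximizing choice given the rival's choice."""
    return max(["savor", "sprint"],
               key=lambda mine: PAYOFFS[(mine, rival_choice)][0])

# Whatever the rival does, sprinting pays more, so it dominates --
# even though mutual savoring (3, 3) beats mutual sprinting (1, 1).
assert best_response("savor") == "sprint"
assert best_response("sprint") == "sprint"
```

With many anonymous competitors and no way to track repeated interactions, each player reasons this way in isolation, and both land on the (1, 1) outcome the article laments.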

Any strategy that’s so endemic must have evolutionary roots. Thoughts?

Applying artificial intelligence for social good

This McKinsey article is an excellent overview of a more extensive report (3 MB PDF) enumerating the ways in which varieties of deep learning can improve existence. Worth a look.

The articles cover the following:

  • Mapping AI use cases to domains of social good
  • AI capabilities that can be used for social good
  • Overcoming bottlenecks, especially around data and talent
  • Risks to be managed
  • Scaling up the use of AI for social good

Can children learn to read without explicit instruction from adults?


An experiment in a remote Ethiopian village demonstrates the potential of mobile devices to enable children to learn and teach each other how to read without traditional schooling.

Video: https://tnp_encoded_videos.s3.amazonaws.com/web_videos/121006_TNP_BREAZEAL_720_9100.mp4

See also: How Reading Rewires Your Brain for Empathy


AI-enabled software creates 3D face from single photo

I wrote on my blog about this development and more generally about the increasing ease with which AI tools can forge convincing media. Go see my creepy 3D face.

Next discussion meeting Apr 2: Brain-Computer Interface, now and future

During our next discussion meeting, we’ll explore the status, future potential, and human implications of neuroprostheses–particularly brain-computer interfaces. If you are local to Albuquerque, check our Meetup announcement to join or RSVP. The announcement text follows.

Focal questions

What are neuroprostheses? How are they used now and what may the future hold for technology-enhanced sensation, motor control, communications, cognition, and other human processes?

Resources (please review before the meeting)

Primary resources
• New Brain-Computer Interface Technology (video, 18 m)
https://www.youtube.com/watch?v=CgFzmE2fGXA
• Imagining the Future: The Transformation of Humanity (video, 19 m)
https://www.youtube.com/watch?v=7XrbzlR9QmI
• The Berlin Brain-Computer Interface: Progress Beyond Communication and Control (research article, access with a free Frontiers account)
https://www.frontiersin.org/articles/10.3389/fnins.2016.00530/full
• The Elephant in the Mirror: Bridging the Brain’s Explanatory Gap of Consciousness (research article)
https://www.frontiersin.org/articles/10.3389/fnsys.2016.00108/full

Other resources (recommend your own in the comments!)

• DARPA implant (planned) with up to 1 million neural connections (short article)
https://www.darpa.mil/news-events/2015-01-19

Extra Challenge: As you review the resources, think of possible implications from the perspectives of the other topics we’ve recently discussed:
• the dilemma of so much of human opinion and action deriving from non-conscious sources
• questions surrounding what it means to ‘be human’ and what values we place on our notions of humanness (e.g., individuality and social participation, privacy, ‘self-determination’ (or the illusion thereof), organic versus technologically enhanced cognition, etc.)

Should AI agents’ voice interactions be more like our own? What effects should we anticipate?

An article at Wired.com considers the pros and cons of making the voice interactions of AI assistants more humanlike.

The assumption that more human-like speech from AIs is naturally better may prove as incorrect as the belief that the desktop metaphor was the best way to make humans more proficient in using computers. When designing the interfaces between humans and machines, should we minimize the demands placed on users to learn more about the system they’re interacting with? That seems to have been Alan Kay’s assumption when he helped design the first desktop interface at Xerox PARC in the early 1970s.

Problems arise when the interaction metaphor diverges too far from the reality of how the underlying system is organized and works. In a personal example, someone dear to me grew up helping her mother–an office manager for several businesses. Dear one was thoroughly familiar with physical desktops, paper documents and forms, file folders, and filing cabinets. As I explained how to create, save, and retrieve information on a 1990 Mac, she quickly overcame her initial fear. “Oh, it’s just like in the real world!” (Chalk one for Alan Kay? Not so fast.) I knew better than to tell her the truth at that point. Dear one’s Mac honeymoon crashed a few days later when, to her horror and confusion, she discovered a file cabinet inside a folder. A few years later, there was another metaphor collapse when she clicked on a string of underlined text in a document and was forcibly and instantly transported to a strange destination.

Having come to terms with computers through the command-line interface, I found the desktop metaphor annoying and unnecessary. Hyperlinking, however, is another matter altogether–an innovation that multiplied the value I found in computing.

On the other end of the complexity spectrum would be machine-level code. There would be no general computing today if we all had to speak to computers in their own fundamental language of ones and zeros. That hasn’t stopped some hard-core computer geeks from advocating extreme positions on appropriate interaction modes, as reflected in this quote from a 1984 edition of InfoWorld:

“There isn’t any software! Only different internal states of hardware. It’s all hardware! It’s a shame programmers don’t grok that better.”

Interaction designers operate on the metaphor end of the spectrum by necessity. The human brain organizes concepts by semantic association. But sometimes a different metaphor makes all the difference. And sometimes, to be truly proficient when interacting with automation systems, we have to invest the effort to understand less simplistic metaphors.

The article referenced at the beginning of this post mentions that humans are manually coding “speech synthesis markup tags” to make the synthesized voices of AI systems sound more natural. (Note that this creates an appearance that the AI understands the user’s intent and emotional state, though this more natural intelligence is illusory.) Intuitively, this sounds appropriate. The downside, as the article points out, is that colloquial AI speech limits human-machine interactions to the sort of vagueness inherent in informal speech. It also trains humans to be less articulate. The result may be interactions that fail to clearly communicate what either party actually means.
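To make the markup idea concrete: the standard vocabulary for such tags is SSML (the W3C’s Speech Synthesis Markup Language), whose <break> and <prosody> elements let a designer hand-tune pacing. The helper function below is invented purely for illustration; only the SSML element names are standard.

```python
# Hypothetical helper that wraps plain text in SSML so a TTS engine
# renders it with a slightly slower rate and a trailing pause --
# the kind of hand-tuning the article describes. The function name
# and parameters are illustrative, not from any real library.

def to_casual_ssml(text: str, rate: str = "95%") -> str:
    """Wrap text in SSML prosody markup with a short closing pause."""
    return (f'<speak><prosody rate="{rate}">{text}'
            f'<break time="300ms"/></prosody></speak>')

ssml = to_casual_ssml("Sure, I can do that for you.")
# Note: the markup shapes delivery only; the "understanding" a
# listener hears in the pause is supplied by the listener, not the AI.
```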

I suspect a colloquial mode could be more effective in certain kinds of interactions: when attempting to deceive a human into thinking she’s speaking with another human; virtual talk therapy; when translating from one language to another in situations where idioms, inflections, pauses, tonality, and other linguistic nuances affect meaning and emotion; etc.

In conclusion, operating systems, applications, and AIs are not humans. To improve our effectiveness in using more complex automation systems, we will have to meet them farther along the complexity continuum–still far from machine code, but at points of complexity that require much more of us as users.