Category Archives: singularity

Kara Swisher: Keeping tech honest

This reminded me of our Singularity meeting. Talking about platforms like Facebook, she wonders why their creators didn't build social responsibility into them. This is partly because the techies don't understand much outside their specialty, such as the humanities (see 4c), and so lack a sense of how their tech impacts the broader world. They assume the tech will somehow magically solve these broader problems, but Facebook has proven beyond doubt that it does not, instead exacerbating them. Ultimately, it seems to boil down to an adolescent boy's emotional quotient (EQ).

The Singularity is Near: When Humans Transcend Biology

Kurzweil builds and supports a persuasive vision of the emergence of a human-level engineered intelligence in the early-to-mid twenty-first century. In his own words,

With the reverse engineering of the human brain we will be able to apply the parallel, self-organizing, chaotic algorithms of human intelligence to enormously powerful computational substrates. This intelligence will then be in a position to improve its own design, both hardware and software, in a rapidly accelerating iterative process.

In Kurzweil's view, we must and will ensure we evade obsolescence by integrating emerging metabolic and cognitive technologies into our bodies and brains. Through self-augmentation with neurotechnological prostheses, the locus of human cognition and identity will gradually (but faster than we'll expect, due to exponential technological advancements) shift from the evolved substrate (the organic body) to the engineered substrate, ultimately freeing the human mind to develop along technology's exponential curve rather than evolution's much flatter trajectory.

The book is extensively noted and indexed, making the deep-diving reader's work a bit easier.

If you have read it, feel free to post your observations in the comments below. (We've had a problem with the comments section not appearing. It may require more troubleshooting.)

Review and partial rebuttal of Bostrom’s ‘Superintelligence’

This article from the Bulletin of the Atomic Scientists site is an interesting overview of Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. The author rebuts Bostrom on several points, relying partly on the failure of AI research to date to produce any result approaching what most humans would regard as intelligence. The absence of recognizably intelligent artificial general intelligence is not, of course, proof that it can never exist. The author also takes issue with Bostrom's (claimed) conflation of intelligence with inference abilities, an assumption the author says AI researchers have found to be false.

15 Nov 16 Discussion on Transhumanism

Good discussion that covered a lot of ground. My takeaway: none of us has signed on to be an early adopter of brain augmentations, but some expect development of body and brain augmentations to continue and accelerate. We also considered bio-engineered and medical paths to significant improvements in life span, health, and cognitive capacity. I appreciated the ethical and value questions: Why pursue any of this? What would, or must, one give up to become transhuman? Will the health and lifespan enhancements be equally available to all? What could be the downsides of extremely extended lives? Also, isn't there considerable opportunity for smarter transhumans, along with AI tools, to vastly improve the lives of many people by finding ways to mitigate problems we've inherited (disease, etc.) and created (pollution, conflict, etc.)?

18 October meeting topic – General AI: Opportunities and Risks

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that there is no doubt humans will pursue AI with increasing determination and resources. The potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical.

Specialized (narrow and shallow-to-deep) AI systems, such as Siri, OK Google, Watson, and vehicle-driving systems, acquire pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans provide the operational goals (utility functions) and curate the training data sets to include only information directly related to the goal. For example, a driving AI's utility functions involve getting the vehicle to a destination while keeping it within various parameters (speed, staying within its lane, complying with traffic signs and signals, avoiding collisions, etc.).
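As a toy sketch of what such a human-specified utility function might look like (the feature names and weights below are invented for illustration and are not taken from any real driving system):

```python
# A toy utility function for a hypothetical driving AI.
# Feature names and weights are invented for illustration only.

def driving_utility(state):
    """Score a driving state: higher is better."""
    score = 0.0
    score += 10.0 * state["progress_to_destination"]  # reward getting closer
    score -= 5.0 * state["speed_over_limit"]          # penalize speeding
    score -= 8.0 * state["lane_deviation"]            # penalize drifting
    score -= 100.0 * state["collision_risk"]          # heavily penalize risk
    return score

safe = {"progress_to_destination": 0.5, "speed_over_limit": 0.0,
        "lane_deviation": 0.1, "collision_risk": 0.0}
risky = {"progress_to_destination": 0.6, "speed_over_limit": 0.3,
         "lane_deviation": 0.4, "collision_risk": 0.2}

# The safer state scores higher despite making slightly less progress.
print(driving_utility(safe) > driving_utility(risky))  # → True
```

The point of the sketch is that every term, and its weight, is chosen by humans; the system optimizes exactly what it is given, nothing more.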

Artificial general intelligence (AGI or GAI) systems, by contrast, are capable of learning and performing the full range of intellectual work at or beyond human level. AGI systems can achieve learning goals without explicitly curated training data sets or detailed objectives. They can learn ‘in the wild’, so to speak. For example, an AGI with the goal of maximizing a game score requires only a visual interface to the game (so it can sense the game environment and the outcomes of its own actions) and an ability to interact with (play) the game. It figures out everything on its own.
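A minimal sketch of this kind of score-driven learning, using tabular Q-learning on a toy "game" (the game, its states, and all parameters here are invented for illustration; real systems such as DeepMind's game-playing agents use deep networks over raw pixels instead of a tiny lookup table):

```python
import random

# Toy "game": states 0..3. Action 1 moves right, action 0 moves left.
# Reaching state 3 scores +1 and resets the game to state 0.
# The agent is told none of these rules; it learns from the score alone.

def step(state, action):
    nxt = min(state + 1, 3) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == 3 else 0.0
    return (0 if nxt == 3 else nxt), reward  # reset after scoring

random.seed(0)
Q = [[0.0, 0.0] for _ in range(4)]  # Q[state][action] value estimates
alpha, gamma = 0.5, 0.9             # learning rate, discount factor

state = 0
for _ in range(5000):
    action = random.randrange(2)    # explore at random; learning is off-policy
    nxt, reward = step(state, action)
    # Q-learning update: nudge estimate toward reward + discounted best future value
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)  # greedy action per state after training
```

After training, the greedy policy moves right toward the scoring state from every position, even though no one ever told the agent what the actions do or where the score comes from.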

Some people have raised alarms that AGIs, because their ability to learn is more generalized, are likely to suddenly surpass humans in most or all areas of intellectual achievement. By definition, once AGI minds surpass ours, we will not be able to understand much of their reasoning or actions. This situation is often called the technological singularity: a sort of knowledge horizon beyond which we will not be able to see. The concerns arise from our uncertainty that superintelligent AIs will value us or our human objectives or, if they do value us, that they will be able to translate that into actions that do not degrade our survival or quality of existence.

Multimedia Resources

• Demis Hassabis on Google DeepMind and AGI (video, 14:05; best content starts at 3:40)

• Google DeepMind (AlphaGo) AGI (video, 13:44)

• Extra: Nick Bostrom on Superintelligence and existential threats (video, 19:54) – part of the talk concerns biological paths to superintelligence

Print Resources

• Primary reading (long article): Superintelligence: Fears, Promises, and Potentials

• Deeper dive (for your further edification): Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

Members may RSVP for this discussion at https://www.meetup.com/abq_brain_mind_consciousness_AI/events/234823660/. Based on participant requests, attendance is capped at 10 to promote more and deeper discussion. Those who want to attend but are not in the first 10 may elect to go on the waiting list. It is not unusual for someone to change a “Yes” RSVP to “No”, which will allow the next person on the waiting list to attend. If the topic attracts a large waiting list, we may schedule an additional discussion.

Members of this site who can’t attend the meeting are welcome to participate in the extended discussion by commenting on this announcement.