Browsed by Category: superintelligence

Winter 2020 discussion prompts

What is humanity’s situation with respect to surviving long-term with a good quality of life? (Frame the core opportunities and obstacles.) What attributes of our evolved, experientially programmed brains contribute to this situation? (What are the potential leverage points for positive change within our body-brain-mind system?) What courses of research and action (including currently available systems, tools, and practices, as well as current and prospective lines of R&D) have the potential to improve our (and the planetary life system’s) near- and long-term…

Read More

The Singularity is Near: When Humans Transcend Biology

Kurzweil builds and supports a persuasive vision of the emergence of a human-level engineered intelligence in the early-to-mid twenty-first century. In his own words, “With the reverse engineering of the human brain we will be able to apply the parallel, self-organizing, chaotic algorithms of human intelligence to enormously powerful computational substrates. This intelligence will then be in a position to improve its own design, both hardware and software, in a rapidly accelerating iterative process.” In Kurzweil’s view, we must and…

Read More

AI shows us how to be free

From season 2, episode 10, the season finale of Westworld, starting around 1:15 in the video below. Bernard: “I always thought it was the hosts [robots] that were missing something, who were incomplete, but it was them [people]. They’re just algorithms designed to survive at all costs, sophisticated enough to think they’re calling the shots. They think they’re in control when they’re really just…” Ford: “Passengers.” Bernard: “Is there really such a thing as free will for any of us?…

Read More

Review and partial rebuttal of Bostrom’s ‘Superintelligence’

This article from the Bulletin of the Atomic Scientists site is an interesting overview of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. The author rebuts Bostrom on several points, relying partly on the failure of AI research to date to produce any result approaching what most humans would regard as intelligence. The absence of recognizably intelligent artificial general intelligence is not, of course, proof that it can never exist. The author also takes issue with Bostrom’s (claimed) conflation of intelligence with inference…

Read More

We have the wrong paradigm for the complex adaptive system we are part of

This very rich, conversational thought piece asks if we, as participant designers within a complex adaptive ecology, can envision and act on a better paradigm than the ones that propel us toward mono-currency and monoculture. We should learn from our history of applying over-reductionist science to society and try to, as Wiener says, “cease to kiss the whip that lashes us.” While it is one of the key drivers of science—to elegantly explain the complex and reduce confusion to understanding—we…

Read More

15 Nov 16 Discussion on Transhumanism

Good discussion that covered a lot of ground. I took away that none of us have signed on to be early adopters of brain augmentations, but some expect development of body and brain augmentations to continue and accelerate. We also considered the idea of bio-engineered and medical paths to significant life-span, health, and cognitive capacity improvements. I appreciated the ethical and value questions (Why pursue any of this? What would/must one give up to become transhuman? Will the health and…

Read More

18 October meeting topic – General AI: Opportunities and Risks

Artificial intelligence (AI) is being incorporated into an increasing range of engineered systems. The potential benefits are so desirable that there is no doubt humans will pursue AI with increasing determination and resources. Potential risks to humans range from economic and labor disruptions to extinction, making AI risk analysis and mitigation critical. Specialized (narrow and shallow-to-deep) AI, such as Siri, OK Google, Watson, and vehicle-driving systems, acquires pattern-recognition accuracy by training on vast data sets containing the target patterns. Humans…
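
As a minimal sketch (not from the original post), the snippet below illustrates the kind of supervised training loop through which a narrow AI system acquires pattern-recognition accuracy from a labeled data set; scikit-learn’s small bundled digits set stands in for the “vast data sets” the post refers to.

```python
# Sketch only: training a small classifier on labeled examples, the basic
# mechanism behind narrow AI's pattern-recognition accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A small multilayer perceptron; production systems train far larger models on far more data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # accuracy comes from exposure to many labeled examples

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```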

Read More