Until now, gene editing has relied on cell division to propagate modifications made with techniques like CRISPR-Cas9. Researchers at the Salk Institute have devised a new method that can modify the genes of non-dividing cells (the majority of adult cells). They demonstrated the method’s potential in young mice blinded by retinitis pigmentosa: after the team inserted fully functional copies of the damaged gene responsible for the condition into the relevant visual neurons, the mice experienced rudimentary vision.
Team leader Izpisua Belmonte says of the new method, homology-independent targeted integration (HITI), “We now have a technology that allows us to modify the DNA of non-dividing cells, to fix broken genes in the brain, heart and liver. It allows us for the first time to be able to dream of curing diseases that we couldn’t before, which is exciting.”
While the team, naturally and appropriately, envisions therapeutic uses, could this method be used to modify brain function non-therapeutically, to improve normal functioning, for example?
Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity remains unclear: current systems can imitate existing works but cannot generate truly original ones, and human prompting and configuration are still required.
The neural network in Google’s Magenta project learned from 4,500 pieces of music before creating the following simple tune (drum track overlaid by a human):
[Audio player: Magenta’s generated tune]
Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”
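To make the “learn patterns, then imitate them” idea concrete: the sketch below is a deliberately toy first-order Markov chain over note names, not Magenta’s actual approach (which uses far more capable neural networks). The function names and the tiny training melodies are invented for illustration only; the point is that even this minimal model generates “new” sequences purely by recombining transitions it saw in its training data, which is why human prompting and curation remain essential.

```python
import random
from collections import defaultdict

def train_markov(pieces):
    """Count note-to-note transitions across all training pieces."""
    transitions = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Walk the transition table to sample a new note sequence."""
    rng = random.Random(seed)
    tune = [start]
    while len(tune) < length:
        choices = transitions.get(tune[-1])
        if not choices:  # dead end: no observed continuation
            break
        tune.append(rng.choice(choices))
    return tune

# Invented toy "training corpus" of two short melodies.
pieces = [
    ["C4", "E4", "G4", "E4", "C4"],
    ["C4", "E4", "G4", "C5", "G4"],
]
model = train_markov(pieces)
print(generate(model, "C4", 8))
```

Every note the model emits is a continuation it observed somewhere in the corpus, so the output is recombination rather than invention: a crude stand-in for the style-blending question posed above.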
There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.
Studies find that people with higher numeracy and understanding of the scientific method and its tools are more likely to dispute or twist the results of scientific studies that conflict with their ideologies. For example, it’s the more scientifically competent persons on the political right (those most identified with a free-market ideology) who mount the most vehement assaults against claims of human contributions to global warming.
This article delves into the extent of cognitive biases against facts (rigorously validated knowledge claims) and the apparent variables affecting when those biases are triggered. It also raises possible ways to mitigate biases.
“the team investigated whether this brainstem-cortex network was functioning in another subset of patients with disorders of consciousness, including coma. Using a special type of MRI scan, the scientists found that their newly identified “consciousness network” was disrupted in patients with impaired consciousness. The findings – bolstered by data from rodent studies – suggest the network between the brainstem and these two cortical regions plays a role maintaining human consciousness.”
Good discussion that covered a lot of ground. I took away that none of us have signed on to be early adopters of brain augmentations, but some expect development of body and brain augmentations to continue and accelerate. We also considered the idea of bio-engineered and medical paths to significant lifespan, health, and cognitive capacity improvements. I appreciated the ethical and value questions (Why pursue any of this? What would/must one give up to become transhuman? Will the health and lifespan enhancements be equally available to all? What could be the downsides of extremely extended lives?) Also, isn’t there considerable opportunity for smarter transhumans, along with AI tools, to vastly improve the lives of many people by finding ways to mitigate problems we’ve inherited (disease, etc.) and created (pollution, conflict, etc.)?
All bodily capacities, including the most impressive, uniquely human cognitive and metacognitive ones, coevolve with regulatory mechanisms. Regulatory mechanisms operate unconsciously, and control the expression of associated capacities such that the latter consistently operate with high effectiveness and efficiency to promote replication of our genes. So, to fundamentally change the human species and render it socioecologically sustainable, H+ technologies will somehow have to alter the deep neural relationship between these regulatory “value systems” (sensu neuroscientist Gerald Edelman in A Universe of Consciousness), residing primarily in the limbic system, and all our mundane or enhanced corticothalamic activities. We need H+ that radically diminishes our transparent penchant for evolutionarily adaptive self-deception, and that enhances our power to more freely and consciously choose, moment-to-moment, what we do with our cognitive capacities. I suspect current H+ is blind to this. — Warmly, PJW
Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about, now, before it is too late. Let’s massively collaborate to develop a very mission-specific AI system to help us figure out, using emerging genetic editing technologies (e.g., CRISPR), ideally how to tweak the (most likely) species-typical genes currently constraining our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. This is something that natural selection, our past and current psycho-eugenicist, will never do (it cannot), and something that, given our current genetic endowment, cultural processes and social-engineering approaches will never adequately accomplish. Purpose-designed AI systems feeding off of growing databases of intra-genomic dynamics and gene-environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves, the only hope for our survival, in a morally optimal way (i.e., fewest mistakes due to unexpected gene-gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side-effects) as well as in a maximally effective and efficient one. Come together, teams of AI scientists and geneticists! We need to grab our collective pan-cultural intrapsychic fate away from the dark hands of natural selection, and AI can probably help. END
Will decent AI gain a sense of identity, e.g., by realizing what it knows and does not know? And perhaps it will value the former, and maybe (optimistically?) develop a sense of wonder about the latter; such wonder could lead to an intrinsic desire to preserve the conditions that enable continued learning. Anyway, the answer is Yes, I think, as I tried arguing last night.
A search for knowledge cannot proceed without a sense of what is known and unknown by the “self.” Must reach outside self for most new knowledge. Can create new knowledge internally too, once you have a rich model of reality, but good to know here too that you are creating new associations inside yourself, and question whether new outside knowledge should be sought to test tentative internally-generated conclusions.
Self / Other is perhaps the most basic ontological category. Bacteria have it. Anything with a semipermeable membrane around it — a “filter.” Cannot seek knowledge without having at least an implicit sense that one is searching for information outside oneself. In a highly intelligent being, how long would that sense remain merely implicit?