I know: to free will or not to free will, that is the hackneyed question, debated in philosophical circles since we learned how to talk. But here’s a cognitive neuroscientist’s research on “how neuronal code underlies top-down mental causation.” It’s a long video, over two hours, and I have yet to complete it. Here is Peter Tse’s CV. Here is his book on the topic. Here is a good summary of Tse’s work on the topic.
It occurred to me that memes are a lot like frames as Lakoff describes them. Lakoff has done extensive cognitive scientific work on schemas, metaphors and frames. Check out this lengthy article in Frontiers in Human Neuroscience, 2014; 8: 958, “Mapping the brain’s metaphor circuitry.” Even though they don’t relate this to the concept of memes, there are some striking similarities. E.g.:
“Reddy had found that the abstract concepts of communication and ideas are understood via a conceptual metaphor: Ideas Are Objects; Language Is a Container for Idea-Objects; Communication Is Sending Idea-Objects in Language-Containers.”
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn’t just mirror those biases; it amplified them. If a photo set generally associated women with cooking, software trained on those photos and their labels created an even stronger association.
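To see how amplification can happen, here is a minimal sketch (with hypothetical numbers, not the study’s actual data): a naive model that always predicts the most common label it saw for a group turns a 60/40 skew in the training data into a 100% association in its predictions.

```python
from collections import Counter

# Hypothetical training set: women appear with "cooking" 60% of the time,
# men 40% of the time. The skew is real but moderate.
training_data = ([("woman", "cooking")] * 60 + [("woman", "other")] * 40
                 + [("man", "cooking")] * 40 + [("man", "other")] * 60)

def majority_label(data, group):
    """A naive 'model': always predict the group's most common label."""
    labels = Counter(label for g, label in data if g == group)
    return labels.most_common(1)[0][0]

# The data associates women with cooking 60% of the time...
woman_rows = [l for g, l in training_data if g == "woman"]
data_rate = woman_rows.count("cooking") / len(woman_rows)  # 0.6

# ...but the model predicts "cooking" for women every single time:
# the 60% skew has been amplified to a 100% association.
print(data_rate, majority_label(training_data, "woman"))
```

Real image-labeling models are far more complex, but the underlying pressure is the same: when a correlation in the data helps minimize prediction error, the model leans on it harder than the data alone would justify.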
In a few previous posts I posted articles on new scientific research questioning some of Piaget’s original premises. This Wikipedia article discusses those neo-Piagetians who have taken into account the more recent science. Also see this article that discusses some of the neo-Piagetians but then focuses on Kurt Fischer’s work.
Caltech researchers have identified the brain mechanisms that enable primates to quickly identify specific faces. In a feat of efficiency, surprisingly few feature-recognition neurons are involved in a process that may be able to distinguish among billions of faces. Each neuron in the facial-recognition system specializes in noticing one feature, such as the width of the part in the observed person’s hair. If the person is bald or has no part, the part-width-recognizing neuron remains silent. A small number of such specialized-recognizer neurons feed their outputs to other layers (patches) that integrate a higher-level pattern (e.g., hair pattern), and these integrate at yet higher levels until there is a total face pattern. This process occurs nearly instantaneously and works regardless of the view angle (as long as some facial features are visible). Also, by cataloging which neurons perform which functions and then mapping these to a relatively small set of composite faces, researchers were able to tell which face a macaque (monkey) was looking at.
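The hierarchy described above can be sketched in a few lines of code. This is a toy illustration of the idea, not the Caltech model: each bottom-level “neuron” responds to exactly one measured feature and stays silent (`None`) when that feature is absent; patch-level units group related features; the top level combines patches into a whole-face code. The feature and patch names are invented for the example.

```python
def feature_neuron(face, feature):
    """Fires with the feature's measured value; silent (None) if absent."""
    return face.get(feature)  # e.g., hair-part width; None if bald / no part

def patch(face, features):
    """Integrates a group of feature neurons into one patch-level pattern."""
    return tuple(feature_neuron(face, f) for f in features)

def face_code(face, patches):
    """Combines patch outputs into a total face pattern."""
    return tuple(patch(face, fs) for fs in patches.values())

# Hypothetical feature groupings and measurements.
patches = {
    "hair": ["part_width", "hairline_height"],
    "eyes": ["eye_spacing", "eye_width"],
}
face_a = {"part_width": 2.1, "hairline_height": 5.0,
          "eye_spacing": 3.2, "eye_width": 1.1}
bald_face = {"eye_spacing": 3.0, "eye_width": 1.2}  # no hair part

print(face_code(face_a, patches))
print(face_code(bald_face, patches))  # hair-patch neurons stay silent
```

Decoding works in reverse: once you know which neuron encodes which feature, reading off the small set of feature values is enough to identify which stored face code matches, which is essentially what the researchers did with the macaque.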
These findings seem to correlate closely with the pattern-recognition theory of mind proposed by Ray Kurzweil (a director of engineering at Google).
BMCAI library file (site members only)
“Until recently, scientists had thought that most synapses of a similar type and in a similar location in the brain behaved in a similar fashion with respect to how experience induces plasticity,” Friedlander said. “In our work, however, we found dramatic differences in the plasticity response, even between neighboring synapses in response to identical activity experiences.”
“Individual neurons whose synapses are most likely to strengthen in response to a certain experience are more likely to connect to certain partner neurons, while those whose synapses weaken in response to a similar experience are more likely to connect to other partner neurons,” Friedlander said. “The neurons whose synapses do not change at all in response to that same experience are more likely to connect to yet other partner neurons, forming a more stable but non-plastic network.”
Here’s a useful artificial intelligence introductory lesson from an MIT course:
This NY Times article is worth your time if you are interested in AI, especially if you are still under the impression that AI has ossified or lost its way.
Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity is unclear. Current systems can imitate existing artworks, but cannot generate truly original works. Human prompting and configuration are required.
Google’s Magenta project’s neural network learned from 4,500 pieces of music before creating the following simple tune (drum track overlaid by a human):
Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”
There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.
Original article: https://www.technologyreview.com/s/601642/ok-computer-write-me-a-song/
Good discussion that covered a lot of ground. My takeaway: none of us has signed on to be an early adopter of brain augmentations, but some expect the development of body and brain augmentations to continue and accelerate. We also considered bio-engineered and medical paths to significant improvements in lifespan, health, and cognitive capacity. I appreciated the ethical and value questions: Why pursue any of this? What would or must one give up to become transhuman? Will the health and lifespan enhancements be equally available to all? What could be the downsides of extremely extended lives? Also, isn’t there considerable opportunity for smarter transhumans, along with AI tools, to vastly improve the lives of many people by finding ways to mitigate problems we’ve inherited (disease, etc.) and created (pollution, conflict, etc.)?