See this article. A few excerpts:
“A new picture is taking shape in which conscious experience is seen as deeply grounded in how brains and bodies work together to maintain physiological integrity – to stay alive.”
“The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world. Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or ‘beliefs’ about the way the world is, to form the brain’s optimal hypotheses of the causes of these sensory signals.”
“A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. […] We’ve found that people consciously see what they expect, rather than what violates their expectations.”
An AI system can isolate an individual's voice from environmental noise, including other voices. Such a system has many potential uses, both benign and nefarious. The ability to untangle signals from noise and to identify which signals come from which sources is improving rapidly, and the approach should apply to other kinds of signals as well, not only sounds.
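The article doesn't describe that system's internals (modern systems use deep networks), but the core idea of pulling apart mixed signals can be sketched with a classical technique, independent component analysis. Here is a minimal toy sketch using scikit-learn's FastICA; the two "sources" and the mixing matrix are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical sources: a "voice" (sine wave) and background
# "noise" (sawtooth wave). Real audio is far messier, of course.
t = np.linspace(0, 1, 2000)
voice = np.sin(2 * np.pi * 5 * t)   # stand-in for a speaker's voice
noise = 2 * (t * 3 % 1) - 1         # stand-in for environmental noise
sources = np.c_[voice, noise]

# Each "microphone" hears a different linear mixture of the sources.
mixing = np.array([[1.0, 0.5],
                   [0.4, 1.0]])
observed = sources @ mixing.T

# ICA recovers statistically independent components from the mixtures,
# without being told what the original sources were.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)

# Each recovered component should correlate strongly (up to sign and
# scale) with exactly one of the original sources.
corr = np.abs(np.corrcoef(sources.T, recovered.T)[:2, 2:])
print(corr.round(2))
```

Each row of `corr` should contain one value near 1.0, showing that the voice and the noise were untangled from the mixtures alone.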
I know: to free will or not to free will, that is the hackneyed question, debated in philosophical circles since we learned how to talk. But here's a cognitive neuroscientist's research on "how neuronal code underlies top-down mental causation." It's a long video, over 2 hours, and I have yet to finish it. Here is Peter Tse's CV. Here is his book on the topic. Here is a good summary of Tse's work on the topic.
It occurred to me that memes are a lot like frames as Lakoff describes them. Lakoff has done extensive cognitive scientific work on schemas, metaphors and frames. Check out this lengthy article in Frontiers in Human Neuroscience, 2014; 8: 958, “Mapping the brain’s metaphor circuitry.” Even though they don’t relate this to the concept of memes, there are some striking similarities. E.g.:
“Reddy had found that the abstract concepts of communication and ideas are understood via a conceptual metaphor: Ideas Are Objects; Language Is a Container for Idea-Objects; Communication Is Sending Idea-Objects in Language-Containers.”
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn't just mirror those biases; it amplified them. If a photo set generally associated women with cooking, software trained on those photos and their labels created an even stronger association.
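The amplification mechanism is easy to see in a toy simulation. When visual evidence is weak, a classifier leans on the label prior it learned, so a skewed training ratio can collapse toward certainty at prediction time. The 67/33 split below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: photos of a "cooking" activity, labeled
# with the gender of the person shown, skewed 67% women / 33% men.
n = 10_000
labels = rng.random(n) < 0.67   # True = labeled "woman"
train_bias = labels.mean()      # roughly 0.67

# A model with weak visual evidence falls back on the learned prior
# and predicts the majority label for this activity every time,
# turning a 67/33 skew into 100/0 in its predictions.
predictions = np.full(n, True)
pred_bias = predictions.mean()  # 1.0 -- stronger than the data's skew

print(f"dataset bias: {train_bias:.2f}, predicted bias: {pred_bias:.2f}")
```

The real systems studied were less extreme than this majority-vote caricature, but the direction is the same: predictions end up more skewed than the data that produced them.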
In a few previous posts I shared articles on new scientific research questioning some of Piaget's original premises. This Wikipedia article discusses the neo-Piagetians who have taken the more recent science into account. Also see this article that discusses some of the neo-Piagetians but then focuses on Kurt Fischer's work.
Caltech researchers have identified the brain mechanisms that enable primates to quickly identify specific faces. In a feat of efficiency, surprisingly few feature-recognition neurons are involved in a process that may be able to distinguish among billions of faces. Each neuron in the facial-recognition system specializes in noticing one feature, such as the width of the part in the observed person's hair. If the person is bald or has no part, the part-width-recognizing neuron remains silent. A small number of such specialized-recognizer neurons feed their outputs to other layers (patches) that integrate them into a higher-level pattern (e.g., hair pattern), and these integrate at yet higher levels until there is a total face pattern. This process occurs nearly instantaneously and works regardless of the view angle (as long as some facial features are visible). Also, by cataloging which neurons perform which functions and then mapping these to a relatively small set of composite faces, researchers were able to tell which face a macaque (monkey) was looking at.
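The logic of that decoding step can be sketched with a toy linear model: treat each face as a point in a feature space, let each model "neuron" fire in proportion to the face's projection onto one preferred feature axis, and recover the face from the population's firing rates. All of the dimensions and counts below are invented for illustration, not the researchers' actual data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each face is a point in a 50-dimensional feature
# space (dimensions standing in for things like hair-part width).
n_features = 50
n_neurons = 200     # far fewer neurons than distinguishable faces
n_faces = 1000

faces = rng.normal(size=(n_faces, n_features))

# Each model neuron fires in proportion to the face's projection onto
# its own preferred axis in feature space (a linear tuning model).
axes = rng.normal(size=(n_neurons, n_features))
rates = faces @ axes.T          # one row of firing rates per face

# Decoding: linear least squares maps population firing rates back to
# face features, analogous to reading out which face was viewed.
decoder, *_ = np.linalg.lstsq(rates, faces, rcond=None)
reconstructed = rates @ decoder

err = np.abs(reconstructed - faces).max()
print(f"max reconstruction error: {err:.2e}")
```

Because the tuning is linear and there are more neurons than feature dimensions, the reconstruction is essentially exact, which is why a modest population of feature-tuned cells can, in principle, discriminate an enormous number of faces.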
These findings seem to align closely with the pattern-recognition theory of mind proposed by Ray Kurzweil (a director of engineering at Google).
Scientific American article
BMCAI library file (site members only)
“Until recently, scientists had thought that most synapses of a similar type and in a similar location in the brain behaved in a similar fashion with respect to how experience induces plasticity,” Friedlander said. “In our work, however, we found dramatic differences in the plasticity response, even between neighboring synapses in response to identical activity experiences.”
“Individual neurons whose synapses are most likely to strengthen in response to a certain experience are more likely to connect to certain partner neurons, while those whose synapses weaken in response to a similar experience are more likely to connect to other partner neurons,” Friedlander said. “The neurons whose synapses do not change at all in response to that same experience are more likely to connect to yet other partner neurons, forming a more stable but non-plastic network.”
Read more at: https://medicalxpress.com/news/2016-02-scientists-brain-plasticity-assorted-functional.html#jCp
This NY Times article is worth your time if you are interested in AI, especially if you are still under the impression that AI has ossified or lost its way.