An MIT Technology Review article introduces the man responsible for the 30-year-old deep learning approach, explains what deep machine learning is, and questions whether deep learning may be the last significant innovation in the AI field. The article also touches on a potential way forward for developing AIs with qualities more analogous to the human brain’s functioning.
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn't just mirror those biases; it amplified them. If a photo set generally associated women with cooking, software trained on those photos and their labels formed an even stronger association.
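To see how amplification can happen, here is a minimal sketch with hypothetical numbers (not the study's data or method): a model that simply predicts the majority label for each group turns a 2:1 skew in the training set into a 100% association in its predictions.

```python
# Hypothetical illustration of bias amplification: a classifier that
# predicts the most common activity per gender exaggerates the
# dataset's existing gender-activity skew.
from collections import Counter

# Hypothetical training set: (gender, activity) pairs with a 2:1 skew
# between women and men among "cooking" images.
data = ([("woman", "cooking")] * 66 + [("man", "cooking")] * 33 +
        [("woman", "other")] * 34 + [("man", "other")] * 67)

# Fraction of "cooking" images depicting women in the data: 66/99 ~ 0.67
dataset_bias = 66 / (66 + 33)

# "Model": predict the most common activity for each gender.
majority = {}
for g in ("woman", "man"):
    acts = Counter(a for gg, a in data if gg == g)
    majority[g] = acts.most_common(1)[0][0]

# Predictions: every woman -> "cooking", every man -> "other".
preds = [(g, majority[g]) for g, _ in data]
pred_women_cooking = sum(1 for g, a in preds
                         if g == "woman" and a == "cooking")
pred_cooking = sum(1 for _, a in preds if a == "cooking")
predicted_bias = pred_women_cooking / pred_cooking  # 1.0 > 0.67
```

The point is structural: when a model exploits a correlated attribute as a shortcut, its output distribution can be more skewed than the data it learned from.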
The Global Consciousness Project, the Institute of Noetic Sciences (IONS, for which I was once Hawaii state coordinator), and Princeton Engineering Anomalies Research (PEAR) are collaborating to release a smartphone app, Entangled, that aims to:
- Monitor your mind’s influence on your physical environment
- Let you take part in large-scale consciousness experiments
- Support ongoing development of a "consciousness technology" platform for developers and artists
- Monitor global consciousness data in real-time
Before you think I’ve gone off the deep end, let me explain that I gently stepped away from IONS after nearly 20 years because I did not see enough focus on or progress toward their stated goal—scientifically researching consciousness. I fully enjoyed their practice-oriented emphases on intuitive, embodied, mindful living, but while they remained ‘entangled’ in New Age phenomenalism and esoteric speculations, true scientific programs at many universities and research organizations have made steady, sometimes frustratingly slow progress (which is how science typically works). So, please don’t take this post as a tacit endorsement of any of the sponsoring organizations. They each raise interesting questions and do some work of scientific merit or promise, but (in my view) if your interest is in verifiable, repeatable, causally intelligible phenomena, you must stay vigilant of the unscientific chaff.
That said, the spike in non-random streams in random number generators immediately prior to the 9-11 atrocity remains one of the very few well-documented phenomena that could be taken to imply a correlation between a specific objective event and human transpersonal consciousness. In the view of the Global Consciousness Project, by collecting large samples of the right sorts of data, they can test their hypothesis that “Coherent consciousness creates order in the world. Subtle interactions link us with each other and the Earth.” As I understand it, they are extrapolating to the transpersonal level how an individual brain achieves coherent, self-aware states. Also, they would say we’re aware of the apparent precognitive 9-11 phenomenon because someone was collecting the relevant data that could then be recognized as correlated. The Entanglement app aims to collect more of such data while also providing users real- or near-real-time feedback.
If truly well-designed scientific research programs can show significant evidence of direct, entanglement-like correlations between objectively observable phenomena and consciousness (shown in brain functioning), I’ll be excited to learn about it. I think this is a monumental challenge.
Caltech researchers have identified the brain mechanisms that enable primates to quickly identify specific faces. In a feat of efficiency, surprisingly few feature-recognition neurons are involved in a process that may be able to distinguish among billions of faces. Each neuron in the facial-recognition system specializes in noticing one feature, such as the width of the part in the observed person’s hair. If the person is bald or has no part, the part-width-recognizing neuron remains silent. A small number of such specialized-recognizer neurons feed their inputs to other layers (patches) that integrate a higher-level pattern (e.g., hair pattern), and these integrate at yet higher levels until there is a total face pattern. This process occurs nearly instantaneously and works regardless of the view angle (as long as some facial features are visible). Also, by cataloging which neurons perform which functions and then mapping these to a relatively small set of composite faces, researchers were able to tell which face a macaque (monkey) was looking at.
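The layered process described above can be caricatured in code. This is an illustrative sketch, not the Caltech researchers' model: each low-level "neuron" responds to a single facial feature (and stays silent when the feature is absent), and higher layers integrate the active responses into a composite code that distinguishes faces.

```python
# Illustrative hierarchy: one detector "neuron" per facial feature,
# silent (None) when the feature is absent, integrated upward into a
# composite face code. Feature names and values are hypothetical.

def feature_neuron(name):
    """A neuron specialized for one feature; silent if the feature is absent."""
    def fire(face):
        return face.get(name)  # e.g. hair-part width, or None if bald
    return fire

# Layer 1: specialized feature detectors.
FEATURES = ("part_width", "eye_spacing", "nose_length", "jaw_width")
detectors = {f: feature_neuron(f) for f in FEATURES}

def face_code(face):
    """Higher layers integrate active detector outputs into one pattern."""
    responses = []
    for f, detect in detectors.items():
        v = detect(face)
        if v is not None:          # silent neurons contribute nothing
            responses.append((f, v))
    return tuple(sorted(responses))

# Two hypothetical faces; bald_bob has no hair part, so the
# part-width neuron remains silent for him.
alice = {"part_width": 2.1, "eye_spacing": 6.3, "nose_length": 4.8}
bald_bob = {"eye_spacing": 7.0, "nose_length": 5.2, "jaw_width": 9.1}
```

A small set of such detectors can separate a combinatorially large space of faces, which is the efficiency point the article highlights.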
These findings seem to correlate closely with the pattern-recognition theory of mind proposed by Ray Kurzweil (a director of engineering at Google).
BMCAI library file (site members only)
This NY Times article is worth your time if you are interested in AI, especially if you are still under the impression that AI has ossified or lost its way.
Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity remains unclear: current systems can imitate existing artworks but cannot yet generate truly original works, and human prompting and configuration are still required.
Google’s Magenta project’s neural network learned from 4,500 pieces of music before creating the following simple tune (drum track overlaid by a human):
Click Play button to listen->
Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”
There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.
Original article: https://www.technologyreview.com/s/601642/ok-computer-write-me-a-song/