We propose ‘multi-level evolution’, a bottom-up automatic process that designs robots across multiple levels and niches them to tasks and environmental conditions. Multi-level evolution concurrently explores constituent molecular and material building blocks, as well as their possible assemblies into specialized morphological and sensorimotor configurations. Multi-level evolution provides a route to fully harness a recent explosion in available candidate materials and ongoing advances in rapid manufacturing processes.
“In this episode of Tech Effects, we explore the impact of music on the brain and body. From listening to music to performing it, WIRED’s Peter Rubin looks at how music can change our moods, why we get the chills, and how it can actually change pathways in our brains.”
For me the most interesting part came later in the video (10:20): when we improvise, we shut down the prefrontal planning part of the brain and ‘just go with the flow,’ which produces our most creative and innovative moments. This does, however, depend on having used the prefrontal cortex to learn musical techniques until they are so ingrained in memory that we are free to play with what we’ve programmed.
The NYU Center for Mind, Brain & Consciousness hosts presentations, including topical debates among leading neuroscience researchers. Many of the sessions are recorded for later viewing. The upcoming debate among Joseph LeDoux (Center for Neural Science, NYU), Yaïr Pinto (Psychology, University of Amsterdam), and Elizabeth Schechter (Philosophy, Washington University in St. Louis) will tackle the question, “Do split-brain patients have two minds?” Previous topics addressed animal consciousness, hierarchical predictive coding and perception, AI ‘machinery,’ AI ethics, unconscious perception, research replication issues, neuroscience and art, the explanatory power of mirror neurons, child vs. adult learning, and brain-mapping initiatives.
Several of us met on Labor Day with the goal of identifying topics for at least five future monthly meetings. (Thanks, Dave N, for hosting!) Being the overachievers we are, we pushed beyond the goal. Following are the resulting topics, which will each have its own article on this site where we can begin organizing references for the discussion:
sex-related influences on emotional memory
gross and subtle brain differences (e.g., “walls of the third ventricle – sexual nuclei”)
“Are there gender-based brain differences that influence differences in perceptions and experience?”
epigenetic factors (may need an overview of epigenetics)
If I missed anything, please edit the list (I used HTML in the ‘Text’ view to get sub-bullets). If you’re worried about the formatting, you can email your edits to email@example.com and Mark will post your changes.
New scientific findings support the idea that different people’s brains store and recall story scenes in the same way, rather than each person developing unique memory patterns about stories. People also generally recall story details well. I want to see more targeted research on whether information packed in story structures (a person wrestling with a difficult challenge and changing as a result) is more readily and accurately transmitted from brain to brain via storytelling than information packaged simply to inform of facts (Wikipedia entries, technical reports, etc.). My experience agrees with this research: different people tend to recall stories equally well. (Oddly, people vary greatly in eyewitness-recall tasks; something about how storytelling delivers information greatly improves accuracy of recall.) I think our brains evolved a special facility for paying attention to stories and therefore for remembering them. If so, storytellers should learn what we can about how the brain processes stories.
Google and others are developing neural networks that learn to recognize and imitate patterns present in works of art, including music. The path to autonomous creativity is unclear: current systems can imitate existing artworks, but they cannot generate truly original works, and human prompting and configuration are still required.
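To make “learning to imitate patterns” concrete, here is a deliberately minimal sketch, not Magenta’s actual model: a first-order Markov chain that counts which note tends to follow which in a small corpus of melodies, then samples a new tune from those learned transitions. The toy corpus and note names are invented for illustration; real systems like Magenta use far richer neural sequence models.

```python
from collections import defaultdict
import random

def learn_transitions(melodies):
    """Count which note follows which across a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            table[current].append(following)
    return table

def generate(table, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = table.get(melody[-1])
        if not choices:  # dead end: no observed continuation
            break
        melody.append(rng.choice(choices))
    return melody

# Hypothetical training corpus (note names only, no rhythm).
corpus = [
    ["C", "D", "E", "C", "G", "E"],
    ["C", "E", "G", "E", "D", "C"],
]
table = learn_transitions(corpus)
tune = generate(table, start="C", length=8)
```

Every step in the generated tune is a transition that occurred somewhere in the corpus, which is exactly why such systems imitate rather than invent: they recombine observed patterns and cannot step outside them.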
The neural network in Google’s Magenta project learned from 4,500 pieces of music before creating the following simple tune (a human overlaid the drum track):
Is it conceivable that AI may one day be able to synthesize new made-to-order creations by blending features from a catalog of existing works and styles? Imagine being able to specify, “Write me a new musical composition reminiscent of Rhapsody in Blue, but in the style of Lynyrd Skynyrd.”
There is already at least one human who could instantly play Rhapsody in Blue in Skynyrd style, but even he does not (to my knowledge) create entirely original pieces.