The articles cover the following:
- Mapping AI use cases to domains of social good
- AI capabilities that can be used for social good
- Overcoming bottlenecks, especially around data and talent
- Risks to be managed
- Scaling up the use of AI for social good
Neural learning occurs at dendrite roots, not in synapses.
The newly suggested learning scenario places learning in a few dendrites close to the cell body, rather than in the far more numerous, more distant synapses assumed by the classical view. …
Because the proposed mechanism operates at different sites in the brain, it calls for a reevaluation of current treatments for disordered brain function. … In addition, the authors note that this kind of learning mechanism underlies recent machine learning and deep learning achievements, and argue that the change in learning paradigm opens new horizons for deep learning algorithms and brain-inspired AI applications with advanced features and much greater speed.
Sophisticated, sometimes AI-enabled data analytics tools allow construction of individual personality profiles accurate enough to support targeted manipulation of individuals’ perceptions and actions.

Analytics firm abused Facebook users’ data to influence the presidential election

Last night Facebook announced bans against Cambridge Analytica, its parent company and several individuals for allegedly sharing and keeping data that they had promised to delete. This data reportedly included information siphoned from hundreds of thousands of Amazon Mechanical Turkers who were paid to use a “personality prediction app” that collected data from them and also anyone they were friends with — about 50 million accounts. That data reportedly turned into information used by the likes of Robert Mercer, Steve Bannon and the Donald Trump campaign for social media messaging and “micro-targeting” individuals based on shared characteristics.

Video: https://www.youtube.com/embed/FXdYSQ6nu-M
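The profiling mechanic described above can be caricatured in a few lines. Everything here (the page names, the trait weights, the averaging rule) is invented for illustration; real profiling models are fit to survey and behavioral data at scale.

```python
# Toy sketch of like-based trait scoring. All pages and weights are
# hypothetical: each liked page nudges a single personality-trait
# estimate (say, "extraversion") up or down.
weights = {"skydiving_club": +1.0, "party_planning": +0.5,
           "chess_weekly": -0.25, "quiet_reading": -0.5}

def trait_score(likes):
    """Average the weights of the pages a user has liked."""
    hits = [weights[p] for p in likes if p in weights]
    return sum(hits) / len(hits) if hits else 0.0

print(trait_score(["skydiving_club", "party_planning"]))  # 0.75
print(trait_score(["quiet_reading", "chess_weekly"]))     # -0.375
```

Even this caricature shows why such profiles scale: once weights exist, scoring millions of users is a cheap lookup-and-average over data they have already volunteered.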
A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes some interaction design principles and trends that signal a fundamental shift in means, deployment, and startling effectiveness of mass persuasion. The mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever end the designer and content providers choose.
The organizer of a $1,700-per-person event convened to show marketers and technicians “how to manipulate people into habitual use of their products” put it baldly:
subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”
Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.
Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”, … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to. … It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.
If you read the article, please comment with any future meeting topics you spot. I find it a vibrant collection of concepts for further exploration.
An AI system can now isolate an individual’s voice from other environmental noise, including other voices. Such a system has many potential uses, both benign and nefarious. The ability to untangle signals from noise, and to attribute each signal to its source, is improving rapidly, and the approach should extend to other kinds of signals, not only sound.
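The system in the article uses neural networks; as a much simpler stand-in for the general idea of untangling a wanted signal from interference, the sketch below recovers a slow component from a noisy mixture with a plain moving-average low-pass filter. All signals and parameters here are invented for illustration.

```python
# Toy signal separation (not the article's neural-network method):
# recover a slow component from a mixture using a moving average.
import math

N = 1000
slow = [math.sin(2 * math.pi * 2 * t / N) for t in range(N)]          # 2 cycles
fast = [0.5 * math.sin(2 * math.pi * 80 * t / N) for t in range(N)]   # interference
mixture = [s + f for s, f in zip(slow, fast)]

def moving_average(x, w):
    """Low-pass filter: average each sample with its neighbors in a window of ~w."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

recovered = moving_average(mixture, 25)

# Mean squared error vs. the true slow component: filtering helps a lot.
err_raw = sum((m - s) ** 2 for m, s in zip(mixture, slow)) / N
err_rec = sum((r - s) ** 2 for r, s in zip(recovered, slow)) / N
print(err_rec < err_raw)  # True
```

A fixed filter like this only works when the signals occupy different frequency bands; the point of the learned approaches in the article is that they separate sources (two overlapping voices) that a fixed filter cannot.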
An MIT Technology Review article introduces the man responsible for the 30-year-old approach behind deep learning, explains what deep learning is, and asks whether it may be the field’s last significant innovation. The article also touches on a potential way forward for developing AIs that function more like the human brain.
Should it surprise us that human biases find their way into human-designed AI algorithms trained using data sets of human artifacts?
Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
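The amplification effect can be seen in a toy example (the numbers below are invented, not the study’s): a model that simply predicts the most probable label for each activity turns a 66% association in the training data into a 100% association in its output.

```python
# Toy illustration of bias amplification with invented numbers:
# 66 of 100 "cooking" photos in the training set are labeled "woman".
from collections import Counter

data = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Association present in the data itself.
counts = Counter(gender for _, gender in data)
data_rate = counts["woman"] / len(data)          # 0.66

# A model that always outputs the most likely gender for an activity
# predicts "woman" for every cooking photo: a stronger association
# than the data contained.
predicted_rate = 1.0 if data_rate > 0.5 else 0.0

print(data_rate, predicted_rate)  # 0.66 1.0
```

The point is that maximizing accuracy rewards leaning on the majority correlation, so an unconstrained learner can end up more biased than its training set.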
Here’s a useful artificial intelligence introductory lesson from an MIT course: