Category Archives: persuasion

Do our models get in the way?

We’ve seen quite a few descriptions of an emerging paradigm known as the collaborative commons (CC). A problem arises, though, when we take the further step of extrapolating from those descriptions and prescribing what we must do to create a CC. That is, we form a model of what the CC should be and then try to implement it top down. Yet the technology that enables the CC to grow organically has no apparent need of this top-down imposition. To the contrary, it seems more a capitalistic holdover than the middle-out way the CC is naturally evolving.

Bonnita Roy has noted that “In a world as diverse in people and rich in meanings as ours, big change might come from small acts by everyone operating everywhere in the contexts that already present themselves in their ordinary lives.” That is quite a contrast to the enlightened heroes figuring it all out in their complex ivory towers, in the hope that their insight ‘trickles down’ to the rest of us, and it is much closer to how the CC works in practice. Political and social revolution arises from the external socioeconomic system, the mode of production. Development is accomplished not by conforming to a ‘higher’ model but by the actual practice of operating within the emerging socioeconomic system.

Jennifer Gidley noted a similar phenomenon: there is a difference between the research that identifies postformal operations and the people who enact those operations. Much of the research identifying it has itself “been framed and presented from a formal, mental-rational mode.” And those enacting postformal operations don’t “necessarily conceptualize it as such.” So is identifying postformality via formal methodology really just a formal interpretation of what it might be? Especially when those enacting it disagree with some of the very premises of those identifying them?

The online discussions I engage in on meta-models are representative of this difference. The abstract modeling of the CC’s development seems to be an attempt to create it in a top-down manner. What also appears to be happening is that each individual not only has their own thoughts and opinions on the topic, which is to be expected in diverse groups, but that we all end up justifying our own take over the others’. We seem so attached to our own discoveries that we build an edifice around them and then seek out supporting evidence to justify it. When confronted with different perspectives or evidence, our first inclination is to see how they fit into our own model or worldview, how we can twist and manipulate them to support our biases. What holds us together if we are so closed to taking in new information from other perspectives, allowing them to sit in their own right, in their own space, instead of trying to fit them into our own predispositions?

I’m reminded of what Said Dawlabani said: the distributed network of the collaborative commons follows no ideologies; it is open source, highly networked, and depends on the wisdom of the crowd. I’m guessing that applies equally to our models for trying to create the CC, since we tend to idealize and attach to them. Is our ownership of our ideas more indicative of capitalism than of the CC? It also seems that those who are enacting this new paradigm are doing so without need of any explicit theory or model about it. So is arguing about the correct theory even a necessary part of its enactment, as if, like capitalism, it too needs a top-down elite model to implement it? Are our models just getting in the way, counter-productive to its natural evolution?

News startups aim to improve public discourse

A Nieman Reports article highlights four startups seeking to improve public discourse. Let’s hope efforts to create methods and technologies along these lines accelerate and succeed in producing positive outcomes.

How many does it take to tip the scales?

25% of a group, according to this study published in the journal Science.

“A new study finds that when 25 percent of people in a group adopt a new social norm, it creates a tipping point where the entire group follows suit. This shows the direct causal effect of the size of a committed minority on its capacity to create social change.”

This is encouraging, and a key reason I write this blog:

“While shifting people’s underlying beliefs can be challenging, Centola’s results offer new evidence that a committed minority can change what behaviors are seen as socially acceptable, potentially leading to pro-social outcomes like reduced energy consumption, less sexual harassment in the workplace, and improved exercise habits.”
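To make the tipping-point idea a bit more concrete, here is a toy naming-game simulation in which a committed minority never abandons the new norm while everyone else coordinates with whoever they happen to interact with. This is only an illustrative sketch: the update rule, population size, and step count are my own assumptions, not the design of the Science study, so it isn’t expected to reproduce the exact 25% threshold.

```python
# Toy naming-game simulation of a committed minority driving a norm change.
# Illustrative sketch only; the update rule and all parameter values are
# assumptions, not the experimental design used in the Science study.
import random

def run(n_agents=100, committed_frac=0.25, steps=20000, seed=0):
    random.seed(seed)
    n_committed = int(n_agents * committed_frac)
    # Everyone starts with the established norm "A" in memory; committed
    # agents hold only the new norm "B" and never revise it.
    memories = [{"B"} if i < n_committed else {"A"} for i in range(n_agents)]
    committed = [i < n_committed for i in range(n_agents)]

    for _ in range(steps):
        speaker, listener = random.sample(range(n_agents), 2)
        word = random.choice(sorted(memories[speaker]))
        if word in memories[listener]:
            # Successful coordination: both collapse to the shared word
            # (committed agents keep their single word regardless).
            if not committed[speaker]:
                memories[speaker] = {word}
            if not committed[listener]:
                memories[listener] = {word}
        else:
            # Failed coordination: the listener learns the speaker's word.
            if not committed[listener]:
                memories[listener].add(word)

    # Fraction of agents whose memory contains only the new norm "B".
    return sum(1 for m in memories if m == {"B"}) / n_agents

if __name__ == "__main__":
    for frac in (0.10, 0.20, 0.25, 0.30):
        print(f"committed minority {frac:.0%}: "
              f"final share holding only 'B' = {run(committed_frac=frac):.2f}")
```

Running it with a few different committed fractions shows the non-committed majority flipping to the new norm once the minority grows past a critical size, which is the qualitative behavior the study documents.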

AI-enabled software creates 3D face from single photo

I wrote on my blog about this development and more generally about the increasing ease with which AI tools can forge convincing media. Go see my creepy 3D face.

Cambridge Analytica pilfered Facebook data to influence election


Sophisticated, sometimes AI-enabled data analytics tools allow construction of individual personality profiles accurate enough to support targeted manipulation of individuals’ perceptions and actions. 

Analytics firm abused Facebook users’ data to influence the presidential election

Last night Facebook announced bans against Cambridge Analytica, its parent company and several individuals for allegedly sharing and keeping data that they had promised to delete. This data reportedly included information siphoned from hundreds of thousands of Amazon Mechanical Turkers who were paid to use a “personality prediction app” that collected data from them and also anyone they were friends with — about 50 million accounts. That data reportedly turned into information used by the likes of Robert Mercer, Steve Bannon and the Donald Trump campaign for social media messaging and “micro-targeting” individuals based on shared characteristics.

 

https://www.engadget.com/2018/03/17/facebook-cambridge-analytica-data-analysis-chris-wylie/?sr_source=Facebook 

Video: https://www.youtube.com/embed/FXdYSQ6nu-M?rel=0

A dive into the black waters under the surface of persuasive design

A Guardian article last October brings the darker aspects of the attention economy, particularly the techniques and tools of neural hijacking, into sharp focus. The piece summarizes some interaction design principles and trends that signal a fundamental shift in means, deployment, and startling effectiveness of mass persuasion. The mechanisms reliably and efficiently leverage neural reward (dopamine) circuits to seize, hold, and direct attention toward whatever end the designer and content providers choose.

The organizer of a $1,700-per-person event, convened to show marketers and technicians “how to manipulate people into habitual use of their products,” put it baldly:

subtle psychological tricks … can be used to make people develop habits, such as varying the rewards people receive to create “a craving”, or exploiting negative emotions that can act as “triggers”. “Feelings of boredom, loneliness, frustration, confusion and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation”

Particularly telling of the growing ethical worry are the defections from social media among Silicon Valley insiders.

Pearlman, then a product manager at Facebook and on the team that created the Facebook “like”,  … confirmed via email that she, too, has grown disaffected with Facebook “likes” and other addictive feedback loops. She has installed a web browser plug-in to eradicate her Facebook news feed, and hired a social media manager to monitor her Facebook page so that she doesn’t have to.
It is revealing that many of these younger technologists are weaning themselves off their own products, sending their children to elite Silicon Valley schools where iPhones, iPads and even laptops are banned. They appear to be abiding by a Biggie Smalls lyric from their own youth about the perils of dealing crack cocaine: never get high on your own supply.

If you read the article, please comment on any potential future meeting topics you spot. I find it a rich collection of concepts for further exploration.