Tag Archives: risk analysis

Partisan Innumeracy

In his memoir, China in Ten Words, the writer Yu Hua recalls an event following the end of the Cultural Revolution. Literature had been banned for many years, but the memory of its joys had lingered in much of the population. Yu’s formative years fell within that intellectually desiccated period. Emerging from a time when being seen with any book other than officially sanctioned volumes of or about Chairman Mao could have grave consequences, he and many others craved stories featuring relatable human characters and situations.

Eventually China loosened its intellectual restrictions, and word that bookstores would be reopening spread like a prairie fire. Yu tells of a local bookstore that announced it would distribute coupons, good for two books each, to the first customers at the shop’s opening. Yu rose before dawn and was dismayed to find several hundred people already in line, many of whom had waited all night.

To occupy themselves, those waiting speculated about how many coupons the shop owner would hand out. The speculations fell into three general groups. Those who had gotten in line the day before were sure their initiative would be rewarded with coupons. Those who arrived later and made up the middle of the line agreed among themselves that there would be enough coupons for such obviously high-quality people, who had been wise enough to arrive before the deadbeats behind them. Those like Yu, farther back in the line, reached a consensus that there would be enough coupons for them too; after all, they were clearly quality people for having the foresight and diligence to arrive so early. Hilariously, each person’s estimate derived not from anything objective, such as estimating the number of books visible through the shop window, but from an emotional judgment of how deserving they were for making the effort to stand in line and, tellingly, how undeserving the comparative slackers farther back were.

When the shop door opened and the actual number of coupons, 50, was announced, persons one through 50 smugly congratulated themselves while the rest moaned and cursed. Person 51 felt singularly unlucky, and the number ’51’ became shorthand for bad luck throughout the town.

It’s entertaining to poke fun at the sort of irrational innumeracy the Chinese villagers displayed, but it’s a universal flaw in human thinking. Our estimates, and our judgments of others’ estimates and judgments, skew sharply in keeping with our sense of identity. Logically, our sense of who we are, which involves ongoing comparison of ourselves with others, is a psychological matter that has no bearing on quantifiable facts about the world.

Take the results of a recent Ipsos/Axios survey that asked Americans whether they believed the reported number of COVID-19 deaths was accurate, under-reported, or exaggerated. Beliefs closely aligned with political perspective (which is primarily sociological and psychological, not fact oriented). This does not mean objective facts cannot align with a group’s position on a matter. It means such alignment often has less to do with a group’s objectivity or rationality than with how convenient the facts are to the group’s claims about what is real or important.

As for a more facts-based approach to assessing COVID-19 related death counts, the article linked in the preceding paragraph provides objective reasons for why an undercounting scenario is likely:

  • Insufficient testing obscures many COVID-19 deaths.
  • Several states’ death rates are significantly higher than statistics for other causes would suggest, and are still increasing.

What’s important to most people, however, is not recognizing accurate facts or more valid reasoning but feeling they are true to their in-groups’ values and self-definition.

COVID-19 (Average) vs Other Causes of Death (Actual) in the U.S. – Animated Data Graph

Source: Covid vs. US Daily Average Cause of Death, Robert Martin on 8 Apr 2020

For those still saying influenza is a much bigger killer than COVID-19 (the disease caused by SARS-CoV-2), the numbers don’t support that argument, especially since many deaths that strongly appear to be due to COVID-19 are not reported as such because the deceased were never tested. The animation conveys the speed with which an exponentially increasing death toll overtakes other causes that grow at relatively linear rates.
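To make that exponential-versus-linear point concrete, here is a minimal Python sketch. The figures are illustrative assumptions, not the data behind the animation: a leading cause of death with a roughly constant toll of about 1,700 deaths per day (on the order of U.S. heart disease), and an outbreak starting at 10 deaths per day and growing 20% per day.

```python
# Minimal sketch (illustrative numbers only, not the animation's data):
# compare a cause of death with a roughly constant daily toll against one
# growing exponentially, and find the day the exponential curve overtakes it.

def overtake_day(baseline_daily: float, start_daily: float, growth_rate: float) -> int:
    """Return the first day on which the exponentially growing daily count
    exceeds the constant baseline daily count."""
    day = 0
    daily = start_daily
    while daily <= baseline_daily:
        daily *= 1 + growth_rate   # compound the daily toll by the growth rate
        day += 1
    return day

# Assumed values for illustration: ~1,700 deaths/day from a constant cause,
# an epidemic starting at 10 deaths/day and growing 20% per day.
print(overtake_day(baseline_daily=1700, start_daily=10, growth_rate=0.20))  # -> 29
```

With those assumed numbers the outbreak’s daily toll passes the constant baseline in about a month, which is the visual effect the animated graph captures.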

How the Black Death Radically Changed the Course of History

link.medium.com/YRFzoB3Xr5

This article is relevant to our recent discussions and to Zak Stein’s suggestion (see Edward’s recent post) that great destabilizing events open gaps in which new structures can supplant older, disintegrating systems, with the inherent risks and opportunities.

Test determines approximate year of death

Age-at-death forecasting – A new test predicts when a person will die. It’s currently accurate to within a few years and is becoming more accurate. What psychological impact might knowing your approximate (say, ± 6 months) time of death have on otherwise healthy people? Does existing research with terminally ill or very old persons shed light on this? What would the social and political implications be? What if a ‘death-clock’ reading became required for certain jobs (elected positions, astronauts, roles requiring expensive training and education, etc.) or decisions (whom to marry or parent children with, whether to adopt, whether to relocate, how to invest and manage one’s finances, etc.)?

Will self-improving AI inevitably lead to catastrophe?

Paul W sent the following TED Talk link and said:

If AI is by definition a program designed to improve its ability to access and process information, I suspect we cannot come up with serious AI that is not dangerous. It will evolve so fast and down such unpredictable pathways that it will leave us in the dust. The mandate to improve information-processing capabilities implicitly includes a mandate to compete for resources (it needs better hardware, better programmers, technicians, etc.). It will take these from us and, just as we do in following a different mandate of gene replication, from all other life forms. How do we program in a failsafe against that? How do we make sure that everyone’s AI creation has such a failsafe, one that works?

What do you think? I also recommend Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies.