Age-at-death forecasting – A new test predicts when a person will die. It’s currently accurate to within a few years and is getting more accurate. What psychological impacts might knowing your approximate (± 6 months) time of death have on otherwise healthy people? Does existing research with terminally ill or very old people shed light on this? What would the social and political implications be? What if a ‘death-clock’ reading became required for certain jobs (elected positions, astronauts, roles requiring expensive training and education, etc.) or decisions (whom to marry or parent children with, whether to adopt, whether to relocate, how to invest and manage one’s finances, etc.)?
The articles cover the following:
- Mapping AI use cases to domains of social good
- AI capabilities that can be used for social good
- Overcoming bottlenecks, especially around data and talent
- Risks to be managed
- Scaling up the use of AI for social good
From an Axios interview with Elon Musk:
Musk said his neuroscience company, Neuralink, has about 85 engineers, “the highest per capita intelligence” group he has ever assembled, with the mission of building a hard drive for your brain.
- “The long-term aspiration with Neuralink would be to achieve a symbiosis with artificial intelligence.”
- Wait. What? “To achieve a sort of democratization of intelligence, such that it is not monopolistically held in a purely digital form by governments and large corporations.”
Video: https://www.youtube.com/watch?v=yQjUw16Mu_0
This article from the Bulletin of the Atomic Scientists is an interesting overview of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. The author rebuts Bostrom on several points, relying partly on the failure of AI research to date to produce any result approaching what most humans would regard as intelligence. The absence of recognizably intelligent artificial general intelligence is not, of course, proof that it can never exist. The author also takes issue with Bostrom’s (claimed) conflation of intelligence with inference ability, an assumption the author says AI researchers have found to be false.
As much of the world settles into the spectacle and cozy embrace of culturally reinforced magical thinking, New Scientist has several interesting recent articles on the evolved, intuitive nature of religious thinking as a cognitive by-product (of the value of assuming agency behind environmental phenomena, for example) and on how atheism is and is not like religious thinking. I find it an interesting point that religion and atheism (or any -ism), as social constructs, cannot be studied and compared in the same ways that objectively real objects and phenomena can; still, we can learn much from systematically investigating the underlying neurological functions and their probable evolutionary value.
If you don’t subscribe, Albuquerque Public Libraries carry New Scientist.
BMAI friends: The following ramble is my first cut at making sense of the grave role racial (and other) bias is playing in the world today. It was prompted by comments I see daily from my family and friends on social media. Thinking about the great lack of self- and group-awareness many of the commenters display, I turned my scope inward. How do my own innate, evolved biases lead me to take my group’s and my own privileges for granted and to make invalid assumptions about those I perceive (subconsciously or explicitly) to be ‘the other’? I put this forward to start a discussion and hope you will contribute your own insights and references. Feel free to post comments or even insert questions, comments, or new text directly into my text. Of course, you can also create your own new posts. Thanks.
TED talk of possible interest:
Comment I posted there:
Here is an interdisciplinary “moon-shot” suggestion that we should at least start talking about now, before it is too late. Let’s collaborate massively to develop a very mission-specific AI system to help us figure out, using emerging gene-editing technologies (e.g., CRISPR), how best to tweak the (most likely species-typical) genes currently constraining our capacities for prosociality, biophilia, and compassion, so that we can intentionally evolve into a sustainable species. Natural selection, our past and current psycho-eugenicist, will never do this (it cannot), and our current genetic endowment will never allow cultural processes or social-engineering approaches to transform us adequately. Purpose-designed AI systems feeding on growing databases of intra-genomic dynamics and gene-environment interactions could greatly speed our understanding of how to make these genetic adjustments to ourselves, our only hope for survival, in a way that is morally optimal (i.e., fewest mistakes due to unexpected gene-gene, gene-regulatory (exome), and epigenetic interactions; fewest onerous side effects) as well as maximally effective and efficient. Come together, teams of AI scientists and geneticists! We need to take our collective pan-cultural intrapsychic fate out of the dark hands of natural selection, and AI can probably help. END
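To make the trade-off in that comment a bit more concrete (seek the largest hoped-for trait benefit while penalizing predicted interaction side effects), here is a minimal toy sketch in Python. It is purely illustrative and not part of the original comment: the gene names, scores, and risk weight are invented placeholders, not real genomic data or any published method.

```python
# Toy sketch only: rank hypothetical candidate gene targets by an
# (assumed) trait benefit minus a penalty for (assumed) interaction risk.
# Every name and number below is a made-up placeholder for illustration.
from dataclasses import dataclass


@dataclass
class Candidate:
    gene: str                 # hypothetical gene identifier
    trait_effect: float       # assumed effect on the target trait (0..1)
    interaction_risk: float   # assumed risk of unexpected gene-gene or
                              # epigenetic side effects (0..1)


def rank_candidates(candidates, risk_weight=2.0):
    """Score each candidate as benefit minus weighted risk, highest first."""
    scored = [(c.trait_effect - risk_weight * c.interaction_risk, c)
              for c in candidates]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)


if __name__ == "__main__":
    toy_candidates = [
        Candidate("GENE_A", trait_effect=0.6, interaction_risk=0.10),
        Candidate("GENE_B", trait_effect=0.9, interaction_risk=0.50),
        Candidate("GENE_C", trait_effect=0.4, interaction_risk=0.05),
    ]
    for score, c in rank_candidates(toy_candidates):
        print(f"{c.gene}: score={score:.2f}")
```

In any real version of the idea, those scores would have to come from models trained on the kinds of genomic and gene-environment databases the comment imagines, and the risk weight would encode how strongly “fewest mistakes and side effects” is prioritized over raw effectiveness.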