AI has learned to probe the minds of other computers. Science, 27 July 2018.
http://www.sciencemag.org/news/2018/07/computer-programs-can-learn-what-other-programs-are-thinking
Does this imply that if we get multiple AIs online, they will quickly merge into one? — PJW
I’m reading Kevin Kelly’s The Inevitable. He makes a similar point: a human-level or beyond-human-level AI mind is unlikely to arise in an isolated lab; it will more likely emerge from the hyperconnectedness of the internet’s trillions of resources and processors.
It’s interesting that the idea of AIs possessing a theory of mind (ToM) is arising just as the assumption that humans possess ToM is coming under scrutiny. https://aeon.co/essays/think-you-can-tell-what-others-are-thinking-think-again
After reading both articles, it seems the ‘big data’ approach makes AIs far more accurate (over 90%) than humans (≈50%, i.e., chance) at predicting other humans’ complex behaviors (suicide, violence) or identifying whether someone is lying.