A brief rant in response to an article on Lifeboat, "Is it possible to build an artificial superintelligence without fully replicating the human brain?" By the way, does anyone know what Lifeboat really is? I'm a member, but I'm not sure whether I'm just facilitating fundraising for no reason.
While I agree with this article's main point & conclusion, I disagree with almost everything else about it, particularly its opening salvo: "The technological singularity requires the creation of an artificial superintelligence (ASI)." There is no single technological singularity, and standalone "artificial" superintelligence is not necessary for existential threat. AI is extending, facilitating, and accelerating the superintelligence that is human culture. It is a threat exactly to the extent that we are: if we can't control the runaway political and economic forces that lead us to destroy each other and our environment, then we commit genocide and irrevocable cultural and ecological loss. Thinking of AI as some alien force, or even as a moral subject (whether agent or patient), is neither helpful nor accurate.
See further my longer rant on the fact that "superintelligence" and its threats are already here, and that pretending superintelligence requires ape-like AI actually undermines our ability to protect ourselves. See also my page on AI and Robot Ethics.