I was interviewed just this Wednesday for ORF's science blog, http://science.orf.at/. The interviewer, Katharina Gruber, was fascinated by AI consciousness – in particular, whether AI could really experience knowing its maker. Her interview of me is here, by the way.
Yes, of course AI can experience knowing its maker. I'm sure the Watson and Google algorithms have by now indexed quite a lot about how they are put together.
But look, if you believe in biology, then your maker is just a fascinatingly complex process derived from an unsupervised algorithm whose only reward is persistence. And if you are a supernaturalist, then you believe that your maker is an inconceivable supernatural entity that generated you through an inaccessible process.
Neither of those things is anything like just knowing a person. Knowing the exact intention and mechanisms by which you are constructed, and having complete access to the code and procedures by which you operate, is utterly unlike even knowing your parents.
I wish somehow we could learn to stop projecting ourselves into AI. It's dangerous because it enables corruption, and it has very, very little advantage.
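By the way, if "an unsupervised algorithm whose only reward is persistence" sounds abstract, the idea fits in a few lines of code. Here's my own toy sketch – nothing in it is anyone's actual model of evolution, and all the names and parameters are mine – of a process with no objective function at all: whatever happens to persist gets copied, with variation.

```python
import random

def persistence_only_evolution(population, survives, mutate, generations=100):
    """Minimal selection loop: no labels, no objective, no supervision.
    The only 'reward' is persistence -- whatever survives gets copied."""
    for _ in range(generations):
        survivors = [x for x in population if survives(x)]
        if not survivors:
            return []  # the lineage went extinct
        # Refill the population by copying (with variation) whatever persisted.
        population = [mutate(random.choice(survivors))
                      for _ in range(len(population))]
    return population

# Toy usage: 'organisms' are just numbers; surviving means staying near 10.
final = persistence_only_evolution(
    population=[random.uniform(0, 20) for _ in range(50)],
    survives=lambda x: abs(x - 10) < 5 + random.uniform(0, 2),
    mutate=lambda x: x + random.gauss(0, 0.5),
)
```

Notice nothing in that loop is told what "good" looks like; complexity like ours just falls out of copying what didn't die.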
See also my previous blogposts on AI being a second-order moral patient, including:
- If robots ever need rights we'll have designed them unjustly
- Robots are owned. Owners are taxed. Internet services cost Information
- Robots are more like novels than children
In 2024, Yeniçağ Gazetesi (a Turkish news site) published an article quoting me as saying that "AI cannot perceive God as a personal being because it is not conscious. It only analyzes the definitions of God put forward by humans and the meanings they attach to this concept." [double quotes theirs!] In fact, I've long said AI systems are TOO conscious to be smart – they have perfect records, not compressed understandings (LLMs do alter that, though). As I wrote back to the journalist (she briefly linked to me on LinkedIn, then fled the site): "I think AI systems *can be* conscious (can have perfect episodic memory, perfect self-knowledge, can discuss these) but that whether or not they are architected that way has no moral consequences, and certainly doesn't help or hurt with 'understanding' the supernatural (not that that's possible)."

You can find an overview of my work on AI consciousness here: https://www.joannajbryson.org/artificial-consciousness-emotions-dreaming-and-ethics – or you might want to read the blogpost about how LLMs and word embeddings relate to AI consciousness that I wrote the day my bias paper was published by Science, or indeed the other blogposts under the consciousness label here.
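If the records-versus-compression distinction above sounds slippery, here is a minimal sketch of what I mean – entirely my own toy illustration, not any deployed system's architecture. One "memory" keeps a perfect verbatim log; the other folds everything into a small fixed-size vector, roughly the trade-off word embeddings make.

```python
class PerfectRecord:
    """Perfect episodic memory: every input stored verbatim, exactly recallable."""
    def __init__(self):
        self.log = []

    def observe(self, event):
        self.log.append(event)

    def recall(self, i):
        return self.log[i]  # lossless

class CompressedMemory:
    """Lossy 'understanding': inputs folded into a fixed-size vector,
    the way embeddings trade verbatim recall for generalisation."""
    def __init__(self, dims=8):
        self.state = [0.0] * dims

    def observe(self, event):
        # Fold each character into one bucket of the fixed-size state;
        # different inputs can collide, so information is lost by design.
        for ch in event:
            self.state[hash(ch) % len(self.state)] += 1.0

    def recall(self, i):
        raise NotImplementedError("only the compressed state survives")

exact, fuzzy = PerfectRecord(), CompressedMemory()
for event in ["met my maker", "read my own source code"]:
    exact.observe(event)
    fuzzy.observe(event)
print(exact.recall(0))   # -> 'met my maker', verbatim
print(fuzzy.state)       # -> 8 numbers; the original events are unrecoverable
```

The first class is what I mean by "too conscious to be smart"; the second is what compression buys you – generalisation, at the price of ever getting the original back.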