Geoff Hinton is a super genius to whom I am personally indebted and with whom I am acquainted; we both spent a lot of time in Edinburgh, including overlapping stints in the 1990s. However, he is a genius of machine learning, not of governance.
Please do watch the clip (I watched it at 2x speed), and note that Musk is promoting it.
What Hinton gets wrong in the clip
He treats a potential AGI as a unitary other. He literally talks about AGI as a mother to a human child, saying that humans have evolved to care about babies crying, but that the mother mostly has the upper hand. This is entirely the wrong metaphor for AI. AI is more like a library, or, if anyone is stupid enough to give a fully synthetic entity legal personality / a capacity for ungoverned action, then it is more like a city or corporation. We have been governing such things for millennia. We do know how to do this. We do sometimes get it wrong, cf. Russia. But this is not a novel problem, nor will it be solved solely by corporate spend on in-house projects.
What Hinton gets right in the clip
We ought to be spending at least a third of what we presently spend building AI on governing AI and its consequences. Hinton also calls for strong regulation (though only to force companies to spend this money on safety in-house), and lists a variety of harms we need to be working on; it is a good list.
What this wrongness and rightness imply
The money Hinton earmarks should be paid in tax, so that governments can afford to hire skilled AI expertise, build more competence in their own houses, be less beholden to corporations and consultancies, and enforce the laws they are writing and have already written.
Links to other, longer material supporting the above
- AI Is Not a Unitary Actor: My Response to the UN Interim Report Consultation. My response from earlier this year. Geoff makes a common error here, one the US seems to be promoting and the UN interim document shared. Note, though, that UNESCO and the EU don't have this problem, while a lot of other, US-dominated organisations do.
- People who worry about AGI often wind up recapitulating the social sciences badly / unnecessarily naïvely. See the last paragraph of the section "Four Definitions of AGI" from this 2018 blogpost.
- AI being made a legal actor would create the ultimate shell company, which is exactly what a lot of the rich people promoting these kinds of metaphors want. That link goes to my 2017 article with two law professors, Tom Grant and Mihailis Diamantis.
- AI is nothing like a child, OR a parent. Parents and children are really roles that individual components of biological lineages take on. Corporate products are nothing like elements of biological lineages.
Comments
ChatGPT has interesting responses to your prompt. And you have to prompt it. It comes back with a good response.
PhD level? A PhD holder is, what, 30 years old? Went through life for 30 years, came up with papers on his own, got peer reviewed, etc. He went through life; ChatGPT did not.
Even with AGI: human brains don't operate in a vacuum. We learn off the other 8 billion brains. So AGI won't be there until it is more human-like.