What Geoffrey Hinton gets wrong and right about AGI (super short post)

Geoff Hinton is a genius to whom I am personally indebted and with whom I am acquainted: we both spent a lot of time in Edinburgh, with overlapping stints in the 1990s. But he is a genius of machine learning, not of governance.

Please do watch the clip (I watched it on 2x speed-up), and note that Musk is promoting it.

[Image: an X post in which Musk says Hinton is right. Click to go to X and watch the clip there.]

What Hinton gets wrong in the clip

He treats a potential AGI as a unitary other. He literally describes AGI as a mother to a human child: humans have evolved to care when babies cry, but the mother mostly has the upper hand. This is entirely the wrong metaphor for AI. AI is more like a library, or, if anyone is stupid enough to give a fully synthetic entity legal personality or a capacity for ungoverned action, more like a city or corporation. We have been governing such things for millennia. We know how to do this. We do sometimes get it wrong, cf. Russia. But this is not a novel problem, nor will it be solved solely by corporate spend on in-house projects.

What Hinton gets right in the clip

We ought to be spending at least a third of what we presently spend building AI on governing AI and its consequences. Hinton also calls for strong regulation (though only to force companies to spend this money on safety in-house), and lists a good variety of harms that we need to be working on.

What this wrongness and rightness imply

The money Hinton earmarks should be paid in tax, so that governments can afford to hire skilled AI expertise, build more competence in their own houses, be less beholden to corporations and consultancies, and enforce the laws they are writing and have already written.

Links to other, longer material supporting the above

Comments

I said this:

ChatGPT has interesting responses to your prompt. But you have to prompt it; then it comes back with a good response.

PhD level? A PhD holder is, what, 30 years old? They went through 30 years of life, came up with papers on their own, got peer reviewed, etc. ChatGPT did not.

Even with AGI: human brains don't operate in a vacuum. We learn off the other 8 billion brains. So AGI won't be there until it is more human-like.