Generative AI is a magic 8 ball, not AGI.

Using LLMs to answer open-ended questions is an abuse of intelligent technology. LLMs are, and will by nature always be, Magic 8 Balls. They only output sensible text when given prompts with sufficient context as input: for example, when asked to summarise or smooth text, or to report common answers to really common questions.

Didn't we all learn not to ask Magic 8 Balls "do my parents really love me" or "should I kill myself" when we were about 5 years old? (See the answer in the image or its ALT text below.)

How could LLMs ever have been presented otherwise? I believe there are only two explanations:

  1. Technologists (and others) ignorant enough to blindly pursue AGI.
  2. Corporations and individuals intentionally interfering with real AI regulation.

Relatedly, GPAI stands for two things:

  1. GPAI (the Global Partnership on AI), an organisation designed by members of the G7 to provide AI information for governance, but then obliged by the US under Trump to be "non-normative", and
  2. "general purpose AI", a surrogate term for "AGI" that really means only LLMs and other generative AI, and their "foundation models". The term was introduced late in the AI Act development process, I believe by corporate lobbyists, and so far it is really only being used to discuss foundation models, not truly general-purpose instruments like logic and Bayesian inference.

This all comes from yet another Chatham House Rule discussion of AI regulation that got bogged down in this nonsense (this one led by the Ada Lovelace Institute and referencing UK law). Which reminds me, I haven't blogged the first one here, only on LinkedIn. That was at WEF.


A Magic 8 Ball saying "As I see it, yes".
Image from Wikimedia, provided under CC 2.0. The alleged original prompt (or the title of the piece, if it is art) is "Magic 8 Ball: Instrument of Evil?" Full credit: https://en.wikipedia.org/wiki/Magic_8_Ball#/media/File:Magic_8_Ball_-_Instrument_Of_Evil?_(2426454804).jpg


Comments

Anonymous said…
What you wrote sounds like cheap dogmatic assertions by a verbalist who knows very little about AI but wants to be relevant in the field.
You and your verbalist ilk who have high verbal skills and little ability for science should really be afraid of LLMs because they may soon render you obsolete.
Joanna Bryson said…
Wow, my first non-spam comment in years. Too bad it's by someone too afraid to identify themselves, so we can't see who's right 20 years from now. (I think I first set up this blog in December 2002?)

I have a PhD in the systems engineering of AI (from MIT) and have been working and publishing in the field since 1991. Before that, I was a professional programmer for 5 years (paying off my undergraduate debts). Though 4 years ago I left computer science as a department to focus full time on governance, because we're in a crisis of governance, I do still have an ongoing robotics project (with Humboldt University).