Guardrails – the book, the review, and the paragraphs relating to genAI

I was very flattered to be asked by Science to read and review Guardrails: Guiding Human Decisions in the Age of AI, the new book by Urs Gasser and
Viktor Mayer-Schoenberger. The review is here; you should be able to see the whole thing without a subscription, since it's only one page: https://www.science.org/doi/10.1126/science.adn6814 I very much enjoyed the book. But I'm taking advantage of having a blog to post two paragraphs Science had to cut for length, which explain how guardrails on generative AI such as ChatGPT and Gemini work, which is NOT the topic of this book.

"Approached about reviewing the book in late 2023, I had anticipated that it would be at least in part about the guardrails that were all the rage that year: the ones being imposed on generative AI systems like ChatGPT. Generative AI such as a large language model (LLM) is based on enormous multi-layered networks trained using humongous numbers of processors on vast quantities of data over periods of months, requiring billions of dollars' worth of electricity. LLMs are basically Web search on steroids: rather than only learning the 'meaning' (appropriate use) of words (Caliskan, Bryson, & Narayanan, 2017), they learn how to turn out phrases, sentences, and paragraphs as well. However, they are at heart prediction machines, and where they are asked to predict from too little information, the outcome is of course random. Because LLMs are fed plenty of data on producing confident, fluent text, these random 'hallucinations' come out just as convincingly phrased as sentences closer to the truth. Guardrails in the context of LLMs consist of any rule that makes the system less likely to produce not just 'hallucinations' (randomness) but also any objectionable or biased content found reliably enough in the training data that it might be regurgitated. LLM guardrails provide extra context to try to nudge the machine into generating palatable outputs, but they cannot guarantee success. As such, LLM guardrails are indeed akin to the many kinds of guardrails Guardrails' authors describe.

"From the first pages, the authors are at pains to point out that guardrails are not walls. Guardrails constrain you, but you can climb over them or even adjust their location. Your agency matters. In fact, human agency is the core of their book, as the subtitle makes evident. Good guardrails must be legitimated, that is, rooted in social structures. Governance mechanisms (guardrails) must achieve individual human empowerment by facilitating society-wide learning through iterative self-improvement. These requirements are what make it a very bad thing that limits are increasingly being encoded as both digital and inflexible, and that this is being done by corporations, though also sometimes by governments."
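Since those cut paragraphs describe the mechanism only in prose, here is a minimal sketch of what an LLM guardrail can look like in code. This is my own illustration, not anything from the book or the review: the generate() function is a hypothetical stand-in for any LLM API, and the system-prompt text and blocklist entries are invented placeholders.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM completion API."""
    return "..."  # a real system would return model output here

# Guardrail 1: extra context prepended to every prompt to nudge the
# model toward palatable outputs (commonly called a "system prompt").
SYSTEM_CONTEXT = (
    "You are a careful assistant. If you are not confident an answer "
    "is well grounded, say that you do not know."
)

# Guardrail 2: a post-hoc rule that rejects outputs matching known-bad
# patterns. Real deployments use trained classifiers; a simple word
# list is enough to show the idea.
BLOCKLIST = {"example_slur", "example_dangerous_instruction"}

def guarded_generate(user_prompt: str) -> str:
    """Wrap the model call in both guardrails."""
    output = generate(SYSTEM_CONTEXT + "\n\n" + user_prompt)
    if any(bad in output.lower() for bad in BLOCKLIST):
        # Refuse rather than regurgitate objectionable content.
        return "I can't help with that."
    return output
```

Note that neither rule is a wall: the extra context only shifts the model's predictions, and the output filter only catches patterns someone thought to list. That is exactly why guardrails of this kind make bad outputs less likely but cannot guarantee success.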
