Generative AI use and human agency

Generative AI is AI built to learn patterns from data, which can then be used to create new patterns similar to those in the data. ChatGPT and Mistral are examples of generative AI. There are many other kinds of AI, for example, search.
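
To make that concrete, here is a minimal sketch of the "learn patterns from data, then create similar patterns" idea, using a toy bigram model in Python. This is my illustration, not how any real system is built; real generative AI uses neural networks with billions of parameters, but the core move of predicting plausible continuations from learned statistics is the same.

```python
import random
from collections import defaultdict

# Toy training data: the only "patterns" this model will ever know.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Learn the patterns: record which words were observed to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate a similar pattern: repeatedly predict a plausible next word.
word = "the"
output = [word]
for _ in range(10):
    options = follows.get(word)
    if not options:                # no observed continuation: stop
        break
    word = random.choice(options)  # a prediction, nothing more
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat . the cat sat on"
```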

When I say just "AI" below, I mean all of AI, including but not limited to generative AI.

  1. You do not have to use generative AI. You may have to do spell checking or grammar checking, and the best versions of these might be generative AI, but otherwise generative AI is optional.
  2. Generative AI is not a source of facts, or of data about anything other than generative AI itself. Its output consists only of predictions, and these predictions have to be tested (the sketch after this list shows why).
  3. Generative AI's output is often wrong; its predictions can be nonsense. The people who get the most out of using it are often experts who can quickly spot and fix such errors. Generative AI will always sound fluent, confident, and competent, assuming that is the tone most common in its training data. Using generative AI output on a topic you haven't researched is extremely hazardous.
  4. AI itself cannot be held to account. Its behaviour is not altered by the application of legal penalties. The companies that produce or deploy AI may be held to account, though, or at least the humans in them. Therefore, AI is never itself an actor deserving credit or blame. It is more like art: it can be good or bad, useful or not, but it is not like an artist.
  5. If you use AI, you are the one who is accountable for whatever you produce with it. You have to be certain that whatever you produced is correct. You cannot ask the system itself to do this. You must either already be expert enough at the task to recognise good output yourself, or you must check the validity of any output through other, independent means.
  6. There are contexts in which it is immoral to use generative AI. For example, if you are a judge responsible for grounding a decision in law, you cannot rest that decision on an approximation of previous cases unknown to you. You want an AI system that helps you retrieve specific, well-documented cases, not one that confabulates fictional cases. You need to ensure you procure the right kind of AI for a task. That "right kind" is determined in part by how essential human responsibility is to the task.
  7. Any argument, or even discussion, in which you feel you need to blame generative AI for saying or doing or telling you something rests on an error. Blaming AI for the content of something you wrote or thought is just as wrong as blaming a laptop or typewriter or pencil. Generative AI is just another way for you to generate things, and it's not always a good one. You don't use a jet or a car to go next door. You don't use a piledriver to put a thumbtack on a bulletin board.
  8. The purpose of education is personal improvement: to make yourself a more skilled and knowledgeable person. Overuse of generative AI may prevent you from achieving these goals.
  9. Generative AI produces above-average human output, but typically not top human output. If you overuse generative AI, you may produce more mediocre output than you are capable of.
  10. Overuse of AI may leave you vulnerable and incompetent without it. No organisation should assume it will always have access to generative AI. All organisations should be able to function reasonably effectively when electricity or telecommunications are down, though what "reasonably" means varies with how important the organisation is. More subtly, every organisation should remember that "guardrail" training can be used to favour adversarial perspectives in generative AI output.
  11. Correcting or fact-checking generative AI may take longer than just doing a task yourself, or doing it with conventional AI tools.
  12. You do not have to use generative AI. 
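
On points 2 and 3, here is a second toy sketch (again mine, not any real system) showing why fluent predictions are not facts: a model trained only on true sentences will rate a false sentence as an equally plausible pattern, because plausibility of form is all it measures.

```python
from collections import defaultdict

# Train on TRUE statements only.
facts = ["paris is the capital of france",
         "rome is the capital of italy"]

follows = defaultdict(set)
for sentence in facts:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].add(nxt)

def fluent(sentence):
    """True if every word-to-word step matches a learned pattern."""
    words = sentence.split()
    return all(nxt in follows[prev] for prev, nxt in zip(words, words[1:]))

print(fluent("paris is the capital of france"))  # True -- and true in fact
print(fluent("paris is the capital of italy"))   # True -- but false in fact
```

The model cannot tell the true completion from the false one. That test has to come from you, the human who remains accountable.
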
Generative AI (like cars, jets, and piledrivers) can use a lot of energy. So it's not just a big, blunt instrument: a lot of genAI systems are big, blunt, unsustainable instruments. That might be addressed with lighter-weight models or more sustainable energy, but the points above stand independent of how the generative AI is developed and run. Similarly, many famous genAI engines ran roughshod over copyright holders. I strongly support campaigns to compensate IP holders and data subjects generally, but even if we get these things right, the points above still hold.

I was also tempted to include in 9, as a middle sentence, "Note that if you are in an elite context, like attending a university, above average for humanity widely could be below average for your context." It's true, and maybe important for students to read. But 9 is already true without that fact, and as written it is true for everyone.

More on AI and accountability:
More from this blog on how generative AI (especially LLMs) works:
Stuff not by me:
Thanks to Melissa McCradden for the energy reminder, nick splendorr for the copyright reminder, and Eerke Boiten for a note on tone. Thanks to Andrew Davies and Simon Willison for reminding me that people are most likely to misuse generative AI output when they least understand it. Note that I added and changed the order of some items, so pre-21 February social media posts may reference the numbering inaccurately. The emphasis on deciding when not to use AI at all came from an AI Policy Summit Virginia Dignum ran, which I have yet to blog about, but should. I did already write a LinkedIn post about it.
