Dear Sender,
I'm very sorry, but I must strongly suggest that you are experiencing a loss of individuality through overdependence on a corporate technology.
This is how generative AI works:
- A corporation steals or buys immense amounts of data, and then trains a "foundation model" on that data to predict the average thing to say next in a given context.
- Those models are super fun and interesting, but they will also say dangerous or harmful things. So corporations create a facade of human-like interaction and politeness by paying humans to train additional data "attractors", usually called "guard rails". These basically make it more likely that you'll get your predicted text from an "appropriate" part of the foundation model. They can also be programmed to do other things, like Web searches to get more up-to-date information.
- Then the corporation keeps some data about you and your interactions, and starts customising the text predictions so that they are influenced not only by your immediate prompts, but also by things you've said before, and maybe by things that other people like you have said before (see the sketch after this list).
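If it helps to see the structure laid out, here is a tiny, purely illustrative sketch in Python of the arrangement I've just described. Every name in it (FoundationModel, PersonalisedAssistant, GUARD_RAILS, and so on) is made up for this letter – no company's actual system looks exactly like this – but the shape is the point: one shared model serves everyone, and the "personal" part is just a thin wrapper that prepends the provider's guard-rail instructions and your stored history to each prompt.

```python
# Purely illustrative sketch -- all names are hypothetical, not any vendor's real API.

class FoundationModel:
    """One giant model, trained once on scraped or bought data and shared by all users."""

    def generate(self, context: str) -> str:
        # In reality: repeatedly predict the statistically likely next token
        # given the context. Stubbed out here for illustration.
        return "<text predicted from the shared model, given the context>"


# A single shared instance serves millions (or billions) of users.
SHARED_MODEL = FoundationModel()

# The provider's "guard rails", here reduced to a standing instruction.
GUARD_RAILS = "Respond politely and helpfully; avoid content the provider deems inappropriate."


class PersonalisedAssistant:
    """The thin 'personal' layer: your stored history plus the shared model."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.history: list[str] = []  # the small bit of data that is actually "yours"

    def ask(self, prompt: str) -> str:
        # "Personalisation" is mostly just assembling a context:
        # provider guard rails + your stored history + your current prompt.
        context = "\n".join([GUARD_RAILS, *self.history, prompt])
        reply = SHARED_MODEL.generate(context)
        self.history.append(prompt)
        self.history.append(reply)
        return reply


# Two different "personal" AIs still share exactly the same underlying model.
alice = PersonalisedAssistant("alice")
bob = PersonalisedAssistant("bob")
print(alice.ask("What should I write today?"))
print(bob.ask("What should I write today?"))
```

Notice how little of the sketch is actually about you: just the few lines that store and replay your own history.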
So basically, "your" AI is by no means your own. It is a tiny interface onto an animated version of a giant library of human culture. That special small interface the corporation is making for you slightly customises the animation of that library for you. There are millions or even billions of other ordinary people – "users" – interfaced onto that same foundation model. Each has a special interface, built for each of them in this automated way, based on their interaction. But each of those individualised, animated AI interfaces is sharing the same "mind", the same database, except for the very small bits of data special to your individual history.
It's kind of like seeing a reflection of yourself in an augmented-reality magic mirror that mixes you, some kind of average of some society's culture, and the guard rails that the company whose product(s) you use has decided are most suitable for you. You are sacrificing your agency to that corporation in exchange for access to cool synthesised fractions of their foundation model's inputs. Lots of other people are having the same experience. By far, most of them do not interpret their experience the way you have. However, given how many people are using genAI, even a tiny, tiny fraction of them having your experience is quite a lot of people, and a few write to me almost every week.
I'd like to urge you to read my other blog post, Generative AI use and human agency. It has 12 bullet points to help you think about how to use AI in a way that maintains your own agency. Maintaining your own agency – knowing where you start and where the AI you are using ends – will help you, and through you, the rest of society. We are all better off if we can all know and understand our responsibilities to each other, and our responsibilities for what we create, however we create it;
yours,
Joanna