2024 BRIEF overview of LLMs / "foundation models" / 2023's version of "AI"

History: In 2023, the EU was consolidating its legislation called "The AI Act" (or AI Regulation). Despite the fact that this is a fairly boring piece of legislation just ensuring that at least some digital products (the ones that may automatically alter human lives) are subject to proper product law, there was a massive amount of lobbying and misinformation. Part of all that was misdirection: pretending that the only "real" AI was the newest stuff on the block – large language models (LLMs) like ChatGPT, which produce fluent and occasionally accurate verbal output, and related technologies producing sounds and images.

Provenance / Cost: While "generative AI" (models that can both interpret data and produce data similar to what they can interpret) has been around since at least the 1990s, the largest of the large models are in the USA, produced by a small number of companies at truly enormous expense. The expense of these largest models runs into the billions of US dollars, and this is not only because of the data. The largest models take months to train on enormous amounts of extremely expensive hardware, consuming vast quantities of electricity and water.

Usefulness / threat: It is not clear that these models are really that useful compared to similar models built on a smaller scale, or even to simple conventional keyword Web search. So far, people are not making more money off of them than they are spending building them. Certainly, though, many individuals and corporations are trying to figure out whether having automatically-generated, partially-correct text can become a part of their workflow. There are reasonable claims that such systems can replace bad and mediocre writing by humans, but they don't tend to be able to (re)produce the best quality work. Also, such models seem able to analyse human text better than many humans – certainly much faster, and with less bias (if they are programmed with that consideration in mind). This will, like most automation, lead to different human skills having different value, and thus to economic disruption for many households. But it is certainly not the end of work.

There is also some fear that AI will fill the Internet with nonsense and swamp the useful data, but these claims also seem overwrought. Web search is still the best way to find the highest quality (including most dangerous) publicly-available human knowledge. Generative AI, while creatively interpolating between found results, does not innovate truth – it just creates a lot of conjectures very quickly. Sometimes these conjectures are called "hallucinations."

Can the "hallucination" / invention / lying problem be fixed? No. These are systems of prediction. Predictions made from insufficient data will always be random. The problem is that the same thing that makes these systems really useful (that they are learning about culture e.g. language at many different levels simultaneously) also ensures that they are deeply inhuman – there is no way to tell from the syntax or tone of a sentence how correct the content part of the model is (the semantics). Nothing in modelling performed this way retains information about how much data underlies the predictions. It is possible that eventually a parallel structure could provide some of this information though, which could make the systems more useful.

Will generative AI be omniscient if we give it enough data, electricity, water, processors, and time? No. Scientific discovery is a process that takes time, space, and energy (like the rest of computation), and in particular requires comparing novel ideas to specific types of matched data, ordinarily acquired for the purpose. Generative AI / foundation models are just a kind of weird summarisation of existing knowledge – like a novel kind of interface onto a library. But human culture does not contain all the facts in the universe. Generative AI is not in itself a means to advance knowledge; it's a means to retrieve certain kinds of information we already had. Though note that some social scientists are finding it useful as a part of their scientific process, accelerating research into human culture and opinions.

How is overemphasis on generative AI foundation models a regulatory threat? Wow, I'm surprised you asked that, but glad – it is one of my favourite concerns of the moment. The EU has actually, since at least the GDPR, been doing a pretty good job of addressing concerns about excesses of private and state power and misuse of AI and digital services more generally. Such work is essential to maintaining political, economic, and social stability, including through the exercise of democracy. The US, for reasons that are not entirely clear but possibly having to do with global advances in equalising power structures, has not really been supportive of this important effort. Notably, the EU is trying to ensure the stability of its own member nations' societies, as is its legal and moral obligation, but the US is the domicile of companies with global reach, including into the EU. The GDPR actually massively benefited these companies (and domestic EU ones, and ones from other global regions) by creating a mostly harmonised digital market, easing legal and commercial access to the 430 million fairly affluent residents of the European Economic Area. The EU is predominantly a trade organisation, but it has to effect increases in trade in ways that do not harm its member nations. It is this care for safety and security that other nations and some private companies are weirdly obsessed with, rather than focussing on the concomitant advances in economic opportunity.

Since late 2023, the US in particular has been "leading" global "AI regulatory efforts" – where "AI" means primarily these foundation models – at the level of the G7, G20, GPAI, and UN. It's touch and go even with the Council of Europe. Only the EU and UNESCO seem still mostly interested in the broader concerns of decent regulation of AI, broadly understood. The US now says it will write law that it hopes will be interoperable with the EU law. The concern with being too integrated with such law is that it may drag the EU's quite good efforts – integrating digital governance broadly into the rule of law – down this weird rabbit hole, which covers only a small fraction of what should be our real concerns.


Me being tired of talking about LLMs in Dec 2023.
(Or maybe I was tired of jetlag.)
See also my earlier related posts
