2024 BRIEF overview of LLMs / "foundation models" / 2023's version of "AI"

History: In 2023, the EU was consolidating its legislation called "the AI Act" (or AI Regulation). Although this is a fairly boring piece of legislation just ensuring that at least some digital products (the ones that may automatically alter human lives) are subject to proper product law, there was a massive amount of lobbying and misinformation. Part of all that was misdirection: pretending that the only "real" AI was the newest stuff on the block – large language models (LLMs) like ChatGPT, which produce fluent and occasionally accurate verbal output, and related technologies producing sounds and images.

Provenance / Cost: While "generative AI" (models that can both interpret data and produce data similar to what they interpret) has been around since at least the 1990s, the largest of the large models are in the USA, produced by a small number of companies at truly enormous expense. The expense of these largest models runs into the billions of US dollars, and not only because of the data. The largest models take months to train on enormous amounts of extremely expensive hardware, consuming vast quantities of electricity and water.

Usefulness / threat: It is not clear that these models are really that useful compared to similar models built on a smaller scale, including simple conventional keyword Web search. So far, people are not making more money off of them than they are spending building them. Certainly, though, many individuals and corporations are trying to figure out whether having automatically generated, partially correct text can become a part of their workflow. There are reasonable claims that such systems can replace bad and mediocre writing by humans, but they don't tend to be able to (re)produce the best-quality work. Also, such models seem to be able to analyse human text better than many humans – certainly much faster, and with less bias (if they are programmed with that consideration in mind). This will, like most automation, lead to different human skills having different value, and thus to economic disruption for many households. But it is certainly not the end of work.

There is also some fear that AI will fill the Internet with nonsense and swamp the useful data, but these claims also seem overwrought. Web search is still the best way to find the highest-quality (including most dangerous) publicly available human knowledge. Generative AI, while creatively interpolating found results, does not innovate truth – it just creates a lot of conjectures very quickly. Sometimes these conjectures are called "hallucinations."

Can the "hallucination" / invention / lying problem be fixed? No. These are systems of prediction, and predictions made from insufficient data will always be random. The problem is that the same thing that makes these systems really useful (that they are learning about culture, e.g. language, at many different levels simultaneously) also ensures that they are deeply inhuman – there is no way to tell from the syntax or tone of a sentence how correct its content (the semantics) is. Nothing in modelling performed this way retains information about how much data underlies the predictions. It is possible, though, that eventually a parallel structure could provide some of this information, which could make the systems more useful.
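To make that last point concrete, here is a minimal sketch – a toy bigram model in Python, nothing like how a production LLM is actually built – showing that a predictive model's output probabilities carry no record of how much data lies behind them, and that only a separately maintained count structure (one hypothetical form the "parallel structure" above might take) reveals the difference:

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word from the current
# word alone. This is a deliberately tiny stand-in for a predictive text
# model, used only to illustrate one point: the prediction itself carries
# no record of how much evidence is behind it.
corpus = (
    "the cat sat on the mat . the cat sat on the chair . "
    "the dog sat on the mat . zygote splits once"
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return (most likely next word, its probability under the model)."""
    c = counts[word]
    best, n = c.most_common(1)[0]
    return best, n / sum(c.values())

def evidence(word):
    """A hypothetical 'parallel structure': a separate record of how many
    observations underlie predictions made from this context."""
    return sum(counts[word].values())

for w in ["the", "zygote"]:
    nxt, p = predict(w)
    print(f"after {w!r}: predict {nxt!r} with p={p:.2f}, "
          f"from {evidence(w)} observation(s)")

# Output:
#   after 'the': predict 'cat' with p=0.33, from 6 observation(s)
#   after 'zygote': predict 'splits' with p=1.00, from 1 observation(s)
# The model is *most* confident exactly where it has the *least* evidence;
# nothing in the probabilities themselves reveals this.
```

In a real foundation model the counts are dissolved into billions of learned weights, which is precisely why that evidence record is not retained, and why any such signal would have to be engineered alongside the model rather than read off its outputs.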

Will generative AI be omniscient if we give it enough data, electricity, water, processors, and time? No. Scientific discovery is a process that takes time, space, and energy (like the rest of computation), and in particular requires comparing novel ideas to specific types of matched data, ordinarily acquired for the purpose. Generative AI / foundation models are just a kind of weird summarisation of existing knowledge – like a novel kind of interface onto a library. But human culture does not contain all the facts in the universe. Generative AI is not in itself a means to advance knowledge; it's a means to retrieve certain kinds of information we already had. Though note that some social scientists are finding it useful as a part of their scientific process, to accelerate research into human culture and opinions.

How is overemphasis on generative AI foundation models a regulatory threat? Wow, I'm surprised you asked that, but glad – it is one of my favourite concerns of the moment. The EU has actually, since at least the GDPR, been doing a pretty good job of addressing concerns about excesses of private and state power, and misuse of AI and digital services more generally. Such work is essential to maintaining political, economic, and social stability, including through the exercise of democracy. The US, for reasons that are not entirely clear but possibly having to do with global advances in equalising power structures, has not really been supportive of this important effort. Notably, the EU is trying to ensure the stability of its own member nations' societies, as is its legal and moral obligation, but the US is the domicile of companies with global reach, including into the EU. The GDPR actually massively benefited these companies (and domestic EU ones, and ones from other global regions) by creating a mostly harmonised digital market, easing legal and commercial access to the 430 million fairly affluent residents of the European Economic Area. The EU is predominantly a trade organisation, but it has to effect increases in trade in ways that do not harm its member nations. It is this care for safety and security that other nations and some private companies are weirdly obsessed with, rather than focussing on the concomitant advances in economic opportunity.

Since late 2023, the US in particular has been "leading" global "AI regulatory efforts" – where "AI" means primarily these foundation models – at the level of the G7, G20, GPAI, and UN. It's touch and go even with the Council of Europe. Only the EU and UNESCO seem still mostly interested in the broader concerns of decent regulation of AI, broadly understood. The US now says it will write law that it hopes will be interoperable with the EU law. The concern with being too integrated with such law is that it may drag the EU's quite good efforts at bringing digital governance broadly into the rule of law down this weird rabbit hole, which is only a small fraction of what should be our real concerns.


Me being tired of talking about LLMs in Dec 2023.
(Or maybe I was tired of jetlag.)
See also my earlier related posts

Comments

Anonymous said…
While this blog raises valid concerns about LLMs, a few points warrant further discussion:

Global AI Development: Focusing solely on US-based large models overlooks substantial AI advancements in China, Europe, and other regions. The cost figures mentioned may also need more context, as they vary depending on specific models and development processes.

Usefulness and Economic Impact: Downplaying the potential usefulness of LLMs and their economic impact might be premature. Successful applications across various industries demonstrate their value, even while profitability is still debated.

Bias and Mitigation: Mitigating bias in AI is more complex than simply programming the model with that intent. Biases are often embedded in the training data itself, requiring ongoing attention to fairness and explainability.

Workforce Disruption: The potential for economic disruption due to AI is real. However, predicting precisely how this will impact the future of work requires nuanced analysis rather than absolutes.

Hallucinations and Nonsense: While AI-generated misinformation is concerning, many systems incorporate safeguards to minimize such risks. The severity of this issue is worth exploring further. The assertion that the "hallucination" issue is unfixable might be overly pessimistic. While inherently predictive models will have inaccuracies, a counterargument could focus on potential techniques to mitigate this, such as fine-tuning, reinforcement learning with human feedback, or introducing mechanisms to explicitly signal uncertainty.

Superiority of Web Search: Stating web search is unequivocally superior for finding the highest-quality information feels a bit absolute. LLMs, coupled with information retrieval techniques, might eventually rival or even surpass traditional search engines in their ability to synthesize and present information in a tailored way.

Generative AI and Knowledge: Dismissing generative AI's contributions to knowledge discovery might be shortsighted. Its value in data analysis and hypothesis generation within scientific fields deserves deeper consideration. LLMs won't become omniscient. However, it's worth noting that, as they are exposed to increasingly massive and diverse datasets, the scope of knowledge they can synthesize expands rapidly. This, in turn, could lead to unexpected capabilities.

Regulatory Approaches: The article's portrayal of EU and US regulatory stances might be somewhat biased. Both regions face challenges in crafting effective AI regulation, deserving a more balanced critique. Implying the US focus on LLM regulation is driven solely by power dynamics could be an oversimplification. There may be genuine concerns around the rapid progress of LLMs necessitating some form of regulatory framework, even if it's alongside broader AI regulation.

Overall, this blog provides interesting insights but could benefit from less subjectivity and a more balanced consideration of the complexities surrounding LLMs and their impact.
Joanna Bryson said…
Ha, assuming that anonymous comment was written with the heavy input of an LLM, I doubt it can be called "objective" :-)