Technologies with and without natural scaling limits (re: quantum, AI, weapons)

This is a somewhat quickly written post about how AI will and will not be altered by quantum computing and other forms of scaling. There's also some speculation about how cryptocurrencies will be affected – genuinely speculation, and I'm really looking for feedback on it, because cryptocurrencies might otherwise be a real challenge for national power. I massively revised this post for clarity on 18 August; let me know if it's still unclear.

Fig 1: Our World in Data charts on AI capacities (click through for Our World in Data's own description); discussed below.


I was talking to some people at the UN yesterday about AI regulation, and one wrote me afterwards to ask about quantum. That reminded me that I've been meaning to write a blogpost about the limits of scaling generative AI for some time. So I'll start with that, since scaling is basically what quantum offers (in some areas).



The utility of machine learning is limited not just by our knowledge and data, but by what we could possibly care to know

This part is taken from a LinkedIn post I wrote in late September or early November 2023 (LinkedIn is not great about history, is it?) If you've already read that, feel free to skip to the quantum section.

On Twitter, my colleague Jack Stilgoe mused that the term "democratize" was losing all meaning. In my opinion, the term has recently been used almost exclusively as a libertarian-anarchist attempt to undermine public understanding of the relationship between democracy and representation. OpenAI and similar companies are pretending that everything that needs to be known can be mined from the populace. That would also imply that, with the advent of LLM, government and regulation have become redundant. [cf. this blogpost about the real capacities of LLM I wrote in February 2024.]

In fact, no supernatural entity has seeded the human population with complete knowledge. The only "complete" knowledge or truth is the universe itself, and that is unworkably detailed – and constantly expanding. Policy and science both require the hard computational labour of synthesising new solutions for current contexts. Governance requires hiring researchers to consolidate this work. Governments that can afford to fund this work themselves, locally – and choose to do so – wind up with better, more bespoke information, and are consequently better able to strengthen and protect their societies.

Picture out the window of an upper floor of the UN, across a relatively narrow body of water (a river?) at another cityscape; there are venetian blinds kind of like horizontal bars running across the windows.
View from the UN yesterday.
Please consider the graphs above (Fig 1) about the recent rises in AI capacity. Don't just look at the slopes. Look at where machine learning plateaus for every competence – not much above human ability. This is because what we are doing with AI is automating aggregate versions of our own skills at manipulating the types of information that are salient to ourselves. Machine learning doesn't create superbeings. It uses our culture to broaden access to our knowledge – knowledge which we have built and paid for. LLM are more like libraries than they are like AGI [my most recent blogpost about that, from June 2024]. If you were to run modern machine learning on all the knowledge humanity had available in 1900, you wouldn't get anything about space flight or antibiotics – or AI, of course. Now, with LLM (and other types of foundation models), we can use our knowledge in more ways, and indeed that may accelerate how fast we can discover new things and develop new knowledge and processes. But generative AI – AI derived from machine learning – won't "discover" or "reveal" what a decent number of us don't already know. It works through aggregation.

Going back briefly to the use of "democratise" to mean libertarian, egalitarian, peer-to-peer organisation: however many new things we find out, we'll still need to coordinate quite a number of our capacities through hierarchical entities like corporations, NGOs, and governments. Some things we don't need to coordinate, but whenever you need to come up with a single policy for a shared good, like the health or security of a population, then you do. Hierarchy gives us agility for coordination – for choosing plans and expressing actions. Of course, hierarchy can also limit the expression of alternatives; whether that's a good thing or a bad one depends on the nature and urgency of the problems being addressed. But ideally, legitimate governments aggregate not only our knowledge but also our goals and interests in sensible ways.

Can quantum actually scale indefinitely? How would that affect inequality and transnational governance?

So anyway, let's go back to the question of scaling AI, and of quantum's impact on that. Judging from the events I've attended on quantum (I don't research it myself), the answer to this question has been pretty consistent for at least five years. It seems unlikely there will be a sudden breakthrough. Rather, the cost of scaling quantum seems prohibitively high. Consequently, in all likelihood it will only be paid by a small number of very affluent nations and corporations. If you want to guess who's likely to be able to pay that cost, I'd bet the 2024 Olympic medal tables are a good proxy, at least for the countries. So in this sense the (further) advent of quantum may increase inequality in a way that AI so far hasn't. And because cybersecurity and cyberwar are both critical issues for our time, maybe diminishing returns won't limit the amount of investment.

And maybe this is similar to cryptocurrency mining (which is basically the same kind of technology as you need for large AI models). Again, this isn't really my area of expertise, but it is the expertise of one of my coauthors, Arvind Narayanan. And he told me there was a theory that as blocks get harder and harder to mine, it's not just that there are diminishing incentives to mine them – there are ever-increasing incentives to figure out ways to break the system rather than work within it. So far, though, what we seem to be seeing is just insane amounts of planetary resources wasted on making "currencies" that behave more like the art market than real currencies – buffeted by fashion, subject to loss and destruction. But what happens if a few countries and companies can use quantum to mine blocks? Won't the whole thing just collapse into a deflated mess – like the art market might if people actually came to care about AI-generated art rather than scarce, elite paintings?
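To make the "harder and harder to mine" part concrete, here's a minimal toy sketch of the proof-of-work idea in Python. This is my illustration, not real Bitcoin code – the fixed difficulty scheme, the single SHA-256 pass, and the mine function are simplifications I've made up for the example. The idea is that miners search for a nonce that gives a block's hash a rare property, and the network raises the difficulty so the expected work keeps growing:

    import hashlib

    def mine(block_data: str, difficulty: int) -> int:
        """Toy proof-of-work: find a nonce whose SHA-256 hash starts
        with `difficulty` zero hex digits. Each extra digit multiplies
        the expected number of attempts by 16."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    # difficulty=5 already takes around 16**5 (about a million) hash
    # attempts on average; real networks tune difficulty far higher.
    print(mine("example block", 5))

The point of the toy example is just that the honest path is brute-force search with exponentially tunable cost. Any actor with a big enough computational edge – a quadratic speedup on this kind of search is what Grover's algorithm would offer in theory – faces exactly the incentive problem above: it may pay better to dominate or break the system than to play along.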

As per the previous section, my first thought when I was answering the question was that AI is relatively unaffected by quantum, certainly in comparison to cybersecurity. If anything, quantum may make even the subset of AI dependent on machine learning cheaper and more accessible, reducing inequalities – to the extent that quantum computing and AI get provided as utilities and we can all share access to the newer, cheaper, better models. Then again, a lot of developers are already working on reducing the data and energy requirements even of foundation-model-based generative AI, so they may well solve this first.

Statue that looks remarkably like St. George slaying the dragon, including a cross on the hilt of the sword, but the dragon has two heads and its body looks like a rocket, or maybe a nuclear bomb.
The UN got a statue of good triumphing over evil.
Then they had to put it behind bars.
I was also originally going to say that cybersecurity (but not AI) is "more or less like actual arms in an arms race." But there too, there are only so many times you can blow up a city, at least before a country has time to rebuild. So despite the fact that the world is struggling to build enough ammunition right now, I think maybe the weapons we are willing to use are more like "foundation model" AI, with natural limits on scaling based on the real efficacy of owning such things. That may have helped limit the nuclear arms race. A lot of those weapons systems were at least portrayed as redundant when we heard the "detonating 10% of existing nuclear weapons would destroy mammalian life on the planet" arguments that were going around prior to the massive reductions of nuclear warheads carried out after the Cold War. (But if anyone actually has any evidence that the 10% figure was ever true, I haven't been able to find it. Please email me or post in the comments!)

Don't forget that horrific AI outcomes often have little to do with machine learning

Anyway, I think quantum may not matter much for AI, but it may matter a lot for cybersecurity (and cryptocurrencies), and cybersecurity at least is a very big deal. I don't want to go off on yet another tangent here, so I'll just leave you with another link, to a LinkedIn post about the problem with AI I've labelled fragility on this blog. You may also want to click the fragility label at the bottom of this post, but read the LinkedIn post first.

Another one of the good guys – Mark Riedl – happened to be in town and had lunch with me. Why do I know so few actual New Yorkers?

