Technologies with and without natural scaling limits (re: quantum, AI, weapons)

This is a somewhat quickly written post about how AI will and will not be altered by quantum computing and other forms of scaling. There's also some speculation about how cryptocurrencies will be affected – and that really is speculation, so I'd welcome feedback on it, because cryptocurrencies might otherwise really be a challenge for national power. I massively revised this post for clarity on 18 August 2024 and then again (with some rethinking) on 14 July 2025. Please let me know if it's still unclear.

Fig 1: Our World in Data charts on AI capacities, see below. (Click the link for Our World in Data's description of this picture.)


In Summer 2024, on the way home from my last visit to my father, I dropped by the UN in NYC to talk to some people about AI regulation. One wrote me back immediately afterwards to ask about quantum. This reminded me that I'd been meaning to write a blogpost about the limits of scaling generative AI for some time. So this blogpost starts from that – what can be scaled with AI, and what cannot. Because basically scaling is all that quantum can do, and then only in some areas.  I then turn to the question of quantum impacts, going beyond A(G)I. Finally, I have a brief reminder that AI-produced harms are not limited to aspects of AI typically built with machine learning.

I hope that it's evident that this is an area of ongoing research, so anyone may be wrong in their prognostications. But some of the below is just factual; nothing new we learn about quantum computing or machine learning can change facts of mathematics or indeed the nature of humans and our social sciences.

The utility of machine learning is limited not only by available data and the extent of human knowledge, but also by what we could possibly care to know

This part is derived from a LinkedIn post I wrote in late September or early November 2023 (LI is not great for history and archival citation, is it?) If you've already read that, you might want to skip ahead to the quantum section.

On twitter, fellow academic Jack Stilgoe mused that the term "democratize" was losing all meaning. I understand his concern. The term recently seems to be used predominantly as a libertarian-anarchist attempt to undermine public understanding of the relationship between democracy and representation. "Democratisation" in the sense of broad individual access to and support of systems is great for resilience, but has little to do with the democratic system of governance. Democracy is about selecting appropriate people to be given positions of power. These democratically elected individuals anchor the legitimacy of decisions a government or other governing force takes. Giving such individuals the power and time to make such decisions is a principal means a public uses to construct public goods. These goods include security, infrastructure, peaceful trade relations, and other essentials to the thriving of the population that delegated that power to its legitimate decision makers.

What Sam Altman (and apparently Anthropic) talk about is using "an AI" to "crowdsource consensus across 8B people" on how governance can happen. Here's a clip of him talking about this at Harvard in November 2024 (starting around 13:04). The context is discussing "an AI" governing AI itself, but it is not much of a secret that a number of Silicon Valley elites expect conventional government to fail in the face of the crises of climate, sustainability, and AI. In fact, here is a related discussion on Web3 in Forbes. These kinds of proposals seem to pretend that everything that needs to be known can be mined from the existing populace. The basic idea seems to be that the advent of LLMs has made government and regulation redundant. [cf this blogpost about the real capacities of LLMs I wrote in February 2024.]

In fact no supernatural entity has seeded the human population with complete knowledge. The only "complete" knowledge or truth is the universe itself, and that is unworkably detailed. And constantly expanding. Policy and science both require the hard computational labour of synthesising new solutions for current contexts. Governance requires hiring researchers to consolidate this work. Governments who can afford to fund this work themselves, locally – and choose to do so – wind up with better, more bespoke information, and are consequently better able to strengthen and protect their societies.

picture out the window of an upper floor of the UN, across a relatively narrow body of water (river?), at another cityscape; there are venetian blinds kind of like horizontal bars running across the windows
View from the UN yesterday.
Please consider the graphs above (Fig 1) about the recent rises in AI capacity. Don't just look at the slopes. Look at where machine learning plateaus for every competence – not much above human ability. This is because what we are doing with AI is automating aggregate versions of our own skills at manipulating the types of information that are salient to ourselves. Machine learning doesn't create superbeings. It uses our culture to broaden access to our knowledge, knowledge which we have built and paid for. LLMs are more like libraries than they are like AGI [my most recent blogpost about that, from June 2024]. If you were to run modern machine learning on all the knowledge humanity had available in 1900, you wouldn't get anything about space flight or antibiotics – or AI, of course. But now with LLMs (and other types of foundation models) we can use our knowledge in more ways, and indeed that may accelerate how fast we discover new things and innovate new knowledge and processes. But generative AI – AI derived from machine learning – won't "discover" or "reveal" what a decent number of us don't already know. It works through aggregation.

Going back briefly to the use of "democratise" to mean libertarian egalitarian peer-to-peer organisation: however many new things we find out, we'll still need to coordinate quite a number of our capacities through hierarchical entities like corporations, NGOs, and governments. Some things we don't need to coordinate, but whenever you need to come up with a single policy for a shared good, like the health or security of a population, then you do. Hierarchy gives us agility for coordination, for choosing plans and expressing actions. Of course hierarchy can also limit the expression of alternatives; whether that's a good thing or a bad one depends on the nature and urgency of the problems being addressed. But ideally, legitimate governments aggregate not only our knowledge but also our goals and interests in sensible ways.

Can quantum actually scale indefinitely? How would that affect inequality and transnational governance?

So anyway, let's go back to the question of scaling AI, and of quantum's impacts on that. Judging from the events I've attended on quantum (I don't research it myself), the answer to this question has been pretty consistent for at least five years. It seems unlikely there will be a sudden breakthrough. Rather, the cost of scaling quantum seems just prohibitively high. Consequently, in all likelihood it will only be paid by a small number of very affluent nations and corporations. If you want to look at who's likely to be able to pay that cost, I'd bet the 2024 Olympic medal tables are a good guess, at least for the countries. So in this sense the (further) advent of quantum may increase inequality in a way that AI so far hasn't. And because cybersecurity and cyberwar are both critical issues for our time, maybe diminishing returns won't limit the amount of investment.

And maybe this is similar to cryptocurrency mining (which relies on basically the same kind of technology as you need for large AI models). Again, this isn't really my area of expertise, but it is of one of my coauthors, Arvind Narayanan. And he told me that there was a theory that as blocks get harder and harder to mine, it's not just that there are diminishing incentives to mine them – there are ever-increasing incentives to figure out ways to break the system rather than work within it. So far, though, what we seem to be seeing is just insane amounts of planetary resources wasted on making "currencies" that behave more like the art market than real currencies – buffeted by fashion, subject to loss and destruction. But what happens if a few countries and companies can use quantum to mine blocks? Won't the whole thing just collapse into a deflated mess? Like what would happen to the value of elite finite paintings if people actually cared about AI-generated art instead?
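To make the mining-incentives point a bit more concrete, here is a minimal toy sketch of the hash-puzzle ("proof of work") idea in Python. This is not Bitcoin's actual protocol or its difficulty-adjustment rules, and the mine() helper is purely illustrative; it just shows the textbook mechanism whereby every extra bit of difficulty roughly doubles the expected number of hash attempts, so honest mining cost keeps climbing while the reward per block does not.

import hashlib
import os

def mine(block_data: bytes, difficulty_bits: int, max_tries: int = 10_000_000):
    """Search for a nonce so that sha256(block_data + nonce) has at least
    difficulty_bits leading zero bits. Expected work: ~2**difficulty_bits tries."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value count as a win
    for nonce in range(max_tries):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # gave up – at high difficulty, honest mining stops being worth it

if __name__ == "__main__":
    block = os.urandom(32)  # stand-in for real block contents
    for bits in (8, 12, 16, 20):  # each extra bit doubles the expected number of tries
        print(f"difficulty {bits:2d} bits -> nonce {mine(block, bits)}")

A quantum computer attacking a search puzzle like this with Grover-style methods is usually described as getting at most a quadratic speedup – dramatic, but only for whoever can afford the machine, which is exactly the inequality worry above.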

As per the previous section, my first thought when I was answering the question was that AI is relatively unaffected by quantum, certainly in comparison to cybersecurity. If anything, quantum may make even the subset of AI dependent on machine learning cheaper and more accessible, reducing inequalities – to the extent that quantum computing and AI get provided as a utility and we can all share access to the newer, cheaper, better models. Then again, a lot of developers are already working on reducing the data and energy requirements of even foundation-model-based generative AI, so they may well solve this first.

statue looks remarkably like St. George slaying the dragon, including a cross on the hilt of the sword, but the dragon has two heads and its body looks like a rocket, or maybe a nuclear bomb
The UN got a statue of good triumphing over evil.
Then they had to put it behind bars.
I was also originally going to say that cybersecurity (but not AI) is "more or less like actual arms in an arms race." But there too, there are only so many times you can blow up a city, at least before a country has time to rebuild. So despite the fact that the world is struggling to build enough ammunition right now, I think maybe the weapons we are willing to use are more like "foundation model" AI, with natural limits on scaling based on the real efficacy of owning such things. That may have helped limit the nuclear arms race. A lot of those weapons systems were at least portrayed as redundant when we heard the "detonating 10% of existing nuclear weapons would destroy mammalian life on the planet" arguments that were going around prior to the massive reductions of nuclear warheads carried out after the Cold War. (But if anyone actually has any evidence that the 10% figure was ever actually true, I haven't been able to find it. Please email me or post in the comments!)

Don't forget that horrific AI outcomes often have little to do with machine learning

Anyway, I think quantum may not matter much for AI, but it may matter a lot for cybersecurity (and cryptocurrencies), and cybersecurity at least is a very big deal. I don't want to go off on yet another tangent here, so I'll just leave you with another link to a LinkedIn post about the problem with AI I've labelled as fragility in this blog. You may also want to click the fragility label at the bottom of this post, but first read the LinkedIn post.

Another one of the good guys – Mark Riedl – happened to be in town and had lunch with me. Why do I know so few actual New Yorkers?

