This is a somewhat-quickly written post about how AI will and will not be altered by quantum computing and other forms of scaling. There's also some speculation about how cryptocurrencies will be affected; that part really is speculation, and I'm looking for feedback on it, because cryptocurrencies might otherwise be a real challenge for national power. I massively revised this post for clarity on 18 August; let me know if it's still unclear.
Fig 1: Our World in Data charts on AI capacities, see below.
I was talking to some people at the UN yesterday about AI regulation, and one wrote back afterwards to ask about quantum. That reminded me that I've been meaning to write a blogpost about the limits of scaling generative AI for some time. So I'll start with that, since scaling is basically what quantum offers (in some areas).
The utility of machine learning is limited not just by our knowledge and data, but by what we could possibly care to know
This part is taken from a LinkedIn post I wrote in late September or early November 2023 (LinkedIn is not great about history, is it?). If you've already read that, feel free to skip to the quantum section.
On Twitter, my colleague Jack Stilgoe mused that the term "democratize" was losing all meaning. In my opinion, the term has recently been used almost exclusively as a libertarian-anarchist attempt to undermine public understanding of the relationship between democracy and representation. OpenAI and similar companies are pretending that everything that needs to be known can be mined from the populace. This would also imply that, with the advent of LLMs, government and regulation have become redundant. [Cf. this blogpost about the real capacities of LLMs I wrote in February 2024.]
In fact, no supernatural entity has seeded the human population with complete knowledge. The only "complete" knowledge or truth is the universe itself, and that is unworkably detailed, and constantly expanding. Both policy and science require the hard computational labour of synthesising new solutions for current contexts. Governance requires hiring researchers to consolidate this work. Governments that can afford to fund this work themselves, locally – and choose to do so – wind up with better, more bespoke information, and are consequently better able to strengthen and protect their societies.
View from the UN yesterday.
Going back briefly to the use of "democratise" to mean libertarian, egalitarian, peer-to-peer organisation: however many new things we find out, we'll still need to coordinate quite a number of our capacities through hierarchical entities like corporations, NGOs, and governments. Some things we don't need to coordinate, but whenever you need to come up with a single policy for a shared good, like the health or security of a population, then you do. Hierarchy gives us agility for coordination, for choosing plans and expressing actions. Of course, hierarchy can also limit the expression of alternatives; whether that's a good thing or a bad one depends on the nature and urgency of the problems being addressed. But ideally, legitimate governments aggregate not only our knowledge but also our goals and interests in sensible ways.
Can quantum actually scale indefinitely? How would that affect inequality and transnational governance?
So anyway, let's go back to the question of scaling AI, and of quantum's impact on that. Judging from the events I've attended on quantum (I don't research it myself), the answer to this question has been pretty consistent for at least five years. It seems unlikely there will be a sudden breakthrough. Rather, the cost of scaling quantum seems simply prohibitively high. Consequently, in all likelihood it will only be paid by a small number of very affluent nations and corporations. If you want to look at who's likely to be able to pay that cost, I'd bet the 2024 Olympic medal tables are a good guess, at least for the countries. So in this sense the (further) advent of quantum may increase inequality in a way that AI so far hasn't. And because cybersecurity and cyberwar are both critical issues for our time, maybe diminishing returns won't limit the amount of investment.
And maybe this is similar to cryptocurrency mining (which relies on basically the same kind of technology as large AI models). Again, this isn't really my area of expertise, but it is one of my coauthor Arvind Narayanan's. He told me there was a theory that as blocks get harder and harder to mine, it's not just that there are diminishing incentives to mine them; there are also ever-increasing incentives to figure out ways to break the system rather than work within it. So far, though, what we seem to be seeing is just insane amounts of planetary resources wasted on making "currencies" that behave more like the art market than real currencies – buffeted by fashion, subject to loss and destruction. But what happens if a few countries and companies can use quantum to mine blocks? Won't the whole thing just collapse into a deflated mess? A bit like what would happen to the market for rare paintings if people actually came to value AI-generated art.
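To make the "harder and harder to mine" point concrete, here's a minimal toy sketch of proof-of-work mining in Python. This is my illustration, not anything from Arvind's work: the `mine` function and its parameters are invented for this example. The key property it shows is that each extra digit of difficulty multiplies the expected work geometrically, while the reward for a block does not grow to match – which is where the diminishing (and eventually perverse) incentives come from.

```python
import hashlib

def mine(block_data: str, difficulty: int, max_tries: int = 10_000_000) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` zero hex digits. Real systems use a numeric
    target, but the principle is the same."""
    target = "0" * difficulty
    for nonce in range(max_tries):
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
    raise RuntimeError("no valid nonce found within max_tries")

# Each extra hex digit of difficulty multiplies the expected number of
# hashes by 16, so the miner's cost curve is geometric.
for d in range(1, 5):
    nonce = mine("block-42", d)
    print(f"difficulty {d}: nonce {nonce}")
```

Quantum speedups (or any large asymmetry in hashing power) matter here because the whole scheme assumes no participant can search the nonce space vastly faster than everyone else.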
As per the previous section, my first thought when answering the question was that AI is relatively unaffected by quantum, certainly in comparison to cybersecurity. If anything, quantum may make even the subset of AI dependent on machine learning cheaper and more accessible, reducing inequalities – to the extent that quantum computing and AI get provided as a utility and we can all share access to the newer, cheaper, better models. Although a lot of developers are already working on reducing data and energy requirements, even for foundation-model-based generative AI, so they may well solve this first.
The UN got a statue of good triumphing over evil. Then they had to put it behind bars.
Don't forget that horrific AI outcomes often have little to do with machine learning
Anyway, I think quantum may not matter much for AI, but it may matter a lot for cybersecurity (and cryptocurrencies), and cybersecurity at least is a very big deal. I don't want to go off on yet another tangent here, so I'll just leave you with another link, to a LinkedIn post about the problem with AI I've labelled as fragility in this blog. You may also want to click the fragility label at the bottom of this post, but read the LinkedIn post first.
Another one of the good guys – Mark Riedl happened to be in town and had lunch with me. Why do I know so few actual New Yorkers?