This was originally a somewhat-quickly written post about how AI will and will not be altered by quantum and other forms of scaling. I massively revised it for clarity on 18 August 2024 and then again (with rethinking) on 14 July 2025. I still am, and long will be, seeking feedback.
Fig 1: Our World in Data charts on AI capacities, see below.
In Summer 2024, on the way home from my last visit to my father, I dropped by the UN in NYC to talk to some people about AI regulation. One wrote me back immediately afterwards to ask about quantum. This reminded me that I'd been meaning to write a blogpost about the limits of scaling generative AI for some time. So this blogpost starts from that – what can be scaled with AI, and what cannot – because scaling is basically all that quantum can offer, and then only in some areas. I then turn to the question of quantum impacts beyond A(G)I. Finally, I offer a brief reminder that the harms AI produces often have little to do with the parts of a system built with machine learning.
I hope that it's evident that this is an area of ongoing research, so anyone may be wrong in their prognostications. But some of the below is just factual; nothing new we learn about quantum computing or machine learning can change facts of mathematics or indeed the nature of humans and our social sciences.
The utility of machine learning is limited not only by the available data and the extent of human knowledge, but also by what we could possibly care to know
This part is derived from a LinkedIn post I wrote in late September or early November 2023 (LI is not great for history and archival citation, is it?). If you've already read that, the quantum section is entirely new, though this part too was reworked in July 2025.
Some major AI leaders don't seem to recognise the limits of data
On Twitter, fellow academic Jack Stilgoe mused that the term "democratize" was losing all meaning. I understand his concern. The term currently seems to be used predominantly as a libertarian-anarchist attempt to undermine public understanding of the relationship between democracy and representation. "Democratisation" in this new sense indicates broad individual access to and support of systems. This is great for resilience – it's a good idea to have resilient, distributed systems, including widespread technical expertise. But it has little to do with the democratic system of governance. Democracy is about how we select appropriate people to be given positions of power. These democratically elected individuals anchor the legitimacy of the decisions a government or other governing force takes. Giving such individuals the power and time to provide coordination by making such decisions is a key element of the agile construction of public goods. Such goods include security; infrastructure like roads and garbage collection; localised conflict resolution like policing and courts; peaceful and fair trade relations both within and between publics; and many other essentials of our modern understanding of thriving.
What Sam Altman (and apparently Anthropic) talks about is using "an AI" to "crowdsource consensus across 8B people" on how governance can happen.
Here's a clip of him talking about this at Harvard in November 2024 (starting around 13:04). The context is discussing "an AI" governing just AI itself, but in reality that's probably all the governance Altman thinks we need long term. It is not much of a secret that a number of the Silicon Valley elite expect conventional government to fail in the face of the crises of climate, sustainability, and AI. In fact, here is a related discussion on Web3 in Forbes. Further, they expect that failure to happen in the very near future – even before the end of 2025, but "certainly" during Donald Trump's second term.
In fact no supernatural entity has seeded the human population with complete knowledge. The only "complete" knowledge or truth is the universe itself, and that is unworkably detailed. And constantly expanding (at least for the anticipated duration of our species.) Policy, science, and indeed all intelligence require the hard computational labour of synthesising new solutions for current contexts. Governance of good-sized polities in particular requires hiring researchers to consolidate this work. Governments who can afford to fund this work themselves, locally – and choose to do so – wind up with better, more bespoke information, and are consequently better able to strengthen and protect their societies.
If you were to run modern machine learning on all the knowledge humanity had available in 1900, you wouldn't get anything about space flight or antibiotics – or AI, of course. That knowledge just wasn't there; it wasn't in the data. LLM (and other types of foundation models) allow us to use our knowledge in more ways, and that may indeed accelerate how fast we can discover new things and innovate new knowledge and processes. But generative AI – AI derived from machine learning – won't "discover" or "reveal" what a decent number of us don't already know. It works through aggregation. LLM are more like libraries than they are like
AGI [That links my most recent blogpost on AGI, from June 2024].
The self-limitations of many forms of knowledge
View from the UN yesterday.
Speaking of AGI, please consider the graphs above (Fig 1) about the recent rises in AI capacity. Don't just look at the slopes, which are anyway bogus. People in AI have been working on language understanding and reading comprehension since the 1950s, and image recognition since the 1970s. But
look where machine learning plateaus for every competence – not much above human ability. This is because what we are doing with AI is automating aggregate versions of our own skills at manipulating the types of information that are salient to ourselves.
Machine learning doesn't create superbeings. It uses our culture to broaden access to our knowledge – knowledge which we have already built (and paid for.)
There is just no point in better-than-human speech recognition or reading comprehension – what would that even mean? Maybe you can do superhuman levels of noise filtering, and pick out subtle indicators of intent better than the majority of humans. Generative AI does seem to be useful for levelling up weaker performers at some tasks – but not people who are overconfident in AI and/or lack expertise in those tasks. But my point is that many fears of superhuman AI are misplaced, just like fears of AI "taking all the jobs." Jobs are relationships between people. All artefacts are superhuman at something – books, for example, hold information longer and more precisely than brains. The real threat is some humans not taking the lives of some other humans seriously enough.
Can quantum actually scale quickly and indefinitely? How would that affect inequality and transnational governing?
Governing – and who will be deploying quantum computing?
Going back briefly to the use of "democratise" to mean libertarian egalitarian peer-to-peer organisation, and in summary of the previous section: However many new things we find out, we'll still need to coordinate quite a number of our capacities through hierarchical entities like corporations, NGOs and governments. Some things we don't need to coordinate. But anything requiring a single policy for a shared good, like the health or security of a population, we do. Hierarchy gives us agility for coordination, for choosing plans, and for expressing actions. Of course hierarchy can also limit the dissemination and expression of alternatives; whether that's a good thing or bad depends on the nature and urgency of the problems being addressed. Action selection involves a tradeoff between the quality of the solution, and how quickly a solution can be found and implemented. Time passes – a problem can peak in its destruction before any action is taken. Also, ideally, the focus we deploy is well-informed – the most promising solutions get the most attention. Our goal as academics and activists should be for governments to aggregate not only our knowledge but also our goals and interests in sensible ways.
So let's go back to the question of scaling AI, and of quantum's impact on that. Judging from the events I've attended on quantum computing (I don't research it directly myself), the answer to this question has been pretty consistent for at least five years. It seems very unlikely there will be a sudden breakthrough; rather, the costs of scaling quantum seem just prohibitively high. Consequently, in all likelihood, quantum computing will only be wielded by a small number of very affluent nations and corporations. If you want to look at who's likely to be able to pay those costs, I would have bet (until 2025*) that the 2024 Olympic medal tables are a good guess, at least for the countries. So in this sense the (further) advent of quantum may increase inequality in a way that AI so far hasn't. Because cybersecurity and cyberwar are both critical issues of our time, it may be that diminishing returns won't limit the amount of investment by these countries.
*Note: The Trump administration seems to be undermining all US government and research competence. Whether the US retains competence in this area therefore may come down to whether a few leading technology companies feel safe continuing to domicile there.
Impacts on logistics, and cryptocurrencies / proof of work?
The one thing I can imagine AI always being increasingly useful for is logistics. We are likely to get ever better at solving hard problems quickly – even implementing a solution rapidly once it's been selected is an example of the kind of logistics problem AI has always been useful for. I feel like this fact about AI is vastly under-appreciated while everyone is running around after the LLM and other genAI that are literally less coherent than headless chickens. GenAI is not what's changing the face of warfare in Ukraine, for example.
Another area where I can imagine huge impacts is cryptocurrency mining. Blockchain isn't really one of my areas of expertise either, but it is for one of my coauthors, Arvind Narayanan, and I can certainly understand any argument grounded in the costs of computation. Arvind told me about a theory that as blocks get harder and harder to mine, it's not just that there are diminishing incentives to mine them; the relative incentive to figure out ways to break the system is also ever-increasing versus the cost of working within it. So far, though, what we seem to be seeing is just insane amounts of planetary resources wasted on making "currencies" that behave more like the art market than like real currency – buffeted by fashion, subject to loss and destruction, and (of course, like everything) entirely based on scarcity. But what happens if a few countries and companies can use quantum to mine blocks? Won't the whole "currency" application just collapse into a deflated mess – as if people actually cared about AI-generated art rather than a finite pool of elite paintings?
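To make concrete what "mining" actually costs computationally, here is a minimal proof-of-work sketch in Python. The function name, the difficulty parameter, and the toy header are illustrative assumptions, not any real chain's protocol (Bitcoin, for instance, uses double SHA-256 and periodic difficulty retargeting). The point is simply that mining is a brute-force search for a lucky hash, so whoever can search fastest – with specialised classical hardware today, or conceivably with quantum assistance, though Bart Preneel doubts below that Grover would actually speed up mining – captures a disproportionate share of the rewards.

    import hashlib

    def mine_block(header: bytes, difficulty_bits: int = 20, max_nonce: int = 10_000_000):
        """Brute-force search for a nonce whose SHA-256 hash of (header + nonce)
        falls below a target. Illustrative sketch only: real protocols differ."""
        target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
        for nonce in range(max_nonce):
            digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce, digest.hex()
        return None, None  # no valid nonce found within the search budget

    if __name__ == "__main__":
        nonce, digest = mine_block(b"example block header", difficulty_bits=18)
        print(f"nonce={nonce}, hash={digest}")
        # Each extra difficulty bit doubles the expected number of hashes needed,
        # which is why rewards concentrate with whoever owns the fastest hardware.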
Impacts on A(G)I
As per the previous section, my first thought when I was answering that question was that AI is relatively unaffected by quantum, certainly in comparison to cybersecurity. If anything, quantum may make even the subset of AI dependent on machine learning cheaper and more accessible, reducing inequalities. That will depend on the extent to which quantum computing and AI get provided as utilities, such that we can all share access to any newer, cheaper, better models. Remember too that a lot of developers are already working on reducing the data and energy requirements of even foundation-model-based generative AI, so they may well solve this before quantum scales up.
The UN got a statue of good triumphing over evil. Then they had to put it behind bars.
I was also originally going to say that cybersecurity (but not AI) is "more or less like actual arms in an arms race." But there too, there are only so many times you can blow up a city, at least before a country has time to rebuild. So despite the fact that the world is struggling to build enough ammunition right now, I think maybe the weapons we are willing to use are more like "foundation model" AI, with natural limits on scaling based on the real efficacy of owning such things. That may have helped limit the nuclear arms race. A lot of those weapons systems were at least portrayed as redundant when we heard the "detonating 10% of existing nuclear weapons would destroy mammalian life on the planet" arguments that were going around prior to the massive reductions in nuclear warheads carried out after the Cold War. (But if anyone actually has any evidence that the 10% figure was ever true, I haven't been able to find it. Please email me or post in the comments!)
…and general security
Prof. Bart Preneel gave me this feedback on a slightly earlier version of the above: " - I agree with the assessment on AI
- I agree that quantum will have limited impact on AI; but I do think that once these systems are built, they will be sold and used by many organizations (think of the IBM CEO who believed that the world needed only a handful of computers)
- cryptocurrencies: the problem is not the impact of QC on the mining (I don't believe that Grover will speed up mining) but the breaking of digital signatures such as ECDSA; this will require coordinated action to migrate to a new digital signature scheme that increases all the block chain sizes by a factor 10-20. Challenging but not infeasible and we have 10 years time to do this.
- on cybersecurity: impact of AI is large; impact of quantum computing is small (outside breaking the current public key algorithms).
"
I absolutely agree with Bart. There's no sense at all in talking about AI safety, audits, "trustworthiness" or anything else without cybersecurity, yet people seem to be ignoring this vulnerability left, right, and centre. Trustworthy anything is a fantasy if that thing is digital but not secure, because what software it's running – or indeed who is teleoperating it – could change in an instant. The past is no predictor of the future for any digitally controlled system, only for the corporations that provide such systems, and those corporations should be judged in very large part by the cybersecurity their systems ensure.
Don't forget that horrific AI outcomes often have little to do with machine learning
In conclusion, I think quantum may not matter much for generative AI, e.g. coding and writing assistants. But it may matter a lot for cybersecurity (and cryptocurrencies), and cybersecurity at least is a very big deal. It may also matter a lot for logistics – for actually solving problems and coordinating to implement those solutions. That impacts every aspect of security, from war to sustainability.
I hesitate to go off on a tangent, but I want to remind anyone who has read this far that putting too much trust in AI – or, isomorphically, giving too little control or voice to humans – is where the greatest harms to date have come from. Examples include the Dutch welfare benefits scandal and the UK Post Office scandal, both of which resulted in unjustified jail terms, bankruptcies, and consequently also in suicides. And also, perhaps even worse, the over-reliance on algorithms and machine vision in the run-up to the 7 October Hamas attack on Israel. This is a problem I've labelled as fragility on this blog. You may also want to click the fragility label at the bottom of this post, but please, whatever you do, read the LinkedIn post I just linked.
Another one of the good guys – Mark Riedl happened to be in town when I visited the UN and had lunch with me. Weirdly I seem to know relatively few actual New Yorkers.
Thanks https://bsky.app/profile/qwamina.bsky.social for catching a typo!
Comments
[Submitted on 18 Jul 2025]
The Levers of Political Persuasion with Conversational AI
by Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, Christopher Summerfield
There are widespread fears that conversational AI could soon exert unprecedented influence over human beliefs. Here, in three large-scale experiments (N=76,977), we deployed 19 LLMs – including some post-trained explicitly for persuasion – to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. Contrary to popular concerns, we show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods – which boosted persuasiveness by as much as 51% and 27% respectively – than from personalization or increasing model scale. We further show that these methods increased persuasion by exploiting LLMs' unique ability to rapidly access and strategically deploy information and that, strikingly, where they increased AI persuasiveness they also systematically decreased factual accuracy. https://arxiv.org/abs/2507.13919
As essentially analog probabilistic computation with the binary selection occurring at the end, quantum suffers not only from noise but from the accumulation of errors due to calibration and control-system inadequacies, and we may very well find a practical upper limit to the scale and number of operations that can comprise a reliable circuit. Something people may have forgotten is the noise immunity provided by digital logic: every correctly operating gate produces a measurably correct logic level. With quantum we can never inspect, never debug, never copy, never observe until the end.
If you're up for a deep dive, our extended research community published "How to Build a Quantum Supercomputer", where we examine the challenges of scaling to 1M higher-quality qubits but also factor in an upper limit of 100K qubits in a single domain, necessitating distributed computation. This will mean classical networking between QPUs at first, so the paper describes our work on adaptively splitting and knitting quantum subcircuits to minimize classical reconstruction overhead. Longer term there is potential for a photonic quantum modality to extend entanglement between QPUs, but since real problems have structure in their Hilbert space there is the potential for utility despite a qubit ceiling.
The competition isn't between quantum and classical compute; it's between simulation and experimentation, where the physics includes not only Galileo and Newton but also Schrödinger and Dirac. Feynman's motivation is still the clearest opportunity.
And this is where we need to understand the limitations of quantum and machine learning algorithms. Quantum ML on quantum data (from a quantum sensor, a quantum experiment or another quantum computer) has exponential speedup, but QML on classical data loses the advantage due to the cost of encoding all that data. An ideal application on classical data is little data in, exponential quantum speedup, little data out – and so far that is Shor.
Is Shor the first of many or the first and only? There are quantum and probabilistic computing natives in our future, but are they four, fourteen or forty years from the workforce?
But this is also where another DARPA program, Quantum-Inspired Classical Compute (QuICC), is taking aim at those same NP-hard optimization problems, applying probabilistic and other physics-based accelerators (classical physics, not quantum), and these may yield better, more scalable results sooner. Where this does come full circle back to AI is that if we make progress on travelling-salesman-style problems, we can re-purpose those solvers from logistics to logic and use them on Boolean satisfiability problems: huge numbers of terms connected with ANDs, ORs, and NOTs, where the task is to assign 1s and 0s that make the whole statement true. This could add deductive reasoning to complement the inductive, predictive reasoning of the LLMs.
LLMs are capable of creating incredibly evocative, engaging and beguiling stories, and for some uses that is all we require of them. But for uses where it is also necessary that those stories be true, we need to add additional faculties to our AI companions.