Julian Fletcher FBCS, Chair of the BCS Quantum Specialist Group, looks at AI and quantum computing’s energy uses and finds comparing them is a fascinating but impossibly difficult task.
In a recent BBC Radio 4 interview, the team behind a quantum computer made a striking claim: their machine could solve a problem so complex that a conventional supercomputer would need to be wired to a small power station to match its performance. It’s a provocative image — one that invites us to ask: how do the energy demands of quantum computing compare to those of AI running on classical systems?
This question isn’t just academic. As AI systems become more pervasive and quantum computing inches closer to practical deployment, understanding their respective energy footprints is vital not only for sustainability, but for shaping the future of computing itself.
Calculating computing energy consumption
Energy consumption in computing is governed by a simple formula: Energy (E) = Power (P) × Time (t).
Assuming both systems draw the same power, the key variable becomes time: how long each takes to solve a given problem. But here's where things get complicated: the time required depends entirely on the nature of the problem and the algorithm used. There's no universal benchmark.
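As a back-of-the-envelope sketch, the formula can be expressed directly. The power and timing figures below are assumptions chosen purely for illustration, not measurements of any real machine:

```python
def energy_kj(power_watts: float, time_seconds: float) -> float:
    """Energy (joules) = power (watts) x time (seconds); returned in kilojoules."""
    return power_watts * time_seconds / 1000

# Two hypothetical machines drawing the same power but taking different times
# to solve the same problem (all figures assumed for illustration):
power = 500.0              # watts
classical_time = 10_000.0  # seconds
quantum_time = 100.0       # seconds

print(energy_kj(power, classical_time))  # 5000.0 kJ
print(energy_kj(power, quantum_time))    # 50.0 kJ
```

With power held equal, the energy ratio is simply the ratio of the runtimes, which is why algorithmic speedups translate directly into energy savings.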
Instead, we turn to computational complexity theory, which helps us compare how different algorithms scale with problem size. Quantum algorithms often promise dramatic speedups — but only for certain types of problems.
Grover’s vs Shor’s algorithms
Take Grover’s algorithm, designed for unstructured search problems. Imagine trying to find a specific item in a database of 10,000 entries. A classical system might need to check each item one by one: up to 10,000 checks in the worst case. Grover’s algorithm, running on a quantum computer, can find the item in roughly the square root of the total number of entries — just 100 steps.
Using our energy formula, this translates to a 100-fold energy saving for quantum computing in this scenario. That’s a clear win for quantum, especially in tasks involving large-scale search operations.
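A minimal sketch of the step counts behind that figure, ignoring constant factors and the error-correction overheads that matter a great deal in practice:

```python
import math

def classical_search_steps(n: int) -> int:
    # Unstructured search on a classical machine: worst case, check every entry.
    return n

def grover_steps(n: int) -> int:
    # Grover's algorithm needs on the order of sqrt(n) iterations.
    return math.isqrt(n)

n = 10_000
print(classical_search_steps(n))  # 10000
print(grover_steps(n))            # 100
# At equal power, the energy ratio tracks the step ratio: a 100-fold saving.
print(classical_search_steps(n) // grover_steps(n))  # 100
```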
Now consider Shor’s algorithm, which tackles integer factorisation: breaking a large number down into its prime components. For example, 15 can be factorised into 3 × 5. This is easy for small numbers, but for the very large ones used in encryption it becomes computationally intense. Classical computers struggle with this, which is why RSA encryption, whose security relies on the hardness of factoring, is considered secure.
Shor’s quantum algorithm can theoretically factor large numbers much faster than classical methods. However, in practice, current quantum systems are noisy and require extensive error correction. For a number like one million, the quantum approach ends up consuming 35 times more energy than classical computing. The overheads outweigh the theoretical speedup.
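To see why classical factorisation gets hard, here is a sketch of trial division, the simplest classical approach. The number of divisor checks grows with √N, which is exponential in the number of digits of N — this is the baseline that Shor’s algorithm is compared against:

```python
def trial_division_factor(n: int) -> tuple[int, int]:
    """Classical trial division: try every divisor up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d  # found a non-trivial factor pair
        d += 1
    return n, 1  # n is prime

print(trial_division_factor(15))  # (3, 5)
```

Doubling the number of digits in N roughly squares the work for trial division, whereas Shor’s algorithm scales only polynomially in the digit count — in principle. The noise and error-correction overheads described above are what erode that advantage on today's hardware.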
Understanding Big-O notation
To compare algorithms, computer scientists use Big-O notation. This describes how the time (or space) required by an algorithm grows as the size of the input increases:
- A linear algorithm (O(n)) grows proportionally with input size
- A quadratic algorithm (O(n²)) grows much faster
- Grover’s algorithm has a complexity of O(√n), which is significantly better than classical search (O(n))
- Shor’s algorithm has a complexity of O((log N)³), polynomial in the number of digits of N, whereas the best known classical factorisation algorithms scale super-polynomially. In theory, that is an exponential advantage
Big-O helps us understand scalability, but it ignores real-world factors like hardware limitations, error correction and constant overheads.
That’s why quantum computers, despite their elegant algorithms, may not always be more energy efficient.
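These growth rates can be tabulated with a short sketch (base-2 logarithms assumed, constants and overheads ignored, and n treated loosely as the problem size in each case):

```python
import math

def step_counts(n: int) -> dict[str, int]:
    """Rough step counts for the complexities discussed above, constants ignored."""
    return {
        "O(n)": n,
        "O(sqrt n)": math.isqrt(n),
        "O((log n)^3)": round(math.log2(n) ** 3),
    }

for n in (100, 10_000, 1_000_000):
    print(n, step_counts(n))
```

The gap between O(n) and O(√n) or O((log n)³) widens dramatically as n grows, which is exactly where quantum speedups pay off — provided the real-world overheads don't swallow the advantage first.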
Are quantum computers always better?
These examples reveal a crucial truth: quantum computers are not simply faster versions of classical machines. Rather, they excel at specific tasks such as simulating quantum systems or solving certain mathematical problems, but may lag behind in everyday computing tasks like running a browser or performing basic arithmetic.
Similarly, ‘AI’ is a broad term. Most AI today runs on classical hardware, and its energy efficiency depends on the algorithm used. Deep learning models, for instance, are computationally intensive, while simpler optimisation models may be more efficient.
Hybrid quantum and classical systems
Looking ahead, we may see hybrid systems where AI models run on quantum hardware for specific tasks. This would involve comparing quantum AI algorithms to classical ones, not just comparing the raw hardware. Such systems could unlock new efficiencies, especially in fields like drug discovery, materials science, and financial modelling.
Final thoughts
Energy efficiency in computing isn’t a one-size-fits-all equation. Quantum computing offers dramatic savings for certain problems but can be more energy intensive for others. As both AI and quantum technologies evolve, their interplay — especially in hybrid systems — may redefine what’s possible in computing.
For now, the energy battle between AI and quantum computing remains context dependent. But one thing is clear: the future of computing will be shaped not by one technology alone, but by how they work together.