MathMonkeyMan 18 hours ago [-]
The title of this post changed as I was reading it. "It looks like the 'JVG algorithm' only wins on tiny numbers" is a charitable description. The article is Scott Aaronson lambasting the paper and shaming its authors as intellectual hooligans.
Strilanc 10 hours ago [-]
Agree. Scott is exactly correct when he just straight calls it crap.
It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.
amelius 10 hours ago [-]
Well, the reviewers missed it too.
Strilanc 10 hours ago [-]
What reviewers? It's not a peer reviewed article.
amelius 9 hours ago [-]
Ok.
bawolff 6 hours ago [-]
Honestly, I think he was remarkably polite given the sort of crap we're talking about.
measurablefunc 17 hours ago [-]
Scott Aaronson is the guy who keeps claiming quantum supremacy is here every year so he's like the proverbial pot calling the kettle black.
Strilanc 9 hours ago [-]
What do you mean? The original 2019 supremacy experiment was eventually simulated, as better classical methods were found, but the followups are still holding strong (for example [4] and [5]).
There was recently a series of blog posts by Dominik Hangleiter summarizing the situation: [1][2][3].
The reason people pay attention to him is that he does a good job of publicizing both positive and negative results, and accurately categorizing which are bullshit.
measurablefunc 15 hours ago [-]
All I know is he keeps being wrong about quantum supremacy but maybe this is the year he finally gets his wish.
adgjlsfhk1 14 hours ago [-]
He's been right about it. Quantum supremacy was achieved in 2023 (but only for incredibly useless problems).
gsf_emergency_7 11 hours ago [-]
Yeah I think GP might now prefer his statement(s) to have been about "quantum _advantage_". Which is the modish term after all.
RcouF1uZ4gsC 18 hours ago [-]
Scott references the top comment on this previous HN discussion:
While I think the claim that one can "precompute the x^r mod N's on a classical computer" sounds impractical, there is a subset of problems where this might be valid. Computational complexity theory defines a class of problems called BQP (bounded-error quantum polynomial time).
Shor's algorithm is in BQP. Is the JVG algorithm in BQP, even though it utilizes classical components? I think so.
I believe that the precomputation step is the dominant factor in the algorithm's time complexity, so it isn't technically of lower complexity than Shor's. If I had to speculate, there will be another class in quantum computational complexity theory that accommodates precomputation on a classical computer.
I welcome the work, and after a quick scroll through the original paper, I think there is a great deal of additional research that could be done in this computational complexity class.
amluto 15 hours ago [-]
There is a genuinely interesting complexity class called BQP/poly, which is pronounced something like “bounded-error quantum polynomial time with classical advice” (add some more syllables for a complete pronunciation).
The JVG algorithm is not a high-quality example of this, or really of anything else. If you think of it as "classical advice", then it fails, because the advice depends on the input and not just the size of the input. If you think of it as precomputation, it's useless, because the precomputation involved already fully solves the discrete log problem. And the JVG paper doesn't even explain how to run their circuit at respectable sizes without the sheer size of the circuit making the algorithm fail.
It's a bit like saying that one could optimize Stockfish to run 1000x faster by giving it an endgame table covering all positions with 16 or fewer pieces. Sure, maybe you could, but you also already solved chess by the time you finish making that table.
adgjlsfhk1 15 hours ago [-]
JVG isn't in BQP; it's exponential time (i.e. worse than factoring without a quantum computer at all). It takes the only step of Shor's algorithm that is faster to run on a quantum computer and moves it to a classical computer.
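To make the objection concrete: Shor's algorithm superposes the exponent r over roughly N^2 values, so classically tabulating x^r mod N for that whole range costs on the order of N^2 modular multiplications, which is exponential in the bit-length of N, and the finished table already exhibits the period you were trying to find. A minimal Python sketch (my own illustration, not from the paper):

```python
def precompute_table(x, N):
    """Tabulate x^r mod N for r = 0 .. N^2 - 1, the exponent range
    Shor's algorithm puts into superposition. Classical cost: ~N^2
    modular multiplications, i.e. exponential in the bits of N."""
    table, acc = [], 1
    for _ in range(N * N):
        table.append(acc)
        acc = acc * x % N
    return table

table = precompute_table(7, 15)   # 225 entries just for a 4-bit modulus
# The table itself reveals the period: 7 has order 4 mod 15,
# so the entries cycle as 1, 7, 4, 13, 1, 7, 4, 13, ...
```

Once the table gives you the period r = 4, the classical endgame of Shor's algorithm (gcd(7^(r/2) - 1, 15) and gcd(7^(r/2) + 1, 15)) yields the factors 3 and 5, which is the sense in which the precomputation has already solved the problem before the quantum computer does anything.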
kmeisthax 18 hours ago [-]
I mean, considering that no quantum computer has ever actually factored a number, a speedup on tiny numbers is still impressive :P
dehrmann 16 hours ago [-]
I didn't get the quantum hype last year. At least with AI, you can see it do some impressive things with caveats, and there are bull and bear cases that are both reasonable. The quantum hype train is promising the world, but compared to AI, it's at the linear regression stage.
bawolff 6 hours ago [-]
Quantum computing is cool, but a lot of the people who were hyping it last year were absolute charlatans. They were promising things that quantum computers couldn't even do theoretically, let alone next year. Even the more down-to-earth claims were things we are still 10-40 years away from, presented as if it's going to happen next month.
Quantum computers are still cool and worthy of research. It's going to be a very long road, though. Where we are with quantum computers is roughly equivalent to where we were with regular computers in the 1800s.
The hype people just make everything suck and should be ignored.
People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point, it becomes mandatory to work on it because the win would be so big it would make them famous and win all sorts of prizes and adulation.
QC is far earlier than "linear regression" because linear regression worked right away when it was invented (and reinvented multiple times, I think). With QC, instead, we have an amazing theory based on our current understanding of physics, the ability to build lab machines that exploit the theory, and some immediate applications, were a powerful enough quantum computer built. On the other side, making one that beats a classical computer at anything other than toy challenges is a huge engineering challenge, and every time somebody comes up with a QC that does something interesting, it spurs the classical computing folks to improve their results, which can be immediately applied on any number of off-the-shelf systems.
antonvs 9 hours ago [-]
> People get taken by the theoretical coolness and ultimate utility of the idea, and assume it's just a matter of clever ideas and engineering to make it a reality. At some point, it becomes mandatory to work on it because the win would be so big it would make them famous and win all sorts of prizes and adulation.
Good description. Commercial fusion power seems to be in the same category currently.
The next step once you have enough thinkers working on the problem is to start pretending that commercial success is merely a few years away, with 5 or 10 years being the ideal number.
jerf 4 hours ago [-]
The only things I'm aware of that I consider actual problems it solves are "it breaks classical encryption" and "you may be able to use it to directly model other quantum systems like for protein folding and such".
Everything else I consider pretty silly. "It can improve logistics" - I'm fairly sure computers are already as good as they can be; what dominates logistics calculations isn't an inability to optimize but the fact that the real world can only conform so closely to any model you build. "It can improve finance" - same deal, really. All the other examples I see cited are problems where we've probably already got running code at the noise floor imposed by reality and its stubborn unwillingness to completely conform to plans.
If I had $1 to invest between AI and quantum computing I'd end up rounding the fraction of a cent that should rationally go to quantum computing and put the whole dollar in AI.
By far the most exciting possibility is one that Scott Aaronson has cited, which is: what if quantum computers fail somehow? To put it in simple and unsophisticated terms, what if we could prove that you can't entangle more than 1024 qubits and do a certain amount of calculation with them? What if the universe actually refuses to factor a thousand-digit number? The way in which it fails would inevitably be incredibly interesting.
adgjlsfhk1 15 hours ago [-]
The problem is that it's an exponential slowdown on large numbers.
Tyr42 17 hours ago [-]
Hey hey, 15 = 3*5 is factoring.
omoikane 1 hours ago [-]
You can also get a dog to factor 15, see pages 9-11 of this paper:
My understanding is that they factored 15 using a modular exponentiation circuit that presumes knowledge of the factor 3. Factoring 15 with knowledge of 3 is not so impressive. Shor's algorithm has never been run with a full modular exponentiation circuit.
Strilanc 10 hours ago [-]
The very first demonstration of factoring 15 with a quantum computer, back in 2001, used a valid modular exponentiation circuit [1].
The trickiest part of the circuit is that they compile conditional multiplication by 4 (mod 15) into two controlled swaps. That's a very elegant way to do the multiplication, but most modular multiplication circuits are much more complex. 15 is a huge outlier in how easy the modular exponentiation is, which is why so far 15 is the only number that's been factored by a quantum computer while meeting the bar of "yes, you have to actually do the modular exponentiation required by Shor's algorithm".
Would other Mersenne numbers admit the same trick? If so, factoring 2047 would be really interesting to see. It's still well within the toy range, but it's big enough that it would be a lot easier to believe the quantum computer was doing something (15 is so small that picking an odd number less than sqrt(15) is guaranteed to give a correct factor).
Strilanc 2 hours ago [-]
No, 15 is unique in that all multiplications by a known constant coprime to 15 correspond to bit rotations and/or bit flips. For 2047 that only occurs for a teeny tiny fraction of the selectable multipliers.
Shor's algorithm specifies that you should pick the base (which determines the multipliers) at random. Deliberately picking a rare base whose circuit happens to be cheap really does start to overlap with already knowing the factors while making the circuit. By far the biggest cheat you can do is to "somehow" pick a number g such that g^2 = 1 (mod N) but g isn't 1 or N-1, because that's exactly the number Shor's algorithm is looking for, and the whole thing collapses into triviality.
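Both of those facts are easy to check numerically. A small Python sketch (my own illustration): multiplication by a power of 2 mod 15 is just a 4-bit rotation, because 2^4 = 16 = 1 (mod 15), and a nontrivial square root of 1 mod N hands you the factors directly via gcd.

```python
from math import gcd

# Fact 1 (why 15 is special): multiplying by 2^k mod 15 rotates the
# 4-bit representation, since 2^4 = 1 (mod 15). Check all x in 1..14:
def rotl4(x, k):
    """Rotate the 4-bit value x left by k bits."""
    return ((x << k) | (x >> (4 - k))) & 0b1111

assert all((x * 2**k) % 15 == rotl4(x, k)
           for x in range(1, 15) for k in range(4))

# Fact 2 (the "biggest cheat"): a nontrivial square root of 1 mod N
# yields the factors immediately. For N = 15, g = 4 works:
N, g = 15, 4
assert g * g % N == 1 and g not in (1, N - 1)
p, q = gcd(g - 1, N), gcd(g + 1, N)   # the two prime factors of 15
```

For 2047 = 23 * 89 almost no bases enjoy the rotation property, which is the point: the circuit simplifications that make 15 demonstrations cheap do not generalize.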
guy4261 18 hours ago [-]
> (yes, the authors named it after themselves)
The same way the AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii Landis... nothing peculiar about this imho.
apnorton 12 hours ago [-]
This might not be entirely obvious to people outside of academia, but the vast majority (I only weaken the claim from "totality" to guard against unknown instances) of entities in the sciences that bear the name of humans do so because other people decided to call them by that name.
From another view, Adelson-Velsky and Landis called their tree algorithm "an algorithm for the organization of information" (or, rather, they did so in Russian --- that's the English translation). RSA was called "a method" by Rivest, Shamir, and Adleman. Methods/algorithms/numbers/theorems/etc. generally are not given overly specific names in research papers, in part for practical reasons: researchers will develop many algorithms or theorems, but a very small proportion of these are actually relevant or interesting. Naming all of them would be a waste of time, so the names tend to be attached well after publication.
To name something after oneself requires a degree of hubris that is looked down upon in the general academic community; the reason for this is that there is at least a facade (if not an actual belief) that one's involvement in the sciences should be for the pursuit of truth, not for the pursuit of fame. Naming something after yourself is, intrinsically, an action taken in the seeking of fame.
johncarlosbaez 17 hours ago [-]
Adelson-Velsky and Landis were not the ones who named their tree the "AVL tree".
In my "crackpot index", item 20 says:
20 points for naming something after yourself. (E.g., talking about the "The Evans Field Equation" when your name happens to be Evans.)
Same with RSA and other things. I think the author's point is that slapping your own name on an algorithm is a pretty big move (practically, you can only do it a few times in your life before it would get too confusing), and so it's a gaudy thing to do, especially for something illegitimate.
Nothing on that list was named that way by Euler himself, of course.
yacin 16 hours ago [-]
if you’re Euler you get a pass
12 hours ago [-]
antonvs 10 hours ago [-]
The RSA authors didn’t name the algorithm after themselves.
croes 17 hours ago [-]
Named after != named by
zimpenfish 10 hours ago [-]
> Named after != named by
But also note that naming an algorithm, in and of itself, is fine; it's naming it after yoursel(f,ves) in the initial paper that's a sign of crackpottery.
* Named by: Probably fine but heavily weighted on the grandiosity of the title.
* Named after: Almost certainly fine (unless it's something like "X's Absolute Drivel Faced Garbage That Never Works Because X Kidnapped My Dog And Is A Moral Degenerate Algorithm", obvs.)
* Named by yoursel(f,ves) after yoursel(f,ves): In the initial paper? Heavy likelihood of crackpottery. Years later? Egotistical but strong likelihood of being a useful algorithm.
[1]: https://quantumfrontiers.com/2026/01/06/has-quantum-advantag...
[2]: https://quantumfrontiers.com/2026/01/25/has-quantum-advantag...
[3]: https://quantumfrontiers.com/2026/02/28/what-is-next-in-quan...
[4]: https://arxiv.org/abs/2303.04792
[5]: https://arxiv.org/abs/2406.02501
https://news.ycombinator.com/item?id=47246295
https://news.ycombinator.com/item?id=44608622 - Replication of Quantum Factorisation Records with a VIC-20, an Abacus, and a Dog (2025-07-18, 25 comments)
[1]: https://arxiv.org/pdf/quant-ph/0112176#page=15
> By doing so, we aim to provide a novel paradigm [...]
also made me think of item 19 on your list:
> 10 points for claiming that your work is on the cutting edge of a "paradigm shift".
I'm sad though that you didn't call it the "Baez crackpot index"...
In the original paper they do not give it any name: https://people.csail.mit.edu/rivest/Rsapaper.pdf