Fun fight over the Grover search algorithm

Joshua Vogelstein points me to this blog entry by Robert Tucci, diplomatically titled “Unethical or Really Dumb (or both) Scientists from University of Adelaide ‘Rediscover’ My Version of Grover’s Algorithm”:

The Chappell et al. paper has 24 references but does not refer to my paper, even though their paper and mine are eerily similar. Compare them yourself. With the excellent Google and ArXiv search engines, I [Tucci] would say there is zero probability that none of its five authors knew about my paper before they wrote theirs.

Chappell responds in the comments:

Your paper is timestamped 2010; however, the results of our paper were initially presented at the Cairns CQIQC conference in July 2008. . . . Our paper is not intended as a research article; it is a tutorial paper. . . . We had not seen your paper before. Our paper is based on the standard Grover search, not a fixed-point search. Hence, your paper did not come to our attention, as we were not concerned with that specific topic, and indeed the fixed-point search is not mentioned at all in our paper.

Tucci doesn’t buy it. I don’t know what to think, nor do I really care, as I know none of the people involved and know nothing about the topic. When I hear “Grover,” all I can think of is Sesame Street. But the form of the dispute is interesting. I imagine that people closer to this field of research will be able to tell right away what really happened here.

7 thoughts on “Fun fight over the Grover search algorithm”

  1. “I imagine that people closer to this field of research will be able to tell right away what really happened here.”

    Perhaps not… from looking at the comments on the blog, there seem to be comments in both directions, and none of them sound particularly insightful. (One mentions how famous the “Grover-Tucci, aka ‘Gucci’ algorithm” is; I really can’t tell whether it’s facetious or not!) I’ll be interested to see how this goes. In my experience in other fields (e.g., machine learning), one often finds very similar algorithms cropping up one or two conferences apart, but the applications are different enough (and both are simple enough extensions of earlier ideas) that you imagine the authors probably just reached the same conclusions independently. The papers look different enough on the surface that I imagine it would take quite a lot of domain knowledge to judge. If it *did* turn out that those in the know all agreed one must have been copied from the other, I think people working on automatic plagiarism detection would be kept busy for a very long time trying to establish that one of these papers was copied!

  2. Google shows one citation (a self-citation) for his unpublished 2010 paper; it’s fair to say that nobody knows about it. He states that it ranks highly on Google for a phrase the other authors do not use, and claims this proves they were avoiding his paper, but it seems just as plausible that they didn’t think of using those keywords (they come at the problem from a different direction). Tucci’s paper is one of a small number citing an unpublished 2005 work by Grover, but the new authors do not cite that one. Because Tucci does not cite much of the literature (which he claims is irrelevant), his paper is invisible to citation trees. People don’t trawl through arXiv reading every manuscript; it’s an easy way to stay unknown.

    The comments by Tucci make clear that the new algorithm is not the same as the old one. He just thinks his paper should have been cited.

      • They cite no arXiv-only papers; they seem not to have done that search. Even if everyone puts papers on arXiv, not everyone reads it as if it were a journal (I would say most people don’t). Given the focus of their paper, a literature review of every modification of the algorithm out there doesn’t appear necessary. Even if that were their goal, expanding a literature review to include preprints is often not done, since preprints are usually works in progress and have not been subjected to external review for errors.

  3. The guy has apparently had something of a change of heart; from his blog:

    “Addendum Jan 16, 2012:
    Now that my emotions have calmed down, I was able to look at the Chappell et al paper more carefully. Our algorithms are more different than I initially thought. I’m not sure yet if one algorithm is faster or more accurate than the other. They may be about the same, or they may not be. More analysis will be required to answer this. I still think Chappell et al should have mentioned my paper. I would have done so if I had been in their shoes. After all, my paper did precede theirs by 2 years and it solves the same problem, albeit by a different strategy.”
