Who’s afraid of prediction markets? (Hanson vs. Thicke)

In a post entitled, “Compare Institutions To Institutions, Not To Perfection,” Robin Hanson slams a recent paper by Michael Thicke that criticizes prediction markets. Hanson summarizes:

Unfortunately many responses to reform proposals fit the above pattern: reject the reform because it isn’t as good as perfection, ignoring the fact that the status quo is nothing like perfection.

Is that really all there is to it? I’ve been interested in prediction markets, pro and con, for a while, and I was curious, so I followed the link and read Thicke’s paper, “Prediction Markets for Science: Is the Cure Worse than the Disease?”. Here’s Thicke:

Advocates for science prediction markets argue that they could alleviate problems in science such as bias in peer review and epistemically unjustified consensus. I argue that bias in peer review and epistemically unjustified consensuses are genuine problems in science, and that current attempts to solve them fail in practical circumstances. Prediction markets do show some promise for answering scientific questions while avoiding the pitfalls of consensus and peer review. However, there are strong reasons to believe that science prediction markets will not perform nearly so well as election prediction markets, and implementing prediction markets on a large scale could have significant unintended consequences for the organization of scientific research.

Aha! So Thicke’s not just comparing prediction markets to some nonexistent ideal: he’s explicitly comparing them to the status quo (“Is the cure worse than the disease?” . . . “significant unintended consequences”).

That was the title and the abstract; now time to look at the full paper. I did so . . . and I found no evidence, or even any argument, that the “cure” of prediction markets was worse than the “disease” of current problems of scientific peer review (see here, for example). Thicke does however give some arguments as to how prediction markets would have “significant unintended consequences”:

If scientists anticipate that trading on prediction markets could generate significant profits, either due to being subsidized as Almenberg, Kittlitz, and Pfeiffer (2009) propose, or due to legal changes allowing significant amounts of money to be invested, they could shift their attention toward research that is amenable to prediction markets. The research most amenable to prediction markets is short-term and quantitative: the kind of research that is already encouraged by industry funding. Therefore, prediction markets could reinforce an already troubling push toward short-term, application-oriented science. Further, scientists hoping to profit from these markets could withhold salient data in anticipation of using that data to make better informed trades than their peers. This could both slow publication and harm informal communication, and could even harm the accuracy of prediction markets themselves . . . Even if scientists do not pursue short term, easily resolvable research in pursuit of direct profits, prediction markets could distract from important scientific questions. If success in prediction markets is taken as a marker of scientific credibility, then scientists may pursue prediction-oriented research not to make direct profit, but to increase their reputation.

There are also various bits on why prediction markets might not work well, but the above is the place where Thicke lists possible negative consequences. No data, but he does supply arguments.


My views on prediction markets are not completely positive—for example, a few years ago I expressed skepticism about a proposed terrorism prediction market that was going to be run by a convicted terrorist. I do think there are cases where prediction markets can do real damage, if for no other reason than by lending political cover to dangerous people. But I didn’t see Thicke making that particular argument in his article.

Summary

Thicke summarizes:

The promise of prediction markets to solve current problems in assessing scientific claims is therefore largely illusory.

Does this statement make sense? It depends on what is meant by “solve” and “largely illusory.” First, sure, prediction markets are just one idea, so it would be silly to think they could solve major problems in science all on their own. But nobody’s claiming that, right? Second, if “largely illusory” means that we have little to no evidence now that prediction markets can play a part in solving problems in science, then, yes, again I agree with Thicke. But is the promise of prediction markets largely illusory? That I’m not so sure about. The promise seems real enough, even if there are some disagreements regarding implementation and evaluation.

In summary, I think Thicke raises some real issues and I disagree with Hanson’s claim that Thicke is comparing prediction markets to an alternative of “perfection”; but I agree with Hanson that the objections raised by Thicke are more theoretical than actual.

P.S. In the comment thread of his own blog post, Hanson writes that he used the form at Thicke’s website to tell him about his [Hanson’s] post. That was on 5 Aug 2017, and this post is scheduled for 26 Feb 2018, so perhaps Thicke has replied by now?

2 thoughts on “Who’s afraid of prediction markets? (Hanson vs. Thicke)”

  1. This really doesn’t say anything about prediction markets, but is relevant to the general idea of comparing practical alternatives rather than using ideals. It is a little-known paper (probably due to my advancing age) but a worthwhile read: “Regulation and Administered Contracts,” by Victor Goldberg in the Bell Journal of Economics and Management Science, circa 1972 (now the Rand Journal). Aside from a brilliant analysis of the need for institutional comparisons that recognize the inherent difficulties associated with “natural monopolies,” it has the distinction of probably being the only article ever published in the Bell Journal without a single equation! But the thesis was that there are aspects of some goods and services (such as electricity provision) that make private contracting difficult as well as causing issues for regulation. These practical realities must be part of any analysis comparing institutions – comparisons to idealized markets (that do not entail long-term investments with technological change and monopoly power) are not meaningful. I think some kind of comparable analysis could be done for prediction markets.

  2. Following the guidance I take away from this blog (registration is not much of a cure for bad theory, design and measurement; let’s move away from binary hypothesis testing), I am less in favor of prediction markets than I used to be.

    You can’t create a meaningful asset without spelling out exactly how the authors are going to gather and analyze their data, so prediction markets will require preregistration. You might be able to have a prediction market based on parameter estimates, but asset values will be much more sensitive to details of the estimation method than a binary “will this analysis yield a p-value less than 0.xxx”. So I would expect prediction markets to foster binary thinking about hypotheses. Prediction markets allow little place for exploratory research, or more generally any aspect of a paper other than preregistered numerical results. That said, if you are already running preregistered hypothesis-testing studies, they seem like a nice source of insight, which we saw in Nosek et al’s open science replications of psych studies.
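    The contrast drawn in this comment, a contract that resolves on a binary p-value claim versus one that pays out on a parameter estimate, can be made concrete with a toy sketch. Everything below (the simulated study data, the crude normal approximation to the p-value, the particular pair of estimators) is an illustrative assumption, not a description of any actual market design:

    ```python
    import random
    import statistics
    from math import erf, sqrt

    random.seed(0)  # reproducible toy example

    def simulate_study(n=50, effect=0.3):
        """Hypothetical study data: a true effect plus unit-variance noise."""
        return [effect + random.gauss(0, 1) for _ in range(n)]

    def binary_contract(data, alpha=0.05):
        """Resolve the binary claim 'the test yields p < alpha': pays 1 or 0.
        Uses a rough normal approximation to the two-sided p-value."""
        n = len(data)
        t = statistics.mean(data) / (statistics.stdev(data) / sqrt(n))
        p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
        return 1 if p < alpha else 0

    def estimate_contract(data, estimator):
        """A contract paying out on a parameter estimate: its resolution
        value depends on exactly which estimator the fine print names."""
        return estimator(data)

    data = simulate_study()
    binary_payoff = binary_contract(data)  # coarse: the study becomes one bit
    mean_payoff = estimate_contract(data, statistics.mean)
    median_payoff = estimate_contract(data, statistics.median)  # different fine print, different payoff
    ```

    The binary contract collapses the whole study into a single bit, while the estimate contract’s payoff shifts whenever the fine print swaps one estimator for another, which is the sensitivity-to-estimation-details point made above.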
