Is There a Replication Crisis in Finance?

Lasse Heje Pedersen writes:

Inspired in part by your work on hierarchical models, we have analyzed the evidence on research in financial economics, overturning the claims in prior papers that this field faces a replication crisis. Indeed, the power of the hierarchical model relative to frequentist multiple-testing adjustments, along with simple improvements (e.g., leaving out findings that were never significant in the first place), turns out to make a huge difference here.

We also run simulations showing that the hierarchical model reduces the false discovery rate while sacrificing little power. Comments very welcome!
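For intuition about that simulation claim, here is a toy sketch of my own (the calibration — 200 factors, 30% with true nonzero alphas, a common known standard error — is made up for illustration and is not the paper's setup). It compares per-factor t-tests against empirical-Bayes shrinkage of the estimated alphas toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration (my assumptions, not the paper's):
# 200 factors, 30% have a true alpha drawn from N(0, 0.3^2),
# each estimated with a known standard error of 0.15.
n, se = 200, 0.15
is_real = rng.random(n) < 0.3
alpha = np.where(is_real, rng.normal(0.0, 0.3, n), 0.0)
alpha_hat = alpha + rng.normal(0.0, se, n)

# Naive frequentist rule: flag |t| > 1.96, factor by factor.
naive_hits = np.abs(alpha_hat / se) > 1.96

# Empirical-Bayes shrinkage: assume alpha_i ~ N(0, tau^2) and estimate
# tau^2 by method of moments from the cross-section of alpha_hat.
tau2 = max(np.var(alpha_hat) - se**2, 1e-8)
shrink = tau2 / (tau2 + se**2)
post_mean = shrink * alpha_hat            # posterior mean of alpha_i
post_sd = np.sqrt(shrink * se**2)         # posterior sd of alpha_i
eb_hits = np.abs(post_mean / post_sd) > 1.96

def fdr(hits):
    # Fraction of flagged factors whose true alpha is zero.
    return (hits & ~is_real).sum() / max(hits.sum(), 1)

def power(hits):
    # Fraction of truly nonzero-alpha factors that get flagged.
    return (hits & is_real).sum() / max(is_real.sum(), 1)

print(f"naive: FDR={fdr(naive_hits):.2f} power={power(naive_hits):.2f}")
print(f"EB:    FDR={fdr(eb_hits):.2f} power={power(eb_hits):.2f}")
```

Because the common standard error makes shrinkage deflate every t-statistic by the same factor, the empirical-Bayes rule here flags a subset of the naive discoveries: it trades a little power for a lower false discovery rate, which is the qualitative pattern being described.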

The research article, by Theis Ingerslev Jensen, Bryan Kelly, and Lasse Heje Pedersen, begins:

Several papers argue that financial economics faces a replication crisis because the majority of studies cannot be replicated or are the result of multiple testing of too many factors. We develop and estimate a Bayesian model of factor replication, which leads to different conclusions. The majority of asset pricing factors: (1) can be replicated, (2) can be clustered into 13 themes, the majority of which are significant parts of the tangency portfolio, (3) work out-of-sample in a new large data set covering 93 countries, and (4) have evidence that is strengthened (not weakened) by the large number of observed factors.

I don’t know anything about the topic and I only glanced at the article, but I’m sharing because of our general interest in the topic of reproducibility in research.

2 thoughts on “Is There a Replication Crisis in Finance?”

  1. I’ve just taught this paper and the Hou et al. paper in my behavioral finance class. The two papers use quite different definitions (e.g., excluding vs. not excluding micro-caps, value-weighted vs. equal-weighted returns, etc.), which explains part of the difference (from the first to the second bar).

    What I wonder, though, is: across other fields that rely primarily on observational data, what are the usual replication rates? 85% seems very, very high…

  2. I’m glad people are starting to use hierarchical models in finance again, but I find a few things odd about this paper.

    1. Why regularize alpha but not beta? It seems like you could put hierarchical priors on the whole model, so why use them only for the intercepts? Right now the model treats beta as known when it is estimated via OLS. But hierarchical models for beta are as old as Vasicek 1973 (https://www.jstor.org/stable/2978759). If you mis-measure beta, won’t you also mis-measure alpha?
    2. Why empirical Bayes? Why not just fit the model with MCMC? An empirical-Bayes approach (vs. integrating over the hyperparameters) would understate the uncertainty and cause them to underestimate the variation in the distribution of alpha. There aren’t that many alphas, so this doesn’t seem like it would be very computationally intensive, especially since the initial estimation error is already treated as known.
    3. Why no actual priors? It seems like this is all just likelihoods, but shouldn’t we also put at least weakly informative priors on the alpha measurements in the first place? Or the hyperparameters of the hierarchical model?
    4. Why CAPM? If this is in an APT setting why not just use PCA to get risk factors if you don’t want mechanical anomaly relationships?
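    On the Vasicek point in (1): the classic adjustment is to shrink each asset’s OLS beta toward the cross-sectional mean in proportion to its estimation noise, i.e., a hierarchical prior on beta rather than only on alpha. A minimal sketch under made-up numbers (the cross-section size, true beta distribution, and standard errors below are all my illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cross-section: 50 assets' OLS betas, each with its own
# estimation standard error (in practice both come from time-series
# regressions of excess returns on the market).
n = 50
true_beta = rng.normal(1.0, 0.3, n)
se_beta = np.full(n, 0.25)
beta_ols = true_beta + rng.normal(0.0, se_beta)

# Vasicek-style shrinkage: treat beta_i ~ N(mu, tau^2), plug in
# cross-sectional moment estimates for mu and tau^2, and pull each
# OLS beta toward mu in proportion to its estimation noise.
mu = beta_ols.mean()
tau2 = max(beta_ols.var() - (se_beta**2).mean(), 1e-8)
w = tau2 / (tau2 + se_beta**2)            # per-asset shrinkage weight in (0, 1)
beta_shrunk = w * beta_ols + (1 - w) * mu

# Compare cross-sectional estimation error before and after shrinkage.
print("OLS MSE:   ", np.mean((beta_ols - true_beta) ** 2))
print("shrunk MSE:", np.mean((beta_shrunk - true_beta) ** 2))
```

The same machinery extends to a joint hierarchical prior on (alpha, beta), which is the commenter’s point: if the betas are noisy and left unshrunk, that noise propagates into the alphas.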

    It’s possible that I missed something since I read rather quickly (or that I just missed something due to my own misunderstanding of the paper and the literature).
