Problems in a published article on food security in the Lower Mekong Basin

John Williams points us to this article, “Designing river flows to improve food security futures in the Lower Mekong Basin,” by John Sabo et al., featured in the journal Science. Williams writes:

The article exhibits multiple forking paths, a lack of theory, and abundant jargon. It is also very carelessly written and reviewed. For example, the study analyzed the Mekong River stage (the level of the water with respect to a reference point) but refers more often to the discharge (the volume of water per unit time past a reference point); the relationship between the two is non-linear. It is pretty amazing that something like this got published.
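A quick illustration of why that distinction matters (my own sketch, not anything from the paper): stage and discharge at a gauging station are typically related through a site-specific rating curve, often approximated as a power law, so results stated in terms of one do not translate linearly into the other. The coefficients below are made up purely for illustration:

```python
# Stage h (water level, m) and discharge Q (volume per unit time, m^3/s)
# are usually related by a fitted, site-specific rating curve, often
# approximated as a power law: Q = c * (h - h0)**b.
# The coefficients here are hypothetical, for illustration only.

def discharge_from_stage(h, c=5.0, h0=0.5, b=2.0):
    """Hypothetical power-law rating curve; c, h0, b are fit per site."""
    return 0.0 if h <= h0 else c * (h - h0) ** b

# Doubling the stage far more than doubles the discharge:
for h in [1.0, 2.0, 4.0]:
    print(f"stage {h:.1f} m -> discharge {discharge_from_stage(h):.1f} m^3/s")
```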

Williams’s fuller comments are here.

I haven’t read through this at all, but I ran it by a colleague who knows this stuff, and my colleague agreed with Williams’s critique, so I’ll share it here.

Too bad the journal Science doesn’t have a post-publication review portal, so we have to do things in this awkward way.

P.S. Commenters pointed out updates in the journal here and here.

18 thoughts on “Problems in a published article on food security in the Lower Mekong Basin”

  1. Added to my list of things I wish I had written, and/or hope to someday write something as good as:

    “In a methodological tour de force, Sabo et al.* (1) relate components of the flow regime of the Mekong River…. However, simple flow routing considerations show that the “good” regime could not be implemented, and if it were, it would devastate the Dai fishery.”

    #MethodologicalTourDeForce #TooCleverByHalf

    All my high-quality-burn-jelly** aside, this goes back to a point we’ve discussed before about the use and uselessness of “methods” papers that are framed as “empirical papers”… or “empirical papers” where you aren’t supposed to believe the result but are supposed to like the methods… or whatever it is we do when we try to publish papers that are sorta about the method and sorta about the application. And it isn’t just here on the blog: David McKenzie talks about it in relation to “transparency” methods, and discusses the issue more broadly as well: “…[I]n each [of these papers], the results themselves can become of secondary interest to the method. The cynical take is then that an emphasis on methods is a way to get people interested in what would otherwise be less interesting papers. An alternative take is that the methods were the goal of the paper in the first place.”

    https://blogs.worldbank.org/impactevaluations/cynic-s-take-papers-novel-methods-improve-transparency

    As both a producer and a consumer of research, I can see pros and cons on both sides.

    From the consumer side, reading a “pure methods” paper can be incredibly difficult and even more boring; and given that it is really hard to simulate math in your head at any meaningful complexity, just showing the math and stating the assumptions of a new method provides almost zero help in usefully applying it. So an empirical example is great – it forces the author to implement the thing in a real-world context and do the kinds of diagnostics and checks that show researchers how to approach real problems using the new tool. BUT – and this is a big but – in the process of writing such a paper you will produce a number that someone will want to cite some day. So maybe you are working on synthetic control algorithms, but now you also have to be an expert on Basque separatism and tobacco regulation. (No hate to Abadie and coauthors – those are great papers – but I suspect that the vast majority of citations to the tobacco paper are for the algorithm and the inference strategy, not the point estimate of policy effects; and if they are cited for the point estimate, it is probably because people already wanted a number in that general range they could throw out.)

    From the producer side, I like these papers too, mostly because “methods” papers can often only be published as “methods” papers if they are a) incredibly mathy, to the point of inscrutability to anyone other than the 10 or so specialists scattered around the Academy; and/or b) written by someone pretty famous as part of a broader methodological critique (#Gelman, but Andrew’s papers usually have an empirical point too, one I think he wants us to believe). Of course, the cons to the producer are real too… really good estimates get knocked down journal tiers because there is no methodological contribution in the paper (other than the contribution of better data and clearer thinking, I guess), and really interesting methods ideas have to be shoehorned into some empirical setting which is often just whatever is convenient to the author, not the best application for the method.

    One alternative I’ve seen that somewhat bridges the gap is the “new method as replicate-and-extend paper”, and this might be a good way to go about it. In this case, you apply your novel method to a well-known, well-understood dataset that has already had a competent and careful analysis done, and show how the new method can in some way improve our understanding of the original result: robustness to different assumptions, revealing heterogeneity, identifying problems with the original algorithm/method, etc. Here the empirical claim generally matters less, assuming the findings don’t overturn the original results and simply add to or extend them, and the reader’s focus goes primarily to the method, which they can then understand in relation to the (already known and understood) empirical problem at hand. I might think of the DiNardo/Fortin/Lemieux decomposition work like this, but maybe I’m just revealing some fan-boy tendencies, and that could be lumped in with the synthetic controls stuff; I’ve seen more obscure work on questions like rank preservation and alternative types of treatment effects (marginal treatment effects and other special kinds of local average treatment effects) that takes a similar approach.

    But in general, I think one thing we need to do is make it increasingly clear what it is that people are supposed to take out of any particular paper we write. I suspect this paper was mostly about some weird modeling contribution and was not meant to be taken seriously. Similarly, I sometimes think of the “Great Recession as shock to the value of leisure time” or “as shock to the household discount rate” macro-theory papers as being just really bad at making clear what their point is. I don’t think the authors actually believed people just decided they didn’t want to work as much or didn’t value the future anymore; I think they were making a specific mathy claim about a specific family of models, and the new insights and/or solution concepts they provided were the actual contribution of the work***.

    I think we can solve a lot of these problems if we work to write more clearly and carefully about our contributions in our work. Yeah, that would be easier if the “certainty culture” in publishing got a little more realistic about what statistics can/can’t ever do, but we can go a long way on our own too. It might cost us a bit personally in terms of publication profile, but it would greatly help us in the long run in terms of the clarity, usefulness and (thus) productivity of our shared enterprise (which, just to be clear, is truth-generation and technological improvement, not money/fame-generation and career improvement).

    *this story was funnier when it was about foot insecurity. http://statmodeling.stat.columbia.edu/2018/07/01/deck-rest-year-2/
    **the original Mekong article uses a unit of measure called “Dai days”, which… #AlsoJelly.
    ***I mean, that’s what I think on days when I think of my discipline as a publishing game; on days when I think of it as a political game, I worry.

    • I really dislike methods papers that apply a subject-matter-independent method and reach an obscenely wrong conclusion due to subject-matter ignorance. Regressing something on something and deciding that you want a river to flood like a square-wave function, without any knowledge of the physics of how you’d accomplish that or what it would mean, is pretty awful in my opinion. It’s like suggesting, based on some economics regression, that women should control their fertility to a rate of 3 babies per year for 6 months at some point between ages 27 and 30 because it optimizes their earnings potential or something. Just smack-your-forehead kinda stupid. A toy sketch of the routing point follows below.
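    To make the flow-routing point concrete, here is a toy sketch (my own illustration with made-up numbers – not Williams’s analysis or anything from the paper): even the textbook linear-reservoir simplification of channel storage, dS/dt = I - S/k with outflow Q = S/k, smears a square-wave release out as it travels, so a sharp-edged “designed” hydrograph could not arrive downstream intact.

    ```python
    # Toy example: route a square-wave release through a single linear
    # reservoir (a textbook simplification of channel storage) to show
    # how sharp edges get smoothed. Time constant k and flows are made up.

    def route_linear_reservoir(inflow, k=5.0, dt=1.0):
        """Explicit-Euler routing: dS/dt = I - S/k, outflow Q = S/k."""
        storage, outflow = 0.0, []
        for i in inflow:
            storage += dt * (i - storage / k)
            outflow.append(storage / k)
        return outflow

    # Square wave: 20 steps of high release, then 20 steps of none.
    inflow = [10.0] * 20 + [0.0] * 20
    for t, (i, q) in enumerate(zip(inflow, route_linear_reservoir(inflow))):
        print(t, i, round(q, 2))  # outflow ramps and decays; no sharp edges
    ```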

    • jrc’s comments get at something that I think hasn’t been discussed enough on this blog — that sometimes, part of the culture of “publish to advance or even stay level” is the “value” placed by some (promotion committees, journal editors, administrators, …) on “novelty”. Well, they don’t put it that way, because putting it that way might make it seem like something petty — but the question “What is novel in this paper?” is one that is often asked.

    • jrc:

      You raise serious concerns. There was a thoughtful discussion of some of these issues at the JSM 2018 session “Theory vs. Practice”: https://ww2.amstat.org/meetings/jsm/2018/onlineprogram/AbstractDetails.cfm?abstractid=326764

      Everyone seemed to agree that there are real barriers to publishing useful material on applying statistics, and many mentioned inappropriate expectations of, and desires for, (overly) mathematical content.

      The organizer, Ryan Tibshirani, is looking into getting the discussion written up.

      • Thanks for the link, Keith. I’ll keep an eye out for a writeup.

        The fetishizing of mathiness and of novelty (as Martha points out) seem to me to go hand in hand to some degree. Or maybe mathiness is the specific novelty that is fetishized in certain (sub-)fields. In my world, “identifying variation” is often similarly fetishized, without regard to its relationship to the real world (Andrew would point to the China air pollution paper here and the whole “but it’s an RD so it is unbiased” mindset).

        But in statistics and econometrics, I tend to think the mathiness is what gets you there – or, of course, the few-and-far-between genuinely good ideas or deep insights that reveal new ways of thinking about/seeing something (which sometimes aren’t real mathy). And since most users of even fairly high-level statistical algorithms are not particularly well-versed in stochastic calculus or the algebra of convergence in probability/distribution, that establishes a clear barrier between author and reader right at the start that is difficult to overcome. Sorry, that should say author and “purported” reader, because for stats publications the real intended reader is the referee, who is well-versed in statistical math and probably considers that one of their most important types of expertise and part of what makes them a statistician (which, OK, fair).

        So maybe the problem can be seen in the fact that journal editors and referees in statistics/econometrics/psychometrics/etc. publish papers for themselves, but they aren’t the ones who need the papers (or need the methods). And the only outlet for the investigation/use of these methods is in applied journals that want a paper to have a number in it (a parameter or a population estimate) that someone can cite some day. So where does the “how well does this method actually perform, and under what real-world circumstances can it be reliably implemented” paper go? Such papers show up sometimes, but often only after a method has already been in use for 5 or 10 years, which is a little late, even ignoring the #Zombie effect.

    • It doesn’t matter how interesting your method is; you shouldn’t apply it to subject matter that makes no sense.

      So how do we encourage people to develop and disseminate methods in practical analyses where the applications are realistic, not pure made-up noise, while also discouraging this kind of inappropriate fake result?

      It seems we need to force people to work on real-world problems even while developing methods… Won’t someone please think of the poor mathematicians? :-)

      • >>> how do we encourage people to develop and disseminate methods in practical analyses where applications are realistic<<<

        One option: Try a model where people want to pay to read what you write?

        e.g., I'm quite impressed by the quality of articles in what I consider trade publications.

  2. Do you ever read articles from areas which are dominated by politics, which are typically haranguing, dissembling and often outright distortions of fact (and omissions of contradictory facts)? It’s beyond what can be fixed. I mention this because sometimes it feels like the gist of the argument from the folks you criticize is ‘we could abandon the pretext of statistical reasoning but instead use it to make arguments appear more valid than they are’. Or, ‘we have no interest in reforming, just in making our work look good because we don’t actually care about rigor but about point of view’.

    To get into this a bit more: all science has ends in mind, but their version of science is to generate a result, whether that’s as crass as getting tenure or money for work, or some larger political end. They don’t view the world scientifically at all but rather attach the appearances of science to polemic, argument and so on. So if science is stuff that can be described with a reliable degree of statistical accuracy, then nearly all academic work is not science and should be treated as polemic. That enables us to talk about it as representing academic rather than actual concerns, and, of course, we see papers that statistically treat the bias in papers, etc., which I find interesting because individual non-scientific work is transformed into science by the process of abstraction.

    I was thinking about this today when I read about a published work which involved using dogs to sniff out disease. Problem: they used 1 dog. Next problem: after publishing their amazing 1-bleeping-dog results, they tried more dogs. No, they actually tried 1 more dog … and contradicted their own work. 1 more dog. That was too much to ask before publishing. Beyond sloppy by all involved. (And I’ve looked at studies about coconut oil, and they seem to show it has a similar effect on LDL to olive oil, much less than butter, but that’s from a video of a talk, not a paper.) What is the point of doing a 1-dog study? It certainly wouldn’t be to disprove that a dog can sniff disease, would it? The dog has hidden powers we can tap! Grrrrr. Many years ago, I was handed a paper about using homeopathic medicine to treat infantile diarrhea: fewer than 30 babies, observed in an ER and not over time. Ya think they found an effect? It was small, but they thought they found one. The guy who handed me the paper actually said, ‘what more do you need?’ as proof homeopathy works. My response was ‘I need actual proof’, but ya think he listened?

    What’s the line? Oh yeah: “Life’s but a walking shadow, a poor player that struts and frets his hour upon the stage and then is heard no more: it is a tale told by an idiot, full of sound and fury, signifying nothing.” People say that and not just on stage. They think that: if I lie, if I make tilted and distorted arguments to get to the ends I desire, then life is but a walking shadow anyway. They miss the point Shakespeare was making: life is a walking shadow, and the idiot is you, for acting like an idiot in your life, by making your fake arguments to get ahead, to sell what you want as though the other perspectives are morally disgusting, as though your point of view is all. Macbeth is the idiot who told his own story. These people are the idiots telling their own stories. Shakespeare’s point was that you don’t have to be an idiot, that it’s your choice to kill Banquo, your choice to kill the King, your choice to grasp and lie and cheat. He makes this explicit in Hamlet: Claudius confesses all his wrongs but says bluntly he cannot go to heaven because he cannot give up what he got from doing wrong.

    I know this is an aside of all asides, but I’m from Detroit, and Aretha’s death hit me because there’s so much black culture in Detroit. I grew up watching WGPR – “where God’s presence radiates” – and its black church services (and Jordanian orchestral music, Iraqi oud music, etc.). She was church. Her commercial achievement, which is equal to that of Ray Charles and Prince, is that she combined gospel with pop, bringing the multi-layering of gospel – which is African polyphony transformed into various musical dimensions fit to white, originally slave-owner, European forms – to the popular backbeat and pacing. Ray took church into soul. Prince took church into rock/dance of the next era. Aretha’s dad was an amazing preacher – and he had a kid with a kid of 12, which his daughter ‘imitated’ or ‘re-enacted’ in her own life (just a few years after her mother died) – and Aretha was better. Her gospel music is astounding, and it brings out something hard to understand in general: how the church provided comfort for oppressed people by teaching them that you have a friend in Jesus, that this world is but a walking shadow, and that the teller of the tale for you as a slave is the idiot who oppresses you. You pray for a better life, but you also pray the oppressor sees the light – shades of MLK – because you know they are afflicted by the devil that prevents them from being kind. It’s in this context that rap music occurs: boasting of money, cars, women is a response to the church saying this life is the illusion. (That takes a lot more time than I have now.) I like digressions.

    • The Shakespeare quote brought to mind that some of the guys in my high school had T-shirts that said ΣΦØ.

      And the discussion of Aretha brought back more memories of high school — in Detroit; same graduating class as Diana Ross.

    • I don’t understand your issue with regard to the “one dog study”.

      I’m not familiar with the study or the claims made, but I assume the claim is not “all dogs will instinctively smell out disease and communicate it effectively to researchers”. If a single dog really has the ability to reliably smell out a given disease (other than something that would probably be trivial to identify, like gangrene), that alone seems like important information worth communicating (i.e., publishable). Not because it implies one could use any dog, or even this dog, to reliably identify disease, but rather to point out that there seems to be a strong signal in smell that this dog can (apparently) reliably detect and communicate.
