I think it’s great to have your work criticized by strangers online.

Brian Resnick writes:

I’m hoping you could help me out with a Vox.com story I’m looking into.

I’ve been reading about the debate over how past work should be criticized and in what forums. (I’m thinking of the Susan Fiske op-ed against using social media to “bully” authors of papers that are not replicating. But then, others say the social web needs to be an essential vehicle to issue course corrections in science.)

This is what I’m thinking: It can’t feel great to have your work criticized by strangers online. That can be true regardless of the intentions of the critics (who, as far as I can tell, are doing this because they too love science and want to see it thrive). And it can be true even if the critics are ultimately correct. (My pet theory is that this “crisis” is actually confirming a lot of psychological phenomena—namely, motivated reasoning.)

Anyway: I am interested in hearing some stories about dealing with replication failure during this “crisis.” (Or perhaps some stories about being criticized for being a critic.) How did these instances change the way you thought about yourself as a scientist? Could you really separate your intellectual reaction from your emotional one?

This isn’t about infighting and gossip: I think there’s an important story to be told about what it means to be a scientist in the age of the social internet. Or maybe the story is about how this period is changing (or reaffirming) your thoughts about what it means to be a scientist.

Let me know if you have any thoughts or stories you’d like to share on this topic!

Or perhaps you think I’m going about this the wrong way. That’s fine too.

My reply:

You write, “It can’t feel great to have your work criticized by strangers online.” Actually, I love getting my work criticized, by friends or by strangers, online or offline. When criticism gets personal, it can be painful, but it is by criticism that we learn, and the challenge is to pull out the useful content. I have benefited many many times from criticism.

Here’s an example from several years ago.

In March 2009 I posted some maps based on the Pew pre-election polls to estimate how Obama and McCain did among different income groups, for all voters and for non-Hispanic whites alone. The next day the blogger and political activist Kos posted some criticisms.

The criticisms were online, non-peer-reviewed, by a stranger, and actually kinda rude. So what, who cares! Not all of Kos’s criticisms were correct but some of them were right on the mark, and they motivated me to spend a couple of months with my colleague Yair Ghitza improving my model; the story is here.

Yair and I continued with the work and a few years later published a paper in the American Journal of Political Science. A few years after that, Yair and I, with Rayleigh Lei, published a followup in which we uncovered problems with our earlier published work.

So, yeah, I think criticism is great.

If people don’t want their work criticized by strangers, I recommend they not publish or post their work for strangers to see.

P.S. This post happens to be appearing shortly after a discussion on replicability and scientific criticism. Just a coincidence. I wrote the post several months ago (see here for the full list).

87 thoughts on “I think it’s great to have your work criticized by strangers online.”

  1. Thanks so much for sharing that link — you did an admirable job of responding graciously to some pretty graceless criticism.

    I think it takes some confidence to respond as you did. You have to be able to understand the point that your critics are making well enough to decide whether you agree, and to know what next steps you should take to improve the work if you do agree. Given the widespread misunderstandings about what p-values really are, and the fairly shallow training many psychologists get in statistics, is it possible some of the people you have criticized simply didn’t understand what you were saying well enough to respond to it?

    I don’t think any answer to that question obligates you in any particular direction. Just thinking out loud.

    • “Given the widespread misunderstandings about what p-values really are, and the fairly shallow training many psychologists get in statistics, is it possible some of the people you have criticized simply didn’t understand what you were saying well enough to respond to it?”

      I think this is on point. I first started suspecting it was a major issue when reading the responses from Gilbert et al. to the Reproducibility Project:

      https://projects.iq.harvard.edu/psychology-replications/home

      Simple statistical errors abound throughout, including one in their response to the RP:P authors’ response (the false and apparently unthinking assertion that their analysis would produce the same results no matter which of 5 measures of reproducibility is used), one that misinterprets a 95% CI, and one that supposes the RP:P authors would have counted replication CIs for effect sizes falling *above* the original effect sizes as “failed replications”. (A toy sketch of that CI criterion appears at the end of this comment.)

      And this was from a team of authors who knew that what they were writing would be carefully scrutinized by a very large audience of interested and well-informed observers.

      Anyway, this is just an anecdote, but it was pretty high profile and (to me at least) pretty sobering. If some big name researchers felt that confident in some pretty basic misunderstandings, it makes me wonder exactly how ordinary researchers who don’t offer up opinions on statistical methods think about the methods they use. What exactly is going through their mind when they look at a p-value, or a confidence interval, given that they have to read about and produce these things all the time? I’d hope that most are at least aware that their understanding is shallow, but I strongly suspect that a lot of people who don’t understand frequentist inference don’t realize that they don’t understand frequentist inference.
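
      To make the CI criterion mentioned above concrete, here is a minimal Python sketch (not anyone’s actual analysis code; the numbers are invented) of the mechanical check being argued about: does the original point estimate fall inside the replication’s 95% interval?

      ```python
      # Toy sketch of a CI-based replication criterion (invented numbers;
      # not the RP:P authors' actual code).
      from scipy import stats

      def original_in_replication_ci(orig_est, rep_est, rep_se, level=0.95):
          z = stats.norm.ppf(0.5 + level / 2)  # about 1.96 for a 95% interval
          lo, hi = rep_est - z * rep_se, rep_est + z * rep_se
          return lo <= orig_est <= hi

      # A replication interval lying entirely *above* the original estimate
      # fails this mechanical check just as one lying below does; whether it
      # should count as a "failed replication" is exactly the point in dispute.
      print(original_in_replication_ci(0.30, 0.65, 0.10))  # False: (0.45, 0.85) excludes 0.30
      ```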

      • Ben P. said: ” I strongly suspect that a lot of people who don’t understand frequentist inference don’t realize that they don’t understand frequentist inference.”

        This is an example of what I call “clueless that they’re clueless.” It is indeed a hard problem to deal with; in this case, a big part of the problem is that simplicity is so appealing to most human beings, yet the concepts of frequentist inference are anything but simple. Well-intentioned people simplify to “help people understand”, yet the simplifications usually lead to more misunderstanding.
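
        To see how easily the simple version goes wrong, here is a minimal simulation (a sketch in Python, assuming a null of exactly no group difference): p-values are uniformly distributed under the null, so about 5% of tests come out “significant” even when nothing is going on, and a small p-value is not the probability that the null is true.

        ```python
        # Minimal sketch: the distribution of p-values when the null is exactly true.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        pvals = np.array([
            stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
            for _ in range(10_000)
        ])
        # Under a true null, p-values are uniform on (0, 1): roughly 5% land
        # below 0.05, "significant" results with nothing at all behind them.
        print((pvals < 0.05).mean())  # about 0.05
        ```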

    • Diane:

      Your blog post is excellent!

      I have always benefited from constructive criticism of my writing. Your example of finding the correct wording/thought process happens to me in all of my writing. Other reviewers of my work always make my communications clearer and more effective – in most instances, by asking questions and pointing to approaches that I had not considered during the rebuttal in my head, or by catching thought processes that I had left incomplete on the page because I had already completed them in my head :-). When I go back and read my previous work, I am often in disbelief that I produced such a thoughtful piece of research; it is because so many people helped me make that work a better version of itself.

  2. I have been thinking about this a lot, perhaps even more so when it’s not a complete stranger.

    For enabling better science, there does seem to be a trade-off between engaging more directly with other statisticians privately (where enabling better science happens downstream) and engaging more publicly with other statisticians and non-statisticians (where enabling better science, or just better statistics, is more direct and possibly much wider).

    Erin’s comment that some of the people “you have criticized simply didn’t understand what you were saying well enough to respond to it” would inform that trade-off.

    John Cleese provides some humor on an extreme form of this – google “John Cleese on Stupidity”.

    More seriously, it is much subtler in statistics and maybe the public route is needed to help find out what the misunderstandings are and by whom.

    • I have been thinking about this a lot, perhaps even more so when its not a complete stranger.

      Could you say more about why you think this matters? (I’m not insinuating that it doesn’t, just trying to explore your instinct that it might.)

      Thanks for the John Cleese laugh :) One of the painful ironies for me in the psychology replication crisis is that so often, in seeking to understand it, people reach for explanations borrowed from classic psychology findings! — and this small voice in the back of my head says, okay, but do we really know that?…

      • “this small voice in the back of my head says, okay, but do we really know that?”

        Great!

        In teaching mathematics (at the college level), one important thing I often said to students was, “How do you know?”, partly in the hope that they would internalize this — that is, form the habit of asking it themselves. (And this was particularly important to do when teaching future teachers, so that they would pick up the habit of asking their future students.)

        • Well, I mean, not really great? Like, sure, epistemology matters, but I think it’s actually an enormous problem for the discipline that everything in it basically has a steroids-scandal-style asterisk at the end. It would have been better if the epistemology had been closer to right in the first place…

    • “More seriously, it is much subtler in statistics and maybe the public route is needed to help find out what the misunderstandings are and by whom.”

      Yes!

  3. I think there needs to be a distinction here between feedback and criticism. Feedback can be critical, but it is largely constructive and productive, coming at the early stages of a project or product and directed at the work itself. The example you cite above falls more into the feedback category than the criticism category. The paper you recently posted on abandoning statistical significance is another example. I see criticism as coming after a work has been completed, when there is basically no opportunity to revise, edit, or improve the piece. Think about film/TV/lit criticism; it comes once the film/show/book is finalized and provides a global evaluation (good/bad/meh) that leaves no room for improvement. I think this is why criticism can often come off as personal. What am I supposed to do with the fact that you don’t like my finished work, which is largely a reflection of my abilities? I can’t change it now! I would argue that what you and most others like is actually best described as feedback.

    This distinction is tricky in science because when it comes to ideas, there is always room for updating and improvement in theory. It’s not so straightforward with individual papers and findings, however. If you determine my study is ‘dead on arrival,’ the only way I can change the paper/study to address your comments is to retract the paper. Is that an improvement? It might be to you, but it sure doesn’t feel like it to me! My point is, let’s be clear about what we’re talking about here. You keep saying, ‘I love criticism!’ I don’t think that means what you think it means.

    • Sentinel:

      That’s an interesting distinction which I’ve never thought about before. Indeed, one reason I blog lots of ideas is so I can hear about how I’m wrong, before my work makes it into published form.

      And, just to be clear, I love criticism even when it comes after my paper has been published! Better if it comes before, but I can’t usually blame the critic for that, as most people typically aren’t even aware of work before it’s published.

      If someone tells me that my study is “dead on arrival,” yes, I’d rather hear sooner than later. But later is better than never.

    • Could this be pointing at a problem with how we have constructed the scientific edifice?
      Why should a single study be fixed? When software has bugs, we correct them and issue a new version. What if we had versioning on science?

      Thinking out loud here… the idea of changing something this fundamental is a bit pie-in-the-sky, but I agree with you that it is easier to be graceful about criticism of your own work if you can save face by changing the work.

      • Erin:

        The idea of a face-saving escape is important. Perhaps it’s easier for statisticians to admit error because uncertainty is something we deal with all the time: after all, even under the best of conditions our 95% intervals will be wrong on occasion. Perhaps also it’s easier for researchers in biomedicine to admit error because that field has such a long and glorious history of trial and error: even the greats made lots of big mistakes so this is no embarrassment at all.
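
        A quick simulation makes the “wrong on occasion” point concrete; this is a minimal sketch assuming a normal population, not a claim about any particular study:

        ```python
        # Minimal sketch: how often a textbook 95% interval misses the truth.
        import numpy as np

        rng = np.random.default_rng(1)
        mu, n, sims, misses = 0.0, 50, 10_000, 0
        for _ in range(sims):
            x = rng.normal(mu, 1.0, n)                # data from a known truth
            half = 1.96 * x.std(ddof=1) / np.sqrt(n)  # z-based 95% half-width
            if not (x.mean() - half <= mu <= x.mean() + half):
                misses += 1
        # Close to 0.05 (a touch higher, since 1.96 ignores the t correction):
        # even a perfectly constructed interval is wrong about 1 time in 20.
        print(misses / sims)
        ```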

        • even the greats made lots of big mistakes

          Fisher’s colossal debacle in arguing against the association between smoking and lung cancer comes to mind…

        • Maybe not “colossal”. Genes that predispose to nicotine cravings have been identified – kinda/sorta like the confounder he posited – and they may explain the smoking/lung cancer dose-response curve better than “smoking causes cancer” alone.

        • Thanatos:

          You are right; there is never just one causal explanation for an outcome, but I was referring to his zealousness in traveling around the world to defend his opposition to the association. He was not just suggesting a dose response; he was positing genes to argue that the association was implausible. After all, he did have an ulterior motive ;-)

        • When I read R. A. Fisher’s impression of what Student had done, after learning about what Fisher had been struggling with for several years, I got the strong impression that Fisher believed the study of variability had been put on the same footing as the study of the charge of an electron or the speed of light. Individuals vary, but within groups there is some “law” that obtains for the entire group, and Student had found a way to measure it. From there Fisher built a framework for assessing claims about groups, but in the end found himself a prisoner trapped within that which he had built. In any event, his work on 3d+ models is (to my untrained eye) staggeringly awesome given that he was doing it with paper and pencil and, more often than nowadays, by candlelight.

        • Thanatos:

          I am not questioning his genius; he was a phenomenal statistician and contributed to the field tremendously. He had a plausible explanation for a risk factor for lung cancer other than smoking (without getting too much into necessary/sufficient causes); however, his refusal to accept the possible association between smoking and lung cancer is a great example of how even the most accomplished can err or be victims of their own cognitive biases.

        • “Perhaps it’s easier for statisticians to admit error because uncertainty is something we deal with all the time”

          Perhaps many researchers who have a hard time admitting error don’t really understand that “if it involves statistical inference, it involves uncertainty.”

        • Andrew: do we know that it is easier for statisticians to admit error? I wonder if any sociology-of-science work has looked at this empirically. If some disciplines are more self-correcting than others, especially if it happens in ways that don’t sacrifice too many individual careers (i.e., if they have found a way to self-correct that doesn’t incentivize individual people to paper over their own mistakes), that would be wonderful to know.

        • Erin:

          I would subdivide this into admitting error in empirical claims (a 95% interval, say) versus in a methods paper (what a method does and why that is good/valuable).

        • I really like this point about statisticians having to deal with uncertainty all the time and that influencing how they think and respond to criticism.

    • “If you determine my study is ‘dead on arrival,’ the only way I can change the paper/study to address your comments is to retract the paper. Is that an improvement?”

      In some cases, you can retract and rework the paper; in some cases, you can learn from the mistakes and use that learning to do something better; but perhaps the big point is not to think in terms of individual papers and findings, but in terms of continual learning.

      • “You keep saying, ‘I love criticism!’ I don’t think that means what you think it means.”

        This sounds like a case of “different strokes for different folks” — I think he really does love it, which makes him sound kinda crazy to me — but I probably sound kinda crazy to him in some ways.

      • “In some cases, you can retract and rework the paper;”

        I’ve never seen such a case in psychology or any related empirical discipline with a focus on causal inference. Do you have examples? Also, if my study is DOA, it’s not the paper I need to rework, it’s the study. And presumably, if all I have done is mine a vein of noise in human behavior, then my reworking is to rerun the study, probably with a larger sample and improved measurement, and show that there was nothing there to begin with. A retraction would tell that story just fine. So what incentive would a journal have to publish the reworked version of my retracted study, which shows essentially that the study was bunk to begin with? Retraction is usually the recognition of a fatal flaw in a study. You seem to be assuming the reworked study works, which would seem very unlikely in most cases of a retraction. My point is, in practice, in empirical disciplines focused on inferring cause, this seems next to impossible.

  4. The big question is how much it matters to you that your work is correct, as compared to the body of work being correct (or being properly understood as correct/incorrect). A lot of the problem, it seems to me, is people getting very invested in a line of research and not feeling comfortable at all with the idea that it might all be wrong.

    Nobody wants to deny the truth, but the more important it is to you that your work is right, the harder it will be to accept criticism evenhandedly (we’re just human and are subject to motivated reasoning, etc.). But if you’re just happy to do research in a given area and not necessarily have some single coherent set of findings on your CV, you would of course welcome new information that helps you get closer to the truth. You can afford to be forward-looking—it helps to have tenure, too.

  5. [I would like to argue for a minority opinion, that is perhaps not my central view, but that could be true]

    Like most people reading this blog, I very much enjoy reading posts where Andrew Gelman takes down a published paper because it is not methodologically sound or is flat-out wrong. Some months ago, talking to another researcher, I found out that I am not alone: this feeling is shared by many other applied researchers who feel that many published papers are wrong but don’t have the credentials, halo, reach, experience, and level of knowledge to point out the errors in a public forum. Andrew is our superhero who slashes those sacred cows in big-name journals. I am a fan.

    When I read Fiske’s op-ed back in the day, I reacted like most people reading this blog. It felt like a desperate attempt to perpetuate a way of doing things that has not led to good science. The online discussion of published scientific work seems as valid as any other discussion.

    But now let me be a bit of a contrarian. What if there were situations where the online discussion could indeed cause more harm than good? Let’s say there is (a) a published paper and (b) an online forum or blog that may criticize the paper. I am thinking of two conditions under which this criticism could harm the author of the paper in an unfair way. Condition 1: Strong power differential. For instance, the writer of the blog may be a big-name statistician who wrote the best and most popular books and has a loyal following, whereas the author of the paper is either a junior academic or someone in a field where it is difficult to get methodological support from colleagues. Condition 2: No wide consensus on the issues in contention. We can all agree that assuming independence of variables that are clearly dependent can lead to trouble, that using the wrong test is just incorrect, that overfitting is a problem, etc. That is all fine. But there are other points where the field has not produced a clear consensus. Consider model selection: should I use WAIC or Bayes factors? Or multiple comparisons: is it ok to use multilevel models? Etc.

    [And before I continue, let me just say that this is not a criticism of Andrew Gelman or the community that reads and comments on this blog. I think everyone here is quite careful and often constructive and kind. But imagine a similar blog with less benevolent writers and commenters. Imagine an academic field that looks like the Seven Kingdoms in Westeros.]

    It could happen that bloggers in positions of power knock down research that is not clearly wrong, and they would do so because they prefer different methods, have a specific agenda, or feel good exercising power over the powerless. This could cause unnecessary harm to someone’s career without really enlightening anyone. This is a plausible scenario. And to prevent situations like this, I feel we should have stronger norms for discussing published work.

    • Hernan,

      I think this is an important question. However, if we’re going to ask such questions, I think we also should specify a mechanism. That is, we need to be specific about how, exactly, a criticism can have a negative effect on someone or someone’s career.

      I would argue that a criticism cannot have a direct effect on anything! That is, its effects are always mediated by other people. E.g., if I don’t get tenure because of something Andrew Gelman writes about me, I cannot just blame Andrew Gelman. I also have to hold the tenure committee accountable.

      I think that most concerns about the negative effects of criticism completely overlook the fact that all effects of criticism are mediated. People don’t have to accept the criticism! Or, they don’t have to accept it completely. Or, they can use it to update their **continuous** position on some issue/hypothesis.

      In my view, this is pretty much a banality of evil kind of situation. Sure, critics who take strong normative positions on things that are ambiguous or (much worse) critics who criticize something because it doesn’t fit their preferences are a problem. But, the masses who mediate the effects of such criticisms are a much bigger problem.

    • “It could happen that bloggers in positions of power knock down research that is not clearly wrong, and they would do so because they prefer different methods, have a specific agenda, or feel good exercising power over the powerless.”

      Perhaps the remedy (or at least counter-measure) is for blog followers to take responsibility to try to point out cases where the blogger may be partisan, have a specific agenda, or be using power just for the sake of power. (The pointing out might take the form of private emails rather than public comments on the blog.)

  6. I was heavily criticised on Andrew’s blog in 2011, by Christian Robert, for a stats book I wrote for psycholinguists. I was crushed by the criticism, which even appeared in Chance magazine. Robert was pretty contemptuous in tone, at least on the blog. But every single criticism was just mathematical fact, nothing that was a matter of opinion. By the time this attack came I had already realized I had very incomplete knowledge of stats. So I spent four years studying stats and rewrote the book as an online document. Robert’s critique was just confirmation that I knew even less than I believed I knew. Six years later my lab’s research is of much better quality, statistically speaking.

    One problem I now face in scientific arguments with fellow psycho*ists is that they don’t understand the criticism, so it’s impossible to have any discussion; it is like talking to a wall: you point out problems in their paper and they just balefully stare at you. The whole enterprise is a dead end, and I am just concentrating on getting it right myself.

  7. When I have tenure, you may criticize my work ruthlessly and I won’t give a damn. Before tenure, however, I will view any public criticism as damaging my reputation, diminishing the perceived value of my overall work, and jeopardizing my career and paycheck.

    • I think that’s fair. But if the consequences of an act of criticism are so severe that they affect your income, that raises the question of whether you deserved that income to begin with. If you’re doing bad science, I’m perfectly comfortable with that affecting your scientific reputation and your incentives for being a scientist.

      • That is to say, a good way to avoid having criticism jeopardize one’s career is not to do shoddy science in the first place. It’s also the case that shoddy science is rewarded under our current incentive structure, so if one wants to try to get ahead by cutting corners, that’s a rational decision under risk. But if one gets called out for it, it hardly makes sense to hold that against the critic.

        Also, people can get criticized unfairly. But, I’d be surprised if there were **many** cases where invalid criticism led to long-term costs for the researcher.

      • All correct. My point, simply, is that it’s professionally easy for Gelman to love criticism. It isn’t so easy when your career is at stake. Given the surplus of PhDs, departments can deny tenure for all kinds of reasons and can treat young scholars as disposable resources that are easily replaced. Under such conditions, criticism doesn’t just represent a search for the truth.

        • I agree regarding how status/power differential changes the stakes of being the target of criticism. But, it seems like you’re assuming that when a junior scientist’s work is critiqued it will harm their career and that such harm is wrong. My point is that if a critique actually has such consequences, we need to consider the possibility that those consequences are deserved. That is, if some scientists cut corners and some don’t, the ones who don’t should receive greater rewards.

          In those cases where the consequences are not deserved, we should blame the tenure committee, not the critic. That is, I don’t think hostility toward criticism is a solution to the problem that criticism can harm your career when harm is undeserved.

          However, this all raises the question of how frequently the situation you’re concerned about even occurs. In my experience, high-profile criticism usually obtains its status because it critiques high-profile research. I agree that it would be a shame for junior researchers to have their careers destroyed (or just hindered) because they make an honest mistake. But how often does this actually happen?

        • Wb:

          It is professionally easy for me to love criticism now. But I also loved criticism (that is, open and specific criticism that I could see, learn from, engage with, and respond to) when I was young. One of the most frustrating things for a young scholar is not criticism, but having one’s work ignored! There were times as a young scholar that I received criticism that was severe and impolite but open and specific, and I appreciated it every time. I never felt that anyone owed me tenure, and if criticism revealed serious problems with my work, so be it. My belief was that criticism would help me make my work better; also I believed that my work was strong enough that I merited promotion, even though I had made mistakes. Scientists make mistakes. Elderly scientists make mistakes, and young scientists make mistakes, and I don’t think that making a mistake in a public forum should be a reason for an academic scientist to not get promoted.

          Also, beyond all this, remember that the ultimate reason for all this scientific publication is not to get careers for people but to benefit the public. I think the public gains from open airing of scientific concerns. If I make a mistake in published work, I’d like the world to know, not just me.

          Think about it: I publish a paper because I want the world to know about some important (or so I believe) finding. So, of course, if it turns out there are problems with my claims, I’d like the world to know that too! Whether I’m a junior or senior scholar, that doesn’t change the duty that we, the scientific community, have, to convey scientific disagreements and corrections openly and transparently to the larger society.

        • +1
          Promotion decisions should not be influenced by whether someone’s work is being critiqued by others, but by how one responds to those critiques and learns from the failures.

        • “Also, beyond all this, remember that the ultimate reason for all this scientific publication is not to get careers for people but to benefit the public.”

          A great point that often seems forgotten in these discussions. I wonder how people with straight jobs would look at all of this.

    • Do you rely on online reviews by customers or expert ratings (e.g., Consumer Reports) to guide any of your purchasing decisions? Do you consider it inappropriate for customers to leave reviews? Also, if you work at a state-funded institution, don’t you think that the people paying your salary have some right to find out if your work product stands up to some vigorous prodding before they basically commit to another 20 to 30 years of paying your salary?

    • I fear you flatter yourself. I predict that a habit of fighting criticism will be very hard to undo, after you have spent more than a decade practicing it.

  8. The manner in which criticism is couched is in and of itself a problem, I think. It is strident, anxious, & condescending. Actually the HIV/AIDS controversies were my 1st exposure to the acrimony among various scientists. I was not all that surprised b/c, as a daughter of an academic, I was used to hearing about rivalries among academics.

    • Well this post is coming up to 7 years old now, so I guess it’s buried, but I was surprised to see only one such comment addressing this particular issue regarding criticism.

      Criticism / feedback is crucially important to development, but when the criticism is delivered in a condescending or even downright rude manner it ceases to be useful: whether they’re right to or not, the recipient will focus on those aspects of the critique rather than the substantive aspects that need addressing. The critic then need not have bothered writing the criticism; they’ve effectively just wasted their own time. If the work itself is of such poor quality as not to be worthy of the critic’s time, don’t criticize it. If it’s worth spending time criticizing, it’s worth making sure that the criticism will be received. Often it’s just a case of modifying language, and often it’s easier to be politely neutral than it is to be rude. And it really feels personal when people we (previously had) respect (for) go out of their way to be rude.

      Necessarily, the people from whom we would most like to receive feedback are those we consider most important in the field, and too often in science these are people with big egos who feel enhanced by being rude to subordinates in the feedback they give. This is often excused as “just the way they are”; people point to their success as an excuse (“it works, so who cares”), maybe even attribute that success to their toxicity, and use that to further excuse a more general culture of toxicity within science.

      Lastly, science is a global initiative that everybody from every culture should feel comfortable contributing to. When communicating with each other we may not be aware of different cultures’ standards for what constitutes “rude” and where the line is regarding what language is acceptable in criticism. But whatever the culture, if you’re polite, that’s definitely acceptable. You don’t even have to be nice; you can be perfectly neutral in tone whilst still being polite. You don’t have to know anything about the other person’s culture or background. It’s the easiest thing to be polite. You won’t get backlash, your words will be read and appreciated, and people will respect you more.

      • Jimbo:

        I disagree with your last sentence. Unfortunately, people whose work is being criticized will often try to ignore or deflect the criticism, as we discuss in our post on the ladder of responses to criticism. To put it another way, often if the critics don’t persevere, the criticism will not be read and appreciated; it will just disappear unnoticed. James Heathers has discussed that point, as have I.

        Also, often the main audience of the criticism is not the author being criticized, but other readers of the material being criticized. The authors of the work being criticized are often, as you put it, “those we consider most important in the field . . . with big egos.” Sometimes they misrepresent their data and their research. They’ve succeeded in this world by bullshitting, and I think it’s very unlikely that any form of criticism will be received by them.

        Indeed, many would-be critics are too scared and intimidated to express public criticism, at any level of politeness, of bad work that’s been conducted or endorsed by bigshots. I get lots of emails from people pointing out fatal flaws in published work but wanting anonymity because of fear of retaliation.

  9. One comment though. When Kos criticized your maps he wrote a long, detailed post, including looking at different data sources. It wasn’t a quick hit, and in no way did he seem to say you were incompetent, just that you were wrong in this case. I think in the time I’ve been commenting on the blog, the most interesting set of posts about specific works were those about the failure to age-adjust in looking at changing death rates. Why? Because you, like Kos, took the time to actually do a more careful critique, including looking at the actual data. (Although IIRC it took a while to get to actually looking at the data.) Sometimes I find the posts here so interesting that I not only read the original articles, I then look at related work.

    Sometimes I feel that while your main point is correct you pull in unrelated and distracting side issues. But that’s all part of doing informal, opinionated posts. For example, I think the Heckman comments (which btw show you don’t limit yourself to women or junior people), and some of Heckman’s comments too, tend to obscure some very interesting aspects of his work on the impact of intervention on children with stunted growth (and very real questions about the causes of differential attrition from the study). Likewise, the clothing-color study has so many problems that it is not necessary to critique an entire subdiscipline’s approach to estimating hormone cycles.

    I was trying to find a long ago post that I think was about the impact of sunscreen which was also really interesting.

    • Elin:

      Lots to think about here but just very quickly let me say that it seems pretty clear that it’s inaccurate for people to say that days 6-14 are the dates of peak fertility. Setting aside other errors of any particular paper, I thought it worth pointing out this error too.

      More generally, when I discuss flaws in some particular research effort, my goal is almost never to shoot down a particular paper; rather, I’m interested in the larger questions being asked and in all the research methods being used. So, sure, if my only goal were to criticize the ovulating-women-wear-red study, I agree that “it is not necessary to critique an entire subdiscipline’s approach to estimating hormone cycles.” But to the extent that it’s worth studying ovulation and behavior at all, it’s worth noting when people are getting the dates of ovulation wrong.
