My proposal is to place criticism within the scientific, or social-scientific, enterprise, rather than thinking about it as something coming from outside, or as something that is tacked on at the end.

I happened to come across this discussion in the comments of another blog a few years ago and it seemed worth repeating.

The background is that sociologist Fabio Rojas explained why he didn’t find it useful to teach “critical thinking”; instead he teaches whatever aspects of criticism are relevant for the subject (in his case, sociology) being taught.

In response, writing teacher Thomas Basbøll argued that there are aspects of criticism that are somewhat independent of the subject being addressed. For example, some controversial work in sociology can be criticized on evidentiary grounds, without any sociological expertise being required.

My reaction was to say that I think a possible solution is to undo the “mind-body” or “church-state” separation of research methods (or “sociology,” or “important things about the phenomena that you are looking at”) and criticism. My proposal is to place criticism within the scientific, or social-scientific, enterprise, rather than thinking about it as something coming from outside, or as something that is tacked on at the end.

To put it another way:

1. We can engage in criticism as social scientists, using statistical (or, more generally, research) methods.

2. Knowing that we will be criticizing our own work, and that others will criticize it, will and should affect how we conduct our research. This is related to the saying, “To understand the past, you must first know the future,” or the statistical principle that the design of a study is best considered in light of how the data will be analyzed.

A psychology professor whom I’ve never met once wrote me a mini-manifesto disparaging my efforts as a critic. He wrote:

There are two kinds of people in science: bumblers and pointers. Bumblers are the people who get up every morning and make mistakes, trying to find truth but mainly tripping over their own feet, occasionally getting it right but typically getting it wrong. Pointers are the people who stand on the sidelines, point at them, and say “You bumbled, you bumbled.” These are our only choices in life. If one is going to choose to do the easier of these two jobs, then one should at least know what one is talking about. Sorry, but I think by and large you don’t.

I think this dualism is counterproductive. If, instead of dividing the world into “bumblers” and “pointers” (and let me not even comment on the ridiculousness of a research psychologist making such a categorization and saying “these are our only choices in life”), we were to consider “bumbling” and “pointing” to be two essential activities conducted by any scientist—and, for that matter, if we were to recognize that one can and should spend lots of time criticizing one’s own work (that is, “pointing” at our “bumbling”) and that we can and should consider criticism to itself be an ongoing development (that is, “bumbling” in our “pointing”)—then, I think our research and our criticism could improve.

To draw a humble analogy from my experiences on the street: If each of us were to spend some time as a pedestrian, some time as a bicyclist, some time as a bus rider, and some time as a car driver, then I think we’d all be able to interact more efficiently and considerately. But if we associate each person (or, in the sociology example, each area of expertise) with only one role, we get all kinds of trouble, as indicated by various psychology and biology researchers who can’t seem to handle open criticism.

35 thoughts on “My proposal is to place criticism within the scientific, or social-scientific, enterprise, rather than thinking about it as something coming from outside, or as something that is tacked on at the end.”

  1. I think at bottom it all comes down to asking yourself “Could I be fooling myself here?” and “How well do I know this, anyway?”

    If you aren’t asking these questions with serious intent, you may not be doing much more than turning a crank and uncritically accepting the output. At best, you would be doing some kind of routine engineering, and you could only hope it would be appropriate. At worst, turning the crank will only turn you into a crank.

    • I had perused many critical thinking textbooks when I lived in Boston. In hindsight, I am less impressed with the efforts made toward ‘improving’ critical thinking. Nor do I know how many people actually take courses related to it.

      I do think it is helpful to include non-experts/public in some issues. But subsets of them are funded by industry or wealthy benefactors.

      It goes back to a distinction between science and scientism that Susan Haack draws, which is overlooked when we evaluate the sociology of expertise.

  2. I know (prevailing view) is True because I learned that it was from Critical Thinking. Maybe you should try some Critical Thinking sometime! Then you could learn what’s True!

  3. Here’s a relevant point from Philip Ball, a leading UK science writer.

    “As there are food and book critics, we need science critics, not just to translate and explain science, but to put some context around research, ethical dilemmas, to explain a broader view of science, to think about it” @philipcball at #2020BISTConference #SciComm – So true!

    I’d add to his point that external criticism is often very useful, a form of “unpeer review” where those involved haven’t all been educated in the same methods, theory preferences, etc. We could call these counter-Kool-Aid forces.

    And finally, analogies with art criticism offer a further lesson: you don’t have to be an artist to be a useful art critic. Non-practitioners can often see patterns and issues that are harder for insider experts to detect.

    • This may seem off the subject. There is something to the aphorism ‘Good is the Enemy of the Great’. Sometimes a further distinction is drawn between ‘specialists’ and ‘generalists’.

      My experience has been that those who are engaged by and exposed to a wide variety of thinkers in earliest childhood seem to cultivate more eclectic and logical thinking skills. I base that on reading many biographies. And specifically, I base this on reading Richard Posner’s scholarship, which resurrected my interest in the study of decisions. I used to wonder why such brilliant people made some truly, obviously dumb decisions. I don’t count myself as brilliant. But I sure have made my share of dumb decisions.

  4. I find myself falling mostly into the Fabio camp.

    The only way to *teach* critical thinking is to assign problems that demand it. And if you want people to have the tools to solve those problems, you need to teach the *subject* well. If you don’t have the subject tools, you can’t think critically about the subject, period.

    That being said, many tools of science can be classified under many subjects. For example, while there may be specific types of data viz that are common in, say, election forecasting, data viz is also its own subject, and much of what is learned about data viz in ecology also applies to election forecasting. And whether you are an ecologist or an election forecaster, you’re choosing a data viz that fits your data, not just doing what’s been done before. The same is true, of course, of statistics, and there are many other methods that are widely applied across different scientific subjects.

    Also, we learn many principles from many different subjects on our respective roads through education. So many people have a basic knowledge of many subjects, and it’s certainly legitimate for a business major with a basic knowledge of geology to question a study that seems to ignore the law of stratigraphic superposition.

    Finally, I’d say that while subject knowledge is obviously very important, you don’t always need to be a subject specialist to see a hole in someone’s argument – *especially* in social science, where there are so many many many unrecognized assumptions implicit in even the most basic experiment. Hence, it’s easy for the casual observer to recognize that grabbing 12 undergraduates for a study on investing isn’t likely to produce useful results.

    So in the end the answer to critical thinking is that if you want people to think critically, you have to give them *problems that demand it*.

    And the answer to criticism of science is that valid criticisms are, well, valid, no matter who’s doing the criticising. Unfortunately, just by their nature, some disciplines are more easily criticized by outsiders than others. It’s easy for non-specialists to criticize the assumptions in a social science study. It’s hard for non-specialists to criticize the mathematical methods in a geophysical study using gravity to determine the mass of glaciers over the poles.

    Last but not least, do we need a method or format for critique? I don’t think so. The bottom line is that most scientists don’t respond to critiques; they just dig in deeper. The best critique is to find a better solution to the problem, and that’s how science has typically progressed.

    • Very good points, Jim.

      Listening to experts talk informally has offered a lot of clues as to what has actually gone into, and what has been excluded from, expert research and policies. That is my direct experience in the foreign policy and research realm. It’s not simply a matter of research practice being great, good, or bad; the quality of thinking also reflects the relationships among the experts themselves. As a teen, I got a huge dose of this because my father used me as a sounding board. His colleagues did too.

  5. Fabio Rojas wrote:

    “Obtaining truth is hard and there is no magical form of thinking called ‘critical thinking’ that can be separated from specific domains. Aside from very simple general rules of thumb, such as “don’t be emotional in arguing” or “show me your evidence,” the best way to improve your thinking is to learn from those who have spent a lifetime actually trying to figure out specific problems.”

    The shallowest layer of hell should be reserved for folks who hold forth like experts on subjects they know nothing about. Fortunately Thomas Basbøll is there to inject some sense into the discussion.

    Here are a few “rules of thumb” that I included in my critical thinking curriculum:

    1. Consider the end cases. Take the most extreme values in the database and see if they make sense in your theory. If they don’t, figure out why they don’t.
    2. Perform an outcome analysis. Figure out all the different outcomes that your analysis could produce. Can your analysis cover all outcomes that could occur in the real world? Can it produce any outcomes that defy belief? Why not?
    3. Create a causation logic tree around your expected outcome. What are all the possible factors that could result in higher expenditures during male-named hurricanes? Lay these out in a formal, hierarchical logic tree. Can you control all the pertinent variables with data collection? Are there multiple plausible causes that are just as likely as your favored one? What happens if you perform the same analysis on a different plausible cause?

    This is just a snippet, of course (a rough sketch of rule 1 in code appears below). I find that very few scientists bother with these steps. They are just as important in criticizing someone else’s work as in analyzing your own.

    If anyone wants to argue that these are not “‘critical thinking’ that can be separated from specific domains,” I would love to hear it!
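
    To make rule 1 concrete, here’s a minimal Python sketch of an end-case check; the “damage” data and the lognormal “theory” are hypothetical stand-ins invented for illustration, not anything from a real study:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        # Hypothetical outcome data, e.g. storm damage in $millions.
        df = pd.DataFrame({"damage": rng.lognormal(mean=3.0, sigma=1.0, size=500)})

        # A simple "theory": damage is lognormal, fit by moments on the log scale.
        log_damage = np.log(df["damage"])
        mu, sigma = log_damage.mean(), log_damage.std()

        # Rule 1: pull the most extreme cases and ask whether the fitted
        # theory could plausibly have produced them.
        for value in df["damage"].nlargest(5):
            z = (np.log(value) - mu) / sigma
            note = "  <-- investigate: implausible under the model" if abs(z) > 3 else ""
            print(f"damage = {value:8.1f}   z = {z:5.2f}{note}")

    An end case the fitted theory calls wildly implausible is a data error, a sign the theory is wrong, or both; the point of rule 1 is simply to force you to look.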

    • I would be interested in how Philip Tetlock and the Good Judgment Project trained their participants. It is complicated b/c, in my opinion, we don’t pay sufficient attention to whether the stated goals and objectives of an enterprise are delaying or accelerating entropy. Base rate neglect. I am not sure which critical thinking curricula have addressed this. We don’t have many studies that explore this expansively.

    • I would agree that critical thinking can be separate from specific domains, but I would also argue that it is most effective to learn critical thinking from within a specific domain. Few of us (myself not included) are rennaissance thinkers (I can’t even spell it!) any more, and I’m not sure that is a practical pursuit given how complex our understanding of the world has become. So, I think it is more effective to teach (and learn) critical thinking from within a specific domain rather than on its own. I agree with your sentiments about the “shallowest layer of hell” being reserved for specialists who think their expertise extends beyond their narrow training. But I don’t think the solution lies in general courses in critical thinking. Rather, it requires enhancing our over-specialized education with layers upon layers of complexities, assumptions, and implications that are necessary for any good analysis.

      To state it a different way: I believe that good writing and good data analysis require the same ways of thinking – critical thinking. But I think it is easier to learn to be a good writer through data analysis than to have a budding data analyst take courses in good writing from writers. This does not mean that a statistician should teach them writing. I’m a big fan of team teaching – why not combine the talents of both in the educational process, rather than having writers teach writing and statisticians teach statistics and trusting/hoping the students figure out how to integrate the two? As far as I can tell, the only reason we don’t utilize team teaching more is that it is more expensive than our compartmentalized approach.

    • You have great advice for robust criticism of data analysis. It *is* a great way to critique the kind of work you do.

      But I don’t know if that’s “critical thinking” in general. The problem with a lot of analyses is that it’s not possible to know all the possible outcomes of a model. Also, the fact that the analyst *thinks* the outcomes make sense doesn’t mean they’re correct. The fact that certain things seem to make sense has led to some of the biggest screwups in history, WMD in Iraq being one glaring example.

      Anyway I do think those are good things to teach.

  6. I agree with the overall message, but I want to emphasize one thing that seems implicit: Critics need to be critical of their own ability to criticize.

    As I said, I think this “meta-criticism” is implicit in the original post, but it’s worth bringing up explicitly because it is just as easy for critics to be unaware of gaps in their understanding as it is for the bumblers. Without this meta-criticism, it is just as likely for critics to devolve into group-think and henpecking as it is for research communities to get entrenched in poor modes of thinking. We have seen instances of a lack of meta-criticism on this blog (https://statmodeling.stat.columbia.edu/2020/05/08/so-the-real-scandal-is-why-did-anyone-ever-listen-to-this-guy/).

    But meta-criticism is a two-way street: Sometimes, it should lead you to hedge or admit that you don’t know enough about a domain to meaningfully critique it. But other times, it should increase your confidence in your criticism. Consider, for example, criticizing the idea of large AND consistent social priming effects on large-scale behavior, or criticizing “machine learning” that just recapitulates biased selection of training data. The problems there in some sense *require* an outsider’s view, because an outsider can more easily abstract away from the specifics of any one study or application and see the broken logical structure of the whole enterprise.

  7. I think it’s somewhat questionable as to whether there are some kind of generic “critical thinking” skills that generalize across domains. There seems to be little evidence that there are.

    But, I’m reluctant to just dismiss the idea altogether. There’s a lot we don’t know even if the evidence is lacking.

    Instead, I think what makes sense is to practice applying the generic skills across a variety of domains – consciously walking the line, with “meta-cognition,” between theory and practice. If generalization takes place, it’s because you become skilled at walking that line. I don’t think it’s a skill that’s automatically conferred just because you focus on generic critical thinking skills; you become skilled through the practice of applying them.

    At the very least, there are probably some parallels between domains that aren’t overtly apparent, and the more you practice in discrete domains the more you develop that network of parallels. Even if you aren’t applying generic skills to specific domains, you’re developing the connecting network between more domains.

    • Joshua said,
      “I think it’s somewhat questionable as to whether there are some kind of generic “critical thinking” skills that generalize across domains. There seems to be little evidence that there are.”

      I think there are some “habits of mind” that are applicable across most domains — things like maintaining a skeptical attitude; asking what the evidence is; asking how well (or poorly) it supports the claims; asking if conclusions are extrapolated beyond the range of the evidence.

      • Martha –

        > I think there are some “habits of mind” that are applicable across most domains — things like maintaining a skeptical attitude; asking what the evidence is; asking how well (or poorly) it supports the claims; asking if conclusions are extrapolated beyond the range of the evidence.

        Yeah – I tend to think that’s true. Although I do think that we can find many people for whom those habits are more operational in some domains compared to others – particularly when some domains activate ideological or emotional or other biases more than others. And I suspect that when people are more accustomed to bringing those habits to particular domains where they are practiced and contextualized there would likely (often?) be some dropoff when someone is reasoning in a less familiar domain.

        So is there a difference between generic “critical skills” and “habits of mind”?

        • I would say that generic (or perhaps general) critical thinking skills ideally are habits of mind — i.e., making them habits of mind is in some sense the ideal to work toward.

      • BTW, Martha –

        I’m a believer in helping students to become more self-aware of their strengths and weaknesses as learners, and more deliberate about the strategies they can apply to improve their learning. Imo, that is the most important goal of being a teacher: to help students be better “executives” of their own learning process. In a sense that would be like developing “habits of mind.”

    • I think it’s obvious that basic logic and probability theory are useful across many domains, and many philosophical courses in critical thinking will have units on at least these two topics. But aside from teaching generalizable skills, I take one of the tasks of a critical thinking course to be to expose students to a bunch of different ways it’s possible to go wrong, if only to inculcate a general kind of humility and skepticism. That’s why my critical thinking course (and most other philosophical critical thinking courses I’ve seen) covers a large number of different types of fallacies (rhetorical fallacies, statistical fallacies, psychological fallacies, etc.). I don’t think there’s evidence from a large randomized controlled study that shows that critical thinking courses have the effect I intend them to have, but then again—as you point out—that doesn’t mean they don’t. From a common-sense point of view and (anecdotally) for motivated individual students it seems they certainly *can* have the intended effect, and that’s good enough for me.

      Analogously, as far as I’m aware (though I’m not on top of this literature), no diet has ever been shown to be particularly effective on average in a large randomized controlled study. But common sense and tons of anecdotal evidence strongly suggest that a motivated individual can drastically improve their weight and body composition through healthy eating.

      • Olav said,
        “But aside from teaching generalizable skills, I take one of the tasks of a critical thinking course to be to expose students to a bunch of different ways it’s possible to go wrong, if only to inculcate a general kind of humility and skepticism. That’s why my critical thinking course (and most other philosophical critical thinking courses I’ve seen) covers a large number of different types of fallacies (rhetorical fallacies, statistical fallacies, psychological fallacies, etc.).”

        Analogously, for several years I taught a “continuing education” type course called Common Mistakes in Using Statistics: Spotting Them and Avoiding Them. A lot of students who signed up for the course said that the title of the course attracted them — because they don’t like making mistakes (and, in fact, many of them were the most statistically literate people where they worked, but realized that there was a lot they didn’t know).

  8. Criticism and analysis aren’t “two essential activities” in science. They aren’t even different activities. In the context of the scientific method, the two terms–criticism and analysis–are perfectly redundant and entirely interchangeable.

    Seriously. Take any sentence on this page containing the word criticism (or critical or critic) and replace it with analysis (or analytical or analyst) and it doesn’t change the meaning of the sentence one bit.

  9. Let’s say the “bumbler” has a hit rate of 1 in 20. How does the “bumbler” figure out which one of the 20 “discoveries” published is “right”? The “bumbler” has 20 “bumbler” friends. Now we have 400 “discoveries” with 20 correct ones. Without replications, and without “pointers”, how do we find the 20 truths among the sea of lies?
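
    A quick simulation makes the arithmetic concrete; the replication “power” and false-positive numbers below are assumptions chosen for illustration, nothing more:

        import numpy as np

        rng = np.random.default_rng(1)
        n_bumblers, papers_each, hit_rate = 20, 20, 1 / 20

        # Each paper claims a discovery; only about 1 in 20 is actually true.
        is_true = rng.random((n_bumblers, papers_each)) < hit_rate
        print("total discoveries:", is_true.size)   # 400
        print("actually true:   ", is_true.sum())   # about 20

        # One round of independent replication ("pointing"), with an assumed
        # 80% power on true effects and a 5% false-positive rate on false ones.
        power, alpha = 0.8, 0.05
        replicates = np.where(is_true,
                              rng.random(is_true.shape) < power,
                              rng.random(is_true.shape) < alpha)
        print(f"share of replicated findings that are true: {is_true[replicates].mean():.2f}")

    From the outside there is no way to tell the 20 from the 380; under these assumed numbers, one round of “pointing” raises the share of true findings among the surviving results from 5% to roughly half.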

    It would be interesting to see what happens if we don’t divide the world into “bumblers” and “pointers” but instead make everyone 50% “bumbler” / 50% “pointer”. The anonymous psychologist might then discover that “pointer” is not the “easier of the two jobs,” because now s/he gets to decide which of the 400 discoveries are true, and then convince the authors of the other 380 papers that they are wrong.

  10. Do we have any empirical evidence for the bumblers/pointers theory? Were the theories developed and promoted by Newton/Einstein/Darwin/Turing 5% scientific revolution and 95% bullshit?

    Ok, Newton had weird hobbies and Turing believed there was evidence for ESP, but in the fields of math and physics, were their ideas 95% trash?

    • Math is largely a process of making abstractions and discerning the truth about those abstractions. That is quite different from trying to learn about a reality that is beyond our direct access – i.e., the empirical world.

      Then physics is an area of study that, in the large, is very stable and common (often the same in most places and changing imperceptibly slowly). Also, many aspects can be measured with high signal and low noise.

      So comparing apples with oranges?

  11. This reminds me of the dynamic in financial risk management. It is easy to observe success / productivity / accomplishment / profitability in the person who is proposing new deals / hypotheses etc. Because this success is rewarded for each deal they make / paper they publish, they have a strong incentive to do as many as they can.

    It is much harder to observe the success of the person who says “wait, not so fast. Here are a few concerns; let’s do some more work before we go.” These people will seem like the source of the problems. Their value is in the negative effects (retraction, loss) that they prevent. When they do their job right, that loss is never seen.
