How to read (in quantitative social science). And by implication, how to write.

I’m reposting this one from 2014 because I think it could be useful to lots of people.

Also this advice on writing research articles, from 2009.

18 Comments

  1. Z says:

    My favorite part was when you gave examples of numbers: “I’m not gonna knock myself out trying to read a bunch of tiny numbers (868.15, 1245.80, etc.)”

    An example of the universal being in the specific?

    • Joe Nadeau says:

      Mendel’s laws of inheritance are an exceptional example. From the ratios of inheritance patterns for many traits in peas, Mendel discovered the near-universal laws of inheritance, with their famous 3:1 and 1:2:1 ratios.
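
      [Editor's note: as a quick illustration of the ratios mentioned above, here is a minimal simulation sketch (not from the original comment) of a monohybrid Aa × Aa cross, which recovers the 1:2:1 genotypic ratio (and the 3:1 dominant-to-recessive phenotypic ratio) from nothing but coin flips over alleles.]

```python
import random
from collections import Counter

def monohybrid_cross(n_offspring, rng):
    """Simulate Aa x Aa: each parent contributes one allele at random."""
    counts = Counter()
    for _ in range(n_offspring):
        # Sort so that "aA" and "Aa" are counted as the same genotype.
        genotype = "".join(sorted(rng.choice("Aa") + rng.choice("Aa")))
        counts[genotype] += 1
    return counts

rng = random.Random(1)
counts = monohybrid_cross(40_000, rng)
# Proportions should come out near 0.25 : 0.50 : 0.25 for AA : Aa : aa.
print(counts["AA"], counts["Aa"], counts["aa"])
```

      The universal law really is visible in the specific numbers: with enough offspring, the simulated proportions settle tightly around 1:2:1.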

  2. Terry says:

    I took a look at the commenters to the two articles. Just about no overlap with today’s commenters. (Zbicyclist was the only one I saw.)

    No idea what it means. Just reporting a fact.

    • Andrew says:

      Terry,

      I wish I could say that all our former commenters grew up and started their own blogs! But I guess people just got tired of commenting. Too bad. I love (most of) our comments!

    • I know I’ve been commenting since 2006 or 2007, but it doesn’t look like I had anything to say about the 2009 post… I think there are at least 5 or 6 regular commenters I remember from pre 2010. I don’t know how regular they are… but reasonably still around. Obviously Phil Price and Keith O’Rourke; Manoel Galdino is still around; Corey, Zbicyclist, Dale Lehman maybe; others I’m probably forgetting. The main thing, I think, is that the 2009 post had fewer comments than typical.

      It’s hard to “grow up and start your own blog” with the power-law distribution of connections. Much better to comment on Andrew’s blog than start your own and get 1 comment from your friend occasionally.

      • Anonymous says:

        Quote from above: “I think there are at least 5 or 6 regular commenters I remember from pre 2010. I don’t know how regular they are… but reasonably still around.”

        I wonder if there is some sort of “evolution” in commenting on and following blogs, etc. I can remember finding this blog 5/6/7 (?) years ago (or something like that), and thinking it was one of the most interesting and useful blogs out there (I still think that).

        I hesitantly started to comment once in a while, which became more frequent after finding out that at least some people thought my comments were worthy of a “+ 1”, and/or at least some people replied to my comments, etc.

        I now find myself trying to make the same points over and over again, which I myself am getting tired of, and I can only imagine how that must be for other readers :). And I also wonder what the point of it all is when I see groups of “the in-crowd” people on twitter replying to and liking each other’s posts, or ideas, or comments, which I feel are of a severely sub-optimal standard at best, and simply stupid at worst. I can totally see how these folks will not read and/or participate in this forum, just as I will not read and/or participate on twitter.

        I think I have given up on things, which possibly goes along with trying to stop commenting on this forum. This sh#t just depresses me too much, and I only get angry and frustrated. I have tried to do the best I could to help improve matters, and this blog has been of tremendous help with that (in several ways, I reason).

        Perhaps this is also the case for several other commenters here, and may explain why some people stop commenting after a while.

        Imagine, if you will, a river flowing through green pastures, and a backdrop of mountains with snowy peaks. Maybe this forum is like a lodge at the base of the mountains. The owners (e.g. Professor Gelman) are nearly always present and serve up food, drinks, and a place to sleep (e.g. providing topics to discuss, providing the opportunity for people to participate, etc.). And the guests (the people commenting) come and go.

        This forum (like a lodge) may be an important part of the travels of many. Some stay at the lodge for a bit, some return to the lodge once in a while as they explore the surroundings, some go quickly up the mountain, and some quickly follow the river downwards. Some guests return, some don’t. Whatever may be the case, I like to think the lodge has helped many on their travels (perhaps without them even realizing it), and will keep doing so for some time to come…

        • Martha (Smith) says:

          Nice metaphor.

        • Anonymous says:

          (Apologies for possibly double posting, but my comment from about an hour ago has not been posted yet. This doesn’t usually happen, and when it does I think it’s mostly because I’ve included one or several links. I am now leaving out the link to the Levelt report, which I think might have caused my earlier comment to end up in the spam folder and not be posted. Let’s see if this works now.)

          Quote from above: “And I also wonder what the point of it all is when I see groups of “the in-crowd” people on twitter replying to and liking each other’s posts, or ideas, or comments, which I feel are of a severely sub-optimal standard at best, and simply stupid at worst.”

          I have thought about this some more, and I think this is exactly part of what might be wrong with parts of academia today. It reminds me of the Levelt report on the Diederik Stapel fraud case.

          Quote from page 48 of the report:

          “Taken together all of the above reinforces the picture of an international research community of which Mr Stapel, his PhD students and close colleagues were part, and in which the customary research methods and associated standards and values were mutually shared.

          Another clear sign is that when interviewed, several co-authors who did perform the analyses themselves, and were not all from Stapel’s ‘school’, defended the serious and less serious violations of proper scientific method with the words: that is what I have learned in practice; everyone in my research environment does the same, and so does everyone we talk to at international conferences.”

          I wonder how and why a small selection of research topics (and associated researchers) somehow, at a certain point in time, gets way too much attention and allocated resources (e.g. IAT stuff, priming stuff, ego depletion stuff, etc.).

          I wonder if a (small? large?) part of social science people are so focused on “groups” and “nudging” and stuff like that that they may have forgotten (at least some) people can have a mind of their own, can have a spine, can think for themselves, and so on.

          I wonder if a (small? large?) part of social science people might be the “type” of people who really don’t, or can’t, think about why certain things should be done a certain way. Perhaps the (type of) people who DID find it strange and/or unscientific to do things a certain way left academia, or were phased out.

          I fear that a (small? large?) part of social science people might currently be making a whole new series of severe mistakes because “that’s what everyone thinks is an improvement” and stuff like that. I fear we might be in the midst of exactly the same problematic process, where a whole new set of “improvements” is being proposed and (uncritically?) accepted just because a certain group thinks it’s good for science, talks about it repeatedly, has “Big” names to endorse it, etc.

          It sometimes feels to me that when “Big” names (convincingly?) repeat certain mantras or topics or ideas, a set of (young?) folks simply (uncritically?) repeat what the “Big” names say. If you then get some media attention for it, and use a few buzzwords, you can easily get momentum going for things that have not really been thought through. This could then all be reinforced by exactly the same things that have probably gone on for decades in academia: groups of friends reviewing each other’s work, formal and informal meetings where deals are made, s@cking up to the “Big” names just because you want to be part of the “in-crowd”, editors publishing their friends’ papers in their journals, journalists writing about research(-er) X because they get something out of it, etc.

          I find it interesting and depressing (to a certain extent) to (think to) see this process happen all over again. Like nothing has been learned in the last 8 years or so. But perhaps it’s because nothing can have been learned, at least not by a possibly important part of this process: young people in academia. Young people trying to make their way in academia may not have been around long enough to know how, and why, the game is played. I think these young people might be (consciously or unconsciously) taken advantage of, for instance by being given too little information concerning how, what, and why academia works like it does. I also think young people might be taken advantage of because it seems to me only a few “Big” names reap the rewards (i.e. money, power, influence, etc.) of all this “let’s work together to improve science” stuff.

          Come to think of it, I think and fear that much of the possibly problematic stuff of the past 30 years or so will be emphasized, reinforced, and multiplied tenfold in the coming years. Proposed large projects in academia can, and in my estimation will, only reinforce, emphasize, and replicate the possible core of many problematic issues in science: money, power, influence. If you are not careful, a small group of people can, and in my estimation will, determine large parts of the topics studied, the money allocated, the “findings” that get the most (media) attention, etc.

          • Martha (Smith) says:

            The price of quality (like the price of liberty) is eternal vigilance. In particular, those of us who care about quality/intellectual honesty need to keep on doing whatever we can to keep it on the agenda. Every little bit helps.

            • Anonymous says:

              “The price of quality (like the price of liberty) is eternal vigilance. In particular, those of us who care about quality/intellectual honesty need to keep on doing whatever we can to keep it on the agenda. Every little bit helps.”

              Yes, thank you for this comment.

              It reminded me that, in spite of my despondent view on matters, it could be important in science to keep hammering on the importance of scientific principles, values, responsibilities, etc. (even when you think, or feel, or it even actually is the case, that others may not adhere to them).

  3. jim says:

    Andrew, when I read “The result that a one-hour documentary shown six months earlier induces actual behavioural change…” , if I keep reading it’s only to find out what kind of black magic they used to arrive at such a ridiculous conclusion. From my POV, it can’t possibly be true.

    Think about it this way: people are exposed to gazillions of uplifting documentaries and True Amazing Success and Happiness stories. If they had any notable impact, everyone in the world would be stupendously wealthy. Since that’s obviously not true, a single movie can’t possibly have a significant impact on the population scale. It just can’t happen.

    That’s what I see in lots of research today: many claims that are outright impossible. Not just unlikely. But clearly, irrevocably, irrefutably impossible. You don’t need to look at the statistics to find problems. Just read the methods, find the eye-popping assumption and call it a day. How these things get published is beyond me, because all too often it barely takes a second’s thought to debunk them. It’s not even challenging. Yet there they are in print.

    Social sciences aren’t alone. People in the physical sciences also follow the head-in-the-sand method of reality-checking their assumptions and conclusions. Forecasts regarding climate and environmental related issues seem particularly susceptible to puffery. I suspect that’s because the longer-range the prediction, the less likely the predictor is to be called out on the failure or inaccuracy; and because of a prevailing bias for certain beliefs among the academic literati.

    • Ethan Bolker says:

      I’m curious. When you write

      Forecasts regarding climate and environmental related issues seem particularly susceptible to puffery

      do you really attribute concern for climate change to

      a prevailing bias for certain beliefs among the academic literati

      and put them in the same category as social science survey based predictions, for which

      all too often it barely takes a second’s thought to debunk them. It’s not even challenging.

      ?

      The social science puffery often comes from correlations from a single noisy survey or experiment with no underlying plausible causal model. The climate change forecasts are based on many kinds of measurements collected over years fed into models built from well understood physics.

      • Martha (Smith) says:

        “The social science puffery often comes from correlations from a single noisy survey or experiment with no underlying plausible causal model. The climate change forecasts are based on many kinds of measurements collected over years fed into models built from well understood physics.”

        Agreed. In addition, there are scientists coming to climate science from a variety of backgrounds, and so there is a lot of inter-disciplinary criticism before forecasts are acceptable. In contrast, many social scientists are dismissive of criticism from people in other fields.

        • jim says:

          Hi Martha and Ethan,

          It’s reasonably possible to predict global temps within a broad range over a short period (20 yr). That’s been done already, although the predictions have run notably higher than reality.

          It’s not known at all if it’s possible to predict global temps in 2100, so one would have to assign that effort to speculation. Particularly since it’s highly dependent on CO2, which in turn is highly dependent on human behavior.

          But beyond that, most predictions aren’t based on rock hard science and many are wildly speculative. For example, predicting the impact of “scenario x” on food production in 2100 is nothing but wild speculation. Aside from the fact that much of the science around such a prediction is fuzzy or requires untestable assumptions, unpredictable technological changes in food production will render any current prediction completely irrelevant.

          Last but not least, predictions about the future of the planet have a long history of failure, mostly because, as in the social sciences, there are *numerous* interactions that can’t be predicted or constrained. That hasn’t changed in recent years.

        • jim says:

          Ethan, Martha:

          “noisy survey or experiment with no underlying plausible causal model.”

          Actually climate data is extremely noisy. In some cases, like surface temp, the noisiness of the data is mitigated by incredible volumes of data. In other cases, like ocean surface temps, hurricane frequency and intensity, and data on “heat waves” or “extreme precip events”, it ain’t.

          Another problem with social science data that Andrew mentions frequently is the number of unconstrained interactions. There are also myriad unconstrained interactions in climate / climate-impact prediction, all the more so over time frames of 80-100 years.

          And while there may be usable constraints on global temp predictions, there are a huge variety of other predictions and forecasts for which the constraints are poor. The impact of climate change on food production, for example, is not well constrained or known, and that’s even before we account for changes in technology. There is still ample controversy regarding the degree of sea level rise as well, and claims about things like the breakdown of thermohaline circulation in the North Atlantic are utterly unknown and no more valid than the power pose.

          On top of that, many scientists are much less conservative in speaking to the press than they are in their published papers, which gives the impression that their opinions are supported by their research, when in fact their opinions are more extreme than their research will support. This blurs the line between fact and fiction.

          • Martha (Smith) says:

            I’m aware that climate science data is extremely noisy, and that there are many interactions. My point was that there is more scrutiny and criticism in the climate science community than in the social science community. I might also add that there are often different types of data relevant to climate change (e.g., tree ring data, ice core data, meteorological records, …) that can be analyzed separately, with results compared to see if they point to the same predictions. Also, people from the different scientific communities involved may be able to bring experience of possible confounders that are not apparent to people in other fields; this can be a source of skepticism of results, as well as a source of how to do better analyses.

        • jim says:

          I’ll just add that I think there is a lot of striving in the sciences in general to stretch for policy implications.

  4. jim says:

    Oops! Double post. My bad.
