
Tools for detecting junk science? Transparency is the key.

In an article to appear in the journal Child Development, “Distinguishing polemic from commentary in science,” physicist David Grimes and psychologist Dorothy Bishop write:

Exposure to nonionizing radiation used in wireless communication remains a contentious topic in the public mind—while the overwhelming scientific evidence to date suggests that microwave and radio frequencies used in modern communications are safe, public apprehension remains considerable. A recent article in Child Development has caused concern by alleging a causative connection between nonionizing radiation and a host of conditions, including autism and cancer. This commentary outlines why these claims are devoid of merit, and why they should not have been given a scientific veneer of legitimacy. The commentary also outlines some hallmarks of potentially dubious science, with the hope that authors, reviewers, and editors might be better able to avoid suspect scientific claims.

The article in question is, “Electromagnetic Fields, Pulsed Radiofrequency Radiation, and Epigenetics: How Wireless Technologies May Affect Childhood Development,” by Cindy Sage and Ernesto Burgio. I haven’t read the two articles in detail, but Grimes and Bishop’s critique seems reasonable to me; I have no reason to believe the claims of Sage and Burgio, and indeed the most interesting thing there is that this article, which has no psychology content, was published in the journal Child Development. Yes, the claims in that article, if true, would indeed be highly relevant to the topic of child development—but I’d expect an article such as this to appear in a journal such as Health Physics whose review pool is more qualified to evaluate it.

How did that happen? The Sage and Burgio article appeared in a “Special Section on Contemporary Mobile Technology and Child and Adolescent Development, edited by Zheng Yan and Lennart Hardell.” And if you google Lennart Hardell, you’ll see this:

Lennart Hardell (born 1944), is a Swedish oncologist and professor at Örebro University Hospital in Örebro, Sweden. He is known for his research into what he says are environmental cancer-causing agents, such as Agent Orange, and has said that cell phones increase the risk of brain tumors.

So now we know how the paper got published in Child Development.

Of more interest, perhaps, are the guidelines that Grimes and Bishop give for evaluating research claims:

I’m reminded of another article by Dorothy Bishop, this one written with Stephan Lewandowsky a couple of years ago, giving red flags for research claims.

As I wrote back then, what’s important to me is not peer review (see recent discussion) but transparency. And several of the above questions (#3, #4, #7, and, to some extent, #8 and #9) are about transparency. So that could be a way forward.

Not that all transparent claims are correct—of course, you can do a crappy study, share all your data, and still come to an erroneous conclusion—but I think transparency is a good start, as lots of the problems with poor data collection and analysis can be hidden by lack of transparency. Just imagine how many tens of thousands of person-years of wasted effort could’ve been avoided if that pizzagate guy had shared all his data and code from the start.


  1. Jonathan says:

    Do you read non-statistical material? Non-science material is also replicable and not replicable: meaning it states facts that can be but often aren’t checked, cites sources that are taken as true when they’re not investigated – and without any confidence assigned – and gives citations to other sources that cite other sources taken as statements of an underlying ‘fact’. The level of honesty in many fields is sadly low. Much of the work isn’t just polemic, but outright distortion that crosses the boundary between negligence and either willful ignorance or actual intent. In some fields, it has become the norm to cite the work of others who agree with you as proof of what you’re saying, as though a circle jerk is procreative. Academic standards in many fields are sad.

    • Anonymous says:

      Can you give some examples of this please?

    • Anonymous2 says:

      +1 to Jonathan although a different metaphor might be “echo chamber.”
      • One thing that happens in the humanities is citing or quoting an article/book that has not been directly consulted but playing it off as though it has been. You can see this when a typo is reproduced in a quotation (some of this was the basis for charges in the Dershowitz–Finkelstein Affair, but the practice is rampant, even if all one need do is say “as quoted by/in …”).
      • Sokal and Bricmont’s point in their hoax and follow-up books was that science gets used in the humanities to intimidate or to get arguments through without critique. But they might have added, along the lines of what Jonathan is saying, that these thinkers get quoted as established fact (or as established falsity — as though either everything Lacan said was gospel or everything he said was wrong). There is a lot of building upon what may be rock or may be sand. But the building goes on.
      • Some sort of network analysis of citations in the Humanities might show more about those fields than about their content (more about the authors than about the topics).
      Anyway, not the specific examples Anonymous wants, but my sense is they are out there.

    • The state of academic standards must be addressed honestly and transparently. It is a sensitive subject, obviously. Nevertheless, several legal scholars have produced excellent analyses of academic culture:

      1. Deborah Rhode: In Pursuit of Knowledge: Scholars, Status, and Academic Culture

      2. Richard Posner: Overcoming Law & Public Intellectuals: A Study in Decline

      3. Derek Bok: Universities in the Marketplace

      None of them, by any means, proffers a radical reformist agenda. I’m sure there are even more recent analyses. These reflect, in my view, different dimensions of epistemological, pedagogical, and normative challenges;

      in short, intellectual crises in which the quantitative has loomed large.

      I’ve conveyed this, since 2004-5, to various institutions and groups: for example, in discussions of the ‘debt’ and ‘debt ceiling’; in national security assessments; and in foreign policy formulation efforts.

      In Congress and across the intelligence community, there has been a push for ‘evidence-based decision-making’. I’m reading proposals put out by think tanks.

      I speculate that unless there are emphases on critical thinking and communication, incrementalism will prevail. Perhaps exposure to cognitive biases, logic, and controversies within science may improve how we evaluate evidence.

      In those senses, I welcome Grimes & Bishop’s questions. But over 3 decades, answers to them have proven quite elusive. Cognitive dissonance has prevailed.

      Lastly, I am a citation buff, so I am sympathetic to Andrew’s last comment. Many citations, in my opinion, warrant neither their placement nor the meaning attached to them.

      Just some quick comments on the run, so apologies for errors.

      • Keith O'Rourke says:

        You could add Susan Haack

        “In an environment like this, an environment of perverse incentives that reward, not the truly serious, but the clever, the quick-witted, the flashy, the skillful self-promoter, and the well-connected, it’s no wonder that the very virtues that good intellectual work, and perhaps especially good philosophical work, requires—patience, intellectual honesty, realism, courage, humility, independent judgment, etc.—are rapidly eroding”

        • Thank you. I’ve read some of her work sporadically. I see she has commentaries on Popper and Peirce as well. I very fortunately started out with national security law groups, so I have witnessed the practical applications and outcomes of legal precedents and rule-making, and the evidential processes across several domains and disciplines, in parallel with David Kennedy of Harvard Law School. Thus we share some similar views about the ubiquity of law in nearly every domain and discipline, as well as the epistemological questions arising in each.

        • Thanatos Savehn says:

          She is also, alas, the favorite philosopher of the worst of the cell-phones-cause-cancer ambulance chasers and has reportedly appeared at their meetings dedicated to fighting Daubert and other attempts to demand rigor from what is presented as “scientific knowledge” in the courtroom. Her celebrated breach with orthodoxy in the academy came in the context of fights with the sort of feminist epistemologists who claim that formal logic, calculus, etc. are just tools of the patriarchy.

          Anyways, her hot take (at least as it appears in the arguments of lawyers whose briefs I have to read) is that Popper is not only dead but dead wrong because, well, if you power pose and your neuroendocrine levels don’t spike it could be that our analytical techniques for detecting neuroendocrines don’t always work or maybe brain waves from nearby evil skeptics cleave the molecules before they can be detected so Ta-Da! no refutation! and we get to sue employers for not teaching power posing to employees and the jury gets to decide whether the power posing expert witness paid $500/hr is likeable and preaching something they want to believe; and the defense has to defend not only itself but analytical chemistry too. yay.

          P.S. Meant to respond sooner but I watched O’Quinn try the famous breast implant case. And to your point, yes, many judges (and jurors) mistake ridiculous claims of precision for evidence of scientific rectitude.

          • Thanatos: I like this passage by Haack:

            ‘But we all know, if we’re honest with ourselves, that the way to get good philosophical work done is not to give plausible people huge grants, but to allow serious people the freedom to follow ideas where they lead—freedom from pressure to rush the work, exaggerate their results, or reach conclusions deemed politically acceptable, freedom from anxiety that failure to conform to intellectual fashion or to defer to this or that Big Noise may make it difficult to publish in the “prestigious” journals and, more generally, freedom from demands to go along to get along.’

            Re: And to your point, yes, many judges (and jurors) mistake ridiculous claims of precision for evidence of scientific rectitude.

            What would be a remedy for such thinking?

          • I would be very surprised if Haack has actually stated the reasoning you have attributed to her and, presumably, her circles. If you have documentation of it, please refer me to it.

            As for the claims that formal logic/calculus, etc are tools of patriarchy, I haven’t come across such feminist critiques.

            In conferences, I have found that most female experts have been reluctant to get into an adversarial debate so as to once and for all drive the last stake into a mediocre or dumb line of reasoning. And I’d say many male experts avoid such an encounter as well. After all, what do experts do? They have careers to maintain.

            • Anonymous2 says:

              “claims that formal logic/calculus, etc are tools of patriarchy”: The following may be examples or red herrings:

              Luce Irigaray (to whom a chapter in the Sokal/Bricmont Fashionable Nonsense is devoted): “Female sexuality cannot articulate itself within an Aristotelean type of logic.”
              Jacques Derrida (also a Sokal/Bricmont target) coined “phallologocentrism”

              Whether people read too much into such claims, or not enough, is covered by Craig R. Vasey in “Logic and Patriarchy” (in Postmodernism and Continental Philosophy)

              There was also a good deal of debate over Andrea Nye’s Words of Power: A Feminist Reading of the History of Logic.

              • Thank you, Anon. I’m on an Amtrak train at the moment; I’ll look into these references. I don’t see myself as a feminist in the popular sense. Ironically, my sustained protection has come from men, because my story is apparently likened to Mean Girls and Footnote. Therefore I’m not convinced that such interpretations capture the truth of female behavior, even within radical feminism.

  2. Pizzagate? lol Andrew I have the distinction of having my very FIRST Stormy Daniels drink at Comet Pizza Place. LOL kinda kidding.
    Pizza very good however.

  3. Anonymous says:

    “Just imagine how many tens of thousands of person-years of wasted effort could’ve been avoided if that pizzagate guy had shared all his data and code from the start.”

    I’ve been thinking about data sharing and wonder if it’s really that useful. Or, perhaps better phrased, I wonder whether, without things like pre-registration (which provides possibly crucial additional information plus accountability), it will be useful in the future for fraud/error checking (I think it will still be useful for other things).

    I reason that:

    1) Things like pizzagate may be typical of research of the past decades, in the sense that a) researchers rarely asked for or checked data, b) computing power/savviness was less than it is now, and c) researchers therefore still had “clumsy” data. Possible data fabrication or manipulation will probably become less “clumsy” in the future.

    2) Open data seems highly sub-optimal to me without pre-registration information that the reader can check to see what measures, conditions, etc. were used in an experiment. I am not sure what, for instance, the open-data badge in PsychScience implies, but I can imagine it can just lead to a “cleaned” data set where the researcher could have left out outliers, measures, conditions, etc.

    (In light of the latter point, I find it very interesting that PsychScience seems to have handed out far more “open data” and “open materials” badges than “pre-registration” badges, if I am not mistaken.)

  4. Phil says:

    “1. Is there a plausible mechanism for the effect?”, that’s often a tough one. If, by ‘plausible’, we mean a mechanism that can’t be ruled out by application of physical laws, the answer is almost always “yes.” It’s good to have it on the list, and it’s even good that it’s the first thing on the list, but I think it’s pretty rare for this to be a useful question: if there weren’t an explanation that some people, at least, find plausible, you wouldn’t be having the discussion in the first place.

    As for cell phones and cancer…I think cell phones ‘could’ cause cancer, but don’t. Or at least not brain cancer. Collectively, we have done a massive experiment in which we’ve exposed literally billions of people to different amounts of cell phone radiation over the past fifteen or twenty years. This might have been a dumb thing to do, but it seems to have worked out OK: cancers of the nervous system (including the brain) have remained dead flat. Of course there could be some very small effect, but it’s in the noise compared to known negative effects of cell phones such as additional car accidents from distracted driving.

    • Jeff says:

      Good point. The questions on the list seem to presume a “yes” to an unasked question zero: Is there a measurable effect? Given widespread concerns about cell phones causing cancer (or vaccines causing pretty much everything) it may be useful to spell that one out.

    • I also think they “could” but probably don’t. However, looking for an adoption curve I found… surprise, a link to a post on this site:

      This shows that 1990 was sort of the “start” of cell phone adoption, and it didn’t get to 50% until around 2000. So we’ve got 18 years of “widespread” exposure, maybe less. That might not be enough to see brain cancer rates kick in. Compare to something else that requires long-term exposure but definitely causes cancer: smoking.

      That shows that widespread adoption was in place by, say, 1940, but we didn’t have a definite smoking/cancer connection until well into the 1960s, and that had to be a much bigger effect.

      So, I agree in principle, but it’s really probably going to take another 20 years to be sure.

      • Phil says:

        Maybe not everyone was _convinced_ of a smoking/lung cancer connection until the sixties, but that AHRQ paper that you cited shows lung cancer already climbing at the very left edge of the plot: 1931 higher than 1930. Given that the rate was going up, you can at least imagine people not being sure why: maybe it’s car exhaust, or indoor plumbing, or whatever else was changing back then (which was a lot of stuff). But a rate staying essentially level for decades, that’s a lot harder to reconcile with a theory that it should be going upward: you have to believe that what ought to be an upward rise just happens to be exactly countered by a decline due to something else, over a period of years.

        I’m pretty sure that even if the average latency for cell-phone-induced brain cancer were twenty years or more, there would be some early onsets, so we would already be seeing an increase. And of course, maybe there is an increase over what there would otherwise have been! But if so it’s very small.

        • > And of course, maybe there is an increase over what there would otherwise have been! But if so it’s very small.

          Agreed, there is no smoking gun about cell phones and cancer. If there’s a small effect, we’ll need longer to detect it, but we can already rule out effects of the order of magnitude of smoking. So, that cell phones and brain cancer are nothing like cigarettes and lung cancer is already well established.

          Note though that the adoption curve for cigarettes was over a much longer timescale than for cell phones. So the rising rate of lung cancer between 1930 and 1950 say is related to an onset of cigarette consumption that started in around 1910 and took til 1940 to hit half its peak. The similar onset for cell phones was 1990 and took til 2000 to hit half its peak, so 30 years for cigs vs 10 for cells.

          Anyway, basically I agree with you that there’s very little evidence for cellphones and brain cancer, but it will take another 20 years to quantify well just how close to zero the effect is (and this is just a fundamental fact about small effects, that they take long times or large sample sizes to quantify).

      • Bob says:

        Understanding RF exposure trends over time is complicated. First-generation cell phones used lots of power—but they were usually car phones, so they weren’t used much and the antenna was relatively far from the user. CDMA 2G phones had a substantially lower maximum power and much, much lower average power. GSM 2G phones were lower power than 1G but higher than CDMA. All 3G systems used a form of CDMA.

        Now, the bulk of communication has shifted to digital so the quantity of bits/handset/month has exploded. Cells are packed more densely—so not as much power is needed to reach the base station. On balance, I suspect that 4G has increased RF exposure compared to 3G. Here’s a reference that states that typical RF power from an LTE handset is about 1% of max power.

        They state:
        “In rural environments, the 95th percentile time-averaged output power level of UE [cell phone] was found to be 6.4 dBm (4.4 mW), which is 2.2% of the maximum possible output power”

        I suspect that if you go back some decades, you will be able to find situations in which people living near FM radio stations got higher exposure to RF than cell phone users get today.
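        As a sanity check on the quoted figure, the dBm-to-milliwatt conversion is straightforward. A minimal sketch (the 23 dBm maximum is my assumption, based on the standard LTE handset power class, not a figure from the reference):

```python
# Convert a power level in dBm to milliwatts: P_mW = 10 ** (P_dBm / 10)
def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

typical = dbm_to_mw(6.4)   # the quoted 95th-percentile level, ~4.4 mW
maximum = dbm_to_mw(23.0)  # assumed LTE UE max power class, ~200 mW
print(round(typical, 1))                    # 4.4
print(round(typical / maximum * 100, 1))    # 2.2
```

        That reproduces both numbers in the quote: 6.4 dBm is about 4.4 mW, which is about 2.2% of the roughly 200 mW maximum.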


        • These are all good points. I’ll point out though that

          > I suspect that if you go back some decades, you will be able to find situations in which people living near FM radio stations got higher exposure to RF than cell phone users get today.

          this may be true, but may not be so relevant. The main mechanism I’d be concerned about would be harmonic resonance of certain enzymes, etc. Just as exposure to a few hundred watts of 27 MHz CB output isn’t likely to be the same as a few hundred watts of microwave-oven output, so too would 100 MHz FM radio output not necessarily be the same as ~1900 MHz modern cell phone output. The ability to selectively heat important biological enzymes in this gigahertz range is the main realistic health-effect mechanism I can think of, and evidently, as Phil says, there doesn’t empirically seem to be any problem. But one suspects the concern is relatively specific to gigahertz-range frequencies, so evidence from FM radio, CB radio, TV broadcast, etc. might be less relevant than it at first appears.

          • Bob says:

            You are correct. I left out discussion of the transition to higher frequencies because it would have made my response even longer and less-related to the main purpose of this blog. Still, we do have significant experience of exposure to various microwave frequencies by microwave techs, radar techs, and satellite ground station staff.

            It’s certainly possible that there are medically important, non-thermal effects of non-ionizing radiation. I’ve met smart people who believe that to be true. But, if there are such effects it seems highly probable that they are subtle and minor.

            For perspective, there are about 10 fatalities per year in the U.S. involving tower workers.


    • anon says:

      The “massive experiment” is completely uncontrolled in almost every way possible, so I’m utterly unconvinced by that argument.

      • If there are conditions under which cell phones cause cancer, and those conditions are achievable, then with a big enough sample some people will achieve them. The statistic that sums all cancer incidence should show some effect in that case; it’s a very low-noise statistic for this purpose.

        We can conclude something about the size of the effect or the achievability of the conditions: either cell phones cause at most a small change in cancer rate, or the conditions under which they cause a substantial change are extremely rare.
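        To put a rough number on “low noise”: if annual case counts behave roughly like a Poisson process, the relative counting noise scales as 1/sqrt(N). A minimal sketch (the ~24,000 annual US brain tumor diagnoses is an illustrative round figure I’m assuming, not a number from this thread):

```python
import math

# If yearly case counts are roughly Poisson, the relative statistical
# noise in a count of N cases is about 1/sqrt(N).
n_cases = 24_000                    # assumed annual US brain tumor diagnoses
rel_noise = 1 / math.sqrt(n_cases)  # ~0.0065, i.e. ~0.65%
print(round(rel_noise * 100, 2))    # 0.65
```

        So against pure counting noise, even a one-percent bump in incidence would be detectable within a few years; the harder problem is uncontrolled confounders drifting over time, not statistical noise.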

        • anon says:

          Daniel: ” it’s a very low noise statistic for this purpose.”

          How so?

          I haven’t looked at the numbers, but the baseline rates for glioma are tiny, so is the sample really “big enough”?

          “Very low noise”? Just on radiation alone, there are many sources. And possible non-radiation contributors are too numerous to list.

  5. Jonathan says:

    My actual point, which I realized I never reached in my prior comment, is that one can make arguments about academic standards in many ways. Example: statistical rigor is a way of looking at work but it isn’t the only way. Many papers (and careers) discussed here have been defended by the authors and their champions as representing potential, as slices and perspectives that offer hints which we may interpret conceptually even if the statistical rigor is lacking. After all, almost nothing we experience in life is ‘rigorous’ other than the experience of life inevitably being experienced, and yet we have opinions on almost anything that passes through our perception. Not only that, but in academia there is, of course, a means of discourse about any topic. (I’m not making anything up; the nature of discourse is an old topic and its expressive methods in academic and intellectual pursuits have been taken apart in detail. If anyone doesn’t know, think Popper.) Let’s say a study is low powered and otherwise somewhat noisy, so the most you can really say is ‘well, maybe it means this’, which in modern academic discourse becomes ‘it is statistically significant’ or some other incantation that pretends – here’s Popper again – to distinguish that science from myth.

    It’s as though the system could think: it adapts to being labeled mythical by drawing tighter and tighter definitions that label this ‘science’, as though those labels transform ‘well, maybe it means this’ into a metrical tensor and the curve of a falling object. Not a Pynchon fan, but Gravity’s Rainbow. Most publications are stories we tell to ourselves. They aren’t ‘true’, any more than Suetonius is an actual history of the house of Augustus. People want it to be true. Many over the centuries have staked their reputations on the descriptions of Caligula being, well, if not exactly true, because Suetonius must be exaggerating, certainly not a complete fabrication … though there’s no evidence that it wasn’t a complete fabrication invented by those who wanted to cast aspersions on the concept of an Imperial ‘House’, who in traditional Roman terms believed in the Republic and, in particular, in the S of SPQR. (It was, I believe, Trajan who solved the problem for a long time by adopting a most capable man to be his ‘son’ and heir, a tradition that continued until, again I believe, Marcus Aurelius, who bequeathed Commodus to Rome.) Far-fetched? Ever been to the Sistine Chapel? Until it was cleaned, the work was presented as a meditation in dark versus light in which Michelangelo drags existence out of darkness, in which the muscularity of shape is bound and held back or down. And then? Many refused to accept what their eyes saw: that it was bright colors, and that not only their entire careers but the history of art was a story told from an inaccurate perspective. But it was true while it lasted, right? Better to believe in the Michelangelo you want than the guy who used eye-popping color and great sweeps of light with very little dark. And people believed that despite his other actual paintings being generally infused with light, almost pastel and jewel-like in intensity. I guess the Vatican was supposed to be different.

    We all engage in creative acts of story-telling. It’s built into the Bible. (It’s built into every religion but the Bible is so familiar here.) Be born again into the moment through acceptance, through ritual acts that reflect your appreciation of existence beyond what you can know. Each of us is born into the moment. And we’re told from an early age that life is what we make it, that we have a personal story to tell, to enact, and that one of the highest forms of being is to feel one with your surroundings, with yourself, in a creative act of living. So we make up stories about existence. Midrash. Lots of stories, mostly very noisy, like all Midrash because you can’t prove a single one, those being on the mythical side of the line that divides ‘science’ from what cannot be derived unless you assume a whole bunch of things you can’t actually prove and which resist the kind of regularization which statistics can evaluate except in the sense that n people believe this story versus the n-x that believe this story, if one then treats n as evidence of objective truth beyond the population n from which n was drawn.

    Turning to academia, I listen to people in ‘soft’ fields like American Studies and Women’s Studies and various forms of Asian and Middle Eastern Studies, and what they’re selling is their perspective, using the academic methods of discourse. Some fields appear more inbred or incestuous than others; the work is less ‘research’ to uncover evidence than packaging and presentation of various partial facts in accordance with certain agreed-upon conclusions, so the act of presentation buttresses the shared conclusions. Stick with the program or, as Taylor says, they’re burning all the witches, even if you aren’t one. Everyone knows if you want to get a job and then get ahead, you a) have to present material using the method of discourse for that field and b) unless you’re a very good actor, you have to believe what you’re saying enough to be passionate about it, where ‘believe’ is a soft version which combines wishes plus hope plus some evidence plus ignoring other perspectives because you see value in the perspective of this method of discourse. I’m not saying all work is bad or anything like that. In fact, I’m saying people argue for things they tend to actually believe, even if the statistics are lousy, even if they don’t check facts deeply (because facts complicate things you believe, don’t they?), even if from another perspective what you’re saying is wrong.

    To get back on track, statistical ‘rigor’ in most fields is just a thing you need to tell a story. The sad thing to me is how bad most of the stories are. And even in fields – like specific archeologies – where people know intimate details of the exact things under discussion, the number of stories about what those details mean can be huge when you dig below the Wikipedia level. So I’m not saying we need to raise academic standards. Nor am I saying we must accept that most fields are rife with prejudices, some old in academia and some old outside of academia and new to it. We’ve been losing that argument since at least the Roman Republic, when people complained on the record about the degradation of the language and the forms of civil behavior. That’s another point: we tend to model in our heads as though the past held a higher version, like Wagner’s terrible mythologies of corruption of the pure – to wit, see North Korea, the self-declared ‘cleanest race’ whose purity is under constant attack by an outside world jealous of it. Or we model as though the past is the worst thing possible.

  6. “…what’s important to me is not peer review (see recent discussion) but transparency. And several of the above questions (#3, #4, #7, and, to some extent, #8 and #9) are about transparency. So that could be a way forward.

    “…think transparency is a good start, as lots of the problems with poor data collection and analysis can be hidden by lack of transparency. Just imagine how many tens of thousands of person-years of wasted effort could’ve been avoided if that pizzagate guy had shared all his data and code from the start.”

    IMO Andrew is being perhaps too diplomatic here: yes there are hints of transparency in Grimes & Bishop’s list, but there is no mention of transparency of method or facilitating auditing / replicability, of access to code & data. Bishop, at least, has been around this topic for some time, so it’s doubtful the omission is accidental.

    • Keith O'Rourke says:

      Auditing will just be too effective too quickly ;-)

      Transparency is too easy to subvert. For instance (in the 1990s?), clinical researchers who were stuck with the studies they had done, or could do, used new guidelines on how to conduct and report good research to package what they had so that it looked very much like good research.

      Also, drawing from David Spiegelhalter channeling Onora O’Neill, one shouldn’t strive to be trusted, but to show trustworthy attributes. This goes beyond things like “fishbowl transparency” – essentially you need to let others get it, understand it, and _check it_. The checking requires access to details, such as code and data, and a potential audit.

      • Anonymous says:

        “Also, drawing from David Spiegelhalter channeling Onora O’Neill, one shouldn’t strive to be trusted, but to show trustworthy attributes. This goes beyond things like “fishbowl transparency” – essentially you need to let others get it, understand it, and _check it_. The checking requires access to details, such as code and data, and a potential audit.”


        I am already hearing researchers discuss that it’s okay to do X as long as you’re “transparent” about it. I assume they mean “honest” because there is usually no way for the reader to check anything concerning X.

        I think the term “transparency” is already being misused, or at least, used in a way that makes it seem to be more about being “honest” than it is about giving others the opportunity to check things themselves. I sincerely hope we do not have to go through all that we went through the past decades concerning the dangers of trusting scientists to be “honest” about things.

        I think we are already well on our way for “confirmatory analysis” to become the new “we hypothesized that”: nothing will change, and HARK-ing/selective reporting will still be a thing; only different words will be used.

  7. Right, ‘auditing’ would be.

    Wow Keith you and I share much reading. BTW, I was able to watch a few Youtube lectures on Frank Plumpton Ramsey & Charles S. Peirce. I have to say that I contest some of the evidence over Peirce’s influence on Ramsey based on hearing anecdotes about the Ramsey family. I could be wrong. I’m rummaging around sources.

    • Keith O'Rourke says:

      Interesting – inside knowledge can be very helpful at spotting what might be wrong.

      From Misak’s book Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein, page 280:

      We find him [Ramsey] on January 23 1924 writing in his diary: “Read some Hobbes Logic, and Peirce, who is surprising good in parts.” Two days later: “Read some more Pierce” (sic) and three days later: Read more Peirce in morning wrote after tea some notes on formal logic (abstraction, identity, axiom of infinity). Peirce towards end of this book sad stuff. … Ramsey was in the final stages of his undergraduate degree, and he writes that he may write his dissertation on probability “partly because interested in it again by Peirce.” On the 29th of January, Ramsey writes “Finished Peirce”, but on the 31st, he was still making notes “of C.S. Peirce”.

      (Email me and I’ll send the whole chapter if I can also cc Misak.)

  8. I did manage to see Misak’s Youtube lecture on the Pragmatists, “From Cambridge MA to Cambridge England.” Based on it, and my own readings of Frank P. Ramsey’s life and writing, I wouldn’t have drawn observations as broad as hers. Admittedly, I donated my Vienna Circle & Frank Ramsey symposium proceedings, so I can’t draw on it at the moment. Where is your email listed?

  9. AV says:

    I am finding myself trying to voluntarily remove an item of junk science from the public domain, where I placed it a couple of years ago. I knew better then; I can do better now, if the journal’s editors let me. There does not seem to be any way to do this…

    Would you know how to retract an old paper you’d rather not have written?

    • Andrew says:

      Four times people have found serious errors in my published papers. In each instance I contacted the journal and sent a correction notice, which they published.

    • Anonymous says:

      I don’t know if the following is relevant for your particular case, but in case you are not aware: thanks to pre-print servers, you can also write a paper about your own paper/situation (should the editor not want to retract or post a correction).

      This may serve the same purpose and/or may even be considered to be better (e.g. way more opportunity to explain yourself, provide arguments for why you think the original paper should not be cited, etc.).

      Should the journal where your original paper was published not want to retract or post a correction, you could perhaps even mention that in your pre-print paper as additional information.

      Should you have a google scholar profile and/or a personal website you can list the pre-print just like you would an “official” paper to maximize the chance that others can see/read/cite it.

      • AV says:

        Either form of correction seems reasonable enough. I would prefer that the paper disappears without trace, for sure. A colourful correction might well work that way.

        • Anonymous says:

          “I would prefer that the paper disappears without trace, for sure. A colourful correction might well work that way.”

          Join the club! I even wish I had never gone to university, and come to know all the problems in science. Unfortunately I cannot go back in time to stop myself from what I think has been the biggest mistake, and subsequent deception, of my life thus far.

          I tried my best to help improve matters, which is perhaps similar to your situation: you can’t go back in time to undo something, but you can perhaps try and make the most of it at this point in time.

        • Martha (Smith) says:

          I agree with Anon’s comment, “you can’t go back in time to undo something, but you can perhaps try and make the most of it at this point in time.”

          Indeed, from the point of view of improving the state of science, following Anon’s suggestions is an important positive contribution, one that can set an example for others and help “normalize” the scientifically important practice of “publishing” corrections (even though it might feel like eating crow; maybe you can think of it more as “no pain, no gain”).
