How scientists perceive advancement of knowledge from conflicting review reports

Kevin Lewis pointed me to this article. It seemed kinda familiar, I took a look at the abstract, and I realized . . . I reviewed this article for the journal! Here was my referee report:

The paper seems fine to me. I have only two minor comments, both relating to the abstract.

1. I disagree with the “normative principles of statistical and scientific inference positing that new study findings can never reduce our level of knowledge.” I agree that knowledge should increase in expectation, but in any given case, can’t our level of knowledge decrease, just by bad luck? For example, suppose we’re comparing two treatments, and in fact A is slightly better than B. But we do an experiment with a small sample, and it just happens that B outperforms A on the data at hand. Then our useful knowledge of the world has decreased.

2. The abstract says, “the majority of participants do not feel that ‘we know more’ as the result of the two new studies when the second study fails to replicate the first.” Looking at Figure 1, the percentage who say “we know more” under the “opposite effect” condition is very close to 50% for all cases except for experiment 2. So I feel that the framing as “the majority of participants”, while literally correct, is misleading. Maybe replace “the majority of” with “half or more of”.
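
To make the concern in my first comment concrete, here is a minimal simulation sketch (all numbers made up: a true A-vs-B difference of 0.1 standard deviations and 20 people per arm), just counting how often the worse treatment comes out ahead in the observed means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: treatment A is truly better than B by 0.1 sd,
# but each arm has only 20 subjects.
true_effect = 0.1
n_per_arm = 20
n_sims = 10_000

a = rng.normal(true_effect, 1.0, size=(n_sims, n_per_arm))
b = rng.normal(0.0, 1.0, size=(n_sims, n_per_arm))

# Fraction of experiments in which B's observed mean beats A's,
# even though A is truly (slightly) better.
wrong_way = np.mean(a.mean(axis=1) < b.mean(axis=1))
print(f"P(data point the wrong way) = {wrong_way:.2f}")  # roughly 0.35-0.40 here
```

If you take the observed comparison at face value, a bit over a third of such experiments would leave you believing the wrong ordering, which is the sense in which a single unlucky study can leave you knowing less than before.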

Based on the abstract to the published paper, it seems that they completely ignored my first suggestion and only minimally implemented my second suggestion.

On one hand, this is annoying: I went to the effort to write a review report, I gave two minimal comments, both of which were reasonable and easily implemented—but the authors still couldn’t be bothered to follow through.

On the other hand, maybe this is good news. Peer review is kinda horrible, so it’s heartening that the authors felt free to ignore reviewer comments and just write what they really felt, even though in this case it was kinda wrong and misleading. Gotta respect that sense of confidence.

In all seriousness, I think the paper is fine, setting aside these two minor issues.

66 thoughts on “How scientists perceive advancement of knowledge from conflicting review reports”

  1. Regarding 1, if the sample is small and leads to a false conclusion, then wouldn’t it be more correct to say that no knowledge has been gained at all rather than that knowledge has decreased? After all, the conclusion is then neither justified (because of the small sample), nor true, and hence two of the three standard necessary conditions for knowledge have not been met (those conditions being justification, truth, and belief. See: https://plato.stanford.edu/entries/knowledge-analysis/)

    • Olav:

      Even if a study is designed and implemented perfectly, there’s still a 5% chance of observing a statistically significant result, if the null hypothesis is true. A statistically significant result seeming to show that a treatment works can reduce our understanding of the world, if the treatment doesn’t really work. Long term, we can replicate the study and eventually learn the error and catch up in our state of knowledge. But, short term, or until that new study is done, our knowledge has decreased.
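
      And to put a number on that, a quick simulation sketch (assuming the usual two-sample t-test at the 0.05 cutoff, with the null hypothesis exactly true):

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Null is exactly true: both arms drawn from the same distribution.
      n_sims, n_per_arm = 10_000, 50
      hits = 0
      for _ in range(n_sims):
          a = rng.normal(0.0, 1.0, n_per_arm)
          b = rng.normal(0.0, 1.0, n_per_arm)
          if stats.ttest_ind(a, b).pvalue < 0.05:
              hits += 1

      print(hits / n_sims)  # close to 0.05: about 1 in 20 perfectly run null studies "finds" an effect
      ```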

      • Yes, a misleading study can certainly reduce our understanding, but that doesn’t automatically mean it will reduce our knowledge. Knowledge and understanding are distinct.

        That being said, I’m not denying that it’s possible to lose knowledge due to a misleading scientific study. For example, if you already know P, but then gain misleading evidence against P that causes you to lose your belief that P is true, then you will lose your knowledge that P is true. On the other hand, if you lose your belief due to a single study, then that indicates that your belief was not very secure to begin with and therefore unlikely to be knowledge. So, in a sense, knowledge is hard to lose since part of what characterizes knowledge is that it is secure.

        • If security is a necessary part of the definition of knowledge, then there is very little knowledge in existence, given that science is the process of proving wrong what was previously known to be true.

        • “Secure” does not mean “guaranteed to be true.” It doesn’t even mean “probably true.” What it means, exactly, depends on the account of knowledge in question. For example, according to one of the most popular accounts of knowledge (in philosophy), a belief counts as knowledge if it has been produced via a reliable process, i.e. a process that tends to produce truths (given a range of circumstances). By that account of knowledge, there is a lot of scientific knowledge. That doesn’t mean that what we now believe to be knowledge can’t later turn out to be wrong, since sometimes a reliable process can produce a falsehood and sometimes we can believe that a process is reliable even though it is not.

        • How is this consistent with your assertion that use of bad methods (believed to be good) does not consistently reduce knowledge? It does. Imagine a world in which an academic discipline has come to a consensus that correlation is in fact equivalent to causation. If this premise were believed and research results interpreted through such a filter, this would necessarily result in a consistent production of information that reduced knowledge of reality.

        • Let’s further assume, in this fantastical world, that some amount of erroneous information is intentionally inserted. Those ingesting this erroneous information will have their knowledge of reality reduced as a result.

        • Bad methods will lead to a lot of mistaken beliefs. I suppose if you originally had well-founded true beliefs about the matters in question, i.e. you had knowledge, then the bad methods will deprive you of your knowledge since they will cause you to lose your beliefs. But if you never had beliefs to begin with, you won’t be losing knowledge; you’ll just be ending up with a bunch of false beliefs. That’s what I’d say, but I think you and I are just using the word “knowledge” somewhat differently.

        • Sorry, I should have said “if you never had well-founded true beliefs to begin with, you won’t be losing knowledge.”

        • And when I argue strongly for a policy based on false beliefs and manage to convince decision makers that my false beliefs are in fact true and their previously true beliefs are false — am I not reducing knowledge?

        • Keep in mind that my argument is now based on quantitative evidence and they have a strong inclination to be “data driven”.

        • Yes, if you manage to convince decision makers who previously had well-founded true beliefs that their beliefs are false, then you are reducing their knowledge. However, if they simply had true beliefs that were not solidly founded, then you are not reducing their knowledge; you are just giving them false beliefs. That’s what I think is the best way of describing the situation anyway. But in any case, regardless of whether you reduce someone’s knowledge or simply give them false beliefs, you are doing damage! On that we can agree.

        • We can agree on the damage done.

          You are defining knowledge as an individual belief about an aspect of reality. And by extension, group level consensus as disciplinary knowledge.

          If I use methods that are accepted within a given discipline but those methods cannot possibly produce reliable knowledge of reality — are my beliefs well founded? I’ve used methods that have been approved by consensus. But, some percentage of people recognize flaws in those methods that reliably produce false information.

          From your perspective, both groups possess knowledge, and thus knowledge is necessarily divorced from reality: it resides within the individual and is based on their ability to reason critically. From this perspective, consensus knowledge is a bit of a crapshoot.

        • Publication bias resulting from p-value cutoffs can and does result in repeated replications of wrong information, which necessarily results in a reduction of knowledge.

    • Andrew,

      I resonate with your 1st comment in particular. I really think that binariness is concerning in much argumentation. Strikingly, the logicians I’ve come across, who resoundingly decry binary thinking, are some of the most likely to engage in false dichotomies and other fallacies. We are not sufficiently mindful of our own contradictory assertions and standards, which shift depending on context.

      Maybe the distinction between System 1 and System 2 thinking, as elaborated by Kahneman, is relevant. An updating of the Socratic method may be necessary. My own experience is that some folks waddle off in a huff if you ask tough questions.

  2. Herbert Simon won a Nobel Prize for his work on bounded rationality and satisficing. It seems to me that in that imperfect and real world, when studies are poorly done and require massive efforts to understand, it is easily the case that our “knowledge” decreases in many cases. At times it almost feels like most cases.

    • This also reminds me of the economic research on consumer choice. Standard economic theory says that increased choice can never decrease consumer welfare. More behavioral economic research shows that this is often not the case.

  3. The ‘can knowledge decrease’ thing is right up Wittgenstein’s alley. W thought most disagreements were based on differences of interpretation of words rather than differences of thought. Or at least I think that’s what he thought. How would we know what he thought? All we have to go on is his words, and maybe we interpret those wrong.

    Suppose someone fabricates a study about, say, how much people eat if they eat off a small plate vs a big plate. They get it published and some people read it. Has their ‘knowledge’ increased, decreased, or stayed the same? Does it matter whether the study, though fabricated, has the effect going in the right direction and has the magnitude kinda right? Does it matter if the readers know the study was fabricated? There’s a sort of tautological sense in which their knowledge has increased in any case: they know someone (or perhaps some AI) wrote a paper and that they themselves read it, and they know what it was about and what it claimed to say. That’s knowledge.

    Perhaps we could say that knowledge can only increase but understanding of any particular phenomenon can go either way. But what if I forget something? Ok, never mind, knowledge can decrease.

    Let’s get Wittgenstein onto this.

    It’s really a pity if knowledge can decrease: when someone says something like “I’ll know more in a few days”, my standard rejoinder is “well, sure; you can’t know less.”

    • I’d allow that “information” increases with each study. Information seems to be a property of a library or the internet. If the new study bumped two others out of the library, information could decrease.

      “Knowledge” seems to be a property of a mind. What do you really know when you’re drunk? (And how do you know that?)

      I know a philosopher could confuse us about this.

      • Pretty much what I was thinking. The words used in the paper are not well-defined, and varying circumstances are not considered. It ain’t as simple as they seem to think – they seem to be looking for something “universal”, but the devil is in the details (circumstances, beholder, etc. etc.) Not a keeper.

    • It’s always dangerous to make a claim about what Wittgenstein meant, but since I taught Wittgenstein in one of my classes a couple of weeks ago I feel a bit entitled.

      Wittgenstein did not think most disagreements were based on differences of interpretations of words. Rather, he thought (in his later work) that all language derives its meaning from its use in particular practical contexts. For example, if a scientist claims to know that a certain hypothesis is true, then that will have a certain meaning in that scientist’s community: it means that the hypothesis is correct and that the scientist can demonstrate that the hypothesis is correct to the satisfaction of the practitioners of the community using modes of argument generally agreed-upon in that community (or something like that. Not sure Wittgenstein would endorse this). Thus, a given domain will generally include rules for how to resolve disagreement, and most disagreements will not be about the meaning of words.

      However, Wittgenstein also thought that you run into trouble when you try to divorce concepts from the practical contexts that imbue those concepts with meaning, and he thought that this is what typically happens in philosophy. Thus, philosophical disagreements (e.g. over what knowledge is, in the abstract) tend to be futile.

      At the same time, Wittgenstein was not above doing philosophy himself, including about the general nature of knowledge (see “On Certainty”), so at the end of the day what I’ve said above can’t be quite right.

      • Olav, you’d certainly know better than I. I have never read even a scrap of Wittgenstein, and was basing my claim entirely on the knowledge I (thought I) learned about him in a recent New Yorker article about Frank Ramsey, which says: “The “Tractatus” [by Wittgenstein] argued that philosophical problems are the result of misunderstanding the logic of language” and a few other things in that vein. It’s a bit like a game of Telephone: Ramsey interprets Wittgenstein, Anthony Gottlieb (the New Yorker article’s author) summarizes Ramsey’s view of Wittgenstein’s argument, then I distort it again. Question is, have I increased or decreased this blog’s readers’ knowledge of Wittgenstein?

        This reminds me of the observation that most educated people know Galileo was the first person to suggest that the earth goes around the sun, that he muttered ‘eppur si muove’ after being forced to recant, that he dropped a heavy and a light cannonball off the Leaning Tower of Pisa to prove that they fall at the same rate. None of these are true, but it would be absurd to claim that these people don’t know who Galileo was.

        Anyway, you’re right, I don’t know jack about Wittgenstein. I still think this issue is one he could sink his teeth into, though.

        • David P, I assume you’re not a native English speaker. In English it’s acceptable, indeed fairly common, to avoid repeating the same noun with different adjectives attached to it. For instance, a recipe might say you need “one large and one small bowl” to mean “one large bowl and one small bowl.”

        • Cannonballs come in different sizes (and therefore weights) to fit different bores of cannon. I don’t know what weights Galileo supposedly used but plausibly something like 8 pounds vs 24 pounds. (Since it didn’t actually happen I’m not sure there is a right answer; it’s like asking how much Iron Man’s armor weighs.) Anyway the light one is light compared to the heavy one, and the heavy one is heavy compared to the light one.

          Andrew, do you dislike the terminology because they are both heavy compared to a feather? But on the other hand they are both light compared to an elephant.

        • Interestingly, if you took a cannonball and a wooden ball of the same diameter, they *would* probably hit at different times, because drag. I haven’t done the calcs recently but it’s one of my favorite examples of a statistical analysis of mechanistic models, since the experiment is so simple.

        • Even with iron cannonballs there’s enough difference to be (barely) perceptible, I seem to recall, according to someone who has done the calculations. I’d be surprised if nobody has done the experiment too. Anyway the drag goes up with the surface area while weight goes with the volume, so the bigger, heavier one falls a little faster. But at least the idea that they fall in a time inversely proportional to weight is definitely disproved. Of course you could drop a grape and an apple from waist height and achieve that.

        • For a fixed size, the terminal velocity is related to the weight, since the main component of the drag is proportional to v^2 and we have a relationship mg = c v^2 at terminal velocity so v = sqrt(mg/c).

          When the fall is very long, most of the fall occurs at terminal velocity, so we can say distance d = v*t = sqrt(mg/c)*t, which gives t = d/sqrt(mg/c); the time is actually inversely proportional to sqrt(mg).

          Of course you’d have to drop things from an airplane at say 10k feet to figure this out, and Galileo didn’t have that option.

        • Daniell,
          Yes, ‘for a fixed size’ the effect would be larger. And of course Galileo could have used a hollow cannonball and a solid one, something like that. But for conventional solid cannonballs, if one is twice the diameter it will be 8x the weight but also have 4x the surface area and thus roughly 4x the drag. I would think more than 4x the drag, because it will be blunter (larger radius of curvature), obviously.

          I don’t think you’re disagreeing with any of that, I’m just pointing it out for people who haven’t thought about what’s involved in figuring out what would happen with real cannonballs.

          And yeah, a cannonball would have to drop a long way to reach terminal velocity.
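
          To put rough numbers on this, here is a minimal numerical sketch (made-up ball sizes, a Pisa-ish 56 m drop, the standard quadratic-drag model with Cd of about 0.47 for a sphere):

          ```python
          import numpy as np

          def fall_time(radius_m, density=7870.0, height=56.0, rho_air=1.2, cd=0.47, dt=1e-4):
              """Seconds for an iron sphere to fall `height` meters with quadratic drag (Euler steps)."""
              mass = density * (4.0 / 3.0) * np.pi * radius_m**3
              area = np.pi * radius_m**2
              c = 0.5 * rho_air * cd * area      # drag force = c * v^2
              g = 9.81
              v = z = t = 0.0
              while z < height:
                  v += (g - c * v * v / mass) * dt
                  z += v * dt
                  t += dt
              return t

          # Hypothetical 10 cm and 15 cm diameter iron balls:
          t_small, t_big = fall_time(0.05), fall_time(0.075)
          print(t_small, t_big, t_small - t_big)
          # With these made-up numbers both take about 3.4 s, and the heavier ball
          # arrives on the order of a hundredth of a second sooner: real, but nothing
          # like a time inversely proportional to weight.
          ```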

  4. To split hairs a bit, I think the counterargument to the “what if you get unlucky” question is that if your knowledge base is so fragile that it can be shaken by one small bit of bad luck then you have a bad inference methodology. In principle even getting the unlucky study might tell you a bit of info about the study variability.

    • Zhou –

      > I think the counterargument to the “what if you get unlucky” question is that if your knowledge base is so fragile that it can be shaken by one small bit of bad luck then you have a bad inference methodology. In principle even getting the unlucky study might tell you a bit of info about the study variability.

      I agree. So with more investigation the first study gives you useful information about developing a more valid inference methodology.

    • That’s precisely right. One of the lessons I draw for students when I teach about the analysis of knowledge (which indeed I sometimes do!) is that even though there is disagreement on which account is best, all the most popular accounts have a few features in common. One of those features is what’s sometimes been called the “margin for error” property: a belief needs to have a secure basis in order to count as knowledge.

  5. What if the second study clearly calls into question the assumptions of the first study, which if discarded increases the uncertainty of both?

    For instance, noticeably different sensitivity and specificity estimates?

  6. Andrew –

    > For example, suppose we’re comparing two treatments, and in fact A is slightly better than B. But we do an experiment with small sample, and it just happens that B outperforms A on the data at hand. Then our useful knowledge of the world has decreased.

    In reality, maybe that applies most of the time.

    But that is because people are too eager to resolve uncertainty. In the best of all worlds (where further investigation interrogates the findings and discovers the reason for the misleading outcome), more information increases our useful knowledge.

    I’d say that there’s often a time component. There can be a relationship between the usefulness of the knowledge and the passage of time.

    • Joshua said, “In reality, maybe that applies most of the time.

      But that is because people are too eager to resolve uncertainty. ”

      Agreed — “standard methodology” so often is “use this format, and interpret the result this way” — not exactly encouraging thinking. More “that’s the way we’ve always done it.”

  7. Andrew –

    > Peer review is kinda horrible, so it’s heartening that the authors felt free to ignore reviewer comments and just write what they really felt, even though in this case it was kinda wrong and misleading. Gotta respect that sense of confidence.

    It’s unfortunate that the editors didn’t require the authors to reconcile the comments from reviewers. Doesn’t necessarily mean just accept them, but it does mean justify rejecting them if they don’t accept them.

    When editors require that to take place, the manuscript is only strengthened.

    • I’m curious how you would estimate the % of time that the critiques in your reviews are just ignored, as opposed to addressed peripherally or thoroughly, respectively?

      Are there obvious variables that you think are associated with the likelihood of your critiques being addressed – other than just the diligence of the authors, of course? :-)

    • Joshua:

      I usually don’t worry about it. If I am sent a paper to review, and I don’t like it, and it’s published anyway, or I give specific criticisms that are ignored, I just think . . . what if the paper had never been sent to me at all? So, yeah, it’s annoying, but ultimately mistakes are the responsibility of their authors.

      • “I usually don’t worry about it. If I am sent a paper to review, and I don’t like it, and it’s published anyway, ”

        I don’t think anyone should put too much stock in peer review anyway. Plenty of garbage has slipped through the review process.

        • Jim,

          Yeah, I remember a few years ago a journal sent me a really bad paper. I trashed it in my review report. Then a while later they sent me a second version, which was just as bad . . . then a while later a third version . . . at that point I told them that obviously they liked the paper so they should just publish the damn thing!

        • Ha, that’s hilarious!

          But I didn’t mean to say that the process is useless. I’ve heard a lot of people say they often get good feedback in review. Just that it’s not a certification of quality or certainty. And I mean it wasn’t ever designed to be that, either.

          Personally I think it would be cool to let every paper go through a couple of rounds then let papers be published but post the reviews on a website.

        • jim –

          > I don’t think anyone should put too much stock in peer review anyway.

          Sure. But “too much stock” can be quite subjective.

          > Plenty of garbage has slipped through the review process.

          Sure. But in the name of non-binary thinking…

          Are you of the opinion that the overall net outcome of peer review is negative?

          If so, would you think that research should just be published in a non-selective and non-hierarchical process? Or maybe it should be selected for publication in a random fashion? (I’ve read a bunch of stuff lately suggesting that awarding research grants on a random basis, or via some hybrid process that funds the top-reviewed proposals, throws out the bottom, and randomizes the middle, would net better results.)

          My sense is that for all the flaws, of which there are many, the net effect overall of peer review has been a positive contribution that advances many fields and has overall led to scientific advancement. Sure, don’t consider peer reviewed research as dispositive, and don’t go overboard on any one study – but I’m not sure what system would prove more beneficial. Personally, I have seen (what I consider to be) good material significantly improved via the peer review process.

          In fact, it’s interesting to look at this thread in juxtaposition to the Santa Clara study reaction – where people criticized the authors for wide dissemination prior to peer review, and I think that there’s general agreement that the 2nd version is improved as a result of the review of experts such as Andrew.

          Do you think that all research should just be published online and put up for “crowd peer review?” – and that through a kind of natural selection process the best material will surface to the top?

        • “the net effect overall of peer review has been a positive”

          Oh, sure. No doubt most papers are prepared far more carefully just because of the fear of rejection. If you can publish anything there’s no incentive to produce quality work.

        • Am I hopelessly naive to think that a significant number of researchers look forward to getting input to improve their work – at least some of the time?

        • I’ve never submitted a paper for publication and I don’t work with research people, so I’m gathering nuggets from previous experience and from occasional conversations.

          My impression is that most researchers seek a lot of feedback throughout the research and writing process from many people. Keep in mind the work has probably already been presented and subjected to (praise / ridicule) at conferences, in seminars, and elsewhere. As a result they have an idea of what criticisms are likely to arise and try to anticipate those criticisms and close those doors.

          But it’s a long process to go from starting a proposal to completing a paper or series of papers, and often by the time a paper is written the focus and main effort have moved on to the next horizon. So while people accept that they may need to make adjustments from reviews, and they recognize that their work often benefits from it, they don’t generally seem to be looking forward to addressing reviewers’ concerns. Seems pretty much normal to me.

    • My experiences as a reviewer have not been good. As I’ve gotten more experience in the field, I’ve been less generous with flawed papers or ones that add nothing to our knowledge. That’s probably 90% of the literature. Generally harsh reviews or recommendations to reject a paper are ignored. That’s probably down to the need of publishers to fill their journals with something and the tremendous pressure on researchers to publish a lot of papers. The marketing aspect of publication has become a presumptive part of scientific culture in many fields. Top notch people just ignore most of the literature because they know how the sausage is made. Even clearly wrong results and claims usually go unchallenged if made by a “leader” in the field.

      The problem here is that to produce a result, algorithm, etc. that’s new and genuinely interesting often takes many years of work, especially in a relatively mature field. I would expect most researchers to be able to produce a couple of excellent contributions per year. Instead, you find many academics with large teams doing little but managing their teams. They often have scores to hundreds of publications every year.

    • Joshua said, “It’s unfortunate that the editors didn’t require the authors to reconcile the comments from reviewers. Doesn’t necessarily mean just accept them, but it does mean justify rejecting them if they don’t accept them.”

      Yes!

      • I think that reviewers shouldn’t offer a recommendation on publication one way or the other until their critique has been addressed point by point, and then they have the opportunity to say whether they feel their points of critique have been adequately dealt with.

        If the editor thinks that the reviewer’s analysis is valuable, that should be an absolute requirement.

        Then the reviewer can weigh in on whether his/her points were adequately addressed and whether he/she recommends publication.

        Then the editor can choose to publish irrespective of a reviewer indicating that their critique has been adequately addressed – based on the editor’s own assessment of the reviewer’s critique.

        It makes no sense to solicit the input of a reviewer and then publish the article anyway if the reviewer’s critique has effectively been ignored.

  8. Andrew:

    I think footnote 1 deals with your first comment. I doubt it deals with it satisfactorily. However, to me the biggest problem is that they have not defined knowledge. They seem to be taking the naive view that knowledge is justified true belief. However, we have known for decades that knowledge is not justified true belief, because of the so-called Gettier problem. I can believe that it is 1:30 based on my looking at a clock that says it is 1:30, and it can be 1:30, but unknown to me the clock stopped at 1:30 yesterday and no longer keeps time. I just happen to be correct. There is no solution to this problem that saves justified true belief as a definition of knowledge. Based on the Gettier problem it is very easy to see how I can know less today than I did yesterday: when I learn the clock is broken, I no longer have a justification for my belief. Most philosophers regard the accidental connection between justification and truth as rendering “justified true belief” an inadequate definition of knowledge.

    Putting aside what the proper definition of knowledge is, what is clear is that proper scientific methods do have this accidental quality. They cannot guarantee the truth of our beliefs. So we can, based on good scientific practice, just happen to be right, and later learn that our methods of measurement were unreliable. I don’t believe that the participants in the study are “wrong” to believe that a replication failure means we know less. I think it is ambiguous.

    • Steve:

      Maybe footnote 1 deals with your first comment, but then they should’ve fixed the abstract! It’s not cool to make a false statement in the title or abstract of a paper and then only correct it in a footnote.

      • I agree, but I don’t think that they can deal with the issue you raised without trashing their whole work. If the participants in their study might be correct in believing a failed replication can reduce our knowledge, then the entire experiment is hopeless. So in their footnote they essentially said: we assume that participants don’t think the way Andrew Gelman thinks. I don’t think they can address the problem you raised without redoing the experiment, which is why they ignored it. That is not making an excuse for them, but I think it is what happened.

    • When you learn that the clock is broken in the broken clock Gettier example, you don’t lose your knowledge that it’s 1:30. Rather, you learn that you never knew what time it was to begin with. I do think it’s important to distinguish between having your knowledge reduced and simply finding out that you actually didn’t know something you thought you knew—in the second case, you’ve actually gained knowledge!

  9. Another (?) approach might be to point out that many beliefs are created by bad work, and the repetitive citing of them locks those beliefs into patterns that become Orthodoxy. It would be nice to assume there is a movement forward, so we could identify the areas where progress has been retarded by bad beliefs, but the dependencies are complicated. As in, Galen’s work was enthroned, and much of the history of medicine, at least as I learned it, was taught through a lens of the difficulty of overturning patently absurd notions that were so entrenched it was risky to engage in inquiry. But that structure of rejection and resistance relied on religious beliefs and the power of a centralized Church to impose irrational beliefs because they feared inquiry (for whatever reason, though it often seems to have been the personal power of clerics who didn’t want to cede authority to others).

    To play a game, let’s say someone publishes a study about how to treat a condition, and that study is wrong but it’s accepted and becomes the standard. Lots of people suffer. Do you measure that as negative progress because the problem could have been solved faster – no, say it would have been solved faster – if not for wrong orthodoxy? Or like the medical profession used to say – maybe still does: hold up a textbook and say half of this is wrong but we don’t know which half? If you don’t know about much less than the exaggerated half, you’re committing to establishing wrong understandings as gospel because you can rarely wait in life for perfect answers.

    I think the implicit logic behind many studies, when it comes to the state of knowledge, is that our understanding is wildly imperfect ‘anyway’, together with what I think of as the ‘glimpse of stocking’ idea – which in modern times seems to have become other idiocies like ‘side boob’ – meaning the fantasy generated out of the tiniest points of contact. This fantasy can enlarge to shocking degrees, and not only in sexual contexts. The bluntly ridiculous ESP work manifests attachment of glimpses to large scale fantasies.

    Trying to phrase this kind of thing using very basic number concepts. Consider something as basic as Fermat’s Little Theorem: you can take 2 to a prime exponent, say 7, and step back one inversion to 6, then subtract 1 and that is divisible by 7. One count changes the meaning entirely from 63 being 7*9 to 128 being 2 to the 7th. Two chains that relate and intersect with a bit of manipulation, but when you’re at 63, you might read 7*9 and not realize the actual chain counts not by 3 but by 2. You imagine all sorts of connections that emerge when you count by 3, but you missed the 1 that attaches that chain to a different one. In idea form, you might think, wow, this connects across one thing through another to a third, and you think that’s deeply true because this chain expands as math tends to do. You have your predicates, your inductive process … but it’s an artifact of the underlying mathematical structure that relates various levels together depending on the manipulations you argue you can do. And it also may be you missed the higher level chain, something this blog often points out when questioning the rationality of a model (or noting that they didn’t really discuss a model but left that implicit as though their findings are a model on their own). And so you think you’re on to something because that swells in your mind, but you miss that a step down the road, there’s a shift and the chain that’s expanded in your head isn’t the real one but is instead a few derivatives away.

    I think we all experience that. Everyone has gotten carried away on threads of thought – or worse, had words come out of your mouth that you regret, sometimes as you’re speaking but you just can’t stop. Your fit to the context, say a social context, can become awkward in a tiny moment as you realize you’ve somehow gone way off track. And people do this all the time in their lives in larger ways too. We all grow and shrink ideas in our heads. Those ideas are right until they’re wrong. That sometimes means wrong until they’re right. It’s more difficult to define that. Take homosexuality; thought wrong, still thought wrong by many, but those who have been homosexual over the ages can’t so easily be described as ‘wrong then, now right’ because they were never actually wrong. And they weren’t wrong contextually either because adapting to preserve your own existence is hard to describe as wrong (unless you define cases where you adapt by becoming one who does wrong). In other words,

    Then again, there’s the Dostoyevsky model that the only way to true consciousness is through suffering, so bad work is a form of suffering which contributes to the growth of consciousness by being mistaken and perhaps by inflicting pain. There is almost too much Russian fatalism in that. I believe more in the French manner: the good and the bad fit together. ‘Quand l’amour a disparu, quand le coeur s’en est allé du côté des jamais, plus jamais, on ne pout que regretter l’amour envolé, si votre coeur vide est trop lourd, si l’ennui menace vos jours, it faut aimer.’

    And back to the usual programming …

    • The last three words were ‘il faut aimer’, not ‘it faut aimer’. Auto-correction when typing French is a bear. From a song called La Chanson D’un Jour D’été.

      • Thanks for the reference to the song – a new one for me. Its last few lines (“Chanter …, Chanter …, Chanter …, … pour être heureux!”) describe what is sometimes the best way to cope with a lot of things in life.

      • I just noticed another typo (presumably auto-correct): should be “on ne peut”, not “on ne pout”. (An auto-correct Freudian slip?)

  10. Was thinking how to get around issues with this study. Thought it helps to try stating what one sees as the findings.
    Here goes…
    When asked in a short survey, scientists are more familiar than laypeople with the procedures and methods they use and with the norms regarding the value of their collective output.
