47 thoughts on “The (Lance) Armstrong Principle”

  1. I was just listening to an old This American Life episode called “Right to Remain Silent.” In the second half, it talks about some of the numbers games big city PDs are playing. NY rolled out their CompStat system around 1994 and crime has dropped steadily since then. But there’s pressure to keep the decreases going, either through very aggressive methods or from just faking the stats. The episode noted a few examples of the police discouraging victims from officially reporting crimes and of recording serious crimes as less serious offenses. Hard to know how prevalent this is or how much it compromises the crime data.

    See also: Cheating scandals in Atlanta and elsewhere in response to totally unrealistic No Child Left Behind requirements.

• The implication of the This American Life episode is that the crime statistics currently reported by police departments are unreliable. That raises the following questions:

      1. Which sources of crime rates are the most reliable, and least likely to be “gamed”?

      2. Are crime rates in the US being significantly under-reported? If so, how much of this can be attributed to police manipulating or faking the statistics?

      • 1. Only homicide rates are reliable and comparable across jurisdictions.
2. All crime rates except for homicide are always significantly under-reported. But by how much varies with the crime, the location, and the nature of the victim. When people have little trust in the police, reported crime rates are lowered even if the police do not try to make this happen. No one can quantify this, as there is no better source to compare the reported rates to.

• The NCVS (National Crime Victimization Survey) provides an alternate source of crime data to the police numbers in the UCR (Uniform Crime Reports). It also has data on non-reporting and the reasons for non-reporting. Unfortunately, it’s national, not local. (I think the NCVS has data for some large cities, but they don’t make it as readily available.) Car theft and even robbery numbers are also useful, especially if you are looking at trends rather than the numbers for a single year.

    • Dzhaughn:

      Good point. I altered the post. The original version said, “If you push people to promise more than they can deliver, they’re motivated to cheat.” The updated version is, “If you promise more than you can deliver, you’re motivated to cheat.”

• I sort of preferred the previous version because it suggests that the pusher bears at least some of the responsibility for the cheating. Perhaps in an ideal world they wouldn’t: Hey, I don’t care how hard you push me, I’m not gonna cheat, etc. But in the real world, when real money and real jobs and real psychological pressure are involved, some people will give in to the pressure. The new formulation loses that aspect of it.

        • Phil:

          How about this:

          “If there’s pressure to promise more than you can deliver, you’re motivated to cheat.”

          Or maybe the original formulation is fine, I’m not sure now.

        • I’m with Phil. The original version had a nice point that it’s not just a flaw in the individual cheater, it’s also a flaw in the system that pushes.

          “It takes two to make an accident” — F Scott Fitzgerald, The Great Gatsby

        • The original aphorism isn’t necessarily wrong; it maybe is just not The Lance Armstrong principle.

          Perhaps a parable would be a more suitable genre.

• I don’t know; I assume many, many people pushed Armstrong to cheat, and they were all benefiting monetarily from his cheating. The system was set up so that tiny competitive edges made for massive financial benefits. The same seems to be true for the Wansinks and Cuddys and Wegmans and Stapels of the world too, no?

        • I think you can say there were pressures on Lance to cheat. Certainly there were pressures on the other riders on his team to cheat with him.

• I like the point and tying it to being pushed, but getting pressured to win is not part of the normal narrative. He was very ambitious and self-driven. It’s not at all clear how much he was “pushed” or that he wouldn’t have cheated just to win regardless. It would be like adding the related observation that people with no good legal options to earn a good income may turn to crime, and calling it the “Bernie Madoff corollary” rather than naming it after Jean Valjean or someone.

          I like the Wells Fargo example but I admit I didn’t think of the right scandal until I clicked through.

        • From the testimony of teammates on the USPS team, the whole team was organized around cheating (invisibly so that the cheating wouldn’t show up on tests). So it just isn’t true that no one was really pushing Lance Armstrong to cheat.

• Campbell’s Law would likely apply to the Atlanta cheating scandal and to the New York Police stats problem.

          “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

          There are several formulations by various writers.

  2. I have called something similar the Tour de France effect for a while: if competition is intense enough, the winners are probably cheats.

Mostly in arguments over economic competition, and why having ever more of it isn’t always a good thing: at some point, the costs of policing to keep participants honest will outweigh the gains to be had from pushing prices closer to cost.

    It’s not a novel insight and it shouldn’t be controversial, but it’s surprising how many people deny it.

  3. I prefer, “If acceptable performance is only possible if you cheat…”

Or when honesty ensures you will be treated with contempt and derision, and poverty is a certainty.

Everyone else in the Tour de France cheated. Everyone. No, really: you can’t make the grade without it. They were very clearly burying evidence to avoid scandal.

Now think on publish-or-perish, and what you know of journals publicising results they’ve published that can’t be replicated.

From an ethical standpoint, as a junior researcher in academia you can choose not to compete. Full stop. The end.

    Save your outrage for those who force that decision, the one where Lance decided not to be poor but to compete. And they KNEW.

• Sample size of 1? What happened to your lament about statisticians applying their own principles to themselves? How many acceptable non-cheaters do we have evidence of, and how many acceptable cheaters? I don’t pretend to know the answer, but isn’t that more to the point?

        • Fair enough, you are correct. But perhaps more important would be the statement “acceptable performance in academia generally requires cheating.” I think a position like that could be defended as true and the Andrew case would not be sufficient to disprove it.

        • Dale:

          I think even that statement of yours isn’t correct. I guess it depends on the field of academia. For example, in theoretical statistics, there’s very little cheating but there’s a lot of people wasting time coming up with methods that nobody should use, or proving irrelevant theoretical results. Similarly, I doubt there’s much cheating by literature professors. Even in social psychology and medical research, it’s my impression that lots of successful academic researchers aren’t cheating, even by accident (if, for example, we call it “cheating” when a researcher innocently and inadvertently uses the garden of forking paths to extract a series of statistically significant comparisons out of pure noise).

          I’d prefer to say, not that cheating is required, or even generally required, but that there are strong incentives to cheat, and not many incentives not to cheat.

        • I actually do not think that academics generally must cheat in order to perform acceptably (in career terms), so this comment is a bit off the original topic. I do, however, feel that in many fields academics’ careers do incentivize them to follow forking paths, to keep data private, to not reveal enough about how they manipulated the data so as to preclude true replication, and to – if forced to acknowledge errors – always maintain that their conclusions are not affected by any errors that are found.

To use the bike racing analogy, I think these are the parallels to the “cheating” that takes place. From a legal standpoint there is a difference, since none of the practices I am mentioning are illegal, nor are they grounds for losing your job. But I would posit that academic career success is enhanced by all of these practices and that they are more commonly practiced than not.

        • To give a slightly different perspective: In many fields, “that’s the way we’ve always done it” seems to many (most in the field?) like “the right way” — and breaking that mindset (i.e., getting people to realize that “the way we’ve always done it” is in fact “cheating”) can be very difficult.

        • MS: To give a slightly different perspective: In many fields, “that’s the way we’ve always done it” seems to many (most in the field?) like “the right way” — and breaking that mindset (i.e., getting people to realize that “the way we’ve always done it” is in fact “cheating”) can be very difficult.

          GS: How a statement of the problem is phrased matters (i.e., conceptual issues matter). In your post, whether you realize it or not, you point to “mindsets” and “realizations” as causes of behavior. Gelman, to his credit, has focused on what can actually be done – manipulate the observable, measurable variables of which behavior is a function. No doubt you will see this as nitpicking but (mainstream) psychology has been all but wrecked by the widespread employment of explanatory fictions. Now, you might think that your way of talking is OK – just analyze the causes of “mindsets” and “realizations.” But 1.) this is not what has happened in psychology and 2.) there is no harm in skipping the unobserved “middle-term” – if the environment leads to “mindsets” which lead to behavior, then we may treat the behavior as a function of environmental contingencies.

• Glen, “mindsets” or “middle-terms” are internal “state”. For example, suppose there is a safe with an electronic keypad. I walk up to the keypad, press 9, and it pops open. You may decide that “pressing 9 causes the box to open,” but the fact is that 30 seconds earlier I first entered 6 other numbers and 9 was only the last one needed, and the timeout is 1 minute so it hadn’t yet reset its internal state… (a toy version of this keypad is sketched after this comment).

          I think it’s a mistake in science to refuse to use unobserved state as an explanation of what is going on. In fact, Bayesian statistics is 100% about inferring the values of unobserved things (parameters) conditional on a model for how those unobserved things explain observed things (data).

Of course, one wants models of unobserved things to be good, so one shouldn’t just accept them because someone used a model once and it becomes standard. It’s fine to question models that use unobserved state, but I think the view that one should never use unobserved state, which you seem to advocate, goes in a direction I can’t agree with.
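A minimal sketch of the keypad example above, to make the hidden-state point concrete. Everything here is a hypothetical illustration (the 7-digit code, the 60-second timeout, and the Safe class are all made up for this note, not taken from the comment):

```python
import time

class Safe:
    """Toy safe whose behavior depends on hidden internal state:
    the digits entered so far, and when the current attempt began."""

    CODE = "1415929"    # hypothetical combination; 9 is the last digit needed
    TIMEOUT = 60.0      # seconds before the entered digits silently reset

    def __init__(self):
        self._entered = ""         # unobserved internal state
        self._first_press = None   # time of the first keypress in this attempt

    def press(self, digit: str) -> bool:
        now = time.monotonic()
        # The hidden state resets once the timeout elapses.
        if self._first_press is not None and now - self._first_press > self.TIMEOUT:
            self._entered = ""
            self._first_press = None
        if self._first_press is None:
            self._first_press = now
        self._entered += digit
        # An observer who sees only this one call concludes "9 opens the safe";
        # the accumulated self._entered is what actually explains the opening.
        return self._entered == self.CODE

safe = Safe()
for d in "141592":       # the six earlier presses the observer missed
    safe.press(d)
print(safe.press("9"))   # True -- but not because "pressing 9 causes box to open"
```

The observable input/output pairs alone underdetermine the mechanism; the timeout and the entry buffer are exactly the kind of unobserved state the comment is arguing about.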

        • Daniel: Glen, “mindsets” or “middle-terms” are internal “state”.

          GS: The tipoff that psychology is not really like physics or other legitimate sciences that posit hypothetical constructs is the comical facility with which the glaringly-gratuitous inner, mediational processes or “things” (or whatever) are invented. Moliere had his famous joke about this sort of practice (google “dormitive virtue” if you don’t know what I’m talking about).

Daniel: For example, suppose there is a safe with an electronic keypad. I walk up to the keypad, press 9, and it pops open. You may decide that “pressing 9 causes the box to open,” but the fact is that 30 seconds earlier I first entered 6 other numbers and 9 was only the last one needed, and the timeout is 1 minute so it hadn’t yet reset its internal state…

          GS: Notice that the “internal state” in your example provides no information that isn’t “contained” in observable events. It is utterly empty. Of course there is *some* “internal state” whenever there is a temporal (or spatial) gap in a statement of causation [Change the word causation if you insist that causes must be spatially and temporally contiguous with the effects – I know…call them “independent-variables.”] but pointing that out is useless and gratuitous.

          Daniel: I think it’s a mistake in science to refuse to use unobserved state as an explanation of what is going on.

GS: You mean like the utterly gratuitous “mindset”? Or dormitive virtue? [Actually, if I remember correctly, you defended dormitive virtue in a previous conversation!] And, BTW, I have nothing against hypothetical constructs – atoms were such…so were receptors and genes. What I am against is the laughably-gratuitous nonsense that characterizes mainstream psychology and (this shows the danger) the fields that it has corrupted like much of neuro”science.” You will not find “intentions” or “wants” or “mindsets” in the brain. What is funny is that the reification of ordinary-language mental terms suggests that ordinary language parses behavioral processes correctly. Why would anyone think that? The reason is that ordinary-language talk about behavior is of some effectiveness – in the world of ordinary affairs. But ordinary-language practices will not suffice for science.

          Daniel: In fact, Bayesian statistics is 100% about inferring the values of unobserved things (parameters) conditional on a model for how those unobserved things explain observed things (data).

          GS: I understand the English words that compose the sentence above, but I do not understand what you are saying. Could you give a clear, simple example sans “jargony” syntax (like “conditional on a model”)? Much of what you say sounds fishy as all get out to me. Like…is a parameter a thing? Or a process? And, are you saying that the parameter is a cause?

Daniel: Of course, one wants models of unobserved things to be good, so one shouldn’t just accept them because someone used a model once and it becomes standard. It’s fine to question models that use unobserved state, but I think the view that one should never use unobserved state, which you seem to advocate, goes in a direction I can’t agree with.

GS: Well, the last is largely irrelevant to this conversation since I never said that science should never posit “unobservables” in the sense of hypothetical constructs – like atoms and receptors once were. What I am ridiculing is the gratuitous reification of ordinary-language terms as causes – or the invention of “things” on the spot to explain behavior (that’s like the dormitive virtue). A good case can be made that mentalistic terms (wants, needs, expectations, attitudes, beliefs, knowledge, etc. etc.) are simply names for behavior and its context. They are, importantly, occasioned (at least in one kind of use of the terms) by the observation of behavior and/or its context just as the emission of the phrase “white dog” is occasioned by a white dog. The term “belief” (or the related verbs “believes,” “believed,” etc.) is a name for certain kinds of behavior-in-context just as “dog” is the name of a certain kind of animal. So…put a related way, one could say that the inner causes of psychology (which they try to dress up in scientific respectability by talking about the brain) are simply inferred unhesitantly from behavior and, also without hesitation, used to explain the behavior from which they were inferred – as when an education person suggested that you couldn’t read my son’s handwriting because he “had dysgraphia.” Anyway…it seems to me that we have had this conversation before. I’m not gonna proofread this sucker so my apologies ahead of time for typos etc.

        • Martha wrote: +1 for Daniel.

GS: Ok…then why are you worried about Gelman’s arguments concerning “incentives”? Oh…yeah…that’s where some of the “inner causes” come from. And tell me, how is it that you will change “mindsets” and produce the necessary “realizations”? Will you change these things directly? Or will you do and say (not that saying is not doing, mind you) things that change the alleged inner causes which, in turn, result in a change in behavior? So what good are the alleged inner causes?

        • Glen, I don’t have any problem at all with you complaining about what is actually done in Psychology. I don’t know enough about psych to know how rampant the problems you mention are, but I find it totally plausible that you are correct. However this isn’t a blog about *psychology* it’s a blog about “(Bayesian) statistical modeling, causal inference, and social science” which covers a lot of ground. And you’ll maybe remember, I didn’t defend “dormitive virtue” I defended “occupied fraction of Histamine H1 receptors in the brain” which is for all intents and purposes unobservable, but very inferable from a combination of in-vitro experiments and predictive models.

In engineering we work all the time with “continuum” models of materials… stress and strain and displacement and so forth. But there patently does not exist any continuum, it’s atoms; there is no “spatial derivative of the displacement field (strain)” because all there is is discrete atoms that individually displace.

          This bothered me a lot until I could put a logically consistent interpretation around it. You might find it interesting, it’s in my dissertation:

          http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/346615

The basic interpretation is that continuum models are models for the statistical properties of material within a finite volume, and that depending on various nondimensional ratios at work in the system, you wind up with different types of continuum models. All of them are false descriptions of the atoms, but true approximate descriptions of statistical properties of atoms within a small but finite volume. (A toy 1-D version of this averaging is sketched below.)

If you insist on solving problems in science with models that directly model the real physical situation, then you’ll wind up having to do psychology by specifying a Lagrangian for every molecule and every electron in the human body. Then you’ll be able to predict exactly what I will type in my next comment from the simple input of 10^90 precisely specified initial conditions… simple!

          So, fine, psychology invents fictions that have no real explanatory power. That’s a problem. But it’s not a problem because of invented fiction, it’s a problem because of lack of true explanatory power! You can’t do psychology without inventing some statistical fictions (such as “animals” and “brains” and “speech”), because the reality is just a crapload of quantum mechanics.
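To make the continuum-as-statistics point concrete, here is a toy 1-D sketch (my own minimal construction for illustration, not the formulation in the dissertation; all numbers are made up): discrete “atom” displacements are averaged over finite bins, and the bin-averaged field has a well-defined derivative (a “strain”) even though no individual atom does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "material": many discrete atoms, each displaced by a smooth
# macroscopic field plus atom-scale jitter. (Illustrative numbers only.)
n_atoms = 100_000
x = np.sort(rng.uniform(0.0, 1.0, n_atoms))          # reference positions
u_macro = 0.01 * np.sin(2 * np.pi * x)               # smooth underlying field
u = u_macro + 1e-4 * rng.standard_normal(n_atoms)    # discrete displacements

# Coarse-grain: average the displacements of atoms in each finite volume (bin).
n_bins = 50
edges = np.linspace(0.0, 1.0, n_bins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
bin_index = np.digitize(x, edges) - 1
u_field = np.array([u[bin_index == b].mean() for b in range(n_bins)])

# "Strain" as the derivative of the bin-averaged field -- a statistical
# property of each finite volume, not a property of any single atom.
strain = np.gradient(u_field, centers)

print(strain[:5])   # tracks d(u_macro)/dx = 0.02 * pi * cos(2 * pi * x)
```

The continuum description is “false” atom by atom but accurate for the per-volume statistics, which is the sense of “true approximate description” above.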

          Since you’re at a blog that concentrates on Bayesian statistics applied to social sci, I thought you might already have a background for the Bayesian statistical stuff. Here is a not too fishy example in several blog posts from a few years back:

          http://models.street-artists.org/?s=dropping+ball

Several friends helped me do some experiments. We took several sheets of paper and crumpled each into a ball. Then we dropped each one from a given height and timed its fall. I modeled the whole process using several fictions: for example, that the ball was a sphere of unknown diameter, or that the initial height was a well defined thing (the ball is actually a crumpled bit of paper, and has bits that stick out all over). The diameter was a parameter; given a parameter I could calculate a drag coefficient and use a formula to infer a net air drag under the pretense that it was a perfect sphere, and include this in the differential equation for the acceleration of the ball.

          That there *is no* single diameter is given from the start, but *conditional on the assumption that there is an “effective” diameter* I get one prediction for fall time for each effective diameter, and therefore I can infer which diameters produce predictions more like what was actually observed and which produce predictions less like what is actually observed. In probability notation I get:

          p(ActualFallTime | Diameter, MeasurementError, ActualInitialHeight, Gravitation…)

          All the items on the right hand side of the conditioning bar | are parameters in the model, if you tell me their exact value, I tell you how probable it is to observe a given ActualFallTime. In the Bayesian machinery, we can do some algebra and convert this “likelihood” expression (that’s what’s above) into a “posterior distribution” for the parameters, that is an expression like:

          p(Diameter,MeasurementError,ActualInitialHeight,Gravitation… | ActualFallTime)

          Now, we’re conditioning on the hard concrete data and we find out which values for Diameter,.. etc are plausible given the data. This is the essence of Bayesian statistics: posit a model that connects unobserved quantities to predictions of data, and then use actual data to infer the numerical values of the unobserved quantities.

All of it requires that your predictive model have real power to predict (i.e., it’s “conditional on a model”). A minimal numerical sketch of this falling-ball inference follows this comment.

If you have various plausible models, you can incorporate them into a kind of “shootout” where they compete for attention and the ones that predict well “win” (relevant terms are “Bayesian model averaging” or “continuous model expansion” or other jargon, but it comes down to partitioning 1 unit of plausibility among all the various possible models that you are willing to consider for the moment).

          So, hopefully that can help you with some context for the discussions at this blog surrounding models and Bayesian statistics.
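A minimal numerical sketch of the falling-ball inference described above, assuming made-up numbers throughout (mass, drop height, observed time, timing noise) and a deliberately simplified physics model: a constant-drag-coefficient sphere with a closed-form fall time, rather than the numerically integrated differential equation in the linked posts.

```python
import numpy as np

g, rho_air, Cd = 9.81, 1.2, 0.47   # gravity, air density, sphere drag coefficient
mass = 0.005                        # kg; roughly one crumpled A4 sheet (assumed)
height = 2.0                        # m; drop height (made up)
t_obs = 0.68                        # s; one "measured" fall time (made up)
sigma = 0.05                        # s; assumed human-timing measurement error

def fall_time(diameter):
    """Fall time from rest through `height` for a sphere with quadratic drag.
    Closed form: x(t) = (v_t^2 / g) * ln(cosh(g t / v_t)), v_t = terminal velocity."""
    area = np.pi * (diameter / 2.0) ** 2
    v_t = np.sqrt(2.0 * mass * g / (rho_air * Cd * area))
    return (v_t / g) * np.arccosh(np.exp(g * height / v_t ** 2))

# Grid over the unobserved "effective diameter" parameter, with a flat prior.
d_grid = np.linspace(0.03, 0.15, 500)        # 3 cm to 15 cm
prior = np.ones_like(d_grid) / d_grid.size

# Likelihood p(ActualFallTime | Diameter, ...) under Gaussian timing error.
lik = np.exp(-0.5 * ((t_obs - fall_time(d_grid)) / sigma) ** 2)

# Bayes: posterior over the parameter, proportional to likelihood * prior.
post = lik * prior
post /= post.sum()

print("posterior mean effective diameter: %.3f m" % np.sum(d_grid * post))
```

With more drops, the likelihood becomes a product over observations, and additional parameters (measurement error, initial height) get their own grid dimensions or, more practically, a sampler; the grid version is just the machinery in its most transparent form.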

        • Daniel: Glen, I don’t have any problem at all with you complaining about what is actually done in Psychology. I don’t know enough about psych to know how rampant the problems you mention are, but I find it totally plausible that you are correct. However this isn’t a blog about *psychology* it’s a blog about “(Bayesian) statistical modeling, causal inference, and social science” which covers a lot of ground.

GS: Yes, but, the current issue centered, at first, around Martha’s comments about “mindsets” and “realizations” as causes of behavior, which was embedded in the context of “why is the scientific behavior of some ‘scientists’ so laughably disastrous?” Further, where you entered the conversation was in defense of the notions put forth by Martha. Further, 85% of what gets talked about here is directly or indirectly about psychology, and the word “social” is not always appended to “science.”

          Daniel: And you’ll maybe remember, I didn’t defend “dormitive virtue” I defended “occupied fraction of Histamine H1 receptors in the brain”[…]

GS: Not exactly. If “not exactly” weren’t appropriate you wouldn’t have even brought it up originally, since the occupation of receptors has nothing to do with using “dormitive virtue” as the name for a cause. No, you were tacitly defending the use of explanatory fictions created on the spot using circular reasoning. “See? ‘Dormitive virtue’ is really the occupation of H1 receptors!” I used to argue with this guy that defended ordinary-language terms employed as causes of behavior on the basis that “It makes you look for ‘what’s inside.’” But you don’t have to support folk-psychology, and its “sophisticated” sibling, mainstream psychology, to argue that “filling temporal gaps” in causal accounts of behavior (or any other non-mechanistic science) is ultimately desirable. Further, at least in the case of ordinary-language terms as technical terms, neurophysiologists are misled about “what’s in there.” More precisely, folk-psychology gives a misleading view of what is to be described at the neurobiological level. Only an experimental science can discover the proper ways to talk about a subject matter, and the science of behavior is no exception. Ordinary language does not parse behavior into appropriate functional units of analysis, and the endeavor for a conceptually-sound (and therefore scientifically-sound) neurobiology of behavior has been decimated.

          Daniel: […]which is for all intents and purposes unobservable, but very inferable from a combination of in-vitro experiments and predictive models.

          GS: You will have to be clearer concerning what you are calling “unobservable.” I would say that you are wrong in a very important sense – the sense relevant to our current discussion. Receptors are no longer hypothetical constructs and, hell, they are directly-observable with a scanning electron microscope (I’m pretty sure). But, maybe you are saying that the “fraction of occupied receptors” is not observable. But this doesn’t raise the sorts of conceptual issues that were raised by “receptor,” “gene,” or “atom” before they lost their status as a hypothetical construct. And I should also add failed “entities” like phlogiston and the ether.

Daniel: In engineering we work all the time with “continuum” models of materials… stress and strain and displacement and so forth. But there patently does not exist any continuum, it’s atoms; there is no “spatial derivative of the displacement field (strain)” because all there is is discrete atoms that individually displace.

          This bothered me a lot until I could put a logically consistent interpretation around it. You might find it interesting, it’s in my dissertation:

          http://digitallibrary.usc.edu/cdm/ref/collection/p15799coll3/id/346615

          The basic interpretation is that continuum models are models for the statistical properties of material within a finite volume, and that depending on various nondimensional ratios at work in the system, you wind up with different types of continuum models. All of them are false descriptions of the atoms, but true approximate descriptions of statistical properties of atoms within a small but finite volume.

If you insist on solving problems in science with models that directly model the real physical situation, then you’ll wind up having to do psychology by specifying a Lagrangian for every molecule and every electron in the human body. Then you’ll be able to predict exactly what I will type in my next comment from the simple input of 10^90 precisely specified initial conditions… simple!

GS: Thanks for the heads-up, Laplace. I am, however, completely in the dark WRT how this relates to anything we are discussing. Well…maybe not completely – you are getting ready to say that psychology’s fictions are just like your “fictions” (OK…I peeked).

          Daniel: So, fine, psychology invents fictions that have no real explanatory power. That’s a problem. But it’s not a problem because of invented fiction, it’s a problem because of lack of true explanatory power!

          GS: Sometimes when espousing philosophy, as you are doing, a brief statement of the alternative philosophy is warranted. According to the main competing view of your Standard Mechanistic Reductionism, science is not a matter of “finding truth.” It is not a matter of “constructing a model that is the same as, or progressively-closer to, the ‘real world.’” It is a matter, according to this alternative view, of (with a behaviorist’s spin) engaging in behavior that strengthens (makes more probable) verbal behavior that in turn provides a listener (perhaps the same scientist who is the speaker) with a repertoire effective in prediction and control over a subject matter. In other words, “truth criteria” are replaced by prediction and control. Talk of the “real world” and “truth” quickly becomes mere metaphysical micturition. Reified explanatory fictions are fictions precisely because they lack any explanatory power.

          Daniel: You can’t do psychology without inventing some statistical fictions (such as “animals” and “brains” and “speech”), because the reality is just a crapload of quantum mechanics.

GS: LOL! First, I’d like to say “thank God you’re around to keep us all on the straight-and-narrow WRT reality and truth and, therefore, on the essence of science.” Or is it “reality” and “truth”? If the level at which you investigate a phenomenon leads to prediction and control (where appropriate), then it is science, not “approximate science” that isn’t *really* about “reality.”

          Daniel: Since you’re at a blog that concentrates on Bayesian statistics applied to social sci, I thought you might already have a background for the Bayesian statistical stuff. Here is a not too fishy example in several blog posts from a few years back:

          http://models.street-artists.org/?s=dropping+ball

Several friends helped me do some experiments. We took several sheets of paper and crumpled each into a ball. Then we dropped each one from a given height and timed its fall. I modeled the whole process using several fictions: for example, that the ball was a sphere of unknown diameter, or that the initial height was a well defined thing (the ball is actually a crumpled bit of paper, and has bits that stick out all over). The diameter was a parameter; given a parameter I could calculate a drag coefficient and use a formula to infer a net air drag under the pretense that it was a perfect sphere, and include this in the differential equation for the acceleration of the ball.

          That there *is no* single diameter is given from the start, but *conditional on the assumption that there is an “effective” diameter* I get one prediction for fall time for each effective diameter, and therefore I can infer which diameters produce predictions more like what was actually observed and which produce predictions less like what is actually observed. In probability notation I get:

          p(ActualFallTime | Diameter, MeasurementError, ActualInitialHeight, Gravitation…)

          All the items on the right hand side of the conditioning bar | are parameters in the model, if you tell me their exact value, I tell you how probable it is to observe a given ActualFallTime. In the Bayesian machinery, we can do some algebra and convert this “likelihood” expression (that’s what’s above) into a “posterior distribution” for the parameters, that is an expression like:

          p(Diameter,MeasurementError,ActualInitialHeight,Gravitation… | ActualFallTime)

          Now, we’re conditioning on the hard concrete data and we find out which values for Diameter,.. etc are plausible given the data. This is the essence of Bayesian statistics: posit a model that connects unobserved quantities to predictions of data, and then use actual data to infer the numerical values of the unobserved quantities.

All of it requires that your predictive model have real power to predict (i.e., it’s “conditional on a model”).

If you have various plausible models, you can incorporate them into a kind of “shootout” where they compete for attention and the ones that predict well “win” (relevant terms are “Bayesian model averaging” or “continuous model expansion” or other jargon, but it comes down to partitioning 1 unit of plausibility among all the various possible models that you are willing to consider for the moment).

          So, hopefully that can help you with some context for the discussions at this blog surrounding models and Bayesian statistics.

          GS: Yeah…if y’all weren’t too busy talking about psychology and science (or “science” as the case may be) in general 80% of the time. Keep in mind, for example, that the whole issue of the “hot hand” is psychology.
As to your model above, it is irrelevant to the explanatory fictions with which this sub-portion of the thread began – you know, you defending “mindsets” and “realizations”? Here’s the thing: your “unobservables” don’t raise the same issues as hypothetical constructs, which is where the meaty issues in the philosophy of science lie. IOW, you have conflated two different issues (levels of analysis per se are conflated with questions about, what you would call, the “existence” of the “substrate” of the reductionism).

Now, I might “slip” and say something like “there was a question about the existence of atoms before Einstein’s treatment of Brownian Motion,” but that isn’t really a slip – it is simply a colloquial statement. Talk of atoms and the associated mathematical verbal behavior that arose is useful in prediction and control of the “One World.” Notice I didn’t say the “real world.” That is because that notion is intimately tied to the notion of “scientific truth.” But here’s the catch: there is such a “thing” as “truth” and it would mean that the speaker could say things that are the best possible at prediction and control – but there is no way to ever know that what one says is the best possible – the simplest and most effective. That is where the metaphysics creeps in, and it is irrelevant to prediction and control – or should be.

One of my favorite stories is in Order Out of Chaos by Prigogine. He recounts the gist of a caveat that accompanied Fourier’s winning of a prestigious scientific award for his partial differential equations describing heat transfer in metals. The award was presented, but it was made known that it was given somewhat begrudgingly since Fourier made no statement about what was “really going on.” IOW, his stuff was just a fiction – useful, but a fiction. Prigogine refers to that event in the history of science as the “birth of complexity.”

• I should have made this more explicit: The use of “model” by Daniel is not, I think, what most of the scientific world would think of as a model – except, perhaps, superficially. AFAICS, what Daniel and others are talking about is 1.) Collecting data meticulously. 2.) Doin’ some fancy-Dan stuff with it, and 3.) Predicting, probabilistically, exactly the sort of events about which the data were meticulously collected! This is no different in principle than me injecting an animal with a drug, plotting the mean and range of each dose, and using that to “predict” what the effect of some particular dose would be on that particular animal. I read the answer off the graph along with caveats about, at least, the range of possibilities. I could do the same experiment with a bunch of animals, pool their data, and make some predictions about what happens when a new animal is used. This is not much like what models are like in other sciences. There, the model, if it is a good one, allows the prediction of new KINDS of events – not just the sorts of events from which the model is constructed. The epitome, of course, is physics, where the existence of new particles is predicted by the model. If the model (or “model” as the case may be) doesn’t predict new types of phenomena, then the “model” has another name – a summary of the data.

• Well, if we’re being pedantic, your N may well be 1, but that’s where your proof stops; the rest of the statement is merely a claim.

I’m not trying to be a jerk here and accuse you of anything more than being flippant about the extent of the problem being forced on PhD students & post-docs. Go talk to a few where both you and they are properly anonymous. You might find out something of significance. Sadly. Or perhaps you already know it.

        • Hal:

I’m sure there are labs where cheating is required to succeed. But statistics isn’t the same as biology. Academic fields differ in what is expected of people. Fortunately, in statistics it is possible for us to be useful without cheating; also, we are not generally under pressure to produce “discoveries” in the way that may be the case in some bio labs.

• You are not a junior post-doc trying to establish a career under an amoral professor. And to paraphrase Mandy Rice-Davies, you would say that, wouldn’t you. Oh, but I do believe you, really. I’m not being smart or sarcastic saying that. I believed Lance too, and to be fair to him, he had stronger evidence than you do, with all those drug tests he passed. The most frequently drug-tested athlete in the world.

If you were to state that you did in fact cheat, career over. Good night. Just as true for you as for Lance.

        • Hal:

          I think we’re in complete agreement here. I wrote, “If you push people to promise more than they can deliver, they’re motivated to cheat.” And you’re pointing to a group of people who are pushed to promise more than they can deliver, and then they’re motivated to cheat. So, agreement.

• No, we really are not.

Who is doing the promising? You say grad students and post docs. I _don’t_. I say senior members of the profession are making the promises. I say /you/ are making those promises. The grad students are the ones who come up very, very, very hard indeed against the untruth of those promises when they are in fact untrue, which you will agree is rather common. Feel free to estimate the percentage of grad students who want a career in academia and find that they don’t hold winning lottery tickets – that being smart and hard-working ain’t going to get it done. The people making those promises to grad students are entirely and wholly insulated from the consequences of those promises being lies.

          You see what I’m getting at here. When the game is bent the only option you have is to leave the table or cheat better. These were Lance’s options. I can’t bring myself to hate him for his choice. I totally and absolutely can bring myself to hate the Tour organisers who repeatedly buried positive results of riders for fear of scandal ensuring the game was bent.

          You see the parallel. You’re not suggesting it’s unfair to draw that parallel are you?

        • Hal:

          You write that I am saying that “grad students and post docs” are “doing the promising.”

          I never said such a thing. That’s all coming from you.

          So I’ll amend my above statement (“I think we’re in complete agreement here”) to “I think we’re in complete agreement, except for the part in which you think we are in disagreement because you strongly disagree with something I never said.”

        • Ho hum.

But the grads _are_ cheating, aren’t they? Just like Lance did. And you know it happens too frequently (how frequently? Fancy an estimate?). Their options are to walk away or to cheat. Meanwhile, you know it isn’t exactly uncommon, when the cheating is found out, for the academics who hired the grads and made the promises to note that their research assistant “did that work.” The promise makers keep their hands clean of the cheating, knowing it will happen to get the publications, etc. – you know all this.

The grads aren’t “pushed to promise.” They’re the ones who get nailed, just like Lance did. Twice. Once with the decision and again with the consequences when it comes to light.

The academics are somehow “pushed to promise” (to get the cheap labor from grad students??), and when the promise they make TO the grad students is proven false they do NOT get nailed. Pushed despite tenure – so they aren’t really pushed at all, to my mind.

The grad students here are Lance. The tenured academics are the ruling body ensuring the game is rigged. The grad students can walk away or cheat. Just like Lance. Lance cheated. Lance did not promise. Lance was not pushed to promise. Lance had promises made TO HIM as a young man who loved riding a bike. He cheated, but he did not promise.

          So who made the promise? It wasn’t Lance and it wasn’t the grads – it wasn’t those who are cheating and labeled cheats. Do you see the distinction?

“The Tour de France Ruling Body Principle” isn’t as catchy as “The Lance …”, and hey, everybody has had a two-minute hate session directed at Lance rather than where it actually deserves to be directed.

Tenured academics are *forced* to promise what? How? By whom? Tour de France organisers were forced how? These are the promises that matter and for which people should be held accountable for the resulting cheating – but they are not.

The Lance Armstrong principle would be better served as a name for directing rage and hate at exactly the wrong target. Or perhaps as a name for the completely inevitable result of a wholly morally bankrupt structure. (If not Lance as the drug-addled winner whom we should hate: second place was on drugs, so was third and fourth and fifth and …) But Lance is the object of the hate. Lance walks away instead of cheating and the amount of cheating drops by a metric unit of precisely zero.

Again, the grads never made the promise. They’re held accountable for, and are the victims of, promises made by people who frequently literally cannot be fired and who can afford their rent and shoes for their kids. The grads cheat not because of promises they made themselves, but because of promises that were made *TO* them.

          Can you really not see this point? I’m not asking you to agree with it just see it and acknowledge that it’s a reasonable point of view given the utter debacle going on in so much of academic research.

You really should try to converse anonymously with grad students and post-docs. Just out of morbid fascination with the gaze into the abyss, if nothing else.
