Vaping statistics controversy update: A retraction and some dispute

A few months ago we reported on two controversies regarding articles in the medical literature on the risks of e-cigarettes (vaping).

One of the controversial papers was “Electronic cigarette use and myocardial infarction among adults in the US Population Assessment of Tobacco and Health [PATH],” by Dharma N. Bhatta and Stanton A. Glantz, published in the Journal of the American Heart Association. That paper had a data issue: in one of their analyses, they were assessing the effects of e-cigarette use on heart attacks, but, as pointed out by critic Brad Rodu, “11 of the 38 current e-cigarette users in their study had a heart attack years before they first started using e-cigarettes.” At the time, I suggested that the authors of the article perform a corrected analysis removing the data from the people who had heart problems before they started vaping.
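As a rough illustration of the kind of correction being suggested: given each respondent’s age at first heart attack and age at first e-cigarette use, one would drop the respondents whose first MI clearly preceded their vaping before estimating any association. The sketch below uses made-up data and hypothetical column names; the actual PATH restricted-use variable names and data are not reproduced here.

```python
# Hypothetical sketch of the suggested correction. Column names and
# values are illustrative only, not the actual PATH restricted-use data.
import pandas as pd

df = pd.DataFrame({
    "respondent_id":  [1, 2, 3, 4],
    "age_first_mi":   [50, None, 45, 60],   # None = never had an MI
    "age_first_ecig": [55, 40, 40, 58],
})

# Flag respondents whose first MI came before they started vaping;
# comparisons against NaN evaluate to False, so never-MI rows are kept.
pre_existing_mi = df["age_first_mi"].notna() & (
    df["age_first_mi"] < df["age_first_ecig"]
)

# The corrected analysis would run on the remaining respondents.
corrected = df[~pre_existing_mi]
```

Here respondent 1 (MI at 50, vaping at 55) is excluded, while respondents with no MI or an MI at or after vaping onset are retained.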

Since our post, some things have happened.

First, a letter dated 20 Jan 2020 was sent by 16 public health researchers to the editors of the Journal of the American Heart Association, citing Rodu’s comments and my blog post and urging the journal to do something to resolve the issue.

Second, on 23 Jan 2020, the journal responded with a letter stating, “The AHA continues to follow the guidelines as outlined by the International Committee of Medical Journal Editors (ICJME) and the Committee on Publication Ethics (COPE), which include protecting the integrity of the scientific publishing process. No additional information is available at this time.”

Third, on 18 Feb 2020, the journal issued a retraction! Here it is:

Retraction to: Electronic Cigarette Use and Myocardial Infarction Among Adults in the US Population Assessment of Tobacco and Health

After becoming aware that the study in the above‐referenced article did not fully account for certain information in the Population Assessment of Tobacco and Health [PATH] Wave 1 survey, the editors of Journal of the American Heart Association reviewed the peer review process.

During peer review, the reviewers identified the important question of whether the myocardial infarctions occurred before or after the respondents initiated e‐cigarette use, and requested that the authors use additional data in the PATH codebook (age of first MI and age of first e‐cigarettes use) to address this concern. While the authors did provide some additional analysis, the reviewers and editors did not confirm that the authors had both understood and complied with the request prior to acceptance of the article for publication.

Post publication, the editors requested Dr. Bhatta et al conduct the analysis based on when specific respondents started using e‐cigarettes, which required ongoing access to the restricted use dataset from the PATH Wave 1 survey.1 The authors agreed to comply with the editors’ request. The deadline set by the editors for completion of the revised analysis was not met because the authors are currently unable to access the PATH database. Given these issues, the editors are concerned that the study conclusion is unreliable.

The editors hereby retract the article from publication in Journal of the American Heart Association. [original article URL: https://www.ahajournals.org/doi/10.1161/JAHA.119.012317]

Fourth, also on 18 Feb, Stanton Glantz, the second author of the now-retracted article, posted a response with the combative title, “Journal of American Heart Association caves to pressure from e-cig interests.” In his post, Glantz first reviews the dispute and then continues:

Rodu and a colleague argued that the analysis described above was inadequate because the PATH restricted use dataset had the date of first heart attack and date at which people started using e-cigarettes and that we should have used these two dates to exclude cases rather than the approach we took.

Indeed, one of the peer reviewers had suggested the same analysis. As I [Glantz] detailed in a letter to JAHA, while there was some misunderstanding of the specific supplemental analysis requested by the reviewer, the analysis that we presented during the peer review process substantially addressed the question raised by the reviewer. As I wrote the editor, Dr. London:

In any event, it is important to keep in mind that this discussion is about a supplementary analysis, not the main analysis in the paper. As the paper states, restricting the data as we did substantially dropped the number of MIs and the supplemental analysis was not statistically significant. Reviewer 2 understood and accepted our supplementary analysis and, after we responded to the original comment, recommended publishing the paper as it is with primary analysis (which is based on the whole dataset) despite the issues discussed in this letter.

In addition, doing the additional alternative analysis will not change the main analysis in the paper, which the reviewers and editors accepted.

The normal protocol for raising a technical criticism of a paper would be to write a letter to the journal criticizing the paper. If the editors find the criticism worth airing, they would invite the authors (in this case, Dr. Bhatta and me) to respond, then publish both letters and allow the scientific community to consider the issue.

Indeed, Rodu has published several letters and other publications criticizing our work, most recently about a paper I and other colleagues published in Pediatrics about the gateway effect of e-cigarette use on subsequent youth smoking. . . .

Rather than following this protocol, I first learned of Rodu’s criticism when USA Today called me for a response to his criticism. I was subsequently contacted by the Journal of the American Heart Association regarding Rodu’s criticism. I responded by suggesting the editors invite Rodu to publish his criticism in enough detail for Dr. Bhatta and I to respond, as well as accurately disclose his links to the tobacco industry.

Instead, the editors of the Journal of the American Heart Association demanded that Dr. Bhatta do additional analysis that deleted heart attacks before people may have used e-cigarettes as Rodu wanted rather than as how we did in the subsidiary analysis in the paper.

Dr. Bhatta and I have no issue with doing such additional analysis. Indeed, we prepared the statistical code to do so last November. (I doubt that the results will be materially different from what is in the paper, but one cannot be sure until actually running the analysis.)

The problem is that, during the process of revising the paper in response to the reviewers, we reported some sample size numbers without securing advance approval from the University of Michigan, who curates the PATH restricted use dataset. This was a blunder on our part. As a result, the University of Michigan has terminated access to the PATH restricted use dataset, not only for Dr. Bhatta and me, but for everyone at UCSF.

As part of our effort to remedy our mistake, we have published a revised version of the table in question (Table S6 in the paper) deleting the sample size numbers that had not been properly cleared with the University of Michigan. (Doing so did not materially change the paper.) . . .

Now, under continuing pressure from e-cigarette advocates (link 1, link 2), the editors of the Journal of the American Heart Association have retracted the paper because, without access to the PATH restricted use dataset, we have not been able to do the additional analysis.

The editors also gave Dr. Bhatta and me the option of retracting the paper ourselves. We have not retracted the paper because, despite the fact that we have not been able to do the additional analysis Rodu is demanding, we still stand behind the paper. . . .

I read this response with an open mind. Glantz makes three points:

1. Procedurally, he’s not happy that Rodu contacted the news media with his criticism and that a group of researchers contacted the journal. He would’ve been happier had Rodu submitted a letter to the journal, which could then publish Rodu’s letter alongside a response by Bhatta and Glantz. I disagree with Glantz on this one. He and Bhatta made an error in their paper! When you make an error, you should fix it. A sequence of letters back and forth just muddies the waters, especially considering that the previous time Rodu found an error in a paper by Glantz and coauthors (see “Episode 1” here), they just brushed the criticism aside.

If someone points out an error in your work, you should correct the error and thank the person. Not attack and try to salvage your position with procedural arguments.

2. Glantz says that they wanted to do the recommended analysis, but now they can’t, because they don’t have access to the data anymore. That’s too bad, but then you gotta retract the article. If they ever get their hands on the data again, then they can redo the analysis. But, until then, no. You don’t get credit for an analysis you never did. They had access to the data, they messed up. Maybe next time they should be more careful. But the Journal of the American Heart Association should not keep a wrong analysis in print, just because the people who did that wrong analysis are no longer situated to do it right.

In his blog, Glantz wrote, “I doubt that the results will be materially different from what is in the paper, but one cannot be sure until actually running the analysis.” Exactly! One cannot be sure. A scientific journal can’t just go around publishing claims because the authors doubt that the correct analysis will be materially different. That kind of thing can be published in the Journal of Stanton Glantz’s Beliefs, or on his blog—hey, I understand, I have a blog too, and it’s a great place to publish speculations—but not in the Journal of the American Heart Association.

3. Finally, Glantz argues that none of this really matters because “this discussion is about a supplementary analysis, not the main analysis in the paper.” The argument is that the main analysis is the cross-sectional correlational analysis; this supplementary analysis, which addresses causation, doesn’t really matter. This may be true—maybe the cross-sectional analysis was enough, on its own, to be published. But the paper as published did not just include the cross-sectional analysis; indeed, it seems that this analysis over time was required by the journal reviewers.

So, to recap:
– On the original submission, the reviewers said the cross-sectional analysis was not enough. They “requested that the authors use additional data in the PATH codebook (age of first MI and age of first e‐cigarettes use) to address this concern.”
– Bhatta and Glantz did the requested analysis. But they did it wrong.
– When they were asked to do the analysis correctly, Bhatta and Glantz were not able to do so because they did not have access to the data.
The sequence seems clear. Whether or not Glantz now thinks this “supplementary analysis” was key to the paper, the journal reviewers demanded it, and at the time of submission, Bhatta and Glantz did not argue that this supplementary analysis was unnecessary or irrelevant; rather, they did it. Or, to be more precise, they appeared to do it, but they screwed up. So I don’t find Glantz’s third point convincing either.

Glantz says that he and Bhatta “still stand behind the paper.” That’s fine. They’re allowed to stand by it. But I’m glad that the Journal of the American Heart Association retracted it. I guess one option could be that the authors could resubmit the paper to a new journal, labeling it more clearly as an opinion piece. That should make everyone happy, right?

Glantz concludes his post with this statement:

The results in the paper are accurately analyzed and reported. That is why we refused to retract the paper.

As I said earlier, we are still hoping to regain access to PATH so that we can do the additional analysis and put this issue behind us.

This to me seems like a horribly unscientific, indeed anti-scientific attitude, for two reasons. First, the results in that paper were not “accurately analyzed and reported.” They screwed up! They’re not being asked to do an “additional analysis”; they’re being asked to do a correct analysis. Second, the goal of science is not to “put this issue behind us.” The goal is to learn about reality.

It might be that all the substantive claims in that retracted paper are correct. That’s fine. If so, maybe someone can do the research to make the case. My problem is not with the authors of that paper believing their claims are true; my problem is with them misrepresenting the evidence, and that remains a problem even if the misrepresentation was purely an accident or honest mistake.

Conflicts of interest

There’s some further background.

I looked up David Abrams, the first author of the letter sent to the journal. He’s a professor of public health at New York University, his academic training is in clinical psychology, and he’s an expert on cigarette use. For example, one of his recent papers is, “How do we determine the impact of e-cigarettes on cigarette smoking cessation or reduction? Review and recommendations for answering the research question with scientific rigor,” and another is “Managing nicotine without smoke to save lives now: Evidence for harm minimization.” A web search brought me to this article, “Don’t Block Smokers From Becoming Smoke-Free by Banning Flavored Vapes,” on a website called Filter, which states, “Our mission is to advocate through journalism for rational and compassionate approaches to drug use, drug policy and human rights.” Filter is owned and operated by The Influence Foundation, which has received support from several organizations, including Juul Labs, Philip Morris International, and Reynolds American, Inc.

Brad Rodu, my original correspondent on this matter, is the first holder of the Endowed Chair in Tobacco Harm Reduction Research at the University of Louisville’s James Graham Brown Cancer Center. He is trained as a dentist and is also a senior fellow of The Heartland Institute, which does not make public the names of its donors but which has been funded by Altria, owner of Philip Morris USA. Rodu also has been directly funded by the tobacco industry.

On the other side, Stanton Glantz is a professor of tobacco control at the University of California, and his academic training is in engineering. He’s been an anti-smoking activist for many years and has made controversial claims about the risks of secondhand smoke. He’s been funded by the U.S. Food and Drug Administration and the National Institutes of Health.

So, several of the people involved in this controversy have conflicts. In their letter to the journal, Abrams et al. write, “The signatories write in a personal capacity and declare no competing interests with respect to tobacco or e-cigarette industries.” I assume this implies that Abrams is not directly funded by Juul Labs, Philip Morris International, etc.; he just writes for an organization that has this funding, so it’s not a direct competing interest. But in any case these researchers all have strong pre-existing pro- or anti-vaping commitments.

That’s ok. As I’ve written in other contexts, I’m not at all opposed to ideologically committed research. I personally have no strong take on vapes, but it makes sense to me that the people who study the topic most intently have strong views on the topic and have accepted money from interested parties. To point out the above funding sources and cigarette company links is not to dismiss the work being done in that area.

Also, just to speak more generally, I’ve taken $ from lots of places. Go to this list and you might well find an organization that you find noxious. So I’m not going around slamming anyone for working for cigarette companies or anyone else.

I will say one thing, though. On the webpage of the Heartland Institute is the following statement: “Heartland’s long-standing position on tobacco is that smoking is a risk factor for many diseases; we have never denied that smoking kills.” Now consider the following quotations from leaders of the cigarette industry:

Philip Morris Vice President George Weissman in March 1954 announced that his company would “stop business tomorrow” if “we had any thought or knowledge that in any way we were selling a product harmful to consumers.”

James C. Bowling, the public relations guru and Philip Morris VP, in a 1972 interview asserted, “If our product is harmful . . . we’ll stop making it.”

Then again in 1997 the same company’s CEO and chairman, Geoffrey Bible, was asked (under oath) what he would do with his company if cigarettes were ever established as a cause of cancer. Bible gave this answer: “I’d probably . . . shut it down instantly to get a better hold on things.”

Lorillard’s president, Curtis Judge, is quoted in company documents: “if it were proven that cigarette smoking causes cancer, cigarettes should not be marketed.”

R. J. Reynolds president, Gerald H. Long, in a 1986 interview asserted that if he ever “saw or thought there were any evidence whatsoever that conclusively proved that, in some way, tobacco was harmful to people, and I believed it in my heart and my soul, then I would get out of the business.”

Given that the Heartland Institute takes the position that smoking kills, and given that they’re in contact with Philip Morris, RJR, etc., maybe they could remind the executives of these companies of the position that their predecessors took—“If our product is harmful . . . we’ll stop making it”—and ask what they should be doing next.

P.S. There’s been a long history of cigarette companies giving financial support to prominent statisticians, including Ronald Fisher, Donald Rubin (my Ph.D. adviser, see section 2 of this article), Herbert Solomon, Richard Tweedie, Arnold Zellner, Paul Switzer, Joseph Fleiss, Nathan Mantel, and Joseph Berkson. The most amusing example is this one from the 1970s:

Ingram Olkin, chairman of Stanford’s Department of Statistics, received $12,000 to do a similar job (SP-82) on the Framingham Heart Study . . . Lorillard’s chief of research okayed Olkin’s contract, commenting that he was to be funded using “considerations other than practical scientific merit.”

37 thoughts on “Vaping statistics controversy update: A retraction and some dispute”

  1. Andrew: What a relief to realize I was wrong!! My first thought after reading the title of this post was “Oh my God; they’ve finally done it; they’ve finally gone off the edge to the dark side. No longer satisfied with merely idolizing, misinterpreting, and misusing them, scientists are now actually vaping statistics!!”

  2. I completely agree with every observation in this post but one–that the authors of the retracted paper have “conflicts.” Imagine if someone said that an author of a perfectly reasonable paper on climate change had conflicts simply because:

    He is a professor of climate change, had advocated for taking measures to reverse climate change, and had made controversial claims that pollutants other than carbon may accelerate climate change in the past. He has been funded by the EPA and the United Nation’s Green Climate Fund.

    That would be…odd. Perhaps there’s more to the situation with Glantz than is apparent from your note, but as it’s described, Glantz’s background just makes me think he acknowledges that tobacco is harmful and that he has no more need to disclose this view than a water safety researcher would need to disclose his belief that consuming lead is bad for children.

    I do agree that Glantz’s background shows he has “strong views on the topic” and that that’s not a bad thing. I just would make a distinction between accepting money from orgs that advocate for human health and therefore incidentally against tobacco use, and accepting money from orgs created by or funded by tobacco companies to advocate directly for tobacco use through journalism and research, particularly when there is a history of deception by similar groups. I think characterizing both things as “conflicts” fails to make that distinction.

    • “would make a distinction between accepting money from orgs that advocate for human health and therefore incidentally against tobacco use…”

      I don’t buy that at all.

      Private advocacy groups have to constantly prove a need for themselves to their donors. The researchers employed and funded by those groups have every reason to distort their findings in ways that will keep the dollars flowing. If a researcher is funded by Tom Steyer would s/he be writing papers saying “wow, looks like climate change isn’t as bad as we thought!” No. At the very least it’s the same bias felt by survey respondents to provide the answers that they believe are expected. At the worst it’s highly motivated. Its certainly no secret that – at least in the case of the environmental community – many claims made by private “public interest” groups are wildly exaggerated if not out and out false.

      Ultimately, the source of the funding isn’t relevant. The entire point of science is to present the methods and data and judge those, not the source of funding. This blog happily devotes most of its time to that, and if the rest of the world would too, we’d all be in a better place.

      • We don’t disagree. When I referred to “orgs that advocate for human health” I was specifically referring to the NIH and FDA, the two funding agencies Andrew cites for Glantz, as opposed to private advocacy groups like Heartland. But my statement, I see now, does not actually *say* that, and your inference was reasonable. Oops!

        Granted, someone funded by NIH has incentives to publish claims that will get them more money from the NIH and other public funding agencies, and the NIH has incentives to fund researchers whose past publications support what it understands to be the science. But that’s not so much a conflict of interest as it is a bias toward consensus and conventionality, inherent in just about any established scientific paradigm. It’s a problem, but a different problem.

      • > Ultimately, the source of the funding isn’t relevant. The entire point of science is to present the methods and data and judge those, not the source of funding. This blog happily devotes most of its time to that, and if the rest of the world would too, we’d all be in a better place.

        I think we’d be in a worse place. It’s just wasteful for everyone to scrutinise methods and data (which requires substantial training and expertise). Most people are instead going to have to trust others who have done that at least to some extent. Even scientists in other fields, or even the same field but just not willing to verify things themselves. Source of funding and other conflicts of interest are helpful for making educated guesses about what to believe, or whom to believe.

        • I’d take it a step farther than Anon does. In published research, we are provided only with summaries of the methods. The many investigator degrees of freedom that go into data cleaning, definition of variables, choice of models, etc. are generally not set out. Yet these are, in fact, the steps that are the most vulnerable to being deliberately or unconsciously manipulated towards a preferred result. There is, after all, only one way to calculate ordinary least squares estimates of a linear regression. But there are numerous ways to choose which variables to put in that regression, and numerous ways to define those variables, etc. Particularly in a world where most effects being estimated are fairly small (the large ones having been well known for a long time), tiny little nudges in the methodology can easily obscure what is really going on or create the impression of something that isn’t really happening.

          For these reasons, it is generally not possible for a consumer of research findings to really know what to make of a published result. That really requires a more detailed review of the raw data and all the management and analysis code than is usually available even to editors and reviewers.

          So, at the end of the day, what is truly required is a culture that promotes honesty and transparency and strongly sanctions violations of those. On top of that, the culture must encourage after-publication criticism and discussion, and reward people who acknowledge and retract their errors and sanction those who stonewall. And since we are all merely human, people should avoid doing research that evaluates things where they have a vested interest (whether financial, ideological, or other). We do not, unfortunately, live in such a world. So a policy of mandatory disclosure of all conflicts of interest, or anything that a cynic might construe as a conflict of interest, is a poor second, but it’s the best we’ve got.

        • “In published research, we are provided only with summaries of the methods. ”

          It’s appropriate to use the summary of a method if the method has been published and verified, in which case it should be referenced and any modifications to the method elucidated. Otherwise, it’s not appropriate to use the summary of a method.

          But while I agree the methods section of many research papers leaves plenty open to question, I also don’t find it that difficult to find papers that have methodological errors spelled out clearly. The problem is that most people aren’t looking or don’t care.

        • “It’s just wasteful for everyone to scrutinise methods and data (which requires substantial training and expertise)”

          Great. If people want to trust researchers, they should feel free to skip the methods and discussion and read the abstract and the conclusions. For everyone else, however, research methods and data are important and they should be published with discussion and conclusions so anyone who’s interested can assess them.

          The scientific method and system was developed precisely to eliminate the need for trust. It was developed so we could see with our own eyes what is true and what is not, and how the data gathered for a given study bear on that question.

  3. “The normal protocol for raising a technical criticism of a paper would be to write a letter to the journal criticizing the paper. If the editors find the criticism worth airing, they would invite the authors (in this case, Dr. Bhatta and me) to respond, then publish both letters and allow the scientific community to consider the issue.”

    This statement is incredible. So we can’t discuss obvious flaws in scientific papers unless an approved gatekeeper says it’s ok? More often than not, the defenders of the “normal protocols” of scientific publishing end up showing exactly why those protocols need to be retired.

    • “This statement is incredible. So we can’t discuss obvious flaws in scientific papers unless an approved gatekeeper says it’s ok? More often than not, the defenders of the “normal protocols” of scientific publishing end up showing exactly why those protocols need to be retired.”

      +1

    • “The normal protocol for raising a technical criticism of a paper would be to write a letter to the journal criticizing the paper. If the editors find the criticism worth airing, they would invite the authors (in this case, Dr. Bhatta and me) to respond, then publish both letters and allow the scientific community to consider the issue.”

      I’d say the normal protocol would be to send an email to the author(s) pointing out the problem. But it would still be OK to discuss the problem in a seminar, or in an online discussion group (which is in many ways a natural extension of a seminar).

  4. Regarding protocol of errors in papers:

    I’d say it seems reasonable, courteous and collegial to inform authors of a serious concern about their work before going to the press about a problem. But that shouldn’t prevent someone from contacting the publishing editor or submitting a discussion piece or rebuttal, which after all is just a more formal way of registering a concern with the author; or from discussing it with colleagues.

  5. I agree that the conventional approach is communication via the medical journal where the article was published. In 2004 Professor Glantz published a study in the British Medical Journal (BMJ) reporting that a ban on public smoking in Helena, Montana was quickly followed by a significant reduction in hospital admissions for heart attacks. We presented data at the journal’s website showing that random variation might have played a role in the reduction (https://www.bmj.com/rapid-response/2011/10/30/additional-information-acute-myocardial-infarctions-helena-montana). Professor Glantz’s response mainly focused on the tobacco industry (https://www.bmj.com/rapid-response/2011/10/30/people-should-not-ignore-our-fundamental-observations).

    In 2015 Professor Glantz published a study in Tobacco Control claiming that the smoking population in the U.S. is “softening,” (i.e. more likely to quit as a result of lower smoker prevalence). We published a re-analysis of that study in the journal Addiction (https://onlinelibrary.wiley.com/doi/full/10.1111/add.13340 ). The editors wrote that Professor Glantz was invited to respond, but “declined the journal’s offer to publish the parts of their response that addressed the article itself.”

    In 2018 Professor Glantz published a study in Pediatrics claiming that e-cigarettes caused teens who are experimental smokers to become established smokers. See Professor Gelman’s discussion of this study in his November 27 2019 blog post (https://statmodeling.stat.columbia.edu/2019/11/27/controversies-in-vaping-statistics-leading-to-a-general-discussion-of-dispute-resolution-in-science/). The details of our criticism are laid out in plain language in two blog posts by me (https://rodutobaccotruth.blogspot.com/2018/04/retract-uc-san-francisco-e-cigarette_3.html and https://rodutobaccotruth.blogspot.com/2018/04/uc-san-francisco-authors-inadvertently.html ). We submitted our re-analysis to the journal online (https://pubpeer.com/publications/20C84AD92909A06A88AFAC7DA39BAA). The response from Professor Glantz contained ad hominem and defamatory content, and was removed by the editors. However, Professor Glantz’s comment was recently republished at PubPeer (https://pubpeer.com/publications/20C84AD92909A06A88AFAC7DA39BAA ).

    With respect to the current retracted article in the Journal of the American Heart Association, the journal doesn’t have a letters section or any other forum for comments. Nothing at all. My only recourse was to find the editors’ academic email addresses, and write to them, with no expectation that they would respond. In fact, they didn’t respond at all to my July 11 and July 18 letters, until October 30, 2019, after I sent them another email. All of this has been documented on PubPeer (https://pubpeer.com/publications/F177153E02CA8B3E7B9E70BC8DB204 ).

    Brad Rodu

  6. https://www.desmogblog.com/fake-science-fakexperts-funny-finances-free-tax
    1) Heartland Institute has been funded by Big Tobacco for decades, starting no later than the 1990s, when it had Philip Morris executive Roy Marden on its Board. He was the PM guy who doled out $ to thinktanks; I found the yearly spreadsheets.
    HI President Joe Bast begged PM for $, bragged of his article claiming the innocence of Joe Camel (the most successful marketing campaign targeting adolescents).
    See pp37-47 in:
    https://www.desmogblog.com/sites/beta.desmogblog.com/files/fake.pdf

    2) In 2013, tobacco companies finally decided vaping would be more than a niche, and either bought startups or were working on their own products.
    Suddenly, thinktanks changed their tune from “smoking=liberty” to “smoking may be bad, vaping can help”.
    https://www.desmogblog.com/2014/01/28/familiar-think-tanks-fight-e-cigarettes

    3) PM = Altria ($25B) + PM International ($80B). For context: IBM ($80B), Facebook ($71B).
    Big Tobacco totally depends on addicting folks during adolescent brain development (roughly ages 10-24, peak 15-19). Teen smoking has slowly decreased; vaping may save the business.

    4) I can’t comment on this paper fight given a COI (I’m an advisor for Stan’s center at UCSF), but there is a well-known elephant-in-the-room “prior” about nicotine: during adolescence it’s addictive and has negative effects on the brain. It stiffens/constricts blood vessels and promotes angiogenesis (Google “nicotine angiogenesis”). That doesn’t bother 30-ish adults, but it is a major factor in cardiovascular disease (CVD: heart disease, strokes) in older adults with other risk factors. Smoking-related CVD deaths are roughly comparable to lung cancer deaths, followed by COPD (emphysema). Smoke particles drive the cancer and COPD, but nicotine contributes strongly to CVD.
    There aren’t yet a lot of 50-70-year-old vapers, but given nicotine’s effects, it would be *astonishing* if there weren’t elevated risk of CVD when they get there. The risk is likely lower than for smokers, but early results indicate dual use (smoke + vape) is worse than either alone.

    • I guess you’re quoting market caps for the tobacco companies in comparison to tech companies?

      Some updates and additional context, with market-cap ranks (all figures in $B):

      1. AAPL $1300
      2. MSFT $1000
      3. GOOG/GOOGL $1000
      4. FB $560
      5. BABA $560 (Alibaba)

      11. WMT $325 (Walmart)

      52. PM $135 (Philip Morris)
      60. IBM $125

      105. MO $80 (Altria Group)

    • John:

      Regarding your point #4: All this may be true. In that case it would be fine for Bhatta and Glantz to publish a paper about it. The bad thing is when they make errors in data analysis and then won’t admit their errors. Maybe they should just write some papers about their priors and not mess around with actual data!

        • Zad:

          Yeah, people always think I’m being sarcastic or patronizing, but I’m dead serious here. I see this so much: researchers have strong beliefs and, at least from their perspective, deep qualitative understanding, but instead of writing about that, they write really weak quantitative papers based on bad data and sloppy analysis. There’s no need for them to focus on their weaknesses.

        • Andrew, that sure as heck would be a lot more honest than some papers which pretend to be a compromise between an investigator’s prior and the data, but which are really just the investigators projecting their prior onto the data, allowing it to swamp the whole thing, and then spinning the entire thing as original research that is “objective.”

  7. What really irritates me about this whole fiasco is that Glantz et al. have pretty much framed it as a “special interests” job trying to take down their research, rather than admitting that their failure to do the analysis correctly (for whatever reason, lack of access, etc.) was the reason it was retracted. In fact, Glantz says that retractions are serious actions reserved only for research misconduct. He couldn’t be more wrong!

    “Retraction is a very serious step, which is warranted when, according to the International Committee of Medical Journal Editors, there is “Scientific misconduct in research and non-research publications includes but is not necessarily limited to data fabrication; data falsification including deceptive manipulation of images; purposeful failure to disclose relationships and activities; and plagiarism.” (http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/scientific-misconduct-expressions-of-concern-and-retraction.html).”

    https://tobacco.ucsf.edu/journal-american-heart-association-caves-pressure-e-cig-interests

  8. Given the history of statisticians supporting Big Tobacco, including Gelman’s PhD adviser, I think conflicts of interest in this matter are more important than the post suggests. Too bad that the post does not acknowledge this.

    • Manuel:

      I added a P.S. above mentioning Rubin and other statisticians who’ve been paid by the cigarette companies. Indeed, this used to be a big deal. There are some well-known statistics professors on that list.
