“I suggest a ‘meta rule’. Everyone should make a logical argument about why their conclusion makes sense. Whether you use stats or some other method, it had better follow some logic that isn’t based on verifiably incorrect assumptions.

The attempt to short-cut this is at the heart of most of the examples of bad science we see on this blog.”

+1/2 – I’d modify it to “Everyone should make a logical argument about why their assumptions and their reasoning to their conclusion make sense.”

“Many more examples to come, from vaccinations, fluoride, water safety and other sciencey stuff.”

What a strange list. All of those are instances where governments are forcing medical procedures onto their citizens. That would explain the continued funding of NHST by the government…

Actually, I don’t care what you do, DC :), but do check out http://www.statisticool.com/nobelprize.htm

And don’t forget about using p-values and statistical significance to assess evidence for quantum supremacy: http://www.statisticool.com/quantumcomputing.htm

Many more examples to come, from vaccinations, fluoride, water safety and other sciencey stuff.

Cheers,

Justin

I suggest a “meta rule”. Everyone should make a logical argument about why their conclusion makes sense. Whether you use stats or some other method, it had better follow some logic that isn’t based on verifiably incorrect assumptions.

The attempt to short-cut this is at the heart of most of the examples of bad science we see on this blog. For example from today: https://statmodeling.stat.columbia.edu/2019/11/05/the-incentives-are-all-wrong-causal-inference-edition/

I was thinking that if some of the material in those two links had been allowed in the original Nature commentary, there would have been less heated “fireworks”.

On the other hand, maybe that sort of outward clash is needed to move forward – maybe that’s why the two links are now so informative.

P.S. I’m in the process of making this https://statmodeling.stat.columbia.edu/2019/10/15/the-virtue-of-fake-universes-a-purposeful-and-safe-way-to-explain-empirical-inference/ compatible with those two links.

I think John Ioannidis’ ‘How Evidence-Based Medicine Has Been Hijacked’ prompts a much deeper discussion. In some sense, the ‘stat sig and p-value’ subject has become more of a focused distraction. Please, please feel free to suggest I’m off base.

Actually, in reviewing Susan Haack’s work, it is uncanny that I had made some of the same arguments even before I came across her recent lectures. Both of us acknowledge substantial skepticism toward grand claims about the ‘scientific method’.

Well, a lot of the discussion also continued with these two preprints:

1. https://arxiv.org/abs/1909.08579

2. https://arxiv.org/abs/1909.08583

especially part 2, which was also discussed on this blog (https://statmodeling.stat.columbia.edu/2019/09/24/chow-and-greenland-unconditional-interpretations-of-statistics/)

Perhaps a b-value, for bias, is in order.

Researchers should use whatever method best elucidates/analyzes the relationships in their data at the relevant scale of observation and for the purpose of the study.

Some approaches that have been or still are in wide use clearly have problems, and the scope of conclusions that can be drawn from them has been dramatically overstated. The simple p < 0.05 method has some utility at a coarse level for certain purposes. But the occurrence of p < 0.05 alone isn't proof of *any* hypothesis.
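A quick simulation makes that last point concrete (a stdlib-only sketch of my own; the sample sizes and trial count are illustrative choices): when the null is true by construction, p < 0.05 still shows up around 5% of the time, so a lone "significant" result isn't proof of anything.

```python
import random
import statistics
import math

random.seed(1)

def two_sample_p(x, y):
    """Approximate two-sided two-sample z-test p-value (normal approximation)."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    # two-sided p-value from the standard normal CDF via erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Both groups are drawn from the SAME distribution: the null is true by construction.
hits = 0
trials = 2000
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(50)]
    y = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_p(x, y) < 0.05:
        hits += 1

print(hits / trials)  # hovers around 0.05: "significant" findings with zero real effect
```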

But from what I've seen, most research that doesn't replicate "significance" has worse problems than its use of p-values as the exclusive arbiter of truth. It also has a poor theoretical basis, and what theoretical basis does exist relies on unsatisfied and frequently unrecognized assumptions. None of this will change simply by changing the p-value rule.

Also, regardless of how P-values are used, **all** research needs to be tested and replicated multiple times. Again, changing p-value rules won't change that either.

What people should be demanding is higher-quality experiments, higher-quality analytical methods, more open data, ***A LOT*** more general skepticism about results and ***A LOT*** more replication.

The same thing is still sort of true for CFM: you use computational fluid mechanics to show that adding dimples lowers drag or whatever, but you still validate in a wind tunnel. The CFM is mainly a way to avoid running so many physical tests.

Lots of molecular dynamics consists of Newtonian calculations with tweaks to the potential energy that account for effective properties of QM… like multi-body potentials or whatever. That's what my solid state physics friend said, anyway.
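For what it's worth, here's a minimal sketch of that idea (my own illustration, not any particular MD code): a classical Lennard-Jones pair potential, where the fitted parameters epsilon and sigma absorb the quantum-mechanical detail, and the force that feeds a plain Newtonian integrator is just -dV/dr.

```python
import math

# Classical Lennard-Jones pair potential: the parameters are empirical fits
# (values below are illustrative, roughly in the ballpark for argon).
EPSILON = 0.0103  # well depth, eV
SIGMA = 3.4       # zero-crossing distance, angstroms

def lj_potential(r):
    """Pair potential energy V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (SIGMA / r) ** 6
    return 4 * EPSILON * (sr6 ** 2 - sr6)

def lj_force(r):
    """Classical force magnitude -dV/dr, as used in a Newtonian integrator."""
    sr6 = (SIGMA / r) ** 6
    return 24 * EPSILON * (2 * sr6 ** 2 - sr6) / r

# The well bottom sits at r = 2^(1/6) * sigma, where the force vanishes
# and the energy equals -EPSILON.
r_min = 2 ** (1 / 6) * SIGMA
print(lj_force(r_min), lj_potential(r_min))
```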

“No one has used quantum anything for any practical purpose.” What about, oh, I dunno, transistors? Lasers? Atomic bombs???

Quantum effects are becoming more pronounced at the most advanced nodes, causing unusual and sometimes unexpected changes in how electronic devices and signals behave.

Quantum effects typically occur well behind the curtain for most of the chip industry, baked into a set of design rules developed from foundry data that most companies never see.

[…]

That little bit further away from the interface is a quantum effect, and at 130/90/65nm it became a measurable delta in the behavior of the inversion capacitance. We went off and studied, learned it, and built it into our predictive device models.

https://semiengineering.com/quantum-effects-at-7-5nm/#comment-238213

Instead, it is all empirically determined deviations from the classical models.

And I have no idea why you would have such respect for the Nobel prize, it is a political event.

And quantum supremacy… No one has used quantum anything for any practical purpose. Just like GR, there are empirical fudges to the classical theory they use supposedly based on the “real” theory, but no one actually uses quantum mechanics for real stuff.

But it didn’t actually work that way in BASP when it wasn’t allowed. Ricker et al., in “Assessing the Statistical Analyses Used in Basic and Applied Social Psychology After Their p-Value Ban” (https://www.tandfonline.com/doi/abs/10.1080/00031305.2018.1537892), write:

“In this article, we assess the 31 articles published in Basic and Applied Social Psychology (BASP) in 2016, which is one full year after the BASP editors banned the use of inferential statistics…. We found multiple instances of authors overstating conclusions beyond what the data would support if statistical significance had been considered. Readers would be largely unable to recognize this because the necessary information to do so was not readily available.”

Also, please read “So you banned p-values, how’s that working out for you?” (http://daniellakens.blogspot.com/2016/02/so-you-banned-p-values-hows-that.html) by Lakens.

Using the public’s misunderstanding of ‘chocolate good for you’ vs. ‘chocolate now bad for you’ as an argument is silly (although they use results from a frequentist survey to make their point, oddly enough). Frequentist methods of course allow for error, and this ‘flip-flopping’ based on stochastic data or different findings/experiments would not be solved by Bayesian methods either, and a meta-analysis would be more ‘conclusive’ than individual studies.
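To make the meta-analysis point concrete, here's a minimal fixed-effect inverse-variance pooling sketch (the study estimates and standard errors below are made up for illustration): the pooled estimate carries a smaller standard error than any single study, which is why it looks less ‘flip-floppy’ than the individual headlines.

```python
import math

# Hypothetical studies of the same effect: (point estimate, standard error).
# Note one study even points the "wrong" way, as happens with noisy data.
studies = [(0.30, 0.15), (-0.10, 0.20), (0.20, 0.12), (0.05, 0.18)]

# Fixed-effect inverse-variance pooling: a precision-weighted average.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```

The pooled standard error here comes out below the smallest individual one (0.12), so the combined analysis is more ‘conclusive’ than any single study.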

Moreover, “bias” with alpha levels is being used as a boogeyman. P-values were used in recent Nobel-prize-winning science as well as in analyzing data to determine quantum supremacy. See http://www.statisticool.com/nobelprize.htm and http://www.statisticool.com/quantumcomputing.htm for some examples.

Justin

Re survey: I welcome any query [including surveys] that sheds light on the sociology of expertise, for the latter can either hinder or hasten better decisions.

Lastly, I thought that the survey excluded the email addresses when all was said and done. Sorry, I just didn’t pay that much attention to the chronology of the surveying.
