The Association for Psychological Pseudoscience presents . . .

[cat picture]

Hey! The organization that publishes all those Psychological Science-style papers has scheduled their featured presentations for their next meeting.

Included are:

– That person who slaps the label “terrorists” on people who have the nerve to question their statistical errors.

– One of the people who claimed that women were 20 percentage points more likely to vote for Barack Obama, during a certain time of the month.

– One of the people who claimed that women are three times as likely to wear red, during a certain time of the month.

– The editor of the notorious PPNAS papers on himmicanes, air rage, and ages ending in 9.

– One of the people who claimed, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.”

– Yet another researcher who responded to a failed replication without even acknowledging the possibility that their original claims might have been in error.

– The person who claimed, “Barring intentional fraud, every finding is an accurate description of the sample on which it was run.”


The whole thing looks like a power play. The cargo-cult social psychologists have the power, and they’re going to use it. They’ll show everyone who’s boss. Nobody’s gonna use concerns such as failed replications, lack of face validity, and questionable research practices to push them around!

Talk about going all-in.

All they’re missing are speakers on ESP, beauty and sex ratio, the contagion of obesity, ego depletion, pizza research, 2.9013, and, ummm, whatever Mark Hauser and Diederik Stapel have been up to recently.

Here’s the official line:

The mission of the Association for Psychological Science is to promote, protect, and advance the interests of scientifically oriented psychology in research, application, teaching, and the improvement of human welfare.

I guess that works, as long as you use the following definitions:
“promote” = Ted talks, Twitter, and NPR;
“protect” = deflect scientific criticism;
“advance the interests of scientifically oriented psychology” = enhance the power of a few well-situated academic psychologists and their friends.

It’s a guild, man, nuthin but an ivy-covered Chamber of Commerce. Which is fine—restraint of trade is as American as baseball, hot dogs, apple pie, and Chevrolet.

The only trouble is that I’m guessing that the Association for Psychological Science has thousands of members who have no interest in protecting the interests of this particular club. I said it before and I’ll say it again: Psychology is not just a club of academics, and “psychological science” is not just the name of their treehouse.

166 thoughts on “The Association for Psychological Pseudoscience presents . . .”

  1. > Psychology is not just a club of academics, and “psychological science” is not just the name of their treehouse.

    Sure about that? The evidence here is that this is precisely what psychology and psychological science, in fact, are. You (and I!) just wish that weren’t so . . .

    • Low blow. The replication crisis is particularly germane to social psychology. Clinical psychology doesn’t have the same problem. E.g., we know cognitive behavioral therapy works because we’ve replicated it numerous times.

        • Thank you for the paper, Andrew. I found it thorough and particularly liked the many specific resources y’all pointed clinical researchers to.

      • Solomon:

        I would be surprised if there was any area where this is not present to some degree (or about to emerge very soon).

        There was a time when I thought this sort of thing would be very rare in clinical trials research, but then I encountered various conversations such as “it’s not important what people can do but whether you get along with them”, “my invited guest cancelled and I have to find a replacement, but if I invite anyone outside the _in group_ I’ll never be asked to do anything anymore”, “the abstract was interesting but we didn’t quite understand it so we assigned it as a poster so we can find out more about it without giving it much exposure”, “yes that data source likely is valuable but as it might be publicly available don’t tell anyone about it”, “I agree that they are being totally unreasonable, as they often are, but please don’t repeat my comment as I don’t want them negatively affecting my career”, “we had thought we could convince you your approach was flawed and were surprised it isn’t but given you asked for time away to deal with unfortunate family affairs – we would really like you to use our approach in your work”, etc., etc.

        Most appropriate cat picture this year!

        • Keith:

          If you had to pick ONE area of science (outside of pure mathematics) that you had the most confidence in, in terms of replicability and overall reliability of CURRENT research findings, what would that be?

        • Aaron:

          Anywhere the research work is subject to review before it is carried out and at real risk of full and complete audit.

          For instance, this would be pharmaceutical industry research that is overseen by the FDA (which rules out stuff only seeking approval at the EMA).

          It’s more the real risk than the actual audit per se, but with an actual audit there is more certainty.

          Now, in cases where I have worked with collaborative groups and was able to spot-check the research (those rare cases when I did not find serious errors), I would have confidence.

          And there is John Arnold’s take on this – don’t believe a published paper unless you know the people involved.

          (Math is easy as it’s quickly and cheaply replicated (verified) by other mathematicians, so the risk of errors persisting is low.)

        • From the Economist link above:
          It means recipients of Gates’ largesse can no longer offer their wares to journals such as Nature, the New England Journal of Medicine or the Proceedings of the National Academy of Sciences, since reading the contents of these publications costs money.

          Is this a case of take Gates’ money and commit professional suicide? At least in the shorter timeframe until granting agencies and promotion committees learn to ignore the fact that there are no publications in high-impact journals?

          I like the idea of open access but perhaps some of the details need ironing out here?

        • After seeing the propaganda from the FDA about kratom causing psychosis and death, I wouldn’t trust the FDA to do its job properly regarding regulating pharmaceuticals. There have also been many allegations of corruption by past employees; one even tried contacting Obama back then about it, but of course nothing was done. You can’t trust “science” that happens when there is a lot of money involved.

        • Alexa:

          I meant the research they reviewed, not the decisions they make based on it.

          If all of that is not publicly available, that is a problem.

        • Well, as I have mentioned before, in my field (electrical engineering/wireless/computer science) I see many papers that seem to me useless but few, if any, that seem to be wrong. I must add that “seem to me to be useless” is sometimes a reflection on my knowledge and ability rather than on the utility of the paper.

          I cannot recall a paper in these fields that presented results with P-values. I just did a search of the IEEE Digital Library, which contains more than 4×10^6 documents, for “null hypothesis”; I got only 361 hits–250 of which were conference publications. It was also the case that 298 of those documents were published in this century and only 14 of the publications were dated 1990 or earlier.

          In contrast, a search for “cramer-rao” got 5,051 hits; a search for “Bayesian” got 27,264 hits; and a search for “karhunen-loeve” got 1,899 hits.

          I recall an illustrative case a few years ago. There was a regulatory proceeding in which three or four parties had filed studies regarding measurements of the same phenomenon. The parties came to wildly different conclusions regarding the appropriate regulatory action. If you went into the details of the studies, you could determine that all parties measured the same physics. There was no thumb on the scale when any party made the measurements. The difference was in the interpretation of the measurements—how much degradation of a communication system should be regarded as unacceptable. Ultimately, that is an economic, distributional, or psychological issue—not an engineering issue. Interestingly, there was also an error in all of the studies—well it was not exactly an error—but all studies failed to identify the reason for the disparity in the measurements between two classes of devices. Consequently, some of the studies speculated regarding the applicability of the results for the higher-performing class of devices to the entire universe of devices. I don’t recall that anyone went overboard on this point.

          After reading this blog for some years, my mental alarm bells go off all the time when I read in the press about studies that examine databases and that conclude something such as “wearing tight shoes makes kids grow taller.” But, those alarm bells don’t go off reading engineering journals.

          Bob

        • +1 to Bob’s observation. In Chem. Eng. too it’s a similar situation.

          At least in the practice of Engineering very little is decided on the basis of p-values alone. It’s a different question about academic articles in engineering journals. Those have gotten contaminated by silly testing just as in other areas.

          I think the unifying vector of contamination is academia. The silly-testing virus is pervasive in the University circuit.

        • Keith:

          If you had to pick ONE area of science (outside of pure mathematics) that you had the most confidence in, in terms of replicability and overall reliability of CURRENT research findings, what would that be?

          Not Keith, but I don’t think there is any question that the people reporting the data from the probes we send around the solar system are doing a great job. Here are just two papers I was looking at recently. Notice how it has absolutely nothing to do with significance tests. They carefully describe data and compare to models (either their own or in the literature) and discuss possible reasons for the (always present) deviations:

          http://www.sciencedirect.com/science/article/pii/S0019103516304869
          http://iopscience.iop.org/article/10.3847/2041-8205/816/1/L17

          If you want to choose a role model field, that would be it. I’m not sure what the name would be exactly, “Solar System Astronomy”, I guess. Also look how great they are about sharing large/complicated datasets: http://pds-geosciences.wustl.edu/missions/lro/diviner.htm

        • I’d like to point out that just saying “when there’s no p-values, the results are great!” is a bit convoluted.

          If you have very large amounts of data, especially if it’s fairly accurate, there’s no desire whatsoever for p-values; you just want measures over your sample, and the errors in these measures should be ignorable, given enough data. Of course, all else being equal, this is obviously the ideal place to be.

          But what if you can’t get there? Clinical trials are incredibly expensive, and we know people don’t like the idea of raising the price of drugs. Plus, for ethical reasons, we can’t just give out drugs until our SE’s are epsilon and then make a decision. Do you just give up and say “study something where data’s free?”

          I’m not disagreeing with the idea that there’s a correlation between fields of study that do not report p-values and the reliability of their results (outlier: medical case studies that don’t use p-values). But I am saying it’s a correlation and not the end of the story.
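
          To put rough numbers on the first point, here is a minimal Python sketch (the 0.2 SD effect and the sample sizes are made up purely for illustration) of how the standard error of a difference in means shrinks with n:

          ```python
          import numpy as np

          # Minimal sketch: the standard error of a difference in means
          # shrinks like 1/sqrt(n). The 0.2 SD effect and the sample sizes
          # are made-up numbers, purely for illustration.
          rng = np.random.default_rng(0)
          true_effect = 0.2

          for n in [50, 500, 5_000, 500_000]:  # per-group sample size
              treat = rng.normal(true_effect, 1.0, n)
              ctrl = rng.normal(0.0, 1.0, n)
              diff = treat.mean() - ctrl.mean()
              se = np.sqrt(treat.var(ddof=1) / n + ctrl.var(ddof=1) / n)
              print(f"n={n:>7,}: estimate={diff:+.3f}, se={se:.4f}")
          ```

          At half a million per group the standard error is a few thousandths of a standard deviation, so p-values add nothing; at n = 50 it is the same size as the effect, which is exactly the regime a small clinical trial is stuck in.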

        • That was just an old argument brought up by Meehl. The point is that unless you actually think you have the “one true theory”, which is perhaps true in cosmology and particle physics, but nowhere else, just looking for the mere existence of deviations is pretty senseless, even if you are checking your model (rather than doing NHST):

          In social science, everything is somewhat correlated with everything (“crud factor”), so whether H0 is refuted depends solely on statistical power. In psychology, the directional counternull of interest, H*, is not equivalent to the substantive theory T, there being many plausible alternative explanations of a mere directional trend (weak use of significance tests). Testing against a predicted point value (the strong use of significant tests) can discorroborate T by refuting H*. If used thus to abandon T forthwith, it is too strong, not allowing for theoretical verisimilitude as distinguished from truth.

          Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles that Warrant It. Paul E. Meehl. Psychological Inquiry Vol. 1 , Iss. 2, 1990. https://pdfs.semanticscholar.org/2a38/1d2b9ae7e7905a907ad42ab3b7e2d3480423.pdf

          You care not that there is a deviation from observations, but about the exact way the model deviates, so it can be adjusted before being compared to the next dataset.

          Also, I don’t have any particular respect for clinical trials, and their value is not determined by the cost of collecting the data. Of course, blinding and randomization techniques are great, but they are only the tip of the iceberg of what needs to be done to make sure the results are being interpreted properly. And it is very difficult, if not impossible, to tell if you are correctly interpreting a single value like “difference between two groups” with no a priori theoretical prediction available.
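
          To make the crud-factor point concrete, here is a minimal simulation sketch in Python; the background correlation of 0.05 and the sample sizes are made-up numbers, assumed only for illustration:

          ```python
          import numpy as np
          from scipy import stats

          # Meehl's "crud factor", sketched: assume every pair of variables
          # has some small real correlation (0.05 here, a made-up number).
          # Whether a study "refutes" H0 at p < .05 then depends only on
          # sample size, i.e., on statistical power.
          rng = np.random.default_rng(1)
          crud = 0.05

          for n in [100, 1_000, 10_000, 100_000]:
              hits = 0
              for _ in range(500):  # 500 simulated studies per sample size
                  x = rng.standard_normal(n)
                  y = crud * x + np.sqrt(1 - crud**2) * rng.standard_normal(n)
                  _, p = stats.pearsonr(x, y)
                  hits += p < 0.05
              print(f"n={n:>7,}: share 'significant' = {hits / 500:.2f}")
          ```

          The same tiny correlation goes from rarely “significant” to a near-guaranteed “discovery” as n grows, without the underlying theory becoming any more true. That is Meehl’s point: whether H0 is refuted depends solely on power.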

        • Keith, it’s clear that it’s “present to some degree” in clinical psychology. I’ve seen it in my colleagues and in the published literature. If heeded, the suggestions in Andrew’s linked paper will help move clinical science forward. But our current deficiencies don’t justify the glib “Sure about that?” remark I reacted to.

        • To bring it closer to home (for some of us), I’d be surprised if this does not engulf the data science/machine learning/computational statistics literature also. Do we seriously believe that the “test set” has been used just once? Or that different “random” partitions of training/validation/test sets were not explored?

          To replicate machine learning/data science type work, we have to generate new data of the same type, a parallel to recruiting a new panel. Is this type of replication ever done?
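
          Here is a hypothetical sketch in Python of the second worry (everything is invented: pure-noise labels and a toy nearest-centroid classifier), showing how reporting the best of many random train/test partitions inflates accuracy, while fresh data of the same type does not lie:

          ```python
          import numpy as np

          # Invented data: labels are pure noise, so no classifier can truly
          # beat 50%. Trying many random train/test partitions and keeping
          # the best test accuracy still produces an impressive-looking
          # number by chance alone.
          rng = np.random.default_rng(2)
          n, d = 200, 10
          X = rng.standard_normal((n, d))
          y = rng.integers(0, 2, n)  # labels independent of X

          def nearest_centroid_acc(Xtr, ytr, Xte, yte):
              # Toy classifier: assign each point to the nearer class centroid.
              c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
              pred = (np.linalg.norm(Xte - c1, axis=1)
                      < np.linalg.norm(Xte - c0, axis=1)).astype(int)
              return (pred == yte).mean()

          best = 0.0
          for _ in range(100):  # "just trying a few more splits..."
              idx = rng.permutation(n)
              tr, te = idx[:150], idx[150:]
              best = max(best, nearest_centroid_acc(X[tr], y[tr], X[te], y[te]))

          # The honest check: genuinely new data of the same type.
          Xnew = rng.standard_normal((1000, d))
          ynew = rng.integers(0, 2, 1000)
          honest = nearest_centroid_acc(X, y, Xnew, ynew)

          print(f"best of 100 random splits: {best:.2f}")  # well above 0.5
          print(f"fresh data of same type:   {honest:.2f}")  # ~0.5
          ```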

      • Solomon: There are problems with replication (and other problems such as outcome switching) in drug testing — what (if any) evidence can you provide that there are not similar problems with clinical trials in clinical psychology? In particular, there has indeed been serious criticism of clinical psychology clinical trials, such as the PACE trial (http://statmodeling.stat.columbia.edu/2017/03/19/whassup-pace-investigators-youre-still-hiding-data-cmon-dudes-loosen-getting-chronic-fatigue-waiting-already/) and clinical trials of cognitive behavioral therapy for schizophrenia (http://keithsneuroblog.blogspot.com/2016/01/science-is-other-correcting.html).

        • Martha, I’d imagine your specific concerns about outcome switching are warranted for the therapy RCT literature, too. Also, I have no interest in justifying the Pace trial data sharing mess. As far as I can tell, clinical psychologists are particularly terrible at sharing their data. The moves the Psychological Science crowd have taken are admirable in that regard.

    • > Psychology is not just a club of academics, and “psychological science” is not just the name of their treehouse.

      Sure about that? The evidence here is that this is precisely what psychology and psychological science, in fact, are. You (and I!) just wish that weren’t so . . .

      GS: Mainstream psychology, which is an extremely broad endeavor, is indeed a cesspool. But heck, so is most of what is called “social science” (and other fields – much of neuroscience for example). These endeavors are deeply flawed conceptually and methodologically. Indeed, most of psychology and much of neuroscience (and much of social “science”) is not science at all. Ethology had the chance to be a natural science (it was originally sort of the behaviorism of biology) but that was not realized, as ethologists never developed a very sophisticated philosophy of their endeavor. Now, much of it is frankly mentalistic (now called “cognitive”), along with “evolutionary psychology,” which is really just a handmaiden to cognitive psychology. There is only one natural science of behavior, and it is now called “behavior analysis.” A good way to think of behavior analysis and mainstream psychology is to imagine that intelligent design won the conceptual arguments and only a small, dwindling group of biologists take the selectionist perspective. That is analogous to mainstream psychology. And, as most seem aware here, mainstream psychology’s reliance on NHST (and ignorance of SSDs) has rendered it largely garbage even in the “fact department.” To be fair, though, some findings in mainstream psychology are both reliable and somewhat general, and a few even apply to individuals’ behavior (which is, or should be, the subject matter of psychology).

      • I know this game.

        I get to spend time with all kinds of people interested in human behaviour
        (cognitive psychologists, neuroscientists, philosophers of mind, neuropsychologists,
        clinicians, computer scientists, etc.),
        and we all think our approach is the One True Path to Truth,
        and everyone else is barking up the wrong tree
        (I believe this myself, to be fair).

        I don’t know any of the Behaviour Analysis crowd personally, but I see you guys think the same way, which is nice.

          “…we all think our approach is the One True Path to Truth.”

          GS: No doubt. But the mistake is to think that that means nobody’s right (with due respect to the fact that “right” raises questions about what “scientific truth” is – a philosophical issue upon which scientists may be divided). It has been argued, I think persuasively:

          http://www.behavior.org/resources/88.pdf

          that conceptual issues are largely neglected by much of psychology. But this does not mean that the approaches are unencumbered by underlying assumptions, just that those assumptions and their implications are not analyzed. And the conceptual foundations are amenable to attack – but not an empirical one. Conceptual issues are, by another name, philosophical issues, and philosophical analysis is the method. Again…just because everybody is sure that they are right doesn’t mean that they are right, it doesn’t mean that everybody’s “right in their own way,” and it doesn’t mean that one of the approaches cannot actually BE right and the others wrong.

    • But psychology, aka psychological science, seems at best to have created as much “disease” as it has helped with humans and their behaviors. Well, maybe? http://www.americantable.org/2012/07/how-bacon-and-eggs-became-the-american-breakfast/ Heart disease, that is! :) Thanks to the nephew of Freud! lol Psychology is as much about how to control others and influence people’s politics and habits as it is about mental health and social interactions! I think they need to get their collective heads out of the sky and just help individuals in their quest to cope with what is, instead of pointing people towards social justice positions. Might as well be an addictive substance: it feels good, until you’re broke and the social justice high has become social tyranny! It’s not their fault though, they have a brain disease! GTFOH!

  2. Andrew, your descriptive analysis (“Included are”) sorely lacks statistical support. Moreover, your summary statistic is shamefully misleading: you counted one person TWICE, suggesting that there are 7 perpetrators instead of just 6. This significantly exacerbates the problem.

  3. Couldn’t agree more with the last couple sentences in this post. Some of us are trying to alleviate existing problems (of course Bayesian methods–and Stan!–play a role): http://www.psychologicalscience.org/conventions/annual/2017-workshops

    See the two workshops in the above link –>
    1) Bayesian Inference With JASP: A Fresh Way to Do Statistics, and
    2) Computational Modeling of Decision-Making Tasks With a Single Line of Coding: Modeling Can Be as Easy as Doing a T Test

  4. Andrew, your commentary here is most welcome. I saw this ad for the meeting several weeks ago and immediately wrote letters to colleagues to discuss leaving the APS. If things don’t change soon I know that I won’t be renewing my membership. I know that there are others who feel similarly.

    When the APS was founded it was because psychological scientists felt the APA was not scientific enough. The same thing is starting to happen to the APS. There’s the Psychonomic Society, which may have been as strong as APS at one time but currently has a much smaller membership. In a typical Psychonomics meeting most of these APS authors would get destroyed in Q&A. My feeling, having been to both, is that it is very unfortunate that the Psychonomic Society meetings have fallen off so much. Perhaps it’s because the researchers there are considered too critical.

    • No doubt that the Psychonomic Society meetings are smaller than the APS meetings, but that is mostly because the latter has grown so much. Each Psychonomic Society meeting program lists the number of submissions to the past 10 years worth of meetings. They have grown from 883 in 2006 to 1306 in 2015. It’s been pretty steady growth, except for a bump in 2009 when the meeting was held in Boston.

      I agree that the Psychonomic Society meetings (and journals) are more rigorous, but I have some concern that the Psychonomic Society is trying to become more like APS (lots of awards and attempts to generate hype about articles). The two societies are still worlds apart, though.

    • The issue with APS has a lot to do with bad management. When Kraut announced his retirement, APS launched a nationwide, comprehensive search for a new director. After many months of searching, who was the best possible candidate? The current deputy director! Now, who expects things to change for the better in that case? On top of that, while Kraut is now “Executive Director Emeritus”, word on the street is he calls most of the shots anyway.

      How is APS as a place to work? Well, it earns a low 1.4-star rating on Glassdoor: https://www.glassdoor.com/Overview/Working-at-Association-for-Psychological-Science-EI_IE816952.11,48.htm and they have a lot of staff turnover, judging by the updates of their staff directory over time: http://web.archive.org/web/*/http://www.psychologicalscience.org/index.php/about/staff-resource-directory

      Now, does a well-managed, cutting-edge organization end up with low Glassdoor ratings and high employee turnover?

      Does the board do anything? From what I’ve seen, most of them just rubber-stamp any request from the director and staff, but I’m ready to stand corrected if someone can bring up a situation where they didn’t.

      Governance? Go to the APS website and try to find a copy of their financial statements. I couldn’t find them. Note that as a non-profit, they are required to file an IRS Form 990 that provides basic financial information. Well-run organizations publish this right on their own website. Not APS. Meanwhile, APA publishes a 50-page annual report, including financials: http://www.apa.org/pubs/info/reports/2015-report.pdf

      Psychonomic Society has been growing, but part of the reason is the conference is free for members and undergrads. They made that decision a few years ago, and I think it does help get more people involved, but just beware it’s not necessarily popularity that has increased Psychonomic’s numbers. (APS did something similar — giving a free 1-year APS membership to all attendees of their ICPS conference.)

      As Greg stated, Psychonomic has a long way to go to be the size and stature of APS. I no longer go to APS and have given up my membership, but it seems to me like Psychonomic may end up taking over APS’s position, at least among real academics.

      • Funny that in this blog thread there seem to be diametrically opposite views on whether APS or APA is doing better. See E.J. Wagenmakers’ comment above.

        • Oh I didn’t mean to hold up APA as a great organization all-around — they’re not (the handling of torture interrogations comes to mind). Just pointing out how even they publish a detailed annual report accessible to everyone.

          If I had to pick the best-run organization in the field, I guess I’d go with SPSP (social psych). They had their issues also, like the embezzlement issue (and no heads rolled over that, even the person who handed _signed blank checks_ to the person who did the embezzling) that cost members hundreds of thousands of dollars. However, they do seem to be very innovative lately in terms of their annual meeting and what they offer to membership.

    • The video doesn’t seem to work. It claims that it isn’t available. Bow-wow. Mou. Is it the universe or me that’s broken? “Who knows, not me”, said David Bowie.

      • A sad little kitty: Apologies. I am the world’s worst typist. The link was to a video clip of “Always Look on the Bright Side of Life” sung by the character played by Eric Idle in the 1979 Monty Python movie LIFE OF BRIAN.

    • Carol:

      I’m sure there are many good speakers at the conference. After all, Psychological Science publishes lots of good stuff. And it doesn’t surprise me that there will be some bad stuff at the conference too. Perfect quality control is impossible. What struck me was how many of the notorious figures in the Psych Science / PPNAS world were featured in that special ad. It really looks like a power play, with the treehouse gang giving everyone else the bird and saying, Yeah, we got the power and we’re gonna keep doing what we’re doing. I mean, really, an invited talk by the guy who said that 20% of women change back and forth between Democrat and Republican every month? Power pose?? Etc. These guys are the boss and they want everyone to know it.

  5. As the former program chair for APS, I was very disappointed too to see who was highlighted in the Presidential Address and the awards. For the last 4 years we worked very hard to bring the issue of psychological rigor and scientific misconduct to the forefront and make APS the psych organization known for scientific ethics and rigor. We did make significant strides with changes in the Psych Science journal and by making this issue well known to the members. The awards are not selected by the program committee but by the APS Board. The Presidential symposium is selected by the president, not the program committee. Susan Goldin-Meadow selected the speakers (like Amy Cuddy), not those representing most of the program. This is why it is important to think about who runs an organization. I am sad to say that I will be boycotting APS this year because of these choices. I think many of us tried hard to represent the science that was such an important part of our separation from APA. I still think APS can be that organization but we are still in crisis and in a struggle.

    • Pam:

      Yeah, the Cuddy invite is particularly provocative. I think Griskevicius’s work is just as bad, and it doesn’t belong in an invited presentation in any science conference that I could imagine, but Cuddy is out-and-out notorious. That’s one reason that I saw this as a political statement on the part of Team Treehouse: they want to demonstrate that they can put Cuddy on the featured program, just cos they can.

    • Kudos to Pam on boycotting APS. I think influential academics need to stand up against bad science. I think your boycott needs to be publicized more.

      If more people voted with their feet then the boards & chairs would think twice before shoving such egregious crap down our throats.

  6. Hi Andrew,

    I consider APS to be a microcosm of psychology (and perhaps science at large): there are those who push hard for reform, and then there are those in positions of power who feel that they have done nothing wrong and that life would be best if nothing changed. There has been tremendous progress in psychology over the last five years, and nowhere is this more visible than at APS: Psych Science is a completely different journal than it was before — it now stresses openness, awards badges for responsible scientific behavior, and the new editor has stated that he supports preregistration. On top of that, APS has just launched a new journal (https://www.psychologicalscience.org/publications/ampps). This will be a fantastic journal, and I am willing to bet that you’ll have a paper in there within two years. And then there is Perspectives on Psychological Science, an APS journal that –under the leadership of Bobbie Spellman– has played a key role in the scientific revolution we find ourselves in today. In fact, your “multiverse” paper was just published there. Wolf Vanpaemel even presented it in Vienna at the international APS conference, where the methods sessions were very well attended.

    So I think the glass is half-full. Yes, more can be done, but the situation is of course tremendously difficult from a political and personal point of view. So I think you are right to suggest that those who have enthusiastically put forward certain bold claims would do well to engage in some self-reflection. And yes, APS should reconsider how it does its advertisements; they are probably designed by a team of people who may not realize how the times have changed. On the other hand, compared to other societies, APS is a shining example. The Psychonomic Society has taken some action (although much less than APS), but the APA has simply refused to budge *at all*. It will be interesting to see where we stand in another 5 years!

    Cheers,
    E.J.

    • +1 for Bobbie Spellman, who gave some really interesting comments at the annual BITSS conference as part of a panel on transparency in Psychology publishing. She seems to be fighting the good fight for what appears to be very little reward beyond making the profession better.

      …which makes me wonder: with her and Brian Nosek there, what are they putting in the water at U. Virginia? Truth serum? Chutzpah juice ™? Or maybe this actually happened? http://statmodeling.stat.columbia.edu/2016/01/07/28459/

        • Let’s just say — UVA psych dept has been an interesting place over the last bunch of years… (Though Brian and I are mostly gone; he to COS and I to the law school.) The thing is — it has been personally very civil, we have parties, we treat our students well, we get along. There are just some topics that should not be discussed in public.

    • Thanks, E.J. And thanks, Andy (if I may) — as Editor of Psych Science, I appreciate your criticisms and share many of your concerns. But let me say, for what it is worth, that APS has been very receptive to my proposals aimed at enhancing transparency and replicability in Psych Science. Change in large organizations often takes quite a bit of time. At present, the left hand doesn’t always know what the right hand is up to. But my impression is that Sarah Brookhart (Executive Director), Todd Reitzel (Director of Publications), and Roddy Roediger (Publications Committee Chair) are all firmly committed to transparency and replicability. Dan Simons (who was the RRR editor for Perspectives on Psychological Science) will be Editor of the new methods journal that E.J. mentioned, and Dan is a leader of efforts to cut the bullshit and raise the science in psychology. Recent APS conventions have included multiple methods-oriented sessions that were extremely well attended, and recent issues of the APS Observer have included valuable methodological pieces. Shwaya shwaya, little by little.

      Steve

      • Steve:

        Thanks for your note. It’s good to know that APS is not a monolithic organization, despite what it might look like from that announcement above. I agree that change is difficult, and I appreciate the hard work you’re doing in this regard.

        • “More to come!”

          I am hoping Psychological Science will soon introduce Registered Reports (https://osf.io/8mpji/wiki/home/) to allow researchers who don’t want to contribute to publication bias, and who want to adhere to some higher standards, to submit their research.

        • That may indeed come about, albeit probably not in the immediate future. There is a lot to be said for RRs and I am actively exploring the possibility (but it is not my top priority).
          Steve

        • “(but it is not my top priority).”

          I find it incomprehensible that this is not your, or any editor’s, priority.

          Even things like pre-registration or open data, which “Psychological Science” incentivizes through badges, can be considered to be pointless with regard to gauging the evidentiary value of findings when researchers and journals can just pick and choose what to submit/publish based on the results.

          I find it incomprehensible from a scientific standpoint that Registered Reports are not mandatory for papers based on hypothesis testing, or at least offered at all psychology journals as an option for authors.

          I have now officially given up on Psychology.

          I thank you for your reply though.

        • Anonymous, if registered reports are necessary for you to have faith in a field, then I think you have to give up on essentially all of current science. No scientific field currently makes such reports a central part of publishing.

        • Greg:

          There are certain areas where “just pick and choose what to submit/publish based on the results” is being limited or was never much of an option in the first place, but outside those, simply not believing the published literature (unless one knows the authors) is very reasonable and hard to criticize.

        • If you go to the APS’s webpage for Psychological Science
          http://www.psychologicalscience.org/publications/psychological_science

          you will find the sentence “Watch Geoff Cumming’s video workshop on the new statistics.” which contains a link to
          http://www.psychologicalscience.org/members/new-statistics

          He’s not teaching hierarchical Bayes models in Stan—but he is saying lots of things that make sense. I think the presence of this link should be regarded as real progress.

          Bob
          PS my apologies if there have been posts on this point and this post is duplicative.

        • Thanks for sharing those, Bob. I’ve not seen them referenced anywhere else here. The fact that these are prominently displayed on the home page for Psychological Science (along with acknowledgments of the most common statistical problems that have led to published findings failing to replicate) is encouraging.

          I forwarded these videos along to some friends who have gotten tired of listening to me lecture them on p-values.

        • If memory serves me, Geoff Cumming has been preaching for several years. The question is how many people in the psych community have been paying any attention.

    • Related: there is now a push on Twitter, by influential people like Jeff Leek, toward deemphasizing the problems with replication and p-values. The arguments are that there is so much else that is wrong about science besides p-values, so let’s shift focus to that, as if it is an either-or decision that has to be made. And Leek is pointing to the dangers of “overhyping” the replication problem (the US government reducing funding, citing non-replicability). So now people can fight back by saying, hey, p-values are not the biggest problem (true), and hey, all you mini wannabe Gelmans whining about nonreplicability are costing us dollars, man. Stop it!

      • There is an accidental war on science by scientists hyping reproducibility problems. This is the first major political salvo.

        https://twitter.com/jtleek/status/847206145546526720

        Disclaimer: I am pro-science, not anti-science. The problem is many, many people passing off something else (NHST) as science, perhaps most of them not even realizing it.

        I’m sorry to say it, but I really hope this isn’t just a political thing about the EPA and that NIH gets a big cut too. This isn’t even about the wasted $28 billion a year or whatever it really is; most likely that money will just be wasted some other way instead.

        The point is to shock people into thinking about what they are doing (destroying the public faith in science). The hope is that if the people in it for career/status reasons leave first as the money dries up, it may stem the flow of BS being produced. I’ve felt this way for years now, even that NIH should be shut down and the useful stuff like PubMed transferred to a new agency focused on data hosting/sharing.

      • The Honest and Open New EPA Science Treatment Act, or HONEST Act, passed 228-194. It would prohibit the EPA from writing any regulation that uses science that is not publicly available.
        […]
        But Democrats, environmentalists and health advocates say the HONEST Act is intended to handcuff the EPA. They say it would irresponsibly leave the EPA unable to write important regulatory protections, since the agency might not have the ability to release some parts of the scientific data underpinning them.
        […]
        The bill would also require that any scientific studies be replicable, and allow anyone who signs a confidentiality agreement to view redacted personal or trade information in data.

        “The secret science bills the Republicans tried to enact over the previous two congresses were insidious bills, designed from the outset to prevent EPA from using the best available science to meet its obligations under the law. Those bills were constructed to hamstring the ability of EPA to do about anything to protect the American public,” she said.

        http://thehill.com/policy/energy-environment/326380-house-votes-to-restrict-epas-use-of-science

        The way that article describes it, this sounds like a good thing… I wish they would give an example of the type of secret/proprietary data the EPA is currently relying upon though.

    • “…the scientific revolution we find ourselves in today.”

      GS: Wow! Another revolution for psychology? Let’s hope it works as well as that there “cognitive revolution.” Well…one good thing…we can comfort ourselves in knowing that we do not actually see the papers filled with irreproducibility – just their representations in our brains or alleged minds.

      On a more serious note…if mainstream psychology ever figures out how to reliably obtain facts that pertain to individuals again (Why “again”? Think psychophysics, the field that really began experimental psychology – I don’t think Fechner and Weber used too many p-values), all they will then have left to tackle is the fact that mainstream psychology is, conceptually, a disaster. And there’s nothing as bad as having a faulty conceptual base, since the underlying concepts are neither facts, hypotheses, nor theories. They are assumptions that are empirically unassailable.

      Other than that, you know, I don’t have any strong feelings on the subject of psychology.

    • I’d argue the Psychonomic Society hasn’t taken any recent action because they haven’t had as much need. The days of Psychonomics Comics are long past, with their last big scientific shakeup happening in the 90s, when APS was ignoring the issues. APP and PBR would probably each far exceed all of the APS journals in replicability rates (not that they’re anything like perfect).

    • The Presidential Symposium is ENTIRELY the President’s choice. The President also has lots of input into the Keynote and Bring the Family addresses. But really nothing on the awards — that’s an entirely separate committee. The others — depends how much the Prez wants to be involved. (But probably not much for these invite talks.) (Right, Pam?)

  7. Andrew, I think you’re referring to me when you mention the person who “claimed that women are three times as likely to wear red, during a certain time of the month.” Could you please refer me (and your readership) to the place where I have made this claim? I reported this result in an empirical paper, but have never claimed that that observed effect size is the “true” effect size, and, in fact, in several follow-up posts and a published article have made exactly the opposite claim, that the true effect appears to be much smaller (see http://ubc-emotionlab.ca/2014/12/response-to-uli-schimmacks-blog-post/; or http://ubc-emotionlab.ca/2014/05/redpink-redux/; or http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0088852).

    More broadly, you seem to imply in this post that because you take issue with my research on women’s ovulation and clothing choice I should be barred from presenting completely unrelated research at the APS conference. I hope this is not actually what you mean – I’ve conducted numerous studies on a range of topics (see http://ubc-emotionlab.ca/publications/), most of which have been replicated by my own lab, and in many cases, by others’ as well (in fact, I was the lead author of one of the few social psychology studies that was found to replicate in Nosek and colleagues’ Science article reporting replication attempts of 100 studies). Perhaps I’m misunderstanding your post, and this is not, in fact, what you meant to suggest – in which case I’d appreciate you correcting that impression. Thanks.

    • Jessica:

      1. From the abstract of your paper: “Across two samples (total N=124), women at high-conception risk were over three times more likely to wear a red or pink shirt than women at low-conception risk.” This statement was not supported by the data reported in that paper, because your comparison dates were not actually the dates of high and low conception risk.

      2. I do not think you should be barred from presenting research on clothing choice, or on any other topic, at any conference. Nowhere did I say that I thought you should be barred, and I regret if I gave that impression. I think it was a poor choice of the APS program committee to have so many featured speakers whose work has serious statistics problems (in the opinion of myself and other knowledgeable methodologists). The point here was not that you were speaking at the conference but that you, Cuddy, etc., were all given such featured roles. But this is the choice of the APS committee; it’s their call, and, again, I would not at all think you should be barred! Not at all.

      • Thank you for clarifying; I’m glad to hear that this was a mis-impression. Regarding the abstract, I know that you and I disagree about the correct way to measure conception risk, but in my opinion that kind of disagreement does not invalidate a result (and as you know the way that I measured it is supported by a substantial body of prior work. In addition, I think you’re aware that the basic finding has now been replicated using a hormonal measure of conception risk?) If you continue to feel the need to use me as an example of someone who does the kind of science you object to, I’d appreciate it if you could make clear what your objection is. In other words, rather than making an overly broad ad-hominem attack, you might instead say “One of the people who found that women are three times as likely to wear red during a certain time of the month, based on a measure of conception risk that I take issue with”. If you wanted to be even more open, you could follow this statement by noting that I’ve since published pieces explicitly stating that the true effect size is not that large. Or, even better, you could refrain from using me or my work as an example in this context.

        • Jessica:

          Here’s the definition of ad hominem: “(of an argument or reaction) directed against a person rather than the position they are maintaining.” I am not making an ad-hominem attack; I’m specifically disagreeing with things that you published, including the erroneous claim that days 6-14 are the days of peak conception risk (they are not), and your p-values, which are invalidated because of forking paths. The larger problem with those studies is that given any realistic effect size, the sort of data you collected will simply be too noisy to find anything. Carlin and I discuss this general issue in our 2014 paper. These studies never had a chance in the first place. This is a problem that we’ve only really understood in the past few years; before that, it was considered ok to grab some data, hope for statistical significance, and run with it. But now we realize this is a recipe for disaster. Finally, your papers are in the published record, and I don’t think there’s anything improper or inappropriate about pointing out errors in the public record. I think it’s a big problem with science in general and social psychology in particular that people don’t like to recognize that they have made mistakes. In this way I think that senior figures in the field such as Bem, Bargh, and Fiske have set a terrible example.
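
          For readers who want to see the too-noisy-to-find-anything problem in action, here is a minimal simulation sketch in Python in the spirit of that 2014 Gelman and Carlin type M (exaggeration) analysis; the true effect of 0.1 SD and n = 60 per group are made-up numbers chosen to represent the small-signal, noisy-data regime:

          ```python
          import numpy as np
          from scipy import stats

          # Sketch of a type M (magnitude) error analysis in the spirit of
          # Gelman and Carlin (2014). The true effect (0.1 SD) and per-group
          # n (60) are made-up numbers for the small-signal, noisy regime.
          rng = np.random.default_rng(3)
          true_effect, n, sims = 0.1, 60, 20_000

          sig = []
          for _ in range(sims):
              a = rng.normal(true_effect, 1.0, n)
              b = rng.normal(0.0, 1.0, n)
              _, p = stats.ttest_ind(a, b)
              if p < 0.05:
                  sig.append(a.mean() - b.mean())

          sig = np.array(sig)
          print(f"power: {len(sig) / sims:.2f}")  # around 0.1
          print(f"mean |estimate| given significance: {np.abs(sig).mean():.2f}")
          print(f"wrong-sign share given significance: {(sig < 0).mean():.2f}")
          ```

          In this regime the rare statistically significant estimate is exaggerated roughly fourfold on average and occasionally has the wrong sign, so a statistically significant published result tells you very little about the true effect.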

        • “In addition, I think you’re aware that the basic finding has now been replicated using a hormonal measure of conception risk?)”

          I find this statement interesting. From what I understand, researchers and journals in psychology are allowed to pick and choose what they publish based on the results they find. If this is correct, then I reason that there could be hundreds of “failed” replications performed. Or, there could be hundreds of “successful” replications performed. Nobody knows…

          If this makes any sense, stating that there has been a “successful” (or “failed”) replication study published tells you nothing about the validity of the original result given the publication system currently in place in psychology.
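
          A hypothetical sketch in Python of that argument (all numbers invented): even when the true effect is exactly zero, a publish-only-if-significant filter reliably yields a stack of apparently successful replications:

          ```python
          import numpy as np
          from scipy import stats

          # Invented setup: the true effect is exactly zero and 200 labs each
          # run a replication, but only "significant" results get written up.
          # The published record then shows a pile of successful replications
          # of a nonexistent effect.
          rng = np.random.default_rng(4)
          published = []
          for _ in range(200):
              a = rng.normal(0.0, 1.0, 30)  # treatment group, true effect = 0
              b = rng.normal(0.0, 1.0, 30)  # control group
              _, p = stats.ttest_ind(a, b)
              if p < 0.05:  # everything else goes in the file drawer
                  published.append(a.mean() - b.mean())

          print(f"replications run: 200, published: {len(published)}")
          print(f"published 'effects': {np.round(published, 2)}")
          ```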

    • @Jessica:

      I fail to see your point: What’s the substantive difference between the statements

      (a) Andrew said: “women are three times as likely to wear red, during a certain time of the month”

      (b) You wrote: “women at high-conception risk were over three times more likely to wear a red or pink shirt than women at low-conception risk”

      Sure, they aren’t identical. But I fail to see what your underlying point is.

  8. Andrew: by not clarifying your argument with the position you believe I’m maintaining (which, as I keep noting, I’m not), you are implicitly directing your attack against me, as a person, rather than my position. And I’ve got no problems with you raising concerns about the data published in our original Psych Science article; I just believe it’s misleading to not also make clear to your readers that we have addressed many of those concerns in follow-up work. I’m really not sure what this omission accomplishes, other than de-incentivizing those of us who do seek to follow up our studies that were, admittedly, underpowered, with more strongly powered work. It’s hard enough to publish such replication efforts in the field’s most mainstream journals; it’s especially disappointing that blogs like this one also choose to refer only to findings reported in those journals, rather than acknowledge (and even give credit for) follow-up projects that have attempted to deal with the limitations of those prior studies.

    • Jessica:

      I’ve blogged on this too, and I don’t think you and your colleagues have addressed the issues in the follow-up work. It’s just more noisy data, more forking paths, and more p values less than 0.05. The whole thing is just hopeless. The statistical problems are slightly more subtle than those encountered by Satoshi Kanazawa, but the big picture is the same: small signal, noisy data, lots of ways to find statistical significance, lots of ways to declare a win, basically you’re shuffling around random numbers. It’s not just those papers of yours, it’s a problem with lots of psychology, including the celebrated work of Bem, Bargh, etc.

  9. You are taking on a lot of people here, but isn’t this how some (if not most) of the academic world works now? If you are looking for a cronyism-free, drama-free club, good luck!

  10. I generally try to avoid this kind of nonsense, but this time a reply is necessary. This blog post makes sweeping assertions about the quality of the program at the upcoming APS convention, and implies that an oligarchy of powerful social psychology hacks is somehow controlling all of APS, burning it to the ground and gleefully jumping on the ashes, based on a single advertisement of featured speakers that went out by email. I think a little contextual information might be helpful here.

    This assessment is based on the ad’s highlighting of a few speakers who have committed a variety of offenses, in Gelman’s eyes. These offenses include possible previous use of genuinely problematic research practices that have only been widely recognized as such in the last few years; failures-to-replicate with unknown causes; interpreting research findings in a way Gelman finds offensive; arguing fiercely about the medium and tone that are appropriate for discussions of other researchers’ work; and serving as the editor of a journal that published three papers Gelman finds worthy of ridicule. Some of these criticisms may have merit, with respect to particular publications or other professional decisions, but a blog post of this length is not going to sort that out in a thoughtful, balanced way, so the accusations end up reading as cheap shots.

    Four of the featured speakers are recipients of career achievement awards. James and Cattell award recipients are recognized in the program by giving a talk or, in Mahzarin Banaji’s case, by organizing a symposium that highlights their influence on the field. Each of the four award recipients has made massive contributions to psychology, over decades, that have been profoundly influential and have stood up over time. While you may object to a specific professional essay, publication, or editorial decision, I highly doubt that you have the depth of knowledge in each of their fields to challenge the awards themselves.

    Other speakers are talking about their current programs of research, on topics that differ from the particular publications to which you have objected. The reason so many of the invited speakers in this email are from social psychology is that the ad is focused on the personality and emotion “track” of the conference, and much of the work in emotion falls into the subdiscipline of social psychology. I’m pretty sure you’ve also received or will receive emails highlighting the clinical, developmental, cognitive, biological, and methods tracks, but as those don’t support the story you’re selling I don’t expect to see them featured on this blog.

    Let’s be honest, Andrew (can I call you Andrew? It seems that this has become sufficiently personal that a first-name basis is appropriate) – a lot of your outrage is about the inclusion of Amy Cuddy in the presidential symposium, right? The power pose research has been the target of several of your previous essays, for fair reasons. But before you draw insidious conclusions about APS and its motives, you need some background information. As noted in some of the comments above, the speakers in the presidential symposium are selected solely by the current President of APS, who is elected by the APS membership at large. Dr. Goldin-Meadow issued the invitation to Cuddy, and it was accepted, before Dana Carney released her public statement owning up to the problematic research practices that had been used in the original Psych Science paper. I had misgivings about the original invitation, and my personal preference would have been to ask the speaker to voluntarily step down after Carney’s statement was released. However, it’s not up to me, or the program committee, or anyone at APS other than the current president, who has to make the best decision she can in an extremely difficult situation. And here’s another thing. Unless you’re taking Bem’s ESP research way more seriously than your blog post suggests, you don’t know what Cuddy is going to say. Neither do I. You’re clearly assuming that she’s going to double down on the extreme assertions from the original power pose publication and Ted talk, and pretend that there’s no controversy about this. However, Cuddy’s recent interview with the Ted-talk folks (http://ideas.ted.com/inside-the-debate-about-power-posing-a-q-a-with-amy-cuddy/) suggests that her current perspective may be a lot more nuanced and revised than you’re expecting. So what should we do? Banish her from ever speaking at APS again, or listen with an open mind to what she has to say? What about the rest of the researchers about whom you have complained? Do we lock them out of the APS convention in perpetuity because of your concerns, or do we critically evaluate each issue on its own, and make the best assessment we can of the quality of their current work and its potential for impact on our field?

    These are the kinds of real decisions those of us who are actually IN the field of psychology have to make. It’s very easy to sit outside the field and mock us in blog posts and essays on Slate. But most of us got into psychology because we care about people, we want to understand the human experience, and we dearly want to do our bit to improve it. Many of us – a rapidly growing number – are deeply aware that our field is in crisis, that the foundations on which we’ve built careers may be shakier than we thought, and we have a lot of work to do to fix it. That includes the members of this year’s APS program committee. If you bother to look at the entire program, rather than cherry-picking a few individuals to ridicule, you’ll see open and honest acknowledgment of this, and tremendous concern with strengthening the quality of the work we do. In addition to an entire track dedicated to cutting-edge methods, there are panels and symposia on how to improve the reproducibility and validity of psychology research; the future of clinical science; the challenges faced by 21st century neuroscience; the role neuroscience should play in psychology; a critical discussion of whether popular “brain training” programs are actually effective or not; and the role of psychology research in public policy. Nothing on ESP, but we do have a panel on ego depletion – a constructive, moderated discussion among several outstanding researchers of what to do next when replicability problems arise for a major phenomenon with serious real-world implications.

    So while you’re mocking from the sidelines, we’re in the mud trying to fix this thing. I understand that you are tracking not only the state of APS as an organization, but the state of psychology as a field. If you’re serious about either of those, take the time to look over the whole program. Come to the convention and see the butt-breaking work many of us put into figuring out how we can do our research even better (yes, there’s a LOT of argument about all this, and that’s okay!). Then see what you have to say.

    Or do you mostly care about writing blog posts that are sexy and attention-grabbing, but inaccurate? Wouldn’t THAT be ironic?

    Regards,
    Michelle “Lani” Shiota
    Chair, 2017 APS Program Committee

    • It really is that these criticisms were old news already in 1967, and a lot of people are tired of figuring that out after being “tricked” into partaking in, supporting, or believing it:

      http://statmodeling.stat.columbia.edu/2016/05/06/needed-an-intellectual-history-of-research-criticism-in-psychology/
      https://meehl.dl.umn.edu/sites/g/files/pua1696/f/074theorytestingparadox.pdf

      So just start getting everyone entering your community to read that paper and be able to argue why it is wrong, or stop doing that stuff (NHST).

    • Lani:

      You write: “This blog post makes sweeping assertions about the quality of the program at the upcoming APS convention.”

      No. I did not make any sweeping assertions about the quality of the program. I made specific assertions about these special sessions. I said nothing about the general program.

      You write that I “imply that an oligarchy of powerful social psychology hacks is somehow controlling all of APS.”

      No, I never said anything about “all of APS.” Indeed, I said the opposite. I said, “the Association for Psychological Science has thousands of members who have no interest in protecting the interests of this particular club.”

      You write that I say something about “burning it to the ground and gleefully jumping on the ashes.”

      I have no idea what you’re talking about. You’re the only one who’s talking about burning anything to the ground. I was just talking about a treehouse. The fire imagery is all coming from you.

      You write, “possible previous use of genuinely problematic research practices that have only been widely recognized as such in the last few years.”

      I was talking about someone who made the ridiculous claim “that women were 20 percentage points more likely to vote for Barack Obama, during a certain time of the month.” Yes, this was publishable in Psychological Science a few years ago but it was wrong then too.

      You write, “interpreting research findings in a way Gelman finds offensive.”

      Actually, the problem is not that certain interpretations are “offensive,” it’s that they’re in error. And it’s not about “Gelman”; these are just statistical or substantive mistakes.

      You write that I’m bothered by people “arguing fiercely about the medium and tone that are appropriate for discussions of other researchers’ work.”

      Actually, I’m bothered by someone who used the term “methodological terrorist” to describe people who are not terrorists. And then never apologized for calling people terrorists.

      But, then again, I live in NYC, so discussion of “terrorism” is kinda personal to me. Maybe for people who live in Princeton, New Jersey, far away from any attacks, “terrorism” is just a cool way to insult people you disagree with.

      You write, “serving as the editor of a journal that published three papers Gelman finds worthy of ridicule.”

      I was talking about someone who personally accepted several papers that actually are worthy of ridicule, by lots of people, not just me. If you want to defend the himmicanes, air rage, and ages-ending-in-9 papers (and I think there were a few more that I can’t remember right now), go for it. But, seriously, these papers are junk science. It’s not about me.

      You bring up Mahzarin Banaji.

      I know nothing of Mahzarin Banaji. I was not saying that all the people on that list do bad work, just that there were several people on the list who represented certain unfortunate trends in recent psychology.

      You write, “a lot of your outrage is about the inclusion of Amy Cuddy in the presidential symposium.”

      No. Cuddy is just one of several examples. If I had to choose, I guess I’d say I’m most bothered that the “terrorism” person is speaking, and maybe I’m second-most-bothered that the ovulation-and-voting person is speaking. Because I found the terrorism comment to be incredibly offensive, and I found the ovulation-and-voting thing to be particularly ridiculous. After all, power pose could’ve been real; it just turns out that it didn’t replicate and that the original paper was disowned by its first author. The ovulation-and-voting thing never had a chance.

      But what really bothered me was not any one or two of the speakers but that there were so many who were associated in one way or another with cargo cult science. I guess had Cuddy not happened to be on the list, I’d feel about 6/7 as annoyed.

      You write, “you don’t know what Cuddy is going to say. Neither do I.”

      I agree. As I wrote in my comment above to Jessica Tracy, I don’t think anyone, Cuddy included, should be banned from speaking at APS or even be banned from giving a featured talk. Cuddy’s a free person and should feel free to speak on whatever she wants. Despite what you say, I’m not assuming anything about what she’ll say. I still think it would’ve been much more interesting to hear from Eva Ranehill.

      Again, had the featured sessions included one or two of the seven people I pointed to in my post, I’d say, sure, that represents a mix of what people are doing in the field. But a power pose researcher and an ovulation and clothing researcher and an ovulation and voting researcher and the person who greenlighted several ridiculous papers for PNAS and also went around calling people terrorists and a researcher who responded to a failed replication without even acknowledging the possibility that their original claims might have been in error and the person who claimed, “Barring intentional fraud, every finding is an accurate description of the sample on which it was run” . . . that’s a lot to take.

      And, again, it’s not just about me. See the comments from various psychologists such as Pam Davis-Kean above.

      You ask, “what should we do? Banish [Cuddy] from ever speaking at APS again, or listen with an open mind to what she has to say?”

      Just to repeat: No, I’m not talking about banishing anyone. Hell, invite Diederik Stapel and Marc Hauser too, if you want. I don’t think Daryl Bem should be banned from speaking, either. Invite anyone you want. Satoshi Kanazawa, whoever. And, of course, once they’re speaking, have an open mind. I never suggested otherwise.

      Now, should you invite someone whose work has been discredited to give a featured talk? Should you advertise that talk to the world? If you want to, go for it. It’s your call. But if you choose to feature such speakers, don’t be surprised if outsiders—people not part of your committee—choose to laugh at the APS. You make the call. Choosing Cuddy and others in your lineup of specially featured speakers is a bold, controversial decision. So own the controversy, and don’t be surprised if outsiders such as myself consider it both funny and disturbing. Next year you can up the ante with invites to Stapel, Hauser, etc.

      You write that my blog post is “sexy and attention-grabbing, but inaccurate.”

      There’s nothing about sex in this post, and I do not apologize for grabbing attention: I went to the trouble of sharing my thoughts and so, sure, I want people to read what I wrote. But “inaccurate”? What’s inaccurate about anything in my post above?

      Finally, I agree 100% that being on the program committee is hard work, and I’m sure there’s lots of great stuff in the conference, as E. J. Wagenmakers noted above. Given all the hard work you and your colleagues have done in putting this conference together, I think it’s really too bad that the APS decided to promote it with a set of featured speakers that includes many representatives of the crisis in psychology. The people to get angry at are the ones who invited all those speakers and the ones who go around calling people “terrorists.” Not me. I’m just pointing out the problem.

      • AG: I was talking about someone who made the ridiculous claim “that women were 20 percentage points more likely to vote for Barack Obama, during a certain time of the month.”

        GS: When you say things like this – which you do often enough – it makes it sound like you are criticizing something other than statistical errors, methodological problems, or even misinterpretation of the data. It makes it sound like your claim is that the statement is prima facie false for some reason. I get the impression that (and the power pose stuff is the same) you are implying that the conclusion is just too silly – that the conceptual foundations behind the specific paper are flawed. Now, don’t get me wrong, I’m not endorsing the statement (“women are 20…”) but, rather, saying that it is a logical possibility – and even remotely plausible given that there are almost certainly some behavioral changes associated with women’s menstrual cycles. I do see things about the statement that can be criticized without regard to data or its analysis but that amounts to a criticism of the clarity of the writing.

        • Glen:

          In this case, no, the claimed differences are not scientifically plausible. There’s lots of evidence from surveys that very few people change their opinion during that phase of a general election campaign, far fewer than 20% of women of childbearing age. See for example here.

          The scientific errors go hand in hand with the statistical errors, and it’s no coincidence that a claim which makes no scientific sense is published based on statistically flawed evidence. It’s just like Daryl Bem. Yes, it is somewhere on the very outside edge of theoretical possibility that Cornell students have ESP, but it happens that he made this demonstration using laughably bad statistical evidence. Same with Kanazawa. Yes it’s possible that differences in sex ratios for the groups he studied happen to go in the direction that he reported, but if so it’s basically completely random given the quality of his data: I could just as well sit in a room with my sisters and start rolling dice and then write papers about the inherent propensity for a die to come up “6” when thrown by a man or a woman.
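          To make the dice analogy concrete, here is a minimal simulation sketch in Python. The group size, number of replications, and cutoff are arbitrary choices of mine, not from any actual study: roll dice for two groups, test the difference in the proportion of sixes, and pure noise still delivers “significant” die-throwing talent in both directions.

            import random
            from math import erf, sqrt

            random.seed(1)

            def roll_sixes(n_rolls):
                # Count how many of n_rolls fair dice come up 6
                return sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)

            def two_sided_p(x1, x2, n):
                # Normal-approximation p-value for a difference between two proportions
                pooled = (x1 + x2) / (2 * n)
                se = sqrt(2 * pooled * (1 - pooled) / n)
                if se == 0:
                    return 1.0
                z = abs(x1 / n - x2 / n) / se
                return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

            n, reps = 50, 10_000
            hits = {"men": 0, "women": 0}
            for _ in range(reps):
                men, women = roll_sixes(n), roll_sixes(n)
                if two_sided_p(men, women, n) < 0.05:
                    hits["men" if men > women else "women"] += 1

            print(hits)  # "significant" sex differences in die-throwing, in both directions

          At a 5% cutoff, roughly a couple hundred of the 10,000 runs come out “significant” in each direction – publishable effects either way, manufactured from dice.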

          The ovulation and voting paper was bad for so many reasons, including that its evidence was p-values which were obtained using the garden of forking paths, including that it was making a statement about within-person variation using between-person measurements, including that the study was dead on arrival given any plausible effect sizes. And one of its problems was that its claims made no substantive sense. One aspect of the sort of bad science that we’ve been discussing is that it will often put a microscope on noise, without any serious thought about the connection between measurement and the underlying questions being studied. I accept that a lot of this sort of junk science exists in psychology and it can’t just be banned, but I’m not happy to see it part of the highlighted special features program of the conference.
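          And to spell out “dead on arrival”: here is a back-of-the-envelope power calculation for a between-person comparison of proportions. The effect size and sample size are hypothetical values I chose for illustration; they are not taken from the actual paper.

            from math import erf, sqrt

            def phi(x):
                # Standard normal CDF
                return 0.5 * (1 + erf(x / sqrt(2)))

            def power_two_proportions(p1, p2, n_per_group, z_crit=1.96):
                # Approximate power of a two-sided z-test for a difference in proportions
                se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
                shift = abs(p1 - p2) / se
                return (1 - phi(z_crit - shift)) + phi(-z_crit - shift)

            # Hypothetical: a 2-percentage-point true difference, 150 women per group
            print(power_two_proportions(0.50, 0.52, 150))  # ~0.06, barely above alpha = 0.05
            # The claimed 20-point difference would be easy to detect -- if it existed
            print(power_two_proportions(0.40, 0.60, 150))  # ~0.94

          With power barely above the significance level, the only route to a publishable result runs through noise, and any “significant” estimate is all but guaranteed to be a gross overestimate.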

        • Greg Francis: What nonsense from the speaker! The reason why “methodological terrorists” and “self appointed data police” exist is because the field has not been welcoming to quantitatively-oriented people with valid criticisms of published research. Does Susan Fiske have any idea how difficult it can be to publish statistical commentaries on social-psychological articles?

          My observation over the years is that graduate training in social psychology tends to focus on being supportive of one’s graduate students, not on rigor. Is this good for science? No.

        • The remarks in the video take me back to the 80’s, when one branch of feminism (consisting mostly of philosophers and psychologists) pushed the idea of “women’s ways of knowing”. It seemed really anti-feminist to women (such as myself) in most STEM fields. It was (still is) the opposite of welcoming to us — in contrast to how most STEM fields themselves have become increasingly more welcoming to women.

    • Lani:

      Just one more thing. You write, “I generally try to avoid this kind of nonsense.”

      I don’t like nonsense either. But, again, don’t blame me. I’m not the one who published nonsense like himmicanes, I’m not the one who published nonsense like the claim that 20% of women change their vote preference depending on the time of the month, etc etc. My concern is that the Association for Psychological Science is promoting nonsense.

      They’re not just including talks on various controversial topics in a big conference. That would be fine with me. I’m a big-tent kinda guy. Let Daryl Bem speak, let Amy Cuddy speak, then invite Eva Ranehill and Uri Simonsohn to explain why Amy Cuddy is wrong. Etc. That would be fine.

      But that’s not what APS is doing. They’re not just letting various controversial people speak. They’re featuring several researchers who have had serious statistical or replication problems in their work (or, in Fiske’s case, in papers that she’s personally vetted for publication). Put it all together and the APS in its featured sessions is making a strong statement in favor of nonsense.

      I think you and I have the same goals here. We’re both pro-science and anti-nonsense. The nonsense bothered me so much that I wrote an angry blog post. On the plus side, the nonsense has also motivated lots of research on my part, including my 2014 paper with Carlin on type M and type S errors.
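      For readers who haven’t met the type M (magnitude) and type S (sign) framing, here is a minimal simulation sketch. The true effect and standard error are illustrative values chosen to represent a noisy study, not numbers from any particular paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def type_m_s(true_effect, se, n_sims=100_000, z_crit=1.96):
            # Simulate replications of a noisy study, then summarize
            # only the ones that reach statistical significance
            estimates = rng.normal(true_effect, se, n_sims)
            significant = estimates[np.abs(estimates) > z_crit * se]
            power = len(significant) / n_sims
            type_s = np.mean(np.sign(significant) != np.sign(true_effect))  # wrong-sign rate
            type_m = np.mean(np.abs(significant)) / abs(true_effect)        # exaggeration ratio
            return power, type_s, type_m

        # A small true effect measured with a large standard error
        power, type_s, type_m = type_m_s(true_effect=0.1, se=0.5)
        print(f"power {power:.2f}, type S {type_s:.2f}, type M {type_m:.1f}")
        # Roughly: power 0.05; about a quarter of significant results have the
        # wrong sign; significant estimates overstate the truth by a factor of ~10

      In other words, in a low-power setting, statistical significance is not a seal of approval; it is a selection filter that picks out the most extreme, most misleading estimates.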

      I think the research is important, and I think the outrage is important too. I care about psychology. I think it’s great that you’re working on the inside to improve the APS program. And I think that you should realize the value of people like me, coming from the outside, pointing out serious problems with the message being sent by APS’s choice of featured lineup.

    • How about inviting Ranehill to talk after Cuddy’s talk, just as an example – next time round? Or even Carney? She was the first author of that famous paper, after all; shouldn’t she be honored too?

    • “Four of the featured speakers are recipients of career achievement awards. (…) While you may object to a specific professional essay, publication, or editorial decision, I highly doubt that you have the depth of knowledge in each of their fields to challenge the awards themselves.”

      https://en.wikipedia.org/wiki/William_James_Fellow_Award

      And the winner of the 2008-2009 “William James Fellow Award” is…*drum roll*…That person who slaps the label “terrorists” on people who have the nerve to question their statistical errors.

      And the winner of the 2013 “William James Fellow Award” is…*drum roll*…That person who stated that getting a significant result with n = 10 often required having an intuitive flair for how to set up the most conducive situation and produce a highly impactful procedure.

      And the winner of the 2015 “William James Fellow Award” is…*drum roll*…The person who claimed, “Barring intentional fraud, every finding is an accurate description of the sample on which it was run.”

      On a more serious note: perhaps it would be interesting for APS to have a discussion about individual awards in science. I think science is a collaborative effort in which individual awards have no place. If I were offered one, I would refuse it based on that reasoning alone.

      • Unrelated to this topic, I guess, but I agree on the individual award thing – I just haven’t ever really thought about it. Would you give it to a lab, perhaps?

        • I thought about that, but I couldn’t really think of an alternative. If you give it to a group of scientists, it still more or less acts like an individual award. I think awards are important, as they do serve the purpose of recognizing good work, but I don’t know a way around the individual part.

        • > I think awards are important, as they do serve the purpose of recognizing good work
          Awards would be important _if_ they identified and encouraged good work.

          For instance, I know of one current faculty that was bullied by its administration into diverting funds from junior faculty development to shore up efforts to get awards for senior faculty. Apparently that administration feels they have little choice given the competition they currently face – they may well be right.

        • Well, yes, I was working with that assumption. In an ideal world, etc. Just as in an ideal world we wouldn’t give awards to individuals. I think this post highlights where we currently are in relation to that ideal world.

        • I would say the same reasoning applies to labs, so no awards for them either.

          https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants

          “The metaphor of dwarfs standing on the shoulders of giants (Latin: nanos gigantum humeris insidentes) expresses the meaning of “discovering truth by building on previous discoveries”.[1] This concept has been traced to the 12th century, attributed to Bernard of Chartres. Its most familiar expression in English is by Isaac Newton in 1676: “If I have seen further, it is by standing on the shoulders of giants.”[2]”

      • And the winner of the 2019 “William James Fellow Award” is…*drum roll*…:

        The person who presumably (co-)wrote and/or agreed with the following sentence “Indeed, the evidence is also consistent with the opposite conclusion — that the reproducibility of psychological science is quite high and, in fact, statistically indistinguishable from 100%.”

        Congratulations to Mr. Gilbert, who, in my honest opinion, definitely belongs among the previous winners mentioned above! I thank him for his “lifetime of intellectual contributions to the basic science of Psychology”.

        Source 1: https://www.psychologicalscience.org/observer/2019-william-james-fellow-award-goes-to-phelps-gilbert-nadel-werker

        Source 2: https://projects.iq.harvard.edu/psychology-replications

      • +1

        I was previously under the impression that APS was a more rigorous alternative to APA’s conference. Their promotional materials this year and the unnecessarily aggressive responses to criticism in this blog have convinced me otherwise.

    • Regarding Cuddy’s current perspective, the Ted interview is very similar to the interview she did for “Science of Us” in Nymag last year. Her position is “nuanced and revised” in that she’s publicly downgraded her belief in the effect of power posing on hormones and risk-taking to “agnostic”. But, she now seems to be engaged in a bit of revisionism meant to rescue her original work: she claims that the primary dependent variable all along had been subjective feelings of power. This, despite the prominent placement of hormones and risk-taking in both the original article and the Ted talk, and despite subjective feelings of power previously being referred to as a “manipulation check”, and despite Carney’s claim that risk-taking behavior was the original primary dependent variable.

      Every now and then in that Ted interview, she briefly starts to acknowledge the failures of 3 of the original 4 hypotheses to replicate (the most interesting 3 too – is “increased subjective feelings of power” an exciting enough effect to have warranted all the publicity on power posing?)… but then she quickly goes into qualifications and platitudes about the progression of science and optimistic notes about how someone else’s work just might end up proving her right all along. In short, her public statements in response to all the criticism of power posing have sounded like attempts at damage control rather than a good faith engagement with serious criticism. That’s why she’s receiving so little respect from those who have criticized her work on methodological grounds.

      • Ben:

        +1. A related problem with Cuddy’s response, as with Jessica Tracy’s comment elsewhere in the thread, and John Bargh’s reaction to the unsuccessful replications of his studies, is the lack of understanding that these studies are so noisy as to be essentially hopeless. They’re mired in the “run an experiment and hope for statistical significance” paradigm, not recognizing that if your measurements are of low enough quality and your underlying phenomenon is variable enough, then (a) legitimate statistical significance is very hard to attain, and (b) if you do get lucky with statistical significance, your finding will be exaggerated and is likely to have the wrong sign.

        To put it another way, if you’re doing cargo cult science, it doesn’t matter how pure your intentions are.

      • I have a copy of the original draft of the Carney, Cuddy and Yap manuscript. In this draft it is clear that they were considering not even writing about feelings of power. Andy Yap makes a note in that draft that they originally thought that feelings of power might be a mediator of the relationship between expansive (i.e., power) poses and the gambling decision but that this mediation hypothesis was not supported. The note also strongly suggests that they viewed the “feelings of power” variable primarily as a manipulation check.

        • There’s also this line from Cuddy et al’s “Preparatory Power Posing Affects Nonverbal Presence and Job Interview Performance” (2015):

          “Immediately after delivering their speeches, as a manipulation check, participants reported how dominant, in control, in charge, powerful, and like a leader they felt on a 5-point scale from 1 (not at all) to 5 (a lot). These five items showed high reliability and thus were averaged into a composite (α = .89). The difference between high-power and low-power posers’ self-reported feelings of power (high-power: M = 2.47, SD = 0.93; low-power: M = 2.04, SD = 0.93) was marginally significant, F(1, 60) = 3.258, p = .076, d = 0.46, ηp² = .053 (see Table 2). This finding is consistent with past research showing that power posing has a weak impact on self-reported feelings of power despite its stronger effects on cognitive and behavioral outcomes (Carney et al., in press; Huang et al., 2011). In addition, the manipulation check questions were asked after the stressful speech task, which could have depleted participants’ conscious feelings of power.”

          So here we have a statement directly from Cuddy that testing for subjective feelings of power served as a “manipulation check”, a confirmation that these feelings are not of primary interest, and also that, as of 2015, Cuddy believed that the effect of power posing on feelings of power was weaker than the other effects that she’s now come pretty close to conceding don’t exist.

          BUT – now she’s found other studies that have apparently replicated the effect on feelings of power. None of the other effects have replicated, so she’s sticking with feelings of power. It all seems so disingenuous.

      • Ben said (March 30, 8:52 pm): “Every now and then in that Ted interview, she [Cuddy] briefly starts to acknowledge the failures of 3 of the original 4 hypotheses to replicate (the most interesting 3 too – is “increased subjective feelings of power” an exciting enough effect to have warranted all the publicity on power posing?)… but then she quickly goes into qualifications and platitudes about the progression of science and optimistic notes about how someone else’s work just might end up proving her right all along.”

        Ben opines that this statement of Cuddy’s is focusing on damage control. But I wonder if something else might be going on in her head and/or gut: That focusing on “how someone else’s work just might end up proving her right all along” is a sort of expression of faith in her belief that her hypothesis is true. There is a striking analogy between her response and that of many people who, despite all the evidence so far against cold fusion, still believe that it is an area worthy of research.

        But there is also a (to me) very important difference between proponents of continuing cold fusion research and Cuddy’s hope that research on power pose will continue: If, indeed, cold fusion can work, it could conceivably have societal benefits by providing a viable, fairly safe source of energy. What are the benefits of power pose? In his March 31, 7:24 post, Ben quoted Cuddy as saying, “Immediately after delivering their speeches, as a manipulation check, participants reported how dominant, in control, in charge, powerful, and like a leader they felt on a 5-point scale…” This sounds to me like saying that the hoped-for (by Cuddy) effects of power pose include increasing feelings of being dominant or powerful. Are these societally desirable ends? Here is how I see these ends:

        If I feel dominant, I take this as a sign that I am feeling too big for my britches; that I have stepped over an ethical line – that I need to step back some, and maybe even owe someone an apology, or need otherwise to try to repair some damage I might have caused. Feeling powerful is not as clear cut: I could feel powerful in a dominant way, or in a way that just is a feeling of being competent at doing something worthwhile. So I consider feeling powerful a sign that I need to do some reflection: Am I feeling powerful in the sense of dominant (in which case my sense of ethics says I need to step back, etc.) or am I just feeling competent at doing something that is ethical to me? All things considered, I consider the desired (by Cuddy) effects of power pose to be more societally negative than societally positive.

        PS: No, this is not an April Fool’s joke. It’s serious.

  11. I think the symposium honouring Mahzarin Banaji should also include Hart Blanton, Gregory Mitchell, Philip Tetlock and others who have questioned the validity of the Implicit Association Test (the cause of the implicit “revolution”). Perhaps Jesse Singal too: http://nymag.com/scienceofus/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html

    Mitchell, G., & Tetlock, P. E. (2017). Popularity as a Poor Proxy for Utility. Psychological Science Under Scrutiny: Recent Challenges and Proposed Solutions, 164.

    • Couldn’t agree more with this. Andrew, if you ever have the time, it would be very interesting to many of us to hear your thoughts on the Implicit Association Test work of Banaji and coauthors. As the NYMag article linked above describes, IATs seem to have been vastly oversold by these researchers.

  12. I think that everything would have been fine and Andrew would have applauded them if they had only called it “the bleeding edge of science” instead of “cutting-edge science”?

  13. Something I really don’t understand in this ongoing crisis in psychology is that a lot of the ‘defenders’ of the status quo seem to think that even a few years ago it was quite OK/defensible to follow flawed procedures in conducting and analysing studies etc., because – they claim – these weren’t widely recognised as flawed (apparent in Michelle Shiota’s comment above, for example).

    However, speaking as a psychologist, I remember having it drilled into me 15+ years ago, both in taught classes and by project advisors, that (for example) we should always do a power calculation, specify a primary outcome and an analysis plan before collecting the data, only stop collecting data at a prespecified point, etc. So I find it mystifying that people can try to claim ignorance of these matters in the recent past, in attempting to justify why they aren’t at fault and their research should continue to be respected.

    Or is this more of a social psychology issue (not to deny huge problems in other areas of psychology and science generally)?

    • @Confused: The reality is that, in the past, most reviewers didn’t ask for or care about a priori power calculations, or power in general. And with publish-or-perish, that made it OK for many to ignore power (if the reviewers don’t care, they don’t have to either, in their minds). This is slowly changing: reviewers and journals are slowly adopting better practices regarding power.

      As for methods: I think that for researchers with tenure, your PhD students and staff were and still are largely unprotected from potential abuses of power. That makes it easy for researchers with tenure to live in a bubble and never be criticized in their own work environment. Add to that that senior researchers with tenure may not have read a statistics book in decades, or may rate high in need for power and narcissism, and it’s not too difficult to imagine how they deflect all the blame this way. Even 20 years ago, people knew that repeated testing-resampling-retesting was wrong. But yeah – psychology lacks an organized system for staying up to date statistics-wise. Medical doctors have to attend seminars and do other things to stay up to date, but researchers don’t have to do anything to ensure that their way of working stays current.
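      For anyone who hasn’t seen why that is wrong, here is a toy simulation of optional stopping (my own setup, not from any particular study): the null hypothesis is exactly true, but we peek after every batch of subjects and stop at the first p < .05.

        import numpy as np

        rng = np.random.default_rng(42)

        def false_positive_rate(max_n=200, batch=10, n_sims=5_000, z_crit=1.96):
            # Null is true (mean 0, sd 1); peek after every batch,
            # stop and declare "significance" the first time |z| > z_crit
            hits = 0
            for _ in range(n_sims):
                data = rng.normal(0.0, 1.0, max_n)
                for n in range(batch, max_n + 1, batch):
                    z = data[:n].mean() * np.sqrt(n)  # z-test with known sd = 1
                    if abs(z) > z_crit:
                        hits += 1
                        break
            return hits / n_sims

        print(false_positive_rate())  # roughly 0.25, not the nominal 0.05

      The nominal 5% error rate roughly quintuples, which is why the stopping rule has to be fixed in advance.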

      It would indeed be more precise to say that many researchers got away with doing the wrong things and now claim they need more time to adapt.

    • Speaking as a social psychologist about 10 yrs out of grad school (working in a non-academic field because I could not stand the clubhouse environment), there was a sharp divide between what you were taught in methods class and how you learned to “actually” do research under the guidance of a faculty member. There’s no way they can claim ignorance. Reading that post above just made me think that the only reason they are having these kinds of discussions now and seriously debating the issues at hand is because people like Andrew Gelman have been calling them out (and gaining attention for doing so). So bravo to this blog for being attention grabbing.

  14. Thanks for your continuing work and honest words in pointing out this misery, Andrew Gelman.

    However.
    I think we all should respect and accept the decision of the APS program committee and the APS president for what it is: they made a decision about what kind of psychology and what kind of scientific standards they want to promote. And they did not execute it half-baked; they packed in everyone who stands for the kind of psychology they want to promote! They really stick to their guns!
    Unfortunately it seems like they didn’t get Wansink to give a talk on how to boost paper production 300% by data slicing. It would complement the program perfectly. Guess he’s too busy publishing all those papers to attend.

    Irony off and in all seriousness: I’ve never been so ashamed to be a psychologist.

    @ Michelle Shiota: You can’t in all seriousness claim to want to promote better scientific standards as part of the APS program and simultaneously reward *so* *many* researchers known for being the poster children of scientific misconduct by giving them this kind of promotion. The interview with Cuddy shows how little she learned (she offers her p-hacked studies as evidence that her research still holds up, for heaven’s sake! She does not comment on her questionable research practices like p-hacking at all!) – please read it again carefully.

    It’s quite unfortunate that the APS program committee does not see what kind of science they actually promote. You put people on pedestals who lash out against methodologists as “terrorists” and, yes, conveniently put people concerned about methods in psychology into an extra track where they can keep to themselves. Bravo. That is exactly what the cronies want: people who want to question and discuss methods and statistics should be separated from those who do not want their methods questioned. The “methodological terrorists” and all the evil people who question power posing get their own space where they can talk to themselves. The cronies instead get the spotlight.

    This is ridiculous and, as Gelman points out, the sheer amount of questionable research that is promoted by the APS for this event is a testament to how little the APS cares about the future of psychology. Apparently it’s all about who has fancy awards, or is known from TED, Twitter and Instagram. What a shame!

  15. Confused wrote: I remember getting it drilled into me 15+ years ago both in taught classes and from project advisors that for example we should always do a power calculation, specify a primary outcome and an analysis plan before collecting the data, only stop collecting data at a prespecified point etc.

    GS: You forgot one! “Always use single-subject designs where appropriate.” Oh…you didn’t have that one drilled into you?

  16. I found this claim by Jessica Tracy very intriguing:

    “I reported this result in an empirical paper, but have never claimed that that observed effect size is the ‘true’ effect size”

    Is that like a generic cop-out anyone can use to justify crap? “I saw it in my data, but that doesn’t mean it extrapolates to the world.”

    Well, if you think your results don’t extrapolate, why advertise them as such?

    • Rahul: I didn’t understand Jessica Tracy’s claim. Would we ever know the “true” effect size when working with a sample? The computed effect size is always an estimate, is it not?

  17. If you are a psychologist who would like to influence the Association for Psychological Science toward greater methodological rigor and sophistication, then please consider voting in the upcoming election for a new President Elect and two new Members at Large of the Board of Directors. Simine Vazire joined the Board of Directors of APS last year, and I would like to see all three of these new positions filled by other statistically savvy scientists who are committed to transparency. The slates will be announced to members any day now, and the election will be completed sometime this month. See http://www.psychologicalscience.org/members/join-renew for information. If you are not sure of your membership status, you can check it at https://www.psychologicalscience.org//members-only/member_search.cfm. I thank you for considering this request.
    Steve

    • An additional benefit of becoming an APS member is that you then become eligible to receive APS awards!

      Apparently, only members of the APS can become recipients of APS awards for “their outstanding contributions to scientific psychology”.

      http://www.psychologicalscience.org/members/awards-and-honors/fellow-award

      “The APS William James Fellow Award honors APS Members for their lifetime of significant intellectual contributions to the basic science of psychology. Recipients must be APS members recognized internationally for their outstanding contributions to scientific psychology. Honorees are recognized annually at the APS Convention.”

      So, if you feel that it is totally appropriate to receive an individual APS award, and you feel that you really deserve one, step 1 is to become a member.

    • Steve Lindsay: What is the probability that the slate for the new president would contain “statistically savvy” (from the perspective of the participants in this blog) candidates? Surely it’s low. And even if it does, what is the probability that such a person would be elected by the APS membership? Again, surely it’s low.

      Were any of the past presidents “statistically savvy”?

      I doubt this is a viable solution to the poor quality articles published by the journals supported by APS.

      • Possibly relevant to Anon’s questions:

        Steve Lindsay wrote (above): “Simine Vazire joined the Board of Directors of APS last year, and I would like to see all three of these new positions filled by other statistically savvy scientists who are committed to transparency.”

        I followed Simine Vazire’s blog for a while a couple of years ago. My impression was that she is indeed committed to transparency, but I would not go so far as to call her “statistically savvy” based on what she wrote then (although it is of course possible that she has increased her understanding of statistics since then.)

        • Martha (Smith): Thanks. I’ve looked at her CV, and I agree that she does not seem “statistically savvy.” I don’t think Steve Lindsay is either, so perhaps that’s why he thinks she is.

        • Martha, Anon:

          There are many levels of savviness. These two people may not be so statistically savvy by your standards, but they’re a big step up in savviness from the people who said, “the replication rate in psychology is quite high—indeed, it is statistically indistinguishable from 100%.” The people who made that claim may have some statistics papers on their CV, but when it comes to statistical street smarts, they don’t got it.

          To put it another way, savviness depends on context. Satoshi Kanazawa knows some statistics, and if you were to put him in a safe environment where he couldn’t hurt himself, some area where variation is low and effects are large, he could probably do OK. But in the area where he works, his analyses are a disaster. He’s not savvy where it counts. (Amusingly enough, on his website, Kanazawa writes, “If what I say is wrong (because it is illogical or lacks credible scientific evidence), then it is my problem.” He’s right about that!)

          Regarding Steve Lindsay’s remark about “statistically savvy scientists”: what’s important is that they be savvy where it counts, that they know what they don’t know, etc. I don’t know enough about the Association for Psychological Science to say more, but I think it depends on context.

        • Disagree. I think it’s safe to say that many if not most social psychologists are not statistically savvy in ANY context.

  18. Really interesting parallel in the world of biomedical sciences right now.

    Like Susan Fiske, Jeffrey Drazen became infamous for coining a term to describe certain researchers. He referred to researchers who use public data as “research parasites”.
    http://www.nejm.org/doi/full/10.1056/NEJMe1516564#t=article

    As with “methodological terrorists,” scientists were not happy (or at least scientists on Twitter weren’t happy), and many biologists began to call themselves research parasites. Someone even created an award for the biggest parasite:
    http://researchparasite.com/

    Guess what? Jeffrey Drazen is chairing a conference on data sharing as we speak!
    http://events.nejm.org/

    Any chance we can get an award for largest “methodological terrorist”?

    • I think I have some ideas for my Halloween costume this year. But should I be a methodological terrorist or a research parasite? Such a difficult choice!

    • Any chance we can get an award for largest “methodological terrorist”?

      GS: Maybe…but many here are in the running for “most naively-accepting of the standard view of science.” Especially in this area (but in many others):

      “…the idea that ‘science advances by developing operational definitions of concepts [i.e., correspondence rules linking theoretical constructs to observables]’”

      The quote within that quote is from Stanovich’s (1998) book. The outer quote is from Machado et al. (2000). Here is the link if anyone is interested:

      http://www.behavior.org/resources/88.pdf

      It is interesting that you should mention “methodological terrorism” (I’m pretty sure I can guess what you mean). If one has a simplistic view of science, then “doing science” is mostly a matter of knowing The Methods. After dutiful application of The Scientific Formula, using The Methods, progress toward Truth will certainly follow. Thus, those not in possession of The Methods are to be excoriated. Don’t get me wrong, most of the research that y’all attack is trash. But in so many areas…what isn’t trash? And in mainstream psychology, a favorite (and deserving) target here, the folly that characterizes the methods used (by mainstream psychologists) pales in comparison to the folly of the assumptions underlying the endeavor. Err…those would be assumptions that I’m pretty sure would be manifested among the Faithful here.

      When I found this blog, my initial impression was that the talk here about science in general (i.e., stuff that gets into the philosophy of science) was characterized by, let’s see…what did I call it?…oh yeah…“hubris.” I think that still, though I find the blog informative and interesting. Many important issues are raised here, and that is a good thing…I’m just not too keen on what I am starting to imagine is the consensus concerning their resolution around here.

      The title of this particular thread involves accusing psychology of being a pseudoscience. Fair enough I say as a psychologist (but not a mainstream psychologist) and those that get after mainstream psychology are enemies of my enemies. But what are ALL of the problems with psychology and, when you take on an issue in psychology, how many of psychology’s problems are you guilty of? OK…the p-values are gone, but how much of the conceptual core of mainstream psychology do you share?

      OK…a bit of a rant…

      • Glen:

        You write, “The title of this particular thread involves accusing psychology of being a pseudoscience.”

        No. I never said that psychology is a pseudoscience, nor did I ever intend to imply this. I do think that powerful members of the Association for Psychological Science spend a lot of time defending pseudoscience. I think that pseudoscience of the Bem/Bargh/Cuddy/Kanazawa/Wansink variety is a prominent subfield of psychology. But I would not describe psychology as a whole as being a pseudoscience.

    • So, since we’ve been discussing Angus Deaton a lot, I like what he has to say about “rent seeking” in the absence of GDP growth at the end of this interview https://www.theatlantic.com/business/archive/2017/03/angus-deaton-qa/518880/ also the stuff about regulatory capture somewhere in there too.

      My gut reaction to looking “under the rug” of the last 20 years of economics (a project I’m working on now) is that this rent seeking thing is a feedback trap. When you have low GDP growth, it becomes advantageous as a strategy to do a lot of rent-seeking, to set yourself and your buddies up in a position of power, where you can get paid money to do jack shit (sorry for the vulgarity but… this has me pretty angry about it all). You see it in the finance industry getting bailouts from crappy mortgages, you see it in protectionist tendencies that the current president has, you see it in regulatory capture of the FDA, and the FCC, in the regulatory capture of education funds by “education mill” charter school corporations, in the success of the “Pharma Bro” dude and his capture of the supply chain of an off-patent anti-parasite drug, in the successful marketing of the EpiPen to doctors so that they are afraid to prescribe any of the cheaper alternatives and so that Mylan can then jack up the price of the effective monopoly good they have, in venture capital using “free money” printed by quantitative easing to finance the production of tech start-ups that do jack shit and then sell themselves to larger companies run by crony friends whose stock is primarily held by your 401k etc etc… and you see it in science where little cliques control the apportionment of grant funds and conference recognition and publication review, and access to data by “in groups only” etc.

      But, then once you’ve incentivized this behavior, the rent seeking sucks all the growth potential out of the economy, and results in … continued low GDP growth and continued rent seeking.

      Think about it, we’ve had enormous improvements in computing and networking technology since 1995, the year URLs hit television commercials. This has resulted in vast improvements in supply chains, distribution of goods, predictions of demand, targeting of advertising, etc etc. By all rights, we should have excellent growth in purchasing power of families, and yet here is the year on year percentage change in (GDP/Population/CPI):

      https://fred.stlouisfed.org/graph/?g=deD0

      It’s been less than 2.5% for 2 decades.

      Is this because it’s impossible to grow such an enormous country as the US? I don’t think so, I think it’s because rent seeking / regulatory capture behavior sucks all the consumer surplus out of the economy.

      So, it’s not just that we should be angry at scientists who run these sham cargo cult crap industries, we need absolute indignation about the levels of regulatory capture and rent seeking throughout our country.

      • Hello, Daniel. This is somewhat after the fact, but I am slightly familiar with your website due to the comments that you occasionally leave on Deborah Mayo’s Error Statistics posts. I like Deborah’s outlook on frequentism, falsifiability, and Pearson-Neyman-Fisher, although she is a true philosopher of probability and scholar, whereas I am a mere practitioner. Deborah is cordial to all, including Professor Gelman, despite his Bayesian tendencies. ;)

        So, let’s address your comment, which is much more in my domain of financial economics and public policy than the more specifically methodological concerns of reproducibility in psychology and other social science research. You see, I monitor regulatory compliance and strive to provide effective challenge at U.S. community and other smaller banks in the wake of the U.S. financial crisis, as enforced under Dodd-Frank.

        Despite my essentially conservative outlook, I too believe that the work of Nobel Prize winner Angus Deaton is astute and compassionate, and that it identifies the loss of economic welfare due to rent seeking and cronyism in the U.S. (and perhaps other first-world European economies). Yes, I support Donald Trump, but I entirely agree with you that regulatory capture has plagued the United States throughout multiple presidential administrations, be they Democratic or Republican. From Reagan to Clinton to Bush to Obama, the protections that were put in place after the Great Depression by Franklin Delano Roosevelt were weakened and rolled back.

        Glass-Steagall and the Securities Acts of 1933 and 1934 under FDR would be a libertarian’s dream today. Yes, they were restrictive, but in 20 or so concise pages sequestering investment and retail banking (with the use of Chinese walls), our banking industry flourished and grew, with a thriving ecosystem of intermediaries and market participants, until 2000, when Bill Clinton got rid of it all. Disaster followed, to be replaced by the 11,000 pages of the Dodd-Frank Act, full of waivers for practitioners of “financial innovation” such as hedge funds and special-purpose or private-equity vehicles, which are not subject to the same stringent reporting requirements as the small federal savings and loan bank where I work – despite our asset holdings being intended to preserve capital for military dependents and widows, in contrast to the riskiest, most esoteric financial structures imaginable, which have received reporting and disclosure exemptions under Dodd-Frank year after year since 2010. This is not a problem brought upon us by Donald Trump. It was instigated under the oversight and control of the Democrats.

        I agree with you about the pernicious regulatory capture of education funds by “education mill” charter school corporations. These were introduced in the second term of G. W. Bush, but only achieved massive prominence and expansion under Obama and his Secretary of Education, lobbied for by the highly influential DFERs (Democrats for Education Reform), who are primarily funded and advocated for by Democrat Nina Moskowitz; by Jeff Bezos’s mother; and by Martin Seligman’s (of APA fame) protégée, former McKinsey consultant Angela Duckworth – now a MacArthur genius grant recipient for her rapid transformation into a groundbreaking educator – with numerous ties to charter schools and cronies of DFER et al. Teach for America is another Wall Street, Democrat-crony-endorsed program that offers to reduce costs to public schools by replacing qualified and credentialed schoolteachers with untrained recent college graduates, paid minimally for two-year stints; DeRay McKesson is a big advocate of Teach for America, a DFER-endorsed program.

        This is in no way a defense of the appointment of Betsy DeVos as Secretary of Education under Trump. But the diversion of taxpayer funds and the lack of fiscal accountability for the charter school racket, which undermined the mostly successful, fair U.S. K-12 public education system, did not begin with Trump and DeVos. It was in full swing throughout the eight years of pro-charter, Democrat-led education reform (through the replacement of genuine education policymakers), financed by the Chamber of Commerce-sponsored Common Core and other privatization efforts under Obama, preceded by G. W. Bush’s similar efforts.

        Do recall that the finance industry bailouts were overseen and enacted during the tenure of the Bush and Obama administrations’ Federal Reserve (Bernanke) and Treasury secretaries (Geithner and Paulson). This was put in motion at least 10 years before Trump.

        It hasn’t been all bad. Some things got better. Obama oversaw passage of legislation to prevent lax mortgage requirements. Pharma Bro was finally charged and imprisoned. Under Trump, years of irresponsible quantitative easing (resulting in easy money that only fueled ersatz start-ups, primarily vehicles for venture capital to flip for profit, without any genuine innovation, productivity growth, or tech advancement, unless one counts social media and advertising) finally ended.

        You cite protectionism as another example of rent seeking and corruption by Trump for his cronies. I don’t see it. Trump has the endorsement of the AFL-CIO for his tariff and trade policies. And that gets to the other matter: you attribute our failure to see GDP and purchasing-power growth over the past 20 to 25 years primarily to regulatory capture and rent-seeking behavior. Yes, there has been tremendous innovation and progress in computing, as well as wide use of analytic methods to optimize supply chains, logistics, and so much more. But there is a crucial step missing in your chain of causation. Improvements in computing, manufacturing, and supply-chain speed and efficiency should have benefited all of us. Instead, those in charge of the means of production chose to offshore our manufacturing to lower-wage nations, resulting in massive job loss among the middle class in the U.S., including middle management and engineering (with the partial exception of software engineering, although even there, much of the innovation has been siphoned into “financial innovation,” which further concentrates wealth among the few, without any of the multiplier effects associated with expansion of plants, property, and equipment as in the recent past, and distributed geographically rather than localized in elite coastal enclaves).

        Yes, we should be angry at regulatory capture and at self-serving special interest groups (regardless of partisanship) whose obvious intention is to drain consumer surplus from the country by whatever means possible. Included is the greed that drives the use of low-wage foreign labor to replace Americans seeking to earn a living wage, as well as the offering of access to U.S. higher education to non-citizens for a hefty premium over the tuition charged to American applicants at public land-grant schools and private universities. The defense used by the right and the left for favoring foreign students is that giving preference to Americans would be bigotry and racism. In fact, it is greed. The same is true for paying living wages for jobs in the U.S. The trope that Americans won’t work and are unintelligent, uneducated, and thus unfit to work is motivated by the same cronyism you described: setting up yourself and your friends in a position of power that ensures the diversion of public funds to the private enrichment of those in control, without any acknowledgement by legislators, regulators, community leaders, or others who should want to reverse the descent into the pure hype of “Silicon Valley innovation” that benefits almost no one but financial speculators.

        Sorry for my lengthy response to a somewhat off-topic comment on a year-old post. My response is at least as off topic as Daniel’s. I apologize, Andrew Gelman.

        • A year late and a digression from the main topic, but I can’t let this go unanswered. I’m not sure what your main points are, Lisa, but I got distracted by the political overlay (perhaps Daniel instigated this, but I’d like to return to what I think are the important points). I’ve spent my career studying regulation – primarily in telecommunications – and rent-seeking and regulatory capture are far too simplistic to explain much of anything. I think it is safe to say that whenever there is a lot of money and power at stake (as with any regulatory program), and whenever there is imperfect and asymmetric information (as with any regulatory program), we are likely to see elements of both rent-seeking and capture become more active. They are always there, just like the various viruses and bacteria that can cause great harm. Given the right conditions, they become more damaging. And, if your point is that those right conditions don’t depend on which party is in power, then I agree with you.

          What are the conditions under which these pernicious forces become more damaging? That is a lengthy subject and one which deserves a lot more analysis. We can’t blame regulatory capture on Trump or Republicans or Democrats. It is always present in some form. Daniel seems to speculate that it gets worse when the economy is relatively stagnant – I doubt that, and actually think it is more likely the reverse (growth makes it easy to put up with many bad things). I think we should be upset with rent-seeking wherever we observe it – it is unethical behavior despite it being perfectly understandable given our incentives for private rewards by influencing policy. The answer might seem to be less government and less regulation – indeed, that is what most economists appear to believe, but I think that is too simplistic. As Victor Goldberg pointed out (45 years ago in a remarkable paper), regulation derives from the essential nature of the goods/services that are being regulated. Simply removing regulation does not change the nature of the problem. We have environmental problems and resorting to private contracts will not “solve” these any more than regulation will. What we need to compare is the imperfections that come with whatever means we try to use to address these problems.

          Education and research are complex issues – I don’t believe free markets will address the variety of goals society holds – among these, equity, basic scientific research, public goods, economic growth, etc. There is plenty of rent-seeking on display in DeVos’s Dept. of Ed., just as there was plenty in prior administrations’ Depts. of Ed. Will having better expertise set education policies improve things, or merely change the identities of who is seeking (and receiving) the rents?

          These are all tough questions and I think the gist of your comment may be consistent with my own – but I found the political overlay distracting. Does it matter that you are a Trump supporter and I am not? I believe we should hold people accountable for their actions, regardless of whether their predecessors did similar (or even worse) things.

  19. I’m beginning to think this club thing could actually be a secret cabal that goes back centuries whose mission is to undermine science.

    Check this out. The “100%” replication paper was last authored by Timothy Wilson.

    Who’s Timothy Wilson? I don’t know, but I couldn’t help but notice he’s at UVA and is the cousin of David Sloan Wilson.

    Who’s David Sloan Wilson? I don’t know, but couldn’t help but notice when he promoted a paper he published with Brian Wansink!!!!! And after pizzagate no less!

    https://twitter.com/OmnesResNetwork/status/842929426245607425

    Just as we have the Erdős number, should we also have a Wansink number!?!?

    • “And what does ‘Andrew’ think of the American Psychology Association?”

      If I am not mistaken, the American Psychological Association publishes the APA Publication Manual every few years.

      The latest edition is close to 300 pages, and tackles super-important issues like when to use quotation marks (see the APA Publication Manual, 6th ed., page 91, if I am not mistaken).

      And for just $39.95 you can get your own copy, which you can use for about 10 years, after which you will have to buy the next edition to see whether you can still use quotation marks for the same stuff you were just getting used to:

      http://www.apastyle.org/manual/index.aspx

      • Andrew,

        Thanks. I reposted this article because I don’t see how anyone could fail to discern the points raised – they are so starkly evident.

        You and I have some very similar sensibilities, although I am not a trained statistician nor a psychologist. I am not sure what I am. LOL. Maybe I’m just on a lifelong search for the intelligent – and for fun and interesting stuff to do.

        What I will say is that, at least in the political science/foreign policy realm, I, through a quirk of history, attended pivotal conferences during my childhood – primarily at Princeton, but also at Harvard, Yale, and Hartford Seminary – and heard the back-channel discussions leading up to several theories and assumptions. Of course it’s a subjective experience, but it has been fruitful for observing the sociology of expertise: how it shapes the knowledge-acquisition environment.
