Sanjay Srivastava writes:

Below are the names of some psychological disorders. For each one, choose one of the following:

A. This is under formal consideration to be included as a new disorder in the DSM-5.

B. Somebody out there has suggested that this should be a disorder, but it is not part of the current proposal.

C. I [Srivastava] made it up.

Answers will be posted in the comments section.

1. Factitious dietary disorder – producing, feigning, or exaggerating dietary restrictions to gain attention or manipulate others

2. Skin picking disorder – recurrent skin picking resulting in skin lesions

3. Olfactory reference syndrome – preoccupation with the belief that one emits a foul or offensive body odor, which is not perceived by others

4. Solastalgia – psychological or existential stress caused by environmental changes like global warming

5. Hypereudaimonia – recurrent happiness and success that interferes with interpersonal functioning

6. Premenstrual dysphoric disorder – disabling irritability before and during menstruation

7. Internet addiction disorder – compulsive overuse of computers that interferes with daily life

8. Sudden wealth syndrome – anxiety or panic following the sudden acquisition of large amounts of wealth

9. Kleine Levin syndrome – recurrent episodes of sleeping 11+ hours a day accompanied by feelings of unreality or confusion

10. Quotation syndrome – following brain injury, speech becomes limited to the recitation of quotes from movies, books, TV, etc.

11. Infracaninophilia – compulsively supporting individuals or teams perceived as likely to lose competitions

12. Acquired situational narcissism – narcissism that results from being a celebrity

In academic research, “sudden wealth syndrome” describes the feeling right after you’ve received a big grant, and you suddenly realize you have a lot of work to do. As a blogger, I can also relate to #7 above.

**. . . and statisticians**

It’s easy to make fun of psychiatrists for this sort of thing–but if statisticians had a similar official manual (not a ridiculous scenario, given that the S in DSM stands for Statistical), it would be equally ridiculous, I’m sure.

Sometimes this comes up when I hear about what is covered in graduate education in statistics and biostatistics–a view of data analysis in which each different data structure gets its own obscurely named "test" (Wilcoxon, McNemar, etc.). The implication, I fear, is that the practicing statistician is like a psychiatrist, listening to the client, diagnosing his or her problems, and then prescribing the appropriate pill (or, perhaps, endless Gibbs sampling^H^H^H^H^H^H^H^H^H^H^H^H^H^H^H talk therapy). I don't know if I have a better model for the training of thousands of statisticians, nor perhaps do I have a full understanding of what statistical practice is like for people on the inferential assembly line, as it were. But I strongly feel that the testing approach–and, more generally, the approach of picking your method based on the data structure–is bad statistics. So I'm pretty sure I'd find much to mock in any DSM-of-statistics that might be created.

Another uncomfortable analogy between the two professions is that statistical tests, like psychiatric diagnoses, are trendy, despite their supposed firm foundation in mathematics and demonstrated practical success (just as psychiatry boasts a firm foundation in medicine along with millions of satisfied customers over the decades). Compounding the discomfort is that some of the oldest and most established statistical tests are often useless or even counterproductive. (Consider the chi-squared test, which when used well can be helpful–see chapter 2 of ARM for an example–but is also notorious as being nothing more than "a test of sample size" and has led many researchers to disastrously oversimplify their data structures in order to fit the crudest version of this analysis.)
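To see what "a test of sample size" means in practice, here is a quick sketch (the counts are invented for illustration): a 30%-vs-40% split between two groups gives a "non-significant" p-value with 200 observations and an overwhelmingly "significant" one with 2000, even though the underlying comparison is identical.

```python
# The same 30% vs. 40% split at two sample sizes: the p-value is
# driven by n, not by any change in the comparison itself.
from scipy.stats import chi2_contingency

small = [[30, 70], [40, 60]]        # two groups of 100 each
large = [[300, 700], [400, 600]]    # same proportions, 10x the data

_, p_small, _, _ = chi2_contingency(small)
_, p_large, _, _ = chi2_contingency(large)

print(f"n = 200:  p = {p_small:.3f}")   # not "significant"
print(f"n = 2000: p = {p_large:.2g}")   # far below 0.001
```

With enough data, any real difference, however tiny or substantively unimportant, will eventually clear the significance bar.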

Instead of a DSM, the statistical profession has various standard textbooks, from Snedecor and Cochran to . . . whatever. But our informal DSM, as defined by practice, word-of-mouth, and our graduate curricula, is nothing to be proud of.

Thanks for the link, Andrew.

Actually, in the applied-stats world of psychological research, there *is* a DSM of statistical procedures. It's the menu system of SPSS. In fact, SPSS has a "statistics coach" that asks you a bunch of questions about your data and then tells you what procedure to run.

At least the world of statistics doesn't have multimillion-dollar businesses (The Wilcoxon Corporation?!) lobbying to include their test so they can make money every time somebody uses it.

Answers: 1. C 2. A 3. A 4. B 5. C 6. A 7. B 8. B 9. A 10. C 11. C 12. B

"At least the world of statistics doesn't have multimillion-dollar businesses"

IBM bought SPSS for about $1,200,000,000 in July, 2009 — all cash.

I can't believe I got 10 of 12 right — maybe I do remember stuff from that long-ago psychology degree.

A reason that Statistics doesn't have a "DSM" type guide is that it is a methodological field, rather than a substantive one.

If I understood it right, Andrew makes the point that statistics is as much of a mess as psychiatry. If true, I am surprised. Psychiatry is a mess not because psychiatrists are stupid or lazy. It is a total joke (and not a science) simply because its subject is way too hard. In contrast, to this outsider statistics does not seem to be anywhere near as hard. I may be very naive but it seems to me that it should be easily amenable to comparisons in order to identify and form best practices. So what's going on? Are statisticians stupid? Or do they simply not care? Or is it really true that applied mathematics is as complex as the most complex manifestations of life?

Sanjay, Zbicyclist:

I have somewhat warm feelings about Spss. It's Sas that I hate. I think it's because everyone knows that Spss plays a specific role, as a conventional set of tools for psychology researchers. Spss is *supposed* to be clunky and out-of-date, it's *supposed* to give you endless, hard-to-read printouts that will take someone in the back room hours to interpret.

The danger with Sas, to me, is that a lot of people in biostatistics think of it as state-of-the-art. Stata has what seems to me to be a much more "statistical" attitude.

DK – or as Freud put it in his Towards a Scientific Psychology – it's not going to happen in my lifetime – but I [Freud] have no plans to change my career.

If one defines the main role of statistics as enabling one to avoid being seriously misled by observations – then – it may need to be slightly more complex than the most complex manifestations of life?

K

the chi-squared test is also notorious as being nothing more than "a test of sample size"

hi andrew, would you mind elaborating or giving an example? hypothesis tests are dependent on sample size. but is the chi-squared test more so than other tests?

and setting aside the general problems of hypothesis testing, off the top of your head, what other tests would you consider useless or counterproductive? (for new and infrequent readers, fisher's exact test.)

Fisher's exact test requires fixed marginal totals in your table, does it not? So it may not be appropriate in some cases.

I've always felt that SAS is a step above SPSS since it requires more "coding" and is less menu driven. However aren't all statistical software packages (SAS, SPSS, Stata, R) guilty of being black boxes that use canned procedures?

Robert:

Yes, that's exactly my objection to Fisher's exact test (as explained on occasion on this blog, also I think in my 2003 paper). It's very unusual to collect data with both margins fixed.

Regarding the software packages: I have no problem with canned routines. I just somehow feel that Spss is safer because people *recognize* that it is cookbook, whereas I fear that Sas is perceived by many as being state-of-the-art.

But with randomizing set numbers to two groups (treatment versus control) and assuming the strict Fisher Null hypothesis, both margins are in fact fixed.

Such a Null hypothesis might (should) be considered counter-factual but it seems like a good starting point for the analysis of a randomized trial.

At least to get the direction of effect pinned down.

K
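The conditioning issue raised in the comments above can be illustrated with a quick simulation (the sample sizes and success probability are invented for illustration): when only the group sizes are fixed – two independent binomials, as in a typical two-group study rather than the strict randomization null – Fisher's exact test, which conditions on both margins, rejects a true null noticeably less often than its nominal 5% level. That is the sense in which it is conservative when the both-margins-fixed assumption does not match how the data were collected.

```python
# Simulated two-group study: both groups share the same true success
# probability, so the null hypothesis holds. Only the row margins
# (group sizes) are fixed by design; the column margins are random.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n, p_true, alpha, sims = 20, 0.5, 0.05, 2000

rejections = 0
for _ in range(sims):
    a = rng.binomial(n, p_true)   # successes in group 1
    b = rng.binomial(n, p_true)   # successes in group 2
    _, pval = fisher_exact([[a, n - a], [b, n - b]])
    rejections += pval < alpha

print(f"empirical type-I error: {rejections / sims:.3f}")  # under the nominal 0.05
```

Under strict randomization-based reasoning of the kind K describes, conditioning on both margins is exactly right; the conservatism only bites when the sampling model is something else.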