Moving beyond hopeless graphics

I was at a talk a while ago where the speaker presented tables with 4, 5, 6, even 8 significant digits, even though, as is usual, only the first or second digit of each number conveyed any useful information. A graph would be better, but even if you’re too lazy to make a plot, a bit of rounding would seem to be required.

I mentioned this to a colleague, who responded:

I don’t know how to stop this practice. Logic doesn’t work. Maybe ridicule? Best hope is the departure from the field of those who do it. (Theories don’t die, but the people who follow those theories retire.)

Another possibility, I think, is helpful software defaults. If we can get to the people who write the software, maybe we could have some impact.

Once the software is written, however, it’s probably too late. I’m not far from the center of the R universe, but I don’t know if I’ll ever succeed in my goals of increasing the default number of histogram bars or reducing the default number of decimal places in regression results. In the latter case, I just went and wrote my own display() function. Now I have to get people to switch from summary() to display(), which is a big task but is perhaps easier than convincing whoever is in charge of R to change the defaults.
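
For what it’s worth, both defaults can be overridden per call; here’s a minimal sketch using standard base R and the arm package (where display() lives):

    # More histogram bars than base R's default Sturges rule:
    x <- rnorm(1000)
    hist(x, breaks = 50)       # or breaks = "FD" for a data-driven rule

    # Fewer decimal places in regression output, via display() from arm:
    fit <- lm(dist ~ speed, data = cars)
    summary(fit)               # default: coefficients printed to many decimals
    arm::display(fit)          # rounds to two decimal places by default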

P.S. Neal Beck points us to this (turn to his article beginning on page 4):

Numbers in the text of articles and in tables should be reported with no more precision than they are measured and are substantively meaningful. In general, the number of places to the right of the decimal point for a measure should be one more than the number of zeros to the right of the decimal point on the standard error of this measure.

Variables in tables should be rescaled so the entire table (or portion of the table) has a uniform number of digits reported. A table should not have regression coefficients reported at, say, 77000 in one line and .000046 in another. By appropriate rescaling (e.g., from thousands to millions of dollars, or population in millions per square mile to population in thousands per square mile), it should be possible to provide regression coefficients that are easily comprehensible numbers. The table should clearly note the rescaled units. Rescaled units should be intuitively meaningful, so that, for example, dollar figures would be reported in thousands or millions of dollars. The rescaling of variables should aid, not impede, the clarity of a table.

In most cases, the uncertainty of numerical estimates is better conveyed by confidence intervals or standard errors (or complete likelihood functions or posterior distributions), rather than by hypothesis tests and p-values. However, for those authors who wish to report “statistical significance,” statistics with probability levels of less than .001, .01, and .05 may be flagged with 3, 2, and 1 asterisks, respectively, with notes that they are significant at the given levels. Exact probability values may always be given. Political Analysis follows the conventional usage that the unmodified term “significant” implies statistical significance at the 5% level. Authors should not depart from this convention without good reason and without clearly indicating to readers the departure from convention.

All articles should strive for maximal clarity. Choices about figures, tables, and mathematics should be made so as to increase clarity. In the end all decisions about clarity must be made by the author (with some help from referees and editors).
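
To make the first two rules concrete, here is one rough reading of them in R (the beck_digits helper and the simulated data are mine, not the journal’s):

    # Rule 1: report one more decimal place than the number of zeros
    # immediately after the decimal point in the standard error.
    beck_digits <- function(se) {
      stopifnot(se > 0)
      zeros <- max(0, ceiling(-log10(se)) - 1)  # leading zeros after the decimal in se
      zeros + 1
    }
    round(1.23456, beck_digits(0.0046))  # SE = 0.0046 -> 3 decimals -> 1.235
    round(1.23456, beck_digits(0.21))    # SE = 0.21   -> 1 decimal  -> 1.2

    # Rule 2: rescale variables so coefficients land in a readable range.
    set.seed(1)
    d <- data.frame(dollars = runif(100, 0, 5e7))
    d$y <- 2 + 3e-7 * d$dollars + rnorm(100)
    coef(lm(y ~ dollars, d))["dollars"]      # ~0.0000003: hard to read
    d$millions <- d$dollars / 1e6
    coef(lm(y ~ millions, d))["millions"]    # ~0.3: same fit, readable units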

12 thoughts on “Moving beyond hopeless graphics”

  1. The popular ggplot2 package defaults to 30 bins for histograms, which I find to be a good number for data display and quite a bit more informative than base R’s histogram.

    I’m on the fence about solving rounding in software. It could get confusing in a programmatic sense if some objects (like matrices) had completely different rounding defaults than other objects (lm/glm). When in doubt I would err on the side of consistent treatment. On the other hand, when code is specifically spitting out results for publication, having the software reduce the number of digits makes sense. I believe that xtable does this.
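
    Roughly what I mean, as a sketch (real ggplot2 and xtable calls, toy data):

        library(ggplot2)
        library(xtable)

        # ggplot2's histogram uses 30 bins unless told otherwise:
        ggplot(data.frame(x = rnorm(1000)), aes(x)) + geom_histogram(bins = 30)

        # xtable trims digits when exporting a fitted model for publication:
        fit <- lm(dist ~ speed, data = cars)
        print(xtable(fit, digits = 2))   # LaTeX table, 2 decimal places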

  2. All you have to do is have display() do some nicer things than summary(), especially with data.frame objects, and you’d get traction. Don’t promote it on the rounding, promote it on better features.

  3. Maybe, rather than try to ‘arm’ wrestle the R community into using a new function, you could write a new default for summary(). This would actually be especially useful in the ‘arm’ package, which is excellent btw! I ran into an interesting snag with it when trying to answer a question on stats.stackexchange.com. Someone fit a model using bayesglm, and then did summary(model). They were confused by the presence of p-values in the output! Without a summary() method for bayesglm objects it calls summary.glm(). Even having summary.bayesglm just call display() would be an improvement, but it seems to me that you could have a wider effect at least on users of the arm package.
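
    For instance, something like this hypothetical method (just to show the S3 dispatch; it is not currently in arm):

        # With this defined, summary(model) on a bayesglm object would
        # dispatch here instead of falling back to summary.glm():
        summary.bayesglm <- function(object, ...) {
          arm::display(object, ...)
        }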

  4. John:

    Good point! We are in fact working on including average predictive comparisons and some graphics into display() so this should help.

    Drew:

    That makes sense; thanks.

  5. I think the problem is not the number of decimal places in default statistical output. Instead, one of the problems is, as Andrew and others have pointed out numerous times, that people do not think about how to present their results, whether in table or graphical form. Even the best defaults still require some tweaking. Another problem is that even when researchers do think about the presentation of results, they opt for terrible things, such as 3-D pie charts or numbers reported to 8 significant digits.

    When I have to do any data analysis, I actually prefer the R defaults, which give a few extra decimal places, because they can help me spot errors I have made.

  6. I am writing a new statistical package (not for R) and I have been wondering how to do this. Say I have a bunch of samples from the posterior. Now I want to summarize this parameter estimate in textual form for the user, i.e., mean +/- standard deviation. How many digits should I report for the mean and how many for the SD?

    Google tells me to use one digit for the SD, with the number of digits for the mean agreeing in place with the SD (http://www.facstaff.bucknell.edu/kastner/CHEM221/announcements/sigfig.html). But that’s just a rule somebody made up, right? Can we do any better? (I sketch this rule in code at the end of this comment.)

    What if I said that one thing I might use the posterior for is working out which percentile any given fixed value falls in (Cook, Gelman, and Rubin)? Or are there any insights from decision theory, perhaps given certain assumptions about the utility function?
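
    As flagged above, here is my attempt to code up that sigfig rule (a sketch; format_estimate is a made-up name):

        # One significant digit for the SD; round the mean to the same decimal place:
        format_estimate <- function(m, s) {
          place <- ceiling(-log10(s))   # decimal place of the SD's leading digit
          dec   <- max(0, place)        # decimal places to print
          sprintf("%.*f +/- %.*f", dec, round(m, place), dec, signif(s, 1))
        }
        format_estimate(3.14159, 0.0271)  # "3.14 +/- 0.03"
        format_estimate(3.14159, 2.71)    # "3 +/- 3"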

  7. Indeed, I can only applaud the idea of switching the defaults. For instance, in Stata, output includes 6 or 7 digits after the decimal point for means, SDs, or betas. And packages such as estat or estab further transport this misleading and worthless information into the .rtf or .tex files. I have never been able to change the default number of exported digits within the export commands for estat, although model fit and other characteristics of the table layout can easily be changed, added, or suppressed. Any suggestions?

    • Getting precisely the display formats you want can be a challenge in any software, but this comment is in part misleading or incorrect and should not be regarded by others as authoritative comment on Stata.

      Stata, I imagine, would plead guilty to the charge that its default output usually carries far more detail than makes sense for your publication. That was never intended. There is, I readily agree, plenty of scope for arguing about what the default formats should be. But you can change more than is implied here. See e.g. http://www.stata.com/help.cgi?set+cformat

      -esttab- in particular (to give its correct name) is user-written, by Ben Jann. I guess Ben would plead guilty to using the defaults he likes, and he offers plenty of options to override them. If that doesn’t satisfy, the answer is simple: you can write your own program to codify your own preferences. But it is ridiculous to imply that -esttab-, even by default, is just copying Stata’s own defaults.

      I don’t know what Rob means by -estat-‘s export commands. -estat- is not a package, by the way.

      Naturally I don’t support spurious precision in output any more than anyone else, but plenty of others can make that case.

  8. Beck’s rules are excellent, except for the ‘significant’ one. The word ‘significant’ should be allowed only as a synonym for ‘important’ and the term ‘statistically significant’ should be banned altogether. An effect can be ‘statistically significant’ but unimportant, or tremendously important but not ‘statistically significant.’ Whoever chose the term ‘statistically significant’ made a big mistake and it is time to stop perpetuating it.

    As far as improvements to R, I would also like a better ls() function. I’d like to easily see the type of each object (list, data frame, vector, function) and the size, and to have a less laborious way of seeing objects whose names include certain patterns. I realize this isn’t a statistical issue the way the rounding defaults and histograms are, but these features would have a bigger effect on me personally.
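
    Something along these lines, perhaps (a rough sketch; lsos is a made-up name):

        # List workspace objects with their class and size, optionally filtered:
        lsos <- function(pattern = ".*", envir = .GlobalEnv) {
          nms <- ls(envir = envir, pattern = pattern)
          data.frame(
            name  = nms,
            class = vapply(nms, function(n) class(get(n, envir = envir))[1], ""),
            bytes = vapply(nms, function(n) as.numeric(object.size(get(n, envir = envir))), 0)
          )
        }
        x <- rnorm(1e5); d <- data.frame(a = 1:3)
        lsos()                # every object: name, class, size in bytes
        lsos(pattern = "^d")  # only objects whose names start with "d"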

  9. With the appropriate archiving of raw data for reproducible research, the “nicing up” of summary results presents no problems.

    But if this remains as it usually is – that all that is available later for re-analysis and meta-analysis is the publication – then the impact of information loss needs to be considered. For instance, (severe) rounding of p-values can make their later combination (much) less valuable than it could have been; a toy sketch at the end of this comment illustrates this.

    An appendix of unrounded results would easily overcome this.

    And it does seem to favour censoring of information (usually thought to be distracting) over encouraging better information processing by the reader …
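
    Here is the toy sketch mentioned above (Fisher’s method is my choice, just for concreteness):

        # Combine p-values across studies with Fisher's method:
        fisher <- function(p) pchisq(-2 * sum(log(p)), df = 2 * length(p), lower.tail = FALSE)

        p_exact    <- c(0.012, 0.034, 0.047)  # as archived, unrounded
        p_reported <- rep(0.05, 3)            # each merely reported as "significant at .05"
        fisher(p_exact)     # ~0.001
        fisher(p_reported)  # ~0.006: evidence lost to the reporting convention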
