John Sterman’s /Business Dynamics/ is a good if hefty and somewhat encyclopedic text on system dynamics. Click on Tom Fiddaman’s name to see some examples, or click on my name, too (he’s got a higher density of system dynamics per post than I do).

In any case, the quote is at the end of the paper, and for anyone vaguely aware of atmospheric stocks and flows (i.e. that temperature and precipitation depend on the integral of the radiative effects of CO2), as Pielke surely is, alarm bells should have been going off long before.
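To make the stock-and-flow point concrete, here is a minimal sketch of my own (a toy model with arbitrary numbers, not anything from the paper): a temperature-like stock that integrates the forcing from a CO2 stock keeps rising even when the emissions flow is held flat.

```python
# Toy stock-flow model: a "temperature" stock integrates radiative forcing
# from the CO2 stock, which in turn accumulates an emissions flow.
# Illustrative only; all numbers are arbitrary.

def simulate(years=100, dt=1.0):
    co2 = 300.0          # ppm, arbitrary starting stock
    temp = 0.0           # temperature anomaly, arbitrary units
    emissions = 2.0      # ppm/year inflow, held constant
    sensitivity = 0.001  # warming per ppm-year of excess CO2
    baseline = 300.0
    history = []
    for _ in range(int(years / dt)):
        co2 += emissions * dt                        # stock accumulates the flow
        temp += sensitivity * (co2 - baseline) * dt  # temp integrates the forcing
        history.append((co2, temp))
    return history

hist = simulate()
# Even with a constant emissions flow, both stocks keep rising:
assert hist[-1][0] > hist[0][0]
assert hist[-1][1] > hist[0][1]
```

So a regression of flood magnitudes on the CO2 level, rather than on what CO2 has accumulated into, is asking the wrong question of the data.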

I followed your link. At the end of your post from 2011 you write:

Actually, the paper doesn’t even address whether floods are increasing or decreasing. It evaluates CO2 correlations, not temporal trends. To the extent that CO2 has increased monotonically, the regression will capture some trend in the betas on CO2, but it’s not the same thing.

But I followed your link to Pielke’s page, and in the comments of his own blog he gives this quote from the paper in question:

What these results do indicate is that except for the decreased flood magnitudes observed in the SW there is no strong empirical evidence in any of the other 3 regions for increases or decreases in flood magnitudes in the face of the 32% increase in GMCO2 that has taken place over the study period.

In a sense this is irrelevant if, as you argue, that paper is seriously flawed. Still, it does not seem that Pielke misread it.

http://blog.metasd.com/2011/10/linear-regression-bathtub-fail/

That said, only an economist would call a “flow” “stationary”.

But it’s not an obvious concept — when I tried to show that the difference between a probability and a probability density can be explained by thinking about the units, I got a lot of blank stares.
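A quick way to make the units point concrete (my own sketch, not from the discussion above): a density for a variable measured in, say, metres has units of 1/metre, so its value can happily exceed 1; only the integral — density times a width, units cancelling — is a dimensionless probability.

```python
import math

def normal_pdf(x, mu=0.0, sigma=0.01):
    # A density has units of 1/x, so nothing stops it exceeding 1.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

print(normal_pdf(0.0))  # ~39.9 -- a density, not a probability

# Probability = density * width (units cancel); it integrates to ~1.
dx = 1e-4
total = sum(normal_pdf(-0.05 + i * dx) * dx for i in range(1000))
print(total)  # close to 1
```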

Some ideas for your list:

It’s a good habit to drill undergrads never to calculate anything without explicitly writing all units & ensuring the units cancel out. As an aside, it also helps you handle messy mixed-units situations.

Another sanity check is to make sure nothing in a formula that’s inside an exp() or ln() etc. ever has a leftover unit.
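Both of these checks can be mechanised. A minimal sketch (a hypothetical helper of my own, not a real library) that tracks unit exponents through multiplication and division, and refuses to exponentiate anything that still carries a unit:

```python
import math

class Q:
    """Toy quantity: a value plus a dict of unit exponents, e.g. {'m': 1, 's': -1}."""
    def __init__(self, value, units=None):
        self.value = value
        self.units = {u: p for u, p in (units or {}).items() if p != 0}

    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p
        return Q(self.value * other.value, units)

    def __truediv__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) - p
        return Q(self.value / other.value, units)

    def is_dimensionless(self):
        return not self.units

def safe_exp(q):
    # exp() only makes sense for a pure number: the units must have cancelled.
    if not q.is_dimensionless():
        raise ValueError("exp() argument has leftover units: %s" % q.units)
    return math.exp(q.value)

t = Q(3.0, {"s": 1})      # time, in seconds
tau = Q(1.5, {"s": 1})    # time constant, in seconds
print(safe_exp(t / tau))  # seconds cancel, so this is fine
# safe_exp(t) would raise, because a bare time can't sit inside exp().
```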

Also, remember to check extreme conditions in any formula to see if it yields the expected results, e.g. t=0, x=infinity, y=x, etc. This has helped me catch bugs / typos / model errors so often.
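For instance, a standard first-order goal-seeking formula can be spot-checked at its extremes (the formula is a textbook one, used here purely as an illustration):

```python
import math

def adjust(s0, goal, tau, t):
    # Exponential adjustment of a stock toward a goal, with time constant tau.
    return goal + (s0 - goal) * math.exp(-t / tau)

# Extreme-condition checks:
assert adjust(10.0, 50.0, 5.0, 0.0) == 10.0             # t = 0 returns the start
assert abs(adjust(10.0, 50.0, 5.0, 1e6) - 50.0) < 1e-9  # t -> infinity reaches the goal
assert adjust(50.0, 50.0, 5.0, 3.0) == 50.0             # already at the goal: no change
```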

Yet another is to always be wary & careful with any naked numerical constants in formulae, e.g. the 180 in this one: http://en.wikipedia.org/wiki/Kozeny%E2%80%93Carman_equation
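One defence is to give every naked constant a name and a comment at its definition. A sketch of the habit (the Kozeny–Carman form below is the one for packed beds of spherical particles; consult the linked page before relying on it):

```python
# Kozeny-Carman constant for packed beds of spheres; empirical, not derived.
KOZENY_CARMAN_CONSTANT = 180.0

def pressure_drop_per_length(viscosity, velocity, porosity, particle_diameter):
    """Pressure gradient (Pa/m) through a packed bed of spheres, SI units throughout."""
    return (KOZENY_CARMAN_CONSTANT * viscosity * velocity
            * (1.0 - porosity) ** 2
            / (porosity ** 3 * particle_diameter ** 2))
```

The point is less the formula than the habit: a named constant with a comment is much harder to mistype or misapply than a bare 180 sitting in an expression.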

PS. Maybe this is too basic & obvious for the readers here. But it might help beginners / students.

http://www.amazon.com/Sense-Style-Thinking-Person%C2%92s-Writing/dp/0670025852/ref=asap_bc?ie=UTF8

More seriously, if how to deal with feedback effects is not typically taught, where is a good place to learn about it?

Yes, logs are often not understood. One thing I do in teaching is to emphasize that although we tend to count additively (on our fingers: 1, 2, 3, …), Nature often counts multiplicatively (1, 2, 4, 8, …). So taking logs is just translating Nature’s way into our more usual way (and logs base 2 are often easier to interpret than natural logs or logs base ten!).
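A small sketch of the point: on a doubling sequence, log base 2 recovers ordinary additive counting, which is exactly why it is often the most readable base.

```python
import math

doublings = [1, 2, 4, 8, 16, 32]           # Nature counting multiplicatively
steps = [math.log2(x) for x in doublings]  # our additive count of the doublings
print(steps)  # 0, 1, 2, 3, 4, 5
```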

Hmm — maybe I’d better post some of my exercises from the prob/stat course I taught for teachers — both these points are addressed in them.

I must be missing something important about his point, yes?

But people in many branches of statistical science could certainly learn from the natural scientists’ habit of keeping track of dimensions and units of measurement as utterly routine and essential. I’ve seen researchers, not otherwise naive, looking at regression coefficients and failing to realise that most of what they were seeing was a side-effect of differing units. Citing the variance as a univariate descriptive statistic is usually pointless, as its units of measurement often make little sense to researchers, even when they know it carries different units from the data. That, of course, is the reason for standard deviations.
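A concrete sketch of both points (my own toy data): rescaling a variable from metres to centimetres multiplies the variance by 100², because it carries squared units, while the standard deviation simply rescales along with the variable.

```python
import statistics

heights_m = [1.60, 1.70, 1.75, 1.80, 1.90]
heights_cm = [h * 100 for h in heights_m]

# Variance carries squared units, so it explodes under a unit change...
print(statistics.variance(heights_m))   # 0.0125  (m^2)
print(statistics.variance(heights_cm))  # ~125    (cm^2): 10,000x larger

# ...while the standard deviation stays in the variable's own units.
print(statistics.stdev(heights_m))   # ~0.112 m
print(statistics.stdev(heights_cm))  # ~11.2  cm
```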

The question of why people don’t plot much more is interesting, important, and intricate. The answers are usually tacit, but here are some I’ve encountered:

1. I’m fitting a model with several variables and/or a large dataset. No graphs could represent that adequately; therefore I show no graphs.

2. Graph interpretation is subjective and lacking in rigour.

3. People in my field expect tables, not graphs. If I submit tables as expected and instructed, graphs will be rejected as at best repeating the information given precisely in the tables.

4. I work with categorical data, and categorical data can’t be graphed except trivially. You expect me to show pie charts?

5. Graphs are for showing the obvious to the ignorant (nod to Edward Tufte’s wording of a view he does not endorse), so would just raise suspicions that I am not doing cutting-edge research.

6. The effects I am detecting are too subtle to be shown graphically. The result is highly significant, but not important.

7. No graph I could devise supports the argument I am trying to convey. (#6 made more general.)

I should perhaps emphasise that I do know lots of really good reasons for graphing data. I am just trying to diagnose thinking on the other side.
