Statistics for firefighters: update

Following up on our earlier discussion, Daniel Rubenson from Ryerson University in Toronto writes:

The course went really well (it was a couple of years ago now). The course was run through a partnership my department has with the Ontario Fire College. Basically, firefighters can do a certificate and sometimes a degree in public administration and part of that is a course on methods. It was a small group — about 8 or so — very motivated guys (all guys). Some of them were chiefs or deputy chiefs from small towns, others captains who were doing the certificate in order to improve their chances for promotion or as a step into a broader public admin career.

I had asked them ahead of time to bring with them whatever data they could get their hands on that they thought would be interesting. This included response times, data on professional vs. volunteer firefighters, some insurance data, and the like.

I should mention that it was an intensive-mode course. So we had 4.5 days together, none of the students had any background in statistics whatsoever, and most had been out of university for several years.

With this in mind, I started by doing a few exercises to get them thinking about data and numbers. These came mostly from your book, Andrew (Teaching Statistics: A Bag of Tricks). We spent a morning playing around with these sorts of exercises and then transitioned into some lecture time going through the ideas of taking a concept and moving toward a variable, thinking about measurement, etc. In other words, some of the background building blocks for getting to the point where we can start to do some analysis of data and answer questions.

We did an intro to very basic descriptive statistics and the students worked in pairs with the data they’d brought as well as other data I came with or had simulated. I found with this type of group in particular that working in pairs or groups was very useful. We did exercises teaching concepts of probability and different distributions. We did the candy weighing exercise (always a big hit).

We also spent a fair amount of time talking and thinking about what, for lack of a better term, would be called research design. This included a discussion about confounding. Lots of nice examples related to the firefighting world here: e.g., why do fires/fire damage/whatever seem to increase with the increase in smoke detectors?

Along the way we worked our way up to bivariate tests and ended with a gentle introduction to regression.

To be honest, the lesson plan kind of flew out the window and we largely played it by ear during the week we had. But I kept a few “big picture” ideas in mind as we went along. The main goal was to get them up to the point where they could think about (and usually do) some simple but useful analysis in their everyday work. And also to be able to understand and react to reports and the like coming from city governments and so on. So one of the other things we did was work on statistical literacy by reading, critiquing, and interpreting reports.

I think one of the main takeaways for me was that it’s important with this kind of group — and also because of the intensive format — to really mix it up in terms of lecturing, exercises, games, group work, short assignments and quizzes, etc.

And Brent Van Scoy from Omaha Fire in Nebraska writes:

I am not certain what fire code Canada follows, but most of the local departments are trying to become compliant with NFPA (National Fire Protection Association) 1710, which covers fractile response times. There are many different levels, but basically it measures different periods of time for fire department responses. As an example, the time from dispatch until the unit leaves the engine house, and also the time it takes for the unit to “travel” to the call. It includes many other variables depending on the type of incident, but as a general rule it is a fairly straightforward measurement of time. The code requires that 90% of the time a unit should arrive within 5 minutes, which would seem pretty straightforward, but the controversy comes with how it is interpreted. Read literally, the code is a Boolean: either you make the objective or you don’t. Others instead look at the average time of all the calls and consider the code met if the average is below 5 minutes. Part of my job is presenting that information in a way that will help them understand the range of calls that were greater than 5 minutes.
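The two readings of compliance Brent describes — "90% of calls within 5 minutes" versus "average below 5 minutes" — can disagree on the same data, because response times are typically right-skewed. A minimal sketch in Python (the simulated times and lognormal parameters are assumptions for illustration, not department data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical travel times in minutes: most calls are quick,
# with a long right tail of slow responses (lognormal assumption).
times = rng.lognormal(mean=1.2, sigma=0.5, size=1000)

mean_time = times.mean()
p90 = np.percentile(times, 90)
frac_within_5 = (times <= 5.0).mean()

print(f"mean response time:        {mean_time:.2f} min")
print(f"90th percentile:           {p90:.2f} min")
print(f"share arriving within 5:   {frac_within_5:.1%}")
# With a right-skewed distribution, the "average" reading can pass
# (mean < 5 min) while the fractile reading fails (fewer than 90%
# of calls arrive within 5 minutes).
```

With these parameters the mean comes out under 5 minutes while only roughly 80% of calls arrive within 5 — so the department "passes" under the average interpretation and "fails" under the fractile one, which is exactly the controversy.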

If you have a few minutes, this article is worth your time (Los Angeles Fire). I feel sorry for the guy/gal who has to deal with that mess! Luckily my job is not nearly that complicated. I simply create histograms and other charts in Excel to graph the sample of our incidents and times, then present it to the Fire Chief every quarter. It is pretty low-level stuff, but I am trying to become more efficient in my job, so I take a great interest in any article I can find on fire department statistics. We use our data to help us place units/rigs in different parts of the city to help improve our NFPA 1710 compliance.


  1. Chris says:

    I know you like to highlight ridiculously bad graphs once in a while, but I think this deserves the prize for “awful chart of the year”.

    Setting aside the absence of any real concern for identifying causality, the chart features a proportion (or indexed value — it’s not obvious) and a dollar value on the *same axis*!

  2. Steve Sailer says:

    There should be a course called “Statistics for Judges in Firefighter Lawsuits:”

    • Or more generally, “Statistics for Judges”?

      • K? O'Rourke says:

        It is (used to be?) a bit of a chicken and egg problem in that law students can’t waste time learning statistical arguments that judges are unlikely to grasp and judges are _too old_ to learn (i.e. already too comfortable being successful and wise with what they already know).

        Recall there was some movement in Scotland? to try and address this…

        Would be interested to get some comments from people up to date and actually involved in teaching statistics _for use in the courtroom_ (i.e. meshed with rules of evidence and judicial roles, etc.)

        The link from Steve Sailer is actually very scary about the damage judges who don’t understand statistics (almost all of them?) can do. My favourite example is the quote from a judge in reaction to a presentation of an estimate and its standard error: “I will not have errors in my court – standard errors or otherwise.”

  3. Steve Sailer says:

    Here’s an analysis of the statistical reasoning employed by Judge Nicholas G. Garaufis in his landmark 2009 decision in “Vulcan Society v. Fire Department of New York:”

  4. Rahul says:

    @Andrew, @Daniel Rubenson

    Have you had anyone in the “candy weighing exercise” weigh 5 candies chosen so that they roughly represent each class (small, medium, large) and scale up according to the estimated number of each type? I wonder how the bias/accuracy of that estimate runs.

    • Andrew says:


      Almost all the estimates I get are way too large. But in a class of 40, in which the students pair up so I get 20 estimates, typically one or two of the estimates out of 20 are close to the true value. When this happens, I ask the students how they did so well. Sometimes they have no explanation, other times they respond that they were trying to get a sample with a representative set of small, medium, and large items.
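The exchange above — haphazard grabs running way too high, representative samples landing near the truth — can be checked in a quick simulation. Everything here is a made-up sketch: the candy counts and weights are invented, and modeling a "grab" as selection with probability proportional to weight is just one plausible stand-in for the classroom bias:

```python
import random

random.seed(1)

# Hypothetical bag: 60 small (~3 g), 30 medium (~8 g), 10 large (~20 g),
# with a little within-class weight variation.
def make_bag():
    bag = []
    for cls, n, mu in [("small", 60, 3.0), ("medium", 30, 8.0), ("large", 10, 20.0)]:
        bag += [(cls, random.gauss(mu, 0.1 * mu)) for _ in range(n)]
    return bag

def naive_estimate(bag):
    # Haphazard grab of 5: bigger pieces are easier to pick, modeled
    # here as selection probability proportional to weight.
    picks = random.choices(bag, weights=[w for _, w in bag], k=5)
    return sum(w for _, w in picks) / 5 * len(bag)

def stratified_estimate(bag, counts={"small": 60, "medium": 30, "large": 10}):
    # Rahul's scheme: weigh one candy per size class, scale each by the
    # number of candies of that class (assumed known or well estimated).
    est = 0.0
    for cls, n in counts.items():
        w = random.choice([w for c, w in bag if c == cls])
        est += n * w
    return est

bag = make_bag()
total = sum(w for _, w in bag)
trials = 2000
naive_avg = sum(naive_estimate(bag) for _ in range(trials)) / trials
strat_avg = sum(stratified_estimate(bag) for _ in range(trials)) / trials

print(f"true total      : {total:.0f} g")
print(f"naive (grab 5)  : {naive_avg:.0f} g  (size-biased, runs high)")
print(f"stratified      : {strat_avg:.0f} g")
```

Under this weight-proportional grab model, the naive estimate overshoots the true total substantially (the size-biased mean is E[w²]/E[w], well above the plain mean), while the stratified estimate is roughly unbiased as long as the class counts are right — consistent with the "representative sample" students landing near the truth.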