Whassup with the weird state borders on this vaccine hesitancy map?

Luke Vrotsos writes:

I thought you might find this interesting because it relates to questionable statistics getting a lot of media coverage.

HHS has a set of county-level vaccine hesitancy estimates that I saw in the NYT this morning in this front-page article. It’s also been covered in the LA Times and lots of local media outlets.

Immediately, it seems really implausible how big some of the state-border discontinuities are (like Colorado-Wyoming). I guess it’s possible that there’s really such a big difference, but if you check the 2020 election results, which are presumably pretty correlated with vaccine hesitancy, it doesn’t seem like there is. For example, estimated vaccine hesitancy for Moffat County, CO is 17% vs. 31% for neighboring Sweetwater County, WY, but Trump’s vote share was actually higher (81%) in Moffat County than in Sweetwater County (74%).

According to HHS’s methodology, they don’t actually have county-level data from their poll (just state-level data), which isn’t too surprising. This is how they arrived at the estimates:

It’s not 100% clear to me what’s skewing the estimates here, but maybe there’s some confounder that’s making the coefficient on state of residence much too big — it could be incorporating the urban/rural split of the state, which they don’t seem to adjust for directly. I guess the way to check if this analysis is wrong would be to re-run it to try to predict county-level election results and see if you get the same discontinuities (which we know don’t exist there).
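
A minimal sketch of that check in Python, assuming the county-level estimates and vote shares sit in CSV files and a list of cross-border neighbor pairs has been precomputed (all file and column names here are hypothetical placeholders):

    # Sketch: compare cross-border jumps in estimated hesitancy to jumps in
    # 2020 vote share for pairs of adjacent counties in different states.
    # File and column names are hypothetical placeholders.
    import pandas as pd

    counties = pd.read_csv("county_estimates.csv")    # fips, hesitancy_est, trump_share_2020
    pairs = pd.read_csv("border_neighbor_pairs.csv")  # fips_a, fips_b (adjacent, different states)

    merged = (pairs
              .merge(counties.add_suffix("_a"), on="fips_a")
              .merge(counties.add_suffix("_b"), on="fips_b"))

    merged["hesitancy_gap"] = (merged.hesitancy_est_a - merged.hesitancy_est_b).abs()
    merged["vote_gap"] = (merged.trump_share_2020_a - merged.trump_share_2020_b).abs()

    # If the estimates are sound, big hesitancy gaps should track big vote gaps;
    # large hesitancy gaps with tiny vote gaps flag artifactual state borders.
    print(merged.sort_values("hesitancy_gap", ascending=False)
                [["fips_a", "fips_b", "hesitancy_gap", "vote_gap"]].head(10))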

Let me know what you think. It’s strange to see results that seem so unlikely, just by looking at a map, reported so widely.

I agree that the map looks weird. I wouldn’t be surprised to see some state-level effects, because policies vary by state and the political overtones of vaccines can vary by state, but the border effects just look too large and too consistent here. I wonder if part of the problem here is that they are using health insurance status as a predictor, and maybe that varies a lot from state to state, even after adjusting for demographics?

How big is the Household Pulse Survey? The documentation linked above doesn’t say. I did some googling and finally found this document that says that HPS had 80,000 respondents in week 26 (the source of the data used to make the above map). 80,000 is pretty big! Not big enough to get good estimates for all the 3000 counties in the U.S., but big enough to get good estimates for subsets of states. For example, if we divide states into chunks of 200,000 people each, then we have, ummmm, 80K * 200K / 330 million = 48 people per chunk. That would give us a raw standard error of 0.5/sqrt(48) = 0.07 per chunk, which is pretty big, but (a) some regression modeling should help with that, and (b) it’s still enough to improve certain things such as the North Dakota / Minnesota border.
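
Here's the same back-of-the-envelope calculation in a few lines of Python, for anyone who wants to fiddle with the chunk size:

    # Back-of-envelope check of the numbers above.
    n_respondents = 80_000      # HPS week 26 sample size
    chunk_pop = 200_000         # people per within-state chunk
    us_pop = 330_000_000

    n_per_chunk = n_respondents * chunk_pop / us_pop
    se_per_chunk = 0.5 / n_per_chunk ** 0.5  # worst-case SE for a proportion

    print(round(n_per_chunk))       # ~48 respondents per chunk
    print(round(se_per_chunk, 2))   # ~0.07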

The other thing is, I guess they know the county of each survey respondent, so they can include state-level and county-level predictors in their model. The model seems to have individual-level predictors but nothing at the state or county level. It might be kinda weird to use election results as a county-level predictor, but there are lots of other things they could use.

On the other hand, the map is not a disaster. The reader of the map can realize that the state borders are artifacts, and that tells us something about the quality of the data and model. I like to say that any graph should contain the seeds of its own destruction, and it’s appealing, in a way, that this graph shows the seams.

P.S. I wrote the above post, then I wrote the title, and then it struck me that this title has the same rhythm as What joker put seven dog lice in my Iraqi fez box?

36 thoughts on “Whassup with the weird state borders on this vaccine hesitancy map?”

  1. > On the other hand, the map is not a disaster. The reader of the map can realize that the state borders are artifacts, and that tells us something about the quality of the data and model.

    This is far too generous. You should spend some time on social media to see how people actually consume journalism today. Maps like these feed partisan instincts. If you look at reactions to the map, people are using it to shame some states and not others, in part based on the sharp borders.

    • Anon:

      I don’t think it makes sense to shame any states, but I do think the survey is providing some information, and if it’s showing different average attitudes in different states, that’s useful to know. Even if the borders are not sharp, the average state differences can be a reasonable thing to look at.

      • I don’t disagree with what you just said. So write that next time!

        What you wrote was:

        > The reader of the map can realize that the state borders are artifacts, and that tells us something about the quality of the data and model.

        What reader is this? Is this a typical reader of a mass publication? No it is not. It is a highly educated, exceptional reader. When evaluating the quality of a visualization, I recommend thinking about the effects on the audience as a whole rather than just on the upper crust.

        What visualizations like this do is (a) divide readers by states, feeding partisanship; (b) mislead readers into thinking states have more power than they do; and (c) erode faith in government by making forecasts seem factual.

        These are all things you’ve written about in the past. An idealized view of the reader is not the best way to interpret the problems with this article and map.

        • Anon:

          You make good points. We can separate four issues here:

          1. The map with its sharp state borders is misleading.

          2. Average state differences are real. If the map were shown at the state level instead of the county level, we’d see sharp state borders—but then it would be clear even to uninformed readers that these represent average differences between states that happen to border each other; they would not represent sudden jumps at the border.

          3. It’s good to try to make a map at the county level, but then the mapmaker is under the burden to do some county-level modeling. It doesn’t have to be an explicitly spatial model—it’s fine to have a regression model with predictors that happen to be spatially correlated—but, as the above graph shows, it’s not enough to just include individual- and state-level data with nothing in between.

          4. From my standpoint as a statistician, I like graphs that contain the seeds of their own destruction. I like the above map for the same reason that I like the ridiculous regression discontinuity plot here, because it gives a clue as to what went wrong.

          I agree that point 4 is specialized, but it’s not quite as irrelevant as you might think! Imagine an alternative in which HHS had made the map, seen the weird state borders, and decided to fix them by passing it through a local smoother or low-pass filter. Then the map might look just fine—but it would be wrong! And in some ways this smoothed map would be even more wrong, in that the smoothing would hide the problems in the model.

        • I don’t disagree, again. But then just say that *you*, not *a reader*, are the audience that thinks these maps are ok as is.

          I think we should evaluate these maps both in terms of what statisticians can take away and the risks to a general audience. My issue is that your post treats “a reader” as if it were a general thing when it’s really just a stand-in for you!!

        • Anon:

          It’s not just me, it’s also all those people on twitter who were bothered by the graph.

          One problem is that people often don’t take graphs seriously. I remain stunned that the authors of that air-pollution-in-China paper made that graph, looked at it, and included it in their paper—without at any time realizing how ridiculous their fitted model was! I’m glad they included the graph rather than hiding it, but I’m bothered that they weren’t bothered by it.

        • Anon.. unfortunately, as great as this blog is, Gelman will just refuse to admit he was wrong, even on pretty minor points like this one. He does it in a very polite way.. but I’m not sure I’ve ever seen him fully capitulate to a commenter’s point.

        • Matt:

          I know about not feeding the trolls, but . . . I admit I was wrong all the time! It’s not about “fully capitulating.” We’re having a conversation here, not a struggle for dominance. The anonymous commenter above made some good points, which inspired further thoughts by me, which inspired further thoughts by him, etc.

        • @Matt

          Yeah, I mean it’s not so much he’s wrong as that his analysis is bad. He’d criticize it if someone else made it. I suspect he wrote this too quickly or there are some departmental ivory tower effects and he just isn’t incentivized to be interested in the impact on a general audience of bad graphics like these.

          Either way, this blog’s visualization analysis is often lacking, and audiences know it, even though I know Andrew considers it a hobby of his. I wish he’d get more up to date on how visualizations are consumed today and skip talking about “a reader” when he seems to be naive to audience engagement and consumption patterns.

          Thankfully, there’s so much other good model analysis on this site that we can turn a blind eye and vent in these comments :)

        • Andrew, I’m not really trolling. I do think you do a lot of goal-post shifting in your replies. A lot of your posts are clearly written in haste, which is totally fine, but it means you aren’t going to have fully thought through a lot of things. Anyways, I agree it was a useless comment by me. Carry on.

        • Anon:

          Just to clarify because it seems to have been lost in this comment thread: I think the model has problems! I think that map is bad! When I say the map is kind of good, I mean that it’s kind of good because it reveals how bad the model is. The map is good . . . because it reveals so clearly how bad it is . . . ideally motivating the producers of the map to do better in the future . . . and in the meantime motivating lots of people (such as us) to criticize it.

          I thought the above was implied by my above statement, “On the other hand, the map is not a disaster. The reader of the map can realize that the state borders are artifacts, and that tells us something about the quality of the data and model. I like to say that any graph should contain the seeds of its own destruction, and it’s appealing, in a way, that this graph shows the seams.” But I guess that this comment thread shows that this paragraph was not clear to everyone, so I guess I was wrong in how I wrote it.

          Finally, visualization is not “a hobby” of mine. It’s part of my research. I’ve published a lot on the topic, including theory, methods, and applications. I’m sure there’s lots to disagree with in what I’ve written, and I don’t think my work on visualization is the last word on the topic. I’m sure that my analysis on visualization, as well as on other topics, is “often lacking.” That’s true for everyone. Fortunately, it’s possible for us to make research contributions while still only seeing part of the elephant.

          In any case, I appreciate the comments; my original post was incomplete, as it did not have my points 1-4 here, nor did it get into the issues you’ve been discussing regarding confused interpretations of the graphs by naive readers.

        • Yes, your paragraph’s intent was lost on me.

          If that’s your perspective, your critique should have included points like:
          – The headline of the piece and the map should have indicated the map is meant to show the fault of the model
          – The paragraphs around the map should have discussed these particular faults
          – Annotations should have been included to make it VERY clear to any remaining audiences the map is meant to show failure

          I like the idea of showing the data to show failure. But that requires a very special touch not performed here by NYT Graphics, which does struggle at these kinds of things (see: the needle forecast).

          I actually think that would make for a great type of post in general, one that could be a series, but it runs counter to journalists’ instincts to show truth in a more literal sense.

        • What the fuck are you all talking about? The post above is 90% criticism, with two sentences at the bottom saying “the map is not a disaster” and that the reader CAN, not that the reader WILL. There’s no substantive criticism of any of the statements made, just policing that the tone wasn’t hard enough. What, did the twitter people make you so angry that anything less than personal insults is too lukewarm ivory tower for you?

        • Andrew –

          > 2. Average state differences are real.

          I guess they’re real, but what conclusions can you draw from them? I don’t see any that are meaningful, and so I think it might be more misleading than informative to look at the average state differences. My assumption is that there are much more informative variables of interest and that, essentially, choosing state-level differences is an arbitrary choice.

        • Anon –

          > What visualizations like this do is (a) divide readers by states, feeding partisanship; (b) mislead readers into thinking states have more power than they do; and (c) erode faith in government by making forecasts seem factual.

          I completely agree except I don’t really understand what you mean by (c).

  2. The model description says it has predictors at the state level. That’s what’s making the pattern here. As Andrew says, it makes sense for there to be some state-level predictor, but once you’ve done that, if the populations are more or less identical in adjoining counties, the only thing you’re going to see between those counties is the jump between the state intercepts. That would be a simple explanation.
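
    A toy version of that mechanism (all coefficients made up) shows how the whole cross-border jump falls out of the state intercepts when the covariates match:

        # Toy illustration: identical covariates on both sides of a border,
        # so the entire predicted gap comes from the state intercepts.
        # All numbers here are made up.
        from scipy.special import expit  # inverse logit

        alpha = {"CO": -1.8, "WY": -1.0}  # hypothetical state intercepts
        beta_age, beta_rural = 0.02, 0.5  # hypothetical individual-level coefficients
        age, rural = 45, 1                # same person profile in both counties

        for state in ("CO", "WY"):
            p = expit(alpha[state] + beta_age * age + beta_rural * rural)
            print(state, round(p, 3))     # CO 0.401, WY 0.599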

    • Philip:

      What Bergstrom says is reasonable, but I’m not sure he’s correct when he says these are “state-level survey data.” I think these are individual-level survey data that happen to be analyzed at the state level. I didn’t notice anything inherently state-level about the data. (This is different than our radon example, where separate surveys really were conducted in different states.) So I think the sharp borders on the above map are an analysis problem, not a data problem.

  3. Imo, there are entirely too many comparisons between states going on that are based on aggregating data at the state level.

    Just got off a Twitter thread with a certain well-known (Nobel-winning no less) scientist comparing covid outcomes between Florida and California, with no control for confounding variables other than age-adjustment.

    Looking at the within-state diversity in the predictive variables for covid outcomes, what’s the real value in treating the states as undifferentiated entities and then comparing across them?

    I suppose there might be SOME value in such a comparison, but imo, in the end it brings a negative return, in that it sheds little light but does much to confirm ideological or other biases. People who have firm beliefs about the advisability of NPIs or the benefits of a Republican vs. a Democratic governor just take those comparisons and run with them. But is there really any value, given the diversity within states, the non-similarity across states, and the complexity of (unaccounted-for) interaction effects?

  4. The publicly available version of the Pulse dataset does not have a variable for what county the person lives in. My guess is that the estimated coefficients on the state dummy variables are noisy, both because of the lack of state-level predictors and because they did not estimate a hierarchical model. Also, the Pulse survey does stratified random sampling with the 50 states, D.C., and 16 large Metropolitan Statistical Areas (MSAs) as the strata, so not adjusting for the MSA is suspicious. Finally, the Pulse survey has a response rate of about 5%, and I would worry about whether the 5% of people who answer a covid survey have different propensities to get vaccinated than the people who are not in the survey but have the same demographics.

    • Ben:

      Interesting. Given all this, would you suggest they just make a state-level map? Or if they’re gonna do a map at the county level, how can they get something more reasonable?

      • One would think that the HHS could get access to the non-public Pulse data (that has addresses) or just access the Census Bureau to run the logistic regression on the non-public Pulse data. But I doubt HHS would estimate a model the way we would (hierarchical, with county-level or, better, congressional-district-level grouping; 2020 vote percentage as a higher-level predictor; poststratification to match actual local vaccination rates; etc.).
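
        For concreteness, here is a minimal sketch of that kind of model in PyMC. Everything in it is hypothetical (toy data, index arrays, and names); it shows the general shape, not HHS’s actual specification, and poststratification would be a separate step:

          # Hierarchical logistic sketch: individual predictors, county
          # intercepts partially pooled within states, and a county-level
          # predictor (2020 vote share). All data below are fake toy inputs.
          import numpy as np
          import pymc as pm

          rng = np.random.default_rng(0)
          n_states, n_counties, n_resp = 3, 9, 200
          state_of_county = np.repeat(np.arange(n_states), 3)     # county -> state
          county_of_resp = rng.integers(n_counties, size=n_resp)  # respondent -> county
          vote_share_c = rng.uniform(-0.5, 0.5, n_counties)       # centered vote share
          X = rng.normal(size=(n_resp, 2))                        # individual covariates
          hesitant = rng.integers(0, 2, n_resp)                   # fake 0/1 outcome

          with pm.Model():
              beta_vote = pm.Normal("beta_vote", 0, 1)            # county-level predictor
              a_state = pm.Normal("a_state", 0, 1, shape=n_states)
              sigma_c = pm.HalfNormal("sigma_c", 1)
              # county intercepts pooled toward their state mean, shifted by vote share
              a_county = pm.Normal("a_county",
                                   a_state[state_of_county] + beta_vote * vote_share_c,
                                   sigma_c, shape=n_counties)
              beta = pm.Normal("beta", 0, 1, shape=2)             # individual predictors
              eta = a_county[county_of_resp] + pm.math.dot(X, beta)
              pm.Bernoulli("y", logit_p=eta, observed=hesitant)
              idata = pm.sample()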

        • “One would think that the HHS could get access to the non-public Pulse data (that has addresses) or just access the Census Bureau to run the logistic regression on the non-public Pulse data.”

          +1. Forgoing obvious solutions like this increases my blood pressure. And if the Census Bureau for some reason just won’t share, why not use a spatial method to disaggregate the Pulse data? Either way it seems like laziness on the part of HHS.

      • “would you suggest they just make a state-level map”

        No. They already have a state-level map that’s clearly wrong. :( In the current map the variation within states is so small (rarely more than two brackets) that it would just disappear into the currently dominant bracket. Likely the subordinate bracket in most states is just barely over the line from the dominant bracket.

        In any realistic map the white or light purple of Napoleon Dynamite country (S Idaho) would extend comfortably up into most of eastern Washington state, except around WSU and Spokane. Even the Canadians in that part of the world own AKs and MAGA hats.

        • Jim:

          Sure, but we’re used to seeing electoral maps of the U.S. and we know that even though NY state is blue, it includes some Republican-voting areas. Another way to do it would be to represent each state by a circle.

        • I agree that the expected behavior of people in much of southern Idaho would be mirrored by those in the rural areas of eastern Washington. But you should also look south. Box Elder County, Utah borders on Cassia County, Idaho and has similar demographics (albeit Box Elder has somewhat higher incomes). But the forecasts are quite different.

          Similarly, you would not expect sharp gradients at the boundaries with northern Nevada or eastern Oregon.

          Bob76

  5. If I may, as an occasional reader and not in the “trade,” compliment the host (especially) and the commenters on the high level of discourse in general, and on this post in particular. To me the finer points illustrate how difficult it is to manage our human tendencies.
    I did come away with some new insights, which might have been clouded if my reaction had been affected by implied motivation.

    • Really? More than half of the comments revolve around a rather peculiar remark made by an anonymous participant. The rhetorical question posed by Somebody summed up my thoughts rather nicely: wtf.

      This has not been a particularly illuminating discussion at least as far as the history of this blog goes. On the other hand, it isn’t the worst we’ve had.

      • Allanc:

        The back-and-forth may have gone on too long, but I appreciated the opportunity to expand on my thoughts here, which I wouldn’t have thought to do except in response to the comment.

  6. There are multiple traps waiting when such maps are constructed from regression estimates. As soon as I saw the map I started thinking about Trump supporters and vaccine hesitancy – and judging by the comments I am probably not the only one. But the map does not display vaccine hesitancy – it displays presumed vaccine hesitancy based on regression estimates using predictor variables also likely to correlate with political views. And even if it displayed raw data, there would still be the ecological fallacy to worry about. I wonder if such maps should come with a summary of what really can and cannot be said about political views and vaccine hesitancy.

  7. This seems to be the most likely culprit:

    “Public Use Microdata Areas (PUMA) level – PUMAs are geographic areas within each state that contain no fewer than 100,000 people. PUMAs can consist of part of a single densely populated county or can combine parts or all of multiple counties that are less densely populated. Detailed maps of PUMAs for each state are available at: https://www.census.gov/geographies/reference-maps/2010/geo/2010-pumas.html

    3. County level – to create county-level estimates, we used a PUMA-to-county crosswalk from the Missouri Census Data Center. PUMAs spanning multiple counties had their estimates apportioned across those counties based on overall 2010 Census populations.”

    All the PUMAs are within a state, and there are lots of border counties that have far fewer than 100,000 people. The data used to get those counties’ estimates would come from areas farther from the border, so you get less similarity between bordering counties in different states. The weights from the HPS (which I assume were used to get the coefficients) are also adjusted by state-level controls: https://www2.census.gov/programs-surveys/demo/technical-documentation/hhp/Phase3_Source_and_Accuracy_Week_27.pdf, which might encourage counties within a state to look more like each other than like neighboring counties across the state line.
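
    For what it’s worth, the apportionment step the methodology describes looks roughly like this (file and column names are placeholders):

        # Each county gets a 2010-population-weighted average of the estimates
        # of the PUMAs overlapping it. Since PUMAs never cross state lines, a
        # border county can only borrow strength from within its own state.
        # File and column names are hypothetical placeholders.
        import pandas as pd

        puma_est = pd.read_csv("puma_estimates.csv")      # state, puma, hesitancy_est
        xwalk = pd.read_csv("puma_county_crosswalk.csv")  # state, puma, county_fips, pop_2010

        merged = xwalk.merge(puma_est, on=["state", "puma"])
        merged["weighted"] = merged.pop_2010 * merged.hesitancy_est

        sums = merged.groupby("county_fips")[["weighted", "pop_2010"]].sum()
        county_est = sums.weighted / sums.pop_2010        # one estimate per county
        print(county_est.head())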
