Can we stop talking about how we’re better off without election forecasting?

This is a public service post of sorts, meant to collect in one place some reasons why getting rid of election forecasts is a non-starter.

First to set context: what are the reasons people argue we should give them up? This is far from an exhaustive list (and some of these reasons overlap) but a few that I’ve heard over the last week are: 

  • If the polls are right, we don’t need forecasters. If polls are wrong, we don’t need forecasters.
  • Forecasts are hard to evaluate, and therefore subject to the forecaster’s own goals, e.g., not appearing so certain that they can be blamed if things go wrong. Hence we can’t trust them as unbiased aggregations of evidence.
  • Forecasters may have implicit knowledge from experience, such as a sense of approximately what the odds should be, but it’s hard to incorporate that knowledge transparently and systematically. When a forecaster ‘throws in a little error here, throws in a little error there’ to get the uncertainty they want at the national level, they can end up with model predictions that defy common sense in other ways, calling into question how coherent the predictions are. The ways we want forecasts to behave may sometimes conflict with probability theory.
  • There’s too much at stake to take chances on forecasts that may be wrong but influence behavior. 

I don’t think these questions are unreasonable. But it’s worth considering the implications of a suggestion that forecasts have no clear value, or even do more harm than good, since I suspect some people may jump to this conclusion without recognizing the subtext it entails. Here are some things I think of when I hear people questioning the value of election forecasts: 

#1 – A carefully constructed forecast is (very likely to be) better than the alternative. Or to quote a Bill James line Andrew has used, “The alternative to good statistics is not no statistics, it’s bad statistics.”

What would happen if there were no professional forecasts from groups like the Economist team or professional forecasters like Nate Silver? A deep stillness as we all truly acknowledge the uncertainty of the situation does not strike me as the most likely scenario. Instead, people may look to the sorts of overreactions to polls that we already see in the media to tell them what will happen, without referring back to previous elections. Or maybe they anxiously query friends and neighbors (actually there’s probably some valuable information there, but only if we aggregate across people!), or extrapolate from the attention paid to candidates on public television, or how many signs they see in nearby yards or windows, or examine tea leaves, look at entrails of dead animals, etc. 

One alternative that already exists is prediction markets. But it’s hard to argue that they are more accurate than a carefully constructed forecast. For instance, it’s not clear we can really interpret the prices in a market as aggregating information about win probabilities in any straightforward way, and there are reasons to think they don’t make the best use of new data. They can also produce strange predictions at times, like giving Trump a >10% chance of winning Nevada even after the state had been called by some outlets.

Even in seemingly “extreme” cases like 2016 or 2020, where bigger-than-anticipated poll errors made forecasts seem overconfident about a Biden win in various ways, forecast models do a better job than reports on the polls themselves by accounting systematically for sampling and non-sampling polling errors, and to some degree, if imperfectly, for unanticipated polling error. Relative to poll aggregators, forecasts make use of fundamentals, like regularities in previous elections, to interpret seeming shifts.
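To make “combining polls with fundamentals” concrete, here is a minimal sketch in Python, my own toy illustration rather than the Economist’s or FiveThirtyEight’s actual model: a normal-normal Bayesian update in which a fundamentals-based prior on the national two-party vote share is combined with a poll average. All the numbers below are hypothetical.

```python
# Toy sketch (not any outlet's actual model): combine a fundamentals-based prior
# with a poll average via a normal-normal Bayesian update. Numbers are hypothetical.
def combine(prior_mean, prior_sd, poll_mean, poll_sd):
    """Posterior mean and sd for a normal prior and a normal poll 'likelihood'."""
    prior_prec, poll_prec = 1.0 / prior_sd**2, 1.0 / poll_sd**2
    post_var = 1.0 / (prior_prec + poll_prec)
    post_mean = post_var * (prior_prec * prior_mean + poll_prec * poll_mean)
    return post_mean, post_var**0.5

# Hypothetical inputs: fundamentals (economy, incumbency) suggest ~51% of the
# two-party vote with wide uncertainty; the poll average says ~54% more precisely.
mean, sd = combine(prior_mean=0.51, prior_sd=0.03, poll_mean=0.54, poll_sd=0.015)
print(f"posterior: {mean:.3f} +/- {sd:.3f}")
```

The point is the weighting: an apparent poll swing gets pulled toward what fundamentals and history say is plausible, rather than being taken at face value.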

Some arguments that forecasts are no longer valuable point to 2016 and 2020 as evidence that they’ve lost their utility because the polls are broken. But polls can be broken in different ways, and without other information like fundamentals to fall back on, or aggregation methods that smooth out bumps, it can be very hard to know what to pay attention to when incoming information seems to be in disagreement. A forecasting model can be useful for helping us figure out which information deserves more attention. We can argue about whether, for this particular election, the average person’s intuitive sense of the probability of winning would really have been worse had they not seen a forecast, but that strikes me as somewhat of a straw-man comparison. Like any approach we take to reasoning under uncertainty, forecasting needs to be interpreted over the long term. That some aspects of elections appear predictable if we gather the right signals seems hard to dispute. None of the adjustments a forecast makes will be perfect, but they’re beyond what the average journalist will do to put new information in context during an election cycle.

#2 – Saying election forecasts are dangerous because they can be wrong but still influence behavior is a slippery slope.

Let’s consider the implications of the argument that we should stop trying to do them because they can influence behavior in a situation where there’s a lot at stake. Zeynep Tufekci’s recent New York Times article makes many good points that echo challenges we described with evaluation, and it includes some anecdotes that might seem to support getting rid of forecasts entirely because of their potential influence on behavior: for example, Snowden tweeting in 2016 that it was a safe election to vote for third-party candidates, and Comey claiming he sent his letter to Congress about reopening the email investigation in part because he thought Clinton would win.

But from the standpoint of science communication, arguing that forecasts are harmful because they could mislead behavior becomes a slippery slope. Pretty much any statistic, or really any news, we present to people might inform their behavior, but also might be wrong. Where do we draw the line, and who draws it? Another way to put it: by arguing that forecasts do more harm than good, we’re implying that it can be OK to censor information for people’s own good, since they won’t be able to make good choices for themselves. Taking responsibility for how information is communicated is great, but I don’t really want my news organizations deciding I can’t handle certain information. And of course, on a completely practical level, censoring information is hard. If there’s a demand, someone will try to meet it.

#3 – We have the potential to learn a lot from forecasts. 

Here I can speak from experience. This election cycle, diving into some of the detailed discussion of the Economist forecast model, in contrast to FiveThirtyEight’s, taught me a lot about election dynamics, like the importance of, and difficulty of reasoning about, between-state correlations, and the role the economic climate can play. That Nate Silver, Elliott Morris, etc. have thousands of followers on social media suggests that there’s a big group of people who care to hear about the details.
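To give a flavor of why between-state correlations matter, here is a toy simulation of my own, not a reconstruction of either team’s model: a shared national error makes two states’ outcomes move together, which changes joint probabilities like “win both states.” The vote shares and error sizes are hypothetical.

```python
# Toy simulation (hypothetical numbers): a shared national error induces
# correlation between state outcomes, which drives joint probabilities.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
national_err = rng.normal(0.0, 0.02, n)   # error shared by every state
pa_err = rng.normal(0.0, 0.02, n)         # idiosyncratic state-level errors
wi_err = rng.normal(0.0, 0.02, n)

pa = 0.52 + national_err + pa_err         # simulated two-party shares
wi = 0.53 + national_err + wi_err

print("corr(PA, WI):", round(np.corrcoef(pa, wi)[0, 1], 2))
print("P(win PA) * P(win WI):", round((pa > 0.5).mean() * (wi > 0.5).mean(), 2))
print("P(win both):          ", round(((pa > 0.5) & (wi > 0.5)).mean(), 2))
```

Under independence you would just multiply the two win probabilities; the shared error makes the joint probability noticeably higher, and that kind of structure is exactly what a forecast has to get roughly right.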

Also, while I have some prior familiarity with Bayesian statistics (not to mention that I think about uncertainty all the time), seeing these methods applied to elections has probably improved my generic statistical modeling knowledge as well. It’s a great lesson in what options we have at our disposal when trying to incorporate some anticipation of ontological uncertainty into predictions, for instance. Not to mention all the lessons about statistical communication.

I am admittedly far from the typical layperson consulting these. But if we think that we need to reach some level of assurance that our models are highly accurate before we can put them out in the world, we are eliminating opportunities for the general advancement of data literacy. This isn’t to imply that everyone who consults forecasts is learning a lot about elections; no doubt many go for easy answers that might assuage their anxiety. There are some major kinks to iron out, in how they’re framed and communicated, some of which present major challenges given that people are hard-wired to want certainty and answers. But I think we sometimes don’t give audiences credit for their ability to get more statistically literate over time. There are various types of graphics, scatterplots and animated simulations for instance, that were once uncommon to see at all in the media. It can take gradual exposure over time for certain next steps in data journalism to become part of the average news diet, but it does happen. There’s still a long way to go, but I’m pretty sure we can increase the numbers of people learning through election forecasts by finding ways to prioritize “insight”–into what matters in an election, for example–rather than just answers. I liked the Economist forecast’s visualization of state-wise correlations as a way to invite readers to judge for themselves how reasonable they seem. For the same reason I like a very simple suggestion Andrew made in a talk recently for how one could frame a prediction: “Biden could lose, but there’d be a reason for it.” Getting readers to think a little more deeply about what information a forecast might have missed seems to me like a valuable form of political engagement in itself. 

49 thoughts on “Can we stop talking about how we’re better off without election forecasting?”

      • So, whatever sells? Where does it stop? Drugs, stolen goods, weapons?

        PS. Not saying forecasts are similar in harm. Just saying that it’s a silly argument that it’s good coz it sells.

        • But in this case it’s information that people want, not crack. If we think we shouldn’t give people model predictions if we can’t guarantee their accuracy, then we’re setting people up to have some unrealistic expectations about what statistical modeling is about.

        • Agree, but more generally unrealistic expectations about what science can actually achieve.

          It can’t avoid being misleading; it can only lessen that possibility.

          So the only actual assurance science can give is that, with persistence in adequate inquiry, the ways in which we were misled (either initially or because the world has changed) _would_ eventually become apparent.

        • “So, whatever sells? Where does it stop? ”

          Well IMO it’s cool to throw a breast out on national TV at half time to make a few bucks. But allowing statisticians to rake it in? Phhh. Man that’s where I draw the line.

  1. I agree with this. I think a carefully constructed forecast can be useful in many ways, especially when accompanied by explanations and visualizations that show how the forecast could be wrong.

    By the way, there’s probably a small typo in explanation #2 – Mueller should probably be Comey.

  2. This is a minor point, but I really dislike the trend of “we need to talk about” or “we need to stop talking about.” Just tell me what the point is, and not whether I should or shouldn’t be talking about it. If you’re telling me that election forecasts are important, fine; if you’re telling me they aren’t important, fine. If you’re telling me I shouldn’t “talk about” this topic: no, you don’t get to say that.

  3. I think you need some sort of prominent disclaimer on any election forecast. Maybe something like:

    “In 2020 the Economist forecast that Joe Biden had a 96% probability of winning the presidency and he did in fact win. However if we had accurate polling data at the time we would have forecast only a 65% (?) probability of winning. This is a reminder that this forecast is largely based on polling data and the forecast will look better to the extent that polls are more accurate. We believe our model is well calibrated to historical polling errors, but there is no guarantee that future polling errors won’t be larger as improvements in polling methodology are offset by declining response rates and partisan non-response bias. Furthermore, our model is calibrated to perform well on average for many elections, but that is no guarantee of a reliable forecast in a specific election. We hope this model will help you interpret polling data and related media coverage, but when making important decisions (voting, volunteering for campaigns, donating, etc.) please remember that we can not be certain the race isn’t much closer (or that the margin isn’t much wider) than polls indicate.”

    • > However if we had accurate polling data at the time we would have forecast only a 65% (?) probability of winning.

      This is a weird metric. If we had had polls that we knew were accurate, the forecast would have been 100% for Biden! You only get down to 65% if you imagine we had polls that were accurate, but that we thought might have had normal polling error. That feels to me like double-counting the polling error.

      Like there was in fact a polling error from 54.4% to 51.7%, and if the polls had been at 51.7% then there could have been a polling error to 49.0%. But it’s very unlikely that there would have been a polling error from 54.4% to 49.0%.

      • Oscar:

        Think of it this way. Given a forecast of 90% chance of winning, it is unlikely (not impossible) to have a close race. In the same way, given a close race, it is unlikely (not impossible) to have had a forecast of 90%. For a race like 2020, I estimated the median 538 forecast would be closer to 65%. Sometimes higher, sometimes lower, but not often 90%. I’m not sure what the Economist forecast would have said, but I think it’s useful to ask the question.

        So I’m not saying 90% is wrong. I’m just saying usually the forecast would be closer to 65%…
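For what it’s worth, the sensitivity being discussed in this sub-thread is easy to see in a toy calculation that ignores the electoral college entirely: treat the national two-party share as normally distributed around the polled value with some assumed total-error sd. The 54.4% and 51.7% figures are the ones mentioned upthread; the error sds are my own hypothetical choices, not either model’s inputs.

```python
# Toy calculation (hypothetical error sds, ignoring the electoral college): how the
# implied win probability moves with the polled share and the assumed total error.
import math

def win_prob(polled_share, error_sd):
    """P(true two-party share > 50%) assuming a normal total-error model."""
    z = (polled_share - 50.0) / error_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for polled in (54.4, 51.7):        # roughly: pre-election polls vs. the actual result
    for sd in (2.0, 2.5, 3.5):     # hypothetical total-error standard deviations
        print(f"polled {polled}%, sd {sd}: P(win) = {win_prob(polled, sd):.2f}")
```

A forecast probability is a joint statement about the polled margin and the error model, which is why a few points of polling error can move the headline number so much.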

  4. I think generally we will never be worse off by having access to more data/modeling. In the limit when these data/models are too noisy/sloppy to be predictive, the harmless decision is to ignore them. Indeed, in the future, with more and more forecast models, there will be even more meta-modeling that examines and aggregates all the forecasts. However, there could be a potential risk of the general public losing faith in statistics or data science. But hey, no one would really expect any forecast to be exact until all financial analysts lost their jobs.

    • Yeah I learned about evaluating the model by thinking about state correlations (and conditional predictions). I also learned about the spatial plots where the volume of each state is the number of electoral college votes.

      I probably learned some other things, but that I can name a couple makes me feel good about the process. Sure, maybe I would have learned the same thing if G-man was posting about basketball but this all seems fine to me.

      Also guest appearances (at least Josh/Phil/Jessica, probably forgetting some) were fun.

  5. Well, I do think that public opinion polling has failed rather badly for 2 presidential election cycles in a row.

    The RCP polling average for the generic ballot showed Dem +6.8. It’s pretty clear the actual result will be perhaps +2 or less. This is a worse performance than in 2016 or in 2018 by a lot. In both of those elections, this average was within a point or 2 of being correct.

    On the presidential numbers, the average was 7.2% and the actual result is 3.3% at the moment. CNN had it at 12% and Quinnipiac had it at 11%, and both missed by more than twice their alleged margin of error. CNN in virtually every instance showed Democrats up more (in many cases dramatically more) than the actual result. Sounds like pretty good evidence of bias or really bad methodology.

    On the state level, the results were terrible for some polls such as CNN and Quinnipiac. Trafalgar was generally much closer in many cases such as Michigan and Florida. Their critique of polling must be taken seriously it seems to me. Basically, there are a lot of people who distrust pollsters and/or don’t want to tell anyone over the phone anything about their politics. BTW, Nate Silver’s “poll averages” were off by more generally. I don’t know why.

    What amazes me is the lack of public reflection on why this dramatic failure occurred. I’ve seen virtually nothing on the subject.

    A question to ponder is whether polls can actually influence results. I think personally that they can because of the desire to be on the winning side.

  6. So…it’s early, but it looks like if you just predicted that Biden’s share of the two-party vote in each state in 2020 would be identical to what it was in 2016, you would have done *better* than the 538 model. (You would’ve done even better by just adding 1-2 points onto each state’s 2016 result.) How much do such models need to improve upon trivial forecasts such as these for us to decide they’re worth paying attention to?

    Nate Silver writes:

    “Since almost no people have the relevant expertise to build political forecasting models (it takes tons of work and even then is easy to get wrong), political betting markets are basically just a competition over what types of people suffer more from the Dunning–Kruger effect.”

    (https://twitter.com/natesilver538/status/1321902309010530307)

    What’s more important–getting the right results, or doing “tons of work” informed by “relevant expertise”? (And who’s the one with Dunning-Kruger disease again?!) Maybe it’s time to admit that the predictable component of election results is trivial, and the so-called experts have no systematic ability to improve upon this.
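One way to make the comparison proposed at the top of this comment concrete is to score a “2016 share plus a uniform swing” baseline and a model’s state-level point forecasts against the actual results by mean absolute error. A sketch of that bookkeeping is below; the values in it are placeholders for illustration, not real 2016/2020 results or any model’s actual output.

```python
# Sketch of the baseline-vs-model comparison described above. The values below are
# placeholders, not real election results or any model's actual forecasts.
share_2016     = {"PA": 0.497, "WI": 0.494, "FL": 0.494}   # placeholder 2016 two-party shares
model_forecast = {"PA": 0.520, "WI": 0.540, "FL": 0.510}   # placeholder model point forecasts
actual_2020    = {"PA": 0.506, "WI": 0.503, "FL": 0.483}   # placeholder 2020 results

def mae(pred, actual):
    """Mean absolute error across states."""
    return sum(abs(pred[s] - actual[s]) for s in actual) / len(actual)

uniform_swing = 0.01                                        # the "2016 plus a point" baseline
baseline = {s: v + uniform_swing for s, v in share_2016.items()}

print("baseline MAE:", round(mae(baseline, actual_2020), 4))
print("model MAE:   ", round(mae(model_forecast, actual_2020), 4))
```

A fuller comparison would also score the probabilistic forecasts themselves (e.g., with a log or Brier score), not just the point predictions.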

      • Thanks for the pointer, Andrew. What I’m wondering is in what sense anyone was more informed about anything this year by paying attention to Nate Silver’s poll averages, or election forecasts, if someone who had plugged their ears and just pointed to the 2016 results would have done better. It seems the thesis of the OP (not by you, I realize), is that the person who plugged their ears would have been less informed in some way, but I’m failing to see how.

        • “in what sense anyone was more informed about anything this year by paying attention to Nate Silver’s poll averages, or election forecasts, if someone who had plugged their ears and just pointed to the 2016 results would have done better.”
          No, that is not the thesis of the post. What I’m saying is that I see various problems with assuming that, because forecasts appeared off (due to problems like polls all being off in consistent ways), forecasting itself is problematic or not useful. When all the errors are consistent, there’s not much you can do. But that’s not always the case. Hence, we need to think about election forecasting in the long run rather than say it’s broken based on a couple of observations. Also, while some people may not have been more informed this year than if they had instead watched polls being reported by journalists, other people may have been more informed by all of the information that comes with an election forecast (I, for instance, learned a lot from the models, without assuming that the probabilities they were giving me were written in stone).

        • Can you speak more specifically to what it is that you learned from these models this year? My contention is that, if the 2016 results predict the 2020 results better than these models did, it’s unclear whether inferences based on these models enjoy any consistent relationship with reality. The concern is not so much that the predictions were terrible in the last two elections—from a position of zero information, they seem pretty good. The concern is that the failure to beat a *really* dumb baseline forecast raises questions about whether all the supposed expertise about the subject matter embedded in these models is just the astrology of our era.

        • Well, at a high level, leading up to election day I expected Biden to have a better chance than Clinton, and I don’t see reason to believe that was wrong. Though much of what I learned this year was more “meta” – how people with way more political science background than me think about very low probability events, for example, and the relationships between outcomes in different states. But like I said, I’m not trying to argue that this year’s forecasts were necessarily more useful at predicting what happened than something like looking at what happened in 2016. I acknowledge there are big challenges, like the fact that people have trouble acknowledging uncertainty, even when you give them a probability. I just don’t think it’s a reason to stop attempting to combine information more systematically than the average person might in the long run.

        • Fair enough. To be clear, I’m in favor of trying to do better, and agree that a good forecasting system would be a useful thing to have. I just think it may be time for a fundamental rethink of current approaches, in view of the failure of the state of the art to beat a trivial forecast. I appreciate Andrew’s posts, because I get the sense that he’s genuinely open to the possibility that he’s doing it all wrong. I’m not sure the same is true of the much higher profile Nate Silver, who seems to think he did the best someone in his position can possibly do. I think the “experts” need to seriously re-examine what their expertise consists in, and whether it’s actually inhibiting their performance rather than enhancing it.

      • Well, it looks like the 95% confidence intervals for the state polls are large enough that one can show the result was generally within that interval. That’s not exactly a strong argument for the layman to pay attention. It’s a little biased to show the total vote percentage, which makes the confidence intervals and the difference with the actual result seem smaller. It would be more informative to show a scale with the difference between the predicted result and the actual result.

        As I pointed out earlier, the polling averages were much worse this time than in previous episodes, particularly on the generic ballot. They were a lot worse on the Presidential question too. In 2016, the final average was pretty close to the actual result as Hillary was tanking the week before the election.

        In any case, there is strong evidence that standard polling practice at least at the state level is bad. Quinnipiac showed Wisconsin +17% Biden. In reality, it was 1% or so. It’s more informative to analyze the accuracy of various polling methodologies so there is some hope for improvement. At the current rate of decline in accuracy, someone should be asking why they are paying so much for such biased information.

  7. Polling can be very useful to a campaign. The pollster finds that the chicken farmers of Beta County are important, and our candidate gets photographed eating fried chicken. That’s value. Betting is predicated on knowing more than the public plus the vig that the bookies get; in my view, the bookies’ vig makes betting a money loser for the most part. On the other hand, voting is your way of getting your voice heard, and the polls don’t matter.

    • As I think of oncodoc’s comment above – maybe one way to do it would be to carve up the polling into a bunch of discrete cohorts (e.g., rural whites, chicken farmers, Cuban Americans in Florida, Mexican Americans in Florida, whites in Philly suburbs, etc.) and then apply them in proportionately representative fashion to specific districts, states, and nationally? Obviously, you’d never be able to get the cohorts exactly right and you’d have a hard time modeling all the interaction effects…but maybe it’s better than the current system which kinda works in reverse?
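What this comment describes is roughly poststratification: estimate support within cohorts from the polling, then weight those cohort estimates by each state’s cohort composition (MRP adds a regression model for the cohort estimates). A minimal sketch of the weighting step, with hypothetical cohort names, support levels, and composition shares:

```python
# Minimal poststratification sketch of the idea above. Cohort labels, support
# levels, and composition shares are all hypothetical.
cohort_support = {                 # estimated support within each cohort, from polling
    "rural_white":    0.35,
    "suburban_white": 0.52,
    "urban_nonwhite": 0.78,
}

state_composition = {              # each state's electorate, as cohort shares (sum to 1)
    "StateA": {"rural_white": 0.40, "suburban_white": 0.45, "urban_nonwhite": 0.15},
    "StateB": {"rural_white": 0.20, "suburban_white": 0.50, "urban_nonwhite": 0.30},
}

def poststratify(support, composition):
    """Weight cohort-level support by the state's cohort shares."""
    return sum(share * support[cohort] for cohort, share in composition.items())

for state, comp in state_composition.items():
    print(state, round(poststratify(cohort_support, comp), 3))
```

The hard parts, as the comment notes, are defining cohorts that actually capture voting behavior and estimating their composition and turnout in each state.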

  8. I think I tend to agree with Zeynep’s viewpoint. What’s relevant isn’t a forecast (there is no action to take from it!) but a Q-function, in that you’d like to know “how can I causally change the outcome in Pennsylvania” (so phone banking may have x value, texting y, door knocking z). However you think of prediction markets – they provide an avenue to *monetarily* hedge, which is a well-defined action.

    Further, with the non-stationarity embedded in once-every-four-years elections, there’s no way to really falsify forecasts.

  9. I am intrigued by the idea that the mode of presentation of polls and their results can encourage people to think about the uncertainties. Something along these lines could be incorporated in high school curricula as well. That way, young people approaching voting age would come to understand, at least at a rudimentary level, how polls and other predictions can go wrong.

  10. I think this whole discussion misses an important point. Given the way we finance elections in this country, polling’s values and dangers are quite different than if we financed them differently. In economics, there is a theory of second best (often ignored these days) that asks what the optimal policy is, given constraints that things start out with imperfections (to put it mildly). Once we admit that we have an election system that depends on raising and spending large sums of money, polling looks to me like a less innocent collection of “information” to be used. Used by whom? For what purpose? I’d be more supportive of polling if elections were publicly financed, but given the way it actually works, I’m not so sure.

    • Is the idea that the “function” of polls is to encourage donations to campaigns? I agree this muddies the waters quite a bit, since polls with such a goal would have little reason to try to accurately reflect voter attitudes/intentions.

      It reminds me of something we see a lot on this blog, the “science as rhetorical gloss”. A researcher is convinced they are right (or at least of the value in promulgating their beliefs) and feels that scientific language is the way to convince others. So they treat the processes of science as a simple checklist that gives them license to speak that language and use scientific-sounding rhetoric to support their point. (The TED talk circuit is built on exactly this model.)

      Anyway, it sounds like from what you’re saying that polling is being used in a similar fashion, as a way of adding the rhetorical gloss of rigorous objective survey language to whatever point someone wants to make. Put that way, though, I suspect polls have always been pulled in that direction, and maybe current campaign financing (and the need to attract clicks and keep TV news viewers) exacerbates this problem.

      • I don’t have any simple story in mind. Campaigns use polls to modify their message, to allocate their resources, and to focus further fund-raising efforts. Voters can be influenced by polls, both in terms of their voting patterns and likely turnout. Given how much money it takes to run a campaign, I think it is difficult to clearly identify the impact that polls have on elections. The simple logic that more information is good, however, belies these subtle (and potentially large) impacts. I don’t pretend to understand the net impact of polls, but I am leery of any simple arguments in this environment.

        • I agree we can’t easily assess the impact of polls, or forecasts for that matter, on voters. But from a common sense standpoint (which is undoubtedly also informed by what I’ve learned from researching decision making under uncertainty and watching the replication crisis/statistical reform play out), I’m very doubtful there’s a large or consistent impact of polls and forecasts on voting behavior. They may play some role in belief formation, but people gather information from many sources leading up to an election, and often make decisions unrelated to how sure they are that they know what the outcome will be. E.g., huge numbers of people vote in presidential elections despite knowing that their vote has little chance of being decisive, like in states that are very stable for one side or the other.

      • Perhaps, but recognizing that the fallacy is the common result emphasizes that if someone is going to claim a slippery slope exists, that person should bring evidence that it actually might exist. Claiming a slippery slope as a way to shut down discussion or downplay an idea should be frowned on.

  11. You are too nice to Tufekci here. Tufekci clearly has no idea how probability works (the argument in the column is that forecasts are obviously no good because Biden should have won by more). And here I was thinking the NYT was pro-science!

  12. I’m not sure I really understand the argument being made about forecasts in #2 (“election forecasts are dangerous because they can be wrong but still influence behavior”). I could imagine that in the absence of forecasts, some people whose voting behavior is influenced by forecasts might take their cues from polls instead. And as a casual observer, polls alone seem to do “worse” than forecasts, and my sense is that there is less out-in-the-open interpretation of what they mean in the media. I’m not sure I understand why a media landscape with polls but no forecasts would make for a more informed/engaged electorate.

  13. Possibly (or probably) less than totally accurate forecasts based on specifiable inputs with known distributions around a hypothetical “true” value are better alternatives to deliberately misleading or dark-whistling forecasts?

  14. I think one thing is that for many (probably most) people, there isn’t really a meaningful distinction between polls and forecasts. And I’m not saying that means we need to “educate” people about the difference, because I’m not really sure why anyone should care about it. What people want to know is what the election result will be. One effect of the rise of these forecasting models has been to say “Look, the polls themselves are not a good way to know what the result will be” (even more so for “a single poll” rather than “the polls”).

    But if it’s really the case that polls are strictly worse as predictors than a forecasting model, then there’s no reason anyone should even see or know about polls unless they’re using them to make a forecast. Think about the weather. The weather forecast doesn’t say “we measured a barometric pressure of X at this location”; it just says “tomorrow we predict the high will be this, the low will be this, and there’s X% chance of rain”. The raw meteorological data is meaningless to the public; all that matters is the forecast.

    It could be argued that this is a media problem in that the media fixate on meaningless upticks and downticks on individual polls. But at the same time, there’s the question of why polling houses are releasing polls without models, and why modelers have to work with those polls even if they’re off. Why not create vertically integrated teams that do polling AND modeling so that you can engineer the polling in order to produce the best models?

  15. It took a while to dig up this old post, but I think it is important to continue this discussion. The issue of polls still seems important to me – not just the way we analyze them, but the extent to which they are useful or not.

    What makes me return to this matter was the election objection articulated by Ted Cruz. His “evidence” consisted of the “fact” that 39% of the public believe the election was rigged. Quite apart from his motivations and the particular attributes of that poll, I think this reveals an under-appreciated danger in polling. When evidence becomes what people believe, as measured by a poll, then we move one step closer to democracy via popular vote. The whole idea of electing public officials is to protect ourselves from that type of mob rule. Otherwise, with existing technology, we could just have the public vote on every issue and enact policy that way. We elect representatives to protect our “better angels” from, well the opposite.

    I think polling risks fueling the trend towards evidence by popular belief. Sure it is valuable to know what people are thinking. But how do we separate that from the influence of polls on our actions? It’s fine for analysts to want information about what people believe. But when our politicians and the public see that same information, its most likely use is to cater to the beliefs – it is easier to placate people than to try to change their beliefs (an unsubstantiated assertion on my part). So, polling reinforces our tendencies to rely on System 1 thinking to make decisions, rather than the more deliberate and analytical System 2 thinking. In my mind, human nature is already stacked in System 1’s favor, so reinforcing that is not a positive thing.

    • I just read Cruz’s argument and see the types of fallacies you’re talking about … equating beliefs that something happened with evidence that it happened, which is pretty ridiculous given how quickly and effectively misinformation can spread on social media. And then seeming to double back and say that even if they’re not actually evidence of the thing, they must now be taken seriously because we can’t risk losing trust. The circular reasoning is bad.

      “Sure it is valuable to know what people are thinking. But how do we separate that from the influence of polls on our actions”.
      I tend to see the value of polls in politics from a game-theoretic perspective – knowing what others believe is valuable (assuming the polling information has some signal) because it helps us best respond. Your comment has me thinking again about the risks they pose and how seriously we should take these risks. May write a new post.

      • ” equating beliefs that something happened with evidence that it happened, which is pretty ridiculous given how quickly and effectively misinformation can spread on social media. ”

        Every Sunday, the good church-goers must come to terms with this puzzle. And there are plenty of ’em where I’m at.
