Decision-making under uncertainty: heuristics vs models

This post is by Phil Price, not Andrew.

Sometimes it’s worth creating a complicated statistical model that can help you make a decision; other times it isn’t. As computer power has improved and modeling capabilities have increased, more and more decisions shift into the category in which it’s worth making a complicated model, but often it still isn’t. If you’re trying to make a decision about something that is affected by many different factors which interact in unknown ways and are controlled by parameters whose values you don’t know very well, it’s probably not worth your trouble to try to make a detailed model.

To take a current example, if you want me to predict how many Americans will have died of COVID-19 by the beginning of June, 2021, I'm not going to try to write a model that simulates all of the political, social, medical, and environmental factors that go into that number; I'm just going to make up something that seems reasonable to me based on my general sense of how all these things interact. Presumably I could learn just a little bit more by making that complicated model — at least it might help me understand what the most important parameters are — but in practice the uncertainty in the numbers coming out of such a model is going to be so large that I don't see how it could be worth the trouble.

Although I’m pretty sure it would not be worthwhile to try to build a model to answer the question posed above, there are plenty of other cases in which making a model is well worth the trouble.

Which brings me to my current predicament: I have a client who wants some advice (on how to decide how much electricity to buy in advance at a fixed price, rather than on the day they use it at a variable price) and I keep going back and forth about whether it’s worth trying to build a detailed model for this. I just don’t know how much there is to gain from such a model, compared to just using some rules of thumb to make the decisions, and I think that even figuring this out will take a lot of work. How should I proceed?

The problem involves electricity bills for a large company that uses a lot of electricity (their annual bill is something like $100 million). Unlike homeowners, they are exposed to real-time fluctuations in electricity prices. If there’s an unusually warm spring in the Sierra Nevada mountains, all the snow melts into water and has to be released through the dams, so there’s a lot of hydropower and electricity gets cheap for a while in Northern California…but then that hydro isn’t available later in the year, so if there’s a warm summer and the natural gas “peaker” plants are in heavy use, the electricity price will be highly dependent on the cost of natural gas, which could be high because…well, you get the idea. 

The company had normally been content to pay whatever the market price happened to be, with fairly predictable seasonal variability and occasional mildly pleasant or mildly unpleasant surprises…until 2018, when some unusual events led to extremely high prices for a short time at a few facilities. When I say 'extremely high' I mean it: the price per MWh was about 200 times the typical price for a few hours, and about 30 times normal for several days. That was bad enough — spending an extra month's electricity budget for those facilities in a few days — but of course there was no guarantee that that period was the worst it could ever be. What if it happened again, with even higher prices and for an even longer duration? It had taken some pretty unusual events to lead to those high prices, so it's not like this is likely, but it's possible.

The company responded by starting to buy some electricity several months in advance. There’s a standard market for this; you can buy such-and-such an amount of MWh for next June at a specific delivery location for $y per MWh. The basic idea is that you are worried that the price of electricity next June might be exceptionally high, but some electricity producer somewhere is worried that the price might be exceptionally low, so you’re both willing to make a deal. Actually there are third-party energy traders who are heavily involved in the market too.

I am part of a three-person consulting team that is advising the company. We started by considering a single facility, i.e. a single warehouse. Historically, how predictable are electricity prices, and what does the distribution p(price | predicted price) look like? Here we take the market price as the predicted price. Similarly, how predictable is a facility’s electric load, and what does the distribution p(load | predicted load) look like? We have a simple model based on historical data, and we can quantify how well it works. 

Actually, we need p(price, load | predicted price, predicted load), since there is correlation between the error in the predicted price and the error in the predicted load. For instance, unusually hot weather can lead to higher energy prices (because of higher demand for air conditioning) and higher electric load in the company's facilities (ditto).

Even for a single facility it’s hard to know exactly how to model all of this: we only have a few years of useful data — because both the facilities and the electric industry itself have changed a lot in the past five years, and are continuing to change — so that’s only a couple of dozen summer months for example. That’s not really enough to know how to parameterize the problem, e.g. to characterize the tail of the distribution that is relevant for 1-in-20 or even 1-in-10 events. But the problem here is at least something we can wrap our heads around, and it’s not too hard to model: Once we learn the price at which we can buy electricity next June, we generate a lot of possible futures, meaning what the distribution of real-time prices next June might look like, and how much energy the facility might need, and we have an objective function that we optimize conditional on those distributions. We are comfortable with this, and so is our client. 
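
To make that a little more concrete, here is a minimal sketch, in Python, of the kind of single-facility, single-month calculation I mean. Every number in it is invented, the lognormal error model is just the assumption we happen to be using, and it ignores the within-month hourly structure; it's meant only to show the basic machinery of drawing correlated (price, load) outcomes and comparing costs with and without a candidate hedge:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sims = 100_000

    # Invented inputs: the market forward price serves as the price forecast,
    # plus a load forecast for the month.
    price_forecast = 45.0    # $/MWh
    load_forecast = 2000.0   # MWh

    # Assumed lognormal errors around each forecast, with a positive
    # correlation between the price error and the load error.
    price_gsd, load_gsd, rho = 1.35, 1.10, 0.4
    s = np.array([np.log(price_gsd), np.log(load_gsd)])
    cov = np.array([[s[0]**2,       rho*s[0]*s[1]],
                    [rho*s[0]*s[1], s[1]**2]])

    # Choose the log-means so the arithmetic means equal the forecasts.
    mu = np.log([price_forecast, load_forecast]) - 0.5 * np.diag(cov)
    draws = np.exp(rng.multivariate_normal(mu, cov, size=n_sims))
    price, load = draws[:, 0], draws[:, 1]

    # A candidate block hedge: a fixed quantity at a fixed price; whatever is
    # left over (positive or negative) is settled at the realized spot price.
    hedge_mwh, hedge_price = 1500.0, 46.0
    cost_no_hedge = price * load
    cost_hedged = hedge_mwh * hedge_price + price * (load - hedge_mwh)

    for label, c in [("no hedge", cost_no_hedge), ("block hedge", cost_hedged)]:
        print(f"{label:12s} mean ${c.mean():12,.0f}   95th pct ${np.percentile(c, 95):12,.0f}")

In the real version the hedge size is the decision variable, and we minimize an objective that trades off the expected cost against the tail.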

But now we want to move on to optimizing over many facilities and many months. It's a big company with facilities all over the country, so a lot of things will at least partially average out on a company-wide scale. Electricity prices are correlated across the country, but they are not perfectly correlated (coal, hydro, wind, solar, nuclear, and natural gas prices don't vary in lockstep, and there are transmission losses and transmission bottlenecks that stop electricity from flowing freely all across the country). If the company as a whole wants to avoid spending far more than expected for energy, they are already partially covered simply by being spatially diverse. They don't need to make sure the energy prices are under control at every single facility, as long as they aren't crazy-high at too many of them. Similarly, they can absorb high prices for a month or two, as long as it doesn't bust their budget for the year (or maybe for half a year). Suppose the company wants to (try to) make sure their electricity spending next year doesn't exceed their budget by more than 15%. It's pretty clear that the optimal decision for the company, as far as the amount of electricity to buy in advance, is going to be less than the amount that would be obtained by trying to make sure they don't go over 15% at any facility, in any month.

So we’ve started working on how to do the multi-facility, multi-month generalization of the problem we’ve already solved…and it’s a bit of a mess. The problem is mathematically straightforward, it’s just that when we get to the modeling decisions we don’t really trust them. We need the variance-covariance matrix for the errors in the predicted prices between the facilities, and the variance-covariance matrix for the errors in the predicted electric load between the facilities, and we don’t have nearly enough data to estimate those with any confidence. We’re more or less OK about the diagonals of those matrices, which is what goes into the single-facility single-month stuff we’ve already done, but the off-diagonal elements are going to be very poorly estimated. And of course the results will be conditional on the functional forms we’ve chosen for the price and load variability, for which we care a lot about the upper tails but have few data up there. 

One option is: quit complaining and write the model. We’ll have a fairly big, fairly ugly, fairly complicated model and will have really uncertain estimates of a lot of the parameters, but that’s just life sometimes. And we can use the model to do some sensitivity analysis to figure out what additional information might help us. 

Another option is to just sort of wing it. The company could do the single-month, single-facility optimization for each facility and then just cut the recommended purchase for each facility in half, for example.  Or they could purchase the full recommended amount at the facilities that face the highest price uncertainty and not purchase any at the other facilities (there is regional variation in price volatility). We could look at historical data and come up with some heuristics that seem to work OK. 

Trying to write the whole model would be kinda fun, but honestly I’m not sure I’d trust the results more than I would trust some heuristics based on expert judgment that we can elicit from someone who is familiar with electricity futures trading. 

So I’m putting this out there for advice, if any of you have it. 

This post is by Phil


  1. Some quick thoughts: this seems like an insurance problem that you have worked out for the individual case, and now you want to see how it works out for a case with many agents/facilities. (I say agents because that's what I think of with insurance problems.) In insurance problems, it is well known that the worst cases are when there is aggregate risk: when the bad things that can happen, happen to many agents at the same time. Similarly, in your case, clearly the worst outcome for the firm would be if there was a high correlation among the energy prices and/or energy loads. This suggests that you can work out a worst-case scenario: i.e. focus your analysis on the high estimates of the correlations. If you similarly work out the case where the prices and loads are independent, you will get upper and lower bounds on the amount of insurance you want to buy. It seems like that would be useful even if you want to take an intuitive approach from then on.

    In other words, since there is so much well understood structure to the theory, it seems worth it to compute it. But I say this as a theorist who never works with data, so take it with a grain of salt.

  2. Can the company choose to reduce its demand on peak days? If they can, they could bid into the capacity markets and not only pay much less on peak days by shutting down, but also make money in PJM and some other regional markets (not ERCOT) by simply offering up the potential of shutting down. This could change the math on opening.

  3. Just get them to buy electricity caps with a strike at where they want protection. They exist and are reasonably liquid in the Australian energy market. I don't know much about the US energy market, but think it is more developed.

    Your client will pay a premium to the writer of the option. Some quarters they won’t be of any use (and you lose premium), other quarters they will do exactly what they were meant to do.

    They should also consider:
    * do they trade futures cap contract with an exchange or with an investment bank over-the-counter
    * if the former, then they may need to make / receive margin calls through their clearing agent
    * if the latter, who they buy the cap from: a higher-rated investment bank is better, providing greater guarantees that this will pay out to them when spot prices exceed strikes.

    If they have a Treasury team, then they should be the ones overseeing these protection schemes.

    Forecasting energy prices is a fool's game.

    • Forecasting energy prices is not a fool’s game, it’s a necessary feature of the energy markets. You can buy electricity months in advance at a price that is a forecast of the future energy price. If what you’re saying is that we’d be fools to think we can beat the market, I agree. We are not trying to beat the market.

      I’m sorry I gave you —and apparently everyone else— the impression that we don’t know how the electricity markets work. We know what products are available and how they can be used.

      • Seems to me that you are trying to solve an underwriting problem, whereas what's important to the consumer company is a different set of metrics.

        The model that is useful to an options issuer will be quite different to the one required for an options user (as a hedge). The first problem is much more difficult than the second.

  4. I may be missing something here, but the company can make the aggregate electricity price for all its loads as certain as it likes, at the cost of paying forward spreads. And they wouldn’t even have to do this with their own trading operations: there are plenty of companies that will manage your energy price risk–for a fee. And that fee will never be nearly as large as the worst-case correlated upside. Of course, you’ll never get the best case correlated downside either. But if this is a risk management issue and not a profit maximization issue then I’m not sure I see the problem.

    If you’re trying to manage forward purchases to *beat* the market, forget about it. The one month forward price of electricity is an excellent predictor of next months prices, with a small added spread to account for risk.

    One other issue: in most states the facility can get on a tariff in which they have the ability to shut down operations when prices are very high and sell their unused energy back to the grid for others to use. Whether or not this makes sense depends on the profitability of the facility and the price of power, but there are lots of situations in which a low profit per unit of electricity consumed (aluminum mills are a classic example) industry can make more money when electricity prices are high by not consuming than they can at average prices.

    • J(ao),
      Your first paragraph is exactly what the company is doing: they are buying ‘load-following hedges.’ You may know, but others here will not, that this means you buy a specified fraction of your electric load at a fixed price per MWh, e.g. you can buy an 80% load-following hedge at $45 per MWh, which means that however much electricity you use, you get 80% of it for $45 per MWh and buy the rest on the spot market. The other kind of hedge is a “block hedge”, in which you buy a fixed amount at a fixed price, e.g. buy 2200 MWh at $43 per MWh.

      As you say, a load-following hedge takes all the risk out of it for you. But you have to pay someone else to take that risk. We are currently collecting data that should let us quantify this, but people in the industry seem to think it’s typically somewhere above 5%. If you buy a 100% load-following hedge every month, you’re spending maybe 8% more for electricity than if you pay at time of use. (This refers only to the cost of electricity itself, not the demand charges and transmission charges). When your electric bill is $100M per year, these little bits add up. It’s well worth paying us our consulting fees for a few months if we can provide them with a method they can use for years to save several million dollars per year. We think they can do _almost_ as well by buying appropriately sized block hedges, which have much lower premiums.

      As for Demand Response programs, yeah, I’ve got a ton of experience with those, and more than half of my work over the past several years has involved DR one way or another. A very relevant suggestion, but one I’m already aware of.

      Thank you for your helpful comments.

      • I suspect you can do better with block hedges, at the cost of having a group of qualified people to continually rebalance the hedges. I think you're right about the 5% (or so), and what you're paying for is a group of energy analysts sitting at someone else's desk. If your company is big enough to have a well-staffed energy trading group, you probably don't have to pay more than a few million to save $10-$20 million. Getting back to your original question, though, I think it's really hard to automate that process… Modeling the electricity market everywhere in the country (and/or the world) including correlations just can't be worth the trouble. It's also hard to make sure you've hired the *right* people, and that they have the right incentives. (The basic incentive of all these guys is that they're paid a percentage of their savings, so they love to take risk… much more risk than the company wants them to take, and then accept the asymmetry of getting rich or getting fired and moving on.) As consultants, you can help give metrics to assess risks, but actually managing those risks is a real-time job for a trading desk. My two cents…

      • “there are plenty of companies that will manage your energy price risk–for a fee”

        Yep, that’s what I was thinking. That fee is probably sandwiched between the cost of the model and people required to implement it, and however much margin competitive pressures force firms to give up to get customers. So I guess you have to ask “how much is their margin” before you spend a lot developing your own model.

  5. This is a hedging problem no? Is there a ~standard approach to this? I don’t know, but I would think so, lots of companies hedge energy costs. But it also depends on whether the company is trying to minimize costs long term and willing to spend to develop, or if they just want protection from spikes.

    If I were overseeing such a project I think I'd go simple first and get more complicated as need demands. IMO the simplest approach is to just buy forward options below a given strike price for supply a year ahead, and use a simple model to establish that maximum strike price – something like less than one standard deviation above the spot price for the same period during the previous year. I have *no* idea what the option cost would be, but probably small compared to spending 200x the "normal" spot price for a day or two's operations.

    I guess I’d also be inclined pilot the thing at a limited number of facilities.

  6. All,

    I have clearly done a bad job explaining what I am trying to accomplish. Literally every commenter so far assumes that neither I nor the company knows anything about the energy market. I don’t know where I gave that impression, nor how people could think that a company that spends $100 million per year on electricity could be ignorant of the market. We know how the market works and we know what products are available.

    • :-)

      I have said this before with respect to disaster insurance, flood insurance, and other things: it's worth it to have some mechanistic modeling. It doesn't have to be super detailed, and the best kinds of mechanistic modeling are not necessarily super accurate while still providing the right general behavior… but mechanistic modeling of some kind can be a major secret sauce.

      • The goal is to find the optimal set of 'hedges', i.e. advance purchases of electricity at market prices, in order to minimize an objective function that takes into account both the expected electricity cost and the cost of an unusual event such as a 95th percentile spike in prices. For instance the function to be minimized could be Z = E + a*c95, where E is the expected cost, c95 is the estimated 95th percentile cost, and a is a parameter that represents the risk tolerance. Here E is the expected cost for a set of months and a set of facilities, not a single month at a single facility.

        At a given moment, for a given facility, there are market prices for electricity any number of months in advance. The price for a given month for a given facility can be thought of as a forecast for what the price will be when the time comes. The errors in the forecast prices at different facilities are correlated — if the forecast is too low at one facility it’s likely too low at others — but the correlations are very poorly estimated from the data available. Also, for each facility we have a forecast for the number of MWh they will need in each future month. These forecasts are uncertain and the errors are correlated across months and facilities but these correlations, too, are poorly estimated.

        We need to decide, every month, how much electricity to buy 1, 2, 3, 4, …, up to say 12 months in advance, at each facility. So, for instance, we could decide this month to buy 800 MWh at Facility A for September, 700 MWh for October, 700 MWh for November, etc., and similarly for Facility B and C and D and E and so on. Next month maybe we should buy some more at Facility A for November but not for October, or whatever. We are looking for a method of making these decisions.
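
        Schematically, the piece we know how to write looks something like the sketch below: made-up dimensions, a deliberately crude price simulator, loads treated as known, and a generic optimizer. It's meant only to show the shape of the decision problem (a matrix of advance purchases, one entry per facility per month, chosen to minimize Z), not our actual model.

          import numpy as np
          from scipy.optimize import minimize

          rng = np.random.default_rng(3)
          n_fac, n_months, n_sims, a = 4, 6, 2000, 0.5   # made-up sizes and risk weight

          load = np.full((n_fac, n_months), 1000.0)      # forecast MWh, treated as known here
          fwd = np.full((n_fac, n_months), 45.0)         # forward prices, $/MWh
          sigma, rho = np.log(1.35), 0.6                 # assumed spread and cross-facility correlation

          cov = sigma**2 * ((1 - rho) * np.eye(n_fac) + rho * np.ones((n_fac, n_fac)))
          z = rng.multivariate_normal(np.zeros(n_fac), cov, size=(n_sims, n_months))
          spot = fwd.T * np.exp(z - 0.5 * sigma**2)      # shape (n_sims, n_months, n_fac)

          def annual_costs(hedge):
              # hedge[f, m] = MWh bought in advance at facility f for month m;
              # the remainder (positive or negative) is settled at the spot price.
              fixed = (hedge * fwd).sum()
              variable = (spot * (load.T - hedge.T)).sum(axis=(1, 2))
              return fixed + variable

          def objective(x):
              c = annual_costs(x.reshape(n_fac, n_months))
              return c.mean() + a * np.percentile(c, 95)   # Z = E + a*c95

          res = minimize(objective, x0=np.zeros(n_fac * n_months), method="Nelder-Mead",
                         options={"maxiter": 50_000, "maxfev": 50_000})
          print(res.x.reshape(n_fac, n_months).round(1))

        The optimization itself is not the issue; the issue is that the answer depends heavily on that covariance matrix and on the tails of the price distribution, neither of which we can estimate well.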

        But this is not what I am asking about. We know how to write the model and we know how to choose the optimum purchases conditional on the model.

        What I’m hoping for is some insight on whether to bother. It is going to be a bear to write the model and to do the optimization, and the decisions are going to depend on a lot of very uncertain parameters.

        There are some problems for which it is clearly not worthwhile to write a model. There are other problems for which it clearly is worthwhile. With this problem it is not clear to me in either direction. I can't think of any way to be sure what we will get out of the model until we try it, but my gut feeling is we won't get much. But I could be wrong. And the alternatives aren't great either. What to do, what to do, that's my question.

        • There’s a certain cost to writing the model. On the other hand, you can probably estimate how uncertain are these parameters and when fed to the model how uncertain would be the outcomes.

          e.g. Is the effort going to cost $100,000? What's the 95% CI you expect on your MWh forecasts? Is it +/- 1%? 5%?

          I realize that the exact answer will only be available post model but surely you must have some estimates?

          PS. Isn’t this rather strong to say: “We know how to write the model and we know how to choose the optimum purchases conditional on the model.”

          If you really know all this accurately, then the question is kinda moot. How much money do you save them and how much consulting fees will you be charging them?

          What’s the cost of maintaining status quo? If you pay as you consume (no hedge) how large is a black swan event loss? If I have a 1% chance of a 100M USD loss and your consulting fees are $100k its a no brainier to go ahead and model.

          Well actually, the status quo may not be "no hedge", but the question is how much better your model is than whatever hedge the company would do heuristically anyway.

        • Rahul,
          If we knew pretty accurately how much it would cost us to write the model, and we knew pretty accurately how much better its recommendations would be than what the company is doing now (or than what the company could do if we put a small amount of effort into simply thinking about the issue some more), then I agree, this would be a pretty easy call. But in fact, although it is clearly going to take "a lot of time" to write the model, I don't know if that's a full week or a full month or what. And I don't know if the recommendations it generates will be just a bit better than what they're doing now, or a lot better, or I suppose it's even possible they will be worse.

        • Does the company have the ability to modulate its consumption significantly? Can it resell electricity that it bought a while back? If so, it seems that you constantly have to choose not just what to buy but also what to plan to consume and what to sell. The planning-to-consume part might well be the most important portion of the model, as this involves altering business operations and has unique components well beyond what a pure trader deals with. To me that's where the modeling gold is likely to lie. It's also even more interesting if the client generates, such as from solar installations or as a byproduct of certain operations (cogeneration).

    • “The goal is to find the optimal set of ‘hedges’ ”

      What’s confusing is that it’s not clear what makes your problem different than the hedging problem that all kinds of commodity consumers have to solve on a daily basis.

      So it seems like a common hedging problem that occurs in different flavors in all kinds of businesses, from energy to hog bellies.

      Seems like the question you’re being asked is: is there some way to make a major leap in modelling beyond the standard hedge model already available on the market? Unless you have some new insight into the problem that’s not recognized by probably thousands of other companies and organizations trying to do the same thing, the answer is probably: no.

      Don’t write the model.

      • It is indeed a standard model and we are applying a standard solution…as long as it's applied to a single facility for a single well-defined time period. What makes this non-standard, at least as far as I know, is the multi-month, multi-facility optimization. If the goal is to avoid exceeding their electricity budget by more than 20% in a given quarter at a specific facility, with 95% certainty, that's standard, we know how to buy hedges to handle that.

        But if the goal is to avoid exceeding their company-wide electricity budget by 20% it gets more complicated. Obviously one way to attain this goal is to apply it to each individual facility: if no facility exceeds 20% over-budget then obviously the sum over all of them will also be acceptable. But if you do it that way then you are hedging more than you need to: it’s unlikely (though not impossible!) that all of these geographically dispersed facilities are going to face exceptionally high energy costs at the same time.

        I guess I’m just repeating myself.

  7. You might have thought of this already, but can you work top-down instead of bottom up? At the P&L level, what level of excess electricity expenditure “hurts” from the perspective of whatever constituency the client cares about?

    Then you can at least triage by the probability a facility or set of related facilities might cross that threshold. Obviously, I’m arguing for a heuristic approach here, but in a somewhat systematically defensible way. Coming up with the systematic heuristic is some work, but not as much as a global optimization project, so might serve as a middle ground.

    You could also doublecheck this against what you elicited from an expert. This seems like a reasonably cost effective way to generate two points of comparison.

    • Kevin,
      One thing this exercise has already done is to get the client to think a bit more about exactly what it is they are worried about, and how much they are willing to spend in expectation in order to prevent a given level of potential badness.

      And yeah, one possibility is to do something like you’re suggesting: just hedge at the facilities where the risk exposure is the largest (which basically means the price is the most volatile and the electricity consumption is the most uncertain), and then not hedge anywhere else. But of course, as soon as you say that, you realize that you should be able to do better than yes/no: maybe buy 90% of the ‘full amount’ of the hedge at the riskiest facilities, and 50% at the less risky, and so on.

      But yeah, I, too, am leaning towards heuristics.

  8. One thing that struck me reading your post is that you only have a few years of useful data given how much the company and the market have changed and are continuing to change.

    That suggests to me that even if you can put together a reasonable multi-facility, multi-month model, it might have a limited useful life.

    The simpler approach may be more durable for a longer time period.

    I guess the suggestion is to think about what it would take to break the simple model or the complicated model.

    • Yes to your first couple of paragraphs. Actually I think the model that we are contemplating should be useful for a long time, but due to continuing changes in the markets and the company’s operations the input parameters are never going to be estimated with decent precision.

      My gut feeling is that you are right and we will do just as well with a simpler approach.

  9. One approach to this problem is to model the company’s operations under various energy crises which the company intends to survive, assume that these crises are unmodelable black-swans with enough probability to be worried about, and assume that the energy options’ costs are worth paying for while they are saving the company from bankruptcy under these “expected” crisis cases. Plus some extra for heuristic safety.

    Why are we not framing the problem this way?

    Or: How can we do better than this without making very strong and probably wrong assumptions about the robustness of the national energy market?

  10. Interesting problem! My feeling is that heuristics / an expert trader would have a higher expected value over time in this case.

    The best solution depends on the company’s exposure, of course, and to what extent DR etc. can mitigate risk. I assume that it can’t, otherwise you would not have the problem.

    The assumption is often that the trader has a monthly “premium”, whereas the model is “cheaper” to operate. However, models of such complex and rapidly evolving systems tend to be a lot of work and require continuous adjustment. So they are not necessarily lower maintenance. At best they are valuable, high-maintenance inputs to an expert.

    In my experience, models with such variance-covariance matrices tend to make money here and lose money there. Each model is different, of course, but in the ones I've done, the false-positives and false-negatives (opportunity costs etc.) have cancelled out the good decisions. It isn't that half of the decisions are good and the other half bad. It is that the bad ones are so much more expensive than the good ones are profitable. In the business-as-usual case, the model might do as well as an expert trader, or even better. But the anomalous cases you want to protect against are ones that cannot be captured adequately by a model, and there you might lose big. As you indicated, you can have 199 days of good trading wiped out by one bad day. These cases are also so rare that I don't think you can test your model adequately to have any certainty that it behaves well in these cases. Also, due to their nature, each anomalous case is different; even if you account for the factors that created the previous anomaly, there is no guarantee that the model will capture future anomalies.

    • Great comment.

      The central question: do you want to optimize – in which case your model opens the risk that the optimization will periodically fail and destroy its accrued value – or protect against spikes?

      • “Optimizing” in this case means buying hedges that provide the specified amount of protection against price spikes. That’s the whole point of the problem!

        • As I mentioned in an earlier comment, one thing the ‘specified amount of protection’ can mean is:
          For instance the function to be minimized could be Z = E + a*c95, where E is the expected cost, c95 is the estimated 95th percentile cost, and a is a parameter that represents the risk tolerance.

          This is not the only way to define the objective function, and may not be the one we choose in the end, but for now this is what I mean when I say 'specified amount of protection'. With this function, the company is essentially saying it is willing to accept an extra $a of expected cost for every $1 reduction in the 95th-percentile cost.

    • Is Herman really Taleb under an alias? But that is a valid point. I have never worked on exactly this type of problem, but in a previous life did a lot of work on stochastic capital accumulation and consumption models (or equivalently stochastic harvesting models). Policies that are optimal under an expected utility over a given time horizon are often not optimal when you are concerned about the properties of sample paths, most importantly if there is some return that would act as an "absorbing state", which is basically what Herman refers to. The extreme case is in life-history models, where the returns over time are multiplicative, so even one zero (or the equivalent of some low value that has the same effect) blows up all the other returns. Even to say that the policy provides "the specified level of protection" you have to be very careful about whether that is the expected level of protection or some actual property of the sample paths. Or to put it another way, the difference between solving a problem for the expected risk or a problem where at each time period the probability of the undesirable event is below a given level. The former problem is usually doable, the latter problem is quite hard.

      • Roy,
        I think part of the reason for hedging in this case is to keep the electricity expense in a range where it doesn’t have a nonlinear effect. If they budget $x and the expense is $1.2*x, well, they’re out $0.2*x but that’s it. Whereas if they budget $x and the expense is $2*x then uh oh, they have to cut some other budget, or delay some planned expansion, or borrow some money, or something. I’m not really sure, actually, but that’s my impression. So, to connect this to what you just said: I think they are worried about some large unexpected expense (large compared to their liquid assets or available resources or something), which might indeed be what you’ve said is called an “absorbing state”, but I get the impression they’re being fairly conservative about how to define that. I think that if they’re hedged adequately against a 95th percentile fiscal quarter, whatever they mean by that exactly, and they experience a 99th percentile fiscal quarter, that will hurt but won’t be crushing. I hope so, since a 99th percentile fiscal quarter happens every 25 years on average!

        But what if there are two 99th percentile quarters in a row? It’s not like they’re statistically independent. For a lot of businesses the current pandemic has put them in exactly that situation, in fact. It happens.

        So, yes to your point about ‘each time period’. This is relevant.

    • Herman,
      “As you indicated, you can have 199 days of good trading wiped out by one bad day. These cases are also so rare that I don’t think you can test your model adequately to have any certainty that it behaves well in these cases. Also, due to their nature, each anomalous case is different, even if you account for the factors that created the previous anomaly, it is no guarantee that it will capture future anomalies.”

      Yes indeedy. It might be that assuming such-and-such is approximately lognormal will work just fine…until it doesn’t. But the response can’t reasonably be to just assume some ultra-long-tailed distributions (in an explicit model, or implicitly) because, well, how long-tailed should it be? You can’t spend all your time and money to protect against some event that you know is extremely unlikely but you don’t know exactly how unlikely.

      In essence: you have correctly understood one of the issues we are facing. Got any suggestions?

    • Herman,
      I, too, am worried about whether using something like lognormal distributions with a specified variance-covariance structure will capture the events the company really needs to be protected from. They might work fine for typical events, but I have little faith that they will capture the tail behavior correctly. But I’m not sure what to do with this worry. Those rare events are just that — rare — so we really have no way to know what their statistical distribution is. We can certainly recommend larger hedges than the simplistic model would imply, but how large exactly?
      If we don’t use the simplistic model, we need to use some other model…maybe not explicitly, but if we do it with heuristics that is a kind of implicit model too. We’ve gotta do something!

  11. I’d like to return to the general issue that your post raises. How complex to make the model? I do similar analyses often, though not usually at the scale of this one – and I teach courses in analyzing such problems. Historically, I have always preferred simpler models. That is changing, as data and computing power keep increasing – why confine yourself to a simple model when many of the complexities can be included in the model? So, I think you need to do both to an extent. I always start with a simple model and then add complexities (e.g., correlations between uncertain factors are added to a model without these; multiple types of sectors/agents are added after modeling a generic one, etc.). In the end, I’d want to compare the complex models with the simpler ones – if the complex model does not provide more useful or different results, then why use it? On the other hand, if it is a “better” model, then the simpler model can (and should) be used for communicating with the client, while the quantitative recommendations come from the more complex model.

    One other issue is lurking in the background. As the model becomes more complex (hence, more realistic), the danger of tunnel vision increases. There are always some unquantifiable (and barely identified) risks, and these are easy to ignore when the model is complex, but less so when the model is simple (although that danger never disappears). Such "black swans" pose a real challenge. Did your model include the next pandemic? For me, the most important insight from Taleb's work has been to be alert to the fact that the most serious uncertainties are probably not modeled at all. In other words, what gets included in the model are generally the least consequential factors!

    I think my last statement is debatable, and perhaps wrong. But I don't think it is easily dismissed. Decision-making remains an art, and if these considerations were not important, then I think you would not have been hired to do the analysis for this company. If such judgements were not needed, an algorithm would always perform better. But (thankfully, I believe) the world remains sufficiently uncertain that decisions about how much detail, what uncertainties to include, how to communicate with decision-makers, etc. remain decisions that humans must make.

    • Wow. Lots of spot-on stuff in three succinct paragraphs! Maybe you need to repeat this message about once a week to help keep us all in touch with reality.

    • Dale,
      Sorry about the late response. Yes to everything you say above, pretty much. But to convert 'black swan events' into statistics-speak, I think the point is that the high-cost, low-probability tail can have more probability in it than people usually assume when they make decisions, or perhaps that people explicitly ignore such events to their peril. So one thing we can do is simply put longer tails on our relevant distributions…maybe instead of a lognormal model for price errors (which is what we're using) we could use a log-t4 distribution or something. But I don't think that will really do what we want: in the abstraction of our model, there's this energy market and you buy and sell energy, and pay these prices, yada yada, but if the 'black swan events' are not of the type that we're modeling then the whole modeling paradigm breaks down. Maybe North Korea hacks the electric grid and brings the whole thing down. Or maybe they can't hack the controls but they hack the record-keeping and make it impossible to know who used how much electricity. There are lots of low-probability, high-consequence events that we are not going to be able to model, no matter what we do with the parameter values.

      In short: Yes.

  12. Even if you don’t trust the hypothetical complex model enough to hand it off to the company, if you think you might be working for these people next year, depending on how much time it takes, you could write down the model so Future Phil knows what Current Phil was thinking.

    • Yes, writing the model on paper is going to happen anyway. But coding it up and getting it to converge is a much bigger task.

      I think I’ll start by coding a toy problem that has some of the key characteristics. Maybe it won’t be as hard as I think.

  13. I think a comment I made a few posts ago may be pertinent to this situation: don’t mix together uncertainties you can model with some confidence and others you can’t. (“The sum of a well estimated number and a poorly estimated number is a poorly estimated number.”) Trying to model everything obscures the precision and usefulness of the portion you can model well.

    That’s why I’m a fan of scenario analysis, at least as a first step. The less understood uncertainty is represented as a set of representative values or constraints, and you see how the model responds to each. This can also suggest which scenarios are potential black swans with dominant impact over a long sequence of outcomes — if you can even construct scenarios for them. (But you have a better chance of doing this as an ad hoc exercise than by looking for tail outcomes in a model intended to be applicable generally.)

    I know nothing about energy markets, so it could be that none of this applies.

  14. So I work on a problem that may be similar to this in some sense.

    A company say in India exporting products to the USA is exposed to the risks of dollar to rupee price fluctuations.

    The solution is buying forex hedges.

    But here’s where the difference in approach lies: as a manufacturer-exporter you don’t try to model forex trends. That’s the job of the financial firm selling you the forex options.

    As an exporter all you need to know is how much exposure you are likely to have and what's your risk appetite.

    Any that’s where I find Phil’s approach puzzling.

    • I think the idea is that Phil knows how to solve the problem for each individual location… He can make each location have less than a certain amount of variation in energy costs.

      What he can’t figure out without a complex model is how to relax the individual constraint that *each* location has the specified amount of energy cost variation, into a cheaper hedge where the *whole company* has a specified amount of energy price variation at minimal cost.

      • Daniel:

        That makes sense.

        I guess philosophically I’ve a problem with the overall hedging approach: Companies should actively control risks that fall under their core competencies, and passively outsource risks that they don’t understand (e.g. energy costs) to third parties more competent to model and hedge them.

        As a consumer of energy, why not just focus on predicting your consumption, then decide what’s the max price spike that our P&L is willing to bear and finally buy an energy option at that price and be done with it?

        This is the sort of model-complexity creep that leads to disaster, I feel. Historically it's just a tiny step to delude oneself that we have such a good model that we accidentally drift into options-as-speculation territory rather than using them as merely a hedge.

        Basically complex models can lull us into being more confident than is justified, especially considering the likelihood of tail events. For example, to confidently cap the company-wide overrun at 15% while relaxing the per-facility cap is bound to require system-wide / country-level assumptions stronger than justified. Mostly no problem, but a tail event leads to catastrophe. And it is impossible to model these systemic events well from a few years of within-company datasets.

        Or perhaps I am not understanding the intricacies of Phil’s modelling approach…..

        • I agree with this somewhat, but suppose this is a company that owns and rents 100 large office buildings. Energy consumption is dependent on the weather and occupant demand. It might be difficult to even figure out what the consumption will be. And consumption may correlate highly with price and demand spikes in the region.

        • Rahul,
          I think you do understand the issue, but maybe are overoptimistic in assuming it would be easy to "passively outsource the risk." It's quite easy to do that outsourcing for a single facility, because there are standard market products to do so: to reduce your risk in month m, you can buy k kWh at price P for that facility. Easy peasy.

          But electricity prices are spatially variable and the company has facilities all over the place. There is no single existing product that they can buy that lets them buy KK kWh at price PP, which they can then allocate as they choose across all of their facilities. All they can do is buy existing products.

          I think you’re going to say “right, that make sense, but surely there is a company out there that will create a portfolio of existing products in order to give them the risk profile they want.” And you’d be right, there is such a company: it’s us. That’s what we’re trying to do. There are surely other firms that can do it too, and I suppose if we don’t do a decent job they might go to one of them, but ultimately those other companies will have to do the same thing we do, which is to model the spatio-temporal variability of prices and electric loads (or, more correctly, errors in predicted prices and errors in predicted loads).

        • “As a consumer of energy, why not just focus on predicting your consumption, then decide what’s the max price spike that our P&L is willing to bear and finally buy an energy option at that price and be done with it?”

          Badabing.

          That’s why when Phil says he’s trying to “optimize” I presume it’s all on price because seems to me the best way to manage risk is to just cap the price with an option.

          “Basically complex models can lull us into being more confident than is justified, especially considering the likelihood for tail events.”

          Badaboom. Presumably Phil’s group has some mechanism to account for out of sample events because there have been several in the last few decades and it would be crazy to overlook those, so I’m sure they haven’t. Just the same the more complex the model the more likely something is to sneak through.

  15. One of the advantages of computerization has been that it has become easier to synthesize data from a statistical model (in fact, the first use of a computer I experienced was my math teacher bringing his ZX81 into statistics class for that purpose). My impression is that the uncertainty in the modeling means that you have a space of many possible models that describe loads and prices (and predicted loads and predicted prices), and you don’t know which one to commit to.
    But do you have to? How encompassing can you make your set of correlation models that will spit out synthetic data that "looks like" the real-world data that you have and expect?
    And then, can you devise a hedge buying strategy that will do well against all of these models? I.e. run your hedging strategy not against the one “true” model that you picked, but the space of possible models, and then see how well it works?

    It would allow you to get a better handle on the question “what if we’re wrong”, and actively looking to broaden this model space means you’re looking for ways to be wrong instead of trying to find the one way to be right, which may be what you need to do at your current state of knowledge.

    Maybe I’m too uneducated and this is already baked into your workflow, in that case, please disregard.

    • Mendel,
      You ask "How encompassing can you make your set of correlation models that will spit out synthetic data that "looks like" the real-world data that you have and expect?" That's the right question, and the answer is that I have no faith that we can do this, largely because we don't really know what we expect. We can create a model that generates synthetic data that look like the last few years of real data, but that is not nearly enough years to know what the tail probabilities are. What does a 95th-percentile summer look like? We really have no idea: if you go back twenty or thirty years, the markets and the electricity industry were so different that they don't seem all that relevant. It's sort of like Andrew's example that when you're building a presidential elections model you really only have fifteen or twenty elections to use for calibration, because it's not like the election of Martin Van Buren in 1836 tells you anything useful about today.

      So, yeah, I don’t really think we can build a model that will have the right statistical properties. On the other hand, we can at least give it our best shot, and if we have a model we can at least look at the sensitivity to the uncertain parameters, and look at some ‘worst plausible case’ kinds of things. Whereas if we don’t have a model at all, what are we gonna do? (Actually the default would be to use our single-facility optimizations and just make up a rule for modifying them, based on our intuition about the problem, so it’s not like we wouldn’t be using any formal modeling at all).

      Anyway I agree with your reservations.

  16. It sounds like you are considering modeling it with a purely statistical model. I would create a supply & demand model for the significant markets–the supply you can model directly with the cost curve for different generation facilities. Electricity markets are usually geographically separate, so correlations between geographies are probably not that helpful for understanding those markets. You can model the demand stochastically with historical data.

    • Electricity markets are geographically separate but electricity prices are correlated across markets, and we care very much about that correlation — not necessarily the correlation coefficient, but the probability that multiple geographies will experience extremely high prices in the same month. Indeed, this is pretty much the whole ballgame! If the markets were uncorrelated, the problem would be pretty easy; there's a central limit theorem thing going on.

      • I would still model the price by directly modeling the cost curve by market and then varying demand. You can model the correlations on the demand side only.

  17. On a completely different note, to the extent you can, can you say how you modeled the single-facility, single-month case?
    I’m curious how these models are done.
    Thanks much
    Saptarshi

    • Sapsi,
      Let’s consider next August, i.e. one year from now. We can check the market price per MWh for buying electricity in that month. (Actually there are high- and low-price periods of the day, but let’s ignore that). We take the market price as a forecast of what the monthly-average price will be if we simply wait and buy at whatever rate actually occurs on each day next August, possibly plus a premium (e.g. maybe the current market price tends to exceed the actual price in a year by 3% or whatever). If we think there’s a premium we remove it, so we end up with what we think is an unbiased forecast of the future price.
      We assume the actual price will be distributed around the forecast price. In our case we are assuming the distribution is lognormal. Based on previous data, we have an estimate of the geometric standard deviation (GSD) of the distribution. We choose the geometric mean (GM) such that the arithmetic mean is equal to the forecast price.
      Similarly, we have a forecast for the amount of electricity the facility will use next August (also based on fitting a model to historical data), and we have a model for the distribution of actual consumption around the forecast.
      We also assume that the errors are correlated, so if the price is higher than the forecast price then the consumption (the 'load') is more likely to be above the forecast load than below it.
      Putting this stuff together, we have a joint distribution of (price, load) next August. We draw thousands of simulations from this distribution and calculate the 95th percentile cost of electricity. If there were no load uncertainty it would be nearly trivial to calculate the effect of buying a hedge of a given size, but since both price and load are uncertain there’s a bit of a calculation. I won’t bother writing it here, you can probably figure it out, or you can look it up.

      Actually I have skipped a detail, in the stuff above: you can’t just calculate the cost of electricity by multiplying (average monthly cost) x (electricity used in the month) because the cost varies from day to day, indeed from hour to hour, within the month. Even if the forecast for the average monthly cost were exactly right — let’s say $60 per MWh — the actual cost might be $140 during some afternoons and $25 during mid-morning some days, and so on. If the electricity’s price and the facility’s electric load were uncorrelated this wouldn’t matter, but in fact prices tend to be high when people (and companies) are using the most electricity, so that has to be taken into account. On the other hand, this company has some ability to shift some load away from the high-cost times, so their price-load correlation is lower than it would be otherwise; potentially they can even get it to zero or slightly negative.

      So we add one more layer of sampling to the method I described above. That is, we sample from our joint distribution of (monthly-average electric load, monthly-average price per MWh), and then we draw hourly (load, price) that have those right arithmetic means and have the right within-month correlation. It’s these hourly numbers that we use for the actual calculation. This is really just a detail, though; if you understood the calculation at the monthly level, you’ve got the idea.
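
      In case the lognormal bookkeeping above isn't obvious, here's the small calculation involved, with invented numbers: pick the geometric mean so that the arithmetic mean of the lognormal equals the forecast, and note (re the hourly layer) that a positive within-month price-load correlation raises the expected bill even when both monthly-average forecasts are exactly right.

        import numpy as np

        forecast_price, gsd = 60.0, 1.30               # $/MWh forecast and assumed GSD
        sigma = np.log(gsd)
        gm = forecast_price * np.exp(-0.5 * sigma**2)  # geometric mean chosen so E[price] = forecast

        rng = np.random.default_rng(5)
        price = gm * np.exp(sigma * rng.standard_normal(1_000_000))
        print(price.mean())                            # ~60 by construction, not ~gm

        # The hourly layer matters because E[sum_h p_h*l_h] = sum_h (E[p_h]*E[l_h] + cov(p_h, l_h)),
        # so with a positive within-month price-load correlation the expected cost exceeds
        # (average price) x (total MWh), even with perfect monthly-average forecasts.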

  18. Check our Gerd’s work.

    In this podcast, he talks about your issue, and how his group creates solutions. E.g. when advising the Bank of England on risk assessment, they don't create complicated models; rather, they create simple rules (e.g. fast and frugal trees).

    Not only are these “process models” better at prediction, they are something that people can actually understand and use!

    https://www.econtalk.org/gerd-gigerenzer-on-gut-feelings/

  19. Is there viability in having on-site generators with perhaps on-site gas storage? For example, if some of the larger units are in ERCOT. I'm guessing not, if the company has the option to DD.

    • Yes, some of the facilities have the capability of on-site generation, or at least I think so; normally only for emergency use, but if the price of electricity spiked prohibitively then I suppose that could qualify, maybe.

  20. There was so much to read here for a two minute coffee break. But if you view the challenge as partly dealing with correlations between extreme events then copula modeling might be useful. (Often used in crop insurance when weather etc creates lots of perils)
