
Econ corner: A rational reason (beyond the usual “risk aversion” or concave utility function) for wanting to minimize future uncertainty in a decision-making setting

Eric Rasmusen sends along a paper, Option Learning as a Reason for Firms to Be Averse to Idiosyncratic Risk, and writes:

It tries to distinguish between two kinds of risk. The distinction is between uncertainty that the firm will learn about, and uncertainty that will be bumping the profit process around forever. It’s not the Knightian risk or ambiguity idea, though.

I took a look at the article and didn’t have any deep responses. If I’m understanding right, Rasmusen’s point is that adding uncertainty adds a chance that someone will make a bad decision; it adds a component of uncertainty with a potential downside but no upside. The only thing I wonder is whether this result would hold in a setting where there are multiple firms, all of which can have uncertainty. Moving to a setting with more uncertainty might then mean that your competitors face more uncertainty too. In which case, the question is not whether you might make a bad decision under uncertainty, but whether you’ll do worse in this respect than your competitors.
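As I read it, the mechanism can be illustrated with a toy simulation (a sketch of my reading, not Rasmusen’s model): a firm sees a noisy signal of a project’s true value and invests only when the signal looks positive. Holding the distribution of true values fixed, more noise in the signal means more wrong accept/reject calls, so the realized average payoff falls.

```python
import random

def expected_payoff(noise_sd, trials=100_000, seed=0):
    """Monte Carlo sketch: the firm observes true_value plus noise and
    invests only if the observed signal is positive. Noisier signals
    produce more wrong decisions, lowering the average realized payoff
    even though the distribution of true values is unchanged."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        true_value = rng.gauss(0, 1)                   # project's actual profitability
        signal = true_value + rng.gauss(0, noise_sd)   # what the firm gets to see
        if signal > 0:                                 # invest only on a good signal
            total += true_value                        # realized payoff if invested
    return total / trials

low = expected_payoff(noise_sd=0.1)   # nearly perfect evaluation
high = expected_payoff(noise_sd=3.0)  # very noisy evaluation
print(f"avg payoff, low noise:  {low:.3f}")
print(f"avg payoff, high noise: {high:.3f}")
```

With low noise the firm captures close to the theoretical maximum, E[max(0, X)] ≈ 0.40 for a standard normal; with heavy noise the payoff drops toward what a coin-flip decision would earn. The downside-without-upside structure is visible here: noise never helps the decision, it only degrades it.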

More generally, I am uncomfortable with the way that “risk aversion” is commonly discussed in economics. I’ve written about this many times on the blog but maybe not for a few years.

I sent Rasmusen the above brief comments and he replied:

You do have it right. I haven’t thought much about it with several competitors, but it would work much the same way. It might be, however, that for each firm some of the value risk is over what his competitors are like and will do. In fact, I think it’s realistic to have a number of firms entering a new industry, with each of them having a rational belief that it might be good at this new product, but knowing that in fact only one of them will survive, and that firm surviving because ex post it turns out to have the lowest costs.

You are quite right to be uncomfortable with how economists handle risk aversion, as every good economist ought to be. It’s definitely appropriate sometimes, and definitely inappropriate other times, and where it’s inappropriate we just don’t have a good convention to model it, and maybe there isn’t a single one. There isn’t a unique reason people dislike uncertainty. In my paper, the reason is that they find it makes decisionmaking harder. For individuals and firms, it often is that it means you have to do contingency planning, or incur the thinking cost of changing what you do later when the unexpected happens. Both of those rational reasons can lead to Gigerenzer rules of thumb, a general dislike of uncertainty. In the Ellsberg Paradox, I think it’s that people know that when a decision is more complicated, they are (a) more likely to make calculation mistakes, and (b) more likely to get cheated.


  1. jim says:

    Rasmusen postulates “This article explores a different explanation [for why firms avoid idiosyncratic risk]: If the firm encounters more noise in evaluating whether an activity is profitable or not, it will make worse decisions, so it chooses to avoid risky activities.”

    It’s not clear whether he’s claiming that firms already avoid idiosyncratic risk because they fear the noise will cause them to make worse decisions, or whether he is postulating that they necessarily will make worse decisions in a noisy environment. It doesn’t seem like the latter can be true, but in the former there’s no way to know for certain whether the variation one encounters is noise or signal.

    But suppose that a firm embarks on a new activity and initial results are negative. How does the firm know whether this negative response is noise or signal? It can’t know that. Some firms are guided by metrics and thus abandon the activity; others are guided by belief and continue to pursue the activity. But there’s still no way to know which of these two approaches will succeed.

    My *experience* is that, at the middle level of management, there are very strong incentives to not change anything, regardless of the market incentive or risk. I suspect the desire to hedge against idiosyncratic risk is not related to the business environment at all, but rather to people who are already comfortable trying to ensure that they remain so.

  2. Kyle C says:

    I love that insight about how competitors face uncertainty too. In that light, it’s interesting to reflect on how hard-boiled detective stories and science fiction such as Star Trek dramatize a related dynamic. I won’t bother with examples, but very often, facing peril, a heroine or hero will “stir the pot” to make the situation more chaotic and unpredictable, gambling that the bad guys will make a mistake before our protagonist does. (OK, Red Harvest by Dashiell Hammett is a seminal example.)

    • Detective fiction often highlights the importance of simply sticking to the problem and refusing to let go even in the face of seemingly complete confusion. That, and in some of the more “male” versions having a slight edge in physical capabilities: speed, strength, shooting ability, etc.

      Interestingly, while the detective is usually bound by a code of honor, often there is a sidekick who is not so limited, and provides a very, very dangerous edge. I’m thinking of Hawk to Spenser, or Joe Pike to Elvis Cole (I’m reading a bunch of Robert Crais right now).

      Connelly’s Bosch novels have a somewhat different dynamic at times because the characters have to follow police procedures, though of course much of the interest comes from the times when those procedures break down.

      Not sure how this relates to the original post, but I couldn’t help but throw in a few cents on my favorite literary genre since you brought it up.

  3. Peter Dorman says:

    This looks like it might be entering into territory I explored a couple of decades ago. At the time it was in the context of the precautionary principle, but it applies generally to any situation in which there is fundamental uncertainty (not just quantifiable risk) and flows of new information rationally alter the valuation of recurrent choices. The abstract:

    Existing formulations of the Precautionary Principle tend to be too weak or too strong. They are too weak if they limit themselves to rejecting, for policy purposes, the bias in scientific research toward minimization of Type I error. This position is already embodied in classical decision theory. They are too strong if they demand proof of safety on the part of producers of potentially hazardous products and processes; this would eliminate too many beneficial activities. An intermediate position is proposed: the function of precaution is to take into account what we don’t know as well as what we do about the consequences of human activity. This leads to a meta-rule: decision-making is precautionary if unpredictable revisions in knowledge lead equally to unpredictable revisions in regulation. In the context of evolving knowledge about the ecological impacts of human activities, this implies a shift toward significantly greater

    It was published in Ecological Economics 53 (2005), although I worked it out about 15 years earlier.
