Cost-benefit

The Federal Reserve Bank of Minneapolis has an interesting article by Douglas Clement from 2001 about cost-benefit analysis in pollution regulation.

I’m generally a fan of cost-benefit analysis and related fields, for use in setting government policies. Andrew, Chia-Yu Lin, Dave Krantz, and I once wrote a decision-analysis paper with a large cost-benefit component, that someone said is “the best paper I have seen on decision-making under uncertainty in a public health context” (thanks, mom!). But this article mentions the fact that the Clean Air Act explicitly forbids costs from being considered in setting pollutant standards, and then goes on to discuss this seemingly ridiculous fact…in a way that basically convinced me that eh, maybe cost-benefit analysis in this context isn’t necessary after all.

The usual problems are cited: it’s hard to figure out how to evaluate some costs and some benefits in terms of dollar costs (or any other common scale), there is no agreed “value of a life”, yada yada. Standard stuff. In the aforementioned decision analysis paper, we avoided many of these issues by comparing several policies (for radon monitoring and mitigation), including the current one; thus, we could find policies that save more lives for less money, and be confident that they’re better than the current one, without having to claim that we have found the optimum. But if you’re trying to do something new, like set a maximum value for a pollutant that has never previously been regulated, then our approach to avoiding the “value of a life” issue won’t work.

But in addition to the usual suspects for criticizing cost-benefit analysis, the article mentions a few others. For instance:

(1) “Potential benefits of a policy are often calculated on the basis of surveys of people’s willingness to pay for, say, a better view of the Grand Canyon, an extra year of life or avoidance of cancer. But critics argue that willingness to pay hinges, in part, on ability to pay, so that cost-benefit analysis is fundamentally dependent on the equity issues it professes to avoid.”

(2) “Indeed, Breyer, in his American Trucking concurrence, said that the technology-forcing intent of laws like the Clean Air Act makes determining costs of implementation “both less important and more difficult. It means that the relevant economic costs are speculative, for they include the cost of unknown future technologies. It also means that efforts to take costs into account can breed time-consuming and potentially unresolvable arguments about the accuracy and significance of cost estimates.” In short, it makes cost-benefit analysis itself less likely to pass a cost-benefit test.”

(3) “In an article titled “The Rights of Statistical People,” Heinzerling argues that analysts have created the entity of a “statistical life” in order to facilitate cost-benefit analysis, but the concept essentially strips those lives of their human rights. We don’t allow one person to kill another simply because it’s worth $10 million to the killer to see that person dead and because society may measure that person’s worth at less than $10 million, she argues; then why should regulatory policy be based on a similar principle?”

Item (1) could be handled within a cost-benefit analysis (by defining an ‘equity score’ and assigning a trade-off between dollars and equity, for example); item (2) just suggests that cost-benefit analysis can be so uncertain and so expensive that it’s not worth the effort, but that doesn’t seem true for regulations with implications in the billions of dollars — heck, I’ll do a cost-benefit analysis for 1/1000 of that, and that’s a real offer. Item (3) is a real ethical question that challenges the heart of cost-benefit analysis, and I’ll need to think about it more.

I’m tempted to go on and list items (4), (5), and (6), but read the article yourself. Among the people quoted is a Columbia economics prof, by the way. (In case you, the reader, don’t know: Andrew teaches at Columbia).

Overall, my view seems to be fairly close to that of someone quoted in the article:

====

“My own justification for using cost-benefit analysis in common-law decision-making,” wrote Richard Posner, “is based primarily … on what I claim to be the inability of judges to get better results using any alternative approach.” And he recommends that the acknowledged moral inadequacy of the Kaldor-Hicks criteria—its neglect of distributional equity—should be addressed by simply employing cost-benefit as a tool for informing policymakers, not as a decision-making imperative.

“This may seem a cop-out,” Posner admitted, “as it leaves the government without a decision rule and fails to indicate how cost-benefit analysis is to be weighted when it is merely a component of the decision rule.” But “I am content to allow the usual political considerations to reinforce or override the results of the cost-benefit analysis. If the government and the taxpayer and the voter all know—thanks to cost-benefit analysis—that a project under consideration will save 16 sea otters at a cost of $1 million apiece, and the government goes ahead, I would have no basis for criticism.”

====

Of course, more typically we would think that the project will save between 3 and 40 sea otters at a cost of $200K to $6 million each, or whatever. But I agree with the general idea.
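The point about ranges can be made concrete with a quick simulation. Here is a minimal sketch (the ranges are the illustrative ones from the sea-otter example above, drawn uniformly and independently, which is itself an assumption): because both the number of otters saved and the cost per otter are uncertain, the project's total cost is a distribution, not a single number.

```python
import random

random.seed(42)

def simulate_total_cost(n_draws=10_000):
    """Monte Carlo sketch of an uncertain cost-benefit estimate.

    Both inputs are uncertain, so we propagate that uncertainty
    to the total cost instead of reporting one point estimate.
    Uniform, independent draws are a simplifying assumption.
    """
    totals = []
    for _ in range(n_draws):
        otters_saved = random.uniform(3, 40)              # otters saved (illustrative range)
        cost_per_otter = random.uniform(200_000, 6_000_000)  # dollars per otter (illustrative range)
        totals.append(otters_saved * cost_per_otter)
    totals.sort()
    return {
        "p5": totals[int(0.05 * n_draws)],
        "median": totals[n_draws // 2],
        "p95": totals[int(0.95 * n_draws)],
    }

summary = simulate_total_cost()
```

A decision maker would then see something like a 5th-to-95th-percentile interval for total cost rather than "16 otters at $1 million apiece," which is the honest form of the same information.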

2 Comments

  1. Andrew says:

I'm in favor of cost-benefit analysis too. But I don't know if I completely buy into the quote from Bhagwati (the Columbia economist, whom in fact I've not met):

"At least this approach would enable you to put down some explicit weights," observed Jagdish Bhagwati, professor of economics at Columbia University and an amici signer. "I'm in favor of quantifying, just to make the thing aboveboard. Implicit criteria are not really good things. I do believe that putting it down—getting people, pro and con, to say what they think the relative weights ought to be—is a good thing. And once you've done that, cost-benefit analysis just naturally follows."

How can he be so sure that "putting it down—getting people, pro and con, to say what they think the relative weights ought to be—is a good thing"? I mean, I know what he's saying, but if people aren't "putting it down," maybe they have good reasons for that. I don't see what his criteria are for being "a good thing."

  2. Bill says:

    On item (3).

    The fact that I would agree to your inflicting on me a one in a million chance of death for $17 does not imply that you can kill me for $17 million.

    Some authors use the term "small-risk value of life" instead of "value of life" to emphasize this. I assume that a policy increasing pollution creates a small increase in the chance of death for a large number of people, not a large increase in the chance of death for one person.

I believe there are ethical problems with this kind of analysis, but not because of item (3). I think the issue is that others shouldn't be able to set the value of my life, since they can then make it as small as they want. Each person should have a small-risk value of life that they set for themselves to guide their own decisions.

We could use this kind of reasoning in areas where there is more consent involved. For example, if I buy a screwdriver, the manufacturing company might say "We assigned you a small-risk value of life of $2 million when designing this product." That would tell me that I shouldn't buy this product; it isn't safe enough for me, with my small-risk value of life of $17 million. I might buy one with an advertised value of $100 million, but I might be overspending; they would have spent more on safety than I would, and presumably that would be incorporated into the price.

    However, I can't ethically say "Your life is worth $1 million. Here is a penny, so I can inflict a one in a hundred-million chance of death on you." You are the only one who can ethically assign a value to your life. You might say that you also must consent to any risk I inflict on you, and you would be right up to a certain level of risk, but I could imagine an ethical code saying something like "It is ethical to inflict up to one-in-a-billion chance of death per year on a person without their consent and without compensating them. Up to one in ten million per year, you need to compensate them, but not necessarily get their consent. Above that, you need their consent to inflict that risk upon them."
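The small-risk arithmetic in Bill's comment can be written out explicitly. This is just a sketch of the two calculations the comment already performs, with its own dollar figures; nothing here is new data:

```python
def implied_small_risk_vsl(payment, added_risk):
    """Small-risk value of life implied by accepting `payment` dollars
    in exchange for an added probability `added_risk` of death."""
    return payment / added_risk

def fair_compensation(vsl, added_risk):
    """Compensation owed for inflicting `added_risk` on someone
    whose small-risk value of life is `vsl` dollars."""
    return vsl * added_risk

# $17 for a one-in-a-million risk implies a $17 million small-risk VSL ...
vsl = implied_small_risk_vsl(17, 1e-6)

# ... but, as the comment stresses, that does NOT mean the person
# can be killed outright for $17 million; the linearity only holds
# for small risks.

# The penny example: a $1 million assigned value and a
# one-in-a-hundred-million risk prices out at one cent.
penny = fair_compensation(1_000_000, 1e-8)
```

The asymmetry the comment points to is exactly that `implied_small_risk_vsl` is only meaningful near zero risk: extrapolating it to probability 1 is the move that item (3) objects to.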