Random restriction as an alternative to random assignment? A mini-seminar from the experts

Robin Hanson suggested here an experimental design in which patients, instead of being randomly assigned to particular treatments, are randomly given restrictions (so that each patient would have only n-1 options to consider, with one option removed at random). I asked some experts about this design and got the following responses.
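To make the design concrete, here is a minimal sketch (not from Hanson's proposal; the option labels and function name are illustrative) of how random restriction would work: each patient gets the full menu minus one option, with the removed option drawn uniformly at random.

```python
import random

def restricted_menu(options, rng=random):
    """Return the menu a patient sees: the full option set minus one,
    with the dropped option chosen uniformly at random."""
    dropped = rng.choice(options)
    return [o for o in options if o != dropped]

# Example: four treatments; each patient sees a random three of them.
options = ["A", "B", "C", "D"]
menu = restricted_menu(options)  # e.g. ["A", "B", "D"] if "C" was dropped
```

The randomization is over which option is withheld, not over which treatment is received; the patient (or clinician) still chooses freely within the restricted menu.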

Eric Bradlow wrote:

I think “exclusion”, more generally, in Marketing has been done in the following ways:

[1] A fractional design — the experimenter intentionally shows each person only a subset of the choices, items, or attributes of a product. Of course, this is commonly done to reduce the complexity of the task, at the cost of the ability to estimate a full set of interactions. The challenge here (I wrote a paper about this in JMR in 2006) is that people infer the values of the missing attributes and do not, despite instructions, ignore them. Don Rubin actually wrote an invited discussion on my piece. So, random exclusion on the part of the experimenter is done all the time.

[2] A second way exclusion is sometimes done is that, prior to the choice or consumption task, you let the respondent remove “unacceptable” alternatives. There was a paper by Seenu Srinivasan of Stanford on this. In this manner, the respondent eliminates “dominated/would never choose” alternatives. This is again done to reduce task complexity.

[3] A third set of studies I have seen, and Eric Johnson can comment on the psychology of this much more than I can, is something that Dan Ariely (now of Duke, formerly of MIT) and colleagues have done, which seems closest to this post. In these studies, alternatives are presented and then “start to shrink and/or vanish.” What is interesting is that the alternatives he does this to are not the preferred ones, yet it has a dramatic effect on people’s preferences. I always found these studies fascinating.

[4] A fourth set of related work, for which Eric Johnson is well known, is a “mouse-lab”-like experiment where you allow people to search alternatives until they want to stop. This then becomes a sequential search problem; here, people exclude alternatives by choosing when to stop.

So, Andy, I agree with your posting that:

(a) Marketing researchers have done some of this.

(b) Depending on who is doing the excluding, one will have to model this as a two-step process, where the first step is self-selection (with an observational-study-like likelihood component, if one is going to be model-based).
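One way to read point (b): when the patient does the choosing within the restricted menu, the likelihood has a choice step and an outcome step. Below is a minimal sketch under assumptions not in the discussion — a multinomial-logit choice model for the self-selection and a Bernoulli outcome model; the function names and model forms are purely illustrative.

```python
import math

def choice_prob(utilities, menu, chosen):
    """Multinomial-logit probability that `chosen` is picked
    from the restricted menu, given per-option utilities."""
    denom = sum(math.exp(utilities[j]) for j in menu)
    return math.exp(utilities[chosen]) / denom

def log_lik(utilities, effects, data):
    """Two-step log likelihood.
    data: list of (menu, chosen, outcome) triples;
    outcome ~ Bernoulli(effects[chosen])."""
    ll = 0.0
    for menu, chosen, y in data:
        # Step 1: self-selection among the randomly restricted options.
        ll += math.log(choice_prob(utilities, menu, chosen))
        # Step 2: outcome given the chosen treatment.
        p = effects[chosen]
        ll += math.log(p if y == 1 else 1 - p)
    return ll
```

The random restriction enters through `menu`: because the experimenter controls which option is withheld, the selection step is partly randomized even though the final choice is the patient's.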

The aforementioned Eric Johnson then wrote:

I think there are at least two important thoughts here:

(1) Random inclusion for learning… Decision-making research has changed the way we think about preferences: they are discovered (or constructed), not ‘read’ from a table (thus Eric B.’s point 3).

A related point is that a random option can reveal a preference (gee, I never thought I liked ceviche…), so there may be value in adding random options for the respondent… The late Hillel Einhorn wrote about ‘making mistakes to learn.’

(2) “New wave” choice modeling often consists of generating the experimental design on the fly: adaptive conjoint. By definition, these models use the results from one choice to eliminate a bunch of possible options and focus on those that carry the most information. Olivier Toubia at Columbia Marketing is a master of this.

To elaborate on Eric B.’s points:

Consumer behavior research shows that elimination is a major part of choice for consumers, probably accounting for much of the variance in what is chosen. Making choice easier makes learning harder.

There is an interesting tradeoff here for both the individual and the larger public: you try an option you are likely not to like (a treatment which may well not work). If you are surprised, then you (or subsequent patients) benefit for a long time. Since this is an intertemporal choice, people may not experiment enough.

Finally, Dan “Decision Science News” Goldstein added:

I’ve never seen a firm implement such a design in practice, neither when I worked in industry, nor when I judged “marketing effectiveness” competitions.

My own thoughts are, first, that there are a lot of interesting ideas in experimental design beyond the theory in the textbooks. It would be worth thinking systematically about this (someday). Second, I want to echo Eric Johnson’s comment about preferences being constructed, not “read off a table” from some idealized utility function. Utility theory is beautiful but it distresses me that people think it fits reality in an even approximate way.

1 thought on “Random restriction as an alternative to random assignment? A mini-seminar from the experts”

  1. This isn't quite random restriction, but see:

    Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, Nierenberg A, Quitkin FM, Sackeim HA, Thase ME, Trivedi M.
    Strengthening clinical effectiveness trials: equipoise-stratified randomization.
    Biol Psychiatry. 2001 Nov 15;50(10):792-801.
