Anupam Agrawal writes:
I am an Assistant Professor of Operations Management at the University of Illinois. . . . My main work is in the supply chain area, and empirical in nature. . . . I am working with a firm that has two separate divisions: one makes cars, and the other makes trucks. Four years back, the firm made an interesting organizational change. They created a separate group of ~25 engineers in their car division (drawn from their quality and production engineers). This group was focused on improving supplier quality and reported to the car plant head. The truck division did not (and still does not) have such an independent “supplier improvement group”. Other than this unit in the car division, the organizational arrangements in the two divisions mimic each other. There are many suppliers common to the car and truck divisions.
Data on the quality of components coming from suppliers have been collected for the last four years. The organizational change happened in January 2007.
My focus is to see whether the organizational change (and a different organizational structure) drives improvements.
My hypothesis is that this changed structure in the car division strengthened supplier trust (my interviews with suppliers point to this), which helped improve quality for the car division (but not for the truck division, even for the same supplier). To analyze this, I was thinking of a difference-in-differences analysis between the quality data of these two divisions. The organizational change in the car division is similar to a quasi-experiment.
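The basic difference-in-differences comparison described above can be sketched in a few lines. All the numbers below are made up for illustration; in practice one would fit a regression with division, period, and interaction terms on the supplier-level quality data rather than compare four raw means.

```python
# Difference-in-differences on mean defect rates (hypothetical numbers).
# "car" is the treated division (it got the supplier improvement group in
# January 2007); "truck" is the comparison division.

# Mean defects per thousand parts, before/after the January 2007 change.
car_pre, car_post = 12.0, 7.5
truck_pre, truck_post = 11.0, 10.0

# The DiD estimate: the change in the treated division minus the change
# in the comparison division. A negative value means quality improved
# more in the car division than in the truck division.
did = (car_post - car_pre) - (truck_post - truck_pre)
print(did)  # -3.5
```

This estimate is only credible under the parallel-trends assumption: absent the new group, car and truck supplier quality would have moved in step, which is exactly the assumption the colleagues' objection targets.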
Here is the problem: When I presented my ideas to several of my colleagues, they suggested that I must do a propensity score matching analysis in addition to the difference-in-differences, since this is not a ‘natural experiment’: the firm did not randomly select whether to launch this program for car vs. truck, so the truck division is not a good “matched” control. An omitted variable may drive both adoption and performance on quality, resulting in omitted variable bias. The common suppliers also create contamination in the dependent variable, quality. So, given the data and the quasi-experimental setting, which kind of analysis is most suitable? Also, is this suggestion of combining the two kinds of analysis a good one? I have read papers that suggest this (Blundell and Dias, 2000), but I am apprehensive since my setting does not quite match that of the papers that suggest this analysis. I do have a separate division (of the same firm), and there are common suppliers, so how can propensity score matching provide more information?
My reply: I think that to do matching, you need to have multiple cases. In this problem, the different cases are the different suppliers, right? In that case, yes, it makes sense to try to match or otherwise adjust for pre-treatment differences in the characteristics of the suppliers. I agree with your colleagues that a difference-in-differences analysis is not enough and will not necessarily do a good job of adjusting for imbalance or lack of complete overlap. We discuss these issues further in chapters 9 and 10 of ARM.
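Matching at the supplier level might look like the sketch below. All supplier names and numbers are hypothetical, and for simplicity each treated supplier is matched to the control with the closest pre-treatment defect rate; in a real analysis one would estimate a propensity score (e.g., a logistic regression of treatment status on several pre-treatment supplier characteristics) and match on that score instead of a single raw covariate.

```python
# Matching at the supplier level, then a DiD on the matched pairs.
# Each tuple: (supplier_id, pre-period defect rate, post-period defect rate).
treated = [("S1", 12.0, 7.0), ("S2", 15.0, 9.0)]    # car-division suppliers
controls = [("S3", 11.5, 10.5), ("S4", 16.0, 15.0), ("S5", 20.0, 19.5)]

def nearest_control(t, pool):
    """Match a treated supplier to the control with the closest
    pre-treatment defect rate (a one-covariate stand-in for a
    propensity score)."""
    return min(pool, key=lambda c: abs(c[1] - t[1]))

pair_dids = []
for t in treated:
    c = nearest_control(t, controls)
    # Pair-level DiD: the treated supplier's change minus its
    # matched control's change over the same period.
    pair_dids.append((t[2] - t[1]) - (c[2] - c[1]))

# Average effect on the treated suppliers, after matching.
att = sum(pair_dids) / len(pair_dids)
print(att)  # -4.5
```

Combining matching with differences-in-differences in this way is in the spirit of the Blundell and Dias (2000) suggestion: matching handles observed pre-treatment imbalance between the supplier groups, while the differencing removes time-invariant unobserved differences.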