Bayesian approaches to quality control (acceptance) sampling?

Stanley Chin writes:

I had the usual stats training in grad school, and after some years as a practicing statistician and economist I find myself increasingly approaching problems from a Bayesian perspective, never more so than in a problem that was brought to me as an external consultant. My question is brief; the setup is a little long. The question is in the subject line: can you recommend any reading on Bayesian approaches to quality control sampling?

The setup is this: the regulator is trying to work with blood banks to determine an appropriate quality control sampling scheme for “leukoreduced” (filtered to remove white blood cells) blood units. A particular level of white blood cells has been set as the maximum permissible; units above this are “failures” and either have to be reworked or discarded. The impact of missing a “failure” unit is actually not that large: it can cause fever, and may be related to very rare complications. Leukoreduction has been carried out for a number of years, and there’s good data on the distribution of white blood cell counts in the resulting products (or so I’m told; those data are not yet in my hands).

The regulator’s current “friendly guidance” (it’s not a rule) is for each blood center to test 60 units a month. For many centers, this would amount to 100% testing of their leukoreduced platelet pheresis products. I think this number comes off the old ANSI quality control charts. It seems to me, though, that if I have enough data to establish a prior on the distribution of white blood cell counts, and I know something about the cost of testing and the cost of false negative units, then I can work out an optimal sampling plan to detect (clinically significant) shifts in the underlying process: anything that increases the weight in the right tail of the pdf (a shift in the mean or in higher moments).

Or so I think. I’m hoping that this is a worked problem? But a brief perusal of JSTOR and the like did not reveal a canonical article on the subject. Thus my plea for help. Anything that you could offer would be greatly appreciated!

My (lame) reply: This sounds really interesting. My quick thought is that a hierarchical model would do the trick. The idea would be that, instead of having to come up with a “prior distribution,” you’d have a “group-level model” which could be more directly estimated from data. Once that’s set up, the decision analysis should be straightforward given the assumed cost structure.
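To give a sense of what that might look like, here is a rough sketch in Python. It is not a worked solution to Stanley’s problem: the counts are simulated, and the regulatory threshold, costs, shift size, and alarm rule are all invented for illustration. The partial pooling is a quick normal-normal empirical-Bayes calculation rather than a full hierarchical fit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Stand-in historical data: one mean log WBC count per center (all hypothetical).
n_centers = 20
units_per_center = 50                                   # historical tested units per center
true_center_means = rng.normal(13.0, 0.4, n_centers)    # log-count scale, made up
sigma_within = 1.0                                       # within-center SD of log counts (assumed known)
center_means = rng.normal(true_center_means,
                          sigma_within / np.sqrt(units_per_center))

# Group-level ("hierarchical") model: normal-normal empirical Bayes.
se2 = sigma_within**2 / units_per_center                # sampling variance of each center mean
mu_hat = center_means.mean()                            # estimated group-level mean
tau2_hat = max(center_means.var(ddof=1) - se2, 0.0)     # group-level variance, method of moments
shrink = tau2_hat / (tau2_hat + se2)                    # weight on each center's own mean
theta_hat = mu_hat + shrink * (center_means - mu_hat)   # partially pooled center estimates

# Decision analysis for one center, with made-up costs.
log_threshold = np.log(5e6)     # maximum permissible WBC count per unit (illustrative value)
delta = 1.5                     # hypothesized "clinically significant" shift in the log mean
p_shift = 0.05                  # assumed monthly probability that such a shift occurs
c_test = 50.0                   # cost of testing one unit
c_undetected = 20000.0          # cost of a shifted process running undetected for a month
monthly_units = 80              # units produced per month at this center

theta = theta_hat[0]            # partially pooled estimate for the center of interest
# Per-unit probability of exceeding the threshold if the process has shifted upward:
p_fail_shifted = 1 - norm.cdf((log_threshold - (theta + delta)) / sigma_within)

def expected_cost(n):
    """Testing cost plus the expected cost of failing to flag a shifted process.
    Alarm rule: flag the process if any sampled unit exceeds the threshold.
    (False alarms and the per-unit cost of transfusing a missed failure are
    ignored to keep the sketch short.)"""
    p_no_alarm = (1 - p_fail_shifted) ** n
    return n * c_test + p_shift * p_no_alarm * c_undetected

costs = [expected_cost(n) for n in range(monthly_units + 1)]
best_n = int(np.argmin(costs))
print(f"per-unit failure probability under a shift: {p_fail_shifted:.3f}")
print(f"monthly sample size minimizing expected cost: {best_n} of {monthly_units}")
```

With real historical data in place of the simulated counts, the same structure would let each center trade off the cost of testing against the risk of letting a shifted process run undetected for a month, which is essentially the calculation Stanley describes.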

1 thought on “Bayesian approaches to quality control (acceptance) sampling?”

  1. Coming from a statistical process control perspective…

    You're losing information by classifying the results as pass or fail.

    Try using a process control chart to monitor the actual white blood cell count. This will allow you to estimate the amount of variability in the data, and hence the maximum white blood cell count you can expect to see (using a 3 standard deviation range).

    If the data are stable (no points outside the control limits), you can estimate the standard deviation of the ongoing data stream and hence calculate the sample size needed to measure the white blood cell count to the desired level of accuracy (see the sketch after this comment).

    Or, better still, just keep on plotting the white blood cell counts to detect any changes in the process over time.

    If the data are not in statistical control, you have no basis for selecting a sample size, because you have no basis for inferring from the sample to the population. In this case, you need to identify and rectify the causes of the out-of-control data before setting up your monitoring plan.

    Cheers

    Robert
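For readers who want to see Robert’s suggestion spelled out, here is a minimal sketch of an individuals control chart and the follow-on sample-size calculation. The counts are simulated and the target precision is arbitrary; in practice one would probably chart log counts, since white blood cell counts are right-skewed.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical counts from 60 tested units (same made-up scale as the sketch above).
wbc = rng.lognormal(mean=13.0, sigma=0.3, size=60)

# Individuals (X) chart limits.  Sigma is estimated from the average moving range
# divided by d2 = 1.128 (the constant for subgroups of size 2), the usual estimator
# for an individuals chart.
center_line = wbc.mean()
moving_range = np.abs(np.diff(wbc))
sigma_hat = moving_range.mean() / 1.128
ucl = center_line + 3 * sigma_hat
lcl = max(center_line - 3 * sigma_hat, 0)
out_of_control = (wbc > ucl) | (wbc < lcl)
print(f"center = {center_line:.3g}, UCL = {ucl:.3g}, LCL = {lcl:.3g}")
print(f"points outside the limits: {out_of_control.sum()}")

# If the process looks stable, pick a sample size that estimates the mean count
# to within +/- E with roughly 95% confidence: n = (1.96 * sigma / E)^2.
if not out_of_control.any():
    E = 0.10 * center_line   # target precision of +/-10% of the mean (arbitrary choice)
    n_required = int(np.ceil((1.96 * sigma_hat / E) ** 2))
    print(f"approximate sample size for +/-10% precision on the mean: {n_required}")
else:
    print("Process not in statistical control; find and fix special causes first.")
```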
