# Advice on sample size and power calculations: chapter 13 of Stephen Senn’s book, chapter 20 of our book, and some other references

Following this discussion of his statistical advice, Stephen Senn sent me this chapter on “Determining the Sample Size”, which has a great beginning:

Clinical trials are expensive, whether the cost is counted in money or in human suffering, but they are capable of providing results which are extremely valuable, whether the value is measured in drug company profits or successful treatment of future patients. Balancing potential value against actual cost is thus an extremely important and delicate matter and since, other things being equal, both cost and value increase the more patients are recruited, determining the number needed is an important aspect of planning any trial. It is hardly surprising, therefore, that calculating the sample size is regarded as being an important duty of the medical statistician working in drug development.

I’ll pile on the references by linking to chapter 20 of my book with Jennifer, which, compared to Senn’s book, has a bit more calculation and a bit less discussion. I suspect many readers will benefit from reading both. (Full link to the book here.)

Finally, here’s a link with a couple more references, including a great little article by Russ Lenth.
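For readers who want the arithmetic and not just the discussion, here is a minimal sketch of the standard normal-approximation power calculation for a two-arm trial, of the kind covered in the chapters above. All numbers are hypothetical, and the z-test approximation ignores the small correction a t-based calculation would add:

```python
from math import ceil, erf, sqrt

Z_975 = 1.959964  # standard normal quantile for a two-sided 5% test
Z_80 = 0.841621   # standard normal quantile for 80% power

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(n_per_arm, d):
    # Approximate power of a two-sided two-sample z-test at alpha = 0.05,
    # with standardized effect size d and n_per_arm subjects per group
    return normal_cdf(d * sqrt(n_per_arm / 2) - Z_975)

def n_per_arm(d):
    # Smallest per-arm sample size giving at least 80% power
    return ceil(2 * ((Z_975 + Z_80) / d) ** 2)

d = 0.5  # hypothetical standardized effect size
n = n_per_arm(d)
print(n, round(power(n, d), 2))  # 63 per arm, power ~ 0.80
```

The point of such a sketch is less the exact number than seeing how sensitive n is to the assumed effect size d, which is usually the most uncertain input.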

## 3 thoughts on “Advice on sample size and power calculations: chapter 13 of Stephen Senn’s book, chapter 20 of our book, and some other references”

1. I don’t think it’s widely known how often sample sizes are not determined by statistical calculations. Quite often the discussion goes something like this.

Statistician: You need a sample size of 50 to obtain such and such power.

Physician: You can have 28 patients.

Statistician: OK, then the sample size is 28.

I’m not saying this is necessarily a bad thing. There may be very good reasons for a hard limit on the number of patients. And it’s better to acknowledge the limitations than to come up with a disingenuous post hoc justification for the sample size.

3. Chapter 14 of his book, on multi-center trials, is also interesting, particularly the discussion of fixed versus random effects. Having seen the same topic in your ARM book, I thought his response was interesting:
“I have no firm opinion as to which of fixed or random effects is the right approach and consider that both have their place. I would nearly always propose a fixed-effect analysis of a clinical trial. I might also consider that a random-effect analysis would be useful on occasion; especially if there were rather many centres which had been fairly widely selected. I am, however, rather sceptical of some of the enthusiasm which proponents of the random effect model seem to generate on occasion.”