“Losing one night’s sleep may increase risk factor for Alzheimer’s, study says”

CNN’s on it:

A preliminary study found the loss of one night’s sleep in healthy young men increased the levels of tau protein in their blood compared to getting a complete night of uninterrupted sleep.

Studies have shown that higher levels of tau protein in the blood is associated with an increased risk of developing Alzheimer’s disease. “Our exploratory study shows that even in young, healthy individuals, missing one night of sleep increases the level of tau in blood suggesting that over time, such sleep deprivation could possibly have detrimental effects,” said study author Dr. Jonathan Cedernaes, a neurologist at Uppsala University in Sweden. The study was published Wednesday in Neurology, the medical journal of the American Academy of Neurology.

From the linked paper:

Methods In a 2-condition crossover study, 15 healthy young men participated in 2 standardized sedentary in-laboratory conditions in randomized order: normal sleep vs overnight sleep loss. Plasma levels of total tau (t-tau), Aβ40, Aβ42, neurofilament light chain (NfL), and glial fibrillary acidic protein (GFAP) were assessed using ultrasensitive single molecule array assays or ELISAs, in the fasted state in the evening prior to, and in the morning after, each intervention.

Results In response to sleep loss (+17.2%), compared with normal sleep (+1.8%), the evening to morning ratio was increased for t-tau (p = 0.035). No changes between the sleep conditions were seen for levels of Aβ40, Aβ42, NfL, or GFAP (all p > 0.10). The AD risk genotype rs4420638 did not significantly interact with sleep loss–related diurnal changes in plasma levels of Aβ40 or Aβ42 (p > 0.10). . . .

Hey, didn’t somebody say something about the difference between significant and non-significant?

Anyway, this all could be a real thing. The headline is just a bit dramatic.

28 thoughts on ““Losing one night’s sleep may increase risk factor for Alzheimer’s, study says””

  1. When journalists write headlines like this: Is it because (1) they don’t understand the nuances of significance or the strength of evidence, or (2) because they want a headline that will say something meaningful even when there isn’t much of a story? Both, I imagine, but I guess (2) is the bigger factor.

    • It’s a continuum. The researchers are mainly (2); the university PR office is a mix; journalists are mainly (1). But they aren’t mutually exclusive. Hanlon’s razor suggests everybody should be (1), but it’s also easier to work towards a “significant” result when you don’t really know what you’re doing.

  2. To be fair, the headline says “may increase risk factor” not “may increase risk”. This actually seems like quite an accurate and undramatic assessment. The problem is just that lots of readers will wrongly conflate “risk factor” and “risk” because so much shoddy writing by scientists and journalists alike has trained people to do this.

    • I had some difficulty understanding what “may increase risk factor” meant, even after you pointed it out. Sleep deprivation increases the blood level of tau, which may be a “risk factor” for AD (the article says higher levels are “associated with an increased risk,” but a causal link is far from clear).

      • Even if there is no causal relationship such that higher X sets off a chain of events that actually makes Y happen… it does imply that we assign more probability to Y happening, according to whatever risk model makes X a risk factor for Y.

        For example, if we have a model for weight that takes exactly one covariate “height” as an observed “risk factor”, then if you grow taller, we will estimate your weight as being higher (higher risk of being heavy), even if what happened was you were a mildly obese 16-year-old who started exercising while your bones were growing, and gained 3 inches of height while dropping 20 lbs of weight.

        What happens, versus our knowledge about it, are two different things. (A small code sketch of the height example follows.)
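
        A minimal sketch of that example in code, with a hypothetical linear model whose intercept and slope are invented purely for illustration (nothing here is fit to real data):

          # A toy "risk model" for weight with height as its only observed covariate.
          # Intercept and slope are made up for illustration.
          def predicted_weight(height_in: float) -> float:
              """The model's weight estimate, given its single 'risk factor': height."""
              return -190.0 + 5.0 * height_in

          # The mildly obese 16-year-old, before and after growing 3 inches:
          print(predicted_weight(64.0))  # 130.0 -- model's estimate before
          print(predicted_weight(67.0))  # 145.0 -- model's estimate after
          # The model's assessed "risk of being heavy" went up with height,
          # even though the actual person dropped 20 lbs while growing.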

        • No, increasing a risk factor X for Y only increases the risk of Y if X causes Y. Otherwise it just impacts the output of your model, which is not the same as actual risk, because it doesn’t model the effects of counterfactual interventions.

        • This is a fundamentally Bayesian vs Frequentist issue. What is “actual risk”?

          To a Bayesian, basically, “risk” is just “probability of,” and probability is just “degree of credibility”: it lives entirely “in our heads” (or in the “brain” of our computer). So changing your X changes your risk of Y because it changes the probability our model assigns to you.

          In contrast, to a Frequentist there is no such thing as individual risk: whether an individual gets the condition or not is either 0 or 1, and can’t have a probability attached, since it’s not a repeatable event. So to a Frequentist, risk means “frequency of the condition in the population of interest.” Again, increasing your X puts you in a different population, one in which the frequency is higher; so, to the extent that you can attribute the “frequency in the population” to a “member” of the population, it “increases your risk” (really, it just moves you to a different population, one that has a higher risk).

          Risk simply doesn’t exist outside of either our calculators/models (for a Bayesian) or our population-level averages (for a Frequentist). There is no such thing as “actual risk”.

        • Note that if you have a mechanistic model, it’s possible that increasing X by one mechanism wouldn’t increase risk, while increasing it by another mechanism would. Then X is only a “risk factor” under the second mechanism.

        • As a further meditation on risk versus probability, I really like this particular thought experiment: suppose you are drawing a ball from an urn containing an arbitrary number (N) of balls, each blue or red. Why an urn? Because this is probability theory, dammit! But moving on :) In scenario A, you happen to know that the distribution of red and blue balls is exactly 50-50. That is to say, the odds, or the “risk” if you will, are known exactly. In scenario B, you know nothing EXCEPT that the balls could plausibly be either blue or red.
          What is the probability the ball you draw is blue under scenario A?
          What is the probability the ball you draw is blue under scenario B?
          Do these differ? Why or why not?
          How you approach this problem is enormously probative of the underlying distinctions between risk and probability, and whether you side philosophically more with Bayesianism – which just means you are comfortable generalizing the tools of probability straightforwardly to decision making under uncertainty – or with some interpretation of probability that is less amenable to a Bayesian perspective.
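
          For what it’s worth, here’s a minimal sketch of one standard Bayesian answer, under the assumption of a uniform prior over the urn’s composition in scenario B (that prior is a modeling choice on my part, not something the problem dictates). Both scenarios assign probability 1/2 to drawing blue, even though the states of knowledge behind that number are very different:

            from fractions import Fraction

            # Scenario A: the composition is known to be exactly 50-50.
            p_blue_A = Fraction(1, 2)

            # Scenario B: all we know is that each ball is blue or red. Under a
            # uniform prior over k = number of blue balls, k in {0, ..., n}, the
            # predictive probability of drawing blue is E[k/n].
            def p_blue_B(n: int) -> Fraction:
                return sum(Fraction(k, n) for k in range(n + 1)) / (n + 1)

            print(p_blue_A)      # 1/2
            print(p_blue_B(10))  # 1/2

          Where the scenarios differ is in how that number would update: after several blue draws (with replacement), scenario A still says 1/2 for the next draw, while scenario B shifts toward blue.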

        • Z:

          I’m pretty sure I see what you’re getting at, but I don’t think what you write is correct as stated. For example, suppose Y is height at age 13 and X is height at age 12.5. I don’t know that it makes sense to say that height at age 12.5 “causes” height at age 13, but I can imagine that lots of treatments that increase X will also increase Y.

          Or, to make it even cleaner, let X be recorded height at age 12.5 on some form, and Y be actual height at age 13. And suppose that the form being filled out at age 12.5 is put in a drawer somewhere, so it has no effect on what happens at age 13. Still, I’d expect that the sorts of interventions that would increase X would also increase Y. That said, I could imagine an intervention that affects X but not Y: in this case, the intervention could be to pay someone to write a higher number on the form at age 12.5. So maybe the right way to say it is that, if X does not cause Y, then it’s possible to imagine an intervention that changes X but does not change Y. But many interventions that change X can also change Y, even if X does not cause Y.

        • You’re right, Andrew, I stated that way too strongly. I should have said that changing X only *necessarily* impacts the risk of Y if X causes Y. Otherwise it *might* just impact model output. (Changing tau levels on a single day probably has about the same amount of impact on risk of ultimately getting Alzheimer’s disease as erasing the tau level in your medical record and replacing it with another number, then sticking it back in the file drawer, to use your example.)

        • You’re still acting as if “risk of Y” is an “actual thing” which has a “true quantity,” kind of like, say, “kg of adipose tissue.”

          This just doesn’t make any sense to me.

          When you say it “just impacts model output”: since *risk IS model output*, this is 100% the same thing.

          What I’ll agree with is that some models are better than others. A model which predicts a risk increase when there is no mechanistic causal connection from X to Y will be a less good model than one which relies on causal connections.

        • Daniel:

          I am finding your conceptualization confusing, and perhaps I am stuck in a frequentist mindset. That said, it seems that a risk factor such as a weak heart valve, if unmeasured and not included in a model, is still a risk factor for heart failure.

          Are we simply defining risk differently?

        • Curious,

          A weak heart valve is a risk factor for heart failure in any model in which the weak heart valve is taken into account in such a way as to change the assigned probabilities.

          Let’s put it this way… suppose I tell you that person A is 5ft 6, female, and has 32% body fat… You’ll assess their risk of heart failure as one thing, primarily based on obesity perhaps. Now we add the info that they have a weak heart valve… You’ll assess the risk as even higher, right? But nothing changed about the person! They’re the same person and have the same heart valve issue as before I told you that they have a weak heart valve.

          This shows that risk can’t be a thing inside the person. It’s not like “kg of adipose tissue” which is objectively a quantity inherent to the person. Unlike that, “Risk of heart failure” is a quantity inside the head of the person making the assessment and is inherently based on their state of knowledge about the person in question.

          Is the weak heart valve a factor that physically contributes to whether or not their heart fails? Yes. But it doesn’t contribute to our assessment of whether she’s likely to have a heart failure unless it’s included as known information in whatever assessment procedure we use to assign the probability.

          Suppose secretly our poor patient has brakes on her car that will catastrophically fail during her trip to the coast next week. In the presence of this knowledge she has zero risk of spontaneous heart failure, because she’ll die in an accident long before her heart stops pumping due to its structural or electrical defects. But in the absence of that knowledge she’s at high risk of heart failure… And in the absence of the knowledge of the valve defect, she’s at only moderate risk due to obesity…

          Risk of a thing is not the thing itself.
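
          A trivially small sketch, with invented numbers, just to make the bookkeeping explicit: the assessed risk is a function of the assessor’s information, and the patient never changes.

            # Assessed risk of heart failure for the SAME fixed patient, as a
            # function of what the assessor happens to know. Numbers invented.
            def assessed_risk(knows_valve: bool, knows_brakes: bool) -> float:
                if knows_brakes:
                    return 0.0   # she dies in the crash before any heart failure
                if knows_valve:
                    return 0.30  # obesity plus the weak valve
                return 0.10      # obesity alone

            for valve, brakes in [(False, False), (True, False), (True, True)]:
                print(f"knows valve={valve!s:5} brakes={brakes!s:5} "
                      f"-> assessed risk {assessed_risk(valve, brakes):.2f}")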

        • Daniel:

          So what you are saying is that the mistake I’m making is to confuse the reality we are modeling, which includes both known and unknown causal factors, with the concept of risk, defined exclusively by the modeled probability.

    • This is, as I understand it, essentially the same logic that is applied to (“bad”) cholesterol. Because a lot is bad, kind of a lot is kinda bad. Just below kinda-bad is potentially pre-bad. Etc.

      • Sorry to double post, but the same goes for suntans and skin cancer: burns are bad, tanning beds are bad, ergo all tans are bad (which is where the evidence peters out).

  3. In this particular case, I’m less critical of the journalist than the scientist and the journal. Neurology seems like a pretty big journal to submit your “exploratory” study to, especially when it seems like it would be pretty easy to replicate and/or add conditions. I mean, what if the risk factor drops dramatically after two nights of sleep deprivation?? And shame on the journal for publishing it, when there are probably dozens of more valuable submissions rejected as (supposedly) unlikely to represent a significant impact on the field–say, letters from methodologists critiquing the quality of the design or analysis in a previous Neurology article.

  4. Please can you explain how the “difference between significant and non-significant” criticism applies here? It looks like they directly compared ratios (evening-to-morning change in t-tau) between the “sleep loss” and “normal sleep” condition, and got p=0.035.

    • I was wondering the same thing. They mention a significant result and several non-significant results but don’t compare them directly. And they do report a separate interaction test, which is what you’re supposed to do to test for a difference.

    • Ben, Daniel:

      It’s this: “In response to sleep loss (+17.2%), compared with normal sleep (+1.8%), the evening to morning ratio was increased for t-tau (p = 0.035). No changes between the sleep conditions were seen for levels of Aβ40, Aβ42, NfL, or GFAP (all p > 0.10).” They’re creating a division between one case (t-tau), where they say there was an increase, and others (Aβ40 etc.), where they say there were no changes. But this division is based on p = 0.035 in one case and p > 0.10 in the others. That’s really noisy; you can easily see p = 0.035 in one case and p > 0.10 in others just from chance variation alone.
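
      To see how noisy, here’s a rough simulation sketch. The effect size, noise level, and number of subjects are invented, and I’m using simple one-sample t tests rather than the paper’s actual analysis; the point is only that two outcomes with the same true effect often land on opposite sides of p = 0.05:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        true_effect, sd, n, sims = 0.5, 1.0, 15, 10_000

        disagree = 0
        for _ in range(sims):
            # Two outcome measures of the SAME underlying effect (think t-tau
            # and Abeta42), each with the same sample size and noise.
            p1 = stats.ttest_1samp(rng.normal(true_effect, sd, n), 0).pvalue
            p2 = stats.ttest_1samp(rng.normal(true_effect, sd, n), 0).pvalue
            disagree += (p1 < 0.05) != (p2 < 0.05)

        print(f"one 'significant', the other not: {disagree / sims:.0%} of runs")

      With these made-up numbers, the two measures disagree about “significance” in roughly half the runs, even though their true effects are identical.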

