"Project Talent" adds to long-range dementia predictions

Language Log 2018-10-01

Tara Bahrampour, "In 1960, about a half-million teens took a test. Now it could predict the risk of Alzheimer’s disease.", WaPo 9/21/2018:

In 1960, Joan Levin, 15, took a test that turned out to be the largest survey of American teenagers ever conducted. It took two-and-a-half days to administer and included 440,000 students from 1,353 public, private and parochial high schools across the country — including Parkville Senior High School in Parkville, Md., where she was a student. […]

Fifty-eight years later, the answers she and her peers gave are still being used by researchers — most recently in the fight against Alzheimer’s disease. A study released this month found that subjects who did well on test questions as teenagers had a lower incidence of Alzheimer’s and related dementias in their 60s and 70s than those who scored poorly.

The cited study is Alison Huang et al., "Adolescent Cognitive Aptitudes and Later-in-Life Alzheimer Disease and Related Disorders", JAMA 2018:

Findings In this cohort study of 43 014 men and 42 749 women, lower adolescent memory for words, in women, and lower mechanical reasoning, in men, were associated with higher odds of Alzheimer disease and related disorders in later life.

More specifically,

Population-based cohort study from the Project Talent–Medicare linked data set, a linkage of adolescent sociobehavioral data collected from high school students in 1960 to participants’ 2012 to 2013 Medicare Claims and expenditures data. The association between adolescent cognitive ability and risk of ADRD in later life was assessed in a diverse sample of 43 014 men and 42 749 women aged 66 to 73 years using a series of logistic regressions stratified by sex, accounting for demographic characteristics, adolescent socioeconomic status, and regional effects.

How big are these effects?

Results from logistic regressions are expressed as odds ratios and Bonferroni-corrected 95% simultaneous confidence intervals. We express cognitive aptitude measures as z scores such that change in odds ratio (OR) should be interpreted per SD disadvantage in cognitive ability. Using a Bonferroni-corrected α, low IQ (men: OR, 1.17; 95% CI, 1.04-1.32; women: OR, 1.17; 95% CI, 1.04-1.31) and low general academic aptitude (men: OR, 1.18; 95% CI, 1.05-1.33; women: OR, 1.19; 95% CI, 1.06-1.33) were significantly associated with increased odds of ADRD in later life in both men and women.

What does this really mean? This passage is helpful:

In women, low memory for words in adolescence showed the strongest association with ADRD in later life such that 1 SD disadvantage was associated with 1.16-fold increased odds (OR, 1.16; 95% CI, 1.05-1.28). In men, low memory for words was also an important indicator (OR, 1.16; 95% CI, 1.05-1.27); however, mechanical reasoning showed a slightly more robust association; 1 SD disadvantage in mechanical reasoning was associated with 1.17-fold higher odds of ADRD (OR, 1.17; 95% CI, 1.05-1.29).

What is "1.16-fold increased odds"? They tell us earlier that

In a sample of 43 014 men and 42 749 women, incidence of Medicare-reported ADRD was 2.9% in men (n = 1239) and 3.3% in women (n = 1416)

So for women in their sample, the overall odds of an ADRD diagnosis are 1416/(42749-1416), or about 0.034. If I've understood the report correctly, multiplying those odds by 1.16 would predict (after a bit of algebra) about 1634 ADRD diagnoses for women whose "memory for words" score was one standard deviation below the mean, ignoring the various statistical corrections for other factors.

[This is because

(1634/(42749-1634))/(1416/(42749-1416)) = 1.160

Here the odds ratios are pretty close to the risk ratios.]
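The bracketed arithmetic can be checked in a few lines of Python. The counts (42 749 women, 1416 diagnoses) and the odds ratio of 1.16 come from the quoted study; the helper-function names are mine:

```python
# Sanity check of the odds-ratio arithmetic above.

def odds(cases, total):
    """Odds of an event: cases / non-cases."""
    return cases / (total - cases)

def cases_from_odds(o, total):
    """Invert the odds formula: n/(total-n) = o  =>  n = o*total/(1+o)."""
    return o * total / (1 + o)

total_women = 42749
adrd_women = 1416

baseline_odds = odds(adrd_women, total_women)       # overall odds, ~0.034
raised_odds = 1.16 * baseline_odds                  # odds at 1 SD below the mean
predicted_cases = cases_from_odds(raised_odds, total_women)

print(round(baseline_odds, 4))   # 0.0343
print(round(predicted_cases))    # 1634
# Recover the odds ratio from the predicted count:
print(round(odds(predicted_cases, total_women) / baseline_odds, 3))  # 1.16
```

Since the baseline incidence is only about 3.3%, the denominators barely change, which is why the odds ratio and the risk ratio come out so close here.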

These are substantial epidemiological changes, even if the predictive value for individuals remains relatively low. What accounts for them?

The authors mention the "Nun study" among other earlier indications of similar relationships — for some background, see "Writing style and dementia", 12/3/2004; "Miers dementia unlikely", 10/21/2005; "Nun study update", 8/27/2009. As that last post notes, a 2009 paper in Neurology by Diego Iacono et al. ("The Nun Study. Clinically silent AD, neuronal hypertrophy, and linguistic skills in early life") established that "higher idea density scores in early life are associated with intact cognition in late life despite the presence of AD lesions". This is one of several lines of evidence suggesting that the connection between early-life cognitive skills and later-life AD is probably not due to a difference in the degree of later-life neurodegeneration, but rather to a "cognitive reserve" effect. "Cognitive reserve" means that people who start out with better skills can function at a higher level with a given amount of physiological deterioration, and thus delay diagnosis.

But is this because of some genetic (or at least embryonic) difference in neurophysiology? Or is it because of differences in childhood experience (including nutrition, education, and other aspects of the environment)? Or some mixture of both?

I don't know of any evidence on this point. So we might as well add "protection against ADRD" to the many reasons to improve childhood nutrition and education, avoid environmental toxins, etc.