A.I. Birnbaum1, J.D. Grill2,3,4,5, D.L. Gillen1,2,3
1. Department of Statistics, University of California, Irvine, Irvine, CA, USA; 2. Institute for Memory Impairments and Neurological Disorders, University of California, Irvine, Irvine, CA, USA; 3. Alzheimer’s Disease Research Center, University of California, Irvine, Irvine, CA, USA; 4. Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, USA; 5. Department of Psychiatry and Human Behavior, University of California, Irvine, Irvine, CA, USA
Corresponding Author: Adam I. Birnbaum, Bren Hall 2019, Irvine, CA, 92697-1250, USA, abirnbau@uci.edu, (949) 824-9862
J Prev Alz Dis 2023;3(10):471-477
Published online April 12, 2023, http://dx.doi.org/10.14283/jpad.2023.45
Abstract
BACKGROUND: Cohort effects in study populations can impact clinical trial conclusions and generalizability, particularly in trials with planned interim analyses. Long recruitment windows may exacerbate these risks in Alzheimer’s disease (AD) trials.
OBJECTIVES: To investigate the presence of cohort effects in mild-to-moderate AD trials.
DESIGN: Retrospective analysis using pooled participant-level data from nine randomized, placebo-controlled trials conducted by the Alzheimer’s Disease Cooperative Study (ADCS).
SETTING: Trials were multicenter studies conducted by an academic trial network.
PARTICIPANTS: The trials enrolled participants with mild, mild-to-moderate, or moderate AD who were over age 50 and had Mini-Mental State Examination scores between 12 and 26.
INTERVENTIONS/EXPOSURE: We defined a participant’s site-standardized enrollment time as the number of days between their screening date and the first screening date among randomized participants at their site within their study.
MAIN OUTCOMES AND MEASURES: Our primary outcome was the 12-month change in the Alzheimer’s Disease Assessment Scale – Cognitive Subscale (ADAS-Cog). Secondary outcomes were participant demographics and time to study discontinuation.
RESULTS: The pooled sample consisted of N=2,754 participants at baseline, with N=2,191 participants completing a 12-month visit. We found no meaningful differences in the distributions of sex, race and ethnicity, age, years of education, or baseline ADAS-Cog score across enrollment time. We found a significant association between enrollment time and 12-month change in ADAS-Cog: participants enrolling 100 days later tended to experience a 12-month increase on the ADAS-Cog that was 0.16 points greater (reflecting greater cognitive decline; 95% CI: (0.021, 0.294), p = 0.02), after controlling for potential confounding factors.
CONCLUSION: We found minimal evidence of clinically relevant cohort effects in ADCS trials. Our results reinforce the original findings of these trials.
Key words: Alzheimer’s Disease, cohort effects, clinical trial design.
Introduction
Clinical trials rely on well-defined patient populations to draw meaningful and generalizable conclusions about treatments under study. Recruiting a sample of participants that is representative of the target population and instituting proper randomization are key to avoiding bias in estimates of treatment effects. These processes can be disrupted by cohort effects, in which participants who enter trials later differ from those enrolled earlier. Shifts in participant characteristics over the course of accrual may yield misleading estimates of treatment effects at interim analyses and/or potential bias due to imbalances of confounding factors across treatment groups.
Alzheimer’s disease (AD) trials are at particular risk for biases from cohort effects due to extended recruitment windows that result from high rates of exclusion and logistical barriers to participation (1, 2). Participants recruited later into trials may differ from those recruited earlier due to depletion of pools of readily available participants or due to differential recruitment rates across heterogeneous study sites (3).
The implications of cohort effects recently came to light in the controversial EMERGE and ENGAGE trials of the anti-amyloid treatment aducanumab. In these trials, the sponsor contended that a cohort effect resulted in an invalid interim futility analysis and inappropriate trial stoppage (4). This situation exemplifies that the presence of cohort effects can have critical repercussions on trials and their findings.
Here, we aimed to assess the potential presence of cohort effects in a set of nine AD trials performed by the Alzheimer’s Disease Cooperative Study (ADCS), a joint initiative between academic trialists and the National Institute on Aging. In particular, we investigated whether cohort effects were present with respect to demographic data, cognitive decline, study partner type, and study discontinuation.
Methods
Data Source
We used data from nine ADCS trials that were conducted between 1999 and 2014. We received the datasets through the University of California, San Diego ADCS Legacy database and the Alzheimer’s Therapeutic Research Institute at the University of Southern California. In chronological order, the study interventions tested in the ADCS trials were the non-steroidal anti-inflammatory drugs (NSAIDs) rofecoxib and naproxen (5), simvastatin (6), high-dose vitamin B supplementation (7), valproate (8), huperzine A (9), docosahexaenoic acid (DHA) (10), intravenous immunoglobulin (IVIg) (11), resveratrol (12), and the FYN kinase inhibitor AZD0530 (13). All trials were placebo-controlled and failed to demonstrate superiority of the intervention over placebo on their prespecified primary outcomes.
With the exception of the huperzine A trial, which had a primary endpoint at 16 weeks and an optional extension study that ran for up to 12 additional months, each trial had a duration of at least 12 months. Seven trials enrolled participants with mild-to-moderate AD, one enrolled participants with moderate AD (valproate), and one enrolled participants with mild AD (AZD0530). The trials included similar Mini-Mental State Examination (MMSE) ranges in their inclusion criteria; the largest difference was between the valproate (range: 12-20) and AZD0530 (range: 18-26) trials. Four participants had missing race and ethnicity information and one had missing years of education; we omitted these participants from regression analyses. Thus, our combined sample consisted of N=2,754 participants at baseline and 2,191 participants at 12 months. For analyses involving study partner type we omitted the IVIg data, which did not include study partner information.
Defining Time of Enrollment
We defined site-standardized enrollment time as the number of days between a participant’s screening visit and the first screening visit among randomized participants at the same site within the same study. This construction standardizes at the study level, to account for calendar time differences between studies, and at the site level, to account for differences in site time to activation. We chose to standardize at the site level to reflect the hypothesis that cohort effects may be caused by local depletion of eligible patient pools.
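As a minimal sketch of this construction in R (assuming a hypothetical data frame dat with one row per randomized participant and hypothetical columns study, site, and screen_date), the site-standardized enrollment time could be computed as:

# Hypothetical columns: study (factor), site (factor), screen_date (Date)
dat$screen_day <- as.numeric(dat$screen_date)                     # days since 1970-01-01
first_day <- ave(dat$screen_day, dat$study, dat$site, FUN = min)  # earliest screening at each study-site
dat$site_std_time <- dat$screen_day - first_day                   # days since first screening at that site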
In secondary analyses, we defined a participant’s trial enrollment percentile as the proportion of participants enrolled into a given study at the time they enrolled. For example, if a trial’s total sample size was 100 participants, the participant with the 10th earliest screening date is assigned an enrollment percentile of 0.10 since, at that time, 10/100 participants had enrolled. In this construction there is no site-level standardization. We chose this definition to align with sequential testing frameworks where interim analyses are commonly performed when pre-specified proportions of the full sample reach the primary endpoint (14). We primarily treated both site-standardized enrollment time and trial enrollment percentile as continuous predictors in regression analyses.
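Continuing the same hypothetical sketch, the trial enrollment percentile standardizes only within study:

# Proportion of a study's participants enrolled by each participant's screening date (ties share a percentile)
n_study <- ave(dat$screen_day, dat$study, FUN = length)
rank_in_study <- ave(dat$screen_day, dat$study, FUN = function(x) rank(x, ties.method = "max"))
dat$enroll_pct <- rank_in_study / n_study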
Outcome Variables
To assess cohort effects on trial outcomes, we used the 12-month within-subject change in the 11-item ADAS-Cog (15). We chose the 12-month change because all trials had data up to at least that point and each trial recorded the ADAS-Cog at the 12-month visit. If participants had an incomplete task coded as being due to cognitive reasons, we imputed the maximum possible score for that task. If participants had an incomplete task coded as being due to a non-cognitive reason (participant refusal or physical reasons), we imputed their average score among completed tasks. In the IVIg dataset, we were not able to compute participants’ scores on the word recognition task due to a lack of word-specific data collection. For these participants we performed hot deck imputation, imputing the word recognition score of the participant in the other trials whose remaining task scores most closely matched; if multiple participants were equally similar, we imputed the average of their scores. Visual inspection indicated that the marginal distribution of imputed scores was similar to the observed distributions in the other trials. Where applicable, we repeated analyses without the IVIg dataset to assess sensitivity to our imputation procedure.
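A simplified sketch of the item-level imputation rules (omitting the hot deck step for the IVIg word recognition task) could look like the following, where items, max_score, and reason are hypothetical vectors describing a single participant's tasks:

impute_items <- function(items, max_score, reason) {
  # items: one participant's task scores, NA if incomplete
  # max_score: maximum possible score for each task
  # reason: "cognitive" or "non-cognitive" where incomplete, NA otherwise
  miss <- is.na(items)
  items[miss & reason == "cognitive"] <- max_score[miss & reason == "cognitive"]
  items[miss & reason != "cognitive"] <- mean(items[!miss])   # average over originally completed tasks
  items
}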
To assess the association between enrollment time and study partner type, we categorized study partners into three groups: spouse, adult child, or other. We did not have study partner information for the IVIg trial, hence this dataset was necessarily omitted for relevant analyses.
We defined a participant’s time to study discontinuation as the number of days between their screening date and the date they dropped out of their study. Withdrawing consent, discontinuing participation due to an adverse event other than death, or being lost to follow-up were all treated as dropping out of the study. If a participant reached their specific study’s primary endpoint (16 weeks, 12 months, 18 months, or 24 months, depending on the study) or died prior to either completing the study or dropping out, we treated them as censored. In addition to treating their time to study discontinuation in days as a continuous measure, we also considered a binary indicator of primary endpoint completion.
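Under the same hypothetical data frame, the event time and censoring indicator might be constructed as follows (dropout_date, endpoint_date, and death_date are assumed columns, NA when not applicable):

dat$event <- as.integer(!is.na(dat$dropout_date))                         # 1 = discontinued, 0 = censored
end_date <- pmin(dat$dropout_date, dat$endpoint_date, dat$death_date, na.rm = TRUE)
dat$time_to_event <- as.numeric(end_date - dat$screen_date)               # days from screening
dat$completed <- as.integer(!is.na(dat$endpoint_date))                    # reached the study-specific primary endpoint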
Statistical Analyses
To investigate potential differences in patient baseline characteristics over enrollment time, we split our sample into quartiles by enrollment time and qualitatively assessed for potential differences in distributions. We summarized continuous characteristics using means and standard deviations and summarized categorical characteristics using counts and proportions.
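For instance, using the hypothetical variables above plus assumed columns for participant characteristics (age, educ_years, adas_baseline, sex), the quartile split and a simple summary could be produced with:

dat$enroll_q <- cut(dat$site_std_time,
                    breaks = quantile(dat$site_std_time, probs = seq(0, 1, 0.25)),
                    include.lowest = TRUE)
aggregate(cbind(age, educ_years, adas_baseline) ~ enroll_q, data = dat, FUN = mean)  # continuous characteristics
prop.table(table(dat$enroll_q, dat$sex), margin = 1)                                 # categorical characteristics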
To analyze cohort effects on trial outcomes, we regressed the 12-month change in ADAS-Cog on enrollment time while controlling for baseline score, sex, age, race and ethnicity, and years of education. We included trial as a factor variable in this and all other regression models. We used generalized estimating equations (GEE) (16) with an exchangeable working correlation structure at the study-site level to allow for clustering between participants who attended the same site in the same study. Confidence intervals (CIs) were constructed using robust standard errors (17) unless stated otherwise.
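A sketch of this model with the geepack package, using the hypothetical variable names above plus assumed outcome and covariate columns (adas_change_12m, race_ethnicity), and with the coefficient scaled to a 100-day difference in enrollment time, might be:

library(geepack)
dat$study_site <- interaction(dat$study, dat$site, drop = TRUE)
dat <- dat[order(dat$study_site), ]   # geeglm expects clusters to be contiguous in the data
fit_gee <- geeglm(adas_change_12m ~ I(site_std_time / 100) + adas_baseline + sex + age +
                    race_ethnicity + educ_years + study,
                  id = study_site, data = dat, family = gaussian,
                  corstr = "exchangeable")
summary(fit_gee)   # reports robust (sandwich) standard errors by default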
To analyze cohort effects on study partner type, we regressed a participant’s study partner type at baseline on enrollment time and the same covariates as above using multinomial logistic regression. To account for potential clustering at the study-site level, we obtained bootstrap variance estimators for the regression parameter estimates. However, CIs obtained via these bootstrapped estimates did not qualitatively differ from those obtained with model-based variance estimators for any model parameters. Therefore, CIs reported for this analysis use model-based standard errors.
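A corresponding sketch with the nnet package (assuming partner_type is a factor whose reference, i.e. first, level is the spousal category) could be:

library(nnet)
fit_mn <- multinom(partner_type ~ I(site_std_time / 100) + adas_baseline + sex + age +
                     race_ethnicity + educ_years + study,
                   data = dat, Hess = TRUE, trace = FALSE)
est <- coef(fit_mn)
se  <- summary(fit_mn)$standard.errors
exp(est)                # relative risk ratios vs. spousal study partners
exp(est - 1.96 * se)    # lower model-based Wald 95% CI limits
exp(est + 1.96 * se)    # upper limits; a cluster bootstrap resampling whole
                        # study-sites could be substituted as a sensitivity check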
To analyze cohort effects on participant retention marginally, we used Kaplan-Meier estimates stratified by site-standardized enrollment time quartiles. To analyze cohort effects on participant completion rates while controlling for potential confounding factors, we fit a Cox proportional hazards model with the same aforementioned covariates. To allow for between-trial heterogeneity in baseline hazards, we stratified the model by trial. Finally, we fit a logistic regression to a binary indicator of whether participants completed their primary endpoint.
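A sketch of these analyses with the survival package, again using the hypothetical variables constructed above, might be:

library(survival)
# Kaplan-Meier retention curves by site-standardized enrollment time quartile
km_fit <- survfit(Surv(time_to_event, event) ~ enroll_q, data = dat)
plot(km_fit, xlab = "Days since screening", ylab = "Retention probability")

# Cox model stratified by trial, with robust SEs clustered at the study-site level
cox_fit <- coxph(Surv(time_to_event, event) ~ I(site_std_time / 100) + adas_baseline + sex + age +
                   race_ethnicity + educ_years + strata(study),
                 data = dat, cluster = study_site)
summary(cox_fit)

# Logistic regression for primary endpoint completion
glm_fit <- glm(completed ~ I(site_std_time / 100) + adas_baseline + sex + age +
                 race_ethnicity + educ_years + study,
               data = dat, family = binomial)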
All p-values were obtained using Wald tests and all hypothesis tests were 2-sided. The association between the 12-month change in ADAS-Cog and site-standardized enrollment time was our primary inferential target. All other analyses were secondary, and corresponding p-values should be interpreted accordingly. To aid in interpretation and detect potential non-linear trends, we repeated all regression analyses with enrollment time discretized into quartiles and treated as a factor variable. For our secondary investigation, we also repeated all analyses above replacing site-standardized enrollment time with trial enrollment percentile; these results are available in the supplemental content. We performed our analyses using R version 4.2.0 and used the ‘geepack’, ‘nnet’, and ‘survival’ packages for fitting regression models.
Results
Figure 1 displays site-standardized enrollment plots (left panel) and non-site-standardized cumulative distribution functions (CDFs) of enrollment (right panel) for each trial. Of note, all 402 participants in the DHA trial were screened within 224 days of their site activation time. The NSAIDs trial also recruited participants more quickly than the remaining trials, with the vast majority of its 351 participants enrolled within one year. Other trials of similar sample size had sites that took over two years to enroll.
Figure 1. Left: Enrollment plots by trial using site-standardized enrollment time; time is standardized at both the trial- and site-level so that, for each site within each trial, the time origin represents the earliest screening date among only the subsequently randomized participants at that site; all participants who fall on a given vertical line are assigned an identical site-standardized enrollment time; an example dashed line is given at 150 days. Right: Empirical enrollment CDFs by trial using non-site-standardized enrollment time; time is standardized only at the trial-level so that, within each trial, the time origin represents the earliest screening date among all subsequently randomized participants; all participants who fall on a given horizontal line are assigned an identical enrollment percentile; an example dashed line is given at 60% enrollment. CDF, cumulative distribution function.
Table 1 presents baseline demographic information, average 12-month change in ADAS-Cog, and the percentage of participants completing the trial primary endpoint, stratified by quartile of site-standardized enrollment time. Empirical enrollment time quartiles were [0,33] days, (33,119] days, (119,322] days, and >322 days. Few differences were apparent across these cohorts; the assessed demographic and clinical characteristics appeared largely uniform. The analogous table stratified by trial enrollment percentile is given in Table S1. There was a slight increase in the proportion of participants enrolling with adult child study partners after the first quartile, but beyond this the cohorts appeared largely similar (Table S1).
a. does not include participants from the IVIg trial
Participants in the last site-standardized enrollment quartile did appear to demonstrate greater cognitive decline, as measured by the ADAS-Cog, compared to earlier-enrolled participants (Table 1). Using a GEE regression model to assess potential cohort effects with respect to cognitive decline, we found a significant difference in 12-month change in ADAS-Cog across site-standardized enrollment times (Table 2). Later-enrolled participants had an expected 12-month change in ADAS-Cog 0.16 points higher than similar participants who enrolled 100 days earlier (95% CI: (0.02, 0.29), p = 0.02). Analysis using the discretized construct suggested this observed effect may have been driven primarily by differences between the earliest- and latest-enrolling participants: we estimated that participants who enrolled in the fourth quartile of site-standardized enrollment time had an expected 12-month change in ADAS-Cog 0.97 points higher than similar participants who enrolled in the first quartile (95% CI: (0.03, 1.91), p = 0.04). These results were not sensitive to our imputation procedure, as neither point estimate was meaningfully changed when omitting the IVIg trial data (though the resulting CIs narrowly crossed zero due to decreased precision; data not shown). In the analysis using trial enrollment percentile, the signs of the point estimates were the same but no coefficients were significantly non-zero (Table S2).
a. estimates obtained in an alternative model with enrollment time replaced by its discretization; b. p-value obtained via a 2-sided multivariate Wald test of the hypothesis that coefficients corresponding to all factor levels are equal to zero.
Figure 2 summarizes the estimated relative risk ratios, 95% CIs, and p-values from the multinomial regression assessing potential cohort effects on study partner type at baseline using site-standardized time. When enrollment time was treated as continuous, there was no meaningful association between enrollment time and the relative risk of enrolling with an adult child study partner compared to a spousal study partner, nor between enrollment time and the relative risk of enrolling with an “other” study partner compared to a spousal study partner. Analyses using the discretized construction of site-standardized enrollment time did provide some evidence that later-enrolled participants had a higher relative risk of enrolling with an “other” study partner compared to the first-quartile reference group (right side of the right panel of Figure 2). We estimated that participants who enrolled in the 2nd, 3rd, and 4th quartiles had 50%, 68%, and 59% higher relative risk of enrolling with an “other” study partner, compared to a spousal one, than similar participants who enrolled in the 1st quartile (95% CI for the 2nd quartile relative risk ratio: (0.94, 2.40), p = 0.09; 3rd quartile: (1.06, 2.68), p = 0.03; 4th quartile: (0.96, 2.63), p = 0.07). In the secondary analysis using trial enrollment quartiles, we found evidence that later-enrolled participants had a higher relative risk of enrolling with an adult child study partner than with a spousal one, compared to the first-quartile reference group (95% CI for the 2nd quartile relative risk ratio: (1.12, 2.16), p = 0.009; 3rd quartile: (1.10, 2.11), p = 0.01; 4th quartile: (1.04, 2.00), p = 0.03; Figure S1).
Figure 2. Left: summary of relative risk ratios of enrolling with an adult child study partner compared to a spousal one. Right: summary of relative risk ratios of enrolling with an “other” study partner compared to a spousal one. Within each panel, the estimate from the model using continuous site-standardized enrollment time is given on the left and the estimates from the model using site-standardized enrollment time quartiles are given on the right.
Figure 3 illustrates Kaplan-Meier curves estimating the retention probability over study duration, stratified by site-standardized enrollment time quartiles. Retention probability was similar across the cohorts over the duration for which we had relatively high precision (up to around 18 months). We found no significant association between enrollment time and the hazard for discontinuation when treating enrollment time as continuous (estimated hazard ratio for an enrollment time difference of 100 days: 1.02, 95% CI: (0.99, 1.07), p = 0.16). Estimated hazard ratios from the model using discretized site-standardized enrollment time are given in the legend of Figure 3. In this model, participants who enrolled in the 2nd quartile had a significantly lower hazard for study discontinuation compared to similar participants who enrolled in the first quartile (95% CI: (0.77, 0.97), p = 0.03), though this trend did not continue for subsequent enrollment time quartiles. Secondary analysis using trial enrollment quartiles similarly displayed little evidence of cohort effects with respect to the hazard for study discontinuation (Figure S2). The results of the logistic regression with binary study completion status as the outcome (not shown) similarly suggested minimal cohort effects with respect to early study discontinuation.
Figure 3. Legend: corresponding hazard ratio estimates and 95% CIs from a Cox proportional hazards model of time to study discontinuation on discretized site-standardized enrollment time; study completion or death prior to primary endpoint completion were treated as censoring events. Bottom: number of subjects at risk; cumulative number of dropout events given in parentheses.
Discussion
The financial and human resource costs of clinical trials underscore the imperative to ensure the validity and generalizability of findings. Identifying and addressing sources of bias in downstream estimates of treatment effects is a key aspect of this effort. Cohort effects can lead to bias in at least two ways. First, heterogeneity in participant characteristics over the duration of accrual could lead to imbalances of confounders of the target treatment effect across study arms. Second, if an effect modifier of the treatment effect varies with enrollment time, study results could be biased or even erroneous. Although balance across treatment groups may be restored by chance (in the first case), or a representative sample of the target population may be obtained by the end of accrual (in the second case), each of these scenarios can yield biased estimates and the appearance of a non-constant treatment effect across interim analyses. Potential treatment group imbalances due to cohort effects can be avoided (in both interim and final analyses) by blocked randomization (18), which should be implemented whenever feasible.
In this analysis we investigated whether cohort effects were present in a combined sample of nine trials performed in participants with mild-to-moderate AD. We found no clinically meaningful differences across site-standardized enrollment time or trial enrollment percentile with respect to participant demographics, baseline ADAS-Cog score, or dropout. This is reassuring: a lack of cohort effects lends further credence to the conclusions reported for these trials.
We did find evidence that later-enrolled participants tended to experience slightly greater cognitive decline over the first 12 months of trial participation. Discretizing site-standardized enrollment time into quartiles suggested the observed difference was primarily driven by heterogeneity between the earliest- and latest-enrolling participants. Although these differences were statistically significant, the effect sizes were small and thus may not be clinically relevant. For example, many AD trials, including some included here, are powered to demonstrate a difference of roughly 2 points between groups on the ADAS-Cog; our observed difference per 100 days of enrollment time was only 8% of this margin.
When modeled as a continuous variable, enrollment time showed no association with study partner type at baseline. When discretized into quartiles, however, we estimated that participants enrolling in any quartile after the first had a higher relative risk of enrolling with a non-spousal, non-adult child study partner than with a spouse, compared to participants enrolling in the first quartile. Though we saw no cohort effects for the outcome of dropout, previous studies have found that participants who enroll with a study partner who is neither their spouse nor their adult child may be at increased risk of dropout compared to those with a spousal study partner (19). Participants from underrepresented racial and ethnic groups may also more frequently enroll with these study partners (19-21), perhaps indicating the need for greater time to recruit these groups (though we observed no such cohort effect for race or ethnicity). The analysis using trial enrollment percentile suggested an analogous trend for adult child study partners. Elucidating these relationships will require further research but may be important to future trial recruitment.
Our study had limitations. We investigated two constructions of participants’ time of enrollment, but other relevant constructs may exist. We attempted to control for potential confounders of the associations between enrollment time and our outcome variables, but the target associations could have been distorted by confounding factors not included in our analyses. We focused on demographic and cognitive outcomes; future analyses might investigate whether temporally non-constant treatment effects occur, especially across interim estimates in AD trials. We did not account for the possibility that cohort effects could have been masked by trials competing to enroll. Finally, although our sample size was large and these trials enrolled participants from throughout the US, our observations may not generalize to AD clinical trials sponsored by industry or performed outside of the US.
Conclusions
We found minimal evidence of clinically relevant cohort effects in AD trials. Overall, our results bolster the original conclusions of the trials used in our sample.
Acknowledgements: We thank the participants and researchers of the original studies and the ADCS (grant U19 AG010483) who made our analysis possible.
Funding: This material is based upon work supported by the National Science Foundation’s Graduate Research Fellowship (grant no. DGE 1839285) and the National Institutes of Health’s National Institute on Aging (NIA) (grants P30 AG066519, R01 AG077628-01, RF1 AG059407). Dr. Grill reports research support from NIA, the Alzheimer’s Association, BrightFocus Foundation, Biogen, Genentech, Eisai, and Eli Lilly, and consulting for SiteRx. Dr. Gillen and Mr. Birnbaum have no potential conflicts of interest to report.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
References
1. Schneider LS, Olin JT, Lyness SA, Chui HC. Eligibility of Alzheimer’s Disease Clinic Patients for Clinical Trials. J Am Geriatr Soc. 1997;45(8):923-928. doi:10.1111/j.1532-5415.1997.tb02960.x
2. Grill JD, Karlawish J. Addressing the challenges to successful recruitment and retention in Alzheimer’s disease clinical trials. Alzheimers Res Ther. 2010;2(6):34. doi:10.1186/alzrt58
3. Schneider LS. Aducanumab Trials EMERGE But Don’t ENGAGE. J Prev Alzheimers Dis. Published online 2022. doi:10.14283/jpad.2022.37
4. Budd Haeberlein S, Aisen PS, Barkhof F, et al. Two Randomized Phase 3 Studies of Aducanumab in Early Alzheimer’s Disease. J Prev Alzheimers Dis. Published online 2022. doi:10.14283/jpad.2022.30
5. Aisen PS, Schafer KA, Grundman M, et al. Effects of Rofecoxib or Naproxen vs Placebo on Alzheimer Disease Progression: A Randomized Controlled Trial. JAMA. 2003;289(21):2819. doi:10.1001/jama.289.21.2819
6. Sano M, Bell KL, Galasko D, et al. A randomized, double-blind, placebo-controlled trial of simvastatin to treat Alzheimer disease. Neurology. 2011;77(6):556-563. doi:10.1212/WNL.0b013e318228bf11
7. Aisen PS. High-Dose B Vitamin Supplementation and Cognitive Decline in Alzheimer Disease: A Randomized Controlled Trial. JAMA. 2008;300(15):1774. doi:10.1001/jama.300.15.1774
8. Tariot PN. Chronic Divalproex Sodium to Attenuate Agitation and Clinical Progression of Alzheimer Disease. Arch Gen Psychiatry. 2011;68(8):853. doi:10.1001/archgenpsychiatry.2011.72
9. Rafii MS, Walsh S, Little JT, et al. A phase II trial of huperzine A in mild to moderate Alzheimer disease. Neurology. 2011;76(16):1389-1394. doi:10.1212/WNL.0b013e318216eb7b
10. Quinn JF, Raman R, Thomas RG, et al. Docosahexaenoic Acid Supplementation and Cognitive Decline in Alzheimer Disease: A Randomized Trial. JAMA. 2010;304(17):1903. doi:10.1001/jama.2010.1510
11. Relkin NR, Thomas RG, Rissman RA, et al. A phase 3 trial of IV immunoglobulin for Alzheimer disease. Neurology. 2017;88(18):1768-1775. doi:10.1212/WNL.0000000000003904
12. Turner RS, Thomas RG, Craft S, et al. A randomized, double-blind, placebo-controlled trial of resveratrol for Alzheimer disease. Neurology. 2015;85(16):1383-1391. doi:10.1212/WNL.0000000000002035
13. van Dyck CH, Nygaard HB, Chen K, et al. Effect of AZD0530 on Cerebral Metabolic Decline in Alzheimer Disease: A Randomized Clinical Trial. JAMA Neurol. 2019;76(10):1219. doi:10.1001/jamaneurol.2019.2050
14. Emerson SS, Kittelson JM, Gillen DL. Frequentist evaluation of group sequential clinical trial designs. Stat Med. 2007;26(28):5047-5080. doi:10.1002/sim.2901
15. Rosen WG, Mohs RC, Davis KL. A new rating scale for Alzheimer’s disease. Am J Psychiatry. 1984;141(11):1356-1364. doi:10.1176/ajp.141.11.1356
16. Liang KY, Zeger SL. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73(1):13-22. doi:10.1093/biomet/73.1.13
17. Huber PJ. The Behavior of Maximum Likelihood Estimates under Nonstandard Conditions. Proc Fifth Berkeley Symp Math Stat Probab. 1967;1:221-233.
18. Friedman LM, Furberg C, DeMets DL. Fundamentals of Clinical Trials. 4th ed. Springer; 2010.
19. Grill JD, Raman R, Ernstrom K, Aisen P, Karlawish J. Effect of study partner on the conduct of Alzheimer disease clinical trials. Neurology. 2013;80(3):282-288. doi:10.1212/WNL.0b013e31827debfe
20. Hakhu NR, Gillen DL, Grill JD, for the Alzheimer’s Disease Cooperative Study. Dyadic Enrollment in a Phase 3 Mild Cognitive Impairment Clinical Trial. Alzheimer Dis Assoc Disord. 2022;36(3):192-199. doi:10.1097/WAD.0000000000000506
21. Raman R, Quiroz YT, Langford O, et al. Disparities by Race and Ethnicity Among Adults Recruited for a Preclinical Alzheimer Disease Trial. JAMA Netw Open. 2021;4(7):e2114364. doi:10.1001/jamanetworkopen.2021.14364
© The Authors 2023