Automated Early Detection of Alzheimer's Disease by Capturing Impairments in Multiple Cognitive Domains with Multiple Drawing Tasks
Abstract
Background: Automated analysis of the drawing process, captured with a digital tablet and pen, has been successfully applied to detect Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, most studies have analyzed individual drawing tasks separately, and how combining multiple drawing tasks could improve detection performance therefore remains unexplored.
Objective: We aimed to investigate whether analysis of the drawing process in multiple drawing tasks captures different, complementary aspects of cognitive impairment, with a view toward combining multiple tasks to improve detection capability.
Methods: We collected drawing data from 144 community-dwelling older adults (27 with AD, 65 with MCI, and 52 cognitively normal, CN) who performed five drawing tasks. We then extracted motion- and pause-related drawing features for each task and investigated the associations of these features with the participants' diagnostic status and cognitive measures.
Results: The drawing features changed gradually from CN to MCI and then to AD, and the feature changes in each task were statistically associated with cognitive impairments in different domains. For classification into the three diagnostic categories, a machine learning model using the features from all five tasks achieved an accuracy of 75.2%, a 7.8% improvement over the best single-task model.
Conclusion: Our results demonstrate that a common set of drawing features extracted from multiple drawing tasks can capture different, complementary aspects of cognitive impairment, which may offer a scalable way to improve the automated, reliable detection of AD and MCI.
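To make the multi-task combination described in the Methods concrete, the following is a minimal sketch, not the authors' implementation: it assumes one matrix of motion- and pause-related features per task and per participant, and uses scikit-learn. The task names, feature counts, random-forest classifier, and placeholder data are illustrative assumptions.

# Minimal sketch (assumptions, not the study's pipeline): concatenate per-task
# drawing features into one vector per participant and train a three-class
# classifier (CN / MCI / AD), evaluated with cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants = 144
tasks = ["task1", "task2", "task3", "task4", "task5"]  # hypothetical task identifiers
n_features_per_task = 6                                # e.g., motion- and pause-related features

# One feature matrix per task; in practice these would come from the digitized pen data.
per_task_features = {t: rng.normal(size=(n_participants, n_features_per_task)) for t in tasks}
labels = rng.integers(0, 3, size=n_participants)       # 0 = CN, 1 = MCI, 2 = AD (placeholder labels)

# Combine tasks by concatenating their feature columns into a single feature vector.
X = np.hstack([per_task_features[t] for t in tasks])

model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
scores = cross_val_score(model, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.3f}")

The point of the sketch is only that concatenating per-task features yields a single multi-task model whose performance can be compared against single-task baselines, as reported in the Results.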