As digital technologies begin to offer deeper insights into cognition, it is vital that such assessments are validated scientifically and that high-quality norms are available for the tests. Whether this is done by demonstrating psychometric equivalence to paper versions (construct validity), scoring-algorithm equivalence to human raters, or simply that patients can perform the tests in typical neurology practice environments, validation must be prioritized. Philips has invested in these aspects as part of its robust testing during IntelliSpace Cognition trials. A typical validation protocol for the scoring algorithms asks multiple human raters to score drawings or audio files and then compares the algorithm’s output against the variance among the human raters. In general, for IntelliSpace Cognition, high correlations with the human raters were found for all the relevant outcome measures.
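To make the comparison concrete, a minimal sketch of this kind of rater-agreement check follows. All of the data and the consensus approach here are illustrative assumptions, not Philips’s actual protocol: hypothetical scores from three human raters are averaged into a per-item consensus, and the algorithm’s scores are correlated against that consensus.

```python
import statistics


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical scores for ten drawings: three human raters and the algorithm.
rater_scores = [
    [3, 4, 2, 5, 4, 3, 5, 2, 4, 3],
    [3, 4, 3, 5, 4, 2, 5, 2, 4, 3],
    [4, 4, 2, 5, 3, 3, 5, 1, 4, 4],
]
algorithm_scores = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]

# Consensus = mean human rating per item; the algorithm is judged against it.
consensus = [statistics.mean(item) for item in zip(*rater_scores)]
r = pearson(algorithm_scores, consensus)
```

A real validation study would also report the agreement among the human raters themselves, so the algorithm's correlation can be judged relative to the ceiling set by inter-rater variability.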
In summary, while cognitive screeners such as the MoCA may have their place in the neurology workflow, they do not provide the deeper information about a patient’s cognitive function that can help neurologists determine next steps in diagnosis and treatment. Furthermore, brief cognitive screeners used in isolation may actually increase the risk of classification errors when determining cognitive impairment.4 In a study of 824 older adults, 301 (35.7%) participants were misclassified by at least one brief assessment (the MMSE, the memory impairment screen [MIS], or animal naming [AN]), while only 14 (1.7%) were misclassified by all three assessments.5
While digitizing traditional neuropsychological measures already offers enormous value by lowering the barrier to their use, digital technology will offer even more options in the future, especially as a range of new digital outcome measures is validated. Such measures are now starting to emerge.6 As an FDA Class II medical device, IntelliSpace Cognition introduces a new tool to neurologists in the form of an integrated digital platform. When presented with the concept, 75% of neurologists thought it would increase their confidence in making correct referral decisions for full neuropsychological assessments.7
1 MarkeTech Group, Davis, CA, study of 75 neurologists with clinical practices, commissioned by Philips, 2018.
2 Psychometric Properties of IntelliSpace Cognition. http://clinicaltrials.gov/ct2/show/NCT03801382?term=NCT03801382&draw=2&rank=1
3 Vermeent S, Dotsch R, Schmand B, et al. Evidence of validity for a newly developed digital cognitive test battery. Frontiers. 2019 [article under revision].
4 Philips IntelliSpace Cognition Cognitive Model. US Leaflet. Philips Healthcare. 2019.
5 Ranson JM, Kuzma E, et al. Predictors of dementia misclassification when using brief cognitive assessments. Neurol Clin Pract. 2019;9(2):109–117.
6 Klaming L, Vlaskamp BNS. Non-dominant hand use increases completion time on part B of the Trail Making Test but not on part A. Behav Res Methods. 2018;50(3):1074–1087.
7 Based on a 2019 Philips study of 100 neurologists in the US.