IntelliSpace Cognition is more than a cognitive screener


Philips IntelliSpace Cognition assessment – why it’s more than a cognitive screener

Neurologists often screen patients for cognitive impairment with the MoCA or MMSE. However, there is a tendency to use the results to derive deeper insights about cognition, a use for which these tests may not be adequate.


The Montreal Cognitive Assessment (MoCA) and the Mini-Mental State Examination (MMSE) are well-known instruments that neurologists often use as part of a larger neurological examination. While a neurologist may prefer one screener over another, most agree on the usefulness of these instruments for screening a patient’s cognition. In fact, 77% of neurologists still use standard standalone screeners to assess cognition in their patients.1 After all, paper screeners are quick, simple to administer, and easy to score and interpret. However, they are not designed to be more than this, and they often fall short when neurologists need more detail about a patient’s cognitive state but cannot access a full neuropsychological assessment within a reasonable timeframe.

Screeners such as the MoCA and MMSE are based on truncated versions of neuropsychological tests


The MoCA screener is built from a series of truncated neuropsychological tests; these may resemble the full tests from which they are derived, but they are not the same. Consider the Trail Making Test (TMT) component of the MoCA. The full TMT consists of 25 targets, with total time to completion as the outcome measure, and it includes a practice test that allows the assessor to confirm that the patient understands the instructions. Since the test’s inception more than 80 years ago, numerous publications have used this outcome measure, which is generally accepted as a good metric for executive function. To be an accurate measure, however, it should be administered in a controlled, reproducible environment under the supervision of a medical professional who can intervene when a mistake is made, and the results should be compared to norms from a healthy peer group.

In the truncated TMT of the MoCA, there are 10 targets rather than 25, no practice test to confirm that the instructions are understood, and only a binary outcome measure indicating whether or not the targets could be connected. While this truncated measure is useful in a screening scenario, it reveals little about executive function. This is one example, but the same principle applies to the truncated verbal learning and naming tests in the MoCA, where detail is sacrificed for the speed and simplicity of a screener.

Cognitive assessment with Philips IntelliSpace Cognition

With the advent of computerized testing such as that embodied in Philips IntelliSpace Cognition, there is an opportunity to have the best of both worlds. For example, the digital TMT on the IntelliSpace Cognition platform uses an iPad Pro and Apple Pencil to collect input from the patient. The platform provides fully integrated instructions, feedback when a mistake is made, and automatic comparison to the norm group, removing many of the barriers to administering the TMT in the context of the neurology office.

Unlike the truncated version, the digital TMT offers the traditional measures. Because a construct validity study showed an average corrected correlation of 0.95 with the paper version, these measures can be compared to 80 years’ worth of literature, and conclusions can be drawn about executive functioning.2 The same holds for the other tests on IntelliSpace Cognition, where high correlations to the paper versions were also found for Clock Drawing, the Rey Auditory Verbal Learning Test (RAVLT), Star Cancellation, Letter Fluency, Digit Span, and the Rey–Osterrieth Complex Figure Test. The traditional measures extracted from these tests by validated algorithms are well studied and understood.
Mapping test results to cognitive domains offers a further advantage, giving the neurologist a first level of interpretation. IntelliSpace Cognition categorizes test scores into six well-known cognitive domains from traditional neuropsychology: executive functioning, processing speed, verbal processing, working memory, long-term memory and retrieval, and visual-spatial processing.3,4

As digital technologies begin to offer deeper insights into cognition, it is vital that such assessments are validated scientifically and that high-quality norms are available for the tests. Whether this is done by showing psychometric equivalence to paper versions (construct validity), by showing that scoring algorithms match human raters, or simply by demonstrating that patients can perform the tests in typical neurology practice environments, validation must be prioritized. Philips has invested in these aspects as part of its robust testing during IntelliSpace Cognition trials. A typical validation protocol for the scoring algorithms asks multiple human raters to score drawings or audio files and then compares the output of the scoring algorithm against the variability among the human raters. In general, for IntelliSpace Cognition, high correlations with the human raters were found for all the relevant outcome measures.
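To illustrate the general idea behind such a protocol (this is a sketch only, not Philips code; the scores, rater counts, and helper function are all hypothetical), one can correlate an algorithm’s scores with the mean of several human raters’ scores across a set of patients:

```python
# Illustrative sketch: comparing an automated scoring algorithm against
# multiple human raters, as in a typical validation protocol.
# All scores below are invented for illustration.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical drawing scores for five patients:
# each drawing scored by three human raters and by the algorithm.
human_ratings = [
    [3, 4, 3],   # patient 1
    [5, 5, 4],   # patient 2
    [2, 2, 3],   # patient 3
    [4, 5, 5],   # patient 4
    [1, 2, 1],   # patient 5
]
algorithm_scores = [3.2, 4.8, 2.1, 4.9, 1.3]

# Compare the algorithm against the mean human rating per patient.
human_means = [mean(r) for r in human_ratings]
r_algo = pearson(algorithm_scores, human_means)
print(f"algorithm vs. mean human rating: r = {r_algo:.2f}")
```

A real validation would also account for disagreement among the human raters themselves, since an algorithm cannot reasonably be expected to agree with any single rater more closely than the raters agree with each other.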
 

In summary, while cognitive screeners such as the MoCA may have their place in the neurology workflow, they do not provide deeper information about the patient’s cognitive function that can help neurologists determine next steps in diagnosis and treatment. Furthermore, brief cognitive screeners used in isolation may actually increase the risk of classification errors when determining cognitive impairment.4 In a study of 824 older adults, 301 (35.7%) participants were misclassified by at least one brief assessment (from MMSE, memory impairment screen [MIS], and animal naming [AN]), while only 14 (1.7%) were misclassified by all three assessments.5
 

While there is already enormous value in offering traditional neuropsychological measures at a lower barrier to use, digital technology will offer even more options in the future, especially as a range of new digital outcome measures is validated. Such measures are now starting to emerge.6 As an FDA Class II medical device, IntelliSpace Cognition gives neurologists a new tool in the form of an integrated digital platform. When presented with the concept, 75% of neurologists thought it would increase their confidence in making correct referral decisions for full neuropsychological assessments.7



References


1 MarkeTech Group, Davis, CA, study of 75 neurologists with clinical practices, commissioned by Philips, 2018.

2 Psychometric Properties of IntelliSpace Cognition. http://clinicaltrials.gov/ct2/show/NCT03801382?term=NCT03801382&draw=2&rank=1

3 Vermeent S, Dotsch R, Schmand B, et al. Evidence of validity for a newly developed digital cognitive test battery. Frontiers. 2019 [article under revision].

4 Philips IntelliSpace Cognition Cognitive Model. US Leaflet. Philips Healthcare. 2019.

5 Ranson JM, Kuzma E, et al. Predictors of dementia misclassification when using brief cognitive assessments. Neurol Clin Pract. 2019;9(2):109–117.

6 Klaming L, Vlaskamp BNS. Non-dominant hand use increases completion time on part B of the Trail Making Test but not on part A. Behav Res Methods. 2018;50(3):1074–1087.

7 Based on a 2019 Philips study of 100 neurologists in the US.