IntelliSpace Cognition Validation

Philips IntelliSpace Cognition, Validated Against Traditional Paper Neuropsychological Tests

A medical device to aid in assessing cognition


As a medical device company, Philips follows the standards set by the FDA and other global organizations that govern medical devices and products. When we identified a gap in reliable digital technologies for cognitive assessment, we developed the IntelliSpace Cognition (ISC) product the way we establish all of our clinical offerings: through research, validation, and clinical studies, followed by filing with the FDA. We knew we could build on the strong history of cognitive testing established by neuropsychologists while adding a layer of insights and accessibility that benefits patients, neurologists, and neuropsychologists. In developing ISC, Philips invested significant effort in validating the digital neuropsychological tests and associated algorithms, as well as in collecting accurate normative data.

Normative data collection and validation


Normative data are quantitative measures used to compare an individual’s performance to that of a relevant population. A comparison to a normative baseline is essential for interpreting an individual test score and ideally accounts for differences related to age, education, and sex; without such a comparison, subtle signs of cognitive impairment are often missed. The cognitive assessment in ISC features well-established neuropsychological tests, each with its own normative corrections but all based on the same representative sample of the US population. These normed outcome measures are validated against the traditional paper version of each test. In 2019, Philips collected the normative and test validation data through a study performed in accordance with the ISO 14155 standard for Good Clinical Practice.1 Data were collected across four states (NY, FL, PA, CA) from 450 healthy participants, enrolled using stratified sampling so that the distributions of age (50-80 years), sex, education level, and racial/ethnic background reflected the US Census.
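To make the idea of a normative comparison concrete, the sketch below shows how a demographically corrected z-score could be computed from a regression-based norm. The coefficients, variable coding, and model form are illustrative assumptions only; they are not the norms used in ISC.

```python
# Minimal sketch of a regression-based normative comparison (illustrative
# coefficients only; the actual ISC norming model is not public).
from dataclasses import dataclass

@dataclass
class NormModel:
    intercept: float
    b_age: float        # change in expected score per year of age
    b_education: float  # change in expected score per year of education
    b_sex: float        # sex coded 0/1 (coding is an assumption)
    residual_sd: float  # spread of healthy scores around the prediction

    def z_score(self, raw_score, age, education_years, sex):
        expected = (self.intercept + self.b_age * age
                    + self.b_education * education_years + self.b_sex * sex)
        return (raw_score - expected) / self.residual_sd

# Hypothetical norms for a delayed-recall measure
norms = NormModel(intercept=12.0, b_age=-0.08, b_education=0.15,
                  b_sex=0.5, residual_sd=2.5)
z = norms.z_score(raw_score=7, age=72, education_years=12, sex=0)
print(f"z = {z:.2f}")  # a negative z indicates performance below the demographic expectation
```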
 

For test validation, we used data from this study to compare the psychometric properties of the digital ISC tests with those of their paper counterparts. Two hundred fifty participants completed either the digital ISC tests or the paper versions, once or twice in repeated visits about 2-3 weeks apart. The remaining two hundred participants visited twice to perform both the digital ISC tests and the paper versions. The data from this subgroup allow a direct comparison between the ISC tests and the traditional paper tests while eliminating inter-individual differences that could otherwise obscure subtle differences.
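As an illustration of how such a within-subject comparison can be summarized, the sketch below correlates paired digital and paper scores and applies the Spearman correction for attenuation mentioned in the table footnote. All score values and reliability estimates in the example are invented for illustration and are not study data.

```python
# Illustrative within-subject comparison: the same participants complete the
# digital and paper versions, so a paired correlation (optionally corrected
# for attenuation) summarizes agreement between the two test formats.
import numpy as np
from scipy.stats import pearsonr

digital = np.array([27, 29, 25, 30, 26, 28, 24, 29])  # hypothetical digital scores
paper   = np.array([28, 29, 24, 30, 27, 27, 25, 30])  # same participants, paper version

r_observed, _ = pearsonr(digital, paper)

# Spearman correction for attenuation: divide by the geometric mean of the
# two tests' reliabilities (hypothetical test-retest values here).
rel_digital, rel_paper = 0.90, 0.85
r_corrected = r_observed / np.sqrt(rel_digital * rel_paper)

print(f"observed r = {r_observed:.2f}, disattenuated r = {r_corrected:.2f}")
```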

Agreement between digital and paper tests


The scores on the ISC digital tests correlate well with the traditional paper tests (see table)*. While small differences between ISC and paper are to be expected, they do not affect the interpretation of the ISC tests because of the integrated norms available on the ISC digital platform.
 

Philips recently published results concerning construct validity from a first study, based on a prototype version of the product, in a peer-reviewed journal.2 In this paper, structural equation modeling (SEM) was used to investigate the relationships between the ISC tests and two theory-driven neuropsychological models. For both models, the outcome measures of the digital tests mapped onto the expected cognitive domains. This suggests that the digital tests on ISC can be interpreted in the same way as their traditional paper versions while offering a high level of standardization and ease of use. In a future paper, we will apply the same SEM technique to the data of the aforementioned 2019 validation study, to show that the ISC tests map onto the same latent variables (i.e., cognitive domains) as their respective paper counterparts.
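For readers unfamiliar with this kind of analysis, the sketch below shows the general shape of a confirmatory factor model fitted with the open-source semopy package (not the software used in the study). The two-domain structure, variable names, and indicator assignments are illustrative assumptions, not the published models.

```python
# Minimal sketch of a confirmatory factor model in the spirit of the SEM
# analysis described above. Domains, indicators, and column names are
# hypothetical examples.
import pandas as pd
import semopy

model_desc = """
Memory    =~ ravlt_learning + ravlt_delayed + rocft_recall
Executive =~ tmt_b_duration + digit_span_backward + letter_fluency
"""

def fit_domain_model(scores: pd.DataFrame):
    """Fit the two-domain model to a DataFrame with one column per outcome
    measure and one row per participant."""
    model = semopy.Model(model_desc)
    model.fit(scores)
    loadings = model.inspect()            # factor loadings and covariances
    fit_stats = semopy.calc_stats(model)  # fit indices such as CFI and RMSEA
    return loadings, fit_stats
```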

Tests on IntelliSpace Cognition compared with paper counterparts*

| Test | Measure | Average score, digital test | Average score, paper test | Correlation† |
|---|---|---|---|---|
| MMSE | Brief score | 14.9 | 14.9 | 0.85 |
| MMSE | Standard score | 27.7 | 27.8 | 0.81 |
| Clock Drawing | Copy score | 4.5 | 4.5 | N/A |
| Clock Drawing | Memory score | 3.8 | 3.8 | N/A |
| Digit Span | Forward score | 7.8 | 8.0 | 0.74 |
| Digit Span | Backward score | 6.4 | 6.1 | 0.64 |
| RAVLT | Learning trials score | 44.5 | 43.5 | 0.67 |
| RAVLT | Immediate recall score | 8.5 | 8.0 | 0.76 |
| RAVLT | Delayed recall score | 8.5 | 7.9 | 0.83 |
| ROCFT | Copy score | 23.6 | 24.8 | 0.78 |
| ROCFT | Immediate recall score | 10.1 | 10.6 | 0.79 |
| Star Cancellation | Duration per correct cancellation (s) | 0.9 | 0.8 | 0.87 |
| Star Cancellation | Star score | 52.8 | 52.7 | 1.00 |
| Star Cancellation | Laterality index | 0.5 | 0.5 | 0.92 |
| Trail Making | TMT A duration (s) | 41.5 | 34.0 | 0.92 |
| Trail Making | TMT B duration (s) | 108.5 | 82.4 | 0.70 |
| Letter Fluency | Total score | 42.9 | 41.0 | 0.88 |

*Preliminary findings, pending final analysis.

† Correlation corrected for attenuation.

The very high level of automation and standardization that ISC achieves is accomplished by using cutting-edge technology to adapt the way the tests are administered and scored. For example, in the Rey Auditory Verbal Learning Test (RAVLT), the word stimuli are played by the computer instead of being spoken by a human test administrator. The participant’s response is not written down by the administrator; instead, the audio is recorded and scored automatically by an algorithm. The benefit of this approach is that digital technology and voice recognition reduce the amount of manual work while maintaining the validity of the underlying tests. The data and the observed correlations show that scores on the digital tests closely match those of their paper counterparts.
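As a simplified illustration of the scoring step, the sketch below counts how many words from the standard RAVLT list appear in a transcribed response. It assumes a speech-recognition step has already produced a text transcript; the actual ISC scoring algorithm is not public and will differ in its details.

```python
# Simplified sketch of automated RAVLT scoring on an already-transcribed
# response (illustrative only; not the ISC algorithm).
TARGET_WORDS = {"drum", "curtain", "bell", "coffee", "school",
                "parent", "moon", "garden", "hat", "farmer",
                "nose", "turkey", "color", "house", "river"}  # standard RAVLT list A

def score_recall(transcript: str) -> int:
    """Count unique target words present in the transcribed response."""
    spoken = {word.strip(".,").lower() for word in transcript.split()}
    return len(TARGET_WORDS & spoken)

print(score_recall("drum ... bell, coffee and the moon, drum again"))  # -> 4
```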

Conclusion


The normative data, the comparison with paper tests, and the validation of algorithms against human raters form the foundation of the Philips ISC product. These aspects are important in giving clinicians confidence in the solution. In fact, when shown a video of how ISC works, 80% of neurologists said they would feel confident using IntelliSpace Cognition because Philips has collected normative data to provide healthy peer comparisons.3


References


1. Psychometric Properties of IntelliSpace Cognition. http://clinicaltrials.gov/ct2/show/NCT03801382?term=NCT03801382&draw=2&rank=1
 

2. Vermeent, S., Dotsch, R., Schmand, B., Klaming, L., Miller, J. B., & van Elswijk, G. (2020). Evidence of validity for a newly developed digital cognitive test battery. Frontiers in Psychology: Neuropsychology. http://doi.org/10.3389/fpsyg.2020.00770
 

3. Based on a 2019 Philips study of 100 neurologists in the US.