A transcription-less quantitative analysis of aphasic discourse elicited with an adapted version of the Amsterdam-Nijmegen Everyday Language Test (ANELT)

Abstract

Background: For speakers with mild to moderate expressive aphasia, the ultimate goal of aphasia therapy is to improve verbal functional communication, which may be assessed with the Amsterdam-Nijmegen Everyday Language Test (ANELT; Blomert et al., 1995). The ANELT uses a qualitative, transcription-less method of analysis: scores are based on personal judgement and are assigned directly from the recording of the test. Previous research (Ruiter et al., 2011) has shown that a quantitative measure for the ANELT not only allows verbal effectiveness (i.e., the amount of essential information conveyed) to be measured more sensitively, but also allows derivation of a measure of verbal efficiency (i.e., the average amount of essential information produced per time unit). Although the quantitative scoring further improved the construct validity of the ANELT, one limitation hinders its clinical application: the quantitative measure requires orthographic transcription of the spoken responses to the test. That is, the quantitative scoring is transcription-based.

Aims: To work towards clinical applicability of the quantitative measure of the ANELT, this study examined the potential of a transcription-less variant of the quantitative analysis, in which the amount of essential information is quantified directly from the recording, as a valid and reliable procedure for measuring verbal effectiveness.

Methods & Procedures: A total of 56 speakers of Dutch participated: 31 neurologically healthy speakers and 25 persons with aphasia. Monologic discourse elicited with 10 scenarios from an adapted version of the ANELT (Ruiter et al., 2016) was analysed with both a transcription-based quantitative method and a transcription-less quantitative one. The resulting data were systematically compared on the following psychometric properties: internal consistency, inter-rater reliability, construct validity, convergent validity, and known-group validity.

Outcomes & Results: Internal consistency and inter-rater reliability were good and comparable between the two scoring methods. Only for one scenario did the transcription-based scoring method yield higher agreement among raters. With respect to validity, both scoring methods appear to measure the same underlying constructs, correlate strongly and positively, and differentiate between persons with and without aphasia.

Conclusions: Although future research is needed to develop norm scores and to investigate further psychometric properties, the results of the comparison demonstrate the potential of the transcription-less quantitative method as a valid and reliable way to analyse monologic discourse elicited with the adapted ANELT.
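As a minimal sketch of the two outcome measures defined above (verbal effectiveness as the amount of essential information conveyed, verbal efficiency as the average amount of essential information per time unit), one plausible operationalisation over the 10 scenarios can be written as follows; the symbols I_s and T_s are introduced here purely for illustration and are not part of the original scoring protocol:

\[
\text{verbal effectiveness} = \sum_{s=1}^{10} I_s,
\qquad
\text{verbal efficiency} = \frac{\sum_{s=1}^{10} I_s}{\sum_{s=1}^{10} T_s},
\]

where \(I_s\) denotes the number of essential information units credited to the response to scenario \(s\) and \(T_s\) the duration of that response.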

Publication
In: Aphasiology
