


Acceptance of computerized compared to paper-and-pencil assessment in psychiatric inpatients

Bernhard Weber*, Barbara Schneider, Jürgen Fritze, Boris Gille, Stefan Hornung, Thorsten Kühner, Konrad Maurer

Department of Psychiatry and Psychotherapy I, J. W. Goethe University, Heinrich-Hoffmann-Str. 10,

D-60528 Frankfurt/Main, Germany

Abstract

In recent years various approaches to the application of computerized assessment in psychiatry have been reported. Patients' acceptance is important for the reliability and validity of results. Comparisons with conventional paper-and-pencil tests are missing.

The present study aimed to investigate the acceptance of computerized assessment, particularly compared to conventional paper-and-pencil techniques, in seriously impaired psychiatric inpatients.

For this purpose a questionnaire (OPQ) was developed by the authors and evaluated in 135 psychiatric inpatients. Seventy-eight of these patients completed a set of neuropsychological and psychopathological assessments twice, once using the conventional approach and additionally using a computer version. Their OPQ results were analyzed in the total sample as well as in diagnostic subgroups.

Although the psychiatric patients had some problems in dealing with the computer, computerized assessment was convincingly well accepted and even superior to conventional paper-and-pencil techniques. Overall, only few differences were found between diagnostic subgroups, indicating a higher relevance of computer attitude in depressive disorders. General skepticism concerning the feasibility of computerized assessment in psychiatric inpatients appears to be unjustified.

© 2002 Published by Elsevier Science Ltd.

Keywords: Computer attitude; Computer diagnostics; Self-assessment; Acceptance; Psychiatry

1. Introduction

Computerized psychological and psychopathological assessment is increasingly used in psychiatry. The common availability of microcomputers and their relatively

Computers in Human Behavior 19 (2003) 81–93

www.elsevier.com/locate/comphumbeh

0747-5632/03/$ - see front matter © 2002 Published by Elsevier Science Ltd.

PII: S0747-5632(02)00012-2

* Corresponding author. Tel.: +49-69-6301-5347; fax: +49-69-6301-7087.

E-mail address: [email protected] (B. Weber).


low costs enhance this trend. In recent years a number of new approaches have been described and evaluated. In general, the authors report a good acceptance of computerized instruments (Carr, Ancill, Ghosh, & Margo, 1981; French & Beaumont, 1987; Freudenmann & Spitzer, 2001; Gitzinger, 1990; Hedlund, Vieweg, & Cho, 1985; Lucas, Mullin, Luna, & McInroy, 1977; McGuire et al., 2000; Neal, Busuttil, Herapath, & Strike, 1994; Pelissolo, Veysseyre, & Lepine, 1997; Schriger, Gibbons, Langone, Lee, & Altshuler, 2001; Shakeshaft, Bowman, & Sanson-Fisher, 1998; Spinhoven, Labbe, & Rombouts, 1993; Stevenson, Beattie, Alves, Longabaugh, & Ayers, 1988; Turner, Ku, Rogers, Lindberg, Pleck, & Sonenstein, 1998; Weber, Fritze, Schneider, Simminger, & Maurer, 1998).

In principle, computerized assessment may have important advantages: it is less time consuming and thus potentially cost-effective, it assures standardization and reliability (Canfield, 1991), facilitates statistical analysis and allows precise time registration. The assessments can be adapted individually, and even complex diagnostic schedules can be applied, where treatment strategies can be decided on even in a real-time mode (Fahrenberg, 1994). Patients' contribution to essential judgments such as preference of treatment and clinical outcomes by computerized self-report and self-monitoring data might become easier and facilitate the monitoring of quality in psychiatry. Nevertheless, there seems to be some possibly justified reserve towards widespread application, which might be explained partly by doubts about the acceptance of computerized assessment by heavily impaired psychiatric patients (Skinner & Pakula, 1986; Spinhoven et al., 1993).

From an ethical point of view the acceptance of new instruments is important, because psychiatric patients should not be exposed to aversive or even frightening diagnostic procedures. Moreover, from a methodological point of view initial acceptance is to be regarded as an important motivational factor (Jager & Krieger, 1994; Lienert & Raatz, 1994), where limited acceptance might endanger the validity and reliability of computerized assessments. Furthermore, general acceptance as a routine approach presumes an appraisal of the success of the patient–computer interaction.

In 1993, Spinhoven and co-workers (Spinhoven et al., 1993) investigated the feasibility of computerized psychological examination in psychiatric outpatients. Fifty-four percent of the initially selected patients refused the examination. Educational level, previous experience with computers, and attitude towards computers were found to be related to the success of the patient–computer interaction.

Time has moved on, and the availability of computers, the attitude towards computers, and their routine use have changed rapidly in our society (Maxwell, 2001). Human–computer interaction is regarded as an increasingly important issue (Carroll, 2001; French & Beaumont, 1987; Sutcliffe, 2001), which has still been widely ignored in terms of patient–computer interaction.

Up to now, the feasibility and acceptance of computerized assessment, as well as differences from conventional assessment, have not been sufficiently investigated in psychiatric patients. Therefore, the present study aimed at examining the acceptance of computerized instruments in seriously impaired psychiatric inpatients, particularly compared to conventional paper-and-pencil techniques.



2. Patients and methods

A self-rating questionnaire ('Operation and Preference Questionnaire', OPQ) was developed and completed by 135 psychiatric inpatients during mixed (computerized and conventional) assessments in order to check its applicability. The scale indicates acceptance of the computerized part of the examination, particularly compared to conventional paper-and-pencil techniques. The OPQ consists of 13 items (Table 1). For each of these items a score is calculated, with high values indicating good and low values indicating poor acceptance of computerized assessment. Three additional questions evaluate former assessment experience and educational level. The theoretical minimum of the scale is 10.5 and the maximum 39 points; a score higher than 26 (predominantly positive answers) is considered to represent good acceptance and a successful patient–computer interaction.
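The scoring rule can be sketched in a few lines. The exact per-item coding of the OPQ is not reproduced in the text, so the item scores are simply taken as given; the function names below are illustrative, not part of the original instrument.

```python
def opq_total(item_scores):
    """Sum the 13 OPQ item scores into a total score.

    The scale's theoretical range is 10.5 (minimum) to 39 (maximum);
    how each item is coded is not spelled out here, so the per-item
    scores are assumed to be supplied already.
    """
    if len(item_scores) != 13:
        raise ValueError("the OPQ has exactly 13 items")
    return sum(item_scores)


def good_acceptance(total_score):
    """A total above 26 (predominantly positive answers) counts as good
    acceptance, i.e. a successful patient-computer interaction."""
    return total_score > 26
```

With the maximum item score of 3 on all 13 items, `opq_total` returns 39, matching the stated scale maximum; a score of exactly 26 does not yet count as good acceptance.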

Table 1
Final acceptance of computerized examination: OPQ self-assessment of 78 psychiatric inpatients(a)

Item  OPQ item                                         'Very good'   'Satisfactory'   'Poor' or
no.                                                    or 'good'     or 'sufficient'  'unsatisfactory'

1     General scoring of the computer program          51            26               1
                                                       18/24/9       10/10/6          0/1/0

Factor I (functionality)
2     Scoring of the touch-screen function             67            7                4
                                                       25/28/14      2/4/1            1/3/0
3     Scoring of the keyboard function                 55            15               8
                                                       17/25/13      6/8/1            5/2/1
4     Scoring of the computer instructions             55            20               3
                                                       21/20/14      5/14/1           2/1/0
5     Scoring of the computer questions                50            25               3
                                                       19/20/11      7/14/4           2/1/0

Factor II (comparative quality)                        Computer      Undecided        Paper-pencil
6     Which form would be preferred in principle?      45            25               8
                                                       18/18/9       7/13/5           3/4/1
8     Which form is more binding?                      23            47               8
                                                       13/6/4        10/27/10         5/2/1
9     Which form facilitates more frankness?           30            38               10
                                                       11/12/7       11/20/7          6/3/1
11    Which form is more straining?                    19            29               30
                                                       7/8/4         9/15/5           12/12/6
12    Which form seems to be more reliable?            32            38               8
                                                       14/12/6       10/21/7          4/2/2

Factor III (irritability)                              Yes           -                No
7     Does the computer have a calming effect?         41            -                37
                                                       17/17/7                        11/18/8
10    Does the computer make you nervous?              19            -                59
                                                       5/10/4                         23/25/11
13    Does the computer facilitate good concentration? 57            -                21
                                                       20/25/12                       8/10/3

a Answers of the total sample are presented in the first line. The second line contains the results of the diagnostic subgroups: psychotic (N=28)/mood (N=35)/other (N=15) disorders.



The OPQ data of 78 patients undergoing an identical neuropsychological and psychopathological assessment procedure were analyzed for acceptance of the computerized version in comparison to conventional techniques.

First, a paper-and-pencil version of a German translation of the Groningen Computer Attitude Scale (GCAS), developed by Bouman, Wolters, and Wolters-Hoff (1989), was used for the assessment of general attitude towards computers. The GCAS was chosen because of its good adaptation to European conditions of computer experience. The Likert-type scale consists of 16 statements with five possible answers varying between 'totally agree' and 'totally disagree'. The minimum score of the scale is 16 and the maximum 80; a score higher than 48 is considered to represent a positive attitude towards computers.

Furthermore, a neuropsychological assessment was performed with the Mini Mental

State Examination [MMSE (Folstein, Folstein, & McHugh, 1975)], a conventional and a computerized memory task [ADAS (Ihl & Weyer, 1994) word recognition, form A and B], a conventional [d2 (Brickenkamp, 1994)] and a computerized attention task [ERTS (Beringer, 1993)], and a visual pursuit tracking task (ERTS). A specifically designed program (Weber et al., 1998), supplied with a touch-screen

supported device, was used for the computerized memory task and further computerized psychopathological self-ratings. In this self-administered part of the assessment, basic instructions and an automated tutorial were presented first in order to familiarize patients with the program and to train them before starting the real assessment. This procedure allowed even patients with cognitive impairments or neuroleptic side effects to go through the whole program. During this training patients completed a short 'Training Questionnaire' (TQ) for measurement of initial acceptance (minimum score 4, maximum 12). Patients were instructed to refrain from asking for external support while completing the assessment. Nevertheless, an adviser was present for 'rescue' in case of definite uncertainty. To this extent, the setting mimicked conventional psychological assessment. The computerized assessment took about 1 h and could be interrupted on demand at several pre-determined points.

Patients with organic brain disease and patients insufficiently familiar with the

German language were excluded, as were non-compliant patients. Less than 10% of the patients refused to participate because they felt too stressed. Less than 5% dropped out during the course of computerized assessment because of a loss of concentration or compliance. All other patients (N=78) completed the scales without experiencing major problems. The age of these patients was 43.5±15.1 (mean±SD; range 18–76) years. Forty-seven patients were female (47.0±15.3; 25–76 years) and 31 male (38.3±13.2; 18–68 years). DSM-IV diagnoses (American Psychiatric Association, 1994) are shown in Table 2, and differences between diagnostic subgroups concerning age and gender in Table 3.

Reliability and internal structure of the OPQ were analyzed by Cronbach's alpha,

split-half reliability, principal component analysis and principal factor analysis (centroid method). For further statistical analysis patients were divided into three diagnostic subgroups: psychotic disorders, mood disorders and others (non-psychotic and non-mood disorders; Table 2). Kruskal–Wallis ANOVA and the



Mann–Whitney U-test were used for testing differences between these diagnostic subgroups. The Mann–Whitney U-test was also used as a test for differences between genders. Relations between age, OPQ, TQ and GCAS scores were tested by Spearman's rank order correlation. Additionally, a (stepwise forward) multiple regression analysis for OPQ scores was performed in order to assess the effect of possible predictor variables.
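The reliability and group-comparison steps described above can be sketched with NumPy/SciPy. The data below are synthetic (drawn to resemble the subgroup means and standard deviations later reported in Table 3), and `cronbach_alpha` is a plain implementation of the standard formula, not the authors' code.

```python
import numpy as np
from scipy import stats


def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores."""
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


rng = np.random.default_rng(0)
# Synthetic OPQ totals for the three diagnostic subgroups
psychotic = rng.normal(30.5, 5.2, 28)
mood = rng.normal(29.6, 5.1, 35)
other = rng.normal(31.3, 4.2, 15)

h_stat, p_kw = stats.kruskal(psychotic, mood, other)   # omnibus subgroup test
u_stat, p_mwu = stats.mannwhitneyu(psychotic, other)   # pairwise subgroup test
gcas_mood = rng.normal(56.9, 10.9, 35)                 # synthetic GCAS scores
rho, p_rho = stats.spearmanr(mood, gcas_mood)          # rank-order correlation
```

On a matrix whose items are perfectly correlated, `cronbach_alpha` returns 1.0, which is a quick sanity check of the implementation.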

3. Results

Although mostly working independently and interacting with the computer in a straightforward and productive way, most of the patients took the opportunity to ask questions. Patients were observed to lose self-confidence if left completely alone with the computer, whereas excessive involvement of the adviser provoked a lack of independence.

In order to evaluate the possibility of data reduction, a principal component analysis of the 13 OPQ items was carried out and showed four factors with eigenvalues > 1 and a cumulated explained variance of 65.8%. Factor I (items 4, 5) explained 31.4% of the variance, factor II (items 6, 8, 9, 11, 12) 15.4%, factor III (items 1, 7, 10, 13) 10.8% and factor IV (items 2, 3) 8.2%. Item means ranged from 2.0 to 2.5, standard deviations from 0.50 to 0.95 and item-total correlations from 0.36 to 0.55. Cronbach's alpha was calculated as α=0.81 and the split-half reliability as r=0.88.

The internal structure of the OPQ scale was analyzed by a principal factor analysis (centroid method, varimax rotation). In accordance with the preceding thematic selection of items, three factors with eigenvalues > 1 were extracted and might be

Table 2
Main psychiatric diagnoses and diagnostic subgroups of 78 psychiatric inpatients with identical assessment procedure

Main diagnoses (DSM-IV)                     N

Psychotic disorders (N=28)
  Schizophrenia                             16
  Schizoaffective disorder                   9
  Brief psychotic disorder                   2
  Substance-induced psychotic disorder       1

Mood (depressive) disorders (N=35)
  Depressive episode                        32
  Dysthymic disorder                         3

Other disorders (N=15)
  Neurotic or personality disorder          10
  Substance-related disorder                 4
  Posttraumatic stress disorder              1

Total                                       78



Table 3
Mean values and standard deviations of OPQ (final acceptance) total scores and subscores, TQ (initial acceptance) and GCAS (computer attitude) scores, and age as well as gender distribution of the total sample and the diagnostic subgroups

                            All patients   Psychotic      Mood           Other
                            (N=78)         disorders      disorders      disorders
                                           (N=28)         (N=35)         (N=15)

OPQ (final acceptance)      30.3±4.9       30.5±5.2       29.6±5.1       31.3±4.2
OPQ functionality           9.6±1.8        9.3±1.6*       9.4±2.1        10.4±1.0
OPQ comparative quality     11.4±2.4       11.5±2.9       11.2±2.2       11.5±2.2
OPQ irritability            7.0±2.2        7.3±2.1        6.8±2.3        7.0±2.7
OPQ item 1                  2.3±0.4        2.3±0.5        2.3±0.5        2.3±0.3
TQ (initial acceptance)     10.2±2.2       10.7±1.5       10.0±2.7       10.1±2.2
GCAS (computer attitude)    57.7±10.6      58.5±9.2       56.9±10.9      58.1±12.7
Age (years)                 43.5±15.1      36.6±9.1       50.5±16.3      40.1±14.5
Gender female/male (N)      47/31          11/17          28/7           8/7

* Significantly lower compared to patients with other disorders (Mann–Whitney U-test: Z=2.2, P=0.03).



meaningfully interpreted as 'functionality' (factor I, eigenvalue=3.72, explaining 28.6% of the variance; items 2, 3, 4, 5), 'comparative quality' (factor II, 1.62, 12.5%; items 6, 8, 9, 11, 12) and 'irritability' (factor III, 1.07, 8.5%; items 7, 10, 13) of the patient–computer interaction. Factor I includes items asking for judgements about the functionality of details of the self-administered computer program, factor II items ask for a preference of computerized as opposed to paper-and-pencil assessment under several aspects, and factor III items concern calmness/nervousness and concentration while working with the computer. For further analysis these clustered items were used for calculating subscales. Item 1 (global scoring of the computerized assessment) was not assigned to any of the three factors.

A good (final) acceptance of the computerized examination (OPQ score > 26) was

found in 63 patients (80.8%). Mean OPQ total and subscale scores for all patients and diagnostic subgroups are shown in Table 3.

The analysis of single items showed that 65% of the patients rated the computer

program in general as good or very good, 89% the touch-screen handling, 71% the keyboard handling and the comprehensibility of the instructions, and 64% that of the questions. Fifty-three percent of the patients described a calming effect of the computer, while 24% experienced some nervousness. Nevertheless, 73% felt that the interaction with the computer program facilitated concentration. Compared to the paper-and-pencil assessment, the computer version was in principle preferred by 58% of the patients (vs. 10% preferring the conventional version). The computer program was thought to facilitate openness (38 vs. 13%), to be more binding (29 vs. 10%), more reliable (41 vs. 10%) and less demanding (38 vs. 24%; for detailed answers to single items see Table 1).

A positive attitude towards computers (GCAS score > 48) was found in 65

(83.3%) patients, whereas 13 (16.7%) showed a more or less negative attitude. Initially (TQ), 90% of the patients felt able to work with the computerized scale, 76% even enjoyed it, but 35% expected difficulties for the subsequent computerized assessment.

Cross-correlations of GCAS, TQ and OPQ total scores, OPQ subscores and the

(global) item 1, age, and education are shown in Table 4 for the total sample as well as the diagnostic subgroups. Sixteen patients reported experience with computerized and 48 with conventional assessment. Pre-experience of computerized (Z=-2.2, P=0.03) as well as of conventional (Z=-2.0, P=0.04) assessment was found to be significantly correlated with OPQ 'functionality' subscores.

Overall, no gender-related differences were found. Because of their possible susceptibility to drug-induced side effects, OPQ total scores and subscores were additionally checked for correlations with CPZ and diazepam equivalents (Dietmaier & Laux, 1992). No correlations were found, neither in the total sample nor within diagnostic subgroups.

In a multiple regression analysis (stepwise forward) with OPQ scores of the total

sample as the dependent variable and age, gender, GCAS scores, TQ scores, diagnostic subgroups, pre-experience of assessment as well as educational level as independent variables, three variables were included in the statistical model: GCAS scores, explaining 21% of the variance (β=0.34, P=0.003); TQ scores, explaining a further 6.5%



Table 4
Cross-correlations of GCAS (computer attitude), TQ (initial acceptance) and OPQ (final acceptance) total scores, OPQ subscores and item 1, age and education(a)

OPQ total score
  Functionality: 0.71**** (0.66***/0.82****/0.73**)
  Comparative quality: 0.71**** (0.81****/0.57***/0.69**)
  Irritability: 0.74**** (0.80****/0.70****/0.70**)
  Item 1: 0.57**** (0.63***/0.58***/0.30)
  Age: -0.12 (0.30/-0.35*/-0.05)
  Education: 0.16 (-0.07/0.49**/-0.41)
  GCAS: 0.41*** (0.36(*)/0.32(*)/0.56*)
  TQ: 0.38*** (0.56**/0.28/0.24)

OPQ functionality
  Comparative quality: 0.29** (0.30/0.23/0.49*)
  Irritability: 0.44**** (0.54**/0.49**/0.27)
  Item 1: 0.31** (0.28/0.52**/-0.25)
  Age: -0.22(*) (0.11/-0.43*/-0.07)
  Education: 0.31** (0.04/0.55***/-0.19)
  GCAS: 0.32** (0.28/0.42*/0.08)
  TQ: 0.27* (0.33(*)/0.33*/0.15)

OPQ comparative quality
  Irritability: 0.24* (0.41(*)/0.08/0.19)
  Item 1: 0.34** (0.47*/0.26/0.21)
  Age: 0.03 (0.29/-0.06/0.17)
  Education: 0.03 (-0.06/0.23/-0.39)
  GCAS: 0.17 (0.29/-0.06/0.44(*))
  TQ: 0.28* (0.56**/-0.01/0.60*)

OPQ irritability
  Item 1: 0.46**** (0.54**/0.41*/0.45(*))
  Age: -0.05 (0.32(*)/-0.20/0.04)
  Education: 0.09 (-0.07/0.31(*)/-0.27)
  GCAS: 0.38*** (0.29/0.38*/0.47(*))
  TQ: 0.35** (0.41*/0.50**/-0.15)

OPQ item 1
  Age: 0.10 (0.26/-0.01/0.21)
  Education: 0.00 (-0.11/0.18/-0.02)
  GCAS: 0.22(*) (0.11/0.14/0.72**)
  TQ: 0.07 (0.38*/-0.08/-0.13)

Age
  Education: -0.19(*) (0.25/-0.54***/0.24)
  GCAS: -0.28* (-0.06/-0.48**/-0.05)
  TQ: -0.10 (-0.12/-0.33(*)/0.18)

Education
  GCAS: 0.36** (0.36(*)/0.57***/-0.06)
  TQ: 0.24* (0.15/0.40*/-0.06)

GCAS
  TQ: 0.43**** (0.50**/0.47**/0.24)

a Values in parentheses are the results of the diagnostic subgroups: psychotic (N=28)/mood (N=35)/other (N=15) disorders.
(*) P<0.1. * P<0.05. ** P<0.01. *** P<0.001. **** P<0.0001.



of the variance (β=0.28, P=0.01); and pre-experience with computerized assessment, explaining a further 2% of the variance (β=0.12, P=0.0002).
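A stepwise forward regression of this kind can be illustrated with a minimal greedy selector. This is a sketch, not the authors' procedure: the entry criterion here is a fixed R² gain rather than an F-to-enter test, and the data are synthetic, with variable names merely echoing the predictors used in the study.

```python
import numpy as np


def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))


def forward_select(candidates, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that raises
    R^2 the most, stopping when no candidate adds at least min_gain."""
    chosen, best_r2 = [], 0.0
    while True:
        gains = {}
        for name, column in candidates.items():
            if name in chosen:
                continue
            trial = np.column_stack([candidates[c] for c in chosen] + [column])
            gains[name] = r_squared(trial, y) - best_r2
        if not gains:
            break
        name, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < min_gain:
            break
        chosen.append(name)
        best_r2 += gain
    return chosen, best_r2


rng = np.random.default_rng(1)
n = 78
gcas = rng.normal(57.7, 10.6, n)  # synthetic computer-attitude scores
tq = rng.normal(10.2, 2.2, n)     # synthetic initial-acceptance scores
age = rng.normal(43.5, 15.1, n)   # synthetic, unrelated to the outcome below
opq = 0.3 * gcas + 1.5 * tq + rng.normal(0.0, 2.0, n)  # synthetic outcome

selected, r2 = forward_select({"GCAS": gcas, "TQ": tq, "age": age}, opq)
```

With the synthetic outcome constructed from GCAS and TQ, both predictors are selected, while the uncorrelated age variable contributes essentially nothing.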

4. Discussion

The evaluation of general test criteria confirmed the OPQ to be a suitable tool for estimating the acceptance of computerized assessment. The scale may be described by the three factors 'functionality', 'comparative quality' and 'irritability' in patient–computer interaction. The results of the principal component analysis of the 13 OPQ items indicate that item reduction is not advisable.

The mean GCAS score (57.7) indicates a less positive attitude towards computers

than in healthy persons (GCAS score 59.0, n=148; Bouman et al., 1989) or primarily neurotic psychiatric outpatients (60.2, n=157; Spinhoven et al., 1993). This might result from less experience with computers as well as from differences in the age and gender distribution of the samples investigated.

The results of the 'Training Questionnaire' confirm a higher degree of initial

uncertainty in the handling of computers in patients than in healthy subjects (Weber et al., 1998). Although the percentage of subjects who enjoyed the computerized assessment was surprisingly high (75%), many patients (39%) expected difficulties in dealing with the computer.

In spite of this comparatively poor starting condition for the interaction of psychiatric

patients with the computer, the final acceptance of computerized assessment was found to be convincingly good: the OPQ scores as well as the low drop-out and refusal rates proved a good acceptance. Overall, only a small minority of patients experienced the computer as aversive. At least 65% of the patients preferred the computerized assessment to the conventional examination. Surprisingly, all items asking for a decision between computerized and conventional assessment were answered in favor of the computerized version.

No clear-cut indication of a higher degree of frankness in computerized assessment was found. However, patients declared a higher degree of frankness in computer assessment as opposed to paper-and-pencil assessment in OPQ item 9 (38.5 vs. 12.8%). This might correspond to the findings of Turner et al. (1998) of an increased readiness to report risk-taking behavior in male adolescents, or the observation of Lucas et al. (1977) of an increased readiness to report the amount of alcohol consumed in patients with alcohol-related illnesses when using computer assessment. Thus, the validity of computer assessment might be superior to that of conventional assessment.

No striking differences were found between diagnostic subgroups in OPQ single

item answers (Table 1). This partly corresponds to the findings of Spinhoven et al. (1993), who at least did not find a correlation between psychopathology (measured by the SCL-90) and computer attitude.

In mood disorders a trend to prefer the answer 'undecided', if possible, was seen.

This might reflect the phenomenon of ambivalence often found in depressed patients. Furthermore, OPQ 'functionality' scores were found to be significantly



lower in psychotic compared to non-psychotic and non-mood disorders (Table 3, Mann–Whitney U-test: Z=2.2, P=0.03) and rather low in mood disorders. Patients with psychotic or depressive disorders might experience more problems in the interaction with the computer because of their cognitive impairment. An influence of drug-induced side effects could be excluded.

In the total sample, lower 'irritability' and 'functionality' subscores as well as lower

initial acceptance (TQ) were found to be correlated with a more negative attitude towards computers (Table 4). Although the variance explained by computer attitude was found to be limited (multiple regression analysis), this corresponds to the findings of Spinhoven et al. (1993), who saw a significant correlation between computer attitude and 'relaxation during computerized assessment'. However, the preference for the computerized over the conventional assessment (OPQ 'comparative quality' subscore) was surprisingly found to be independent of computer attitude.

Looking at the corresponding results of the diagnostic subgroups (Table 4), it becomes

obvious that the above correlations are mainly caused by the patients with mood disorders. In patients with psychotic disorders, computer attitude seems to have some influence on initial (TQ) but not on final acceptance (OPQ subscores). In the subsample of other disorders, computer attitude only seems to be relevant to the global approval of computerized assessment (item 1).

Age and education showed a high covariance in mood disorders and seemed to be

related to low OPQ 'functionality' and TQ scores in this diagnostic subgroup. This might indicate more problems with the computerized assessment in older and less educated depressed patients. In the present investigation, no further dependencies on age, educational level or gender were observed with regard to the acceptance of computerized assessment (TQ, OPQ), as had been identified for computer attitude in this and in former studies (Spinhoven et al., 1993).

In conclusion, despite some risks in psychiatric patient–computer interaction,

computerized assessment was convincingly well accepted by seriously impaired psychiatric inpatients. It even yielded superior results compared to conventional paper-and-pencil examination. Overall, only few differences were found between diagnostic subgroups. Previous skepticism (Skinner & Pakula, 1986; Spinhoven et al., 1993) concerning the feasibility of computerized assessment in psychiatric inpatients hence seems to be unjustified.

Most of those patients who were unable to manage computerized assessment in the

present study were also unable to complete conventional psychological assessments, and pre-experience of computerized as well as conventional assessment seems to have a positive impact on 'functionality' in the patient–computer interaction. This corresponds to the considerations of Spinhoven et al. (1993) that in their study the problems of patients might have been unrelated to the use of a computer for assessment.

Nevertheless, individual computer attitude and acceptance, besides the observation of behavior, should be systematically monitored as possibly confounding covariables in order to ensure meaningful interpretations of findings. Effects of a negative computer attitude on neuropsychological computer-tested performance were found in depressed patients. These results have been published elsewhere (Weber, Fritze, Schneider, Kühner, & Maurer, 2002).



The results of this study favor future efforts to increase the use of computers. But some restrictions and general requirements should be mentioned: computer programs should be adapted to the individual capabilities of psychiatric patients. The feasibility of computer assessment might be improved by features such as voice output or even voice recognition, on-demand training, on-demand help functions and on-demand background information. These approaches resemble the current development of common commercial interactive computer software. They may be expected to improve patient–computer interaction, but might be accompanied by some decrease in standardization, which is thought to be of utmost relevance in conventional psychological testing. However, the application of computer programs developed for less restricted subjects to severely impaired psychiatric patients is methodologically questionable anyway. In the future, psychological assessment in psychiatry will likely become more and more individualized (Jager & Krieger, 1994; Sutcliffe, 2001).

Further research on the complex factors which govern the human–computer

interaction (Carroll, 2001; French & Beaumont, 1987; Sutcliffe, 2001) is needed in order to optimize the patient–computer interface. Interferences between individual psychopathology, computer attitude, acceptance of computerized assessment and the findings obtained by this kind of assessment cannot be ruled out in psychiatric patients (Weber et al., 2002). Even in less impaired neurotic patients, Spinhoven et al. (1993) found a significant correlation between the degree of psychopathologic impairment and 'relaxation during computerized assessment'.

Computerized assessment is not simply support by a machine but a particular

mode of information processing (Carroll, 2001; Sutcliffe, 2001), in which the computer has feedback impacts on, e.g., individual strategies and self-discipline (Jager & Krieger, 1994). Studies on 'computer hassles' (Hudiburg, 1991; Hudiburg, Ahrens, & Jones, 1994) and 'computerphobia' (Rosen & Maguire, 1990; Rosen, Sears, & Weil, 1987; Rosen & Weil, 1995; Tseng, Tiplady, MacLeod, & Wright, 1998; Weil, Rosen, & Shaw, 1988; Weil, Rosen, & Wugalter, 1990) in healthy subjects have been published recently. This topic might gain some importance in psychiatry because psychiatric patients may be assumed to be particularly susceptible to such problems.

Acknowledgements

This work was supported by grant We2263/1-1 from the Deutsche Forschungsgemeinschaft (DFG).

References

American Psychiatric Association. (1994). Diagnostic and Statistical Manual of Mental Disorders: DSM-IV. Washington, DC: American Psychiatric Association.
Beringer, J. (1993). Experimental Run Time System (ERTS). Frankfurt/Main: BeriSoft Cooperation.
Bouman, T. K., Wolters, F. J. M., & Wolters-Hoff, G. H. (1989). Een schaal om attitudes tegenover computers te meten [A scale to measure attitudes towards computers]. Nederlands Tijdschrift voor de Psychologie, 44, 288–292.

B. Weber et al. / Computers in Human Behavior 19 (2003) 81–93 91


Brickenkamp, R. (1994). Test d2—Aufmerksamkeits-Belastungs-Test. Göttingen: Hogrefe.
Canfield, M. L. (1991). Computerized rating of the psychotherapeutic process. Psychotherapy, 28, 304–316.
Carr, A. C., Ancill, R. J., Ghosh, A., & Margo, A. (1981). Direct assessment of depression by microcomputer. A feasibility study. Acta Psychiatrica Scandinavica, 64(5), 415–422.
Carroll, J. M. (2001). Human–computer interaction, the past and the present. In J. M. Carroll (Ed.), Human–computer interaction in the new millennium (pp. xxvii–xxxvii). New York, Boston, San Francisco: ACM Press and Addison-Wesley.
Dietmaier, O., & Laux, G. (1992). Uebersichtstabellen [Overview tables]. In P. Riederer, G. Laux, & W. Poldinger (Eds.), Neuro-Psychopharmaka. Wien, New York: Springer.
Fahrenberg, J. (1994). Ambulant assessment: computer assisted data acquisition under natural conditions. Diagnostica, 40(3), 195–216.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). ''Mini-mental state''. A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198.
French, C. C., & Beaumont, J. G. (1987). The reaction of psychiatric patients to computerized assessment. British Journal of Clinical Psychology, 26(4), 267–278.
Freudenmann, R. W., & Spitzer, M. (2001). Computer-assisted patient survey as the basis for modern quality assurance in psychiatry. Results of pilot studies. Nervenarzt, 72(1), 40–51.
Gitzinger, I. (1990). Acceptance of tests presented on a personal computer by inpatients. Psychotherapie, Psychosomatik, Medizinische Psychologie, 40(3–4), 143–145.
Hedlund, J. L., Vieweg, B. W., & Cho, D. W. (1985). Mental health computing in the 1980s. II. Clinical applications. Computers in Human Services, 1, 1–31.
Hudiburg, R. A. (1991). Relationship of computer hassles, somatic complaints, and daily hassles. Psychological Reports, 69(3), 1119–1122.
Hudiburg, R. A., Ahrens, P. K., & Jones, T. M. (1994). Psychology of computer use. 31. Relating computer users' stress, daily hassles, somatic complaints, and anxiety. Psychological Reports, 75(3), 1183–1186.
Ihl, R., & Weyer, G. (1994). Alzheimer's Disease Assessment Scale (ADAS)—German version. Göttingen: Beltz.
Jager, R. S., & Krieger, W. (1994). Future prospects of computer-based assessment, exemplified on the basis of treatment-oriented assessment. Diagnostica, 40(3), 217–243.
Lienert, G. A., & Raatz, U. (1994). Testaufbau und Testanalyse [Test construction and test analysis]. Weinheim: Psychologie Verlags Union.
Lucas, R. W., Mullin, P. J., Luna, C. B., & McInroy, D. C. (1977). Psychiatrists and a computer as interrogators of patients with alcohol-related illnesses: a comparison. British Journal of Psychiatry, 131, 160–167.
Maxwell, K. (2001). The maturation of HCI: moving beyond usability towards holistic interaction. In J. M. Carroll (Ed.), Human–computer interaction in the new millennium (pp. 191–233). New York, Boston, San Francisco: ACM Press and Addison-Wesley.
McGuire, M., Bakst, K., Fairbanks, L., McGuire, M., Sachinvala, N., von Scotti, H., & Brown, N. (2000). Cognitive, mood, and functional evaluations using touchscreen technology. Journal of Nervous and Mental Disease, 188(12), 813–817.
Neal, L. A., Busuttil, W., Herapath, R., & Strike, P. W. (1994). Development and validation of the computerized clinician administered post-traumatic stress disorder scale-1-revised. Psychological Medicine, 24, 701–706.
Pelissolo, A., Veysseyre, O., & Lepine, J. P. (1997). Validation of a computerized version of the temperament and character inventory (TCI) in psychiatric inpatients. Psychiatry Research, 72(3), 195–199.
Rosen, L. D., & Maguire, P. D. (1990). Myths and realities of computerphobia: a meta-analysis. Anxiety Research, 3, 175–191.
Rosen, L. D., Sears, D. C., & Weil, M. M. (1987). Computerphobia. Behavior Research Methods, Instruments, & Computers, 19, 167–179.
Rosen, L. D., & Weil, M. M. (1995). Computer anxiety: a cross-cultural comparison of university students in 10 countries. Computers in Human Behavior, 11(1), 45–64.
Schriger, D. L., Gibbons, P. S., Langone, C. A., Lee, S., & Altshuler, L. L. (2001). Enabling the diagnosis of occult psychiatric illness in the emergency department: a randomized, controlled trial of the computerized, self-administered PRIME-MD diagnostic system. Annals of Emergency Medicine, 37(2), 132–140.

Shakeshaft, A. P., Bowman, J. A., & Sanson-Fisher, R. W. (1998). Computers in community-based drug and alcohol clinical settings: are they acceptable to respondents? Drug and Alcohol Dependence, 50(2), 177–180.
Skinner, H. A., & Pakula, A. (1986). Challenge of computers in psychological assessment. Professional Psychology, 17, 44–50.
Spinhoven, P., Labbe, M. R., & Rombouts, R. (1993). Feasibility of computerized psychological testing with psychiatric outpatients. Journal of Clinical Psychology, 49, 440–447.
Stevenson, J. F., Beattie, M. C., Alves, R. R., Longabaugh, R., & Ayers, T. (1988). An outcome monitoring system for psychiatric inpatient care. QRB Quality Review Bulletin, 14(11), 326–331.
Sutcliffe, A. (2001). On the effective use and reuse of HCI knowledge. In J. M. Carroll (Ed.), Human–computer interaction in the new millennium (pp. 3–29). New York, Boston, San Francisco: ACM Press and Addison-Wesley.
Tseng, H. M., Tiplady, B., MacLeod, H. A., & Wright, P. (1998). Computer anxiety: a comparison of pen-based personal digital assistants, conventional computer and paper assessment of mood and performance. British Journal of Psychology, 89(4), 599–610.
Turner, C. F., Ku, L., Rogers, S. M., Lindberg, L. D., Pleck, J. H., & Sonenstein, F. L. (1998). Adolescent sexual behavior, drug use and violence: increased reporting with computer survey technology. Science, 280, 867–873.
Weber, B., Fritze, J., Schneider, B., Simminger, D., & Maurer, K. (1998). Computerized self-assessment in psychiatric in-patients: acceptability, feasibility and influence of computer attitude. Acta Psychiatrica Scandinavica, 98(2), 140–145.
Weber, B., Fritze, J., Schneider, B., Kuhner, T., & Maurer, K. (2002). Bias in computerized neuropsychological assessment of depressive disorders caused by computer attitude. Acta Psychiatrica Scandinavica, 105(2), 126–130.
Weil, M. M., Rosen, L. D., & Shaw, S. (1988). Computerphobia reduction program: clinical resource manual. Carson, CA: California State University Dominguez Hills.
Weil, M. M., Rosen, L. D., & Wugalter, S. (1990). The etiology of computerphobia. Computers in Human Behavior, 6, 361–379.
