
Radiography (1996) 2, 263-288

ORIGINAL ARTICLES

THE COLLEGE OF RADIOGRAPHERS

A STUDY TO EVALUATE THE INTRODUCTION OF A PATTERN RECOGNITION TECHNIQUE FOR CHEST RADIOGRAPHS BY RADIOGRAPHERS

Helen Hughes, Kenneth Hughes and Raymond Hamill

Monklands District Trust Hospital, Airdrie, Scotland ML6 0JS

(Received 30 August 1995; accepted 15 July 1996)

Purpose: This study introduced and evaluated a pattern-recognition technique for radiographers looking at chest radiographs. The technique allowed radiographers to screen significant/insignificant chest radiographs and introduced a broad classification of pathological groups: pneumothorax, effusion, collapse, and ‘shadows’.

Methods: The project began with a pre-tutorial study of radiographers’ assessments of chest radiographs. Lectures introducing a pattern-recognition technique were then presented by a consultant radiologist. The post-tutorial study demonstrated the radiographers’ use of the pattern-recognition technique in assessing further chest radiographs. The radiographers were requested to complete a questionnaire both before and after the tutorial. The radiographs assessed in this study were grouped into two categories: those taken in the hospital’s General Department and satellite Health Centres, and those taken in the hospital’s Accident and Emergency Department.

Results: Within both of these categories, improvements in sensitivity, specificity and predictive values were obtained by the radiographers’ use of the pattern technique introduced by the consultant radiologist. False-positive diagnoses decreased and, more importantly, so did the false-negative diagnoses.

Conclusions: The radiographers successfully acquired a pattern-recognition technique. They were able to classify radiographs into significant/insignificant groups, and recognised broad pathological groups. These groupings were likely to have significant immediate clinical implications.

Key words: red-dot; skill-mix; reporting; education; triage

INTRODUCTION

‘Red dot’ systems have been used extensively to assess and develop radiographers’ skills. This form of role development has met with mixed success and a mixed reception [2-4].

Cheyne introduced a red dot scheme to the Ealing Hospital Casualty Department, and 98% radiographically detected abnormality rates were reported. In addition, radiographers detected 2.5% of abnormalities which would otherwise have gone initially undetected by casualty officers [5]. The success of such studies has not, however, been without some criticism. Reynolds indicated that red dot systems used in some departments were a waste of time; if radiographers see an abnormality, they should be able to say what it is, and not just attach a red dot to the film. Why should radiographers not report on films? [6]


© The College of Radiographers

264 Hughes et al.

Recent College instructions to radiographers state that the immediate interpretation of radiographs may be made by the radiographer who can report on the X-ray examination and the radiographic appearances. However, the radiographer needs to have completed a prescribed course of training and should report in accordance with an agreed protocol [7]. The Royal College of Radiologists has also given its support to the principles of delegation and wishes members to explore the potential of delegation to radiographers and other radiology staff [8]. There is no doubt that such training will have profound implications for the developing role of the radiographer. Radiographers play an integral part in front-line diagnosis and the acceptance of ‘skill-mix’ principles should increase and develop their professional standing.

It is encouraging that the recent Audit Commission Report [9] recognises that radiographers are capable of undertaking extended roles. The introduction of skill-mix principles will allow radiographers to expand and develop their roles.

Studies have compared radiographers’ abilities to extend their role into reporting against those of casualty officers. A study carried out in Ealing General and Northwick Park Hospitals showed that half the clinically important abnormalities wrongly interpreted by casualty officers were correctly interpreted by radiographers [10]. Important errors were made by casualty officers in the reporting of the chest, wrist, face, skull, and abdomen [11]. Errors made by junior radiologists were similar to those made by casualty officers, suggesting that films should be reported by consultant radiologists [12]. The ‘gold standard’ of the radiologist is not altogether foolproof [13]. De Lacey reported that 4% of radiologists’ reports were in error and 1% were equivocal; half of these errors were clinically important [14].

Swinburne [15] suggested that extending the role of the radiographer by means of teaching a pattern-recognition technique would help them to distinguish normal from abnormal films and help improve job satisfaction. His definition of pattern recognition states that ‘it allows an individual to identify whether a given situation is “normal” or “abnormal” without a prolonged and complex training’. The College of Radiographers also proposed that there should be discussions about extending radiography into the area of pattern recognition [16].

As suggested in Renwick’s article, such a pattern method has not yet been tried or tested [13].

METHODOLOGY

Instruments for data collection
1. A pre-tutorial questionnaire was given to all participating radiographers.
2. A pre-tutorial assessment sheet was introduced for completion and collection of baseline data before introduction of tutorials.
3. Following the tutorials, a post-tutorial assessment sheet was then completed with reference to the two pattern illustration handouts (Figs 1 and 2). These illustrations were designed by the author and the consultant radiologist specifically for this study. The only modification on this assessment sheet was the inclusion of a single column to identify the patterns introduced at tutorials.
4. A post-tutorial questionnaire was then completed.
5. The questionnaires were analysed using the Q-Aid Support System® (Initiative Software Applications). The data from pre- and post-tutorial assessments were entered and analysed using a customised Paradox® database (Borland).

[Figure 1. Routine examination chest X-ray: a six-step inspection sequence. 1. Radiographic screen (correct centring/exposure/rotation; penetration; inspiration to 10th posterior rib; tracheal position central). 2. Diaphragm/heart/mediastinum (? free air; heart size/shape/hilar vessels; disc spaces behind heart). 3. Pleural/edge (? pneumothorax/effusion; ? thickening/tagging). 4. Lung fields/posterior ribs (compare both sides; if abnormal: pneumothorax, effusion, collapse, ‘shadows’). 5. Anterior ribs/shoulder girdles (? fractures). 6. Neck/soft tissues (? gas in soft tissues; ? mastectomy; breast shadows). Also checked: ? previous chest X-ray; ? normal; position of tubes/wires/catheters; ? further views.]

[Figure 2. Pattern recognition: clinical management/intervention. 1. Pneumothorax (black; lung edge; ? expiration film, also useful if ? foreign body; ? tracheal position, tension, pushed; intervention: ? chest drain). 2. Effusion (‘haziness’, white; erect/supine appearance; supine penetrated film/erect film or decubitus film; ? tracheal position, pushed; intervention: ? chest drain/diagnosis). 3. Collapse (dense/segmental loss of lung volume; ? lateral film to locate; where are trachea/diaphragm/fissures/vascular markings?; ? tracheal position; intervention: ? medical management/investigation/bronchoscopy). 4. ‘Shadows’ (single: ? lateral film to locate; multiple: previous films/reports/progress; medical advice required, clinical information; intervention: further investigations). ? Previous film.]

Pattern recognition in chest dioguuphs

The questionnaires were constructed to allow for assessment of:
1. confidence levels
2. radiographic technique
3. radiographic opinions
4. role/status/skills
5. teaching/tutorials
6. value of pattern-recognition techniques.

However, there were certain questions specific to one or other of the questionnaires:
1. medical comment
2. usefulness of the pattern technique: usefulness, classification, value, inclusion in teaching.

Twenty-five radiographers took part in the pre-tutorial questionnaire, but one form was returned too late for analysis. Twenty-six radiographers were included in the analysis of the post-tutorial questionnaire.

In order to assist in the analysis, a style similar to Renwick’s triage system was applied [13]. Modifications were made to the format: ‘Abnormal’ in his study was replaced by ‘Significant’ in this study, and ‘Insignificant’ was considered to be more appropriate than ‘Normal’ because radiographers can often recognise abnormal variations which do not have significant clinical importance, such as old calcified glands or anatomical differences.

The radiographers’ assessments of the chest radiographs were compared with those of the six departmental consultant radiologists.

Sample sizes
Over a period of 5 weeks, a sample of 197 chest radiographs was taken prior to tutorials. After the tutorials, a final sample of 484 chest radiographs was taken over a period of 6 weeks.

Teaching method
The chest radiograph is undoubtedly difficult to interpret. However, much of the difficulty in interpretation would disappear if one ignored the differential diagnosis of pulmonary shadowing, as suggested by Adams [17]. There is the mistaken belief that one should specify the cause of pulmonary shadowing in all cases.

The radiographer is often requested to comment on the chest radiograph, especially in casualty departments, Health Centres and Intensive Care Units, during mobile radiography on the wards, and when checking radiographs in main departments. The radiographer should not be intimidated by such requests or feel constrained by medico-legal implications [18].

Those who suffer pneumothorax, effusion, or collapse form a group in which the radiological signs are specific. This group is particularly important because correct management, which might be life-saving, requires active intervention. This is precisely the situation needing the radiographer’s skill. Often the comfort of further advice is not readily available from more experienced medical or radiographic staff; for example, a pneumothorax presenting overnight or at the weekend in casualty cannot await a more experienced member of staff, and a decision must be made rapidly.

268 Hughes et al.

Table 1. General format for calculations: the fourfold (2 × 2) table

                                 Consultants’ reports for disease
                                 Present    Absent
Radiographic   Significant       a          b        a + b → positive predictive value
screen         Insignificant     c          d        c + d → negative predictive value
               Totals            a + c      b + d    a + b + c + d

The reason for this simplified approach is to allow the radiographer to concentrate on the detection and differentiation between pneumothorax, effusion, collapse, and pulmonary shadowing. Recognition of these groups determines the need for further radiographic views if necessary and highlights the urgency for clinical intervention.

‘Shadows’, particularly multiple shadows, have much less urgency attached to their recognition and can usually await a more formal report by the radiologist. A routine inspection of the chest radiograph should be carried out prior to any attempt at classification (Fig. l), and with practice, this inspection should take no more than 2 min. This technique is recommended for any departmental ‘checker’. Most examinations will be seen as insignificant (normal) at this stage. The radiographic screen for technical acceptance is of paramount importance, but must be assessed in the context of the patient’s clinical condition. Any tubes, wires, or catheters must be inspected for position, particularly the tips of these lines.

If an abnormality is seen on initial inspection, an attempt is made to classify it into one of four broad pathological groups-pneumothorax, effusion, collapse, and shadows-for further radiographic views if necessary and in order to appreciate its significance (Fig. 2).

The post-tutorial study was then carried out. In this part of the study the radiographers were encouraged to comment on significant appearances and to identify pneumothorax, effusion, collapse, and shadows. This allowed the radiologists to score the radiographers’ reports.

Definition of a screening test
‘A procedure where a relatively simple test is applied to individuals in order to make a presumptive identification of disease.’

Results were evaluated by considering sensitivity, specificity, and predictive power. Evaluation of a screening test using the fourfold table is demonstrated in Table 1 [19].

The population being screened will consist of two groups: 1. Those with disease 2. Those who do not have disease.

The screening test will be split into two groups: 1. Those screened positive (Significant) 2. Those screened negative (Insignificant)

Puttevn recognition in chest radiogvuphs 269

However, screening tests are not 100% infallible; therefore the population is divided by the screening test into four different groups:
1. Those with disease who are correctly identified as positive - TRUE POSITIVES (a)
2. Those without disease who are incorrectly screened as positive - FALSE POSITIVES (b)
3. Those with disease who are incorrectly screened as negative - FALSE NEGATIVES (c)
4. Those without disease who are correctly screened as negative - TRUE NEGATIVES (d)

Four possible combinations of test results and patient disease are as follows (consultants’ reports vs radiographers’ reports):

PATIENT POPULATION
  DISEASE PRESENT:
    SIGNIFICANT TEST (TRUE +VE) = a
    INSIGNIFICANT TEST (FALSE -VE) = c
  DISEASE ABSENT:
    SIGNIFICANT TEST (FALSE +VE) = b
    INSIGNIFICANT TEST (TRUE -VE) = d

a + c = Disease present
b + d = Disease absent
a + b = Those identified with disease by the screening test = screened by radiographer as SIGNIFICANT
c + d = Those identified as negative by the screening test = screened by radiographer as INSIGNIFICANT
a + b + c + d = TOTAL screened population
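In code, the four cells can be tallied directly from paired reports. The sketch below is illustrative only (the original analysis used a Paradox database, not Python): it assumes each case is reduced to a pair of flags, the radiographer’s screen and the radiologist’s report, and the function name and data layout are the author’s hypothetical choices.

```python
from collections import Counter

def tally_fourfold(pairs):
    """Count the four cells (a, b, c, d) of the fourfold table.

    Each pair is (screened_significant, disease_present): the first flag
    is the radiographer's screen, the second the consultant radiologist's
    report, taken here as the reference standard.
    """
    counts = Counter(pairs)
    a = counts[(True, True)]    # true positives: significant, disease present
    b = counts[(True, False)]   # false positives: significant, disease absent
    c = counts[(False, True)]   # false negatives: insignificant, disease present
    d = counts[(False, False)]  # true negatives: insignificant, disease absent
    return a, b, c, d

# Hypothetical example: six paired reports
pairs = [(True, True), (True, False), (False, True),
         (False, False), (True, True), (False, False)]
print(tally_fourfold(pairs))  # (2, 1, 1, 2)
```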

Sensitivity: (a/(a + c)) × 100 = % with disease correctly screened as positive by the screening procedure; i.e. % of radiographers’ reports correctly screened as significant.

Specificity: (d/(b + d)) × 100 = % of those without disease correctly screened as negative by the screening test; i.e. % of radiographers’ reports correctly screened as insignificant.

Predictive power of a positive test (+ve predictive value): (a/(a + b)) × 100 = the proportion of those with a positive test who had disease; i.e. % screened significant by radiographers which were reported ‘disease present’ by the radiologist.

Predictive power of a negative test (-ve predictive value): (d/(c + d)) × 100 = the proportion of those with a negative test who actually did not exhibit disease; i.e. % screened insignificant by the radiographer which were reported ‘disease absent’ by the radiologist.

Prevalence: (a + c)/(a + b + c + d) = the proportion of those in the screened population who actually have disease.
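These definitions translate directly into arithmetic on the four cells. A minimal sketch (function and dictionary key names are illustrative, not from the study):

```python
def screening_metrics(a, b, c, d):
    """Sensitivity, specificity, predictive values and prevalence (as %)
    from the fourfold table: a = TP, b = FP, c = FN, d = TN."""
    total = a + b + c + d
    return {
        "sensitivity": 100 * a / (a + c),    # % with disease screened significant
        "specificity": 100 * d / (b + d),    # % without disease screened insignificant
        "ppv": 100 * a / (a + b),            # % of 'significant' screens with disease
        "npv": 100 * d / (c + d),            # % of 'insignificant' screens without disease
        "prevalence": 100 * (a + c) / total, # % of screened population with disease
    }

# Worked example with round numbers: a=80, b=20, c=20, d=80
m = screening_metrics(80, 20, 20, 80)
print(m["sensitivity"], m["specificity"])  # 80.0 80.0
```

Note that prevalence affects the predictive values but not sensitivity or specificity, which is why both kinds of measure are reported in the study.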

Kappa: K = (observed proportion - expected proportion)/(1 - expected proportion), where the observed proportion is the proportion of cases in which agreement is observed between radiographer and radiologist, and the expected proportion is the proportion of cases in which we would expect to see agreement between radiographer and radiologist by chance.

The results of a screening test can be represented in a fourfold table (Table 1). The disease variable appears at the top (consultant radiologist’s report: Disease Present/Absent) and the screening variable appears at the left-hand side (radiographer’s report: Significant/Insignificant) [19].

DISCUSSION

Questionnaires (Tables 2, 4 and 6)
The questionnaires proved to be a quick and effective way of assessing staff views. They used the minimum of staff time, and were constructed to allow comparisons to be made between the pre- and post-tutorial questionnaires.

Confidence levels (Figs 3-7, Tables 3, 5 and 6)
Confidence levels, although initially high at 7.71, improved in the post-tutorial study to 8.73. There were no obvious differences between the various grades of radiographers.

Technique for checking a chest radiograph (Table 6)
The pre-tutorial questionnaire demonstrated that 19 out of a total of 25 of the pre-tutorial study radiographers had a technique for looking at the chest radiograph. Surprisingly, one still did not have a technique following the instructional programme or, perhaps, was just reluctant to use one!

In retrospect, question 2 of both the pre-tutorial study and the post-tutorial questionnaire regarding checking of a chest radiograph could have been more specific as several radiographers raised the question of whether this implied checking for technique or pathology. However, checking for pathological processes would seem less likely as the responses given indicated otherwise.

Opinions (Table 6)
The question of giving opinions and willingness to do so proved to be a complex one for the radiographic staff, and their responses showed very divided opinions. There was no doubt that radiographers felt able to give valuable opinions, with 22 in the pre-tutorial study answering positively and 26 in the post-tutorial study. However,


Table 2. Radiographers’ assessment of chest radiographs. Pre-tutorial questionnaire results

TOTAL RESPONDENTS: 25 OF 26 ISSUED

Question                                                                  YES   NO
1.  How confident do you feel about checking a chest radiograph?          Scaled response 1-10
2.  Do you have a technique for looking at a chest radiograph?            19    5
3.  Have you, at any time, been asked to comment on chest radiographs
    by a doctor or any medical staff?                                     25    0
4.  Do you believe that radiographers are able to give opinions on
    radiographs?                                                          22    2
5.  Are you happy about giving verbal opinions?                           3     21
6.  If you answered ‘NO’ to Question 5, is it because of:
    (a) Your own uncertainty or your own inexperience?                    16    1
    (b) Medico-legal implications?                                        18    1
    (c) Other reasons? (please state below)                               7     2
7.  Do you believe that radiographers should expand their professional
    role and broaden their knowledge by being able to recognise broad
    pathological groupings?                                               24    0
8.  Is interpretation of the chest radiograph adequately covered in
    standard teaching?                                                    4     20
9.  Would you value a ‘Pattern Recognition Technique’ for looking at
    chest radiographs?                                                    24    0
10. Do you believe that the radiographers’ skills are used to their
    full potential?                                                       0     24

when looking at willingness to give opinions, this division was much clearer. Only three radiographers in the pre-tutorial study were willing to give opinions, but this figure increased in the post-tutorial study to 14.

Medico-legal reasons remain a strong restraining factor: 18 radiographers in the pre-tutorial study indicated they were not happy to give opinions due to medico-legal reasons. This figure decreased to 11 radiographers in the post-tutorial study. One radiographer commented, ‘Responsibility would be placed on the radiographer for


Table 3. Radiographers’ assessment of chest radiographs. Pre-tutorial questionnaire results

Number of questionnaires returned = 25 of 26 issued
Average years qualified = 10.24
Average confidence level (Q1) = 7.71

Distribution of grades
Grade    Basic   Senior II   Senior I
Count    16      7           2

Average confidence level by grade
Grade    Basic   Senior II   Senior I
Level    7.63    8           7.5

Distribution of confidence levels
Level    <=5   5   6   7    8   9   10
Count    0     1   0   14   2   4   3

misleading medical staff, radiographers are not paid for this’. Other comments by the radiographers indicated that they felt they were not trained ‘well enough to make a diagnosis’ or ‘to pick out less well known pathologies’.

Several radiographers indicated that they do not get paid enough to assume the additional responsibility of interpreting radiographs.

Sixteen radiographers were not happy about giving opinions, and indicated uncertainty/inexperience as the reason. This figure decreased to three radiographers in the post-tutorial study. The radiographers’ uncertainty/inexperience had clearly improved following the tutorials.

Role/status/skills (Table 6)
All the radiographers wished to expand their professional role. Seventeen in the post-tutorial study felt they had achieved this by being able to recognise broad groupings in the chest radiograph. All radiographers in the pre-tutorial study questionnaire believed that their skills were not used to their full potential. Buckwell (1992) reported similar findings in her study on radiographers’ perception of their professional role [20].

In retrospect, the questions about whether the roles/status/skills of the radiographer had been expanded by the technique and had aided in the recognition of the broad pathological groupings, could have been more specific. The questionnaires could have allowed deletion of appropriate categories. The question was too broad for a simple yes or no answer and raised many questions by the participating radiographers about the relevance of each term to the answer.


Table 4. Radiographers’ assessment of chest radiographs. Post-tutorial questionnaire results

TOTAL RESPONDENTS: 26 OF 26 ISSUED

Question                                                                  YES   NO
1.  Following the tutorials on pattern recognition, how confident do
    you now feel about checking a chest radiograph?                       Scaled response 1-10
2.  Do you now have a technique for looking at a chest radiograph?        25    1
3.  Do you believe that the radiographers are able to give better
    opinions on chest radiographs following the tutorials?                26    0
4.  After the tutorials, are you happy about giving verbal opinions on
    chest radiographs?                                                    14    12
5.  If you answered ‘NO’ to Question 4, is it because of:
    (a) Your own uncertainty or your own inexperience?                    3     2
    (b) Medico-legal implications?                                        11    1
    (c) Other reasons? (please state below)                               1     3
6.  Was interpretation of the chest radiograph adequately covered in
    the tutorials?                                                        26    0
7.  As a result of your ‘Pattern Recognition Technique’ training, can
    you now recognise/classify simple broad groups?                       26    0
8.  Has your professional role/status/skill been expanded by being
    able to recognise broad pathological groupings?                       17    9
9.  Has the training in chest radiograph pattern recognition been
    useful to you in your daily work?                                     25    1
10. Would you value a similar approach to interpretation of other
    radiographs?                                                          26    0
11. Should a similar format for interpretation of chest radiographs be
    included in standard teaching?                                        25    1

The responses from these questions indicated that there is still room for improvement by radiographers. Even after the study, some felt they had not improved sufficiently to adopt the responsibility for such a proposed technique. On-going support and education are required for the full implementation of such developments.


Table 5. Radiographers’ assessment of chest radiographs. Post-tutorial questionnaire results

Number of questionnaires returned = 26 of 26 issued
Average years qualified = 10.24
Average confidence level (Q1) = 8.73

Distribution of grades
Grade    Basic   Senior II   Senior I   Superintendent IV
Count    16      7           2          1

Average confidence level by grade
Grade    Basic   Senior II   Senior I   Superintendent IV
Level    8.69    8.57        9          10

Distribution of confidence levels
Level    <8   8    9   10
Count    0    12   9   5

Table 6. Questionnaire results

                                             Pre-tutorial (25)   Post-tutorial (26)
1. Confidence levels                         7.71                8.73
2. Radiographic technique                    19                  25
3. Radiographic opinions
   able to give                              22                  26
   verbal willingness                        3                   14
   (uncertainty/inexperience)                16                  3
   (medico-legal)                            18                  11
   (other)                                   7                   1
4. Role/status/skills                        24                  17
5. Teaching/tutorials                        4                   26
6. Value of pattern-recognition techniques   24                  26

Questions specific to either one of the questionnaires
1. Medical comment                           25                  -
2. Usefulness of the pattern technique
   usefulness                                -                   25
   classification                            -                   26
   value                                     -                   26
   inclusion in teaching                     -                   25

[Figure 3. Distribution of confidence levels. Pre-tutorial study.]

[Figure 4. Average confidence level by grade. Pre-tutorial study.]

Teaching/tutorials (Table 6)
Prior to the tutorials only four radiographers claimed that the interpretation of the chest radiograph had been adequately covered in standard teaching. After the tutorials all of the radiographers felt that the technique was clarified.

Questions specific to one questionnaire
The remaining questions in the questionnaires had a non-common theme and were specific to the particular questionnaire.

Medical comment
All radiographic staff have at some time been asked to comment on a chest radiograph by medical staff. This answer alone justifies a full study of this topic and in particular is a strong justification for this project.

[Figure 5. Distribution of confidence levels. Post-tutorial study.]

[Figure 6. Average confidence level by grade. Post-tutorial study.]

Value of pattern recognition (Table 6)
The questionnaires proved to be a useful tool in collecting the radiographers’ general feelings about confidence, technique, their diagnostic opinions, role, status, skills, teaching and value of the technique. The majority of radiographers indicated the overall value of a pattern-recognition technique in terms of usefulness, classification and teaching. They expressed a very strong desire to adopt a similar approach to the interpretation of other radiographs; a very positive response to the pattern technique.

Tutorials
The tutorials proved to be very informative and allowed for educational interaction. They were well received by the radiographers, as indicated by the follow-up study questionnaire. The tutorials were limited to two 1-h sessions due to time constraints on radiographers and radiologist; tutorial time was considered a limiting factor and could have been usefully expanded, as there was a considerable amount of material to absorb.

[Figure 7. Distribution of confidence levels. Open bars = pre-tutorial; filled bars = post-tutorial.]

Table 7. Summary of false negatives (L = left; R = right)

Pre-tutorial study (General and A & E): false negatives = 13 (197 radiographs)
  Opacity (L) hilum
  Opacity (L) hilum 2.5 cm
  Collapse/consolidation
  Metastases/nodules
  Congestive cardiac failure
  Patchy opacity (L) base × 3
  Minimal fibrosis
  Chronic obstructive airways disease
  Emphysema
  Cardiomegaly/emphysema/oedema
  Requires follow up

Post-tutorial study (General and A & E): false negatives = 8 (484 radiographs)
  Opacity 1 cm (doubtful sig.)
  Hiatus hernia
  Minimal cardiac failure
  Mild congestion
  Minimal pleural/interstitial fluid
  Fibrosis/minimal failure
  Minimal failure/pacemaker
  Minimal infection (R) mid-zone

Discussion of the radiologist’s assessment of the radiographers’ reports
In order not to miss significant lesions in a screening procedure, false-negative reports are highly significant. False-positive results are less of a problem, as the intention of this study is to screen out positive disease for review and also for correlation with clinical assessment. The false-positive and false-negative reports are tabulated in order of clinical importance for the pre- and post-tutorial studies (Tables 7 and 8).

The pre-tutorial study was designed to assess the radiographers’ ability to pick out Significant and Insignificant appearances. No education was given at this stage. The results of the radiologist’s assessment highlighted several interesting points. In the pre-tutorial study, a significant false-negative was reported as ‘metastases/nodules’ by the radiologist. Although the radiographer failed to comment on this report, the

Table 8. Summary of false positives (R = right)

Pre-tutorial study (General and A & E): false positives = 50 (197 radiographs)
  Significant & advice required         = 19
  Infection                             = 11
  Enlarged heart                        = 8
  Left ventricular failure              = 3
  Raised (R) hemi-diaphragm             = 3
  Opacity                               = 2
  Chronic obstructive airways disease   = 3
  ?Mediastinum                          = 1

Post-tutorial study (General and A & E): false positives = 65 (484 radiographs)
  Shadows                               = 35
  Significant & advice required         = 8
  Enlarged heart                        = 6
  Left ventricular failure              = 1
  Raised (R) hemi-diaphragm             = 5
  Opacity                               = 3
  Chronic obstructive airways disease   = 2
  Effusion                              = 4
  Tracheal shift                        = 1

radiographer did indicate a need for ‘further advice’ on the assessment form. Two small opacities were missed by the radiographer and these opacities and the remaining false-negatives were all recorded as ‘insignificant’ and ‘no advice required’.

In the pre-tutorial study, the radiographers were only asked to pick out significant disease and usually indicated this in association with the category ‘Further advice required’. The radiographers spotted four of the reported six pneumothoraces in the pre-tutorial study and commented on the presence of only six of the 15 effusions. They failed to report on any of the three reported collapses.

If other comments were given, they were in the context of increased markings indicating infection/consolidation, enlarged heart, raised (right) hemi-diaphragm, congestive cardiac failure (CCF)/left ventricular failure (LVF), chronic obstructive airways disease (COAD), and masses. However, these comments were too infrequent to relate accurately to the radiologist's report.

In the post-tutorial stage following tuition, the radiographers were asked to classify significant chest radiographs into one of the four main groups: pneumothorax, pleural effusion, collapse, shadows. They were encouraged to comment on the chest radiographs.

A subjective radiologist's scoring of 0-10 was introduced in order to assess the accuracy of the radiographers' reports:

0 = wrong, 5 = intermediate, 10 = correct.

The average score for the General Department was 8.45, and 8.04 for the Accident & Emergency Department (Table 9). The significance of this scoring should not be over-emphasised. The general impression, however, was that overall scoring was reasonably high, with no obvious difference between radiographers' grades. This would perhaps refute the suggestion of a previous study that the radiographers' accuracy correlated reasonably well with seniority (Berman et al. 1985). However, this scoring was only carried out in the post-tutorial study. The radiologist felt that the radiographers' comments in the pre-tutorial stage were too general and infrequent and could not be scored easily against radiologists' reports. The post-tutorial stage, however, allowed the radiologist to score the radiographers' comments against the classification

group. When asked to comment, the radiographers performed well with the limited classification system.

Table 9. Scoring of radiographers. Post-tutorial study. General departments, Health centres and A&E (average score and count of radiographs, tabulated by years qualified). General Departments/Health Centres: total = 342 radiographs, average score = 8.45. Accident & Emergency: total = 142 radiographs, average score = 8.04.

In the post-tutorial departmental/Accident & Emergency study only one small opacity was missed and commented on as ‘doubtful’ by the radiologist. All of the false-negatives listed (Table 7) were recorded as insignificant and no advice required. These false-negatives were all of minimal degree and listed in order of clinical importance: minimal failure, congestion, fluid, fibrosis, infection. One hiatus hernia was not commented on by the radiographer. The post-tutorial study showed a notable improvement with no serious clinical omissions in the false-negative group. This was an encouraging post-tutorial analysis of the significance of ‘missed’ false-negative lesions.

280 Hughes et al.

All eight of the reported pneumothoraces were reported by the radiographers (8/8). They also reported on the majority of the effusions (23/30) and the majority of the collapses (12/14) with significant tracheal shift. Not surprisingly, 'shadows' was the major group listed.

The radiographers failed to comment on bony abnormalities. The most significant unreported bony abnormality was lytic ribs: no comment was given, although the radiographer did appreciate that a shadow was present, that it was significant and that further advice was required. There were no comments on old rib fractures (7) or on a fracture of the surgical neck of humerus (1). They also failed to comment on surgical emphysema (2), hiatus hernia (1), cervical rib (1), and numerous old TB changes/foci.

Bony abnormalities overall were poorly reported by the radiographers, but would have had little immediate effect on the clinical management of these patients and were not immediately classifiable into the four-point system. In retrospect, a bony classification may be a useful addition to such a study in order to concentrate the radiographers’ minds onto another area, outside the lung fields.

Overall, the reporting performance by the radiographers was excellent, within the defined parameters of significance and pathological grouping. There is no doubt, as in any study, that increasing practice and confidence will lead to improving results. Also increasing experience leads to a reduction in observer variation. Even allowing for time, practice and increasing confidence, however, there was a markedly significant improvement in the post-study analyses.

RESULTS

Pre- and post-tutorial sensitivity, specificity, positive and negative predictive values for the General Department/Health Centres and Accident & Emergency were compared (Tables 10-12).

There was overall improvement in the General Department/Health Centres: sensitivity increased from 86% to 91.5%; specificity from 54.7% to 83.5%; positive predictive values from 62.8% to 67.7%; and negative predictive values from 81.4% to 96.3% (Fig. 8). Kappa statistical analyses for the General Department demonstrated a significant improvement in the level of agreement between the radiographers and radiologists in rating chest radiographs, from a fair agreement of 0.4 in the pre-tutorial study to a substantial agreement of 0.68 in the post-tutorial study.
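The four indices quoted above follow directly from the 2x2 counts given in Tables 10 and 11. As a sketch (in Python; not part of the original study), the General Department/Health Centres figures can be reproduced from the raw true/false positive/negative counts:

```python
def screening_indices(tp, fp, fn, tn):
    """Return sensitivity, specificity and predictive values (as %) from 2x2 counts."""
    return {
        "sensitivity": 100 * tp / (tp + fn),  # abnormal films correctly flagged
        "specificity": 100 * tn / (tn + fp),  # normal films correctly passed
        "ppv": 100 * tp / (tp + fp),          # positive predictive value
        "npv": 100 * tn / (tn + fn),          # negative predictive value
    }

# General Department/Health Centres counts (Tables 10 and 11)
pre = screening_indices(tp=49, fp=29, fn=8, tn=35)
post = screening_indices(tp=86, fp=41, fn=8, tn=207)

print({k: round(v, 1) for k, v in pre.items()})
# pre-tutorial:  sensitivity 86.0, specificity 54.7, ppv 62.8, npv 81.4
print({k: round(v, 1) for k, v in post.items()})
# post-tutorial: sensitivity 91.5, specificity 83.5, ppv 67.7, npv 96.3
```

The same function applied to the Accident & Emergency counts reproduces the figures in the following paragraph.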

Accident & Emergency pre-tutorial and post-tutorial studies also demonstrated improving results, with sensitivity rising from 87.8% to 100%, specificity from 40% to 68%, positive predictive values from 63.2% to 73.6% and negative predictive values from 73.7% to 100% (Fig. 9).

Kappa statistical analyses for Accident & Emergency demonstrated a significant improvement in the level of agreement between the radiographers and the radiologists in rating chest radiographs, from a fair agreement of 0.29 in the pre-tutorial study to a substantial agreement of 0.67 in the post-tutorial study. In conclusion, the level of agreement in the General Department and the Accident & Emergency Department therefore increased substantially in the post-tutorial study.

The importance of the statistical analyses lies in the significant improvement in the predictive values overall. It is important to recognise that the prevalence of a disease can be an important factor in determining the predictive values of a screening test. As the


Table 10. Pre-tutorial study. General departments, Health centres and A&E


General departments and health centres
True positive (a) = 49
False positive (b) = 29
False negative (c) = 8
True negative (d) = 35
Total chest radiographs = 121

Sensitivity = 86%
Specificity = 54.7%
Positive predictive power = 62.8%
Negative predictive power = 81.4%
Prevalence = 0.47
Kappa = 0.4 (Fair agreement)

Accident & emergency
True positive (a) = 36
False positive (b) = 21
False negative (c) = 5
True negative (d) = 14
Total chest radiographs = 76

Sensitivity = 87.8%
Specificity = 40%
Positive predictive power = 63.2%
Negative predictive power = 73.7%
Prevalence = 0.54
Kappa = 0.29 (Fair agreement)

prevalence of the abnormality in the population being screened decreases, there is a strong tendency for the predictive value of the positive test to decrease, and a lesser tendency for the predictive value of the negative test to increase. With the prevalence in our study decreasing in the post-tutorial studies, this would imply that the positive-predictive values should also decrease and the negative-predictive values increase. Our results for positive-predictive values are contrary to this expected change: although prevalence decreased from pre-tutorial to post-tutorial, the positive-predictive values increased. This suggests that the radiographers performed better in the post-tutorial study than the stated figures would imply. This additional observation reinforces our overall finding that the tutorials were beneficial.
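This prevalence effect can be made concrete with Bayes' rule. As an illustrative sketch (not from the original study), holding the post-tutorial General Department sensitivity (91.5%) and specificity (83.5%) fixed and varying only the prevalence:

```python
def ppv(sens, spec, prev):
    """Positive predictive value from sensitivity, specificity and prevalence (Bayes' rule)."""
    tp = sens * prev               # rate of true positives in the screened population
    fp = (1 - spec) * (1 - prev)   # rate of false positives
    return tp / (tp + fp)

sens, spec = 0.915, 0.835
# Pre-tutorial prevalence was 0.47; post-tutorial prevalence fell to 0.27.
print(round(ppv(sens, spec, 0.47), 3))  # 0.831
print(round(ppv(sens, spec, 0.27), 3))  # 0.672
```

At fixed accuracy, the positive predictive value falls from about 0.83 to about 0.67 purely because prevalence drops; the observed rise from 62.8% to 67.7% therefore understates the post-tutorial improvement, as argued above.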

In any screening procedure, it is important to be aware of the false-positive and particularly the false-negative values, as these may be overlooked if not reviewed. The false-positive results are less problematical as the intention of this study was to screen out positive disease for review and also for correlation with the clinical assessment. The results demonstrated an improvement in both the false-positive and more importantly the false-negative diagnoses.

In the General Department and Health Centres, the false-positive values decreased from 45.3% pre-tutorial to 16.5% post-tutorial and similarly in Accident & Emergency the false-positive value fell from 60% pre-tutorial to 32% post-tutorial (Table 12).


Table 11. Post-tutorial study. General departments, Health centres and A&E

General departments and health centres
True positive (a) = 86
False positive (b) = 41
False negative (c) = 8
True negative (d) = 207
Total chest radiographs = 342

Sensitivity = 91.5%
Specificity = 83.5%
Positive predictive power = 67.7%
Negative predictive power = 96.3%
Prevalence = 0.27
Kappa = 0.68 (Substantial agreement)

Accident & emergency
True positive (a) = 67
False positive (b) = 24
False negative (c) = 0
True negative (d) = 51
Total chest radiographs = 142

Sensitivity = 100%
Specificity = 68%
Positive predictive power = 73.6%
Negative predictive power = 100%
Prevalence = 0.47
Kappa = 0.67 (Substantial agreement)

It is of interest that Renwick's triage study concluded that, because of high false-positive diagnoses, radiographers could not triage casualty radiographs sufficiently accurately to allow them to extend their current reporting role [13]. The high false-positive diagnoses found in that study and in this pattern-recognition study [1] may indicate that the radiographers are over-reporting. But is it not safer in such screening procedures to err on the side of safety? After the pattern tutorials, the false-positive value decreased. Gleadhill et al. stated, 'false-positive diagnoses, although unfortunate, rarely cause more than annoyance and were considered unimportant. On the other hand, false-negative diagnoses could be clinically important or clinically unimportant' [21]. These assumptions imply that the false-negative diagnoses have to be looked at more critically, particularly in a screening procedure.

Renwick's study highlighted a false-negative value of 21% for chest radiographs. In this pattern-recognition study, the false-negative values for the General Department/Health Centres decreased from 14% pre-tutorial to 8.5% post-tutorial. The radiologist commented in the post-tutorial study that there were 'no serious clinical omissions in the false-negative group'. Accident & Emergency demonstrated a significant decrease in the false-negative value, from 12.2% pre-tutorial to 0% post-tutorial (Table 12).
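The false-positive and false-negative percentages quoted here and in Table 12 are simply FP/(FP + TN) and FN/(TP + FN). A minimal check in Python (a sketch, not part of the original study; the counts are those of Tables 10 and 11):

```python
def error_rates(tp, fp, fn, tn):
    """False-positive rate (share of normal films flagged) and
    false-negative rate (share of abnormal films missed), as %."""
    return 100 * fp / (fp + tn), 100 * fn / (tp + fn)

# General Department/Health Centres
print([round(x, 1) for x in error_rates(49, 29, 8, 35)])   # pre:  [45.3, 14.0]
print([round(x, 1) for x in error_rates(86, 41, 8, 207)])  # post: [16.5, 8.5]
# Accident & Emergency
print([round(x, 1) for x in error_rates(36, 21, 5, 14)])   # pre:  [60.0, 12.2]
print([round(x, 1) for x in error_rates(67, 24, 0, 51)])   # post: [32.0, 0.0]
```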


Table 12. Summary of results from pre- and post-tutorial studies

                                 Abnormal chest radiographs         Normal chest radiographs
                                 True +ve   False -ve (% abnormal)  True -ve   False +ve (% normal)   Total

General Department
  Pre-tutorial                   49         8 (14%)                 35         29 (45.3%)             121
  Post-tutorial                  86         8 (8.5%)                207        41 (16.5%)             342
  Total chest radiographs        135        16                      242        70                     463

Accident & Emergency
  Pre-tutorial                   36         5 (12.2%)               14         21 (60%)               76
  Post-tutorial                  67         0 (0%)                  51         24 (32%)               142
  Total chest radiographs        103        5                       65         45                     218

Overall total chest radiographs  238        21                      307        115                    681

Figure 8. Results of chest radiograph study. Main departments and health centres. Bars compare pre- and post-tutorial sensitivity, specificity and positive/negative predictive values. □ = pre-tutorial; ■ = post-tutorial.

This study refutes the previous conclusion of Renwick’s study [13]. Pattern recogni- tion allowed our radiographers to improve on their false-positive and false-negative values. A high false-positive diagnosis should not deter any radiographer from extending their reporting role. These false-positives, however, should be assessed in the clinical context, and will require formal reporting.

The Leeds College of Health pilot study on role extension is now in its second and final year [22]. This study is currently assessing the validity of training selected radiographers to carry out plain-film reporting. Their study centres on a broader remit relating to reporting tasks; areas of their study will include principles of pattern recognition, decision making and audit issues. Intensive training of such a selected group of radiographers should produce favourable results.

However, this should not deter any radiographer from adopting a similar pattern- recognition approach. With training, our unselected group of radiographers of different grades and years qualified demonstrated encouraging results.


Figure 9. Results of chest radiograph study. Accident & Emergency. Bars compare pre- and post-tutorial sensitivity, specificity and positive/negative predictive values. □ = pre-tutorial; ■ = post-tutorial.

The comparison of the pre- and post-tutorial results demonstrated the overall success of the pattern-recognition technique. The radiographers proved to be more positive and accurate in their assessment of the post-tutorial radiographs. Their ability to pick out true-positive and true-negative diagnoses increased which resulted in a decrease in the false-positive and, most importantly, the false-negative diagnoses.

The pattern recognition technique has highlighted the latent capabilities of our radiographers. The results of this study have reinforced its value. Radiographers' skills should not go unnoticed. Fielding states that radiologists must accept that radiographers have the ability to recognise abnormalities and, with limited resources, it must be right to exploit this untapped skill to improve patient care [23].

CONCLUSIONS

The aims and objectives of this study were achieved. The pattern-recognition technique undoubtedly allowed assessment of the significance of chest radiograph appearances and their classification into the four pathological groups.

The radiographers' assessments of chest radiographs were analysed before and after a pattern-recognition technique was introduced, and compared with the radiologist's reports. Improvement was demonstrated by the radiographers' ability to pick out significant/insignificant radiographs and confirmed by the analysis of sensitivity, specificity and predictive values. False-positive and, most importantly, false-negative results decreased. No serious omissions were made in the false-negative group in the post-tutorial study.

The questionnaires determined that radiographers considered the pattern technique as extremely valuable. Such developments are decisive, positive steps towards skill-mix principles.

Pattern recognition is a valuable technique and should play an integral part in developing and extending the role of the radiographer. The confidence that radiographers acquire by developing such a technique will improve job satisfaction. This will improve the radiographers' self-image and ultimately project a more positive professional image.


ACKNOWLEDGEMENTS


I thank the following for their help in compiling this project: Mr M. Wilson, Business Manager, Mrs A. MacLennan, Superintendent Radiographer, and all radiographic staff of Monklands Trust who participated and made this study possible.

REFERENCES

1. Hughes H. A study to evaluate the introduction of a pattern recognition technique for chest radiographs. H.D.C.R. Module F. College of Radiographers, 1995.
2. Anderson J. Red dots enhance the job. Radiography Today 1991; 57(654): 31.
3. Unions attack red dot scheme. Radiography Today 1991; 57(651): 1.
4. Salt D, Arundel M. Red dragon expertise (Editorial). Radiography Today 1991; 57(650): 32.
5. Cheyne N, Field-Boden Q, Hall R. The radiographer and the front line diagnosis. Radiography 1987; 53(609): 114.
6. Radiographer reporting comes full circle (Editorial). Radiography Today 1994; 60(680): 46.
7. X-ray examination requests by nursing practitioners and radiographic reporting by radiographers. College of Radiographers, 1995.
8. Medico legal aspects of delegation. Royal College of Radiologists, 1993.
9. Improving your image: how to manage radiology services more effectively. HMSO, 1994.
10. Berman L, De Lacey G, Twomey E, Twomey B, Welch T, Eban R. Reducing errors in the accident and emergency department: a simple method using radiographers. Br Med J 1985; 290: 421-2.
11. Wardrope J, Chennels PM. Should all casualty radiographs be reviewed? Br Med J 1985; 290: 1638-40.
12. Seltzer SE, Hessel SJ, Herman PG, Swenson RG, Sheriff CR. Resident film interpretations and staff review. Am J Radiol 1981; 137: 129-33.
13. Renwick IGH, Butt WP, Steele B. How well can radiographers triage x-ray films in accident and emergency departments? Br Med J 1991; 302: 568-9.
14. De Lacey G, Barker A, Harper J, Wignall B. An assessment of the clinical effects of reporting accident and emergency radiographs. Br J Radiol 1980; 53: 304-9.
15. Swinburne K. Pattern recognition for radiographers. Lancet 1971; i: 589-90.
16. Why radiography should be undertaken by radiographers (Editorial). Radiography Today 1992; 58(660): 21.
17. Adams FG. Simplified approach to the reporting of intensive therapy unit chest radiographs. Clin Radiol 1979; 30: 219-26.
18. Annual Report. London: Medical Defence Union, 1988: 21.
19. Stempel LE. Eenie, meenie, minie, mo... What do the data really show? Am J Obst Gynaecol 1982; 144: 745-52.
20. Buckwell E. Radiographers' perception of their professional role. Module F. College of Radiographers. Radiography Today 1992; 58(667): 16-9.
21. Gleadhill DNS, Thomson JY, Simms P. Can more efficient use be made of x-ray examinations in the accident and emergency department? Br Med J 1987; 294: 943-7.
22. Road to hot reporting (Editorial). Radiography Today 1992; 58(657): 30.
23. Fielding JA. Improving accident and emergency radiology. Clin Radiol 1990; 41: 149-59.

APPENDIX: STATISTICAL SUMMARY

Analysis of the data gathered is directed towards assessing, statistically, the agreement between two groups of observers when evaluating sets of chest radiographs. In this case the two groups of observers are the radiographers and the radiologists. The simplest measure in such an instance is the proportion of cases in which the two groups agree. The obvious disadvantage of this approach is that it does not account for those cases in which the two groups would agree by chance. In order to correct for such chance agreement, the Kappa statistic is calculated.


Kappa measures the agreement between the evaluations of two observers when both are rating the same object. The difference between the proportion of observed agreement in cases and the proportion of agreement expected purely by chance is divided by the maximum difference possible between the observed and expected proportions, given the marginal totals. A value of 1 indicates perfect agreement. A value of 0 indicates that agreement is no better than chance.

Interpretation of this statistic is as follows:

Kappa statistic   Interpretation
<0                Poor agreement
0-0.20            Slight agreement
0.21-0.40         Fair agreement
0.41-0.60         Moderate agreement
0.61-0.80         Substantial agreement
≥0.81             Almost perfect agreement

Kappa is calculated as follows:

K = (Observed proportion - Expected proportion)/(1 - Expected proportion)

Observed proportion of agreement = proportion of cases in which agreement is observed between radiographer and radiologist.
Expected proportion of agreement = proportion of cases in which we would expect to see agreement between radiographer and radiologist purely by chance.
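This calculation can be sketched in Python (not part of the original study); it reproduces the worked examples that follow directly from the raw 2x2 counts:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both rate abnormal, b = radiographer abnormal / radiologist normal,
    c = radiographer normal / radiologist abnormal, d = both rate normal."""
    n = a + b + c + d
    observed = (a + d) / n
    # chance agreement from the marginal (row and column) proportions
    expected = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (observed - expected) / (1 - expected)

print(round(cohen_kappa(49, 29, 8, 35), 2))   # General, pre-tutorial  -> 0.4
print(round(cohen_kappa(86, 41, 8, 207), 2))  # General, post-tutorial -> 0.68
```

The same function applied to the Accident & Emergency counts gives 0.29 (pre-tutorial) and 0.67 (post-tutorial), matching the worked examples below.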

Results

General Department: Pre-tutorial

                        Radiologist
                        Abnormal     Normal       Totals   Row proportion
                        radiograph   radiograph
Radiographer
  Abnormal radiograph   49           29           78       0.645
  Normal radiograph     8            35           43       0.355
Totals                  57           64           121
Column proportion       0.471        0.529

Observed proportion = (49 + 35)/121 = 0.694
Expected proportion = (0.645 × 0.471) + (0.355 × 0.529) = 0.492
Kappa = (0.694 - 0.492)/(1 - 0.492) = 0.202/0.508 = 0.4, i.e. fair agreement.


General Department: Post-tutorial


                        Radiologist
                        Abnormal     Normal       Totals   Row proportion
                        radiograph   radiograph
Radiographer
  Abnormal radiograph   86           41           127      0.371
  Normal radiograph     8            207          215      0.629
Totals                  94           248          342
Column proportion       0.275        0.725

Observed proportion = (86 + 207)/342 = 0.857
Expected proportion = (0.371 × 0.275) + (0.629 × 0.725) = 0.558
Kappa = (0.857 - 0.558)/(1 - 0.558) = 0.299/0.442 = 0.68, i.e. substantial agreement.
The level of agreement in the General Department between the radiographers and radiologists in rating chest radiographs has therefore increased post-tutorial.

Accident & Emergency Department: Pre-tutorial

                        Radiologist
                        Abnormal     Normal       Totals   Row proportion
                        radiograph   radiograph
Radiographer
  Abnormal radiograph   36           21           57       0.75
  Normal radiograph     5            14           19       0.25
Totals                  41           35           76
Column proportion       0.539        0.461

Observed proportion = (36 + 14)/76 = 0.658
Expected proportion = (0.75 × 0.539) + (0.25 × 0.461) = 0.52
Kappa = (0.658 - 0.52)/(1 - 0.52) = 0.138/0.48 = 0.29, i.e. fair agreement.


Accident & Emergency Department: Post-tutorial

                        Radiologist
                        Abnormal     Normal       Totals   Row proportion
                        radiograph   radiograph
Radiographer
  Abnormal radiograph   67           24           91       0.641
  Normal radiograph     0            51           51       0.359
Totals                  67           75           142
Column proportion       0.472        0.528

Observed proportion = (67 + 51)/142 = 0.831
Expected proportion = (0.641 × 0.472) + (0.359 × 0.528) = 0.492
Kappa = (0.831 - 0.492)/(1 - 0.492) = 0.339/0.508 = 0.67, i.e. substantial agreement.
The level of agreement in the Accident & Emergency Department between the radiographers and radiologists in rating chest radiographs has therefore increased post-tutorial.