
Computer Methods and Programs in Biomedicine 43 (1994) 227-237


Segmenting radiographs of the hand and wrist

G.K. Manos a,*, A.Y. Cairns a, I.W. Rickets a, D. Sinclair b

aMicrocomputer Centre, Department of Mathematics and Computer Science, University of Dundee, Dundee DD1 4HN, Scotland, UK

bDepartment of Diagnostic Radiology, Ninewells Hospital, Dundee DD1 9SY, Scotland, UK

Abstract

The work presented in this paper concerns the development of computer-based techniques for the segmentation of hand-wrist radiographs and in particular those obtained for the TW2 method for the assessment of skeletal maturity (bone age). The segmentation method is based on the concept of regions and it consists of region growing and region merging stages. A bone extraction stage follows, which labels regions as either bone or background using heuristic rules based on the grey level properties of the scene. Finally, a technique is proposed for the segmentation of bone outlines which helps in identifying conjugated bones.

Key words: Segmentation; Hand radiographs; Image analysis; Computer vision

1. Introduction

For centuries the hand has been known as a mirror of disease. Accordingly, a hand radiograph reflects a wide range of disease states [1]. One of the most important examinations carried out on hand radiographs is the evaluation of skeletal maturity (bone age). Evaluation of skeletal maturation is useful between the ages of 1 and 18 years. By studying the osseous development of the hand, one can distinguish, prior to puberty, those children who will mature early and those whose growth will be delayed; hence the use of bone age in diagnosing and monitoring the treatment of endocrinological problems. Various methods have been developed for assessing skeletal maturity, the majority of them based on the examination of radiographs of the left hand and wrist.

*Corresponding author: Advanced Technologies Department, INTRASOFT S.A., Adrianiou 2 Str., Athens 115 25, Greece.

One of the most reliable methods is the TW2 method [2]. This method requires the comparison of 20 bones with descriptions and reference images established by the originators of the method, and is thus considered laborious and time consuming by physicians. In order to automate this method, one of the most important tasks is the segmentation of the hand-wrist bones, i.e. the delineation of bones from soft-tissue and background. TW2 radiographs are characterised by their varying scene content, since cartilage has not been transformed into bone in very young children, whilst, at the other extreme, adolescent children have fully ossified bones conjugated with one another. It is the varying content of the hand anatomy which is of particular interest in 'bone age' assessment, and at the same time it is one of the major obstacles to performing reliable segmentation.

0169-2607/94/$07.00 © 1994 Elsevier Science Ireland Ltd. All rights reserved. SSDI 0169-2607(93)1493-Y

Most practical image analysis systems are limited to problem domains where simplifications to the analysis process exist or can be introduced, as for example in the case of constrained industrial scenes. Segmentation and interpretation of biomedical images is particularly problematic due to structural and temporal variations in the morphology of objects and structures [3]. Since images used in the TW2 method depict biological objects, an infinite set of object shapes that support the same significant features is possible. Additional problems are posed by the non-optimal exposure of radiographs, image 'fogging' due to X-ray scattering, and by the appearance of internal structural detail caused by the penetrating nature of the radiation.

Segmentation algorithms are usually based on two basic image properties: discontinuity and similarity. Segmentation methods based on grey level discontinuity are referred to as edge-based, whilst those based on grey level similarity are referred to as region-based. A common approach to the segmentation of scenes by edge-based methods is to first obtain an edge map of the image using an edge detector, and then group these edges into more elaborate boundaries using knowledge about the shapes of the sought objects and structures. Region-based methods use the 'dual' of edge-based detection, in which regions are isolated not by determining their boundaries but by their interiors. Segmentation, then, is based on the assumption that there are features of the regions that can be exploited to distinguish them from each other.

Most of the approaches so far for the segmentation of hand-wrist radiographs have been edge-based, but with limited success. However, most of the successful edge-based segmentation applications have been for scenes for which a well-defined model of the scene boundaries is available. Region-based segmentation has been used successfully in scenes of variable content and also for scenes for which there is a lack of explicit boundary information. TW2 scenes have these characteristics and therefore a region-based approach to segmentation is considered a feasible alternative to edge-based segmentation.

This paper presents a region-based approach, consisting of region growing and region merging stages. Edge information is also integrated into the region merging process to verify and, where necessary, correct region boundaries. The labelling of regions as bone or background is based on the natural constraints of the scene.

2. Skeletal maturation

Radiographs of the left hand and wrist are used by the majority of the methods for assessing bone age. Although not completely representative of the maturation of the entire skeleton, they are satisfactory in most clinical situations. Two of the most commonly used methods are the Greulich and Pyle atlas [4] and the TW2 method [2]. TW2 is generally believed to be more flexible and to derive from a more solid mathematical base than the atlas method [5]. In the TW2 method 20 bones are used for the examination. Each bone is classified into 8 or 9 stages (labelled A-I), to which scores are attached. Ratings are assigned by comparing the bone under examination with descriptions, reference radiographs and diagrams presented in the TW2 manual.

Fig. 1 presents a model of the hand showing the effect of growth on the visual appearance of the hand bones in radiographs. The varying part of the hand and wrist anatomy comprises 28 bones which are absent at birth; at some stage after birth they begin to ossify, then continue to grow until they achieve full maturity (typically by the age of 17 years for boys and usually earlier for girls). However, there are 21 long tubular bones which retain roughly the same shape, although they vary considerably in size, from birth to adolescence. If the TW2 method is to be automated the following tasks will be required: segmentation of bones, i.e. extraction of bones from soft-tissue and background; recognition of individual bones; feature extraction and classification. These tasks should be performed reliably regardless of the age of the patient.

[Fig. 1 labels: Distal V, Distal IV, Middle V, Proximal V, Proximal III, Distal I, Proximal I, Hamate, Capitate, Triquetral, Lunate, Ulna, Radius, Scaphoid, Trapezium. Key: RUS invariant bones; carpal bones; epiphysis; areas varying from empty space to bones articulated with, or fused to, each other.]

Fig. 1. A model of a hand-wrist radiograph depicting the visual appearance of the bone structures during growth.

3. Previous work

Most of the work on the image analysis of hand radiographs is aimed at producing systems for the automatic assessment of skeletal maturity. One of the first published studies is directly concerned with automating the TW2 method [6-13]. The segmentation of the scene was performed by first histogram-equalising the image and then obtaining a multi-level thresholded image using a method based on the index of fuzziness and entropy of the image. The next step was to detect the edges in the thresholded image and thus obtain outlines of various areas of the image. After edge detection three steps followed: representation of the obtained image contours by their respective chain codes; smoothing of the chain codes to remove wiggles; and assignment of a degree of 'arcness' to each smoothed chain in order to extract curvature primitives for description and classification. Two methods were developed for classification: hierarchical syntactic recognition using a context-free grammar and syntactic recognition using a fuzzy context-free grammar. Results of the work were demonstrated on an image of the radius area.

Other research on the automation of TW2 is reported in [14,15]. This work was concerned with the segmentation of carpal bones only, using an edge-based approach comprising edge detection and edge linking. Mathematical morphology operators were used to eliminate irrelevant edges. Classification was performed by comparing the shape of the bone under examination with pre-stored shapes corresponding to various stages of maturity of the particular bone. The segmentation method was demonstrated on images of well-separated carpal bones. Another edge-based method for the segmentation of TW2 radiographs has been proposed in [16]. The Laplacian of Gaussian (LoG) operator was used for edge detection and an edge tracking technique was used to obtain closed contours. For each region enclosed by a closed contour a set of statistics was extracted, which was used to combine similar neighbouring regions into larger regions and then label them as bone or background using a heuristic cost function. According to the authors, several bones were not detected or parts of bones were missing, mainly due to errors during the edge linking stage.

Another method for the segmentation of hand bones has been proposed in [17]. The first step of this method consisted of a model-based histogram modification process, used to stretch the original image histogram in order to obtain a new histogram with three well-separated peaks corresponding to background, soft-tissue and bone. The five fingers are identified using a horizontal profile searching technique. Subsequent segmentation is achieved by a one-dimensional edge operator and a contour following technique. Results were presented showing segmentation of the 3rd proximal phalange. According to the authors, problems arise with the edge follower, which fails at noisy edges. They conclude that a clinical system would likely require human interaction.

In [18] a technique for the analysis of hand radiographs is described, based on a hand model obtained through the use of population statistics of hand measurements (length, width, thickness, etc.). Cues obtained by processing the image, for example the width of antiparallel edges which might signify the shaft of a long tubular bone, were compared with predictions derived from the model and matching then followed. Results of bone segmentation and details of the algorithms and techniques used were not reported by the authors.

Finally, another study involving the analysis of hand radiographs was reported in [19,20]. It involved the diagnosis of rheumatoid arthritis by detecting changes in the periarticular contours of the finger bones. Due to the diffuse nature of bone boundaries an interactive approach to segmentation was favoured, by which an operator places, with a light pen, up to 200 control points on the bone boundary. These points were then corrected to their optimum position by a local edge detector and linked to form the bone boundary by a cardinal spline technique. A polar signature representing the shape of the bone was then used for the detection of changes in shape due to rheumatoid arthritis.

Image segmentation is an important part of the automation process and has been addressed by all researchers in the field. However, no satisfactory solution to this problem has been found so far. The majority of the segmentation techniques were edge-based. These techniques have produced good results only in the case of well-isolated bones with distinct boundaries, which usually occurs only in radiographs of very young patients. One of the most important issues regarding the use of edge-based methods for the segmentation of hand-wrist radiographs is the difficulty of linking edges to form bone boundaries. This is mainly attributed to the following factors: shape characteristics of the bone boundaries cannot be accurately predicted since they depend on the degree of skeletal maturation; the scenes contain a large number of edge junctions caused by internal bone texture, fusion or capping between bones, and palmar and dorsal surfaces; and bone boundaries can be shallow (just a small rise in optical density) and diffuse.

Region-based segmentation has produced promising results in scenes of variable content and also for scenes for which there is a lack of explicit boundary information, as in the case of TW2 scenes. Such applications include natural scene segmentation [21], aerial imagery segmentation [22,23] and also the segmentation of digital modality images (CT and MRI scans) [24,25]. In the work described here it is demonstrated that a region-based approach to the segmentation of hand-wrist radiographs is a feasible alternative to an edge-based approach. However, experience in image analysis has shown that it is unlikely that any single segmentation process will produce a description that is adequate for an unambiguous interpretation of the image. Rather, multiple processes are required, each of which produces a description that may be incomplete and erroneous [21]. The edge map of the scene provides a representation of the image complementary to that produced by a region map. The edge representation can then be integrated with the region representation to produce a more accurate segmentation.

4. System description

The processing approach used is 'bottom-up' or 'image data driven', in which each stage of processing yields data for the next. A block diagram of the system processes is depicted in Fig. 2. The first step is to digitise the radiographs at an approximate resolution of 0.12 mm/pixel using a camera and light-box arrangement. Since the available spatial resolution was limited by the frame-grabber to 256 × 256 pixels/frame, only small areas of the radiograph could be digitised at the required resolution. After digitisation, a preprocessing step follows; this produces two images, an edge map image (edge magnitude and direction) and a smoothed image. The edge image is used at the second, region merging, stage and the smoothed image as input to the first stage of the region segmentation process. Segmentation is based on forming, as reliably as possible, elementary regions and then combining them to form larger structures which begin to resemble the expected object structures. Following that there is a region labelling stage, which extracts bones from the background, and a stage in which an attempt is made to segment the outlines of the bones; the latter might help to identify conjugated bones.

Fig. 2. Block diagram of the system.

5. Pre-processing

For edge detection the Canny edge detector (Gaussian smoothing, convolution with the first derivative of a Gaussian, and non-maximal suppression) was used [26], due to its ability to provide good edge localisation and a single edge response, which are very important for the combination of edge and region information for region merging purposes. Fig. 3b shows the edge map of the original digitised image of Fig. 3a.
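As a rough illustration of the three steps just named, the following pure-Python sketch smooths a synthetic step edge, takes central-difference gradients, and applies non-maximal suppression along the gradient direction. The grid, parameters and tie-breaking rule are illustrative choices, not taken from the paper.

```python
import math

def gaussian_kernel(sigma=1.0, radius=2):
    """1-D Gaussian kernel, normalised to sum to 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(img, sigma=1.0, radius=2):
    """Separable Gaussian smoothing with border clamping."""
    h, w = len(img), len(img[0])
    k = gaussian_kernel(sigma, radius)
    rows = [[sum(k[i + radius] * img[y][min(max(x + i, 0), w - 1)]
                 for i in range(-radius, radius + 1)) for x in range(w)]
            for y in range(h)]
    return [[sum(k[i + radius] * rows[min(max(y + i, 0), h - 1)][x]
                 for i in range(-radius, radius + 1)) for x in range(w)]
            for y in range(h)]

def gradient(img):
    """Central-difference gradient magnitude and direction (borders left zero)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag[y][x] = math.hypot(gx, gy)
            ang[y][x] = math.atan2(gy, gx)
    return mag, ang

def non_max_suppress(mag, ang):
    """Keep only pixels that are local maxima along the gradient direction;
    ties are broken towards one side so a symmetric edge yields one response."""
    h, w = len(mag), len(mag[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = ang[y][x] % math.pi          # direction modulo 180 degrees
            if a < math.pi / 8 or a >= 7 * math.pi / 8:
                n1, n2 = mag[y][x - 1], mag[y][x + 1]
            elif a < 3 * math.pi / 8:
                n1, n2 = mag[y - 1][x - 1], mag[y + 1][x + 1]
            elif a < 5 * math.pi / 8:
                n1, n2 = mag[y - 1][x], mag[y + 1][x]
            else:
                n1, n2 = mag[y - 1][x + 1], mag[y + 1][x - 1]
            if mag[y][x] > n1 and mag[y][x] >= n2:
                out[y][x] = mag[y][x]
    return out

# A vertical step edge: after suppression the response is one pixel wide.
step = [[0 if x < 3 else 10 for x in range(7)] for _ in range(7)]
mag, ang = gradient(smooth(step))
edges = non_max_suppress(mag, ang)
```

The single-pixel response is the property the region merging stage relies on, since each region boundary pixel is compared against at most one registered edge pixel.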

A very useful pre-processing step was found to be the edge-preserving smoothing filter developed by Nagao and Matsuyama [22]. This smoothing operator has the ability to remove noise without blurring sharp edges and also to enhance blurred edges. The former characteristic helps the pixel agglomeration process during region growing, whilst the latter, by enhancing faint bone boundaries, ensures that the erroneous merging of bone with background is avoided during region growing. Fig. 3c shows the resulting image after the smoothing stage.
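The idea behind this class of filter can be sketched as follows: each pixel is replaced by the mean of whichever nearby window has the lowest variance, so averaging never straddles an edge. This is a Kuwahara-style simplification using four 3x3 quadrant windows, not Nagao and Matsuyama's exact nine-mask operator.

```python
def edge_preserving_smooth(img):
    """Replace each interior pixel by the mean of the lowest-variance of the
    four 3x3 quadrant windows that contain it (a Kuwahara-style simplification
    of the Nagao-Matsuyama operator); a 2-pixel border is left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            best = None                      # (variance, mean) of best window
            for dy, dx in ((-2, -2), (-2, 0), (0, -2), (0, 0)):
                vals = [img[y + dy + i][x + dx + j]
                        for i in range(3) for j in range(3)]
                mean = sum(vals) / 9.0
                var = sum((v - mean) ** 2 for v in vals) / 9.0
                if best is None or var < best[0]:
                    best = (var, mean)
            out[y][x] = best[1]
    return out
```

On a clean step edge every pixel finds a zero-variance window entirely on its own side, so the edge passes through untouched; an isolated noisy pixel, by contrast, is pulled towards its neighbourhood mean.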

6. Segmentation

6.1. Region growing stage

In this stage the image is partitioned into a large number of elementary regions using a simple algorithm by which neighbouring pixels are included in a region if their grey level difference is less than 3. The choice of this low threshold is deliberate, in order to guarantee that erroneous merging of bone with background does not occur. The drawback, however, is that a very large number of regions is produced. The robust performance of the algorithm is attributed mainly to the edge-preserving smoothing, since blurred or noisy edges, where incorrect merging is apt to take place, have been effectively sharpened. Fig. 3d shows the outcome of this processing stage.
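A pixel-aggregation scheme of this kind can be sketched as a breadth-first flood fill; the data structure and traversal order below are illustrative choices, with only the grey-level-difference threshold taken from the text.

```python
from collections import deque

def region_grow(img, diff_thresh=3):
    """Partition an image (list of rows of grey levels) into elementary
    regions: a pixel joins its 4-neighbour's region when their grey level
    difference is below diff_thresh. Note that growth is transitive, so
    grey levels may drift slowly across a single region."""
    h, w = len(img), len(img[0])
    labels = [[-1] * w for _ in range(h)]
    n_regions = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            labels[sy][sx] = n_regions
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1
                            and abs(img[ny][nx] - img[y][x]) < diff_thresh):
                        labels[ny][nx] = n_regions
                        queue.append((ny, nx))
            n_regions += 1
    return labels, n_regions
```

With the threshold at 3, a sharp bone/background transition (difference of several grey levels) always stops the growth, which is exactly why the stage over-segments and a merging stage must follow.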

6.2. Region merging stage

A technique that has been applied successfully to the segmentation of natural scenes, developed by Beveridge et al. [27], was adopted as the first stage of region merging. It has the advantage of using general rules and thus can be applied to different types of scenes, the only requirement being that the input set of regions is fragmented in such a way that no region splitting is required. This technique is based on combining three merging scores, representing region similarity, size and connectivity, into a global merge score which reflects the suitability of two regions for merging. The characteristics of the method are the following: regions with similar grey level characteristics are encouraged to merge; larger regions are preferred over small ones; whilst regions with little common boundary are discouraged from merging, in order to avoid the formation of regions with small necks connecting large areas. This technique proved robust and reliable, and it reduces the number of original regions by approximately 40%. Fig. 3e shows the resulting regions after this stage of processing.
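The combination of the three cues can be sketched as below. The particular functional forms and the multiplicative weighting are illustrative stand-ins, not Beveridge et al.'s published formula; only the three cues themselves (similarity, size, connectivity) come from the text.

```python
def merge_score(r1, r2, shared_len, grey_range=255.0):
    """Combine similarity, size and connectivity cues into one merge score.
    r1, r2: dicts with 'mean' (grey level), 'area' (pixels), 'perim'
    (boundary pixels); shared_len: boundary pixels the two regions share.
    The weighting below is an assumption for illustration only."""
    # similarity: high when the mean grey levels are close
    similarity = 1.0 - abs(r1["mean"] - r2["mean"]) / grey_range
    # size: favour absorbing small regions into larger ones
    size = 1.0 / (1.0 + min(r1["area"], r2["area"]) / 100.0)
    # connectivity: penalise pairs joined only by a thin neck
    connectivity = shared_len / float(min(r1["perim"], r2["perim"]))
    return similarity * size * connectivity
```

An iterative merger would repeatedly merge the pair with the highest score until no score exceeds a chosen threshold; because connectivity is a factor, two large areas linked by a small neck score poorly and stay apart.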

6.3. Region merging by fusion of edge and region information

The first step of processing at this stage registers edge pixels obtained by the edge detector to the equivalent boundary pixels of the regions [28,29]. For this registration both edge magnitude and direction information are used. The second and final step merges two regions if less than 45% of the region boundary pixels correspond to edge pixels. The algorithm operates in a similar iterative fashion to the one described in the previous section. This algorithm proved robust and achieves a reduction of 60-70% in the number of regions produced by the previous stage. Therefore, the number of regions after this stage corresponds to approximately 15-20% of the original number of elementary regions. Fig. 3f shows the resulting region outline image after this stage of processing.
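The merge decision itself reduces to a simple test once registration has been done; a minimal sketch, assuming the shared boundary is available as a list of pixel coordinates and the registered edge pixels as a set:

```python
def should_merge(common_boundary, edge_pixels, edge_frac=0.45):
    """Merge two regions when fewer than edge_frac of their common boundary
    pixels coincide with edge pixels registered from the edge detector.
    common_boundary: list of (y, x) pixels on the shared region boundary.
    edge_pixels: set of (y, x) pixels marked by the edge detector."""
    if not common_boundary:
        return False
    supported = sum(1 for p in common_boundary if p in edge_pixels)
    return supported / len(common_boundary) < edge_frac
```

A boundary well supported by edge evidence survives; one that the edge detector largely ignored is treated as an artefact of over-segmentation and removed by merging its two regions.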

6.4. Region labelling

Having segmented the scene into a manageable number of regions, the next task is to assign a label to each region signifying its correspondence to either bone or background. Labelling is performed through the application of a set of heuristic rules which are mainly based on the grey level characteristics of radiographs. In particular, in hand-wrist radiographs three entities can be easily identified: background, which is of uniform grey level and of the lowest intensity; soft-tissue, which is of non-uniform grey level (due to varying thickness), brighter than background but locally (i.e. in a local neighbourhood) less bright than bone; and bone, which is of non-uniform grey level and locally brighter than soft-tissue. Prior to labelling, a feature list is compiled for every elementary region in the image. This list includes features such as mean grey level, various contrast measurements, region adjacency information, cluster parentship, activity status, etc.

Fig. 3. Processing stages. (a) Original image. (b) Edge detection. (c) Smoothing. (d) Region growing (1092 regions). (e) Merging I (681 regions). (f) Merging II (242 regions). (g) Bone extraction. (h) Boundary segmentation.

Segmentation of the bones is achieved by rules which use the above features and their attributes [28,29]. Initially regions of positive contrast are labelled as bone, and then regions of negative contrast and regions that are local intensity minima are labelled as background. Following that, intensity relationships between regions are used to label neighbouring regions. Such relationships are based on the constraints that, in a small neighbourhood, if a region is labelled as background then a neighbouring region of lower intensity should also be labelled as background; likewise, if a region is labelled as bone then a neighbouring region of higher intensity should be labelled as bone. Finally, clusters of regions are labelled as bone or background according to the labelling of the majority of their active regions. Fig. 3g shows the outcome of this stage.
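The two propagation constraints can be sketched as a fixed-point iteration over a region adjacency graph; the data structures (dicts keyed by region name) are illustrative, while the two intensity rules come from the text.

```python
def propagate_labels(mean_grey, labels, adjacency):
    """Spread bone/background labels between neighbouring regions using the
    intensity constraints: a neighbour no brighter than a background region
    becomes background; a neighbour no darker than a bone region becomes
    bone. Iterates until no more labels change."""
    changed = True
    while changed:
        changed = False
        for region, neighbours in adjacency.items():
            if labels[region] is None:
                continue
            for n in neighbours:
                if labels[n] is not None:
                    continue
                if labels[region] == "background" and mean_grey[n] <= mean_grey[region]:
                    labels[n] = "background"
                    changed = True
                elif labels[region] == "bone" and mean_grey[n] >= mean_grey[region]:
                    labels[n] = "bone"
                    changed = True
    return labels
```

Regions satisfying neither constraint keep a None label, and in the full system would be resolved by the cluster-majority rule.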

6.5. Bone boundary segmentation

This final stage attempts to identify points on the boundaries of the extracted bones which might be indicative of bone conjugation or fusion with other bones. This will be useful in identifying and labelling individual bones. A characteristic of conjugated bones is that they usually exhibit indentations at the point of conjugation. The method for the identification of indentations is based on the work of Eccles et al. [30]. The boundaries of the extracted bones are chain coded and the chain code is then smoothed. Indentations are registered as minima of the smoothed chain code. These points can either be linked, as shown in Fig. 3h, or used as input to a high-level boundary description process.
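The chain-code step can be sketched as follows. The indentation test below flags boundary steps whose turn sign opposes the overall traversal direction, a crude stand-in for the smoothed-chain-code minima of Eccles et al. rather than their actual algorithm; the sign convention depends on traversal orientation, so the code infers it from the total turning.

```python
def chain_code(boundary):
    """Freeman 8-direction chain code of a closed boundary given as an
    ordered list of (y, x) pixels, with y growing downwards."""
    dirs = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
            (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}
    n = len(boundary)
    return [dirs[(boundary[(i + 1) % n][0] - boundary[i][0],
                  boundary[(i + 1) % n][1] - boundary[i][1])]
            for i in range(n)]

def turns(codes):
    """Signed turn between consecutive codes, folded into [-4, 3] eighths."""
    n = len(codes)
    return [((codes[(i + 1) % n] - codes[i] + 4) % 8) - 4 for i in range(n)]

def indentations(codes):
    """Indices whose turn sign opposes the overall traversal direction,
    i.e. concave corners that may mark the conjugation of two bones."""
    t = turns(codes)
    clockwise = sum(t) < 0    # total turning of a simple closed curve is +/-8
    return [i for i, v in enumerate(t) if (v > 0 if clockwise else v < 0)]
```

A convex outline produces no flagged indices, while a notch in an otherwise convex outline produces one, which is the kind of point that would then be linked or passed to a higher-level boundary description.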

7. Experimental results

Figs. 4 and 5 show some of the results obtained using this method. They show the original image; the region image obtained after region merging; the image after region labelling; and, finally, the boundary segmentation. For the development of the method 14 images of the radius-ulna region were used, spanning the range of maturity of this area. The method was tested on another 20 images of areas of the hand and wrist, and also on radiographs of other parts of the anatomy, such as the knee, foot and chest, although with limited success in the latter cases. The results produced by this method represent a significant improvement over earlier methods, especially with regard to the segmentation of radiographs with varying degrees of skeletal maturity.

Acknowledgement

The support of Timex Corporation, Dundee, Scotland is gratefully acknowledged.

References

[1] A.K. Poznanski, The Hand in Radiologic Diagnosis (Saunders, Philadelphia 1984).

[2] J.M. Tanner, R.H. Whitehouse, W.A. Marshall, M.J.R. Healy and H. Goldstein, Assessment of Skeletal Maturity and Prediction of Adult Height (TW2 Method), 2nd edn. (Academic Press, 1983).

[3] N. Walker and J. Fox, Knowledge based interpretation of images: a biomedical perspective, Knowledge Eng. Rev. 11(2) (1987) 249-264.

[4] I. Pyle, A.M. Waterhouse and W.W. Greulich, A Radiographic Standard of Reference for the Growing Hand and Wrist (The Press of Case-Western Reserve University, Cleveland, 1971).

[5] M. Garn, The applicability of North American growth standards in developing countries, Can. Med. Assoc. J. 93 (1965) 914-919.

[6] K. Pal and R.A. King, Application of fuzzy set theory in detecting X-ray edges, in Proceedings of the 1981 International Conference on Acoustics, Speech and Signal Processing, Vol. 3, pp. 1125-1128 (1981).

[7] K. Pal and R.A. King, Histogram equalisation and PI functions in detecting X-ray edges, Electron. Lett. 17(8) (1981) 302-304.

[8] K. Pal and R.A. King, On edge detection of X-ray images using fuzzy sets, IEEE Trans. Pattern Anal. Mach. Intell. PAMI 5 (1983) 69-77.

[9] K. Pal, R.A. King and A.A. Hashim, Automatic grey-level thresholding through index of fuzziness and entropy, Pattern Recognition Lett. 1 (1983) 141-146.

[10] K. Pal, R.A. King and A.A. Hashim, Image description and primitive extraction using fuzzy sets, IEEE Trans. Syst. Man Cybern. SMC 13 (1983) 94-100.

[11] A. Pathak, K. Pal and R.A. King, Syntactic recognition of skeletal maturity, Pattern Recognition Lett. 2 (1984) 193-197.

[12] A. Pathak and K. Pal, Fuzzy grammars in syntactic recognition of skeletal maturity from X-rays, IEEE Trans. Syst. Man Cybern. SMC 16 (1986).


[13] A. Kwabwe, K. Pal and R.A. King, Recognition of bones from X-rays of the hand, Int. J. Syst. Sci. 16 (1985) 403-413.

[14] J. Serrat, D. Van Esso, J.J. Villanueva and J. Argemi, Determinacion automatica de la edad osea, in Proceedings of the 3rd International Symposium on Biomedical Engineering, Madrid, Spain, 7-9 October 1987.

[15] J. Serrat, Contribucio a la Determinacio Automatica de L'Edad Ossia, Progress Report (Universitat Autonoma de Barcelona, Facultat de Ciencies, September 1988).

[16] R.H. Riste-Smith, W.P.A. Ditmar, M. Holubinka, P. Howlett and D. Wright, A knowledge-based segmentation applied to medical radiographs, in Proceedings of the 3rd IEE International Conference on Computing and its Applications, University of Warwick, 18-20 July 1989, pp. 353-359.

[17] D.J. Michael and A.C. Nelson, HANDX: A model-based system for automatic segmentation of bones from digital hand radiographs, IEEE Trans. Med. Imaging MI 8 (1989) 64-69.

[18] T.S. Levitt and M.W. Hedgcock Jr, Model-based analysis of hand radiographs, Proc. SPIE 1093 (1989) 563-570.

[19] M.A. Browne, P.A. Gaydecki, R.F. Gough, D.M. Grennan, D.M. Khalil and H. Mantora, Radiographic image analysis in the study of bone morphology, Clin. Phys. Physiol. Meas. 8 (1987) 105-121.

[20] P.A. Gaydecki, M.A. Browne, H. Mantora and D.M. Grennan, Measurement of radiographic changes occurring in rheumatoid arthritis by image analysis techniques, Ann. Rheum. Dis. 46 (1987) 296-301.

[21] A. Hanson and E. Riseman, The VISIONS image understanding system, in Advances in Computer Vision, Ed. C. Brown, Vol. 1, pp. 1-114 (Lawrence Erlbaum, New Jersey, 1988).

[22] M. Nagao and T. Matsuyama, A Structural Analysis of Complex Aerial Photographs (Plenum Press, New York, 1980).

[23] C. Smyrniotis and K. Dutta, A knowledge-based system for recognising man-made objects in aerial images, in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1988, pp. 111-117.

[24] G. I. Vernazza, B. Serpico and G. Dellepiane, A knowledge-based system for biomedical image processing and recognition, IEEE Trans. Circuits Syst. CS 34 (1987) 1399-1416.

[25] S.Y. Chen and K. Fu, An expert vision system for medical image segmentation, Proc. SPIE 1092 (1989) 162-172.

[26] J. Canny, Computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. PAMI 8 (1986) 679-698.

[27] J.R. Beveridge, J. Griffith, R.R. Kohler, A.R. Hanson and E.M. Riseman, Segmenting images using localised histograms and region merging, Int. J. Comput. Vision 2 (1989) 311-347.

[28] G. Manos, A.Y. Cairns, I.W. Rickets and D. Sinclair, Automatic segmentation of hand-wrist radiographs, Image Vision Comput. 14(2) (1993) 100-111.

[29] G. Manos, Segmenting Radiographs of the Hand and Wrist Using Computer Vision, PhD Thesis (University of Dundee, Scotland, 1991).

[30] M.J. Eccles, M.P.C. McQueen and D. Rosen, Analysis of the digitised boundaries of planar objects, Pattern Recognition 9 (1977) 31-41.