
Imperial College of Science, Technology and Medicine

Department of Computing

Automated Organ Localisation in Fetal Magnetic Resonance Imaging

Kevin Keraudren

A dissertation submitted in part fulfilment of the requirements for the degree of

Doctor of Philosophy

of

Imperial College London

August 2015


To my wife


Declaration of Originality

I hereby declare that the work described in this thesis is my own, except where specifically acknowledged.

Kevin Keraudren


Abstract

Fetal Magnetic Resonance Imaging (MRI) provides an invaluable diagnostic tool complementary to ultrasound due to its high resolution and tissue contrast. In order to accommodate fetal and maternal motion, MR images of the fetus are typically acquired as stacks of two-dimensional (2D) slices that freeze in-plane motion, but may form an inconsistent three-dimensional (3D) volume. Motion correction methods, which reconstruct a high-resolution 3D volume from such motion-corrupted stacks of 2D slices, have revolutionised fetal MRI, enabling detailed studies of fetal brain development. However, such motion correction and reconstruction procedures require a substantial amount of manual data preprocessing in order to isolate fetal tissues from the rest of the image. Besides the presence of motion artefacts, the main challenges in automating the processing of fetal MRI are the unpredictable position and orientation of the fetus, as well as the variability in anatomy due to fetal development. This thesis presents novel methods, based on machine learning and prior knowledge of fetal development, to automatically localise organs in fetal MRI in order to automate the preprocessing step of motion correction. This localisation can also be used to initialise a segmentation, or to orient images based on the fetal anatomy so as to facilitate clinical examination. The fetal brain is first localised independently of the orientation of the fetus, and then used as an anchor point to steer the features used in the subsequent localisation of the heart, lungs and liver. The localisation results are used to segment fetal tissues in each 2D slice, and this segmentation can be further refined throughout the motion correction procedure. The proposed method to segment the fetal brain is shown to perform as well as manual preprocessing. Preliminary results on a similar application to the motion correction of the fetal thorax are also presented.


Acknowledgements

I thank Prof. Daniel Rueckert and Prof. Jo Hajnal for their supervision and their enthusiasm throughout my PhD. I am also grateful to all my friends and colleagues at Imperial College London and King's College London for their support throughout these four years and for providing such a stimulating research environment.

This PhD is part of a programme grant from the Engineering and Physical Sciences Research Council on Intelligent Medical Imaging in the presence of motion.


Copyright Declaration

The copyright of this thesis rests with the author and is made available under a Creative Commons Attribution Non-Commercial No Derivatives licence. Researchers are free to copy, distribute or transmit the thesis on the condition that they attribute it, that they do not use it for commercial purposes and that they do not alter, transform or build upon it. For any reuse or redistribution, researchers must make clear to others the licence terms of this work.


Contents

1 Introduction 21
1.1 Motivation . . . 21
1.2 Research contributions and thesis outline . . . 22
1.2.1 Automated localisation of the brain in fetal MRI . . . 23
1.2.2 Normalising image coordinates for fetal age . . . 24
1.2.3 Automated localisation of the heart, lungs and liver in fetal MRI . . . 24

2 Magnetic Resonance Imaging of the fetus and fetal development 25
2.1 Imaging the developing fetus . . . 25
2.1.1 Ultrasonography (US) . . . 26
2.1.2 Magnetic Resonance Imaging (MRI) . . . 27
2.1.3 Motion correction of fetal MRI . . . 32
2.2 Fetal development . . . 35
2.2.1 Incidence of cephalic presentation . . . 36
2.2.2 Fetal growth . . . 37
2.2.3 Appearance patterns in fetal MRI . . . 39
2.2.4 Modelling of the fetal body in MRI . . . 41
2.3 Imaging data . . . 42
2.3.1 Dataset 1: used for training and cross-validation of the fetal brain localisation and segmentation methods . . . 42
2.3.2 Dataset 2: used for training and cross-validation of the method to localise the fetal heart, lungs and liver . . . 43
2.3.3 Dataset 3: used for testing the method to localise the fetal heart, lungs and liver . . . 43
2.4 Challenges in the automated analysis of fetal MRI . . . 44

3 Machine learning and object detection 45
3.1 Introduction . . . 45
3.2 Image features . . . 47
3.2.1 Convolutional features . . . 47
3.2.2 Rectangle features . . . 48
3.2.3 Histogram of Oriented Gradients (HOG) . . . 50
3.2.4 Scale Invariant Feature Transform (SIFT) . . . 51
3.2.5 Dense rotation invariant features . . . 52
3.3 Machine learning . . . 54
3.3.1 Support Vector Machines (SVM) . . . 54
3.3.2 AdaBoost . . . 55
3.3.3 Random Forests . . . 56
3.4 Search strategy . . . 59
3.4.1 Scale and orientation . . . 60
3.4.2 Pruning the search space . . . 61
3.4.3 Detecting object parts . . . 62
3.5 Automated localisation of anatomical regions in medical images . . . 63
3.5.1 Atlas-based methods . . . 64
3.5.2 Machine learning approaches . . . 66
3.6 Conclusion . . . 68

4 Localisation of the brain in fetal MRI 71
4.1 Introduction . . . 72
4.2 Detecting the fetal head in 2D slices using the Viola-Jones detector . . . 73
4.3 Localising the fetal brain using bundled SIFT features . . . 76
4.3.1 Method overview . . . 76
4.3.2 Detection and selection of MSER regions . . . 77
4.3.3 Classification of selected MSER regions using bundled SIFT features . . . 78
4.3.4 RANSAC fitting of a cube . . . 79
4.4 Experiments . . . 80
4.4.1 Implementation . . . 80
4.4.2 Data . . . 80
4.4.3 Results . . . 81
4.5 Discussion . . . 83
4.6 Conclusion . . . 85

5 Brain segmentation and motion correction 86
5.1 Introduction . . . 87
5.2 Related Work . . . 88
5.2.1 Processing of conventional cranial MRI . . . 88
5.2.2 Brain extraction in fetal MRI . . . 88
5.3 Method . . . 90
5.3.1 Brain extraction . . . 90
5.3.2 Joint segmentation and motion correction . . . 95
5.4 Implementation and choice of parameters . . . 97
5.5 Evaluation . . . 99
5.5.1 Data . . . 99
5.5.2 Ground truth and experiments . . . 99
5.5.3 Evaluation metrics . . . 102
5.5.4 Localisation and segmentation results . . . 103
5.5.5 Reconstruction results . . . 104
5.6 Discussion . . . 106
5.6.1 Extreme motion . . . 108
5.6.2 Abnormal brain growth . . . 110
5.7 Conclusion . . . 112

6 Localising the heart, lungs and liver 113
6.1 Introduction . . . 114
6.2 Segmentation with autocontext Random Forests . . . 116
6.3 Random Forests with steerable features . . . 120
6.3.1 Preprocessing . . . 120
6.3.2 Detecting candidate locations for the heart . . . 122
6.3.3 Detecting candidate locations for the lungs and liver . . . 124
6.3.4 Spatial optimisation of candidate organ centers . . . 126
6.3.5 Application to motion correction . . . 128
6.4 Experiments . . . 130
6.4.1 Data . . . 130
6.4.2 Implementation . . . 130
6.4.3 Results . . . 131
6.5 Discussion . . . 135
6.5.1 Localising the organs of the fetus . . . 135
6.5.2 Motion correction . . . 138
6.6 Conclusion . . . 139

7 Conclusion 140
7.1 Future Work . . . 141
7.1.1 Simultaneous localisation and motion correction . . . 141
7.1.2 Improved slice-to-volume registration using the localisation results . . . 142
7.1.3 Application to organ volumetry and atlas creation . . . 143

Bibliography 145

A Neurological/radiological order 166


List of Tables

3.1 Summary table of the properties of the different image features presented in this chapter . . . 68

4.1 Detection rates for each orientation . . . 75
4.2 Detection results averaged over the cross-validation, all orientations combined . . . 83

5.1 Average computation time for the separate steps of the pipeline . . . 98
5.2 Mean recall, mean precision and Dice score of the automated brain segmentation computed over the ten-fold cross-validation and for the reconstructed volumes . . . 103
5.3 Quality scores obtained for each subject for the automated and manual initialisation of motion correction of the fetal brain, and for the automated initialisation depending on the motion scores . . . 106

6.1 Segmentation results obtained with autocontext Random Forests on the testing set of the CETUS challenge . . . 119
6.2 Detection rates for the heart, lungs and liver . . . 131


List of Figures

2.1 Example images of 2D ultrasound, 3D ultrasound and MRI . . . 26
2.2 Artefacts in MRI data resulting from sudden fetal movement . . . 28
2.3 Staircase artefacts when manually segmenting fetal organs with a low through-plane resolution . . . 29
2.4 MRI acquisitions with a low through-plane resolution resulting in a staircase effect, and with a high through-plane resolution leading to interleaving artefacts . . . 30
2.5 Diagram of the interleaved acquisition of 2D slices . . . 30
2.6 Diagram of the motion correction of fetal brain MRI . . . 33
2.7 Iterations of motion correction with increasing image resolution . . . 35
2.8 Arbitrary position and orientation of the fetus . . . 36
2.9 Variability in MR images due to fetal development . . . 37
2.10 Common biometric measurements to assess fetal development . . . 38
2.11 Intrauterine growth restriction . . . 39
2.12 Average image of a healthy fetus . . . 40

3.1 Gabor filters of varying frequency and orientation . . . 47
3.2 The integral image enables the fast computation of the sum of pixels over an image patch in four table lookups, independently of the patch size . . . 49
3.3 Rectangle features: Haar-like features, local binary patterns and simple rectangle features . . . 49
3.4 Example HOG features . . . 50
3.5 Example SIFT features . . . 51
3.6 Spherical 3D Gabor descriptors . . . 53
3.7 Example classifications with an SVM in cases of linear and non-linear separability . . . 54
3.8 Illustration of the reweighting of the training samples in the AdaBoost algorithm . . . 55
3.9 Classification tree example . . . 57
3.10 Example of using different weak learners in Random Forests . . . 59
3.11 Example of Geodesic Distance Transform . . . 63
3.12 T2 templates of the developing fetal brain . . . 64
3.13 Generic pipeline for machine learning based object detection . . . 70

4.1 Example scan with a large misalignment between slices due to fetal motion . . . 73
4.2 Detection pipeline for the fetal brain with the Viola-Jones detector . . . 74
4.3 Samples of a training dataset for a 2D detector of the fetal head . . . 75
4.4 Example detection results for a sagittal, coronal and transverse slice . . . 75
4.5 Local binary pattern corresponding to the first test of the cascade of weak classifiers . . . 76
4.6 Brain detection pipeline . . . 77
4.7 Example MSER regions Ri and their fitted ellipses Ei . . . 78
4.8 MSER regions are first filtered by size, then classified according to their histograms of SIFT features . . . 79
4.9 Examples of automated localisation results for the fetal brain . . . 82
4.10 Robustness of the brain detection method to low signal-to-noise ratio . . . 84
4.11 As a result of fetal head rotation, the eyes of the fetus do not provide robust 3D landmarks for the localisation of the brain . . . 84

5.1 Outline of the proposed brain segmentation pipeline, which starts from the MSER regions selected during the brain localisation process, and results in a segmented and motion corrected volume of the fetal brain . . . 88
5.2 The detected MSER regions provide an initial labelling which is propagated to obtain a segmentation of the fetal brain . . . 91
5.3 A Random Forest classifier is trained on patches extracted in the selected MSER regions and on patches extracted in a narrow band at the boundary of the detected bounding box . . . 92
5.4 The brain probability map is rescaled in order to take into account an estimated size of the fetal brain . . . 93
5.5 Probability map from the Random Forest classifier and subsequent CRF segmentation . . . 95
5.6 The masks of the fetal brain for the original 2D slices are progressively refined using the updated slice-to-volume registration transformations . . . 96
5.7 The 2D probability maps are combined to initialise a CRF segmentation on the reconstructed volume . . . 97
5.8 Influence on the mean Dice score of the main parameters of the brain extraction process: λ, number of trees and patch size . . . 98
5.9 The success of the motion correction procedure is strongly dependent on the initial volume-to-volume registration . . . 100
5.10 Example of ground truth segmentation, slices excluded from motion correction and automated segmentation . . . 101
5.11 Recall and precision of the brain detection at each step of the pipeline . . . 105
5.12 Example reconstruction with automated initialisation and manual initialisation . . . 107
5.13 Example brain extraction in the case of important fetal movement . . . 109
5.14 Surface rendering of a fetal ear after motion correction . . . 110
5.15 Subject with severe bilateral ventriculomegaly that has been successfully motion corrected, and subject with alobar holoprosencephaly, for whom the pipeline failed . . . 112

6.1 Overview of the proposed pipeline for the automated localisation of fetal organs in MRI . . . 116
6.2 Overview of the autocontext framework for segmenting the left ventricle endocardium . . . 117
6.3 Illustration of the proposed autocontext Random Forest framework with geodesic features . . . 118
6.4 Size normalisation of the images according to the gestational age . . . 121
6.5 Average image of 30 healthy fetuses after resizing and alignment defined by anatomical landmarks . . . 122
6.6 Steerable features: when training the classifier, features are extracted in the anatomical orientation of the fetus; at test time, features are extracted in a coordinate system rotating around the brain . . . 124
6.7 Restricting the search space with anatomical constraints . . . 125
6.8 The shape and relative position of organs is modelled by Gaussian distributions . . . 126
6.9 Illustration of the successive steps of the proposed detection pipeline . . . 127
6.10 Generating a mask for the fetal thorax and abdomen following the localisation of the heart, lungs and liver . . . 128
6.11 Motion correction of the thorax and liver and example lung segmentation . . . 129
6.12 Distance error between the predicted organ centers and their ground truth . . . 131
6.13 Detection error by gestational age . . . 133
6.14 Examples of fully automated motion correction for the fetal thorax, with fetuses of 24, 29 and 37 weeks of gestational age . . . 135
6.15 Feature importance of successive Random Forests . . . 136
6.16 Autocontext framework for tissue segmentation: left ventricle, myocardium and mitral valves in the top row; fetal head, fetal body and amniotic fluid in the bottom row . . . 137
6.17 Echocardiographic images in the CETUS challenge (MICCAI 2014) are all centered on the left ventricle and in a standard orientation, which is not the case for fetal MRI . . . 138

7.1 Demonstration of the methods presented in this thesis on a 3T dataset: the brain, heart, lungs and liver are automatically localised then motion corrected . . . 141


Acronyms

2D two-dimensional.

3D three-dimensional.

BPD biparietal diameter.

CNN convolutional neural network.

CRF Conditional Random Field.

CRL crown-rump length.

CSF cerebrospinal fluid.

ECG electrocardiogram.

ED end-diastolic.

EPI echo planar imaging.

ES end-systolic.

EXIT ex-utero intrapartum treatment.

GDT geodesic distance transform.

GPU graphics processing unit.

HOG Histogram of Oriented Gradients.


IUGR intrauterine growth restriction.

LBP Local Binary Pattern.

MRI Magnetic Resonance Imaging.

MSE mean squared error.

MSER Maximally Stable Extremal Regions.

OFD occipitofrontal diameter.

RANSAC RANdom SAmple Consensus.

RF radiofrequency.

ROI region of interest.

SGD spherical 3D Gabor descriptor.

SIFT Scale Invariant Feature Transform.

SNR signal-to-noise ratio.

ssFSE single shot fast spin echo.

ssTSE single shot turbo spin echo.

SVM Support Vector Machine.

SVR slice-to-volume reconstruction.


Chapter 1

Introduction

1.1 Motivation

Magnetic Resonance Imaging (MRI) of the fetus is an invaluable diagnostic tool complementary to ultrasound thanks to its superior soft-tissue contrast and larger field of view, while using no ionizing radiation. Fast scan techniques do not require sedation and can acquire 2D snapshot images of the fetus in less than a second, freezing fetal motion. Ultrasound, which is low-cost, widely available and real-time, is the imaging modality of choice during pregnancy. However, when anomalies are suspected during the ultrasound examination, the high contrast resolution of MR images can provide important anatomic information that aids clinical diagnosis, parental counselling, planning of the mode of delivery, perinatal care and surgical interventions. Additionally, MRI of healthy fetuses for research purposes allows the development of the fetus to be studied in utero, which can notably provide a better understanding of the consequences of preterm birth.

The main challenge when imaging the fetus using MRI is fetal motion, which commonly occurs between the acquisition of each 2D slice and results in an inconsistent 3D volume. In the case of adult subjects, prospective motion correction methods can be used to reduce motion artefacts at the time of data acquisition. However, such methods rely on motion estimates from a tracked marker or inferred from the data itself, and are difficult to apply to the fetus. Retrospective motion correction methods have been developed to produce a high-resolution 3D volume from successive acquisitions in orthogonal directions. Retrospective motion correction aims to iteratively align the acquired 2D slices to a reconstructed 3D volume. It can be performed on rigid parts of the fetal body that move at a rate allowing the acquisition of snapshot images, such as the fetal head or thorax. As the fetus moves independently from the surrounding maternal tissues, motion correction requires isolating the region to be reconstructed so that image registration is mainly guided by the anatomy of the fetus. This has so far involved a substantial amount of manual preprocessing in order to delineate the fetal anatomy and exclude maternal tissues in each 2D slice.

In this thesis, methods for the automated localisation of fetal organs in MRI are developed. This localisation aims to automate the preprocessing step of motion correction. It can also be used to initialise a segmentation, from which the size and shape of fetal organs can be assessed, or to orient images based on the anatomy of the fetus, whose position and orientation in the womb are arbitrary, in order to facilitate clinical examination. Automated processing of fetal MRI will facilitate a wider application of motion correction techniques in clinical practice and enable large cohort research studies, which would otherwise be excessively time consuming.

1.2 Research contributions and thesis outline

This thesis presents novel methods based on machine learning and prior knowledge of fetal development to automatically localise organs in fetal MRI. The proposed methods are robust to fetal motion and independent of the position and orientation of the fetus. The brain is first localised independently of the orientation of the fetus (Chapter 4), then used as an anchor point to steer the features used in the subsequent localisation of the heart, lungs and liver (Chapter 6). The localisation results are used to segment fetal tissues in each 2D slice for the motion correction of the fetal brain (Chapter 5). Each 2D slice segmentation is further refined throughout the motion correction procedure, which iteratively aligns the 2D slices with the reconstructed volume. Preliminary results of the application of the proposed localisation of the lungs to the motion correction of the thorax are presented in Chapter 6.

Magnetic Resonance Imaging of the fetus is introduced in Chapter 2, which also discusses basic knowledge of fetal development that could constitute priors for the automated localisation of fetal organs. The imaging data used to evaluate the methods presented in this work is also described in Chapter 2. Chapter 3 reviews the state of the art in machine learning, which enables a computer program to learn from examples, and provides an overview of image features that are used in automated object detection. A particular emphasis is placed on the Random Forest algorithm and rectangle features, which are used in Chapters 5 and 6 respectively. A review of current state-of-the-art methods for automated organ localisation is also presented.

The remainder of this section gives a more detailed overview of the contributions of this thesis.

1.2.1 Automated localisation of the brain in fetal MRI

Chapter 4 presents an automated method for the accurate and robust localisation of the fetal brain in MRI when the image data is acquired as stacks of 2D slices misaligned due to fetal motion. This localisation method provides an initialisation for motion correction of the fetal brain based on image registration, thus allowing for a fully automated motion correction procedure, which is presented in Chapter 5. Considering that motion correction is an iterative process in which only registration results are forwarded from one iteration to the next, we propose to update the segmentation of the brain throughout the motion correction procedure. The registration results are used to align probabilistic segmentations of the individual slices and to initialise a Conditional Random Field (CRF) segmentation of the currently reconstructed volume, providing a segmentation update that progressively increases precision and confidence levels.


1.2.2 Normalising image coordinates for fetal age

Large morphological changes occur in the developing fetus during the second and third trimesters of pregnancy. By the time a woman undergoes an MR examination, ultrasound scans have already been performed and provide an estimate of the gestational age of the fetus. This gestational age can be used to infer biometric measurements from standard growth charts, which are commonly used during pregnancy follow-up examinations. Such measurements can then form priors to guide and constrain the automated localisation process. In particular, a method is proposed in Chapter 6 to normalise the image coordinates for fetal age by rescaling the images according to the gestational age. This allows subsequent processing to become invariant to the variability in the size of the fetus due to the advancement of the pregnancy. Moreover, such an approach can allow useful results to be obtained with less training data, and enables the use of training data from a more heterogeneous group of subjects.
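The idea can be sketched in a few lines of Python: rescale each image by the ratio between the biometric measurement expected at a reference age and the one expected at the subject's gestational age. The linear growth model below is a made-up placeholder standing in for a standard growth chart, not actual clinical data.

```python
import numpy as np

def expected_size_mm(gestational_age_weeks):
    """Hypothetical linear growth model, standing in for a standard
    growth chart (e.g. occipitofrontal diameter vs gestational age)."""
    return 3.0 * gestational_age_weeks  # placeholder, not clinical data

def normalise_for_age(image, gestational_age_weeks, reference_age_weeks=30.0):
    """Rescale the image (nearest-neighbour) so that a fetus of any
    gestational age appears at the size expected at the reference age,
    making subsequent processing invariant to fetal size."""
    scale = expected_size_mm(reference_age_weeks) / expected_size_mm(gestational_age_weeks)
    h, w = image.shape
    nh, nw = int(round(h * scale)), int(round(w * scale))
    rows = np.minimum((np.arange(nh) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / scale).astype(int), w - 1)
    return image[np.ix_(rows, cols)]

slice_2d = np.random.rand(40, 40)  # a toy 2D slice
resized = normalise_for_age(slice_2d, gestational_age_weeks=20.0)
print(resized.shape)  # (60, 60): a 20-week fetus is upscaled by 30/20
```

In the thesis the scale factor is derived from growth charts rather than a linear fit, but the structure of the normalisation is the same: one multiplicative factor per subject, determined by gestational age alone.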

1.2.3 Automated localisation of the heart, lungs and liver in fetal MRI

Building on the brain localisation method of Chapter 4, Chapter 6 presents a localisation

pipeline for the heart, lungs and liver of the fetus. We propose to use steerable cube features

in order to account for the arbitrary orientation of the fetus. Such features can be efficiently

computed using the integral image, which is an intermediate image representation for the rapid

calculation of region sums. The location of the brain is first used as an anchor point to steer

the extraction of image features and formulate candidate locations for the heart. Candidate

hypotheses for the lungs and liver locations are then obtained by rotating features around

the heart–brain axis. The classification scores are then combined with a prior for the spatial

configuration of the fetal organs in order to select the best candidate location for each organ. We

present preliminary results on the application of the proposed method to the motion correction

of the fetal thorax.
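Such region sums can be computed in constant time once the integral image has been precomputed, which is what makes cube features cheap to evaluate. A minimal 2D sketch follows (the method itself uses the 3D analogue; the function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Zero-padded cumulative sums, so that ii[y, x] = img[:y, :x].sum()."""
    ii = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def region_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
# Any rectangular sum now costs four lookups, regardless of its size.
assert region_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

The same four-lookup trick generalises to 3D with eight lookups per cuboid, which is what allows many candidate cube features to be evaluated per image.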


Chapter 2

Magnetic Resonance Imaging of the fetus and fetal development

This chapter introduces fetal MRI, in particular structural T2-weighted MRI, and its role in

the assessment of fetal development during pregnancy. It then considers the main challenges

facing the automated analysis of fetal MRI, and discusses knowledge from the medical field that

could constitute priors to guide the localisation of fetal organs, given the gestational age of the

fetus. A description of the imaging data used in the evaluation of the methods presented in this

thesis is also provided.

2.1 Imaging the developing fetus

Routine examinations during a pregnancy are carried out using 2D ultrasound imaging due to

its safety for the fetus, low cost and portability (imaging can be performed at the woman’s

bedside). In the United Kingdom, the National Health Service offers pregnant women at least

two ultrasound scans. A first scan takes place at around 8–14 weeks to confirm the pregnancy

and determine the due date, and a second scan at around 18–21 weeks to check for structural

abnormalities, in particular in the head and spine of the fetus [105]. 3D ultrasound imaging


is an extension of the standard 2D ultrasonography that can provide volumetric images of the

fetus and its internal organs. Although not as ubiquitous as its 2D counterpart, 3D ultrasound

is increasingly used for fetal examination. When anomalies are suspected during the ultrasound

examination, MR scans of the fetus can be performed to obtain a more robust diagnosis as they

provide images with a superior soft-tissue contrast. Neither ultrasound nor MRI use any ionising

radiation and they are not considered to have any teratogenic effect on the developing fetus.

While ultrasound remains the method of choice for screening for fetal anomalies, ultrasound

and MRI are complementary to each other for prenatal diagnostic imaging [117]. Ultrasound is

well suited for real-time observation of fetal behaviour or 3D surface rendering of the fetal face,

while MRI is the method of choice when investigating central nervous system abnormalities.

These different imaging modalities are presented in Figure 2.1 and discussed in more detail in

the remainder of this chapter.

Figure 2.1: Examples of 2D ultrasound, volume rendering of the fetal face from 3D ultrasound and MRI¹.

2.1.1 Ultrasonography (US)

2D ultrasound images result from the transmission of pulses of ultrasound waves sent through

the body from a curvilinear array of transducers. These waves are reflected off tissue boundaries

and then detected with a time delay from which the distance from the transducers can be

inferred. The intensity values in ultrasound images are related to the echogenicity of the

¹Sources: 2D ultrasound from user Mirmillon of Wikimedia Commons (public domain), 3D ultrasound from user Patrick Evans of Flickr (license CC BY-NC 2.0)


underlying tissue, namely its ability to reflect the ultrasound waves. 3D ultrasound images

are reconstructed from consecutive tomographic images obtained by tilting a 2D ultrasound

transducer array. In Doppler ultrasound, ultrasound waves reflect off circulating red blood

cells, enabling the visualisation of blood flow in different parts of the fetal body, or blood

exchange between fetus and mother.

2.1.2 Magnetic Resonance Imaging (MRI)

Formation process of MR images

An MR scanner produces a constant magnetic field B0, whose effect is to align the magnetic

moments of protons (hydrogen nuclei) with the direction of the B0 field. A radiofrequency (RF)

pulse is applied to the protons, temporarily creating a second magnetic field B1 orthogonal to B0

that perturbs the equilibrium. Once the RF pulse is removed, the nuclei realign themselves with

the magnetic field B0. This return to equilibrium involves relaxation processes, characterised

by time constants and the emission of electromagnetic radiation, from which MR images are

derived. The MR signal as measured by the scanner is sampled in the spatial frequency domain

(k-space), and a spatial image is reconstructed using a Fourier transform. Intensity values

in MR images result from the proton density, which is higher in water and fat tissue, the

longitudinal relaxation time T1, which defines the recovery rate of longitudinal magnetisation,

and the transverse relaxation time T2, which defines the decay rate of transverse magnetisation.

In order to obtain a different weighting between these three parameters, MRI can use se-

quences with different echo time TE, which is the time between the application of a RF excita-

tion pulse and the time when the centre of k-space is encoded, and repetition time TR, which is

the time between two successive excitations of the same slice or slab. The TE is usually shorter

than the TR. A long TR and short TE correspond to a proton density-weighted sequence, a

short TR and short TE to a T1-weighted sequence, and a long TR and long TE to a T2-weighted
sequence.
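This mapping from TR and TE to the dominant image weighting can be written as a simple decision rule; the thresholds below are illustrative placeholders, not clinical values:

```python
def sequence_weighting(tr_ms, te_ms, long_tr=1500.0, long_te=80.0):
    """Classify an MR sequence by its repetition and echo times.
    The thresholds separating "long" from "short" are illustrative."""
    long_TR = tr_ms >= long_tr
    long_TE = te_ms >= long_te
    if long_TR and not long_TE:
        return "proton density-weighted"
    if not long_TR and not long_TE:
        return "T1-weighted"
    if long_TR and long_TE:
        return "T2-weighted"
    return "uncommon combination (short TR, long TE)"

# e.g. the ssTSE parameters used later in this thesis (TR 15000ms, TE 160ms)
print(sequence_weighting(15000, 160))  # T2-weighted
```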


Fetal MRI is particularly challenging as it requires high spatial resolution to capture the

small anatomic structures of the fetus, and rapid image acquisition, due to both fetal and

maternal motion [98]. Maternal breathing may move the fetus and the fetus itself can and does

spontaneously move during imaging. These movements are unpredictable and may be large,

particularly involving substantial head rotations. High resolution MR imaging with volumetric

coverage using true 3D acquisition methods is widely available and provides rich data for image

analysis. However such detailed volumetric data generally takes several minutes to acquire

and requires the subject to remain still or move only small distances during acquisition. As a

result, although some volumetric approaches are being considered [67, 108], most fetal imaging

is performed using single shot methods to acquire 2D slices, such as single shot fast spin echo

(ssFSE) [71] and echo planar imaging (EPI) [24]. These methods can freeze fetal motion, so

that each individual slice is generally artefact free, but stacks of slices required to provide whole

coverage of the fetal anatomy may be mutually inconsistent. Portions of the fetal anatomy,

in particular the limbs, can be excluded or appear multiple times in the stack of 2D slices

(Figure 2.2).

Figure 2.2: Adjacent slices in which bulk movement of the fetus produces heterogeneous signal in the amniotic fluid (red arrows). An arm of the fetus is visible in the middle slice (green arrow), but not in the adjacent slices (fetus of 23 weeks of gestational age).

In order to reduce motion artefacts, a first scanning strategy is to shorten the acquisition

time by limiting the number of slices that are acquired, at the expense of a lower through-plane

resolution. This strategy leads to a large spacing between slices, typically 4mm thickness for

whole womb coverage (Figure 2.4.a) and 2.5mm thickness for fetal centred acquisitions, for


an in-plane resolution of 1mm. If the fetus stays relatively immobile during the time of the

acquisition, typically 40s, each slice can be segmented to form a coherent 3D volume. This

segmentation enables a volumetric assessment of the fetal organs [37]. However, it produces

striking staircase artefacts due to the small size of the fetus compared to the slice thickness

(Figure 2.3).

Figure 2.3: Manual segmentation on MRI data with a low through-plane resolution (4.4mm). A staircase effect is visible, and organs such as the kidneys (white and purple) are covered by only 3 slices (fetus of 27 weeks of gestational age).

While this first strategy undersamples the 3D volume, a second strategy is to perform over-

sampling, using a high through-plane resolution and repeated acquisitions of the 3D volume. To

reduce the scan time while avoiding slice cross-talk artefacts, which correspond to interference

between adjacent slices, contiguous slices are not acquired sequentially but in an interleaved

manner across time (Figure 2.5). This gives motion artefacts a saw-toothed appearance high-

lighted in Figure 2.4.b. Several stacks of 2D slices are successively acquired in orthogonal

directions and combined to produce a high-resolution motion corrected volume (Section 2.1.3).
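The interleaved slice ordering described above can be sketched as follows, grouping slices into packets so that spatially adjacent slices are excited far apart in time (a simplified model; actual orderings are scanner-specific):

```python
def interleaved_order(n_slices, n_packets=2):
    """Return the temporal acquisition order of slices: packet p
    contains slices p, p + n_packets, p + 2*n_packets, ..."""
    order = []
    for p in range(n_packets):
        order.extend(range(p, n_slices, n_packets))
    return order

# With 2 packets, odd-indexed slices are acquired after all even ones,
# so two spatially adjacent slices are never excited consecutively.
print(interleaved_order(8))  # [0, 2, 4, 6, 1, 3, 5, 7]
```

Under this ordering, fetal motion between packets displaces alternating slices, which is what produces the saw-toothed artefacts of Figure 2.4.b.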

Safety of fetal MRI

Although MRI is considered safe for the fetus [150], the risks of MRI during early pregnancy are

unknown and MR scans are avoided as a precautionary measure before 18 weeks of gestation,

as the risk of miscarriage is higher during the first trimester. In addition, the typical resolution



Figure 2.4: Example scans with the native 2D slices in sagittal orientation, acquired with a slice thickness of 4.4mm (a) and a slice thickness of 1.3mm (b). A zoomed patch of the coronal section highlights the staircase effect due to the slice thickness in (a) and interleaving artefacts due to fetal motion in (b).

Figure 2.5: As the time required to acquire data for a single slice is less than TR, data for other slices can be acquired during the remaining time. Adjacent slices are not acquired sequentially but are separated in time, so that each slice is excited by a specific RF pulse without being affected by adjacent slices (figure adapted from Levine [92]).

of MRI (1mm³) would be too coarse for the embryo, which measures less than 2cm at 8 weeks
[113], while the fetus measures 15cm at 18 weeks [6]. In comparison, ultrasound has a typical
resolution of 0.1mm³, which is well suited for imaging the embryo during the first trimester of

pregnancy.


A general concern regarding MRI is to screen patients for any ferromagnetic materials, which

may turn into dangerous projectiles due to the strong magnetic fields. Additionally, the energy

deposited by RF pulses can lead to tissue heating. Following the guidelines of the UK National

Radiological Protection Board [106], the temperature of any mass of tissue should not rise
by more than 1°C. As a precautionary measure, the body temperature of pregnant women

is checked prior to the MRI examination, which is not carried out in case of an abnormally

high temperature. The maternal tissues that surround the fetus have been shown to sufficiently
attenuate the high level of acoustic noise during the MRI examination, which is caused by the
rapidly oscillating electromagnetic currents within the gradient coils [8, 60].

Role of fetal MRI

The main focus of fetal MRI is imaging of the brain due to its crucial role in fetal development [130]. The soft tissue contrast of MR imaging enables a detailed evaluation of the central

nervous system in a manner not possible with ultrasonography [94]. The information provided

by MR images improves clinical diagnosis and aids in patient counselling, where it may as-

sist in the decision to continue or discontinue a pregnancy, as well as in planning delivery or

perinatal care. Additionally, fetal MRI is used in the planning of in-utero surgery or ex-utero

intrapartum treatment (EXIT), which is a Caesarean section immediately followed by a surgi-

cal procedure while the fetus is still attached to the umbilical cord [93]. EXIT procedures are

mostly performed to remove an obstruction in the airways of the fetus. There is also a growing

interest in using MRI to study other aspects of fetal development, such as assessing the severity

of intrauterine growth restriction (IUGR) [37], the development of the fetal lungs [76] or the

fetal musculoskeletal system [107]. The soft tissues of the fetal cleft lip and palate, which are

difficult to visualise with ultrasound, are another area where MRI shows potential [140]. The

work of Levine [92] provides a comprehensive illustration of the normal and abnormal fetal

anatomy that can be visualised using MRI.

This thesis focuses on structural T2-weighted MRI of the fetus, which enables detailed observations of the fetal anatomy. Other applications of Magnetic Resonance to the fetus are

functional MRI to study the brain activity of the fetus [48], diffusion weighted MRI to study

the brain connectivity [146], or cine sequences, which can be used to study fetal behaviour [64].

2.1.3 Motion correction of fetal MRI

Motion correction methods have revolutionised MRI of the fetal brain by reconstructing a

high-resolution 3D volume of the fetal brain from orthogonal acquisitions of motion corrupted

stacks of 2D slices (Figure 2.6). Such reconstructions are valuable for both clinical and research

applications. They enable a qualitative assessment of abnormalities suspected on antenatal ul-

trasound scans [130] and are an essential tool to gain a better understanding of the development

of the fetal brain, pushing back the frontiers of neuroscience [146].

In order to correct for bulk motion artefacts, motion correction methods can be performed

prospectively, namely by adapting the imaging sequence in real-time during the scan, or retro-

spectively in a post-processing step. Prospective methods are often navigator-based [163, 167]

or self-navigated sequences [116]. A navigator is an estimate of the anatomical positioning

that is used to update the acquisition process as it proceeds. It can be derived from external

markers, such as an electrocardiogram (ECG) when imaging the adult heart [163], from addi-

tional MRI measurements [167], or from the acquired data itself in the case of self-navigated

sequences [116]. In contrast to imaging adults or children, external tracking devices cannot

be used to track fetal movement and prospective motion correction methods can only rely on

image-based tracking. Bonel et al. [11] proposed to image the fetal brain using an image-based

navigator to trigger fast snapshot slice acquisition while the fetus is stationary. However, the

proposed approach greatly increases the scanning time while not being robust to vigorous fetal

motion [98]. In clinical practice, fetal MRI data is usually acquired as multiple stacks of 2D

slices in multiple directions, using fast acquisition methods and overcoming motion artefacts

by repeating the acquisition process multiple times. This has motivated the development of

retrospective motion correction methods based on image registration and superresolution [145].


Figure 2.6: Diagram of the motion correction of fetal brain MRI: orthogonal stacks of 2D slices are combined into a motion corrected 3D volume.

Several methods have been proposed to correct fetal motion retrospectively and reconstruct

a single high-resolution 3D volume from orthogonal stacks of misaligned 2D slices. The most

common approach is slice-to-volume registration, where slices are iteratively registered to an es-

timation of the reconstructed 3D volume [58, 71, 87, 121]. Another approach proposed by Kim

et al. [81] formulates the motion correction problem as the optimisation of the intersections of

all slice pairs from all orthogonally planned scans. In addition to recovering the rigid transfor-

mations between the acquired slices and the motion corrected volume, most of these methods

include a bias field correction as well as a superresolution step. The method of Kuklisova-Murgasova et al.
[87] also incorporates the detection of slices that do not match the reconstructed volume,

either due to a failed registration or excessive motion artefacts. A comprehensive overview of

motion correction methods is given in Studholme and Rousseau [146]. While most published

methods focus on applying motion correction to the fetal brain, Kainz et al. [74] applied the

method of Kuklisova-Murgasova et al. [87] to the reconstruction of the fetal thorax [73], and

later extended it to the whole fetal body using a patch-based approach [75].

Motion correction and reconstruction methods generally require a substantial amount of

preprocessing. In all proposed methods, the fetal head is considered a rigid object moving

independently from the surrounding maternal tissues. Therefore, a manual segmentation is

required to isolate the fetal brain from the rest of the image. The organ localisation methods


presented in this thesis can be used to automate this preprocessing step (Figure 2.6). In the

following section, we outline the main principles of slice-to-volume registration with superreso-

lution [58, 87, 122], which corresponds to the motion correction method that has been used in

the work presented in this thesis.

Slice-to-volume registration with superresolution

Retrospective motion correction of fetal MRI is both a registration problem, as it seeks to align

each 2D slice with the anatomy of the fetus, and a superresolution problem, as low resolution

images are combined to produce a higher resolution image [68]. The problem of aligning the

acquired 2D slices $Y_1, \ldots, Y_K$ into a motion corrected 3D volume $X$ can be formulated as follows.
In order to account for the change of contrast between slices, we denote by $Y_k^*$ the scaled and
bias corrected slice $Y_k$. The intensities of the voxels of $Y_k^*$ can be written $y_{jk}^* = s_k e^{-b_{jk}} y_{jk}$, where
$s_k$ is a global scale factor for the $k$-th slice and $b_{jk}$ the bias field at the $j$-th voxel. Considering
that the brain is a rigid body, the relation between the acquired slices and the motion corrected
volume can then be written:

$$Y_k^* = D B T_k X + \varepsilon_k \tag{2.1}$$

where $T_k$ is a rigid transformation, $B$ is a blur matrix, $D$ is the subsampling matrix and $\varepsilon_k$ is
the observation noise [87, 121].
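As a rough illustration of this acquisition model, the sketch below simulates a single axis-aligned thick slice from a toy volume, approximating the blur B with a box-shaped slice profile and taking the rigid transformation to be the identity (all names and parameters are illustrative):

```python
import numpy as np

def simulate_slice(volume, z, thickness=4, sigma_noise=0.0, rng=None):
    """Toy version of Y_k = D B T_k X + eps_k for one axis-aligned slice:
    B averages `thickness` planes (box slice profile), D picks one z-plane,
    and T_k (fetal motion) is assumed to be the identity here."""
    z0 = max(0, z - thickness // 2)
    slab = volume[z0:z0 + thickness]     # planes covered by the slice profile
    slice_2d = slab.mean(axis=0)         # apply B, then D
    if sigma_noise > 0:
        rng = rng or np.random.default_rng(0)
        slice_2d = slice_2d + sigma_noise * rng.standard_normal(slice_2d.shape)
    return slice_2d

volume = np.zeros((16, 32, 32))
volume[6:10, 10:22, 10:22] = 1.0         # a bright cube standing in for an organ
y = simulate_slice(volume, z=8)          # thick-slice view through the cube
```

Motion correction inverts this forward model: given many such slices with unknown $T_k$, it estimates the transformations and a volume $X$ consistent with all of them.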

Motion correction is formulated as a least squares minimisation problem and solved using

gradient descent. In the absence of an image prior, high-frequency noise can be introduced in

the least squares solution. A regularisation term R(X) is therefore added to the minimisation

problem as a penalty on the image derivatives. An edge-preserving regularisation term can be

used to minimise the smoothing effect of regularisation [23]:


Figure 2.7: The acquired 2D slices are iteratively registered to the reconstructed 3D volume [87] with an increasing image resolution, from the initial average through iterations 1, 3, 5, 7 and 9. The segmentation of the brain is updated during the last iterations of motion correction, as presented in Chapter 5.

$$R(X) = \sum_i \sum_d \varphi\left(\frac{x_{i+d} - x_i}{\delta |d|}\right) \tag{2.2}$$

where $\varphi(t) = 2\sqrt{1 + t^2} - 2$ and $d$ represents a vector between the index of a voxel and one of its
26 neighbours. $\delta$ is a parameter that defines the difference of voxel intensities corresponding to
an edge. Noting $e_{jk}$ the $j$-th voxel of the reconstruction error $\varepsilon_k$, the least squares minimisation
can thus be written as:

$$\sum_j \sum_k e_{jk}^2 + \lambda R(X) \tag{2.3}$$

where λ is a regularisation weight. Motion correction is an iterative procedure that alternates

between slice-to-volume registration and superresolution. Following a multiscale approach, the

spatial resolution of the reconstructed volume is increased throughout this iterative process

(Figure 2.7).
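The edge-preserving regulariser of Equation 2.2 can be evaluated directly; the sketch below computes R(X) over the 26-neighbourhood of a small volume (a naive implementation for clarity, with periodic boundary handling for brevity):

```python
import numpy as np
from itertools import product

def phi(t):
    """Edge-preserving penalty: quadratic for small t, linear for large t."""
    return 2.0 * np.sqrt(1.0 + t ** 2) - 2.0

def regulariser(x, delta=1.0):
    """R(X) = sum_i sum_d phi((x_{i+d} - x_i) / (delta * |d|)) over the
    26-neighbourhood offsets d (np.roll wraps at the boundary for brevity)."""
    r = 0.0
    for d in product((-1, 0, 1), repeat=3):
        if d == (0, 0, 0):
            continue
        shifted = np.roll(x, shift=d, axis=(0, 1, 2))
        norm_d = np.sqrt(sum(c * c for c in d))
        r += phi((shifted - x) / (delta * norm_d)).sum()
    return r

x = np.zeros((4, 4, 4)); x[2:, :, :] = 1.0   # a volume with one sharp edge
smooth = np.full((4, 4, 4), 0.5)             # a perfectly flat volume
assert regulariser(smooth) < regulariser(x)  # only intensity differences are penalised
```

Because phi grows linearly rather than quadratically for large differences, genuine edges are penalised far less than they would be under a standard quadratic smoothness prior.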

2.2 Fetal development

Besides motion artefacts, the main challenges when developing methods to automatically process

fetal MRI are the unpredictable position and orientation of the fetus, as well as the large

variability in shape and appearance due to fetal development. In this section, we consider the


incidence of cephalic presentation and the use of standard growth charts as priors that can

guide an automated analysis of fetal MRI. We also look at the main appearance patterns that

characterise the fetal anatomy in T2-weighted MRI.

2.2.1 Incidence of cephalic presentation

Figure 2.8: These two images correspond to the same maternal position, but a position of the fetus rotated by 90°: cephalic presentation (a) and breech presentation (b). Both fetuses are 30 weeks of gestational age.

Towards the end of pregnancy, most babies turn spontaneously into the cephalic presenta-

tion, with the fetal head descending into the pelvic cavity (Figure 2.8.a). The incidence of

breech presentation, corresponding to the feet or sacrum in the pelvic cavity (Figure 2.8.b),

decreases from about 20% at 28 weeks of gestation to 3–4% at term [126]. The orientation

of the fetus thus becomes more predictable as the due date approaches. The probability of a

cephalic presentation could hence be envisaged as prior knowledge when localising the fetus

near term. A localisation pipeline can then consider implementing a bias toward a cephalic

presentation, assuming the orientation of the mother is known. However, diagnostic scans are

usually performed as early as possible during the pregnancy [119] while the head engagement,

which is the movement of the fetus to cephalic presentation, occurs during the third trimester.

The position of the fetus relative to the mother is thus not considered in this thesis as a robust


prior when automatically localising the fetus. The chosen approach is instead to consider all

maternal tissues as image background that bears few cues for the position and orientation of

the fetus.

2.2.2 Fetal growth

Figure 2.9: Healthy fetuses at 24, 30 and 38 weeks of gestational age, highlighting the anatomical changes taking place during the development of the fetus. The top row images are manual segmentations from Damodaram et al. [37], while the bottom row images were obtained after motion correction using the method of Kuklisova-Murgasova et al. [87], with the interleaved segmentation and motion correction presented in Chapter 5.

The development of the fetus during the second and third trimester is characterised by an

increase in size, as well as a maturation of tissues leading to a change of tissue contrast in MRI

(Figure 2.9). In particular, the length of the fetus increases by 50% between 23 weeks and term

(40 weeks) [6]. Several anatomical measurements are usually performed during an ultrasound

examination. In particular, the occipitofrontal diameter (OFD) and the biparietal diameter

(BPD) characterise the size and shape of the fetal head, and the crown-rump length (CRL)

is a measure of the global size of the fetus (Figure 2.10). These measurements are compared


to standard growth charts² established from large scale studies [25], in order to assess the

development of the fetus. The gestational age is typically calculated from the date of the

last menstrual period, and confirmed with data from early antenatal ultrasound scans. These

biometric measurements are used as priors in the localisation methods presented in this thesis.
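Using a growth chart as a prior then amounts to interpolating the expected measurement for a given gestational age; in the sketch below the chart values are illustrative placeholders, and a published chart such as Snijders and Nicolaides [143] should be used in practice:

```python
import numpy as np

# Illustrative (gestational age in weeks, median measurement in mm) pairs;
# these are placeholders, not values taken from an actual growth chart.
BPD_CHART = [(14, 28.0), (20, 48.0), (26, 66.0), (32, 82.0), (39, 95.0)]

def expected_bpd(ga_weeks):
    """Linearly interpolate the expected biparietal diameter (mm)
    for a given gestational age."""
    ages, values = zip(*BPD_CHART)
    return float(np.interp(ga_weeks, ages, values))

# For a 30-week fetus, the expected BPD can constrain the radius of the
# search region when localising the fetal brain.
print(round(expected_bpd(30), 1))
```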


Figure 2.10: Crown-rump length (CRL), occipitofrontal diameter (OFD) and biparietal diameter (BPD) are biometric measurements commonly used when assessing the development of the fetus (median and 5th/95th centile curves shown against gestational age). OFD and BPD measurements are reproduced from Snijders and Nicolaides [143], and CRL from Archie et al. [6].

It can be noted that the growth of the fetus is allometric [66], namely that different parts of

the body grow at different rates, although this is more pronounced during the embryogenesis

than during fetal development. When using biometric measurements from standard growth

charts as priors in the localisation process, it is important to consider the risks of abnormal

fetal development, such as IUGR [37], in the validation of the proposed methods. IUGR is

²http://www.rcpch.ac.uk/research/uk-who-growth-charts#charts


diagnosed by an estimated fetal weight below the 5th centile during ultrasound examination.

IUGR fetuses are abnormally small for their gestational age (Figure 2.11).

Figure 2.11: Healthy and IUGR fetuses at the same gestational age (27 weeks). The size of the head is preserved but the rest of the body is abnormally small.

2.2.3 Appearance patterns in fetal MRI

Although the relative ordering of MR intensity values between different tissue types will be

consistent across subjects and across scanners, there are no absolute intensity values that can

be used to characterise homologous tissue types across subjects. In order to make comparisons

on the relative intensity of specific anatomical regions across different subjects, a reference

intensity must be taken. For instance, Levine et al. [95] chose the amniotic fluid as a reference

intensity and observed that the T2 signal intensity is higher in the lungs of older gestational age

fetuses. Methods have been proposed to normalise the intensity distribution across different

MR images [109]. However such methods assume that the different images to normalise share

a similar tissue distribution, which is not the case for fetal MRI due to the amniotic fluid and

maternal tissues surrounding the fetus. This motivates the choice of image features which are

robust to contrast changes and intensity shifts in the methods developed in this thesis.
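Normalising against a reference region, in the spirit of Levine et al. [95], can be sketched as dividing intensities by the median intensity of a reference mask (a hypothetical example; variable names are illustrative):

```python
import numpy as np

def normalise_to_reference(img, reference_mask):
    """Express intensities relative to a reference region (e.g. amniotic
    fluid), so that ratios become comparable across subjects and scanners."""
    ref = np.median(img[reference_mask])
    return img / ref

img = np.array([[100.0, 200.0], [400.0, 200.0]])
mask = img == 200.0                      # stands in for an amniotic fluid mask
normed = normalise_to_reference(img, mask)
# The reference region maps to 1.0; other tissues become relative ratios.
```

Such relative intensities make statements like "the lungs are brighter than the amniotic fluid" comparable across subjects, even though absolute MR intensities are not.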

In order to automate the analysis of fetal MRI, it is important to identify the recurring
intensity patterns that are consistent across a large range of fetuses. Some observations
can be made after computing the average image of a healthy fetus from a database of fetuses


Figure 2.12: Average image of 30 healthy fetuses after normalisation and alignment defined by anatomical landmarks, shown in the coronal, sagittal and transverse planes. It highlights the characteristic intensity patterns in fetal MRI for the brain, heart, lungs, stomach and liver. The bottom row corresponds to the average of each individual segmentation (body, brain, heart, left lung, right lung and liver).

that have been manually segmented [37]. Figure 2.12 was obtained after aligning 30 healthy

fetuses with a gestational age from 24 to 38 weeks. This alignment has been performed using

anatomical landmarks, after normalising the size of the fetuses according to their gestational

ages.

The first consideration which can be made is that the fetal body is composed of two main

rigid components, the head and the trunk (comprising the thorax and abdomen). The limbs
are not visible in Figure 2.12 due to their large degrees of freedom. A second consideration

can be made based on the contrast patterns of the fetal organs that are distinguishable in this

average image. The brain, lungs and stomach display high intensity patterns, while the heart

and the liver appear darker. The heart appears as a black sphere due to the signal loss when

imaging the fast beating heart of the fetus.

A third consideration is the left-right asymmetry of the human body: the stomach is on the

left side and the liver on the right side. The left lung is slightly smaller than the right lung,

the heart being on the left side of the body. This asymmetry has practical implications when


developing a machine learning based model of the fetus. In particular, it is important to use

a consistent left/right convention before extracting features that are computed on the image

grid. As detailed in Appendix A, there are indeed two possible conventions, neurological or

radiological, which correspond respectively to a direct or an indirect basis for the coordinate

system of the scanner. Additionally, mirroring images is a common practice in machine learning

in order to augment the size of the training database. This approach can be considered for the

fetal head, but not for the whole body of the fetus.
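This safeguard on mirror augmentation can be made explicit; in the sketch below the `is_symmetric` flag is hypothetical and would be set per anatomical region:

```python
import numpy as np

def augment_with_mirror(volume, is_symmetric):
    """Return the original volume, plus its left-right mirror only when the
    anatomy is (approximately) symmetric: acceptable for the fetal head, but
    not for the whole body, whose stomach/liver lateralisation would be
    corrupted by flipping."""
    samples = [volume]
    if is_symmetric:
        samples.append(volume[:, :, ::-1])   # flip along the left-right axis
    return samples

head = np.random.default_rng(0).random((8, 8, 8))
assert len(augment_with_mirror(head, is_symmetric=True)) == 2
assert len(augment_with_mirror(head, is_symmetric=False)) == 1
```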

2.2.4 Modelling of the fetal body in MRI

A skeleton-based model, using joint articulations as landmarks, is appealing as it corresponds
to the intuitive representation of the human body. There are currently two publications

which propose such a model. The method of Anquez et al. [5] is semi-automated, with a manual

selection of landmarks in the arms and legs, while the work of Klinder et al. [83] only considers

the feasibility of estimating the pose of a skeleton model from candidate landmarks, leaving

the detection step of these landmarks as a future development.

In this thesis, we advocate for a blob-like model instead of a skeleton based model and focus

on the detection of the centre of different organs (brain, heart, lungs and liver). This choice
is motivated by the fact that these organs are large compared to the slice
thickness and, due to their blob-like shape, appear as relatively coherent 3D structures

despite motion artefacts. The bones of the fetus, which are thin tubular structures, may not

form a 3D structure as coherent as the previously mentioned fetal organs in the case of fetal

movement. Moreover, unlike the bones of the fetus, these organs are clearly visible in the ssFSE

and single shot turbo spin echo (ssTSE) sequences used throughout this thesis. This choice of

model might be different for another MRI sequence, as the bones of the fetus are particularly

visible in EPI sequences [107].


2.3 Imaging data

The imaging data used for the evaluation of the methods presented in this work was acquired

in the context of previous works carried out at Hammersmith Hospital [37, 136].

2.3.1 Dataset 1: used for training and cross-validation of the fetal brain localisation and segmentation methods

Between 2007 and 2011, a cohort of 80 fetuses with normal brain appearances was scanned for

the purpose of constructing a spatio-temporal atlas of the developing brain (Serag et al. [136]).

A subset of this data was obtained and used in the present work to evaluate the proposed

method for the localisation and segmentation of the fetal brain.

Fetal MRI was performed on a 1.5T Philips Achieva MRI system using a 32 channel cardiac

array for signal reception. Ethical approval was granted by the Hammersmith Hospital Ethics

Committee (ethics no. 97/H0707/105). Stacks of T2-weighted dynamic ssTSE images were

obtained with the following scanning parameters: TR 15000ms, TE 160ms, slice thickness of

2.5mm, slice overlap of 1.25mm, flip angle 90°. The in-plane resolution is 1.25mm. There were

50–100 slices per stack divided into 4–6 interleaved packets, with one packet acquired within

a single TR. Acquisition times are about 1–2min per stack, all stacks being acquired within

15–20min. A parallel imaging (SENSE) speed-up factor of two was applied in all stacks.

Dataset 1 includes 59 healthy fetuses whose gestational ages range from 22 to 39 weeks, with

5 fetuses scanned twice and 1 scanned three times, amounting to a total of 66 sets of scans. Each set

consists of 3 to 8 scans acquired in three standard orthogonal planes, representing a total of

117 sagittal, 113 coronal and 228 transverse scans. Ground truth annotations were performed

specifically for this thesis. Bounding boxes have been tightly drawn manually around the brain

using purpose-built software3.

3https://github.com/kevin-keraudren/crop-boxes-3D


2.3.2 Dataset 2: used for training and cross-validation of the method

to localise the fetal heart, lungs and liver

Dataset 2 and dataset 3, which consist of 1.5T T2-weighted ssTSE images, were used in the

present work to evaluate the proposed method to localise the fetal heart, lungs and liver.

The imaging data in dataset 2 has been acquired between 2007 and 2009 for a study as-

sessing the usability of MRI to characterise IUGR (Damodaram et al. [37]). Ethical ap-

proval was granted by the Hammersmith Hospital Ethics Committee (ethics no. 2003/6375

& 07/H0707/105). It consists of 55 scans covering the whole uterus for 30 healthy subjects and

25 IUGR subjects (scanning parameters: TR 1000ms, TE 98ms, 4mm slice thickness, 0.4mm

slice gap). Gestational ages range from 20 to 38 weeks. A ground truth segmentation for all fetal

organs was provided by an expert [37]. However, these manual segmentations were provided for

cropped versions of the original scans and did not include the metadata corresponding to the

scanner coordinates. Template matching was used in order to align these segmentations with

the original volume of the whole uterus.

2.3.3 Dataset 3: used for testing the method to localise the fetal

heart, lungs and liver

Dataset 3 corresponds to 64 sets of scans from the same 59 healthy subjects as in dataset 1,

representing a total of 224 stacks of 2D slices. It corresponds to the same data as dataset 1

after exclusion of the scans that do not cover the thorax and abdomen of the fetus. The centres

of the heart, left lung, right lung and liver have been manually annotated in ITK-SNAP [173]

to provide ground truth for the evaluations carried out in this thesis.


2.4 Challenges in the automated analysis of fetal MRI

The main challenges facing the automated analysis of fetal MRI can be summarised as:

• The arbitrary position and orientation of the fetus;

• The fetus being bathed in amniotic fluid and surrounded by maternal tissues;

• The large variability in shape and appearance due to fetal development;

• The presence of motion artefacts caused by both the fetus and the mother.

We observed that the gestational age can be used to infer the size of the fetus, thus reducing

the number of unknown parameters in the localisation of fetal organs. Nevertheless, a large

degree of variability still remains between MR images of different fetuses. This motivates the

choice of an approach based on machine learning, which is introduced in the next chapter,

in order to automatically localise the fetal organs. Such an approach learns from annotated

examples and automatically discovers patterns in the images that allow it to generalise from a

training database to new subjects.


Chapter 3

Machine learning and object detection

This chapter presents how machine learning can be applied to the task of object detection,

following the extraction of features that summarise local information from the image data. A

detection pipeline for fetal organs must accommodate the unknown orientation of the fetus. The

emphasis in this chapter is thus placed on the choice of image features and on the different

search strategies, rather than on the machine learning algorithms themselves. We then review the liter-

ature on organ localisation, including atlas-based methods and machine learning approaches.

3.1 Introduction

Machine learning is the study of generic algorithms able to uncover structure in the data

without being explicitly programmed [131]. Within the field of machine learning [63], supervised

learning focuses on learning from annotated examples: classification is the prediction of discrete

variables, such as categories, while regression is the prediction of continuous variables. In order

to accommodate the variable appearance of different objects within the same category, object

detection typically relies on a machine learning framework and is formulated as a supervised

learning problem. A machine learning algorithm is trained on training data, using validation

data to adjust the parameters of the model. Testing data, which has not been seen by the


detector during training or validation, is then used to assess the accuracy of the trained model.

In order to demonstrate that a trained model is able to generalise to unseen data, it is paramount

to preserve the separation between the training and testing datasets. A model which performs

accurate predictions for the training data but has a poor performance for unseen data is said

to be overfitting [63].

For the task of object detection, a trained detector is typically applied at every pixel location

in the image, in what is called a sliding window approach [161]. As individual pixel values may

not be sufficient to characterise the presence of an object, features are computed from the image

data in a local neighbourhood, or window. Examples of image features, also called descriptors,

are presented in Section 3.2. We present the most widely used image features for the task

of object detection, namely convolutional features, rectangle features, Histogram of Oriented

Gradients (HOG) features [36] and Scale Invariant Feature Transform (SIFT) features [97], as

well as the lesser-known spherical 3D Gabor descriptors [139], which are of particular interest

in the context of rotation invariant object detection. These different image features cover

several properties of image descriptors, in particular sparse or dense computation and rotation

invariance. While some features, such as SIFT features, have been carefully engineered, others

are based on a simple design and a training step that is used to shape them into discriminative

descriptors, as in the case of convolutional and rectangle features.

The extracted features are then passed to a machine learning algorithm, such as an AdaBoost

classifier, a Support Vector Machine (SVM) or a Random Forest [15], introduced in Section 3.3.

In Section 3.4, we review different search strategies that can be found in the literature in order

to detect objects of unknown scale or orientation, to efficiently reduce the search space, and to

detect objects composed of multiple parts. Finally, Section 3.5 reviews the current literature

on organ localisation, with a particular focus on applications dedicated to the fetus.


3.2 Image features

3.2.1 Convolutional features

Linear filters, which convolve an image with a kernel and compute for each pixel a weighted

sum of its local neighbourhood, are widely used to extract features from images [147]. The

convolution of a 2D image I with a kernel W results in the image F given in Equation 3.1:

$$F(i, j) = \sum_{k,l} I(i-k,\, j-l)\, W(k, l) \qquad (3.1)$$

where $(i, j)$ and $(k, l)$ are indices over the (width, height) of the image $I$ and the kernel $W$.

As a convolution in image space corresponds to a product in Fourier space, the Fast Fourier

Transform (FFT) can be used to perform large-kernel convolutions in a time independent of

the kernel size. The work of Freeman and Adelson [52] on steerable filters presents a method

to efficiently compute the output of rotated filters from the convolutions of basis filters.
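As an illustration, Equation 3.1 can be transcribed directly into a few lines of Python (a naive sketch assuming NumPy and zero padding outside the image, not an efficient FFT-based implementation):

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct transcription of Equation 3.1:
    F(i, j) = sum_{k,l} I(i - k, j - l) W(k, l),
    with zero padding for indices falling outside the image."""
    H, W = image.shape
    kh, kw = kernel.shape
    F = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for k in range(kh):
                for l in range(kw):
                    ii, jj = i - k, j - l
                    if 0 <= ii < H and 0 <= jj < W:
                        acc += image[ii, jj] * kernel[k, l]
            F[i, j] = acc
    return F

# Convolving an impulse reproduces the kernel, shifted to the impulse location.
impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0
sobel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
response = convolve2d(impulse, sobel)
```

The impulse test above is a standard sanity check for a convolution implementation: the output contains a copy of the kernel at the impulse position.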

Figure 3.1: Several Gabor kernels are typically used to cover different orientations and scales, forming filter banks: varying frequency $\nu \in \{0.05, 0.2\}$ and orientation $\theta \in \{-45°, 0°, 45°\}$.

Convolution kernels can be either hand-crafted kernels, such as the Sobel kernel for edge

detection and Gabor filters [38, 55] (Figure 3.1) that can be used to characterise oriented

structures [156], or learnt kernels as in convolutional neural networks (CNNs) [91]. CNNs

incorporate feature extraction within the training stage of the classifier and learn a hierarchy of

convolutional filters using the backpropagation algorithm [129]. A CNN is made of a succession

of layers composed of multiple convolution kernels. Non-linear operations, such as max-pooling

[118], normalisation and the ReLU activation function f(x) = max(0, x) [103], are applied


before passing the convolution results from one layer to the following layer, resulting in feature

maps of decreasing size. The output of the last convolutional layer is a high dimensional feature

vector, which is transformed into a classification result using an activation function such as the

softmax function. Such methods have known recent successes in object detection [86] following

advances in the development of graphics processing units (GPUs) that made it possible to train

larger networks.

Besides their computational properties, an argument in favour of convolutional features is that

they provide a plausible model for the cells in the visual cortex of mammals [39]. However,

the works of Viola and Jones [161] in object detection and Varma and Zisserman [159] in

texture classification showed that simpler features, such as the rectangle features presented in

the following section, could be sufficiently discriminative for image classification tasks.

3.2.2 Rectangle features

Viola and Jones [161] proposed to use features based on differences of the mean intensity in

rectangular regions. These features, called Haar-like features due to their similarity with the

Haar basis functions [112], are defined by a pattern (Figure 3.3.a) as well as a position in the

sliding window. They can be efficiently computed using an intermediate representation of the

image called an integral image [32]. The integral image, also known as the summed area table,

is the 2D cumulative sum of an image. It stores at a given pixel the sum of all pixels to its left

and above in the original image (Equation 3.2). Using the integral image, the mean intensity

in a rectangular region can be computed in 4 table lookups, regardless of the dimensions of the

region (Figure 3.2):

$$ii(x, y) = \sum_{x' \le x,\ y' \le y} I(x', y') \qquad (3.2)$$

where $I$ is the original image, $ii$ its integral image, and $(x', y')$ and $(x, y)$ are indices over the (width,


Figure 3.2: The integral image enables the fast computation of the sum of pixels over an image patch D, with four table lookups independently of the size of the patch: denoting by 1, 2, 3 and 4 the values of the integral image at the corners of patch D, and by A, B, C, D the sums of pixels over the four adjacent patches, we have 2 = A + B, 3 = A + C, 4 = A + B + C + D, hence D = 4 + 1 − (3 + 2) (figure reproduced from [161]).

height) of $I$ and $ii$. The extension to 3D is straightforward, with the sum over an image cube

requiring only 8 table lookups. Rotated versions of the integral image have later been proposed

to compute rectangle features rotated by 45° [96], or by unit-integer rotations [100]. However, the

method of Messom and Barczak [100] does not cover all rotations and, more importantly, each

selected rotation requires the computation of a rotated integral image.
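A minimal NumPy sketch of the integral image and of the four-lookup rectangle sum (Equation 3.2, Figure 3.2) could look as follows; this is illustrative, not the implementation used in this thesis:

```python
import numpy as np

def integral_image(img):
    # ii(x, y) = sum of I(x', y') for all x' <= x, y' <= y  (Equation 3.2)
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle [top..bottom] x [left..right],
    using at most four table lookups (D = 4 + 1 - (3 + 2) in Figure 3.2)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(25, dtype=float).reshape(5, 5)
ii = integral_image(img)
```

The cost of `rect_sum` is constant whatever the rectangle size, which is what makes Haar-like features cheap to evaluate at every sliding-window position.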

Local Binary Patterns (LBPs) [110] are a different set of rectangle features that are invariant

to intensity shift, which is of particular interest when working with MR images. In the 2D case,

an LBP feature is defined by a central rectangle and its 8 adjacent neighbours (Figure 3.3.b).

For each neighbour, the mean intensity is compared to the mean intensity of the central rect-

angle, leading to a pattern within a set of $2^8$ possibilities. These features are binary as they

only compare intensities, while Haar-like features are continuous values as they compute the

difference of mean intensities.
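The encoding of a single 2D LBP can be sketched as follows, assuming the mean intensities of the central rectangle and its 8 neighbours have already been computed (e.g. via the integral image); the neighbour ordering is an arbitrary illustrative choice:

```python
import numpy as np

def lbp_code(means):
    """Local binary pattern of a 3x3 grid of mean intensities: each of the
    8 neighbours is compared to the central value, giving one of 2**8 codes."""
    center = means[1, 1]
    # clockwise order starting from the top-left neighbour (illustrative choice)
    neigh = [means[0, 0], means[0, 1], means[0, 2], means[1, 2],
             means[2, 2], means[2, 1], means[2, 0], means[1, 0]]
    bits = [1 if v >= center else 0 for v in neigh]
    return sum(b << i for i, b in enumerate(bits))
```

Because only comparisons are used, adding a constant to all intensities leaves the code unchanged, which is the intensity-shift invariance mentioned above.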

Figure 3.3: Examples of (a) Haar-like features [161], (b) local binary patterns [110], which compare the mean intensity of the central region (grey area) to its eight neighbours, and (c) simple rectangle features [31].

While Viola and Jones [161] relied on a predefined set of patterns that could include as many


as four rectangles, Criminisi et al. [31] showed that a simpler approach, which computes the

difference of the mean intensity of two patches of random sizes and at random locations in the

sliding window (Figure 3.3.c), is sufficiently discriminative to train a classifier. Pauly et al.

[114] additionally used absolute comparisons to achieve invariance to intensity shift.

3.2.3 Histogram of Oriented Gradients (HOG)

Figure 3.4: (a) Original image, (b) HOG features using the visualisation of Dalal and Triggs [36], (c) inverted HOG visualisation proposed by Vondrick et al. [162].

Dalal and Triggs [36] introduced HOG features for pedestrian detection, which have since

been applied to a variety of object detection tasks, notably in the part-based object detection

model developed by Felzenszwalb et al. [45]. HOG features are normalised local histograms of

gradient orientations that are computed in a dense overlapping grid (Figure 3.4.b). Objects are

hence characterised by local distributions of edge directions. Gradients are computed at a single

scale and fixed orientation, without preprocessing such as blurring. The computed histograms

are locally normalised over small overlapping regions, resulting in a contrast normalisation that

can be observed in Figure 3.4.c. The latter visualisation has been made using the method

proposed by Vondrick et al. [162], which is based on paired dictionary learning between HOG

features and original images. HOG features are not rotation invariant but are well suited to

the detection of objects that have a natural upright position, such as pedestrians. The detector

proposed by Dalal and Triggs [36] consists of an SVM [27] trained on the high-dimensional

vector that corresponds to the concatenation of the HOG features in a sliding window. HOG


features have been extended to 3D, with an implementation proposed by Klaser et al. [82] in

the context of action recognition in video sequences.
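The core idea of HOG, per-cell histograms of gradient orientations weighted by gradient magnitude, can be sketched in NumPy as follows; block normalisation is omitted, and the cell size and bin count are illustrative choices rather than the parameters of Dalal and Triggs:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Sketch of HOG-style features: one histogram of unsigned gradient
    orientations (in [0, 180) degrees) per cell, weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    H, W = img.shape
    ny, nx = H // cell, W // cell
    hist = np.zeros((ny, nx, bins))
    for cy in range(ny):
        for cx in range(nx):
            sl = np.s_[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            b = (ang[sl] / (180.0 / bins)).astype(int) % bins
            np.add.at(hist[cy, cx], b.ravel(), mag[sl].ravel())
    return hist

# A vertical edge produces gradients at 0 degrees, i.e. votes in the first bin.
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0
h = hog_cells(edge, cell=8, bins=9)
```

The final descriptor of Dalal and Triggs is obtained by additionally normalising these histograms over small overlapping blocks and concatenating them.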

3.2.4 Scale Invariant Feature Transform (SIFT)

While convolutional features, rectangle features and HOG features are typically computed at

every pixel location, a different approach is to first detect keypoints, such as blobs or corners,

before extracting features. As the keypoint detection step largely reduces the number of fea-

tures, these features can have a more complex design. SIFT features are the association of

a blob detector, where keypoints correspond to local extrema in a scale-space Gaussian pyra-

mid, and a descriptor built from histograms of gradient orientations [97]. The blob detection

process provides scale invariance to the descriptor, whereas rotation invariance is obtained from

the main gradient orientation over the blob. Both SIFT and HOG are based on histograms

of gradient orientations: a major difference between these two image descriptors is that the

gradient orientations are defined in SIFT using a local orientation, thus providing a rotation

invariant descriptor, while HOG features are not rotation invariant.

Figure 3.5: After the detection of keypoints at different scales, a local orientation (yellow) is used to define a 4 × 4 grid (green) and extract a patch descriptor. This descriptor is composed of histograms of gradient orientations computed in each cell of the grid.


SIFT features were originally designed in order to find exact matches between different views

of the same object, with typical applications in image stitching [16] or structure from motion

[142]. In order to use SIFT features in a machine learning framework, which requires image

features to be generalisable between different objects of the same category, a typical approach is

the Bag-of-Words model [33]. A vocabulary of SIFT features is first learnt from the training

data, for instance using the K-means algorithm [171]. The features passed to the machine

learning algorithm are then histograms of SIFT features matched to their nearest neighbour in

this vocabulary. This is a direct analogy with text categorisation. Several keypoint detectors

and descriptors have been proposed since the work of Lowe [97], notably Speeded Up Robust

Features (SURF) [9] and ORB features (Oriented FAST and Rotated BRIEF) [127] that also

provide scale and rotation invariance while reducing the computational time. SIFT descriptors

can be extracted on a dense grid at a fixed scale and orientation, without keypoint detection,

in order to obtain dense SIFT features (DSIFT) [90].
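The Bag-of-Words encoding step can be sketched as follows, assuming a vocabulary of visual words has already been learnt (e.g. with K-means); the toy descriptors and vocabulary below are hypothetical:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Bag-of-Words encoding: each descriptor is matched to its nearest
    'visual word' and the image is represented by the normalised histogram
    of word counts."""
    # pairwise squared distances between descriptors (n, d) and words (K, d)
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

vocab = np.array([[0.0, 0.0], [10.0, 10.0]])          # hypothetical 2D "words"
desc = np.array([[0.0, 1.0], [9.0, 9.0], [10.0, 11.0]])
h = bow_histogram(desc, vocab)
```

The resulting fixed-length histogram can then be fed to any standard classifier, regardless of the number of keypoints detected in the image.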

SIFT features have been extended to 3D images [51, 151]. Although the generalisation of the

keypoint detection is straightforward, defining the main orientation over the blob requires more

attention. In 2D, a single rotation is enough to define an orientation. In 3D, three rotations are

required, which can be seen as a main direction defined by two rotations, and a third rotation

around that axis.

3.2.5 Dense rotation invariant features

While SIFT features achieve rotation invariance by selecting a main orientation of the image

gradient, spherical 3D Gabor descriptors (SGDs) [139] derive their rotation invariance from the

mathematical properties of spherical harmonic expansions. The SGD coefficients $\|a^l_k(x)\|$ are

computed for an image $I$ by recursive differentiation with the spherical up-derivative $\nabla_1$:

$$a^l_k(x) = \left(\frac{\sqrt{t}}{-k}\right)^{l} \nabla_1^{l}\, (I \star B^0_{s}(k))(x) \qquad (3.3)$$


where $\star$ is the cross-correlation symbol, $l$ represents the order of the spherical harmonics (frequency in angular direction), $k$ represents the frequency of the Gabor wavelets (frequency in radial direction), and $B^0_s$ is the Gaussian windowed 0-order Bessel function; $s$ denotes the width of the Gaussian window while $t$ is a scale parameter. The angular power spectrum $\|a^l_k(x)\|$ forms a rotation invariant feature, called spherical Gabor descriptor. This rotation invariance is analogous to the translation invariance of the power spectrum of the Fourier transform.

SGD features have been used by Kainz et al. [72] to localise the brain in fetal MRI with a

Random Forest classifier. Example feature maps for a fetal MRI are presented in Figure 3.6.

Figure 3.6: Spherical 3D Gabor descriptors for a fetal MRI, computed at expansion orders $l \in \{1, 5, 15, 20\}$ and radial frequencies $k \in \{0, \pi, 2\pi\}$.


3.3 Machine learning

3.3.1 Support Vector Machines (SVM)

A two-class SVM classifier [27] aims to construct a hyperplane $w^\top x - b = 0$ that maximises the

margin, namely the distance between the closest points on either side of the boundary, called

support vectors (Figure 3.7.a). The weight vector $w$ can be expressed as a linear combination

of the support vectors, hence the name support vector machine. SVMs can be applied to

multi-class classification problems by considering a series of two-class problems, such as one-

against-one or one-against-all strategies [170]. If the data points are not linearly separable, that

is to say if the two classes cannot be separated by a hyperplane, the kernel trick [12] can be used

to map each point into a higher-dimensional space where such a separating hyperplane exists.

This mapping is never explicitly performed but only expressed in terms of inner products. The

resulting hyperplane in high-dimensional space corresponds to a non-linear hypersurface in the

original lower dimensional space (Figure 3.7.b). When using an SVM, it is important to scale

all features such that those with high variance do not dominate those with lower variance.
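In practice, this scaling step is typically a standardisation whose statistics are computed from the training set only, then reused unchanged on the test set; a minimal sketch:

```python
import numpy as np

def standardise(X_train, X_test):
    """Scale each feature to zero mean / unit variance using training
    statistics only, so that high-variance features do not dominate
    the margin of an SVM."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant features
    return (X_train - mu) / sigma, (X_test - mu) / sigma

X_train = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
X_test = np.array([[2.0, 200.0]])
Xtr, Xte = standardise(X_train, X_test)
```

Computing the statistics on the training set alone preserves the separation between training and testing data discussed in the introduction of this chapter.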

Figure 3.7: Example classification with an SVM. (a) The data is linearly separable: the separating hyperplane is represented by an unbroken line, the max margin by dashed lines, and the support vectors are highlighted in bold. (b) When the data is not linearly separable, it is transformed to a higher dimensional space where it becomes linearly separable, as in example (a). These examples are adapted from Pedregosa et al. [115].


3.3.2 AdaBoost

AdaBoost (Adaptive Boosting) [53] trains a series of increasingly discriminating weak classifiers

$h_i(x)$, then forms a single strong classifier $h(x)$ using a weighted summation that weights the

weak classifiers according to their performance (Equation 3.4). A weak classifier is defined as

having an accuracy slightly better than chance, typically just over 50% for a binary classification

problem, while a strong classifier may achieve an arbitrarily good accuracy.

$$h(x) = \operatorname{sign}\left(\sum_{i=1}^{n} \alpha_i h_i(x)\right) \qquad (3.4)$$

Each $h_i(x)$ is typically a decision stump, namely a decision tree of depth one. These weak

classifiers are trained incrementally in a greedy approach, updating the weights of the training

samples after each stage depending on their current classification results. Initially, all data

points are weighted equally (Figure 3.8). In each training step, the weights of incorrectly clas-

sified examples are increased so that the following weak learners focus on the more challenging

training data points.
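This training loop (Equation 3.4, Figure 3.8) can be sketched with one-dimensional threshold stumps as weak classifiers; this is a toy implementation for illustration, not the cascade of Viola and Jones:

```python
import numpy as np

def adaboost_train(X, y, n_rounds=5):
    """Minimal AdaBoost with 1D threshold stumps as weak classifiers.
    X: (n,) feature values; y: (n,) labels in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)          # uniform initial sample weights
    stumps = []                      # list of (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for t in np.unique(X):       # exhaustive search over candidate stumps
            for pol in (1, -1):
                pred = pol * np.where(X < t, -1, 1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
        w = w * np.exp(-alpha * y * pred)       # up-weight misclassified samples
        w = w / w.sum()
        stumps.append((t, pol, alpha))
    return stumps

def adaboost_predict(stumps, X):
    # h(x) = sign(sum_i alpha_i h_i(x))   (Equation 3.4)
    score = sum(a * pol * np.where(X < t, -1, 1) for t, pol, a in stumps)
    return np.sign(score)

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_train(X, y)
```

The exponential reweighting line is the step illustrated in Figure 3.8: samples misclassified by the current stump receive a larger weight before the next stump is fitted.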

Figure 3.8: In this example, an AdaBoost classifier is trained to separate two sets of points (blue and orange circles); the panels show, from left to right, the 1st classifier, a weights update, the 2nd classifier, a weights update, the 3rd classifier and the final classifier. The final AdaBoost classifier, shown on the right, is a linear combination of simple decision stumps. The decision boundaries of these weak classifiers are represented with a blue or orange background. Between each training step, the training samples are reweighted to increase the weights of misclassified data points.

The most notable application of the AdaBoost algorithm is the work of Viola and Jones [161]

on the detection of human faces in frontal views. This face detector learns a cascade of strong

classifiers that are trained with the AdaBoost algorithm. Structuring the strong classifiers into

a cascade enables an early rejection of background regions where only the first classifiers are


evaluated.

3.3.3 Random Forests

In this section, we focus on Random Forests [15], which have had recent successes in object

detection [56, 138], in particular in medical applications [30, 35, 42, 57, 69, 72, 101, 111, 114,

144], for several reasons. First, they are able to scale to the large number of voxels

in 3D images. They can handle a large number of features (high-dimensionality of the input

space) and perform feature selection, thus ignoring features that bear little information. They

are able to cope with noise in the training data, such as wrongly labelled points, as each tree is

trained on a random subset of the training data (bootstrapping). The same model can be used

for regression and classification, and even perform both at the same time in the case of Hough

Forests [56].

A Random Forest classifier [15] is an ensemble method for machine learning which averages

the predictions of decision trees trained on random subsets of the training dataset, a process

called bagging (bootstrap aggregating). Similarly, a Random Forest regressor is an ensemble of

regression trees. Bagging reduces the variance of the predictor and helps to control over-fitting.

When growing trees in a Random Forest, each tree is built recursively starting at the root node

with a subset of the training dataset, typically two thirds. At each node of the tree, a set of

tests is randomly generated and the test that best splits the data is selected. In the case of a

classification tree, tests are selected in order to maximise the information gain (Equation 3.5),

$$I = H(S) - \sum_{i \in \{L,R\}} \frac{|S_i|}{|S|}\, H(S_i) \quad \text{where} \quad H(S) = -\sum_{c \in C} p(c)\log\big(p(c)\big) \qquad (3.5)$$

where $S$ denotes the initial dataset, $L$ and $R$ the left and right children of the node, $C$ a set

of classes, and $H$ the Shannon entropy. In the case of a regression, the test that minimises the

mean squared error (MSE) is selected (Equation 3.6):


Figure 3.9: (a) Example of a classification tree with test splits using rectangle features (b): at each node, the mean intensities of two rectangular patches A and B are compared ($\mu(I_A) > \mu(I_B)$) and data points are sent to the left or right child depending on which of the two patches has a higher mean intensity. During training, the distribution of the training samples is stored at the leaves, represented by the color histograms in (a). At test time, the prediction of the Random Forest classifier is the average of the predictions of each tree.

$$\mathit{MSE} = \sum_{i \in \{L,R\}} \sum_{j=1}^{N_i} \|x_j - \mu_i\|_2^2 \qquad (3.6)$$

where $x_j$ is a data sample, $N_i$ denotes the number of samples in the left/right child and $\mu_i$ the

average of all points in the left/right child. The data is then partitioned based on the selected

test, and a recursion takes place, until the data is considered pure (single class in the case of

a classification, minimal MSE reached in the case of a regression), or the maximal tree depth

has been reached. Each leaf of the tree stores the distribution of the training data points that

reached it. At testing time, data points go through each tree until reaching the leaves. The

prediction of the forest corresponds to the average of the distributions stored at the leaves.

Figure 3.9 presents an example of a classification tree using rectangle features.
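The classification split criterion of Equation 3.5 is straightforward to compute; a minimal NumPy sketch:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy H(S) = -sum_c p(c) log2 p(c), in bits
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent, left, right):
    # I = H(S) - sum_{i in {L,R}} |S_i|/|S| H(S_i)   (Equation 3.5)
    n = len(parent)
    return (entropy(parent)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

# A split that separates the two classes perfectly gains one full bit.
parent = np.array([0, 0, 1, 1])
gain = information_gain(parent, parent[:2], parent[2:])
```

At each node, the randomly generated test maximising this quantity is retained, which is what progressively purifies the class distributions stored at the leaves.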

Only the tests which have been selected at training time need to be computed at test time,

which allows ad-hoc implementations of Random Forests to be very efficient. This allows the


size of the space that must be explored at test time to be much smaller than in the training

phase. However, it is also possible to precompute a finite set of features $(x_1, \ldots, x_n) \in \mathbb{R}^n$ in order

to use a standard implementation of Random Forests [115], where each split $h_{i,\lambda}$ corresponds

to a threshold $\lambda$ applied to a chosen component $x_i$:

$$h_{i,\lambda}(x_1, \ldots, x_n) = [x_i < \lambda] \qquad (3.7)$$

In Figure 3.10, each data point is a point $(x_1, x_2) \in \mathbb{R}^2$ and the training data consists of three

classes: red, green and blue spirals. A Random Forest is trained to assign each point of $\mathbb{R}^2$

to one of the three spirals. This figure highlights the impact on the classification boundary of

using a single tree or an ensemble of trees. It also highlights the effect of using different weak

classifiers at the node of the trees. In a standard Random Forest implementation, each split

consists of choosing a threshold, which is applied to either the first or the second axis (axis-aligned

splits). A different set of splits might consist of selecting lines in $\mathbb{R}^2$ (linear splits). This

could be implemented within the nodes of the Random Forest, or the input data can simply be

transformed to a higher dimensional space $\mathbb{R}^n$ as follows:

$$(x_1, x_2) \mapsto (x'_1, \ldots, x'_n) \quad \text{where} \quad x'_i = \cos(\theta_i)(x_1 - a_i) + \sin(\theta_i)(x_2 - b_i) \qquad (3.8)$$

This approach, which can be found in the literature on organ localisation [69, 102], is taken in

Chapter 6. It comes with an increased runtime and higher memory cost than ad-hoc Random

Forests, but comparable accuracy thanks to the feature selection taking place during training,

while simplifying the implementation process.
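The feature transform of Equation 3.8 can be sketched as follows: each 2D point is projected onto randomly oriented lines, so that an axis-aligned threshold in the new space acts as a linear split in $\mathbb{R}^2$. The number of features and the random seed are arbitrary illustrative choices:

```python
import numpy as np

def linear_split_features(X, n_features=16, seed=0):
    """Map 2D points (n, 2) to n_features projections
    x'_i = cos(theta_i)(x1 - a_i) + sin(theta_i)(x2 - b_i)   (Equation 3.8),
    with random orientations theta_i and random offsets (a_i, b_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, n_features)
    a = rng.uniform(X[:, 0].min(), X[:, 0].max(), n_features)
    b = rng.uniform(X[:, 1].min(), X[:, 1].max(), n_features)
    # broadcasting: (n, 1) against (n_features,) gives (n, n_features)
    return np.cos(theta) * (X[:, [0]] - a) + np.sin(theta) * (X[:, [1]] - b)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
F = linear_split_features(X, n_features=8)
```

Thresholding any column of the output at zero partitions the plane along the corresponding line, which is how the standard axis-aligned forest of Equation 3.7 is given linear splits.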

In the context of object detection, a Random Forest classifier assigns a label to each voxel,

resulting in a segmentation of the input image, while a Random Forest regressor assigns an offset

vector to each voxel, allowing it to vote for the location of landmarks or objects. Such landmarks

do not necessarily correspond to anatomical locations: for instance, Criminisi et al. [31] used


Figure 3.10: Example of using different weak learners in Random Forests: axis-aligned and linear splits, comparing the decision boundaries of a single tree, of 100 trees (class probabilities), and of 100 trees.

regression forests to predict the offsets from the current patch to the corners of the bounding

boxes of several organs. Each detected organ thus results from the votes of different parts of the

image, and not only from the current position of a sliding window. Classification and regression

forests can be combined so that only voxels assigned to a certain class can vote for the location

of specific landmarks. This corresponds to the Hough Forest [56], which is composed of trees

that randomly alternate decision and regression nodes. The Hough transform, well-known for

line detection [160], is the process of accumulating votes in a parameter space.

3.4 Search strategy

A typical object detection pipeline extracts a set of features for every pixel in the image and

assigns a label to the current pixel (classification) or votes for the location of landmarks (regres-

sion). In this section, we discuss the different strategies that can be adopted in order to account

for the unknown scale and orientation of objects to be detected, as well as some considerations

on possible optimisations for the search.


3.4.1 Scale and orientation

In the case of fetal MRI, the scale can be inferred from the gestational age and image sampling

size, as mentioned in Chapter 2. It will, however, be discussed in this section for the purpose of

generality.

In order to detect objects of unknown size, Viola and Jones [161] proposed a multi-scale

sliding window approach. In this approach, all objects are aligned and scaled to the same size

during training. At test time, the sliding window and its associated features are resized to

explore different scales. Resizing the rectangle features is far less expensive than resizing the

original image as it allows the integral image to be computed only once. This sliding window

approach can only accommodate a small amount of rotation (typically ±10°), and the detector

is applied to rotated versions of the image in order to account for the unknown orientation

[84]. Rowley et al. [125] proposed a two-step approach to address the unknown orientation

problem without resorting to an exhaustive search. A regressor (router network) first predicts

an orientation for each position of the sliding window, independently of the presence or absence

of the object to detect. A subsequent detector can then be applied on the rotated window and

assumes that the object, if present, is in an upright position.
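The integral image mentioned above is what makes rectangle features cheap to evaluate and to resize: once it has been computed, the sum over any rectangle takes four array lookups regardless of the rectangle's size. A minimal NumPy sketch (illustrative only, not the detector's implementation):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns, zero-padded on the top and left."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] in four lookups."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64))
ii = integral_image(img)
# A two-rectangle Haar-like feature: left half minus right half of a window.
feat = rect_sum(ii, 10, 10, 16, 8) - rect_sum(ii, 10, 18, 16, 8)
```

Resizing such a feature only changes the `height` and `width` arguments, which is why the multi-scale search does not require recomputing the integral image.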

If the center of rotation of the object is known, the polar transform, which maps the Cartesian

coordinates of the image grid to polar coordinates, can be used to transform the unknown

rotation into an unknown translation. The log-polar transform additionally transforms a change

of scale into a translation in the radial direction [40]. The center of rotation can also be used to

steer the extraction of features [52, 174]. Image features are then extracted in a local coordinate

system instead of rotating the image.
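The effect of the (log-)polar transform can be checked at the level of individual coordinates: about a known centre, a rotation shifts only the angular coordinate, and a change of scale shifts only the log-radial coordinate. A small illustrative sketch:

```python
import math

def to_log_polar(x, y, cx, cy):
    """Map Cartesian (x, y) to (log-radius, angle) about the centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    return math.log(math.hypot(dx, dy)), math.atan2(dy, dx)

def rotate_scale(x, y, cx, cy, angle, scale):
    """Rotate by `angle` and scale by `scale` about the centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    c, s = math.cos(angle), math.sin(angle)
    return cx + scale * (c * dx - s * dy), cy + scale * (s * dx + c * dy)

centre = (0.0, 0.0)
p = (3.0, 4.0)
q = rotate_scale(*p, *centre, angle=0.5, scale=2.0)
lr_p, th_p = to_log_polar(*p, *centre)
lr_q, th_q = to_log_polar(*q, *centre)
# The rotation becomes a shift of 0.5 in the angular coordinate, and the
# scaling a shift of log(2) in the log-radial coordinate (up to angle wrapping).
```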

While the previously mentioned methods train a detector on aligned images, a radically different approach is to learn a rotation invariant classifier, either using rotation invariant features

(Section 3.2.4 and 3.2.5), or artificially rotating the training data in order to cover all possible

orientations. With Marginal Space Learning, Zheng et al. [175] proposed to train a coarse-to-fine hierarchy of detectors, estimating first the position, then the orientation, ending with the

scale of targeted objects. Structures, such as anatomical regions, can be detected sequentially,

which enables the use of steerable features [174], namely extracting image features on a rotated

grid without rotating the entire image.

3.4.2 Pruning the search space

The most common approach to object detection is to use a sliding window, namely applying

the detector at every pixel location in the image. Prior knowledge is a simple approach to

restrict the search space. In Chapter 6, anatomical landmarks, such as the center of the brain

or the heart, are used to limit the search for other organs to plausible regions of the image.

When performing an exhaustive search of the image, a possible speed-up is to quickly prune

background regions. In the detector of Viola and Jones [161], a cascade of classifiers quickly

disregards background regions and focuses on potential objects. Lampert et al. [89] proposed

an efficient sub-window search which uses a branch-and-bound algorithm to avoid performing

an exhaustive search. An upper bound of the probabilistic output of the detector is evaluated

over large rectangular regions in order to assess whether they contain the object and rapidly

disregard background regions. Selected regions can then be further refined by splitting them until convergence.
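A simplified version of this branch-and-bound search can be sketched as follows (illustrative code, not the implementation of [89]): candidate sets of rectangles are parametrised by intervals on their four coordinates, and the upper bound sums positive pixel scores over the largest rectangle in the set and negative scores over the smallest:

```python
import heapq
import numpy as np

def ii_sum(ii, t, b, l, r):
    """Sum over rows t..b and cols l..r (inclusive) via an integral image."""
    return ii[b + 1, r + 1] - ii[t, r + 1] - ii[b + 1, l] + ii[t, l]

def ess(scores):
    """Branch-and-bound search for the box maximising the sum of pixel scores,
    in the spirit of the efficient subwindow search of Lampert et al."""
    h, w = scores.shape
    pos = np.pad(np.cumsum(np.cumsum(np.maximum(scores, 0), 0), 1), ((1, 0), (1, 0)))
    neg = np.pad(np.cumsum(np.cumsum(np.minimum(scores, 0), 0), 1), ((1, 0), (1, 0)))

    def bound(state):
        # state = intervals for (top, bottom, left, right)
        (t1, t2), (b1, b2), (l1, l2), (r1, r2) = state
        if t1 > b2 or l1 > r2:  # no valid rectangle in this set
            return -np.inf
        ub = ii_sum(pos, t1, b2, l1, r2)  # positives over the largest box
        if t2 <= b1 and l2 <= r1:         # negatives over the smallest box
            ub += ii_sum(neg, t2, b1, l2, r1)
        return ub

    start = ((0, h - 1), (0, h - 1), (0, w - 1), (0, w - 1))
    heap = [(-bound(start), start)]
    while heap:
        nb, state = heapq.heappop(heap)
        if all(lo == hi for lo, hi in state):  # a single box: it is optimal
            return -nb, tuple(lo for lo, _ in state)
        i = max(range(4), key=lambda k: state[k][1] - state[k][0])
        lo, hi = state[i]
        mid = (lo + hi) // 2
        for half in ((lo, mid), (mid + 1, hi)):  # split the widest interval
            child = state[:i] + (half,) + state[i + 1:]
            b = bound(child)
            if b > -np.inf:
                heapq.heappush(heap, (-b, child))

scores = -np.ones((8, 8))
scores[2:5, 3:6] = 2.0  # a positive blob on a negative background
best, box = ess(scores)
```

The first single rectangle popped from the priority queue has a bound at least as large as every remaining candidate set, so it is guaranteed to be the global optimum without an exhaustive scan.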

Another approach to avoid performing an exhaustive search is to organise the image into semantically meaningful regions, such as an over-segmentation from superpixels [54], and apply

the detector only once for each region. Object proposal is a recent approach in object detection

which is a shift from the traditional dense window classification. It consists in preprocessing

the image to generate candidate positions of objects, and only this preselected set of sliding window positions needs to be tested by the detector. Examples of such preprocessing steps are

the salient regions of Hou and Zhang [65], the objectness measure of Alexe et al. [2] or the

multiple segmentations behind the selective search of Uijlings et al. [157]. Following this idea

of object proposal, a detector of Maximally Stable Extremal Regions (MSER) [99] is used in


Chapter 4 when detecting the fetal brain in MR images.

3.4.3 Detecting object parts

When detecting objects composed of distinct parts, such as the fetal body and its internal

organs, different strategies can be adopted. A first approach is to consider one detector per

part. In order to detect an object based on the detection of its different parts, Felzenszwalb

et al. [45] proposed a model with a root filter, part filters, and a prior on the position of the parts relative to the root (star-structured part-based model). A detection score at a

given position and scale in an image is then the score of the root filter plus the scores of the

part filters, minus a deformation cost. This object detector used HOG features [36] as image representation, and the object part annotations were automatically learned from the bounding

box annotations of the training dataset.

A second approach is to train a single classifier for all parts, and to subsequently apply an

optimisation step in order to recover the pose of the whole body. In the seminal work behind

the development of Microsoft Kinect, Shotton et al. [138] trained a classifier assigning each

pixel to a body part before aggregating the classification results into joint proposals through a

mean-shift clustering [26]. Synthetic training data played a key role in the robustness of this

approach as it enabled the classifier to be trained on extremely large datasets, representative of

all possible human poses. Similar approaches to the work of Shotton et al. [138] can be found

in medical imaging applications [42, 69], albeit without the generation of synthetic data. In

the work of Donner et al. [42], candidate positions of landmarks are found independently of

each other and an optimal combination of candidate landmarks is obtained using geometric

constraints, solved through a Markov Random Field optimisation.
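The aggregation of per-pixel votes into a joint proposal via mean-shift can be illustrated with a minimal flat-kernel implementation (a sketch, not the Kinect pipeline): starting near a cluster of votes, the estimate is repeatedly moved to the mean of its neighbours until it settles on a density mode, ignoring scattered outliers:

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=2.0, n_iter=50):
    """Shift `start` towards the nearest density mode of `points` (flat kernel)."""
    x = np.asarray(start, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(points - x, axis=1)
        neighbours = points[d < bandwidth]
        if len(neighbours) == 0:
            break
        new_x = neighbours.mean(axis=0)
        if np.allclose(new_x, x):
            break
        x = new_x
    return x

rng = np.random.default_rng(1)
# Per-pixel votes for a joint position: a tight cluster plus scattered outliers.
cluster = rng.normal(loc=(10.0, 10.0), scale=0.3, size=(50, 2))
outliers = rng.uniform(0.0, 40.0, size=(10, 2))
votes = np.vstack([cluster, outliers])
mode = mean_shift_mode(votes, start=(11.0, 11.0))
```

The bandwidth plays the same role as in any kernel density estimate: votes further than the bandwidth from the current estimate do not influence the proposal.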

In autocontext [155], several classifiers are applied successively, each classifier using the detection results of the previous one to gain contextual information. The accumulation of the classification results is thus performed through a learning process instead of a clustering approach [138] or an explicit optimisation [42].

Figure 3.11: (a) Original image. (b) GDT for the center of the brain. (c) GDT for the center of the heart.

In an analogy with autocontext, Kontschieder et al. [85] use geodesic distance transforms (GDTs) from the predicted position of organs as

additional features in an iterative classification framework. These GDTs correspond to the

Euclidean distance weighted by the gradient of the image intensities (Figure 3.11), as described

in Equation 3.9.

d(x, y) = \inf_{\Gamma \in P_{x,y}} \int_0^{l(\Gamma)} \sqrt{1 + \gamma^2 \left( \nabla I(s) \cdot \Gamma'(s) \right)^2} \, ds \qquad (3.9)

where Γ is a path in the set of all paths P_{x,y} between x and y, parametrised by its arc length

s ∈ [0, l(Γ)], and γ is a weight between the Euclidean distance and the image gradient. GDTs

can be efficiently computed in linear time [152]. Hough Forests [56], or more generally regression, provide another approach to accumulating the votes from different object parts.
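The algorithm of [152] computes GDTs in linear time with raster scans; for illustration only, a discrete analogue of Equation 3.9 can also be computed with Dijkstra's algorithm on a 4-connected grid, with edge costs of the form √(1 + γ²ΔI²):

```python
import heapq
import numpy as np

def geodesic_distance(img, seed, gamma=1.0):
    """Geodesic distance from `seed` by Dijkstra on a 4-connected grid,
    with edge cost sqrt(1 + gamma^2 * (intensity difference)^2)."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.hypot(1.0, gamma * (img[ny, nx] - img[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist

# Two flat regions separated by an intensity edge: crossing the edge is costly,
# so the distance jumps across the boundary, as in Figure 3.11.
img = np.zeros((5, 8))
img[:, 4:] = 10.0
dist = geodesic_distance(img, seed=(2, 0))
```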

3.5 Automated localisation of anatomical regions in medical images

Organ localisation methods in medical imaging can be divided into two groups: (1) atlas-based methods, which explicitly map the target image to a template or a database of already segmented images, and (2) machine learning methods that implicitly learn an abstract representation of a training database of segmented images through a feature extraction step.

3.5.1 Atlas-based methods

Most atlas-based methods use image registration techniques to align and deform a template

or a set of images in order to map them to a target image and transfer segmentation labels

[158]. Such methods are principally aimed at obtaining detailed segmentations rather than

localising organs due to the sensitivity of most registration techniques to their initialisation.

While some registration frameworks have been proposed in order to carry out a large search of

the parameter space, such as FSL-FLIRT [70], most methods will only carry out a limited search

to avoid converging toward a local minimum and reduce the computational cost. The similarity

metrics used in image registration, such as the sum of squared differences or mutual information,

can be seen as analogous to the image features used in machine learning approaches. Although

most atlas-based approaches have been developed in order to segment brain images [3], such

approaches have also been used for multi-organ segmentations, such as abdominal CT scans

[168], or whole body MRI [46].

Figure 3.12: T2 templates of the developing fetal brain from 23 to 37 weeks of gestational age [135] are matched to the acquired MR scans in order to localise and extract the brain of the fetus [148, 149, 153].

As mentioned in Chapter 2, fetal MRI does not provide images of the fetus in a standard orientation comparable to scans of an adult patient, who generally lies in a conventional

orientation in the scanner. Moreover, the fetus is surrounded by maternal tissues. Registration

methods are thus unlikely to succeed for the fetus, unless a region of interest has been isolated, as in the case of motion correction frameworks. Most atlas-based methods to localise and extract

the fetal brain in the acquired data select a region of interest before initialising a registration

process. Anquez et al. [4] proposed to localise the eyes of the fetus before selecting the sagittal

plane and aligning the brain with a rigid template. Taleb et al. [149] use the intersection of

the orthogonal acquisitions in order to define a region of interest. Tourbier et al. [153] manually

crop a bounding box in order to initialise a registration process with a size-matched subset of

an atlas.

Atlas-based segmentation can be achieved without registration for instance through the use

of efficient nearest-neighbours structures [164]. However, such an approach still implicitly relies

on the fact that the images are all in a conventional orientation, which is not the case for fetal

MRI. Taimouri et al. [148] approached this problem by generating rotated 2D slices of an atlas,

which are then matched to the acquired data, using dimensionality reduction in order to make

the search computationally feasible.

Several spatio-temporal atlases of the developing fetal brain have been developed in recent

years [59, 62, 135]. However, there is currently no comprehensive atlas for other parts of the

fetal body, which hinders the application of atlas-based methods outside the brain. Example

T2 MR templates of the developing fetal brain [135], similar to the ones used to localise the

brain in the previously mentioned methods, are presented in Figure 3.12.

It is interesting to note that the level of detail which can be achieved with atlases and the

generalisation aspect offered by machine learning can be combined. In the work of Gauriau et al.

[57], a probabilistic atlas is used as a shape prior to accumulate the classification results from

a Random Forest when localising organs. Oktay et al. [111] used structured forests to generate

selective edge maps that increase the registration accuracy in an atlas-based segmentation

framework.


3.5.2 Machine learning approaches

While atlas-based methods often rely on manual initialisation, such as landmark selection, machine learning enables organ localisation methods to be fully automated. This has applications in semantic browsing, which facilitates the navigation of 3D volumes by finding standard planes, in automated tagging for large databases and in automated biometric measurements, or it can serve as an initialisation step for further processing.

Machine learning approaches for organ localisation can generally be divided between classification methods, which classify each voxel as being inside a specific organ [29, 42, 69, 72], and

regression methods, where each voxel votes for the location of one or more organs [31, 114, 176].

Classification and regression can also be combined in a single localisation framework, as in the

Hough Forests of Gall and Lempitsky [56], which were initially proposed for the task of pedestrian detection and have since been applied in medical imaging [101]. Most of the published methods use rectangle features (Section 3.2.2), due to their computational efficiency using integral images and their ability to model a large variety of visual patterns. However, such features are

not rotation invariant. Additionally, learning the offset vectors in regression methods implies

that the data to predict will be in a similar orientation as the training data. In applications

such as whole-body imaging [31, 114, 134], the patient usually lies inside the scanner in a standard orientation, thus rotation invariance is not a requirement. Similarly, when imaging the heart with 2D ultrasound, images are usually acquired in a standard orientation [10, 101, 176]. However, MR images of the fetus are not acquired in standard views, as the fetus has an arbitrary orientation in the womb. As a result, most organ localisation methods developed for

adult subjects are unlikely to succeed for fetal imaging.

Fetal ultrasound

Due to the important role of ultrasound imaging in the follow-up of pregnancy, several methods

have been published in the last decade seeking to automate the localisation of the fetal anatomy.


In particular, the method of Carneiro et al. [20], which has been translated into a commercial

application [19], automates biometric measurements in 2D ultrasound. In this method, a user

query, such as BPD or CRL, triggers a dedicated detector that positions an oriented bounding

box around the fetal anatomy in order to infer the required measurements. Each detector

is composed of a coarse to fine hierarchy of boosting classifiers (Marginal Space Learning,

mentioned in Section 3.4.1) using 2D Haar-like features. While the first classifier only defines a

region of interest (ROI), the subsequent steps rotate this ROI in order to obtain the orientation,

computing a new integral image at each rotation step. This approach is reminiscent of the work

of Viola and Jones [161] and has the advantage of being generalisable to most fetal structures

observable in 2D ultrasound, given a sufficiently large training database. By comparison, most

methods presented in the ISBI 2012 challenge on fetal biometric measurement¹ [128] are not

machine learning based approaches but rely on edge detection and geometric cues such as ellipse

fitting, and are therefore not generalisable to different parts of the fetal anatomy. A Marginal Space Learning approach similar to that of Carneiro et al. [20], using 3D Haar-like features and

steerable features [174], has been applied to localise fetal brain structures in 3D ultrasound in

order to extract standard planes and simplify the 3D navigation [18], and to enhance the 3D

rendering of the fetal face [47]. Cuingnet et al. [35] proposed a different approach to localise the

fetal head in 3D ultrasound for automated biometric measurements. Template matching and a

deformable surface are used to segment the fetal brain, while a Random Forest with geometric

and image features defined in a local basis detects the location of the orbits.

Fetal MRI

Very few methods have been published on the automated localisation of fetal organs in MRI,

and only two using machine learning based approaches [69, 72], both aiming to localise the

brain. In order to account for the arbitrary orientation of the fetus, Ison et al. [69] took advantage of the fact that images are usually acquired in directions orthogonal to the fetal anatomy

¹ http://www.ibme.ox.ac.uk/challengeus2012


and trained distinct sagittal, coronal, and transverse detectors, which only leaves an unknown

in-plane rotation. The detection pipeline is based on a two-step Random Forest using 3D Haar-like features, followed by an optimisation of the spatial configuration of the centroids of the detected brain tissues. Additionally, the training dataset was augmented with rotated versions of the training data, and the feature size was optimised to exploit brain symmetries.

Kainz et al. [72] also trained a Random Forest classifier, but with rotation invariant features

[139] (SGD features presented in Section 3.2.5) to detect the fetal brain in the acquired data.

3.6 Conclusion

The main properties of the different image features presented in this chapter are summarised

in Table 3.1. SIFT features are used in the brain detection method presented in Chapter 4 due

to their scale and rotation invariance. In the body detection method presented in Chapter 6,

the scale is inferred from the gestational age and dense image features have been used in order

to enable a pixel-wise classification. Rectangle features were chosen because, thanks to their simple formulation, they can form the basis of more complex features. In particular, rotation invariance

was obtained by extracting rectangle features in a rotated coordinate system.

                 Dense          Sparse         Rotation       Scale
                 computation    computation    invariance     invariance
Convolutions         X
Rectangles           X                                           X
HOG                  X
SIFT                                X              X             X
DSIFT                X
SGD                  X                             X

Table 3.1: Summary table of the properties of the different image features presented in this chapter (Section 3.2).

Figure 3.13 presents the main steps of a typical object detection pipeline based on machine

learning. Random Forests form a safe default choice for a machine learning task as they are


applicable to a large range of problems and can provide reasonable results without parameter

optimisation. They scale well to large datasets and are robust to noisy data. As they do

not require a vector representation of the data, image features can be computed on the fly.

Additionally, they can produce probabilistic output.

In AdaBoost, decision stumps are decision trees of depth one. The cascaded approach of [161] allows features to be computed in batches, stopping as early as possible in obvious background regions. The main disadvantages of boosting methods are that they require long training times and allow for less variation in the data than Random Forests.

SVMs are fast at prediction; in particular, the decision function of a linear SVM is a simple dot product. The mathematical formulation of the decision function allows for branch-and-bound optimisation [89]. However, SVMs do not scale to large datasets and generally require a grid search to optimise parameters.

In the present work, the main difficulty is finding the right set of image features, namely the right representation of the image, rather than finding the right machine learning algorithm. In

the following chapter, we will present our proposed machine learning based approach to localise

the fetal brain.


Figure 3.13: Generic pipeline for machine learning based object detection.


Chapter 4

Localisation of the brain in fetal MRI

This chapter is based on:

K. Keraudren, V. Kyriakopoulou, M. Rutherford, J. V. Hajnal, and D. Rueckert. Localisation

of the Brain in Fetal MRI using Bundled SIFT Features. In MICCAI, pages 582–589. Springer,

2013

Abstract

This chapter describes a method for accurate and robust localisation of the fetal brain in MRI

when the image data is acquired as a stack of 2D slices misaligned due to fetal motion. Possible brain locations are first detected in 2D images with a Bag-of-Words model using SIFT features that are aggregated within Maximally Stable Extremal Regions (called bundled SIFT). This detection process is then followed by a robust fitting of an axis-aligned 3D box to the selected regions. Prior knowledge of the fetal brain development is used to define size and shape

constraints. The method is evaluated through cross-validation on a database of 59 fetuses and

resulted in a median error distance of 5.7 mm from the ground truth, with no missed detections.



4.1 Introduction

Fetal MRI has seen great successes in recent years with the development of motion correction

methods providing high quality isotropic volumes of the brain [71, 121], thus enabling a better

understanding of the fetal brain development. Such reconstruction methods typically rely on

data acquired as stacks of 2D slices of real-time MRI, freezing in-plane motion. Slices are quite

often misaligned due to fetal motion and form an inconsistent 3D volume (Figure 4.1). Motion

correction can be achieved via the registration of 2D slices of the fetal brain to an ideal 3D

volume. Cropping a box around the brain is a prerequisite to exclude maternal tissues that can

make the registration fail.

This chapter describes a method to automatically find a precise bounding box around the

brain in order to speed-up the preprocessing steps of the motion correction procedure. We

start by presenting a tentative approach based on the Viola-Jones object detector [161] that

was first investigated in our research process (Section 4.2). While this approach could rapidly be

implemented using the OpenCV library [14], it suffers from several limitations. In particular

it requires training a different model for each scan direction relative to the fetal brain, it cannot accommodate large variations between the images it learns to detect, and it performs poorly. We then present in Section 4.3 a method which overcomes these limitations.

The cascade approach of Viola and Jones [161], which allows the detector to quickly reject

background regions and focus instead on potential object locations, inspired the idea of a pre-selection of candidate regions in our detection pipeline. This pre-selection process, previously

mentioned in Section 3.4.2, consists in detecting Maximally Stable Extremal Regions (MSER)

regions and filtering them using an estimated size of the fetal brain.


Figure 4.1: Example scan with a large misalignment between slices due to fetal motion: (a) native slice plane in transverse direction, orthogonal cuts in sagittal (b) and coronal (c) directions (fetus of 20 weeks).

4.2 Detecting the fetal head in 2D slices using the Viola-Jones detector

In this section, we consider a detection pipeline for the fetal head based on the Viola-Jones

detector [161], which trains a cascade of AdaBoost classifiers (Section 3.3.2). Considering that

scans of the fetal brain are preferentially acquired in sagittal, coronal or transverse orientation,

2D detectors have been trained for each view in order to detect the middle slices of the brain.

The detector is trained on images which are aligned, and at test time it is applied to rotated

versions of the slices. Such a detector could be applied to each 2D slice of a 3D stack, before

accumulating the votes in 3D (Figure 4.2). The implementation of the Viola-Jones detector

from the OpenCV library [14] was used, with Local Binary Pattern (LBP) features [110] due

to their invariance to intensity shift.

In this experiment, the available data consists of stacks of 2D slices from 66 subjects (dataset 1,

Section 2.3.1), which were partitioned into 40 training subjects and 26 testing subjects. Slices

from the central region of the head have been manually annotated, amounting to 246 coronal

views, which have been mirrored to double the data, 303 sagittal views and 666 transverse

views. Sample annotations are shown in Figure 4.3, along with the average image of each training dataset. Negative training samples were automatically obtained by excluding the region of


Figure 4.2: Pipeline to localise the brain of the fetus in MR images using a 2D Viola-Jones detector: for every slice, and for every rotation, a multi-scale sliding window is applied.

the head from the 2D slices. Knowing the gestational age, the size of the fetal head can be

estimated to reduce the search space in the multi-scale approach.

Table 4.1 reports the hit, miss and false detection rates when applying each detector to test

images containing a view of the same orientation as the detector. A 4° rotation step has been

used. It took 6 hours to train each detector on a desktop PC while the detection takes 2 seconds

per slice. This evaluation was only carried out on middle slices of the brain and not on the

whole volume. However, this experiment highlights the limitation of a classifier which does not

allow enough variation among the objects it learns to detect. In particular, sagittal, transverse

and coronal images cannot be considered as a single class. The high false detection rate for the

transverse view suggests that the classifier detects bright elliptic structures but does not learn

the appearance of the brain itself. This is also a downside of learning from downsampled images, as positive examples were resized to 32 × 32 pixels before training (Figure 4.5).

When applying their method to detect frontal views of human faces, Viola and Jones [161]

observed that the first feature automatically selected by the AdaBoost algorithm corresponds to

the region of the eyes being darker than the upper cheeks. Similarly, in the case of mid-sagittal

views of the fetal brain, the first feature captured by the classifier corresponds to the region of

the jaws being darker than the brain (Figure 4.5).


Figure 4.3: Tightly cropped and aligned training data: sagittal, transverse and coronal views, with their corresponding average image.

Table 4.1: Detection rates for each orientation.

             Hit rate (%)   Miss rate (%)   False detections (%)
Sagittal         87.1           12.9               3.2
Transverse       82.6           17.4              24.3
Coronal          61.2           38.8               2.3

Figure 4.4: Example detection results for a sagittal, coronal and transverse slice.


Figure 4.5: Local binary pattern corresponding to the first test of the cascade of weak classifiers: the first test performed by the sagittal view detector checks that the bottom right part of the sliding window is darker than the central part, while the upper left part is brighter than the central part.

4.3 Localising the fetal brain using bundled SIFT features

4.3.1 Method overview

The method presented in this chapter focuses on the task of finding an axis-aligned bounding

box for the fetal brain. This is in contrast with Ison et al. [69], who aimed at finding an oriented bounding box, and with methods aiming to segment the skull bone content. This relaxation of the problem allows us to focus on positioning a tight bounding box on the brain while dealing with 2D slices that are misaligned or corrupted due to fetal motion. Although acquisitions are carried out in

conventional orientations, there is unpredictability in the positioning of the fetus. Hence, the

proposed method first performs a 2D detection process in every slice before accumulating the

votes in 3D. Additionally, the scale component is inferred from the gestational age.

In a first stage, MSER regions [99] are extracted from 2D slices and approximated by ellipses.

Then, the regions whose size and aspect ratio is consistent with prior knowledge of the fetal

development are classified into brain or not-brain based on histograms of the SIFT features

found within the fitted ellipses (bundled SIFT [172]), following a Bag-of-Words model [33].

Finally, a RANdom SAmple Consensus (RANSAC) procedure [49] is performed to find a best

fitting 3D cube whose dimensions are inferred from prior knowledge. This localisation pipeline

is summarised in Figure 4.6. In the remainder of this chapter, the proposed method will be described in more detail and evaluated in a 10-fold cross-validation, comparing it to a sliding

window Bag-of-Words model using 2D or 3D SIFT features.

Figure 4.6: Pipeline to localise the fetal brain. Slice-by-slice, MSER regions are first detected and approximated by ellipses. They are then filtered by size before being classified using histograms of 2D SIFT features. Finally, a 3D bounding box is fitted to the selected MSER regions with a RANSAC procedure. In the last two steps, selected regions are shown in green while rejected regions are shown in red.

4.3.2 Detection and selection of MSER regions

MSER regions, introduced by Matas et al. [99], are a common feature detection method in

computer vision. MSER regions can be defined as sets of connected pixels stable over a large

range of intensity thresholds. They can be computed in quasi-linear time by forming the

component tree of the image [104], a tree structure highlighting the inclusion relation of the

connected components of the level sets in the image. Such regions can be characterised by

homogeneous intensity distributions and high intensity differences at their boundary. Each

region is defined by a seed pixel as well as lower and upper intensity thresholds: the whole

region results from a floodfill operation, starting from the seed pixel and constrained by the

two intensity thresholds. The potential use of MSER regions to characterise the brain in MRI

was first proposed by [43], who used volumetric MSER to segment simulated brain MR images.

As the brain and cerebrospinal fluid appear much brighter than the surrounding bone and skin

tissues in fetal T2 MRI, MSER regions are well suited to localise the brain.
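The stability criterion can be illustrated with a toy example (a simplified sketch, not an actual MSER implementation such as OpenCV's): for a bright blob on a darker background, the connected component containing a seed inside the blob keeps the same size over a wide range of intensity thresholds:

```python
from collections import deque
import numpy as np

def component_size(mask, seed):
    """Size of the 4-connected component of True pixels containing `seed`."""
    if not mask[seed]:
        return 0
    h, w = mask.shape
    seen = np.zeros_like(mask)
    seen[seed] = True
    queue, size = deque([seed]), 1
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                seen[ny, nx] = True
                queue.append((ny, nx))
                size += 1
    return size

# A bright 10x10 blob (the "brain") on a darker background: the component
# containing the seed keeps a constant size over a wide range of thresholds,
# which is the stability property that MSER regions formalise.
img = 50.0 * np.ones((20, 20))
img[5:15, 5:15] = 200.0
seed = (10, 10)
sizes = [component_size(img > t, seed) for t in range(0, 200, 10)]
```

Efficient MSER implementations avoid this repeated thresholding by building the component tree of the image in quasi-linear time, as noted above.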

In its first stage, our localisation pipeline proceeds slice-by-slice, working on 2D images. We


Figure 4.7: Example MSER regions Ri (red overlay) and their fitted ellipses Ei (green dashes): (a) skull bone content, (b) cerebrospinal fluid, (c) amniotic fluid.

start by detecting candidate MSER regions Ri which are sets of connected pixels. An ellipse

Ei is then fitted to each Ri with a least-squares minimisation [50]. As the shape of the brain

is well approximated by an ellipsoid, the ellipses Ei have the ability to recover the shape of

the brain even if the detected Ri only corresponds to a segment of cerebrospinal fluid (CSF)

(Figure 4.7.b), or amniotic fluid surrounding the fetal head (Figure 4.7.c). The ellipses Ei are

then filtered by size and aspect ratio using the gestational age combined with prior knowledge of

the fetal development, namely the occipitofrontal diameter (OFD) and the biparietal diameter

(BPD) previously described in Section 2.2.2.
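This filtering step can be sketched as follows; note that the linear growth model below is purely hypothetical, as the actual OFD and BPD values come from the growth charts of Section 2.2.2, and the tolerance and aspect-ratio limits are illustrative parameters:

```python
def expected_ofd_mm(gestational_age_weeks):
    """Hypothetical linear growth model for the occipitofrontal diameter (OFD).
    The thesis uses the growth charts of Section 2.2.2, not this formula."""
    return 3.0 * gestational_age_weeks

def keep_ellipse(major_mm, minor_mm, ga_weeks, tol=0.3, max_ratio=1.5):
    """Keep an ellipse whose major axis is close to the expected OFD and whose
    aspect ratio is plausible for the roughly ellipsoidal brain."""
    ofd = expected_ofd_mm(ga_weeks)
    size_ok = abs(major_mm - ofd) <= tol * ofd
    ratio_ok = major_mm / minor_mm <= max_ratio
    return size_ok and ratio_ok

# For a 30-week fetus the toy model expects OFD = 90 mm: the first candidate
# is kept, the second is too small, the third is too elongated.
candidates = [(88.0, 70.0), (30.0, 25.0), (90.0, 40.0)]
kept = [e for e in candidates if keep_ellipse(*e, ga_weeks=30)]
```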

4.3.3 Classification of selected MSER regions using bundled SIFT features

MSER regions alone do not provide a descriptor that can be used for classification. As a

versatile rotation invariant classification framework, we chose to use a Bag-of-Words model [33]

by computing for each region Ri the histogram of the SIFT features [97] within the ellipse Ei,

similarly to the bundled SIFT features of Wu et al. [172]. The rotation invariance of SIFT

features helps accommodate the unknown orientation of the fetal brain, while scale invariance

attenuates the variations due to gestational age.

In the learning stage of our pipeline, all SIFT features are extracted from the 2D slices



Figure 4.8: (a) All detected MSER regions. (b) MSER regions are filtered based on their size and aspect ratio, then classified according to their histograms of SIFT features (green for brain regions, red for not-brain).

of all training scans and clustered using a k-means algorithm. The N cluster centres form the

vocabulary V . Then, MSER regions Ri are detected and selected (Section 4.3.2). The ellipses

Ei centred on the brain are used as positive examples, while those whose centres lie further than OFD/2 from the brain centre are used

as negative examples. For each Ei, a histogram of bundled SIFT features is computed. After

L2 normalisation, the histograms are used to train an SVM classifier. As in [33], a linear kernel

SVM has been used. In the detection stage, for each MSER region selected, the normalised

histogram of SIFT features is computed, and the previously trained SVM is used to classify

ellipses as brain or not-brain. Figure 4.8.a shows an example slice with all the candidate MSER

regions, while Figure 4.8.b shows the result of the selection and classification processes.
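The Bag-of-Words descriptor for an ellipse can be sketched as follows: each SIFT descriptor found within the ellipse is assigned to its nearest vocabulary word, and the resulting histogram is L2-normalised before classification. This is a plain NumPy illustration; the thesis extracts SIFT with OpenCV and builds the vocabulary with scikit-learn's k-means, and `bow_histogram` and its arguments are hypothetical names.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Bag-of-Words encoding: assign each descriptor (rows of shape
    (n, d)) to the nearest of the N vocabulary words (shape (N, d)) and
    return the L2-normalised word-count histogram."""
    # squared Euclidean distance between every descriptor and every word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

The resulting fixed-length vectors can then be fed to any classifier; the thesis uses a linear-kernel SVM, as in [33].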

4.3.4 RANSAC fitting of a cube

The 2D detection works reliably in mid-brain slices but not in peripheral slices of the brain.

Estimating a 3D box is thus important for a reliable estimate of the entire brain region. As the

set of candidate regions classified as brain by the SVM classifier may still contain outliers, we

perform a RANSAC procedure [49] to find a best fitting position for a 3D axis-aligned bounding

cube. We thus randomly select a small set of ellipses Ei, typically five, and use their centroid

to define a bounding cube of width OFD. For a region Ri to be considered an inlier, it must


be completely included in this cube and the center of the ellipse Ei must be at a small distance

from the cube center (10mm). The cube position is then refined by taking the centroid of these

inliers. This process is repeated a predefined number of times, typically 1000 times, and the

cube with the largest number of inliers is selected.
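The RANSAC voting scheme above can be sketched as follows. This simplified version keeps only the centre-distance constraint (the full method also requires each MSER region to lie entirely inside the hypothesised cube); the function name and parameters are illustrative assumptions.

```python
import numpy as np

def ransac_cube(centres, ofd, n_iter=1000, sample_size=5, dist_mm=10.0, rng=None):
    """Fit an axis-aligned bounding cube of width OFD to candidate ellipse
    centres (3D points in mm). Each iteration samples a few centres,
    hypothesises a cube centre from their centroid, counts centres within
    dist_mm of it as inliers, and refines the centre with the inlier
    centroid; the hypothesis with most inliers wins."""
    rng = np.random.default_rng(rng)
    centres = np.asarray(centres, dtype=float)
    best_centre, best_count = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(centres), size=sample_size, replace=False)
        hypothesis = centres[idx].mean(axis=0)
        inliers = centres[np.linalg.norm(centres - hypothesis, axis=1) < dist_mm]
        if len(inliers) > best_count:
            best_count = len(inliers)
            best_centre = inliers.mean(axis=0)  # refine with inlier centroid
    # return the cube as its centre and min/max corners
    return best_centre, best_centre - ofd / 2.0, best_centre + ofd / 2.0
```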

4.4 Experiments

4.4.1 Implementation

To gain prior knowledge on the expected size of the brain knowing the gestational age, we used

a 2D ultrasound study from Snijders and Nicolaides [143] performed on 1040 normal singleton

pregnancies. The corresponding growth charts can be seen in Chapter 2, Figure 2.10. The

5th and 95th centile values of OFD and BPD have been used to define the acceptable size and

aspect ratio for the selected MSER regions (Section 4.3.2). The size of the detected bounding

box in the robust fitting procedure (Section 4.3.4) has been set to the 95th centile of OFD

in order to contain the whole fetal brain. Following the experiments of Fulkerson et al. [54],

the size N of the Bag-of-Words vocabulary (Section 4.3.3) has been set to 400. The code was

implemented in Python, using OpenCV [14] and scikit-learn [115]. The automated detection

process takes less than a minute on a normal PC.

4.4.2 Data

The imaging data used to evaluate the proposed method was introduced in Section 2.3.1,

dataset 1, and consists of 117 sagittal, 113 coronal and 228 transverse scans from 59 healthy

fetuses aged from 22 to 39 weeks. We performed a 10-fold cross-validation, with 39 training

subjects and 20 testing subjects for each fold.

During the localisation process, standardising the image contrast is crucial as both MSER and


SIFT features depend on gradient information. The data has thus been contrast-normalised

over all slices, with saturation of 1% on each side of the intensity histogram. The acquired

slices have been upsampled to 0.8 × 0.8 mm resolution to increase the number of SIFT features

detected as well as to limit the variation in scale between datasets.
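The contrast normalisation with 1% saturation on each side of the intensity histogram can be sketched with NumPy percentiles; the function name and the [0, 255] output range are illustrative assumptions.

```python
import numpy as np

def contrast_normalise(stack, saturation=1.0):
    """Contrast-normalise a stack of slices: saturate `saturation` percent
    of voxels on each side of the intensity histogram, then rescale the
    clipped intensities linearly to [0, 255]."""
    lo, hi = np.percentile(stack, [saturation, 100.0 - saturation])
    clipped = np.clip(stack.astype(float), lo, hi)
    return (clipped - lo) / (hi - lo) * 255.0
```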

4.4.3 Results

The brain localisation results are summarised in Table 4.2, whereas examples of detected bounding boxes and their corresponding ground truth are shown in Figure 4.9. We compared our method against sliding a window of fixed size OFD with a Random Forest classifier on histograms of 2D SIFT features [97] or 3D SIFT features [151]. For each stack of slices, we measured the distance between the center of the ground truth bounding box and the detected bounding box. Similarly to Ison et al. [69], we defined a correct detection as 70% of the brain being included in the detected box. 2D SIFT features performed better than 3D SIFT features, which can be attributed to the inconsistency of the 3D data, whereas the bundled SIFT

drastically improved the localisation accuracy with an error below 5.7mm in more than 50%

of cases. This increased precision comes from applying a classifier only at specific locations

(MSER regions) instead of all pixels, thus resulting in a more localised probability map. This

localisation of the center of the brain shows improved results compared to Ison et al. [69] who

reported a median error of 10mm in the detection of six landmarks corresponding to centroids

of fetal head tissues. However, contrary to Ison et al. [69], the orientation of the brain is not

determined. Our method is more general than Anquez et al. [4] as it does not rely on localising

the eyes. There were no false or missed detections with bundled SIFT, the worst-case error of 25 mm being presented in Figure 4.9.e. In 85% of cases, the detected bounding box entirely contains the ground truth bounding box.
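The 70% coverage criterion used to count a detection as correct can be sketched as the fraction of ground-truth brain voxels falling inside the detected box; the function name and the voxel-index box representation are illustrative assumptions.

```python
import numpy as np

def brain_coverage(brain_mask, box_min, box_max):
    """Fraction of ground-truth brain voxels (non-zero entries of the 3D
    mask) lying inside the half-open box [box_min, box_max); a detection
    counts as correct when this fraction exceeds 0.7."""
    coords = np.argwhere(brain_mask)
    inside = np.all((coords >= box_min) & (coords < box_max), axis=1)
    return inside.mean()
```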

The prior knowledge gained from the gestational age plays an important role in disregarding

most of the detected MSER regions. Indeed, on average during the 10-fold cross-validation, 98%

of the candidate regions are disregarded in the first stage of the pipeline. As a comparison, the



Figure 4.9: Examples of detected bounding boxes (green) and ground truth (red). (a), (b) and (c) are respectively sagittal, coronal and transverse acquisitions of different subjects. (d) and (e) show detections with the largest error, which can be attributed to the presence of the maternal bladder near the fetal head.


                      2D SIFT   3D SIFT   Bundled SIFT
Error (mm)
  25th centile          10.9      14.8        4.0
  50th centile          15.5      20.8        5.7
  75th centile          20.5      30.4        8.4
Detection                98%       85%       100%
Complete brain           38%       23%        85%

Table 4.2: Detection results averaged over the cross-validation, all orientations combined.

SVM classifier further disregards 41% of the remaining candidates, and the RANSAC procedure

removes 18% outliers. No distinction has been made between sagittal, transverse or coronal

acquisitions, which is another advantage of the proposed method.

When using normal growth charts to define a realistic range of sizes of the brain, our goal

is to remove improbable detections while still allowing a large variation in brain size. From

standard growth charts, extending to include 99.6% of subjects corresponds to a one-week error. Simulating extreme growth restriction by adding a five-week error to all gestational ages, the

detection rate is still 91%. Additional detection results are presented in an online video1.

4.5 Discussion

The proposed detection method is robust to low signal-to-noise ratio (Figure 4.10). However,

the stability criterion in the detection process of MSER regions is related to the image gradient.

As a result, the detection method relies on a consistent contrast across images. For the same

reason, the brain is well characterised by MSER regions, but it is not the case for other organs,

whose intensities are less homogeneous and display a lower contrast with surrounding tissues.

The proposed method thus cannot be generalised to other parts of the fetal body. Moreover, the ellipsoidal shape of the brain ensures a regular cross-section in 2D slices whatever the scan orientation.

1https://youtu.be/WdGEb7snJak


Figure 4.10: The proposed localisation method is robust to low signal-to-noise ratio. The selected MSER regions are highlighted in red and the detected bounding box in green.

In contrast, a coronal or transverse view of the lungs does not have a similarly regular shape. This hinders a slice-by-slice approach independent of the scan direction for organs other than the

brain. Due to the slice-by-slice detection process, there needs to be a good coverage of the

middle slices of the brain and the proposed method is therefore not adapted for acquisitions

with a large spacing between slices.

Gaussian blurring and the mean shift algorithm are other approaches that accumulate votes

from a detection process such as in Section 4.3.4. However, these approaches would only

consider the center of each detected ellipse, while RANSAC makes it possible to incorporate constraints

on the MSER regions themselves, which must not extend beyond the detected bounding box.

These constraints are based on the estimated size of the brain from the gestational age. They

allow the exclusion of outliers such as the whole head or the surrounding amniotic fluid.

Figure 4.11: As a result of fetal head rotation, the eyes of the fetus do not provide robust 3D landmarks for the localisation of the brain.

Beyond the clinical interest in analysing the brain of the fetus, there are several motivations

for localising the fetal brain as a first step in the localisation of fetal organs. The main arguments


are that the brain is the largest organ in the fetus, representing more than 10% of the total

body volume in healthy fetuses [37], and it is the most apparent feature in the data due to the

low intensity of the fetal skull and high intensity of the cerebrospinal fluid. In the method of

Anquez et al. [4] to localise the fetal brain in MR images, the eyes are used as a first landmark

in the localisation pipeline. However, as highlighted in Figure 4.11, due to the small size of the

eyes and their distance to the neck articulation, they do not provide a 3D landmark as robust

as the brain itself in our images. The eyes were also used in Cuingnet et al. [35] in order to

define the orientation of the brain in ultrasound images. However, 3D ultrasound provides a

consistent 3D volume, unlike the misaligned 2D slices of fetal MRI.

4.6 Conclusion

We presented in this chapter an automatic localisation method for the fetal brain in MRI.

Proceeding slice-by-slice, MSER regions are first detected and filtered by size and aspect ratio

before being classified using histograms of SIFT features. An expected size of the brain is

inferred from the gestational age and prior knowledge of the fetal development. Finally, a 3D

bounding cube is fitted to the selected regions with a RANSAC procedure. The method is not

specific to the scan orientation, with a median distance error of 5.7mm from the ground truth.

The automated localisation of the brain can be used to initialise motion correction, as will

be presented in the next chapter (Chapter 5), but it is also an anchor point to detect other

parts of the fetal body, as will be presented in Chapter 6.


Chapter 5

Brain segmentation and motion correction

This chapter is based on:

K. Keraudren, M. Kuklisova-Murgasova, V. Kyriakopoulou, C. Malamateniou, M. Rutherford, B. Kainz, J. Hajnal, and D. Rueckert. Automated Fetal Brain Segmentation from 2D MRI Slices for Motion Correction. NeuroImage, 101:633–643, 2014.

Abstract

This chapter presents an extension of the localisation method described in the previous chapter in order to segment the fetal brain. This segmentation process is combined with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The proposed method is a patch-based propagation of the MSER regions selected during

the localisation process, followed by a CRF segmentation. The method was tested in a ten-fold

cross-validation experiment on 66 datasets of healthy fetuses whose gestational age ranged from

22 to 39 weeks. In 85% of the tested cases, the proposed method produced a motion corrected volume of a quality relevant for clinical diagnosis, thus removing the need for manually delineating

the contours of the brain before motion correction.

5.1 Introduction

Motion correction is a key element for imaging the fetal brain in-utero using MRI. Maternal

breathing can introduce motion, but a larger effect is frequently due to fetal movement within

the womb. Consequently, imaging is frequently performed slice-by-slice using single-shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction (SVR) methods (see Section 2.1.3 for a detailed review). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion

of the fetal head. This has hitherto been a manual or semi-automatic procedure.

In this chapter, we propose a fully automated tightly coupled segmentation and motion

correction of the fetal brain. In Section 5.2, we give an overview of related work for processing

brain MRI. We show in Section 5.3.1 how to refine the localisation obtained in the previous

chapter from a bounding box detection to a segmentation for all 2D slices of each 3D stack.

In Section 5.3.2, we integrate this automated masking with the motion correction procedure,

allowing the mask of the segmented brain to evolve during reconstruction. The main steps

of the proposed segmentation pipeline are highlighted in Figure 5.1. The resulting method is

evaluated in two steps (Section 5.5), first assessing the quality of the slice-by-slice segmentation,

then comparing the result of motion correction using a manual and an automated initialisation.

We then discuss in Section 5.6 the limitations of the proposed method in cases of extreme motion

or abnormal brain growth.


Figure 5.1: Outline of the proposed brain segmentation pipeline, which starts from the MSER regions selected during the brain localisation process, and results in a segmented and motion corrected volume of the fetal brain.

5.2 Related Work

5.2.1 Processing of conventional cranial MRI

Numerous methods have been proposed for an automated extraction of the adult brain from

MRI data, a problem also known as skull stripping. The task is then to delineate the whole

brain in a motion free 3D volume. Most of the proposed methods take advantage of the fact

that the adult head is surrounded by air and that the brain is thus the main structure in

the image. Brain extraction can then be performed using a region growing process, such as in

FSL’s Brain Extraction Tool [141] and FreeSurfer’s Hybrid Watershed Algorithm [133], or using

morphological operations and edge detection such as in the Brain Surface Extractor of Shattuck

et al. [137]. A more general approach is to register the image to segment with a brain template

and use statistical models of brain tissues [166]. Instead of a template, the new image can also

be registered to an atlas of already skull-stripped images [158]. Tissue classification can then

be performed, for instance by seeking similar patches from the atlas in a local neighbourhood

and transferring their labels to the new image, such as in BEaST [44]. Such a patch-based

segmentation has been applied to fetal data after motion correction [169].

5.2.2 Brain extraction in fetal MRI

Current methods to segment motion-free, high-resolution adult brain volumes cannot be straightforwardly applied to the extraction of the fetal brain from the unprocessed MR data. It is however necessary to isolate fetal tissues for motion correction to be guided by the anatomy of the

fetus. In both Rousseau et al. [121] and Gholipour et al. [58], a tight bounding box is manually

cropped for each stack of 2D slices. In Jiang et al. [71] and Kuklisova-Murgasova et al. [87], the

region containing the fetal head is manually segmented in one stack, and the segmentation is

propagated to the other stacks after volume-to-volume registration. Downsampling the target

volume followed by up-sampling the mask can be performed to save time during the manual

segmentation. In Kim et al. [81], a box around the brain is manually cropped in one stack and

propagated to the other stacks after volume-to-volume registration. An ellipsoidal mask, which

evolves while the slices are aligned is then used throughout the motion correction procedure.

Cropping only a box around the fetal brain is relatively fast but may not be enough for the

reconstruction to succeed, as tissues outside the head of the fetus may mislead the registration.

However, manual segmentation requires time, about 10–20min per stack of 2D slices, which can

be a limitation for large scale studies. Moreover, the assumption that a mask from one stack

can be propagated to all other stacks might not hold when the fetus has moved a lot between

the acquisitions of each stack. Besides, slices are acquired one at a time and in an interleaved

manner in order to reduce the scan time while avoiding slice cross-talk artefacts. In cases of

extreme motion, the misalignment between slices can cause a 3D mask to encompass too much

maternal tissue, thus requiring a slice-by-slice 2D segmentation.

Existing methods to localise the fetal brain in MRI, which have been introduced in the

previous chapter, are a key step to automate the preprocessing required for motion correction.

While the method of Ison et al. [69] produces an oriented bounding box around the brain,

Anquez et al. [4] and Kainz et al. [72] use the localisation process to initialise a segmentation

of the brain, respectively with graph-cuts and level-sets. The atlas-based methods of Taleb

et al. [149] and Taimouri et al. [148] also provide a segmentation of the fetal brain in the raw

data. In particular, the template-to-slice matching of Taimouri et al. [148] could also provide

an initialisation for the slice-to-volume registration taking place in motion correction as each

acquired slice can now be positioned into the template space. Additionally, the method of


Tourbier et al. [153] focuses on brain extraction and assumes that a box has been manually

cropped around the brain in order to perform an atlas-based segmentation.

In this chapter, we extend the brain localisation method presented in the previous chapter

to automatically segment the fetal brain from the unprocessed data for the purpose of motion

correction. The main challenges are that the fetus is in an unknown position and orientation,

surrounded by maternal tissues, and may have moved between the acquisitions of 2D slices.

5.3 Method

5.3.1 Brain extraction

The result of the detection process presented in Chapter 4 is not only a bounding box, but also

a partial segmentation of the brain corresponding to the selected MSER regions. Figure 5.2.a

shows an example of the resulting mask (in red). It can be noted that the selected MSER

regions do not cover the whole brain. In particular, due to the size selection using the BPD and

OFD, the extremal slices of the brain are unlikely to be included as they will be considered too

small. The brain extraction method proposed in the present chapter is hence a propagation of

this initial labeling within the region of interest defined by the detected bounding box, resulting

in the masked brain in Figure 5.2.b.

By performing a slice-by-slice segmentation based on a classifier that uses 2D patches as

features, we are able to propagate the initial labelling within the region of interest obtained

from the detection process. The motivation for using 2D patches is that they are not corrupted

by motion. Although the 2D slices are not necessarily parallel in the anatomy of the fetal brain,

it can be expected that 2D patches small enough will share sufficient similarities across different

regions of the brain to enable label propagation.

A typical patch-based segmentation searches each region of the image to find the most similar



Figure 5.2: The brain localisation using MSER regions only provides an estimate of the center of the brain and a rough segmentation of the middle of the brain (red in (a)). The label propagation step extends this segmentation to all slices within the detected bounding box, adding the white region in (a), resulting in the masked brain shown in (b).

patches within an atlas, a training database with known ground truth segmentation. The atlas

is hence aligned with the image and a final segmentation is obtained by fusing the votes from

selected patches of atlas images [28, 123]. In our application, we seek to segment a 3D volume

using a sparse labelling of its 2D slices. This sparse labelling corresponds to the periphery of the

bounding box around the brain (background label), as well as the selected MSER regions (brain

label), as shown in Figure 5.3. Therefore, we perform online learning in contrast to conventional

atlas-based approaches, which rely on offline training. Working on a single 3D image at a time

excludes the need to normalise intensities or register the volume to an atlas. Moreover, by

learning from the same image, we can adapt to the variability of the unprocessed fetal MR

data. In particular, we can work on 2D slices without prior knowledge of the scan orientation

relative to the fetal anatomy. Similarly to Wang et al. [165], we overcome the restriction of

search windows by using a single global classifier that compares patches independently of their

location in the image, instead of applying local classifiers. However, we chose a Random Forest

classifier [15], using 2D intensity patches as features, to maintain a feasible training and testing

speed. A k-nearest neighbour classifier would require little training time, but its evaluation is a time-consuming process.
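The 2D patch features used by the Random Forest can be sketched as fixed-size intensity windows extracted around given pixel positions and flattened into feature vectors (the thesis uses 17 × 17-pixel patches, Section 5.4); the function name and edge-padding behaviour are illustrative assumptions.

```python
import numpy as np

def extract_patches(slice_2d, centres, size=17):
    """Extract size x size 2D intensity patches centred at the given
    (row, col) positions and flatten each into a feature vector; the
    slice is edge-padded so that patches near the border stay in-bounds."""
    half = size // 2
    padded = np.pad(slice_2d.astype(float), half, mode="edge")
    # padded[r : r + size, c : c + size] is centred on original pixel (r, c)
    return np.stack([padded[r:r + size, c:c + size].ravel() for r, c in centres])
```

Positive patches would be sampled inside the selected MSER regions and negative patches in the narrow band at the bounding-box boundary, then passed to the Random Forest.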

To obtain the initial sparse labelling on which the Random Forest is trained, the mask



Figure 5.3: (a) Detected brain (green box) and selected MSER regions (red overlay). The top image shows a 2D slice while the bottom image is an orthogonal cut through the stack of 2D slices. (b) A Random Forest classifier is trained on patches extracted in the selected MSER regions (green boxes selected within the red overlay) and on patches extracted in a narrow band at the boundary of the detected bounding box (red boxes selected within the blue overlay).

resulting from the brain localisation method presented in Chapter 4 is first preprocessed by

fitting ellipses to the MSER regions and filling any remaining hole. These regions provide

positive examples of 2D patches to learn the appearance of the fetal brain, while negative

examples are obtained by extracting patches on a narrow band at the boundary of the detected

bounding box (Figure 5.3.b). The resulting classifier is then applied to the volume within the

detected bounding box, to obtain a probability map of brain tissues.

An estimation of the volume of the fetal brain from the gestational age [22] is used to

refine the probabilistic output of the Random Forest. This can be interpreted as a volume

constraint in the segmentation process. For this purpose, enough of the voxels most likely to

be within the brain are selected in the probability map to fill half of the estimated volume of

the brain, and subsequently set to the highest probability value. Likewise, enough of the voxels

most likely to be part of the background are selected to fill half of the background, and set

to the smallest probability value. The probability map is then rescaled between [0, 1]. This

corresponds to trimming both sides of the intensity histogram of the probability map before

rescaling it (Figure 5.4).
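The volume-constrained rescaling described above can be sketched with quantiles: the voxels most likely brain (enough to fill half the expected brain volume) are saturated to the maximum probability, the voxels most likely background (half the background volume) to the minimum, and the map is rescaled to [0, 1]. The function name and the quantile formulation are illustrative assumptions.

```python
import numpy as np

def rescale_probability(prob, brain_fraction):
    """Rescale a brain-probability map given the fraction of the image
    volume expected to be brain (inferred from gestational age):
    trim the top brain_fraction/2 and bottom (1 - brain_fraction)/2 of
    the probability histogram, then rescale linearly to [0, 1]."""
    hi = np.quantile(prob, 1.0 - brain_fraction / 2.0)   # half the brain volume
    lo = np.quantile(prob, (1.0 - brain_fraction) / 2.0) # half the background
    out = np.clip(prob.astype(float), lo, hi)
    return (out - lo) / (hi - lo)
```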


Figure 5.4: Knowing the dimension of the image as well as an estimation of the fetal brain volume inferred from the gestational age, we can estimate the ratio of the total volume of the image that is occupied by the fetal brain (b). This ratio is used to rescale the brain probability map. Figures (a) and (d) show the brain probability map before rescaling, while the effect of rescaling can be seen in figures (c) and (e).

The probability map is then converted to a binary segmentation using a CRF [88]. This is a

common approach in computer vision which enables the combination of local classification priors

(unary term) with a global consistency constraint (pairwise term). We use the formulation

proposed by Boykov and Funka-Lea [13] which seeks to minimise the energy function:

E(l) = \sum_p D(p, l_p) + \lambda \sum_{p,q} V_{pq}(l_p, l_q)

with the unary term (data cost) defined by the probability estimated by the Random Forest classifier:

D(p, l_p) = -\log P(p \mid l_p)

and a pairwise term (smoothness cost) penalising labelling discontinuities between pixels of similar intensities:

V_{pq}(l_p, l_q) = \frac{[l_p \neq l_q]}{\|p - q\|_2} \, e^{-(I_p - I_q)^2 / 2\sigma^2}

where \|p - q\|_2 is the distance between pixels taking image sampling into account, |I_p - I_q| the

intensity difference between pixels and σ an estimation of the image noise. λ is a weighting

between the data cost and the smoothness cost. High values of λ will tend to assign the same

label to large connected components while small values of λ will tend to preserve the initial

labelling. This definition of the pairwise term is the same as the edge weight of the graph

cut used by Anquez et al. [4] for the segmentation of the fetal brain. An estimation of the

image noise is obtained by computing the difference between the image and a Gaussian blurred

version of the image. A more robust method that would keep image edges from contributing

to the noise estimate could be considered in future work. When building the CRF, each voxel

is connected to its 26 neighbours with an edge of cost Vpq. In order to take fetal motion into

account, we compute distinct values of σ for in-plane and through-plane edges of the CRF,

namely σxy is computed using an in-plane Gaussian blurring while σz is computed using a

through-plane Gaussian blurring.
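The contrast-sensitive edge weight and the noise estimate it depends on can be sketched as follows. The pairwise function returns the weight applied when the two labels differ (the Potts indicator itself is handled by the CRF solver), and the noise estimate uses a simple 3-tap binomial filter as a stand-in for Gaussian blurring along one axis (axis 0 for the through-plane sigma_z, an in-plane axis for sigma_xy); both function names are illustrative assumptions.

```python
import numpy as np

def pairwise_cost(i_p, i_q, dist, sigma):
    """Contrast-sensitive smoothness cost between two pixels of
    intensities i_p and i_q separated by `dist` (mm), for noise level
    sigma: exp(-(I_p - I_q)^2 / 2 sigma^2) / ||p - q||."""
    return np.exp(-(i_p - i_q) ** 2 / (2.0 * sigma ** 2)) / dist

def noise_sigma(stack, axis):
    """Estimate image noise as the standard deviation of the difference
    between the stack and a version smoothed along one axis only, using
    a 3-tap binomial kernel as a simple Gaussian-blur stand-in."""
    kernel = np.array([0.25, 0.5, 0.25])
    smoothed = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), axis, stack.astype(float))
    return (stack - smoothed).std()
```

Identical intensities give the maximum weight 1/dist (strong penalty for a label change across a uniform area), while large intensity differences drive the weight towards zero, letting the cut follow image edges.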

Whereas the Random Forest classifier previously described was applied on 2D slices, we rely

on a 3D CRF to force the extraction of a blob-like structure, thus minimising the error in the

extremal slices of the brain. To ensure that enough structures of the fetal skull remain in the

segmentation as a guide in the registration process, ellipses are fitted on every 2D slice of the

segmentation mask and enlarged by 10% to obtain the final segmentation mask.

The resulting segmentation, as shown in Figure 5.5, is then used as input for the motion

correction procedure. In cases of extreme motion, it could be considered to treat each slice

independently with a 2D CRF, thus allowing for more discontinuities in the segmented structure.

If there is little motion between stacks, a 4D CRF could be considered, the fourth dimension

being the time between stack acquisitions, thus taking advantage of the overlap between images.


Figure 5.5: The top row shows a probability map of the Random Forest after rescaling while the bottom row shows the segmentation from the CRF (red contour) enlarged with 2D ellipses.

5.3.2 Joint segmentation and motion correction

The automated masking of the fetal brain has been integrated with the motion correction

method of Kuklisova-Murgasova et al. [87]. The latter performs slice-to-volume registration

interleaved with a super-resolution scheme and robust statistics to remove poorly registered

or corrupted slices. More specifically, the original slices are classified into inliers and outliers

based on their similarity with the reconstructed volume. This process seeks to remove artefacts

of motion corruption and misregistration. We propose to integrate automatically generated

masks for all input stacks instead of a manual input. Moreover, we are able to refine the

segmentation in the final iterations of the motion correction procedure.

The motion correction process is initialised as shown in Figure 5.6. Before performing any

cropping, all stacks are registered to a template stack in order to correct for maternal motion.

This template stack is typically chosen manually as the one least affected by fetal motion.

The same template stack was used for the manual and automated initialisations during testing

(Section 5.5.2). All stacks are then automatically masked and subsequently registered to the

template stack for a second time. A large mask encompassing all masks is then created to ensure

that the brain is not cropped. Indeed, each segmentation mask will have more confidence in

the central slices of the brain than in the extremal slices. This mask is then updated after each


Figure 5.6: The masks of the fetal brain for the original 2D slices are progressively refined using the updated slice-to-volume registration transformations.

iteration of the slice-to-volume registration. This approach significantly differs from Kuklisova-Murgasova et al. [87] who use a static mask from the template stack, but it can be related to

the evolving mask used by Kim et al. [81].

As the reconstruction progresses, the segmentation of the original slices can be refined thanks

to their recovered alignment in 3D space. In the final iterations of motion correction, the boxes

obtained with the method of Chapter 4 are used without applying any mask in order to obtain

a reconstructed volume larger than the brain. The slice-by-slice probability maps obtained

in Section 5.3.1 that correspond to slices classified as inliers, namely matching the intensity

profile of the reconstructed volume [87], are then summed to obtain a 3D probability map

(Figure 5.7). Following the same method described in Section 5.3.1, the latter is then rescaled

using an expected brain volume and used to initialise a 3D CRF on the enlarged reconstructed

volume. Finally, the resulting mask is smoothed to avoid boundary artefacts, and dilated to

incorporate skull tissues.

This mask can then be projected back to the original slices and thus correct initial errors. As

the only information that is passed on from one iteration to the next are rigid transformations,

the motion correction can proceed with the updated masked slices (Figure 5.6). This 3D CRF

is only applied in the final iterations as the motion correction starts from a low resolution

estimate of the reconstructed volume and progressively produces a sharper result.


Figure 5.7: The whole boxes detected with the method of Chapter 4 are used without any masking to reconstruct a volume larger than the brain. The probability maps obtained in Section 5.3.1 are then combined to initialise a CRF (top row). The bottom row presents the final reconstruction with the CRF segmentation in red.

5.4 Implementation and choice of parameters

The parameters of the proposed methods are λ, the weight between spatial constraint and prior

knowledge in the CRF segmentation, the patch size and the number of trees in the Random

Forest label propagation. These parameters have been chosen empirically, that is, by trial and error so as to optimise the results on a few images. In the 3D CRF, the weight λ has

been set to 20 in Section 5.3.1 and 10 in Section 5.3.2. The performance of the segmentation

is fairly stable across a wide range of values for the number of trees and the patch size, and

the choice then becomes motivated by preserving the speed of the method, which decreases

with higher values of these parameters. We chose to use a patch size of 17 × 17 pixels (see

Figure 5.3.b) and Random Forests of 30 trees in our label propagation with patch features.
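The label propagation step can be approximated with scikit-learn [115]. The sketch below is a simplification under stated assumptions: `extract_patches` and `train_label_propagation` are hypothetical helpers, and the features are raw 17 × 17 intensity patches (Figure 5.3.b) fed to a forest of 30 trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

PATCH = 17  # patch size selected empirically (Figure 5.3.b)

def extract_patches(image, coords, size=PATCH):
    """Flatten a size x size patch around each (row, col) coordinate.
    The image is zero-padded so border pixels also yield full patches."""
    half = size // 2
    padded = np.pad(image, half, mode="constant")
    return np.array([padded[r:r + size, c:c + size].ravel()
                     for r, c in coords])

def train_label_propagation(image, labelled_coords, labels):
    """Train a Random Forest of 30 trees on labelled intensity patches.
    (Hypothetical helper; the thesis propagates labels from detected
    ellipses to the remaining pixels of the detected box.)"""
    forest = RandomForestClassifier(n_estimators=30, random_state=0)
    forest.fit(extract_patches(image, labelled_coords), labels)
    return forest
```

Calling `predict_proba` on the patches of unlabelled pixels would then yield the per-pixel brain probabilities used to build the slice-wise probability maps.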

While the motion correction reconstructs a volume with a resolution of 0.75 mm, the CRF run in the last steps of motion correction is performed after downsampling the reconstructed volume by a factor of 2 to limit memory consumption, hence at 1.5 mm resolution. Performing a grid search is a computationally intensive process; however, in order to validate our choice of parameters, we ran the brain extraction pipeline on all subjects, changing one parameter at a

time. The results, in the form of mean Dice scores, are presented in Figure 5.8 and conform to our expectations.

[Figure 5.8 shows three plots of the mean Dice score as a function of λ (1 to 100), of the number of trees (1 to 100) and of the patch size (3 to 41).]

Figure 5.8: Influence on the mean Dice score of the main parameters of the brain extraction process: the weight λ between prior knowledge and spatial constraints in the CRF segmentation, the number of trees and the patch size in the Random Forest label propagation. The parameter values that have been selected for the subsequent experiments are highlighted in red.

The code was implemented with the IRTK library, using OpenCV [14] and scikit-learn [115]

for the detection and segmentation parts, and is available online1. The average time measurement for the separate steps of the pipeline is shown in Table 5.1. While the brain detection and extraction can be run on a normal PC, motion correction was performed on a machine with 24 cores and 64 GB of RAM.

Step                 Time
brain localisation   < 1 min
brain extraction     2 min
motion correction    1 h 20 min
3D CRF               10 min

Table 5.1: Average computation time for the separate steps of the pipeline.

1https://github.com/kevin-keraudren/example-motion-correction


5.5 Evaluation

5.5.1 Data

The data used for evaluating the proposed method is the same as in the previous chapter and

has been described in Section 2.3.1, dataset 1.

5.5.2 Ground truth and experiments

The automated brain extraction was evaluated using a ten-fold cross-validation: 39 subjects

were randomly selected for training the brain detector described in Chapter 4 and 20 subjects

for testing the automated slice-by-slice segmentation, and this process was repeated 10 times.

Scans of fetuses who have been imaged several times were kept within the same fold during cross-validation, to avoid having similar data in both training and testing. The Bag-of-Words classifier of

the brain detector was trained using bounding boxes tightly drawn around the brain.

A motion corrected 3D image reconstruction was generated for each subject using an au-

tomated brain mask randomly selected from one fold of the ten-fold experiment. These were

evaluated against reconstructions performed with the same algorithm, but with a manual mask

drawn on one stack and propagated to the other stacks after rigid alignment. The 3D motion

corrected reconstructions achieved using manual brain extraction were treated as a gold stan-

dard and used to directly evaluate the proposed segmentation pipeline for the fetal brain. This

was done in two ways:

Firstly an expert observer, blinded to the reconstruction method, viewed both reconstructions

for each subject in random order and rated them using the following qualitative scale:

A. this reconstruction has no evident artefacts from motion.

B. this reconstruction has minor artefacts but it can still be used for diagnosis.


C. this reconstruction cannot be used for clinical diagnosis (failed motion correction).

The same expert was also asked to assign a motion score to each subject based on the appearance

of the initial average of all stacks after volume-to-volume registration and before any slice-to-

volume registration. In case of doubt between severe motion and low signal-to-noise ratio

(SNR), the expert looked at the original scan acquisitions in order to assign a score. The scores

were assigned as follows:

I. no motion (Figure 5.9.a).

II. some motion (Figure 5.9.b).

III. severe motion (Figure 5.9.c).

(a) (b) (c)

Figure 5.9: The success of the motion correction procedure is strongly dependent on the initial volume-to-volume registration. For each pair of images, the top row presents the initial average of all stacks while the bottom row is the final reconstruction. Column (a) is an example with almost no motion, column (b) is an example with motion that is successfully corrected and column (c), where the brain is barely recognisable in the average image, is an example with extreme motion where motion correction fails.

Secondly, the volumetric images generated starting with manual brain extraction were seg-

mented using the patch-based method of Wright et al. [169]. The rigid body transformations

determined in aligning the original slices with the reconstructed volume were then used to


(a) (b) (c)

Figure 5.10: Example of ground truth segmentation (green), slices excluded from motion correction (red) and automated segmentation (yellow). (a) and (b) are original 2D slices while (c) is a through-plane cut. (a) has been classified as inlier during motion correction and (b) as outlier due to motion artefacts.

project the ground truth segmentation onto the original slices. The motion correction method

removes artefacts of motion corruption and misregistration by classifying the original slices into

inliers and outliers [87] based on their similarity with the reconstructed volume. Only slices

which have been included in the motion corrected volume with a strong confidence were used to

evaluate the slice-by-slice segmentation (Figure 5.10). Also, subjects whose motion correction

using manual masks failed have been excluded from the slice-by-slice evaluation, as no ground

truth could be generated.

When localising and segmenting the fetal brain, normal growth charts are used to define a

realistic range of sizes and volumes of the brain in order to remove improbable detection results,

while still allowing a large variation in brain size. From standard growth charts, extending the normal range to include 99.6% of subjects corresponds to a one-week error in gestational age. To assess the stability of the proposed

method depending on the estimated age, we thus performed a separate ten-fold cross-validation

simulating extreme growth restriction by adding a 5 week error in all gestational ages.

Additionally, the results obtained using the method of Taleb et al. [149], which is part of the

Baby Brain Toolkit [124], are also reported.


5.5.3 Evaluation metrics

The proposed segmentation pipeline for the fetal brain was evaluated using the measures of

recall and precision [7]. The recall, defined as true positives / (true positives + false negatives), measures in our case how much of the brain is correctly segmented:

    recall = Volume of correctly classified voxels / Ground truth volume of the brain

while the precision, defined as true positives / (true positives + false positives), gives an estimate of how much of the background is correctly excluded:

    precision = Volume of correctly classified voxels / Detected volume of the brain

Similarly to Ison et al. [69], we selected these two metrics as they provide a way to compare

the extraction of the brain in its entirety while minimising the inclusion of other tissues. More-

over, these measures have the advantage of being independent of the amount of background and

are thus not biased when images are cropped. The segmentation we seek as input for motion

correction should have a maximal recall as it should contain the whole brain. However, its

precision should be high enough for registration to succeed thanks to the exclusion of maternal

tissues, but not necessarily maximal as including tissues of the fetal head can help the registra-

tion. Additionally, we present Dice segmentation scores [41] which summarise both recall and

precision measures:

    d = 2 × |A ∩ B| / (|A| + |B|) = 2 × (precision × recall) / (precision + recall)

where |A| and |B| represent respectively the detected and ground truth volume of the brain.
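These definitions translate directly into code over binary masks; a minimal NumPy sketch (the function name is ours, not from the thesis):

```python
import numpy as np

def recall_precision_dice(detected, ground_truth):
    """Recall, precision and Dice score for two binary masks.
    `detected` is the automated brain mask A, `ground_truth` the
    manual mask B; all three measures lie in [0, 1]."""
    detected = detected.astype(bool)
    ground_truth = ground_truth.astype(bool)
    overlap = np.logical_and(detected, ground_truth).sum()  # |A ∩ B|
    recall = overlap / ground_truth.sum()
    precision = overlap / detected.sum()
    dice = 2.0 * overlap / (detected.sum() + ground_truth.sum())
    return recall, precision, dice
```

The Dice value returned by the overlap formula agrees with the harmonic form 2 × precision × recall / (precision + recall).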


                       Mean recall     Mean precision   Dice score
Box detection          98.8 (± 2.8)    57.0 (± 7.7)     72.0 (± 6.0)
RF/CRF segmentation    90.6 (± 9.7)    90.3 (± 9.5)     90.5 (± 8.9)
Enlarged segmentation  96.4 (± 9.4)    73.3 (± 9.0)     84.2 (± 8.2)
Final segmentation     93.2 (± 3.7)    93.0 (± 5.1)     93.0 (± 3.7)
Taleb et al. [149]     93.1 (± 17.4)   68.7 (± 19.8)    80.4 (± 16.2)

Table 5.2: Mean recall, mean precision and Dice score (± standard deviation) of the automated brain segmentation computed over the ten-fold cross-validation and for the reconstructed volumes.

5.5.4 Localisation and segmentation results

The density plots in Figure 5.11 summarise the distribution of the recall and precision scores

at different stages of the workflow. The box detection corresponds to the localisation of the

fetal brain. The RF/CRF segmentation is the extracted brain after the patch-based Random

Forest and CRF segmentation, while the enlarged segmentation is the same segmentation after

dilation. Lastly, the final segmentation is the segmentation of the motion corrected brain.

Table 5.2 presents the mean values for these scores.

The automated detection of a bounding box for the fetal brain (Chapter 4) contains on average 98.8% of the brain. However, the detected box encompasses a large portion

of background pixels, with a precision of only 57.0%. This low score can be expected as we

select a box whereas the shape of the brain is ellipsoidal, and we purposefully overestimate the

dimensions of that box (Section 4.4.1).

We therefore segment the content of the detected box into brain and non-brain tissues.

The CRF segmentation has both recall and precision mean scores above 90%. However, this

segmentation misses parts of the brain and does not include tissues of the fetal skull, which are

potentially useful for registration. We hence enlarge it, increasing the recall score as more brain

tissues are included (96.4%), but decreasing the precision score as more background voxels are

also included (73.3%).

This enlarged segmentation is used as input for motion correction, as well as the 2D probabil-


ity maps obtained at the RF/CRF segmentation stage. The detailed segmentation we obtain

after motion correction has both mean recall and mean precision scores above 93%. This

segmentation improves the robust statistics performed during motion correction, leading to a

sharper motion corrected volume (Figure 5.12).

Simulating extreme growth restriction by adding a 5 week error in all gestational ages, the

mean Dice score of the RF/CRF segmentation (Section 5.3.1) in the ten-fold cross-validation

is still 86.7%, which should be compared with the mean Dice score of 90.5% obtained when

providing correct estimates of gestational ages.

The masks produced using the method of Taleb et al. [149] obtained a mean Dice score of

80.4% (± 16.2%). These masks should be compared with the enlarged segmentation masks

produced by the proposed method that are used as input for motion correction and obtained

a mean Dice score of 84.2% (± 8.2%). The longer tail of the precision distribution in the

rightmost plot of Figure 5.11 highlights the higher number of unsuccessful localisations of the

brain using the method of Taleb et al. [149].

5.5.5 Reconstruction results

The results of the evaluation of motion correction are presented in Table 5.3.a. It can be noted

that in 88% of cases, the automated initialisation scored similarly or better than the manual

initialisation, and in 85% of cases the reconstructed volume obtained from the automated

initialisation was good enough for clinical diagnosis. Although the automated initialisation

failed to produce meaningful results in 15% of cases, neither method produced useful results

in 7.5% of cases. This is mostly due to a poor initial alignment of the stacks (Figure 5.9.c),

discussed in Section 5.6.

Among the 6 subjects that received a quality score of C for the manual initialisation (Ta-

ble 5.3.a), we excluded from the ground truth dataset the 4 cases corresponding to registration

failures while keeping the 2 cases corresponding to low SNR. In a few cases (4.5%), the auto-


Figure 5.11: The localisation of the brain in Chapter 4 has a high recall but a low precision (Box detection): it covers the whole brain but does not discriminate the background well. The subsequent Random Forest and CRF segmentation provides a better precision but has a lower recall as it misses fragments of the brain (RF/CRF segmentation). This segmentation is hence enlarged by fitting 2D ellipses in order to include as much brain and skull tissues as possible, thus increasing the recall and decreasing the precision (Enlarged segmentation). The Final segmentation is the precise segmentation of the brain obtained when completing motion correction. The rightmost plot presents the brain extraction results obtained using the method of Taleb et al. [149] and should be compared with the results from the Enlarged segmentation (3rd plot).


(a)
                           Automated segmentation
                           A     B     C
Manual            A        42    3     0
segmentation      B        2     8     5
                  C        0     1     5

(b)
                           Automated segmentation
                           A     B     C
Motion            I        27    4     1
score             II       16    8     6
                  III      1     0     3

Table 5.3: Table (a) presents the quality scores obtained for each subject for the automated and manual initialisation of motion correction. The diagonal corresponds to subjects where both methods gave results of similar quality, while the upper part of the array corresponds to automated initialisations with a higher quality score and the lower part corresponds to manual initialisations with a higher score. Table (b) presents the quality scores of the automated initialisation depending on the motion scores. It can be noted that most cases with little motion were successfully reconstructed while most cases with extreme motion failed.

mated method performed better than the manual segmentation. An example is presented in

Figure 5.12 and can be explained by the tighter mask produced by the automated method.

Table 5.3.b presents the motion scores assigned by the expert. These scores give an overview

of the amount of motion in our data, with 48% of subjects classified as no motion, 45% as some

motion and 6% as severe motion. The case with a quality score of C and a motion score of I has

a very low SNR and the default parameters of motion correction would need to be adjusted to

perform an adequate reconstruction. It can be noted that cases with extreme motion represent

only a small portion of our dataset but that our method did not enable a successful motion

correction of most of these cases.

5.6 Discussion

The scores for the RF/CRF segmentation and the enlarged segmentation can be compared

with the sensitivity (recall) and specificity (precision) scores reported by Ison et al. [69], which

were respectively between 74% and 88%, and between 85% and 89%, depending on the scan


Figure 5.12: Example reconstruction with automated initialisation (top row) and manual initialisation (bottom row). It can be observed that the method with automated segmentation produces a tighter mask of the brain, thus resulting in fewer artefacts.

orientation (sagittal, coronal or transverse). The mean Dice score for the final segmentation

is 93.0%, thus providing a good starting point for processing the fetal brain after motion

correction. Brain masks before motion correction were obtained with the method of Taleb

et al. [149] and compared to the proposed pipeline through mean Dice scores. Our proposed

method, which relies on a slice-by-slice detection and segmentation, is thus more robust than

the registration based approach of Taleb et al. [149].

As can be seen from Figure 5.11, the box detection succeeded for all stacks, but a few stacks

(less than 1%) failed to be segmented correctly. This does not necessarily mean that manual

intervention will be required in order to perform motion correction. Indeed, assuming that most

of the stacks from the same subject are correctly segmented, the motion correction procedure

then automatically rejects wrongly segmented stacks based on their dissimilarity with the rest

of the dataset.

This demonstrates the success of the proposed method: it allows a full automation of motion

correction, saving the time spent on manual segmentation while still providing comparable

results. Moreover, it can be noted that the masks resulting from the automated initialisation

are of similar quality across subjects in terms of brain coverage and exclusion of external


tissues. On the contrary, the manual masks have more variability between subjects, sometimes

not fully covering all intracranial structures or including too much background, mostly due to

methodological differences between experts rather than to the fetus position or motion.

Among the 5 subjects that failed to be reconstructed with the automated method but were

successfully reconstructed with the manual initialisation, 3 failures can be attributed to part of

the maternal bladder or portions of amniotic fluid between the fetal head and the uterine wall

being included in the segmentation. This is a weakness inherent to our detection method. As

these regions are of homogeneous intensity, they are selected in the MSER detection process.

Furthermore, as the ellipses fitted to them encompass the whole brain, they are kept in the Bag-

of-Words classification. The resulting segmentation is a mask too wide for motion correction to

be efficient. For the remaining 2 cases, the segmentation missed part of the brain, which could

have been improved by setting a lower λ value in the CRF segmentations, namely a weaker

smoothness constraint.

5.6.1 Extreme motion

As presented in Figure 5.9, the success of motion correction strongly depends on the initial

alignment of all stacks after volume-to-volume registration. If the initial misalignment is too

significant, the slice-to-volume registration, which performs a spatially limited search, will not

be able to correct it. This highlights the main limitation of the proposed method: the orienta-

tion of the brain is still unknown after segmentation and the initial alignment between stacks

is left to an intensity based image registration, which is initialised using the scanner world

coordinate system.

Extreme motion between 3D stacks can occur due to a change of the parameter settings of

the scanner or a change of the mother’s position, resulting in incoherent scanner coordinates

for the center of the brain in each stack. Such cases can be automatically detected using a 3D

clustering such as mean shift [26] with a kernel size defined by the typical amplitude of a fetal


head motion. Separate motion corrections can then be performed for each cluster. In order

to combine all stacks during motion correction, an initial alignment needs to be provided, for

instance using the method of Ison et al. [69] or through the manual selection of landmarks.
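Such a clustering of per-stack brain centres could be sketched with scikit-learn's MeanShift. The 20 mm bandwidth below is a hypothetical stand-in for the typical amplitude of fetal head motion, and the centres are assumed to come from the brain detection step, expressed in scanner world coordinates.

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_stacks_by_brain_position(centres_mm, motion_amplitude_mm=20.0):
    """Cluster per-stack brain centres (in mm) with mean shift; stacks
    whose centres fall in the same cluster can share one motion
    correction run. The bandwidth is an assumed placeholder for the
    typical amplitude of fetal head motion."""
    clustering = MeanShift(bandwidth=motion_amplitude_mm)
    return clustering.fit_predict(np.asarray(centres_mm, dtype=float))
```

Stacks sharing a label would then be motion corrected together, with one reconstruction per detected cluster.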

Extreme motion between stacks is more likely to arise with younger fetuses who have more

freedom to move within the womb. Figure 5.13 presents a fetus of 20 weeks who moved

significantly during the acquisition process. Although the proposed segmentation method led

to a reasonable mask of the brain, most slices will be excluded from the reconstructed volume by

the automated outlier removal of Kuklisova-Murgasova et al. [87], and the reconstruction will

be performed using only the most coherent part of the acquired data. Kainz et al. [74] proposed

to estimate the relative motion within each stack from the correlation between adjacent slices.

However, this approach does not provide an absolute measure of motion and only permits selecting the least corrupted stack as a template for the volume-to-volume registration.

Taimouri et al. [148] proposed to position each 2D slice in template space, which would allow most slices to be included in the motion corrected volume. However, the proposed template-to-slice matching only considers a 45° rotation step, which is large compared to typical fetal head rotations,

due to the computational cost of the exhaustive search.

(a) (b) (c)

Figure 5.13: The proposed brain extraction method is robust to fetal motion, although such scans may have to be excluded from the reconstruction process. (a) Acquired slice in transverse direction, orthogonal cuts in (b) sagittal and (c) coronal directions (fetus of 20 weeks, same as Figure 4.1).

The process of brain extraction and motion correction could be further improved if infor-

mation obtained from the segmentation was used more extensively in the alignment process.


Likewise, the robust statistics that remove poorly registered or corrupted slices (Section 5.3.2)

are only based on intensities and could gain in accuracy if combined with the segmentation.

Figure 5.14: If the fetus is sufficiently far from the wall of the uterus, the amniotic fluid surrounding the head enables a 3D rendering of the fetal head.

In some rare cases, little fetal movement occurred during the scans and the fetal head is sufficiently far from the uterine wall for the surrounding amniotic fluid to enable a 3D rendering of the fetal head. In Fig-

ure 5.14, the external structure of the ear is particularly apparent. Only quasi-rigid parts of

the fetal anatomy can be reconstructed by currently existing motion correction methods [75].

In particular, motion correction of the fetal mouth and airways is difficult due to fetal swallowing.

5.6.2 Abnormal brain growth

The proposed method relies on a training phase for the brain localisation, learning from a

database of healthy subjects, and incorporates size constraints associated with the gestational

age. It is therefore important to consider the impact of abnormal brain growth on the perfor-

mance of the proposed method.

Abnormal brain growth resulting in a smaller brain was simulated by altering the gestational

ages in our dataset. As can be expected from the filtering by size taking place during the brain

detection process, the method showed some sensitivity to abnormal sizes of the brain, with the

mean Dice score of the RF/CRF segmentation decreasing by 3.8%. However, it can be expected


that brains with a very abnormal structure, in conditions such as severe ventriculomegaly or

hydrocephalus, will not be segmented with the proposed method without including subjects

with similar pathologies in the training dataset. Indeed, for the trained classifier in the detection

process to be able to generalise from the training data to any unseen subject, this training data

needs to be representative of the possible test cases.

To investigate the ability of the proposed method to deal with fetal MRI with abnormalities,

we randomly selected 16 subjects among fetuses with abnormal brain development and tested

the proposed fully automated motion correction method. An acceptable motion corrected

volume was obtained for 10 out of these 16 subjects, with the following breakdown:

• 7 success / 7 total for the cases with microcephaly (brain abnormally small)

• 3 / 7 for the cases with ventriculomegaly (ventricles abnormally large, see Figure 5.15)

• 0 / 1 for the case with hydrocephalus (a build-up of fluid in the brain)

• 0 / 1 for the case with alobar holoprosencephaly (see Figure 5.15).

These results confirm that an abnormal size has little impact on the motion correction pipeline,

while differences in appearance (intensity distribution) have a strong impact. Having only

been trained on healthy subjects, the classifier used in the brain detection process is unable to

correctly localise the brain of patients with significant abnormalities (the brain gets misclassified

as background). This is not a limitation of the proposed method but a limitation due to the

training database, which needs to be representative of the test cases. Figure 5.15 presents

the most abnormal brain that was successfully reconstructed (a patient with bilateral severe

ventriculomegaly), as well as one of the scans from a patient with alobar holoprosencephaly,

for whom the pipeline failed.


(a)

(b)

Figure 5.15: (a) Subject with severe bilateral ventriculomegaly that has been successfully motion corrected, and (b) subject with alobar holoprosencephaly, for whom the pipeline failed.

5.7 Conclusion

In this chapter we have presented a method to automatically localise, segment, and refine the

segmentation of the fetal brain from misaligned 2D slices, making manual effort obsolete for

fetal brain MRI reconstruction.

The proposed method has been evaluated on a large database of 66 subjects and when assessed

by an expert, it obtained in 88% of cases a quality score similar to or better than motion

correction performed using manual segmentation. The proposed method has the advantage

of only learning from a bounding box ground truth, which is simple to obtain compared to a

detailed segmentation. Nonetheless, knowledge of the appearance of the rest of the fetal head

could be beneficial in order to obtain a more precise segmentation that would preserve more

facial tissues while excluding maternal tissues, enabling a more robust initial registration.


Chapter 6

Localising the heart, lungs and liver

This chapter is based on:

K. Keraudren, B. Kainz, O. Oktay, V. Kyriakopoulou, M. Rutherford, J. V. Hajnal, and

D. Rueckert. Automated Localization of Fetal Organs in MRI Using Random Forests with

Steerable Features. In MICCAI, under review, 2015

K. Keraudren, O. Oktay, W. Shi, J. V. Hajnal, and D. Rueckert. Endocardial 3D Ultra-

sound Segmentation using Autocontext Random Forests. In Challenge on Endocardial Three-

dimensional Ultrasound Segmentation, MICCAI. 2014

Abstract

In this chapter, we propose a method based on Random Forests with steerable features to

automatically localise the heart, lungs and liver in fetal MRI. During training, all MR images

are mapped into a standard coordinate system that is defined by landmarks on the fetal anatomy

and normalised for fetal age. Image features are then extracted in this coordinate system.

During testing, features are computed for different orientations with a search space constrained

by previously detected landmarks. The method was tested on healthy fetuses as well as fetuses


with intrauterine growth restriction (IUGR) from 20 to 38 weeks of gestation. The detection

rate was above 90% for all organs of healthy fetuses in the absence of motion artefacts. In the

presence of motion, the detection rate was 83% for the heart, 78% for the lungs and 67% for

the liver. Growth restriction did not decrease the performance of the heart detection but had

an impact on the detection of the lungs and liver. Preliminary results on the application of the

proposed method to initialise the motion correction of the fetal thorax are presented.

6.1 Introduction

Although the main focus of fetal Magnetic Resonance Imaging (MRI) is in imaging of the brain

due to its crucial role in fetal development, there is a growing interest in using MRI to study

other aspects of fetal growth, such as assessing the severity of intrauterine growth restriction

(IUGR) [37]. In this chapter, we present a method to automatically localise fetal organs in

MRI (heart, lungs and liver), as well as preliminary results on how the localisation results can

be used to initialise the motion correction of the fetal organs in the thorax and abdomen.

The main challenge for the automated localisation of fetal organs is the unpredictable position

and orientation of the fetus. To the best of our knowledge, only two papers in the published

literature consider the problem of localising fetal organs in MRI beyond the brain of the fetus.

Both methods [5, 83], discussed in Section 2.2.4, rely on a skeleton model, but neither propose

a fully automated localisation for the landmarks necessary to fit the proposed model. Two

methodological approaches can be identified from the current literature for the localisation of

the fetal brain in MRI. The first approach seeks a detector invariant to the fetal orientation

[69, 72, 77]. The second approach learns to detect fetuses in a known orientation and rotates

the test image [4]. We adopt this second approach in our proposed method.

We first experimented with the Random Forest framework proposed by Criminisi et al. [29],

introducing rotation invariance by artificially rotating the training dataset. In order to evaluate


our implementation, we took part in the CETUS challenge1 (MICCAI 2014) on the segmenta-

tion of the left ventricle in echocardiographic images. Random Forests were applied sequentially,

similarly to the autocontext framework of Tu [155], in order to refine the localisation into a

segmentation. While reasonable segmentation results were obtained on the echocardiographic

images, which are all in a standard orientation, the method failed to correctly localise fetal

organs in MR data, where the orientation of the fetus is arbitrary. This work, presented in

Section 6.2, led to the decision to train a detector on aligned images, and apply this detector

at different orientations at test time.

As illustrated by steerable features [174], it is more efficient to extract features in a rotating

coordinate system rather than to rotate the whole 3D volume. Besides avoiding the com-

putational cost of interpolating the complete volume for each rotation, steerable features are

computationally efficient as they only require a single integral image. We thus propose to ex-

tend the Random Forest framework for organ localisation [29] with the extraction of features in

a local coordinate system specific to the anatomy of the fetus. Consequently, the sampling grid

of the input features of the Random Forest is rotated instead of the image itself. While the

steerable features of Zheng et al. [174] are derived from local voxel values for image intensity

and gradient, we propose to steer long range features, introduced in Criminisi et al. [29].

As discussed in Section 3.3.3, the Hough Forest approach [56] combines classification and

regression so that only voxels assigned to a certain class can vote for the location of specific

landmarks. This is of particular relevance when detecting landmarks within the fetus whose

position has little correlation with surrounding maternal tissues. While the trees in a Hough

Forest randomly alternate decision and regression nodes, we chose to sequentially apply a

classification and a regression forest. This enables us to benefit from generic Random Forest

implementations [115] while focusing on the key step of feature extraction.

At training time, image features are learnt in a coordinate system that is normalised for the

age of the fetus and defined by the anatomy of the fetus (Figure 6.5). At test time, assuming that

1 http://www.creatis.insa-lyon.fr/Challenge/CETUS

Figure 6.1: Overview of the proposed pipeline for the automated localisation of fetal organs in MRI.

the center of the brain is known, image features are extracted in a coordinate system rotating

around the brain in order to find the heart. The lungs and liver are then localised using a

coordinate system in which one axis is parallel to the heart-brain axis. While the position

and orientation of a rigid body is defined by six parameters, this sequential search reduces the degrees of freedom to five when localising the heart, and to four when localising

the subsequent organs. An overview of our proposed pipeline is presented in Figure 6.1. In

the remainder of this chapter, we will first briefly describe our autocontext Random Forest

framework in Section 6.2 before presenting our main method in Section 6.3. This proposed

approach is then evaluated in Section 6.4 on two datasets of MRI scans. The first dataset, used

for training and leave-one-out cross-validation, consists of scans of 30 healthy and 25 IUGR

subjects without significant motion artefacts (Section 2.3.2, dataset 2 ). The second dataset

(64 subjects) is used to evaluate the performance of the proposed method in the presence of

motion artefacts (Section 2.3.3, dataset 3 ).

6.2 Segmentation with autocontext Random Forests

We present in this section a method which was initially aimed at localising the fetal body in MRI

data. Although this method was unable to cope with the arbitrary orientation of the fetus, it

was successfully applied to the task of segmenting the left ventricle in echocardiographic images

in the CETUS challenge (MICCAI 2014).

Figure 6.2: Overview of the autocontext framework for segmenting the left ventricle endocardium.

While most of the image segmentation methods based on Random Forests classify each

pixel independently, we apply several Random Forest classifiers successively, each one gaining

contextual information from the classification results of its predecessors. This framework of

applying successive classifiers, outlined in Figure 6.2, is called autocontext [155]. Similarly to

Kontschieder et al. [85], we use GDTs, namely the Euclidean distance weighted by the image

gradient (Section 3.4.3), computed between each autocontext iteration, in order to enhance

contextual information. The tests used at the nodes of the trees are based on differences of

mean intensity over displaced rectangular areas [31]. Each test thus selects two 3D patches

of random sizes at random offsets from the current pixel, and compares their mean intensity

(Figure 6.3.b). As these tests are invariant to intensity shifts, image intensities do not need to

be standardised. In order to enable more interaction between the classes to be learnt (spatial

context), those patches are either both selected on the original image, or on two possibly

different images among the detection probability maps of each class and their corresponding

GDTs. Unlike Kontschieder et al. [85], who compute GDTs using the probabilistic class regions, we use the class centroids in order to address the ambiguity between the four heart chambers.

To enable the comparison between class probabilities and GDTs, those images are rescaled to

the same intensity range.
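As an illustration of the autocontext loop described above, the following toy sketch alternates classification and context propagation on synthetic 1D data. Box smoothing stands in for the gradient-weighted GDTs, and all data, names and parameters here are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy 1D "image": a foreground object (class 1) on a noisy background.
n = 2000
labels = np.zeros(n, dtype=int)
labels[800:1200] = 1
intensity = labels + rng.normal(0, 0.5, n)

def smooth(p, k=25):
    # Box smoothing stands in for the gradient-weighted geodesic distance
    # transforms (GDTs) used to propagate spatial context between iterations.
    return np.convolve(p, np.ones(k) / k, mode="same")

classifiers, context = [], np.zeros(n)
for it in range(3):  # autocontext iterations
    # Each classifier sees the raw intensity plus the (smoothed)
    # probability map produced by its predecessor.
    feats = np.column_stack([intensity, smooth(context)])
    clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(feats, labels)
    context = clf.predict_proba(feats)[:, 1]
    classifiers.append(clf)

accuracy = ((context > 0.5).astype(int) == labels).mean()
```

Each stage consumes the probability map of its predecessor, which is how spatial context accumulates across iterations.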

The training dataset in the CETUS challenge consists of 3D echocardiography image se-

quences of the beating heart, with ground truth segmentation of the left ventricle at end-

diastolic (ED) and end-systolic (ES) frames. In order to train each classifier on as many images

as possible (data augmentation), the ground truth segmentation of ED and ES frames are prop-

Figure 6.3: (a) Ground truth segmentations of the myocardium (green) and the mitral valve (red) were generated using morphological operations. (b) The tests used at each tree node are based on the difference of mean intensity between 3D patches. (c) These patches can be taken from the input image, or from the probability maps obtained during the autocontext iterations (top row) and their corresponding GDTs (bottom row).

agated to all other frames using non-rigid image registration. Additionally, randomly rotated

versions of each frame (±30° along each axis) are generated to increase the training size as well

as the rotation invariance of the trained classifier. Finally, we formulate the two class problem

of the challenge into a four class problem: left ventricle endocardium, myocardium, mitral valve

and background. Autocontext is a framework that implicitly learns a shape model and more

importantly, the spatial relation between different classes. The two additional classes are thus

introduced in order to take advantage of the autocontext framework. Approximate segmenta-

tions are automatically generated for the myocardium and the mitral valve using morphological

operations and fitting an ellipsoid to the ground truth segmentation of the left ventricle to obtain its main axis (Figure 6.3.a).

Properties of the classifier: The proposed classifier is translation invariant and produces

a probabilistic classification at every pixel. The tests performed at every node of the decision

trees are invariant to intensity shifts [31]. The classifier is tolerant to some degree of rotation,

depending on the amount of rotation present in the training data. By construction, it selects

one connected component per class.

Implementation: While a generic implementation of Random Forests [115] is used in the

work presented in the next section, a Random Forest implementation adapted to a sliding

window framework has been developed in C++ for the present work2. Trees are grown in parallel

using Intel Threading Building Blocks (TBB) library. At test time, instead of parallelising over

the trees, it is more efficient to parallelise over the 2D slices of the 3D volume. The autocontext

framework is written in Python. We use Random Forests of 20 trees, with a maximal tree depth

of 20. Images are resampled to 1×1×1 mm3 and the maximal patch size in the binary tests is

set to 60 pixels, for a maximal offset of 30 pixels. When performing non-local means denoising

as a preprocessing step, the patch size is set to 3×3×3 and the weights of the neighbouring

patches are computed within an image window of size 7×7×7 centred on the target patch.

Training the classifiers takes approximately a day on a 32-core, 256 GB RAM computer, while

testing only takes 90 seconds for four autocontext iterations. Non-local means denoising for a

single frame takes about 1 minute on a quad-core machine.

                 Original images                  Speckle reduced images
Training size    MAD (mm)    HD (mm)    DS (%)    MAD (mm)    HD (mm)    DS (%)
100 frames       2.70±0.84   9.58±3.36  16.1±5.5  2.75±1.01   9.58±3.13  15.7±5.5
300 frames       2.60±0.80   9.64±3.28  15.6±5.4  2.32±0.69   9.19±3.33  14.3±5.2
600 frames       2.57±0.68   9.20±3.08  15.7±5.6  2.31±0.74   8.66±2.79  13.6±4.2

Table 6.1: Segmentation results for the left ventricle on the testing set (Patients 16 to 30): mean absolute distance (MAD), Hausdorff distance (HD) and modified Dice score (DS).

Results: The results obtained on the first testing dataset of the CETUS challenge are pre-

sented in Table 6.1 for different sizes of training dataset. The best segmentation scores were

obtained on the denoised data, for a training size of 600 frames and four iterations of autocon-

text, with a mean Dice score of 86.4%. Among all the parameters of the model, the two most

important were the size of the training dataset and the choice of additional classes that can

provide spatial context when learning to segment the left ventricle, such as the mitral valve and

the myocardium (Figure 6.3.a). Indeed, in the experiments performed, introducing a class for

2 https://github.com/kevin-keraudren/CETUS-challenge

the mitral valve was found necessary to enable the classifier to position the boundary between

the left ventricle and the left atrium.

While reasonable results were obtained for the echocardiographic images of the CETUS

challenge, this method was unable to cope with the arbitrary orientation of the fetus, even

when randomly rotating the training images. A certain amount of rotation invariance can be

learned by the detector from rotated versions of the training data. However, similarly to the

detector of Viola and Jones [161] and to most sliding window approaches, it does not provide a

complete rotation invariance due to the choice of image features. As discussed in Section 6.5,

the first iteration of autocontext is a localisation step, while subsequent iterations refine the initial segmentation. This first iteration yielded poor segmentation results in the

case of the fetus, which led to the development of the method presented in the next section

that accounts for the unknown orientation of the fetus.

6.3 Random Forests with steerable features

6.3.1 Preprocessing

There is a large variability in the MR scans of fetuses due to fetal development. In particular,

the length of the fetus increases by 50% between 23 weeks and term (40 weeks) [6]. A typical

approach for object detection would be in this case to resize all fetal images using their ground

truth segmentation and apply the trained classifier at multiple scales at test time. However,

as mentioned in Chapter 2, by the time a pregnant woman undergoes an MRI scan, the gesta-

tional age of the fetus has already been determined at previous ultrasound examinations. An

approximate size of the fetus can thus be inferred in order to apply our detector at a single

scale.

We thus propose to normalise the size of all fetal images by resampling them to an isotropic

voxel size sga that is a function of the gestational age, so that a fetus of 30 weeks is resampled to

voxel size s30: sga = CRLga/CRL30 × s30, where CRL denotes the crown-rump length [6]. This

differs from previously proposed methods which either ignored the gestational age [4, 69, 72]

or only used fetal age to define size constraints [77]. This approach does not take into account

the allometric growth of the fetus as it considers that the images of fetuses can be normalised

across all ages using a homothety. However, as can be seen in Figure 6.4, it is a reasonable approximation that enables the definition of size constraints independent of the gestational

age. The proposed normalisation also does not take into account the change of T2 image

contrast that occurs during the development of the fetus [95]. However, this is accounted for

by the choice of image features, namely cube features, which are invariant to intensity shift or

rescaling. This size normalisation allows a single classifier to be trained across all ages, avoiding

training classifiers specific to certain age ranges, which may prove difficult with a small training

dataset.
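The age-dependent voxel size can be sketched as follows. The CRL values in the lookup table are illustrative placeholders; actual values would be taken from the growth charts of [6]:

```python
import numpy as np

# Illustrative CRL values (cm) by gestational age (weeks); real values
# would come from published fetal growth charts (reference [6] in the text).
CRL_TABLE = {20: 16.5, 25: 22.0, 30: 26.5, 35: 31.0, 40: 35.0}

def normalised_voxel_size(ga_weeks, s30=1.0):
    """Voxel size s_ga = CRL_ga / CRL_30 * s30, so that every fetus is
    resampled to the anatomical scale of a 30-week fetus."""
    ages = np.array(sorted(CRL_TABLE))
    crls = np.array([CRL_TABLE[a] for a in ages])
    crl_ga = float(np.interp(ga_weeks, ages, crls))
    return crl_ga / CRL_TABLE[30] * s30
```

Resampling each image to this voxel size makes a single trained detector applicable at one scale across all gestational ages.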

Figure 6.4: The proposed size normalisation allows the definition of limits R1 and R2 for the heart–brain distance which are independent of the gestational age. (Left plot: heart–brain distance in mm against gestational age, in scanner coordinates; right plot: heart–brain distance in voxels on the image grid after normalisation; healthy and IUGR subjects are shown together with the limits R1 and R2.)

Before performing the isotropic resampling corresponding to the size normalisation, the im-

ages are denoised using 3D median filtering. As the 2D slices are acquired in an interleaved

order with a fast single-shot technique, the motion artefacts are observed to be marginal in the

in-plane direction. Median filtering is used to exclude the sparsely distributed motion-distorted

slices (e.g. shadowing) as their signals present as outliers in the context of the undamaged

adjacent slices. This approach has been observed to perform better than Gaussian averaging.

6.3.2 Detecting candidate locations for the heart

Figure 6.5: Average image of 30 healthy fetuses after resizing (Section 6.3.1) and alignment defined by anatomical landmarks (Section 6.3.2), shown in the coronal, sagittal and transverse planes with the axes (~u0, ~v0, ~w0) overlaid. It highlights the characteristic intensity patterns in fetal MRI for the brain, heart, lungs and liver.

In theory, a detector could be trained on aligned images, while at test time, it would be

applied to all possible rotated versions of the 3D volume. However, in practice it would be too

time consuming to explore an exhaustive set of rotations. In particular, the integral image that

is used to extract intensity features would need to be computed for all tested rotations. Instead

of rotating the image itself, we thus propose to rotate the sampling grid of the image features.

This is done by defining a local orientation of the 3D volume for every voxel.

When training the classifier, all voxels are oriented using the anatomy of the fetus (Figure 6.5

and Figure 6.6.a): ~u0 points from the heart to the brain, ~v0 from the left lung to the right lung

and ~w0 from back to front (~w0 = ~u0×~v0). When applying the detector to a new image, in theory,

all possible orientations (~u,~v, ~w) should be explored. However, assuming that the location of

the brain is already known, for instance using the detector presented in Chapter 4, ~u can be

defined as pointing from the current voxel x to the brain (Figure 6.6.b). In this case, only the

orientation of the plane Px,~u passing through x and orthogonal to ~u remains undefined. (~v, ~w)

are then randomly chosen so that they form an orthonormal basis of Px,~u. The rationale for

this definition of ~u is that when x is positioned at the center of the heart, Px,~u is the transverse

plane of the fetal anatomy (Figure 6.5).
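The construction of the local frame (~u, ~v, ~w) can be sketched as follows (function names are mine; a minimal illustration of the test-time setting, where only the brain center is known):

```python
import numpy as np

def local_frame(x, brain_center, rng=None):
    """Local orientation (u, v, w) for voxel x: u points from x towards
    the brain; (v, w) is a randomly rotated orthonormal basis of the
    plane orthogonal to u."""
    rng = rng or np.random.default_rng(0)
    u = np.asarray(brain_center, float) - np.asarray(x, float)
    u /= np.linalg.norm(u)
    # Any vector not parallel to u gives a first in-plane direction.
    a = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    v = np.cross(u, a)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    # Random rotation of (v, w) within the plane orthogonal to u.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    v_rot = np.cos(theta) * v + np.sin(theta) * w
    return u, v_rot, np.cross(u, v_rot)
```

When x coincides with the heart center, the plane spanned by (v, w) is the transverse plane of the fetal anatomy.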

In order to address the sensitivity to the localisation of the brain, a random offset is added

to the estimated brain center within a 10mm range, which is of the order of magnitude of the

prediction errors of the brain detector of Chapter 4. This perturbation of the center of the brain

is performed both at test time and during training. This corresponds to data augmentation

and simulates small movements of the fetal head. Additionally, knowing the location of the

brain, the search for the heart only needs to explore the image region contained between two

spheres of radius R1 and R2 (Figure 6.7) inferred from training data (Figure 6.4). This has the

advantage of reducing both the computational cost and the false positive rate of the detector.

The localisation of the heart, which generates multiple candidate locations, consists of two

stages: a classification forest is first applied to obtain for each voxel x a probability p(x) of being

inside the heart (Figure 6.9.a). All voxels with p(x) > 0.5 are then passed on to a regression

forest, which predicts for each of them the offset to the center of the heart. Using the predicted

offsets, the probabilities are then accumulated in a voting process, and the voxels receiving the

highest sum of probabilities are considered as candidate centers for the heart (Figure 6.9.b).

The regression step plays a similar role as the mean-shift aggregation of the classification results

in Shotton et al. [138] when formulating landmark position proposals. However, the regression

forest incorporates a knowledge of the surrounding anatomy while the Gaussian kernels of

mean-shift clustering only consider spatial proximity.
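The classify-then-vote scheme can be sketched as follows (a minimal Hough-style voting illustration; the array layouts and names are assumptions, not the thesis code):

```python
import numpy as np

def hough_votes(prob, offsets, shape, threshold=0.5):
    """Accumulate regression votes: every voxel with heart probability
    above `threshold` casts its probability at voxel + predicted offset.
    `prob` is flat (N,), `offsets` is (N, 3); voxel coordinates are the
    unravelled indices of `shape`."""
    votes = np.zeros(shape)
    coords = np.stack(np.unravel_index(np.arange(prob.size), shape), axis=1)
    keep = prob > threshold
    targets = np.round(coords[keep] + offsets[keep]).astype(int)
    inside = np.all((targets >= 0) & (targets < np.array(shape)), axis=1)
    np.add.at(votes, tuple(targets[inside].T), prob[keep][inside])
    return votes  # local maxima are candidate heart centers
```

The voxels receiving the highest sum of probabilities are kept as candidate centers, mirroring the two-stage classification and regression described above.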

While the Random Forest classifier is trained on voxels from both the background and the

heart, the Random Forest regressor is only trained on voxels from the heart. During classifi-

cation, in order to increase the number of true positives, an initial random orientation of the

plane Px,~u is chosen and the detector is successively applied by rotating (~v, ~w) with an angular

step θ. The value of (~v, ~w) maximizing the probability of the current voxel to be inside the

heart is selected, and only this orientation of Px,~u is passed on to the regression step.

Image features

Four types of features are used in the classification and regression forests: tests on image

intensity, gradient magnitude, gradient orientation and distance from the current voxel to the

brain. In order to be invariant to intensity shifts [114], image intensity or gradient magnitude

Figure 6.6: (a) When training the classifier, features (light blue squares) are extracted in the anatomical orientation of the fetus: ~u0 points from the heart (green) to the brain (red), and ~v0 points from the left lung (blue) to the right lung (yellow). The pink contour corresponds to the liver. (b) At test time, features are extracted in a coordinate system rotating around the brain.

features select two cube patches of random sizes at random offsets from the current voxel and

compare their mean intensity. All features are extracted in the local orientation (~u,~v, ~w), as

shown in Figure 6.6. For computational efficiency, cube features are axis aligned and computed

using the integral image, with the center of the cubes defined using the local orientation (~u,~v, ~w).

Tests on the gradient orientation compute the dot product of the image gradient at two different

offsets from the current pixel. During the voting process, the predicted offsets are computed

with the same local orientation. Compared with the steerable features of Tu [154], which

correspond to the local pixel or gradient intensity values sampled on a rotated grid, rotated

cube features of varying size allow a wider spatial context to be captured [29].
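The constant-time cube sums underlying these features can be sketched with a padded 3D integral image (helper names are mine, not the thesis implementation):

```python
import numpy as np

def integral_image(vol):
    """Padded 3D integral image: ii[i, j, k] = sum of vol[:i, :j, :k]."""
    return np.pad(vol.cumsum(0).cumsum(1).cumsum(2),
                  ((1, 0), (1, 0), (1, 0)))

def box_sum(ii, lo, hi):
    """Sum of vol[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] by 3D
    inclusion-exclusion; O(1) per box, whatever its size."""
    x0, y0, z0 = lo
    x1, y1, z1 = hi
    return (ii[x1, y1, z1]
            - ii[x0, y1, z1] - ii[x1, y0, z1] - ii[x1, y1, z0]
            + ii[x0, y0, z1] + ii[x0, y1, z0] + ii[x1, y0, z0]
            - ii[x0, y0, z0])
```

Because only the cube *center* is placed using the local orientation while the cube itself remains axis-aligned, a single integral image serves all tested rotations.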

6.3.3 Detecting candidate locations for the lungs and liver

For each candidate location of the heart, candidate centers for the lungs and liver are detected.

Similarly to the heart detection (Section 6.3.2), a classification Random Forest first classifies

Figure 6.7: Knowing the location of the brain, the search for the heart can be restricted to a narrow band within two spheres of radius R1 and R2 (red). Similarly, knowing the location of the heart, the search for the lungs and the liver can be restricted to a sphere of radius R3 (green).

each voxel as either heart, left lung, right lung, liver or background. For each organ, a regression

Random Forest is then used to vote for the organ center. During training, the images are

oriented according to the fetal anatomy (Figure 6.5). During testing, both the location of

the brain and heart are known and only the rotation around the heart-brain axis needs to be

explored. For each voxel x, we can hence set ~u = ~u0, and only the orientation of the plane Px,~u0 remains unknown. (~v, ~w) are thus randomly chosen to form an orthonormal basis of Px,~u0.

In the set of features for the Random Forest, the distance from the current voxel to the

brain, which was used in the detection of the heart, is now replaced by the distance to the

heart. Additionally, the projection of the current voxel on the heart-brain axis is used in order

to characterise whether the current voxel is below or above the heart relative to the brain.

Similarly to the heart detection, the detector is run with an angular step θ. For each voxel, the

value of θ maximizing the probability of being inside one of the organs of the fetus is selected.

The different hypotheses for the location of the heart can be tested in parallel to reduce the

computation time.
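The two anatomy-specific features used at this stage, the distance to the heart and the signed projection onto the heart-brain axis, can be sketched as follows (a minimal illustration):

```python
import numpy as np

def anatomical_features(x, heart, brain):
    """Distance from voxel x to the heart, and signed projection of x
    onto the heart-brain axis (positive towards the brain), used to
    characterise whether a voxel lies above or below the heart."""
    x, heart, brain = (np.asarray(p, float) for p in (x, heart, brain))
    u0 = (brain - heart) / np.linalg.norm(brain - heart)
    return float(np.linalg.norm(x - heart)), float((x - heart) @ u0)
```

A voxel below the heart relative to the brain, such as one in the liver, yields a negative projection.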

The advantage of a cascade approach when localising organs has already been highlighted

in the works of Cuingnet et al. [34] and Gauriau et al. [57], who similarly trained succes-

sive Random Forest classifiers. Classifiers dedicated to specific organs can capture more local

information than a single whole-body classifier.

6.3.4 Spatial optimisation of candidate organ centers

Figure 6.8: The probability for a voxel to be inside an organ is modelled by Gaussian distributions, shown in the coronal, sagittal and transverse planes: red for the left lung, green for the right lung and yellow for the liver.

The voting results from the regression Random Forest are first smoothed with a Gaussian

kernel. Local maxima are then extracted to form a set of candidates xl for the center of each

organ, l ∈ L = { heart, left lung, right lung, liver }, along with their individual scores p(xl).

Each combination of candidate centers is assigned a score (Equation 6.1) based on the individual

scores p(xl) and the spatial configuration of all organs:

∑l∈L λ p(xl) + (1 − λ) exp(−½ (xl − x̄l)ᵀ Σl⁻¹ (xl − x̄l))     (6.1)

where x̄l is the mean position and Σl the covariance matrix for the spatial distribution of each

organ l in the training dataset, computed in the anatomical coordinate system of the fetus. λ

is a weight that trades off individual scores and the spatial configuration of organs.
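The scoring of Equation 6.1, together with the exhaustive search over candidate combinations, can be sketched as follows (data structures and names are assumptions for illustration):

```python
import numpy as np
from itertools import product

def combination_score(centers, scores, means, covs, lam=0.5):
    """Score of one combination of organ centers (Equation 6.1): a
    weighted sum of individual vote scores p(x_l) and a Gaussian prior
    on each organ's position in the fetal coordinate system."""
    total = 0.0
    for l, x in centers.items():
        d = np.asarray(x, float) - means[l]
        total += lam * scores[l] + (1 - lam) * np.exp(-0.5 * d @ np.linalg.inv(covs[l]) @ d)
    return total

def best_combination(candidates, cand_scores, means, covs, lam=0.5):
    """Exhaustive search over all combinations of candidate centers,
    feasible because only a few local maxima are kept per organ."""
    organs = list(candidates)
    best, best_s = None, -np.inf
    for combo in product(*(range(len(candidates[l])) for l in organs)):
        centers = {l: candidates[l][i] for l, i in zip(organs, combo)}
        scores = {l: cand_scores[l][i] for l, i in zip(organs, combo)}
        s = combination_score(centers, scores, means, covs, lam)
        if s > best_s:
            best, best_s = centers, s
    return best, best_s
```

A candidate far from an organ's expected position receives a vanishing prior, so an anatomically implausible combination loses even if its individual vote scores are high.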

Once the organ centres have been localised, the voting process can be reversed in order to

obtain a rough segmentation (Figure 6.9.c). This segmentation is analogous to the backprojec-

tion of the Hough forest votes in Rematas and Leibe [120]. This segmentation is a point cloud

due to the random orientation of the transverse plane Px,~u during the detection process. The

coordinates of xl and x̄l are defined in the coordinate system centred on the heart, with the

axis system presented in Figure 6.5. As only a limited number of candidates is obtained when

extracting local maxima from the voting results, an exhaustive search for the best permutation

is feasible. In the case of a higher number of anatomical landmarks or candidate detections,

a Markov Random Field optimisation could have been considered, similarly to the work of

Donner et al. [42] and Ison et al. [69].

Figure 6.9: (a) Probability of a voxel to be inside the heart. (b) Accumulated regression votes for the center of the heart. (c) After all organ centers have been located, the voting process can be reversed, resulting in a rough segmentation for each organ. (d) Classification of voxels around the heart into the different organs: heart (blue), left lung (red), right lung (green), liver (yellow).

6.3.5 Application to motion correction

In this section, we present preliminary results on how the proposed localisation method can be

used to automatically mask the thorax of the fetus in order to perform motion correction.

Figure 6.10: (a) An initial mask (magenta) is generated by morphological dilation of the segmentation that results from the localisation process. (b) The random walker algorithm is then used to produce a mask for each organ: heart (blue), left lung (red), right lung (green), liver (yellow).

For each individual 3D stack, the approximate segmentation produced by the localisation

process is first dilated to obtain a first mask of the region to be reconstructed (Figure 6.10.a).

This mask is then refined with the random walker algorithm [61], using the initial organ seg-

mentation as seeds. This region growing algorithm computes the probability that a random

walker starting at each unlabeled voxel will first reach a seed. This is done by solving an

anisotropic diffusion equation where the diffusivity coefficients are computed from the image

gradient.
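The refinement step can be illustrated with scikit-image's implementation of the random walker algorithm [61] on a synthetic 2D example; the image, seed placement and the `beta` parameter are illustrative, not the thesis settings:

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic 2D stand-in for a 3D stack: a bright "organ" on a dark,
# noisy background.
rng = np.random.default_rng(0)
image = np.zeros((60, 60))
image[20:40, 20:40] = 1.0
image += rng.normal(0, 0.1, image.shape)

seeds = np.zeros(image.shape, dtype=int)
seeds[28:32, 28:32] = 1   # organ seed, playing the role of the rough segmentation
seeds[:5, :5] = 2         # background seed

# Diffusivity is derived from the image gradient, so the walker is
# unlikely to cross the strong organ boundary.
labels = random_walker(image, seeds, beta=130)
organ_mask = labels == 1
```

In the thesis pipeline, the seeds come from the reversed-voting segmentation and the same idea is applied in 3D to each organ.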

The resulting mask is asymmetric as it includes the liver but excludes the stomach, and

potentially misses the lower part of the left lung. The sagittal plane passing through the center

of the heart and perpendicular to the axis defined by both lungs is thus used to mirror the mask

and ensure a symmetric coverage of both sides of the body. A final mask is then computed by

taking the union of each individual mask. A detailed organ segmentation is not required in order

to apply motion correction to the fetal thorax. Including quasi-rigid surrounding structures,

such as the ribs and spine, will guide the registration process taking place in the reconstruction.

Figure 6.11 presents the resulting mask for the fetal thorax and liver, and the subsequently

obtained motion corrected volume using the method of Kainz et al. [74]. Parts of the fetal

anatomy that were difficult to observe in the acquired images before motion correction, such

as the bronchi, now become apparent.

Figure 6.11: (a) The top row shows the cropping of the input 2D slices according to the automatically generated mask, in sagittal, coronal and transverse views. The bottom row presents the subsequent motion correction using two sagittal and two transverse stacks of 2D slices. The main bronchi become particularly apparent in the motion corrected volume (yellow arrow in the coronal view). (b) The lungs (green) have been segmented in the motion corrected volume using Chan-Vese level sets [21], initialised from the output of the automated localisation process. The segmented brain (red) highlights the difference of scale between the two structures: the volume of the brain is equal to five times the volume of the two lungs.

6.4 Experiments

6.4.1 Data

Two datasets are used for the evaluation of the proposed method: dataset 2 (Section 2.3.2)

and dataset 3 (Section 2.3.3). Dataset 2, which consists of 55 scans covering the whole uterus

for 30 healthy subjects and 25 IUGR subjects, was used to train and validate the model in a

leave-one-out cross-validation, in the absence of motion artefacts. Dataset 3 is used to test the

model in the presence of motion artefacts and is a subset of the data used in the two previous

chapters that covers the thorax and abdomen of the fetus. The location of the brain is passed

to the detector for the dataset 2 in order to evaluate the proposed method on its own while

the brain detection method presented in Chapter 4 is used to localise the brain automatically

in dataset 3, thus evaluating a fully automated localisation of fetal organs.

6.4.2 Implementation

A key parameter in our proposed method is the scale of the cube features, which should capture

patterns in the fetal anatomy. Images were first resampled to a scale defined by their gestational

age, as presented in Section 6.3.1, with s30 = 1mm, and a 3D median filter of size 5 × 5 × 3

was applied. The scale of the cube feature was then optimised through a grid search, leading

to the choice of a maximal patch size of 30 voxels and a maximal offset size of 20 voxels. The

remaining parameters of the model were then empirically chosen as λ = 0.5 and θ = 90°. The

code is written in Python, using the Random Forest implementation provided in scikit-learn

[115] with 100 trees, a maximal depth of 30 and precomputed features of size 1500. Training the

detector takes 2h and testing takes 15min on a machine with 24 cores and 64 GB RAM. This

runtime could be greatly reduced by using a Random Forest implementation specific to a sliding window framework, as well as by a GPU implementation [132]. The code is available online3.

3 https://github.com/kevin-keraudren/fetus-detector

Table 6.2: Detection rates for each organ.

                     Heart   Left lung   Right lung   Liver
Dataset 2: healthy    90%       97%         97%        90%
Dataset 2: IUGR       92%       60%         80%        76%
Dataset 3             83%       78%         83%        67%

Figure 6.12: Distance error (in mm) between the predicted organ centers and their ground truth for dataset 2 (30 healthy fetuses and 25 IUGR fetuses, Section 2.3.2) and dataset 3 (64 healthy fetuses, Section 2.3.3), as box plots for the left lung, right lung, heart, liver and, for dataset 3, the brain. In each box plot, the red line corresponds to the median while the boundaries of the box represent the first and third quartiles.

6.4.3 Results

Localising the organs of the fetus

The results are reported as detection rates in Table 6.2, where a detection is considered suc-

cessful for the organ l if the predicted center xl is within the ellipsoid defined by

(xl − yl)ᵀ Σl⁻¹ (xl − yl) < 1, where yl is the ground truth center. This ellipsoid corresponds to

the Gaussian distributions of each organ in Figure 6.8, thresholded at one standard deviation.
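The success criterion is a direct Mahalanobis-distance test and can be sketched as follows (the covariance values in the test are hypothetical):

```python
import numpy as np

def detection_success(pred, gt, cov):
    """A detection counts as successful if the predicted organ center
    lies inside the one-standard-deviation ellipsoid of the ground
    truth center: (x_l - y_l)^T Sigma_l^{-1} (x_l - y_l) < 1."""
    d = np.asarray(pred, float) - np.asarray(gt, float)
    return float(d @ np.linalg.inv(cov) @ d) < 1.0
```

With an isotropic covariance, this reduces to a simple distance threshold at one standard deviation.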

The rationale for this definition of the detection rate is to assess whether the detected center is inside the organ, even though the lungs and liver do not have a convex shape, in which case the true center of mass may lie outside the organ. For the dataset 2, Σl is the covariance matrix of

the ground truth segmentation while for the dataset 3, it is defined from training data as in

Section 6.3.4. The distance error between the predicted centers and their ground truth are


reported in Figure 6.12. Although healthy and IUGR subjects were mixed in the training data, results are reported separately to assess the performance of the detector when fetuses are small for their gestational age. Training separate detectors for normal and IUGR subjects did not improve the results, which can be attributed to the fact that there are different degrees of severity of growth restriction [37]. The higher detection rate for the lungs than for the heart for the healthy subjects of dataset 2 (Table 6.2) is likely due to the detector overestimating the size of the heart (Figure 6.9.c). This can be linked to the signal loss that occurs when imaging the fast-beating heart. A decrease in performance can be observed for the detection of the lungs in IUGR fetuses, which can be associated with the reduction in lung volume reported by Damodaram et al. [37]. The asymmetry in the performance of the detector between the left and right lungs can be explained by the presence of the liver below the right lung, leading to a stronger geometric constraint in Equation 6.1. In addition, with the heart being on the left side of the body, the left lung is slightly smaller than the right, with an average left/right lung volume ratio of 0.8 in both healthy and IUGR subjects in dataset 2.
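The detection-success test used for Table 6.2 can be written compactly as a Mahalanobis-style check. The covariance values below are illustrative placeholders, not taken from the datasets.

```python
import numpy as np

def detection_successful(predicted, truth, cov):
    """A detection is successful when the predicted center lies within the
    ellipsoid (x - y)^T Sigma^{-1} (x - y) < 1, i.e. within one standard
    deviation of the organ's Gaussian model."""
    d = np.asarray(predicted, dtype=float) - np.asarray(truth, dtype=float)
    return float(d @ np.linalg.inv(cov) @ d) < 1.0

# Illustrative covariance (mm^2): an organ modelled by a diagonal Gaussian
cov = np.diag([20.0**2, 15.0**2, 15.0**2])
print(detection_successful([10.0, 5.0, 0.0], [0.0, 0.0, 0.0], cov))  # True
print(detection_successful([30.0, 0.0, 0.0], [0.0, 0.0, 0.0], cov))  # False
```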

For dataset 3, the largest error for the detection of the brain using the method presented in Chapter 4 is 21 mm, which is inside the brain and is sufficient for the proposed method to succeed. In 90% of cases, the detected heart center is within 10 mm of the ground truth. We evaluated the mean error between two users manually selecting the heart center to be 6 ± 2.4 mm, which suggests that the proposed method could be used to provide an automated initialisation for the motion correction of the chest [73]. In addition to motion artefacts, the main difference between dataset 2 and dataset 3 is that dataset 3 was acquired centred on the brain and does not completely cover the fetal abdomen in all scans (Section 2.3.3). This could explain the lower performance when detecting the liver. Additionally, we observed a similar performance of the detector across gestational ages (see Figure 6.13), with the exception of the liver. This can be linked to the fact that the MR images were acquired centred on the brain (Section 2.3.1), and a lower signal-to-noise ratio can be observed in the region of the liver, which lies at the image boundary for older fetuses. A video has been made to illustrate examples of


[Box plots of the distance error (mm) against gestational age (weeks) for the left lung, right lung, heart and liver.]

Figure 6.13: A similar performance of the detector across gestational ages was observed, with the exception of the liver. Results are presented for the 64 healthy subjects of dataset 3. The outlier with a 70 mm error for all organs corresponds to an incorrect localisation of the heart in the folded legs of the fetus; this detection error is then propagated to the lungs and liver.

detection results4.

Motion correction

The proposed approach was applied to the 64 fetuses of dataset 3 in the automated localisation experiment (Section 6.4). In 47 out of 64 cases, the motion corrected volume was found to be of sufficient quality for further processing (Figure 6.14), such as segmenting the lungs. Among the remaining subjects, two cases failed due to inaccurate organ localisation, two cases failed due to a mismatch of scanner coordinates leading to an inconsistent initialisation of the registration process, three cases had excessive motion artefacts and one case had a very low SNR. Additionally, among the 64 subjects, 22 reconstructions were performed with only one or two stacks acquired in a single orientation. While this provided reasonable results in 12

4https://youtu.be/iSe5B3pcbL8


cases with little or no motion, the motion correction failed in 10 cases with significant motion artefacts.

In contrast to Chapter 5, the motion corrected volumes were obtained from four stacks or fewer, and the reconstructed volumes are hence of lower quality than for the brain, where eight

stacks are typically used. Clinical examinations with fetal MRI currently aim to perform mo-

tion correction only on the brain region. Although sagittal and coronal acquisitions generally

cover most of the fetal body, transverse acquisitions only cover the fetal head and cannot be

used to reconstruct the fetal thorax. In order to perform motion correction and superresolu-

tion, the volume to reconstruct must be sufficiently oversampled during the image acquisition

process. Additional clinical scans dedicated to the fetal thorax will thus need to be performed

in order to enable high resolution reconstructions comparable to what is currently achieved

when reconstructing the brain.

To accommodate the level of noise, which in dataset 3 is higher in the region of the fetal thorax than in the brain, each slice was preprocessed using non-local means denoising [17] before motion correction. Additionally, due to the reduced anatomical coverage and lower intensity contrast compared with the brain region, the smoothing effect of the regularisation taking place in the superresolution reconstruction was increased. The dark skull tissues, lighter brain tissues and bright cerebrospinal fluid cover the whole intensity range of the T2 MR image, which is not the case for the fetal thorax. Compared to Kainz et al. [74], whose parameter settings are optimised for the brain, we increased the threshold defining the minimum intensity difference between neighbouring pixels for them to be considered as an edge (δ = 300, which is twice the original value [74, 87]), and increased the regularisation weight to λ = 0.03 across all iterations. These values were determined experimentally in order to optimise the qualitative appearance of a random selection of five subjects by reducing the level of noise introduced in the superresolution process. It can be noted that Kainz et al. [73] used parameter settings optimised for the brain in order to reconstruct the fetal thorax, and subsequently applied total variation denoising. Additionally, the rigid body assumption which


[Rows: fetuses of 24, 29 and 37 weeks; columns: coronal, sagittal and transverse planes.]

Figure 6.14: Examples of fully automated motion correction for the fetal thorax, with fetuses of 24, 29 and 37 weeks of gestational age. The localisation of the heart, lungs and liver can be used to perform a rigid point registration with a template. Orienting the reconstructed volume in a standard orientation facilitates its visual inspection, as well as further processing such as an atlas-based segmentation.

is made in the motion correction framework is well suited for the brain, but does not hold for the fetal trunk, which deforms when the fetus moves due to bending of the spine.
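The slice preprocessing step described above can be sketched with scikit-image's non-local means implementation. This is a minimal sketch assuming scikit-image is available; the patch sizes and filtering strength below are illustrative choices, not the settings used in the thesis.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Hypothetical noisy 2D slice standing in for one slice of an MR stack
rng = np.random.RandomState(0)
slice_2d = rng.rand(64, 64).astype(np.float64)

# Estimate the noise standard deviation, then denoise the slice before
# motion correction; patch parameters here are illustrative values.
sigma = float(np.mean(estimate_sigma(slice_2d)))
denoised = denoise_nl_means(slice_2d,
                            patch_size=5,       # size of comparison patches
                            patch_distance=6,   # search window radius
                            h=0.8 * sigma)      # filtering strength
print(denoised.shape)  # (64, 64)
```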

6.5 Discussion

6.5.1 Localising the organs of the fetus

Random Forests provide the means to study the relative importance of the features used during

training. The most important features are selected early in the tree construction, and repeated


[Bar plots of feature importance (%) for the heart classifier, the heart regressor and the heart/lungs/liver classifier, over the features: image intensity, gradient magnitude, gradient orientation, brain distance, heart distance and heart-brain axis.]

Figure 6.15: Feature importance of successive Random Forests.

across the forest. In Figure 6.15, different patterns of feature importance can be observed

between the classifier and regressor for the heart. This difference reflects the fact that the two

forests were trained on different image regions. The classifier is trained on the narrow band

between the spheres of radius R1 and R2 (Figure 6.7), while the regressor is only trained on

voxels that are inside the heart. Similarly, since the training of the classifier for the heart, lungs

and liver focuses on the sphere of radius R3, gradient features become more important as they

are able to capture finer details than cube features.
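The kind of analysis shown in Figure 6.15 can be reproduced from a trained forest's feature_importances_ attribute in scikit-learn. The six feature groups and the toy data below are stand-ins for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the voxel classifier: 6 hypothetical feature groups
# (image intensity, gradient magnitude, gradient orientation, brain
# distance, heart distance, heart-brain axis), 10 features each.
rng = np.random.RandomState(0)
X = rng.rand(300, 60)
y = (X[:, 0] + X[:, 25] > 1.0).astype(int)  # labels driven by two groups

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# feature_importances_ sums to 1; aggregating it per feature group gives
# the kind of bar plot shown in Figure 6.15.
group_importance = forest.feature_importances_.reshape(6, 10).sum(axis=1)
print(np.round(group_importance, 2))
```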

An obvious step following the motion correction of the brain, in Chapter 5, would be to

determine the orientation of the reconstructed brain, for instance by registering the brain to

an atlas. However, orienting the brain of the fetus still leaves a high degree of freedom for the

position of the thorax due to the neck articulation. We also experimented with Random Forests without aligning the fetuses, with little success. More specifically, in the iterative detection process

using autocontext Random Forests, presented in Section 6.2, the first iteration corresponds to a

standard Random Forest framework with axis aligned features (Figure 6.16). The convergence

of the autocontext framework relies on a sufficient localisation at the first iteration, while the

subsequent steps allow to refine the segmentation. As can be seen in Figure 6.16, the results

for the left ventricle (top row) already highlight the underlying anatomy in the first iteration,

which is not the case for fetal MRI (bottom row).


Iteration 1 Iteration 2 Iteration 3 Iteration 4

Figure 6.16: Autocontext framework for tissue segmentation: left ventricle (white), myocardium (green) and mitral valves (red) in the top row; fetal head (green), fetal body (white) and amniotic fluid (red) in the bottom row.

As can be seen from Figure 6.17, the echocardiographic images in the CETUS challenge were

all in a standard orientation and centred on the left ventricle [10]. This is generally not the case for fetal MRI. The unknown orientation of the fetus motivated the development of steerable features, and the variability of the data, associated with a relatively small training dataset, led to the proposed three-step detection process (first the brain, then the heart, and lastly the lungs

and liver), which allows for successive spatial constraints. While the proposed approach used

rectangle features, different steerable features could be envisaged such as the steerable filters of

Freeman and Adelson [52], which compute the output of rotated filters from the convolutions

of basis filters (Section 3.2.1).
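The steering property mentioned above can be illustrated for a first derivative of Gaussian: the response at any angle θ is a linear combination of two basis responses, so rotated filters need not be recomputed per orientation. This is a generic sketch of the technique of Freeman and Adelson, not code from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder image standing in for one 2D slice
rng = np.random.RandomState(0)
image = rng.rand(64, 64)

# Basis responses: Gaussian derivatives along each image axis
g_x = gaussian_filter(image, sigma=2.0, order=(0, 1))  # d/dx basis
g_y = gaussian_filter(image, sigma=2.0, order=(1, 0))  # d/dy basis

# Response of the filter rotated by theta, synthesised from the two bases
theta = np.deg2rad(45)
steered = np.cos(theta) * g_x + np.sin(theta) * g_y
print(steered.shape)  # (64, 64)
```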

Regarding the computation time, it is important to highlight that fetal MRI, unlike ultra-

sound, is usually not used as a real-time imaging modality. Reducing the number of orientations

tested at each voxel (θ parameter) and evaluating the detector on a sparse grid (cube features

are not local, as they represent average intensities over distant regions) allow accuracy to be traded

for computation time.

In the proposed method, each 3D stack is treated independently. Future work could consider


Figure 6.17: Echocardiographic images in the CETUS challenge (MICCAI 2014) are all centered on the left ventricle and in a standard orientation (top row), which is not the case for fetal MRI (bottom row).

combining the detection results from multiple stacks in order to perform the optimisation

presented in Section 6.3.4 in both a spatial and a temporal context.

6.5.2 Motion correction

In future work, refining the segmentation mask throughout motion correction, similarly to

Chapter 5, could be considered to increase the quality of the motion corrected volume. A slice-

by-slice segmentation of the region to reconstruct could also be developed, taking advantage

of the known orientation of the fetus to apply a more specialised model. Such a model could

be based on an atlas-based segmentation, similarly to Tourbier et al. [153] and Taimouri et al.

[148]. However, in contrast to the brain, there are currently no atlases available for the fetal

thorax or abdomen. This is also an area for future development.

The main assumption in current implementations of slice-to-volume reconstruction is that

the registration is guided by a single rigid part of the anatomy, such as the brain of the fetus.

However, the fetal body bends and twists, as a result the thorax and abdomen of the fetus

do not form a single rigid component. Kainz et al. [75] proposed a partial solution to this


problem by formulating a patch-to-volume reconstruction: instead of considering each 2D slice

as a rigid component, slices are segmented into overlapping patches, for instance using the SLIC

supervoxel algorithm [1]. The motion correction then proceeds by iteratively registering the 2D

patches to the reconstructed 3D volume. This approach allows for some non-rigid deformation

in the anatomy to reconstruct as patches from a same slice can have different registration

parameters. Instead of using an unsupervised clustering such as SLIC supervoxels to extract

patches, the localisation of fetal organs presented in this thesis could be used in future work to

steer the choice of control points or the definition of patch boundaries. However, the framework

of Kainz et al. [75] still relies on a rigid registration of each patch. It can be envisaged that

future work will include a non-rigid registration step to take into account the deformations

resulting from fetal motion. It is important to note that although the work of Kainz et al.

[75] does not require an organ localisation step in theory, the large number of patches greatly

increases the computation time, which can be reduced by focusing on a region of interest.

6.6 Conclusion

We presented in this chapter a pipeline which, in combination with automated brain detection,

enables the automated localisation of the lungs, heart and liver in fetal MRI. The localisation

results can be used to initialise a segmentation or motion correction, as highlighted in the

preliminary results of Section 6.3.5, and to orient the 3D volume with respect to the fetal

anatomy to facilitate clinical diagnosis and research. In 90% of cases, the detected center of

the heart was within 10mm of the ground truth, and in 73% of cases, the organ localisation

enabled a fully automated motion correction of the fetal thorax. The key component of our

method is to sample the image features in a local coordinate system in order to cope with the

unknown orientation of the fetus. This coordinate system is estimated as organs are detected:

first the brain, which fixes a point, then the heart, which fixes an axis, and finally the liver and

both lungs.
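The construction of such a local coordinate system, in which the brain fixes a point and the heart fixes an axis, can be sketched as follows. The completion of the basis by two in-plane axes is an arbitrary choice made here for illustration.

```python
import numpy as np

def local_frame(brain_center, heart_center):
    """Orthonormal frame: the brain fixes the origin and the heart fixes
    the principal axis; the remaining two axes complete the basis."""
    brain = np.asarray(brain_center, dtype=float)
    heart = np.asarray(heart_center, dtype=float)
    u = heart - brain
    u /= np.linalg.norm(u)
    # Pick any vector not parallel to u to complete the basis
    helper = np.array([1.0, 0.0, 0.0])
    if abs(u @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    v = np.cross(u, helper)
    v /= np.linalg.norm(v)
    w = np.cross(u, v)
    return np.stack([u, v, w])  # rows form an orthonormal basis

frame = local_frame([0, 0, 0], [0, 0, 50])
print(np.round(frame @ frame.T, 6))  # identity: the frame is orthonormal
```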


Chapter 7

Conclusion

This thesis presents automated methods to localise the organs of the fetus in MRI, namely the

brain, heart, lungs and liver, when the data is acquired as stacks of 2D slices misaligned due

to fetal movement. Addressing this problem is particularly challenging due to the unknown

position and orientation of the fetus, as well as the presence of motion artefacts, which make stacks of slices inconsistent.

In this thesis, we proposed a rotation invariant detector for the fetal brain by training a

Bag-of-Words model on images of the brain in all possible orientations. A separate detector

for the fetal body subsequently learns a representation of the fetus from a training database

of images aligned in a standard coordinate system defined by anatomical landmarks. Steerable

features are used in order to determine the orientation of the fetus at test time. There is a

large variability in the images of fetuses due to fetal development. We proposed to use prior

knowledge from standard growth charts to develop organ detection methods independent of the

gestational age.

As highlighted in Figure 7.1, the organ localisation results can be used to initialise

motion correction procedures, allowing for the deployment of a fully automated motion correc-

tion pipeline. A fully automated pipeline for motion correction of the fetal brain in MRI paves


the way for both clinical use and larger scale research studies. It is a key step to enable the

widespread uptake of 3D fetal imaging from robust, motion-tolerant snapshot methods.

sagittal coronal sagittal coronal transverse

Figure 7.1: Demonstration of the methods presented in this thesis on a 3T dataset. The brain (magenta), heart (blue), left and right lungs (red and green), and liver (yellow) of the fetus are automatically localised in MRI, when the data is acquired as 3D stacks of 2D slices misaligned due to fetal motion. This localisation is used to initialise motion correction of the fetal brain and thorax.

7.1 Future Work

7.1.1 Simultaneous localisation and motion correction

The methods developed in this thesis considered each 3D stack of MR images independently.

Future developments should take into account the spatio-temporal constraints between suc-

cessive scans, allowing for more robust localisation results. In the work of Taleb et al. [149],

an initial region of interest for the brain of the fetus is defined from the intersection of the

different scans, which have been acquired in orthogonal directions. However, some scans may

not fully cover the fetal anatomy. In particular, the body of the fetus is generally not covered

by transverse scans of the brain. Future work should thus consider a probabilistic framework

for combining different scans of a fetus, which may not all fully cover the fetal anatomy.


As motion correction is a multi-scale approach, a low-resolution volume could first be reconstructed and used for localisation, allowing finer-scale reconstruction iterations to focus on a

more restricted region, similarly to the refinement step for the brain segmentation proposed in

Chapter 5. The cube-like image features used in Chapter 6 are based on the average intensity

over potentially large patches and thus do not require a high image resolution. The work of

Kainz et al. [75] could provide such initial reconstruction of a large volume without any prior

segmentation, and organ localisation would allow subsequent reconstruction iterations to focus

on a specific anatomical region, as well as serve for further processing of the medical images.

Such developments could enable closer interaction between organ localisation and motion cor-

rection. The robust statistics and EM classification, used by Kuklisova-Murgasova et al. [87]

to reject motion corrupted or misregistered slices, could both influence and make use of the

semantic parsing of the image.

7.1.2 Improved slice-to-volume registration using the localisation

results

As mentioned in Chapter 5, the convergence of the motion correction techniques highly depends

on the initial alignment of the 2D slices. Only a limited amount of motion can be corrected

by registration methods, which hinders the application of motion correction to young fetuses.

While it is particularly important to be able to image and diagnose abnormal fetal development

as early as possible during pregnancy, young fetuses have more space to move inside the womb

compared to older fetuses, who are confined by the uterine wall. In the work of Ison et al.

[69], the tissue centroids which are used to localise the fetal brain can potentially be used to

initialise a stack-to-stack registration. Similarly, the detected organ centers in Chapter 6 could

be used for an initial registration. However, such a landmark registration would only provide a coarse alignment, which would be useful mainly in cases of large fetal movement. Moreover, the

main challenge of motion correction is slice-to-volume registration. The template matching

approach proposed by Taimouri et al. [148] for brain localisation could potentially be used to


position the acquired slices in template space. It would, however, require using a sufficiently

small rotation step in order to be applicable to motion correction. The work of Oktay et al.

[111] on echocardiographic images, where probability maps of the myocardium boundaries are

used as input images during registration so that the registration process is only guided by the

relevant parts of the anatomy, provides another insight into the application of a machine-learning-based localisation method in a registration framework. Localisation and segmentation results

could thus be used to guide the slice-to-volume registration procedure.
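Such an initial registration from detected organ centers can be sketched as a least-squares rigid alignment of landmarks (the Kabsch algorithm). The organ positions below are hypothetical values for illustration.

```python
import numpy as np

def rigid_landmark_registration(source, target):
    """Least-squares rigid alignment (Kabsch algorithm) of detected organ
    centers to template landmarks; returns rotation R and translation t
    such that R @ source[i] + t approximates target[i]."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_mean, tgt_mean = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_mean).T @ (tgt - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_mean - R @ src_mean
    return R, t

# Hypothetical organ centers (mm): heart, left lung, right lung, liver
template = np.array([[0, 0, 0], [20, 10, 0], [-20, 10, 0], [-15, -25, 5]], float)
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
detected = template @ R_true.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_landmark_registration(detected, template)
print(np.allclose(R @ detected.T + t[:, None], template.T))  # True
```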

7.1.3 Application to organ volumetry and atlas creation

The methods presented in this thesis provide a way to automatically add some key landmarks

to unseen images of fetal MRI. This could have many applications. The detection of fetal

organ centroids can be used for biometric measurements to assess the development of the

fetus. A subsequent segmentation can provide volumetric measurements of fetal organs, which

can be used to diagnose IUGR and its severity [37]. Localising fetal organs during the MRI

acquisition process itself could enable tracking methods that would steer the acquisition for

a better coverage of the fetal body [75]. There is currently no fetal atlas aimed at organ

segmentation in MRI beyond the fetal brain, and the proposed methods could be used in the

processing and motion correction of a large number of subjects when building atlases for the

fetal lungs or liver. Such atlases will provide a better understanding of the morphological

changes occurring across gestational ages. It will also enable atlas-based segmentations, and

the localisation of organ centroids can then be used to initialise the registration process.


Bibliography

[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels

compared to state-of-the-art superpixel methods. Transactions on Pattern Analysis and

Machine Intelligence, 34(11):2274–2282, 2012. 139

[2] B. Alexe, T. Deselaers, and V. Ferrari. What is an object? In Computer Vision and

Pattern Recognition (CVPR), pages 73–80. IEEE, 2010. 61

[3] P. Aljabar, R. A. Heckemann, A. Hammers, J. V. Hajnal, and D. Rueckert. Multi-

atlas based segmentation of brain images: atlas selection and its effect on accuracy.

NeuroImage, 46(3):726–738, 2009. 64

[4] J. Anquez, E. Angelini, and I. Bloch. Automatic segmentation of head structures on fetal

MRI. In International Symposium on Biomedical Imaging (ISBI), pages 109–112. IEEE,

2009. 65, 81, 85, 89, 94, 114, 121

[5] J. Anquez, L. Bibin, E. Angelini, and I. Bloch. Segmentation of the fetal envelope on ante-

natal MRI. In International Symposium on Biomedical Imaging (ISBI), pages 896–899.

IEEE, 2010. 41, 114

[6] J. G. Archie, J. S. Collins, and R. R. Lebel. Quantitative standards for fetal and neonatal

autopsy. American journal of clinical pathology, 126(2):256–265, 2006. 30, 37, 38, 120,

121

[7] R. A. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley

Longman Publishing Co., Inc., 1999. 102


[8] P. N. Baker, I. R. Johnson, P. R. Harvey, P. A. Gowland, and P. Mansfield. A three-year

follow-up of children imaged in utero with echo-planar magnetic resonance. American

journal of obstetrics and gynecology, 170(1):32–33, 1994. 31

[9] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded up robust features. In Computer

vision–ECCV 2006, pages 404–417. Springer, 2006. 52

[10] O. Bernard, B. Heyde, M. Alessandrini, D. Barbosa, S. Camarasu-Pop, F. Cervenansky,

S. Valette, O. Mirea, E. Galli, M. Geleijnse, A. Papachristidis, J. Bosch, and J. D’hooge.

Challenge on Endocardial Three-dimensional Ultrasound Segmentation (CETUS), October 2014. 66, 137

[11] H. Bonel, K. Frei, L. Raio, M. Meyer-Wittkopf, L. Remonda, and R. Wiest. Prospective

navigator-echo-based real-time triggering of fetal head movement for the reduction of

artifacts. European radiology, 18(4):822–829, 2008. 32

[12] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin

classifiers. In Proceedings of the fifth annual workshop on Computational learning theory,

pages 144–152. ACM, 1992. 54

[13] Y. Boykov and G. Funka-Lea. Graph Cuts and Efficient N-D Image Segmentation. In-

ternational Journal of Computer Vision, 70(2):109–131, Nov. 2006. 93

[14] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000. 72, 73,

80, 98

[15] L. Breiman. Random Forests. Machine learning, 45(1):5–32, 2001. 46, 56, 91

[16] M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features.

International Journal of Computer Vision, 74(1):59–73, 2007. 52

[17] A. Buades, B. Coll, and J.-M. Morel. A non-local algorithm for image denoising. In

Computer Vision and Pattern Recognition (CVPR), volume 2, pages 60–65. IEEE, 2005.

134


[18] G. Carneiro, F. Amat, B. Georgescu, S. Good, and D. Comaniciu. Semantic-based in-

dexing of fetal anatomies from 3-D ultrasound data using global/semi-local context and

sequential sampling. In Computer Vision and Pattern Recognition (CVPR), 2008. 67

[19] G. Carneiro, B. Georgescu, and S. Good. Knowledge-based automated fetal biometrics

using syngo Auto OB measurements. Siemens Medical Solutions, 2008. 67

[20] G. Carneiro, B. Georgescu, S. Good, and D. Comaniciu. Detection and measurement of

fetal anatomies from ultrasound images using a constrained probabilistic boosting tree.

Transactions on Medical Imaging, 27(9):1342–1355, 2008. 67

[21] T. Chan and L. Vese. Active Contours Without Edges. Transactions on Image Processing,

10(2):266–277, 2001. 129

[22] C.-H. Chang, C.-H. Yu, F.-M. Chang, H.-C. Ko, and H.-Y. Chen. The assessment of

normal fetal brain volume by 3-D ultrasound. Ultrasound in Medicine & Biology, 29(9):

1267 – 1272, 2003. 92

[23] P. Charbonnier, L. Blanc-Feraud, G. Aubert, and M. Barlaud. Deterministic edge-

preserving regularization in computed imaging. Image Processing, IEEE Transactions

on, 6(2):298–311, 1997. 34

[24] Q. Chen and D. Levine. Fast Fetal Magnetic Resonance Imaging Techniques. Topics in

Magnetic Resonance Imaging, 12(1):67, 2001. 28

[25] T. J. Cole, J. V. Freeman, and M. A. Preece. British 1990 growth reference centiles for

weight, height, body mass index and head circumference fitted by maximum penalized

likelihood. Statistics in medicine, 17(4):407–429, 1998. 38

[26] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis.

Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002. 62, 108

[27] C. Cortes and V. Vapnik. Support-vector networks. Machine learning, 20(3):273–297,

1995. 50, 54


[28] P. Coupe, J. V. Manon, V. Fonov, J. Pruessner, M. Robles, and D. L. Collins. Nonlo-

cal Patch-Based Label Fusion for Hippocampus Segmentation. In T. Jiang, N. Navab,

J. Pluim, and M. Viergever, editors, Medical Image Computing and Computer Assisted

Intervention (MICCAI), volume 6363 of LNCS, pages 129–136. Springer Berlin Heidel-

berg, 2010. 91

[29] A. Criminisi, J. Shotton, and S. Bucciarelli. Decision forests with long-range spatial

context for organ localization in CT volumes. Medical Image Computing and Computer-

Assisted Interventation (MICCAI), pages 69–80, 2009. 66, 114, 115, 124

[30] A. Criminisi, J. Shotton, and E. Konukoglu. Decision forests for classification, regression,

density estimation, manifold learning and semi-supervised learning. Technical report,

Tech. Rep. MSR-TR-2011-114, Microsoft Research, Cambridge, 2011. 56

[31] A. Criminisi, J. Shotton, D. Robertson, and E. Konukoglu. Regression forests for efficient

anatomy detection and localization in CT studies. Medical Computer Vision. Recognition

Techniques and Applications in Medical Imaging, pages 106–117, 2011. 49, 50, 58, 66,

117, 118

[32] F. C. Crow. Summed-area tables for texture mapping. ACM SIGGRAPH computer

graphics, 18(3):207–212, 1984. 48

[33] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray. Visual Categorization With

Bags of Keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV,

volume 1, page 22, 2004. 52, 76, 78, 79

[34] R. Cuingnet, R. Prevost, D. Lesage, L. D. Cohen, B. Mory, and R. Ardon. Automatic

detection and segmentation of kidneys in 3D CT images using random forests. In Medical

Image Computing and Computer-Assisted Intervention (MICCAI), pages 66–74. Springer,

2012. 126

[35] R. Cuingnet, O. Somphone, B. Mory, R. Prevost, M. Yaqub, R. Napolitano, A. Papa-

georghiou, D. Roundhill, J. A. Noble, and R. Ardon. Where is my baby? A fast fetal


head auto-alignment in 3D-ultrasound. In International Symposium on Biomedical Imag-

ing (ISBI), pages 768–771. IEEE, 2013. 56, 67, 85

[36] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Com-

puter Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Con-

ference on, volume 1, pages 886–893. IEEE, 2005. 46, 50, 62

[37] M. Damodaram, L. Story, E. Eixarch, P. Patkee, A. Patel, S. Kumar, and M. Rutherford.

Foetal volumetry using Magnetic Resonance Imaging in intrauterine growth restriction.

Early Human Development, 2012. 29, 31, 37, 38, 40, 42, 43, 85, 114, 132, 143

[38] J. G. Daugman. Two-dimensional spectral analysis of cortical receptive field profiles.

Vision research, 20(10):847–856, 1980. 47

[39] J. G. Daugman. Uncertainty relation for resolution in space, spatial frequency, and

orientation optimized by two-dimensional visual cortical filters. JOSA A, 2(7):1160–1169,

1985. 48

[40] J. M. Dias. An introduction to the log-polar mapping. 1996. 60

[41] L. R. Dice. Measures of the Amount of Ecologic Association Between Species. Ecology,

26(3):297–302, 1945. 102

[42] R. Donner, E. Birngruber, H. Steiner, H. Bischof, and G. Langs. Localization of 3D

anatomical structures using random forests and discrete optimization. In Medical Com-

puter Vision. Recognition Techniques and Applications in Medical Imaging, pages 86–95.

Springer, 2011. 56, 62, 63, 66, 127

[43] M. Donoser and H. Bischof. 3D Segmentation by Maximally Stable Volumes (MSVs). In

International Conference on Pattern Recognition (ICPR), pages 63–66. IEEE Computer

Society, 2006. 77

[44] S. F. Eskildsen, P. Coupe, V. Fonov, J. V. Manjon, K. K. Leung, N. Guizard, S. N. Wassef,


L. R. Østergaard, D. L. Collins, A. D. N. Initiative, et al. BEaST: Brain extraction based

on nonlocal segmentation technique. NeuroImage, 59(3):2362–2373, 2012. 88

[45] P. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object Detection with

Discriminatively Trained Part-based Models. Pattern Analysis and Machine Intelligence,

32(9):1627–1645, 2010. 50, 62

[46] M. Fenchel, S. Thesen, and A. Schilling. Automatic labeling of anatomical structures

in MR FastView images using a statistical atlas. In Medical Image Computing and

Computer-Assisted Intervention (MICCAI), pages 576–584. Springer, 2008. 64

[47] S. Feng, S. Zhou, S. Good, and D. Comaniciu. Automatic fetal face detection from

ultrasound volumes via learning 3D and 2D information. In Computer Vision and Pattern

Recognition (CVPR), pages 2488–2495. IEEE, 2009. 67

[48] G. Ferrazzi, M. K. Murgasova, T. Arichi, C. Malamateniou, M. J. Fox, A. Makropoulos,

J. Allsop, M. Rutherford, S. Malik, P. Aljabar, et al. Resting State fMRI in the moving

fetus: A robust framework for motion, bias field and spin history correction. NeuroImage,

101:555–568, 2014. 32

[49] M. A. Fischler and R. C. Bolles. Random Sample Consensus: A Paradigm for Model Fit-

ting with Applications to Image Analysis and Automated Cartography. In M. A. Fischler

and O. Firschein, editors, Readings in Computer Vision: Issues, Problems, Principles,

and Paradigms, pages 726–740. Morgan Kaufmann Publishers Inc., 1987. 76, 79

[50] A. W. Fitzgibbon, R. B. Fisher, et al. A buyer’s guide to conic fitting. DAI Research

paper, 1996. 78

[51] G. T. Flitton, T. P. Breckon, and N. M. Bouallagu. Object Recognition using 3D SIFT

in Complex CT Volumes. In British Machine Vision Conference (BMVC), pages 1–12,

2010. 52


[52] W. T. Freeman and E. H. Adelson. The design and use of steerable filters. Transactions

on Pattern Analysis and Machine Intelligence, 13(9):891–906, 1991. 47, 60, 137

[53] Y. Freund and R. E. Schapire. A Decision-theoretic Generalization of On-line Learning and an Application to Boosting. In Computational Learning Theory, pages 23–37.

Springer, 1995. 55

[54] B. Fulkerson, A. Vedaldi, and S. Soatto. Class Segmentation and Object Localization with

Superpixel Neighborhoods. In International Conference on Computer Vision (ICCV),

pages 670–677. IEEE, 2009. 61, 80

[55] D. Gabor. Theory of communication. Journal of the Institution of Electrical Engineers,

93:429–457, 1946. 47

[56] J. Gall and V. Lempitsky. Class-specific Hough forests for object detection. In Computer

Vision and Pattern Recognition (CVPR), pages 1022–1029. IEEE, 2009. 56, 59, 63, 66,

115

[57] R. Gauriau, R. Cuingnet, D. Lesage, and I. Bloch. Multi-organ Localization Combining

Global-to-Local Regression and Confidence Maps. In Medical Image Computing and

Computer-Assisted Intervention (MICCAI), pages 337–344. Springer, 2014. 56, 65, 126

[58] A. Gholipour, J. Estroff, and S. Warfield. Robust Super-Resolution Volume Reconstruc-

tion From Slice Acquisitions: Application to Fetal Brain MRI. Transactions on Medical

Imaging, 29(10):1739–1758, Oct 2010. 33, 34, 89

[59] A. Gholipour, C. Limperopoulos, S. Clancy, C. Clouchoux, A. Akhondi-Asl, J. A. Es-

troff, and S. K. Warfield. Construction of a Deformable Spatiotemporal MRI Atlas of the

Fetal Brain: Evaluation of Similarity Metrics and Deformation Models. In Medical Im-

age Computing and Computer-Assisted Intervention (MICCAI), pages 292–299. Springer,

2014. 65


[60] P. Glover, J. Hykin, P. Gowland, J. Wright, I. Johnson, and P. Mansfield. An assessment

of the intrauterine sound intensity level during obstetric echo-planar magnetic resonance

imaging. The British journal of radiology, 68(814):1090–1094, 1995. 31

[61] L. Grady. Random Walks for Image Segmentation. Transactions on Pattern Analysis

and Machine Intelligence, 28(11):1768–1783, 2006. 128

[62] P. A. Habas, K. Kim, J. M. Corbett-Detig, F. Rousseau, O. A. Glenn, A. J. Barkovich,

and C. Studholme. A spatiotemporal atlas of MR intensity, tissue probability and shape

of the fetal brain with application to segmentation. NeuroImage, 53(2):460 – 470, 2010.

65

[63] T. Hastie, R. Tibshirani, J. Friedman, and J. Franklin. The elements of statistical learn-

ing: data mining, inference and prediction. The Mathematical Intelligencer, 27(2):83–85,

2005. 45, 46

[64] T. Hayat, A. Nihat, M. Martinez-Biarge, A. McGuinness, J. Allsop, J. Hajnal, and

M. Rutherford. Optimization and initial experience of a multisection balanced steady-

state free precession cine sequence for the assessment of fetal behavior in utero. American

Journal of Neuroradiology, 32(2):331–338, 2011. 32

[65] X. Hou and L. Zhang. Saliency detection: A spectral residual approach. In Computer

Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2007. 61

[66] J. S. Huxley and G. Teissier. Terminology of relative growth. Nature, 137(3471):780–781,

1936. 38

[67] T. Inaoka, H. Sugimori, Y. Sasaki, K. Takahashi, K. Sengoku, N. Takada, and T. Aburano.

VIBE MRI for evaluating the normal and abnormal gastrointestinal tract in fetuses.

American Journal of Roentgenology, 189(6):W303–W308, 2007. 28

[68] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical

models and image processing, 53(3):231–239, 1991. 34


[69] M. Ison, R. Donner, E. Dittrich, G. Kasprian, D. Prayer, and G. Langs. Fully Automated

Brain Extraction and Orientation in Raw Fetal MRI. In Workshop on Paediatric and

Perinatal Imaging, MICCAI, pages 17–24, 2012. 56, 58, 62, 66, 67, 76, 81, 89, 102, 106,

109, 114, 121, 127, 142

[70] M. Jenkinson, P. Bannister, M. Brady, and S. Smith. Improved optimization for the robust

and accurate linear registration and motion correction of brain images. NeuroImage, 17

(2):825–841, 2002. 64

[71] S. Jiang, H. Xue, A. Glover, M. Rutherford, D. Rueckert, and J. Hajnal. MRI of moving

subjects using multislice snapshot images with volume reconstruction (SVR): application

to fetal, neonatal, and adult brain studies. Transactions on Medical Imaging, 26(7):

967–980, 2007. 28, 33, 72, 89

[72] B. Kainz, K. Keraudren, V. Kyriakopoulou, M. Rutherford, J. V. Hajnal, and D. Rueck-

ert. Fast Fully Automatic Brain Detection in Foetal MRI Using Dense Rotation Invariant

Image Descriptors. In IEEE International Symposium on Biomedical Imaging., 2014. 53,

56, 66, 67, 68, 89, 114, 121

[73] B. Kainz, C. Malamateniou, M. Murgasova, K. Keraudren, M. Rutherford, J. V. Haj-

nal, and D. Rueckert. Motion Corrected 3D Reconstruction of the Fetal Thorax from

Prenatal MRI. In P. G. et al., editor, Medical Image Computing and Computer-Assisted

Intervention (MICCAI) 2014, Part II, LNCS 8674, pages 284–291, 2014. 33, 132, 134

[74] B. Kainz, M. Steinberger, W. Wein, M. Murgasova, C. Malamateniou, K. Keraudren,

T. Torsney-Weir, M. Rutherford, P. Aljabar, J. V. Hajnal, and D. Rueckert. Fast Volume

Reconstruction from Motion Corrupted Stacks of 2D Slices. In IEEE Transactions on

Medical Imaging, minor revision under review, 2015. 33, 109, 129, 134

[75] B. Kainz, A. Alansary, C. Malamateniou, K. Keraudren, M. Rutherford, J. V. Hajnal,

and D. Rueckert. Flexible reconstruction and correction of unpredictable motion from


stacks of 2D images. In Medical Image Computing and Computer Assisted Intervention

(MICCAI’15), under review, 2015. 33, 110, 138, 139, 142, 143

[76] G. Kasprian, C. Balassy, P. C. Brugger, and D. Prayer. MRI of normal and pathological

fetal lung development. European Journal of Radiology, 57(2):261–270, 2006. 31

[77] K. Keraudren, V. Kyriakopoulou, M. Rutherford, J. V. Hajnal, and D. Rueckert. Lo-

calisation of the Brain in Fetal MRI using Bundled SIFT Features. In MICCAI, pages

582–589. Springer, 2013. 114, 121

[78] K. Keraudren, M. Kuklisova-Murgasova, V. Kyriakopoulou, C. Malamateniou,

M. Rutherford, B. Kainz, J. Hajnal, and D. Rueckert. Automated Fetal Brain Seg-

mentation from 2D MRI Slices for Motion Correction. NeuroImage, 101:633–643, 2014.

[79] K. Keraudren, O. Oktay, W. Shi, J. V. Hajnal, and D. Rueckert. Endocardial 3D Ul-

trasound Segmentation using Autocontext Random Forests. In Challenge on Endocardial

Three-dimensional Ultrasound Segmentation, MICCAI. 2014.

[80] K. Keraudren, B. Kainz, O. Oktay, V. Kyriakopoulou, M. Rutherford, J. V. Hajnal, and

D. Rueckert. Automated Localization of Fetal Organs in MRI Using Random Forests

with Steerable Features. In MICCAI. under review, 2015.

[81] K. Kim, P. Habas, F. Rousseau, O. Glenn, A. Barkovich, and C. Studholme. Intersection

Based Motion Correction of Multislice MRI for 3-D in Utero Fetal Brain Image Formation.

Transactions on Medical Imaging, 29(1):146–158, Jan 2010. 33, 89, 96

[82] A. Kläser, M. Marszałek, and C. Schmid. A spatio-temporal descriptor based on 3D-gradients. In BMVC 2008-19th British Machine Vision Conference, pages 275–1. British

Machine Vision Association, 2008. 51

[83] T. Klinder, H. Wendland, I. Waechter-Stehle, D. Roundhill, and C. Lorenz. Adaptation

of an articulated fetal skeleton model to three-dimensional fetal image data. In SPIE

Medical Imaging, 2015. 41, 114


[84] M. Kolsch and M. Turk. Analysis of rotational robustness of hand detection with a viola-

jones detector. In International Conference on Pattern Recognition (ICPR), volume 3,

pages 107–110. IEEE, 2004. 60

[85] P. Kontschieder, P. Kohli, J. Shotton, and A. Criminisi. GeoF: Geodesic Forests for

Learning Coupled Predictors. In Computer Vision and Pattern Recognition (CVPR),

pages 65–72. IEEE, 2013. 63, 117

[86] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convo-

lutional neural networks. In Advances in neural information processing systems, pages

1097–1105, 2012. 48

[87] M. Kuklisova-Murgasova, G. Quaghebeur, M. A. Rutherford, J. V. Hajnal, and J. A.

Schnabel. Reconstruction of Fetal Brain MRI with Intensity Matching and Complete

Outlier Removal. Medical Image Analysis, 16(8):1550–1564, 2012. 33, 34, 35, 37, 89, 95,

96, 101, 109, 134, 142

[88] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional Random Fields: Proba-

bilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eigh-

teenth International Conference on Machine Learning (ICML), pages 282–289, San Fran-

cisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc. 93

[89] C. Lampert, M. Blaschko, and T. Hofmann. Beyond sliding windows: Object localization

by efficient subwindow search. In Computer Vision and Pattern Recognition (CVPR),

pages 1–8. IEEE, 2008. 61, 69

[90] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching

for recognizing natural scene categories. In Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2169–2178. IEEE, 2006.

52

[91] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based Learning Applied to

Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 47


[92] D. Levine. Atlas of fetal MRI. CRC Press, 2005. 30, 31

[93] D. Levine. Obstetric MRI. Journal of Magnetic Resonance Imaging, 24(1):1–15, 2006.

31

[94] D. Levine, P. D. Barnes, J. R. Madsen, W. Li, and R. R. Edelman. Fetal central nervous

system anomalies: MR imaging augments sonographic diagnosis. Radiology, 204(3):635–

642, 1997. 31

[95] D. Levine, C. E. Barnewolt, T. S. Mehta, I. Trop, J. Estroff, and G. Wong. Fetal Thoracic

Abnormalities: MR Imaging 1. Radiology, 228(2):379–388, 2003. 39, 121

[96] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection.

In International Conference on Image Processing, volume 1, pages I–900. IEEE, 2002. 49

[97] D. Lowe. Object Recognition from Local Scale-invariant Features. In International Con-

ference on Computer Vision (ICCV), volume 2, pages 1150–1157. IEEE, 1999. 46, 51,

52, 78, 81

[98] C. Malamateniou, S. Malik, S. Counsell, J. Allsop, A. McGuinness, T. Hayat, K. Broad-

house, R. Nunes, A. Ederies, J. Hajnal, et al. Motion-Compensation Techniques in Neona-

tal and Fetal MR Imaging. American Journal of Neuroradiology, 34(6):1124–1136, 2013.

28, 32

[99] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust Wide Baseline Stereo from Max-

imally Stable Extremal Regions. In British Machine Vision Conference (BMVC), pages

384–393, 2002. 61, 76, 77

[100] C. Messom and A. Barczak. Fast and efficient rotated Haar-like features using rotated

integral images. In Australian Conference on Robotics and Automation, pages 1–6, 2006.

49

[101] F. Milletari, M. Yigitsoy, N. Navab, and S. Ahmadi. Left Ventricle Segmentation in


Cardiac Ultrasound Using Hough-Forests With Implicit Shape and Appearance Priors.

Oct 2014. 56, 66

[102] F. Milletari, M. Yigitsoy, N. Navab, and S.-A. Ahmadi. Left Ventricle Segmentation in

Cardiac Ultrasound Using Hough-Forests With Implicit Shape and Appearance Priors.

MIDAS Journal, 2014. 58

[103] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines.

In Proceedings of the 27th International Conference on Machine Learning (ICML-10),

pages 807–814, 2010. 47

[104] L. Najman, M. Couprie, et al. Quasilinear Algorithm for the Component Tree. In SPIE,

volume 5300, pages 98–107, 2004. 77

[105] National Health Service. Ultrasound scan – When it is used, 2013. URL http://www.

nhs.uk/Conditions/Ultrasound-scan/Pages/What-is-it-used-for.aspx. 25

[106] National Radiological Protection Board. Principles for the protection of patients and

volunteers during clinical magnetic resonance diagnostic procedures and limits on patient

and volunteer exposure during clinical magnetic resonance diagnostic procedures: rec-

ommendations for the practical application of the board’s statement. Documents of the

NRPB. Didcot, UK: National Radiological Protection Board, 1991. 31

[107] S. F. Nemec, U. Nemec, P. C. Brugger, D. Bettelheim, S. Rotmensch, J. M. Graham,

D. L. Rimoin, and D. Prayer. MR Imaging of the Fetal Musculoskeletal system. Prenatal

diagnosis, 32(3):205–213, 2012. 31, 41

[108] T. Niwa, K. Kusagiri, Y. Tachibana, H. Ishikawa, T. Nagaoka, K. Nozawa, and N. Aida.

Whole body fetal MRI at 3D-true-FISP imaging. 2012. 28

[109] L. G. Nyul, J. K. Udupa, and X. Zhang. New variants of a method of MRI scale stan-

dardization. Transactions on Medical Imaging, 19(2):143–150, 2000. 39


[110] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution Gray-scale and Rotation

Invariant Texture Classification with Local Binary Patterns. Transactions on Pattern

Analysis and Machine Intelligence, 24(7):971–987, 2002. 49, 73

[111] O. Oktay, A. Gomez, K. Keraudren, A. Schuh, W. Shi, G. Penney, and D. Rueckert.

Probabilistic Edge Map (PEM) for 3D Ultrasound Image Registration and Multi-Atlas

Left Ventricle Segmentation. In FIMH, 2015. 56, 65, 143

[112] C. P. Papageorgiou, M. Oren, and T. Poggio. A general framework for object detection.

In Sixth International Conference on Computer Vision, pages 555–562. IEEE, 1998. 48

[113] G. I. Papaioannou, A. Syngelaki, L. C. Poon, J. A. Ross, and K. H. Nicolaides. Normal

ranges of embryonic length, embryonic heart rate, gestational sac diameter and yolk sac

diameter at 6-10 weeks. Fetal diagnosis and therapy, (28):207–19, 2010. 30

[114] O. Pauly, B. Glocker, A. Criminisi, D. Mateus, A. Moller, S. Nekolla, and N. Navab. Fast

multiple organ detection and localization in whole-body MR Dixon sequences. Medical

Image Computing and Computer-Assisted Intervention (MICCAI), pages 239–247, 2011.

50, 56, 66, 123

[115] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blon-

del, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau,

M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python.

Journal of Machine Learning Research, 12:2825–2830, 2011. 54, 58, 80, 98, 115, 119, 130

[116] J. G. Pipe et al. Motion correction with PROPELLER MRI: application to head motion

and free-breathing cardiac imaging. Magnetic Resonance in Medicine, 42(5):963–969,

1999. 32

[117] D. Pugash, P. C. Brugger, D. Bettelheim, and D. Prayer. Prenatal ultrasound and fetal

MRI: The comparative value of each modality in prenatal diagnosis. European Journal

of Radiology, 68(2):214–226, 2008. 26


[118] M. A. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of

invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2007.

47

[119] G. Rao and B. Ramamurthy. Pictorial essay: MRI of the fetal brain. The Indian journal

of radiology & imaging, 19(1):69, 2009. 36

[120] K. Rematas and B. Leibe. Efficient object detection and segmentation with a cascaded

Hough forest. In International Conference on Computer Vision Workshops (ICCV Workshops), pages 966–973. IEEE, 2011. 126

[121] F. Rousseau, O. Glenn, B. Iordanova, C. Rodriguez-Carranza, D. Vigneron, J. Barkovich,

C. Studholme, et al. Registration-based Approach for Reconstruction of High Resolution

In Utero Fetal MR Brain Images. Academic Radiology, 13(9):1072–1081, 2006. 33, 34,

72, 89

[122] F. Rousseau, K. Kim, C. Studholme, M. Koob, and J.-L. Dietemann. On super-resolution

for fetal brain MRI. In Medical Image Computing and Computer-Assisted Intervention–

MICCAI 2010, pages 355–362. Springer, 2010. 34

[123] F. Rousseau, P. A. Habas, and C. Studholme. A Supervised Patch-Based Approach for

Human Brain Labeling. Transactions on Medical Imaging, 30(10):1852–1862, 2011. 91

[124] F. Rousseau, E. Oubel, J. Pontabry, M. Schweitzer, C. Studholme, M. Koob, and J.-

L. Dietemann. BTK: An Open-Source Toolkit for Fetal Brain MR Image Processing.

Computer Methods and Programs in Biomedicine, 109(1):65–73, Jan 2013. 101

[125] H. A. Rowley, S. Baluja, and T. Kanade. Rotation invariant neural network-based face

detection. In Computer Vision and Pattern Recognition (CVPR), pages 38–44. IEEE,

1998. 60


[126] Royal College of Obstetricians and Gynaecologists. The management of breech presenta-

tion, 2006. 36

[127] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski. ORB: an efficient alternative to

SIFT or SURF. In International Conference on Computer Vision (ICCV), pages 2564–2571. IEEE, 2011. 52

[128] S. Rueda, S. Fathima, C. L. Knight, M. Yaqub, A. T. Papageorghiou, B. Rahmatullah,

A. Foi, M. Maggioni, A. Pepe, J. Tohka, et al. Evaluation and comparison of current fetal

ultrasound image segmentation methods for biometric measurements: a grand challenge.

Transactions on Medical Imaging, 33(4):797–813, 2014. 67

[129] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-

propagating errors. Cognitive modeling, 5:3, 1988. 47

[130] M. Rutherford, S. Jiang, J. Allsop, L. Perkins, L. Srinivasan, T. Hayat, S. Kumar, and

J. Hajnal. MR Imaging Methods for Assessing Fetal Brain Development. Developmental

neurobiology, 68(6):700–711, May 2008. 31, 32

[131] A. L. Samuel. Some studies in machine learning using the game of checkers. IBM Journal

of research and development, 3(3):210–229, 1959. 45

[132] H. Schulz, B. Waldvogel, R. Sheikh, and S. Behnke. CURFIL: Random Forests for

Image Labeling on GPU. International Conference on Computer Vision, Theory and

Applications (VISAPP), 2015. 130

[133] F. Segonne, A. Dale, E. Busa, M. Glessner, D. Salat, H. Hahn, and B. Fischl. A Hybrid

Approach to the Skull Stripping Problem in MRI. NeuroImage, 22(3):1060–1075, 2004.

88

[134] S. Seifert, A. Barbu, S. Zhou, D. Liu, J. Feulner, M. Huber, M. Suehling, A. Cavallaro,

and D. Comaniciu. Hierarchical parsing and semantic navigation of full body CT data.

Medical Imaging, 7259:725902, 2009. 66


[135] A. Serag, P. Aljabar, G. Ball, S. Counsell, J. Boardman, M. Rutherford, A. Edwards,

J. Hajnal, and D. Rueckert. Construction of a consistent high-definition spatio-temporal

atlas of the developing brain using adaptive kernel regression. NeuroImage, 2012. 64, 65

[136] A. Serag, V. Kyriakopoulou, M. Rutherford, A. Edwards, J. Hajnal, P. Aljabar, S. Coun-

sell, J. Boardman, and D. Rueckert. A multi-channel 4D probabilistic atlas of the devel-

oping brain: application to fetuses and neonates. Annals of the British Machine Vision

Association, (3):1–14, 2012. 42

[137] D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M.

Leahy. Magnetic Resonance Image Tissue Classification Using a Partial Volume Model.

NeuroImage, 13(5):856–876, 2001. 88

[138] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman,

and A. Blake. Real-time human pose recognition in parts from single depth images. In

Computer Vision and Pattern Recognition (CVPR), volume 2, page 7, 2011. 56, 62, 63,

123

[139] H. Skibbe, M. Reisert, T. Schmidt, T. Brox, O. Ronneberger, and H. Burkhardt. Fast

rotation invariant 3D feature computation utilizing efficient local neighborhood operators.

Transactions on Pattern Analysis and Machine Intelligence, 34(8):1563–1575, 2012. 46,

52, 68

[140] A. S. Smith, J. Estroff, C. Barnewolt, J. Mulliken, and D. Levine. Prenatal diagnosis of

cleft lip and cleft palate using MRI. American Journal of Roentgenology, 183(1):229–235,

2004. 31

[141] S. M. Smith. Fast Robust Automated Brain Extraction. Human Brain Mapping, 17(3):

143–155, 2002. 88

[142] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: exploring photo collections in

3D. In ACM transactions on graphics (TOG), volume 25, pages 835–846. ACM, 2006. 52


[143] R. Snijders and K. Nicolaides. Fetal Biometry at 14–40 Weeks’ Gestation. Ultrasound in

Obstetrics & Gynecology, 4(1):34–48, 2003. 38, 80

[144] L. Breiman. Random Forests. Machine Learning, 45(1):5–32, 2001. 56

[145] C. Studholme. Mapping fetal brain development in utero using MRI: the big bang of

brain mapping. Annual review of biomedical engineering, 13:345, 2011. 32

[146] C. Studholme and F. Rousseau. Quantifying and Modelling Tissue Maturation in the

Living Human Fetal Brain. International Journal of Developmental Neuroscience, 32:

3–10, 2013. 32, 33

[147] R. Szeliski. Computer vision: algorithms and applications. Springer Science & Business

Media, 2010. 47

[148] V. Taimouri, A. Gholipour, C. Velasco-Annis, J. A. Estroff, and S. K. Warfield. A

Template-to-Slice Block Matching Approach for Automatic Localization of Brain in Fetal

MRI. 2015. 64, 65, 89, 109, 138, 142

[149] Y. Taleb, M. Schweitzer, C. Studholme, M. Koob, J.-L. Dietemann, F. Rousseau, et al.

Automatic Template-based Brain Extraction in Fetal MR Images. HAL Abstract 2890,

Nov 2013. 64, 65, 89, 101, 104, 105, 107, 141

[150] D. Temperton. Pregnancy and work in diagnostic imaging departments. Royal College of Radiologists and the British Institute of Radiology, 2009. 29

[151] M. Toews and W. I. Wells. Efficient and Robust Model-to-Image Alignment using 3D

Scale-Invariant Features. Medical Image Analysis, 2012. 52, 81

[152] P. J. Toivanen. New Geodesic Distance Transforms for Gray-scale Images. Pattern Recog-

nition Letters, 17(5):437–450, 1996. 63


[153] S. Tourbier, P. Hagmann, M. Cagneaux, L. Guibaud, S. Gorthi, M. Schaer, J.-P. Thiran,

R. Meuli, and M. B. Cuadra. Automatic brain extraction in fetal MRI using multi-atlas-

based segmentation. In SPIE Medical Imaging, pages 94130Y–94130Y. International

Society for Optics and Photonics, 2015. 64, 65, 90, 138

[154] Z. Tu. Probabilistic boosting-tree: Learning discriminative models for classification,

recognition, and clustering. In International Conference on Computer Vision (ICCV),

volume 2, pages 1589–1596. IEEE, 2005. 124

[155] Z. Tu. Auto-Context and its Application to High-level Vision Tasks. In Computer Vision

and Pattern Recognition (CVPR), pages 1–8. IEEE, 2008. 62, 115, 117

[156] M. R. Turner. Texture discrimination by Gabor functions. Biological cybernetics, 55(2-3):

71–82, 1986. 47

[157] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for

object recognition. International Journal of Computer Vision, 104(2):154–171, 2013. 61

[158] K. Van Leemput, F. Maes, D. Vandermeulen, and P. Suetens. Automated Model-based

Tissue Classification of MR Images of the Brain. Transactions on Medical Imaging, 18

(10):897–908, Oct 1999. 64, 88

[159] M. Varma and A. Zisserman. Texture Classification: Are Filter Banks Necessary? In

Computer Vision and Pattern Recognition (CVPR), volume 2, pages II–691. IEEE, 2003.

48

[160] P. V. C. Hough. Method and means for recognizing complex patterns, Dec. 18 1962. US Patent 3,069,654. 59

[161] P. Viola and M. Jones. Rapid Object Detection Using a Boosted Cascade of Simple

Features. In Computer Vision and Pattern Recognition (CVPR), volume 1, pages I–511.

IEEE, 2001. 46, 48, 49, 55, 60, 61, 67, 69, 72, 73, 74, 120


[162] C. Vondrick, A. Khosla, T. Malisiewicz, and A. Torralba. Hoggles: Visualizing object

detection features. In International Conference on Computer Vision (ICCV), pages 1–8. IEEE, 2013. 50

[163] Y. Wang, S. J. Riederer, and R. L. Ehman. Respiratory motion of the heart: kinematics

and the implications for the spatial resolution in coronary imaging. Magnetic Resonance

in Medicine, 33(5):713–719, 1995. 32

[164] Z. Wang, C. Donoghue, and D. Rueckert. Patch-based segmentation without registra-

tion: application to knee MRI. In Machine Learning in Medical Imaging, pages 98–105.

Springer, 2013. 65

[165] Z. Wang, R. Wolz, T. Tong, and D. Rueckert. Spatially Aware Patch-Based Segmentation

(SAPS): An Alternative Patch-Based Segmentation Framework. In B. Menze, G. Langs,

L. Lu, A. Montillo, Z. Tu, and A. Criminisi, editors, Medical Computer Vision. Recog-

nition Techniques and Applications in Medical Imaging, volume 7766 of LNCS, pages

93–103. Springer Berlin Heidelberg, 2013. 91

[166] S. Warfield, M. Kaus, F. Jolesz, and R. Kikinis. Adaptive Template Moderated Spatially

Varying Statistical Classification. In W. Wells, A. Colchester, and S. Delp, editors,

Medical Image Computing and Computer Assisted Intervention (MICCAI), volume 1496

of LNCS, pages 431–438. Springer Berlin Heidelberg, 1998. 88

[167] N. White, C. Roddey, A. Shankaranarayanan, E. Han, D. Rettmann, J. Santos, J. Ku-

perman, and A. Dale. PROMO: Real-time prospective motion correction in MRI using

image-based tracking. Magnetic Resonance in Medicine, 63(1):91–105, 2010. 32

[168] R. Wolz, C. Chu, K. Misawa, K. Mori, and D. Rueckert. Multi-organ abdominal CT

segmentation using hierarchically weighted subject-specific atlases. In Medical Image

Computing and Computer-Assisted Intervention (MICCAI), pages 10–17. Springer, 2012.

64


[169] R. Wright, D. Vatansever, V. Kyriakopoulou, C. Ledig, R. Wolz, A. Serag, D. Rueckert,

M. A. Rutherford, J. V. Hajnal, and P. Aljabar. Age Dependent Fetal MR Segmentation

Using Manual and Automated Approaches. In Workshop on Paediatric and Perinatal

Imaging (MICCAI), pages 97–104. Springer, Sep. 2012. 88, 100

[170] T.-F. Wu, C.-J. Lin, and R. C. Weng. Probability estimates for multi-class classification

by pairwise coupling. The Journal of Machine Learning Research, 5:975–1005, 2004. 54

[171] X. Wu, V. Kumar, J. R. Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan,

A. Ng, B. Liu, S. Y. Philip, et al. Top 10 algorithms in data mining. Knowledge and

Information Systems, 14(1):1–37, 2008. 52

[172] Z. Wu, Q. Ke, M. Isard, and J. Sun. Bundling Features for Large Scale Partial-duplicate

Web Image Search. In Computer Vision and Pattern Recognition (CVPR), pages 25–32.

IEEE, 2009. 76, 78

[173] P. A. Yushkevich, J. Piven, H. C. Hazlett, R. G. Smith, S. Ho, J. C. Gee, and G. Gerig.

User-guided 3D active contour segmentation of anatomical structures: significantly im-

proved efficiency and reliability. Neuroimage, 31(3):1116–1128, 2006. 43

[174] Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu. Fast Automatic

Heart Chamber Segmentation from 3D CT Data using Marginal Space Learning and

Steerable Features. In International Conference on Computer Vision (ICCV), pages 1–8.

IEEE, 2007. 60, 61, 67, 115

[175] Y. Zheng, A. Barbu, B. Georgescu, M. Scheuering, and D. Comaniciu. Four-chamber

heart modeling and automatic segmentation for 3-D cardiac CT volumes using marginal

space learning and steerable features. Transactions on Medical Imaging, 27(11):1668–

1681, 2008. 60

[176] S. Zhou, J. Zhou, and D. Comaniciu. A boosting regression approach to medical anatomy

detection. In Computer Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2007.

66


Appendix A

Neurological/radiological order

Medical images are stored on disk as an array of voxel data, associated with a transformation matrix M that maps the image grid onto the coordinate system of the scanner (the image-to-world matrix). The sign of det(M) distinguishes the neurological and radiological conventions for pixel ordering:

• det(M) > 0 corresponds to the neurological order: the image is represented in a direct (right-handed) orthonormal system, and the viewer adopts the point of view of the patient. In this convention, the patient's right appears on the right of the image.

• det(M) < 0 corresponds to the radiological order: the image is represented in an indirect (left-handed) orthonormal system, and the viewer faces the patient, as in a mirror. In this convention, the patient's right appears on the left of the image.

When working directly on the image grid, for instance when computing cube features with the integral image, it is important to ensure that all images use the same convention. To go from radiological to neurological pixel ordering, the pixel data must be mirrored along the X axis and the X axis in the image metadata multiplied by -1. This holds for the IRTK library, which places the image origin at the centre of the image; an additional offset must be taken into account if the origin is defined differently (for instance at position [0,0,0] in the array of voxels).
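The conversion described above can be sketched as follows. This is an illustrative NumPy implementation, not the IRTK code, and it assumes the voxel array is indexed (z, y, x) with the origin at the image centre, so that no translation offset is needed:

```python
import numpy as np

def radiological_to_neurological(data, M):
    """Convert a radiological-order image to neurological order:
    mirror the voxel data along the X axis and negate the X column
    of the image-to-world matrix. Assumes data is indexed (z, y, x)
    and the image origin lies at the centre of the image."""
    M = np.asarray(M, dtype=float)
    if np.linalg.det(M[:3, :3]) >= 0:
        return data, M  # already neurological: nothing to do
    data_flipped = data[..., ::-1].copy()   # mirror along the X axis
    M_new = M.copy()
    M_new[:3, 0] *= -1.0                    # negate the X axis direction
    return data_flipped, M_new
```

If the origin were instead stored at voxel [0,0,0], the translation column M[:3, 3] would also need an offset of (nx − 1) times the original X column, which this sketch deliberately omits.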