
Hippocampus Segmentation on Epilepsy and Alzheimer's Disease Studies with Multiple Convolutional Neural Networks

Diedre Carmo, Bruna Silva, Clarissa Yasuda, Letícia Rittner, Roberto Lotufo

School of Electrical and Computer Engineering and Faculty of Medical Sciences, UNICAMP, Campinas, São Paulo, Brazil

Abstract

Hippocampus segmentation on magnetic resonance imaging (MRI) is of key importance for the diagnosis, treatment decision and investigation of neuropsychiatric disorders. Automatic segmentation is a very active research field, with many recent models involving Deep Learning for this task. However, Deep Learning requires a training phase, which can introduce bias from the specific domain of the training dataset. Current state-of-the-art methods train their models on healthy or Alzheimer's disease patients from public datasets. This raises the question of whether these methods are capable of recognizing the hippocampus in a very different domain.

In this paper we present a state-of-the-art, open source, ready-to-use hippocampus segmentation methodology based on Deep Learning. We analyze this methodology alongside other recent Deep Learning methods in two domains: the public HarP benchmark and an in-house dataset of Epilepsy patients. Our internal dataset differs significantly from Alzheimer's and healthy subjects' scans: some scans are from patients who have undergone hippocampal resection as surgical treatment for Epilepsy. We show that our method surpasses others from the literature in both the Alzheimer's and Epilepsy test datasets.

Keywords: deep learning, hippocampus segmentation, convolutional neural networks, alzheimer's disease, epilepsy

Preprint submitted to Journal of Neuroscience Methods January 16, 2020

arXiv:2001.05058v1 [eess.IV] 14 Jan 2020


1. Introduction

The hippocampus is a small, medial, subcortical brain structure related to long and short term memory [1]. Hippocampal segmentation from magnetic resonance imaging (MRI) is of great importance for research on neuropsychiatric disorders and can also be used in the preoperative investigation of pharmacoresistant temporal lobe epilepsy [2]. The hippocampus can be affected in shape and volume by different pathologies, such as the neurodegeneration associated with Alzheimer's disease [3], or surgical intervention to treat temporal lobe epilepsy [4]. Medical research on these diseases usually involves manual segmentation of the hippocampus, requiring time and expertise in the field. The high cost associated with manual segmentation has stimulated the search for effective automatic segmentation methods. Some of those methods, such as FreeSurfer [5], are already used as a starting point for a finer manual segmentation [6].

While conducting research on Epilepsy and methods for hippocampus segmentation, two things drew our attention. Firstly, the use of Deep Learning and Convolutional Neural Networks (CNN) is in the spotlight, with most of the recent hippocampus segmentation methods featuring them. Secondly, many of these methods rely on publicly available datasets for training and evaluation and therefore have access only to scans of healthy subjects or patients with Alzheimer's disease.

Considering these facts, we present an evaluation of some recent methods [7, 8, 9], including an improved version of our own Deep Learning based hippocampus segmentation method [10], using a dataset from a different domain. This in-house dataset, named HCUnicamp, contains scans from patients with epilepsy (pre and post surgical removal of the hippocampus), with patterns of atrophy different from those observed in the Alzheimer's data and in healthy subjects. Additionally, we use the public Alzheimer's HarP dataset for training and further comparisons with other methods.


1.1. Contributions

In summary, the main contributions of this paper are as follows:

• A readily available hippocampus segmentation methodology consisting of an ensemble of 2D CNNs coupled with traditional 3D post-processing, achieving state-of-the-art performance on public data and using recent advancements from the Deep Learning literature.

• An evaluation of recent hippocampus segmentation methods on our Epilepsy dataset, which includes post-operative images of patients missing one of the hippocampi. We show that our method is also superior in this domain, although no method was able to achieve good performance on this dataset according to our manual annotations.

This paper is organized as follows: Section 2 presents a literature review of recent Deep Learning based hippocampus segmentation methods. Section 3 introduces the two datasets involved in this research in more detail. A detailed description of our hippocampus segmentation methodology is given in Section 4. Section 5 presents experimental results from our methodology development and qualitative and quantitative comparisons with other methods on HarP and HCUnicamp, while Sections 6 and 7 contain, respectively, an extended discussion of those results and the conclusion.

2. Literature Review

Before the rise of Deep Learning methods in medical imaging segmentation, most hippocampus segmentation methods used some form of optimization of registration and deformation to atlas(es) [11, 12, 13, 5, 14, 15]. Even today, medical research uses results from FreeSurfer [5], a high impact multiple brain structure segmentation work, available as a software suite. Those atlas-based methods can produce high quality segmentations, taking, however, around 8 hours for a single volume. Lately, a more time efficient approach appeared in the literature, namely the use of such atlases as training volumes for CNNs. Deep Learning methods can achieve similar overlap metrics while predicting results in a matter of seconds per volume [16, 17, 18, 8, 7, 19, 20].

Recent literature on hippocampus segmentation with Deep Learning explores different architectures, loss functions and overall methodologies for the task. One approach common to most of the studies involves the combination of 2D or 3D CNNs and patches as inputs in the training phase. Note that some works focus on hippocampus segmentation, while others attempt segmentation of multiple neuroanatomical structures. A brief summary of each of those works follows.

Chen et al. [16] report 0.9 Dice [21] in a 10-fold evaluation of 110 ADNI [3] volumes with a novel CNN input idea. Instead of using only the three orthogonal planes as patches, the volume is also cut in six additional diagonal orientations. This results in 9 planes, which are fed to 9 small modified U-Net [22] CNNs. The ensemble of these U-Nets constructs the final result.

Xie et al. [17] train a voxel-wise classification method using triplanar patches crossing the target voxel. They merge features from all patches into a Deep Neural Network with a fully connected classifier, alongside standard use of ReLU activations and softmax [23]. The training patches come only from the approximate central area where the hippocampus usually is, balancing labels for 1:1 foreground and background target voxels. Voxel classification methods tend to be faster than multi-atlas methods, but are still slower than Fully Convolutional Neural Networks.

DeepNat from Wachinger et al. [18] achieves segmentation of 25 structures with a 3D CNN architecture. With a hierarchical approach, a 3D CNN separates foreground from background and another 3D CNN segments the 25 sub-cortical structures in the foreground. Alongside the proposal of a novel parametrization method replacing coordinate augmentation, DeepNat uses 3D Conditional Random Fields as post-processing. The architecture performs voxel-wise classification, taking into account the classification of neighboring voxels. This work's results focus mainly on the MICCAI Labeling Challenge, with around 0.86 Dice in hippocampus segmentation.

Thyreau et al. [8]'s model, named Hippodeep, uses CNNs trained on a region of interest (ROI). However, where we apply one CNN for each plane of view, Thyreau et al. use a single CNN, starting with a planar analysis followed by layers of 3D convolutions and shortcut connections. This study used more than 2000 patients, augmented to around 10000 volumes. Initially the model is trained with FreeSurfer segmentations, and later fine-tuned using volumes for which the authors had access to manual segmentations, the gold standard. Thyreau's method requires MNI152 registration of the input data, which adds around a minute of computation time, but the model is generally faster than multi-atlas or voxel-wise classification, achieving generalization in different datasets, as verified by Nogovitsyn et al. [24].

QuickNat from Roy et al. [7] achieves faster segmentations than DeepNat by using a multiple CNN approach instead of voxel-wise classification. Its methodology follows a consensus of multiple 2D U-Net-like architectures, each specialized in one slice orientation. The use of FreeSurfer [5] masks over hundreds of public volumes to generate silver standard annotations allows for much more data than is usually available in medical imaging. Later, after the network already knows how to localize the structures, it is fine-tuned on more precise gold standard labels. Inputs for this method need to conform to the FreeSurfer format.

Ataloglou et al. [19] recently presented another case of fusion of multiple CNN outputs, specialized in the axial, coronal and sagittal orientations, into a final hippocampus segmentation. They used U-Net-like CNNs specialized in each orientation, followed by error correction CNNs and a final average fusion of the results. They went against a common approach in training U-Nets of using patches during data augmentation, instead using cropped slices. This raises concerns about overfitting to the dataset used, HarP [25], supported by the need for fine-tuning to generalize to a different dataset.

Dinsdale et al. [20] mix knowledge from multi-atlas works with Deep Learning, using a 3D U-Net CNN to predict a deformation field from an initial binary sphere to the segmentation of the hippocampus, achieving around 0.86 Dice on HarP. Interestingly, adding an auxiliary classification task did not improve segmentation results.


It is known that Deep Learning approaches require a large amount of training data, something that is not commonly available, especially in medical imaging. Commonly used ways of increasing the quantity of data in the literature include using 2D CNNs over regions (patches) of slices, with some form of patch selection strategy. The Fully Convolutional Neural Network (FCNN) U-Net [22] architecture has shown potential to learn from relatively small amounts of data with its encoding, decoding and concatenation scheme, even when used with 3D convolutions directly on a 3D volume [9].

Looking at these recent works, one can confirm the segmentation potential of the U-Net architecture, including the idea of an ensemble of 2D U-Nets instead of a single 3D one, as presented by us [26, 10], by simultaneous recent work [7, 19], and even by works on other segmentation problems [27]. In this paper, some of those methods were reproduced for comparison purposes on our in-house dataset, namely [7, 8], including a 3D U-Net architecture test from [9].

3. Data

This study uses mainly two different datasets: one collected locally for an Epilepsy study, named HCUnicamp; and one public, from the ADNI Alzheimer's study, named HarP. HarP is commonly used in the literature as a hippocampus segmentation benchmark. The main difference between the datasets is the lack of one of the hippocampi in 70% of the scans from HCUnicamp, as these patients underwent surgical removal (Figure 1).

Although our method needs input data to be in the MNI152 [28] orientation, data from those datasets are in native space and are not registered. We provide an orientation correction by rigid registration as an option when predicting on external volumes, to avoid orientation mismatch problems.

3.1. HarP

This methodology was developed with training and validation on HarP [25], a widely used benchmark dataset in the hippocampus segmentation literature.


Figure 1: (a) 3D rendering of the manual annotation (in green) of one of the HarP dataset volumes. (b) Coronal center crop slice of the average hippocampus mask over all volumes in HarP (green) and HCUnicamp (red); zero corresponds to the center. (c) Sagittal, (d) coronal and (e) axial HCUnicamp slices from a post-operative scan, with annotations in red.

The full HarP release contains 135 T1-weighted MRI volumes. Alzheimer's disease classes are balanced, with equal occurrence of cognitively normal (CN), mild cognitive impairment (MCI) and Alzheimer's disease (AD) cases [3]. Volumes were min-max intensity normalized between 0 and 1, and no volumes were removed. Training with hold-out was performed with 80% training, 10% validation and 10% testing, while k-fold evaluation, when used, consisted of 5 folds with no overlap between the test sets.
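As an illustration of this protocol, the sketch below (assuming volumes already loaded as NumPy arrays; the names are ours, not the released code) shows the per-volume min-max normalization and a 5-fold split with non-overlapping test sets.

```python
import numpy as np
from sklearn.model_selection import KFold

# Per-volume min-max intensity normalization to [0, 1].
def minmax_normalize(volume: np.ndarray) -> np.ndarray:
    return (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

# 5 folds over the 135 HarP volumes, with no overlap between the test sets.
folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(np.arange(135)))
```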


3.2. HCUnicamp

HCUnicamp was collected in-house by personnel from the Brazilian Institute of Neuroscience and Neurotechnology (BRAINN) at UNICAMP's Hospital de Clínicas. This dataset contains 190 T1-weighted 3T MRI acquisitions, in native space. 58 are healthy controls and 132 are Epilepsy patients. Of those, 70% had one of the hippocampi surgically removed, resulting in very different shape and texture from what is commonly seen in public datasets (Figure 1). More details about the surgical procedure can be found in [2, 4]. All volumes have manual annotations of the hippocampus, performed by one rater. Voxel intensities are min-max normalized between 0 and 1, per volume. This data acquisition was approved by an Ethics and Research Committee.

Comparisons between the datasets can be seen in Figure 1. The difference in mean mask position due to the inclusion of the neck in HCUnicamp is notable, alongside the lower presence of left hippocampus labels due to surgical intervention for Epilepsy (Figure 1b).

To investigate how different methods deal with the absence of the hippocampus and unusual textures, we used the HCUnicamp dataset (considered a different domain) as a final test set and benchmark. Our methodology was only tested on this dataset at the end, alongside the other methods. Results on HCUnicamp were not taken into consideration for our method's methodological choices.

4. Segmentation Methodology

In this section, the general methodology (Figure 2) of our hippocampus segmentation method is detailed. In summary, the activations from three orientation-specialized modified 2D U-Net CNNs are merged into an activation consensus. Each network's activations for a given input volume are built slice by slice. The three activation volumes are averaged into a consensus volume, which is post-processed into the final segmentation mask. The following sections go into more detail on each part of the architecture and the method overall.


Figure 2: The final segmentation volume is generated by taking into account activations from three FCNNs, each specialized in one 2D orientation. Neighboring slices are taken into account in a multi-channel approach. Full slices are used at prediction time, but training uses patches.

4.1. U-Net architecture

The basic structure of our networks is inspired by the U-Net FCNN architecture [22]. However, some modifications based on other successful works were applied to the architecture (Figure 3). Those modifications include: instead of a single 2D patch as input, two neighboring patches are concatenated, leaving the patch corresponding to the target mask in the center [29]; residual connections based on ResNet [30] between the input and output of the double convolutional block, implemented as 1x1 2D convolutions to account for different numbers of channels; and batch normalization added to each convolution inside the convolutional block, to accelerate convergence and facilitate learning [31]. Also, all convolutions use padding to keep dimensions and have no bias.

4.2. Residual Connections

Residual or shortcut connections have been shown to improve convergence and performance of CNNs [30], either in the form of direct connections propagating past results to the next convolution input by adding values, or in the form of 1x1 convolutions, to deal with different numbers of channels. An argument for their effectiveness is that residual connections offer a way for a simple propagation of values without any transformation, which is not a trivial task when the network consists of multiple non-linear transformations in the form of convolutions followed by max pooling.

Figure 3: Final architecture of each modified U-Net in Figure 2. Of note, in comparison to the original U-Net, are the use of BatchNorm, residual connections in each convolutional block, the 3-channel neighboring-patch input and the sigmoid output. Padding is also used after convolutions.

In this work, residual connections were implemented in the form of a 1x1 convolution, adding the input of the first 3x3 convolution to the result of the batch normalization of the second 3x3 convolution in a convolutional block (Figure 3).
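A minimal PyTorch sketch of such a convolutional block, following the description above (two padded, bias-free 3x3 convolutions with batch normalization and a 1x1 residual connection); this is an illustration, not the released implementation.

```python
import torch
from torch import nn

class ConvBlock(nn.Module):
    """Double 3x3 convolution block with BatchNorm and a 1x1 residual connection."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 convolution so the block input can be added despite different channel counts
        self.residual = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.residual(x))
```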

4.3. Weight Initialization, Bias and Batch-normalization

It has been shown that weight initialization is crucial for proper convergence of CNNs [32]. In computer vision related tasks, having pre-initialized weights that already recognize basic image pattern features such as border directions, frequencies and textures can be helpful. This work uses VGG11 [33] weights in the encoder part of the U-Net architecture, as in [34].
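A hedged sketch of this kind of initialization (assuming a recent torchvision and that encoder_convs is the ordered list of the encoder's Conv2d layers; the helper name is ours, not the authors' code):

```python
import torch
import torchvision

def init_encoder_from_vgg11(encoder_convs):
    # ImageNet pre-trained VGG11, as used by TernausNet [34]
    vgg = torchvision.models.vgg11(weights="IMAGENET1K_V1")
    pretrained = [m for m in vgg.features if isinstance(m, torch.nn.Conv2d)]
    for enc, pre in zip(encoder_convs, pretrained):
        if enc.weight.shape == pre.weight.shape:  # copy only when shapes match
            enc.weight.data.copy_(pre.weight.data)
```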

4.4. Patches and Augmentation

During prediction time, slices for each network are extracted with a center crop. When building the consensus activation volume, the resulting activation is padded back to the original size.

For training, this method uses patches. One of the strong points of the U-Net architecture is its ability to learn from patches and extend that knowledge to the evaluation of a full image, effectively working as a form of data augmentation. In this work, batches of random patches are used when training each network. Patches are randomly selected at runtime, not as pre-processing. Patches can have many possible sizes, as long as the size accommodates the number of spatial resolution reductions present in the network (e.g., division by 2 by a max pooling).

A pre-defined percentage of the patches is selected from a random point of the brain, allowing the network to learn which structures are not the hippocampus and are not close to it, such as scalp, neck, eyes and brain ridges. Those are called negative patches, although they do not necessarily have a completely zeroed target, due to being random. On the other hand, positive patches are always centered on a random point of the hippocampus border.
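A simplified sketch of this sampling strategy for one 2D slice (the function name and the border computation are illustrative assumptions, not the exact released code):

```python
import numpy as np
from scipy import ndimage

def sample_patch(img, mask, size=64, positive_prob=0.8):
    """Return an image/target patch pair: 80% centered on the hippocampus border, 20% random."""
    half = size // 2
    mask = mask.astype(bool)
    border = mask & ~ndimage.binary_erosion(mask)        # border voxels of the target mask
    points = np.argwhere(border)
    if len(points) > 0 and np.random.rand() < positive_prob:
        cy, cx = points[np.random.randint(len(points))]  # positive patch center
    else:
        cy = np.random.randint(half, img.shape[0] - half)  # negative (random) patch center
        cx = np.random.randint(half, img.shape[1] - half)
    cy = int(np.clip(cy, half, img.shape[0] - half))
    cx = int(np.clip(cx, half, img.shape[1] - half))
    return (img[cy - half:cy + half, cx - half:cx + half],
            mask[cy - half:cy + half, cx - half:cx + half])
```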

In a similar approach to Pereira et al. [29]'s Extended 2D, adjacent patches (slices at evaluation time) are included in the network's input as additional channels (Figure 2). The intention is for the 2D network to take into consideration volumetric information adjacent to the region of interest, hence the name of the method, Extended 2D Consensus Hippocampus Segmentation (E2DHipseg). This approach is inspired by how physicians compare neighboring slices in multi-view visualization when deciding whether a voxel is part of the analyzed structure or not.
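The sketch below illustrates how such an extended 2D input can be assembled by stacking a slice and its two neighbors as channels (an illustration under our naming, for one orientation):

```python
import numpy as np

def extended_2d_input(volume: np.ndarray, i: int, axis: int = 0) -> np.ndarray:
    """Stack slice i with its two neighbors along `axis` into a (3, H, W) input."""
    idx = np.clip([i - 1, i, i + 1], 0, volume.shape[axis] - 1)  # repeat edge slices at the borders
    return np.stack([np.take(volume, j, axis=axis) for j in idx], axis=0)
```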

Deep Learning algorithms usually require a large and varied dataset to achieve generalization [35]. Manual segmentation by experts is used as the gold standard, but is often not enough for the training of deep networks.

Aug.   Chance   Description
0      100%     Random patch selection, 80% positive and 20% negative
1      100%     Intensity modification by a value from the [−0.05, 0.05] uniform distribution
2      20%      Rotation and scaling by a value from [−10, 10] degrees and percent, respectively
3      20%      Gaussian noise with 0 mean and 0.0002 variance

Table 1: Description of the augmentations present in the experiments of Table 2, with the chance of random application during patch selection and parameter descriptions.

Data augmentation is used to improve our dataset variance and avoid overfitting, an excessive bias towards the training data. Without augmentation, this method could overfit to MRI machine parameters, magnetic field intensity, field of view and so on. All augmentations perform a random small modification to the data, according to pre-defined parameters, at runtime, not as pre-processing. Alongside the use of random patches at runtime, the use of other transformations was tested, as seen in Table 1.
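A hedged sketch of these on-the-fly augmentations, using the parameters from Table 1 (the paired scaling of Aug. 2 is omitted here for brevity; names are illustrative, not the released code):

```python
import numpy as np
from scipy import ndimage

def augment(patch: np.ndarray, target: np.ndarray):
    # Aug. 1 (100%): intensity shift drawn from U(-0.05, 0.05)
    patch = patch + np.random.uniform(-0.05, 0.05)
    # Aug. 2 (20%): rotation by U(-10, 10) degrees (the paired scaling is omitted in this sketch)
    if np.random.rand() < 0.2:
        angle = np.random.uniform(-10, 10)
        patch = ndimage.rotate(patch, angle, reshape=False, order=1)
        target = ndimage.rotate(target.astype(float), angle, reshape=False, order=0)
    # Aug. 3 (20%): Gaussian noise with zero mean and 0.0002 variance
    if np.random.rand() < 0.2:
        patch = patch + np.random.normal(0.0, np.sqrt(0.0002), patch.shape)
    return patch, target
```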

4.5. Loss Function

The choice of loss function plays an important role in Deep Learning methods, defining what the training process will be optimizing. When using a sigmoid output activation, Binary Cross Entropy (BCE), Mean Squared Error (MSE) and Dice Loss are examples of commonly used functions in the literature.

Dice [21] is an overlap metric widely used in the evaluation of segmentation applications. Performance in this paper is mainly evaluated with Dice, by comparison with the manual gold standard. Dice can be defined as:

Dice = \frac{2\sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2}    (1)

where the sums run over the N voxels of the predicted binary segmentation volume, p_i ∈ P, and the ground truth binary volume, g_i ∈ G. For conversion from a metric to a loss function, one can simply optimize 1 − Dice, thereby optimizing a segmentation overlap metric. This is referred to here as Dice Loss.
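A direct PyTorch sketch of Eq. (1) as a loss on sigmoid outputs (a minimal illustration, not the released code):

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """1 - Dice, with squared terms in the denominator as in Eq. (1)."""
    pred, target = pred.flatten(), target.flatten()
    intersection = (pred * target).sum()
    denominator = (pred ** 2).sum() + (target ** 2).sum() + eps
    return 1.0 - 2.0 * intersection / denominator
```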


To take background information into account, a softmax over two channels representing background and foreground can be used as output. In this case, Generalized Dice Loss (GDL) [21] and Boundary Loss, a recently proposed augmentation of GDL from Kervadec et al. [36], were considered as loss options.

Generalized Dice Loss weights the loss value by the presence of a given label in the target, giving more importance to less present labels. This solves the class imbalance problem that would emerge when using Dice Loss while including the background as a class.

Boundary Loss takes into consideration, alongside the "regional" loss (e.g., GDL), the distance between the boundaries of the prediction and the target, which does not give any weight to the area of the segmentation. Kervadec's work suggests that a loss function that takes boundary distance information into account can improve results, especially for unbalanced datasets. However, one needs to balance the contribution of both components with a weight, defined as α in the following Boundary Loss (B) equation:

B(p, g) = α G(p, g) + (1 − α) S(p, g)    (2)

where G is GDL, the regional component of the loss function, and S is the surface component, which operates on surface distances. The weight factor α changes from epoch to epoch: the weight given to the regional loss is shifted to the surface loss, with α varying from 1 in the first epoch to 0 in the last epoch. We followed the original implementation in [36].
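The sketch below conveys the idea in simplified form: a signed distance map of the ground truth weights the foreground probabilities (the surface term), and α shifts the weight from the regional term to the surface term over training. It follows the spirit of [36], not its exact implementation.

```python
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt: np.ndarray) -> np.ndarray:
    """Positive outside the ground-truth object, negative inside, zero on the boundary."""
    gt = gt.astype(bool)
    return distance_transform_edt(~gt) - distance_transform_edt(gt)

def boundary_loss(prob_fg: torch.Tensor, dist_map: torch.Tensor,
                  regional: torch.Tensor, alpha: float) -> torch.Tensor:
    surface = (prob_fg * dist_map).mean()               # surface component S
    return alpha * regional + (1.0 - alpha) * surface   # Eq. (2)

# alpha decays linearly from 1 (first epoch) to 0 (last epoch), e.g.:
# alpha = 1.0 - epoch / (num_epochs - 1)
```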

4.6. Consensus and Post-processing

The consensus depicted in Figure 2 consists of taking the average of the activations of all three CNNs. A more advanced approach, using a fourth, 3D U-Net as the consensus generator, was also attempted.

After construction of the consensus of activations, a threshold is needed to binarize the segmentation. While developing this methodology, it was noticed that using patches, although improving generalization, resulted in small structures of the brain being recognized as the hippocampus. To remove those false positives, a 3D labeling implementation from [37] was used, with subsequent removal of small non-connected volumes, keeping the 2 largest volumes, or 1 if a second volume is not present (Figure 2). This post-processing is performed after the average consensus of all networks and threshold application.
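A sketch of this consensus and post-processing step using SciPy's connected-component labeling (a simplified illustration under our naming; the paper uses the 3D labeling implementation from [37]):

```python
import numpy as np
from scipy import ndimage

def consensus_postprocess(act_sagittal, act_coronal, act_axial, threshold=0.5, keep=2):
    """Average the three activation volumes, binarize, and keep the (up to) two largest components."""
    consensus = (act_sagittal + act_coronal + act_axial) / 3.0
    mask = consensus >= threshold
    labels, n_components = ndimage.label(mask)            # 3D connected-component labeling
    if n_components <= keep:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
    largest = np.argsort(sizes)[-keep:] + 1               # label ids of the two largest volumes
    return np.isin(labels, largest)
```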

5. Experiments and Results

In this section, experiments on the segmentation methodology are presented, showing the differences in Dice on the HarP test set resulting from our methodological choices. Following that, quantitative and qualitative comparisons with other methods on HarP and HCUnicamp are presented.

Figure 4: Validation and training Dice for all models, using (a) ADAM and (b) RADAM, both with the same hyperparameters and no LR stepping. Early stopping is due to patience. RADAM displays more stability.

5.1. Training: Optimizers, Learning Rate and Scheduling

Training hyperparameters are the same for all networks. Regarding the optimizer of choice and initial learning rate (LR), grid search found an LR of 0.0001 with ADAM [38] and 0.005 with SGD [39] to deliver similar performance. The recent RADAM from Liu et al. [40], with 0.001 initial LR, ended up being the optimizer of choice, due to improved training stability and results (see Figure 4). LR reduction scheduling is used, with multiplication by 0.1 after 250 epochs; its impact is showcased in Figure 5(a). While training on HarP with an 80% hold-out training set, an epoch consisted of going through around 5000 sagittal, 4000 coronal or 3000 axial random patches extracted from slices containing hippocampus, depending on which network is being trained, with a batch size of 200. The maximum number of epochs allowed is 1000, with early stopping after 200 epochs without validation improvement. Weights are only saved for the best validation Dice.
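A skeleton of this training schedule (optimizer, step LR decay, patience-based early stopping and best-weights checkpointing); train_one_epoch and validate are placeholder functions we assume, and torch.optim.RAdam requires a recent PyTorch.

```python
import torch

def fit(model, train_one_epoch, validate, max_epochs=1000, patience=200):
    optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)  # RADAM, initial LR 0.001
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=250, gamma=0.1)
    best_dice, epochs_without_improvement = 0.0, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer)
        val_dice = validate(model)
        scheduler.step()
        if val_dice > best_dice:                                # save only the best validation Dice
            best_dice, epochs_without_improvement = val_dice, 0
            torch.save(model.state_dict(), "best_weights.pt")
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:          # patience-based early stopping
                break
```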

5.2. Hyperparameter Experiments

Some of the most important hyperparameter experiments can be seen in Table 2. Results from each change in methodology or architecture were calculated using the full consensus outlined in Figure 2; in other words, all three networks are trained with the same parameters and Dice is calculated after consensus and post-processing. For these experiments, an 80/20% hold-out on HarP was used, keeping Alzheimer's labels balanced. Reported Dice is the mean over the 20% test set.

Regarding modifications to the basic U-Net architecture, the addition of residual connections, batch normalization and encoder weight initialization improved convergence stability and reduced overfitting. VGG11 weights worked better than ResNet34 weights or Kaiming uniform initialization [41]. The use of random patches (Aug. 0) with neighboring slices (E2D), instead of center-cropped 128² slices, also reduced overfitting, while increasing the number of false positive activations, which are handled by post-processing.

For the patch selection strategy mentioned in Section 4.4, an 80/20% balance between positive and negative patches, respectively, resulted in better convergence and fewer false positives than a 50/50% balance. Early experiments compared patch sizes of 16², 32² and 64². For 16², one less U-Net layer was used, with 3 max pooling/transposed convolution stages instead of 4.

Optimizer   LR       Loss        Aug.         HarP (Dice)
SGD         0.005    Dice Loss   0, 1, 2, 3   0.8748
SGD         0.005    Dice Loss   0            0.8760
ADAM        0.0001   Dice Loss   0            0.8809
ADAM        0.0001   Dice Loss   0, 1         0.8820
ADAM        0.0001   Dice Loss   0, 2         0.8827
ADAM        0.0001   Dice Loss   0, 3         0.8832
ADAM        0.0001   GDL         0            0.8830
ADAM        0.0001   GDL         0, 1, 2, 3   0.8862
ADAM        0.0001   Boundary    0            0.9068
RADAM       0.0001   Boundary    0            0.9071
RADAM       0.001    Boundary    0, 1, 2, 3   0.9117
RADAM       0.001    Boundary    0            0.9133

Table 2: Some of the most relevant hyperparameter experiment results, using a hold-out approach on HarP. Aug. refers to which data augmentation strategies from Table 1 were used. The last two rows correspond to the final models used in the next section. All tests in this table use 64² patch size and the modified U-Net architecture.

Smaller patches resulted in less stable training, although final results for 32² and 64² patches were not significantly different. 64² was chosen as the patch size from here onward.

While using a single-channel sigmoid activation as output, early experiments showed Dice Loss to have the best convergence and results, beating MSE and BCE. A softmax output with GDL achieved results similar to Dice Loss. However, the implementation of a recent improvement to GDL in the form of Boundary Loss resulted in slightly better test Dice.

We found that augmentation techniques other than random patches only slightly impacted overlap results on HarP, sometimes even making test results worse. Augmentation's most relevant impact, however, was avoiding overfitting and very early stopping due to lack of validation improvement in some cases, which led to unstable networks.


Figure 5: (a) Training and validation Dice curves for the best model, with RADAM and LR stepping. (b) Boxplot of Dice on the HarP test set ("Consensus vs. individual CNNs"), showing the improvement in variance and mean Dice of the consensus compared to using only one network. In the individual network studies, post-processing is also applied to remove false positives.

We found that, as empirically expected, the consensus of the results from the three networks brings less variance to the final Dice, as seen in Figure 5(b). Early studies confirmed 0.5 as the best threshold value after activation averaging. Attempts at using a fourth, 3D U-Net as a consensus generator/error correction phase did not change results significantly.

5.3. Quantitative Results

In this section, we report quantitative results of our method and others from the literature on both HarP and HCUnicamp. For comparison's sake, we also trained an off-the-shelf 3D U-Net architecture from Isensee et al. [9], originally a brain tumor segmentation work, with ADAM and HarP center crops as input. For the evaluation of the QuickNat [7] method, volumes and targets needed to be conformed to its required format, causing interpolation. As far as we know, the method does not have a way to return its predictions in the volume's original space, so Dice was calculated with the masks in the conformed space. Note that QuickNat performs segmentation of multiple brain structures.


5.3.1. HarP

Deep Learning methods                     HarP (Dice)
3D U-Net - Isensee et al. [9] (2017)      0.86
Hippodeep - Thyreau et al. [8] (2018)     0.85
QuickNat - Roy et al. [7] (2019)          0.80
Ataloglou et al. [19] (2019)              0.90*
E2DHipseg (this work)                     0.90*

Atlas-based methods
FreeSurfer v6.0 [5] (2012)                0.70
Chincarini et al. [14] (2016)             0.85
Platero et al. [15] (2017)                0.85

Table 3: Reported testing results on HarP. This work is named E2DHipseg. Results with * were calculated following a 5-fold cross validation.

The best hold-out mean Dice is 0.9133. Regarding specific Alzheimer's classes in the test set, our method achieves 0.9094 Dice for CN, 0.9378 for MCI and 0.9359 for AD cases. When using a hold-out approach on a relatively small dataset such as HarP, the model can overfit to that specific test set. With that in mind, we also report results with cross validation: 5-fold training and testing is used, where all three networks are trained and tested on each fold. With 5-fold cross validation our model achieved 0.90 ± 0.01 Dice. Results reported by other works are presented in Table 3. Our methodology has performance similar to what is reported in Ataloglou et al.'s recent, simultaneous work [19]. Interestingly, the initial methodology of both methods is similar, in the use of multiple 2D CNNs.

5.3.2. HCUnicamp

As described previously, the HCUnicamp dataset lacks one of the hippocampi in many of its scans (Figure 1), and it was used to examine the generalization capability of these methods.

HCUnicamp (Controls)
Method                                  Both (Dice)   Left (Dice)   Right (Dice)   Precision     Recall
3D U-Net - Isensee et al. [9] (2017)    0.80 ± 0.04   0.81 ± 0.04   0.78 ± 0.04    0.76 ± 0.10   0.85 ± 0.06
Hippodeep - Thyreau et al. [8] (2018)   0.80 ± 0.05   0.81 ± 0.05   0.80 ± 0.05    0.72 ± 0.10   0.92 ± 0.04
QuickNat - Roy et al. [7] (2019)        0.80 ± 0.05   0.80 ± 0.05   0.79 ± 0.05    0.71 ± 0.11   0.92 ± 0.04
E2DHipseg without Aug.                  0.82 ± 0.03   0.83 ± 0.03   0.82 ± 0.03    0.78 ± 0.10   0.88 ± 0.06
E2DHipseg with Aug.                     0.82 ± 0.03   0.83 ± 0.03   0.82 ± 0.04    0.78 ± 0.10   0.89 ± 0.06

HCUnicamp (Patients)
3D U-Net - Isensee et al. [9] (2017)    0.74 ± 0.08   0.48 ± 0.39   0.56 ± 0.36    0.66 ± 0.12   0.87 ± 0.07
Hippodeep - Thyreau et al. [8] (2018)   0.74 ± 0.08   0.48 ± 0.39   0.57 ± 0.37    0.63 ± 0.12   0.91 ± 0.06
QuickNat - Roy et al. [7] (2019)        0.71 ± 0.08   0.47 ± 0.38   0.56 ± 0.36    0.59 ± 0.12   0.92 ± 0.06
E2DHipseg without Aug.                  0.77 ± 0.07   0.49 ± 0.40   0.58 ± 0.37    0.69 ± 0.11   0.88 ± 0.07
E2DHipseg with Aug.                     0.76 ± 0.07   0.50 ± 0.40   0.58 ± 0.37    0.68 ± 0.11   0.89 ± 0.07

Table 4: Locally executed testing results on HCUnicamp. All 190 volumes from the dataset are included, and no model saw them during training. The 3D U-Net here uses the same weights as in Table 3. QuickNat performs whole-brain multi-task segmentation, not only hippocampus segmentation.

Table 4 reports mean and standard deviation Dice for all HCUnicamp volumes, using both masks or only the left or right mask, for multiple methods. "With Aug." refers to the use of augmentations 1, 2 and 3 in training, in addition to 0. We also report per-voxel Precision and Recall, where positives are hippocampus voxels and negatives are non-hippocampus voxels. Precision is defined by TP/(TP + FP) and Recall is defined by TP/(TP + FN), where TP are true positives, FP are false positives and FN are false negatives. All tests were run locally. Unfortunately, we were not able to reproduce Ataloglou et al.'s method for local testing.
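For reference, a small sketch of the per-voxel evaluation used here (Dice, precision and recall from binary masks):

```python
import numpy as np

def evaluate(pred: np.ndarray, target: np.ndarray):
    """Dice, precision and recall between a binary prediction and the manual annotation."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    dice = 2.0 * tp / (2.0 * tp + fp + fn) if (tp + fp + fn) > 0 else 1.0
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return dice, precision, recall
```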

Our method performed better than other recent methods from the literature on the HCUnicamp dataset, even though HCUnicamp was not involved in our methodology development. However, no method was able to achieve more than 0.8 mean Dice on the Epilepsy patients. The high number of false positives due to hippocampus removal is noticeable in the low left and right Dice and in the low precision. The impact of the additional augmentations was not statistically significant in the Epilepsy domain.

Our method takes around 15 seconds per volume on a mid-range GPU and 3 minutes on a consumer CPU. All the code used in its development is available at github.com/dscarmo/e2dhipseg, with instructions on how to run it on an input volume. A free executable version for medical research use, without environment setup requirements, is in development and will be available in the repository soon. To avoid problems with different head orientations, there is an option to use MNI152 registration when predicting on a given volume. Even when performing registration, the output mask will be in the input volume's space, using the inverse transform. Regarding pre-processing requirements, our method only requires the volume to be in the correct orientation. This can be achieved with rigid registration, provided as an option, in a similar way to Hippodeep. A GPU is recommended for faster prediction but is not necessary.

Figure 6: Multiview and 3D render of our (a) best and (b) worst cases in the HarP test set. Prediction in green, target in red and overlap in purple.

5.4. Qualitative Results

While visually inspecting HarP results, very low variance was found, with no heavy outliers. This is indicated by the low deviation in the consensus boxplot in Figure 5(b) and by the best and worst segmentations in Figure 6. Other methods present similarly stable results.

In HCUnicamp, however, many more errors are visible in the worst segmentations (Figure 7(b)), especially where the hippocampus was removed. Other methods have similar results, with false positives in voxels where the hippocampus would be in a healthy subject or an Alzheimer's patient. As expected, the best segmentation, displayed in Figure 7(a), was in a healthy control subject.

Figure 7: Multiview and 3D render of our (a) best and (b) worst cases in the HCUnicamp dataset. Prediction in green, target in red and overlap in purple.

6. Discussion

Regarding the consensus approach of our method, most of the false positives produced by individual networks are eliminated by the averaging of activations followed by thresholding and post-processing. This approach allows the methodology to focus on good segmentation in the hippocampus area, without worrying about small false positives in other areas of the brain. It was also observed that, in some cases, one of the networks fails and the other two "save" the result. This is visible in the outliers in Figure 5(b).

The fact that patches are randomly selected and augmented at runtime means they are mostly not repeated in different epochs. This is different from building a large dataset of pre-processed patches with augmentation. We believe this random variation during training is very important to ensure the network keeps seeing different data in different epochs, improving generalization. This idea is similar to the Dropout technique [42], only applied to the data instead of the weights. Even with all the data randomness, re-runs of the same experiment resulted mostly in the same final results, within 0.01 mean Dice of each other.

Interestingly, our method achieved better Dice on test scans with Alzheimer's disease than on control subjects. This suggests that our method is focusing its learning on the Alzheimer's atrophies present in the HarP dataset and is able to adapt to them.

As visible in the results of multiple methods, Dice on the HCUnicamp dataset is not at the same level as what is seen on the public benchmark. Most methods produce false positives in the removed hippocampus area, in a similar fashion to Figure 7(b). The fact that QuickNat and Hippodeep have separate outputs for the left and right hippocampus does not seem to be enough to solve this problem. We believe the high false positive rate is due to textures similar to the hippocampus remaining in the hippocampus area after its removal. This could possibly be solved with a preliminary hippocampus presence detection phase.

7. Conclusion

This paper presents a hippocampus segmentation method consisting of a consensus of multiple U-Net based CNNs and traditional post-processing, successfully using a recent optimizer and loss function from the literature. The presented method achieves state-of-the-art performance on the public HarP hippocampus segmentation benchmark. We raised the hypothesis that current automatic hippocampus segmentation methods, including our own, would not reach the same performance on our in-house Epilepsy dataset, which contains cases of hippocampus removal. Quantitative and qualitative results show that those methods fail to take hippocampus removal into account on unseen data. This raises the concern that current automatic hippocampus segmentation methods are not ready for outliers such as the ones shown in this paper. In future work, our method can be improved to detect the removal of the hippocampus as a pre-processing step, using part of HCUnicamp as training data.

Conflict of Interest Statement

We have no conflicts of interest to declare.

Acknowledgements

We thank FAPESP for funding this research under grant 2018/00186-0, our partners at BRAINN (FAPESP grants 2013/07559-3 and 2015/10369-7) for letting us use their dataset in this research, and CNPq for research funding, process numbers 310828/2018-0 and 308311/2016-7.

References

[1] P. Andersen, The hippocampus book, Oxford University Press, 2007.

[2] E. Ghizoni, J. Almeida, A. F. Joaquim, C. L. Yasuda, B. M. de Campos, H. Tedeschi, F. Cendes, Modified anterior temporal lobectomy: anatomical landmarks and operative technique, Journal of Neurological Surgery Part A: Central European Neurosurgery 76 (05) (2015) 407–414.

[3] R. C. Petersen, P. Aisen, L. A. Beckett, M. Donohue, A. Gamst, D. J. Harvey, C. Jack, W. Jagust, L. Shaw, A. Toga, et al., Alzheimer's disease neuroimaging initiative (ADNI): clinical characterization, Neurology 74 (3) (2010) 201–209.

[4] E. Ghizoni, R. N. Matias, S. Lieber, B. M. de Campos, C. L. Yasuda, P. C. Pereira, A. C. S. Amato Filho, A. F. Joaquim, T. M. Lopes, H. Tedeschi, et al., Clinical and imaging evaluation of transuncus selective amygdalohippocampectomy, World Neurosurgery 100 (2017) 665–674.

[5] B. Fischl, FreeSurfer, NeuroImage 62 (2) (2012) 774–781.

[6] C. S. McCarthy, A. Ramprashad, C. Thompson, J.-A. Botti, I. L. Coman, W. R. Kates, A comparison of FreeSurfer-generated data with and without manual intervention, Frontiers in Neuroscience 9 (2015) 379.

[7] A. G. Roy, S. Conjeti, N. Navab, C. Wachinger, A. D. N. Initiative, et al., QuickNAT: A fully convolutional network for quick and accurate segmentation of neuroanatomy, NeuroImage 186 (2019) 713–727.

[8] B. Thyreau, K. Sato, H. Fukuda, Y. Taki, Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing, Medical Image Analysis 43 (2018) 214–228.

[9] F. Isensee, P. Kickingereder, W. Wick, M. Bendszus, K. H. Maier-Hein, Brain tumor segmentation and radiomics survival prediction: contribution to the BraTS 2017 challenge, in: International MICCAI Brainlesion Workshop, Springer, 2017, pp. 287–297.

[10] D. Carmo, B. Silva, C. Yasuda, L. Rittner, R. Lotufo, Extended 2D volumetric consensus hippocampus segmentation, in: International Conference on Medical Imaging with Deep Learning, 2019.

[11] H. Wang, J. W. Suh, S. R. Das, J. B. Pluta, C. Craige, P. A. Yushkevich, Multi-atlas segmentation with joint label fusion, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (3) (2013) 611–623.

[12] J. E. Iglesias, M. R. Sabuncu, Multi-atlas segmentation of biomedical images: a survey, Medical Image Analysis 24 (1) (2015) 205–219.

[13] J. Pipitone, M. T. M. Park, J. Winterburn, T. A. Lett, J. P. Lerch, J. C. Pruessner, M. Lepage, A. N. Voineskos, M. M. Chakravarty, A. D. N. Initiative, et al., Multi-atlas segmentation of the whole hippocampus and subfields using multiple automatically generated templates, NeuroImage 101 (2014) 494–512.

[14] A. Chincarini, F. Sensi, L. Rei, G. Gemme, S. Squarcia, R. Longo, F. Brun, S. Tangaro, R. Bellotti, N. Amoroso, et al., Integrating longitudinal information in hippocampal volume measurements for the early detection of Alzheimer's disease, NeuroImage 125 (2016) 834–847.

[15] C. Platero, M. C. Tobar, Combining a patch-based approach with a non-rigid registration-based label fusion method for the hippocampal segmentation in Alzheimer's disease, Neuroinformatics 15 (2) (2017) 165–183.

[16] Y. Chen, B. Shi, Z. Wang, P. Zhang, C. D. Smith, J. Liu, Hippocampus segmentation through multi-view ensemble convnets, in: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), IEEE, 2017, pp. 192–196.

[17] Z. Xie, D. Gillies, Near real-time hippocampus segmentation using patch-based canonical neural network, arXiv preprint arXiv:1807.05482.

[18] C. Wachinger, M. Reuter, T. Klein, DeepNAT: Deep convolutional neural network for segmenting neuroanatomy, NeuroImage 170 (2018) 434–445.

[19] D. Ataloglou, A. Dimou, D. Zarpalas, P. Daras, Fast and precise hippocampus segmentation through deep convolutional neural network ensembles and transfer learning, Neuroinformatics (2019) 1–20.

[20] N. K. Dinsdale, M. Jenkinson, A. I. Namburete, Spatial warping network for 3D segmentation of the hippocampus in MR images, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2019, pp. 284–291.

[21] C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, M. J. Cardoso, Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations, in: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, 2017, pp. 240–248.

[22] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234–241.

[23] A. Krizhevsky, I. Sutskever, G. E. Hinton, ImageNet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.

[24] N. Nogovitsyn, R. Souza, M. Muller, A. Srajer, S. Hassel, S. R. Arnott, A. D. Davis, G. B. Hall, J. K. Harris, M. Zamyadi, et al., Testing a deep convolutional neural network for automated hippocampus segmentation in a longitudinal sample of healthy participants, NeuroImage 197 (2019) 589–597.

[25] M. Boccardi, M. Bocchetta, F. C. Morency, D. L. Collins, M. Nishikawa, R. Ganzola, M. J. Grothe, D. Wolf, A. Redolfi, M. Pievani, et al., Training labels for hippocampal segmentation based on the EADC-ADNI harmonized hippocampal protocol, Alzheimer's & Dementia 11 (2) (2015) 175–183.

[26] D. Carmo, B. Silva, C. Yasuda, L. Rittner, R. Lotufo, Extended 2D volumetric consensus hippocampus segmentation, arXiv preprint arXiv:1902.04487.

[27] O. Lucena, R. Souza, L. Rittner, R. Frayne, R. Lotufo, Silver standard masks for data augmentation applied to deep-learning-based skull-stripping, in: Biomedical Imaging (ISBI 2018), 2018 IEEE 15th International Symposium on, IEEE, 2018, pp. 1114–1117.

[28] M. Brett, K. Christoff, R. Cusack, J. Lancaster, et al., Using the Talairach atlas with the MNI template, NeuroImage 13 (6) (2001) 85–85.

[29] M. Pereira, R. Lotufo, L. Rittner, An extended-2D CNN approach for diagnosis of Alzheimer's disease through structural MRI, in: Proceedings of the 27th Annual Meeting of ISMRM, 2019.

[30] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.

[31] S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, arXiv preprint arXiv:1502.03167.

[32] S. K. Kumar, On weight initialization in deep neural networks, arXiv preprint arXiv:1704.08863.

[33] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556.

[34] V. Iglovikov, A. Shvets, TernausNet: U-Net with VGG11 encoder pre-trained on ImageNet for image segmentation, arXiv preprint arXiv:1801.05746.

[35] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, R. M. Summers, Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning, IEEE Transactions on Medical Imaging 35 (5) (2016) 1285–1298.

[36] H. Kervadec, J. Bouchtiba, C. Desrosiers, E. Granger, J. Dolz, I. Ben Ayed, Boundary loss for highly unbalanced segmentation, in: M. J. Cardoso, A. Feragen, B. Glocker, E. Konukoglu, I. Oguz, G. Unal, T. Vercauteren (Eds.), Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, Vol. 102 of Proceedings of Machine Learning Research, PMLR, London, United Kingdom, 2019, pp. 285–296. URL http://proceedings.mlr.press/v102/kervadec19a.html

[37] E. R. Dougherty, R. A. Lotufo, Hands-on Morphological Image Processing, Vol. 59, SPIE Press, 2003.

[38] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.

[39] Y. Bengio, I. J. Goodfellow, A. Courville, Deep learning, Nature 521 (2015) 436–444.

[40] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, J. Han, On the variance of the adaptive learning rate and beyond, arXiv preprint arXiv:1908.03265.

[41] K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.

[42] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, The Journal of Machine Learning Research 15 (1) (2014) 1929–1958.