Underwater sparse image classification using deep convolutional neural networks

Underwater sparse image classification using deep convolutional neural networks Mohamed Elawady Heriot-Watt University VIBOT Msc 2014 26 Nov 2015 Deep Learning Workshop, Lyon, 2015 1


Page 1: Underwater sparse image classification using deep convolutional neural networks

Underwater sparse image classification using deep

convolutional neural networks

Mohamed Elawady

Heriot-Watt University, VIBOT MSc 2014

26 Nov 2015, Deep Learning Workshop, Lyon, 2015

Page 2: Underwater sparse image classification using deep convolutional neural networks

About Me!

2014 – Current:

• PhD in Image Processing (Hubert Curien Laboratory, Jean Monnet University [FR])

• Thesis: “Automatic reading of photographs using formal analysis in visual arts”, supervised by Christophe Ducottet, Cecile Barat, Philippe Colantoni.

2012 – 2014:

• Erasmus Mundus European Masters in Vision and Robotics (VIBOT)

(University of Burgundy [FR], University of Girona [SP], Heriot-Watt University [UK]).

• Thesis: “Sparse coral classification using deep convolutional neural networks” supervised by Neil Robertson, David Lane.

2003 – 2007:

• Bachelor of Science in Computers & Informatics [Major: Computer Science] (Faculty of Computers & Informatics, Suez Canal University [EG]).

• Thesis: “Self-Authenticating Image Watermarking System”.


Page 3: Underwater sparse image classification using deep convolutional neural networks

Outline

• Coralbots
• Introduction
• Problem Definition
• Related Work
• Methodology
• Results
• Conclusion and Future Work
• Deep Learning Workshop, Edinburgh, 2014
• Summer Internship 2014


Page 4: Underwater sparse image classification using deep convolutional neural networks

Coralbots

• Team members: Lea-Anne Henry, Neil Robertson, David Lane and David Corne.

• Target: deep-sea diving robots to save the world’s coral reefs.

• Progress: three generations of VIBOT MSc work (2013 – 2015).

• Resources:
– http://www.coralbots.org/
– https://youtu.be/MJ-_d3HZOi4
– https://youtu.be/6q4UiuiqZuA


Page 5: Underwater sparse image classification using deep convolutional neural networks

Coralbots


Page 6: Underwater sparse image classification using deep convolutional neural networks

Introduction

Fast facts about coral:

• Coral consists of tiny animals (not plants).

• It takes a long time to grow (0.5 – 2 cm per year).

• It exists in more than 200 countries.

• It generates 29.8 billion dollars per year through different ecosystem services.

• 10% of the world's coral reefs are dead, and more than 60% of the world's reefs are at risk due to human-related activities.

• By 2050, all coral reefs will be in danger.

Page 7: Underwater sparse image classification using deep convolutional neural networks

Introduction

Coral transplantation: coral gardening through the involvement of SCUBA divers in coral reef reassembly and transplantation.

Examples: Reefscapers Project 2001 in the Maldives & Save Coral Reefs 2012 in Thailand.

Limitations: time & depth per dive session.

Robot-based strategy for deep-sea coral restoration: intelligent autonomous underwater vehicles (AUVs) grasp cold-water coral samples and replant them in damaged reef areas.

Page 8: Underwater sparse image classification using deep convolutional neural networks

Problem Definition

Dense classification:
• Millions of coral images; thousands of hours of underwater videos.
• A massive number of hours is needed to annotate every pixel inside each coral image or video frame.

Manual sparse classification:
• Manually annotated by coral experts, matching some random uniform pixels to target classes.
• More than 400 hours are required to annotate 1000 images (200 coral-labelled points per image).

Automatic sparse classification:
• A supervised learning algorithm annotates images autonomously.
• Input data are ROIs (regions of interest) around random points.

Page 9: Underwater sparse image classification using deep convolutional neural networks

Problem Definition

MLC Dataset: Moorea Labeled Corals (MLC)
• University of California, San Diego (UCSD)
• Island of Moorea in French Polynesia
• ~2000 images (2008, 2009, 2010)
• 200 labeled points per image

Page 10: Underwater sparse image classification using deep convolutional neural networks

Problem Definition


5 Coral Classes

• Acropora “Acrop”

• Pavona “Pavon”

• Montipora “Monti”

• Pocillopora “Pocill”

• Porites “Porit”

4 Non-coral Classes

• Crustose Coralline Algae “CCA”

• Turf algae “Turf”

• Macroalgae “Macro”

• Sand “Sand”

MLC Dataset


Page 11: Underwater sparse image classification using deep convolutional neural networks

Problem Definition

ADS Dataset: Atlantic Deep Sea (ADS)
• Heriot-Watt University (HWU)
• North Atlantic, west of Scotland and Ireland
• ~160 images (2012)
• 200 labeled points per image

Page 12: Underwater sparse image classification using deep convolutional neural networks

Problem Definition


5 Coral Classes

• DEAD “Dead Coral”

• ENCW “Encrusting White Sponge”

• LEIO “Leiopathes Species”

• LOPH “Lophelia”

• RUB “Rubble Coral”

4 Non-coral Classes

• BLD “Boulder”

• DRK “Darkness”

• GRAV “Gravel”

• Sand “Sand”

ADS Dataset


Page 13: Underwater sparse image classification using deep convolutional neural networks

Related Work


Page 14: Underwater sparse image classification using deep convolutional neural networks

Related Work


Page 15: Underwater sparse image classification using deep convolutional neural networks

Related Work


Page 16: Underwater sparse image classification using deep convolutional neural networks

Related Work

Sparse (Point-Based) Classification

Page 17: Underwater sparse image classification using deep convolutional neural networks

Methodology

Shallow vs deep classification:

Traditional architectures extract hand-designed key features based on human analysis of the input data.

Modern architectures learn features across hidden layers, starting from low-level details up to high-level details.

Structure of network hidden layers:
• Trainable weights and biases.
• Independent relationships within the objects inside.
• Pre-defined range measures.
• Further, faster calculation.

Page 18: Underwater sparse image classification using deep convolutional neural networks

Methodology

“LeNet-5” by LeCun, 1998: the first back-propagation convolutional neural network (CNN) for handwritten digit recognition.

Page 19: Underwater sparse image classification using deep convolutional neural networks

Methodology

Recent CNN applications

Object classification:
• Buyssens (2012): cancer cell image classification.
• Krizhevsky (2013): large-scale visual recognition challenge 2012.

Object recognition:
• Girshick (2013): PASCAL visual object classes challenge 2012.
• Syafeeza (2014): face recognition system.
• Pinheiro (2014): scene labelling.

Object detection system overview (Girshick): more than 10% better than the top contest performer.

Page 20: Underwater sparse image classification using deep convolutional neural networks

Methodology


Proposed CNN framework


Page 21: Underwater sparse image classification using deep convolutional neural networks

Methodology

Proposed CNN framework:
• 3 basic channels (RGB)
• Extra channels (feature maps)
• Color enhancement
• Finding suitable weights of the convolutional kernels and additive biases
• Classification layer

Page 22: Underwater sparse image classification using deep convolutional neural networks

Methodology


Proposed CNN framework


Page 23: Underwater sparse image classification using deep convolutional neural networks

Methodology

Hybrid patching:

Three different-sized patches are selected around each annotated point (61×61, 121×121, 181×181).

Scaling patches up to the size of the largest patch (181×181) blurs inter-shape coral details while keeping the coral's edges and corners.

Scaling patches down to the size of the smallest patch (61×61) gives fast classification computation.
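As an illustrative sketch (not the thesis code), the hybrid-patching step can be written as follows; the nearest-neighbour resize and the assumption that annotated points lie at least 90 px from the image border are mine:

```python
import numpy as np

def nn_resize(img, size):
    """Nearest-neighbour resize of a square 2-D map to (size, size)."""
    H, W = img.shape
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return img[rows[:, None], cols[None, :]]

def hybrid_patches(img, r, c, sizes=(61, 121, 181), out=61):
    """Crop three centred patches around the annotated point (r, c) and
    rescale each to a common size (down-scaling to 61x61 here, matching
    the fast-computation choice in the slides)."""
    patches = []
    for s in sizes:
        h = s // 2
        # assumes the point is far enough from the image border
        patch = img[r - h:r + h + 1, c - h:c + h + 1]
        patches.append(nn_resize(patch, out))
    return np.stack(patches)

img = np.random.RandomState(0).rand(400, 400)
stack = hybrid_patches(img, 200, 200)   # shape (3, 61, 61)
```

The 61×61 crop passes through the resize unchanged, so the smallest scale is exactly the raw neighbourhood around the point.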

Page 24: Underwater sparse image classification using deep convolutional neural networks

Methodology

Feature maps:

Zero-phase Component Analysis (ZCA) whitening makes the data less redundant by removing correlations between adjacent pixels.

The Weber Local Descriptor (WLD) gives a robust edge representation of high-texture images that is resilient to noisy changes in scene illumination.

Phase Congruency (PC) represents image features in a form that is high in information and low in redundancy, using the Fourier transform.
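The ZCA whitening channel above can be sketched in a few lines of numpy (an illustrative version; the `eps` regularizer and patch shape are my assumptions, not taken from the slides):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: decorrelate features (adjacent pixel values) while
    staying close to the original image space.
    X: (n_samples, n_features) array of flattened patches."""
    Xc = X - X.mean(axis=0)                          # zero-mean features
    cov = Xc.T @ Xc / Xc.shape[0]                    # feature covariance
    U, S, _ = np.linalg.svd(cov)                     # eigendecomposition
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T    # ZCA transform
    return Xc @ W

X = np.random.RandomState(0).rand(500, 16)           # 500 flattened 4x4 patches
Xw = zca_whiten(X)
cov_w = Xw.T @ Xw / Xw.shape[0]                      # ~ identity after whitening
```

After whitening, the feature covariance is approximately the identity, which is what "removing neighbouring correlations" means in practice.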

Page 25: Underwater sparse image classification using deep convolutional neural networks

Methodology

Color enhancement:

Bazeille '06 addresses the difficulty of capturing good-quality underwater images under non-uniform lighting and underwater perturbation.

Iqbal '07 corrects underwater lighting problems due to light absorption, vertical polarization, and sea structure.

Beijbom '12 compensates for color differences caused by underwater turbidity and illumination.

Page 26: Underwater sparse image classification using deep convolutional neural networks

Methodology


Proposed CNN framework


Page 27: Underwater sparse image classification using deep convolutional neural networks

Methodology

Kernel weights & bias initialization:

The network initializes biases to zero, and kernel weights from a uniform random distribution over the range

w ∈ [−√(6 / ((N_in + N_out) · k²)), +√(6 / ((N_in + N_out) · k²))]

where N_in and N_out are the numbers of input and output maps for each hidden layer (e.g. the number of input maps for layer 1 is 1 for a gray-scale image or 3 for a color image), and k is the size of the convolution kernel for each hidden layer.
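A minimal sketch of this initialization, assuming the Glorot-style fan-based range above (the slide's formula image is not in the transcript, so the exact constant is an assumption):

```python
import numpy as np

def init_conv_layer(n_in, n_out, k, rng=None):
    """Zero biases; kernel weights uniform over
    +/- sqrt(6 / ((n_in + n_out) * k^2)),
    where n_in/n_out are input/output map counts and k the kernel size."""
    rng = rng or np.random.RandomState(0)
    bound = np.sqrt(6.0 / ((n_in + n_out) * k * k))
    weights = rng.uniform(-bound, bound, size=(n_in, n_out, k, k))
    biases = np.zeros(n_out)
    return weights, biases

w, b = init_conv_layer(n_in=3, n_out=6, k=5)   # e.g. RGB input, 6 output maps
```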

Page 28: Underwater sparse image classification using deep convolutional neural networks

Methodology

Convolution layer:

The convolution layer constructs output maps by convolving trainable kernels over the input maps to extract/combine features for better network behaviour:

x_j^l = f( Σ_i x_i^(l−1) ∗ k_ij^l + b_j^l )

where x_i^(l−1) and x_j^l are output maps of the previous (l−1) and current (l) layers, with input kernel number i and output kernel number j and kernel weights k_ij^l, f(·) is the sigmoid activation function applied to the summed maps, and b_j^l is the additive bias of the current layer l for output kernel number j.
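The forward pass of this layer can be sketched directly from the equation (an illustrative, unoptimized numpy version with a 'valid' convolution; the map sizes below are arbitrary examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_valid(x, k):
    """'valid' 2-D convolution (kernel flipped, i.e. true convolution)."""
    kh, kw = k.shape
    kf = k[::-1, ::-1]
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * kf)
    return out

def conv_layer(inputs, kernels, biases):
    """x_j = sigmoid(sum_i conv(x_i, k_ij) + b_j) for each output map j.
    inputs: list of 2-D input maps; kernels[i][j]: 2-D kernel; biases[j]: scalar."""
    outs = []
    for j in range(len(biases)):
        acc = sum(conv2d_valid(x, kernels[i][j]) for i, x in enumerate(inputs))
        outs.append(sigmoid(acc + biases[j]))
    return outs

rng = np.random.RandomState(0)
inputs = [rng.rand(8, 8) for _ in range(3)]                        # 3 input maps
kernels = [[rng.rand(3, 3) for _ in range(2)] for _ in range(3)]   # 3 in x 2 out
outs = conv_layer(inputs, kernels, [0.0, 0.0])                     # two 6x6 maps
```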

Page 29: Underwater sparse image classification using deep convolutional neural networks

Methodology


Proposed CNN framework


Page 30: Underwater sparse image classification using deep convolutional neural networks

Methodology

Down-sampling layer:

The down-sampling layer reduces the dimensionality of the feature maps through the network's layers, starting from the input image and ending at a sufficiently small feature representation, which speeds up the network's matrix computations:

y_j^l = w · h_n(x_j^l)

where h_n is a non-overlapping averaging function of size n×n with neighbourhood weight w, applied to the convolved map x of kernel number j at layer l to obtain the lower-dimensional output map y of kernel number j at layer l (e.g. a 64×64 input map is reduced with n = 2 to a 32×32 output map).
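The non-overlapping n×n averaging can be sketched as a reshape-and-mean (an illustrative version assuming the map dimensions divide evenly by n, and omitting the trainable weight w, i.e. w = 1):

```python
import numpy as np

def avg_pool(x, n):
    """Non-overlapping n x n average pooling (H and W must divide by n)."""
    H, W = x.shape
    return x.reshape(H // n, n, W // n, n).mean(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)
y = avg_pool(x, 2)   # 4x4 -> 2x2; each output is the mean of one 2x2 block
```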

Page 31: Underwater sparse image classification using deep convolutional neural networks

Methodology


Proposed CNN framework


Page 32: Underwater sparse image classification using deep convolutional neural networks

Methodology

Learning rate:

An adaptive learning rate is used rather than a constant one, updated with respect to the network's status and performance, where α_n and α_(n−1) are the learning rates of the current and previous iterations (for the first iteration, the previous value is the initial learning rate given as network input), n and N are the current iteration number and the total number of iterations, e_n is the back-propagated error of the current iteration, and g(·) is a linear limiting function that keeps the learning rate in the range (0, 1].

Page 33: Underwater sparse image classification using deep convolutional neural networks

Methodology

Error back-propagation:

The network is trained by back-propagating a squared-error loss function:

E = (1 / 2N) · Σ_(n=1..N) Σ_(c=1..C) (t_nc − y_nc)²

where N and C are the numbers of training samples and output classes, and t and y are the target and actual outputs.
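The loss can be computed in one line (a sketch; the 1/(2N) normalization matches the reconstructed equation and common practice, but is an assumption since the slide's formula image is not in the transcript):

```python
import numpy as np

def squared_error_loss(t, y):
    """E = (1 / 2N) * sum over samples n and classes c of (t_nc - y_nc)^2."""
    N = t.shape[0]
    return np.sum((t - y) ** 2) / (2.0 * N)

t = np.eye(3)                 # one-hot targets: 3 samples, 3 classes
y = np.full((3, 3), 1.0 / 3)  # a maximally uncertain prediction
loss = squared_error_loss(t, y)
```

Each row contributes (2/3)² + 2·(1/3)² = 2/3, so the loss here is 3·(2/3)/(2·3) = 1/3.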

Page 34: Underwater sparse image classification using deep convolutional neural networks

Results

Parameters for Experimental Results

Ratio of training/test sets: 2:1
Size of hybrid input image: (61 × 61), (121 × 121), (181 × 181)
Number of input channels: 3 (RGB), 4 (RGB + one of WLD, PC, ZCA), 6 (RGB + WLD + PC + ZCA)
Number of samples per class: 300
Enhancement for RGB input: Bazeille '06, Iqbal '07, Beijbom '12, no enhancement
Normalization method: min-max
Initial learning rate: 1
Network batch size: 3
Number of network epochs: 10
Number of hidden output maps: (6-12), (12-24), (24-48)
Size of last hidden output maps: 4 × 4
Number of output classes: 9
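The min-max normalization listed above can be sketched as (an illustrative per-channel version; the [0, 1] target range is an assumption):

```python
import numpy as np

def min_max_normalize(x, lo=0.0, hi=1.0):
    """Min-max normalization of an input channel to the range [lo, hi]."""
    xmin, xmax = x.min(), x.max()
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

x = np.array([2.0, 4.0, 6.0])
y = min_max_normalize(x)   # maps the channel's min to 0 and max to 1
```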

Page 35: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Experimental results on hybrid patching:

Unified-scaling multi-size image patches give lower error rates than single-sized image patches.

Up-scaling multi-size image patches gives the best comparative results across the different measurements.

Hybrid down-scaling (61) is finally selected for fast computation.

Page 36: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Experimental results on feature maps:

The combination of the three feature-based maps gives slightly better classification results than the basic color channels alone.

In conclusion, additional feature-based channels besides the basic color channels can be useful for coral discrimination in both datasets (MLC, ADS)!

Page 37: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Experimental results on color enhancement:

Bazeille '06 is the best of the color enhancement algorithms tested (vs Iqbal '07, Beijbom '12).

However, raw image data without any enhancement is the best pre-processing choice for network classification.

Page 38: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Experimental results on hidden output maps:

An excessively large number of hidden output maps (24-48) leads to inappropriate classification output.

(6-12) and (12-24) give similar classification rates!

Page 39: Underwater sparse image classification using deep convolutional neural networks

Results

Summary for Experimental Results

Size of hybrid input image: (61 × 61), (121 × 121), (181 × 181)
Number of input channels: 3 (RGB), 4 (RGB + one of WLD, PC, ZCA), 6 (RGB + WLD + PC + ZCA)
Enhancement for RGB input: Bazeille '06, Iqbal '07, Beijbom '12, no enhancement
Number of hidden output maps: (6-12), (12-24), (24-48)

Updated Parameters for Final Results

Number of network epochs: 50

Page 40: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Final results:

In the MLC dataset, the testing phase has almost the same results, and the training phase has better results, when using (12-24) hidden output maps and additional feature-based maps as supplementary channels.

In the ADS dataset, the testing phase has the best, most significant accuracy results with the same selected configuration.

Page 41: Underwater sparse image classification using deep convolutional neural networks

Results

[Result figures: MLC, ADS]

Final results (continued):

In the MLC dataset, the best-classified classes are Acrop (coral) and Sand (non-coral), and the lowest-classified are Pavon (coral) and Turf (non-coral). Pavon is misclassified as Monti/Macro, and Turf as Macro/CCA/Sand, due to similarity in their shape properties or growth environment.

In the ADS dataset, DRK (non-coral) is classified perfectly due to its distinct nature (an almost dark-blue plain image), and LEIO (coral) is classified excellently due to its distinctive color property (orange).

Overall accuracy: 56% (MLC), 81% (ADS).

Page 42: Underwater sparse image classification using deep convolutional neural networks

Conclusion and Future Work


Conclusion

• First application of deep learning techniques in underwater image processing.

• Introduction of new coral-labeled dataset “Atlantic Deep Sea” representing cold-water coral reefs.

• Investigation of convolutional neural networks in handling noisy large-sized images, manipulating point-based multi-channel input data.

Future Work

• Composition of multiple deep convolutional models for N-dimensional data.

• Development of real-time image/video application for coral recognition and detection.

• Intensive nature analysis for different coral classes in variant aquatic environments.


Page 43: Underwater sparse image classification using deep convolutional neural networks

Deep Learning Workshop, Edinburgh, 2014

• http://workshops.inf.ed.ac.uk/deep/deep2014/

• Speakers: Ruslan Salakhutdinov (Toronto), Volodymyr Mnih (Google DeepMind).

• Notes:
– Dealing with N-dimensional data --> split them into two deep models (Basic, Extra) and then fuse them.
– Dealing with large images with high-texture backgrounds --> windowing + CNN.
– Add scheduled noise at certain gaps to avoid local minima.
– Try data augmentation (i.e. rotation, scaling, …) and use a small initial learning rate for better results.
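The data-augmentation suggestion from the notes can be sketched as a minimal label-preserving transform (an illustrative version covering only rotations by multiples of 90° and horizontal flips; scaling and other transforms are omitted):

```python
import numpy as np

def augment(patch, rng=None):
    """Random rotation by a multiple of 90 degrees plus an optional
    horizontal flip; both preserve the coral class label of the patch."""
    rng = rng or np.random.RandomState(0)
    out = np.rot90(patch, k=int(rng.randint(4)))
    if rng.rand() < 0.5:
        out = out[:, ::-1]
    return out

patch = np.arange(9.0).reshape(3, 3)
aug = augment(patch)   # same pixels, rearranged; same shape and label
```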

Page 44: Underwater sparse image classification using deep convolutional neural networks

Summer Internship 2014


Results for a subset (year 2008) of the MLC dataset: 671 images, 134,200 patches.

Method | Author | University | Result
LAB + Max Wavelet Response + SVM | Beijbom | UCSD | 73.3%
Complex Algorithm | Shihavuddin | Girona | 85.5%
CNN (Caffe) | Elawady | HWU | 70%
DBN (VisualRBM) | Elawady | HWU | 20%
Scattering Transform + SVM (LibSVM) | Elawady | HWU | 61%

Page 45: Underwater sparse image classification using deep convolutional neural networks

References

A.S.M. Shihavuddin, N. Gracias, R. Garcia, A. Gleason, and B. Gintert, “Image-Based Coral Reef Classification and Thematic Mapping,” Remote Sensing, vol. 5, pp. 1809–1841, 2013.

O. Beijbom, P. J. Edmunds, D. I. Kline, B. G. Mitchell, and D. Kriegman, “Automated annotation of coral reef survey images,” 2012 IEEE CVPR, pp. 1170–1177, 2012.

Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller, “Efficient backprop,” in Neural Networks: Tricks of the Trade, pp. 9–48, Springer, 2012.

R. Palm, “Prediction as a candidate for learning deep hierarchical models of data,” Technical University of Denmark, 2012.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278–2324, 1998.

Page 46: Underwater sparse image classification using deep convolutional neural networks

Thank You!


Page 47: Underwater sparse image classification using deep convolutional neural networks

Questions?!
