# Organizing a spectral image database by using Self-Organizing Maps

Research Seminar, 7.10.2005, Oili Kohonen

## Transcript

### Motivation
Image retrieval from conventional databases has been studied since the 1990s, and many efficient techniques have been developed. However, efficient techniques for querying images from a spectral image database do not yet exist. Because of the large amount of data in spectral images, such techniques will be needed.

### Spectral imaging
- Metameric imaging: a cheap and practical way to achieve a color match.
- Spectral imaging: needed to achieve a color match for all observers across changes in the illumination.

### Principle of SOM
The Self-Organizing Map (SOM) algorithm:
- is an unsupervised learning algorithm;
- defines a mapping from high-dimensional data onto a lower-dimensional map.

A SOM consists of arranged units (or neurons), each represented by a weight vector. The units are connected to each other by a neighborhood relation.

### Principle of SOM: the algorithm

```
begin
  initialize the SOM
  for i = 1 : number of epochs
    take an input vector x randomly from the training data
    find the BMU for x
    update the weight vectors of the map
    decrease the learning rate & neighborhood function
end
```

### Principle of SOM: finding the BMU
Mathematically, the best-matching unit (BMU) c is defined for an input data vector x as the unit whose weight vector m_i is closest to x:

    c = arg min_i || x − m_i ||

The Euclidean distance is a typically used distance measure.

### Principle of SOM: updating the weight vectors
The learning rate is the product of a learning-rate parameter α(t) and the neighborhood function h(t), giving the update rule

    m_i(t+1) = m_i(t) + α(t) h_ci(t) [ x(t) − m_i(t) ]

### Principle of SOM: neighborhood function
The neighborhood function h(t) has to fulfill the following two requirements:
1. It has to be symmetric about the maximum point (the BMU).
2. Its amplitude has to decrease monotonically with increasing distance from the BMU.

A Gaussian function is a typical choice for h(t).

### Principle of SOM: lattice structure
Typical lattice structures are hexagonal and rectangular.

### Searching technique: constructing the histogram database
1. Train the SOM.
2. Find the BMU for each pixel in an image.
3. Generate the BMU histogram and normalize it by the number of pixels in the image.
4. Repeat steps 2 and 3 for all images in the spectral image database.
5. Save the histogram database together with the SOM map information.

### Searching technique: making a search
1. Choose an image and generate its histogram.
2. Calculate the distances between the generated histogram and the histograms in the existing database.
3. Order the images by these distances.
4. Show the results of the search to the user as RGB images.

### Searching techniques: one-dimensional SOM
(Figure.)

### Searching techniques: two-dimensional histogram-trained SOM
(Figure.)

### Distance calculations
H1 and H2 are the compared histograms, L1 and L2 are the indices of their maximum values, and H3 = (H1 + H2)/2.

### Experiments
- One-dimensional SOM for unweighted images.
- One-dimensional SOM for images weighted by the Human Visual Sensitivity (HVS) function.
- Two-dimensional SOM, trained from histogram data (unweighted images) and from spectral data (unweighted and weighted images).

### The used database
106 images: 61 components each, spectral range from 400 nm to 700 nm at a 5 nm interval.

### Training of the SOMs
- 10,000 spectra were selected randomly from each image.
- 2,000,000 and 4,000,000 epochs in the ordering and fine-tuning phases, respectively.
- Unit counts: 50 (chosen empirically); 49, to have results comparable with the 1D SOM; a 14×14 map in the case of the histogram-trained SOM.

### Results: 1D SOM, unweighted images
(Figures: pure data vs. multiplied data; distance measure: Euclidean distance.)

### Results: 1D, unweighted images
(Figures for the Energy, K-L, Peak, DPD and JD distance measures.)

### Results: 1D, weighted images
(Figures for the Energy, K-L, Peak, DPD and JD distance measures.)

### Conclusions I
- The structure of the database is different for weighted and unweighted images.
- The best results were obtained with the Euclidean distance and the Jeffrey divergence.
- On the importance of normalization: better results with the Euclidean distance and DPD, worse results with the Jeffrey divergence.

### Results: 2D, unweighted spectral data
(Figures for the Euclidean, Energy, K-L, Peak, DPD and JD distance measures.)

### Results: 2D, weighted spectral data
(Figures for the Euclidean, Energy, K-L, Peak, DPD and JD distance measures.)

### Conclusions II
- In the case of a two-dimensional SOM, better results are achieved by using non-weighted images.
- When weighted images are used, the use of a 1D SOM seems more reasonable.

### Results: histogram-trained 2D SOM
(Figures for the Euclidean, Energy, K-L, Peak, DPD and JD distance measures.)

### Connections between images and histograms
(Figures: non-weighted and weighted.)

### Past, present & future
- Past: what you have seen so far.
- Present: texture features in addition to color features.
- Future: testing the effect of different metrics in the ordering and fine-tuning phases (during the training of the SOM).

### Questions
Thank you for not asking any... =)
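The SOM training loop outlined in the slides (random input, BMU search by Euclidean distance, Gaussian neighborhood, decaying learning rate) can be sketched in NumPy. This is a minimal illustration, not the seminar's actual implementation: the function name, the linear decay schedules, and the default parameters are assumptions made for the example.

```python
import numpy as np

def train_som(data, n_units=50, n_epochs=10000, lr0=0.5, sigma0=10.0, seed=0):
    """Train a 1D SOM on `data` (n_samples x n_dims).

    All parameters and the linear decay schedules are illustrative
    assumptions, not values from the seminar.
    """
    rng = np.random.default_rng(seed)
    # Initialize the weight vectors randomly within the data range.
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    positions = np.arange(n_units)  # unit coordinates on the 1D lattice
    for t in range(n_epochs):
        frac = t / n_epochs
        lr = lr0 * (1.0 - frac)                # learning rate decreases monotonically
        sigma = sigma0 * (1.0 - frac) + 1e-3   # neighborhood radius shrinks
        # Take an input vector x randomly from the training data.
        x = data[rng.integers(len(data))]
        # BMU: the unit with the smallest Euclidean distance to x.
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        # Gaussian neighborhood, symmetric about the BMU and decreasing with distance.
        h = np.exp(-((positions - bmu) ** 2) / (2.0 * sigma ** 2))
        # Move every unit toward x, scaled by the learning rate times h.
        weights += lr * h[:, None] * (x - weights)
    return weights
```

In the seminar's setup, separate ordering and fine-tuning phases with different epoch counts were used; the single loop above collapses the two phases into one for brevity.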
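The histogram-database construction (find the BMU for each pixel, count BMU hits, normalize by the pixel count) can be sketched as follows. The function name and array layout (pixels flattened to a spectra matrix) are assumptions for the example.

```python
import numpy as np

def bmu_histogram(image_spectra, weights):
    """Build a normalized BMU histogram for one spectral image.

    image_spectra: (n_pixels x n_bands) array of pixel spectra.
    weights: (n_units x n_bands) trained SOM weight vectors.
    """
    # Euclidean distance from every pixel spectrum to every unit.
    dists = np.linalg.norm(image_spectra[:, None, :] - weights[None, :, :], axis=2)
    # BMU index for each pixel.
    bmus = np.argmin(dists, axis=1)
    # Count BMU hits per unit and normalize by the number of pixels.
    hist = np.bincount(bmus, minlength=len(weights)).astype(float)
    return hist / len(image_spectra)
```

Running this for every image in the database and storing the histograms alongside the SOM weights gives the searchable histogram database described in the slides.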
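A search then reduces to comparing the query histogram against the stored ones and sorting by distance. The sketch below uses the Jeffrey divergence with the mean histogram H3 = (H1 + H2)/2, matching the notation on the distance-calculations slide; the exact formula used in the seminar is not shown in the transcript, so this standard form, the epsilon guard, and the function names are assumptions.

```python
import numpy as np

def jeffrey_divergence(h1, h2, eps=1e-12):
    """Jeffrey divergence between two normalized histograms,
    computed against the mean histogram h3 = (h1 + h2) / 2."""
    h3 = (h1 + h2) / 2.0
    # Accumulate terms only where a histogram is non-zero, to avoid log(0).
    t1 = np.where(h1 > 0, h1 * np.log((h1 + eps) / (h3 + eps)), 0.0)
    t2 = np.where(h2 > 0, h2 * np.log((h2 + eps) / (h3 + eps)), 0.0)
    return float(np.sum(t1 + t2))

def rank_images(query_hist, database):
    """Order image names by distance to the query histogram.
    `database` maps image name -> normalized BMU histogram."""
    return sorted(database, key=lambda name: jeffrey_divergence(query_hist, database[name]))
```

The divergence is zero for identical histograms and grows as the BMU distributions diverge, so the first names returned by `rank_images` are the closest matches, which would then be shown to the user as RGB images.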