
A Mini Project-I Report on

Title of the PROJECT

In partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

IN

ELECTRONICS AND COMMUNICATION ENGINEERING

By
Name of the Student - ROLL NO.

Under the guidance of

Mr/Mrs/Ms. Supervisor Name
Designation

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

VARDHAMAN COLLEGE OF ENGINEERING (AUTONOMOUS)

Shamshabad – 501 218, Hyderabad


August – 2014

VARDHAMAN COLLEGE OF ENGINEERING (AUTONOMOUS)

Shamshabad – 501 218, Hyderabad

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE

Certified that this is a bonafide record of the project work entitled "Name of the Project", done by Student Name (Roll No.), submitted to the faculty of Electronics & Communication Engineering, in partial fulfillment of the requirements for the Degree of BACHELOR OF TECHNOLOGY in Electronics & Communication Engineering during the year 2014-2015.

Supervisor:
XXXXXX
Designation, Dept. of ECE,
Vardhaman College of Engineering,
Hyderabad.

Academic Head:
Prof. N. U. M. Rao
Professor, Dept. of ECE,
Vardhaman College of Engineering,
Hyderabad.


ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of a task would be incomplete without the mention of the people who made it possible, whose constant guidance and encouragement crown all efforts with success.

I wish to express my deep sense of gratitude to Name of Supervisor, Designation,

Department of Electronics & Communication Engineering, Vardhaman College of

Engineering, for his/her able guidance and useful suggestions, which helped me in

completing the project work in time.

I am particularly thankful to Prof. Y. Pandurangaiah, Head, Department of Electronics and Communication Engineering, for his guidance, intense support and encouragement, which helped me to mould my project into a successful one.

I show gratitude to my honorable Principal Dr. S. Sai Satyanarayana Reddy, for

having provided all the facilities and support.

I avail this opportunity to express my deep sense of gratitude and heartfelt thanks to Dr. Teegala Vijender Reddy, Chairman, and Sri Teegala Upender Reddy, Secretary, of VCE, for providing a congenial atmosphere to complete this project successfully.

I also thank all the staff members of the Electronics and Communication Engineering department for their valuable support and generous advice. Finally, thanks to all my friends and family members for their continuous support and enthusiastic help.

Name of Student – Roll No.


ABSTRACT

(Should not exceed 200 words)

This project presents an architecture to improve surveillance applications based on the service-oriented paradigm, with smart phones as user terminals, allowing dynamic application composition and increasing the flexibility of the system. Building on moving-object-detection research on video sequences, the movement of people is tracked using video surveillance. The moving object is identified using the image-subtraction method: the background image is subtracted from the foreground image, and from the difference the moving object is derived. The background-subtraction algorithm identifies the moving frame, and a threshold value is calculated; comparing against this threshold, the movement of the frame is identified and tracked. Hence the movement of the object is identified accurately. This project deals with a low-cost, intelligent, mobile-phone-based wireless video surveillance solution using moving-object recognition technology.
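As an illustration of the image-subtraction method described above, a minimal background-subtraction sketch on synthetic frames might look like this (the function name, frame sizes, and the threshold of 25 are illustrative assumptions, not the project's actual implementation):

```python
import numpy as np

def detect_motion(background, frame, threshold=25):
    """Flag pixels whose absolute difference from the background
    exceeds the threshold; return the binary motion mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Synthetic example: a flat background and a frame with a bright "object".
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200            # the moving object
mask = detect_motion(background, frame)
moving = mask.any()              # True when any pixel changed enough
```

Real frames would be grayscale video images; the same subtract-and-threshold logic applies unchanged.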


INDEX

ABSTRACT

CHAPTER 1 INTRODUCTION
1.1 Overview
1.2 Problem Statement
1.3 Objective

CHAPTER 2 PROJECT DESCRIPTION
2.1 Working Principle
2.2 Block Diagram Description
2.3 Component Description
2.4 Methodology
2.5 Comparative Study

CHAPTER 3 RESULTS
3.1 Input
3.2 Outputs

CHAPTER 4 CONCLUSION

REFERENCES


CHAPTER 1

INTRODUCTION

1.1 Overview

The covariance matrix is a key component in a wide array of statistical signal processing tasks applied to remote sensing imagery from multispectral and hyper-spectral sensors. If we let x ∈ R^p correspond to the p spectral components at a given pixel, then the distribution of these pixels over the image can be described statistically in terms of an underlying probability distribution. For a Gaussian distribution, the parameters of interest are the mean and the covariance. Let R ∈ R^(p×p) be the "actual" covariance matrix for this distribution, and suppose that x_1, ..., x_n are samples drawn from the distribution. The aim of covariance estimation is to compute a matrix R̂ that is in some sense close to the actual, but unknown, covariance R. What we mean by "in some sense" is that R̂ should be an approximation that is useful for the given task at hand. The maximum likelihood solution is one such approximation, but particularly when the number of samples n is smaller than the number of channels p, this solution tends to over-fit the data. For this reason, a variety of regularization schemes have been investigated [1], [2], [3], [4], [5], [6], [7], [8]. The sparse matrix transform (SMT) [9], [10], [11] is a recent addition to this list.
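The rank deficiency behind this over-fitting is easy to demonstrate. The following sketch (hypothetical dimensions, plain NumPy) forms the maximum-likelihood sample covariance from fewer samples than channels and checks its rank:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 5                     # more channels than samples
x = rng.standard_normal((n, p))  # n pixel samples of p spectral channels

# Maximum-likelihood (sample) covariance estimate.
xc = x - x.mean(axis=0)
S = xc.T @ xc / n

# Centering removes one degree of freedom, so with n < p the estimate
# has rank at most n - 1: it is singular and cannot be inverted.
rank = np.linalg.matrix_rank(S)
```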

When there are many more pixels than channels, the problem of estimating the covariance matrix is not a serious issue. But this is not always the case. Moving-window methods,

for instance, seek to better characterize the local statistics of an image and in this case

have many fewer pixels with which to estimate those statistics. Cluster-based methods,

which segment the image into a large number of spectrally (and in some cases, spatially)

distinct regions, have fewer pixels per cluster than are available in the full image. More

sophisticated models, such as Gaussian mixture models, also provide fewer pixels per

estimated covariance matrix. In addition to reducing the number of pixels available to

estimate a covariance matrix of a given size, there are also methods, such as spatio-

spectral enhancements, which add many more channels to the image by incorporating

local spatial information into each pixel. The choice of window size or cluster number or

number of spatio-spectral operators is often influenced by the need to estimate a good


covariance matrix. By providing a tool to more accurately estimate a covariance matrix

with fewer pixels, these approaches may be further extended.

Many different measures are possible for the quality of an estimate R̂, and the choice of which estimator is best can depend on which measure is used. In [9], [10], the effectiveness of the covariance estimator was expressed in terms of the Kullback-Leibler distance between Gaussian distributions using R and R̂, while [11] compared estimators based on their utility for weak signal detection.

We propose a new approach to covariance estimation, which is based on constrained

maximum likelihood (ML) estimation of the covariance from sample vectors. In

particular, the covariance is constrained to be formed by an eigen-transformation that can

be represented by a sparse matrix transform (SMT); and we define the SMT to be an

orthonormal transformation formed by a product of pairwise coordinate rotations known

as Givens rotations.
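To make the SMT construction concrete, the sketch below builds a few Givens rotations and verifies that their product is orthonormal. The rotation pairs and angles are arbitrary illustrations; an actual SMT would choose them by the constrained ML fitting procedure:

```python
import numpy as np

def givens(p, i, j, theta):
    """p x p Givens rotation acting on the (i, j) coordinate pair."""
    G = np.eye(p)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c
    G[j, j] = c
    G[i, j] = -s
    G[j, i] = s
    return G

# An SMT eigen-transformation is a product of K such rotations;
# a product of orthonormal factors is itself orthonormal.
p = 4
E = givens(p, 0, 1, 0.3) @ givens(p, 1, 3, -0.7) @ givens(p, 0, 2, 1.1)
is_orthonormal = np.allclose(E.T @ E, np.eye(p))
```

Each factor touches only two coordinates, which is what makes the transform sparse and cheap to apply.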

1.2 Problem Statement:

To evaluate performance on covariance matrices that are observed in real hyper-spectral imagery. The evaluation will be in terms that correspond to problems that arise in remote sensing.

In addition to the weak signal detection problem that we investigated previously,

we will consider dimension reduction, anomaly detection, and anomalous change

detection. This is in addition to two more generic measures: likelihood and

Frobenius distance.

1.3 Objectives of the project:

The sample covariance is the most natural and most commonly employed choice

for estimating covariance from data.

We will review the justification for the sample covariance, and describe several

alternatives, all of which use the sample covariance as a starting point.


CHAPTER 2

PROJECT DESCRIPTION

2.1 Working Principle:

Introduced only in the last several decades, spectral image processing has morphed into a

powerful technique applicable in many areas such as agriculture, mining, emergency

management, defense, environmental monitoring, and health sciences. At the basis of

such widespread use is the concept of spectral discrimination, i.e. that even materials or

substances with strong similarities can be differentiated by analyzing their (two

dimensional) spectral signature (the specific combination of reflected and absorbed

electromagnetic radiation at varying wavelengths). Spectral image processing takes this

one step further. The images (or spectral bands) correspond to ranges of contiguous

wavelengths and are combined to form a three dimensional spectral cube. Supported by

advances in sensor design, storage and transmission techniques, spectral imaging continues to increase in availability and resolution while decreasing in cost.

However, spectral imagery, and in particular hyper- and ultra-spectral data (with hundreds to thousands of bands), continues to provide challenges in processing due to high

dimensionality and high correlation [1]. To address this problem, many feature extraction

techniques such as Principal Component Analysis (PCA), Orthogonal Subspace

Projection (OSP), and Independent Component Analysis (ICA) have been applied,

producing reduced dimensionality data.
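As a minimal illustration of one such feature-extraction technique, a PCA-style reduction of a pixels-by-bands matrix can be sketched as follows (synthetic data; the helper name `pca_reduce` is an assumption for the example):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the k leading principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the eigenvectors of the
    # sample covariance as the rows of Vt, ordered by variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))   # 100 pixels, 50 spectral bands
Z = pca_reduce(X, 3)                 # reduced to 3 features per pixel
```

A real hyper-spectral cube would first be reshaped from (rows, cols, bands) to a (pixels, bands) matrix before this step.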

2.2 Block Diagram Description:

Over the last few decades, data, data management, and data processing have become

ubiquitous factors in modern life and work.

Recent Data Trends

Huge investments have been made in various data gathering and data processing

mechanisms. The information technology industry is the fastest growing and most

lucrative segment of the world economy, and much of the growth occurs in the

development, management, and warehousing of prodigious streams of data for scientific,

medical, engineering, and commercial purposes.

Some recent examples include:


• Biotech Data. Virtually everyone is aware of the fantastic progress made in the last five

years in gathering data about the human genome. A common sight in the press is pictures

of vast warehouses filled with genome sequencing machines working night and day, or

vast warehouses of compute servers working night and day, as part of this heroic effort.

This is actually just the opening round in a long series of developments.

• Financial Data. Over the last decade, high-frequency financial data have become available; in the early to mid-1990s, data on individual currency trades became available,

tracking individual transactions. Now with the advent of new exchanges such as

Island.com, one can obtain individual bids to buy and sell, and the full distribution of

such bids.

• Satellite Imagery. Providers of satellite imagery have available a vast database of such

images, and hence N in the millions. Projects are in place to compile databases to resolve

the entire surface of the earth to 1 meter accuracy. Applications of such imagery include

natural resource discovery and agriculture.

• Hyper-spectral Imagery. It is now becoming common, in both airborne photographic imagery and satellite imagery, to use hyper-spectral cameras which record, instead of the three RGB color bands, thousands of different spectral bands. Such imagery is presumably able

to reveal subtle information about chemical composition and is potentially very helpful in

determining crop identity, spread of diseases in crops, in understanding the effects of

droughts and pests, and so on. In the future we can expect hyper spectral cameras to be

useful in food inspection, medical examination, and so on.

• Consumer Financial Data. Every transaction we make on the web, whether a visit, a

search, a purchase, is being recorded, correlated, compiled into databases, and sold and

resold, as advertisers scramble to correlate consumer actions with pockets of demand for

various goods and services.

2.3 Component Description:

Our examples show that we are in the era of massive automatic data collection,

systematically obtaining many measurements, not knowing which ones will be relevant to

the phenomenon of interest. Our task is to find a needle in a haystack, teasing the relevant

information out of a vast pile of glut. This is a big break from the original assumptions behind many of the tools being used in high-dimensional data analysis today.

2.4 Methodology:


When classifying data with the Gaussian maximum likelihood classifier, the mean vector

and covariance matrix of each class usually are not known and must be estimated from

training samples. For p-dimensional data, the sample covariance matrix estimate is

singular, and therefore unusable, if fewer than p+1 training samples from each class are

available, and it is a poor estimate of the true covariance matrix unless many more than

p+1 samples are available. In some applications, such as remote sensing, there are often a

large number of features available, but the number of training samples is limited due to

the difficulty and expense in labeling them.

The covariance matrix estimator examines mixtures of the sample covariance matrix,

common covariance matrix, diagonal sample covariance matrix, and diagonal common

covariance matrix. Whereas the maximum likelihood estimator maximizes the joint

likelihood of all the training samples, the proposed covariance matrix estimator selects

the mixture that maximizes the likelihood of training samples not included in the covariance matrix estimation. The results of several experiments are presented that

compare the estimator, with and without the approximation, to the sample covariance

matrix estimate, common covariance matrix, Euclidean distance, and regularized

discriminant analysis (RDA).
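A deliberately simplified two-component version of such a mixture estimator, blending the sample covariance with its own diagonal, can be sketched as follows. The full estimator described above also includes common-covariance components and chooses the mixing weight by leave-out likelihood, which this sketch omits:

```python
import numpy as np

def mixture_covariance(X, alpha):
    """Convex mixture of the sample covariance and its diagonal.
    alpha = 0 gives the sample covariance; alpha = 1 keeps only
    the per-feature variances."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)
    return (1 - alpha) * S + alpha * np.diag(np.diag(S))

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 8))
C = mixture_covariance(X, alpha=0.5)

# Mixing toward the diagonal shrinks off-diagonal entries but
# leaves the variances (the diagonal) untouched.
S = mixture_covariance(X, alpha=0.0)
```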

2.5 Comparative Study:

Apart from our shrinkage estimator, we consider the following covariance matrix

estimators proposed in the literature.

Identity: The simplest model is to assume that the covariance matrix is a scalar multiple

of the identity matrix. This is the assumption implicit in running an Ordinary Least

Squares (OLS) cross-sectional regression of stock returns on stock characteristics, as

Fama and MacBeth (1973) and their successors do. Interestingly, it yields the same

weights for the minimum variance portfolios as a two-parameter model where all

variances are equal to one another and all covariances are equal to one another. This two-

parameter model is discussed by Jobson and Korkie (1980) and by Frost and Savarino

(1986).

Constant Correlation: Elton and Gruber (1973) recommend a model where every pair

of stocks has the same correlation coefficient. Thus, there are N + 1 parameters to estimate:

the N individual variances, and the constant correlation coefficient.
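The constant-correlation estimator is straightforward to build from the sample standard deviations and the average off-diagonal sample correlation; a sketch on synthetic returns (all names illustrative):

```python
import numpy as np

def constant_correlation_cov(X):
    """Covariance estimate in which every pair of stocks shares the
    average sample correlation: N variances plus one correlation."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)
    sd = np.sqrt(np.diag(S))
    R = S / np.outer(sd, sd)           # sample correlation matrix
    N = S.shape[0]
    # Average of the off-diagonal correlations.
    rbar = (R.sum() - N) / (N * (N - 1))
    C = rbar * np.outer(sd, sd)
    np.fill_diagonal(C, sd ** 2)       # keep individual variances
    return C

rng = np.random.default_rng(5)
X = rng.standard_normal((60, 5))       # 60 periods, 5 stocks
C = constant_correlation_cov(X)
sd = np.sqrt(np.diag(C))
corr = C / np.outer(sd, sd)            # every off-diagonal equals rbar
```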


Pseudo-Inverse: It is impossible to use the sample covariance matrix directly for

portfolio selection when the number of stocks N exceeds the number of historical returns

T, which is the case here. The problem is that we need the inverse of the sample

covariance matrix, and it does not exist. One possible trick to get around this problem is

to use the pseudo-inverse, also called generalized inverse or Moore-Penrose inverse.
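This workaround can be sketched directly with NumPy's Moore-Penrose pseudo-inverse; the portfolio sizes below are arbitrary, chosen only so that N exceeds T:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 12, 8                 # more stocks than historical returns
R = rng.standard_normal((T, N))
Rc = R - R.mean(axis=0)
S = Rc.T @ Rc / T            # singular: rank at most T - 1 < N

# The ordinary inverse does not exist, but the Moore-Penrose
# pseudo-inverse still yields minimum-variance weights w ∝ S⁺·1.
S_pinv = np.linalg.pinv(S)
w = S_pinv @ np.ones(N)
w = w / w.sum()              # normalize the weights to sum to one
```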

Market Model: This is the single-index covariance matrix of Sharpe (1963).

Industry Factors: This refinement of the single-index model assumes that market

residuals are generated by industry factors:

x_it = α_i + β_i·x_0t + Σ_{k=1}^{K} c_ik·z_kt + ε_it    (3.1)

where K is the number of industry factors, c_ik is a dummy variable equal to one if stock i belongs to industry category k, z_kt is the return to the k-th industry factor in period t, and ε_it denotes residuals that are uncorrelated to the market, to industry factors, and to each

other. Every stock is assigned to one of the 48 industries defined by Fama and French

(1997). This high number of factors is similar to the one used by the company BARRA to

produce commercial multi-factor estimates of the covariance matrix (Kahn, 1994).

Industry factor returns are defined as the return to an equally-weighted portfolio of the

stocks from this industry in our sample.

Principal Components: An alternative approach to multi-factor models is to extract the

factors from the sample covariance matrix itself using a statistical method such as

principal components. Some investment consultants such as Advanced Portfolio

Technologies successfully use a refined version of this approach (Bender and Blin, 1997).

Since principal components are chosen solely for their ability to explain risk, fewer

factors are necessary, but they do not have any direct economic interpretation. A

sophisticated test by Connor and Korajczyk (1993) finds between four and seven factors

for the NYSE and AMEX over 1967–1991, which is in the same range as the original test

by Roll and Ross (1980).

Shrinkage towards Identity: A related shrinkage estimator of Ledoit and Wolf (2000)

uses a scalar multiple of the identity matrix as shrinkage target; note that their estimator,

under a different asymptotic framework, is suggested for general situations where no

“natural” shrinking target exists. This seems suboptimal for stock returns, since stock

returns have different variances and mainly positive covariance. Hence, it appears

beneficial to use a shrinkage target which incorporates this knowledge, such as the single-

index covariance matrix. Nevertheless, we include this estimator.


CHAPTER 3

RESULTS

INPUT IMAGE: (Snap Shots in colour)

Figure 1: Ground pixels of the grass class are outlined with a white rectangle

OUTPUT IMAGE 1: (Snap Shots in colour)

Figure 2: Estimation of eigenvalues

OUTPUT IMAGE 2:

(Snap Shots in colour)

Figure 3: Estimation of variance along eigenvector dimensions


CHAPTER 4

CONCLUSION

We have proposed a novel method for covariance estimation of high dimensional data.

The new method is based on constrained maximum likelihood (ML) estimation in which

the eigenvector transformation is constrained to be the composition of K Givens rotations.

This model seems to capture the essential behavior of the data with a relatively small

number of parameters. The constraint set is a K dimensional manifold in the space of

orthonormal transforms, but since it is not a linear space, the resulting ML estimation optimization problem does not yield a closed-form global optimum. However, we show

that a recursive local optimization procedure is simple, intuitive, and yields good results.

We also demonstrate that the proposed SMT covariance estimation method substantially

reduces the error in the covariance estimate as compared to current state-of-the-art

estimates for a standard hyper-spectral data set.


REFERENCES

[1] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed. New York: Springer, 2009.

[2] D. L. Donoho, "High-dimensional data analysis: The curses and blessings of dimensionality," in Math Challenges of the 21st Century. Los Angeles, CA: American Mathematical Society, Aug. 8, 2000.

[3] R. E. Bellman, Adaptive Control Processes. Princeton, NJ: Princeton Univ. Press, 1961.

[4] A. K. Jain, R. P. Duin, and J. Mao, "Statistical pattern recognition: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 1, pp. 4–37, Jan. 2000.

[5] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces versus fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul. 1997.

[6] J. Theiler, "Quantitative comparison of quadratic covariance-based anomalous change detectors," Appl. Opt., vol. 47, no. 28, pp. F12–F26, 2008.

[7] C. Stein, B. Efron, and C. Morris, "Improving the usual estimator of a normal covariance matrix," Dept. of Statistics, Stanford Univ., Report 37, 1972.

[8] K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd ed. Norwell, MA: Academic, 1990.

[9] J. H. Friedman, "Regularized discriminant analysis," J. Amer. Stat. Assoc., vol. 84, no. 405, pp. 165–175, 1989.

[10] J. P. Hoffbeck and D. A. Landgrebe, "Covariance matrix estimation and classification with limited training data," IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 7, pp. 763–767, Jul. 1996.

[11] M. J. Daniels and R. E. Kass, "Shrinkage estimators for covariance matrices," Biometrics, vol. 57, no. 4, pp. 1173–1184, 2001.

[12] O. Ledoit and M. Wolf, "A well-conditioned estimator for large-dimensional covariance matrices," J. Multivar. Anal., vol. 88, no. 2, pp. 365–411, 2004.


[13] J. Schafer and K. Strimmer, "A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics," Stat. Appl. Genet. Molecular Biol., vol. 4, no. 1, 2005.