SVM - Support Vector Machines
Presented by: Anas Assiri
Supervisor: Prof. Dr. Mohamed Batouche

Page 1:

SVM - Support Vector Machines

Presented by: Anas Assiri

Supervisor: Prof. Dr. Mohamed Batouche

Page 2:

Overview

Classifying data
What are SVMs?
Properties of SVM
Applications

Page 3:

Classifying data

Classifying data is a common need in machine learning.

We are given data points, each of which belongs to one of two classes.

The goal is to determine some way of deciding which class a new data point will belong to.

Page 4:

Classifying data

In the case of support vector machines, we view a data point as a p-dimensional vector (a list of p numbers).

We want to know whether we can separate such points with a (p - 1)-dimensional hyperplane. This is called a linear classifier.

Page 5:

What are SVMs?

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. They belong to a family of generalized linear classifiers.

A special property of SVMs is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum-margin classifiers.
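
A minimal sketch of using an SVM in practice, assuming scikit-learn is available (the library, the toy data, and the test point are illustrative and not part of the slides):

    # Fit a linear SVM on a toy two-class problem and classify a new point.
    from sklearn.svm import SVC

    X = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5],   # class -1 examples
         [3.0, 3.0], [4.0, 4.0], [3.5, 2.5]]   # class +1 examples
    y = [-1, -1, -1, +1, +1, +1]

    clf = SVC(kernel="linear")   # a maximum-margin linear classifier
    clf.fit(X, y)
    print(clf.predict([[2.5, 2.5]]))   # predicted class label for the new point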

Page 6:

What are SVMs?

The input data are two sets of vectors in an n-dimensional space. The SVM constructs a separating hyperplane in that space, one which maximizes the "margin" between the two data sets.

To calculate the margin, we construct two parallel hyperplanes, one on each side of the separating one, each "pushed up against" one of the two data sets.

Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the neighboring datapoints of both classes.

The larger the margin, or distance between these parallel hyperplanes, the lower the generalization error of the classifier is expected to be.

Page 7:

Linear Classifiers

f(x, w, b) = sign(w · x + b), mapping an input x to an estimated label y_est.

[Figure: a two-class scatter plot, where one marker denotes +1 and the other denotes -1; a candidate boundary w · x + b = 0 separates the region where w · x + b > 0 from the region where w · x + b < 0.]

How would you classify this data?
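
The decision rule above can be written directly; a small NumPy sketch (the weight vector w and bias b below are made-up values, not learned ones):

    import numpy as np

    w = np.array([1.0, -2.0])   # hypothetical weight vector
    b = 0.5                     # hypothetical bias

    def f(x, w, b):
        # f(x, w, b) = sign(w . x + b): +1 on one side of the hyperplane, -1 on the other
        return np.sign(np.dot(w, x) + b)

    print(f(np.array([3.0, 1.0]), w, b))   # w.x + b = 1.5  -> +1.0
    print(f(np.array([0.0, 2.0]), w, b))   # w.x + b = -3.5 -> -1.0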

Page 8:

Linear Classifiers

f(x, w, b) = sign(w · x + b)

[Figure: the same two-class data with another candidate separating line.]

How would you classify this data?

Page 9:

Linear Classifiers

f(x, w, b) = sign(w · x + b)

[Figure: the same two-class data with yet another candidate separating line.]

How would you classify this data?

Page 10:

Linear Classifiers

f(x, w, b) = sign(w · x + b)

[Figure: the same two-class data with several candidate separating lines drawn.]

Any of these would be fine ... but which is best?

Page 11:

Linear Classifiers

f(x, w, b) = sign(w · x + b)

How would you classify this data?

[Figure: a candidate separating line under which some points are misclassified to the +1 class.]

Page 12:

Classifier Margin

f(x, w, b) = sign(w · x + b)

Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint.

[Figure: the two-class data (+1 and -1) with a separating line and the margin band drawn around it.]

Page 13:

Maximum Margin

f(x, w, b) = sign(w · x + b)

The maximum margin linear classifier is the linear classifier with the maximum margin.

This is the simplest kind of SVM, called a linear SVM (LSVM).

Support vectors are the datapoints that the margin pushes up against.

1. Maximizing the margin is good according to intuition and PAC theory.
2. It implies that only the support vectors are important; the other training examples are ignorable (see the sketch below).
3. Empirically it works very well.
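
A small sketch of point 2, assuming scikit-learn: after fitting, the classifier exposes exactly those datapoints that the margin pushes up against (the toy data are illustrative):

    from sklearn.svm import SVC

    X = [[0, 0], [1, 0], [0, 1], [3, 3], [4, 3], [3, 4]]
    y = [-1, -1, -1, +1, +1, +1]

    clf = SVC(kernel="linear", C=1e6)   # a very large C approximates a hard margin
    clf.fit(X, y)
    print(clf.support_vectors_)         # the datapoints the margin pushes up against
    print(clf.n_support_)               # how many support vectors each class contributes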

Page 14:

Linear SVM Mathematically

[Figure: the plus-plane w · x + b = +1, the separating plane w · x + b = 0, and the minus-plane w · x + b = -1; the "Predict Class = +1" zone lies beyond the plus-plane and the "Predict Class = -1" zone beyond the minus-plane, with x+ and x- a pair of points on the two margin planes.]

What we know:
w · x+ + b = +1
w · x- + b = -1
w · (x+ - x-) = 2

Margin width: M = (x+ - x-) · w / ||w|| = 2 / ||w||
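
The margin width can be recovered from a fitted linear SVM; a sketch assuming scikit-learn (toy data, near-hard-margin fit):

    import numpy as np
    from sklearn.svm import SVC

    X = [[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]]
    y = [-1, -1, -1, +1, +1, +1]

    clf = SVC(kernel="linear", C=1e6).fit(X, y)   # large C: (almost) no slack allowed
    w = clf.coef_[0]
    print(2.0 / np.linalg.norm(w))   # margin width M = 2 / ||w||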

Page 15:

Linear SVM Mathematically

Goal:

1) Correctly classify all training data:
w · xi + b >= +1 if yi = +1
w · xi + b <= -1 if yi = -1
which can be written as yi (w · xi + b) >= 1 for all i.

2) Maximize the margin M = 2 / ||w||, which is the same as minimizing (1/2) wᵀw.

We can formulate a Quadratic Optimization Problem and solve for w and b:

Minimize Φ(w) = (1/2) wᵀw
subject to yi (w · xi + b) >= 1 for all i
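
As a rough numerical check of these constraints, a sketch assuming scikit-learn (toy separable data; the solver satisfies yi (w · xi + b) >= 1 only up to numerical tolerance):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0, 0], [0, 1], [1, 0], [3, 3], [3, 4], [4, 3]], dtype=float)
    y = np.array([-1, -1, -1, 1, 1, 1])

    clf = SVC(kernel="linear", C=1e6).fit(X, y)   # near-hard-margin fit
    w, b = clf.coef_[0], clf.intercept_[0]
    print(y * (X @ w + b))   # each entry roughly >= 1; close to 1 for the support vectors
    print(0.5 * w @ w)       # the objective (1/2) w^T w being minimized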

Page 16:

Dataset with noise

[Figure: a two-class scatter (+1 and -1) in which a few points lie on the wrong side of any reasonable linear boundary.]

Hard margin: so far we have required that all data points be classified correctly; no training error is allowed.

What if the training set is noisy?

- Solution 1: use very powerful kernels. But this leads to OVERFITTING!

Page 17:

Soft Margin Classification

Slack variables ξi can be added to allow misclassification of difficult or noisy examples.

[Figure: the hyperplanes w · x + b = +1, 0, -1, with a few misclassified points whose slacks (e.g. ξ2, ξ7, ξ11) measure how far they fall on the wrong side of their margin plane.]

What should our quadratic optimization criterion be?

Minimize (1/2) w · w + C Σ_{k=1..R} ξk
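
A sketch of how the slack penalty C behaves, assuming scikit-learn (the "noisy" point and the two C values are arbitrary illustrations):

    from sklearn.svm import SVC

    X = [[0, 0], [1, 1], [2, 0], [5, 5], [6, 6], [1.0, 0.5]]   # last point is mislabeled noise
    y = [-1, -1, -1, +1, +1, +1]

    soft = SVC(kernel="linear", C=0.1).fit(X, y)      # small C: cheap slack, wide margin
    hard = SVC(kernel="linear", C=1000.0).fit(X, y)   # large C: slack is expensive
    print(soft.n_support_, hard.n_support_)           # compare which points end up as support vectors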

Page 18:

Non-linear SVMs: Feature spaces

General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)
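
A tiny NumPy sketch of this idea with a hypothetical mapping φ(x) = (x, x²): 1-D points that are not linearly separable become separable in the 2-D feature space:

    import numpy as np

    x = np.array([-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0])
    y = np.array([ 1,    1,   -1,  -1,  -1,   1,   1])   # outer points vs inner points

    phi = np.column_stack([x, x ** 2])   # map each scalar x to the vector (x, x^2)
    # In the (x, x^2) plane the two classes are separated by the line x^2 = 1.5,
    # even though no single threshold on x separates them in the original space.
    print(phi)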

Page 19:

Nonlinear SVM - Overview

The SVM locates a separating hyperplane in the feature space and classifies points in that space.

It does not need to represent the feature space explicitly; it simply defines a kernel function.

The kernel function plays the role of the dot product in the feature space.
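
A sketch of why no explicit representation is needed: for the quadratic kernel K(x, z) = (x · z)², the kernel value coincides with a dot product of explicit feature vectors (the feature map below is one standard choice, shown only for illustration):

    import numpy as np

    def K(x, z):
        # kernel evaluated directly in the input space
        return np.dot(x, z) ** 2

    def phi(u):
        # explicit feature map for the 2-D quadratic kernel
        return np.array([u[0] ** 2, np.sqrt(2.0) * u[0] * u[1], u[1] ** 2])

    x = np.array([1.0, 2.0])
    z = np.array([3.0, -1.0])
    print(K(x, z), np.dot(phi(x), phi(z)))   # both values agree (1.0 here)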

Page 20:

Properties of SVM

- Flexibility in choosing a similarity function
- Sparseness of the solution when dealing with large data sets: only support vectors are used to specify the separating hyperplane
- Ability to handle large feature spaces: the complexity does not depend on the dimensionality of the feature space
- Overfitting can be controlled by the soft margin approach
- Nice math property: a simple convex optimization problem which is guaranteed to converge to a single global solution
- Feature selection

Page 21:

SVM Applications

SVM has been used successfully in many real-world problems:

- text (and hypertext) categorization
- image classification
- bioinformatics (protein classification, cancer classification)
- hand-written character recognition

Page 22:

Weakness of SVM

It is sensitive to noise:
- A relatively small number of mislabeled examples can dramatically decrease the performance.

It only considers two classes. How do we do multi-class classification with SVM?
Answer:
1) With output arity m, learn m SVMs:
   - SVM 1 learns "Output == 1" vs "Output != 1"
   - SVM 2 learns "Output == 2" vs "Output != 2"
   - ...
   - SVM m learns "Output == m" vs "Output != m"
2) To predict the output for a new input, predict with each SVM and find the one that puts the prediction the furthest into the positive region, as in the sketch below.
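
A sketch of this one-vs-rest scheme, assuming scikit-learn (toy three-class data; real code would more likely use the library's built-in multi-class support):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]], dtype=float)
    y = np.array([1, 1, 2, 2, 3, 3])   # three output classes

    # one SVM per class: "Output == m" vs "Output != m"
    svms = {m: SVC(kernel="linear").fit(X, (y == m).astype(int)) for m in np.unique(y)}

    x_new = np.array([[4.5, 5.5]])
    scores = {m: clf.decision_function(x_new)[0] for m, clf in svms.items()}
    print(max(scores, key=scores.get))   # class whose SVM is furthest into the positive region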

Page 23:

Some Issues

Choice of kernel:
- A Gaussian or polynomial kernel is the default.
- If these are ineffective, more elaborate kernels are needed.
- Domain experts can give assistance in formulating appropriate similarity measures.

Choice of kernel parameters:
- e.g. σ in the Gaussian kernel.
- σ is the distance between the closest points with different classifications.
- In the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters (see the sketch below).

Optimization criterion (hard margin vs. soft margin):
- Often settled by a lengthy series of experiments in which various parameters are tested.
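
A sketch of the cross-validation approach, assuming scikit-learn (which parameterizes the Gaussian kernel with gamma rather than σ; the dataset and grid values are arbitrary examples):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)

    grid = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
        cv=5,   # 5-fold cross-validation
    )
    grid.fit(X, y)
    print(grid.best_params_)   # C and gamma chosen on the validation folds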

Page 24:

Thank you

Any Questions?