Vectors

TRANSCRIPT

Page 1: Vectors

Vectors..

Page 2: Vectors

Vectors: notation

A vector in an n-dimensional space is described by an n-tuple of real numbers.

[Figure: two vectors A and B in the (x1, x2) plane, with components A1, A2 and B1, B2.]

A^T = (A_1, A_2),   B^T = (B_1, B_2)

Page 3: Vectors

Vectors: sum

The components of the sum vector are the sums of the components.

[Figure: vectors A and B and their sum C in the (x1, x2) plane.]

C = A + B,   (C_1, C_2) = (A_1 + B_1, A_2 + B_2)

Page 4: Vectors

Vectors: difference

The components of the difference vector are the differences of the components.

[Figure: vectors A, B, -A and the difference C in the (x1, x2) plane.]

C = B - A,   (C_1, C_2) = (B_1 - A_1, B_2 - A_2)

Page 5: Vectors

Vectors: product by a scalar

The components of the scaled vector are the components multiplied by the scalar.

[Figure: vector A and the scaled vector 3A in the (x1, x2) plane.]

C = a A,   (C_1, C_2) = (a A_1, a A_2)

Page 6: Vectors

Vectors: Norm

The simplest definition of a norm is the Euclidean modulus of the components.

[Figure: vector A with components A1, A2 in the (x1, x2) plane.]

||A||^2 = Σ_i A_i^2,   ||A|| = sqrt(A_1^2 + A_2^2)

Properties of a norm:
1. ||x + y|| ≤ ||x|| + ||y||
2. ||a x|| = |a| ||x||
3. ||x|| = 0 only if x = 0

Page 7: Vectors

Vectors: distance between two points

The distance between two points is the norm of the difference vector.

[Figure: points A and B and the difference vector C = B - A in the (x1, x2) plane.]

d(A, B) = ||B - A|| = sqrt((B_1 - A_1)^2 + (B_2 - A_2)^2)

Page 8: Vectors

Vectors: Scalar product

The scalar product of two vectors is the sum of the products of their components.

[Figure: vectors A and B in the (x1, x2) plane.]

c = <A, B> = A^T B = Σ_i A_i B_i,   c = ||A|| ||B|| cos θ

Properties of the scalar product:
1. <x, y> = <y, x>
2. <x, y + z> = <x, y> + <x, z>
3. <a x, y> = a <x, y>
4. <x, x> ≥ 0

Page 9: Vectors

Vectors: Scalar product

[Figure: three configurations of the vectors u and v.]

θ < 90°: <u, v> > 0     θ = 90°: <u, v> = 0     θ > 90°: <u, v> < 0

Page 10: Vectors

Vectors: Norm and scalar product

The squared norm of a vector is the scalar product of the vector with itself.

||A||^2 = <A, A> = A^T A = Σ_i A_i^2
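The operations on the preceding slides map directly onto array code. A minimal sketch with NumPy (an assumption; the slides themselves contain no code):

import numpy as np

A = np.array([2.0, 1.0])
B = np.array([1.0, 3.0])

print(A + B)                  # sum: components add
print(B - A)                  # difference: components subtract
print(3 * A)                  # product by a scalar
print(np.linalg.norm(A))      # Euclidean norm ||A|| = sqrt(sum of A_i^2)
print(np.linalg.norm(B - A))  # distance d(A, B) = ||B - A||
print(A @ B)                  # scalar product <A, B> = sum of A_i * B_i
print(A @ A, np.linalg.norm(A) ** 2)  # ||A||^2 = <A, A>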

Page 11: Vectors

Vectors: Definition of an hyperplane

In R2, a hyperplane is a line. A line passing through the origin can be defined as the set of the vectors that are perpendicular to a given vector W.

[Figure: the vector W and the line through the origin perpendicular to it, in the (x1, x2) plane.]

W_1 X_1 + W_2 X_2 = 0,   i.e.   <W, X> = W^T X = 0

Page 12: Vectors

Vectors: Definition of an hyperplane

In R3, a hyperplane is a plane. A plane passing through the origin can be defined as the set of the vectors that are perpendicular to a given vector W.

[Figure: the vector W and the plane through the origin perpendicular to it, in (x1, x2, x3) space.]

W_1 X_1 + W_2 X_2 + W_3 X_3 = 0,   i.e.   <W, X> = W^T X = 0

Page 13: Vectors

Vectors: Definition of an hyperplane

In R2, a hyperplane is a line. A line perpendicular to W whose distance from the origin is |b|/||W|| is defined by the points whose scalar product with W is equal to -b.

[Figure: the case -b > 0. The line W_1 X_1 + W_2 X_2 + b = 0 is perpendicular to W; the projection of any point X of the line onto W has length -b/||W||.]

W_1 X_1 + W_2 X_2 + b = 0

<W, X> / ||W|| = W^T X / ||W|| = -b / ||W||

Page 14: Vectors

Vectors: Definition of an hyperplane

In R2, a hyperplane is a line. A line perpendicular to W whose distance from the origin is |b|/||W|| is defined by the points whose scalar product with W is equal to -b (here the case -b < 0).

[Figure: the case -b < 0. The projection of any point X of the line onto W has length -b/||W||, on the opposite side of the origin.]

W_1 X_1 + W_2 X_2 + b = 0

<W, X> / ||W|| = W^T X / ||W|| = -b / ||W||

Page 15: Vectors

Vectors: Definition of an hyperplane

In Rn, a hyperplane is defined by

<W, X> + b = W^T X + b = 0

Page 16: Vectors

A hyperplane divides the space

[Figure: the hyperplane at distance -b/||W|| from the origin along W, with a point A on one side and a point B on the other; their projections <W, A>/||W|| and <W, B>/||W|| fall on opposite sides of -b/||W||.]

<W, A> = W^T A > -b   (A lies on one side of the hyperplane)
<W, B> = W^T B < -b   (B lies on the other side)

Page 17: Vectors

Distance between a hyperplane and a point

[Figure: the hyperplane r at distance -b/||W|| from the origin, with points A and B on opposite sides.]

d(A, r) = (<W, A> + b) / ||W||
d(B, r) = (<W, B> + b) / ||W||
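A small sketch of these formulas with NumPy (the values of W, b and the points are hypothetical, not from the slides):

import numpy as np

W = np.array([3.0, 4.0])      # normal vector of the hyperplane W.X + b = 0
b = -10.0
A = np.array([6.0, 5.0])
B = np.array([0.0, 0.0])

def signed_distance(X, W, b):
    # signed distance (W.X + b)/||W||: positive on one side of the hyperplane, negative on the other
    return (W @ X + b) / np.linalg.norm(W)

print(signed_distance(A, W, b))   # 5.6: A lies on the positive side
print(signed_distance(B, W, b))   # -2.0: the origin is at distance 2 = -b/||W|| on the negative side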

Page 18: Vectors

Distance between two parallel hyperplanes

[Figure: two parallel hyperplanes r: W^T X + b = 0 and r': W^T X + b' = 0, at distances -b/||W|| and -b'/||W|| from the origin.]

d(r, r') = |b - b'| / ||W||

Page 19: Vectors

Lagrange Multipliers

Page 20: Vectors

Aim

We want to maximise the function z = f(x,y) subject to the constraint g(x,y) = c (a curve in the x,y plane).

Page 21: Vectors

Simple solution

Solve the constraint g(x,y) = c and express, for example, y = h(x).

Then substitute into f and find the maximum in x of f(x, h(x)).

Analytical solution of the constraint can be very difficult.

Page 22: Vectors

Geometrical interpretation

The level contours of f(x,y) are defined by f(x,y) = d_n.

Page 23: Vectors


Suppose we walk along the contour line with g = c.

In general the contour lines of f and g may be distinct: traversing the contour line for g = c we cross the contour lines of f.

While moving along the contour line for g = c the value of f can vary.

Only when the contour line for g = c touches a contour line of f tangentially do we not increase or decrease the value of f - that is, when the contour lines touch but do not cross.

Lagrange Multipliers

Page 24: Vectors


Normal to a curve

Page 25: Vectors

Gradient of a curve

Given a curve g(x,y) = c, the gradient of g is

grad g = (∂g/∂x, ∂g/∂y)

Consider 2 points of the curve, (x, y) and (x + ε_x, y + ε_y), for small ε.

[Figure: the curve g(x,y) = c with the two nearby points (x, y) and (x + ε_x, y + ε_y).]

g(x + ε_x, y + ε_y) = g(x, y) + ε_x ∂g/∂x + ε_y ∂g/∂y = g(x, y) + ε^T grad g(x, y)

Page 26: Vectors

Gradient of a curve

Since both points satisfy the curve equation g(x,y) = c:

g(x + ε_x, y + ε_y) - g(x, y) = c - c = 0   ⇒   ε^T grad g(x, y) = 0

For small ε, ε is parallel to the curve and, consequently, the gradient is perpendicular to the curve.

[Figure: the curve g(x,y) = c, the two nearby points, and grad(g) drawn perpendicular to the curve.]

Page 27: Vectors

Lagrange Multipliers

At the point on g(x,y) = c that maximizes or minimizes f(x,y), the gradient of f is perpendicular to the curve g; otherwise we could increase or decrease f by moving locally along the curve. So the two gradients are parallel:

grad f(x,y) = λ grad g(x,y)

for some scalar λ (where grad denotes the gradient).

Page 28: Vectors

Lagrange Multipliers

Thus we want points (x,y) where g(x,y) = c and grad f(x,y) = λ grad g(x,y).

To incorporate these conditions into one equation, we introduce an auxiliary function (the Lagrangian)

F(x, y, λ) = f(x, y) + λ (g(x, y) - c)

and solve grad F(x, y, λ) = 0.
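A minimal sketch of this recipe with SymPy, on a made-up example that is not from the slides (maximize f(x,y) = xy subject to x + y = 2):

import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x * y                                   # function to maximize
g, c = x + y, 2                             # constraint g(x, y) = c

F = f + lam * (g - c)                       # the Lagrangian F(x, y, lambda)
solutions = sp.solve([sp.diff(F, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(solutions)                            # [{x: 1, y: 1, lam: -1}]: the maximum is at (1, 1)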

Page 29: Vectors

Recap of Constrained Optimization

Suppose we want to minimize/maximize f(x) subject to g(x) = 0.

A necessary condition for x_0 to be a solution is that the gradient of the Lagrangian L(x, λ) = f(x) + λ g(x) vanishes at x_0; λ is the Lagrange multiplier.

For multiple constraints g_i(x) = 0, i = 1, ..., m, we need a Lagrange multiplier λ_i for each of the constraints.

Page 30: Vectors

Constrained Optimization: inequality

We want to maximize f(x,y) with the inequality constraint g(x,y) ≤ c.

The search must be confined to the red portion (the gradient of a function points in the direction along which it increases).

g(x,y) ≤ c

Page 31: Vectors

Constrained Optimization: inequality

Maximize f(x,y) with the inequality constraint g(x,y) ≤ c.

If the gradients are opposite (λ < 0), the function increases in the allowed portion, so the maximum cannot be on the curve g(x,y) = c.

The maximum is on the curve only if λ > 0.

f increases,

g(x,y) ≤ c

F(x, y, λ) = f(x, y) + λ (g(x, y) - c)

Page 32: Vectors

Constrained Optimization: inequality

Minimize f(x,y) with the inequality constraint g(x,y) ≤ c.

If the gradients are opposite (λ < 0), the function increases in the allowed portion.

The minimum is on the curve only if λ < 0.

f increases,

g(x,y) ≤ c

F(x, y, λ) = f(x, y) + λ (g(x, y) - c)

Page 33: Vectors

Maximize f(x,y) with the inequality constraint g(x,y) ≥ c.

If the gradients are opposite (λ < 0), the function decreases in the allowed portion.

The maximum is on the curve only if λ < 0.

Constrained Optimization: inequality

f decreases,

g(x,y) ≥ c

F(x, y, λ) = f(x, y) + λ (g(x, y) - c)

Page 34: Vectors

Constrained Optimization: inequality

Minimize f(x,y) with the inequality constraint g(x,y) ≥ c.

If the gradients are opposite (λ < 0), the function decreases in the allowed portion.

The minimum is on the curve only if λ > 0.

g(x,y) ≥ c

F(x, y, λ) = f(x, y) + λ (g(x, y) - c)

f decreases,

Page 35: Vectors

Karush-Kuhn-Tucker conditions

The function f(x) subject to constraints g_i(x) ≤ 0 or g_i(x) ≥ 0 is max/min-imized by optimizing the Lagrange function

F(x, α_i) = f(x) + Σ_i α_i g_i(x)

with the α_i satisfying the following conditions:

       g_i(x) ≤ 0    g_i(x) ≥ 0
MIN    α_i ≥ 0       α_i ≤ 0
MAX    α_i ≤ 0       α_i ≥ 0

and α_i g_i(x) = 0, ∀i.

Page 36: Vectors

Constrained Optimization: inequality

Karush-Kuhn-Tucker complementarity condition

α_i g_i(x) = 0, ∀i

means that

α_i ≠ 0  ⇒  g_i(x_0) = 0

The constraint is active only on the border, and cancels out in the internal regions.

Page 37: Vectors

Concave-Convex functions


Convex

Concave

Page 38: Vectors

Dual problem

If f(x) is a convex function, the problem is solved by:

∂L(x, α) / ∂x = 0

L(α) = inf_x L(x, α) = inf_x [ f(x) + Σ_i α_i g_i(x) ]

From the first equation we can find x as a function of the α_i.

These can be substituted into the Lagrangian, obtaining the dual Lagrangian function L(α).

Page 39: Vectors

Dual problem

The dual Lagrangian is concave: maximising it with respect to the α_i, with α_i > 0, solves the original constrained problem. We compute the α_i as

max_α L(α) = max_α inf_x L(x, α) = max_α inf_x [ f(x) + Σ_i α_i g_i(x) ]

Then we can obtain x by substituting the α_i into the expression of x as a function of the α_i.

Page 40: Vectors

Dual problem: trivial example

Minimize the function f(x) = x^2 with the constraint x ≤ -1 (trivially: x = -1).

The Lagrangian is

L(x, α) = x^2 + α (x + 1)

Minimising with respect to x:

∂L/∂x = 2x + α = 0   ⇒   x = -α/2

The dual Lagrangian is

L(α) = α^2/4 - α^2/2 + α = -α^2/4 + α

Maximising it gives α = 2. Then, substituting, x = -α/2 = -1.
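A quick numerical check of this example (a sketch assuming SciPy is available): maximize the dual L(α) = α - α²/4 over α ≥ 0 and recover x = -α/2.

from scipy.optimize import minimize_scalar

neg_dual = lambda a: -(a - a**2 / 4.0)       # negate: we maximize by minimizing
res = minimize_scalar(neg_dual, bounds=(0.0, 10.0), method="bounded")

alpha = res.x
print(alpha, -alpha / 2.0)                   # approximately 2.0 and -1.0, as derived above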

Page 41: Vectors

An Introduction to Support Vector Machines

Page 42: Vectors

What is a good Decision Boundary?

Consider a two-class, linearly separable classification problem

Many decision boundaries! The Perceptron algorithm can be used to find such a boundary

Are all decision boundaries equally good?

[Figure: Class 1 and Class 2 points with several possible separating boundaries.]

Page 43: Vectors

Examples of Bad Decision Boundaries

[Figure: two examples of decision boundaries that separate Class 1 from Class 2 but pass very close to the training points.]

Page 44: Vectors

Large-margin Decision Boundary

The decision boundary should be as far away from the data of both classes as possible

We should maximize the margin, m

[Figure: Class 1 and Class 2 separated by a decision boundary with margin m.]

Page 45: Vectors

Hyperplane Classifiers(2)

<w, x_i> + b ≥ +1   for y_i = +1
<w, x_i> + b ≤ -1   for y_i = -1

Page 46: Vectors

Finding the Decision Boundary

Let {x_1, ..., x_n} be our data set and let y_i ∈ {1, -1} be the class label of x_i.

[Figure: Class 1 (y = 1) and Class 2 (y = -1) points separated with margin m.]

w^T x_i + b ≥ 1    for y_i = 1
w^T x_i + b ≤ -1   for y_i = -1

So:   y_i (w^T x_i + b) ≥ 1,   ∀i

Page 47: Vectors

Finding the Decision Boundary

The decision boundary should classify all points correctly.

The decision boundary can be found by solving the following constrained optimization problem:

minimize   (1/2) ||w||^2   subject to   y_i (w^T x_i + b) ≥ 1,  ∀i

This is a constrained optimization problem. Solving it requires the use of Lagrange multipliers.

Page 48: Vectors

The Lagrangian is

L = (1/2) w^T w - Σ_{i=1..n} α_i [ y_i (w^T x_i + b) - 1 ],   with α_i ≥ 0

Note that ||w||^2 = w^T w.

Finding the Decision Boundary

Page 49: Vectors

Setting the gradient of L w.r.t. w and b to zero, we have:

Gradient with respect to w and b

L = (1/2) w^T w - Σ_{i=1..n} α_i [ y_i (w^T x_i + b) - 1 ]
  = (1/2) Σ_{k=1..m} w_k^2 - Σ_{i=1..n} α_i [ y_i (Σ_{k=1..m} w_k x_{ik} + b) - 1 ]

∂L/∂w_k = 0 for all k,   ∂L/∂b = 0

(n: number of examples, m: dimension of the space)

Page 50: Vectors

The Dual Problem

If we substitute w = Σ_{i=1..n} α_i y_i x_i into L, we have

L = Σ_{i=1..n} α_i - (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j x_i^T x_j - b Σ_{i=1..n} α_i y_i

Since Σ_{i=1..n} α_i y_i = 0, the last term vanishes.

This is a function of the α_i only.

Page 51: Vectors

The Dual Problem

The new objective function is in terms of the α_i only. It is known as the dual problem: if we know w, we know all α_i; if we know all α_i, we know w.

The original problem is known as the primal problem.

The objective function of the dual problem needs to be maximized (this comes out of the KKT theory).

The dual problem is therefore:

maximize   Σ_{i=1..n} α_i - (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j x_i^T x_j
subject to   α_i ≥ 0   (property of the α_i when we introduce the Lagrange multipliers)
and   Σ_{i=1..n} α_i y_i = 0   (the result when we differentiate the original Lagrangian w.r.t. b)

Page 52: Vectors

The Dual Problem

This is a quadratic programming (QP) problem: a global maximum of the α_i can always be found.

w can be recovered by

w = Σ_{i=1..n} α_i y_i x_i

Page 53: Vectors

Characteristics of the Solution

Many of the α_i are zero: w is a linear combination of a small number of data points.

This "sparse" representation can be viewed as data compression, as in the construction of the kNN classifier.

The x_i with non-zero α_i are called support vectors (SV). The decision boundary is determined only by the SVs. Let t_j (j = 1, ..., s) be the indices of the s support vectors. We can write

w = Σ_{j=1..s} α_{tj} y_{tj} x_{tj}

Note: w need not be formed explicitly.

Page 54: Vectors

A Geometrical Interpretation

[Figure: Class 1 and Class 2 points with the margin; the support vectors have α_1 = 0.8, α_6 = 1.4 and α_8 = 0.6, and all the other α_i = 0.]

Page 55: Vectors

Characteristics of the Solution

For testing with a new data point z:

compute   w^T z + b = Σ_{j=1..s} α_{tj} y_{tj} (x_{tj}^T z) + b

and classify z as class 1 if the sum is positive, and class 2 otherwise.

Note: w need not be formed explicitly.

Page 56: Vectors

The Quadratic Programming Problem

Many approaches have been proposed: LOQO, CPLEX, etc. (see http://www.numerical.rl.ac.uk/qp/qp.html).

Most are "interior-point" methods: start with an initial solution that can violate the constraints, then improve this solution by optimizing the objective function and/or reducing the amount of constraint violation.

For SVMs, sequential minimal optimization (SMO) seems to be the most popular. A QP with two variables is trivial to solve: each iteration of SMO picks a pair (α_i, α_j) and solves the QP with these two variables; this is repeated until convergence.

In practice, we can just regard the QP solver as a "black box" without bothering how it works.
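As a sketch of the "black box" view, the snippet below hands a small linearly separable data set (made-up values) to scikit-learn's SVC, which solves the dual QP internally with an SMO-type algorithm:

import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],    # Class 1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])   # Class 2
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)   # a very large C approximates the hard-margin case
clf.fit(X, y)

print(clf.support_vectors_)         # the x_i with non-zero alpha_i
print(clf.dual_coef_)               # alpha_i * y_i for the support vectors
print(clf.coef_, clf.intercept_)    # w and b of the maximum-margin hyperplane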

Page 57: Vectors

Non-linearly Separable Problems

We allow an "error" ξ_i in classification; it is based on the output of the discriminant function w^T x + b.

ξ_i approximates the number of misclassified samples.

[Figure: Class 1 and Class 2 points that are not linearly separable; the misclassified points have non-zero slack ξ_i.]

Page 58: Vectors

Soft Margin Hyperplane

The new conditions become

w^T x_i + b ≥ 1 - ξ_i    for y_i = 1
w^T x_i + b ≤ -1 + ξ_i   for y_i = -1
ξ_i ≥ 0,  ∀i

The ξ_i are "slack variables" in the optimization. Note that ξ_i = 0 if there is no error for x_i.

Σ_i ξ_i is an upper bound on the number of errors. We want to minimize

(1/2) ||w||^2 + C Σ_{i=1..n} ξ_i

C is the tradeoff parameter between error and margin.

Page 59: Vectors

The Optimization Problem

L = (1/2) w^T w + C Σ_{i=1..n} ξ_i - Σ_{i=1..n} α_i [ y_i (w^T x_i + b) - 1 + ξ_i ] - Σ_{i=1..n} μ_i ξ_i

∂L/∂w_j = w_j - Σ_{i=1..n} α_i y_i x_{ij} = 0   ⇒   w = Σ_{i=1..n} α_i y_i x_i

∂L/∂ξ_j = C - α_j - μ_j = 0

∂L/∂b = -Σ_{i=1..n} α_i y_i = 0

with α and μ Lagrange multipliers, POSITIVE.

Page 60: Vectors

The Dual Problem

Substituting w = Σ_i α_i y_i x_i, C = α_j + μ_j and Σ_i α_i y_i = 0 into the Lagrangian, the terms in ξ and b cancel and we are left with

L = Σ_{i=1..n} α_i - (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j x_i^T x_j

with C = α_j + μ_j and Σ_{i=1..n} α_i y_i = 0.

Page 61: Vectors

The Optimization Problem

The dual of this new constrained optimization problem is

maximize   Σ_{i=1..n} α_i - (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j x_i^T x_j
subject to   0 ≤ α_i ≤ C   and   Σ_{i=1..n} α_i y_i = 0

The new constraint α_j ≤ C derives from C = α_j + μ_j, since μ and α are positive.

w is recovered as w = Σ_{i=1..n} α_i y_i x_i.

This is very similar to the optimization problem in the linearly separable case, except that there is an upper bound C on the α_i now.

Once again, a QP solver can be used to find the α_i.

Page 62: Vectors

The algorithm tries to keep ξ null while maximising the margin.

The algorithm does not minimise the number of errors. Instead, it minimises the sum of the distances of the errors from the hyperplane:

minimize   (1/2) ||w||^2 + C Σ_{i=1..n} ξ_i

When C increases, the number of errors tends to decrease. In the limit of C tending to infinity, the solution tends to that given by the hard-margin formulation, with 0 errors.

Page 63: Vectors

Soft margin is more robust


Page 64: Vectors

Extension to Non-linear Decision Boundary

So far, we have only considered large-margin classifiers with a linear decision boundary.

How to generalize them to become nonlinear? Key idea: transform x_i to a higher-dimensional space to "make life easier".

Input space: the space where the points x_i are located. Feature space: the space of the φ(x_i) after the transformation.

Why transform? A linear operation in the feature space is equivalent to a non-linear operation in the input space.

Classification can become easier with a proper transformation. In the XOR problem, for example, adding a new feature x_1 x_2 makes the problem linearly separable.

Page 65: Vectors

XOR

X  Y  | XOR
0  0  | 0
0  1  | 1
1  0  | 1
1  1  | 0

is not linearly separable.

X  Y  XY | XOR
0  0  0  | 0
0  1  0  | 1
1  0  0  | 1
1  1  1  | 0

is linearly separable.
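A small sketch of this with scikit-learn (an assumption; the data are just the four XOR rows above): a linear SVM separates the points once the product feature XY is appended.

import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0])                              # XOR labels

X_aug = np.hstack([X, (X[:, 0] * X[:, 1])[:, None]])    # append the XY feature
clf = SVC(kernel="linear", C=1e6).fit(X_aug, t)

print(clf.predict(X_aug))    # [0 1 1 0]: all four points classified correctly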

Page 66: Vectors

Find a feature space

(Figure from S. Mika: Kernel Fisher Discriminant.)

Page 67: Vectors

Transforming the Data

Computation in the feature space can be costly because it is high dimensional

The feature space is typically infinite-dimensional! The kernel trick comes to the rescue.

[Figure: the mapping φ(.) takes points from the input space to the feature space.]

Note: in practice the feature space is of higher dimension than the input space.

Page 68: Vectors

Transforming the Data

(This slide repeats the previous one.)

Page 69: Vectors

The Kernel Trick

Recall the SVM optimization problem

The data points only appear as inner products. As long as we can calculate the inner product in the feature space, we do not need the mapping explicitly.

Many common geometric operations (angles, distances) can be expressed by inner products.

Define the kernel function K by K(x_i, x_j) = φ(x_i)^T φ(x_j).

Page 70: Vectors

An Example for φ(.) and K(.,.)

Suppose φ(.) is given as follows

An inner product in the feature space is

So, if we define the kernel function as follows, there is no need to carry out φ(.) explicitly

This use of a kernel function to avoid carrying out φ(.) explicitly is known as the kernel trick.

Page 71: Vectors

Kernels

Given a mapping φ: x → φ(x), a kernel is represented as the inner product

K(x, y) = <φ(x), φ(y)> = Σ_i φ_i(x) φ_i(y)

A kernel must satisfy Mercer's condition: for every g(x) such that ∫ g(x)^2 dx is finite,

∫∫ K(x, y) g(x) g(y) dx dy ≥ 0

Page 72: Vectors

Modification Due to Kernel Function

Change all inner products to kernel functions. For training:

Original:
maximize   Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j
subject to   0 ≤ α_i ≤ C,   Σ_i α_i y_i = 0

With kernel function:
maximize   Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j)
subject to   0 ≤ α_i ≤ C,   Σ_i α_i y_i = 0

Page 73: Vectors

Modification Due to Kernel Function

For testing, the new data point z is classified as class 1 if f ≥ 0, and as class 2 if f < 0.

Original:   f(z) = Σ_{j=1..s} α_{tj} y_{tj} (x_{tj}^T z) + b

With kernel function:   f(z) = Σ_{j=1..s} α_{tj} y_{tj} K(x_{tj}, z) + b

Page 74: Vectors

More on Kernel Functions

Since the training of an SVM only requires the values K(x_i, x_j), there is no restriction on the form of x_i and x_j.

x_i can be a sequence or a tree, instead of a feature vector.

K(x_i, x_j) is just a similarity measure comparing x_i and x_j.

For a test object z, the discriminant function is essentially a weighted sum of the similarities between z and a pre-selected set of objects (the support vectors).

Page 75: Vectors

Example

Suppose we have 5 one-dimensional data points x1=1, x2=2, x3=4, x4=5, x5=6, with 1, 2, 6 as class 1 and 4, 5 as class 2, i.e. y1=1, y2=1, y3=-1, y4=-1, y5=1.

Page 76: Vectors

Example

[Figure: the points 1, 2, 4, 5, 6 on a line; 1, 2 and 6 belong to class 1, 4 and 5 to class 2.]

Page 77: Vectors

Example

We use the polynomial kernel of degree 2: K(x, y) = (xy + 1)^2.

C is set to 100.

We first find the α_i (i = 1, ..., 5) by solving the dual problem with this kernel.

Page 78: Vectors

Example

By using a QP solver, we get α1=0, α2=2.5, α3=0, α4=7.333, α5=4.833. Note that the constraints are indeed satisfied. The support vectors are {x2=2, x4=5, x5=6}.

The discriminant function is

f(z) = 2.5 (2z + 1)^2 - 7.333 (5z + 1)^2 + 4.833 (6z + 1)^2 + b = 0.6667 z^2 - 5.333 z + b

b is recovered by solving f(2)=1, or by f(5)=-1, or by f(6)=1.

All three give b=9.
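A sketch reproducing this example with scikit-learn (an assumption: its polynomial kernel (gamma x^T y + coef0)^degree equals (xy + 1)^2 when gamma=1, coef0=1, degree=2; the solver may return slightly different but equivalent values):

import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0], [2.0], [4.0], [5.0], [6.0]])
y = np.array([1, 1, -1, -1, 1])

clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=100.0)
clf.fit(X, y)

print(clf.support_)      # indices of the support vectors (expected: x2, x4, x5)
print(clf.dual_coef_)    # alpha_i * y_i, roughly (2.5, -7.333, 4.833)
print(clf.intercept_)    # b, roughly 9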

Page 79: Vectors

Example

[Figure: the value of the discriminant function along the line through 1, 2, 4, 5, 6; it is positive around the class 1 points 1, 2 and 6 and negative around the class 2 points 4 and 5.]

Page 80: Vectors

Kernel Functions

In practical use of SVM, the user specifies the kernel function; the transformation φ(.) is not explicitly stated.

Given a kernel function K(x_i, x_j), the transformation φ(.) is given by its eigenfunctions (a concept in functional analysis).

Eigenfunctions can be difficult to construct explicitly. This is why people only specify the kernel function without worrying about the exact transformation.

Another view: the kernel function, being an inner product, is really a similarity measure between the objects.

Page 81: Vectors

A kernel is associated to a transformation

Given a kernel, in principle it should be possible to recover the transformation into the feature space that originates it.

K(x, y) = (xy + 1)^2 = x^2 y^2 + 2xy + 1

corresponds to the transformation

φ(x) = (x^2, √2 x, 1)
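A quick check of this correspondence with plain NumPy (the input values are chosen arbitrarily):

import numpy as np

phi = lambda x: np.array([x**2, np.sqrt(2) * x, 1.0])   # the transformation above
K = lambda x, y: (x * y + 1) ** 2                        # the kernel

x, y = 3.0, -2.0
print(K(x, y), phi(x) @ phi(y))    # both give 25.0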

Page 82: Vectors

Examples of Kernel Functions

Polynomial kernel up to degree d: K(x, y) = (x^T y + 1)^d

Polynomial kernel of degree d: K(x, y) = (x^T y)^d

Radial basis function kernel with width σ: K(x, y) = exp(-||x - y||^2 / (2σ^2)). The feature space is infinite-dimensional.

Sigmoid with parameters κ and θ: K(x, y) = tanh(κ x^T y + θ). It does not satisfy the Mercer condition for all κ and θ.

Page 83: Vectors


Example

Page 84: Vectors

Building new kernels

If k1(x,y) and k2(x,y) are two valid kernels, then the following kernels are valid:

Linear combination:   k(x,y) = c1 k1(x,y) + c2 k2(x,y)

Exponential:   k(x,y) = exp( k1(x,y) )

Product:   k(x,y) = k1(x,y) k2(x,y)

Polynomial transformation (Q: polynomial with non-negative coefficients):   k(x,y) = Q( k1(x,y) )

Function product (f: any function):   k(x,y) = f(x) k1(x,y) f(y)
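A small sketch of these rules as plain functions (NumPy; the particular k1, k2, constants and f are arbitrary choices for illustration):

import numpy as np

k1 = lambda x, y: x @ y                      # linear kernel
k2 = lambda x, y: (x @ y + 1) ** 2           # polynomial kernel

k_lin  = lambda x, y: 2.0 * k1(x, y) + 0.5 * k2(x, y)   # linear combination (c1, c2 >= 0)
k_exp  = lambda x, y: np.exp(k1(x, y))                   # exponential
k_prod = lambda x, y: k1(x, y) * k2(x, y)                # product
f      = lambda x: 1.0 + np.sum(x**2)
k_fun  = lambda x, y: f(x) * k1(x, y) * f(y)             # function product

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(k_lin(x, y), k_exp(x, y), k_prod(x, y), k_fun(x, y))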

Page 85: Vectors

Polynomial kernel

(Figure from Ben-Hur et al., PLoS Computational Biology 4, 2008.)

Page 86: Vectors

Gaussian RBF kernel

(Figure from Ben-Hur et al., PLoS Computational Biology 4, 2008.)

Page 87: Vectors

Spectral kernel for sequences

Given a DNA sequence x we can count the number of bases (a 4-dimensional feature space):

Φ_1(x) = (n_A, n_C, n_G, n_T)

or the number of dimers (a 16-dimensional space):

Φ_2(x) = (n_AA, n_AC, n_AG, n_AT, n_CA, n_CC, n_CG, n_CT, ...)

or of l-mers (a 4^l-dimensional space).

The spectral kernel is

k_l(x, y) = <Φ_l(x), Φ_l(y)>
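A minimal sketch of this spectrum kernel in plain Python (the sequences below are made-up examples):

from collections import Counter

def spectrum(seq, l):
    # count vector Phi_l(seq): occurrences of every length-l substring
    return Counter(seq[i:i + l] for i in range(len(seq) - l + 1))

def spectrum_kernel(x, y, l):
    # k_l(x, y) = <Phi_l(x), Phi_l(y)>
    phi_x, phi_y = spectrum(x, l), spectrum(y, l)
    return sum(phi_x[m] * phi_y[m] for m in phi_x)

print(spectrum_kernel("ACGTACGT", "ACGGTACG", 2))   # inner product of the dimer counts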

Page 88: Vectors

Choosing the Kernel Function

Probably the most tricky part of using SVM. The kernel function is important because it creates the kernel matrix, which summarizes all the data

Many principles have been proposed (diffusion kernel, Fisher kernel, string kernel, …)

There is even research to estimate the kernel matrix from available information

In practice, a low degree polynomial kernel or RBF kernel with a reasonable width is a good initial try

Note that SVM with RBF kernel is closely related to RBF neural networks, with the centers of the radial basis functions automatically chosen for SVM


Page 89: Vectors

Why SVM Work?

The feature space is often very high dimensional. Why don’t we have the curse of dimensionality?

A classifier in a high-dimensional space has many parameters and is hard to estimate

Vapnik argues that the fundamental problem is not the number of parameters to be estimated. Rather, the problem is about the flexibility of a classifier

Typically, a classifier with many parameters is very flexible, but there are also exceptions

Let x_i = 10i, where i ranges from 1 to n. The classifier can classify all the x_i correctly for every possible combination of class labels on the x_i.

This 1-parameter classifier is very flexible.

Page 90: Vectors

Why SVM works?

Vapnik argues that the flexibility of a classifier should not be characterized by the number of parameters, but by the capacity of the classifier.

This is formalized by the “VC-dimension” of a classifier

Consider a linear classifier in two-dimensional space

If we have three training data points, no matter how those points are labeled, we can classify them perfectly


Page 91: Vectors

VC-dimension

However, if we have four points, we can find a labeling such that the linear classifier fails to be perfect

We can see that 3 is the critical number. The VC-dimension of a linear classifier in a 2D space is 3 because, if we have 3 points in the training set, perfect classification is always possible irrespective of the labeling, whereas for 4 points perfect classification can be impossible.

Page 92: Vectors

VC-dimension

The VC-dimension of the nearest neighbor classifier is infinity, because no matter how many points you have, you get perfect classification on training data

The higher the VC-dimension, the more flexible a classifier is

VC-dimension, however, is a theoretical concept; the VC-dimension of most classifiers is, in practice, difficult to compute exactly.

Qualitatively, if we think a classifier is flexible, it probably has a high VC-dimension


Page 93: Vectors

Other Aspects of SVM

How to use SVM for multi-class classification? One can change the QP formulation to become multi-class

More often, multiple binary classifiers are combined; see DHS 5.2.2 for some discussion.

One can train multiple one-versus-all classifiers, or combine multiple pairwise classifiers “intelligently”

How to interpret the SVM discriminant function value as probability?

By performing logistic regression on the SVM output of a set of data (validation set) that is not used for training

Some SVM software (like libsvm) has these features built in.
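A brief sketch of both points with scikit-learn, whose SVC wraps libsvm (the three-class data are synthetic; multi-class handling and Platt-style probability estimates are built in):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(10, 2))
               for c in ([0, 0], [3, 3], [6, 0])])
y = np.repeat([0, 1, 2], 10)                  # three classes

clf = SVC(kernel="rbf", probability=True)     # probabilities via a logistic fit on the SVM outputs
clf.fit(X, y)

print(clf.predict([[3.1, 2.9]]))              # predicted class (expected: 1)
print(clf.predict_proba([[3.1, 2.9]]))        # class probabilities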

Page 94: Vectors

Software

A list of SVM implementations can be found at http://www.kernel-machines.org/software.html

Some implementations (such as LIBSVM) can handle multi-class classification.

SVMlight is among the earliest implementations of SVM.

Several Matlab toolboxes for SVM are also available.

Page 95: Vectors

Summary: Steps for Classification

Prepare the pattern matrix.

Select the kernel function to use.

Select the parameters of the kernel function and the value of C. You can use the values suggested by the SVM software, or you can set apart a validation set to determine the values of the parameters.

Execute the training algorithm and obtain the α_i.

Unseen data can be classified using the α_i and the support vectors.

Page 96: Vectors

Strengths and Weaknesses of SVM

Strengths

Training is relatively easy: there are no local optima, unlike in neural networks.

It scales relatively well to high-dimensional data.

The tradeoff between classifier complexity and error can be controlled explicitly.

Non-traditional data like strings and trees can be used as input to the SVM, instead of feature vectors.

Weaknesses

Need to choose a "good" kernel function.

Page 97: Vectors

Other Types of Kernel Methods

A lesson learnt in SVM: a linear algorithm in the feature space is equivalent to a non-linear algorithm in the input space

Standard linear algorithms can be generalized to their non-linear versions by going to the feature space.

Kernel principal component analysis, kernel independent component analysis, kernel canonical correlation analysis, kernel k-means and 1-class SVM are some examples.

Page 98: Vectors

Conclusion

SVM is a useful alternative to neural networks. Two key concepts of SVM: maximize the margin and the kernel trick.

Many SVM implementations are available on the web for you to try on your data set!


Page 99: Vectors

Resources

http://www.kernel-machines.org/
http://www.support-vector.net/
http://www.support-vector.net/icml-tutorial.pdf
http://www.kernel-machines.org/papers/tutorial-nips.ps.gz

http://www.clopinet.com/isabelle/Projects/SVM/applist.html


Page 100: Vectors

SVM-light

http://svmlight.joachims.org
Author: Thorsten Joachims, Cornell University.

Can be downloaded and easily installed: http://download.joachims.org/svm_light/current/svm_light.tar.gz

To install SVMlight you need to download svm_light.tar.gz, then:

Create a new directory: mkdir svm_light
Move svm_light.tar.gz to this directory and unpack it with: gunzip -c svm_light.tar.gz | tar xvf -
Now execute make (or make all)

Two programs are compiled: svm_learn (the learning module) and svm_classify (the classification module).

Page 101: Vectors

SVM-light: Training Input

Each line: class label followed by featureN:valueN pairs.

1 1:2 2:1 3:4 4:3
1 1:2 2:1 3:4 4:3
-1 1:2 2:1 3:3 4:0
1 1:2 2:2 3:3 4:3
1 1:2 2:4 3:3 4:2
-1 1:2 2:2 3:3 4:0
-1 1:2 2:0 3:3
-1 1:2 2:4 3:3
-1 1:4 2:5 3:3
1 1:2 2:2 3:3 4:2

Page 102: Vectors

SVM-light: Training

svm_learn [options] example_file model_file

SOME OPTIONS

General options:
 -?         - this help

Learning options:
 -c float   - trade-off between training error and margin (default [avg. x*x]^-1)

Performance estimation options:
 -x [0,1]   - compute leave-one-out estimates (default 0)

Kernel options:
 -t int     - type of kernel function:
              0: linear (default)
              1: polynomial (s a*b + c)^d
              2: radial basis function exp(-gamma ||a-b||^2)
              3: sigmoid tanh(s a*b + c)
              4: user-defined kernel from kernel.h
 -d int     - parameter d in polynomial kernel
 -g float   - parameter gamma in rbf kernel
 -s float   - parameter s in sigmoid/poly kernel
 -r float   - parameter c in sigmoid/poly kernel
 -u string  - parameter of user-defined kernel

Optimization

Page 103: Vectors

SVM-light: Trained Model

SVM-light Version V6.02
0 # kernel type
3 # kernel parameter -d
1 # kernel parameter -g
1 # kernel parameter -s
1 # kernel parameter -r
empty # kernel parameter -u
4 # highest feature index
12 # number of training documents
13 # number of support vectors plus 1
1.0380931 # threshold b, each following line is a SV (starting with alpha*y)
0.03980964156284725469214791360173 1:2 2:4 3:3 4:0 #
-0.018316632908270628204983054843069 1:4 2:5 3:3 4:0 #
-0.03980964156284725469214791360173 1:2 2:1 3:3 4:0 #
0.03980964156284725469214791360173 1:2 2:1 3:4 4:3 #
-0.03980964156284725469214791360173 1:2 2:0 3:3 4:0 #
0.03980964156284725469214791360173 1:2 2:4 3:3 4:2 #
0.03980964156284725469214791360173 1:2 2:2 3:3 4:2 #
0.03980964156284725469214791360173 1:2 2:1 3:4 4:3 #
-0.037841055392657176048576417315417 1:3 2:1 3:3 4:0 #
-0.03980964156284725469214791360173 1:2 2:2 3:3 4:0 #
0.03980964156284725469214791360173 1:2 2:2 3:3 4:3 #
0.016345916179801231460366750525282 1:1 2:2 3:4 4:3 #

Page 104: Vectors

SVM-light: Predicting

svm_classify [options] example_file model_file output_file


Page 105: Vectors

SVM-light: Prediction

0.88647894
0.88647894
0.81321667
0.24665358
0.29204665
0.99999997
-0.8864726
-0.93186567
-0.84107953
-1.000032
-0.90916914
-0.99999364