CS 541: Artificial Intelligence
DESCRIPTION
CS 541: Artificial Intelligence. Lecture IX: Supervised Machine Learning. Slides Credit: Peter Norvig and Sebastian Thrun. Schedule. TA: Vishakha Sharma, Email Address: [email protected]. Re-cap of Lecture VIII: Temporal models use state and sensor variables replicated over time.

TRANSCRIPT
![Page 1: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/1.jpg)
CS 541: Artificial Intelligence
Lecture IX: Supervised Machine Learning
Slides Credit: Peter Norvig and Sebastian Thrun
![Page 2: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/2.jpg)
Schedule
TA: Vishakha Sharma, Email Address: [email protected]
| Week | Date | Topic | Reading | Homework |
|---|---|---|---|---|
| 1 | 08/30/2011 | Introduction & Intelligent Agent | Ch 1 & 2 | N/A |
| 2 | 09/06/2011 | Search: search strategy and heuristic search | Ch 3 & 4s | HW1 (Search) |
| 3 | 09/13/2011 | Search: Constraint Satisfaction & Adversarial Search | Ch 4s & 5 & 6 | Teaming Due |
| 4 | 09/20/2011 | Logic: Logic Agent & First Order Logic | Ch 7 & 8s | HW1 due, Midterm Project (Game) |
| 5 | 09/27/2011 | Logic: Inference on First Order Logic | Ch 8s & 9 | |
| 6 | 10/04/2011 | Uncertainty and Bayesian Network | Ch 13 & 14s | HW2 (Logic) |
| | 10/11/2011 | No class (day follows a Monday schedule) | | |
| 7 | 10/18/2011 | Midterm Presentation | | Midterm Project Due |
| 8 | 10/25/2011 | Inference in Bayesian Network | Ch 14s | HW2 Due, HW3 (Probabilistic Reasoning) |
| 9 | 11/01/2011 | Probabilistic Reasoning over Time | Ch 15 | |
| 10 | 11/08/2011 | TA Session: Review of HW1 & 2 | | HW3 due, HW4 (MDP) |
| 11 | 11/15/2011 | Supervised Machine Learning | Ch 18 | |
| 12 | 11/22/2011 | Markov Decision Process | Ch 16 | HW4 due |
| 13 | 11/29/2011 | Reinforcement Learning | Ch 21 | |
| 14 | 12/06/2011 | Final Project Presentation | | Final Project Due |
![Page 3: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/3.jpg)
Re-cap of Lecture VIII
- Temporal models use state and sensor variables replicated over time
- Markov assumptions and stationarity assumption, so we need:
  - Transition model P(X_t | X_{t-1})
  - Sensor model P(E_t | X_t)
- Tasks are filtering, prediction, smoothing, most likely sequence; all done recursively with constant cost per time step
- Hidden Markov models have a single discrete state variable; widely used for speech recognition
- Kalman filters allow n state variables, linear Gaussian, O(n³) update
- Dynamic Bayesian nets subsume HMMs and Kalman filters; exact update intractable
- Particle filtering is a good approximate filtering algorithm for DBNs
![Page 4: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/4.jpg)
Outline
- Machine Learning
- Classification (Naïve Bayes)
- Regression (Linear, Smoothing)
- Linear Separation (Perceptron, SVMs)
- Non-parametric classification (KNN)
![Page 5: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/5.jpg)
Machine Learning
![Page 6: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/6.jpg)
Machine Learning
- Up until now: how to reason in a given model
- Machine learning: how to acquire a model on the basis of data / experience
  - Learning parameters (e.g. probabilities)
  - Learning structure (e.g. BN graphs)
  - Learning hidden concepts (e.g. clustering)
![Page 7: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/7.jpg)
Machine Learning Lingo
- What? Parameters, Structure, Hidden concepts
- What from? Supervised, Unsupervised, Reinforcement, Self-supervised
- What for? Prediction, Diagnosis, Compression, Discovery
- How? Passive, Active, Online, Offline
- Output? Classification, Regression, Clustering, Ranking
- Details? Generative, Discriminative, Smoothing
![Page 8: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/8.jpg)
Supervised Machine Learning
Given a training set:
(x1, y1), (x2, y2), (x3, y3), …, (xn, yn)
where each yi was generated by an unknown function y = f(x), discover a function h that approximates the true function f.
![Page 9: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/9.jpg)
Classification (Naïve Bayes)
![Page 10: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/10.jpg)
Classification Example: Spam Filter
- Input: x = email
- Output: y = "spam" or "ham"
- Setup:
  - Get a large collection of example emails, each labeled "spam" or "ham"
  - Note: someone has to hand label all this data!
  - Want to learn to predict labels of new, future emails
- Features: the attributes used to make the ham / spam decision
  - Words: FREE!
  - Text patterns: $dd, CAPS
  - Non-text: SenderInContacts
  - …
Dear Sir.
First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …
TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT.
99 MILLION EMAIL ADDRESSES FOR ONLY $99
Ok, Iknow this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened.
![Page 11: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/11.jpg)
A Spam Filter
- Naïve Bayes spam filter
- Data: collection of emails, labeled spam or ham
  - Note: someone has to hand label all this data!
  - Split into training, held-out, test sets
- Classifiers:
  - Learn on the training set
  - (Tune it on a held-out set)
  - Test it on new emails
![Page 12: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/12.jpg)
Naïve Bayes for Text
- Bag-of-words Naïve Bayes:
  - Predict unknown class label (spam vs. ham)
  - Assume evidence features (e.g. the words) are independent
  - Generative model
- Tied distributions and bag-of-words
  - Usually, each variable gets its own conditional probability distribution P(F|Y)
  - In a bag-of-words model:
    - Each position is identically distributed
    - All positions share the same conditional probs P(W|C)
    - Why make this assumption?
- Word at position i, not the i-th word in the dictionary!
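To make the bag-of-words assumption concrete, here is a minimal sketch of such a classifier (the tiny corpus, the helper names train_nb/predict_nb, and the add-k smoothing, which later slides motivate, are all illustrative assumptions rather than anything shown on this slide):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (list_of_words, label). Returns class counts and per-class word counts."""
    priors = Counter(label for _, label in docs)
    word_counts = {label: Counter() for label in priors}
    for words, label in docs:
        word_counts[label].update(words)      # bag of words: position is ignored
    return priors, word_counts

def predict_nb(words, priors, word_counts, k=1):
    """Pick the class maximizing log P(Y) + sum_i log P(w_i | Y), with add-k smoothing."""
    vocab = {w for counts in word_counts.values() for w in counts}
    n_docs = sum(priors.values())
    scores = {}
    for label in priors:
        total = sum(word_counts[label].values())
        score = math.log(priors[label] / n_docs)
        for w in words:
            score += math.log((word_counts[label][w] + k) / (total + k * len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Illustrative toy data
docs = [("free money now".split(), "spam"), ("meeting at noon".split(), "ham"),
        ("free offer click".split(), "spam"), ("lunch at noon ok".split(), "ham")]
priors, counts = train_nb(docs)
print(predict_nb("free lunch".split(), priors, counts))   # -> "spam" on this toy corpus
```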
![Page 13: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/13.jpg)
General Naïve Bayes
- General probabilistic model: |Y| × |F|^n parameters
- General Naïve Bayes model: we only specify how each feature depends on the class
  - Total number of parameters is linear in n: |Y| parameters for the class prior, plus n × |F| × |Y| parameters for the feature conditionals
- (Structure: a class node Y with feature nodes F1, F2, …, Fn as its children)
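For reference, the two model forms this slide contrasts, written out (a reconstruction of the standard formulas; the slide's own equations survive only in the thumbnail image):

$$P(Y, F_1, \ldots, F_n) \qquad \text{(full joint model: } |Y| \times |F|^n \text{ parameters)}$$

$$P(Y, F_1, \ldots, F_n) = P(Y) \prod_{i=1}^{n} P(F_i \mid Y) \qquad \text{(Naïve Bayes: } |Y| \text{ prior parameters plus } n \times |F| \times |Y| \text{ conditionals)}$$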
![Page 14: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/14.jpg)
Example: Spam Filtering
- Model: the bag-of-words Naïve Bayes model from the previous slides
- What are the parameters?
  - P(Y): ham: 0.66, spam: 0.33
  - P(W | Y), one word distribution per class:
    - the: 0.0156, to: 0.0153, and: 0.0115, of: 0.0095, you: 0.0093, a: 0.0086, with: 0.0080, from: 0.0075, …
    - the: 0.0210, to: 0.0133, of: 0.0119, 2002: 0.0110, with: 0.0108, from: 0.0107, and: 0.0105, a: 0.0100, …
- Where do these tables come from? Counts from examples!
![Page 15: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/15.jpg)
Spam Example

| Word | P(w \| spam) | P(w \| ham) | Tot Spam (running log prob) | Tot Ham (running log prob) |
|---|---|---|---|---|
| (prior) | 0.33333 | 0.66666 | -1.1 | -0.4 |
| Gary | 0.00002 | 0.00021 | -11.8 | -8.9 |
| would | 0.00069 | 0.00084 | -19.1 | -16.0 |
| you | 0.00881 | 0.00304 | -23.8 | -21.8 |
| like | 0.00086 | 0.00083 | -30.9 | -28.9 |
| to | 0.01517 | 0.01339 | -35.1 | -33.2 |
| lose | 0.00008 | 0.00002 | -44.5 | -44.0 |
| weight | 0.00016 | 0.00002 | -53.3 | -55.0 |
| while | 0.00027 | 0.00027 | -61.5 | -63.2 |
| you | 0.00881 | 0.00304 | -66.2 | -69.0 |
| sleep | 0.00006 | 0.00001 | -76.0 | -80.5 |

P(spam | words) = e^(-76) / (e^(-76) + e^(-80.5)) = 98.9%
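A quick check of the last line: the "Tot" columns are running sums of natural-log probabilities, and the posterior comes from exponentiating and normalizing the two totals (the max-subtraction below is just for numerical stability):

```python
import math

log_spam, log_ham = -76.0, -80.5           # running totals from the last row of the table
m = max(log_spam, log_ham)                 # subtract the max before exponentiating
p_spam = math.exp(log_spam - m) / (math.exp(log_spam - m) + math.exp(log_ham - m))
print(f"P(spam | words) = {p_spam:.3f}")   # 0.989, i.e. 98.9%
```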
![Page 16: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/16.jpg)
Example: Overfitting
- Posteriors determined by relative probabilities (odds ratios):
  - Words with infinite ham odds: south-west: inf, nation: inf, morally: inf, nicely: inf, extent: inf, seriously: inf, …
  - Words with infinite spam odds: screens: inf, minute: inf, guaranteed: inf, $205.00: inf, delivery: inf, signature: inf, …
- What went wrong here?
![Page 17: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/17.jpg)
Generalization and Overfitting
- Raw counts will overfit the training data!
  - Unlikely that every occurrence of "minute" is 100% spam
  - Unlikely that every occurrence of "seriously" is 100% ham
  - What about all the words that don't occur in the training set at all? 0/0?
  - In general, we can't go around giving unseen events zero probability
- At the extreme, imagine using the entire email as the only feature
  - Would get the training data perfect (if deterministic labeling)
  - Wouldn't generalize at all
- Just making the bag-of-words assumption gives us some generalization, but isn't enough
- To generalize better: we need to smooth or regularize the estimates
![Page 18: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/18.jpg)
Estimation: Smoothing
- Maximum likelihood estimates
- Problems with maximum likelihood estimates:
  - If I flip a coin once, and it's heads, what's the estimate for P(heads)?
  - What if I flip 10 times with 8 heads?
  - What if I flip 10M times with 8M heads?
- Basic idea: we have some prior expectation about parameters (here, the probability of heads)
  - Given little evidence, we should skew towards our prior
  - Given a lot of evidence, we should listen to the data
![Page 19: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/19.jpg)
Estimation: Laplace Smoothing
- Laplace's estimate (extended): pretend you saw every outcome k extra times
- What's Laplace with k = 0?
- k is the strength of the prior
- Laplace for conditionals: smooth each condition independently
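The extended Laplace estimate described here is usually written as follows (a reconstruction of the standard formulas, with c(x) the observed count, N the number of samples, and |X| the number of possible outcomes):

$$P_{LAP,k}(x) = \frac{c(x) + k}{N + k\,|X|} \qquad\qquad P_{LAP,k}(x \mid y) = \frac{c(x, y) + k}{c(y) + k\,|X|}$$

With k = 0 this is exactly the maximum likelihood estimate; as k grows, the estimate is pulled toward the uniform distribution, which is why k acts as the strength of the prior.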
![Page 20: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/20.jpg)
Estimation: Linear Interpolation
- In practice, Laplace often performs poorly for P(X|Y):
  - When |X| is very large
  - When |Y| is very large
- Another option: linear interpolation
  - Also get P(X) from the data
  - Make sure the estimate of P(X|Y) isn't too different from P(X)
- What if α is 0? 1?
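The interpolated estimate this slide alludes to is usually written as below (a reconstruction of the standard form; the hats denote empirical maximum likelihood estimates, and α is the interpolation weight that the slide's last question is about):

$$P_{LIN}(x \mid y) = \alpha\, \hat{P}(x \mid y) + (1 - \alpha)\, \hat{P}(x)$$

With α = 1 the conditional is used as-is (no interpolation); with α = 0 the conditional is ignored and every class shares the marginal P(x).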
![Page 21: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/21.jpg)
Real NB: Smoothing
- For real classification problems, smoothing is critical
- New odds ratios:
  - helvetica: 11.4, seems: 10.8, group: 10.2, ago: 8.4, areas: 8.3, …
  - verdana: 28.8, Credit: 28.4, ORDER: 27.2, <FONT>: 26.9, money: 26.5, …
- Do these make more sense?
![Page 22: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/22.jpg)
Tuning on Held-Out Data
- Now we've got two kinds of unknowns
  - Parameters: the probabilities P(X|Y), P(Y)
  - Hyperparameters, like the amount of smoothing to do: k
- How to learn?
  - Learn parameters from training data
  - Must tune hyperparameters on different data. Why?
  - For each value of the hyperparameters, train and test on the held-out (validation) data
  - Choose the best value and do a final test on the test data
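A minimal sketch of that procedure, reusing the illustrative train_nb/predict_nb helpers from the earlier Naïve Bayes sketch (the toy splits and candidate k values are assumptions for illustration):

```python
# Toy splits in the same (words, label) format used by the earlier sketch
train_docs   = [("free money now".split(), "spam"), ("meeting at noon".split(), "ham")]
heldout_docs = [("free offer".split(), "spam"), ("lunch at noon".split(), "ham")]
test_docs    = [("click for free money".split(), "spam"), ("noon meeting ok".split(), "ham")]

def accuracy(docs, priors, counts, k):
    return sum(predict_nb(words, priors, counts, k) == y for words, y in docs) / len(docs)

# 1. Learn parameters on the training set only
priors, counts = train_nb(train_docs)
# 2. Tune the hyperparameter (smoothing strength k) on the held-out set
best_k = max([0.001, 0.01, 0.1, 1.0, 10.0],
             key=lambda k: accuracy(heldout_docs, priors, counts, k))
# 3. Look at the test set exactly once, for the final number
print("best k:", best_k, "test accuracy:", accuracy(test_docs, priors, counts, best_k))
```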
![Page 23: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/23.jpg)
How to Learn
- Data: labeled instances, e.g. emails marked spam/ham
  - Training set
  - Held-out (validation) set
  - Test set
- Features: attribute-value pairs which characterize each x
- Experimentation cycle
  - Learn parameters (e.g. model probabilities) on training set
  - Tune hyperparameters on held-out set
  - Compute accuracy on test set
  - Very important: never "peek" at the test set!
- Evaluation
  - Accuracy: fraction of instances predicted correctly
- Overfitting and generalization
  - Want a classifier which does well on test data
  - Overfitting: fitting the training data very closely, but not generalizing well to test data
![Page 24: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/24.jpg)
What to Do About Errors?
- Need more features; words aren't enough!
  - Have you emailed the sender before?
  - Have 1K other people just gotten the same email?
  - Is the sending information consistent?
  - Is the email in ALL CAPS?
  - Do inline URLs point where they say they point?
  - Does the email address you by (your) name?
- Can add these information sources as new variables in the Naïve Bayes model
![Page 25: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/25.jpg)
A Digit Recognizer
- Input: x = pixel grids
- Output: y = a digit 0-9
![Page 26: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/26.jpg)
Example: Digit Recognition
- Input: x = images (pixel grids)
- Output: y = a digit 0-9
- Setup:
  - Get a large collection of example images, each labeled with a digit
  - Note: someone has to hand label all this data!
  - Want to learn to predict labels of new, future digit images
- Features: the attributes used to make the digit decision
  - Pixels: (6,8) = ON
  - Shape patterns: NumComponents, AspectRatio, NumLoops
  - …
(Example digit images with labels: 0, 1, 2, 1, ??)
![Page 27: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/27.jpg)
Naïve Bayes for Digits
- Simple version:
  - One feature Fij for each grid position <i,j>
  - Boolean features
  - Each input maps to a feature vector of on/off pixel values
  - Here: lots of features, each is binary valued
- Naïve Bayes model: P(Y | F) ∝ P(Y) ∏(i,j) P(Fij | Y)
![Page 28: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/28.jpg)
Learning Model Parameters

| Digit y | P(Y = y) | P(F = on \| Y = y), first example pixel | P(F = on \| Y = y), second example pixel |
|---|---|---|---|
| 1 | 0.1 | 0.01 | 0.05 |
| 2 | 0.1 | 0.05 | 0.01 |
| 3 | 0.1 | 0.05 | 0.90 |
| 4 | 0.1 | 0.30 | 0.80 |
| 5 | 0.1 | 0.80 | 0.90 |
| 6 | 0.1 | 0.90 | 0.90 |
| 7 | 0.1 | 0.05 | 0.25 |
| 8 | 0.1 | 0.60 | 0.85 |
| 9 | 0.1 | 0.50 | 0.60 |
| 0 | 0.1 | 0.80 | 0.80 |

(The two conditional columns are the estimates for the two example grid positions highlighted on the slide.)
![Page 29: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/29.jpg)
Problem: Overfitting
2 wins!!
![Page 30: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/30.jpg)
Regression (Linear, smoothing)
![Page 31: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/31.jpg)
Regression
- Start with a very simple example: linear regression
- What you learned in high school math, from a new perspective
- Linear model: y = m x + b, i.e. hw(x) = y = w1 x + w0
- Find best values for parameters
  - "Maximize goodness of fit"
  - "Maximize probability" or "minimize loss"
![Page 32: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/32.jpg)
Regression: Minimizing Loss
- Assume the true function f is given by y = f(x) = m x + b + noise, where the noise is normally distributed
- Then the most probable values of the parameters are found by minimizing the squared-error loss:

Loss(hw) = Σj (yj − hw(xj))²
![Page 33: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/33.jpg)
Regression: Minimizing Loss
![Page 34: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/34.jpg)
Regression: Minimizing Loss
Choose weights to minimize the sum of squared errors
![Page 35: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/35.jpg)
Regression: Minimizing Loss
![Page 36: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/36.jpg)
Regression: Minimizing Loss
y = w1 x + w0
Linear algebra gives an exact solution to the minimization problem
![Page 37: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/37.jpg)
Linear Algebra Solution
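The transcript does not capture the slide's equations; for the univariate model y = w1 x + w0, the standard exact solution (obtained by setting the partial derivatives of the squared-error loss to zero) is:

$$w_1 = \frac{N \sum_j x_j y_j - \big(\sum_j x_j\big)\big(\sum_j y_j\big)}{N \sum_j x_j^2 - \big(\sum_j x_j\big)^2} \qquad\qquad w_0 = \frac{\sum_j y_j - w_1 \sum_j x_j}{N}$$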
![Page 38: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/38.jpg)
Don’t Always Trust Linear Models
![Page 39: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/39.jpg)
Regression by Gradient Descent
w ← any point in parameter space
loop until convergence do:
    for each wi in w do:
        wi ← wi − α ∂Loss(w) / ∂wi
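A minimal sketch of this update rule for the univariate model hw(x) = w1 x + w0 with squared-error loss (the learning rate, iteration count, and synthetic data are illustrative assumptions):

```python
import numpy as np

def gradient_descent_fit(x, y, alpha=0.01, iters=5000):
    """Fit y ~ w1*x + w0 by gradient descent on Loss(w) = sum_j (y_j - h_w(x_j))^2."""
    w0, w1 = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        err = (w1 * x + w0) - y            # h_w(x_j) - y_j
        grad_w0 = 2 * np.sum(err)          # dLoss/dw0
        grad_w1 = 2 * np.sum(err * x)      # dLoss/dw1
        w0 -= alpha * grad_w0 / n          # divide by n to keep the step size stable
        w1 -= alpha * grad_w1 / n
    return w0, w1

# Illustrative data: y = 2x + 1 plus Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, 50)
print(gradient_descent_fit(x, y))          # close to (1.0, 2.0)
```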
![Page 40: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/40.jpg)
Multivariate Regression
- You learned this in math class too: hw(x) = w · x = wᵀx = Σi wi xi
- The most probable set of weights, w* (minimizing squared error): w* = (Xᵀ X)⁻¹ Xᵀ y
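A small numpy sketch of this normal-equation solution (the data are synthetic and illustrative; in practice np.linalg.lstsq or np.linalg.solve is preferred to forming the explicit inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 3))])   # intercept column + 3 features
true_w = np.array([0.5, 2.0, -1.0, 3.0])
y = X @ true_w + rng.normal(0, 0.1, 100)

# w* = (X^T X)^{-1} X^T y
w_star = np.linalg.inv(X.T @ X) @ X.T @ y
print(w_star)   # close to [0.5, 2.0, -1.0, 3.0]
```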
![Page 41: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/41.jpg)
Overfitting
- To avoid overfitting, don't just minimize loss; maximize probability, including a prior over w
- Can be stated as minimization: Cost(h) = EmpiricalLoss(h) + λ Complexity(h)
- For linear models, consider Complexity(hw) = Lq(w) = Σi |wi|^q
  - L1 regularization minimizes the sum of absolute values
  - L2 regularization minimizes the sum of squares
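For the q = 2 case the regularized cost still has a closed-form minimizer; a brief sketch under illustrative data and an illustrative λ (not taken from the slide):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, 0.0, 0.0, 2.0, 0.0]) + rng.normal(0, 0.1, 30)

lam = 0.5   # regularization strength lambda
# L2-regularized (ridge) least squares: minimize ||y - Xw||^2 + lam * ||w||^2,
# whose minimizer is w* = (X^T X + lam I)^{-1} X^T y
w_l2 = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
print(w_l2)   # weights shrunk toward zero relative to plain least squares
```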
![Page 42: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/42.jpg)
Regularization and Sparsity
(Figure: L1 regularization vs. L2 regularization)
Cost(h) = EmpiricalLoss(h) + λ Complexity(h)
![Page 43: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/43.jpg)
Linear Separation
Perceptron, SVM
![Page 44: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/44.jpg)
Linear Separator
![Page 45: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/45.jpg)
Perceptron
![Page 46: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/46.jpg)
Perceptron Algorithm
- Start with random w0, w1
- Pick training example <x, y>
- Update (α is the learning rate):
  - w1 ← w1 + α (y − f(x)) x
  - w0 ← w0 + α (y − f(x))
- Converges to a linear separator (if one exists)
- Picks "a" linear separator (a good one?)
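A minimal sketch of these updates for a one-dimensional input, as on the slide (the hard-threshold prediction f(x), the {0, 1} labels, and the toy data are illustrative assumptions):

```python
import random

def perceptron_train(data, alpha=0.1, epochs=100):
    """data: list of (x, y) pairs with y in {0, 1}; returns (w0, w1)."""
    w0, w1 = random.random(), random.random()
    for _ in range(epochs):
        for x, y in data:
            f_x = 1 if w1 * x + w0 >= 0 else 0     # current prediction
            w1 += alpha * (y - f_x) * x            # w1 <- w1 + alpha (y - f(x)) x
            w0 += alpha * (y - f_x)                # w0 <- w0 + alpha (y - f(x))
    return w0, w1

# Illustrative 1-D data, separable around x = 5
data = [(1, 0), (2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1), (9, 1)]
print(perceptron_train(data))
```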
![Page 47: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/47.jpg)
What Linear Separator to Pick?
![Page 48: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/48.jpg)
What Linear Separator to Pick?
- Maximizes the "margin"
Support Vector Machines
![Page 49: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/49.jpg)
Non-Separable Data?
- Not linearly separable for x1, x2
- What if we add a feature?
- x3 = x1² + x2²
- See: "Kernel Trick"
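A small sketch of why the added feature helps (the circular toy data and the threshold at x3 = 1 are illustrative assumptions): points inside and outside a circle cannot be split by any line in (x1, x2), but after adding x3 = x1² + x2² a single plane separates them.

```python
import numpy as np

rng = np.random.default_rng(2)
# Class 0: inside a circle of radius ~0.8; class 1: outside radius ~1.2
r = np.concatenate([rng.uniform(0.0, 0.8, 50), rng.uniform(1.2, 2.0, 50)])
theta = rng.uniform(0, 2 * np.pi, 100)
x1, x2 = r * np.cos(theta), r * np.sin(theta)
y = np.concatenate([np.zeros(50), np.ones(50)])

x3 = x1**2 + x2**2                         # the added feature
print(np.all((x3 > 1.0) == (y == 1)))      # True: the plane x3 = 1 separates the classes
```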
![Page 50: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/50.jpg)
Non-parametric classification
![Page 51: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/51.jpg)
Nonparametric Models
If the process of learning good values for parameters is prone to overfitting, can we do without parameters?
![Page 52: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/52.jpg)
Nearest-Neighbor Classification
- Nearest neighbor for digits:
  - Take new image
  - Compare to all training images
  - Assign based on closest example
- Encoding: image is a vector of intensities
- What's the similarity function?
  - Dot product of two image vectors?
  - Usually normalize vectors so ||x|| = 1
  - min = 0 (when?), max = 1 (when?)
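A minimal sketch of this similarity-based classifier (the unit-normalization helper, the toy 4-pixel "images", and the function name are illustrative assumptions):

```python
import numpy as np

def nearest_neighbor_predict(x, train_X, train_y):
    """1-nearest-neighbor by dot-product similarity on unit-normalized vectors."""
    def normalize(v):
        norms = np.linalg.norm(v, axis=-1, keepdims=True)
        return v / np.where(norms == 0, 1, norms)
    sims = normalize(train_X) @ normalize(x)   # similarity to every training image
    return train_y[np.argmax(sims)]            # label of the closest example

# Illustrative toy "images" (4-pixel intensity vectors) with labels
train_X = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
train_y = np.array([0, 1, 2])
print(nearest_neighbor_predict(np.array([0.1, 0.9, 0.8, 0.0]), train_X, train_y))  # -> 1
```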
![Page 53: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/53.jpg)
Earthquakes and Explosions
Using logistic regression (similar to linear regression) to do linear classification
![Page 54: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/54.jpg)
K=1 Nearest Neighbors
Using nearest neighbors to do classification
![Page 55: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/55.jpg)
K=5 Nearest Neighbors
Even with no parameters, you still have hyperparameters!
![Page 56: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/56.jpg)
Curse of Dimensionality
Average neighborhood size for 10-nearest neighbors, n dimensions, 1M uniform points
![Page 57: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/57.jpg)
Curse of Dimensionality
Proportion of points that are within the outer shell, 1% of thickness of the hypercube
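Both effects are easy to reproduce with the standard back-of-the-envelope estimates (these formulas are the usual approximations, not transcribed from the slides): capturing k of N uniform points in an n-dimensional unit hypercube needs a sub-cube of edge length about (k/N)^(1/n), and the fraction of points within an outer shell of 1% thickness is 1 − 0.98^n.

```python
# Average 10-nearest-neighbor neighborhood size among 1M uniform points,
# and the fraction of points within 1% of the boundary, as dimension n grows.
for n in [1, 2, 3, 10, 100, 1000]:
    edge = (10 / 1_000_000) ** (1 / n)     # edge length capturing ~10 of 1M points
    shell = 1 - 0.98 ** n                  # proportion in the outer 1% shell
    print(f"n={n:5d}  edge length ~ {edge:.3f}  outer-shell fraction ~ {shell:.3f}")
```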
![Page 58: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/58.jpg)
Summary
- Machine Learning
- Classification (Naïve Bayes)
- Regression (Linear, Smoothing)
- Linear Separation (Perceptron, SVMs)
- Non-parametric classification (KNN)
![Page 59: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/59.jpg)
Machine Learning Lingo
- What? Parameters, Structure, Hidden concepts
- What from? Supervised, Unsupervised, Reinforcement, Self-supervised
- What for? Prediction, Diagnosis, Compression, Discovery
- How? Passive, Active, Online, Offline
- Output? Classification, Regression, Clustering, Ranking
- Details? Generative, Discriminative, Smoothing
![Page 60: CS 541: Artificial Intelligence](https://reader030.vdocuments.net/reader030/viewer/2022032806/56813437550346895d9b2b8f/html5/thumbnails/60.jpg)
HW#4
- Russell AI Book (Edition 3)
- Problem 15.1 (1 point)
- Problem 15.2 (2 points)
- Problem 15.6 (2 points)
- Problem 15.13 (3 points)
- Problem 15.14 (2 points)
- Bonus: Problem 15.10 (3 points)