Stanford ML Neural Network
DESCRIPTION
Explanation of the Stanford Online Machine Learning class, chapter 8: neural networks.
TRANSCRIPT
![Page 1: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/1.jpg)
Stanford ML Lecture #8: Neural Network Representation
S. Takei @shtaag
Thursday, January 26, 2012
![Page 2: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/2.jpg)
Reference
Stanford Online ML class #8
"Machine Learning: An Algorithmic Perspective" (Marsland, 2009)
![Page 3: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/3.jpg)
Why are neural networks necessary?
In non-linear classification problems,
polynomial regression is not efficient!!
e.g. a 50 × 50-pixel picture:
n = 2500 raw features
quadratic features = 2500C2 ≈ 3,000,000
O(T·n²), where T = number of iterations
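The feature blow-up above can be checked directly: taking all pairwise products of n raw features gives "n choose 2" quadratic terms. A quick sketch:

```python
# Quadratic feature count for a 50 x 50 = 2500-pixel input:
# all pairwise products x_i * x_j give "n choose 2" extra features.
from math import comb

n = 50 * 50                 # raw pixel features
quadratic = comb(n, 2)      # number of distinct pairwise product terms
print(n, quadratic)         # about 3 million quadratic features
```

So even a small image makes the quadratic feature set roughly a thousand times larger than the raw input.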
![Page 4: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/4.jpg)
Neuron Model
logistic activation function
O(T·m·n), where m = number of inputs, n = number of outputs, T = iterations
more efficient than polynomial regression!!
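A minimal sketch of the neuron model above: a single unit takes a weighted sum of its inputs plus a bias, then applies the logistic activation function (the variable names here are illustrative, not from the slides):

```python
import math

def sigmoid(z):
    """Logistic activation: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w, b):
    """One logistic unit: weighted sum of inputs, then sigmoid."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)
```

Evaluating one unit costs O(m) for m inputs, which is where the O(T·m·n) total for a full layer over T iterations comes from.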
![Page 5: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/5.jpg)
Simple type: the PERCEPTRON
one layer of input nodes
one layer of output nodes
weighted connections
forward propagation only
learning:
update the weights based on an error function
error function = difference between the output and target values
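The learning rule above can be sketched as a single update step: compute the thresholded output, take the error as the difference between target and output, and nudge each weight in proportion (the learning rate `lr` and function name are illustrative assumptions):

```python
def perceptron_step(x, w, b, target, lr=0.1):
    """One perceptron update: predict, then nudge weights by the error."""
    activation = sum(wi * xi for wi, xi in zip(w, x)) + b
    y = 1 if activation > 0 else 0          # hard-threshold output
    error = target - y                      # target minus actual output
    w = [wi + lr * error * xi for wi, xi in zip(w, x)]
    b = b + lr * error
    return w, b
```

When the prediction is already correct the error is zero and nothing changes; otherwise the weights move toward classifying that example correctly.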
![Page 6: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/6.jpg)
Represents a logical operation:
AND

| Input1 | Input2 | out |
|--------|--------|-----|
| 0      | 0      | 0   |
| 0      | 1      | 0   |
| 1      | 0      | 0   |
| 1      | 1      | 1   |

(plot: In1 vs. In2 — the two classes are separated by a linear boundary)
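A single logistic unit can realize the AND table above. One illustrative weight choice (a large negative bias with two positive weights, chosen so the sigmoid saturates near 0 or 1; these exact numbers are an assumption, not from the slides):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def and_unit(x1, x2):
    """Single logistic unit computing AND: bias -30, weights +20 each,
    so z is +10 only when both inputs are 1, and <= -10 otherwise."""
    return sigmoid(-30 + 20 * x1 + 20 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(and_unit(x1, x2)))
```

The same template with different bias/weights (e.g. bias -10) gives the OR table on the next slide.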
![Page 7: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/7.jpg)
Represents a logical operation:
OR

| Input1 | Input2 | out |
|--------|--------|-----|
| 0      | 0      | 0   |
| 0      | 1      | 1   |
| 1      | 0      | 1   |
| 1      | 1      | 1   |

(plot: In1 vs. In2 — the two classes are separated by a linear boundary)
![Page 8: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/8.jpg)
Represents a logical operation:
XOR

| Input1 | Input2 | out |
|--------|--------|-----|
| 0      | 0      | 0   |
| 0      | 1      | 1   |
| 1      | 0      | 1   |
| 1      | 1      | 0   |

Can't separate them linearly!!
(plot: In1 vs. In2 — no single line separates the two classes)
![Page 9: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/9.jpg)
A perceptron (in 2D) can't represent XOR (Minsky & Papert, "Perceptrons", 1969).
Two ways out:
1. add dimensions → Support Vector Machines
2. add layers → Neural Networks
![Page 10: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/10.jpg)
Neural Network
input layer + hidden layers + output layer
learning:
backpropagation (backward propagation of errors)
![Page 11: Stanford ml neuralnetwork](https://reader033.vdocuments.net/reader033/viewer/2022052411/556a3c9fd8b42a4a1e8b4724/html5/thumbnails/11.jpg)
A Neural Network can represent XOR
XOR = (In1 OR In2) AND (In1 NAND In2)