

Intro – 1

Linear Models
Numerical Methods for Deep Learning

Lars Ruthotto

Departments of Mathematics and Computer Science, Emory University


Intro – 2

Course Overview


Intro – 3

Course Overview

- Part 1: Linear Models
  1. Introduction and Applications
  2. Linear Models: Least-Squares and Logistic Regression

- Part 2: Neural Networks
  1. Introduction to Nonlinear Models
  2. Parametric Models, Convolutions
  3. Single Layer Neural Networks
  4. Training Algorithms for Single Layer Neural Networks
  5. Neural Networks and Residual Neural Networks (ResNets)

- Part 3: Neural Networks as Differential Equations
  1. ResNets as ODEs
  2. Residual CNNs and their relation to PDEs


Intro – 4

Intro: Machine Learning


Intro – 5

Machine Learning in 3 slides

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. (Wikipedia)

Two common tasks in machine learning:

- given data, cluster it and detect patterns in it (unsupervised learning)

- given data and labels, find a functional relation between them (supervised learning)


Intro – 6

Machine Learning in 3 slides

[Figure: unsupervised vs. semi-supervised examples]

Unsupervised learning: given the data set Y = [y_1, . . . , y_n], cluster the data into "similar" groups (labels).

- helps find hidden patterns

- often explorative and open-ended

Semi-supervised learning: label the data based on a few examples.


Intro – 7

Machine Learning in 3 slides

[Figure: training data and trained model]

Supervised learning: given the data set Y = [y_1, . . . , y_n] ∈ Y and their labels C = [c_1, . . . , c_n] ∈ C, find the relation f : Y → C.

- models range in complexity
- older models are based on support vector machines (SVM) and kernel methods
- recently, deep neural networks (DNNs) dominate


Intro – 8

Learning From Data: The Core of Science

Given inputs and outputs, how to choose f ?

Option 1 (Fundamental(?) understanding): For example, Newton's formula

x(t) = (1/2) g t^2,

with unknown parameter g. To estimate g, observe a falling object:

t   x
0   0
1   4.9
2   20.1
3   44.1

Goal: Derive model from theory, calibrate it using data.
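
To make the calibration step concrete, here is a minimal NumPy sketch (my own illustration, not part of the slides) that estimates g from the table above by linear least squares, since x(t) = (1/2) g t^2 is linear in g:

```python
# Minimal sketch: estimate g by least squares; x(t) = 0.5*g*t^2 is linear in g.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([0.0, 4.9, 20.1, 44.1])

A = 0.5 * t[:, None] ** 2                     # one-column design matrix
g_hat, *_ = np.linalg.lstsq(A, x, rcond=None)
print(g_hat)                                  # close to 9.8 m/s^2
```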


Intro – 9

Learning From Data: The Core of Science

Given inputs and outputs, how to choose f ?

Option 2 (Phenomenological models): For example, Archie's law - what is the electrical resistivity of a rock, and how does it relate to its porosity φ and saturation S_w?

ρ(φ, S_w) = a φ^{n/2} S_w^p

a, n, p are unknown parameters.

The parameters are obtained from observed data and lab experiments on rocks.

Goal: Find a model that is consistent with fundamental theory, without directly deriving it from theory.
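
A sketch of this calibration step (with hypothetical lab measurements, purely for illustration): taking logarithms turns ρ = a φ^{n/2} S_w^p into a model that is linear in (log a, n, p), so the parameters can be fit by ordinary least squares:

```python
import numpy as np

# hypothetical porosity, saturation, and resistivity measurements (illustration only)
phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
Sw  = np.array([0.90, 0.80, 0.70, 0.60, 0.50])
rho = np.array([3.1, 2.4, 2.0, 1.8, 1.7])

# log(rho) = log(a) + (n/2)*log(phi) + p*log(Sw)
A = np.column_stack([np.ones_like(phi), 0.5 * np.log(phi), np.log(Sw)])
coef, *_ = np.linalg.lstsq(A, np.log(rho), rcond=None)
a, n, p = np.exp(coef[0]), coef[1], coef[2]
print(a, n, p)
```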


Intro – 10

Phenomenological vs. Fundamental

Fundamental laws come from understanding(?) the underlying process. They are assumed invariant and can therefore be predictive(?).

Phenomenological models are data-driven. They "work" on some given data. Hard to know what their limitations are.

But ...

- models based on understanding can do poorly - weather, economics, ...

- models based on data can sometimes do better

- how do we quantify understanding?


Intro – 11

Intro: Deep Learning


Intro – 12

Deep Neural Networks: History

- Neural networks with a particular (deep) architecture

- Have existed for a long time (1970s and even earlier) [17, 18, 14]

- Recent revolution: computational power and lots of data [2, 16, 13]

- Can perform very well when trained with lots of data

- Applications
  - image recognition [10, 12, 13], segmentation, natural language processing [3, 5, 11]

- A few recent news articles:
  - Apple Is Bringing the AI Revolution to Your iPhone, WIRED 2016
  - Why Deep Learning Is Suddenly Changing Your Life, FORTUNE 2016
  - Data Scientist: Sexiest Job of the 21st Century, Harvard Business Review 2017


Intro – 13

Learning Objective: Demystify Deep Learning

Learning objectives of this minicourse:

- look under the hood of some deep learning examples

- describe deep learning mathematically (see also [9])

- expose numerical challenges / approaches to improve DL


Intro – 14

DNN - A Quick Overview - 1

y_{l+1} = σ(K_l y_l + b_l)
y_{l+1} = y_l + σ(K_l y_l + b_l)
y_{l+1} = y_l + σ(L_l σ(K_l y_l + b_l))

...

Here:

- l = 0, 1, 2, . . . , N is the layer

- σ : R → R is the activation function (applied element-wise)

- y_0 = y ∈ R^{n_f} is the input data (e.g., an image)

- c ∈ R^{n_c} is the output (e.g., the class of the image)

- L_l, K_l, b_l are parameters of the model f
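
To illustrate, a minimal NumPy sketch (my own, with tanh standing in for σ) of forward propagation through the first two architectures above:

```python
import numpy as np

def sigma(x):
    return np.tanh(x)          # one common choice of activation

def forward(y, Ks, bs, resnet=False):
    # propagate y through layers l = 0, ..., N-1
    for K, b in zip(Ks, bs):
        z = sigma(K @ y + b)
        y = y + z if resnet else z
    return y

rng = np.random.default_rng(0)
nf = 4
y0 = rng.standard_normal(nf)
Ks = [rng.standard_normal((nf, nf)) for _ in range(3)]
bs = [rng.standard_normal(nf) for _ in range(3)]
print(forward(y0, Ks, bs, resnet=True))
```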


Intro – 15

DNN - A Quick Overview - 2

Neural networks are data interpolators/classifiers for when the underlying model is unknown.

A generic way to write them is

c = f(y, θ).

- the function f is the computational model

- y ∈ R^{n_f} is the input data (e.g., an image)

- c ∈ R^{n_c} is the output (e.g., the class of the image)

- θ ∈ R^{n_p} are the parameters of the model f

In supervised learning we have examples {(y_j, c_j) : j = 1, . . . , n}, and the goal is to estimate or "learn" the parameters θ.


Intro – 16

Example: Classification of Hand-written Digits

- Let y_j ∈ R^{n_f} and let c_j ∈ R^{n_c}.

- The vector c contains the probabilities of y belonging to each class. Clearly, 0 ≤ c_j ≤ 1 and Σ_{j=1}^{n_c} c_j = 1.

Examples (MNIST):

[Figure: two MNIST digit images, y_1 and y_2]

c_1 = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]^T,  c_2 = [0, 0.3, 0, 0, 0, 0, 0, 0.7, 0, 0]^T


Intro – 17

Example: Classification of Natural Images

Examples (CIFAR-10):


Intro – 18

Example: Semantic Segmentation

- let y_j ∈ R^n be an RGB or grey-valued image.

- let the pixels in c_j ∈ {1, 2, 3, . . .}^k denote the labels.

[Figure: y, input image; c, segmentation (labeled image)]

Goal: Find map c = f (y,θ)


Intro – 19

Example: Semantic Segmentation

Problem: Given image y and label c, find a map f(·, θ) such that c ≈ f(y, θ).

First step: Reduce the dimensionality of the problem.

- extract features from the image

- classify in the feature space

Reduce the problem of learning from the image to feature detection and classification.

Possible features: Color, neighbors, edges ...


Intro – 20

Generalization

Suppose that we have examples {(y_j, c_j)}, j = 1, . . . , n, a model f(y, θ), and some optimal parameter θ*. Let {(y_j^t, c_j^t) : j = 1, . . . , s} be a test set that was not used to compute θ*.

Loosely speaking, if

‖f(y_j^t, θ*) − c_j^t‖_p is small,

then the model is predictive - it generalizes well.

For phenomenological models, there is no reason why the model should generalize, but in practice it often does.


Intro – 21

Generalization

Why would a model generalize poorly?

1 ≪ ‖f(y_j^t, θ*) − c_j^t‖_p

Two common reasons:

1. Our "optimal" θ* was optimal for the training data but is less so for other data.

2. The chosen computational model f is poor (e.g., a quadratic model for a nonlinear function).


Intro – 22

Linear Models


Intro – 23

Supervised Learning Problem

Given examples (inputs)

Y = [y_1, y_2, . . . , y_n] ∈ R^{n_f × n}

and labels (outputs)

C = [c_1, c_2, . . . , c_n] ∈ R^{n_c × n},

find a classification/prediction function f(·, θ), i.e.,

f(y_j, θ) ≈ c_j,   j = 1, . . . , n.


Intro – 24

Regression and Least-Squares

Simplest option: a linear model with θ = (W, b) and

f(Y, W, b) = WY + b e_n^T ≈ C

- W ∈ R^{n_c × n_f} are weights
- b ∈ R^{n_c} are biases
- e_n ∈ R^n is a vector of ones

Equivalent notation:

f(Y, W, b) = [W  b] [Y ; e_n^T] ≈ C

(b appended as an extra column of W, a row of ones appended under Y)

The problem may not have a solution, or may have infinitely many solutions (when?). Solve through optimization:

min_W (1/2) ‖WY − C‖_F^2

(Frobenius norm: ‖A‖_F^2 = trace(A^T A) = Σ_{i,j} A_{i,j}^2.)
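
A minimal NumPy sketch (my own illustration) of this least-squares model: append a row of ones to Y and solve the transposed system with a least-squares solver; the names W, b, Y, C match the slide.

```python
import numpy as np

rng = np.random.default_rng(0)
nf, nc, n = 5, 3, 100
Y = rng.standard_normal((nf, n))
C = rng.standard_normal((nc, n))

Yext = np.vstack([Y, np.ones((1, n))])              # [Y ; e_n^T]
Wb, *_ = np.linalg.lstsq(Yext.T, C.T, rcond=None)   # min || Yext^T [W b]^T - C^T ||_F
W, b = Wb[:-1].T, Wb[-1]                            # W is nc x nf, b has length nc
print(np.linalg.norm(W @ Y + b[:, None] - C))       # residual norm
```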


Intro – 25

Remark: Relation to Least-Squares

Consider the regression problem

min_W (1/2) ‖WY − C‖_F^2.

It is easy to see that this is equivalent to

min_W (1/2) ‖Y^T W^T − C^T‖_F^2,

which can be solved separately for each row in W:

W(j, :)^T = arg min_w (1/2) ‖Y^T w − C(j, :)^T‖_2^2.

Notation: Let A = Y^T and X = W^T (easy to add the bias here); we solve

min_X (1/2) ‖AX − C^T‖_F^2


Intro – 26

Iterative Regularization

Consider

min_x ‖Ax − c‖^2

- Assume that A has a non-trivial null space

- The matrix A^T A is not invertible

- Can we still use iterative methods (CG, CGLS, . . . )?

What are the properties of the iterates?

Excellent introductions to computational inverse problems: [7, 19, 8]
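
Below is a minimal CGLS sketch (my own implementation, not the course code). It starts from x = 0 on an ill-conditioned problem and records ‖x_k‖ and ‖Ax_k − c‖, the two quantities that trace out the L-curve discussed on the next slide.

```python
import numpy as np

def cgls(A, c, iters):
    # conjugate gradient applied to the normal equations A^T A x = A^T c
    x = np.zeros(A.shape[1])
    r = c - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    history = []
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        history.append((np.linalg.norm(x), np.linalg.norm(A @ x - c)))
    return x, history

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -8, 20))  # nearly rank-deficient
c = A @ rng.standard_normal(20) + 1e-3 * rng.standard_normal(50)
x, history = cgls(A, c, iters=10)
for norm_x, res in history:
    print(f"{norm_x:.3e}  {res:.3e}")
```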


Intro – 27

Iterative Regularization: L-Curve

The CGLS algorithm has the following properties:

- For each iteration, ‖Ax_k − c‖_2 ≤ ‖Ax_{k−1} − c‖_2

- If starting from x = 0, then ‖x_k‖_2 ≥ ‖x_{k−1}‖_2

- x_1, x_2, . . . converges to the minimum-norm solution of the problem

- Plotting ‖x_k‖_2 vs. ‖Ax_k − c‖_2 typically has the shape of an L-curve


Intro – 28

Cross Validation

Finding a good least-squares solution requires good parameter selection:

- λ when using Tikhonov regularization (weight decay)

- the number of iterations (for SD and CGLS)

Suppose that we have two different "solutions"

x_1 → ‖x_1‖_2 = η_1,  ‖Ax_1 − c‖_2 = ρ_1,
x_2 → ‖x_2‖_2 = η_2,  ‖Ax_2 − c‖_2 = ρ_2.

How to decide which one is better?


Intro – 29

Cross Validation

Goal: Gauge how well the model can predict new examples.

Let {A_CV, c_CV} be data that is not used for the training.

Idea: If ‖A_CV x_1 − c_CV‖_2 ≤ ‖A_CV x_2 − c_CV‖_2, then x_1 is a better solution than x_2.

When the solution depends on some hyper-parameter(s) λ, we can phrase this as a bi-level optimization problem

λ* = arg min_λ ‖A_CV x(λ) − c_CV‖_2,

where x(λ) = arg min_x ‖Ax − c‖^2 + λ‖x‖^2.
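
A minimal sketch (my own illustration) of this bi-level idea: for each candidate λ, solve the Tikhonov-regularized problem on the training data and evaluate the residual on held-out data, then keep the λ with the smallest cross-validation residual.

```python
import numpy as np

def ridge_solve(A, c, lam):
    # x(lambda) = argmin ||Ax - c||^2 + lambda*||x||^2 via the normal equations
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ c)

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 30))
c = A @ rng.standard_normal(30) + 0.1 * rng.standard_normal(80)
A_train, c_train = A[:60], c[:60]
A_cv,    c_cv    = A[60:], c[60:]

lambdas = np.logspace(-6, 2, 20)
cv_err = [np.linalg.norm(A_cv @ ridge_solve(A_train, c_train, lam) - c_cv)
          for lam in lambdas]
print("selected lambda:", lambdas[int(np.argmin(cv_err))])
```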


Intro – 30

Cross Validation

To assess the final quality of the solution, cross validation is not sufficient (why?).

We need a final testing set.

Procedure (see the sketch below):

- Divide the data into 3 groups {A_train, A_CV, A_test}.
- Use A_train to estimate x(λ)
- Use A_CV to estimate λ
- Use A_test to assess the quality of the solution

Important: we are not allowed to use A_test to tune parameters!
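
A minimal sketch of the three-way split (my own illustration; the variable names mirror the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
c = rng.standard_normal(100)

idx = rng.permutation(len(c))
train, cv, test = idx[:60], idx[60:80], idx[80:]
A_train, c_train = A[train], c[train]
A_cv,    c_cv    = A[cv],    c[cv]
A_test,  c_test  = A[test],  c[test]
# estimate x(lambda) on (A_train, c_train), pick lambda on (A_cv, c_cv),
# and only at the very end report ||A_test x - c_test||.
```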


Intro – 31

Classification


Intro – 32

Logistic Regression

Assume our data falls into two classes. Denote by c_obs(y) the probability that example y ∈ R^{n_f} belongs to the first category.

Since the output of our classifier f(y, θ) is supposed to be a probability, use the logistic sigmoid

c(y, θ) = 1 / (1 + exp(−f(y, θ))).

Example (Linear Classification): If f(y, θ) is a linear function (adding a bias is easy), then θ = w ∈ R^{n_f} and

c(y, w) = 1 / (1 + exp(−w^T y)).

For now: consider linear models for simplicity.
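
A minimal sketch (my own illustration) of the linear classifier's predicted probability for the first class:

```python
import numpy as np

def predict_prob(Y, w):
    # Y is nf x n (one example per column); returns one probability per example
    return 1.0 / (1.0 + np.exp(-(w @ Y)))

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 6))
w = rng.standard_normal(4)
print(predict_prob(Y, w))
```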


Intro – 33

Multinomial Logistic Regression

Suppose the data falls into n_c ≥ 2 categories and the components of c_obs(y) ∈ [0, 1]^{n_c} contain the probabilities for each class.

Applying the logistic sigmoid to each component of f(y, W) is not enough (the probabilities must sum to one). Use

c(y, W) = (1 / (e_{n_c}^T exp(Wy))) exp(Wy).

Note: Division and exp are applied element-wise!
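
A minimal sketch (my own illustration) of these class probabilities for a single example y; note that this naive form can overflow, which is addressed on the Numerical Considerations slide below.

```python
import numpy as np

def softmax_probs(W, y):
    z = np.exp(W @ y)
    return z / z.sum()              # divide by e_{nc}^T exp(Wy)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))     # nc = 3 classes, nf = 5 features
y = rng.standard_normal(5)
p = softmax_probs(W, y)
print(p, p.sum())                   # nonnegative and summing to one
```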


Intro – 34

Logistic Regression - Loss Function

How similar are c(·, W) and c_obs(·)?

Naive idea: Let Y ∈ R^{n_f × n} be examples with class probabilities C_obs ∈ [0, 1]^{n_c × n}, and use

φ(W) = (1/(2n)) Σ_{j=1}^n ‖c(y_j, W) − c_{j,obs}‖_F^2

Problems:

- ignores that c(·, W) and c_obs(·) are distributions

- leads to a non-convex objective function

[Figure: Frobenius vs. Cross Entropy]


Intro – 35

Cross Entropy for Multinomial Logistic Regression

Similarly, for the general case (n_c ≥ 2 classes, n examples). Recall:

C(Y, W) = exp(WY) diag(1 / (e_{n_c}^T exp(WY)))

Obtain the cross entropy by summing over all examples:

E(C_obs, C(Y, W)) = −(1/n) tr(C_obs^T log(C(Y, W))).

We will also call this the softmax (cross-entropy) function.
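
A minimal sketch (my own illustration) of the cross entropy, written directly from the trace formula above; the numerically safer, shifted variant follows on the Numerical Considerations slide.

```python
import numpy as np

def softmax_matrix(S):
    Z = np.exp(S)                       # exp applied element-wise to S = W Y
    return Z / Z.sum(axis=0)            # each column sums to one

def cross_entropy(C_obs, S):
    n = C_obs.shape[1]
    return -np.trace(C_obs.T @ np.log(softmax_matrix(S))) / n

rng = np.random.default_rng(0)
nc, nf, n = 3, 5, 10
W = rng.standard_normal((nc, nf))
Y = rng.standard_normal((nf, n))
C_obs = np.eye(nc)[:, rng.integers(0, nc, n)]   # one-hot class probabilities
print(cross_entropy(C_obs, W @ Y))
```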


Intro – 36

Simplifying the Softmax Function

Let S = WY. Then

E(C_obs, S) = −(1/n) tr( C_obs^T log( exp(S) diag(1 / (e_{n_c}^T exp(S))) ) ).

Verify that this is equal to

E(C_obs, S) = −(1/n) e_{n_c}^T (C_obs ⊙ S) e_n + (1/n) e_{n_c}^T C_obs ( log(e_{n_c}^T exp(S)) )^T

(⊙ is the Hadamard product; exp and log are applied component-wise.)

If the columns of C_obs sum to one (why?), then e_{n_c}^T C_obs = e_n^T, and

E(C_obs, S) = −(1/n) e_{n_c}^T (C_obs ⊙ S) e_n + (1/n) log(e_{n_c}^T exp(S)) e_n


Intro – 37

Numerical Considerations

Scale to prevent overflow. Note that for an arbitrary s ∈ R we have

E(C_obs, WY − s) = E(C_obs, WY).

This prevents overflow, but may lead to underflow (and divisions by zero).

Note that s does not need to be the same for each example (column). Hence, we can choose s = max(WY, [], 1) ∈ R^{1×n}, the column-wise maximum, to avoid underflow and overflow.

For stability, use E(C_obs, S) where S = WY − e_{n_c} s.
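
A minimal sketch (my own illustration) of the shift-stabilized cross entropy: subtracting the column-wise maximum leaves E unchanged but keeps exp(S) from overflowing even for large entries of WY.

```python
import numpy as np

def cross_entropy_stable(C_obs, W, Y):
    S = W @ Y
    S = S - S.max(axis=0, keepdims=True)    # S = W Y - e_{nc} s, with s the column-wise max
    log_probs = S - np.log(np.exp(S).sum(axis=0, keepdims=True))
    return -(C_obs * log_probs).sum() / C_obs.shape[1]

rng = np.random.default_rng(0)
nc, nf, n = 3, 5, 10
W = 100 * rng.standard_normal((nc, nf))     # large weights: naive exp(WY) would overflow
Y = rng.standard_normal((nf, n))
C_obs = np.eye(nc)[:, rng.integers(0, nc, n)]
print(cross_entropy_stable(C_obs, W, Y))
```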


Intro – 38

Linear Classification

If W can separate the classes, then the goal is to minimize the cross entropy (with some potential regularization):

W* = arg min_W −(1/n) e_{n_c}^T (C_obs ⊙ S) e_n + (1/n) log(e_{n_c}^T exp(S)) e_n

subject to S = WY − e_{n_c} s

This is a smooth convex optimization problem ⇒ many existing optimization techniques will work.

For large-scale problems, use a derivative-based optimization algorithm. (Examples: Steepest Descent, Newton-like methods, Stochastic Gradient Descent, ADMM, . . . )

Excellent references: [15, 4, 1, 6]
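
As one concrete example of a derivative-based method, here is a minimal sketch (my own illustration, not the course code) of plain gradient descent on the softmax cross entropy, using the standard gradient (1/n)(C(Y, W) − C_obs) Y^T:

```python
import numpy as np

def softmax_cols(S):
    S = S - S.max(axis=0, keepdims=True)
    Z = np.exp(S)
    return Z / Z.sum(axis=0, keepdims=True)

def train_softmax(Y, C_obs, lr=0.1, iters=500):
    nc, n = C_obs.shape
    W = np.zeros((nc, Y.shape[0]))
    for _ in range(iters):
        P = softmax_cols(W @ Y)             # predicted class probabilities C(Y, W)
        grad = (P - C_obs) @ Y.T / n        # gradient of the cross entropy
        W -= lr * grad
    return W

rng = np.random.default_rng(0)
nf, nc, n = 5, 3, 300
W_true = rng.standard_normal((nc, nf))
Y = rng.standard_normal((nf, n))
C_obs = np.eye(nc)[:, np.argmax(W_true @ Y, axis=0)]   # linearly separable labels
W = train_softmax(Y, C_obs)
acc = np.mean(np.argmax(W @ Y, axis=0) == np.argmax(C_obs, axis=0))
print("training accuracy:", acc)
```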


Intro – 39

Part 1 Summary: Linear Models

- Intro to Deep Learning / Machine Learning
  - risks and promises of phenomenological models
  - importance of generalization

- (Review) Linear Regression
  - iterative and direct regularization
  - (generalized) cross validation

- Multinomial Logistic Regression
  - cross entropies
  - leads to a convex optimization problem

- Linear models
  - well-understood, easy to use, not very powerful
  - next: add nonlinearity (through neural networks)


Intro – 40

References

[1] A. Beck. Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB. SIAM, Philadelphia, 2014.

[2] Y. Bengio et al. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.

[3] A. Bordes, S. Chopra, and J. Weston. Question Answering with Subgraph Embeddings. arXiv preprint arXiv:1406.3676, 2014.

[4] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.

[6] S. W. Fung, S. Tyrvainen, L. Ruthotto, and E. Haber. ADMM-SOFTMAX: An ADMM Approach for Multinomial Logistic Regression. arXiv preprint, 2019.

[7] P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems. SIAM Monographs on Mathematical Modeling and Computation. SIAM, Philadelphia, PA, 1998.

[8] P. C. Hansen. Discrete Inverse Problems, volume 7 of Fundamentals of Algorithms. SIAM, Philadelphia, PA, 2010.

[9] C. F. Higham and D. J. Higham. Deep Learning: An Introduction for Applied Mathematicians. arXiv preprint, 2018.


Intro – 41

References (cont.)

[10] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.

[11] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On Using Very Large Target Vocabulary for Neural Machine Translation. arXiv preprint arXiv:1412.2007, 2014.

[12] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.

[13] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

[14] Y. LeCun, B. E. Boser, and J. S. Denker. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems, pages 396–404, 1990.

[15] J. Nocedal and S. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, New York, 2006.

[16] R. Raina, A. Madhavan, and A. Y. Ng. Large-scale deep unsupervised learning using graphics processors. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 873–880. ACM, 2009.


Intro – 42

References (cont.)

[17] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.

[18] D. Rumelhart, G. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–538, 1986.

[19] C. R. Vogel. Computational Methods for Inverse Problems. SIAM, Philadelphia, 2002.