
Page 1: Artificial Neural Networks Perceptron

P. Kaplanoglou, S. Papadopoulos, Prof. Ioannis Pitas

Aristotle University of Thessaloniki

[email protected]

www.aiia.csd.auth.gr
Version 3.2

Artificial Neural Networks

Perceptron

Page 2: Artificial Neural Networks Perceptron

Classification/Recognition/Identification

• Given a set of classes $\mathcal{C} = \{\mathcal{C}_i\}, i = 1,\dots,m$, and a sample $\mathbf{x} \in \mathbb{R}^n$, the ML model $\hat{\mathbf{y}} = \mathbf{f}(\mathbf{x}; \boldsymbol{\theta})$ predicts a class label vector $\hat{\mathbf{y}} \in [0,1]^m$ for the input sample $\mathbf{x}$, where $\boldsymbol{\theta}$ are the learnable model parameters.

• Essentially, a probability distribution $P(\hat{\mathbf{y}}; \mathbf{x})$ is computed.

• Interpretation: the likelihood of the given sample $\mathbf{x}$ belonging to each class $\mathcal{C}_i$.

• Single-target classification:
  • Classes $\mathcal{C}_i, i = 1,\dots,m$ are mutually exclusive: $\|\hat{\mathbf{y}}\|_1 = 1$.

• Multi-target classification:
  • Classes $\mathcal{C}_i, i = 1,\dots,m$ are not mutually exclusive: $\|\hat{\mathbf{y}}\|_1 \geq 1$.
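As a toy illustration (a hedged NumPy sketch, not from the slides; the class scores are made up), the two cases can be pictured with small label vectors:

```python
import numpy as np

# Hypothetical model scores for m = 4 classes (made-up numbers).
scores = np.array([0.10, 0.70, 0.15, 0.05])

# Single-target classification: classes are mutually exclusive, so the
# predicted label vector is one-hot and ||y_hat||_1 = 1.
y_single = np.zeros(4)
y_single[np.argmax(scores)] = 1            # -> [0, 1, 0, 0]

# Multi-target classification: classes may co-occur, so every class whose
# score exceeds a threshold is switched on and ||y_hat||_1 >= 1.
y_multi = (scores >= 0.15).astype(float)   # -> [0, 1, 1, 0]

print(y_single, np.linalg.norm(y_single, 1))   # 1-norm is 1
print(y_multi, np.linalg.norm(y_multi, 1))     # 1-norm is 2
```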

Page 3: Artificial Neural Networks Perceptron

Supervised Learning

• A sufficiently large training sample set $\mathcal{D}$ is required for Supervised Learning (regression, classification):

$\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i),\; i = 1,\dots,N\}$.

• $\mathbf{x}_i \in \mathbb{R}^n$: the $n$-dimensional input (feature) vector of the $i$-th training sample.
• $\mathbf{y}_i$: its target label (output).
• The target $\mathbf{y}$ can take various forms:
  • it can be categorical, a real number, or a combination of both.

Page 4: Artificial Neural Networks Perceptron

Classification/Recognition/Identification

• Training: Given $N$ pairs of training samples $\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i), i = 1,\dots,N\}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{y}_i \in [0,1]^m$, estimate $\boldsymbol{\theta}$ by minimizing a loss function: $\min_{\boldsymbol{\theta}} J(\mathbf{y}, \hat{\mathbf{y}})$.

• Inference/testing: Given $N_t$ pairs of testing examples $\mathcal{D}_t = \{(\mathbf{x}_i, \mathbf{y}_i), i = 1,\dots,N_t\}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{y}_i \in [0,1]^m$, compute (predict) $\hat{\mathbf{y}}_i$ and calculate a performance metric, e.g., classification accuracy.
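A minimal sketch of the testing-stage accuracy metric (assuming one-hot targets and NumPy; the helper name and the toy numbers are mine, not the slides'):

```python
import numpy as np

def classification_accuracy(Y_true, Y_pred):
    """Fraction of test samples whose predicted class (arg max of the
    predicted label vector) matches the target class."""
    return np.mean(np.argmax(Y_true, axis=1) == np.argmax(Y_pred, axis=1))

# Tiny made-up example: N_t = 3 samples, m = 2 classes.
Y_true = np.array([[1, 0], [0, 1], [0, 1]])
Y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])   # last one is wrong
print(classification_accuracy(Y_true, Y_pred))            # ~0.667
```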

Page 5: Artificial Neural Networks Perceptron

Classification/Recognition/Identification

Optional step between training and testing:

• Validation: Given $N_v$ pairs of validation examples (different from both the training and the testing examples) $\mathcal{D}_v = \{(\mathbf{x}_i, \mathbf{y}_i), i = 1,\dots,N_v\}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{y}_i \in [0,1]^m$, compute (predict) $\hat{\mathbf{y}}_i$ and validate using a performance metric.

or

• $k$-fold cross-validation: Use only a percentage of the data ($100(k-1)/k\%$, e.g., 80% for $k = 5$) for training and the rest ($100/k\%$, e.g., 20%) for validation. Repeat this $k$ times, until all the data have been used for both training and validation; a sketch of such a split is given below.
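A rough sketch of the $k$-fold idea, operating on sample indices only (NumPy; the function name is an assumption, not the authors' code):

```python
import numpy as np

def k_fold_splits(N, k, seed=0):
    """Yield (train_idx, val_idx) pairs so that each of the k folds is used
    once for validation (~100/k % of the data) while the remaining folds
    (~100(k-1)/k %) are used for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(N)
    folds = np.array_split(idx, k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

for train_idx, val_idx in k_fold_splits(N=10, k=5):
    print(len(train_idx), len(val_idx))   # 8 2 for every fold
```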

Page 6: Artificial Neural Networks Perceptron

Classification

• Classification:
  • Two-class ($m = 2$) and multi-class ($m > 2$) classification.
  • Example: face detection (two classes), face recognition (many classes).


Page 7: Artificial Neural Networks Perceptron

Classification

Multiclass classification ($m > 2$):

• Multiple ($m > 2$) hypothesis testing: choose a winner class out of the $m$ classes.
• Binary hypothesis testing:
  • One class against all: $m$ binary hypotheses, one of which must be proven true.
  • Pair-wise class comparisons: $m(m-1)/2$ binary hypotheses.


Page 8: Artificial Neural Networks Perceptron

Face Recognition/Identification

Problem statement:

• To identify the identity of a depicted face.
• Input for training: several facial ROIs (regions of interest) per person.
• Input for inference: a facial ROI.
• Inference output: the face ID.
• Supervised learning.
• Applications:
  • Biometrics
  • Surveillance applications
  • Video analytics

[Figure: example facial images, e.g., Sandra Bullock and Hugh Grant ("Who is he?").]

Page 9: Artificial Neural Networks Perceptron

Regression

Given a sample $\mathbf{x} \in \mathbb{R}^n$ and a function $\mathbf{y} = \mathbf{f}(\mathbf{x})$, the model predicts real-valued quantities for that sample: $\hat{\mathbf{y}} = \mathbf{f}(\mathbf{x}; \boldsymbol{\theta})$, where $\hat{\mathbf{y}} \in \mathbb{R}^m$ and $\boldsymbol{\theta}$ are the learnable parameters of the model.

• Training: Given $N$ pairs of training examples $\mathcal{D} = \{(\mathbf{x}_i, \mathbf{y}_i), i = 1,\dots,N\}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{y}_i \in \mathbb{R}^m$, estimate $\boldsymbol{\theta}$ by minimizing a loss function: $\min_{\boldsymbol{\theta}} J(\mathbf{y}, \hat{\mathbf{y}})$.

• Testing: Given $N_t$ pairs of testing examples $\mathcal{D}_t = \{(\mathbf{x}_i, \mathbf{y}_i), i = 1,\dots,N_t\}$, where $\mathbf{x}_i \in \mathbb{R}^n$ and $\mathbf{y}_i \in \mathbb{R}^m$, compute (predict) $\hat{\mathbf{y}}_i$ and calculate a performance metric, e.g., MSE.
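The MSE metric mentioned above is simply the mean squared difference between targets and predictions; a one-line NumPy sketch with illustrative values:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.5])     # N_t = 3 regression targets (m = 1)
y_pred = np.array([0.8, 2.1, 3.0])     # model predictions
mse = np.mean((y_true - y_pred) ** 2)  # mean squared error
print(mse)                             # 0.1
```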

Page 10: Artificial Neural Networks Perceptron

Biological Neuron

• The basic computational unit of the brain.
• Main parts:
  • Dendrites
    • They act as inputs.
  • Soma
    • The main body of the neuron.
  • Axon
    • It acts as the neuron output.


Page 11: Artificial Neural Networks Perceptron

Biological Neuron Connectivity

• Neurons connect with other neurons through synapses.

Synapse types: axodendritic, axosomatic, axoaxonic.


Page 12: Artificial Neural Networks Perceptron

Biological Neuron Connectivity

⚫ An electric action potential is propagated through the axon.
⚫ The signal is transmitted through the synaptic gap by neurotransmitter molecules.
• Each synapse has its own synaptic weight.
⚫ The transmitted signal is a series of electrical impulses.
• The stronger the transmitted signal, the higher the impulse frequency.


Page 13: Artificial Neural Networks Perceptron

Synaptic Integration

• The electric potential received by all dendrites of a neuron is accumulated inside its soma.
• When the electric potential at the membrane reaches a certain threshold, the neuron fires an electrical impulse.
• The signal is propagated through the axon and information is "fed" forward to all connected neurons.


Page 14: Artificial Neural Networks Perceptron

Supervised Neural Learning

In most cases, the function $f$ can have one of the following forms, as sketched in code below:

• linear function (or mapping from $\mathbf{x}$ to $y$):

$y = f(\mathbf{x}; \mathbf{w}) = \mathbf{w}^T\mathbf{x} + b$;

• nonlinear function:

$y = f(\mathbf{x}; \mathbf{w}) = \mathbf{w}^T\phi(\mathbf{x}) + b$.

• $\phi: \mathbb{R}^n \to H$: a nonlinear mapping of $\mathbf{x}$ to a (possibly) high-dimensional space, with $L = \dim H > n$ and $\mathbf{w} \in \mathbb{R}^L$.

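A small sketch contrasting the two forms; the quadratic feature map $\phi$ below is just one illustrative choice, not taken from the slides:

```python
import numpy as np

def linear_model(x, w, b):
    """y = w^T x + b."""
    return w @ x + b

def phi(x):
    """Illustrative nonlinear mapping phi: R^2 -> H = R^5 (quadratic features)."""
    x1, x2 = x
    return np.array([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def nonlinear_model(x, w, b):
    """y = w^T phi(x) + b, with w in R^L and L = dim(H) > n."""
    return w @ phi(x) + b

x = np.array([1.0, 2.0])
print(linear_model(x, np.array([0.5, -1.0]), b=0.1))                    # -1.4
print(nonlinear_model(x, np.array([0.5, -1.0, 0.2, 0.1, 0.0]), b=0.1))  # -0.9
```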

Page 15: Artificial Neural Networks Perceptron

Supervised Neural Learning

• Learning consists of finding the optimal parameters $\mathbf{w}$, so that $\hat{y} = f(\mathbf{x}; \mathbf{w})$ is as close as possible to the target $y$.
• Optimization of the parameters $\mathbf{w}$ is accomplished by minimizing a cost function $J(\mathbf{w})$, where the cost is defined as a measure of the discrepancy between $y$ and $\hat{y}$.


Page 16: Artificial Neural Networks Perceptron

Artificial Neurons

Artificial neurons are mathematical models loosely inspired by their biological counterparts.

⚫ The dendrites fetch the input vector: $\mathbf{x} = [x_1, x_2, \dots, x_n]^T$, $x_i \in \mathbb{R}$.
⚫ The synaptic weights are grouped in a weight vector: $\mathbf{w} = [w_1, w_2, \dots, w_n]^T$, $w_i \in \mathbb{R}$.
⚫ Synaptic integration is modeled as the inner product: $z = \sum_{i=1}^{n} w_i x_i = \mathbf{w}^T\mathbf{x}$.

[Figure: artificial neuron model with inputs $x_1, \dots, x_n$, synaptic weights $w_1, \dots, w_n$, summation $z$ and a threshold.]

Page 17: Artificial Neural Networks Perceptron

[Figure: analogy between a biological neuron (cell body, dendrite, synapse, axon from another neuron) and the Perceptron, with its activation function and threshold.]

Page 18: Artificial Neural Networks Perceptron

Perceptron

⚫ The McCulloch & Pitts model is the simplest mathematical model of a neuron.
⚫ It has real inputs in the range $x_i \in [0, 1]$.
⚫ It produces a single output $y \in [0, 1]$, through an activation function $f(\cdot)$.
⚫ The output $y$ signifies whether the neuron will fire or not.
⚫ Firing threshold: $\mathbf{w}^T\mathbf{x} \geq -b \Rightarrow \mathbf{w}^T\mathbf{x} + b \geq 0$.


Page 19: Artificial Neural Networks Perceptron

Perceptron – Activation

⚫ The threshold can be incorporated in the weight vector.
⚫ Augmented input and weight vectors:

$\mathbf{x}' = [b, x_1, \dots, x_n]^T$, $\mathbf{x}' \in \mathbb{R}^{n+1}$,
$\mathbf{w}' = [1, w_1, \dots, w_n]^T$, $\mathbf{w}' \in \mathbb{R}^{n+1}$.

⚫ A step function models the activation of the perceptron:

$y = f(z) = f(\mathbf{w}'^T\mathbf{x}') = \begin{cases} 0, & \mathbf{w}'^T\mathbf{x}' < 0 \\ 1, & \mathbf{w}'^T\mathbf{x}' \geq 0. \end{cases}$

Suitable for 2-class problems:
➔ if $z \geq 0$ (i.e., $f(z) = 1$), assign $\mathbf{x}$ to class $\mathcal{C}_1$;
➔ if $z < 0$ (i.e., $f(z) = 0$), assign $\mathbf{x}$ to class $\mathcal{C}_2$.

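A minimal sketch of the augmented-vector perceptron with the step activation (the helper names and NumPy usage are assumptions, not the slides' code):

```python
import numpy as np

def step(z):
    """Step activation: 1 if z >= 0, else 0."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    """Augmented perceptron: x' = [b, x1, ..., xn], w' = [1, w1, ..., wn],
    so that y = f(w'^T x') = f(w^T x + b)."""
    x_aug = np.concatenate(([b], x))
    w_aug = np.concatenate(([1.0], w))
    return step(w_aug @ x_aug)

x = np.array([0.3, 0.9])
w = np.array([1.0, 1.0])
y = perceptron(x, w, b=-1.0)                   # 0.3 + 0.9 - 1.0 >= 0 -> fires
print("class C1" if y == 1 else "class C2")    # class C1
```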


Page 21: Artificial Neural Networks Perceptron

Perceptron Decision Hyperplanes

Linear decision line.

Perceptron decision surface: (a) a line in $\mathbb{R}^2$; (b) a plane in $\mathbb{R}^3$; (c) a hyperplane in $\mathbb{R}^n$.

Page 22: Artificial Neural Networks Perceptron

AND function model

⚫ Perceptron weight vector: $\mathbf{w} = [-3/2, 1, 1]^T$.
⚫ Input vector: $\mathbf{x} = [1, x_1, x_2]^T$.

x1  x2 |  w^T x      |  y
0   0  |  w^T x < 0  |  0
0   1  |  w^T x < 0  |  0
1   0  |  w^T x < 0  |  0
1   1  |  w^T x ≥ 0  |  1

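The table can be checked numerically; a small sketch reusing the step activation (the helper `fire` is my own naming; the augmentation convention x = [1, x1, x2]^T follows the slide):

```python
import numpy as np

def step(z):
    return 1 if z >= 0 else 0

def fire(w, x1, x2):
    """Perceptron output for the augmented input x = [1, x1, x2]^T."""
    x = np.array([1.0, x1, x2])
    return step(w @ x)

w_and = np.array([-3/2, 1.0, 1.0])        # AND weight vector from the slide
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", fire(w_and, x1, x2))
# 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1   (the AND truth table)
```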

Page 23: Artificial Neural Networks Perceptron

OR function model

⚫ Perceptron weight vector: $\mathbf{w} = [-1/2, 1, 1]^T$.
⚫ Input vector: $\mathbf{x} = [1, x_1, x_2]^T$.

x1  x2 |  w^T x      |  y
0   0  |  w^T x < 0  |  0
0   1  |  w^T x ≥ 0  |  1
1   0  |  w^T x ≥ 0  |  1
1   1  |  w^T x ≥ 0  |  1

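Reusing the `fire` helper from the AND sketch above, the OR weights can be verified the same way:

```python
w_or = np.array([-1/2, 1.0, 1.0])         # OR weight vector from the slide
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", fire(w_or, x1, x2))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 1   (the OR truth table)
```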

Page 24: Artificial Neural Networks Perceptron

XOR function model

⚫ There is no linear separating line in $\mathbb{R}^2$ for the XOR function.
⚫ Solution: add an extra layer of neurons before the perceptron output $y$.
⚫ The extra layer consists of two perceptrons, computing the AND and the OR function, respectively.
⚫ The new functional form will be $f(\mathbf{x}) = \mathbf{w}^T\phi(\mathbf{x})$, where $\phi(\mathbf{x})$ is the output of the extra layer, given as input to the output layer.


Page 25: Artificial Neural Networks Perceptron

Two-layer Perceptron – XOR function model

Layer 1 (input), Layer 2 (hidden), Layer 3 (output).

⚫ Notation:

$\mathbf{x} = [1, x_1, x_2]^T$,
$\mathbf{w}_1 = [-1/2, 1, 1]^T$, $\phi_1(\mathbf{x}) = f(\mathbf{w}_1^T\mathbf{x})$ (OR unit),
$\mathbf{w}_2 = [-3/2, 1, 1]^T$, $\phi_2(\mathbf{x}) = f(\mathbf{w}_2^T\mathbf{x})$ (AND unit),
$\mathbf{w} = [-1/2, 1, -2]^T$, $y = f(\mathbf{w}^T\boldsymbol{\Phi})$, $\boldsymbol{\Phi} = [1, \phi_1(\mathbf{x}), \phi_2(\mathbf{x})]^T$.


Page 26: Artificial Neural Networks Perceptron

Two-layer Perceptron – XOR function model

Layer 1 (input), Layer 2 (hidden), Layer 3 (output).

x1  x2 |  φ1(x)  φ2(x) |  w^T Φ      |  y
0   0  |  0      0     |  w^T Φ < 0  |  0
0   1  |  1      0     |  w^T Φ ≥ 0  |  1
1   0  |  1      0     |  w^T Φ ≥ 0  |  1
1   1  |  1      1     |  w^T Φ < 0  |  0

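Putting the pieces together, a sketch of the two-layer XOR model: the hidden OR and AND units feed an output unit whose weight vector is taken here as $\mathbf{w} = [-1/2, 1, -2]^T$ (an assumption chosen to be consistent with the truth table above):

```python
import numpy as np

def step(z):
    return 1 if z >= 0 else 0

w_or  = np.array([-1/2, 1.0, 1.0])     # hidden unit 1: OR
w_and = np.array([-3/2, 1.0, 1.0])     # hidden unit 2: AND
w_out = np.array([-1/2, 1.0, -2.0])    # output unit (assumed bias -1/2)

def xor_net(x1, x2):
    x = np.array([1.0, x1, x2])                # augmented input
    phi1 = step(w_or @ x)                      # phi_1(x) = OR(x1, x2)
    phi2 = step(w_and @ x)                     # phi_2(x) = AND(x1, x2)
    Phi = np.array([1.0, phi1, phi2])          # augmented hidden output
    return step(w_out @ Phi)                   # y = f(w^T Phi)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0   (XOR)
```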

Page 27: Artificial Neural Networks Perceptron

Single Perceptron training

Perceptron training as an optimization problem:

• Let $\boldsymbol{\theta} = \mathbf{w}$ be the unknown Perceptron weight vector.
• It has to be optimized to minimize an error function $J(\boldsymbol{\theta})$.
• Differentiation: $\partial J/\partial\mathbf{w} = \partial J/\partial b = \mathbf{0}$ can provide the critical points:
  • minima, maxima, saddle points.
• Analytical computations of this kind are usually impossible.
• We need to resort to numerical optimization methods.


Page 28: Artificial Neural Networks Perceptron

Steepest Gradient Descent

• One of the most popular optimization algorithms.
• It iteratively searches the parameter space by following the direction of steepest descent.
• Given any multivariate function $J(\boldsymbol{\theta})$, the gradient vector points in the direction of steepest ascent:

$\nabla J(\boldsymbol{\theta}) = \dfrac{\partial J}{\partial\boldsymbol{\theta}} = \left[\dfrac{\partial J}{\partial\theta_1}, \dots, \dfrac{\partial J}{\partial\theta_m}\right]^T.$

• Correspondingly, $-\nabla J(\boldsymbol{\theta})$ points in the direction where $J(\boldsymbol{\theta})$ decreases most rapidly.


Page 29: Artificial Neural Networks Perceptron

Steepest Gradient Descent

⚫ A solution to the choice of the correction term: it must be proportional to the direction of steepest descent:

$\Delta\boldsymbol{\theta} = -\mu\dfrac{\partial J(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}} \;\Rightarrow\; \boldsymbol{\theta}(t+1) = \boldsymbol{\theta}(t) - \mu\nabla_{\boldsymbol{\theta}}J(t).$

⚫ $\mu$: a parameter controlling the magnitude of the parameter updates, called the learning rate. A sketch of the iteration is given below.

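A bare-bones sketch of steepest gradient descent on a simple quadratic cost (the cost function, starting point and learning rate are illustrative choices, not from the slides):

```python
import numpy as np

target = np.array([1.0, -2.0])

def J(theta):
    """Illustrative quadratic cost J(theta) = ||theta - target||^2."""
    return np.sum((theta - target) ** 2)

def grad_J(theta):
    """Gradient of J, pointing in the direction of steepest ascent."""
    return 2 * (theta - target)

theta = np.zeros(2)     # initial parameters theta(0)
mu = 0.1                # learning rate
for t in range(100):
    theta = theta - mu * grad_J(theta)    # theta(t+1) = theta(t) - mu * grad J
print(theta, J(theta))  # theta converges towards [1, -2], J towards 0
```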

Page 30: Artificial Neural Networks Perceptron

Steepest Gradient Descent

Steepest descent on a 2D function surface.


Page 31: Artificial Neural Networks Perceptron

Single Perceptron training

2-class classification:

If $y = f(\mathbf{x}; \mathbf{w}) = \mathbf{w}^T\mathbf{x} \geq 0$, assign $\mathbf{x}$ to class $\mathcal{C}_1$;
if $y = f(\mathbf{x}; \mathbf{w}) = \mathbf{w}^T\mathbf{x} < 0$, assign $\mathbf{x}$ to class $\mathcal{C}_2$.

⚫ It is assumed that $b = 0$.
⚫ Goal: find the optimal model parameters $\mathbf{w}$, so that the model produces the minimal number of false class assignments (decisions/predictions).


Page 32: Artificial Neural Networks Perceptron

Single Perceptron training

Classification is transformed into an optimization problem:

• construct a cost function, choose an optimization algorithm.

Perceptron cost function:

$\min_{\mathbf{w}} J(\mathbf{w}) = \sum_{\mathbf{x}\in\mathcal{D}'} d_x\,\mathbf{w}^T\mathbf{x}.$

• $\mathcal{D}'$ is the subset of training samples that have been misclassified.
• $d_x \triangleq -1$ if $\mathbf{x} \in \mathcal{C}_1$ and has been misclassified to $\mathcal{C}_2$;
• $d_x \triangleq 1$ if $\mathbf{x} \in \mathcal{C}_2$ and has been misclassified to $\mathcal{C}_1$.


Page 33: Artificial Neural Networks Perceptron

Single Perceptron training

⚫ When all samples have been correctly classified: $\mathcal{D}' = \emptyset \Rightarrow J(\mathbf{w}) = 0$.
⚫ If $\mathbf{x} \in \mathcal{C}_1$ has been misclassified, then $\mathbf{w}^T\mathbf{x} < 0$ and $d_x < 0$; thus it contributes a positive error to $J(\mathbf{w})$.
⚫ The same applies for misclassified samples of class $\mathcal{C}_2$.
⚫ $J(\mathbf{w})$ is continuous and piece-wise linear, hence differentiable almost everywhere.


Page 34: Artificial Neural Networks Perceptron

Single Perceptron training

• Applying the previous parameter update rule to the Perceptron model:

$\mathbf{w}(t+1) = \mathbf{w}(t) - \mu\dfrac{\partial J(\mathbf{w})}{\partial\mathbf{w}},$

$\dfrac{\partial J(\mathbf{w})}{\partial\mathbf{w}} = \dfrac{\partial}{\partial\mathbf{w}}\sum_{\mathbf{x}\in\mathcal{D}'} d_x\,\mathbf{w}^T\mathbf{x} = \sum_{\mathbf{x}\in\mathcal{D}'} d_x\,\mathbf{x}.$


Page 35: Artificial Neural Networks Perceptron

Single Perceptron training

• The complete form of the update rule, widely known as the Perceptron algorithm, becomes:

$\mathbf{w}(t+1) = \mathbf{w}(t) - \mu\sum_{\mathbf{x}\in\mathcal{D}'} d_x\,\mathbf{x}.$

• It can be proven that this algorithm converges for linearly separable classes.
• It is an on-line algorithm (training data can be used as they come). A sketch of the iteration is given below.

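A compact sketch of the Perceptron algorithm exactly as stated above, summing over the misclassified set $\mathcal{D}'$ at each step (NumPy; the variable names, toy data and stopping rule are my own, and the inputs are assumed to be augmented with a leading 1):

```python
import numpy as np

def train_perceptron(X, labels, mu=0.1, epochs=100):
    """Perceptron algorithm: w(t+1) = w(t) - mu * sum_{x in D'} d_x * x.
    X: N x n matrix of augmented input vectors, labels[i] in {1, 2};
    decision rule: w^T x >= 0 -> class C1, otherwise class C2."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for x, c in zip(X, labels):
            fired = (w @ x >= 0)
            if c == 1 and not fired:      # C1 sample misclassified: d_x = -1
                grad -= x
            elif c == 2 and fired:        # C2 sample misclassified: d_x = +1
                grad += x
        if not grad.any():                # D' is empty: J(w) = 0, stop
            break
        w = w - mu * grad
    return w

# Toy linearly separable data, augmented with a leading 1 for the bias.
X = np.array([[1, 2.0, 2.0], [1, 1.5, 2.5],        # class C1
              [1, -1.0, -2.0], [1, -2.0, -1.0]])   # class C2
w = train_perceptron(X, labels=[1, 1, 2, 2])
print(X @ w)   # first two values >= 0 (C1), last two < 0 (C2)
```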

Page 36: Artificial Neural Networks Perceptron

Single Perceptron training

Perceptron training.


Page 37: Artificial Neural Networks Perceptron

Batch Perceptron training


Ho-Kashyap Algorithm

All data vectors are columns of the $n \times N$ data matrix $\mathbf{X} = [d_{x_i}\mathbf{x}_i]$, $i = 1,\dots,N$.

⚫ Since $d_x\mathbf{w}^T\mathbf{x} > 0$ is required for every training sample, $\mathbf{X}^T\mathbf{w} = \mathbf{b} > \mathbf{0} = [0, 0, \dots, 0]^T$.
⚫ The cost function can be formed as:

$J(\mathbf{w}, \mathbf{b}) = \dfrac{1}{2}\sum_{i=1}^{N}\left(d_{x_i}\mathbf{x}_i^T\mathbf{w} - b_i\right)^2 = \dfrac{1}{2}\|\mathbf{X}^T\mathbf{w} - \mathbf{b}\|^2 = \dfrac{1}{2}\mathbf{w}^T\mathbf{X}\mathbf{X}^T\mathbf{w} + \dfrac{1}{2}\mathbf{b}^T\mathbf{b} - \dfrac{1}{2}\mathbf{b}^T\mathbf{X}^T\mathbf{w} - \dfrac{1}{2}\mathbf{w}^T\mathbf{X}\mathbf{b}.$

Page 38: Artificial Neural Networks Perceptron

Batch Perceptron training


Differentiation:

$\dfrac{\partial J}{\partial\mathbf{w}} = \mathbf{X}\mathbf{X}^T\mathbf{w} - \mathbf{X}\mathbf{b} = \mathbf{X}\mathbf{X}^T\left(\mathbf{w} - \mathbf{X}^+\mathbf{b}\right) = \mathbf{0},$

⚫ with the generalized inverse (pseudo-inverse) $n \times N$ matrix $\mathbf{X}^+ \triangleq (\mathbf{X}\mathbf{X}^T)^{-1}\mathbf{X}$.
⚫ Closed-form iterative solution ($t$: iteration step):

$\mathbf{w}(t+1) = \mathbf{X}^+\mathbf{b}(t+1).$

⚫ Any change of $\mathbf{b}(t)$ must satisfy:

$\mathbf{b}(t) > \mathbf{0}, \quad \mathbf{e}(t) = \mathbf{X}^T\mathbf{w}(t) - \mathbf{b}(t) > \mathbf{0}.$

Page 39: Artificial Neural Networks Perceptron

Batch Perceptron training


⚫ An appropriate update rule for $\mathbf{b}$:

$\mathbf{b}(t+1) = \mathbf{b}(t) + c\,\left(\mathbf{e}(t) + |\mathbf{e}(t)|\right), \quad c > 0.$

⚫ $|\mathbf{e}(t)|$ contains the absolute values of the elements of $\mathbf{e}(t)$.
⚫ Overall batch training algorithm (sketched in code below):

$\mathbf{b}(1) > \mathbf{0}, \quad \mathbf{w}(1) = \mathbf{X}^+\mathbf{b}(1),$
$\mathbf{e}(t) = \mathbf{X}^T\mathbf{w}(t) - \mathbf{b}(t),$
$\mathbf{b}(t+1) = \mathbf{b}(t) + c\,\left(\mathbf{e}(t) + |\mathbf{e}(t)|\right),$
$\mathbf{w}(t+1) = \mathbf{X}^+\mathbf{b}(t+1).$
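A sketch of the Ho-Kashyap iteration as listed above (NumPy; the column normalization, margin initialization $\mathbf{b}(1) = \mathbf{1}$, step $c$ and toy data are my assumptions):

```python
import numpy as np

def ho_kashyap(X, c=0.5, iters=1000):
    """Ho-Kashyap batch training.
    X: n x N matrix whose i-th column is d_xi * x_i (class C2 samples negated),
    so that a separating w must satisfy X^T w = b > 0."""
    X_pinv = np.linalg.inv(X @ X.T) @ X    # X+ = (X X^T)^{-1} X, an n x N matrix
    b = np.ones(X.shape[1])                # b(1) > 0
    w = X_pinv @ b                         # w(1) = X+ b(1)
    for _ in range(iters):
        e = X.T @ w - b                    # e(t) = X^T w(t) - b(t)
        b = b + c * (e + np.abs(e))        # b(t+1) = b(t) + c (e(t) + |e(t)|)
        w = X_pinv @ b                     # w(t+1) = X+ b(t+1)
        if np.all(X.T @ w > 0):            # every sample on the correct side
            break
    return w

# Toy example: columns are augmented samples, with the C2 samples negated.
X = np.array([[ 1.0,  1.0, -1.0, -1.0],
              [ 2.0,  1.5,  1.0,  2.0],
              [ 2.0,  2.5,  2.0,  1.0]])
w = ho_kashyap(X)
print(X.T @ w)   # all entries positive -> a separating weight vector was found
```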

Page 40: Artificial Neural Networks Perceptron

Batch Perceptron training

Regularized linear regression:

• Goal: obtain a function $f(\mathbf{x}) = \mathbf{w}^T\mathbf{x}$ minimizing the objective function:

$\min_{\mathbf{w}} \sum_i \left(f(\mathbf{x}_i) - y_i\right)^2 + \lambda\|\mathbf{w}\|^2.$

• $\mathbf{x}_i \in \mathbb{R}^n, i = 1,\dots,N$: data samples.
• $y_i, i = 1,\dots,N$: regression targets.
• $\mathbf{w}$: unknown weight vector.
• $\lambda$: regularization parameter.
• The objective contains the regularization term $\lambda\|\mathbf{w}\|^2$, which penalizes large coefficient norms.

Page 41: Artificial Neural Networks Perceptron

Batch Perceptron training

Linear regression solution:

$\mathbf{w} = (\mathbf{X}\mathbf{X}^T + \lambda\mathbf{I})^{-1}\mathbf{X}\mathbf{y},$

• $\mathbf{X} = [\mathbf{x}_1 \dots \mathbf{x}_N]$: $n \times N$ data matrix.
• $\mathbf{y} = [y_1, \dots, y_N]^T$: regression target vector.
• $\mathbf{w}$: unknown weight vector.
• Regularized pseudo-inversion is used; a sketch is given below.
• It forms the theoretical basis for KCF object tracking.
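A direct sketch of the closed-form solution above (NumPy; the toy data and the value of λ are illustrative):

```python
import numpy as np

def ridge_regression(X, y, lam=0.1):
    """w = (X X^T + lambda I)^{-1} X y, with X an n x N data matrix whose
    columns are the samples x_i, and y the N-dimensional target vector."""
    n = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(n), X @ y)

# Toy 1D example with a bias: columns are x_i = [1, t_i]^T, targets y ~ 2 t + 1.
t = np.array([0.0, 1.0, 2.0, 3.0])
X = np.vstack([np.ones_like(t), t])                           # 2 x 4 data matrix
y = 2.0 * t + 1.0 + np.array([0.05, -0.02, 0.03, -0.04])      # noisy targets
w = ridge_regression(X, y, lam=0.01)
print(w)   # approximately [1, 2]: the recovered bias and slope
```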


Page 43: Artificial Neural Networks Perceptron

Q & A

Thank you very much for your attention!

Contact: Prof. I. Pitas

[email protected]