
Variational Autoencoders

Autoencoders vs Variational AE

Latent Variable

Sampling Latent Variable
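
The slides give only the title here, so the following is a minimal sketch of the standard reparameterization trick, which is how a VAE typically samples its latent variable; the function and variable names (sample_latent, mu, log_var) are illustrative assumptions, not from the slides.

import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    Writing the sample this way keeps z differentiable with respect to
    mu and log_var, so gradients can flow back through the encoder.
    """
    rng = rng if rng is not None else np.random.default_rng()
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps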

VAE

KL-Divergence

Entropy: if we use log2 for our calculation, we can interpret entropy as "the minimum number of bits it would take us to encode our information".
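
As a quick illustration (not from the slides), entropy in bits can be computed like this for a discrete distribution; the function name is illustrative.

import numpy as np

def entropy_bits(p):
    # Shannon entropy in bits: -sum_x p(x) * log2 p(x).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

print(entropy_bits([0.5, 0.5]))  # a fair coin takes 1 bit per outcome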

Essentially, what we're looking at with the KL divergence is the expectation of the log difference between the probability of data under the original distribution and under the approximating distribution. Again, if we think in terms of log2, we can interpret this as "how many bits of information we expect to lose".
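
Concretely, for discrete distributions this is D_KL(P || Q) = sum_x p(x) * (log2 p(x) - log2 q(x)). A small sketch, with illustrative names:

import numpy as np

def kl_divergence_bits(p, q):
    # Expected log2 difference under p: sum_x p(x) * (log2 p(x) - log2 q(x)).
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(x) = 0 contribute nothing
    return np.sum(p[mask] * (np.log2(p[mask]) - np.log2(q[mask])))

print(kl_divergence_bits([0.5, 0.5], [0.9, 0.1]))  # bits lost approximating p with q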

Deriving Loss Function

Loss Function

The easiest choice for the prior p(z) is N(0, 1)
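
With a standard normal prior and a Gaussian approximate posterior N(mu, sigma^2), the KL term of the loss has a well-known closed form; a sketch under those assumptions, with illustrative names:

import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dims:
    # -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    return -0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var))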

Keras-VAE
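
The original slides presumably walk through Keras code that is not preserved in this transcript. Below is a minimal sketch of a VAE in Keras, assuming TensorFlow 2.x and flattened 784-dimensional inputs (e.g. MNIST); the layer sizes and names are illustrative, not taken from the slides.

import tensorflow as tf
from tensorflow.keras import layers, Model

original_dim, hidden_dim, latent_dim = 784, 256, 2

def sample_z(args):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    mu, log_var = args
    eps = tf.random.normal(tf.shape(mu))
    return mu + tf.exp(0.5 * log_var) * eps

class VAELoss(layers.Layer):
    # Adds reconstruction + KL loss; the KL is the closed form for a N(0,1) prior.
    def call(self, inputs):
        x, x_hat, mu, log_var = inputs
        recon = original_dim * tf.keras.losses.binary_crossentropy(x, x_hat)
        kl = -0.5 * tf.reduce_sum(
            1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1)
        self.add_loss(tf.reduce_mean(recon + kl))
        return x_hat

# Encoder: x -> parameters of q(z|x)
x_in = layers.Input(shape=(original_dim,))
h = layers.Dense(hidden_dim, activation="relu")(x_in)
mu = layers.Dense(latent_dim)(h)
log_var = layers.Dense(latent_dim)(h)
z = layers.Lambda(sample_z)([mu, log_var])

# Decoder: z -> reconstruction
h_dec = layers.Dense(hidden_dim, activation="relu")(z)
x_hat = layers.Dense(original_dim, activation="sigmoid")(h_dec)

out = VAELoss()([x_in, x_hat, mu, log_var])
vae = Model(x_in, out)
vae.compile(optimizer="adam")  # the loss is supplied via add_loss
# vae.fit(x_train, epochs=10, batch_size=128)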
