Journal of Machine Learning Research 20 (2019) 1-52 Submitted 5/16; Revised 10/18; Published 2/19
Transport Analysis of Infinitely Deep Neural Network
Sho Sonoda [email protected]
Center for Advanced Intelligence Project
RIKEN
1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
Noboru Murata [email protected]
School of Advanced Science and Engineering
Waseda University
3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
Editor: Yoshua Bengio
Abstract
We investigated the feature map inside deep neural networks (DNNs) by tracking the transport map. We are interested in the role of depth (why do DNNs perform better than shallow models?) and the interpretation of DNNs (what do intermediate layers do?). Despite the rapid development in their application, DNNs remain analytically unexplained because the hidden layers are nested and the parameters are not faithful. Inspired by the integral representation of shallow NNs, which is the continuum limit of the width, or the number of hidden units, we developed the flow representation and transport analysis of DNNs. The flow representation is the continuum limit of the depth, or the number of hidden layers, and it is specified by an ordinary differential equation (ODE) with a vector field. We interpret an ordinary DNN as a transport map or an Euler broken line approximation of the flow. Technically speaking, a dynamical system is a natural model for the nested feature maps. In addition, it opens a new way to the coordinate-free treatment of DNNs by avoiding the redundant parametrization of DNNs. Following Wasserstein geometry, we analyze a flow in three aspects: dynamical system, continuity equation, and Wasserstein gradient flow. A key finding is that we specified a series of transport maps of the denoising autoencoder (DAE), which is a cornerstone for the development of deep learning. Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the development of the double continuum limit, or the integral representation of the flow representation. As partial answers to the research questions, we found that deeper DAEs converge faster and the extracted features are better; in addition, a deep Gaussian DAE transports mass to decrease the Shannon entropy of the data distribution. We expect that further investigations on these questions will lead to the development of interpretable and principled alternatives to DNNs.
Keywords: representation learning, denoising autoencoder, flow representation, continuum limit, backward heat equation, Wasserstein geometry, ridgelet analysis
1. Introduction
Despite the rapid development in their application, deep neural networks (DNNs) remain analytically unexplained. We are interested in the role of depth: why do DNNs perform better than shallow models? We are also interested in the interpretation of DNNs: what do intermediate layers
©2019 Sho Sonoda and Noboru Murata.
License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v20/16-243.html.
do? To the best of our knowledge, thus far, traditional theories, such as statistical learning theory (Vapnik, 1998), have not succeeded in completely answering the above questions (Zhang et al., 2018). Existing DNNs lack interpretability; hence, a DNN is often called a blackbox. In this study, we propose the flow representation and transport analysis of DNNs, which provide us with insights into why DNNs can perform better and facilitate our understanding of what DNNs do. We expect that these lines of study will lead to the development of interpretable and principled alternatives to DNNs.
Compared to other shallow models, such as kernel methods (Shawe-Taylor and Cristianini, 2004) and ensemble methods (Schapire and Freund, 2012), DNNs have at least two specific technical issues: the function composition and the redundant and complicated parametrization. First, a DNN is formally a composite g_L ∘ ··· ∘ g_0 of intermediate maps g_ℓ (ℓ = 0, …, L). Here, each g_ℓ corresponds to the ℓ-th hidden layer. Currently, our understanding of learning machines is based on linear algebra, i.e., the basis and coefficients (Vapnik, 1998). Linear algebra is compatible with shallow models because a shallow model is a linear combination of basis functions. However, it has poor compatibility with deep models because the function composition (f, g) ↦ f ∘ g is not assumed in the standard definition of the linear space. Therefore, we should move to spaces where the function composition is defined, such as monoids, semigroups, and dynamical systems. Second, the standard parametrization of the NN, such as g_ℓ(x) = Σ_{j=1}^p c_{ℓj} σ(a_{ℓj} · x − b_{ℓj}), is redundant because there exist different sets of parameters that specify the same function, which causes technical problems, such as local minima. Furthermore, it is complicated because the interpretation of parameters is usually impossible, which results in the blackbox nature of DNNs. Therefore, we need a new parametrization that is concise in the sense that different parameters specify different functions and simple in the sense that it is easy to understand.
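This redundancy can be checked directly. For a one-hidden-layer network with an odd activation such as tanh, permuting the hidden units, or simultaneously flipping the signs of (a_j, b_j, c_j) for any unit, leaves the computed function unchanged; the sketch below (all names and values illustrative) verifies both symmetries numerically.

```python
import numpy as np

def shallow_nn(x, a, b, c):
    # One-hidden-layer network: sum_j c_j * tanh(a_j . x - b_j)
    return c @ np.tanh(a @ x - b)

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 2))   # input weights: 3 hidden units, 2 inputs
b = rng.normal(size=3)        # biases
c = rng.normal(size=3)        # output weights
x = rng.normal(size=2)

y = shallow_nn(x, a, b, c)
perm = [2, 0, 1]
y_perm = shallow_nn(x, a[perm], b[perm], c[perm])   # permuted hidden units
y_flip = shallow_nn(x, -a, -b, -c)                  # sign-flipped units (tanh is odd)
print(np.allclose(y, y_perm), np.allclose(y, y_flip))  # True True
```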
For shallow NNs, the integral representation theory (Murata, 1996; Candès, 1998; Sonoda and Murata, 2017a) provides a concise and simple reparametrization. The integral representation is derived by a continuum limit of the width, or the number of hidden units. Owing to the ridgelet transform, a pseudo-inverse operator of the integral representation operator, it is concise and simple (see Section 1.3.2 for further details on the ridgelet transform). Furthermore, in the integral representation, we can compute the parameters of the shallow NN that attains the global minimum of the backpropagation training (Sonoda et al., 2018). In the integral representation, thus far, the shallow NN is no longer a blackbox, and the training is principled. However, the integral representation is again based on linear algebra, the scope of which does not include DNNs.
Inspired by the integral representation theory, we introduced the flow representation and developed the transport analysis of DNNs. The flow representation is derived by a continuum limit of the depth, or the number of hidden layers. In the flow representation, we formulate a DNN as a flow of an ordinary differential equation (ODE) ẋ_t = v_t(x_t) with vector field v_t. In addition, we introduced the transport map, by which we mean a discretization x ↦ x + f_t(x) of the flow. Specifically, we regard the intermediate map g : R^m → R^n of an ordinary DNN as a transport map that transfers the mass at x ∈ R^m toward g(x) ∈ R^n. Since the flow and transport map are independent of coordinates, they enable a coordinate-free treatment of DNNs. In the transport analysis, following Wasserstein geometry (Villani, 2009), we track a flow by analyzing the three profiles of the
[Figure 1: layers X, Z1, Z2, Z3, Z4, Z5, Y; dimensions R^{28×28} and R^{1000}.]
Figure 1: Mass transportation in a deep neural network that classifies images of digits. In the final hidden layer, the feature vectors have to be linearly separable because the output layer is just a linear classifier. Hence, through the network, the same digits gradually accumulate and different digits gradually separate.
flow: dynamical system, pushforward measure, and Wasserstein gradient flow (Ambrosio et al., 2008) (see Section 2 for further details).
We note that when the input and the output differ in dimension, i.e., m ≠ n, we simply consider that both the input space and the output space are embedded in a common high-dimensional space. As a composite of transport maps leads to another transport map, the transport map has compatibility with deep structures. In this manner, transportation is a universal characteristic of DNNs. For example, let us consider a digit recognition problem with DNNs. We can expect the feature extractor in the DNN to be a transport map that separates the feature vectors of different digits, similar to the separation of oil and water (see Figure 1 for example). At the time of the initial submission in 2016, the flow representation seemed to be a novel viewpoint of DNNs. At present, it is the mainstream of development. For example, two important DNNs, the residual network (ResNet) (He et al., 2016) and the generative adversarial net (GAN) (Goodfellow et al., 2014), are now considered to be transport maps (see Section 1.2 for a more detailed survey). Instead of directly investigating DNNs in terms of the redundant and complex parametrization, we perform transport analysis associated with the flow representation. We consider that the flow representation is potentially concise and simple because the flow is independent of parametrization, and it is specified by a single vector field v.
In this study, we demonstrate transport analysis of the denoising autoencoder (DAE). The DAE was introduced by Vincent et al. (2008) as a heuristic modification to enhance the robustness of the traditional autoencoder. The traditional autoencoder is an NN that is trained as an identity map g(x) = x. The hidden layer of the network is used as a feature map, which is often called the "code" because the activation pattern appears to be random, but it surely encodes some information about the input data. On the other hand, the DAE is an NN that is trained as a "denoising" map g(x̃) ≈ x of deliberately corrupted inputs x̃. The DAE is a cornerstone for the development of deep learning or representation learning (Bengio et al., 2013a). Although the corrupt and denoise principle is simple, it is
successful and has inspired many representation learning algorithms (see Section 1.3.1 for example). Furthermore, we investigate stacking (Bengio et al., 2007) of DAEs. Because the stacked DAE (Vincent et al., 2010) runs DAEs on the codes in the hidden layer, it has been less investigated so far.
The key finding is that when the corruption process is additive, i.e., x̃ = x + ε with some noise ε, then the DAE g is given by the sum of the traditional autoencoder x̃ ↦ x̃ and a certain denoising term x̃ ↦ f_t(x̃) parametrized by noise variance t:

g_t(x̃) = x̃ + f_t(x̃). (1)

From the statistical viewpoint, this equation is reasonable because the DAE amounts to an estimation problem of the mean parameter. Obviously, (1) is a transport map because the denoising term f_t is a displacement vector from the origin x̃, and the noise variance t is the transport time. Starting from the shallow DAE, this paper develops three topics: the transport map of the deep DAE, the equivalence between the stacked DAE and the composition of DAEs, and the development of the double continuum limit, or the integral representation of the flow representation.
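For a concrete check (a worked example of ours, not taken from the paper): if the data are one-dimensional Gaussian, x ∼ N(0, σ²), and the corruption is x̃ = x + ε with ε ∼ N(0, t), the L²-optimal denoiser is the posterior mean E[x | x̃] = σ²/(σ² + t) · x̃, which has exactly the form (1) with f_t(x̃) = −(t/(σ² + t)) x̃. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, t, n = 1.0, 0.25, 200_000   # prior variance, noise variance, sample size

x = rng.normal(0.0, np.sqrt(sigma2), n)        # clean data  ~ N(0, sigma2)
x_tilde = x + rng.normal(0.0, np.sqrt(t), n)   # corrupted   ~ N(0, sigma2 + t)

# The L2-optimal denoiser is E[x | x_tilde]; for this Gaussian model it is
# linear, so a least-squares fit recovers its coefficient.
coef_hat = (x_tilde @ x) / (x_tilde @ x_tilde)
coef_theory = sigma2 / (sigma2 + t)            # posterior-mean shrinkage = 0.8

# In the form (1): g_t(x~) = x~ + f_t(x~) with f_t(x~) = -(t/(sigma2+t)) x~
print(coef_hat, coef_theory)
```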
1.1. Contributions of This Study
In this paper, we introduce the flow representation of DNNs and develop the transport analysis of DAEs. The contributions of this paper are listed below.
• We introduced the flow representation, which can avoid the redundancy and complexity of the ordinary parametrization of DNNs.
• We specified the transport maps of shallow, deep, and infinitely deep DAEs, and provided their statistical interpretations. The shallow DAE is an estimator of the mean, and the deep DAE transports data points to decrease the Shannon entropy of the data distribution. Through analytic and numerical experiments, we showed that deep DAEs can extract much more information than shallow DAEs.
• We proved the equivalence between the stacked DAE and the composition of DAEs. Because of the peculiar construction, it is difficult to formulate and understand stacking. Nevertheless, by tracking the flow, we succeeded in formulating the stacked DAE. Consequently, we can interpret the effect of the pre-training as a regularization of hidden layers.
• We provided a new direction for the mathematical modeling of DNNs: the double continuum limit, or the integral representation of the flow representation. We presented some examples of the double continuum limit of DAEs. In the integral representation, the shallow NN is no longer a blackbox, and the training is principled. We consider that further investigations on the double continuum limit will lead to the development of interpretable and principled alternatives to DNNs.
Figure 2: The activation patterns in DeepFace gradually change (Taigman et al., 2014).
1.2. Related Work
1.2.1. Why Deep?
Before the success of deep learning, traditional theories were skeptical of the depth concept. According to approximation theory, various shallow models (not only NNs) can approximate any function (Pinkus, 2005). According to estimation theory, various shallow models can attain the minimax optimal rate (Tsybakov, 2009). According to optimization theory, the depth does nothing but increase the complexity of loss surfaces unnecessarily (Boyd and Vandenberghe, 2004). In reality, of course, DNNs perform overwhelmingly better than shallow models. Thus far, learning theory has not succeeded in explaining the gap between theory and reality (Zhang et al., 2017).
In recent years, these theories have changed drastically. For example, many authors claim that the depth increases the expressive power in the exponential order while the width does so in the polynomial order (Telgarsky, 2016; Eldan and Shamir, 2016; Cohen et al., 2016; Yarotsky, 2017), and that DNNs can attain the minimax optimal rate in wider classes of functions (Schmidt-Hieber, 2017; Imaizumi and Fukumizu, 2019). Radical reviews of the shape of loss surfaces (Dauphin et al., 2014; Choromanska et al., 2015; Kawaguchi, 2016; Soudry and Carmon, 2016), the implicit regularization by stochastic gradient descent (Neyshabur, 2017), and the acceleration effect of over-parametrization (Nguyen and Hein, 2017; Arora et al., 2018) are ongoing. Besides the recent trends toward the rationalization of deep learning, neutral yet interesting studies have been published (Ba and Caruana, 2014; Lin et al., 2017; Poggio et al., 2017). In this study, we found that deep DAEs converge faster and that the extracted features are different from each other.
1.2.2. What Do Deep Layers Do?
Traditionally, DNNs are said to construct a hierarchy of meanings (Hinton, 1989). In convolutional NNs for image recognition, such hierarchies are empirically observed (Lee, 2010; Krizhevsky et al., 2012; Zeiler and Fergus, 2014). The hierarchy hypothesis seems to be acceptable, but it lacks explanations as to how the hierarchy is organized.
Taigman et al. (2014) reported an interesting phenomenon whereby the activation patterns in the hidden layers change by gradation from face-like patterns to codes. Inspired by Figure 2, we came up with the idea of regarding the activation pattern as a coordinate and the depth as the transport time.
1.2.3. Flow Inside Neural Networks
At the time of the initial submission in 2016, the flow representation, especially the continuum limit of the depth and the collaboration with Wasserstein geometry, seemed to be a novel viewpoint of DNNs. At present, it is the mainstream of development.
Alain and Bengio (2014) were the first to derive a special case of (1), which motivated our study. Then, Alain et al. (2016) developed the generative model as a probabilistic reformulation of the DAE. The generative model was a new frontier at that time; now, it is widely used in variational autoencoders (Kingma and Welling, 2014), generative adversarial nets (GANs) (Goodfellow et al., 2014), minimum probability flows (Sohl-Dickstein et al., 2015), and normalizing flows (Rezende and Mohamed, 2015). Generative models have high compatibility with transport analysis because they are formulated as Markov processes. In particular, the generator in GANs is exactly a transport map because it is a change of distribution g : M → N from a normal distribution to a data distribution. From this viewpoint, Arjovsky et al. (2017) succeeded in stabilizing the training process of GANs by introducing Wasserstein geometry.
The skip connection in the residual network (ResNet) (He et al., 2016) is considered to be a key structure for training a super-deep network with more than 1,000 layers. Formally, the skip connection is a transport map because it has the expression g(x) = x + f(x). From this viewpoint, Nitanda and Suzuki (2018) reformulated the ResNet as a functional gradient and estimated the generalization error, and Lu et al. (2018) unified various ResNets as ODEs. In addition, Chizat and Bach (2018) proved the global convergence of stochastic gradient descent (SGD) using Wasserstein gradient flow. Novel deep learning methods have been proposed by controlling the flow (Ioffe and Szegedy, 2015; Gomez et al., 2017; Haber and Ruthotto, 2018; Li and Hao, 2018; Chen et al., 2018).
We remark that in shrinkage statistics, the expression of the transport map x + f(x) is known as Brown's representation of the posterior mean (George et al., 2006). Liu and Wang (2016) analyzed it and proposed a Bayesian inference algorithm, apart from deep learning.
1.3. Background
1.3.1. Denoising Autoencoders
The denoising autoencoder (DAE) is a fundamental model for representation learning, the objective of which is to capture a good representation of the data. Vincent et al. (2008) introduced it as a heuristic modification of traditional autoencoders for enhancing robustness. In the setting of traditional autoencoders, we train an NN as an identity map x ↦ x and extract the hidden layer to obtain the so-called "code." On the other hand, the DAE is trained as a denoising map x̃ ↦ x of deliberately corrupted inputs x̃. Although the corrupt and denoise principle is simple, it has inspired many next-generation models. In this study, we analyze DAE variants such as the shallow DAE, deep DAE (or composition of DAEs), infinitely deep DAE (or continuous DAE), and stacked DAE. Stacking (Bengio et al., 2007) was proposed in the early stages of deep learning, and it remains a mysterious treatment because it runs DAEs on codes in the hidden layer.
The theoretical justifications and extensions follow from at least five standpoints: manifold learning (Rifai et al., 2011; Alain and Bengio, 2014), generative modeling (Vincent et al., 2010; Bengio et al., 2013b, 2014), infomax principle (Vincent et al., 2010), learning
dynamics (Erhan et al., 2010), and score matching (Vincent, 2011). The first three standpoints were already mentioned in the original paper (Vincent et al., 2008). According to these standpoints, a DAE extracts one of the following from the data set: a manifold on which the data are arranged (manifold learning); the latent variables, which often behave as nonlinear coordinates in the feature space, that generate the data (generative modeling); a transformation of the data distribution that maximizes the mutual information (infomax); good initial parameters that allow the training to avoid local minima (learning dynamics); or the data distribution (score matching). A turning point appears to be the finding of the score matching aspect (Vincent, 2011), which reveals that score matching with a special form of the energy function coincides with a DAE. Thus, a DAE is a density estimator of the data distribution μ. In other words, it extracts and stores information as a function of μ. Since then, many researchers have avoided stacking deterministic autoencoders and have developed generative density estimators (Bengio et al., 2013b, 2014) instead.
1.3.2. Integral Representation Theory and Ridgelet Analysis
The flow representation is inspired by the integral representation theory (Murata, 1996; Candès, 1998; Sonoda and Murata, 2017a).
The integral representation

S[γ](x) = ∫ γ(a, b) σ(a · x − b) dλ(a, b) (2)

is a continuum limit of a shallow NN g_p(x) = Σ_{j=1}^p c_j σ(a_j · x − b_j) as the hidden unit number p → ∞. In S[γ], every possible nonlinear parameter (a, b) is "integrated out," and only linear parameters c_j remain as a coefficient function γ(a, b). Therefore, we do not need to select which (a, b)'s to use, which amounts to a non-convex optimization problem. Instead, the coefficient function γ(a, b) automatically selects the (a, b)'s by weighting them. Similar reparametrization techniques have been proposed for Bayesian NNs (Neal, 1996) and convex NNs (Bengio et al., 2006; Bach, 2017a). Once a coefficient function γ is given, we can obtain an ordinary NN g_p that approximates S[γ] by numerical integration. We also remark that the integral representation S[γ_p] with a singular coefficient γ_p := Σ_{j=1}^p c_j δ_{(a_j, b_j)} leads to an ordinary NN g_p.
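As an illustration of this discretization (the coefficient γ and the activation below are arbitrary choices of ours, not taken from the paper), a Riemann sum over a bounded (a, b) box turns S[γ] into an ordinary finite network whose output weights are c_j = γ(a_j, b_j) times the cell area:

```python
import numpy as np

def sigma(z):
    return np.tanh(z)

def gamma(a, b):
    # illustrative smooth coefficient function, concentrated near (a, b) = (1, 0)
    return np.exp(-(a - 1.0) ** 2 - b ** 2)

def discretize(n):
    # Riemann-sum discretization of S[gamma](x) = int gamma(a,b) sigma(a x - b) da db
    # over [-3, 3]^2: an ordinary NN with p = n * n hidden units.
    a = np.linspace(-3, 3, n)
    b = np.linspace(-3, 3, n)
    A, B = np.meshgrid(a, b)
    C = gamma(A, B) * (a[1] - a[0]) * (b[1] - b[0])   # output weights
    return lambda x: float(np.sum(C * sigma(A * x - B)))

g_coarse, g_fine = discretize(50), discretize(400)
print(g_coarse(0.7), g_fine(0.7))   # the two Riemann sums nearly agree
```

Refining the grid increases the hidden unit number p while the computed function converges to S[γ].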
The advantage of the integral representation is that the solution operator, the ridgelet transform, to the integral equation S[γ] = f and to the optimization problem of L[γ] := ‖S[γ] − f‖² + β‖γ‖² is known. The ridgelet transform with an admissible function ψ is given by

R[f](a, b) := ∫_{R^m} f(x) ψ(a · x − b) dx. (3)

The integral equation S[γ] = f is a traditional form of learning, and the ridgelet transform γ = R[f] satisfies S[γ] = S[R[f]] = f (Murata, 1996; Candès, 1998; Sonoda and Murata, 2017a). The optimization problem of L[γ] is a modern form of learning, and a modified version of the ridgelet transform gives the global optimum (Sonoda et al., 2018). These studies imply that a shallow NN is no longer a blackbox but a ridgelet transform of the data set. Traditionally, the integral representation has been developed to estimate the
approximation and estimation error bounds of shallow NNs g_p (Barron, 1993; Kůrková, 2012; Klusowski and Barron, 2017, 2018; Suzuki, 2018). Recently, numerical integration methods for R[f] and S[R[f]] were developed (Candès, 1998; Sonoda and Murata, 2014; Bach, 2017b) with various f, including the MNIST classifier. Hence, by computing the ridgelet transform of the data set, we can obtain the global minimizer without gradient descent.
Thus far, the integral representation is known as an efficient reparametrization method to facilitate understanding of the hidden layers, to estimate the approximation and estimation error bounds of shallow NNs, and to calculate the hidden parameters. However, it is based on linear algebra, i.e., it starts by regarding c_j and σ(a_j · x − b_j) as coefficients and basis functions, respectively. Therefore, the integral representation for DNNs is not trivial at all.
1.3.3. Optimal Transport Theory and Wasserstein Geometry
The optimal transport theory (Villani, 2009) originated from the practical requirement in the 18th century to transport materials at the minimum cost. At the end of the 20th century, it was transformed into Wasserstein geometry, or the geometry on the space of probability distributions. Recently, Wasserstein geometry has attracted considerable attention in statistics and machine learning. One of the reasons for its popularity is that the Wasserstein distance can capture the difference between two singular measures, whereas the traditional Kullback-Leibler divergence cannot (Arjovsky et al., 2017). Another reason is that it gives a unified perspective on a series of functional inequalities, including the concentration inequality. Computation methods for the Wasserstein distance and Wasserstein gradient flow have also been developed (Peyré and Cuturi, 2018; Nitanda and Suzuki, 2018; Zhang et al., 2018). In this study, we employ Wasserstein gradient flow (Ambrosio et al., 2008) for the characterization of DNNs.
Given a density μ of materials in R^m, a density ν of final destinations in R^m, and a cost function c : R^m × R^m → R associated with the transportation, under some regularity conditions, there exist optimal transport map(s) g : R^m → R^m that attain the minimum transportation cost. Let W(μ, ν) denote the minimum cost of the transportation problem from μ to ν. Then, it behaves as a distance between the two probability densities μ and ν, and it is called the Wasserstein distance, which is the starting point of Wasserstein geometry.
When the cost function c is given by the ℓ^p-distance, i.e., c(x, y) = |x − y|^p, the corresponding Wasserstein distance is called the L^p-Wasserstein distance W_p(μ, ν). Let P_p(R^m) be the space of probability densities on R^m that have finite p-th moment. The distance space P_p(R^m) equipped with the L^p-Wasserstein distance W_p is called the L^p-Wasserstein space. Furthermore, the L^2-Wasserstein space (P_2, W_2) admits the Wasserstein metric g_2, which is an infinite-dimensional Riemannian metric that induces the L^2-Wasserstein distance as the geodesic distance. Owing to g_2, the L^2-Wasserstein space is an infinite-dimensional manifold. On P_2, we can introduce the tangent space T_μ P_2 at μ ∈ P_2 and the gradient operator grad, which are fundamental to defining Wasserstein gradient flow. See Section 2 for more details.
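As a small concrete example (our own illustration, not from the paper): in one dimension, the L²-optimal coupling is monotone, so the W_2 distance between two equal-size empirical samples is computed by sorting and pairing. For two Gaussians with equal variance, W_2 equals the distance between the means:

```python
import numpy as np

def w2_empirical(x, y):
    # 1-D L2-Wasserstein distance between equal-size empirical measures:
    # the optimal coupling is monotone, i.e., sort-and-pair.
    xs, ys = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((xs - ys) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)   # samples from N(0, 1)
y = rng.normal(3.0, 1.0, 100_000)   # samples from N(3, 1)
print(w2_empirical(x, y))           # close to |3 - 0| = 3
```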
Figure 3: Three profiles of a flow analyzed in the transport analysis: dynamical system in R^m described by a vector field (or transport map) (left), pushforward measure described by the continuity equation in R^m (center), and Wasserstein gradient flow in P_2(R^m) (right).
Organization of This Paper
In Section 2, we describe the framework of transport analysis, which combines a quick introduction to dynamical systems theory, optimal transport theory, and Wasserstein gradient flow. In Sections 3 and 4, we specify the transport maps of shallow, deep, and infinitely deep DAEs, and we give their statistical interpretations. In Section 5, we present analytic examples and the results of numerical experiments. In Section 6, we prove the equivalence between the stacked DAE and the composition of DAEs. In Section 7, we develop the integral representation of the flow representation.
Remark
After the initial submission of the manuscript in 2016, the present manuscript has been substantially reorganized and updated. The authors presented digests of some results from Sections 3, 4, and 7 in two workshops (Sonoda and Murata, 2017b,c).
2. Transport Analysis of Deep Neural Networks
In the transport analysis, we regard a deep neural network as a transport map, and we track the flow in three scales: microscopic, mesoscopic, and macroscopic. Wasserstein geometry provides a unified framework for bridging these three scales. Across these scales, we analyze three profiles of the flow: dynamical system, pushforward measure, and Wasserstein gradient flow.
First, on the microscopic scale, we analyze the transport map g_t : R^m → R^m, which simply describes the transportation of every point. In continuum mechanics, this viewpoint corresponds to the Lagrangian description. The transport map g_t is often associated with a velocity field v_t that summarizes all the behavior of g_t by an ODE, or the continuous dynamical system: ∂_t g_t(x) = v_t(g_t(x)). We note that, as suggested by chaos theory, it is generally difficult to track a continuous dynamics.
Second, on the mesoscopic scale, we analyze the pushforward μ_t, or the time evolution of the data distribution. In continuum mechanics, this viewpoint corresponds to the Eulerian description. When the transport map is associated with a vector field v_t, the corresponding distributions evolve according to a partial differential equation (PDE), or the
continuity equation ∂_t μ_t = −∇ · [v_t μ_t]. We note that, as suggested by fluid dynamics, it is generally difficult to track a continuity equation.
Finally, on the macroscopic scale, we analyze the Wasserstein gradient flow, or the trajectories of the time evolution of μ_t in the space P(R^m) of probability distributions on R^m. When the transport map is associated with a vector field v_t, there exists a time-independent potential functional F on P(R^m) such that an evolution equation, or the Wasserstein gradient flow μ̇_t = −grad F[μ_t], coincides with the continuity equation. We remark that tracking a Wasserstein gradient flow may be easier than in the two above-mentioned cases, because the potential functional is independent of time.
2.1. Transport Map and Flow
In the broadest sense, a transport map is simply a measurable map g : M → N between two probability spaces M and N (see Definition 1.2 in Villani, 2009, for example). In this study, we use the term as an update rule. Depending on the context, we distinguish the term "flow" from "transport map." While a flow is associated with a continuous dynamical system, a transport map is associated with a discrete dynamical system. We understand that a transport map arises as a discretization of a flow. An ordinary DNN coincides with a transport map, and the depth continuum limit coincides with a flow.
Definition 1 A transport map g_t : R^m → R^m is a measurable map given by

g_t(x) = x + f_t(x), x ∈ R^m, t > 0,
g_0(x) = x, x ∈ R^m, t = 0, (4)

with an update vector f_t.
Definition 2 A flow φ_t is given by an ordinary differential equation (ODE),

φ̇_t(x) = v_t(φ_t(x)), x ∈ R^m, t > 0,
φ_0(x) = x, x ∈ R^m, t = 0, (5)

with a velocity field v_t.
In particular, we are interested in the case when the update rule (4) is a tangent line approximation of a flow (5), i.e., g_t satisfies

lim_{t→0} (g_t(x) − x) / t = v_0(x), x ∈ R^m, (6)

for some v_t. In this case, the velocity field v_t is the only parameter that determines the transport map.
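Definitions 1 and 2 are connected by the Euler broken line: composing many transport maps g(x) = x + h v(x) with a small step h approximates the flow, just as stacking many layers approximates the depth continuum. A sketch with the illustrative field v(x) = −x (our choice), whose exact flow is φ_t(x) = x e^{−t}:

```python
import numpy as np

def v(x):
    # illustrative time-independent velocity field; exact flow: x * exp(-t)
    return -x

def euler_flow(x0, T, n_steps):
    # Compose n_steps transport maps g(x) = x + h * v(x)  (Euler broken line);
    # each step plays the role of one layer, x + f_t(x) with f_t = h * v.
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * v(x)
    return x

x0, T = 2.0, 1.0
approx = euler_flow(x0, T, 1000)
exact = x0 * np.exp(-T)
print(approx, exact)   # the broken line converges to the flow as n_steps grows
```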
2.2. Pushforward Measure and Continuity Equation
In association with the mass transportation x ↦ g_t(x), the data distribution μ_0 itself changes its shape to, say, μ_t (see Figure 4, for example). Technically speaking, μ_t is called (the density of) the pushforward measure of μ_0 by g_t, and it is denoted by g_t♯μ_0.
Definition 3 Let μ be a Borel measure on M and g : M → N be a measurable map. Then, g♯μ denotes the image measure (or pushforward) of μ by g. It is a measure on N, defined by (g♯μ)(B) = μ ∘ g^{−1}(B) for every Borel set B ⊂ N.
The pushforward μ_t is calculated by the change-of-variables formula. In particular, the following extended version by Evans and Gariepy (2015, Theorem 3.9) from geometric measure theory is useful.
Fact 1 Let g : R^m → R^n be Lipschitz continuous, m ≤ n, and μ be a probability density on R^m. Then, the pushforward g♯μ satisfies

g♯μ ∘ g(x) [∇g](x) = μ(x), a.e. x. (7)

Here, the Jacobian is defined by

[∇g] = √(det |(∇g)^⊤ ∘ (∇g)|). (8)
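Fact 1 can be checked on a toy example (our own illustration): the affine map g(x) = 2x + 1 pushes N(0, 1) forward to N(1, 4), and formula (7) with the Jacobian [∇g] = |g′(x)| = 2 holds pointwise:

```python
import numpy as np

def gauss(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def g(x):
    return 2.0 * x + 1.0   # transport map pushing N(0,1) forward to N(1,4)

x = np.linspace(-3.0, 3.0, 61)
mu = gauss(x, 0.0, 1.0)                # source density
push_at_gx = gauss(g(x), 1.0, 4.0)     # pushforward density evaluated at g(x)
jac = 2.0                              # [grad g] = |g'(x)|
# change-of-variables (7): (g#mu)(g(x)) * [grad g](x) = mu(x)
print(np.max(np.abs(push_at_gx * jac - mu)))   # ~ 0 (machine precision)
```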
The continuity equation describes the one-to-one relation between a flow and the pushforward.
Fact 2 Let φ_t be the flow of an ODE (5) with vector field v_t. Then, the pushforward μ_t of the initial distribution μ_0 evolves according to the continuity equation

∂_t μ_t(x) = −∇ · [μ_t(x) v_t(x)], x ∈ R^m, t ≥ 0. (9)

Here, ∇· denotes the divergence operator in R^m.
The continuity equation is also known as the conservation of mass formula, and this relation between the partial differential equation (PDE) (9) and the ODE (5) is a well-known fact in continuum physics (Villani, 2009, p. 19). See Appendix B for a sketch of the proof and Ambrosio et al. (2008, § 8) for more detailed discussions.
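The continuity equation (9) can also be verified numerically on a toy flow (our illustration): for v(x) = −x, the flow is φ_t(x) = x e^{−t}, and the pushforward of N(0, 1) is μ_t = N(0, e^{−2t}); finite differences confirm that ∂_t μ_t matches −∇·[μ_t v_t]:

```python
import numpy as np

def gauss(x, var):
    return np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# For v(x) = -x, the pushforward of N(0,1) is mu_t = N(0, exp(-2t)).
x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]
t, dt = 0.5, 1e-5
mu = lambda s: gauss(x, np.exp(-2.0 * s))

lhs = (mu(t + dt) - mu(t - dt)) / (2.0 * dt)   # d/dt mu_t
rhs = -np.gradient(mu(t) * (-x), dx)           # -div [mu_t v_t]
print(np.max(np.abs(lhs - rhs)))               # small discretization error
```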
2.3. Wasserstein Gradient Flow Associated with Continuity Equation
In addition to the ODE and PDE in R^m, we introduce the third profile: the Wasserstein gradient flow, or the evolution equation in the space of probability densities on R^m. The Wasserstein gradient flow has the distinct advantage that the potential functional F of the gradient flow is independent of time t; on the other hand, the vector field v_t is usually time-dependent. Furthermore, it often facilitates the understanding of transport maps because we will see that both the Boltzmann entropy and the Rényi entropy are examples of F.
Let P_2(R^m) be the L^2-Wasserstein space defined in Section 1.3.3, and let μ_t ∈ P_2(R^m) be the solution of the continuity equation (9) with initial distribution μ_0 ∈ P_2(R^m). Then, the map t ↦ μ_t plots a curve in P_2(R^m). According to the Otto calculus (Villani, 2009, § 23), this curve coincides with a functional gradient flow in P_2(R^m), called the Wasserstein gradient flow, with respect to some potential functional F : P_2(R^m) → R.
Specifically, we further assume that the vector field v_t is given by the gradient vector field ∇V_t of a potential function V_t : R^m → R.
Fact 3 Assume that µ_t satisfies the continuity equation with the gradient vector field,

    ∂_t µ_t = −∇ · [µ_t ∇V_t],   (10)

and that we have found F that satisfies the following equation:

    (d/dt) F[µ_t] = ∫_{R^m} ∇V_t(x) [∂_t µ_t](x) dx.   (11)

Then, the Wasserstein gradient flow

    (d/dt) µ_t = −grad F[µ_t],   (12)

coincides with the continuity equation.
Here, grad denotes the gradient operator on the L2-Wasserstein space P₂(R^m) explained in Section 1.3.3. While (12) is an evolution equation or an ODE in P₂(R^m), (9) is a PDE in R^m. Hence, we use different notations for the time derivatives, d/dt and ∂_t.
3. Denoising Autoencoder
We formulate the denoising autoencoder (DAE) as a variational problem, and we show that the minimizer g*, or the training result, is a transport map. Even though the term "DAE" refers to a training procedure of neural networks, we also refer to the minimizer of the DAE objective as a "DAE." We further investigate the initial velocity vector field ∂_t g_{t=0} of the mass transportation, and we show that the data distribution µ_t evolves according to the continuity equation.
For the sake of simplicity, we assume that the hidden unit number of NNs is sufficiently large (or infinite), and thus the NNs can always attain the minimum. Furthermore, we assume that the size of the data set is sufficiently large (or infinite). When the hidden unit number and the size of the data set are both finite, we understand the DAE g to be composed of the minimizer g* and a residual term h; namely, g = g* + h. Theoretical investigation of the approximation and estimation error h remains future work.
3.1. Training Procedure of DAE
Let x be an m-dimensional random vector that is distributed according to the data distribution µ_0, and let x̃ be its corruption defined by

    x̃ = x + ε,   ε ∼ ν_t,

where ν_t denotes the noise distribution parametrized by variance t ≥ 0. A basic example of ν_t is the Gaussian noise with mean 0 and variance t, i.e., ν_t = N(0, tI).
The DAE is a function that is trained to remove the corruption from x̃ and restore it to the original x; this is equivalent to finding a function g that minimizes the objective function

    L[g] := E_{x,x̃} |g(x̃) − x|².   (13)
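To make the variational problem concrete, the following sketch minimizes (13) over linear maps g(x̃) = ax̃ + b, an assumption made purely for tractability and not the paper's NN model, for Gaussian data and Gaussian noise. The least-squares minimizer recovers the closed-form coefficients of the shallow DAE for the univariate normal case (Section 5.1.1):

```python
import numpy as np

# Sketch (assumption: a linear model g(x) = a*x + b instead of a neural
# network). For data mu_0 = N(m0, s0^2) and Gaussian noise of variance t,
# the minimizer of (13) over linear maps is a = s0^2/(s0^2 + t) and
# b = (t/(s0^2 + t))*m0, matching the univariate shallow-DAE formula.

rng = np.random.default_rng(0)
m0, s0, t = 1.0, 2.0, 0.5
n = 100_000

x = m0 + s0 * rng.standard_normal(n)                  # clean data x ~ mu_0
x_tilde = x + np.sqrt(t) * rng.standard_normal(n)     # corruption

# Minimize E|g(x_tilde) - x|^2 over g(x) = a*x + b by least squares.
A = np.stack([x_tilde, np.ones(n)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, x, rcond=None)

a_theory = s0**2 / (s0**2 + t)
b_theory = (t / (s0**2 + t)) * m0
print(a, a_theory)   # both ~0.889
print(b, b_theory)   # both ~0.111
```

The fitted coefficients converge to the theoretical ones as the sample size grows.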
Note that as long as g is a universal approximator and can thus attain the minimum, it need not be a neural network. Specifically, our analysis in this section and the next is applicable to a wide range of learning machines. Typical examples of g include neural networks with a sufficiently large number of hidden units, splines (Wahba, 1990), kernel machines (Shawe-Taylor and Cristianini, 2004), and ensemble models (Schapire and Freund, 2012).
3.2. Transport Map of DAE
Theorem 4 (Modification of Theorem 1 by Alain and Bengio, 2014) The global minimum g*_t of L[g] is attained at

    g*_t(x̃) = (1 / (ν_t ∗ µ_0)(x̃)) ∫_{R^m} x ν_t(x̃ − x) µ_0(x) dx   (14)

            = x̃ − (1 / (ν_t ∗ µ_0)(x̃)) ∫_{R^m} ε ν_t(ε) µ_0(x̃ − ε) dε  =: x̃ + f_t(x̃),   (15)

where ∗ denotes the convolution operator and f_t denotes the denoising term.
Here, the second equation is simply derived by changing the variable x → x̃ − ε (see Appendix A for the complete proof, where we use the calculus of variations). Note that this calculation first appeared in Alain and Bengio (2014, Theorem 1), where the authors obtained (14).
The DAE g*_t(x̃) is composed of the identity term x̃ and the denoising term f_t(x̃). If we assume that ν_t tends to the Dirac measure δ as t → 0, then in the limit t → 0 the denoising term f_t vanishes and the DAE reduces to a traditional autoencoder. We reinterpret the DAE g*_t as a transport map with transport time t that transports the mass at x ∈ R^m toward x + f_t(x) ∈ R^m with displacement vector f_t(x).
3.3. Statistical Interpretation of DAE
In statistics, (15) is known as Brown's representation of the posterior mean (George et al., 2006). This is no coincidence, because the DAE g*_t is an estimator of the mean. Recall that a DAE is trained to recover the original vector x, given its corruption x̃ = x + ε. At first sight this seems hopeless, because recovering x from x̃ amounts to reversing the random walk x̃ = x + ε (in Figure 4, the multimodal distributions µ_{0.5} and µ_{1.0} indicate the difficulty). This is an inverse problem, or a statistical estimation problem of the latent vector x given the noisy observation x̃ under the observation model x̃ = x + ε. According to a fundamental fact of estimation theory, the minimum mean squared error (MMSE) estimator of x given x̃ is the posterior mean E[x|x̃]. In our case, the posterior mean equals g*_t:

    E[x|x̃] = ∫_{R^m} x p(x̃ | x) p(x) dx / ∫_{R^m} p(x̃ | x′) p(x′) dx′
            = (1 / (ν_t ∗ µ_0)(x̃)) ∫_{R^m} x ν_t(x̃ − x) µ_0(x) dx = g*_t(x̃).   (16)
Similarly, we can interpret the denoising term f_t(x̃) as the posterior mean E[ε|x̃] of the noise ε given the observation x̃.
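The posterior-mean identity (16) can be illustrated with a hypothetical two-atom data distribution µ_0 = ½δ₋₁ + ½δ₊₁ (formally a limit of densities; this example is ours, not the paper's). Formula (14) then reduces to a weighted average of the atoms, and a short calculation gives E[x | x̃] = tanh(x̃/t):

```python
import numpy as np

# Sketch (hypothetical example): formula (14) for the two-point data
# distribution mu_0 = 0.5*delta_{-1} + 0.5*delta_{+1} with Gaussian noise
# of variance t. The weights are nu_t(x_tilde -+ 1), and the resulting
# posterior mean simplifies to tanh(x_tilde / t).

t = 0.5

def g_star(x):
    w_plus = np.exp(-(x - 1.0)**2 / (2 * t))    # weight of atom +1
    w_minus = np.exp(-(x + 1.0)**2 / (2 * t))   # weight of atom -1
    return (w_plus - w_minus) / (w_plus + w_minus)

xs = np.linspace(-3, 3, 101)
err = np.max(np.abs(g_star(xs) - np.tanh(xs / t)))
print(err)   # agreement up to floating-point rounding
```

Note how the estimator pulls each corrupted point toward the nearer atom, exactly the "denoising" behavior described above.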
3.4. Examples: Gaussian DAE
When the noise distribution is Gaussian with mean 0 and covariance tI, i.e.,

    ν_t(ε) = (2πt)^{−m/2} exp(−|ε|²/2t),

the transport map is calculated as follows.
Theorem 5 The transport map g*_t of the Gaussian DAE is given by

    g*_t(x̃) = x̃ + t ∇ log[ν_t ∗ µ_0](x̃).   (17)

Proof The proof is straightforward by using Stein's identity,

    −t ∇ν_t(ε) = ε ν_t(ε),

which is known to hold only for Gaussians:

    g*_t(x̃) = x̃ − (1 / (ν_t ∗ µ_0)(x̃)) ∫_{R^m} ε ν_t(ε) µ_0(x̃ − ε) dε
             = x̃ + (1 / (ν_t ∗ µ_0)(x̃)) ∫_{R^m} t ∇ν_t(ε) µ_0(x̃ − ε) dε
             = x̃ + t ∇[ν_t ∗ µ_0](x̃) / (ν_t ∗ µ_0)(x̃)
             = x̃ + t ∇ log[ν_t ∗ µ_0](x̃).   □
Theorem 6 At the initial moment t → 0, the pushforward µ_t of the Gaussian DAE satisfies the backward heat equation

    ∂_t µ_t|_{t=0}(x) = −Δµ_0(x),   x ∈ R^m,   (18)

where Δ denotes the Laplacian.
Proof The initial velocity vector is given by the Fisher score

    ∂_t g*_t|_{t=0}(x) = lim_{t→0} (g*_t(x) − x) / t = ∇ log µ_0(x).   (19)

Hence, by substituting the score (19) in the continuity equation (9), we have

    ∂_t µ_t|_{t=0}(x) = −∇ · [µ_0(x) ∇ log µ_0(x)] = −∇ · [∇µ_0(x)] = −Δµ_0(x).   □
The backward heat equation (BHE) rarely appears in nature. The present result is, of course, not an error: as mentioned in Section 3.3, the DAE solves an estimation problem, and therefore, in the sense of the mean, the DAE behaves as a time reversal. We remark that, as shown in Figure 4, a DAE trained with a real NN on a finite data set does not converge to a perfect time reversal of a diffusion process.
[Figure 4: three 1-dimensional density plots over x ∈ [−3, 3], at t = 0, 0.5, and 1.0.]
Figure 4: The shallow Gaussian DAE, one of the most fundamental versions of DNNs, transports mass, from the left to the right, to decrease the Shannon entropy of the data. The x-axis represents the 1-dimensional input/output space, the t-axis represents the variance of the Gaussian noise, and t is the transport time. The leftmost distribution depicts the original data distribution µ_0 = N(0, 1). The middle and rightmost distributions depict the pushforwards µ_t = g_t♯µ_0 associated with the transportation by two DAEs with noise variance t = 0.5 and t = 1.0, respectively. As t increases, the variance of the pushforward decreases.
4. Deep DAEs
We introduce the composition g_L ∘ ⋯ ∘ g_0 of DAEs g_ℓ : R^m → R^m and its continuum limit: the continuous DAE φ_t : R^m → R^m. We can understand the composition of DAEs as the Euler scheme, or the broken line approximation, of a continuous DAE.
For the sake of simplicity, we assume that the hidden unit number of NNs is infinite, and that the size of the data set is infinite.
4.1. Composition of DAEs
We write 0 = t_0 < t_1 < ⋯ < t_{L+1} = t. We assume that the input vector x_0 ∈ R^m is subject to a data distribution µ_0. Let g_0 : R^m → R^m be a DAE that is trained on µ_0 with noise variance t_1 − t_0. Then, let x_1 := g_0(x_0), which is a random vector in R^m that is subject to the pushforward µ_1 := g_0♯µ_0. We train another DAE g_1 : R^m → R^m on µ_1 with noise variance t_2 − t_1. By repeating the procedure, we obtain x_ℓ := g_{ℓ−1}(x_{ℓ−1}), which is subject to µ_ℓ := g_{(ℓ−1)}♯µ_{ℓ−1}.
For the sake of generality, we assume that each component DAE is given by

    g_ℓ(x) = x + (t_{ℓ+1} − t_ℓ) ∇V_{t_ℓ}(x),   ℓ = 0, …, L,   (20)
[Figure 5 diagram: a chain M → M → M → M → M → M of the five maps g_{ℓ=0}, …, g_{ℓ=4}.]
Figure 5: Composition of DAEs g_{t_{0:4}} : M → M, or the composite of five shallow DAEs M → M, where M = R³.
where V_{t_ℓ} denotes a certain potential function. For example, the Gaussian DAE satisfies the requirement because V_{t_ℓ} = log[ν_{t_ℓ} ∗ µ_{t_ℓ}].
We abbreviate the composition of DAEs by

    g_{t_{0:L}}(x) := g_L ∘ ⋯ ∘ g_0(x).   (21)

By definition, the "velocity" of a composition of DAEs coincides with the vector field:

    (g_{t_{0:ℓ}}(x) − g_{t_{0:(ℓ−1)}}(x)) / (t_{ℓ+1} − t_ℓ) = ∇V_{t_ℓ}(x).   (22)
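The Euler-scheme picture can be sketched numerically. Assuming the closed-form univariate normal formulas of Section 5.1.1 (an assumption of this sketch, not part of the general construction), each layer applies the shallow Gaussian DAE of the current pushforward N(m0, s2) with small noise variance dt, and composing L = t/dt layers approximates the continuous DAE:

```python
# Sketch (not the paper's code): composition of DAEs as an Euler scheme.
# Each layer is the shallow Gaussian DAE for the current pushforward
# N(m0, s2) with small noise variance dt; the continuous-DAE limit has
# pushforward variance s0^2 - 2t (univariate normal case, Section 5.1.1).

m0, s0_sq, t_total, L = 0.0, 4.0, 1.0, 1000
dt = t_total / L

x = 3.0          # one sample trajectory
s2 = s0_sq       # variance of the current pushforward
for _ in range(L):
    x = (s2 / (s2 + dt)) * x + (dt / (s2 + dt)) * m0   # layer g_l
    s2 = s2 / (1 + dt / s2)**2                          # variance update

print(x, s2)     # x near 3/sqrt(2), s2 near s0^2 - 2t = 2
```

As dt → 0 the broken line converges to the integral curve of the continuous DAE.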
4.2. Continuous DAE
We fix the total time t, take the limit L → ∞ of the layer number L, and introduce the continuous DAE as the limit of the "infinite composition of DAEs" lim_{L→∞} g_{t_{0:L}}.

Definition 4 We call the solution operator, or flow, φ_t : R^m → R^m of the following dynamical system the continuous DAE associated with the vector field ∇V_t:

    (d/dt) x(t) = ∇V_t(x(t)),   t ≥ 0.   (23)

Proof According to the Cauchy-Lipschitz theorem, or the Picard-Lindelöf theorem, when the vector field ∇V_t is continuous in t and Lipschitz in x, the limit lim_{L→∞} g_{0:L} converges to a continuous DAE (23), because the trajectory t ↦ g_{0:L}(x_0) corresponds to a broken line approximation of the integral curve t ↦ φ_t(x).
The following properties are immediate from Fact 2 and Fact 3. Let φ_t : R^m → R^m be the continuous DAE associated with the vector field ∇V_t. Given the data distribution µ_0, the pushforward µ_t := φ_t♯µ_0 evolves according to the continuity equation

    ∂_t µ_t(x) = −∇ · [µ_t(x) ∇V_t(x)],   t ≥ 0,   (24)
and the Wasserstein gradient flow

    (d/dt) µ_t = −grad F[µ_t],   t ≥ 0,   (25)

where F is given by (11).
4.3. Example: Gaussian DAE
We consider a continuous Gaussian DAE φ_t trained on µ_0 ∈ P₂(R^m). Specifically, it satisfies

    (d/dt) x(t) = ∇ log[µ_t(x(t))],   t ≥ 0,   (26)

with µ_t := φ_t♯µ_0.

Theorem 7 The pushforward µ_t := φ_t♯µ_0 of the continuous Gaussian DAE φ_t is the solution to the initial value problem of the backward heat equation (BHE)

    ∂_t µ_t(x) = −Δµ_t(x),   µ_{t=0}(x) = µ_0(x).   (27)
The proof is immediate from Theorem 6.
As mentioned after Theorem 6, the BHE appears because the DAE solves an estimation problem. We remark that the BHE is equivalent to the following final value problem for the ordinary heat equation:

    ∂_t u_t(x) = Δu_t(x),   u_{t=T}(x) = µ_0(x) for some T,

where u_t denotes a probability measure on R^m. Indeed, µ_t(x) = u_{T−t}(x) solves (27). In other words, the backward heat equation describes the time reversal of an ordinary diffusion process.
According to Wasserstein geometry, the ordinary heat equation corresponds to a Wasserstein gradient flow that increases the Shannon entropy functional H[µ] := −∫ µ(x) log µ(x) dx (Villani, 2009, Th. 23.19). Consequently, we can conclude that the continuous Gaussian DAE is a transport map that decreases the Shannon entropy of the data distribution.
Theorem 8 The pushforward µ_t := φ_t♯µ_0 evolves according to the Wasserstein gradient flow with respect to the Shannon entropy:

    (d/dt) µ_t = −grad H[µ_t],   µ_{t=0} = µ_0.   (28)

Proof When F = H, then V_t = log µ_t; thus,

    grad H[µ_t] = ∇ · [µ_t ∇ log µ_t] = ∇ · [∇µ_t] = Δµ_t,

which means that the continuity equation reduces to the backward heat equation.   □
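For a concrete instance of the entropy decrease, consider µ_0 = N(m_0, σ_0²): by the univariate normal example of Section 5.1, the pushforward of the continuous Gaussian DAE is N(m_0, σ_0² − 2t), so the entropy H(t) = ½ log(2πe(σ_0² − 2t)) strictly decreases along the flow. A short sketch, assuming that closed-form pushforward:

```python
import numpy as np

# Sketch: Shannon entropy of the pushforward N(m0, s0^2 - 2t) of the
# continuous Gaussian DAE, evaluated on a grid of transport times.
# The entropy of N(m, v) is 0.5 * log(2*pi*e*v).

s0_sq = 4.0
ts = np.linspace(0.0, 1.5, 7)          # stay below the critical time s0^2/2
H = 0.5 * np.log(2 * np.pi * np.e * (s0_sq - 2 * ts))
print(H)                                # monotonically decreasing
```

The entropy diverges to −∞ as t approaches the critical time σ_0²/2, where the pushforward degenerates to a point mass.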
4.4. Example: Rényi Entropy
Similarly, when F is the Rényi entropy

    H_α[µ] := ∫_{R^m} (µ^α(x) − µ(x)) / (α − 1) dx,

then grad H_α[µ_t] = Δµ_t^α (see Ex. 15.6 in Villani, 2009, for the proof), and thus the continuity equation reduces to the backward porous medium equation

    ∂_t µ_t(x) = −Δµ_t^α(x).   (29)
5. Further Investigations on Shallow and Deep DAEs through Examples
5.1. Analytic Examples
We list analytic examples of shallow and continuous DAEs (see Appendix D for further details, including proofs). In all the settings, the continuous DAEs attain a singular measure at some finite t > 0, with various singular supports that reflect the initial data distribution µ_0, while the shallow DAEs accept any t > 0 and degenerate to a point mass as t → ∞.
5.1.1. Univariate Normal Distribution
When the data distribution is a univariate normal distribution N(m_0, σ_0²), the transport map and pushforward for the shallow DAE are given by

    g_t(x) = (σ_0² / (σ_0² + t)) x + (t / (σ_0² + t)) m_0,   (30)
    µ_t = N(m_0, σ_0² (1 + t/σ_0²)^{−2}),   (31)

and those of the continuous DAE are given by

    g_t(x) = √(1 − 2t/σ_0²) (x − m_0) + m_0,   (32)
    µ_t = N(m_0, σ_0² − 2t).   (33)
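A small numeric comparison of (31) and (33) (our sketch, not the paper's code) shows that both variances behave like σ_0² − 2t for small t, while only the continuous DAE degenerates in finite time:

```python
# Sketch comparing the pushforward variances (31) and (33) for
# mu_0 = N(0, s0^2): to first order in t both equal s0^2 - 2t, but the
# shallow variance stays positive for every finite t, whereas the
# continuous DAE reaches a point mass at the finite time t = s0^2 / 2.

s0_sq = 1.0
rows = []
for t in [0.01, 0.1, 0.4]:
    shallow = s0_sq * (1 + t / s0_sq)**-2    # (31)
    conti = s0_sq - 2 * t                    # (33)
    rows.append((t, shallow, conti))
    print(t, shallow, conti)
```

This mirrors the trajectory plots below: the shallow DAE slows down and never attains the singular limit, while the continuous DAE does so at finite time.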
5.1.2. Multivariate Normal Distribution
When the data distribution is a multivariate normal distribution N(m_0, Σ_0), the transport map and pushforward for the shallow DAE are given by

    g_t(x) = (I + tΣ_0^{−1})^{−1} x + (I + t^{−1}Σ_0)^{−1} m_0,   (34)
    µ_t = N(m_0, Σ_0 (I + tΣ_0^{−1})^{−2}),   (35)

and those of the continuous DAE are given by

    g_t(x) = √(I − 2tΣ_0^{−1}) (x − m_0) + m_0,   (36)
    µ_t = N(m_0, Σ_0 − 2tI).   (37)
5.1.3. Mixture of Multivariate Normal Distributions
When the data distribution is a mixture of multivariate normal distributions Σ_{k=1}^K w_k N(m_k, Σ_k), with the assumption that it is well separated, the transport map and pushforward for the shallow DAE are given by

    g_t(x) = Σ_{k=1}^K γ_{kt}(x) { (I + tΣ_k^{−1})^{−1} x + (I + t^{−1}Σ_k)^{−1} m_k },   (38)
    µ_t ≈ Σ_{k=1}^K w_k N(m_k, Σ_k (I + tΣ_k^{−1})^{−2}),   (39)

with responsibility function

    γ_{kt}(x) := w_k N(x; m_k, Σ_k + tI) / Σ_{k=1}^K w_k N(x; m_k, Σ_k + tI),   (40)

and those of the continuous DAE are given by

    g_t(x) ≈ √(I − 2tΣ_k^{−1}) (x − m_k) + m_k,   (41)
    µ_t = Σ_{k=1}^K w_k N(m_k, Σ_k − 2tI),   (42)

with responsibility function

    γ_{kt}(x) := w_k N(x; m_k, Σ_k − 2tI) / Σ_{k=1}^K w_k N(x; m_k, Σ_k − 2tI).   (43)

Here, we say that the mixture Σ_{k=1}^K w_k N(m_k, Σ_k) is well separated when, for every cluster center m_k, there exists a neighborhood Ω_k of m_k such that N(Ω_k; m_k, Σ_k) ≈ 1 and γ_{kt} ≈ 1_{Ω_k}.
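The mixture formulas (38) and (40) can be sketched in code. The following is an illustrative implementation under simplifying assumptions (isotropic covariances Σ_k = var_k · I; the function name is ours, not the paper's):

```python
import numpy as np

# Sketch of the shallow-DAE transport map (38) for a Gaussian mixture
# with the responsibility function (40). Assumption: isotropic
# covariances Sigma_k = var_k * I, so the matrix inverses are scalars.

def shallow_dae_gmm(x, w, means, var, t):
    """Apply (38) to points x (shape (n, d)) for K isotropic Gaussians."""
    n, d = x.shape
    K = len(w)
    # responsibilities gamma_{kt}(x) from (40): N(x; m_k, (var_k + t) I)
    log_r = np.empty((n, K))
    for k in range(K):
        diff = x - means[k]
        log_r[:, k] = (np.log(w[k])
                       - 0.5 * d * np.log(2 * np.pi * (var[k] + t))
                       - 0.5 * (diff**2).sum(axis=1) / (var[k] + t))
    log_r -= log_r.max(axis=1, keepdims=True)       # stable softmax
    gamma = np.exp(log_r)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # cluster-wise affine maps: (I + t/var_k)^{-1} x + (I + var_k/t)^{-1} m_k
    out = np.zeros_like(x)
    for k in range(K):
        shrink = 1.0 / (1.0 + t / var[k])
        out += gamma[:, [k]] * (shrink * x + (1 - shrink) * means[k])
    return out

w = np.array([0.5, 0.5])
means = np.array([[-1.0, 0.0], [1.0, 0.0]])
var = np.array([0.04, 0.04])
x = np.array([[-1.2, 0.1], [0.9, -0.2]])
y = shallow_dae_gmm(x, w, means, var, t=0.1)
print(y)    # each point is pulled toward its cluster center
```

For a well-separated mixture the responsibilities are nearly one-hot, so each point simply shrinks toward its own cluster center, as described above.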
5.2. Numerical Example of Trajectories
We employ 2-dimensional examples in order to visualize the difference in the vector fields between the shallow and deep DAEs. In the examples below, every trajectory is drawn into an attractor; however, the shapes of the attractors and the speeds of the trajectories differ significantly between the shallow and deep cases.
5.2.1. Bivariate Normal Distribution
Figure 6 compares the trajectories of four DAEs trained on the common data distribution

    µ_0 = N([0, 0], diag[2, 1]).   (44)

The transport maps for computing the trajectories are given by (34) for the shallow DAE and the composition of DAEs, and by (36) for the continuous DAE. Here, we applied (34) multiple times for the composition of DAEs.
The continuous DAE converges to an attractor lying on the x-axis at t = 1/2. By contrast, the shallow DAE slows down as t → ∞ and never attains the singularity in finite time. As L tends to infinity, g_{t_{0:L}} plots a trajectory similar to that of the continuous DAE φ_t; the curvature of the trajectory changes according to Δt.
5.2.2. Mixture of Bivariate Normal Distributions
Figures 7, 8, and 9 compare the trajectories of four DAEs trained on the three common data distributions

    µ_0 = 0.5 N([−1, 0], diag[1, 1]) + 0.5 N([1, 0], diag[1, 1]),   (45)
    µ_0 = 0.2 N([−1, 0], diag[1, 1]) + 0.8 N([1, 0], diag[1, 1]),   (46)
    µ_0 = 0.2 N([−1, 0], diag[1, 1]) + 0.8 N([1, 0], diag[2, 1]),   (47)

respectively. The transport maps for computing the trajectories are given by (38) for the shallow DAE and the composition of DAEs. For the continuous DAE, we compute the trajectories by numerically solving the defining ODE of the continuous Gaussian DAE: ẋ = ∇ log µ_t(x).
In every case, the continuous DAE converges to an attractor at some finite t > 0, but the shape of the attractors and the basins of attraction change according to the initial data distribution. The shallow DAE converges to the origin as t → ∞, and as L tends to infinity, the composition of DAEs g_{t_{0:L}} plots a curve similar to that of the continuous DAE. In particular, in Figure 8, some trajectories of the continuous DAE intersect, which implies that the velocity vector field v_t is time-dependent.
[Figure 6 panels: Conti DAE φ_t; Shallow DAE g_t; Comp. DAE g_{t_{0:L}} (Δt = 0.05); Comp. DAE g_{t_{0:L}} (Δt = 0.5). Each panel plots trajectories over (x, y) ∈ [−3, 3]².]
Figure 6: Trajectories of DAEs trained on the common data distribution (44) (µ_0 = N([0, 0], diag[2, 1])). The gray lines start from the regular grid. The colored lines start from the samples drawn from µ_0. The midpoints are plotted every Δt = 0.2. Every line is drawn into an attractor.
[Figure 7 panels: Conti DAE φ_t; Shallow DAE g_t; Comp. DAE g_{t_{0:L}} (Δt = 0.05); Comp. DAE g_{t_{0:L}} (Δt = 0.5). Each panel plots trajectories over (x, y) ∈ [−3, 3]².]
Figure 7: Trajectories of DAEs trained on the common data distribution (45) (a GMM with uniform weight and covariance). The gray lines start from the regular grid. The colored lines start from the samples drawn from µ_0. Every line is drawn into an attractor.
[Figure 8 panels: Conti. DAE φ_t; Shallow DAE g_t; Comp. DAE g_{t_{0:L}} (Δt = 0.05); Comp. DAE g_{t_{0:L}} (Δt = 0.5). Each panel plots trajectories over (x, y) ∈ [−3, 3]².]
Figure 8: Trajectories of DAEs trained on the common data distribution (46) (a GMM with non-uniform weight and uniform covariance). The gray lines start from the regular grid. The colored lines start from the samples drawn from µ_0. Every line is drawn into an attractor.
[Figure 9 panels: Conti. DAE φ_t; Shallow DAE g_t; Comp. DAEs g_{t_{0:L}} (Δt = 0.05); Comp. DAEs g_{t_{0:L}} (Δt = 0.5). Each panel plots trajectories over (x, y) ∈ [−3, 3]².]
Figure 9: Trajectories of DAEs trained on the common data distribution (47) (a GMM with non-uniform weight and covariance). The gray lines start from the regular grid. The colored lines start from the samples drawn from µ_0. Every line is drawn into an attractor.
5.3. Numerical Example of Trajectories in Wasserstein Space
We consider the space Q of bivariate Gaussians:

    Q := { N([0, 0], diag[σ_1², σ_2²]) | σ_1, σ_2 > 0 }.   (48)

Obviously, Q is a 2-dimensional subspace of the L2-Wasserstein space, and it is closed under the actions of the continuous DAE and the shallow DAE because the pushforwards are given by (37) and (35), respectively.
We employ (σ_1, σ_2) as the coordinates of Q. This is reasonable because, in these coordinates, the L2-Wasserstein distance W_2(µ, ν) between two points µ = (σ_1, σ_2) and ν = (τ_1, τ_2) is simply given by the Euclidean distance W_2(µ, ν) = √((σ_1 − τ_1)² + (σ_2 − τ_2)²) (see Takatsu, 2011, for the proof). The Shannon entropy is given by

    H(σ_1, σ_2) = (1/2) log |diag[σ_1², σ_2²]| + const. = log σ_1 + log σ_2 + const.   (49)

Figure 10 compares the trajectories of the pushforwards by DAEs in Q. On the left, we calculated the theoretical trajectories according to the analytic formulas (37) and (35). On the right, we trained real NNs as the composition of DAEs according to the training procedure described in Section 4.1. Even though we have always assumed an infinite number of hidden units and an infinite data set, the results suggest that our calculus is a good approximation to finite settings.
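The coordinate claim can be cross-checked against the general closed form W_2²(N(0, S), N(0, T)) = tr(S + T − 2(S^{1/2} T S^{1/2})^{1/2}), which for commuting diagonal covariances reduces to Σ_i (σ_i − τ_i)². A sketch (not the paper's code; the elementwise square root below is a valid matrix square root only because the matrices are diagonal):

```python
import numpy as np

# Sketch: in the coordinates (sigma1, sigma2) of Q, the L2-Wasserstein
# distance between centered diagonal Gaussians is the Euclidean
# distance, cross-checked against the closed-form Gaussian W2.

mu = np.array([1.0, 2.0])     # (sigma1, sigma2)
nu = np.array([3.0, 0.5])     # (tau1, tau2)

w2_coord = np.linalg.norm(mu - nu)     # Euclidean distance in Q

S, T = np.diag(mu**2), np.diag(nu**2)
cross = np.sqrt(np.sqrt(S) @ T @ np.sqrt(S))   # diagonal: elementwise sqrt ok
w2_closed = np.sqrt(np.trace(S + T - 2 * cross))

print(w2_coord, w2_closed)    # both 2.5
```

For non-commuting covariances the elementwise shortcut fails and a proper matrix square root is needed; the diagonal subspace Q avoids that complication.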
[Figure 10: two panels plotting trajectories in the (σ_1, σ_2) plane, σ_1, σ_2 ∈ [0, 5].]
Figure 10: Trajectories of pushforward measures in the space Q of bivariate Gaussians N([0, 0], diag[σ_1², σ_2²]). In both panels, the blue lines represent the Wasserstein gradient flow with respect to the Shannon entropy. The continuous Gaussian DAE t ↦ φ_t♯µ_0 always coincides with the blue lines. In the left panel, the dashed green lines represent theoretical trajectories of the shallow DAE t ↦ g_t♯µ_0, and the solid green line represents a theoretical trajectory of the composition of DAEs t ↦ g_{t_{0:L}}♯µ_0. Both green lines gradually leave the gradient flow. In the right panel, the solid green lines represent the trajectories of the composition of DAEs calculated by training real NNs (10 trials). In particular, in the early stage, the trajectories are parallel to the gradient flow.
6. Equivalence between Stacked DAE and Compositions of DAEs
As an application of transport analysis, we shed light on the equivalence of the stacked DAE (SDAE) and the composition of DAEs (CDAE), provided that the definition of a DAE is generalized to the L-DAE defined below. In an SDAE, we apply the DAE to the feature vectors obtained from the hidden layer of an NN to obtain higher-order feature vectors. Therefore, the feature vectors obtained from the SDAE and the CDAE differ from each other. Nevertheless, we can prove that the trajectories generated by the SDAE and the CDAE are topologically conjugate, which means that there exists a homeomorphism between the trajectories. Moreover, we can transform the trajectory of an SDAE into that of a CDAE by using a linear map, which is obtained from the decoder of the SDAE. Thus, we can synthesize the feature vectors of the SDAE by using CDAEs.
6.1. Definitions
To begin with, we introduce a generalized version of the shallow DAE.
Definition 5 (L-DAE) Let L be an elliptic operator on a domain Ω in R^m, µ be a probability density on Ω, and D be a positive definite matrix. The L-DAE with diffusion coefficient D and initial data µ is defined by

    id + tD ∇ log e^{tL}µ,   t > 0.   (50)

Here, e^{tL} is the semigroup generated by the elliptic operator L. Specifically, let µ_t := e^{tL}µ; then, µ_t satisfies the parabolic equation ∂_t µ_t = Lµ_t. The original Gaussian DAE corresponds to the special case where D ≡ I and L = Δ.
By dae, we denote a DAE realized by a shallow NN (Figure 11). Specifically,

    dae(x) = Σ_{j=1}^p c_j σ(a_j · x − b_j).   (51)

By enc and dec, we denote the encoder and decoder of dae, respectively. Specifically,

    enc_j(x) = σ(a_j · x − b_j),   j = 1, …, p,   (52)
    dec(z) = Σ_{j=1}^p c_j z_j,   (53)
[Figure 11 diagram: M → H → M via enc and dec, with dae = dec ∘ enc.]
Figure 11: enc and dec correspond to the hidden layer and output layer, respectively.
where z_j denotes the j-th element of z = enc(x). Obviously, dae = dec ∘ enc. For the sake of simplicity, even though we introduced a finite number p of hidden units, we assume that p is large, and thus dae approximately equals an L-DAE for some L.
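The decomposition dae = dec ∘ enc of (51)-(53) can be sketched directly; the weights below are random placeholders, not trained DAE parameters:

```python
import numpy as np

# Sketch of the decomposition dae = dec o enc in (51)-(53): a shallow
# network with p hidden units, split into its hidden-layer encoder and
# linear output-layer decoder. Weights are random placeholders.

rng = np.random.default_rng(0)
m, p = 3, 50
a = rng.standard_normal((p, m))        # hidden weights a_j
b = rng.standard_normal(p)             # hidden biases  b_j
c = rng.standard_normal((p, m)) / p    # output weights c_j (vector-valued)

def enc(x):                            # (52): z_j = sigma(a_j . x - b_j)
    return np.tanh(a @ x - b)

def dec(z):                            # (53): sum_j c_j z_j
    return c.T @ z

def dae(x):                            # (51): dae = dec o enc
    return dec(enc(x))

x = np.array([0.5, -1.0, 2.0])
print(dae(x))                          # equals dec(enc(x)) by construction
```

The stacking procedure below reuses exactly this split: enc produces the feature vectors on which the next DAE is trained, and dec maps them back.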
6.2. Training Procedure of Stacked DAE (SDAE)
Let M := R^m be the space of input vectors with probability density µ, and let dae : M → M be a shallow NN with p hidden units. We assume that dae is trained as the Gaussian DAE with µ, and it thus approximates the DAE id + t∇ log[e^{tΔ}µ]. Let H := R^p. Then, the encoder and decoder of dae are the maps enc : M → H and dec : H → M, respectively.
In the SDAE, we apply the DAE to z. Specifically, let µ̃ be the density of the hidden feature vectors z = enc(x), and let d̃ae : H → H be a shallow NN with p̃ hidden units,

    d̃ae(z) := Σ_{j̃=1}^{p̃} c̃_{j̃} σ(ã_{j̃} · z − b̃_{j̃}).

We train d̃ae as the Gaussian DAE with µ̃, where the network is decomposed as d̃ae = d̃ec ∘ ẽnc with ẽnc : H → H̃ and d̃ec : H̃ → H, and we obtain the feature vectors z̃ := ẽnc(z) ∈ H̃ = R^{p̃}. By iterating the stacking procedure, we can obtain more abstract feature vectors (Figure 12).
[Figure 12 diagram: H → H̃ → H via ẽnc and d̃ec, stacked on M → H → M via enc and dec.]
Figure 12: The (feature map of the) SDAE ẽnc ∘ enc is built on the hidden layer.
Technically speaking, µ̃ is (the density of) the pushforward enc♯µ, and its support is contained in the image M̃ := enc(M). In general, we assume that dim M̃ (= dim M) ≤ dim H; thus, the support of µ̃ is singular (i.e., the density vanishes outside M̃) (see Fact 1 for further details).
6.3. Topological Conjugacy
The transport map of the feature vector ẽnc ∘ enc : M → H → H̃ is somewhat unclear. According to Theorems 9 and 10, the transport map of ẽnc ∘ enc can be transformed, or projected, to the ground space M by applying dec ∘ d̃ec (Figure 13). Specifically, there exists an L-DAE dae′ : M → M such that

    dec ∘ d̃ec ∘ ẽnc ∘ enc = dae′ ∘ dae.   (54)
[Figure 13 diagram: M → H → H̃ → H → M via enc, ẽnc, d̃ec, and dec, with the bottom arrow ∃dae′ closing the square.]
Figure 13: By reusing dec, we can transform the SDAE ẽnc ∘ enc into a CDAE dae′ ∘ dae.
Theorem 9 Let H and H̃ be vector spaces with dim H ≥ dim H̃, let M_0 be an m-dimensional smooth Riemannian manifold embedded in H, and let µ_0 be a C² probability density on M_0. Let f : H → H be an L_t-DAE:

    f := id_H + tD ∇ log e^{tL_t}µ_0,

with diffusion coefficient D and time-dependent elliptic operator L_t on H, where ∇ is the gradient operator in H.
Let T : H → H̃ be a linear map. If T|_M is injective, then there exists an L̃_t-DAE f̃ : H̃ → H̃ with diffusion coefficient D̃ such that

    T ∘ f|_M = f̃ ∘ T|_M.   (55)
In other words, the following diagram commutes, where M_1 := f(M_0) and µ_1 := f♯µ_0:

    [commutative diagram: f maps (M_0, µ_0) ⊂ H to (M_1, µ_1) ⊂ H; T maps (M_0, µ_0) to (M̃_0, µ̃_0) ⊂ H̃ and (M_1, µ_1) to (M̃_1, µ̃_1); ∃f̃ maps (M̃_0, µ̃_0) to (M̃_1, µ̃_1).]

See Appendix C for the proof. The statement is general in that the choice of the linear map T is independent of the DAEs, as long as it is injective.
We note that the trajectory of the equivalent DAE f̃ may be complicated, because the "equivalence" we mean here is simply topological conjugacy. Actually, as the proof suggests, D̃ and L̃_t contain the non-linearity of the activation functions via the pseudo-inverse T† of T. Nevertheless, f̃ may not be very complicated, because it is simply a linear projection of the high-dimensional trajectory of the L_t-DAE. According to Theorem 6, a Gaussian DAE solves the backward heat equation (at least when t → 0); hence, its projection to a low-dimensional space should also solve a backward heat equation in that space.
6.4. Equivalence between SDAE and CDAE
To clarify the statement, we prepare the notation. Figure 14 summarizes the symbols and procedures.
First, we rewrite the input vector as z^0 instead of x, the input space as H^0 = M^0_0 (= R^m) instead of M, and the density as µ^0_0 instead of µ. We iteratively train the ℓ-th NN dae^ℓ_ℓ : H^ℓ → H^ℓ on the data distribution µ^ℓ_ℓ, obtain the encoder enc^ℓ : H^ℓ → H^{ℓ+1} and decoder dec^ℓ : H^{ℓ+1} → H^ℓ, and update the feature z^{ℓ+1} := enc^ℓ(z^ℓ), the image M^{ℓ+1}_{ℓ+1} := enc^ℓ(M^ℓ_ℓ) ⊂ H^{ℓ+1}, and the distribution µ^{ℓ+1}_{ℓ+1} := (enc^ℓ)♯µ^ℓ_ℓ.
For simplicity, we abbreviate

    enc^{ℓ:n} := enc^n ∘ ⋯ ∘ enc^ℓ,
    dec^{n:ℓ} := dec^ℓ ∘ ⋯ ∘ dec^n.

In addition, we introduce the auxiliary objects

    M^n_{ℓ+1} := dec^{ℓ:n}(M^{ℓ+1}_{ℓ+1}),   n = 0, ⋯, ℓ,
    µ^n_{ℓ+1} := (dec^{ℓ:n})♯ µ^{ℓ+1}_{ℓ+1},   n = 0, ⋯, ℓ.

By construction, M^ℓ_n is an at most m-dimensional submanifold of H^ℓ, and the support of µ^ℓ_n is contained in M^ℓ_n.
Finally, we denote by dae^ℓ_n : M^ℓ_n → M^ℓ_{n+1} the map that is (not "trained by DAE" but) defined by

    dae^ℓ_n := (dec^{n:ℓ} ∘ enc^{0:n}) ∘ (dec^{(n−1):ℓ} ∘ enc^{0:(n−1)})^{−1} : M^ℓ_n → M^ℓ_{n+1}.

By Theorem 9, if dae^{ℓ+1}_n is an L^{ℓ+1}_n-DAE, then dae^ℓ_n exists and it is an L^ℓ_n-DAE.
Theorem 10 If every enc^ℓ|_{M^ℓ_ℓ} is a continuous injection and every dec^ℓ|_{M^{ℓ+1}_n} is an injection, then

    dec^{L:0} ∘ enc^{0:L} = dae^0_L ∘ ⋯ ∘ dae^0_0.   (56)
Proof By repeatedly applying the topological conjugacy from Theorem 9,

    dec^ℓ ∘ dae^{ℓ+1}_n = dae^ℓ_n ∘ dec^ℓ,

we have

    dec^{L:0} ∘ enc^{0:L}
      = dec^{(L−2):0} ∘ dec^{L−1} ∘ dae^L_L ∘ enc^{L−1} ∘ enc^{0:(L−2)}
      = dec^{(L−2):0} ∘ dae^{L−1}_L ∘ dec^{L−1} ∘ enc^{L−1} ∘ enc^{0:(L−2)}
      = dec^{(L−2):0} ∘ dae^{L−1}_L ∘ dae^{L−1}_{L−1} ∘ enc^{0:(L−2)}
      ⋯
      = dae^0_L ∘ dae^0_{L−1} ∘ ⋯ ∘ dae^0_0.   □
[Figure 14 diagram: spaces H^0 = M, H^1, H^2, …, H^L, H^{L+1} with objects (M^n_ℓ, µ^n_ℓ), encoders enc^ℓ, decoders dec^ℓ, trained DAEs dae^ℓ_ℓ, and projected DAEs dae^n_ℓ.]
Figure 14: By using decoders, an SDAE is transformed, or projected, into a CDAE. The leftmost arrows correspond to the SDAE enc^{0:L}, the rightmost arrows correspond to the decoders dec^{L:0}, and the bottom arrows correspond to the CDAE dae^0_L ∘ ⋯ ∘ dae^0_0.
6.5. Numerical Example
Figure 15 compares the transportation results of the 2-dimensional swissroll data by the DAEs. In both cases, the swissroll becomes thinner by the action of the transportation. We remark that testing the topological conjugacy by numerical experiments is difficult; here, we display Figure 15 to show typical trajectories of an SDAE and a CDAE.
In the left-hand side, we trained an SDAE enc^1 ∘ enc^0 by using real NNs. Specifically, we first trained a shallow DAE dae^0_0 on the swissroll data x_0. Second, writing dae^0_0 = dec^0 ∘ enc^0 and letting z^1 := enc^0(x_0), we trained a shallow DAE dae^1_1 on the feature vectors z^1. Then, writing dae^1_1 = dec^1 ∘ enc^1, we obtained x_1 := dae^0_0(x_0) and x_2 := dec^0 ∘ dec^1 ∘ enc^1 ∘ enc^0(x_0). The black points represent the input vectors x_0, and the red and blue points represent the first and second transportation results x_1 and x_2, respectively. In other words, the distributions of x_0, x_1, and x_2 correspond to µ^0_0, µ^0_1, and µ^0_2 in Figure 14, respectively.
In the right-hand side, we trained a CDAE dae^0_1 ∘ dae^0_0 by using real NNs. Specifically, we first trained a shallow DAE dae^0_0 on the swissroll data x_0. Second, writing x_1 := dae^0_0(x_0), we trained a shallow DAE dae^0_1 on the transported vectors x_1. Then, we obtained x_2 := dae^0_1(x_1) = dae^0_1 ∘ dae^0_0(x_0). The black points represent the input vectors x_0, and the red and blue points represent the first and second transportation results x_1 and x_2, respectively.
[Figure 15: scatter plots of the swissroll transportation (left: SDAE; right: CDAE).]