
Page 1:

The Frontier of Define-by-Run Deep Learning Frameworks

GTC 2019 @ San Jose. Mar. 20, 2019

Seiya Tokui, Preferred Networks, Inc.

S9380

Page 2:

Deep Learning Framework for fast iterative research/development

Page 3:

Define-by-Run frameworks


TensorFlow: eager execution enabled by default from 2.0

Page 4:

x = numpy.array(…)
h1 = layer1(x, W1)
h2 = layer2(h1, W2)
loss = loss_func(h2)

loss.backward()
W1.array -= lr * W1.grad
W2.array -= lr * W2.grad

Write forward prop as a plain Python script.

Variables hold how they were computed. Use it to compute the gradient.
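A concrete, runnable variant of the snippet above (a sketch; the layer sizes, activation, and loss below are illustrative assumptions, not from the slide):

import numpy as np
import chainer
import chainer.functions as F

W1 = chainer.Parameter(np.random.randn(784, 100).astype(np.float32) * 0.01)
W2 = chainer.Parameter(np.random.randn(100, 10).astype(np.float32) * 0.01)
lr = 0.01

x = np.random.rand(32, 784).astype(np.float32)           # dummy minibatch
t = np.random.randint(0, 10, size=32).astype(np.int32)   # dummy labels

h1 = F.relu(F.matmul(x, W1))           # forward prop is recorded on the fly
h2 = F.matmul(h1, W2)
loss = F.softmax_cross_entropy(h2, t)

loss.backward()                        # gradients flow back through the recorded graph
W1.array -= lr * W1.grad
W2.array -= lr * W2.grad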

Page 5:

Deep learning framework optimized for the Define-by-Run API design

Page 6:


✓ Model description

✓ Distributed training

✓ Serialization, export (sketched below)

……

Everything is optimized for Define-by-Run style programming
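For example, a define-by-run model can be saved and restored with the standard serializers (a minimal sketch; the file name and the tiny model are illustrative):

import chainer
import chainer.links as L

model = L.Linear(784, 10)
chainer.serializers.save_npz('model.npz', model)   # snapshot parameters to an NPZ file
chainer.serializers.load_npz('model.npz', model)   # restore them into the same structure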

Page 7:

import chainer
from chainer import initializers as I

class Linear(chainer.Link):

    def __init__(self, n_in, n_out):
        super().__init__()
        with self.init_scope():
            self.W = chainer.Parameter(I.HeNormal(), (n_in, n_out))
            self.b = chainer.Parameter(0, (n_out,))

    def forward(self, x):
        return x @ self.W + self.b

Tie parameters to the forward code using OOP.

Page 8:


import chainer.functions as F

class MLP(chainer.Chain):

    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.l1 = Linear(784, 200)
            self.l2 = Linear(200, 100)
            self.l3 = Linear(100, 10)

    def forward(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)

Object structure = composition of NN fragments
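Using the Chain above is equally plain Python (a sketch; the batch size and random input are illustrative):

import numpy as np

model = MLP()
x = np.random.rand(32, 784).astype(np.float32)   # a batch of 32 flattened images
y = model(x)                                     # forward prop builds the graph on the fly
print(y.shape)                                   # (32, 10)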

Page 9:


for batch in iterator:           # fetch the next minibatch
    x, t = converter(batch)      # concat, transfer to the device
    loss = loss_fun(x, t)        # forward prop
    loss.backward()              # backprop
    optimizer.update()           # update parameters
    model.cleargrads()           # cleanup gradients

Every part is plain, customizable Python code
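Wiring those pieces together with standard Chainer components might look like this (a sketch; MNIST as the dataset, SGD, and the batch size are illustrative assumptions):

import chainer
from chainer import iterators, optimizers
import chainer.functions as F

train, _ = chainer.datasets.get_mnist()          # tuple dataset of (image, label)
model = MLP()
optimizer = optimizers.SGD(lr=0.01)
optimizer.setup(model)

iterator = iterators.SerialIterator(train, batch_size=128)
converter = chainer.dataset.concat_examples      # concatenates a minibatch into arrays

while iterator.epoch < 10:
    batch = next(iterator)
    x, t = converter(batch)
    loss = F.softmax_cross_entropy(model(x), t)
    loss.backward()
    optimizer.update()
    model.cleargrads()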

Page 10:


Fast GPU computation

Page 11:

Page 12:


import numpy as np

def logsumexp(x):
    x_max = x.max(axis=1, keepdims=True)
    x0 = x - x_max
    lse = np.log(np.exp(x0).sum(axis=1, keepdims=True))
    lse += x_max
    return lse

x = np.array([...], dtype=np.float32)
print(logsumexp(x))

Page 13:


import cupy as cp

def logsumexp(x):
    x_max = x.max(axis=1, keepdims=True)
    x0 = x - x_max
    lse = cp.log(cp.exp(x0).sum(axis=1, keepdims=True))
    lse += x_max
    return lse

x = cp.array([...], dtype=cp.float32)
print(logsumexp(x))

Page 14:


import cupy as cp, numpy as np

def logsumexp(x):
    x_max = x.max(axis=1, keepdims=True)
    x0 = x - x_max
    lse = np.log(np.exp(x0).sum(axis=1, keepdims=True))
    lse += x_max
    return lse

x = cp.array([...], dtype=np.float32)
print(logsumexp(x))
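A common device-agnostic idiom builds on this: pick NumPy or CuPy depending on where the input array lives (a sketch, not from the slide; cupy.get_array_module does the dispatch):

import numpy as np
import cupy as cp

def logsumexp(x):
    xp = cp.get_array_module(x)              # numpy or cupy, depending on x
    x_max = x.max(axis=1, keepdims=True)
    x0 = x - x_max
    return xp.log(xp.exp(x0).sum(axis=1, keepdims=True)) + x_max

print(logsumexp(np.random.rand(4, 5).astype(np.float32)))   # runs on the CPU
print(logsumexp(cp.random.rand(4, 5).astype(cp.float32)))   # runs on the GPU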

Page 15:


✓ cuDNN support (conv, pooling, LSTM, …)

✓ Easy custom kernels compiled at runtime (see the sketch below)

✓ FP16 support
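The "custom kernels compiled at runtime" point refers to kernels like the following, which CuPy compiles on first use (a minimal sketch; the kernel itself is an illustrative example):

import cupy as cp

squared_diff = cp.ElementwiseKernel(
    'float32 x, float32 y',        # input arguments
    'float32 z',                   # output argument
    'z = (x - y) * (x - y)',       # elementwise CUDA C expression
    'squared_diff')                # kernel name

a = cp.arange(6, dtype=cp.float32)
b = cp.full(6, 2.0, dtype=cp.float32)
print(squared_diff(a, b))          # compiled and cached on the first call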

Page 16:


[Figure: the training pipeline. load & make minibatch (DALI, multiprocessing) → forward/backward (float16 mode, TensorCore) → parameter update (distributed training)]

Page 17:


Mixed precision training

> TensorCore support: automatically available

> Techniques for mixed precision training (combined in the sketch below)
  optimizer.set_loss_scale(scale)
  optimizer.use_fp32_update()

> mixed16 mode (coming soon)
  CHAINER_DTYPE=mixed16
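Put together, enabling these techniques on an optimizer might look like this (a minimal sketch; it assumes a model whose parameters are already float16, and uses MomentumSGD and a loss scale of 128 as illustrative choices):

import chainer

model = MLP()                      # assumption: parameters created in float16
optimizer = chainer.optimizers.MomentumSGD(lr=0.1)
optimizer.setup(model)
optimizer.set_loss_scale(128)      # scale the loss so float16 gradients stay representable
optimizer.use_fp32_update()        # keep float32 master weights for the parameter update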

Page 18:


Distributed training

Page 19:


[Figure: data-parallel training. Process 0 (node 0, GPU 0), Process 1 (node 0, GPU 1), Process 2 (node 1, GPU 0), and Process 3 (node 1, GPU 1) each run Forward → Backward → Optimize; an ALL-REDUCE of gradients runs between Backward and Optimize.]

Page 20:


Data parallelism

comm = chainermn.create_communicator()
device = comm.intra_rank  # use this device
optimizer = chainermn.create_multi_node_optimizer(…, comm)

Scaled to V100x512 environment (https://arxiv.org/abs/1809.00778)
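A fuller data-parallel setup along these lines (a sketch run under mpiexec; the 'pure_nccl' communicator, MomentumSGD, and MNIST dataset are assumptions):

import chainer
import chainermn

comm = chainermn.create_communicator('pure_nccl')
device = comm.intra_rank                       # one GPU per process on each node
chainer.cuda.get_device_from_id(device).use()

model = MLP()
model.to_gpu(device)

optimizer = chainermn.create_multi_node_optimizer(
    chainer.optimizers.MomentumSGD(lr=0.1), comm)   # all-reduces gradients across processes
optimizer.setup(model)

if comm.rank == 0:
    train, _ = chainer.datasets.get_mnist()
else:
    train = None
train = chainermn.scatter_dataset(train, comm, shuffle=True)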

Page 21:


Model parallelism

> Each node computes a different part of the network (the model itself is parallelized)

> MPI communication primitives with backprop

# rank 0
phi = send(x, comm, rank=1)
h = recv(comm, rank=1,
         delegate_variable=phi)

# rank 1
x = recv(comm, rank=0)
h = f(x)
phi = send(h, comm, rank=0)

Page 22:

Page 23:


Model parallelism

> send returns a pseudo variable φ. It simulates the topology of the full computational graph

> Collective communication routines, e.g. bcast, scatter, allgather etc., are also available

# rank 0
phi = send(x, comm, rank=1)
h = recv(comm, rank=1,
         delegate_variable=phi)
loss(h).backward()

# rank 1
x = recv(comm, rank=0)
h = f(x)
phi = send(h, comm, rank=0)
phi.backward()

Page 24:


Domain-specific add-on packages

Page 25:


✓ Support for standard computer vision tasks: classification, object detection, semantic/instance segmentation

✓ Simple, unified interface: easy to use and compose, optimized for computer vision workloads (see the sketch below)

✓ Guaranteed reproduction: every method implemented is confirmed to reproduce the same performance as the original paper
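A minimal ChainerCV sketch of the unified interface described above (the detector choice, pretrained weights, and image path are illustrative assumptions):

from chainercv.links import SSD300
from chainercv.utils import read_image

model = SSD300(pretrained_model='voc0712')        # detector with pretrained weights
img = read_image('sample.jpg')                    # CHW float32 RGB array
bboxes, labels, scores = model.predict([img])     # same predict() style across models
print(bboxes[0], labels[0], scores[0])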

Page 26:


✓ Wide range of deep RL methods covered: DQN, Categorical DQN, IQN, DDPG, A3C, ACER, NSQ, PCL, PPO, TRPO

✓ Clean API and abstraction: easy to combine multiple orthogonal design choices, e.g. discrete/continuous actions, recurrent models, async training, ...

✓ Environment support: compatible with the OpenAI Gym interface

Page 27:


Chainer Chemistry, ChainerUI

Page 28:


What is needed for modern deep learning frameworks?

Page 29:

Speed

Faster trial-and-error, larger scale

Environment support

Quick adoption of new hardware/environment

Quick Deployment

Quick application of research outcome

Page 30:

ChainerX, included in Chainer v6 beta1

Page 31:

ChainerX = NumPy-like ndarray + autograd

• in C++ w/ a thin binding layer

= far less host-side overhead

• with pluggable device backends

= open to quickly add a new device support

• with pure C++ API

= available for Python-free native apps

Speed

Environment Support

Quick Deployment

Page 32:


[Figure: the ChainerX stack. The high level API (Chainer), used by existing Chainer code, sits on top of the ChainerX Python API (low-overhead computation written in Python), which wraps ChainerX with a C++ API (portable code with much less overhead in C++). Backends are pluggable: native backend, CUDA backend, custom backend, … NumPy and CuPy remain available alongside as array backends.]

Page 33:

ChainerX Python API: chainerx namespace

import chainerx as chx

x = chx.ones((2, 3),
             dtype=chx.float32,
             device='cuda:0')

y = (x + 1).require_grad()
z = chx.exp(y).sum()
z.backward()

> NumPy compatible API

> NN-specific functions: conv, batch_norm, …

> Device support

> require_grad() to make array differentiable
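Continuing the snippet above, the gradient is attached to the array that required grad (a sketch; device='native' is used here so it runs without a GPU):

import chainerx as chx

x = chx.ones((2, 3), dtype=chx.float32, device='native')
y = (x + 1).require_grad()
z = chx.exp(y).sum()
z.backward()
print(y.grad)        # dz/dy, same shape as y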

Page 34:

Chainer on ChainerX

arr = chx.ones((2, 3),
               dtype=chx.float32)

x = chainer.Variable(arr)

y = model(x)
y.backward()

> Wraps chx.ndarray with Variable

> FunctionNode falls back to NumPy/CuPy for the computation

> Uses the ChainerX (C++) computational graph, with lower overhead in backprop

Page 35:

ChainerX C++ API

chainerx::Array x = chainerx::ones(
    {2, 3},
    chainerx::Dtype::kFloat32,
    chainerx::GetDevice("cuda:0"));

chainerx::Array y = (x + 1).RequireGrad();
chainerx::Array z = chainerx::Exp(y).Sum();
chainerx::Backward(z);

> Has an almost one-to-one mapping to the Python API

> Runs without a CPython environment

Page 36:

Host logic overhead

Framework / API        Time per iteration (fwd+bwd+update, msec)
Chainer on NumPy       14.48
Chainer on ChainerX     7.54
ChainerX Python         1.88
PyTorch                 2.45

Page 37:

ChainerX: Roadmap


v6 (May 2019): ChainerX basic ops, integration into Chainer
v7 (Nov. 2019): wide coverage of ops, ready for most users, C++ API made more accessible
Future (2020+): easier deploy, wider coverage of "compiled models"

Page 38:

Chainer Compiler (https://github.com/pfnet-research/chainer-compiler)

Python/Chainer code is turned into ONNX + ChainerX, either by tracing (ONNX-Chainer) or by translation (Chainer to ONNX). The result is executed by a VM with ChainerX, or converted to vendor-specific graph formats or a native binary. Features: graph-based optimization, graph-based autodiff, dynamic shape, control flows.
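A sketch of the tracing entry point (ONNX-Chainer); the model, dummy input, and file name are assumptions:

import numpy as np
import onnx_chainer

model = MLP()
x = np.zeros((1, 784), dtype=np.float32)          # dummy input used to trace the graph
onnx_chainer.export(model, x, filename='mlp.onnx')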

Page 39:

Page 40:


> Pioneering define-by-run API design

> Being made faster and more portable with ChainerX and Chainer Compiler

WE ARE HIRING!
@ChainerOfficial on Twitter

https://bit.ly/join-chainer-slack
https://preferred-networks.jp/en/jobs