What can in-memory computing deliver, and what are the barriers? Naveen Verma ([email protected] ), L.-Y. Chen, H. Jia, M. Ozatay, Y. Tang, H. Valavi, B. Zhang, J. Zhang March 20 th , 2019


Page 1

What can in-memory computing deliver, and what are the barriers?

Naveen Verma ([email protected]), L.-Y. Chen, H. Jia, M. Ozatay, Y. Tang, H. Valavi, B. Zhang, J. Zhang

March 20th, 2019

Page 2

The memory wall

MULT (INT4): 0.1 pJ
MULT (INT8): 0.3 pJ
MULT (INT32): 3 pJ
MULT (FP32): 5 pJ

[Plot: energy per access of a 64b word (pJ) vs. memory size, across SRAM array sizes]

• Separating memory from compute fundamentally raises a communication cost

More data → bigger array → larger comm. distance → more comm. energy
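As a rough illustration of that chain, a toy Python model (the coefficient is invented for illustration, not measured data) shows access energy growing with the square root of array size while the compute energy per MULT stays fixed:

```python
# Toy model of the "memory wall" (illustrative numbers; only the MULT
# energies quoted above come from the slides).
import math

def sram_access_energy_pj(size_bits, e_bitline_pj_per_row=0.001):
    """Energy to read a 64b word from a square SRAM array.

    Assumes a D-bit array laid out as sqrt(D) x sqrt(D), so wire length
    (hence communication energy) grows as sqrt(D). The per-row coefficient
    is a made-up constant for illustration.
    """
    rows = math.sqrt(size_bits)
    return 64 * e_bitline_pj_per_row * rows

for kb in (8, 64, 512, 4096):
    e = sram_access_energy_pj(kb * 1024 * 8)
    print(f"{kb:5d} kB array: ~{e:6.1f} pJ per 64b access vs 0.3 pJ per INT8 MULT")
```

Quadrupling the array size doubles the per-access energy in this model, which is the scaling the plot above illustrates.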

Page 3

So, we should amortize data movement

[Roofline sketch: performance vs. compute intensity (OPS per word accessed, OPS/W_COMP), with memory energy E_MEM setting the slope; low intensity is memory-bound, high intensity is compute-bound]

• Specialized (memory-compute integrated) architectures

Processing Element (PE) array

• Reuse accessed data for compute operations:

  y⃗ = A x⃗, i.e.

  [y_1]   [a_{1,1} ⋯ a_{1,N}] [x_1]
  [ ⋮ ] = [   ⋮    ⋱    ⋮  ] [ ⋮ ]
  [y_M]   [a_{M,1} ⋯ a_{M,N}] [x_N]
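The reuse argument can be made concrete: in a matrix-vector multiply, each input element is fetched once but participates in M multiply-accumulates. A short Python sketch counting accesses:

```python
import numpy as np

# Matrix-vector multiply y = A x: each element of x is reused M times
# (once per output row), so one memory access is amortized over M MACs.
M, N = 4, 3
A = np.arange(M * N, dtype=np.int64).reshape(M, N)
x = np.arange(N, dtype=np.int64)

x_reads = np.zeros(N, dtype=int)   # count accesses to each x[j]
y = np.zeros(M, dtype=np.int64)
for i in range(M):
    for j in range(N):
        x_reads[j] += 1
        y[i] += A[i, j] * x[j]

assert np.array_equal(y, A @ x)    # matches the reference MVM
print("reads per x[j]:", x_reads)  # each x[j] read M times
```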

Page 4

In-memory computing (IMC)

[Figure: array operated in IMC mode vs. SRAM mode]

[J. Zhang, VLSI’16][J. Zhang, JSSC’17]

• In SRAM mode, matrix A stored in bit cells row-by-row

• In IMC mode, many WLs driven simultaneously→ amortize comm. cost inside array

• Can apply to different memory technologies → enhanced scalability → embedded non-volatility

y⃗ = A x⃗:

  [y_1]   [a_{1,1} ⋯ a_{1,N}] [x_1]
  [ ⋮ ] = [   ⋮    ⋱    ⋮  ] [ ⋮ ]
  [y_M]   [a_{M,1} ⋯ a_{M,N}] [x_N]

Page 5

The basic tradeoffs

CONSIDER: Accessing D bits of data associated with computation, from an array with D^(1/2) columns ⨉ D^(1/2) rows.

[Figure: traditional organization (D^(1/2)⨉D^(1/2) memory array separated from computation by distance D^(1/2)) vs. in-memory organization (memory & computation in one D^(1/2)⨉D^(1/2) array)]

Metric    | Traditional | In-memory
Bandwidth | 1/D^(1/2)   | 1
Latency   | D           | 1
Energy    | D^(3/2)     | ~D
SNR       | 1           | ~1/D^(1/2)

• IMC benefits energy/delay at cost of SNR

• SNR-focused systems design is critical (circuits, architectures, algorithms)
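A quick Python check of these scalings (relative units, not absolute; D = 1024 as a worked example):

```python
import math

# Scaling comparison for accessing D bits from a sqrt(D) x sqrt(D) array,
# following the traditional-vs-in-memory table above (relative units).
def tradeoffs(D):
    traditional = {"bandwidth": 1 / math.sqrt(D), "latency": float(D),
                   "energy": D ** 1.5, "snr": 1.0}
    imc = {"bandwidth": 1.0, "latency": 1.0,
           "energy": float(D), "snr": 1 / math.sqrt(D)}
    return traditional, imc

trad, imc = tradeoffs(1024)
print(f"energy advantage ≈ {trad['energy'] / imc['energy']:.0f}x")  # D^(3/2)/D = sqrt(D)
print(f"SNR cost ≈ {trad['snr'] / imc['snr']:.0f}x")                # also sqrt(D)
```

The energy benefit and the SNR penalty both scale as sqrt(D), which is exactly why SNR-focused design is the price of the efficiency gain.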

Page 6

IMC as a spatial architecture

y⃗ = A x⃗. Data movement:
1. x_j's broadcast minimum distance, due to high-density bit cells
2. (Many) a_{i,j}'s stationary in high-density bit cells
3. High-dynamic-range analog y_i's computed in distributed manner

Page 7

IMC as a spatial architecture

Operation      | Digital-PE energy (fJ) | Bit-cell energy (fJ)
Storage        | 250                    | 50
Multiplication | 100                    | (in analog op)
Accumulation   | 200                    | (in analog op)
Communication  | 40                     | 5
Total          | 590                    | 55

Assume: 1k dimensionality, 4-b multiplies, 45nm CMOS
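Summing the columns confirms the totals and the roughly 10× per-operation energy gap (values taken from the table above):

```python
# Sanity check of the per-operation energy table (values in fJ).
digital = {"storage": 250, "multiplication": 100,
           "accumulation": 200, "communication": 40}
bitcell = {"storage": 50, "communication": 5}  # mult/accum folded into the analog op

total_digital = sum(digital.values())
total_bitcell = sum(bitcell.values())
print(total_digital, total_bitcell, f"{total_digital / total_bitcell:.1f}x")
# → 590 55 10.7x
```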

[Schematic: bit-cell array with precharge (PRE), word-line inputs a11[3:0], a12[3:0], bit-line operands b11[3], b21[3], and binary-weighted column outputs c11(2^3), c11(2^2), c11(2^1), c11(2^0)]

Page 8

Where does IMC stand today?

[Scatter plots comparing published accelerators, IMC vs. not IMC:
(a) Normalized throughput (GOPS/mm²) vs. energy efficiency (TOPS/W)
(b) On-chip memory size (kB) vs. energy efficiency (TOPS/W)
Designs: Bankman, ISSCC’18, 28nm; Moons, ISSCC’17, 28nm; Yuan, VLSI’18, 65nm; Ando, VLSI’17, 65nm; Chen, ISSCC’16, 65nm; Gonugondla, ISSCC’18, 65nm; Biswas, ISSCC’18, 65nm; Jiang, VLSI’18, 65nm; Valavi, VLSI’18, 65nm; Khwa, ISSCC’18, 65nm; Lee, ISSCC’18, 65nm; Shin, ISSCC’17, 65nm; Yin, VLSI’17, 65nm; Zhang, VLSI’16, 130nm]

• Potential for 10× higher efficiency & throughput

• Limited scale, robustness, configurability

Page 9

Challenge 1: analog computation

• Use analog circuits to ‘fit’ compute in bit cells
⟶ SNR limited by analog-circuit non-idealities
⟶ Must be feasible/competitive at 16/12/7nm

[Schematic: bit-cell replica with I-DAC bias (VBIAS,O, VBIAS; 1x/2x/…/16x branches; inputs X[0], X[1], …, X[4]; WL_RESET, offset; BL/BLB; devices MA, MD, MA,R, MD,R; CLASS_EN) and plot of ΔVBL (V) vs. WLDAC code (0-35), showing ideal vs. nominal transfer curves]

[J. Zhang, VLSI’16][J. Zhang, JSSC’17]
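To illustrate how a transfer-curve non-ideality of this kind limits SNR, here is a toy Python model; the curve shapes and coefficients are invented for illustration, not taken from the chip:

```python
import numpy as np

# Illustrative model (not the chip's actual transfer curve): a word-line DAC
# code ideally contributes a bit-line swing proportional to the code, but the
# bit-cell current saturates, compressing large codes.
rng = np.random.default_rng(0)

def ideal(code):
    return 0.002 * code                      # V per code, linear

def nominal(code):
    return 0.065 * (1 - np.exp(-code / 30))  # compressive non-ideality

codes = rng.integers(0, 36, size=1000)
err = nominal(codes) - ideal(codes)
snr_db = 10 * np.log10(np.var(ideal(codes)) / np.var(err))
print(f"transfer-curve SNR ≈ {snr_db:.1f} dB")
```

The same framework lets one quantify how much a replica-biasing scheme, which pulls the nominal curve toward the ideal one, buys back in SNR.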

Page 10

Algorithmic co-design(?)

[Figure: ensemble of WEAK classifiers 1…K over a 2-D feature space (Feature 1 vs. Feature 2), combined by a weighted voter; a classifier trainer closes the loop]

• Chip-specific weight tuning

[Z. Wang, TVLSI’15][Z. Wang, TCAS-I’15]

[S. Gonugondla, ISSCC’18]

• Chip-generalized weight tuning

[Plot: accuracy (%) vs. normalized MRAM cell standard deviation (1-10), comparing training with hardware-unaware loss L = |y − ŷ(x, W)|² against hardware-aware loss L = |y − ŷ(x, W, g)|², where g are parameters modeling device behavior, shared between training and inference]

E.g.: BNN model (applied to CIFAR-10)

[B. Zhang, ICASSP 2019]
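A minimal sketch of the idea behind chip-generalized tuning, assuming a simple multiplicative variation model g (the actual model in [B. Zhang, ICASSP 2019] differs): inject device variation into the forward pass during training, so the learned weights tolerate cell-to-cell spread.

```python
import numpy as np

# Toy logistic-regression "network" trained with variation injection.
# The multiplicative noise model g is an assumption for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(512, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)

def train(sigma, steps=300, lr=0.1):
    w = np.zeros(16)
    for _ in range(steps):
        g = 1 + sigma * rng.normal(size=16)   # fresh cell variation each step
        p = 1 / (1 + np.exp(-X @ (w * g)))    # forward pass through "hardware"
        w -= lr * (X * g).T @ (p - y) / len(y)
    return w

w_aware = train(sigma=0.3)
g_test = 1 + 0.3 * rng.normal(size=16)        # a fresh "chip" at inference
acc = np.mean((X @ (w_aware * g_test) > 0) == y)
print(f"accuracy under variation: {acc:.2f}")
```

Because every training step sees a different draw of g, the optimizer cannot rely on any one cell's exact value, which is the chip-generalized (as opposed to chip-specific) formulation.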

Page 11

Challenge 2: programmability

[B. Fleischer, VLSI’18]

General matrix multiply (~256⨉2300 = 590k elements) vs. single/few-word operands (traditional, near-memory acceleration)

• Matrix-vector multiply is only 70-90% of operations
⟶ IMC must integrate in programmable, heterogeneous architectures
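This is an Amdahl's-law limit: if only the MVM fraction f of operations is accelerated, overall speedup is capped near 1/(1−f) no matter how fast the IMC engine is. A short Python check:

```python
# Amdahl's-law view of the programmability challenge: accelerating only the
# MVM fraction f of operations caps the end-to-end speedup.
def overall_speedup(f, mvm_speedup):
    return 1 / ((1 - f) + f / mvm_speedup)

for f in (0.70, 0.90):
    s = overall_speedup(f, mvm_speedup=100)
    print(f"MVM fraction {f:.0%}: overall speedup ≤ {s:.1f}x")
```

Even a 100× MVM engine yields under 10× end to end when MVMs are 90% of the work, which is why the remaining 10-30% must run on tightly integrated programmable hardware.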

Page 12

Challenge 3: efficient application mappings

• IMC engines must be ‘virtualized’
⟶ IMC amortizes MVM costs, not weight loading. But…
⟶ Need new mapping algorithms (physical tradeoffs very different from those of digital engines)

[Figure: CONV layer mapping — N filters w^n_{i,j,k} (each I⨉J⨉K) slide over X⨉Y⨉Z input activations x to produce output activations]

Activation accessing: E_DRAM→IMC (4-bit): 40 pJ; reuse: I⨉J⨉N (10-20 layers); E_MAC (4-b): 50 fJ
Weight accessing: E_DRAM→IMC (4-bit): 40 pJ; reuse: X⨉Y; E_MAC (4-b): 50 fJ

Reuse ≈ 1k at the memory-bound / compute-bound boundary
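The break-even reuse follows directly from the two energies quoted above:

```python
# How much reuse is needed before MACs, not memory accesses, dominate energy?
E_ACCESS_PJ = 40.0      # DRAM -> IMC transfer of a 4-bit operand (from slide)
E_MAC_PJ = 0.050        # 4-b MAC (50 fJ, from slide)

breakeven = E_ACCESS_PJ / E_MAC_PJ
print(f"reuse at break-even: {breakeven:.0f} MACs per operand fetched")
# → reuse at break-even: 800 MACs per operand fetched
```

800 MACs per fetched operand is the "Reuse ≈ 1k" boundary: mappings that fall short of it leave the IMC engine memory-bound.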

Page 13

Path forward: charge-domain analog computing

1. Digital multiplication
2. Analog accumulation

[H. Valavi, VLSI’18]

~1.2fF metal capacitor(on top of bit cell)
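A minimal sketch of the principle, assuming ideal capacitors (1-b multiply shown; the chip's multi-bit scheme is more involved): each bit cell's digital multiply conditionally charges its local metal capacitor, and shorting all capacitors together performs the accumulation as charge averaging.

```python
import numpy as np

# Charge-domain accumulation: a 1-b digital multiply (AND) decides whether a
# cell's ~1.2 fF capacitor is charged to VDD; shorting all capacitors then
# averages the charge, so the shared-node voltage is proportional to the dot
# product. Idealized: no mismatch, no parasitics.
VDD, C = 1.0, 1.2e-15
a = np.random.default_rng(2).integers(0, 2, size=256)  # stored 1-b weights
x = np.random.default_rng(3).integers(0, 2, size=256)  # 1-b input bits

q = C * VDD * (a & x)            # per-cell charge after the digital multiply
v_out = q.sum() / (C * len(a))   # charge sharing across 256 equal capacitors
assert abs(v_out - VDD * np.dot(a, x) / len(a)) < 1e-12
print(f"dot product {np.dot(a, x)} -> output {v_out:.3f} V")
```

Because the accumulation is set by capacitor matching rather than transistor transfer curves, this is where the charge-domain approach recovers SNR relative to current-domain schemes.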

Page 14

2.4Mb, 64-tile IMC

                  | Moons,   | Bang,    | Ando,   | Bankman,          | Valavi,
                  | ISSCC’17 | ISSCC’17 | VLSI’17 | ISSCC’18          | VLSI’18
Technology        | 28nm     | 40nm     | 65nm    | 28nm              | 65nm
Area (mm²)        | 1.87     | 7.1      | 12      | 6                 | 17.6
Operating VDD     | 1        | 0.63-0.9 | 0.55-1  | 0.8/0.8 (0.6/0.5) | 0.94/0.68/1.2
Bit precision     | 4-16b    | 6-32b    | 1b      | 1b                | 1b
On-chip memory    | 128kB    | 270kB    | 100kB   | 328kB             | 295kB
Throughput (GOPS) | 400      | 108      | 1264    | 400 (60)          | 18,876
TOPS/W            | 10       | 0.384    | 6       | 532 (772)         | 866

• 10-layer CNN demos for MNIST/CIFAR-10/SVHN at energies of 0.8/3.55/3.55 μJ/image

• Equivalent performance to software implementation

[H. Valavi, VLSI’18]

Page 15

Programmable IMC

[Block diagram: RISC-V CPU on AXI bus with DMA, timers, GPIO, UART; program memory (128 kB) with boot-loader; data memory (128 kB); Compute-In-Memory Unit (CIMU): 590 kb, 16 banks; external memory I/F to E2PROM and DRAM controller; config registers on APB bus]

[CIMU detail: W2b reshaping buffer; sparsity/AND-logic controller; row decoder / WL drivers; memory read/write I/F; 32b near-memory data paths; ADC & ABN blocks; Compute-In-Memory Array (CIMA) driven by differential inputs x0/xb0 … x2303/xb2303, computing f(y = A x)]

[H. Jia, arXiv:1811.04047]

Page 16

Bit-scalable mixed-signal compute

[Plots: SQNR (dB) vs. ADC precision B_A (2-8b), for input precisions B_x = 2, 4, 8 and dimensionalities N = 2304, 2000, 1500, 1000, 500, 255]

• SQNR differs from standard integer compute

[H. Jia, arXiv:1811.04047]
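The SQNR behavior can be reproduced qualitatively with an idealized Python model in which an exact digital MVM is re-quantized by a B_A-bit ADC (a simplification of the chip's mixed-signal pipeline; the scaling and rounding here are assumptions):

```python
import numpy as np

# SQNR of an analog MVM whose high-dynamic-range accumulation is read out by
# a B_A-bit ADC. Standard integer compute is exact; here the ADC truncates.
rng = np.random.default_rng(4)

def mvm_sqnr_db(N, B_x, B_A, trials=2000):
    x = rng.integers(0, 2 ** B_x, size=(trials, N))
    a = rng.integers(0, 2, size=N)       # 1-b weights for simplicity
    y = x @ a                            # exact result, range [0, N*(2^B_x - 1)]
    full_scale = N * (2 ** B_x - 1)
    step = full_scale / 2 ** B_A         # ADC quantization step
    y_adc = np.round(y / step) * step
    return 10 * np.log10(np.mean(y ** 2) / np.mean((y - y_adc) ** 2))

print(f"N=2304, Bx=4, BA=8: SQNR ≈ {mvm_sqnr_db(2304, 4, 8):.1f} dB")
```

Raising B_A improves SQNR while raising N (for fixed B_A) degrades it, matching the trends in the plots above.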

Page 17

Development board

To Host Processor

Page 18

Design flow:
1. Deep-learning training libraries (Keras)
2. Deep-learning inference libraries (Python, MATLAB, C)

Standard Keras libs:
  Dense(units, ...)
  Conv2D(filters, kernel_size, ...)
  ...

Custom libs (INT/CHIP quant.):
  QuantizedDense(units, nb_input=4, nb_weight=4, chip_quant=True, ...)
  QuantizedConv2D(filters, kernel_size, nb_input=4, nb_weight=4, chip_quant=True, ...)
  QuantizedDense(units, nb_input=4, nb_weight=4, chip_quant=False, ...)
  QuantizedConv2D(filters, kernel_size, nb_input=4, nb_weight=4, chip_quant=False, ...)
  ...

High-level network build (Python):
  chip_mode = True
  outputs = QuantizedConv2D(inputs, weights, biases, layer_params)
  outputs = BatchNormalization(inputs, layer_params)
  ...

Function calls to chip (Python):
  chip.load_config(num_tiles, nb_input=4, nb_weight=4)
  chip.load_weights(weights2load)
  chip.load_image(image2load)
  outputs = chip.image_filter()

Embedded C:
  chip_command = get_uart_word();
  chip_config();
  load_weights();
  load_image();
  image_filter(chip_command);
  read_dotprod_result(image_filter_command);
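Since the Quantized* layer internals are not shown, here is a hypothetical sketch of the fake-quantization step such a layer might apply to its weights (the helper name and rounding behavior are assumptions, not the chip libraries' actual code):

```python
import numpy as np

# Hypothetical fake-quantization helper, of the kind a QuantizedDense /
# QuantizedConv2D-style layer would use for nb_weight=4 (assumed behavior).
def quantize(t, n_bits, signed=True):
    """Uniformly quantize t to n_bits over its own max-abs range."""
    levels = 2 ** (n_bits - 1) - 1 if signed else 2 ** n_bits - 1
    scale = np.max(np.abs(t)) / levels
    return np.round(t / scale) * scale

w = np.random.default_rng(5).normal(size=(8, 8))
w4 = quantize(w, n_bits=4)   # nb_weight=4 analogue
print("distinct weight levels after 4-b quantization:", len(np.unique(w4)))
```

Training with chip_quant=True vs. chip_quant=False would then differ only in which quantizer the forward pass applies: the chip's measured characteristic vs. this ideal integer one.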

Page 19

Demonstrations

[Plots, multi-bit matrix-vector multiplication: SQNR (dB) vs. ADC precision B_A for B_x = 2 and B_x = 4, bit-true simulation vs. measured, N = 1152 and N = 1728; and measured vs. bit-true compute values across data index for (B_x=2, B_A=2) and (B_x=4, B_A=4)]

Neural-Network Demonstrations

                             | Network A (4/4-b activations/weights) | Network B (1/1-b activations/weights)
Accuracy of chip (vs. ideal) | 92.4% (vs. 92.7%)                     | 89.3% (vs. 89.8%)
Energy/10-way class.¹        | 105.2 μJ                              | 5.31 μJ
Throughput¹                  | 23 images/sec.                        | 176 images/sec.

Neural network topology (both networks):
L1: 128 CONV3 – Batch norm.
L2: 128 CONV3 – POOL – Batch norm.
L3: 256 CONV3 – Batch norm.
L4: 256 CONV3 – POOL – Batch norm.
L5: 256 CONV3 – Batch norm.
L6: 256 CONV3 – POOL – Batch norm.
L7-8: 1024 FC – Batch norm.
L9: 10 FC – Batch norm.

[Die photo, 4.5 mm ⨉ 3 mm: CPU, PMEM, DMEM, DMA etc.; CIMU with 4⨉4 CIMA tiles, ADC, ABN, W2b reshaping buffer, near-mem. datapath, sparsity controller]

[H. Jia, arXiv:1811.04047]

Page 20

Conclusions

Matrix-vector multiplies (MVMs) are a little different from other computations

⟶ high-dimensionality operands lead to data movement (memory accessing)

Bit cells make for dense, energy-efficient PEs in a spatial array

⟶ but require analog operation to fit compute, and impose SNR tradeoff

Must focus on SNR tradeoff to enable scaling (technology/platform) and architectural integration

In-memory computing greatly affects the architectural tradeoffs, requiring new strategies for mapping applications

Acknowledgements: funding provided by ADI, DARPA, NRO, SRC/STARnet