
Page 1:

Exploring DMA-assisted Prefetching Strategies for Software Caches on Multicore Clusters

Christian Pinto and Luca Benini

GRANT N° 288574, GRANT N° 611016. Zurich, 18-20 June 2014

Page 2:

Introduction

[Diagram: heterogeneous system with a host processor and a many-core accelerator, connected to external memory, a memory controller, and UART/USB/GPIO peripherals through a global interconnection.]

Page 3:

Introduction

[Diagram: cluster-based many-core. Each cluster has an L1 mem and a network interface (NI); clusters are connected through an interconnect to an L2 mem and to main memory.]

• SPMs are used to mitigate DRAM latency, at the cost of explicit management
• Software caches [1] hide DRAM-to-SPM copies

STM STHORM (shared-memory clusters):
• Multiple clusters
• 16 cores per cluster
• Per-cluster shared SPM
• Per-cluster DMA
• No data caches

[1] C. Pinto and L. Benini, “A Highly Efficient, Thread-Safe Software Cache Implementation for Tightly-Coupled Multicore Clusters”.

Page 4:

Introduction

    void main(){
        /* initialization */
        int a = cache_lookup(address, &x);
        do_smthg(a);
        /* some more computation on the same line,
           further lookups - only hits */
        a = cache_lookup(address + K, &y);
        ...
    }

Software caches are great, but… both lookups above miss, and every cache miss is blocking: computation stops until the line refill completes.

Software caches are inherently synchronous: processors have to wait for line refills to complete, so clock cycles are wasted.

Cache lines can instead be prefetched asynchronously, using the DMA engines available in the platform. We extend the software cache in [1] to support DMA-assisted line prefetching.

[1] C. Pinto and L. Benini, “A Highly Efficient, Thread-Safe Software Cache Implementation for Tightly-Coupled Multicore Clusters”.
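To make the opportunity concrete, a minimal pseudocode-style sketch of the intended usage, reusing the prefetch_line call introduced later in this deck; LINE_SIZE and the other names are illustrative, not the actual API surface:

    void main(){
        /* initialization */
        int a = cache_lookup(address, &x);          /* miss: blocking refill */
        prefetch_line(address + LINE_SIZE, &cache); /* asynchronous DMA for the next line */
        do_smthg(a);                                /* computation overlaps with the DMA */
        a = cache_lookup(address + K, &y);          /* hit, if the prefetch completed in time */
    }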

Page 5:

Related work

[Diagram: prefetching research to mitigate DRAM latency, from HW prefetch (1978) to SW prefetch: improving the performance of available caching systems, prefetching for cache-less processors, smart prefetch strategies, and software caches & prefetch. HW prefetch: + performance, - flexibility; SW prefetch: + flexibility, - performance.]

Software caches & prefetch, Chen et al. [1]:
• Software prefetching for irregular memory references, based on compiler hints
• Regular memory references resolved using direct buffering

Our approach relies on the programmer's hints for irregular references, and still uses the SW cache for the regular ones: direct buffering wastes memory!

[1] T. Chen et al., “Prefetching irregular references for software cache on Cell”.

Page 6:

Target Software Cache

Per-cluster shared SW cache:
• Internal data structures allocated in shared SPM (Tags table, Lines table)
  • Exploiting the multi-bank design
• Fast lookup function
  • Hash function: simple bitwise operations
  • Lookup & hit in 18 clock cycles
• Cache lines can be accessed in parallel
  • One lock associated with each cache line, for thread safety
• Operation modes
  • Default mode: standard cache behavior
  • Object mode: reduces overhead by minimizing the number of cache accesses

(A minimal lookup sketch follows the reference below.)

C. Pinto and L. Benini, “A Highly Efficient, Thread-Safe Software Cache Implementation for Tightly-Coupled Multicore Clusters”.
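For intuition, a minimal sketch of the kind of lookup the bullets above imply: a direct-mapped variant with a purely bitwise hash. All names, sizes, and the direct-mapped layout are assumptions; locking and the refill path are elided.

    #include <stdint.h>

    #define LINE_SHIFT 5                      /* 32-byte lines */
    #define NUM_LINES  128                    /* power of two */

    static uint32_t tags_table[NUM_LINES];    /* tag = external address >> LINE_SHIFT */
    static uint8_t  lines_table[NUM_LINES][1 << LINE_SHIFT];  /* lives in shared SPM */

    static inline uint32_t cache_hash(uint32_t ext_addr) {
        return (ext_addr >> LINE_SHIFT) & (NUM_LINES - 1);    /* simple bitwise ops */
    }

    void *lookup(uint32_t ext_addr) {
        uint32_t idx = cache_hash(ext_addr);
        if (tags_table[idx] == ext_addr >> LINE_SHIFT)        /* hit */
            return &lines_table[idx][ext_addr & ((1u << LINE_SHIFT) - 1)];
        return 0;  /* miss: the real cache locks the line, refills it via DMA, retries */
    }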

Page 7:

Cache line software prefetch

Main goal: transform the target software cache into a proactive entity.

    void main(){
        /* initialization */
        int a = cache_lookup(0x20000, &x);
        do_smthg(a);
        /* some more computation on the same line,
           further lookups - only hits */
        a = cache_lookup(0x20020, &y);
        ...
    }

[Diagram: one STHORM cluster. 16 STxP70+FPx cores, each with a 16 KB P$, connected through the cluster interconnect to the shared SPM (holding the Tags table and Lines table) and to the DMA engine. A lookup that hits returns immediately; a miss triggers a DMA refill.]

• Computation overlaps with the prefetch of the next line
• HIT! The processor saves clock cycles

Page 8:

Software prefetch schemes

Prefetching is sensitive to the application's memory access pattern.

Regular access patterns:
• Easy to predict
• Benefit from automatic prefetching

Irregular access patterns:
• Hard to predict
• Benefit from static hints given by the programmer
  • Which line to prefetch next

We support both automatic and programmer-assisted prefetch schemes.

Page 9:

Automatic prefetch

Many-core accelerators are often used for computer vision applications:

    foreach (subframe) do
        foreach (p in subframe) do
            for (N := 0; N < K; N++) do
                execute function(p + N)
            done
        done
    done

Nested loops:
• Images are processed in subframes
• Subframes are (usually) scanned with a fixed spatial pattern
  • Easily predictable
• The innermost loop accesses several adjacent pixels
  • Its execution overlaps with the prefetch of the next line

Prefetch is triggered only at the first reference to each line:
• Minimizes the chance of waiting for prefetch completion

Page 10:

Automatic prefetch

Possible access patterns: [figure: row-wise and column-wise scans of the image]

We define two automatic prefetch schemes:
• Horizontal prefetching
• Vertical prefetching

These require:
• An extension of the lookup routine
• Extra data structures
• A prefetch routine (a sketch of the address computation follows)
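As a sketch of what the two schemes compute for the prefetch candidate; the names, and the row_stride parameter (distance in bytes between vertically adjacent pixels), are assumptions:

    #include <stdint.h>

    /* Horizontal prefetching: the next line along the same image row. */
    static inline uint32_t next_horizontal(uint32_t addr, uint32_t line_size) {
        return addr + line_size;
    }

    /* Vertical prefetching: the same column, one image row below. */
    static inline uint32_t next_vertical(uint32_t addr, uint32_t row_stride) {
        return addr + row_stride;
    }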

Page 11:

Programmer assisted prefetch

Applications with irregular memory patterns may benefit from the programmer's hints!

    void prefetch_line (void * ext_mem_address, cache_parameters * cache);

Explicitly programs the prefetch of a specific cache line.

Some applications have a long prologue (initialization phase):
• It can be exploited to pre-load data into the cache

    dma_req_t * prefetch_cache (void * ext_mem_address, uint32_t from, cache_parameters * cache);
    void wait_cache_prefetch (dma_req_t * dma_events);
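A hypothetical usage sketch of this API: pre-loading the cache during the application prologue. Everything except the two calls above (the kernel name, long_initialization(), the input pointer) is illustrative:

    void kernel(void *input, cache_parameters *cache) {
        /* Program the pre-load of the cache, starting from offset 0. */
        dma_req_t *reqs = prefetch_cache(input, 0, cache);

        long_initialization();       /* prologue overlaps with the DMA transfers */

        wait_cache_prefetch(reqs);   /* the cache is now warm */
        /* ... main loop: lookups mostly hit ... */
    }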

Page 12:

Additional data structures

- 1 DMA event (32 bytes) per line prefetch:
  • 16 in total, assuming at most 1 prefetch per core at the same time
  • Allocated in a common shared buffer in SPM (events_buffer)
  • Circular buffer
  • Requires one extra lock (1 byte)
- 1 byte per cache line:
  • Index of the prefetch DMA event in events_buffer
- Enhanced line status register (1 byte per line):
  • DIRTY flag (already present in the original implementation, embedded in the tag)
  • PREFETCHING flag (true while a line is being prefetched)
  • FIRST_ACCESS flag (true only until the first access to the line)

Total memory overhead: 513 bytes + 2 bytes per cache line (see the sketch below).
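A sketch of the added state, matching the sizes listed above; the exact layout and names are assumptions:

    #include <stdint.h>

    #define NUM_CORES 16
    #define NUM_LINES 128                        /* illustrative cache size */

    typedef struct { uint8_t raw[32]; } dma_req_t;   /* one 32-byte DMA event */

    static dma_req_t events_buffer[NUM_CORES];   /* circular: 16 x 32 = 512 bytes */
    static uint8_t   events_lock;                /* +1 lock byte = 513 bytes total */

    /* Per cache line: 2 extra bytes. */
    static uint8_t dma_event_idx[NUM_LINES];     /* index into events_buffer */
    static uint8_t line_status[NUM_LINES];       /* enhanced status register */

    #define DIRTY        (1u << 0)   /* already in the original implementation */
    #define PREFETCHING  (1u << 1)   /* a DMA refill is in flight */
    #define FIRST_ACCESS (1u << 2)   /* set until the first access to the line */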

Page 13:

Lookup routine extension

[Flowchart: on each lookup, if the line's PREFETCHING flag is set, retrieve the DMA event id and wait for the DMA transfer to complete; then, if FIRST_ACCESS is set, prefetch the next line.]

• The DMA event id is retrieved from the field associated with the line
• The next-line prefetch is invoked only at the first access to the line (sketch below)
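A sketch of the extended lookup path following the flowchart; the names reuse the earlier sketches, dma_wait(), next_line(), and prefetch_line_internal() are hypothetical helpers, and synchronization is elided:

    void *lookup_prefetch(uint32_t ext_addr) {
        uint32_t idx = cache_hash(ext_addr);

        if (line_status[idx] & PREFETCHING) {
            /* Retrieve the DMA event id stored with the line and wait. */
            dma_wait(&events_buffer[dma_event_idx[idx]]);
            line_status[idx] &= ~PREFETCHING;
        }
        if (line_status[idx] & FIRST_ACCESS) {
            line_status[idx] &= ~FIRST_ACCESS;
            /* Prefetch of the next line, only at the first access. */
            prefetch_line_internal(next_line(ext_addr));
        }
        return lookup(ext_addr);   /* normal hit/miss handling */
    }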

Page 14:

Line Prefetch Routine

1. Identify the prefetch candidate
2. Acquire its line lock
3. Sanity checks:
   a) the line selected as victim is already being prefetched
   b) the line to prefetch is already in the cache
   (a and b may overlap: some other processor has already programmed the prefetch of the same line)
4. Acquire a DMA event index (by finding the first free entry in events_buffer)
5. Compute source and destination addresses
6. Trigger the DMA transfer
7. Save the DMA event into events_buffer

A sketch of these steps follows.
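The seven steps in a sketch; victim_index(), line_in_cache(), alloc_event_slot(), dma_async_copy(), and the cache_parameters fields are placeholder names, not the actual implementation:

    void prefetch_line(void *ext_mem_address, cache_parameters *cache) {
        uint32_t idx = victim_index(ext_mem_address, cache);  /* 1. candidate */
        lock_acquire(&cache->line_lock[idx]);                 /* 2. line lock */

        /* 3. Sanity checks: bail out if the victim is already prefetching
           or the requested line is already in the cache. */
        if ((cache->status[idx] & PREFETCHING) ||
            line_in_cache(ext_mem_address, cache)) {
            lock_release(&cache->line_lock[idx]);
            return;
        }

        uint8_t ev = alloc_event_slot();   /* 4. first free entry in events_buffer */

        /* 5.-6. Compute addresses and trigger the asynchronous DMA transfer. */
        void *dst = cache->lines[idx];
        dma_async_copy(dst, ext_mem_address, cache->line_size,
                       &events_buffer[ev]);

        dma_event_idx[idx] = ev;           /* 7. save the event for the lookup path */
        cache->status[idx] |= PREFETCHING | FIRST_ACCESS;
        lock_release(&cache->line_lock[idx]);
    }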

Page 15:

Experimental setup

Experiments performed on an evaluation board of STHORM:
• STHORM acceleration fabric in 28 nm CMOS
  • Four computing clusters
  • 16 STxP70 processors per cluster, running at 430 MHz
  • Shared data scratchpad of 256 KB per cluster
  • Memory bandwidth towards the external memory: 300 MB/s
• Xilinx ZYNQ ZC-702 chip (host):
  • Dual-core ARM Cortex-A9
• Benchmarks implemented in OpenCL
• Software cache: plain C code, linked to the applications as an external library
• Experiments:
  • Prefetch overhead characterization
  • Real applications: Normalized Cross-Correlation (NCC), Viola-Jones face detection, a color conversion algorithm

Page 16:

Overhead characterization (1)

Single-core runs, to avoid any unwanted interference; 32-byte cache line size.

    Inst/clk      Hit      Miss       Hit&Prefetch   Miss&Prefetch
    No prefetch   11/18    118/893    -              -
    Prefetch      21/46    -          127/197        215/1073

Hit case: lower IPC w.r.t. the baseline. The prefetching code adds two pipeline flushes, for the checks that fail in the common case:
• Line not prefetching
• Not the first access

Page 17:

Overhead characterization (2)

Single-core runs, to avoid any unwanted interference; 32-byte cache line size.

    Inst/clk      Hit      Miss       Hit&Prefetch   Miss&Prefetch
    No prefetch   11/18    118/893    -              -
    Prefetch      21/46    -          127/197        215/1073

Hit&Prefetch vs. Miss: roughly the same number of instructions, far fewer clock cycles.
• First access to a prefetched line
• The processor accessing the line gets a hit instead of a miss
• 893 - 197 = 696 clock cycles saved!

Page 18:

Overhead characterization (3)

Single-core runs, to avoid any unwanted interference; 32-byte cache line size.

    Inst/clk      Hit      Miss       Hit&Prefetch   Miss&Prefetch
    No prefetch   11/18    118/893    -              -
    Prefetch      21/46    -          127/197        215/1073

Miss&Prefetch: the case that suffers least from the code increase due to prefetching, since the miss latency dominates the extra instructions.

Page 19:

Case studies setup

Case study 1: Normalized Cross-Correlation (NCC)
• 2 input images
• Consecutive lines in the same column are accessed
• One SW cache per input image, 32 KB each

Case study 2: Face Detection
• Cascade classifier accessed through the SW cache
• Object caching to reduce the number of cache accesses
• 64 KB SW cache, 32-byte objects

Case study 3: Color Conversion
• Low re-use of input data: image rows are swept from the first to the last pixel
• 32 KB SW cache with 512-byte lines

Page 20:

Case study 1: NCC

[Chart: execution time (thousands of ns) vs. line size (32, 64, 128 bytes) for the baseline software cache, horizontal prefetch, vertical prefetch, initial cache prefetch, and hand-tuned DMA.]

Baseline software cache: 40% slowdown w.r.t. the DMA implementation:
• Tight innermost loop, only 3 arithmetic operations per cache access

Vertical prefetching was expected to work better (consecutive lines in the same column are accessed), and it does:
• Only 5% overhead w.r.t. hand-tuned DMA

Page 21:

Case study 1: NCC (2)

[Chart: improvement w.r.t. the baseline software cache (0-40%) for horizontal prefetch, vertical prefetch, and initial cache prefetch, at line sizes of 32, 64, and 128 bytes.]

• The number of executed instructions increases by 19% to 24%
• Overall benefit w.r.t. the baseline SW cache: up to ~40%

Page 22:

Case study 2: Face Detection

[Chart: execution time normalized to hand-tuned DMA (0.0-9.0) for image 1 and image 2, comparing no prefetch, line prefetch, and initial cache prefetch.]

Image 1: SW cache performance far from hand-tuned DMA:
• Small image (53x72)
• 7.5% miss rate
• Not enough work to hide the SW cache overhead (due to the high number of misses)

Initial cache prefetching:
• Slowdown reduced from 8.6x to 2.5x
• Due to the large reduction in cache misses (86%):
  • from 2483 to 435 misses over 33117 total cache accesses

Page 23:

Case study 3: Color Conversion

[Chart: execution time normalized to DMA (1.0-1.35) for image 1 and image 2, comparing the SW cache with and without prefetching.]

Even if data is not re-used, line prefetching is still beneficial for the overall performance:
• Slowdown w.r.t. DMA is reduced to 13%

Page 24:

Conclusion

• SW caches are reactive components:
  • Processors wait for cache refills to complete
  • Clock cycles are wasted
• DMA engines can be used to prefetch cache lines ahead of time
• Both automatic and programmer-assisted prefetch schemes were studied
• Thanks to prefetching, SW cache performance gets much closer to hand-tuned DMA: only 4% slowdown in the best case
• Large reduction of the miss rate: up to 86%
  • Makes the SW cache suitable for applications with low data re-use

Page 25:

THANK YOU!

ANY QUESTIONS?