
Page 1

Argonne Training Program on Extreme-Scale Computing 2013

Tryggve Fossum

Computer Architect

Micro Architecture for Exascale

Page 2

Legal Information

THIS REPORT IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL, SPECIFICATION OR SAMPLE.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT OR BY THE SALE OF INTEL PRODUCTS. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, life sustaining, critical control or safety systems, or in nuclear facility applications. Intel may make changes to specifications and product descriptions at any time, without notice. This document contains information on products in the design phase of development. The information here is subject to change without notice. Do not finalize a design with this information. Intel retains the right to make changes to its test specifications at any time, without notice.

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, call (U.S.) 1-800-628-8686 or 1-916-356-3104.

Data has been simulated and is provided for informational purposes only. Data was derived using simulations run on an architecture simulator. Any difference in system hardware or software design or configuration may affect actual performance.

Pentium® and Xeon™ are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

*Other names and brands may be claimed as the property of others.

Copyright © 2013, Intel Corporation

Page 3

CMP Systems Benefits

•  On-die interconnect:
   –  Higher bandwidth: TB/s vs. GB/s
   –  Shorter latency: ns vs. 100 ns
   –  Milliwatts vs. watts

•  Shared on-die cache
   –  Fast communication
   –  Better hit rate
   –  Faster synchronization
   –  Less false sharing

•  Memory attached to a single socket
   –  Simplifies system design
   –  Reduces NUMA effects
   –  Simplifies application development
   –  Simplifies performance tuning

•  On-die performance scaling can be almost linear with core count

[Figure: Tanglewood socket – 16 cores (P), each with a cache slice (C), sharing a 16 MB on-die cache, with MEM and IO interfaces on the die]

Page 4

Knights Corner Micro-architecture

[Figure: Knights Corner ring interconnect – cores with private L2 caches and distributed tag directories (TD) sit on a bidirectional ring, together with GDDR memory controllers (GDDR MC) and the PCIe client logic]

Page 5

Design Example: A Ring-Based On-Die Interconnect

[Figure: ring-based CMP – 16 cores (P) with cache slices (C) connected by a ring, with MEM and IO stops on the same ring]

•  Topology: a ring-shaped, pipelined bus connects an array of processor cores to a distributed, multi-access cache.

   Ring BW ≈ Frequency x Width x Stops / Average Distance

   –  While a single, unidirectional ring can work, two counter-rotating rings cut the average distance in half.
   –  Adding stops adds raw bandwidth, canceling out the added occupancy.
   –  The data path is wide enough to minimize serialization effects.

•  Additional rings can be targeted at shorter, more specialized messages.
•  Power encoding to minimize power and di/dt.
•  Ring latency grows with total distance, but is reasonable for a shared, higher-level cache – close to Manhattan-distance wire delay.

Off-die, such a network would be costly. On-die, the regular layout makes it practical. (A back-of-the-envelope estimate of the bandwidth relation follows below.)
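To make the bandwidth relation concrete, here is a minimal C sketch. The numbers (2 GHz clock, 64-byte data path, 16 stops) are illustrative assumptions, not figures from the talk; the point is only that counter-rotating rings halve the average distance and so roughly double deliverable bandwidth.

#include <stdio.h>

/* Illustrative ring-bandwidth estimate following the slide's heuristic:
 * BW ~ frequency x width x stops / average distance.
 * All parameter values are hypothetical. */
static double ring_bw_gbs(double freq_ghz, int width_bytes, int stops, double avg_distance)
{
    return freq_ghz * width_bytes * stops / avg_distance;   /* GB/s */
}

int main(void)
{
    double freq = 2.0;   /* GHz, assumed            */
    int width  = 64;     /* bytes per slot, assumed */
    int stops  = 16;     /* one stop per core       */

    /* Single unidirectional ring: average distance ~ stops / 2. */
    double uni = ring_bw_gbs(freq, width, stops, stops / 2.0);

    /* Two counter-rotating rings: average distance ~ stops / 4,
     * so aggregate bandwidth roughly doubles. */
    double bi = ring_bw_gbs(freq, width, stops, stops / 4.0);

    printf("unidirectional ring : ~%.0f GB/s\n", uni);   /* ~256 GB/s */
    printf("counter-rotating    : ~%.0f GB/s\n", bi);    /* ~512 GB/s */
    return 0;
}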

Page 6

L3 Read Request (hit)

1. Request: CORE[0] sends the address to Cache[15].
2. Cache[15] performs the cache-line read.
3. Response: Cache[15] returns the data to CORE[0].

(A hop-count sketch for this round trip follows below.)
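A tiny sketch of the hop counts involved, assuming 16 ring stops, a greedy choice between the two counter-rotating rings, and that stop 0 and stop 15 are adjacent; none of these specifics come from the slide.

#include <stdio.h>

/* Hop count between two ring stops when the shorter of the two
 * counter-rotating rings is chosen (greedy routing). */
static int ring_hops(int src, int dst, int n_stops)
{
    int cw  = (dst - src + n_stops) % n_stops;   /* clockwise distance         */
    int ccw = n_stops - cw;                      /* counter-clockwise distance */
    return cw < ccw ? cw : ccw;
}

int main(void)
{
    int n = 16;
    int req = ring_hops(0, 15, n);   /* CORE[0] -> Cache[15]: 1 hop the "short way" */
    int rsp = ring_hops(15, 0, n);   /* Cache[15] -> CORE[0]: 1 hop back            */
    printf("request %d hop(s) + response %d hop(s)\n", req, rsp);
    return 0;
}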

Page 7

Ring Design Considerations

•  Flow control: the rotary rule.
   –  Once a packet is on the ring, it keeps going, with priority over incoming traffic. No further arbitration; no store-and-forward.
   –  What if the destination cannot sink the packet?
      •  A common network challenge, with multiple options.

•  Routing algorithm:
   –  Greedy – pick the ring with the shortest distance.
      •  Minimizes best-case latency.
      •  Not the best for worst-case traffic (the tornado pattern, for example).
      •  Hashing algorithms for allocating memory blocks to cache segments eliminate the worst case.
   –  Adaptive:
      •  With the rotary rule, congestion shows up as failing to get on the ring at some stop, making adaptive routing more beneficial.
      •  Select the ring that minimizes a x D + b x Q, where D is the distance and Q is the number of queue entries (see the sketch below).
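A minimal sketch of that adaptive choice. The weights a and b and the distance/queue numbers are placeholders, not values from the talk.

#include <stdio.h>

/* Adaptive ring selection: choose the ring minimizing a*D + b*Q, where
 * D is the hop distance and Q is the injection-queue occupancy at this
 * stop.  The weights are tuning parameters; the values below are
 * illustrative only. */
struct ring_state { int distance; int queue_occupancy; };

static int pick_ring(struct ring_state cw, struct ring_state ccw, double a, double b)
{
    double cost_cw  = a * cw.distance  + b * cw.queue_occupancy;
    double cost_ccw = a * ccw.distance + b * ccw.queue_occupancy;
    return cost_cw <= cost_ccw ? 0 : 1;   /* 0 = clockwise, 1 = counter-clockwise */
}

int main(void)
{
    struct ring_state cw  = { 3, 24 };    /* short way, but its queue is backed up */
    struct ring_state ccw = { 13, 0 };    /* long way, empty queue                 */

    /* Greedy routing would pick clockwise.  With a = 1.0 and b = 0.5 the
     * costs are 1*3 + 0.5*24 = 15 vs 1*13 + 0.5*0 = 13, so the adaptive
     * policy picks the longer but uncongested counter-clockwise ring. */
    printf("chose ring %d\n", pick_ring(cw, ccw, 1.0, 0.5));
    return 0;
}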

Page 8

Ring Topologies

–  Floorplan matches the CMP layout
–  Very simple, but does it scale?
   •  Wider, faster
   •  Multiple rings
   •  Hierarchical rings
      –  May mean extra buffering
      –  More complex flow control
      –  Locality dependent
–  Possible to increase locality:
   •  Affinity-aware caching
      –  Bypass access to the local ring cache
   •  Divide the ring cache into private segments
      –  Most accesses go to nearby ring stops
      –  Use the ring for global snoops and invalidates
–  Mesh of rings
   •  Keep the benefits of rings
   •  Watch the corner turns

[Figures: a single 16-core ring with MEM and IO stops, and a hierarchical / mesh-of-rings layout of the same cores and cache slices with MEM and IO]

Page 9

Knights Corner Coprocessor

[Figure: KNC card – 60 cores running a Linux OS with >= 8 GB of GDDR5 memory over multiple GDDR5 channels, attached via PCIe x16 to an Intel® Xeon® processor host with its own system memory; host and card also communicate over TCP/IP]

Page 10

Teraflops Research Chip: 100 million transistors ● 80 tiles ● 275 mm²

First tera-scale programmable silicon:

•  Teraflops performance

•  Tile design approach

•  On-die mesh network

•  Novel clocking

•  Power-aware capability

•  Supports 3D-memory

Not designed for IA, and not a product

[Slides from Intel CTG: Jerry Bautista, et al]

Follow-on: Rock Creek, 48 IA cores

Page 11

Tiled Design & Mesh Network

Repeated Tile Method:
•  Compute + router
•  Modular, scalable
•  Small design teams
•  Short design cycle

Mesh Interconnect: a “Network-on-a-Chip”
•  Cores networked in a grid allow very high-bandwidth communication within and between cores
•  5-port, 80 GB/s* routers
•  Low latency (1.25 ns*)
(A routing sketch follows the tile figure below.)

[Figure: one tile – a compute element plus a 5-port router with links to the north, south, east, and west neighbor tiles and to future stacked memory]
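With a 5-port router per tile (links to the four neighbors plus the tile itself), the usual baseline is dimension-ordered XY routing. The slides do not say which routing policy the chip actually uses, so the sketch below is illustrative only, and the 8 x 10 arrangement of the 80 tiles is an assumption.

#include <stdio.h>

/* Dimension-ordered (XY) routing on a 2D mesh network-on-chip:
 * route along X first, then along Y.  Shown only as a common NoC
 * baseline, not as the chip's documented algorithm. */
enum port { EAST, WEST, NORTH, SOUTH, LOCAL };

static enum port next_port(int cur_x, int cur_y, int dst_x, int dst_y)
{
    if (cur_x < dst_x) return EAST;
    if (cur_x > dst_x) return WEST;
    if (cur_y < dst_y) return SOUTH;
    if (cur_y > dst_y) return NORTH;
    return LOCAL;                      /* arrived: deliver to the tile */
}

int main(void)
{
    int x = 1, y = 1;                  /* source tile (assumed coordinates)      */
    const int dx = 6, dy = 8;          /* destination tile (assumed coordinates) */
    int hops = 0;
    enum port p;

    /* Walk one packet from (1,1) to (6,8): X first, then Y. */
    while ((p = next_port(x, y, dx, dy)) != LOCAL) {
        if (p == EAST) x++; else if (p == WEST) x--;
        else if (p == SOUTH) y++; else y--;
        hops++;
    }
    printf("delivered after %d hops\n", hops);   /* 5 + 7 = 12 hops */
    return 0;
}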

Page 12

Fine Grain Power Management

•  FP Engine 1 sleeping: 90% less power
•  FP Engine 2 sleeping: 90% less power
•  Router sleeping: 10% less power (stays on to pass traffic)
•  Data memory sleeping: 57% less power
•  Instruction memory sleeping: 56% less power
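A small sketch of how these per-block savings combine for one tile. Only the percentage savings come from the slide; the base power breakdown (each block's share of awake tile power) is entirely hypothetical.

#include <stdio.h>

/* One tile's power under fine-grain, per-block sleep.
 * 'share' is a hypothetical fraction of awake tile power;
 * 'saving' is the fractional reduction quoted on the slide. */
struct block { const char *name; double share; double saving; int asleep; };

int main(void)
{
    struct block tile[] = {
        { "FP engine 1",        0.30, 0.90, 1 },
        { "FP engine 2",        0.30, 0.90, 1 },
        { "router",             0.15, 0.10, 0 },   /* stays on to pass traffic */
        { "data memory",        0.15, 0.57, 1 },
        { "instruction memory", 0.10, 0.56, 1 },
    };
    double power = 0.0;
    for (int i = 0; i < 5; i++)
        power += tile[i].share * (tile[i].asleep ? 1.0 - tile[i].saving : 1.0);
    printf("idle tile power: ~%.0f%% of awake power\n", power * 100.0);
    return 0;
}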

Page 13

Limits to CMP

•  CMP concentrates computation in a single chip, but…
•  CMP also concentrates demand for
   –  Power and cooling
   –  Memory bandwidth
   –  Memory capacity
   –  Network and IO bandwidth
   –  Cache capacity
•  Diminishing returns for large core counts
   –  Communication latencies increase
   –  NoC bandwidth per core may drop
•  Complications
   –  Process variation, clock skew
   –  Visibility, fault isolation, test
   –  Power management: current limits, di/dt, hot spots

Page 14

128-Core CMP

[Figure: 128-core CMP – an array of core cells around an on-die interconnect, with memory interface units and memory links on two edges, and scalability interface units with scalability links on the others]

Page 15

Caches – For or Against?

Coherent caches are a key MIC Architecture advantage.

[Chart: relative bandwidth and relative bandwidth per watt for memory, L2 cache, and L1 cache]

Results have been simulated and are provided for informational purposes only. Results were derived using simulations run on an architecture simulator or model. Any difference in system hardware or software design or configuration may affect actual performance.

Caches:
•  High data bandwidth
•  Low energy per byte of data supplied
•  Programmer friendly (coherence just works)

Page 16

Cache Coherence

•  Integration creates a gap between integrated and non-integrated functions, such as main memory.
•  The architectural solution has been bigger, better caches…
•  …which can create a coherence bottleneck and design complexity.
•  Want low (no?) coherence cost when there is no sharing.
•  Must work well when there is sharing:
   –  Low power, high bandwidth, low latency
   –  Compared to the cost of sharing when there is no cache coherence
•  Avoid cache thrashing:
   –  False sharing, ping-pong effects (see the sketch below)
   –  Streaming data wiping out data with high locality
•  Smart allocation
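To make the false-sharing point concrete, here is a minimal sketch (not from the talk): two threads update per-thread counters, and padding keeps each counter in its own cache line so the line does not ping-pong between cores. The 64-byte line size is an assumption.

/* build: cc -O2 -pthread false_sharing.c */
#include <pthread.h>
#include <stdio.h>

/* Pad each counter out to an assumed 64-byte cache line so the two
 * writers never share a line.  Remove the pad and both counters land
 * in one line, which then ping-pongs between the two cores. */
struct padded { long count; char pad[64 - sizeof(long)]; };

static struct padded counters[2];

static void *worker(void *arg)
{
    long id = (long)arg;
    for (long i = 0; i < 100000000; i++)
        counters[id].count++;          /* private line: no coherence traffic */
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, worker, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("%ld %ld\n", counters[0].count, counters[1].count);
    return 0;
}

Deleting the pad field changes nothing functionally, but on a typical multicore part the falsely shared version runs noticeably slower because every increment invalidates the other core's copy of the line.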

Page 17

Keeping Track of the data

Full Map – historically the default approach. Keeps a bit for every cache in the system (with private caches, that implies a bit for every core). Doesn't scale, and requires extra memory writes to keep up to date.

Linked List – a list of the cache entries holding the same line; invalidates follow the list. Evicting from the middle of the list is complex.

Invalidate Rings – a path through all cores/caches that visits each one once.

Hierarchical Organization – use domain bits in the second-level tag directory, and core-valid bits in the first level (a storage sketch follows below).
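A rough sketch contrasting a full-map entry with the hierarchical scheme just described. The core count, domain size, and field layout are illustrative assumptions; the point is only the storage difference: a full map needs one bit per core per tracked line, while the hierarchical form needs domain bits at the second level plus core-valid bits only within a domain's first-level directory.

#include <stdint.h>
#include <stdio.h>

/* Illustrative directory entries for a hypothetical 128-core CMP
 * split into 8 domains of 16 cores. */

/* Full map: one presence bit per core, per tracked line. */
struct fullmap_entry {
    uint64_t present[2];        /* 128 presence bits */
};

/* Hierarchical: the second-level directory (TD2) keeps one bit per
 * domain; each domain's first-level directory (TD1) keeps one bit per
 * core inside that domain. */
struct td2_entry { uint8_t  domain_valid; };   /*  8 domain bits */
struct td1_entry { uint16_t core_valid;   };   /* 16 core bits   */

int main(void)
{
    printf("full map    : %zu bits per line\n", 8 * sizeof(struct fullmap_entry));
    printf("hierarchical: %zu bits (TD2) + %zu bits (TD1, per domain)\n",
           8 * sizeof(struct td2_entry), 8 * sizeof(struct td1_entry));
    return 0;
}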

[Figure: one core with L1I and L1D caches, its L2 with a victim buffer (Vic B) and miss address file (MAF), and the two tag-directory levels TD1 and TD2]

Page 18

Two-level Directory of Caches in a Mesh

[Figure: mesh of cores with memory on the edges – a read is serviced from a local neighbor's cache]

Page 19

Two-level Directory of Caches in a Mesh

[Figure: mesh of cores with memory on the edges – a read is serviced from another memory]

Page 20

Parallel Bioinformatics Workloads

Structure Learning:

•  GeneNet – Hill Climbing, Bayesian network learning

•  SNP – Hill Climbing, Bayesian network learning

•  SEMPHY – Structural Expectation Maximization algorithm

Optimization:

•  PLSA – Dynamic Programming

Recognition:

•  SVM-RFE – Feature Selection

OpenMP workloads developed by Intel Corporation

•  Donated to Northwestern University, NU-MineBench Suite

Next four slides from: [Jaleel: HPCA 2006]

Page 21

Data Sharing Behavior

[Charts (4-threaded runs): fraction of data shared and access frequency as the last-level cache grows from 4 MB to 64 MB, and cache misses broken out by thread (1–4), for PLSA, GeneNet, SEMPHY, SNP, and SVM]

•  Sharing is dependent on the algorithm and varies with cache size
•  The workloads fully utilize a 64 MB LLC
•  Reducing cache misses improves data sharing
•  Despite the size of the shared footprint, shared data is frequently referenced

Page 22

Shared/Private Cache – SEMPHY

SEMPHY with 4 threads: the shared cache outperforms the private caches.

[Charts: miss rate vs. total instructions (billions) for a private-cache configuration (16 MB total LLC, 4 MB/core) and a shared-cache configuration (16 MB total LLC)]

Page 23

Fin

Page 24

Scaling Example: Alpha EV5

•  EV5 was a 4-wide, in-order RISC processor.
•  Scale the process ten times, from 0.5 micron to 16 nm.
•  EV5 area is 16.5 x 18.1 mm² in 0.5-micron technology, including the pin interface and the 2nd-level cache.
   –  Moore's law suggests this would shrink by a factor of 2¹º, to about 0.3 mm², in 16 nm technology.
•  EV5 frequency is 300 MHz.
   –  Traditional frequency scaling of 1.4x every process generation leads to a frequency of about 9 GHz in 16 nm technology.
•  EV5 power is 50 W, at 3.3 V.
   –  Assuming a voltage of 0.5 V, and that the changes to C and f cancel out, the dynamic power becomes 50 W x (0.5/3.3)² ≈ 1.1 W.
•  Core count: 1024x
•  Frequency: 30x
•  Power: >20x
•  Other growth factors: interconnects, caches, memory, IO, multithreading, vectors, …

(A worked version of this arithmetic appears below.)
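A small C check of the arithmetic above; the scaling assumptions (area halving and 1.4x frequency per generation, V²-dominated dynamic power) are the slide's, not measured data.

/* build: cc -O2 ev5_scaling.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const int generations  = 10;            /* 0.5 micron -> 16 nm */
    const double area_mm2  = 16.5 * 18.1;   /* EV5 die area        */
    const double freq_mhz  = 300.0;         /* EV5 frequency       */
    const double power_w   = 50.0, v_old = 3.3, v_new = 0.5;

    double scaled_area  = area_mm2 / pow(2.0, generations);            /* ~0.3 mm² */
    double scaled_freq  = freq_mhz * pow(1.4, generations);            /* ~8.7 GHz */
    double scaled_power = power_w * (v_new / v_old) * (v_new / v_old); /* ~1.1 W   */
    double cores        = pow(2.0, generations);                       /* 1024     */

    printf("area per core : %.2f mm2\n", scaled_area);
    printf("frequency     : %.1f GHz\n", scaled_freq / 1000.0);
    printf("power per core: %.1f W -> chip: %.0f W (>20x the EV5's 50 W)\n",
           scaled_power, scaled_power * cores);
    return 0;
}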

Page 25

Single Stream, Moore’s Law, and CMP

CMP performance: Performance ∝ Transistor Count – as long as there are no system limitations!

Single-stream performance: Perf ≈ √(Transistor Count) – historically fairly accurate. Relating Moore's law to performance, per process generation: √(2T) x √2 = 2√T, where the extra √2 is the transistor speedup due to the technology shrink factor of 0.7.

[Chart: performance vs. transistor count for single-stream and CMP scaling – in the interesting core-design area the slopes are similar; the widening gap between the curves is what drives us to multi-core]
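Spelled out as a short derivation (a sketch of the slide's relation; the doubling of T per generation and the 0.7 shrink factor are the slide's assumptions):

\[
\mathrm{Perf} \approx \sqrt{T}
\;\;\Longrightarrow\;\;
\mathrm{Perf}_{\text{next gen}} \approx \underbrace{\sqrt{2T}}_{\text{2x transistors}} \times \underbrace{\sqrt{2}}_{\text{shrink speedup}} = 2\sqrt{T} = 2\,\mathrm{Perf},
\]
\[
\text{whereas a CMP with } \mathrm{Perf} \propto T \text{ turns the same } T \to 2T \text{ directly into } 2\times \text{ the throughput.}
\]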

Page 26

Example: Power Management with CMP, optimized for OLTP
