
CACHE BASICS


1977: DRAM faster than microprocessors


Since 1980, CPU has outpaced DRAM ...


How do architects address this gap?

• Programmers want unlimited amounts of memory with low latency

• Fast memory technology is more expensive per bit than slower memory

• Solution: organize the memory system into a hierarchy

– The entire addressable memory space is available in the largest, slowest memory

– Incrementally smaller and faster memories, each containing a subset of the memory below it, proceed in steps up toward the processor

• Temporal and spatial locality ensure that nearly all references can be found in the smaller memories

– Gives the illusion of a large, fast memory being presented to the processor


Memory Hierarchy


Memory Hierarchy Design

• Memory hierarchy design becomes more crucial with recent multi-core processors:

– Aggregate peak bandwidth grows with the number of cores:

• An Intel Core i7 can generate two references per core per clock

• Four cores at a 3.2 GHz clock give:

– 25.6 billion 64-bit data references/second, plus

– 12.8 billion 128-bit instruction references/second

– = 409.6 GB/s!

• DRAM bandwidth is only 6% of this (25 GB/s)

• Requires:

– Multi-port, pipelined caches

– Two levels of cache per core

– A shared third-level cache on chip
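As a sanity check, the 409.6 GB/s figure can be re-derived from the numbers quoted above; a minimal C sketch:

#include <stdio.h>

int main(void) {
    double clock_ghz  = 3.2;                          /* 3.2 GHz clock           */
    int    cores      = 4;                            /* four cores              */
    double data_refs  = cores * clock_ghz * 2 * 1e9;  /* 2 data refs/core/clock  */
    double instr_refs = cores * clock_ghz * 1 * 1e9;  /* 1 instr ref/core/clock  */

    /* 64-bit data = 8 bytes each; 128-bit instructions = 16 bytes each */
    double gbs = (data_refs * 8 + instr_refs * 16) / 1e9;
    printf("%.1fG data refs/s + %.1fG instr refs/s = %.1f GB/s\n",
           data_refs / 1e9, instr_refs / 1e9, gbs);   /* 25.6, 12.8, 409.6 */
    return 0;
}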


Memory Hierarchy Basics

• When a word is not found in the cache, a miss occurs:

– Fetch the word from the lower level in the hierarchy, requiring a higher-latency reference

– The lower level may be another cache or the main memory

– Also fetch the other words contained within the block

• Takes advantage of spatial locality

– Place the block into the cache in any location within its set, determined by the address

• block address MOD number of sets


Locality

• A principle that makes memory hierarchy a good idea

• If an item is referenced

– Temporal locality: it will tend to be referenced again soon

– Spatial locality: nearby items will tend to be referenced soon

• Our initial focus: two levels (upper, lower)

– Block: minimum unit of data

– Hit: data requested is in the upper level

– Miss: data requested is not in the upper level


Memory Hierarchy Basics

• Note that speculative and multithreaded processors may execute other instructions during a miss

– Reduces the performance impact of misses


Cache

• For each item of data at the lower level, there is exactly one location in the cache where it might be

– i.e., many items at the lower level share locations in the upper level

• Two issues

– How do we know if a data item is in the cache?

– If it is, how do we find it?

• Our first example

– Block size is one word of data

– "Direct mapped"

Direct mapped cache

[Figure: an 8-block direct-mapped cache (indices 000-111) alongside memory; memory block addresses 00001, 00101, 01001, 01101, 10001, 10101, 11001, 11101 all map to cache index 001.]

• Mapping

– Cache address is Memory address modulo the number of blocks in the cache

– (Block address) modulo (#Blocks in cache)
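A minimal C sketch of this mapping, using the 8-block cache and the block addresses from the figure (00001 = 1, 00101 = 5, ..., 11101 = 29):

#include <stdio.h>

#define NUM_BLOCKS 8   /* cache size in blocks, as in the figure */

/* Direct-mapped placement: (block address) modulo (#blocks in cache) */
static unsigned cache_index(unsigned block_addr) {
    return block_addr % NUM_BLOCKS;
}

int main(void) {
    unsigned addrs[] = {1, 5, 9, 13, 17, 21, 25, 29};
    for (int i = 0; i < 8; i++)     /* every address prints index 1 (binary 001) */
        printf("block %2u -> cache index %u\n", addrs[i], cache_index(addrs[i]));
    return 0;
}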

Direct mapped cache

• What kind of locality are we taking advantage of?


Direct mapped cache

• Taking advantage of spatial locality

• (16KB cache, 256 Blocks, 16 words/block)


Block Size vs. Performance


Block Size vs. Cache Measures

• Increasing Block Size generally increases Miss Penalty and decreases Miss Rate

[Figure: Miss Rate and Miss Penalty each plotted against Block Size; their product gives Avg. Memory Access Time vs. Block Size.]


Four Questions for Memory Hierarchy Designers

• Q1: Where can a block be placed in the upper level? (Block placement)

• Q2: How is a block found if it is in the upper level? (Block identification)

• Q3: Which block should be replaced on a miss? (Block replacement)

• Q4: What happens on a write? (Write strategy)


Q1: Where can a block be placed in the upper level?

• Direct Mapped: Each block has only one place that it can appear in the cache.

• Fully associative: Each block can be placed anywhere in the cache.

• Set associative: Each block can be placed in a restricted set of places in the cache.

– If there are n blocks in a set, the cache placement is called n-way set associative


Associativity Examples

Fully associative: Block 12 can go anywhere

Direct mapped: Block no. = (Block address) mod (No. of blocks in cache); Block 12 can go only into block 4 (12 mod 8)

Set associative: Set no. = (Block address) mod (No. of sets in cache); Block 12 can go anywhere in set 0 (12 mod 4)
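The same block-12 example worked in C for all three placement policies (an 8-block cache, giving 4 sets when 2-way set associative):

#include <stdio.h>

int main(void) {
    unsigned block_addr = 12;
    unsigned num_blocks = 8;   /* cache size in blocks        */
    unsigned num_sets   = 4;   /* 8 blocks, 2-way associative */

    printf("direct mapped:   block %u\n", block_addr % num_blocks); /* 12 mod 8 = 4 */
    printf("set associative: set %u\n",   block_addr % num_sets);   /* 12 mod 4 = 0 */
    printf("fully associative: any of the %u blocks\n", num_blocks);
    return 0;
}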


Direct Mapped Cache


2-Way Set Associative Cache


Fully Associative Cache


An implementation of a four-way set associative cache


Performance


Q2: How Is a Block Found If It Is in the Upper Level?

• The address can be divided into two main parts

– Block offset: selects the data from the block; offset size = log2(block size)

– Block address: tag + index

• index: selects the set in the cache; index size = log2(#blocks/associativity)

• tag: compared to the tag in the cache to determine a hit; tag size = address size - index size - offset size

[Address layout: Tag | Index | Block offset]
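A sketch of the address split in C. The geometry here (32-byte blocks, 128 blocks, 2-way set associative, 32-bit addresses) is hypothetical, chosen only so the log2 sizes come out to whole bits:

#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 5u   /* log2(32-byte block size)                 */
#define INDEX_BITS  6u   /* log2(128 blocks / 2-way) = log2(64 sets) */

int main(void) {
    uint32_t addr   = 0x1234ABCDu;                          /* arbitrary address */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);     /* low 5 bits        */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);   /* 32 - 6 - 5 = 21 bits */

    printf("tag=0x%x index=%u offset=%u\n", tag, index, offset);
    return 0;
}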


Q3: Which Block Should be Replaced on a Miss?

• Easy for Direct Mapped

• Set Associative or Fully Associative:

– Random: easier to implement

– Least Recently Used (LRU): harder to implement; may be approximated

• Miss rates for caches with different size, associativity, and replacement algorithm:

Associativity:      2-way            4-way            8-way
Size       LRU     Random    LRU     Random    LRU     Random
16 KB      5.18%   5.69%     4.67%   5.29%     4.39%   4.96%
64 KB      1.88%   2.01%     1.54%   1.66%     1.39%   1.53%
256 KB     1.15%   1.17%     1.13%   1.13%     1.12%   1.12%

For caches with low miss rates, random is almost as good as LRU.


Q4: What Happens on a Write?


Q4: What Happens on a Write?

• Since data does not have to be brought into the cache on a write miss, there are two options:

– Write allocate

• The block is brought into the cache on a write miss

• Used with write-back caches

• The hope is that subsequent writes to the block hit in the cache

– No-write allocate

• The block is modified in memory, but not brought into the cache

• Used with write-through caches

• Writes have to go to memory anyway, so why bring the block into the cache?


Hits vs. misses

• Read hits

– This is what we want!

• Read misses

– Stall the CPU, fetch block from memory, deliver to cache, restart

• Write hits

– Can replace data in cache and memory (write-through)

– Write the data only into the cache (write-back the cache later)

• Write misses

– Read the entire block into the cache, then write the word


Cache Misses

• On cache hit, CPU proceeds normally

• On cache miss

– Stall the CPU pipeline

– Fetch block from next level of hierarchy

– Instruction cache miss

• Restart instruction fetch

– Data cache miss

• Complete data access


Cache Measures

• Hit rate: fraction of accesses found in the cache

– Miss rate = 1 - Hit rate

• Hit time: time to access the cache

• Miss penalty: time to replace a block from the lower level

– access time: time to access the lower level

– transfer time: time to transfer the block

CPU time = (CPU execution cycles + Memory stall cycles) * Cycle time

Memory stall cycles = (Memory accesses / Program) * Miss rate * Miss penalty
                    = (Instructions / Program) * (Misses / Instruction) * Miss penalty
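To make the stall-cycle formula concrete, a small C sketch; all of the workload numbers below are hypothetical illustration values, not from the slides:

#include <stdio.h>

int main(void) {
    double instructions     = 1e9;     /* instructions in the program        */
    double base_cpi         = 1.0;     /* CPU execution cycles / instruction */
    double misses_per_instr = 0.02;    /* misses / instruction               */
    double miss_penalty     = 100.0;   /* cycles per miss                    */
    double cycle_time       = 0.5e-9;  /* seconds per cycle (2 GHz)          */

    double stall_cycles = instructions * misses_per_instr * miss_penalty;
    double cpu_time     = (instructions * base_cpi + stall_cycles) * cycle_time;
    printf("stall cycles = %.2e, CPU time = %.2f s\n", stall_cycles, cpu_time);
    return 0;    /* 2e9 stall cycles; (1e9 + 2e9) * 0.5 ns = 1.5 s */
}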


Improving Cache Performance

• Average memory-access time = Hit time + Miss rate x Miss penalty

• Improve performance by:

1. Reducing the miss rate,

2. Reducing the miss penalty, or

3. Reducing the time to hit in the cache.
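The average memory-access time formula in runnable form; the hit time, miss rate, and miss penalty here are made-up illustration values:

#include <stdio.h>

int main(void) {
    double hit_time = 1.0, miss_rate = 0.05, miss_penalty = 100.0;  /* cycles, fraction, cycles */
    /* AMAT = Hit time + Miss rate * Miss penalty = 1 + 0.05 * 100 = 6 cycles */
    printf("AMAT = %.1f cycles\n", hit_time + miss_rate * miss_penalty);
    return 0;
}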


Types of misses

• Compulsory

– Very first access to a block (cold-start miss)

• Capacity

– Cache cannot contain all blocks needed

• Conflict

– Too many blocks mapped onto the same set


How do you solve them?

• Compulsory misses?

– Larger blocks, with a side effect!

• Capacity misses?

– Not many options: enlarge the cache, otherwise face "thrashing!", where the computer runs at the speed of the lower memory or slower

• Conflict misses?

– A fully associative cache, at a cost in hardware, and it may slow the processor!


Basic cache optimizations:

– Larger block size

• Reduces compulsory misses

• Increases capacity and conflict misses; increases miss penalty

– Larger total cache capacity, to reduce miss rate

• Increases hit time; increases power consumption

– Higher associativity

• Reduces conflict misses

• Increases hit time; increases power consumption

– Higher number of cache levels

• Reduces overall memory access time

– Giving priority to read misses over writes

• Reduces miss penalty


Other Optimizations: Victim Cache

• Add a small fully associative victim cache to hold data discarded from the regular cache

• When data is not found in the cache, check the victim cache

• A 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache

• Gives the access time of direct mapped with a reduced miss rate


Other Optimizations: Reducing Misses by HW Prefetching of Instruction & Data

• E.g., instruction prefetching

– The Alpha 21064 fetches 2 blocks on a miss

– The extra block is placed in a stream buffer

– On a miss, check the stream buffer

– Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%

• Works with data blocks too:

– Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of misses from two 64KB, 4-way set associative caches

• Prefetching relies on extra memory bandwidth that can be used without penalty


Other Optimizations: Reducing Misses by Compiler Optimizations

• Instructions

– Reorder procedures in memory so as to reduce misses

– Use profiling to look at conflicts

– McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks

• Data

– Merging Arrays: improve spatial locality with a single array of compound elements vs. 2 arrays

– Loop Interchange: change the nesting of loops to access data in the order it is stored in memory

– Loop Fusion: combine 2 independent loops that have the same looping and some variable overlap

– Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows


Merging Arrays Example

• Problem: referencing multiple arrays in the same dimension, with the same index, at the same time can lead to conflict misses.

• Solution: merge the independent arrays into a compound array.

/* Before */
int val[SIZE];
int key[SIZE];

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];


Miss Rate Reduction Techniques: Compiler Optimizations

– Loop Interchange
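A representative sketch of the transformation in C, assuming the classic row-major example x[i][j] = 2 * x[i][j] (the array bounds here are illustrative):

#define M 5000
#define N 100
double x[M][N];

/* Before: the inner loop strides through memory N doubles at a time */
void before(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < M; i++)
            x[i][j] = 2 * x[i][j];
}

/* After interchange: consecutive elements are touched in order,
   improving spatial locality without changing the result */
void after(void) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < N; j++)
            x[i][j] = 2 * x[i][j];
}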


Miss Rate Reduction Techniques: Compiler Optimizations

– Loop Fusion
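A representative sketch in C (the arrays and expressions are illustrative): two independent loops with the same bounds are combined so a[i] and c[i] are reused while still in the cache.

#define N 10000
double a[N], b[N], c[N], d[N];

/* Before: two loops traverse a and c twice */
void before(void) {
    for (int i = 0; i < N; i++) a[i] = 1.0 / b[i] * c[i];
    for (int i = 0; i < N; i++) d[i] = a[i] + c[i];
}

/* After fusion: a[i] and c[i] are still cached from the line above */
void after(void) {
    for (int i = 0; i < N; i++) {
        a[i] = 1.0 / b[i] * c[i];
        d[i] = a[i] + c[i];
    }
}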


Blocking

• Problem: when accessing multiple multi-dimensional arrays (e.g., for matrix multiplication), capacity misses occur if not all of the data can fit into the cache.

• Solution: divide the matrix into smaller submatrices (or blocks) that can fit within the cache.

• The size of the block chosen depends on the size of the cache.

• Blocking can only be used for certain types of algorithms.
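A minimal blocked matrix-multiply sketch in C (C = C + A*B); the matrix size and the blocking factor BF are illustrative, with BF chosen so the submatrices being worked on fit in the cache:

#define N  512
#define BF 32               /* blocking factor; tune to the cache size */
double A[N][N], B[N][N], C[N][N];

void matmul_blocked(void) {
    for (int jj = 0; jj < N; jj += BF)
        for (int kk = 0; kk < N; kk += BF)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + BF; j++) {
                    double sum = C[i][j];            /* partial sum so far */
                    for (int k = kk; k < kk + BF; k++)
                        sum += A[i][k] * B[k][j];
                    C[i][j] = sum;
                }
}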

Summary of Compiler Optimizations to Reduce Cache Misses

[Figure: bar chart of performance improvement (roughly 1x to 3x) from merged arrays, loop interchange, loop fusion, and blocking, across the benchmarks compress, cholesky (nasa7), spice, mxm (nasa7), btrix (nasa7), tomcatv, gmty (nasa7), and vpenta (nasa7).]


Decreasing miss penalty with multi-level caches

• Add a second-level cache:

– Often the primary cache is on the same chip as the processor

– Use SRAMs to add another cache above primary memory (DRAM)

– Miss penalty goes down if the data is in the 2nd-level cache

• Using multilevel caches:

– Try to optimize the hit time on the 1st-level cache

– Try to optimize the miss rate on the 2nd-level cache


Multilevel Caches

• Primary cache attached to CPU

– Small, but fast

• Level-2 cache services misses from primary cache

– Larger, slower, but still faster than main memory

• Main memory services L-2 cache misses

• Some high-end systems include L-3 cache


Virtual Memory

• Use main memory as a "cache" for secondary (disk) storage

– Managed jointly by CPU hardware and the operating system (OS)

• Programs share main memory

– Each gets a private virtual address space holding its frequently used code and data

– Protected from other programs

• CPU and OS translate virtual addresses to physical addresses

– A VM "block" is called a page

– A VM translation "miss" is called a page fault


Virtual Memory

• Main memory can act as a cache for the secondary storage (disk)

• Advantages:

– Illusion of having more physical memory

– Program relocation

– Protection

[Figure: virtual addresses pass through address translation to physical addresses in main memory or to disk addresses.]


Pages: virtual memory blocks

• Page faults: the data is not in memory; retrieve it from disk

– Huge miss penalty, thus pages should be fairly large (e.g., 4KB)

• What type of placement? (direct mapped, set associative, or fully associative)

– Reducing page faults is important (LRU is worth the price)

– Faults can be handled in software instead of hardware

– Using write-through is too expensive, so we use write-back

[Figure: a 32-bit virtual address splits into a virtual page number (bits 31-12) and a page offset (bits 11-0); translation replaces the virtual page number with a physical page number (bits 29-12), keeping the offset. Analogy: book title -> library location.]
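A toy translation sketch in C, assuming the 4 KB pages shown above (12-bit offset); the flat page_table array and the mapping in it are hypothetical, and valid/protection bits are omitted:

#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12u          /* 4 KB pages                           */
#define NUM_PAGES   16u          /* tiny address space for illustration  */

static uint32_t page_table[NUM_PAGES];   /* virtual page -> physical page */

static uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;
    uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);
    /* a real system checks a valid bit here; if it is clear, a page fault occurs */
    return (page_table[vpn] << OFFSET_BITS) | offset;
}

int main(void) {
    page_table[1] = 7;           /* map virtual page 1 to physical page 7 */
    printf("0x%05x -> 0x%05x\n", 0x1ABCu, translate(0x1ABCu));  /* 0x01abc -> 0x07abc */
    return 0;
}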


Page Tables (Fully Associative Search Time)


A Program’s State

• Page Table

• PC

• Registers


Page Tables


Page Faults

• Replacement policy

• Handle with hardware or software?

– External memory access time is large relative to a software-based solution

• LRU

– Costly to keep track of every page

– Mechanism?

• Keep refreshing a use (reference) bit per page


Making Address Translation Fast

• Page tables live in memory

• A program's memory access thus takes twice as long:

– Obtain the physical address

– Get the data

• Make use of locality of reference

– Temporal and spatial (words in a page)

• Solution: a special cache

– Keeps track of recently used translations

– Translation Lookaside Buffer (TLB)

• A translation cache

• Like the piece of paper where you record the location of books you need from the library


Making Address Translation Fast

• A cache for address translations: translation lookaside buffer

Typical values: 16-512 entries; miss rate: 0.01% - 1%; miss penalty: 10 - 100 cycles


TLBs and caches

[Flowchart: handling a memory reference with a TLB and a cache]

Virtual address -> TLB access

– TLB miss -> TLB miss exception

– TLB hit -> physical address

Read: try to read the data from the cache

– Cache hit -> deliver the data to the CPU

– Cache miss -> stall while the block is read

Write: is the write access bit on?

– No -> write protection exception

– Yes -> try to write the data to the cache

• Cache hit -> write the data into the cache, update the dirty bit, and put the data and the address into the write buffer

• Cache miss -> stall while the block is read


TLBs and Caches


Some Issues

• Processor speeds continue to increase very fast, much faster than either DRAM or disk access times

• Design challenge: dealing with this growing disparity

– Prefetching? 3rd-level caches and more? Memory design?

[Figure: relative performance vs. year on a log scale (1 to 100,000), with CPU performance diverging from memory performance.]


Memory Technology

• Performance metrics

– Latency is the concern of caches

– Bandwidth is the concern of multiprocessors and I/O

– Access time

• The time between a read request and when the desired word arrives

• DRAM is used for main memory; SRAM is used for caches


Latches and Flip-flops

[Figure: D latch with inputs D and C and outputs Q and _Q.]


Latches and Flip-flops

[Figure: D flip-flop built from two D latches in series, clocked on opposite phases of C, with outputs Q and _Q.]


Latches and Flip-flops

Latches: the output changes whenever the inputs change and the clock is asserted.

Flip-flops: the state changes only on a clock edge (edge-triggered methodology).


SRAM


SRAM vs. DRAM

Which one has better memory density? Which one is faster?

• Static RAM (SRAM): the value stored in a cell is kept on a pair of inverting gates

• Dynamic RAM (DRAM): the value kept in a cell is stored as a charge on a capacitor

• DRAMs use only a single transistor per bit of storage; by comparison, SRAMs require four to six transistors per bit

• In DRAMs, the charge is stored on a capacitor, so it cannot be kept indefinitely and must periodically be refreshed (hence "dynamic")

– Roughly every 8 ms

– Each row can be refreshed simultaneously

– A value must be rewritten after being read


Memory Technology

• Amdahl:

– Memory capacity should grow linearly with processor speed (this trend held for about 20 years)

– Unfortunately, memory capacity and speed have not kept pace with processors

– Fourfold improvement every 3 years (originally)

– Capacity only doubled from 2006 to 2010


Memory Optimizations


Memory Technology

• Some optimizations:

– Synchronous DRAM

• Added a clock to the DRAM interface

• Burst mode with critical word first

– Wider interfaces

• 4-bit transfer mode originally

• In 2010, up to 16-bit buses

– Double data rate (DDR)

• Transfer data on both the rising and falling clock edges


Memory Optimizations


Memory Optimizations

• DDR:

– DDR2

• Lower power (2.5 V -> 1.8 V)

• Higher clock rates (266 MHz, 333 MHz, 400 MHz)

– DDR3

• 1.5 V

• 800 MHz

– DDR4 (scheduled for production in 2014)

• 1-1.2 V

• 1600 MHz

• GDDR5 is graphics memory based on DDR3


Memory Optimizations

• Graphics memory:

– Achieves 2-5x the bandwidth per DRAM of DDR3

• Wider interfaces (32 vs. 16 bit)

• Higher clock rate

– Possible because the chips are attached by soldering instead of in socketed Dual Inline Memory Modules (DIMMs)

• Reducing power in SDRAMs:

– Lower voltage

– Low-power mode (ignores the clock, continues to refresh)


Virtual Machines

• First developed in the 1960s

• Regained popularity recently

– Need for isolation and security in modern systems

– Failures in the security and reliability of standard operating systems

– Sharing of a single computer among many unrelated users (datacenter, cloud)

– Dramatic increase in the raw speed of processors

• The overhead of VMs is now more acceptable


Virtual Machines

• Emulation methods that provide a standard software interface

– IBM VM/370, VMware ESX Server, Xen

• Create the illusion of having an entire computer to yourself, including a copy of the OS

• Allow different ISAs and operating systems to be presented to user programs

– "System Virtual Machines"

– The SVM software is called the "virtual machine monitor" or "hypervisor"

– Individual virtual machines that run under the monitor are called "guest VMs"


Impact of VMs on Virtual Memory

• Each guest OS maintains its own set of page tables

– The VMM adds a level of memory between physical and virtual memory called "real memory"

– The VMM maintains a shadow page table that maps guest virtual addresses to physical addresses

• Requires the VMM to detect the guest's changes to its own page table

• Occurs naturally if accessing the page table pointer is a privileged operation

Assume 75% instruction, 25% data access



Cost of Misses, CPU time




Example

• CPI of 1.0 on a 5 GHz machine with a 2% miss rate and 100 ns main memory access time

• Adding a 2nd-level cache with 5 ns access time decreases the miss rate to 0.5%

• How much faster is the new configuration?

With only the primary cache:

Miss penalty = 100 ns / (0.2 ns/clock cycle) = 500 clock cycles

Total CPI = Base CPI + Memory stall cycles per instruction
          = 1.0 + 2% * 500 = 11.0

With the 2nd-level cache (5 ns / 0.2 ns/clock cycle = 25 clock cycles):

Total CPI = 1 + Primary stalls per instruction + Secondary stalls per instruction
          = 1 + 2% * 25 + 0.5% * 500
          = 1 + 0.5 + 2.5 = 4.0

Speedup = 11.0 / 4.0 = 2.8
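The same arithmetic as a quick C check (all constants are taken from the example above):

#include <stdio.h>

int main(void) {
    double cycle_ns = 1.0 / 5.0;           /* 5 GHz -> 0.2 ns per cycle */
    double mem_pen  = 100.0 / cycle_ns;    /* 100 ns -> 500 cycles      */
    double l2_pen   = 5.0 / cycle_ns;      /* 5 ns   -> 25 cycles       */

    double cpi_l1   = 1.0 + 0.02 * mem_pen;                   /* 11.0 */
    double cpi_l1l2 = 1.0 + 0.02 * l2_pen + 0.005 * mem_pen;  /* 4.0  */

    printf("CPI = %.1f vs %.1f, speedup = %.2f\n",
           cpi_l1, cpi_l1l2, cpi_l1 / cpi_l1l2);              /* 2.75, i.e. ~2.8 */
    return 0;
}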