TRANSCRIPT
Lec21.1
CS152 Computer Architecture and Engineering
Lecture 21
Caches (Con’t) and Virtual Memory
Lec21.2
The Big Picture: Where are We Now?
° The Five Classic Components of a Computer
[Figure: Processor (Control, Datapath), Memory, Input, Output]
° Today’s Topics:
• Recap last lecture
• Virtual Memory
• Protection
• TLB
• Buses
Lec21.3
How Do You Design a Memory System?
° Set of operations that must be supported:
• read: Data <= Mem[Physical Address]
• write: Mem[Physical Address] <= Data
° Determine the internal register transfers
° Design the Datapath
° Design the Cache Controller
[Figure: Memory “Black Box” with Physical Address, Read/Write, and Data ports. Inside: a Cache Controller (control points, wait signal) driving a Cache DataPath (Address, Data In, Data Out, R/W, Active) built from tag/data storage, muxes, comparators, ...]
Lec21.4
Impact on Cycle Time
[Figure: pipeline with the I-Cache feeding PC/IR and the D-Cache in the memory stage (IRex, IRm, IRwb); cache misses and invalid accesses stall the pipeline.]
° Cache Hit Time:
• directly tied to clock rate
• increases with cache size
• increases with associativity
Average Memory Access Time = Hit Time + Miss Rate x Miss Penalty
Time = IC x CT x (ideal CPI + memory stalls)
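To make the two formulas concrete, here is a minimal C sketch that evaluates them; every numeric input below is a made-up sample value, not a figure from the lecture.

#include <stdio.h>

int main(void) {
    /* Assumed sample values -- not from the lecture. */
    double hit_time     = 1.0;    /* cycles */
    double miss_rate    = 0.05;   /* 5% of accesses miss */
    double miss_penalty = 50.0;   /* cycles to service a miss */

    /* AMAT = Hit Time + Miss Rate x Miss Penalty */
    double amat = hit_time + miss_rate * miss_penalty;

    double ic        = 1e9;   /* instruction count (assumed) */
    double ct        = 1e-9;  /* cycle time: 1 ns (assumed) */
    double ideal_cpi = 1.0;
    double refs_per_instr = 1.33;  /* 1 fetch + 0.33 loads/stores */

    /* memory stalls per instr = refs/instr x miss rate x penalty */
    double stalls = refs_per_instr * miss_rate * miss_penalty;

    /* Time = IC x CT x (ideal CPI + memory stalls) */
    double time = ic * ct * (ideal_cpi + stalls);

    printf("AMAT = %.2f cycles\n", amat);   /* 3.50 */
    printf("CPU time = %.3f s\n", time);    /* 4.325 */
    return 0;
}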
Lec21.5
Improving Cache Performance: 3 General Options
Time = IC x CT x (ideal CPI + memory stalls)
Average Memory Access Time = Hit Time + (Miss Rate x Miss Penalty)
                           = (Hit Rate x Hit Time) + (Miss Rate x Miss Time)
Options to reduce AMAT:
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Lec21.6
Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Lec21.7
3Cs Absolute Miss Rate (SPEC92)
[Figure: absolute miss rate per type (Conflict, Capacity, Compulsory) vs. cache size from 1 KB to 128 KB; conflict misses shown for 1-way, 2-way, 4-way, and 8-way associativity. Miss-rate axis runs 0 to 0.14.]
Lec21.8
2:1 Cache Rule
miss rate of a 1-way associative cache of size X
  = miss rate of a 2-way associative cache of size X/2
[Figure: the same 3Cs miss-rate breakdown vs. cache size (1 KB to 128 KB), used to illustrate the rule.]
Lec21.9
3Cs Relative Miss Rate
[Figure: miss rate per type as a percentage of the total (0% to 100%) vs. cache size from 1 KB to 128 KB, for 1-way through 8-way associativity; Conflict, Capacity, and Compulsory components.]
Lec21.10
1. Reduce Misses via Larger Block Size
[Figure: miss rate (0% to 25%) vs. block size (16 to 256 bytes) for cache sizes 1K, 4K, 16K, 64K, and 256K; larger blocks reduce the miss rate until the block size becomes large relative to the cache.]
Lec21.11
2. Reduce Misses via Higher Associativity
° 2:1 Cache Rule:
• Miss Rate of a direct-mapped cache of size N = Miss Rate of a 2-way cache of size N/2
° Beware: execution time is the only final measure!
• Will clock cycle time increase?
• Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%
Lec21.12
Example: Avg. Memory Access Time vs. Miss Rate
° Assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. CCT of direct mapped

Cache Size (KB)   1-way   2-way   4-way   8-way
      1           2.33    2.15    2.07    2.01
      2           1.98    1.86    1.76    1.68
      4           1.72    1.67    1.61    1.53
      8           1.46    1.48    1.47    1.43
     16           1.29    1.32    1.32    1.32
     32           1.20    1.24    1.25    1.27
     64           1.14    1.20    1.21    1.23
    128           1.10    1.17    1.18    1.20

(In the original slide, red marked A.M.A.T. values not improved by more associativity.)
Lec21.13
3. Reducing Misses via a “Victim Cache”
° How to combine the fast hit time of direct mapped yet still avoid conflict misses?
° Add a small buffer to hold data discarded from the cache
° Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
° Used in Alpha, HP machines
[Figure: a direct-mapped cache (tags + data) backed by a small fully associative victim cache (a few lines, each with its own tag and comparator), which connects to the next lower level in the hierarchy.]
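A minimal C sketch of the victim-cache lookup path. The 4-entry size follows Jouppi [1990]; the line count, block size, and swap-on-hit detail are illustrative assumptions, not a specific machine's design.

#include <stdint.h>
#include <stdbool.h>

#define NLINES 128   /* direct-mapped cache lines (assumed) */
#define NVICTIM 4    /* victim cache entries, per Jouppi [1990] */

typedef struct { bool valid; uint32_t tag; uint8_t data[32]; } Line;

static Line cache[NLINES];
static Line victim[NVICTIM];

/* Returns true on a hit in either the main cache or the victim cache. */
bool lookup(uint32_t addr) {
    uint32_t index = (addr >> 5) % NLINES;   /* 32-byte blocks (assumed) */
    uint32_t tag   = addr >> 5;              /* full block address as tag */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                         /* fast direct-mapped hit */

    /* Miss in the main cache: search the small fully associative
       victim cache (sequential here; parallel comparators in HW). */
    for (int i = 0; i < NVICTIM; i++) {
        if (victim[i].valid && victim[i].tag == tag) {
            /* Swap: promote the victim line into the main cache and
               demote the conflicting line into the victim cache. */
            Line tmp = cache[index];
            cache[index] = victim[i];
            victim[i] = tmp;
            return true;                     /* hit, at a small extra cost */
        }
    }
    return false;                            /* true miss: go to next level */
}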
Lec21.14
4. Reducing Misses by Hardware Prefetching
° E.g., Instruction Prefetching:
• Alpha 21064 fetches 2 blocks on a miss
• Extra block placed in a “stream buffer”
• On a miss, check the stream buffer
° Works with data blocks too:
• Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%
• Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set-associative caches
° Prefetching relies on having extra memory bandwidth that can be used without penalty
• Could reduce performance if done indiscriminately!!!
Lec21.15
5. Reducing Misses by Software Prefetching Data
° Data Prefetch:
• Load data into a register (HP PA-RISC loads)
• Cache Prefetch: load into the cache (MIPS IV, PowerPC, SPARC v.9)
• Special prefetching instructions cannot cause faults; a form of speculative execution
° Issuing prefetch instructions takes time:
• Is the cost of issuing prefetches < the savings in reduced misses?
• Wider superscalar issue reduces the difficulty of finding the issue bandwidth
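A small C illustration of the idea using GCC/Clang’s __builtin_prefetch, which compiles to a non-faulting cache-prefetch instruction on targets that have one; the prefetch distance of 8 iterations is an arbitrary tuning assumption.

/* Software data prefetching: request array elements a fixed distance
   ahead of their use, so the loads below tend to hit in the cache.
   Like the prefetch instructions above, the hint cannot fault. */
#define DIST 8   /* prefetch distance in iterations (tuning assumption) */

double sum(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[i + DIST], /*rw=*/0, /*locality=*/1);
        s += a[i];   /* this load should now hit in the cache */
    }
    return s;
}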
Lec21.16
6. Reducing Misses by Compiler Optimizations
° McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks, in software
° Instructions:
• Reorder procedures in memory so as to reduce conflict misses
• Profiling to look at conflicts (using tools they developed)
° Data (loop transformations sketched below):
• Merging Arrays: improve spatial locality with a single array of compound elements vs. 2 arrays
• Loop Interchange: change the nesting of loops to access data in the order it is stored in memory
• Loop Fusion: combine 2 independent loops that have the same looping and some overlapping variables
• Blocking: improve temporal locality by accessing “blocks” of data repeatedly vs. going down whole columns or rows
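Two of the four data transformations, loop interchange and blocking, as minimal C sketches; the matrix size N and blocking factor B are illustrative assumptions.

#define N 512
#define B 32            /* blocking factor (tuning assumption) */
double x[N][N], y[N][N], z[N][N];

/* Loop interchange: C stores x row-major, so making j the inner loop
   walks memory sequentially (stride 1) instead of stride N. */
void interchange(void) {
    for (int i = 0; i < N; i++)        /* was: for j { for i { ... } } */
        for (int j = 0; j < N; j++)
            x[i][j] = 2.0 * x[i][j];
}

/* Blocking: compute x = x + y*z on B x B sub-blocks so each block
   of y and z stays in the cache while it is being reused. */
void blocked_matmul(void) {
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double r = 0.0;
                    for (int k = kk; k < kk + B; k++)
                        r += y[i][k] * z[k][j];
                    x[i][j] += r;      /* accumulate this block's part */
                }
}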
Lec21.17
Improving Cache Performance (Continued)
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
Lec21.18
0. Reducing Penalty: Faster DRAM / Interface
° New DRAM Technologies:
• RAMBUS - same initial latency, but much higher bandwidth
• Synchronous DRAM
• TMJ-RAM (Tunneling magnetic-junction RAM) from IBM??
• Merged DRAM/Logic - IRAM project here at Berkeley
° Better BUS interfaces
° CRAY Technique: only use SRAM
Lec21.19
1. Reducing Penalty: Read Priority over Write on Miss
° A Write Buffer is needed between the Cache and Memory:
• Processor: writes data into the cache and the write buffer
• Memory controller: writes the contents of the buffer to memory
° The write buffer is just a FIFO:
• Typical number of entries: 4
• Works fine if: store frequency (w.r.t. time) << 1 / DRAM write cycle
• Must handle burst behavior as well!
[Figure: Processor and Cache feed a Write Buffer, which drains to DRAM.]
Lec21.20
RAW Hazards from Write Buffer!
° Write-buffer issue: could introduce a RAW hazard with memory!
• The write buffer may contain the only copy of valid data;
  reads to memory may get the wrong result if we ignore the write buffer
° Solutions (the second is sketched below):
• Simply wait for the write buffer to empty before servicing reads:
- Might increase the read miss penalty (old MIPS 1000: by 50%)
• Check write buffer contents before the read (“fully associative”):
- If no conflicts, let the memory access continue
- Else grab the data from the buffer
° Can the Write Buffer help with Write Back?
• Read miss replacing a dirty block:
- Copy the dirty block to the write buffer while starting the read to memory
• The CPU stalls less since it restarts as soon as the read is done
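A minimal C sketch of the associative write-buffer check on a read. The 4-entry depth matches the slide; the word granularity, names, and the DRAM stand-in are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

#define WB_ENTRIES 4            /* typical write-buffer depth (slide) */

typedef struct { bool valid; uint32_t addr; uint32_t data; } WBEntry;
static WBEntry wb[WB_ENTRIES];  /* FIFO of pending stores, oldest first */

static uint32_t dram[1024];     /* stand-in for main memory */
static uint32_t mem_read(uint32_t addr) { return dram[addr % 1024]; }

/* Read miss: before going to DRAM, associatively check every pending
   store. A match means the buffer holds the only valid copy (the RAW
   hazard); forward it instead of reading stale memory. */
uint32_t read_with_wb_check(uint32_t addr) {
    for (int i = WB_ENTRIES - 1; i >= 0; i--)   /* newest entry wins */
        if (wb[i].valid && wb[i].addr == addr)
            return wb[i].data;                  /* forward from buffer */
    return mem_read(addr);                      /* no conflict: use DRAM */
}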
Lec21.21
2. Reduce Penalty: Early Restart and Critical Word First
° Don’t wait for the full block to be loaded before restarting the CPU:
• Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
• Critical Word First: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first.
° Generally useful only with large blocks
° Spatial locality is a problem: the CPU tends to want the next sequential word, so it is not clear there is a benefit from early restart
Lec21.22
3. Reduce Penalty: Non-blocking Caches
° A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
• requires extra bits on registers or out-of-order execution
• requires multi-bank memories
° “hit under miss” reduces the effective miss penalty by working during the miss vs. ignoring CPU requests
° “hit under multiple miss” or “miss under miss” may further lower the effective miss penalty by overlapping multiple misses
• Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
• Requires multiple memory banks (otherwise it cannot be supported)
• Pentium Pro allows 4 outstanding memory misses
Lec21.23
What happens on a Cache miss?
° For an in-order pipeline, 2 options:
• Freeze the pipeline in the Mem stage (popular early on: Sparc, R4000)
    IF ID EX Mem stall stall stall ... stall Mem Wr
       IF ID EX  stall stall stall ... stall stall Ex Wr
• Use Full/Empty bits on registers + an MSHR queue (sketched below)
- MSHR = “Miss Status/Handler Registers” (Kroft)
  Each entry in this queue keeps track of the status of outstanding memory requests to one complete memory line:
  - Per cache line: keep info about the memory address.
  - For each word: the register (if any) that is waiting for the result.
  - Used to “merge” multiple requests to one memory line.
- A new load creates an MSHR entry and sets the destination register to “Empty”. The load is “released” from the pipeline.
- An attempt to use the register before the result returns causes the instruction to block in the decode stage.
- Limited “out-of-order” execution with respect to loads. Popular with in-order superscalar architectures.
° Out-of-order pipelines already have this functionality built in... (load queues, etc.)
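A structural C sketch of a Kroft-style MSHR entry as described above; the field names, the 8-word line size, and the 4-entry count are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

#define WORDS_PER_LINE 8    /* assumed cache line of 8 words */
#define NUM_MSHRS 4         /* assumed number of outstanding misses */

/* One Miss Status/Handler Register: tracks a single outstanding
   memory line and which register is waiting on each word of it. */
typedef struct {
    bool     valid;                       /* entry in use? */
    uint32_t line_addr;                   /* per-line: memory address */
    int8_t   dest_reg[WORDS_PER_LINE];    /* per-word: waiting register,
                                             or -1 if no one waits */
} MSHR;

static MSHR mshr[NUM_MSHRS];

/* A new load miss either merges into an existing entry for the same
   line or allocates a fresh one; returns false if all MSHRs are busy
   (the pipeline must then stall on a structural hazard). */
bool mshr_allocate(uint32_t line_addr, int word, int8_t reg) {
    for (int i = 0; i < NUM_MSHRS; i++)
        if (mshr[i].valid && mshr[i].line_addr == line_addr) {
            mshr[i].dest_reg[word] = reg;   /* merge with pending miss */
            return true;
        }
    for (int i = 0; i < NUM_MSHRS; i++)
        if (!mshr[i].valid) {
            mshr[i].valid = true;
            mshr[i].line_addr = line_addr;
            for (int w = 0; w < WORDS_PER_LINE; w++)
                mshr[i].dest_reg[w] = -1;
            mshr[i].dest_reg[word] = reg;   /* dest reg marked "Empty" */
            return true;
        }
    return false;                           /* all MSHRs busy: stall */
}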
Lec21.24
Value of Hit Under Miss for SPEC
° FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
° Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
° 8 KB Data Cache, Direct Mapped, 32B block, 16 cycle miss
[Figure: average memory access time (0 to 2) under “hit under n misses” for n = 0->1, 1->2, 2->64, and Base, across SPEC benchmarks, integer on the left and floating point on the right: eqntott, espresso, xlisp, compress, mdljsp2, ear, fpppp, tomcatv, swm256, doduc, su2cor, wave5, mdljdp2, hydro2d, alvinn, nasa7, spice2g6, ora.]
Lec21.25
4. Reduce Penalty: Second-Level Cache
[Figure: Proc -> L1 Cache -> L2 Cache]
° L2 Equations:
AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
° Definitions:
• Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
• Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
• The Global Miss Rate is what matters
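A short C sketch evaluating the two-level AMAT equation and the local vs. global L2 miss rates; the numbers are made-up sample values, not lecture data.

#include <stdio.h>

int main(void) {
    /* Assumed sample values -- not from the lecture. */
    double hit_l1 = 1.0,  miss_rate_l1  = 0.05;  /* 5% of CPU accesses */
    double hit_l2 = 10.0, local_miss_l2 = 0.40;  /* 40% of L1 misses */
    double penalty_l2 = 100.0;                   /* DRAM access, cycles */

    /* Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2 */
    double penalty_l1 = hit_l2 + local_miss_l2 * penalty_l2;

    /* AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1 */
    double amat = hit_l1 + miss_rate_l1 * penalty_l1;

    /* Global L2 miss rate = Miss Rate_L1 x Miss Rate_L2 */
    double global_miss_l2 = miss_rate_l1 * local_miss_l2;

    printf("AMAT           = %.2f cycles\n", amat);            /* 3.50 */
    printf("L2 local miss  = %.0f%%\n", local_miss_l2 * 100);  /* 40%  */
    printf("L2 global miss = %.0f%%\n", global_miss_l2 * 100); /* 2%   */
    return 0;
}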
Lec21.26
Reducing Misses: which apply to L2 Cache?
° Reducing Miss Rate:
1. Reduce Misses via Larger Block Size
2. Reduce Conflict Misses via Higher Associativity
3. Reducing Conflict Misses via Victim Cache
4. Reducing Misses by HW Prefetching Instr, Data
5. Reducing Misses by SW Prefetching Data
6. Reducing Capacity/Conf. Misses by Compiler Optimizations
Lec21.27
L2 cache block size & A.M.A.T.
° 32KB L1, 8-byte path to memory
[Figure: relative CPU time vs. L2 block size: 1.36 at 16 bytes, 1.28 at 32, 1.27 at 64, 1.34 at 128, 1.54 at 256, 1.95 at 512; 64-byte blocks give the minimum.]
Lec21.28
Improving Cache Performance (Continued)
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache:
• Lower associativity (+ victim caching or 2nd-level cache)?
• Multiple-cycle cache access (e.g. R4000)
• Harvard Architecture
• Careful Virtual Memory Design (rest of lecture!)
Lec21.29
Example: Harvard Architecture
[Figure: Unified: Proc -> Unified Cache-1 -> Unified Cache-2. Harvard: Proc -> separate I-Cache-1 and D-Cache-1 -> Unified Cache-2.]
° Sample Statistics:
• 16KB I&D: Inst miss rate = 0.64%, Data miss rate = 6.47%
• 32KB unified: Aggregate miss rate = 1.99%
° Which is better (ignoring the L2 cache)?
• Assume 33% loads/stores, hit time = 1, miss time = 50
• Note: a data hit has 1 extra stall for the unified cache (only one port)
AMAT_Harvard = (1/1.33) x (1 + 0.64% x 50) + (0.33/1.33) x (1 + 6.47% x 50) = 2.05
AMAT_Unified = (1/1.33) x (1 + 1.99% x 50) + (0.33/1.33) x (1 + 1 + 1.99% x 50) = 2.24
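This calculation can be checked with a few lines of C; all inputs come straight from the slide.

#include <stdio.h>

int main(void) {
    double miss_time = 50.0;
    double f_inst = 1.0 / 1.33, f_data = 0.33 / 1.33;

    /* Harvard: separate 16KB I and D caches, no port contention. */
    double harvard = f_inst * (1.0 + 0.0064 * miss_time)
                   + f_data * (1.0 + 0.0647 * miss_time);

    /* Unified 32KB: data hits pay 1 extra stall for the single port. */
    double unified = f_inst * (1.0 + 0.0199 * miss_time)
                   + f_data * (1.0 + 1.0 + 0.0199 * miss_time);

    printf("AMAT Harvard = %.2f\n", harvard);  /* 2.04 (slide rounds to 2.05) */
    printf("AMAT Unified = %.2f\n", unified);  /* 2.24 */
    return 0;
}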
Lec21.30
Recall: Levels of the Memory Hierarchy
[Figure: the memory hierarchy from the upper (faster) to the lower (larger) levels, with capacity / access time / cost, staging transfer unit, and who manages each transfer:
  CPU Registers: 100s of bytes, <10s ns; Instr. Operands, 1-8 bytes, prog./compiler
  Cache: K bytes, 10-100 ns, $.01-.001/bit; Blocks, 8-128 bytes, cache cntl
  Main Memory: M bytes, 100 ns-1 us, $.01-.001; Pages, 512-4K bytes, OS
  Disk: G bytes, ms, 10^-3 to 10^-4 cents; Files, Mbytes, user/operator
  Tape: infinite, sec-min, 10^-6]
Lec21.31
What is virtual memory?
° Virtual memory => treat main memory as a cache for the disk
° Terminology: blocks in this cache are called “Pages”
• Typical size of a page: 1K - 8K
° The page table maps virtual page numbers to physical frames
• “PTE” = Page Table Entry
[Figure: a Virtual Address (virtual page no. + 10-bit offset) indexes into the Page Table, which is located in physical memory and found via the Page Table Base Register; each entry holds a Valid bit, Access Rights, and a PA, yielding a Physical Address (physical page no. + offset) in the Physical Address Space.]
Lec21.32
Three Advantages of Virtual Memory
° Translation:
• A program can be given a consistent view of memory, even though physical memory is scrambled
• Makes multithreading reasonable (now used a lot!)
• Only the most important part of a program (the “Working Set”) must be in physical memory
• Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later
° Protection:
• Different threads (or processes) protected from each other
• Different pages can be given special behavior
- (Read Only, Invisible to user programs, etc.)
• Kernel data protected from User programs
• Very important for protection from malicious programs
=> Far more “viruses” under Microsoft Windows
° Sharing:
• Can map the same physical page to multiple users (“Shared memory”)
Lec21.33
Issues in Virtual Memory System Design
° What is the size of the information blocks that are transferred from secondary to main storage (M)? => page size
(Contrast with the physical block size on disk, i.e. the sector size)
° Which region of M is to hold the new block? => placement policy
° How do we find a page when we look for it? => block identification
° When a block of information is brought into M and M is full, some region of M must be released to make room for the new block => replacement policy
° What do we do on a write? => write policy
° Missing items are fetched from secondary memory only on the occurrence of a fault => demand load policy
[Figure: reg <-> cache <-> mem <-> disk, with pages moving between memory frames and disk.]
Lec21.34
How big is the translation (page) table?
[Figure: virtual address split into Virtual Page Number and Page Offset.]
° The simplest way to implement a “fully associative” lookup policy is with a large lookup table.
° Each entry in the table is some number of bytes, say 4.
° With 4K pages and a 32-bit address space, we need: 2^32 / 4K = 2^20 = 1 Meg entries x 4 bytes = 4MB
° With 4K pages and a 64-bit address space, we need: 2^64 / 4K = 2^52 entries = BIG!
° Can’t keep the whole page table in memory!
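The arithmetic above, checked in a few lines of C (the 4-byte PTE size is the slide’s own assumption):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t page_size = 4096;   /* 4K pages */
    uint64_t pte_size  = 4;      /* bytes per entry (slide's assumption) */

    /* 32-bit address space: 2^32 / 4K = 2^20 entries x 4 bytes = 4 MB. */
    uint64_t entries32 = (1ULL << 32) / page_size;
    printf("32-bit: %llu entries = %llu MB\n",
           (unsigned long long)entries32,
           (unsigned long long)((entries32 * pte_size) >> 20));

    /* 64-bit address space: 2^64 / 4K = 2^52 entries -- BIG!
       (1ULL << 64) would overflow, so subtract exponents instead:
       2^52 entries x 4 bytes = 2^54 bytes = 16384 TB. */
    printf("64-bit: 2^52 entries = 2^54 bytes = %llu TB\n",
           (unsigned long long)(1ULL << (54 - 40)));
    return 0;
}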
Lec21.35
Large Address Spaces: Two-level Page Tables
32-bit address:
  P1 index (10 bits) | P2 index (10 bits) | page offset (12 bits)
[Figure: a 4KB root table of 1K 4-byte PTEs points to second-level tables, each holding 1K 4-byte PTEs, which point to 4KB pages.]
° 2 GB virtual address space
° 4 MB of PTE2
- paged, holes
° 4 KB of PTE1
° What about a 48-64 bit address space?
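A minimal C sketch of a two-level walk with the 10/10/12 split shown above; the structure and names are illustrative, and a real walker would also check access rights, not just valid bits.

#include <stdint.h>

#define PTE_VALID 0x1u

/* 10-bit P1 index, 10-bit P2 index, 12-bit page offset. */
uint32_t translate(const uint32_t *root, uint32_t va) {
    uint32_t p1  = (va >> 22) & 0x3FF;   /* index into root table  */
    uint32_t p2  = (va >> 12) & 0x3FF;   /* index into 2nd level   */
    uint32_t off =  va        & 0xFFF;   /* byte within the page   */

    uint32_t pte1 = root[p1];
    if (!(pte1 & PTE_VALID)) return 0;   /* hole: no 2nd-level table
                                            (0 stands in for a fault) */

    /* PTE1 holds the address of a second-level table; treated as a
       pointer here purely for illustration. */
    const uint32_t *level2 =
        (const uint32_t *)(uintptr_t)(pte1 & ~0xFFFu);
    uint32_t pte2 = level2[p2];
    if (!(pte2 & PTE_VALID)) return 0;   /* page fault */

    uint32_t frame = pte2 & ~0xFFFu;     /* physical frame << 12 */
    return frame | off;                  /* physical address */
}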
Lec21.36
Inverted Page Tables
[Figure: the virtual page number is hashed into a table of (V.Page, P.Frame) pairs; a match on the virtual page (=) yields the frame.]
° IBM System 38 (AS400) implements 64-bit addresses
° 48 bits translated
° start of object contains a 12-bit tag
=> TLBs or virtually addressed caches are critical
Lec21.37
Virtual Address and a Cache: Step backward???
° Virtual memory seems to be really slow:
• Must access memory (the page table) on every load/store, even on cache hits!
• Worse, if the translation is not completely in memory, we may need to go to disk before hitting in the cache!
° Solution: Caching! (surprise!)
• Keep track of the most common translations and place them in a “Translation Lookaside Buffer” (TLB)
[Figure: CPU -> Translation (VA to PA) -> Cache -> Main Memory on a miss; hit data returns directly to the CPU.]
Lec21.38
Making address translation practical: TLB
° Virtual memory => memory acts like a cache for the disk
° The page table maps virtual page numbers to physical frames
° The Translation Look-aside Buffer (TLB) is a cache of translations
[Figure: a virtual address (page, offset) is looked up in the TLB; a hit supplies the physical frame directly, otherwise the Page Table (mapping the Virtual Address Space onto the Physical Memory Space) is consulted. The frame number replaces the page number to form the physical address (frame, offset).]
Lec21.39
TLB organization: include protection
° TLB usually organized as a fully-associative cache
• Lookup is by Virtual Address
• Returns Physical Address + other info
° Dirty => Page modified (Y/N)?   Ref => Page touched (Y/N)?
  Valid => TLB entry valid (Y/N)? Access => Read? Write?
  ASID => Which User?

Virtual Address  Physical Address  Dirty  Ref  Valid  Access  ASID
0xFA00           0x0003            Y      N    Y      R/W     34
0x0040           0x0010            N      Y    Y      R       0
0x0041           0x0011            N      Y    Y      R       0
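As a structural sketch, each table row maps naturally onto a C struct, and the fully associative lookup checks every entry’s virtual page and ASID. Field widths and names are illustrative; the 64-entry size matches the R3000 TLB on the next slide.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64           /* e.g. the R3000's on-chip TLB size */

typedef struct {
    uint32_t vpage;              /* virtual page number (lookup key) */
    uint32_t pframe;             /* physical frame number            */
    bool     dirty;              /* page modified?                   */
    bool     ref;                /* page touched?                    */
    bool     valid;              /* entry valid?                     */
    uint8_t  access;             /* read/write permission bits       */
    uint8_t  asid;               /* which user/process               */
} TLBEntry;

static TLBEntry tlb[TLB_ENTRIES];

/* Fully associative lookup: every entry is compared in parallel in
   hardware (sequentially here). Returns true on a TLB hit. */
bool tlb_lookup(uint32_t vpage, uint8_t asid, uint32_t *pframe) {
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpage == vpage && tlb[i].asid == asid) {
            tlb[i].ref = true;   /* record the touch */
            *pframe = tlb[i].pframe;
            return true;
        }
    return false;                /* TLB miss: walk the page table */
}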
Lec21.40
Example: R3000 pipeline includes TLB stages
MIPS R3000 Pipeline:
  Inst Fetch    Dcd/Reg   ALU / E.A    Memory             Write Reg
  TLB  I-Cache  RF        Operation    E.A. TLB  D-Cache  WB
° TLB: 64 entries, on-chip, fully associative, software TLB fault handler
° Virtual Address Space: ASID (6 bits) | V. Page Number (20 bits) | Offset (12 bits)
  0xx  User segment (caching based on PT/TLB entry)
  100  Kernel physical space, cached
  101  Kernel physical space, uncached
  11x  Kernel virtual space
° Allows context switching among 64 user processes without a TLB flush
Lec21.41
What is the replacement policy for TLBs?
° On a TLB miss, we check the page table for an entry. Two architectural possibilities:
• Hardware “table-walk” (Sparc, among others)
- The structure of the page table must be known to the hardware
• Software “table-walk” (MIPS was one of the first)
- Lots of flexibility
- Can be expensive with modern operating systems
° What if the missing entry is not in the page table?
• This is called a “Page Fault”: the requested virtual page is not in memory
• The operating system must take over (CS162):
- pick a page to discard (possibly writing it to disk)
- start loading the page in from disk
- schedule some other process to run
° Note: it is possible that parts of the page table are not even in memory (i.e. paged out!)
• The root of the page table is always “pegged” in memory
Lec21.42
Page Replacement: Not Recently Used (1-bit LRU, Clock)
[Figure: the set of all pages in memory arranged in a circle. A tail pointer marks pages as “not used recently”; a head pointer places pages on the free list if they are still marked “not used”, scheduling dirty pages for writing to disk. Freed pages go on the free list of Free Pages.]
Lec21.43
Page Replacement: Not Recently Used (1-bit LRU, Clock)
° Associated with each page is a “used” flag:
  used flag = 1 if the page has been referenced in the recent past
            = 0 otherwise
° If replacement is necessary, choose any page frame whose reference bit is 0; this is a page that has not been referenced in the recent past.
° The page fault handler keeps a last-replaced pointer (lrp): if replacement is to take place, advance lrp to the next page table entry (mod table size) until one with a 0 bit is found; this is the target for replacement. As a side effect, all examined PTEs have their reference bits set to zero. (A sketch follows below.)
° Or search for a page that is both not recently referenced AND not dirty.
° Architecture part: support dirty and used bits in the page table => may need to update the PTE on any instruction fetch, load, or store.
  How does the TLB affect this design problem? Software TLB miss?
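A compact C sketch of the clock sweep just described; the table size and the used/dirty bit encoding are illustrative assumptions.

#include <stdbool.h>

#define NFRAMES 256              /* physical page frames (assumed) */

typedef struct { bool used; bool dirty; } PTE;

static PTE pt[NFRAMES];
static int lrp = 0;              /* last-replaced pointer (clock hand) */

/* Advance the hand until a frame with used == 0 is found; clear the
   used bit of every frame examined along the way ("second chance"). */
int clock_replace(void) {
    for (;;) {
        lrp = (lrp + 1) % NFRAMES;   /* advance mod table size */
        if (!pt[lrp].used)
            return lrp;              /* not recently used: evict this one */
        pt[lrp].used = false;        /* give it a second chance */
    }
}

/* On every reference the hardware (or SW TLB miss handler) sets bits: */
void touch(int frame, bool is_write) {
    pt[frame].used = true;
    if (is_write) pt[frame].dirty = true;  /* must be written back */
}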
Lec21.44
Reducing translation time further
° As described, the TLB lookup is in series with the cache lookup:
[Figure: the virtual address (page no., 10-bit offset) goes through the TLB lookup (Valid, Access Rights, PA) to form the physical address (page no., offset) before the cache is accessed.]
° Modern machines with TLBs go one step further: they overlap the TLB lookup with the cache access.
• Works because the lower bits of the result (the offset) are available early
Lec21.45
Overlapped TLB & Cache Access
[Figure: a 4K cache (1K entries of 4 bytes) is indexed by the 10 index bits + 2 byte-offset bits ("00") of the virtual address, in parallel with an associative TLB lookup on the 20-bit page number; the TLB's frame number (FN, 32 bits with the offset) is compared against the cache tag (= FN) to produce Hit/Miss for both TLB and cache.]
° What if the cache size is increased to 8KB?
° If we do this in parallel, we have to be careful, however:
Lec21.46
Problems With Overlapped TLB Access
° Overlapped access only works as long as the address bits used to index into the cache do not change as the result of VA translation
° This usually limits things to small caches, large page sizes, or highly set-associative caches if you want a large cache
° Example: suppose everything stays the same except that the cache is increased to 8K bytes instead of 4K:
[Figure: with a 20-bit virtual page number and 12-bit displacement, the 8KB cache needs an 11-bit index plus 2 byte-offset bits; the top index bit now falls in the translated part of the address: it is changed by VA translation but is needed for cache lookup.]
° Solutions: go to 8K byte page sizes; go to a 2-way set associative cache (2 ways of 1K x 4 bytes); or SW guarantees VA[13] = PA[13]
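A small C illustration of the constraint: derive the cache index bits for a given cache size and check whether any of them lie above the page offset, in which case they would be changed by translation. Bits are numbered from 0 here, so the troublesome bit shows up as VA[12]; the slide's VA[13] reflects a different bit-numbering convention.

#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS   12           /* 4K pages: VA[11:0] untranslated */
#define BLOCK_BITS  2            /* 4-byte blocks */

/* Can a direct-mapped cache of this size be indexed with
   untranslated bits only? */
void check(uint32_t cache_bytes) {
    int index_bits = 0;
    while ((1u << (index_bits + BLOCK_BITS)) < cache_bytes)
        index_bits++;
    int top_bit = BLOCK_BITS + index_bits - 1;  /* highest index bit */
    printf("%5u-byte cache: index uses VA[%d:%d] -> %s\n",
           cache_bytes, top_bit, BLOCK_BITS,
           top_bit < PAGE_BITS ? "safe to overlap"
                               : "index bit is translated!");
}

int main(void) {
    check(4096);   /* VA[11:2]: inside the page offset, overlap works */
    check(8192);   /* VA[12:2]: top bit is translated, overlap breaks */
    return 0;
}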
Lec21.47
Another option: Virtually Addressed Cache
[Figure: CPU -> Cache indexed by VA; translation to PA happens only on a cache miss, on the way to Main Memory; hit data returns directly to the CPU.]
° Only require address translation on a cache miss!
° Synonym problem: two different virtual addresses map to the same physical address => two different cache entries holding data for the same physical address!
• A nightmare for updates: must update all cache entries with the same physical address, or memory becomes inconsistent
• Determining this requires significant hardware, essentially an associative lookup on the physical address tags to see if you have multiple hits (usually disallowed by fiat)
Lec21.48
Cache Optimization: Alpha 21064
° TLBs fully associative
° TLB updates in SW (“Priv Arch Libr”)
° Separate Instr & Data TLBs & Caches
° Caches 8KB direct mapped, write thru
° Critical 8 bytes first
° Prefetch instr. stream buffer
° 4-entry write buffer between D$ & L2$
° 2 MB L2 cache, direct mapped (off-chip)
° 256-bit path to main memory, 4 x 64-bit modules
° Victim Buffer: to give read priority over write
[Figure: the 21064 memory system with separate Instr and Data paths, a Stream Buffer, a Write Buffer, and a Victim Buffer.]
Lec21.49
Summary #1/2: TLB, Virtual Memory
° Caches, TLBs, and Virtual Memory are all understood by examining how they deal with 4 questions:
1) Where can a block be placed?
2) How is a block found?
3) What block is replaced on a miss?
4) How are writes handled?
° More cynical version of this: everything in computer architecture is a cache!
° Techniques people use to improve the miss rate of caches:

Technique                        MR  MP  HT  Complexity
Larger Block Size                +   -       0
Higher Associativity             +       -   1
Victim Caches                    +           2
Pseudo-Associative Caches        +           2
HW Prefetching of Instr/Data     +           2
Compiler Controlled Prefetching  +           3
Compiler Reduce Misses           +           0
Lec21.50
Summary #2/2: Virtual Memory
° VM allows many processes to share a single memory without having to swap all processes to disk
° Translation, Protection, and Sharing are more important than the memory hierarchy
° Page tables map virtual addresses to physical addresses
• TLBs are a cache on translation and are extremely important for good performance
• Special tricks are necessary to keep the TLB out of the critical cache-access path
• TLB misses are significant in processor performance:
- These are funny times: most systems can’t access all of the 2nd-level cache without TLB misses!