
Page 1: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

L04-1MIT 6.823 Spring 2020

Mengjia YanComputer Science and Artificial Intelligence Laboratory

M.I.T.

Based on slides from Daniel Sanchez

Memory Management:From Absolute Addresses

to Demand Paging

February 13, 2020

Page 2: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Recap: Cache Organization

• Caches are small and fast memories that transparently retain recently accessed data

• Cache organizations
  – Direct-mapped
  – Set-associative
  – Fully associative

• Cache performance
  – AMAT = HitLatency + MissRate * MissLatency
  – Minimizing AMAT requires balancing competing tradeoffs (see the sketch below)
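
To make the AMAT recap concrete, here is a minimal sketch (in Python) of the formula above applied to a one-level and then a two-level hierarchy; the latencies and miss rates are illustrative assumptions, not numbers from the lecture.

# Minimal AMAT sketch. The latencies and miss rates below are illustrative
# assumptions, not measurements from any particular machine.

def amat(hit_latency, miss_rate, miss_latency):
    """Average Memory Access Time = HitLatency + MissRate * MissLatency."""
    return hit_latency + miss_rate * miss_latency

# Single-level cache in front of DRAM.
print(amat(hit_latency=1, miss_rate=0.05, miss_latency=100))   # 6.0 cycles

# Two-level hierarchy: the L1 miss penalty is itself the AMAT of the L2.
l2 = amat(hit_latency=10, miss_rate=0.20, miss_latency=100)    # 30.0 cycles
l1 = amat(hit_latency=1,  miss_rate=0.05, miss_latency=l2)     # 2.5 cycles
print(l1)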

February 13, 2020 L04-2

Page 3: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Replacement Policy

February 13, 2020

Which block from a set should be evicted?

• Random

• Least Recently Used (LRU)
  • LRU cache state must be updated on every access
  • true implementation only feasible for small sets (2-way)
  • pseudo-LRU binary tree was often used for 4-8 way

• First In, First Out (FIFO), a.k.a. Round-Robin
  • used in highly associative caches

• Not Least Recently Used (NLRU)
  • FIFO with exception for most recently used block or blocks

• One-bit LRU
  • Each way represented by a bit: set on use, replace first unused (see the sketch below)
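
The following is a small sketch of the one-bit LRU policy described in the last bullet; the class and the reset-on-miss detail are illustrative assumptions, not a specific hardware design.

# A minimal sketch of the "one-bit LRU" policy described above: each way has a
# use bit that is set on access; on a miss we evict the first way whose bit is
# clear, clearing all bits first if every way has been used. Names are illustrative.

class OneBitLRUSet:
    def __init__(self, num_ways):
        self.tags = [None] * num_ways
        self.used = [False] * num_ways

    def access(self, tag):
        if tag in self.tags:                           # hit: mark the way as used
            way = self.tags.index(tag)
        else:                                          # miss: evict first unused way
            if all(self.used):
                self.used = [False] * len(self.used)   # all used -> reset the bits
            way = self.used.index(False)
            self.tags[way] = tag
        self.used[way] = True
        return way

s = OneBitLRUSet(num_ways=4)
for t in ["A", "B", "A", "C", "D", "E"]:
    print(t, "-> way", s.access(t))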

L04-3

Page 4: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multilevel Caches

• A memory cannot be large and fast
• Add level of cache to reduce miss penalty
  – Each level can have longer latency than level above
  – So, increase sizes of cache at each level

February 13, 2020

CPU L1 L2 DRAM

L04-4

Page 5: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multilevel Caches

• A memory cannot be large and fast
• Add level of cache to reduce miss penalty
  – Each level can have longer latency than level above
  – So, increase sizes of cache at each level

February 13, 2020

CPU L1 L2 DRAM

Metrics:

Local miss rate = misses in cache / accesses to cache

Global miss rate = misses in cache / CPU memory accesses

Misses per instruction = misses in cache / number of instructions
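
A small worked example of the three metrics above; the access and miss counts are made up purely for illustration.

# Illustrative multilevel-cache metrics, following the definitions above.
# The counts are invented numbers, just to show how the three metrics differ.

cpu_accesses = 1_000_000        # loads/stores issued by the CPU
instructions = 2_000_000
l1_misses    = 50_000           # L1 sees every CPU memory access
l2_accesses  = l1_misses        # L2 only sees L1 misses
l2_misses    = 10_000

l2_local_miss_rate   = l2_misses / l2_accesses    # misses / accesses to that cache
l2_global_miss_rate  = l2_misses / cpu_accesses   # misses / CPU memory accesses
misses_per_instruction = l2_misses / instructions # misses / number of instructions

print(l2_local_miss_rate, l2_global_miss_rate, misses_per_instruction)  # 0.2 0.01 0.005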

L04-5

Page 6: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Inclusion Policy

• Inclusive multilevel cache:
  – Inner cache holds copies of data in outer cache
  – External access need only check outer cache
  – Most common case

• Exclusive multilevel caches:
  – Inner cache holds data not in outer cache
  – Swap lines between inner/outer caches on miss
  – Used in AMD Athlon with 64KB primary and 256KB secondary cache

• Non-inclusive multilevel caches:
  – Some cache lines duplicate in outer cache, and some do not
  – Intel Skylake L3

Why choose one type or the other?

February 13, 2020 L04-6

Page 7: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Inclusion Policy

• Inclusive multilevel cache:
  – Inner cache holds copies of data in outer cache
  – External access need only check outer cache
  – Most common case

• Exclusive multilevel caches:
  – Inner cache holds data not in outer cache
  – Swap lines between inner/outer caches on miss
  – Used in AMD Athlon with 64KB primary and 256KB secondary cache

• Non-inclusive multilevel caches:
  – Some cache lines duplicate in outer cache, and some do not
  – Intel Skylake L3

Why choose one type or the other?

February 13, 2020 L04-7

Inclusive: Less traffic, easier coherence
Exclusive: Outer cache retains more data

Page 8: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Victim Caches (HP 7200)

February 13, 2020

[Figure: CPU with register file (RF), L1 Data Cache, and Unified L2 Cache]

L04-8

Page 9: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Victim Caches (HP 7200)

February 13, 2020

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines

[Figure: CPU with register file (RF), direct-mapped L1 Data Cache, a fully associative 4-block Victim Cache beside it, and a Unified L2 Cache; data evicted from L1 goes to the victim cache, and data that misses in L1 but hits in the victim cache is returned]

L04-9

Page 10: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Victim Caches (HP 7200)

February 13, 2020

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines
• First look up in the direct-mapped cache
• If miss, look in the victim cache
• If hit in the victim cache, swap the hit line with the line now evicted from L1

[Figure: CPU with register file (RF), direct-mapped L1 Data Cache, a fully associative 4-block Victim Cache beside it, and a Unified L2 Cache; data evicted from L1 goes to the victim cache, and data that misses in L1 but hits in the victim cache is returned]

L04-10

Page 11: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Victim Caches (HP 7200)

February 13, 2020

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines
• First look up in the direct-mapped cache
• If miss, look in the victim cache
• If hit in the victim cache, swap the hit line with the line now evicted from L1
• If miss in the victim cache, L1 victim -> VC, VC victim -> ?

[Figure: as above, now also asking where data evicted from the victim cache should go]

L04-11

Page 12: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Victim Caches (HP 7200)

February 13, 2020

A victim cache is a small associative backup cache, added to a direct-mapped cache, which holds recently evicted lines
• First look up in the direct-mapped cache
• If miss, look in the victim cache
• If hit in the victim cache, swap the hit line with the line now evicted from L1
• If miss in the victim cache, L1 victim -> VC, VC victim -> ?
+ Fast hit time of direct-mapped but with reduced conflict misses (see the sketch below)

[Figure: CPU with RF, direct-mapped L1 Data Cache, fully associative 4-block Victim Cache, and Unified L2 Cache; evicted data from L1 goes to the victim cache, victim-cache hits (after an L1 miss) are returned, and the question remains where data evicted from the victim cache should go]
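
A minimal sketch of the victim-cache behavior described above (direct-mapped L1 plus a small fully associative victim cache); the structure, sizes, and names are illustrative, not the HP 7200 implementation.

# Minimal sketch of the victim-cache lookup described above: a direct-mapped L1
# backed by a small fully associative victim cache. Illustrative, simplified model.

from collections import deque

class L1WithVictimCache:
    def __init__(self, num_sets=8, victim_entries=4):
        self.num_sets = num_sets
        self.l1 = [None] * num_sets                 # direct-mapped: one line per set
        self.victim = deque(maxlen=victim_entries)  # fully associative, FIFO-replaced

    def access(self, line_addr):
        index = line_addr % self.num_sets
        if self.l1[index] == line_addr:
            return "L1 hit"
        evicted = self.l1[index]                    # line pushed out of L1 by the refill
        self.l1[index] = line_addr
        if line_addr in self.victim:                # hit in victim cache: swap lines
            self.victim.remove(line_addr)
            if evicted is not None:
                self.victim.append(evicted)
            return "victim-cache hit"
        if evicted is not None:                     # miss in victim cache: L1 victim -> VC;
            self.victim.append(evicted)             # the VC victim (if any) falls out to L2
        return "miss"                               # refill from L2/memory (not modeled)

c = L1WithVictimCache()
print([c.access(a) for a in [3, 11, 3, 11, 3]])     # conflicting lines now hit in the VC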

L04-12

Page 13: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Management

February 13, 2020

• The Fifties
  - Absolute Addresses
  - Dynamic address translation

• The Sixties
  - Atlas and Demand Paging
  - Paged memory systems and TLBs

• Modern Virtual Memory Systems

L04-13

Page 14: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Names for Memory Locations

• Machine language address
  – as specified in machine code

• Virtual address (sometimes called effective address)
  – ISA specifies translation of machine code address into virtual address of program variable

• Physical address
  – operating system specifies mapping of virtual address into name for a physical memory location

February 13, 2020

[Figure: machine language address → (ISA) → virtual address → (Address Mapping) → physical address → Physical Memory (DRAM)]

L04-14

Page 15: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Absolute Addresses

L04-15February 13, 2020

virtual address = physical memory address

EDSAC, early 50’s

Page 16: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Absolute Addresses

• Only one program ran at a time, with unrestricted access to entire machine (RAM + I/O devices)

L04-16February 13, 2020

virtual address = physical memory address

EDSAC, early 50’s

Page 17: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Absolute Addresses

• Only one program ran at a time, with unrestricted access to entire machine (RAM + I/O devices)

• Addresses in a program depended upon where the program was to be loaded in memory

• But it was more convenient for programmers to write location-independent subroutines

L04-17February 13, 2020

virtual address = physical memory address

EDSAC, early 50’s

Page 18: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Absolute Addresses

• Only one program ran at a time, with unrestricted access to entire machine (RAM + I/O devices)

• Addresses in a program depended upon where the program was to be loaded in memory

• But it was more convenient for programmers to write location-independent subroutines

L04-18February 13, 2020

virtual address = physical memory address

EDSAC, early 50’s

How could location independence be achieved?

Page 19: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Absolute Addresses

• Only one program ran at a time, with unrestricted access to entire machine (RAM + I/O devices)

• Addresses in a program depended upon where the program was to be loaded in memory

• But it was more convenient for programmers to write location-independent subroutines

L04-19February 13, 2020

virtual address = physical memory address

EDSAC, early 50’s

How could location independence be achieved?
Linker and/or loader modify addresses of subroutines and callers when building a program memory image

Page 20: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-20February 13, 2020

Motivation
• In the early machines, I/O operations were slow and each word transferred involved the CPU
• Higher throughput if CPU and I/O of 2 or more programs were overlapped. How?
  ⇒ multiprogramming

Page 21: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-21February 13, 2020

Motivation• In the early machines, I/O operations were slow

and each word transferred involved the CPU • Higher throughput if CPU and I/O of 2 or more

programs were overlapped. How?Þ multiprogramming prog1

prog2 Phys

ical

Mem

ory

Page 22: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-22February 13, 2020

Motivation• In the early machines, I/O operations were slow

and each word transferred involved the CPU • Higher throughput if CPU and I/O of 2 or more

programs were overlapped. How?Þ multiprogramming

Location-independent programs• Programming and storage management ease

prog1

prog2 Phys

ical

Mem

ory

Page 23: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-23February 13, 2020

Motivation• In the early machines, I/O operations were slow

and each word transferred involved the CPU • Higher throughput if CPU and I/O of 2 or more

programs were overlapped. How?Þ multiprogramming

Location-independent programs• Programming and storage management ease

Þ need for a base register

prog1

prog2 Phys

ical

Mem

ory

Page 24: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-24February 13, 2020

Motivation• In the early machines, I/O operations were slow

and each word transferred involved the CPU • Higher throughput if CPU and I/O of 2 or more

programs were overlapped. How?Þ multiprogramming

Location-independent programs• Programming and storage management ease

Þ need for a base register

Protection• Independent programs should not affect each other

inadvertently

prog1

prog2 Phys

ical

Mem

ory

Page 25: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Multiprogramming

L04-25February 13, 2020

Motivation
• In the early machines, I/O operations were slow and each word transferred involved the CPU
• Higher throughput if CPU and I/O of 2 or more programs were overlapped. How?
  ⇒ multiprogramming

Location-independent programs
• Programming and storage management ease
  ⇒ need for a base register

Protection
• Independent programs should not affect each other inadvertently
  ⇒ need for a bound register

[Figure: prog1 and prog2 loaded at different locations in Physical Memory]

Page 26: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Simple Base and Bound Translation

L04-26February 13, 2020

Base and bounds registers are visible/accessible only when processor is running in supervisor mode

[Figure: a Load X in the Program Address Space; the Effective Address is added to the Base Register (which holds the Base Physical Address) to form the Physical Address of the current segment in Main Memory]

Page 27: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Simple Base and Bound Translation

L04-27February 13, 2020

Base and bounds registers are visible/accessible only when processor is running in supervisor mode

[Figure: as above, and in addition the Effective Address is compared (≤) against the Bound Register, which holds the Segment Length; an address beyond the bound raises a Bounds Violation]
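
A minimal sketch of the base-and-bound check above; the register values are illustrative assumptions, and a real machine performs this comparison and addition in hardware on every access.

# Minimal sketch of base-and-bound translation as described above.

class BoundsViolation(Exception):
    pass

def translate(effective_addr, base_reg, bound_reg):
    """Return the physical address, or raise on a bounds violation."""
    if not (0 <= effective_addr <= bound_reg):      # bound register holds the segment length
        raise BoundsViolation(f"effective address {effective_addr:#x} out of bounds")
    return base_reg + effective_addr                # base register holds the base physical address

BASE, BOUND = 0x4000, 0x0FFF                        # illustrative segment placement and size
print(hex(translate(0x0123, BASE, BOUND)))          # 0x4123
try:
    translate(0x2000, BASE, BOUND)
except BoundsViolation as e:
    print("fault:", e)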

Page 28: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Separate Areas for Code and Data

L04-28February 13, 2020

What is an advantage of this separation?
(Scheme used on all Cray vector supercomputers prior to X1, 2002)

[Figure: separate base-and-bound register pairs for code and data. The Program Counter is checked (≤) against the Code Bound Register and added to the Code Base Register; the Effective Address Register is checked (≤) against the Data Bound Register and added to the Data Base Register; each pair points at its own segment (code or data) in Main Memory, and an out-of-range address raises a Bounds Violation]

Page 29: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-29February 13, 2020

[Figure: physical memory holding the OS space and users 1, 2, and 3, with free regions between them]

Page 30: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-30February 13, 2020

[Figure: physical memory holding the OS space and users 1, 2, and 3, with free regions between them; users 4 & 5 arrive]

Page 31: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-31February 13, 2020

[Figure: users 4 & 5 arrive and are placed into the previously free regions]

Page 32: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-32February 13, 2020

[Figure: users 4 & 5 arrive; users 2 & 5 then leave]

Page 33: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-33February 13, 2020

[Figure: after users 4 & 5 arrive and users 2 & 5 leave, the departed users' regions become free again, leaving scattered free fragments between the remaining users]

Page 34: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Memory Fragmentation

L04-34February 13, 2020

As users come and go, the storage becomes “fragmented”. Therefore, at some stage programs have to be moved around to compact the storage.

[Figure: the sequence of memory maps above, showing how arrivals and departures leave scattered free fragments]

Page 35: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Paged Memory Systems• Processor-generated address can be interpreted as

a pair <page number, offset>

L04-35February 13, 2020

0123

Address Spaceof User-1

10

2

3

page frame number (PN) offset

Page 36: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Paged Memory Systems• Processor-generated address can be interpreted as

a pair <page number, offset>

• A page table contains the physical address of the base of each page

L04-36February 13, 2020

0123

Address Spaceof User-1

0x60x70x10x3

0123

Page Table of User-1

10

2

3

page frame number (PN) offset

<virtual PN, physical PN>

Page 37: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Paged Memory Systems
• Processor-generated address can be interpreted as a pair <page number, offset>

• A page table contains the physical address of the base of each page

L04-37February 13, 2020

Page tables make it possible to store the pages of a program non-contiguously.

[Figure: pages 0-3 of the Address Space of User-1; the Page Table of User-1 maps them to physical page frames 0x6, 0x7, 0x1, and 0x3; each entry is a <virtual PN, physical PN> pair, and the processor-generated address is split into page frame number (PN) and offset]
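
A minimal sketch of paged translation using the example page table above (virtual pages 0-3 mapping to physical pages 0x6, 0x7, 0x1, 0x3); the 4 KB page size is an assumption for illustration.

# Minimal sketch of paged address translation with the example mapping above.

PAGE_BITS = 12                      # assume 4 KB pages
PAGE_SIZE = 1 << PAGE_BITS

page_table = {0: 0x6, 1: 0x7, 2: 0x1, 3: 0x3}   # virtual PN -> physical PN

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & (PAGE_SIZE - 1)
    ppn = page_table[vpn]           # raises KeyError for an unmapped page
    return (ppn << PAGE_BITS) | offset

print(hex(translate(0x2ABC)))       # VPN 2 -> PPN 0x1, so 0x1abc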

Page 38: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Private Address Space per User

L04-38February 13, 2020

• Each user has a page table
• Page table contains an entry for each user page

[Figure: User 1's page table maps virtual address VA1 to a frame in Physical Memory, which also holds free frames and OS pages]

Page 39: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Private Address Space per User

L04-39February 13, 2020

• Each user has a page table
• Page table contains an entry for each user page

[Figure: Users 1, 2, and 3 each have their own page table; each user's VA1 maps to a different frame in Physical Memory, alongside free frames and OS pages]

Page 40: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Where Should Page Tables Reside?

• Space required by the page tables (PT) is proportional to the virtual address space, number of users, ...

⇒ Space requirement is large
⇒ Too expensive to keep in registers

L04-40February 13, 2020

Page 41: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Where Should Page Tables Reside?

• Space required by the page tables (PT) is proportional to the virtual address space, number of users, ...

Þ Space requirement is large Þ Too expensive to keep in registers

• Idea: Keep PT of the current user in special registers

L04-41February 13, 2020

Page 42: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Where Should Page Tables Reside?

• Space required by the page tables (PT) is proportional to the virtual address space, number of users, ...

Þ Space requirement is large Þ Too expensive to keep in registers

• Idea: Keep PT of the current user in special registers– may not be feasible for large page tables – Increases the cost of context swap

L04-42February 13, 2020

Page 43: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Where Should Page Tables Reside?

• Space required by the page tables (PT) is proportional to the virtual address space, number of users, ...

Þ Space requirement is large Þ Too expensive to keep in registers

• Idea: Keep PT of the current user in special registers– may not be feasible for large page tables – Increases the cost of context swap

• Idea: Keep PTs in the main memory

L04-43February 13, 2020

Page 44: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Where Should Page Tables Reside?

• Space required by the page tables (PT) is proportional to the virtual address space, number of users, ...

⇒ Space requirement is large
⇒ Too expensive to keep in registers

• Idea: Keep PT of the current user in special registers
  – may not be feasible for large page tables
  – increases the cost of context swap

• Idea: Keep PTs in the main memory
  – needs one reference to retrieve the page base address and another to access the data word
  ⇒ doubles the number of memory references!

L04-44February 13, 2020

Page 45: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Page Tables in Physical Memory

L04-45February 13, 2020

[Figure: the page tables of User 1 and User 2 live in physical memory; finding the PFN for VA1 requires an access to that user's page table before the data itself can be accessed]

Page 48: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Page Tables in Physical Memory

L04-48February 13, 2020

[Figure: the page tables of User 1 and User 2 live in physical memory; finding the PFN for VA1 requires an access to that user's page table before the data itself can be accessed]

Idea: cache the address translation of frequently used pages – TLBs (translation lookaside buffer)

Page 49: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

A Problem in Early Sixties

• There were many applications whose data could not fit in the main memory, e.g., payroll
  – Paged memory systems reduced fragmentation but still required the whole program to be resident in the main memory

L04-49February 13, 2020

Page 50: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

A Problem in Early Sixties

• There were many applications whose data could not fit in the main memory, e.g., payroll
  – Paged memory systems reduced fragmentation but still required the whole program to be resident in the main memory

• Programmers moved the data back and forth from the secondary store by overlaying it repeatedly on the primary store
  ⇒ tricky programming!

L04-50February 13, 2020

Page 51: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Manual Overlays• Assume an instruction can address all the

storage on the drum

L04-51February 13, 2020

Ferranti Mercury1956

40k bitsmain

640k bitsdrum

Central Store

Page 52: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Manual Overlays• Assume an instruction can address all the

storage on the drum

• Method 1: programmer keeps track of addresses in the main memory and initiates an I/O transfer when required

L04-52February 13, 2020

Ferranti Mercury1956

40k bitsmain

640k bitsdrum

Central Store

Page 53: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Manual Overlays• Assume an instruction can address all the

storage on the drum

• Method 1: programmer keeps track of addresses in the main memory and initiates an I/O transfer when required– Difficult, error prone

L04-53February 13, 2020

Ferranti Mercury1956

40k bitsmain

640k bitsdrum

Central Store

Page 54: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Manual Overlays• Assume an instruction can address all the

storage on the drum

• Method 1: programmer keeps track of addresses in the main memory and initiates an I/O transfer when required– Difficult, error prone

• Method 2: automatic initiation of I/O transfers by software address translation– Brooker’s interpretive coding, 1960

L04-54February 13, 2020

Ferranti Mercury1956

40k bitsmain

640k bitsdrum

Central Store

Page 55: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Manual Overlays
• Assume an instruction can address all the storage on the drum

• Method 1: programmer keeps track of addresses in the main memory and initiates an I/O transfer when required
  – Difficult, error prone

• Method 2: automatic initiation of I/O transfers by software address translation
  – Brooker’s interpretive coding, 1960
  – Inefficient

L04-55February 13, 2020

[Figure: Ferranti Mercury (1956) central store: 40k bits of main memory backed by a 640k-bit drum]

Page 56: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Demand Paging in Atlas (1962)

L04-56February 13, 2020

Secondary(Drum)

32x6=192 pages

Primary32 Pages

512 words/page

Central Memory

“A page from secondarystorage is brought into the primary storage whenever it is (implicitly) demanded by the processor.”

Tom Kilburn

Page 57: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Demand Paging in Atlas (1962)

L04-57February 13, 2020

Secondary(Drum)

32x6=192 pages

Primary32 Pages

512 words/page

Central Memory

“A page from secondarystorage is brought into the primary storage whenever it is (implicitly) demanded by the processor.”

Tom Kilburn

Primary memory as a cachefor secondary memory

Page 58: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Demand Paging in Atlas (1962)

L04-58February 13, 2020

[Figure: Central Memory: primary store of 32 pages (512 words/page) backed by a secondary drum store of 32 x 6 = 192 pages; the user sees 32 x 6 x 512 words of storage]

“A page from secondary storage is brought into the primary storage whenever it is (implicitly) demanded by the processor.”
Tom Kilburn

Primary memory as a cache for secondary memory

Page 59: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Hardware Organization of Atlas

L04-59February 13, 2020

InitialAddressDecode

16 ROM pages0.4 ~1 µsec

2 subsidiary pages1.4 µsec

Main32 pages1.4 µsec

Drum (4)192 pages 8 Tape decks

88 sec/word

1 Page Address Register (PAR) per physical page frame

EffectiveAddress

system code(not swapped)

system data(not swapped)

0

31

PARs

<effective PN , status>

Page 60: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Hardware Organization of Atlas

L04-60February 13, 2020

InitialAddressDecode

16 ROM pages0.4 ~1 µsec

2 subsidiary pages1.4 µsec

Main32 pages1.4 µsec

Drum (4)192 pages 8 Tape decks

88 sec/word

1 Page Address Register (PAR) per physical page frame

Compare the effective page address against all 32 PARs
  match ⇒ normal access
  no match ⇒ page fault: save the state of the partially executed instruction

EffectiveAddress

system code(not swapped)

system data(not swapped)

0

31

PARs

<effective PN , status>

Page 61: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Atlas Demand Paging Scheme

L04-61February 13, 2020

InitialAddressDecode

16 ROM pages0.4 ~1 µsec

2 subsidiary pages1.4 µsec

Main32 pages1.4 µsec

Drum (4)192 pages 8 Tape decks

88 sec/word

On a page fault: - Input transfer into a free page is initiated- The Page Address Register (PAR) is updated

EffectiveAddress

system code(not swapped)

system data(not swapped)

0

31

PARs

<effective PN , status>

Page 62: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Atlas Demand Paging Scheme

L04-62February 13, 2020

[Figure: Atlas memory hierarchy: Initial Address Decode feeding 16 ROM pages (0.4~1 µsec), 2 subsidiary pages (1.4 µsec), 32 main pages (1.4 µsec), a drum (x4) of 192 pages, and 8 tape decks (88 sec/word); one Page Address Register (PAR) per physical page frame holds <effective PN, status>; system code and system data are not swapped]

On a page fault:
- Input transfer into a free page is initiated
- The Page Address Register (PAR) is updated
- If no free page is left, a page is selected to be replaced (based on usage)
- The replaced page is written on the drum (to minimize the drum latency effect, the first empty page on the drum was selected)
- The page table is updated to point to the new location of the page on the drum
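
A small sketch of the Atlas-style demand-paging policy just described; the usage-based victim selection is simplified to FIFO here, and all structures and names are illustrative.

# Demand-paging sketch: on a page fault, bring the page into a free frame and
# update that frame's Page Address Register (PAR); if no frame is free, pick a
# victim, write it back to the drum, and reuse its frame.

from collections import deque

NUM_FRAMES = 32
pars = [None] * NUM_FRAMES          # PAR per frame: which effective PN it holds
drum = {}                           # effective PN -> page contents kept on the drum
fifo = deque()                      # stands in for Atlas's usage-based selection

def access(effective_pn):
    if effective_pn in pars:                             # PAR match: normal access
        return pars.index(effective_pn)
    # Page fault: find a free frame, or evict a victim.
    if None in pars:
        frame = pars.index(None)
    else:
        victim_pn = fifo.popleft()
        frame = pars.index(victim_pn)
        drum[victim_pn] = f"contents of page {victim_pn}"   # write victim to the drum
    pars[frame] = effective_pn                           # input transfer + PAR update
    drum.pop(effective_pn, None)                         # page now lives in primary memory
    fifo.append(effective_pn)
    return frame

for pn in range(40):                                     # touch more pages than frames
    access(pn)
print(sorted(drum))                                      # pages 0..7 were evicted to the drum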

Page 63: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Caching vs. Demand Paging

L04-63February 13, 2020

[Figure: caching sits between the CPU and primary memory; demand paging sits between primary memory and secondary memory]

  Caching                        Demand paging
  cache entry                    page frame
  cache block (~32 bytes)        page (~4K bytes)

Page 64: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Caching vs. Demand Paging

L04-64February 13, 2020

[Figure: caching sits between the CPU and primary memory; demand paging sits between primary memory and secondary memory]

  Caching                        Demand paging
  cache entry                    page frame
  cache block (~32 bytes)        page (~4K bytes)
  cache miss rate (1% to 20%)    page miss rate (<0.001%)
  cache hit (~1 cycle)           page hit (~100 cycles)
  cache miss (~100 cycles)       page miss (~5M cycles)

Page 65: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Caching vs. Demand Paging

L04-65February 13, 2020

[Figure: caching sits between the CPU and primary memory; demand paging sits between primary memory and secondary memory]

  Caching                        Demand paging
  cache entry                    page frame
  cache block (~32 bytes)        page (~4K bytes)
  cache miss rate (1% to 20%)    page miss rate (<0.001%)
  cache hit (~1 cycle)           page hit (~100 cycles)
  cache miss (~100 cycles)       page miss (~5M cycles)
  a miss is handled in hardware  a miss is handled mostly in software

Page 66: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Modern Virtual Memory SystemsIllusion of a large, private, uniform store

L04-66February 13, 2020

Page 67: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Modern Virtual Memory SystemsIllusion of a large, private, uniform store

L04-67February 13, 2020

Protection & Privacy• several users, each with their private

address space and one or more shared address spaces• page table º name space

OS

useri

Page 68: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Modern Virtual Memory SystemsIllusion of a large, private, uniform store

L04-68February 13, 2020

Protection & Privacy• several users, each with their private

address space and one or more shared address spaces• page table º name space

Demand Paging• Provides the ability to run programs

larger than the primary memory• Hides differences in machine

configurations

OS

useri

PrimaryMemory

SwappingStore

Page 69: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Modern Virtual Memory SystemsIllusion of a large, private, uniform store

L04-69February 13, 2020

Protection & Privacy
• several users, each with their private address space and one or more shared address spaces
• page table ≡ name space

Demand Paging
• Provides the ability to run programs larger than the primary memory
• Hides differences in machine configurations

The price is address translation on each memory reference

[Figure: the OS and each user see their own virtual address space; a VA-to-PA mapping (TLB) translates references into Primary Memory, backed by a Swapping Store]

Page 70: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Linear Page Table

L04-70February 13, 2020

[Figure: a linear page table in memory whose entries hold PPNs for resident pages and DPNs for pages on disk, pointing to the program's data pages]

• Page Table Entry (PTE) contains:
  – A bit to indicate if a page exists
  – PPN (physical page number) for a memory-resident page
  – DPN (disk page number) for a page on the disk
  – Status bits for protection and usage
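
An illustrative encoding of the PTE fields listed above; the exact bit positions are assumptions for the sketch, not any real ISA's PTE format.

# Illustrative PTE layout: a valid bit, a couple of protection/usage bits, and
# the PPN (or DPN) in the upper bits. Bit positions are assumptions.

VALID     = 1 << 0          # page exists and is resident in memory
READABLE  = 1 << 1
WRITABLE  = 1 << 2
PPN_SHIFT = 12              # PPN (or DPN) stored above the flag bits

def make_pte(ppn, valid=True, readable=True, writable=False):
    flags = (VALID if valid else 0) | (READABLE if readable else 0) | (WRITABLE if writable else 0)
    return (ppn << PPN_SHIFT) | flags

def decode_pte(pte):
    return {"ppn_or_dpn": pte >> PPN_SHIFT,
            "valid": bool(pte & VALID),
            "readable": bool(pte & READABLE),
            "writable": bool(pte & WRITABLE)}

pte = make_pte(ppn=0x1A3, writable=True)
print(hex(pte), decode_pte(pte))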

Page 71: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Linear Page Table

L04-71February 13, 2020

VPN OffsetVirtual address

Data word

Data Pages

PPNPPN

DPNPPN

PPNPPN

Page Table

DPN

PPN

DPNDPN

DPNPPN

• Page Table Entry (PTE) contains:– A bit to indicate if a page

exists– PPN (physical page number)

for a memory-resident page– DPN (disk page number) for a

page on the disk– Status bits for protection and

usage

Page 72: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Linear Page Table

L04-72February 13, 2020

VPN OffsetVirtual address

Data word

Data Pages

PPNPPN

DPNPPN

PPNPPN

Page Table

DPN

PPN

DPNDPN

DPNPPN

PT Base Register

• Page Table Entry (PTE) contains:– A bit to indicate if a page

exists– PPN (physical page number)

for a memory-resident page– DPN (disk page number) for a

page on the disk– Status bits for protection and

usage• OS sets the Page Table

Base Register whenever active user process changes

Page 73: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Linear Page Table

L04-73February 13, 2020

VPN OffsetVirtual address

Data word

Data Pages

PPNPPN

DPNPPN

PPNPPN

Page Table

DPN

PPN

DPNDPN

DPNPPN

PT Base Register

• Page Table Entry (PTE) contains:– A bit to indicate if a page

exists– PPN (physical page number)

for a memory-resident page– DPN (disk page number) for a

page on the disk– Status bits for protection and

usage• OS sets the Page Table

Base Register whenever active user process changes V

PN

Page 75: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Linear Page Table

L04-75February 13, 2020

[Figure: the virtual address is split into VPN and offset; the PT Base Register plus the VPN selects a PTE in the Page Table, and the PPN from that PTE is combined with the offset to locate the data word in a Data Page]

• Page Table Entry (PTE) contains:
  – A bit to indicate if a page exists
  – PPN (physical page number) for a memory-resident page
  – DPN (disk page number) for a page on the disk
  – Status bits for protection and usage
• OS sets the Page Table Base Register whenever the active user process changes

Page 76: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-76February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:

Page 77: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-77February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Page 78: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-78February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Larger pages?

Page 79: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-79February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Larger pages?
• Internal fragmentation (not all memory in a page is used)
• Larger page fault penalty (more time to read from disk)

Page 80: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-80February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Larger pages?
• Internal fragmentation (not all memory in a page is used)
• Larger page fault penalty (more time to read from disk)

What about a 64-bit virtual address space???
• Even 1 MB pages would require 2^44 8-byte PTEs (35 TB!)

Page 81: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Size of Linear Page Table

L04-81February 13, 2020

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Larger pages?
• Internal fragmentation (not all memory in a page is used)
• Larger page fault penalty (more time to read from disk)

What about a 64-bit virtual address space???
• Even 1 MB pages would require 2^44 8-byte PTEs (35 TB!)

What is the “saving grace”?
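
The arithmetic behind the slide above, as a small sketch; it only uses the slide's own parameters (address width, page size, PTE size).

# Size of a single-level (linear) page table: one PTE per virtual page.

def linear_page_table(va_bits, page_bytes, pte_bytes):
    num_ptes = 2 ** va_bits // page_bytes
    return num_ptes, num_ptes * pte_bytes

# 32-bit addresses, 4 KB pages, 4-byte PTEs
ptes, size = linear_page_table(32, 4096, 4)
print(ptes == 2**20, size / 2**20, "MB per user")   # True 4.0 MB per user

# 64-bit addresses, 1 MB pages, 8-byte PTEs: how many PTEs would a flat table need?
ptes, _ = linear_page_table(64, 2**20, 8)
print(ptes == 2**44)                                # True: 2^44 PTEs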

Page 82: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Hierarchical Page Table

L04-82February 13, 2020

[Figure: two-level hierarchical page table. The 32-bit virtual address is split into a 10-bit L1 index p1 (bits 31-22), a 10-bit L2 index p2 (bits 21-12), and an offset (bits 11-0). A processor register holds the root of the current page table; p1 indexes the Level 1 page table to find a Level 2 page table, and p2 indexes that to reach the data page. A referenced page may be in primary memory, in secondary memory, or the PTE may mark a nonexistent page.]
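
A minimal sketch of the two-level walk shown above (10-bit L1 index, 10-bit L2 index, 12-bit offset); Python dicts stand in for the in-memory tables, and a missing entry models a nonexistent page.

# Two-level page table walk: p1 indexes the L1 table, p2 indexes the L2 table.

L1_SHIFT, L2_SHIFT, OFFSET_BITS = 22, 12, 12

l2_table = {0x005: 0x1A3}                 # p2 0x005 maps to physical page 0x1A3
l1_table = {0x012: l2_table}              # p1 0x012 points at that L2 table (the root)

def walk(vaddr, root):
    p1 = (vaddr >> L1_SHIFT) & 0x3FF
    p2 = (vaddr >> L2_SHIFT) & 0x3FF
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    l2 = root.get(p1)
    if l2 is None or p2 not in l2:
        raise KeyError("page fault")      # PTE of a nonexistent page
    return (l2[p2] << OFFSET_BITS) | offset

vaddr = (0x012 << L1_SHIFT) | (0x005 << L2_SHIFT) | 0xABC
print(hex(walk(vaddr, l1_table)))         # 0x1a3abc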

Page 83: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Address Translation & Protection

L04-83February 13, 2020

• Every instruction and data access needs address translation and protection checks

Physical Address

Virtual Address

AddressTranslation

Virtual Page No. (VPN) offset

Physical Page No. (PPN) offset

Exception?

Page 84: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Address Translation & Protection

L04-84February 13, 2020

• Every instruction and data access needs address translation and protection checks

Physical Address

Virtual Address

AddressTranslation

Virtual Page No. (VPN) offset

Physical Page No. (PPN) offset

Exception?

ProtectionCheck

Kernel/User Mode

Read/Write

Page 85: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Address Translation & Protection

L04-85February 13, 2020

• Every instruction and data access needs address translation and protection checks

A good VM design needs to be fast (~ one cycle) and space-efficient

[Figure: the Virtual Address (Virtual Page No. (VPN), offset) goes through Address Translation, which may raise an exception; a Protection Check, using the Kernel/User mode and the Read/Write access type, may also raise an exception; the result is the Physical Address (Physical Page No. (PPN), offset)]

Page 86: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Translation Lookaside Buffers

L04-86February 13, 2020

Address translation is very expensive!• In a two-level page table, each reference

becomes several memory accesses

Solution: Cache translations in TLB

VPN offset

V R W D tag PPN

physical address PPN offset

virtual address

hit?

(VPN = virtual page number)

(PPN = physical page number)

Page 87: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Translation Lookaside Buffers

L04-87February 13, 2020

Address translation is very expensive!
• In a two-level page table, each reference becomes several memory accesses

Solution: Cache translations in a TLB
  TLB hit ⇒ single-cycle translation
  TLB miss ⇒ page table walk to refill

[Figure: the TLB is looked up with the VPN of the virtual address; each entry holds V, R, W, D bits, a tag, and a PPN; on a hit the PPN is concatenated with the page offset to form the physical address (VPN = virtual page number, PPN = physical page number)]
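
A minimal sketch of a TLB in front of a page-table walk, illustrating the refill-on-miss behavior above; the entry count, the replacement choice, and the stand-in walk function are assumptions.

# Fully associative TLB sketch: hit returns a cached translation, miss walks
# the page table (stubbed out here) and refills the TLB.

OFFSET_BITS = 12

class TLB:
    def __init__(self, entries=64):
        self.entries = entries
        self.map = {}                          # VPN -> PPN (V/R/W/D bits omitted)

    def translate(self, vaddr, page_table_walk):
        vpn, offset = vaddr >> OFFSET_BITS, vaddr & 0xFFF
        if vpn not in self.map:                # TLB miss: walk the page table to refill
            if len(self.map) >= self.entries:
                self.map.pop(next(iter(self.map)))   # crude FIFO-ish replacement
            self.map[vpn] = page_table_walk(vpn)
        return (self.map[vpn] << OFFSET_BITS) | offset

walks = 0
def walk(vpn):                                 # stands in for the multi-access walk
    global walks
    walks += 1
    return vpn + 0x100                         # fake VPN -> PPN mapping

tlb = TLB()
for a in [0x2ABC, 0x2DEF, 0x3ABC, 0x2000]:
    tlb.translate(a, walk)
print("page table walks:", walks)              # 2 (for VPNs 0x2 and 0x3), not 4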

Page 88: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

TLB Designs

• Typically 32-128 entries, usually fully associative– Each entry maps a large page, hence less spatial locality across

pages è more likely that two entries conflict– Sometimes larger TLBs (256-512 entries) are 4-8 way set-

associative

• Random or FIFO replacement policy

L04-88February 13, 2020

Page 89: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

TLB Designs

• Typically 32-128 entries, usually fully associative– Each entry maps a large page, hence less spatial locality across

pages è more likely that two entries conflict– Sometimes larger TLBs (256-512 entries) are 4-8 way set-

associative

• Random or FIFO replacement policy• No process information in TLB?

L04-89February 13, 2020

Page 90: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

TLB Designs

• Typically 32-128 entries, usually fully associative– Each entry maps a large page, hence less spatial locality across

pages è more likely that two entries conflict– Sometimes larger TLBs (256-512 entries) are 4-8 way set-

associative

• Random or FIFO replacement policy• No process information in TLB?• TLB Reach: Size of largest virtual address space

that can be simultaneously mapped by TLB

Example: 64 TLB entries, 4KB pages, one page per entry

TLB Reach = _____________________________________?

L04-90February 13, 2020

Page 91: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

TLB Designs

• Typically 32-128 entries, usually fully associative
  – Each entry maps a large page, hence less spatial locality across pages ⇒ more likely that two entries conflict
  – Sometimes larger TLBs (256-512 entries) are 4-8 way set-associative

• Random or FIFO replacement policy
• No process information in TLB?
• TLB Reach: size of the largest virtual address space that can be simultaneously mapped by the TLB

Example: 64 TLB entries, 4KB pages, one page per entry
TLB Reach = _____________________________________?

L04-91February 13, 2020

64 entries * 4 KB = 256 KB (if contiguous)

Page 92: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Variable-Sized Page Support

L04-92February 13, 2020

[Figure: the same two-level hierarchical page table, but a Level 1 PTE may also map a large page in primary memory directly; a referenced page can be a page in primary memory, a large page in primary memory, a page in secondary memory, or a nonexistent page]

Page 94: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Variable-Size Page TLB

L04-94February 13, 2020

Some systems support multiple page sizes.

[Figure: a TLB whose entries carry V, R, W, D bits, a Tag, a PPN, and an L (page size) field, so the split between VPN and offset in the virtual address depends on the page size of the matching entry]

Page 95: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Handling a TLB Miss

L04-95February 13, 2020

Software (MIPS, Alpha)
A TLB miss causes an exception, and the operating system walks the page tables and reloads the TLB. A privileged “untranslated” addressing mode is used for the walk.

Hardware (SPARC v8, x86, PowerPC)
A memory management unit (MMU) walks the page tables and reloads the TLB.

If a missing (data or PT) page is encountered during the TLB reloading, the MMU gives up and signals a Page-Fault exception for the original instruction.

Page 96: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Hierarchical Page Table Walk: SPARC v8

L04-96February 13, 2020

[Figure: SPARC v8 hierarchical walk. The virtual address is split into Index 1 (bits 31-24), Index 2 (bits 23-18), Index 3 (bits 17-12), and Offset (bits 11-0). The Context Table Register and Context Register select a root pointer in the Context Table; Index 1, 2, and 3 then follow page table pointers (PTPs) through the L1, L2, and L3 tables to a PTE, whose PPN is combined with the Offset to form the Physical Address.]

MMU does this table walk in hardware on a TLB miss

Page 97: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020

Address Translation:putting it all together

L04-97February 13, 2020

[Figure: address translation flow. The Virtual Address first goes to the TLB lookup (hardware). On a hit, the Protection Check runs (hardware): if permitted, the Physical Address goes to the cache; if denied, a Protection Fault (SEGFAULT) is raised. On a miss, a Page Table Walk runs (hardware or software): if the page is in memory, the TLB is updated; if the page is not in memory, a Page Fault is taken and the OS loads the page (software). Where does the SEGFAULT come from?]
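
A sketch of the overall flow in the figure above: TLB lookup, page-table walk on a miss, then the protection check; the page table contents and the fault classes are illustrative.

# Putting it together: TLB lookup -> (walk on miss) -> protection check.

OFFSET_BITS = 12

class ProtectionFault(Exception): pass      # -> SEGFAULT delivered to the process
class PageFault(Exception): pass            # -> OS loads the page, then retries

tlb = {}                                    # VPN -> (PPN, writable)
page_table = {0x2: (0x1A3, False),          # resident, read-only
              0x3: None}                    # known page, but not in memory

def translate(vaddr, is_write):
    vpn, offset = vaddr >> OFFSET_BITS, vaddr & 0xFFF
    if vpn not in tlb:                      # TLB miss: walk the page table
        entry = page_table.get(vpn)
        if entry is None:                   # page not in memory (or nonexistent)
            raise PageFault(hex(vaddr))
        tlb[vpn] = entry                    # update TLB
    ppn, writable = tlb[vpn]
    if is_write and not writable:           # protection check
        raise ProtectionFault(hex(vaddr))
    return (ppn << OFFSET_BITS) | offset    # physical address, to the cache

print(hex(translate(0x2ABC, is_write=False)))   # 0x1a3abc
try:
    translate(0x2ABC, is_write=True)
except ProtectionFault as e:
    print("protection fault at", e)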

Page 98: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

L04-98MIT 6.823 Spring 2020

Next lecture:

Modern Virtual Memory Systems

February 13, 2020

Page 99: Memory Managementcsg.csail.mit.edu/6.823/Lectures/L04split.pdf · Recap: Cache Organization •Caches are small and fast memories that transparently retain recently accessed data

MIT 6.823 Spring 2020 L04-99February 13, 2020