Chapter 3 Memory Management: Virtual Memory (gmitweb.gmit.ie/pdunne/opsysbeds2/handouts/05-ch03...)
1
Chapter 3
Memory Management: Virtual Memory
Understanding Operating Systems, Fourth Edition
Objectives: You will be able to describe:
• The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
• The influence that these page allocation methods have had on virtual memory
• The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy
• The mechanics of paging and how a memory allocation scheme determines which pages should be swapped out of memory
• The concept of the working set and how it is used in memory allocation schemes
• The impact that virtual memory had on multiprogramming
• Cache memory and its role in improving system response time
2
Memory Management – Where we're going
• Disadvantages of early schemes:
– Required storing the entire program in memory
– Fragmentation
– Overhead due to relocation
• Evolution of virtual memory helps to:
– Remove the restriction of storing programs contiguously
– Eliminate the need for the entire program to reside in memory during execution
• First we have to cover some background material
Virtual Memory
3
Paged Memory Allocation
Note: This scheme works well when sectors, pages and page frames are the same size
[Figure: a job is compiled into 6 pages (Page 0 ... Page 5), stored on 6 sectors of the H/D, and loaded by the OS Memory Manager into RAM page frames (e.g. frames 99, 100, 123).]
Paged Memory Allocation: In this scheme the compiler divides the job into a number of pages. In this example the job can be broken into 6 pages. Even if the job only required 5-and-a-bit pages, it is allocated 6 pages (i.e. rounded up to the nearest whole page). It is subsequently stored on six sectors of the H/D (assuming the disk sector size matches the job page size). Later the OS memory manager loads the job into 6 page frames of RAM. The OS will probably not be able to load the job into 6 contiguous page frames. Equally, it doesn't worry about them being in order; Page 5 could be loaded into memory before Page 1 if it was read off the H/D first.
This scheme works well if page size, memory block size (page frames), and size of disk section (sector, block) are all equal.
Before executing a program, the Memory Manager:
• Determines the number of pages in the program
• Locates enough empty page frames in main memory
• Loads all of the program's pages into them
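The Memory Manager's three steps can be sketched in code (a toy model; `load_job` and its parameters are invented for illustration):

```python
import math

def load_job(job_size_lines, page_size, free_frames):
    """Sketch of the Memory Manager's three steps before execution."""
    # Step 1: determine the number of pages, rounded up to a whole page
    # (as with the 5-and-a-bit page job above).
    num_pages = math.ceil(job_size_lines / page_size)
    # Step 2: locate enough empty page frames in main memory.
    if len(free_frames) < num_pages:
        raise MemoryError("not enough empty page frames")
    # Step 3: load all of the program's pages into them.  The frames need
    # not be contiguous or in order.
    return {page: free_frames[page] for page in range(num_pages)}

# The 350-line job on the next slide, with 100-line pages, needs 4 frames.
pmt = load_job(350, 100, free_frames=[8, 3, 12, 5, 9])
```

The returned dictionary plays the role of the Page Map Table discussed below.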
4
Paged Memory Allocation (continued)
Paged memory allocation scheme for a job of 350 lines. Notice pages are not required to be in adjacent blocks
5
Benefits – Problems
• Memory usage is more efficient
– An empty page frame can be used by any page of any job
• No compaction required
• However, we need to keep track of where each job's pages are in memory
• We still have the entire job in memory
6
Job Table – Page Map Table – Memory Map Table
[Figure: the Memory Manager's tables. The JT lists Job1, Job2, Job3 and holds the address where Job1's PMT is in memory (0x03096). Job1's PMT contains 2 4 0 3, e.g. Job1 page 2 is held in page frame 0. The MMT lists page frames 0-5; a frame's location is either stored or calculated with some math in H/W: memory-address = page-frame-num * bytes-per-page-frame. Callouts 1-3 mark the lookup steps, ending with actual address 0x074 mapped.]
The Memory Manager requires three tables to keep track of the job's pages:
Job Table (JT) contains information about:
• The size of the job
• The memory location where its PMT is stored
Typically the JT entry is placed into a special register in the CPU by the OS when the job is selected to run. The memory manager H/W then has the address of the PMT (held in RAM) for this job.
Page Map Table (PMT): the job assumes its pages are all in memory, stored in order from page 0 to page XX. In reality the pages are stored in real RAM page frames, so the PMT is really a cross-reference between a page number and its corresponding page frame number. This is essentially an index into the MMT, which will tell us where the page really is.
Memory Map Table (MMT) contains:
• The location of each page frame, which could be stored in the MMT or simply calculated in H/W: memory-address = page-frame-number * bytes-per-page-frame
• Mapped/unmapped status (whether this page is in RAM or still on the H/D)
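A minimal sketch of the three tables, using the figure's values (the Python structures are illustrative, not the book's exact layout; Job1's size is a made-up example value):

```python
# JT: one entry per job -- its size and where its PMT sits in memory.
job_table = {"Job1": {"size": 400, "pmt_location": 0x3096}}

# PMT for Job1, as in the figure: page 0 -> frame 2, page 1 -> frame 4,
# page 2 -> frame 0 ("Job1 page 2 held in page frame 0"), page 3 -> frame 3.
page_map_table = [2, 4, 0, 3]

# MMT: one entry per page frame.  A frame's address can be stored there,
# or simply calculated in hardware:
def frame_address(page_frame_num, bytes_per_page_frame):
    return page_frame_num * bytes_per_page_frame
```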
7
Hardware Aside
8
Memory Map Table
• Lives in RAM and is accessed by the MMU
• Updated/modified by the OS
• Memory map tables can be extremely large
• Consider 32-bit machines, which can address 4 GB of memory
– 2^32 = 4 GB address space
– With a 4-KB page size => 1 million pages
– That means a table with 1 million entries
– Note: on the new 64-bit machines the kernel (e.g. Linux) supports a vastly larger address space (millions of GB)!
• Large, fast page mapping is a constraint on modern computers
» This describes the virtual-to-physical address mapping
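The slide's arithmetic checks out:

```python
# 32-bit address space with 4-KB pages -> about one million page-table entries.
address_space = 2 ** 32        # 4 GB
page_size = 4 * 1024           # 4 KB
entries = address_space // page_size
# entries == 2 ** 20 == 1,048,576 -- the "1 million pages" on the slide
```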
9
A brief look at the H/W: MMU and TLB
Tanenbaum section 4.3.3
10
Internals of the MMU /2
[Figure callouts 1-6: the virtual address splits into a page number and an offset in that page; the PMT/MMT supply the number of the page frame; a present/absent bit flags unmapped pages; 12 bits allow us to access all 4096 bytes within a page.]
Finish Hardware Aside
12
Paged Memory Allocation (continued)
[Figure: page size = 100; page number 1 contains the first 100 lines; the displacement describes how far away a line is from the start of its page frame.]
Displacement (offset) of a line: determines how far away a line is from the beginning of its page, and is used to locate that line within its page frame.
To determine the page number and displacement of a line:
• Page number = the integer quotient from the division of the job space address by the page size
• Displacement = the remainder from that division
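In code, the quotient/remainder rule above is a single `divmod` (a small sketch):

```python
def page_and_displacement(job_space_address, page_size):
    # Integer quotient = page number, remainder = displacement.
    return divmod(job_space_address, page_size)
```

For the next slide's example, `page_and_displacement(204, 100)` gives page 2, displacement 4.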
13
Paged Memory Allocation (continued)
CPU executes line 203. How does it now find line 204 in RAM to execute it?
page number = required line number ÷ page size
204 ÷ 100 = 2 (page number), remainder 4 (displacement)
But ... ->
Steps to determine the exact location of a line in memory:
1. Determine the page number and displacement of the line
2. Refer to the job's PMT and find out which page frame contains the required page
3. Get the address of the beginning of the page frame by multiplying the page frame number by the page frame size
4. Add the displacement (calculated in step 1) to the starting address of the page frame
Or: refer to the job's PMT and find out where in memory that page is located (as on the next slide), then add the displacement (calculated in step 1) to the starting address of the page frame.
The math here is actually carried out in H/W; the OS (memory manager) is responsible for remembering the info.
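The four steps can be sketched as follows (the PMT is modelled as a simple page-to-frame dictionary; a real machine does this in hardware):

```python
def job_address_to_memory(job_address, page_size, pmt, frame_size):
    # Step 1: page number and displacement.
    page, displacement = divmod(job_address, page_size)
    # Step 2: which page frame holds the required page?
    frame = pmt[page]
    # Step 3: start of that page frame in memory.
    frame_start = frame * frame_size
    # Step 4: add the displacement.
    return frame_start + displacement
```

With the numbers from the later MMU example (4-KB pages, virtual page 2 in frame 6), `job_address_to_memory(8192, 4096, {2: 6}, 4096)` yields 24576.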
14
But you said earlier that the pages could be anywhere in memory, so where is Page 2?
(1) Consult the Page Map Table for this job
15
Exercise
• 1 MB system RAM with a page frame size of 512 bytes
• Job 1 has 680 lines of code
– One byte per instruction (or data value)
– 512-byte page size
• The current line being executed by the CPU in Job 1 is line 25:
– LOAD R1, 518 // Load the value at memory location 518 into register 1
Job 1: Page Map Table
Page no. | Page frame num.
0 | 5
1 | 3
Question
What address in RAM is memory location 518 mapped to?
16
Answer
• Address 518 must be in the second page, page number 1 (512 bytes per page)
• 518 is displaced by 6 from the start of page number 1 (i.e. 518 - 512 = 6)
• The PMT shows page number 1 is stored in RAM page frame 3
• Page frame 3 starts at memory location 3 * 512 = 1536 (page frames are counted from 0)
• Displacing by 6 puts us at memory location 1542
• Therefore memory location 1542 holds the value that must be loaded into Register 1
17
Paged Memory Allocation (continued)
• Advantages:
– Allows jobs to be allocated in non-contiguous memory locations
– Memory is used more efficiently; more jobs can fit
• Disadvantages:
– Address resolution causes increased overhead
– Internal fragmentation still exists, though only in the last page
– Requires the entire job to be stored in memory
– Size of page is crucial (not too small, not too large)
18
Next ...
Demand Paging
19
Next step on the way to VM: Demand Paging
• Pages are brought into memory only as they are needed
• Programs are written sequentially, so not all pages are necessary at once. For example:
• User-written error handling modules
• Mutually exclusive modules
• Options are not always accessible
Demand Paging: pages are brought into memory only as they are needed, allowing jobs to be run with less main memory. It takes advantage of the fact that programs are written sequentially, so not all pages are necessary at once. For example:
• User-written error handling modules are processed only when a specific error is detected; the error might not occur at all
• Mutually exclusive modules: a job can't do input and CPU work for a module at the same time
• Certain program options are not always accessible: you can't open a file and edit its contents at the same time
20
Demand Paging (continued)
• Demand paging made virtual memory widely available (More later..)
• More jobs with less main memory
• Needs high-speed direct access storage device
• Predefined policies – How and when the pages are passed (or “swapped”)
Demand paging made virtual memory widely available. It can give the appearance of an almost infinite amount of physical memory, and allows the user to run jobs with less main memory than required in paged memory allocation. It requires the use of a high-speed direct access storage device that can work directly with the CPU. How and when the pages are passed (or "swapped") depends on predefined policies that determine when to make room for needed pages and how to do so.
21
OS tables revamped
Figure 3.5: A typical demand paging scheme
Total job pages are 15, and the number of total available page frames is 12 (the OS takes 4)
3 new fields!
The OS depends on the following tables:
• Job Table
• Page Map Table, with 3 new fields:
– Status: tells if the page is already in memory
– Modified: tells if the page contents have been modified since the last save
– Referenced: tells if the page has been referenced recently
These fields are used to determine which pages should remain in main memory and which should be swapped out; it should make sense that you swap out the least-used page.
• Memory Map Table
22
What Tanenbaum says is in the PMT (aka Page Table Entries, PTE)
23
Demand Paging (continued)
• Not enough main memory to hold all the currently active processes?
• Swapping Process:
To move in a new page, a resident page may have to be swapped back into secondary storage. Swapping involves:
• Copying the resident page to the disk (only if it was modified; no need to save it if it wasn't)
• Writing the new page into the (now) empty page frame
Requires close interaction between hardware components, software algorithms, and policy schemes
24
Paging Example - 1
Note: (Virtual) pages and (real) page frames are the same size; here it's 4 KB (as in the Pentium), but in real systems sizes range from 512 bytes to 64 KB
We have 16 virtual pages and 8 page frames
Transfers from RAM to disk are always in units of pages
X means that the page is not (currently) mapped to a frame
Assume pages (and frames) are counted from 0.
program thinks it has this much space but the machine only has this much
25
Paging Example - 2
• Example
– Assume a program tries to access address 8192
– This address is sent to the MMU
– The MMU recognizes that this address falls in virtual page 2 (pages start at zero)
– The MMU looks at its page mapping and sees that page 2 maps to physical page 6
– The MMU translates 8192 to the relevant address in page frame 6 (this being 24576)
– This address is output by the MMU, and the memory board simply sees a request for address 24576. It does not know that the MMU has intervened; it sees a request for a particular location, which it honours.
[Figure: the MMU maps the request into page frame 6.]
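The same translation in miniature, with the values straight from the example:

```python
virtual_address = 8192
page, offset = divmod(virtual_address, 4096)  # virtual page 2, offset 0
mapping = {2: 6}                              # virtual page 2 -> physical frame 6
physical_address = mapping[page] * 4096 + offset
# The memory board sees only physical_address == 24576.
```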
26
Paging Example - 3
• We have eight virtual pages which do not map to a physical page
• Each virtual page will have a present/absent bit which indicates if the virtual page is mapped to a physical page
Present/Absent bit
27
Paging Example - 4
• What happens if we try to use an unmapped page? For example, the program tries to access address 24576 (i.e. 24K)
– The MMU will notice that the page is unmapped and will cause the CPU to trap to the OS
– This trap is called a page fault
– The operating system will decide to evict one of the currently mapped pages and use its frame for the page that has just been referenced
– The page that has just been referenced is copied (from disc) into the page frame that has just been freed
– The page tables are updated
– The trapped instruction is restarted
Page fault handler: the section of the operating system that determines:
• Whether there are empty page frames in memory; if so, the requested page is copied from secondary storage
• Which page will be swapped out if all page frames are busy; this decision is directly dependent on the predefined policy for page removal
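A toy version of the page fault handler's decision (all names here are invented for illustration; `choose_victim` stands in for whatever predefined replacement policy is configured):

```python
def handle_page_fault(page, pmt, free_frames, choose_victim, write_back):
    """Sketch: bring `page` into memory, evicting a victim if needed.

    pmt maps resident pages to (frame, modified-bit).
    """
    if free_frames:
        # Empty page frames exist: just copy the requested page in.
        frame = free_frames.pop()
    else:
        # All page frames are busy: the policy picks a page to swap out.
        victim = choose_victim(pmt)
        frame, modified = pmt.pop(victim)
        if modified:
            write_back(victim)  # only a modified page must be saved to disk
    pmt[page] = (frame, False)  # freshly loaded page, not yet modified
    return frame
```

Only a modified victim is written back, matching the swapping rules above.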
28
Page faults
Two reasons why we could/would get a page fault
29
Paging Example - 5
• Example (trying to access address 24576)
– (1) The MMU causes a trap to the operating system, as the virtual page is not mapped to a physical location (shown by an X)
– (2) A virtual page that is mapped is elected for eviction (we'll assume that page 11 is nominated)
– (2a) Virtual page 11 is marked as unmapped (i.e. the present/absent bit is changed)
– (3) Physical page 7, the physical page that virtual page 11 maps onto, is written to disc (we'll assume for now that this needs to be done)
– (4) Virtual page 6 is loaded to physical address 28672 (28K)
– (5) The entry for virtual page 6 is changed so that the present/absent bit is set
– (5a) Also, the 'X' is replaced by a '7' so that it points to the correct physical page
– When the trapped instruction is re-executed it will now work correctly
[Figure callouts 1-5: the trap; page 11's entry set to x; physical page 7 written to disk; page 6 loaded from disk; its entry set to 7.]
30
Demand Paging (continued)
• Advantages:
– Job no longer constrained by the size of physical memory (concept of virtual memory)
– Utilizes memory more efficiently than the previous schemes
• Disadvantages:
– Increased overhead caused by the tables and the page interrupts
31
Demand Paging Problems: Thrashing
• An excessive amount of page swapping between main memory and secondary storage
– Operation becomes inefficient
– Caused when a page is removed from memory but is called back shortly thereafter
– Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages
– Can happen within a job (e.g., in loops that cross page boundaries)
– How would the OS spot that it's going on?
• A large number of page faults occur
– Page fault: a failure to find a page in memory
– How would the OS eliminate it? It may well remove other jobs' pages from memory to allocate space for the job that is thrashing
32
Page Replacement Policies
33
Page Replacement Policies and Concepts
• The policy that selects the page to be removed
• Crucial to system efficiency
• Types include:
– First-in first-out (FIFO) policy: removes the page that has been in memory the longest
– Least-recently-used (LRU) policy: removes the page that has been least recently accessed
• Also:
– Most recently used (MRU) policy
– Least frequently used (LFU) policy
This is an area that enjoys a lot of research activity.
FIFO: A nice analogy is the new jumper. If you buy a new jumper and wish to put it into your jumper drawer, but find that it's full, you could decide to remove your oldest jumper to make room for the newest one.
LRU: You could also decide to remove the jumper you have least recently worn to make room for the newest one.
34
FIFO Page Replacement Policy
System has only two page frames
Job
Page A
Page B
Page C
Page D
(1) Sequence of page requests in Job A
An interrupt * is generated when a new page needs to be brought into memory. Here we have 9 of 11 (poor)
(2)
A was first into memory, and someone has to make room for C, so A has to be first out
(3)
Here we see that when page C is requested, page A is removed from RAM because it was the first page into memory. This concept is analogous to a queue: the person who was first into the queue is at the head and is the first one out. FIFO isn't necessarily bad, but this example shows it can have poor performance: 9/11 = 82% failure rate (i.e. an 18% success rate).
FIFO (Belady's) anomaly: no guarantee that buying more memory will always result in better performance
35
LRU Page Replacement Policy
An interrupt * is generated when a new page is brought into memory. Here we have 8 of 11
B is the least recently used page, so it has to leave to make room for C.
Here the principle of locality is used: if we recently used a page, it's likely we will use it again in the near future.
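Both examples can be reproduced with a small simulator. The actual request sequence appears only in the figures, so the textbook's string A B A C A B D B A C D is assumed here; with two page frames it produces exactly the 9-of-11 (FIFO) and 8-of-11 (LRU) interrupt counts quoted above:

```python
from collections import OrderedDict

def page_interrupts(requests, num_frames, policy):
    """Count page interrupts under 'fifo' or 'lru' replacement."""
    frames = OrderedDict()  # head = first-in (FIFO) or least recently used (LRU)
    faults = 0
    for page in requests:
        if page in frames:
            if policy == "lru":
                frames.move_to_end(page)  # a hit refreshes recency under LRU
            continue
        faults += 1  # interrupt: the page must be brought into memory
        if len(frames) == num_frames:
            frames.popitem(last=False)  # evict the page at the head
        frames[page] = True
    return faults

requests = list("ABACABDBACD")  # assumed sequence from the figures
fifo_faults = page_interrupts(requests, 2, "fifo")  # 9 interrupts
lru_faults = page_interrupts(requests, 2, "lru")    # 8 interrupts
```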
36
Page Replacement Policies and Concepts (continued)
• Efficiency (ratio of page interrupts to page requests) is slightly better for LRU as compared to FIFO
• In LRU case, increasing main memory will cause either decrease in or same number of interrupts
• Implementing the LRU policy (NEXT SLIDE)
– One mechanism for LRU uses an 8-bit reference byte and a bit-shifting technique to track the usage of each page currently in memory
37
LRU implementation
Figure 3.11: Bit-shifting technique in LRU policy
• Initially, at time 0, each page has the leftmost bit of its reference byte set to 1; all other bits are zero
• As time moves on, at the end of each interval every reference byte is shifted right one bit; pages referenced during that interval then have their leftmost bit set to 1, the others to 0
• After 4 time intervals, the page whose reference byte holds the smallest value is the least recently used
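One time interval of the bit-shifting technique can be sketched as follows (the dictionary layout is illustrative):

```python
def age(reference_bytes, referenced):
    """Shift every page's 8-bit reference byte right by one, then set the
    leftmost bit for pages referenced in the interval just ended."""
    for page in reference_bytes:
        reference_bytes[page] >>= 1
        if page in referenced:
            reference_bytes[page] |= 0b1000_0000

# Time 0: every page starts with only the leftmost bit set.
ref = {0: 0b1000_0000, 1: 0b1000_0000, 2: 0b1000_0000}
age(ref, referenced={0, 2})  # pages 0 and 2 were used in this interval
# Page 1 now holds the smallest value, so it is the least recently used.
```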
38
Mechanisms of Paging
39
The Mechanics of Paging
• The memory manager needs info to help it decide who to swap out:
– Status bit: indicates if the page is currently in memory
– Referenced bit: indicates if the page has been referenced recently
– Modified bit: indicates if the page contents have been altered
• Used to determine if a page must be rewritten to secondary storage when it's swapped out
40
Used by the FIFO algorithm
Used by the LRU algorithm
Q: Which page will LRU swap out?
Answer: either Page 1 or Page 2, as neither has been referenced or modified (so neither would require saving to secondary storage)
41
The Working Set
42
The Working Set
• This is an innovation that improved the performance of demand paging schemes
• This program needs 120 ms to run but 900 ms to load the pages into memory! Very wasteful
• Wouldn't it be great if we could load the useful pages as we picked the program to run => much quicker execution
43
Working Set
• Working set: the set of pages residing in memory that can be accessed directly without incurring a page fault
• Improves the performance of demand paging schemes
• Requires the concept of "locality of reference"
• To load a working set into memory the system must decide:
– The maximum number of pages the operating system will allow for a working set
– If this is the first time the job is run, how can it know the working set?
• Maybe this could be observed as the program is first loaded into memory, and then this info used for its next turn on the CPU
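Under the locality-of-reference idea, the working set at time t is just the distinct pages touched in a recent window (a sketch; the window size is the policy knob discussed above):

```python
def working_set(reference_string, t, window):
    """Pages referenced during the last `window` references up to time t."""
    return set(reference_string[max(0, t - window + 1): t + 1])

# Hypothetical reference string: by time 5, only pages 3 and 4 were
# touched in the last 3 references.
pages_touched = [1, 2, 1, 3, 4, 3]
ws = working_set(pages_touched, t=5, window=3)  # {3, 4}
```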
44
Segmented Memory Allocation
45
Segmented Memory Allocation
• Programmers normally decompose their programs into modules
• With segmented memory allocation each job is divided into several segments of different sizes, one for each module that contains pieces performing related functions
• Main memory is no longer divided into page frames
• Segments are set up according to the program's structural modules when a program is compiled or assembled
– Each segment is numbered
– A Segment Map Table (SMT) is generated
What we want is for the system to conform to the programmer's model of how the program is decomposed into functions/modules. We would like function blocks to be loaded as required, enabling the different functional parts of the program as needed.
46
Segmented Memory Allocation (continued)
Figure 3.13: Segmented memory allocation. Job1 includes a main program, Subroutine A, and Subroutine B. It’s one job divided into three segments.
47
Segmented Memory Allocation (continued)
Figure 3.14: The Segment Map Table tracks each segment for Job 1
The Memory Manager tracks segments in memory using the following three tables:
• Job Table lists every job in process (one for the whole system; not shown on slide)
• Segment Map Table lists details about each segment (one for each job)
• Memory Map Table monitors the allocation of main memory (one for the whole system)
Segments don't need to be stored contiguously. The addressing scheme requires a segment number and a displacement.
48
Segmented Memory Allocation (continued)
• Advantages:
– Internal fragmentation is removed
• Disadvantages:
– Difficulty managing variable-length segments in secondary storage
– External fragmentation
49
Segmented/Demand Paged Memory Allocation
• Subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments
• It offers:
– The logical benefits of segmentation
– The physical benefits of paging
• Removes the problems of compaction, external fragmentation, and secondary storage handling
• The addressing scheme requires a segment number, a page number within that segment, and a displacement within that page
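The three-part addressing scheme can be sketched as follows (the table layouts and values are illustrative; a real system keeps these in hardware-walked tables):

```python
def translate(segment, page, displacement, smt, frame_size):
    """(segment, page, displacement) -> physical memory address."""
    pmt = smt[segment]   # the SMT points at this segment's own PMT
    frame = pmt[page]    # the PMT maps the page to a page frame
    return frame * frame_size + displacement

# Hypothetical tables: segment 0's two pages live in frames 5 and 2.
smt = {0: {0: 5, 1: 2}}
addr = translate(0, 1, 10, smt, frame_size=512)  # frame 2 -> 2*512 + 10 = 1034
```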
50
Segmented/Demand Paged Memory Allocation (continued)
• This scheme requires the following four tables:
– Job Table lists every job in process (one for the whole system)
– Segment Map Table lists details about each segment (one for each job)
– Page Map Table lists details about every page (one for each segment)
– Memory Map Table monitors the allocation of the page frames in main memory (one for the whole system)
51
Segmented/Demand Paged Memory Allocation
Figure 3.16: Interaction of JT, SMT, PMT, and main memory in a segment/paging scheme
52
Segmented/Demand Paged Memory Allocation (continued)
• Advantages:
– Large virtual memory
– Segments loaded on demand
• Disadvantages:
– Table handling overhead
– Memory needed for page and segment tables
To minimize the number of table references, many systems use associative memory to speed up the process. Its disadvantage is the high cost of the complex hardware required to perform the parallel searches.
53
Summary
54
Virtual Memory
• Demand paging allows programs to be executed even though they are not stored entirely in memory
• Requires cooperation between the Memory Manager and the processor hardware
• Advantages of virtual memory management:
– Job size is not restricted to the size of main memory
– Memory is used more efficiently
– Allows an unlimited amount of multiprogramming
55
Virtual Memory (continued)
• Advantages (continued):
– Eliminates external fragmentation and minimizes internal fragmentation
– Allows the sharing of code and data
– Facilitates dynamic linking of program segments
• Disadvantages:
– Increased processor hardware costs
– Increased overhead for handling paging interrupts
– Increased software complexity to prevent thrashing
56
Virtual Memory (continued)
Table 3.6: Comparison of virtual memory with paging and segmentation
57
Summary (continued)
• Paged memory allocations allow efficient use of memory by allocating jobs in noncontiguous memory locations
• Increased overhead and internal fragmentation are problems in paged memory allocations
• Job no longer constrained by the size of physical memory in demand paging scheme
• LRU scheme results in slightly better efficiency as compared to FIFO scheme
• Segmented memory allocation scheme solves internal fragmentation problem
58
Summary (continued)
• Segmented/demand paged memory allocation removes the problems of compaction, external fragmentation, and secondary storage handling
• Associative memory can be used to speed up the process
• Virtual memory allows programs to be executed even though they are not stored entirely in memory
• Job’s size is no longer restricted to the size of main memory by using the concept of virtual memory
• The CPU can execute instructions faster with the use of cache memory