Module 5: Memory Management Unit - 2014
DESCRIPTION
Paging: Principle of Operation, Page Allocation, H/W Support for Paging, Multiprogramming with Fixed Partitions, Segmentation, Swapping, Virtual Memory: Concept, Performance of Demand Paging, Page Replacement Algorithms, Thrashing, Locality.
TRANSCRIPT
Memory Management Walking towards the future
Memory Management (why)
• Memory is cheap today, and getting cheaper
• But applications are demanding more and more memory; there is never enough!
• Memory management involves swapping blocks of data from secondary storage.
• Memory I/O is slow compared to a CPU
• The OS must cleverly time the swapping to maximize the CPU’s efficiency
Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.
Memory Management Requirement
• Relocation
• Protection
• Sharing
• Logical organisation
• Physical organisation
Relocation
• Programmer does not know where the program will be placed in memory when it is executed
• While the program is executing, it may be swapped to disk and returned to main memory at a different location (relocated)
• Memory references must be translated in the code to actual physical memory address
Memory Management Terms
Addressing Requirement
Protection - Security
• Processes should not be able to reference memory locations in another process without permission
• Impossible to check absolute addresses at compile time
• Must be checked at run time
• Memory protection requirement must be satisfied by the processor (hardware) rather than the operating system (software)
• Operating system cannot anticipate all of the memory references a program will make
Sharing
• Allow several processes to access the same portion of memory
• Better to allow each process access to the same copy of the program rather than have their own separate copy
Logical Organization
• Programs are written in modules
• Modules can be written and compiled independently
• Different degrees of protection given to modules (read-only, execute-only)
• Share modules among processes
Physical Organization
• Memory available for a program plus its data may be insufficient
• Overlaying allows various modules to be assigned the same region of memory
• Programmer does not know how much space will be available
Picture we want to paint
Essence of memory management
The task of moving information between the two levels of memory
Basic Memory Management
Mono-programming without Swapping or Paging
Three simple ways of organizing memory: an operating system with one user process
Fixed Partitioning
• Any process whose size is less than or equal to the partition size can be loaded into an available partition
• If all partitions are full, the operating system can swap a process out of a partition
• A program may not fit in a partition. The programmer must design the program with overlays.
• Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This is called internal fragmentation.
Placement Algorithm With Partition
• Equal-size partitions
• Because all partitions are of equal size, it does not matter which partition is used
• Unequal-size partitions
• Can assign each process to the smallest partition within which it will fit
• Queue for each partition
• Processes are assigned in such a way as to minimize wasted memory within a partition
Dynamic Partition
• Partitions are of variable length and number
• Process is allocated exactly as much memory as required
• Eventually get holes in the memory. This is called external fragmentation
• Must use compaction to shift processes so they are contiguous and all free memory is in one block
Dynamic Storage Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First-fit: Allocate the first hole that is big enough.
• Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
• Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
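The three policies above can be sketched as a short simulation. The hole list below is taken from the review exercise later in this module; the `allocate` helper is illustrative, not a standard API:

```python
def allocate(holes, requests, policy):
    """Return the original size of the hole chosen for each request.

    holes: list of free-hole sizes in memory order (mutated in place).
    policy: 'first', 'best', or 'worst'.
    """
    chosen = []
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            chosen.append(None)          # request cannot be satisfied
            continue
        if policy == 'first':
            i = candidates[0]
        elif policy == 'best':           # smallest hole that fits
            i = min(candidates, key=lambda i: holes[i])
        else:                            # 'worst': largest hole
            i = max(candidates, key=lambda i: holes[i])
        chosen.append(holes[i])
        holes[i] -= req                  # leftover stays as a smaller hole
    return chosen

holes = [10, 4, 20, 18, 7, 9, 12, 15]    # KB, in memory order
print(allocate(list(holes), [12, 10, 9], 'first'))   # [20, 10, 18]
print(allocate(list(holes), [12, 10, 9], 'best'))    # [12, 10, 9]
print(allocate(list(holes), [12, 10, 9], 'worst'))   # [20, 18, 15]
```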
Dynamic Partitioning
• Next-fit
• Scans memory from the location of the last placement
• More often allocates a block of memory at the end of memory, where the largest block is found
• The largest block of memory is broken up into smaller blocks
• Compaction is required to obtain a large block at the end of memory
Best Fit Vs. First Fit
• Memory sizes 1300 and 1200
• Requests: 1000, 1100, 250

  Request   First-Fit    Best-Fit
  (start)   1300, 1200   1300, 1200
  1000      300, 1200    1300, 200
  1100      300, 100     200, 200
  250       50, 100      stuck
Best Fit Vs. First Fit
• Memory sizes 1300 and 1200
• Requests: 1100, 1050, 250

  Request   First-Fit    Best-Fit
  (start)   1300, 1200   1300, 1200
  1100      200, 1200    1300, 100
  1050      200, 150     250, 100
  250       stuck        0, 100
Placement Algorithm
• Used to decide which free block to allocate to a process (e.g., one of 16 MB).
• Goal: reduce use of the compaction procedure (it is time-consuming).
• Example algorithms:
• First-fit
• Worst-fit
• Best-fit
Final Comments
• First-fit favors allocation near the beginning: tends to create less fragmentation than Next-fit.
• Next-fit often leads to allocation of the largest block at the end of memory.
• Best-fit searches for the smallest block: the fragment left behind is as small as possible
• Main memory quickly forms holes too small to hold any process: compaction generally needs to be done more often
• First/Next-fit and Best-fit are better than Worst-fit (the name is fitting) in terms of speed and storage utilization
Final Comments - Fragmentation
• There are really two types of fragmentation:
1. Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
2. External Fragmentation – total memory space exists to satisfy a size n request, but that memory is not contiguous.
Review
• Consider a swapping system in which memory consists of the following hole sizes in memory order: 10 KB, 4 KB, 20 KB, 18 KB, 7 KB, 9 KB, 12 KB, and 15 KB. Which hole is taken for successive segment requests of
1. 12 KB
2. 10 KB
3. 9 KB
for first fit? Now repeat the question for best fit, worst fit, and next fit.
Answer:
First fit takes 20 KB, 10 KB, and 18 KB.
Best fit takes 12 KB, 10 KB, and 9 KB.
Worst fit takes 20 KB, 18 KB, and 15 KB.
Next fit takes 20 KB, 18 KB, and 9 KB.
GTU - 2011
Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would each of the First-fit, Best-fit and Worst-fit algorithms place processes of 212K, 417K, 112K and 426K (in order)?
Which algorithm makes the most efficient use of memory? Show the diagram of memory status in each case.
Assume that the list of holes in a variable-partition memory system contains the following entries (in the given order): 190 KB, 550 KB, 220 KB, 420 KB, 650 KB, and 110 KB. Consider the following sequence of requests (in the given order): A = 210 KB, B = 430 KB, C = 100 KB, D = 420 KB, E = 515 KB. Determine which holes would be allocated to which request by each of the following schemes, and compute total internal fragmentation and external fragmentation for each algorithm.
a. First fit
b. Best fit
c. Worst fit
Answer:
Total internal fragmentation for first fit = 750 KB
Total internal fragmentation for best fit = 275 KB
Total internal fragmentation for worst fit = 880 KB
Review
• Assume that the main memory has the following 5 fixed partitions with the following sizes: 100KB, 500KB, 200KB, 300KB and 600KB (in order)
a) How would each of the First-fit, Best-fit and Worst-fit algorithms place processes of 212KB, 417KB, 112KB and 426KB (in order)?
b) Compute the total memory size that is not used for each algorithm.
c) Which algorithm makes the most efficient use of the memory?
Solution
Total memory size that is not used for each algorithm:
• First-fit = 1700 – 741 = 959
• Best-fit = 1700 – 1167 = 533
• Worst-fit = 1700 – 741 = 959
Memory utilization ratio:
• First-fit = 741 / 1700 = 43.5%
• Best-fit = 1167 / 1700 = 68.6%
• Worst-fit = 741 / 1700 = 43.5%
• Best-fit makes the most efficient use of the memory
Memory Management with Bit Maps
• Part of memory with 5 processes, 3 holes
• tick marks show allocation units
• shaded regions are free
• Corresponding bit map
• Same information as a list
Memory Management with Linked Lists
Four neighbor combinations for the terminating process X
MMU
Virtual or Logical Memory
• The basic idea behind virtual memory is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it.
• The operating system keeps those parts of the program currently in use in main memory, and the rest on the disk.
• Program-generated addresses are called virtual addresses and form the virtual address space.
• The MMU (Memory Management Unit) maps virtual addresses onto physical memory addresses.
Case Study
• We have a computer that can generate 16-bit addresses, from 0 up to 64K. These are the virtual addresses.
• This computer, however, has only 32 KB of physical memory, so although 64-KB programs can be written, they cannot be loaded into memory in their entirety and run.
• A complete copy of a program’s core image, up to 64 KB, must be present on the disk, however, so that pieces can be brought in as needed.
Paging
• Divide logical memory into blocks of the same size called pages
• Divide physical memory into fixed-sized blocks called frames. Keep track of all free frames
• To run a program of size n pages, need to find n free frames
• Set up a page table to translate logical to physical addresses
• The pages and frames are always the same size
Case Study – Continue
• In this example pages are 4 KB, but page sizes from 512 bytes to 64 KB have been used in real systems.
• With 64 KB of virtual address space and 32 KB of physical memory, we get 16 virtual pages and 8 page frames.
• Transfers between RAM and disk are always in units of a page.
• When the program tries to access address 0, for example, using the instruction
• MOV REG,0
• virtual address 0 is sent to the MMU. The MMU sees that this virtual address falls in page 0 (0 to 4095), which according to its mapping is page frame 2 (8192 to 12287).
• It thus transforms the address to 8192 and outputs address 8192 onto the bus.
• Example, virtual address 20500 is 20 bytes from the start of virtual page 5 (virtual addresses 20480 to 24575) and maps onto physical address 12288 + 20 = 12308.
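The mapping in this case study can be sketched in a few lines. Only the two page-table entries used by the examples above are filled in, and the `translate` helper is hypothetical:

```python
PAGE_SIZE = 4096  # 4-KB pages, as in the case study

# Partial page table from the case-study figure (virtual page -> page frame);
# the full 16-entry table is in the figure, these two entries suffice here.
page_table = {0: 2, 5: 3}

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]          # a KeyError here would model a page fault
    return frame * PAGE_SIZE + offset

print(translate(0))      # 8192  (page 0 -> frame 2)
print(translate(20500))  # 12308 (page 5 -> frame 3, offset 20)
```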
Review
• What is the difference between a physical address and a virtual address?
Answer:
Real memory uses physical addresses. These are the numbers that the memory chips react to on the bus. Virtual addresses are the logical addresses that refer to a process’ address space. Thus a machine with a 16-bit word can generate virtual addresses up to 64K, regardless of whether the machine has more or less memory than 64 KB.
Review
1.For each of the following decimal virtual addresses, compute the virtual page number and offset for a 4-KB page and for an 8 KB page: 20000, 32768, 60000.
Answer: For a 4-KB page size the (page, offset) pairs are (4, 3616), (8, 0), and (14, 2656). For an 8-KB page size they are (2, 3616), (4, 0), (7, 2656).
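A sketch of the computation behind these answers; the `split` helper is illustrative:

```python
def split(vaddr, page_size):
    """Return (virtual page number, offset within the page)."""
    return divmod(vaddr, page_size)

for addr in (20000, 32768, 60000):
    print(addr, split(addr, 4 * 1024), split(addr, 8 * 1024))
# 20000 (4, 3616) (2, 3616)
# 32768 (8, 0) (4, 0)
# 60000 (14, 2656) (7, 2656)
```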
Review
• Give the physical address corresponding to each of the following virtual addresses:
(a) 20
(b) 4100
(c) 8300
Answer: (a) 8212 (b) 4100 (c) 24684
Review Continue
• A machine has 48-bit virtual addresses and 32-bit physical addresses. Pages are 8 KB. How many entries are needed for the page table?
Answer: 2^48 / 2^13 = 2^35 entries.
Address Translation
Page Fault
• What happens if the program tries to use an unmapped page, for example, by using the instruction
• MOV REG,32780
• which is byte 12 within virtual page 8 (starting at 32768)?
• The MMU notices that the page is unmapped (indicated by a cross in the figure) and causes the CPU to trap to the operating system. This trap is called a page fault.
• The operating system picks a little-used page frame and writes its contents back to the disk. It then fetches the page just referenced into the page frame just freed, changes the map, and restarts the trapped instruction.
Processes & Frames & Page Fault
(Figure: pages A.0–A.3, B.0–B.2, C.0–C.3, and D.0–D.4 of four processes mapped into page frames.)
Page Table of Process
Summary of Mapping
• The virtual address is split into a virtual page number (high-order bits) and an offset (low-order bits).
• For example, with a 16-bit address and a 4-KB page size, the upper 4 bits could specify one of the 16 virtual pages and the lower 12 bits would then specify the byte offset (0 to 4095) within the selected page.
• However, a split with 3 or 5 or some other number of bits for the page is also possible. Different splits imply different page sizes.
Page Table
Purpose : map virtual pages onto page frames
• Major issues to be faced:
1. The page table can be extremely large
2. The mapping must be fast
1. Page Table is extremely large
• Modern computers use virtual addresses of at least 32 bits. With, say, a 4-KB page size, a 32-bit address space has 1 million pages, and a 64-bit address space has more than you want to contemplate.
• With 1 million pages in the virtual address space, the page table must have 1 million entries. And remember that each process needs its own page table (because it has its own virtual address space).
2. Mapping must be fast
• The second point is a consequence of the fact that the virtual-to-physical mapping must be done on every memory reference.
• A typical instruction has an instruction word, and often a memory operand as well. Consequently, it is necessary to make 1, 2, or sometimes more page table references per instruction.
• If an instruction takes, say, 4 nsec, the page table lookup must be done in under 1 nsec to avoid becoming a major bottleneck.
Example
• A process of 100 KB is transferred from the backing store to memory, and 200 KB of a process is transferred to the backing store. Average disk latency = 8 ms and the transfer rate is 1 MB/s. What is the swap time?
• Answer: 316 ms
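A sketch of this arithmetic, assuming each transfer pays the average disk latency once (units: KB and ms):

```python
LATENCY_MS = 8
RATE_KB_PER_MS = 1           # 1 MB/s = 1000 KB per 1000 ms = 1 KB/ms

def transfer_time(size_kb):
    # one disk-latency payment plus the data transfer itself
    return LATENCY_MS + size_kb / RATE_KB_PER_MS

swap_time = transfer_time(100) + transfer_time(200)   # swap in + swap out
print(swap_time)   # 316.0 (ms)
```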
Solution - Multilevel Page Table
• Multilevel page tables avoid keeping one huge page table in memory all the time: this works because most processes use only a few of their pages frequently and the rest seldom, if at all.
Scheme: the page table itself is paged.
• Example, using 32-bit addressing:
• The top-level table contains 1,024 entries (indices). The entry at each index contains the page frame number of a 2nd-level page table. This index (or page number) is found in the 10 highest (leftmost) bits of the virtual address generated by the CPU.
• The next 10 bits in the address hold the index into the 2nd-level page table. This location holds the page frame number of the page itself.
• The lowest 12 bits of the address are the offset, as usual.
Review of Multilevel Page Table
• Assume a 32-bit system, with a 2-level page table (page size is 4 KB, |p1| = |p2| = 10 bits, |offset| = 12 bits).
• Program “A” on this system requires 12 MB of memory. The bottom 4 MB of memory are used by the program text segment, followed by 4 MB for data and, lastly, the top 4 MB for the stack.
• Questions:
1. How many page table pages are actually required for this process?
2. Describe the lookup within the page tables of address 0x00403004.
Review
• We use the following scheme:

  page number     | page offset
  p1 (10) p2 (10) | d (12)

• The 12 least significant bits in this address allow access to 2^12 bytes – 4 KB.
• These are pointed to by any of the 2^10 entries of p2. In total, a second-level page table can point to 2^22 bytes – 4 MB.
• Each such page table is pointed to by a first-level table entry.
• In our case we require 4 page table pages: a single first-level page table (also known as the “directory”), which points to 3 second-level page tables.
Two-level Page Tables (cont.)
Ex. Given 32 bit virtual address 00403004 (hex) = 4,206,596 (dec)
converting to binary we have:
0000 0000 0100 0000 0011 0000 0000 0100
regrouping 10 highest bits, next 10 bits, remaining 12 bits:
0000 0000 01 00 0000 0011 0000 0000 0100
PT1 = 1 PT2 = 3 offset = 4
PT1 = 1 => go to index 1 in top-level page table. Entry here is the page frame number of the 2nd-level page table. (entry =1 in this ex.)
PT2 = 3 => go to index 3 of 2nd-level table 1. Entry here is the no. of the page frame that actually contains the address in physical memory. (entry=3 in this ex.) The address is found using the offset from the beginning of this page frame. (Remember each page frame corresponds to 4096 addresses of bytes of memory.)
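The 10/10/12 split described above can be sketched with bit operations; the `split_two_level` helper is illustrative:

```python
def split_two_level(vaddr):
    """Split a 32-bit virtual address into (PT1, PT2, offset): 10|10|12 bits."""
    offset = vaddr & 0xFFF          # low 12 bits
    pt2 = (vaddr >> 12) & 0x3FF     # next 10 bits: index into 2nd-level table
    pt1 = vaddr >> 22               # top 10 bits: index into top-level table
    return pt1, pt2, offset

print(split_two_level(0x00403004))  # (1, 3, 4)
```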
Review
• 32-bit virtual address, 4-KB pages, lookup of 0x00403004 (4,206,596 dec):
  binary 0000 0000 01 | 00 0000 0011 | 0000 0000 0100
  p1 = 0000000001 = 1 (dec), p2 = 0000000011 = 3 (dec), d = 000000000100 = 4 (dec)
(Figure: the top-level page table has 1,024 entries, each covering a 4-MB block; entry 1 covers 4–8 MB. The selected second-level table has 1,024 entries, each covering a 4-KB page; entry 3 corresponds to bytes 12,288–16,383 of that block. Adding the offset 4 gives byte 12,292.)
Structure of Page Table Entry
• If a referenced page is not in memory, the present/absent bit will be zero; a page fault occurs and the operating system handles it (bringing the page in, or signalling the process on an invalid access).
•Memory protection in a paged environment is accomplished by protections for each frame, also kept in the page table. One bit can define a page as read-only.
• The “dirty bit” (also called the modified bit) is set when a page has been written to, i.e., modified. When the operating system decides to replace that page frame, if the dirty bit is set, the contents must be written back to disk; if not, that step is not needed: the disk already contains a copy of the page frame. The referenced bit, by contrast, records that the page is in use.
Implementation of Page Table
• Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates the size of the page table
• In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
• The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)
Translation Lookaside Buffer (TLB)
• It is hardware; it is actually a cache.
• When a virtual address is generated by the CPU, its page number is presented to the TLB. If the page number is found, its frame is immediately available and used to access memory.
• If the page number is not in the TLB (a miss), a memory reference to the page table must be made.
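A toy model of this behavior; the page table, TLB size, and eviction policy here are made up for illustration (real TLBs use hardware LRU-like replacement):

```python
page_table = {0: 2, 1: 7, 5: 3}    # virtual page -> frame (hypothetical)
tlb = {}                            # cached subset of page_table
TLB_SIZE = 2

def lookup(page):
    """Return (frame, 'hit'/'miss'), consulting the TLB first."""
    if page in tlb:
        return tlb[page], 'hit'
    frame = page_table[page]        # the extra memory reference on a TLB miss
    if len(tlb) >= TLB_SIZE:
        tlb.pop(next(iter(tlb)))    # crude eviction of the oldest cached entry
    tlb[page] = frame
    return frame, 'miss'

print(lookup(5))   # (3, 'miss') -- had to walk the page table
print(lookup(5))   # (3, 'hit')  -- answered directly from the TLB
```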
Paging Hardware With TLB
Segmentation
• Segmentation is a technique for breaking memory up into logical pieces
• Each “piece” is a grouping of related information
• data segments for each process
• code segments for each process
• data segments for the OS
• etc.
• Like paging, use virtual addresses and use disk to make memory look bigger than it really is
• Segmentation can be implemented with or without paging
Segmentation
(Figure: a logical address space divided into segments: OS code, OS data, OS stack, P1 code and data, P2 code and data, and a shared print function.)
Addressing Segments
• User generates logical addresses
• These addresses consist of a segment number and an offset into the segment
• Use the segment number to index into a table
• The table contains the physical address of the start of the segment, often called the base address
• Add the offset to the base to generate the physical address
• Before doing this, check the offset against a limit; the limit is the size of the segment
Addressing Segments
(Figure: the logical address (s, o) indexes the segment table with s to obtain (limit, base); if o < limit, the physical address is base + o, otherwise an error is raised.)
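The check-and-add logic above can be sketched as follows; the segment table values are made up for illustration:

```python
segment_table = [            # (limit, base) per segment number, hypothetical
    (1000, 4000),
    (400, 9000),
]

def seg_translate(seg, offset):
    limit, base = segment_table[seg]
    if offset >= limit:
        # offset beyond the segment's size: hardware raises a fault
        raise MemoryError("segment violation (offset beyond limit)")
    return base + offset

print(seg_translate(0, 123))   # 4123
print(seg_translate(1, 399))   # 9399
```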
Segmentation Hardware
• Sounds very similar to paging
• Big difference: segments can be variable in size
• Most systems provide segment registers
• If a reference isn’t found in one of the segment registers:
• trap to the operating system
• OS does a lookup in the segment table and loads the new segment descriptor into the register
• return control to the user and resume
• Again, similar to paging
Logical Addresses
(Figures: logical address formats under paging and under segmentation.)
Mix IT – Paging & Segmentation
• A virtual address becomes a segment number, a page within that segment, and an offset within the page.
• The segment number indexes into the segment table which yields the base address of the page table for that segment.
• Check the remainder of the address (page number and offset) against the limit of the segment.
• Use the page number to index the page table. The entry is the frame. (The rest of this is just like paging.)
• Add the frame and the offset to get the physical address.
(Figure: segmented paging: the virtual address from the user is checked against the segment table entry (limit, base) to form a linear address, which is then resolved through the page directory (Dir/PT-1), page table, and offset to a page frame, starting from the directory base.)
Benefits:
1. faster process start times
2. faster process growth
3. memory sharing between processes
Costs:
1. somewhat slower context switches
2. slower address translation
Motivation for Page Replacement
• When a page fault occurs, the operating system must choose a page to remove from memory to make room for the page that has to be brought in.
• A page-replacement strategy is characterized by:
• the heuristic it uses to select a page for replacement
• the overhead it incurs
FIFO
• Treats page frames allocated to a process as a circular buffer
• When the buffer is full, the oldest page is replaced; hence first-in, first-out
• A frequently used page is often the oldest, so it will be repeatedly paged out by FIFO
• Simple to implement: requires only a pointer that circles through the page frames of the process
Example - FIFO
Belady’s (or FIFO) Anomaly
Certain page reference patterns actually cause more page faults when number of page frames allocated to a process is increased
FIFO page replacement algorithm
Disadvantages:
• The oldest page may be needed again soon
• Some page may be important throughout execution
• It will get old, but replacing it will cause an immediate page fault
Review - FIFO
• If FIFO page replacement is used with four page frames and eight pages, how many page faults will occur with the reference string
0 1 7 2 3 2 7 1 0 3
• if the four frames are initially empty?
Answer: FIFO yields 6 page faults
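A minimal simulation of FIFO replacement, checked against the review answer above:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()          # oldest page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # hit: FIFO order is unaffected by use
        faults += 1
        if len(frames) == n_frames:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

print(fifo_faults([0, 1, 7, 2, 3, 2, 7, 1, 0, 3], 4))   # 6
```

Running it on the string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 also reproduces Belady's anomaly: 9 faults with 3 frames but 10 with 4.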
Optimal Page Replacement
• The Optimal policy selects for replacement the page that will not be used for the longest period of time.
• Impossible to implement (need to know the future) but serves as a standard to compare with the other algorithms we shall study.
• On the second run of a program, if the operating system kept track of all page references, the “Optimal Page Replacement Algorithm” could be used:
Example
One more example
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• 4-frames example
• How do you know future use? You don’t!
• Used for measuring how well your algorithm performs
6 page faults
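A sketch of the Optimal policy as a simulation; it “cheats” by scanning the rest of the reference string, which is exactly why it cannot be implemented online:

```python
def opt_faults(refs, n_frames):
    """Optimal replacement: evict the page whose next use is farthest away."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            def next_use(p):
                future = refs[i + 1:]
                # pages never used again are the best victims
                return future.index(p) if p in future else float('inf')
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))   # 6
```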
How can we do better?
• Need an approximation of how likely each frame is to be accessed in the future
• If we base this on past behavior we need a way to track past behavior
• Tracking memory accesses requires hardware support to be efficient
Page table: referenced and dirty bits
• Each page table entry (and TLB entry!) has a• Referenced bit - set by TLB when page read / written• Dirty / modified bit - set when page is written
• Idea: use the information contained in these bits to drive the page replacement algorithm
Not recently used page replacement alg.
• Uses the Referenced Bit and the Dirty Bit
• Initially, all pages have• Referenced Bit = 0• Dirty Bit = 0
• Periodically... (e.g. whenever a timer interrupt occurs)• Clear the Referenced Bit• Referenced bit now indicates “recent” access
Not recently used page replacement alg.
• When a page fault occurs, categorize each page:
• Class 1: Referenced = 0, Dirty = 0
• Class 2: Referenced = 0, Dirty = 1
• Class 3: Referenced = 1, Dirty = 0
• Class 4: Referenced = 1, Dirty = 1
• Choose a victim page from class 1 … why?
• If none, choose a page from class 2 … why?
• If none, choose a page from class 3 … why?
• If none, choose a page from class 4 … why?
Although class 2 pages seem, at first glance, impossible, they occur when a class 4 page has its R bit cleared by a clock interrupt.
Second chance page replacement alg.
• An implementation of NRU based on FIFO
• Pages kept in a linked list
• Oldest is at the front of the list
• Look at the oldest page
• If its “referenced bit” is 0, select it for replacement
• Else, it was used recently; don’t want to replace it
• Clear its “referenced bit”
• Move it to the end of the list
• Repeat
Example
Clock algorithm (an implementation of NRU)
• Maintain a circular list of pages in memory
• Set a bit for the page when a page is referenced
• The clock sweeps over memory looking for a victim page that does not have the referenced bit set
• If the bit is set, clear it and move on to the next page
• Replaces pages that haven’t been referenced for one complete clock revolution; essentially an implementation of NRU
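A minimal sketch of the clock algorithm; the reference string and frame count below are arbitrary:

```python
class Clock:
    """Minimal clock (second-chance) replacement over a fixed set of frames."""
    def __init__(self, n_frames):
        self.pages = [None] * n_frames   # page resident in each frame
        self.ref = [0] * n_frames        # referenced bits
        self.hand = 0

    def access(self, page):
        """Return True on a page fault."""
        if page in self.pages:
            self.ref[self.pages.index(page)] = 1   # hit: set referenced bit
            return False
        # fault: sweep until a frame with referenced bit 0 is found
        while self.ref[self.hand] == 1:
            self.ref[self.hand] = 0                # give a second chance
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page               # replace the victim
        self.ref[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return True

clock = Clock(3)
faults = sum(clock.access(p) for p in [1, 2, 3, 1, 4, 5])
print(faults)   # 5
```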
Least Recently Used (LRU)
• Replaces the page that has not been referenced for the longest time, i.e., evicts the page that was used the longest time ago
• By the principle of locality, this should be the page least likely to be referenced in the near future.
• Temporal locality: Memory accessed recently tends to be accessed again soon
• Spatial locality: memory locations near recently accessed memory are likely to be referenced soon
Example - LRU
One more example
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
8 page faults
(Figure: frame contents after each reference in the LRU example.)
Implementation
• Every time a page is accessed, record a timestamp of the access time
• When choosing a page to evict, scan over all pages and throw out the page with the oldest timestamp
Review - LRU
• If LRU page replacement is used with four page frames and eight pages, how many page faults will occur with the reference string
0 1 7 2 3 2 7 1 0 3
• if the four frames are initially empty?
Answer: LRU yields 7 page faults
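A minimal simulation of LRU, checked against the review answer above:

```python
def lru_faults(refs, n_frames):
    """Count page faults under LRU: evict the least recently used page."""
    frames = []               # most recently used page at the end
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)       # re-appended below as most recent
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)         # least recently used is at the front
        frames.append(page)
    return faults

print(lru_faults([0, 1, 7, 2, 3, 2, 7, 1, 0, 3], 4))   # 7
```

The same function reproduces the 8 faults of the earlier LRU example on 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 4 frames.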
Not frequently used algorithm (NFU)
• Bases the decision on frequency of use rather than recency
• Associate a counter with each page
• On every clock interrupt, the OS looks at each page
• If the Reference Bit is set, increment that page’s counter and clear the bit
• The counter approximates how often the page is used
• For replacement, choose the page with the lowest counter
Not frequently used algorithm (NFU)
• Problem:
• Some page may be heavily used, so its counter is large
• The program’s behavior changes
• Now, this page is not used ever again (or only rarely)
• This algorithm never forgets!
• This page will never be chosen for replacement!
Modified NFU with aging
• Associate a counter with each page
• On every clock tick, the OS looks at each page:
• Shift the counter right 1 bit (divide its value by 2)
• If the Reference Bit is set, set the most-significant bit and clear the Referenced Bit
T1: 100000 = 32
T2: 010000 = 16
T3: 001000 = 8
T4: 000100 = 4
T5: 100010 = 34
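The counter evolution above (R bits 1, 0, 0, 0, 1 at ticks T1–T5) can be reproduced with a small sketch; the 6-bit width matches the example:

```python
BITS = 6   # counter width used in the example

def age(counter, referenced):
    """One clock tick of the aging algorithm on a single page's counter."""
    counter >>= 1                      # divide by 2: old references fade away
    if referenced:
        counter |= 1 << (BITS - 1)     # set the most-significant bit
    return counter

c = 0
for r in (1, 0, 0, 0, 1):              # R bit observed at ticks T1..T5
    c = age(c, r)
    print(f"{c:0{BITS}b} = {c}")       # 100000 = 32, ..., 100010 = 34
```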
Working set page replacement
• Demand paging
• Pages are only loaded when accessed
• When a process begins, all pages are marked INVALID
• Locality of Reference
• Processes tend to use only a small fraction of their pages
• Working Set
• The set of pages a process needs
• If the working set is in memory, no page faults
• What if you can’t get the working set into memory?
Working set page replacement
• Thrashing
• If you can’t get the working set into memory, page faults occur every few instructions
• Little work gets done
• Most of the CPU’s time is spent on overhead
Working set page replacement
• Based on prepaging (prefetching)
• Load pages before they are needed
• Main idea: try to identify the process’s “working set”
• How big is the working set?
• Look at the last K memory references
• As K gets bigger, more pages are needed
• In the limit, all pages are needed
Working set page replacement
(Figure: the size of the working set as a function of k, the time interval.)
Working set page replacement
• Idea:
• Look back over the last T msec of time
• Which pages were referenced? This is the working set.
• Current Virtual Time
• Only consider how much CPU time this process has seen
• Implementation
• On each clock tick, look at each page: was it referenced?
• Yes: make a note of the Current Virtual Time
• If a page has not been used in the last T msec:
• It is not in the working set!
• Evict it; write it out if it is dirty
Working set page replacement
WSClock page replacement algorithm - Implementation
• All pages are kept in a circular list (ring)
• As pages are added, they go into the ring
• The “clock hand” advances around the ring
• Each entry contains the “time of last use”
• Upon a page fault:
• If the Referenced Bit = 1:
• The page is in use now. Do not evict.
• Clear the Referenced Bit.
• Update the “time of last use” field.
WSClock page replacement algorithm
• If the Referenced Bit = 0:
• If the age of the page is less than T:
• This page is in the working set; advance the hand and keep looking
• If the age of the page is greater than T:
• If the page is clean, reclaim the frame and we are done!
• If the page is dirty, schedule a write for the page, advance the hand and keep looking
Thank You