CT213 – Memory Management Petronel Bigioi


Page 1: CT213 – Memory Management Petronel Bigioi. Content Memory management requirements –Relocation –Protection –Sharing –Logical organization –Physical organization

CT213 – Memory Management

Petronel Bigioi

Page 2:

Content

• Memory management requirements
  – Relocation
  – Protection
  – Sharing
  – Logical organization
  – Physical organization
• Address space of a process
  – Address binding
  – Static address binding
  – Dynamic address binding
  – Hardware assisted relocation and protection
• Segmentation
  – Segmented virtual addresses
  – Address translation
  – Segmentation summary
• Paging
  – Paged virtual addresses
  – Address translation
  – Paging summary
• Combining segmentation and paging
• Operating systems policies

Page 3:

Memory management

• In uni-programming systems, the main memory is divided into two parts:
  – one for the operating system
  – the other for the program being executed
• In multiprogramming systems, the user part of memory has to be further subdivided to accommodate multiple processes.
  – The task of subdivision is carried out by the operating system and is known as memory management.
  – Memory needs to be allocated efficiently, to pack as many processes into memory as possible.

Page 4:

Memory management requirements

• Relocation
  – Dynamically loading the program into an arbitrary memory area, whose address limits are known only at execution time
• Protection
  – Each process should be protected against unwanted interference from other processes
• Sharing
  – Any protection mechanism should be flexible enough to allow several processes to access the same portion of main memory
• Logical organization
  – Most programs are organized in modules, some of which are unmodifiable (read only and/or execute only) and some of which contain data that can be modified; the operating system must take care of the possibility of sharing modules across processes
• Physical organization
  – Memory is organized as a hierarchy of at least two levels; the OS should hide this fact and should perform the data movement between main memory and secondary memory without the programmer's concern

Page 5:

Memory hierarchy review

• It is a tradeoff between size, speed and cost.
• Registers
  – The fastest memory elements, but small storage and very expensive
• Cache
  – Fast and small compared to main memory; acts as a buffer between the CPU and main memory: it contains the most recently used memory locations (both address and contents are recorded here)
• Main memory is the RAM of the system
• Disk storage (HDD)

[Diagram: Registers (CPU) → Cache (hardware controlled) → Main Memory → Disk Storage, connected by a specialized bus (internal or external to the CPU), the memory bus, and the I/O bus]

Page 6:

Cache review

• Every address reference goes first to the cache; if the desired address is not there, we have a cache miss.
• The contents are then fetched from main memory into the indicated CPU register, and the content is also saved into the cache memory.
• Most software exhibits temporal locality of access, meaning that the same address is likely to be used again soon; if so, the address will be found in the cache, and we have a cache hit.
• Transfers between main memory and cache occur at the granularity of cache lines, around 32 or 64 bytes (rather than bytes or processor words). Burst transfers of this kind receive hardware support and exploit spatial locality of access (future accesses are often to addresses near the previous one).
• Depending on the write policy, caches can be
  – Write-through (the update is propagated to main memory as well as to the cache)
  – Write-back (the update is made only to the cache, which propagates it to main memory at a later time)
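The line granularity and the write-back policy can be sketched in a few lines of code. This is an illustrative model, not from the slides: `LINE_SIZE`, `line_number` and `WriteBackLine` are hypothetical names, and real caches track tags and evictions in hardware.

```python
# Minimal sketch: how byte addresses map onto cache lines, and how a
# write-back line defers the main-memory update behind a dirty bit.
LINE_SIZE = 64  # bytes per cache line (typically 32 or 64)

def line_number(addr):
    # Addresses within the same 64-byte block share a line (spatial locality)
    return addr // LINE_SIZE

class WriteBackLine:
    def __init__(self):
        self.data = {}
        self.dirty = False  # main memory is updated only at eviction time

    def write(self, offset, value):
        self.data[offset] = value
        self.dirty = True   # write-back: the cache alone holds the new value
```

A write-through cache would instead update main memory in the same `write` call, trading extra bus traffic for a line that is never dirty.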

Page 7:

Process address space

• When accessing memory, a process is said to operate within an address space (data items are accessible within the range of addresses available to the process).
• E.g., a RISC processor has only a few instructions that contain addresses (an address field holding a representation of a memory address):
  – LOAD (a specified processor register with the contents of an address)
  – STORE (the contents of a processor register to a specified memory address)
  – CALL (a procedure at a specified address)
  – JUMP or BRANCH (to a specified address)
• The number of bits allocated to specify the address is an architectural decision.
  – Many early computers had 16 bits for the address (thus allowing a space of 64 KB of direct addressing).
  – A typical value is now 32, which allows for 4 GB of directly addressable memory space; we say that such a system gives a virtual address space of 4 GB (the amount of physical memory in such a system is likely to be less than this).

Page 8:

Address binding

• An address used in an instruction can point anywhere in the virtual address space of the process, but it must still be bound to a physical memory address.
• Programs are made of modules. Compilers or assemblers translating a module don't know where the module will be loaded in physical memory.
  – One way to deal with this is to assume that the code they output will start at address zero in memory.
  – A linker can take a sequence of such modules and create a single composite module, by adjusting the (relative) addresses in all but the first module; the addresses in the resulting module are still relative to its start.
  – Compilers have the whole virtual address space at their disposal.
• The PC contains a virtual address that must be translated to a physical address before an instruction is fetched. Addresses of operands are also virtual addresses that need to be translated before the operand is brought from main memory (or cache).
• Address translation can be dynamic or static.

[Diagram: CPU registers issue a virtual address to the MMU (Memory Management Unit), which translates it to a real address; the hardware cache and main memory are reached over the memory bus, with disk storage on the I/O bus]

Page 9:

Static address binding

• In a simple system, a module has to be loaded into physical main memory.
• The OS is responsible for managing the memory, so it gives the loader a base address at which to load the module (assuming the OS has identified a large enough free memory zone to contain the program).
  – The loader should adjust all the relative addresses in the module, converting them to absolute physical addresses.
  – This is called static relocation or static binding.
• Problems with static binding:
  – Once loaded, the code or data of the program can't be moved in memory without further relocation.
  – All the processes executing in such a system share the same physical address space; they are not protected from each other if addressing errors occur, and even the OS code is exposed to addressing errors.
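The loader's adjustment step can be sketched as follows. This is an illustrative model with assumed names (`static_relocate`, a list of relocation offsets); real object formats carry this relocation information in dedicated sections.

```python
# Sketch of static relocation: every word recorded as a relative address
# is adjusted by the base address chosen by the operating system.
def static_relocate(module_words, relocation_offsets, base):
    # module_words: the module as output by the linker, with addresses
    # relative to its start; relocation_offsets: indices of address words
    loaded = list(module_words)
    for i in relocation_offsets:
        loaded[i] += base  # relative address -> absolute physical address
    return loaded
```

Once loaded this way, the module cannot be moved without redoing the adjustment, which is exactly the first problem noted above.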

Page 10:

Dynamic address binding

• Keeps the loaded addresses relative to the start of the process and uses some other mechanism to translate from the virtual address space of the program to the physical address space of the computer.
• Advantages of dynamic address binding:
  – A given program can run anywhere in physical memory and can be moved around by the operating system; all of the addresses it uses are relative to its own virtual address space, so it is unaware of the physical locations at which it happens to have been placed.
  – It is possible to protect processes from each other, and to protect the operating system from application processes, by the mechanism we employ for isolating the addresses seen by the processes.
• To realize these advantages we need a mechanism to bind the virtual addresses within the loaded instructions to physical addresses when the instructions are executed.

Page 11:

Dynamic address binding

• The real machine is shared by a number of processes, and the virtual memory of each process occupies a portion of real physical memory.
• The way in which the virtual address space is mapped to physical memory must therefore be changed each time the operating system switches from one process to another.

[Diagram: Virtual Memory vs Physical Memory. The size of the physical memory is likely to be smaller than the virtual memory; the size of the virtual memory is defined by the number of bits in an address (e.g., a 32-bit address gives a 4 GB virtual memory); the user program is shown starting from address zero]

Page 12:

Hardware assisted relocation and protection

• Dynamic binding must be implemented in hardware, since it introduces translation as part of every memory access.
• If the virtual address space is organized as described earlier, then the basic requirement is for modules to be held contiguously in physical memory and to contain addresses relative to their first location.
  – The first location is called the base of the process.
  – Suppose that an instruction is fetched and decoded and contains an address reference.
• This address reference is relative to the base, so the value of the base must be added to it in order to obtain the correct physical address to be sent to the memory controller.

Page 13:

Hardware relocation and protection

• The simplest form of dynamic relocation hardware is a base register and a memory management unit (MMU) to perform the translation.
  – The operating system must load the base register as part of setting up the state of a process, before passing control to it.
• The problem with this approach is that it doesn't provide any protection between processes: simply adding a base address to any address from the process space does nothing to check the result against the physical memory allocated to the process.

Page 14:

Hardware relocation and protection

• It is natural to combine the relocation and protection functions in one unit, by adding a second register (the limit register) that delimits the upper bound of the program in physical memory.
• Dynamic hardware relocation and protection units have been in use since the early 1960s; since then, these units have evolved and contemporary processors use far more advanced units; however, the principles still apply.
• Nowadays, the basic scheme with one base register and one limit register is still used extensively in embedded systems.

Page 15:

Hardware relocation and protection

• Instruction execution with protection and relocation (the PC is relative to the first address; the physical memory is likely to be smaller than the virtual memory):

1. Check BASE <= PC < LIMIT; on failure, raise a protection failure.
2. On success, fetch the instruction.
3. Decode the instruction (assume it contains an address reference).
4. Add BASE to the address (relocation).
5. Check BASE <= address < LIMIT; on failure, raise a protection failure.
6. On success, finish the instruction.
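The relocation and protection steps of this flow can be sketched as code. This is an assumed model of the check, not a specific MMU implementation; the names are illustrative.

```python
# Sketch of the base/limit mechanism: every relative address is
# relocated by BASE and the result is checked against the limits.
class ProtectionFailure(Exception):
    pass

def relocate_and_check(rel_addr, base, limit):
    phys = base + rel_addr            # relocation: add BASE
    if not (base <= phys < limit):    # protection: BASE <= address < LIMIT
        raise ProtectionFailure(hex(phys))
    return phys
```

An address that relocates past the limit register raises the protection failure instead of silently touching another process's memory.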

Page 16:

Segmented virtual memory

• In practice it is not very useful for a program to occupy a single contiguous range of physical addresses.
  – Such a scheme would prevent two processes from sharing code.
    • I.e., with this scheme it is difficult to arrange for two executions of the same program (two processes) to access different data while still being able to share the same code.
  – This can be achieved if the system has two base registers and two limit registers, thus allowing two separate memory ranges, or segments, per process.

Page 17:

Segmented virtual memory

• Two processes sharing a code segment but having private data segments

[Diagram: the virtual memory of process A (code to be shared + private data) and of process B (code to be shared + private data) map to a single shared copy of the code in physical memory, with A's data and B's data in separate physical areas]

Page 18:

Segmented virtual memory

[Diagram: process A's virtual memory (code to be shared + private data) mapped to physical memory holding A's data, the shared code, and B's data, via the register pairs Base0/Limit0 and Base1/Limit1. Addresses of the form 0000… indicate segment 0 (use Base0 to relocate); addresses of the form 1000… indicate segment 1 (use Base1 to relocate).]

The most significant bit of the virtual address is taken as a segment identifier, with 0 for the data segment and 1 for the code segment.

Page 19:

Segmented virtual memory

• Within a single program it is usual to have separate areas for code, stack and heap.
• Language systems have conventions on how the virtual address space is arranged:
  – The code segment will not grow in size
  – The heap (may be growing)
  – The stack at the top of virtual memory, growing downwards
• In order to realize the relocation (and protection), three segments would be preferable.
• It is easy to give examples where it would be nice to have four segments.

[Diagram: process A's virtual memory, with code, data (heap) and data (stack) segments mapped to separate areas of physical memory]

Page 20:

Segmented virtual addresses

• The segment is the unit of protection and sharing, and the more segments we have, the more flexible we can be.
• There are two different ways a segmented address can be organized:
  – The virtual address space is split into a segment number and a byte number within the segment.
    • The number of bits used for segment addressing is usually fixed by the CPU designer.
  – The segment number is supplied separately from the offset portion of the address.
    • This is done in x86 processors: by default, instruction fetches are performed from a designated code segment, data accesses from a data segment, and stack operations from a stack segment; separate instructions are used to select which segments fulfill each of these roles.

Virtual address (address field of an instruction):

| Segment number (X bits) | Byte offset in segment (Y bits) |

The maximum number of segments is 2^X; the maximum segment size is 2^Y.

Page 21:

Segmented address translation

• As part of the hardware support for dynamic address translation, the operating system must keep a segment table for each process, in which the location of each segment is recorded; these tables need to be in fast memory.
• If a process can have many segments, only those currently being used for instruction fetch and operand access need to be in main memory; other segments can be held on backing store until they are needed.
• If an address is presented for a segment that is not present in main memory, the address translation hardware generates an addressing exception. This is handled by the operating system, causing the segment to be fetched into main memory and the mechanism restarted.
  – This may require an existing segment to be evicted from main memory in order to make space.
  – The general concept is called virtual memory. A program's segments are located throughout the memory hierarchy, and the operating system takes care of segment transfers (we will see this again in the context of paging).
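The segment table lookup just described can be sketched as follows. The entry layout (`present`, `base`, `length`) is assumed for illustration; real hardware also records access rights per segment.

```python
# Sketch of per-process segment table translation: an absent segment
# raises the addressing exception that the OS services by fetching the
# segment from backing store and restarting the access.
def segment_translate(segment_table, seg_num, offset):
    entry = segment_table[seg_num]
    if not entry["present"]:
        # addressing exception: segment must be brought into main memory
        raise LookupError("segment fault on segment %d" % seg_num)
    if offset >= entry["length"]:
        raise MemoryError("offset beyond segment limit")
    return entry["base"] + offset
```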

Page 22:

Address translation in a segmentation system

Page 23:

Segmentation example

• Assume 16-bit logical addresses and 16-bit physical addresses.
  – The segment number is represented using 4 bits.
  – The offset is represented using 12 bits.
• Assuming that the segment resides in physical memory at base address 0010000000100000 (0x2020), the virtual address 0001001011110000 (segment 1, offset 001011110000) corresponds to the physical address 0010001100010000 (0x2020 + 0x2F0 = 0x2310).

Page 24:

Segmentation Summary

• A process is divided into a number of segments that don't need to be equal in size.
• When a process is brought into main memory, all of its segments are usually brought into main memory, and a process segment table is set up.
• Advantages:
  – The virtual address space of a process is divided into logically distinct units which correspond to constituent parts of the process.
  – Segments are the natural units of access control; the process may have different access rights for different segments.
  – Segments are the natural units for sharing code and data objects with other processes.
• Disadvantages:
  – It is inconvenient for the operating system to manage storage allocation for variable-sized segments, because each segment held in physical memory must be contiguous; after the system has been running for a while, the free memory available can become fragmented.
    • This is known as external fragmentation: even though the total free memory may be far greater than the size of some segment that must be loaded, there may be no single free area large enough to load it.
    • External fragmentation can be addressed by moving segments that are not needed out to disk and moving them back when they are needed. This is called swapping.

Page 25:

Paged virtual memory

• The need to keep each loaded segment contiguous in physical memory is a significant disadvantage, because it leads to fragmentation and complicates the physical storage allocation problem.
• A solution to this is called paging, where blocks of a fixed size are used for memory allocation (so that if there is any free space, it is of the right size).
• Memory is divided into page frames, and the user program is divided into pages of the same size.

[Diagram: process A's virtual memory (pages 0, 1 and 2) mapped to non-contiguous frames of physical memory holding A's page 0, A's page 2 and A's page 1]

Page 26:

Paged virtual memory

• A typical page size is small (1 to 4 KB), so in paged systems a process will require many pages.
• The limited size of physical memory can cause problems, so a portion of disk storage (or a separate, dedicated high-speed storage device) can be used as an extension of main memory; the pages of a process may be in main memory and/or in this backing store.
• The operating system must manage the two levels of storage and the transfer of pages between them.
• It must keep a page table for each process to record information about the pages:
  – A present bit is needed to indicate whether the page is in main memory or not.
  – A modify bit indicates whether the page has been altered since it was last loaded into main memory.
    • If not modified, the page does not have to be written to disk when swapped out.

Page 27:

Paging Example

All the processes (A, B, C and D) are stored on disk and are about to be loaded into memory (by the operating system):

• Process A has four pages
• Process B has three pages
• Process C has four pages
• Process D has five pages

Page 28:

Paging Example

• Various page tables at the time (see previous slide):
  – Each page table entry contains the number of the frame in main memory (if any) that holds that page.
  – In addition, the operating system typically maintains a list of all frames in main memory that are currently unoccupied and available for pages.

Page 29:

Paged virtual memory address translation

• Involves translation of a virtual address (page + offset) into a physical address (frame + offset) using a page table.
• The page table cannot be held in internal registers (it is too large), so it must be in main memory; a pointer to the page table of the current process is kept in a processor register.
• The page number is used to index that table and look up the corresponding frame number, which, combined with the offset from the virtual address, gives the real physical address.

Page 30:

Paged virtual memory address translation

• Processes can occupy huge amounts of virtual memory:
  – E.g., with typical 32-bit addressing, each process could have up to 2^32 bytes (4 GB) of virtual memory.
  – Using 4 KB pages (12 bits for the offset) means up to 2^20 entries could be required per process in the page table.
  – Assuming that each page table entry (PTE) occupies 4 bytes, the page table would take 4 MB per process, which is unacceptably high.
• The solution is to use a two-level scheme to organize large page tables:
  – A root page table with 2^10 PTEs occupies 4 KB of main memory.
  – The root page always remains in main memory; the pages of the user page table can reside in virtual memory.

Page 31:

Paged virtual memory address translation

• The first 10 bits of a virtual address are used to index into the root page table to find the PTE for a page of the user page table.
  – If that page is in main memory, the next 10 bits of the virtual address are used to index into that page of the user page table to find the PTE for the page referenced by the virtual address.
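The 10/10/12-bit split can be sketched as follows. Representing each table level as a dictionary is an assumption for illustration; in hardware these are arrays of PTEs in memory.

```python
# Sketch of the two-level lookup: first 10 bits index the root page
# table, next 10 bits index the selected page of the user page table,
# final 12 bits are the offset within the page.
def two_level_translate(root_table, vaddr):
    root_idx = (vaddr >> 22) & 0x3FF  # first 10 bits: root table index
    user_idx = (vaddr >> 12) & 0x3FF  # next 10 bits: user table index
    offset   = vaddr & 0xFFF          # final 12 bits: offset within page
    user_page = root_table[root_idx]  # this page may itself need paging in
    frame = user_page[user_idx]
    return (frame << 12) | offset
```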

Page 32:

Paged virtual memory address translation

• The TLB (Translation Lookaside Buffer, a kind of cache memory) contains the page table entries that have been most recently used.
• Before a page can be addressed, it must have an entry set up in the TLB by the operating system.
• The TLB is searched associatively as part of each address reference.

Page 33:

Paged virtual memory address translation

• The virtual page number is extracted from the virtual address and an associative lookup is initiated.
  – With multiple processes, special care is needed so that a page from one process is not confused with another's (the solution is either to hold the process id in the TLB, or to flush the TLB on every context switch).
• If a match is found (TLB hit), an access check is made based on the information stored in the flags.
  – E.g., whether a write is being attempted to a page that is read only.
• The physical page base, taken from the TLB, is appended to the offset from the virtual address to form the complete physical address.
  – The flags field indicates the access rights and other information.
• If an address reference is made to a page that is in main memory but not in the TLB, address translation fails (TLB miss).
  – A new entry needs to be created in the TLB for that page.
• If an address reference is made to a page that is not in main memory, address translation fails again: no match is found in the page table, and the addressing hardware raises an exception called a page fault.
  – The operating system handles this exception.

Page 34:

Paged virtual memory address translation

Page 35:

Page 36:

Paging address translation example

• Virtual to physical address translation in paging systems

Page 37:

Paging Summary

• Advantages – by using fixed-size pages in the virtual address space and fixed-size frames in the physical address space, paging addresses some of the problems of segmentation:
  – External fragmentation is no longer a problem (all frames in physical memory are the same size).
  – Transfers to/from disk can be performed at the granularity of individual pages.
• Disadvantages:
  – The page size is a choice made by the CPU or OS designer.
    • It may not fit the size of program data structures and can lead to internal fragmentation, in which storage allocation requests must be rounded up to an integral number of pages.
  – There may be no correspondence between page protection settings and application data structures.
    • If two processes are to share data structures, they must do so at the granularity of entire pages.
  – Because per-process page tables are required, the OS is likely to need more storage for its internal data structures.
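The internal-fragmentation cost noted above can be made concrete with a small sketch (4 KB pages assumed; function names are illustrative):

```python
# Sketch: allocation requests are rounded up to whole pages, and the
# unused tail of the last page is internal fragmentation.
PAGE = 4096

def pages_needed(request_bytes):
    return -(-request_bytes // PAGE)  # ceiling division

def internal_fragmentation(request_bytes):
    return pages_needed(request_bytes) * PAGE - request_bytes
```

For example, a 5000-byte request takes two pages and wastes the remainder of the second page.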

Page 38:

Combined paging and segmentation

• Both paging and segmentation have their strengths:
  – Paging – transparent to the programmer; eliminates external fragmentation, thus providing efficient use of main memory.
  – Segmentation – visible to the programmer, but with the ability to handle growing data structures, modularity, and support for sharing and protection.
• Some systems are equipped with hardware (processor) and software (operating system) to provide both:
  – The user address space is broken up into a number of segments, at the discretion of the programmer.
  – Each segment is broken up into a number of fixed-size pages, which are equal in length to a main memory frame.
  – From the programmer's point of view, a logical address still consists of a segment number and a segment offset.
    • From the system's point of view, the segment offset is seen as a page number and a page offset for a page within the specified segment.

Page 39:

Combined paging and segmentation

Page 40:

Memory Management Design Issues

• The design of a memory management system for an OS depends on three areas of choice:
  – Whether or not to use virtual memory techniques
    • Any modern OS provides support for it
  – Whether to use paging, segmentation or both
    • Pure segmentation is very rare; when segmentation is combined with paging, most of the design issues are in the area of paging
  – The algorithms employed for the various aspects of memory management
    • The idea is to try to minimize the rate at which page faults occur, because page faults cause software overhead
    • The performance of any set of policies depends on main memory size, the relative speed of main and secondary memory, the size and number of processes competing for resources, and the execution behavior of individual processes

Page 41:

Operating system policies for VM

• Fetch policy
  – When a page should be brought into main memory
    • Demand, pre-paging
• Placement policy
  – Where in real memory a process piece is to reside
• Replacement policy
  – Which page is to be replaced when a new one is brought into main memory
  – Basic algorithms
    • Optimal, least recently used (LRU), first in first out (FIFO), clock

Page 42:

Operating system policies for VM

• Resident set management
  – Not all pages of a process need to (or can) be brought into main memory. How much memory should be allocated to a process?
  – Resident set size
    • Fixed, variable
  – Replacement scope
    • Global, local
• Cleaning policy
  – The opposite of the fetch policy: determining when a modified page should be written to secondary memory
    • Demand, pre-cleaning
• Load control
  – Determining the number of processes that will be resident in main memory
    • Degree of multiprogramming

Page 43:

Fetch Policy

• When to bring a page into memory.
• Demand paging – load the page when a process tries to reference it.
  – Tends to produce a large number of page faults when the process starts; the page fault rate then settles down.
• Pre-paging – bring in pages that are likely to be used in the near future.
  – Tries to take advantage of disk characteristics.
    • It is generally more efficient to load several consecutive sectors/pages than individual sectors, due to seek and rotational latency.
  – It is hard to correctly guess which pages will be referenced.
    • Easier to guess at program startup.
    • May load unnecessary pages.
  – Its usefulness is not clear.

Page 44: CT213 – Memory Management

Placement Policy

• Determines where in real memory a process piece is to reside
  – In pure segmentation systems this is an important design issue; since most modern operating systems are not based on pure-segmentation VM techniques, this discussion is outside our scope
  – In pure paging systems, or in systems combining segmentation with paging, placement is irrelevant, since the address translation hardware and main memory access hardware can perform their functions for any page–frame combination with equal efficiency

Page 45: CT213 – Memory Management

Replacement Policy

• Which page should be replaced when a new one is brought into main memory? Several inter-related concepts are involved:
  – Resident set management
    • How many page frames are to be allocated to each active process?
    • Should the set of pages considered for replacement be limited to the process that caused the page fault, or should it include all the page frames in main memory?
  – Replacement policy
    • Among the considered set of pages, which particular page should be selected for replacement?
    • The page removed should be the page least likely to be referenced in the near future
    • Most policies predict future behavior on the basis of past behavior

Page 46: CT213 – Memory Management

Replacement Policy

• One restriction on the replacement policy: some of the frames in main memory may be locked
• Frame locking
  – If a frame is locked, it may not be replaced
• Typical uses of locked frames:
  – Kernel of the operating system
  – Key control structures
  – I/O buffers
• Locking is achieved by associating a lock bit with each frame

Page 47: CT213 – Memory Management

Basic Replacement Algorithms

• Replacement algorithms:
  – Optimal, Least Recently Used (LRU), First-In-First-Out (FIFO), Clock
• Optimal policy
  – Selects for replacement the page for which the time to the next reference is the longest
  – It can be shown that this algorithm results in the fewest page faults
  – It is impossible to implement, since it would require perfect knowledge of future events
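The optimal policy is easy to state in code even though no real OS can implement it, since it needs the complete future reference string. A minimal illustrative Python sketch (the function name is invented for this example):

```python
def optimal_replace(frames, future_refs):
    """Belady's optimal policy: evict the resident page whose next
    reference lies farthest in the future (or never occurs again).
    Purely illustrative -- it requires knowing future_refs in advance."""
    def next_use(page):
        try:
            return future_refs.index(page)   # distance to next reference
        except ValueError:
            return float("inf")              # never used again: ideal victim
    return max(frames, key=next_use)
```

This is why the optimal policy serves only as a benchmark against which practical algorithms such as LRU and Clock are measured.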

Page 48: CT213 – Memory Management

Basic Replacement Algorithms

• Least Recently Used (LRU)
  – Replaces the page that has not been referenced for the longest time
  – By the principle of locality, this should be the page least likely to be referenced in the near future
  – Difficult to implement
    • Each page could be tagged with the time of its last reference. This would require a great deal of overhead, since it has to be done at every page reference (both data and instructions)
    • Another approach would be to maintain a stack of page references, again an expensive prospect
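As a sketch of the idea (not how a kernel would do it, for the overhead reasons above), LRU can be simulated in Python with an ordered map that keeps pages in recency order; the class name is invented for this example:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU page-frame simulation (illustrative, not kernel code).
    An OrderedDict keeps pages in recency order; the front is the victim."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()

    def reference(self, page):
        """Return the evicted page when a fault forces replacement, else None."""
        if page in self.frames:
            self.frames.move_to_end(page)   # hit: mark as most recently used
            return None
        victim = None
        if len(self.frames) >= self.nframes:
            victim, _ = self.frames.popitem(last=False)  # evict least recent
        self.frames[page] = True
        return victim
```

With three frames and the reference string 1, 2, 3, 1, 4, the second reference to page 1 makes page 2 the least recently used, so page 2 is evicted when page 4 arrives.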

Page 49: CT213 – Memory Management

Basic Replacement Algorithms

• First-In, First-Out (FIFO)
  – Treats the page frames allocated to a process as a circular buffer
  – Pages are removed in round-robin style
    • All that is required is a pointer that circles through the page frames of the process
  – Simplest replacement policy to implement
  – The page that has been in memory the longest is replaced
    • Such pages may be needed again very soon
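The round-robin pointer described above fits in a few lines of Python (an illustrative simulation; the function name is invented):

```python
def fifo_simulate(refs, nframes):
    """Count page faults under FIFO replacement.
    A single pointer circles through the frames, as described above."""
    frames = [None] * nframes
    pointer = 0
    faults = 0
    for page in refs:
        if page not in frames:
            frames[pointer] = page            # replace the oldest page
            pointer = (pointer + 1) % nframes
            faults += 1
    return faults
```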

Page 50: CT213 – Memory Management

Basic Replacement Algorithms

• Clock policy
  – Attempts to get the performance of LRU with the low overhead of FIFO
  – Includes a use bit with each page
  – Think of the pages as a circular buffer
    • Keep a pointer into the buffer
  – Set the use bit when a page is loaded or used
  – When we need to remove a page:
    • Use the pointer to scan the available pages
    • Scan for a page with use = 0
    • Set each use bit to 0 as we pass it
  – Performance is close to LRU
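The scan described above can be sketched as an illustrative Python simulation (not OS code; all names are invented):

```python
class ClockReplacer:
    """Clock policy sketch: a circular buffer of [page, use_bit] entries
    with a pointer (the clock hand) that scans for use_bit == 0."""
    def __init__(self, nframes):
        self.frames = [None] * nframes
        self.hand = 0

    def reference(self, page):
        """Return the evicted page on a fault that replaces one, else None."""
        for entry in self.frames:
            if entry and entry[0] == page:
                entry[1] = 1                 # hit: set the use bit
                return None
        while True:                          # fault: scan for use_bit == 0
            entry = self.frames[self.hand]
            if entry is None or entry[1] == 0:
                victim = entry[0] if entry else None
                self.frames[self.hand] = [page, 1]
                self.hand = (self.hand + 1) % len(self.frames)
                return victim
            entry[1] = 0                     # clear the bit: second chance
            self.hand = (self.hand + 1) % len(self.frames)
```

With three frames all holding recently used pages, a fault clears each use bit in turn and then evicts the page the hand returns to, approximating LRU at FIFO-like cost.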

Page 51: CT213 – Memory Management

• Just prior to the replacement of a page from the buffer with incoming page 727, the next-frame pointer points at frame 2 (which contains page 45); the clock policy is now executed
• Because the use bit of page 45 in frame 2 is 1, that page is not replaced; its use bit is set to 0 and the pointer advances to frame 3, where again the use bit is 1 and is likewise set to 0. The pointer advances to the next frame, where the use bit is 0. Page 556, located in frame 4, is replaced with page 727, its use bit is set to 1, and the next-frame pointer advances to frame 5

Page 52: CT213 – Memory Management

Basic Replacement Algorithms

• Fixed frame allocation is assumed (three frames per process), and the execution of the process requires references to five distinct pages, in the given order

• For the clock example, an asterisk indicates that the corresponding use bit is set to 1, and the arrow indicates the current position of the pointer

Page 53: CT213 – Memory Management

Resident Set Size

• How many pages should be brought into main memory for a process; how much memory should be allocated?
  – The smaller the memory allocation per process, the more processes can reside in main memory (this increases the probability that the OS will find at least one ready process at any given time, reducing swapping time)
  – If a relatively small number of pages is allocated per process, the rate of page faults will be higher
  – Beyond a certain size, additional allocation of main memory to a process will have no noticeable effect on its page fault rate, because of the principle of locality
• Two policies are found in operating systems:
  – Fixed allocation
  – Variable allocation

Page 54: CT213 – Memory Management

Resident Set Size

• Fixed allocation
  – Gives a process a fixed number of pages within which to execute; the number is decided at initial process-loading time and may be a function of the process type (interactive, batch, etc.) or may be based on guidance from the programmer or system manager
  – When a page fault occurs, one of the pages of that process must be replaced
• Variable allocation
  – The number of pages allocated to a process varies over the lifetime of the process
  – A process suffering a persistently high page fault rate will be given additional frames to reduce that rate
  – A process with a very low page fault rate will be given a reduced frame allocation
  – Requires the operating system to continuously assess the behavior of the process, leading to software overhead and complexity

Page 55: CT213 – Memory Management

Replacement Scope (1)

• The scope of the replacement policy can be:
  – Local replacement policy – chooses only among the resident pages of the process that generated the page fault when selecting a page to replace
  – Global replacement policy – considers all unlocked pages in main memory as candidates for replacement, regardless of which process owns a particular page
• Global policies are more attractive because of their simplicity of implementation and minimal overhead

Page 56: CT213 – Memory Management

Replacement Scope (2)

• Fixed allocation, local replacement scope
  – The number of frames allocated to a process is fixed in advance
    • Too small: leads to thrashing
    • Too big: wastes memory, and reduces the number of processes that can be in memory
  – The operating system chooses the page to be replaced from among the frames allocated to that process
  – A global replacement scope is not possible with fixed allocation

Page 57: CT213 – Memory Management

Replacement Scope (3)

• Variable allocation, global replacement scope
  – This combination is the easiest to implement and is a common technique in operating systems
  – When a page fault occurs, a free frame is added to the resident set of the process that experienced the fault, and the page is brought into that frame
  – It is harder to determine who should lose a page; the selection is made among all the frames in memory (except the locked ones), and there is no discipline to determine which process should lose a page from its resident set
    • A page may be removed from a process that needs it

Page 58: CT213 – Memory Management

Replacement Scope (4)

• Variable allocation, local replacement scope
  – Concept
    • When loading a process, allocate it a set of page frames
    • After a page fault, replace one of those pages
    • Periodically re-evaluate the set size and adjust it to improve performance
  – For this method, it is important to determine the resident set size and the timing of changes
    • The working set strategy is one method of dealing with these aspects

Page 59: CT213 – Memory Management

Cleaning Policy

• Demand cleaning
  – A page is written out only when it has been selected for replacement
• Pre-cleaning
  – Writes modified pages before their associated frames are needed, so that pages can be written out in batches
• The best approach uses page buffering
  – Replaced pages are placed on one of two lists: modified and unmodified
  – Pages on the modified list are periodically written out in batches
  – Pages on the unmodified list are either reclaimed if referenced again, or lost when their frames are assigned to other pages
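The two-list page-buffering scheme can be modelled in a short illustrative Python sketch (class and method names are invented for this example):

```python
from collections import deque

class PageBuffer:
    """Page-buffering sketch: replaced pages go onto one of two lists.
    Dirty pages wait on the modified list until written out in a batch;
    clean pages sit on the unmodified list and can be reclaimed cheaply."""
    def __init__(self):
        self.modified = deque()    # dirty pages awaiting a batched write
        self.unmodified = deque()  # clean pages, reclaimable if re-referenced

    def replace(self, page, dirty):
        (self.modified if dirty else self.unmodified).append(page)

    def flush_modified(self):
        """Write all dirty pages in one batch; they then become clean."""
        batch = list(self.modified)
        self.unmodified.extend(batch)
        self.modified.clear()
        return batch

    def reclaim(self, page):
        """Re-reference a page still on the unmodified list: no disk read."""
        if page in self.unmodified:
            self.unmodified.remove(page)
            return True
        return False
```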

Page 60: CT213 – Memory Management

Load Control

• Determines the number of processes that will be resident in main memory
  – Referred to as the multiprogramming level
• With too few processes, there will be many occasions when all processes are blocked, and much time will be spent on swapping
• With too many processes, thrashing will occur (the page fault rate will be very high)

Page 61: CT213 – Memory Management

Multiprogramming Level (1)

• If there are too many processes, the page fault rate increases dramatically

Page 62: CT213 – Memory Management

Multiprogramming Level (2)

• Solutions:
  – A page fault frequency algorithm implicitly incorporates load control (only those processes that have a sufficiently large resident set are allowed to execute)
    • In providing the required resident set size for each active process, the replacement policy automatically and dynamically determines the number of active processes
  – Adjust the multiprogramming level dynamically so that the mean time between page faults equals the mean time required to process a page fault
    • Studies show that this is the point at which processor utilization reaches its maximum

Page 63: CT213 – Memory Management

References

• “Operating Systems”, William Stallings, ISBN 0-13-032986-X

• “Operating Systems”, Jean Bacon and Tim Harris, ISBN 0-321-11789-1

Page 64: CT213 – Memory Management

• Additional slides

Page 65: CT213 – Memory Management

Linux Memory Management (1)

• Makes use of a three-level page table structure, consisting of the following tables:
  – Page directory
    • An active process has a single page directory, which is the size of one page
    • Each entry in the page directory points to one page of the page middle directory
    • The page directory must be in main memory for an active process
  – Page middle directory
    • May span multiple pages
    • Each entry in the page middle directory points to one page in the page table
  – Page table
    • May span multiple pages; each page table entry refers to one virtual page of the process

Page 66: CT213 – Memory Management

Linux Memory Management (2)

• Each virtual address in Linux is made up of four fields:
  – The left-most field is used as an index into the page directory
  – The next field is used as an index into the page middle directory
  – The third field serves as an index into the page table
  – The fourth field gives the offset within the selected page of memory
• The Linux page table structure is platform independent and was designed to accommodate the 64-bit Alpha processor (which has hardware support for three-level paging)
  – The Pentium/x86 architecture has two-level paging hardware support; Linux accommodates this addressing scheme by defining the size of the page middle directory as one
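The four-field split can be illustrated with a small Python sketch. The bit widths below are assumptions made for this example (real widths depend on platform and page size); only the mechanism follows the text, with an x86-style page middle directory of size one (0 bits):

```python
# Illustrative split of a 32-bit virtual address into the four Linux fields.
# Widths are invented for the example: 10-bit page directory index, 0-bit
# page middle directory (size one, as on x86), 10-bit page table index,
# 12-bit offset (4 KB pages).
PGD_BITS, PMD_BITS, PTE_BITS, OFFSET_BITS = 10, 0, 10, 12

def split_vaddr(vaddr):
    """Return (pgd_index, pmd_index, pte_index, offset) for a virtual address."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    pte = (vaddr >> OFFSET_BITS) & ((1 << PTE_BITS) - 1)
    pmd = ((vaddr >> (OFFSET_BITS + PTE_BITS)) & ((1 << PMD_BITS) - 1)
           if PMD_BITS else 0)          # middle directory of size one
    pgd = vaddr >> (OFFSET_BITS + PTE_BITS + PMD_BITS)
    return pgd, pmd, pte, offset
```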

Page 67: CT213 – Memory Management

Linux Memory Management (3)

• Page replacement algorithm
  – Based on the clock algorithm
  – The use bit is replaced with an 8-bit age variable
    • Each time a page is accessed, the age variable is incremented
    • In the background, Linux goes through the global page pool and decrements the age variable of each page as it rotates through all the pages in main memory
    • A page with age 0 is an "old" page that has not been referenced for some time, and is the best candidate for replacement
    • The larger the age value, the more recently the page has been used (referenced), and the less eligible it is for replacement
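The age-variable scheme can be sketched as a simplified, illustrative Python model (the real kernel logic is more involved; all names here are invented):

```python
class AgingPagePool:
    """Sketch of the age-variable variant of clock described above.
    Accesses increment an 8-bit age; a background sweep decrements it;
    a page with age 0 is the best replacement candidate."""
    MAX_AGE = 255                      # saturating 8-bit counter

    def __init__(self, pages):
        self.age = {p: 0 for p in pages}

    def touch(self, page):
        self.age[page] = min(self.age[page] + 1, self.MAX_AGE)

    def background_sweep(self):
        # decrement every page's age as the sweep rotates through the pool
        for p in self.age:
            if self.age[p] > 0:
                self.age[p] -= 1

    def best_victim(self):
        # the lowest-aged ("oldest") page is the best candidate
        return min(self.age, key=self.age.get)
```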

Page 68: CT213 – Memory Management

Windows 2000 Memory Management (1)

• Designed to operate over a variety of platforms and to use page sizes ranging from 4 KB to 64 KB
• Virtual address map
  – Each user process sees a separate 32-bit address space, allowing 4 GB of virtual memory per process
    • 2 GB for user space
    • 2 GB for the operating system

Page 69: CT213 – Memory Management

Windows 2000 Memory Management (2)

• W2K paging
  – When a process is created, it can make use of the entire 2 GB user space (less 128 KB). This space is divided into fixed-size pages, any of which can be brought into main memory
• Resident set management scheme – variable allocation, local scope
  – When loaded, a process is allocated a number of frames of main memory as its working set
  – When a page fault occurs, one of the resident pages of that process is swapped out and a new page is brought in
  – The working sets of active processes are adjusted dynamically:
    • When main memory is plentiful, the VM manager allows the resident sets of active processes to grow
    • When memory becomes scarce, the VM manager recovers memory for the system by moving less recently used pages out of the working sets of active processes, reducing the size of those resident sets