Memory Management 1 - Cork Institute of Technology (CIT)
4 Main Memory Management

4.1 Introduction
A large part of the OS's responsibility is organising main memory for processes.
GOAL: pack as many processes into memory as possible so that the processor will never be stuck for work.
There are a number of necessary characteristics of good memory management (MM) software. First we look at those (section 4.2).
Then we look at the simple management schemes in order to get used to the fundamental concepts of MM (section 4.3).
4.2 Characteristics
Four characteristics of good Memory Management (MM) software:
1. Relocation
2. Protection
3. Sharing
4. Neat mapping between logical organisation and physical organisation
4.2.1 Relocation
It is necessary that a process can be moved around in memory, i.e. a memory manager must allow process relocation. This is because:
• too strict to demand that a process always gets loaded into the same space;
• need to allow processes that have been suspended (swapped out) to be unsuspended (swapped in) and assigned different memory space;
• cannot anticipate what memory space processes will require on any particular run (may need to relocate to a larger space).
Program code however contains references to memory locations. So the memory manager, if it is to be able to move processes about, must have some way of mapping addresses in the program code (logical addresses) onto the actual addresses where the data is stored (physical addresses).
4.2.2 Protection
Processes need to be protected from interference by other processes (intentional or accidental).
So every time a memory reference (read or write) is made it must be checked to ensure it is legal when the reference is made (i.e. at run time). This is a lot of checking and the processor hardware does this work.
4.2.3 Sharing
Sometimes processes want to be interfered with. Sometimes processes share their data and code with others. Protection must be flexible enough to allow this. That is, protection must facilitate sharing.
4.2.4 Mapping between Logical Organisation & Physical Organisation
Processes are organised into modules; some of these are modifiable (e.g. data), some (e.g. code) are not. To the programmer a process is loaded into memory as one contiguous chunk and run. This is known as the ‘logical view.’
In reality, part of the process is stored in main memory, which is organised as a linear array of words (usually 4 bytes each) addressed from, say, 0000 to NNNN, and part of the process is actually stored on disk and moved into main memory when needed. This is known as the 'physical view.' It is the MM's responsibility to hide this from the programmer so that she doesn't have to worry about it.
Put another way: the physical organisation must be managed so that, as far as practicable, it looks to the programmer as if there is an infinite amount of contiguous main memory available.
If the OS and the hardware are able to deal with this then there are advantages:
1. Modules can be written and compiled separately, with inter-module references being resolved at run time. E.g. two modules of an application can be written, compiled and tested by different programmers and then linked for running.
2. Different protection can be given to different modules (read only, execute only, etc.)
3. Module sharing is possible.
The general advantage is that the more the memory seems like the way the programmer views it, the easier it is for the programmer to use it effectively and correctly.
4.3 Simple Management Schemes
All of these simple schemes assume that the entire process is loaded into memory. In reality usually only a part of a process is in memory at any one time, but these simple schemes allow us to look at the fundamental concepts without undue complication. For each scheme we will examine how it loads processes, how it deals with addressing and protection, and how efficiently it uses the available RAM space.
4.3.1 Partitioning
Memory can be divided up into partitions. Partitioning allows the memory to be divided up among several processes.
These partitions can be static or dynamic:
Static: The partitions are decided at system configuration time and cannot be altered on the fly (inflexible). There can be equal or unequal sized partitions.
Dynamic: The partitions are made and remade while the system is running.
4.3.1.1 Equal Sized Static Partitions
Loading: Programs are loaded into any available partition. If a program doesn't fit then the programmer must find some way around that (e.g. overlays).
Addressing (& protection): Simple, use Base-Limit registers (a hardware mechanism).
For example, consider this silly program:

do {
    i--;
} while (i != 0);

Say this program is written in assembly language as below, as if it started at address 0000 (hex), and say the program and data are 512 bytes long in decimal, i.e. 200 in hex.
Logical Address   Instruction
0000              load A,012A
0001              sub  A,0129
0002              cmp  A,#0
0003              jne  0001
Now say that the program is actually loaded into RAM starting at address 15BA, i.e. the program is loaded into a partition starting at address 15BA. This is the base address.
Physical Address  Instruction
15BA              load A,012A
15BB              sub  A,0129
15BC              cmp  A,#0
15BD              jne  0001
The data that the program assumes is at address 0129 is now actually stored at 16E3 because 15BA+0129=16E3. When the process is to run, a base register is given the starting address of the program (15BA) and a limit register gets the length of the process (200).
All of the addresses referred to in the program’s operands are called relative addresses. Relative addresses are particular examples of logical addresses (i.e.
references to memory locations independent of the current location in memory). The logical addresses need to be mapped onto the actual addresses in memory when the program is being run. These actual addresses are called physical addresses.
This is how:

if logical_address > limit then
    trap to OS (memory error)
else
    physical_address = logical_address + base
In this example, when the instruction load A,012A is run, the logical address 012A is less than the length stored in the limit register (0200) and so is mapped by adding it to the base address 15BA, giving the physical address 16E4.
The hardware that implements this logic is loosely referred to as 'Base-Limit Registers.' This mapping using base-limit registers is very simple, easy to understand, and easy to build. It also facilitates:
• program loading (base and limit are set to partition start address and program size respectively),
• address mapping (logical addresses are manipulated to yield physical addresses), and
• dynamic relocation (the mapping is redone whenever the memory reference is made).
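The base-limit check and mapping described above can be sketched in software. This is only an illustration of the hardware's logic: the function name is invented, and a raised exception stands in for the trap to the OS.

```python
# Sketch of base-limit address translation, using the example values from
# these notes (base 15BA hex, process length 0200 hex). The check mirrors
# the notes' pseudocode: trap if the logical address exceeds the limit.
def translate(logical, base, limit):
    """Map a logical (relative) address to a physical one."""
    if logical > limit:                 # illegal reference: outside the process
        raise MemoryError("memory error: address outside partition")
    return logical + base               # relocation is a simple addition

# The worked example: logical 012A with base 15BA, limit 0200
print(hex(translate(0x012A, 0x15BA, 0x0200)))  # -> 0x16e4
```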
So why not use simple equal sized partitions and base-limit registers?
Base-limit registers are fine (for the moment) but equal sized partitions are awkward because:
• programs must fit in the partition;
• memory is used inefficiently: if, as is common, a process does not use all of its partition then the extra is wasted. This kind of waste is called Internal Fragmentation.

Internal fragmentation occurs when a process's allocated space is not all used and the unused part cannot be used by another process.
4.3.1.2 Unequal Sized Static Partitions
Loading: Because there are many different sized partitions, the following question arises: how do you decide which partition to use? (See section 4.3.1.3 later.)
These unequal sized partitions help to reduce internal fragmentation because now there is a choice: some partitions are big and some are smaller. Big processes can use big partitions and smaller processes can use smaller partitions and so waste is reduced.
Addressing (& protection): Same as before: use Base-Limit registers.
But there are still disadvantages:
• the partitions are still static (i.e. predetermined), so the scheme is still inflexible;
• the very large and very small partitions may rarely or never be used, which wastes space. This waste is called External Fragmentation.

External fragmentation occurs when a portion of unallocated space goes unused because no process fits it well: the hole is too small, or the partition far too large, for the processes at hand.
One solution to the external fragmentation problem is Compaction. That means moving processes together so that all the free fragments can be gathered up into a useful chunk. But this cannot be done when partitions are static. In any case it is time consuming.
An example system, which used static partitions of unequal size, was IBM’s mainframe Operating System: OS/MFT (Multi-programming with Fixed number of Tasks.) [1960s]
4.3.1.3 Dynamic Partitions
Here the size of a partition is determined at process load time, based on the process's exact requirement. This is more flexible and eliminates internal fragmentation.
What is the best way to allocate available partitions? (This question applies to unequal sized static partitions also.) Initially, it is easy as memory is allocated as requested. Later, as processes are swapped in and out there will be a variety of candidate memory ‘holes’ within which a partition can be made to accommodate an incoming process. Which is best?
There are four candidate Memory Allocation Algorithms for deciding this. The algorithms are compared based on the degree of external fragmentation and the amount of searching.
1. Best fit
Allocate the block of memory closest in size to the process.
• Leaves the smallest possible fragment unused and quickly builds up a collection of these; thus compaction is needed more frequently.
• Slow (the search is longer as it must check all partitions).

2. Worst fit
Allocate the largest block of memory that fits the process. The reasoning is that this will leave a larger (and thus more usable) chunk as leftover.
• Slow (the search is longer as it must check all partitions).

3. First fit (Next fit)
Searching from the start of user memory (or, for next fit, from the last space allocated), allocate the first block of memory that fits the process.
• Simplest and fastest - it only searches until a fit is found. First fit is generally the best performer of the three in practice.
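The three fit algorithms can be compared on a list of free holes. The hole sizes and request below are invented for illustration; each function returns the index of the chosen hole, or None if no hole is large enough.

```python
# Hypothetical comparison of the memory allocation algorithms.
def first_fit(holes, size):
    """Return the index of the first hole big enough, scanning from the start."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that still fits the request."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, hoping the leftover stays usable."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [120, 50, 300, 70]      # free blocks, sizes in KB (invented)
print(first_fit(holes, 60))     # 0: the 120 KB hole is the first that fits
print(best_fit(holes, 60))      # 3: the 70 KB hole leaves the smallest fragment
print(worst_fit(holes, 60))     # 2: the 300 KB hole leaves the largest leftover
```

Note that best fit and worst fit must examine every hole, while first fit can stop as soon as it succeeds, which is why it is the fastest.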
Whatever method is used, as time goes on and processes are swapped in and out, holes begin to appear between partitions, i.e. external fragmentation. Again, a solution to the external fragmentation problem is Compaction: moving processes together so that all the free fragments can be gathered up into one useful chunk. This can be done when partitions are dynamic; however, as we have seen, it is time consuming.
Addressing (& protection): Same as before: use Base-Limit registers.
4.3.1.4 The Buddy System
Static partitioning puts too low a limit on the number of processes that can use memory and uses space inefficiently, through either internal or external fragmentation. Dynamic partitioning slightly improves the efficiency of space use but is more complex to maintain, due to the need for 'fit' algorithms and compaction.
The buddy system introduces a useful compromise. It is used for some parallel programming applications, and a modified version is used in the UNIX kernel and in Linux for some memory allocation.
The buddy system only allocates memory in blocks of size 2^i kilobytes, where i = 0, 1, 2, 3, ... U; e.g. 32K, 64K, 256K, ... up to some limit 2^U. Initially all available memory is one large block of size 2^U. As memory is allocated, the system keeps track of which blocks of which sizes are free; a bit is reserved in each block to indicate whether it is free. At any one time there will be a number of free blocks of varying sizes available in memory.
When a request for memory space of size S comes in, a free block of size 2^k is found such that 2^(k-1) < S <= 2^k (or, if none exists, one is formed by splitting a larger free block until one appears). In other words, it finds a block either just the right size or the next size up. E.g. a request for 65K of space would be met by a 128K block, because the next smallest block (64K) would be too small and the next largest (256K) is way too big.
Whenever memory is freed up and this results in having two adjacent equal-sized free blocks that used to be 2 halves of a larger block, they are coalesced to form a free block twice the size. They are the 'buddies'.
This system still results in internal fragmentation, but it requires no compaction beyond the coalescing of equal sized adjacent free blocks and no 'fit' algorithms to allocate memory - the allocation decision is very simple.
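The block-size rule above can be sketched as follows. The function name and the 1024K upper limit are invented for illustration; the rounding rule (2^(k-1) < S <= 2^k) is the one described in the text.

```python
# Minimal sketch of the buddy system's block-size rule: a request of s_kb
# kilobytes is served from the smallest power-of-two block that holds it.
def buddy_block_size(s_kb, max_kb=1024):
    """Return the power-of-two block size (in KB) used for a request of s_kb."""
    size = 1
    while size < s_kb:          # double until the block is big enough
        size *= 2
    if size > max_kb:           # no block this large exists
        raise MemoryError("request exceeds the largest block")
    return size

print(buddy_block_size(65))     # 128: the 64K block is too small
print(buddy_block_size(64))     # 64: an exact power of two fits exactly
```

The difference between the request and the returned block size is the internal fragmentation the text mentions (63K in the 65K case).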
4.4 Simple Paging
Paging is a better solution to external fragmentation than compacting dynamic partitions. It avoids external fragmentation altogether and also minimises internal fragmentation.
4.4.1 An Aside on Virtual Memory
Before examining paging it is useful to point out that, in explaining how paging works, we will be assuming that an entire process is loaded into RAM when it is created. In fact this is never the case. A process's contents are only ever partially loaded into RAM. The remainder is stored on a reserved portion of the disk called the swap space. When it is needed it is 'swapped in' to RAM through a mechanism called 'page faulting'. This arrangement is called virtual memory.
4.4.2 Loading in Paging
All processes are divided into equal, fixed size pages. That is, logical memory is divided.
Also all of RAM is divided into equal, fixed size page frames, each large enough to hold exactly one page. That is, physical memory is divided.
Then a process is stored page by page in various available page frames. When a process is being loaded into memory it doesn’t necessarily get contiguous memory - it can be scattered about memory.
There will be no external fragmentation and only a fragment of the last page will be unused; i.e. minimal internal fragmentation.
Example:
A four-page process scattered across eight page frames:

logical memory      physical memory (frame: contents)
page 0              0: -         4: -
page 1              1: page 0    5: -
page 2              2: -         6: page 1
page 3              3: page 2    7: page 3
4.4.3 Address Mapping
As pages of a process can be scattered about memory, the mapping of logical to physical addresses is more complex. A simple base-limit register mechanism is not good enough.
The OS maintains a page table for each process showing the page frame number for each of the process pages stored.
Example page table:

Page Number   Frame Number
0             1
1             6
2             3
3             7
Each process’s PCB holds the address of the start of the page table. This value is loaded into the Page Table Base Register (PTBR) when the process is given the CPU. Page numbers act as indexes into the page table.
Programs will contain references to logical addresses that assume the program starts at some known address, usually 0000. These logical addresses are thus also known as relative addresses; i.e. addresses relative to some known starting point. Each relative address will actually occur in a particular page of the process. Further, the address may refer to a page that is not actually in RAM – remember virtual memory. So these addresses are sometimes called virtual addresses. For now:
Logical addresses = relative addresses = virtual addresses
Address Mapping Example
Assume page size = 1K = 1024 memory words and that pages and frames are numbered from 0 (working in decimal).
How do we locate logical address: 1502 (assume relative to 0 starting point)?
Each logical address actually consists of a page number and an offset within that page. The offset is the number of memory words from the start of the page. Memory words are individually addressable memory locations – usually 4 bytes long.
Logical address 1502 must be on the second page because it is bigger than 1024 and less than 2048. It is on page 1 at offset 478 (1024 +478 = 1502).
logical address DIV page size = page no.  - i.e. 1502 div 1024 = 1 (page number)
logical address MOD page size = offset    - i.e. 1502 mod 1024 = 478 (offset)
The mapping hardware must access the page table to find the page frame number.
The physical address is therefore: frame 6 offset 478 (page 1 is in frame 6 in the example above).
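The div/mod mapping just described can be sketched as follows, using the example page table from this section (page size 1024 words; the function name is illustrative).

```python
# Sketch of logical-to-physical address mapping in simple paging.
PAGE_SIZE = 1024
page_table = {0: 1, 1: 6, 2: 3, 3: 7}   # page number -> frame number (example above)

def map_address(logical):
    page = logical // PAGE_SIZE          # logical address DIV page size
    offset = logical % PAGE_SIZE         # logical address MOD page size
    frame = page_table[page]             # one page-table lookup
    return frame * PAGE_SIZE + offset    # frame start + offset

print(map_address(1502))  # 6622: page 1, offset 478 -> frame 6, offset 478
```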
For address mapping, if the page size is set at 2^n words then the implementation is simple; n can be any value depending on the desired page size. For example, take page size = 2^10 = 1024 words and address length = 2^4 = 16 bits.
Memory location 1502 (decimal) = 0000010111011110 (binary), i.e. 1502 words from the start of the program.
Instead of doing the calculations:
logical address DIV page size = page no.  (1502 div 1024 = 1)
logical address MOD page size = offset    (1502 mod 1024 = 478)
Notice that, in the 16-bit address, the leftmost 6 bits contain the page number and the rightmost 10 bits (remember page size = 2^10) contain the offset:
page 1, offset 478 words = 0000010111011110
Thus hardware can easily extract the page number and offset from the address bits.

Physical address:
frame 6, offset 478 words = 0001100111011110  (page 1 is stored in frame 6, so 000001 maps to 000110)
Thus hardware can simply replace the page number with the frame number to generate the physical address.
In general, if the leftmost k bits of a relative address give the page number in the logical address (k=6 in e.g.) and the remaining n bits give the logical address offset (n=10 in e.g.) then to get the physical address:
1. extract the page number as the leftmost k bits;
2. use the page number as an index into the page table to find the frame number j;
3. replace the k leftmost bits of the logical address with the frame number j to get the physical address.

This works because the physical starting address of any frame j is j * 2^n.
(E.g. frame 6's starting address is 6 * 2^10 = 6 * 1024 = 6144 = 0001100000000000.)
The offset need only be added to this to get the physical byte number in memory.
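The same mapping done with bit operations can be sketched as below, assuming 16-bit addresses with a 10-bit offset (page size 2^10), as in the example. The function name is illustrative.

```python
# Sketch of the hardware's shift/mask address mapping for simple paging.
OFFSET_BITS = 10
page_table = {0: 1, 1: 6, 2: 3, 3: 7}            # example table from above

def map_address_bits(logical):
    page = logical >> OFFSET_BITS                # leftmost bits: page number
    offset = logical & ((1 << OFFSET_BITS) - 1)  # rightmost 10 bits: offset
    frame = page_table[page]
    return (frame << OFFSET_BITS) | offset       # splice frame number in

# Logical 0000010111011110 (1502): page 1, offset 478
print(f"{map_address_bits(0b0000010111011110):016b}")  # 0001100111011110 (= 6622)
```

No addition is needed: because frame starting addresses are multiples of 2^n, replacing the page bits with the frame bits is enough.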
4.4.4 Implementation of the Page Table
The page table can be implemented as:
1. A set of dedicated registers.
Very fast, but requires many registers (sometimes not feasible; e.g. on the VAX a process may need up to 2^22 entries). Also, context switching must include switching the register values.
2. Store the page table in RAM and maintain a page table base register (PTBR) which holds the starting address of the page table for the current process.
This reduces context switching time (just change the PTBR) and is not so expensive on registers, but it is slow: two actual memory accesses are needed for every program memory access.
3. A combination of associative registers (a.k.a. translation lookaside buffers, cache, lookaside memory, content addressable memory) and a RAM page table. Recommended.
Each register has two parts: key and value (i.e. page number and frame number). Searching the keys is done in parallel and is very fast. If the desired page number is not in the associative registers then it is fetched from the RAM page table.
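The recommended combination can be sketched as a cache in front of the in-memory table. This is only a software analogy (real TLBs search all keys in parallel in hardware); the names and the unlimited cache size are invented for illustration.

```python
# Sketch of a TLB backed by an in-RAM page table.
page_table = {0: 1, 1: 6, 2: 3, 3: 7}   # full table, held in RAM
tlb = {}                                 # small associative cache (here unbounded)

def lookup_frame(page):
    """Return the frame for a page, trying the TLB before the RAM table."""
    if page in tlb:
        return tlb[page]                 # TLB hit: no extra memory access
    frame = page_table[page]             # TLB miss: extra memory access to RAM
    tlb[page] = frame                    # cache the translation for next time
    return frame

print(lookup_frame(1))  # 6 (miss: fetched from the RAM page table)
print(lookup_frame(1))  # 6 (hit: served from the TLB)
```

A real TLB also has a fixed, small capacity and an eviction policy, omitted here.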
NOTE: paging allows dynamic relocation of processes because physical addresses are efficiently mapped ‘on the fly’ by hardware.
4.5 Simple Segmentation

4.5.1 Introduction
We can divide all processes into segments that reflect the logical design of the program, e.g. code segment, data segment, stack segment. That is, logical memory is divided.
Memory is allocated as segments as needed by particular programs, each large enough to hold exactly one segment. That is, physical memory is divided into unequal portions. This is just like dynamic partitioning described above. However, now the segments to be stored are smaller than an entire process.
Then a process is stored segment by segment in available memory. Thus, as with paging, when a process is being loaded into memory it doesn’t require contiguous memory - it can be scattered about memory.
There will be no internal fragmentation (each segment fits exactly) however there may be external fragmentation as with dynamic partitioning. But as the segments are smaller than whole processes this external fragmentation will not be as bad as it is for dynamic partitioning.
Example:
A process's segments scattered in physical memory (addresses taken from the segment table in section 4.5.2): the main program is segment 3, the symbol table is segment 2, and segment 4 occupies addresses 4700 to 5700.
4.5.2 Address Mapping
As with paging, because segments of a process can be scattered about memory, the mapping of logical to physical addresses is complex. For dynamic partitioning we used base-limit registers. Now, because there are several segments, instead of one base-limit register a collection of base-limit pairs is needed - one for each segment.
OS maintains a segment table for each process showing the base and limit for each of the process segments stored.
Example segment table:

Segment   Limit   Base
0         1000    1400
1         0400    6300
2         0400    4300
3         1100    3200
4         1000    4700
Each process’s PCB holds the address of the segment table. This address will be loaded into the segment table base register (STBR) when the process is dispatched. Segment numbers act as indexes into the segment table.
Each logical address will consist of a segment number and an offset within that segment. The offset is the number of words from the start of the segment.
Logical address: segment 1, offset 752.
The mapping hardware must access the segment table to find the segment's starting address (base) and add it to the offset to get the physical address.
Physical address: 6300 + 752 = 7052
For address mapping, a maximum segment size is set at some power of 2 (i.e. 2^m); e.g. 2^12 = 4096 = maximum segment size. Then, in the address, the rightmost m bits represent the offset and the remaining leftmost bits represent the segment number. The segment number is used to find the segment's base and limit. Then:
if offset >= limit then
    trap to OS (memory error)
else
    physical_address = offset + base
Logical address: segment 1, offset 752 = 0001001011110000
i.e. logical_address mod 4096 = 752 (offset)
     logical_address div 4096 = 1 (segment no.)
Physical address: starting address of segment 1 + offset
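The limit check and mapping can be sketched with the example segment table from this section. The function name is illustrative, a raised exception stands in for the trap, and the offset in the usage line (852, within segment 3's limit of 1100) is chosen here so that the check passes.

```python
# Sketch of segmented address mapping with per-segment base-limit pairs.
segment_table = {                 # segment no. -> (limit, base), from the example
    0: (1000, 1400),
    1: (400, 6300),
    2: (400, 4300),
    3: (1100, 3200),
    4: (1000, 4700),
}

def map_segment(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    limit, base = segment_table[segment]
    if offset >= limit:                  # reference past the end of the segment
        raise MemoryError("memory error: offset past segment limit")
    return base + offset

print(map_segment(3, 852))  # 4052: base 3200 + offset 852
```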
4.5.3 Implementation of the Segment Table
As with the page table, except a segment table base register (STBR) is needed.
NOTE: segmentation also allows dynamic relocation of processes because physical addresses are mapped 'on the fly' by hardware.
4.6 Other Issues

4.6.1 Protection and Sharing of Memory
In both paged and segmented memory it is easy to allow for protection and sharing of a process's memory. In the page/segment table, protection bits (flags) can be associated with each page/segment and checked at each memory reference.
However, segmentation allows for more sensible protection and sharing as the segments reflect the logical structure of the program - pages are not guaranteed to reflect the logical structure. E.g. a page may mix data and code.
4.6.2 Views of Memory
Paging introduces a large separation between the programmer's (logical) view of how memory is structured and the system's (physical) view.
Segmentation closes that gap slightly though address mapping is still required.
4.6.3 Combined Paging and Segmentation
Paging and segmentation each have their faults and strengths. Combining the two methods allows us to avoid the faults and use the strengths.
Paging eliminates external fragmentation but does not reflect the logical structure of the programs. Segmentation is the opposite: it suffers external fragmentation but reflects the logical structure.
Using paged segmentation each process is divided into a number of segments and then each segment is divided into a number of pages that are then stored in page frames. Thus external fragmentation is eliminated AND the logical structure is maintained.
Address mapping is lengthier but essentially just an extension of old ideas. From the programmer’s point of view each logical address will consist of a segment number and an offset within that segment as before. However, the memory manager can break the segment offset into a page number and an offset within the page.
1. extract the segment number;
2. get the segment table entry (i.e. segment number + STBR);
3. from the segment table entry get the segment's page table base; use the page number as an index into that page table to find the frame number;
4. combine the frame number with the offset within the page to get the physical address.
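The two-level translation just described can be sketched as follows. All numbers (page size, segment numbers, table contents) are invented for illustration; the point is only the shape of the lookup: segment number selects a page table, then the segment offset is split into page number and page offset exactly as in simple paging.

```python
# Sketch of paged segmentation: one page table per segment.
PAGE_SIZE = 1024
segment_page_tables = {          # segment no. -> that segment's page table
    0: {0: 5, 1: 2},             # pages of segment 0 -> frames 5 and 2
    1: {0: 7},                   # the single page of segment 1 -> frame 7
}

def map_paged_segment(segment, seg_offset):
    """Translate (segment, offset-within-segment) to a physical address."""
    page = seg_offset // PAGE_SIZE           # page within the segment
    offset = seg_offset % PAGE_SIZE          # offset within that page
    frame = segment_page_tables[segment][page]
    return frame * PAGE_SIZE + offset

print(map_paged_segment(0, 1500))  # 2524: page 1 of segment 0 -> frame 2, offset 476
```

Because each segment is stored as pages, a segment no longer needs contiguous memory, which is how the combination eliminates external fragmentation while keeping the logical structure.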