Chapter 8: Virtual Memory

  • Virtual Memory
    – A program addresses only logical addresses; hardware maps them to physical addresses
    – Possible to load only part of a process into memory
    – Possible for a process to be larger than main memory
    – Also allows additional processes, since only part of each process needs to occupy main memory
    – Load memory as the program needs it
    – Real memory: the physical memory occupied by a program
    – Virtual memory: the larger memory space perceived by the program

  • Virtual Memory
    – Principle of locality: a program tends to reference the same items repeatedly (figure 8.1, page 338)
    – Even if the same item is not reused, nearby items will often be referenced
    – Resident set: those parts of the program being actively used and kept in main memory
    – Remaining parts of the program stay on disk
    – Thrashing: constantly needing to fetch pages from secondary storage
    – Happens if the O.S. throws out a piece of memory that is about to be used
    – Can happen if the program scans a long array, continuously referencing pages not used recently
    – The O.S. must watch out for this situation

  • Paging Hardware
    – Use the page number as an index into the page table, which then contains the physical frame holding that page (figure 8.3, page 340)

    – Typical flag bits: Present, Accessed, Modified, various protection-related bits (a lookup sketch follows below)
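    A minimal C sketch of single-level page-table translation with the flag bits above. The page size, table size, and pte_t layout are illustrative assumptions, not the hardware format from the book.

        #include <stdint.h>
        #include <stdbool.h>

        #define PAGE_SIZE 4096u
        #define NUM_PAGES 1024u                /* assumed size of the logical address space */

        typedef struct {
            uint32_t frame    : 20;            /* physical frame number */
            uint32_t present  : 1;             /* page is in main memory */
            uint32_t accessed : 1;             /* set by hardware on any reference */
            uint32_t modified : 1;             /* set by hardware on a write */
            uint32_t writable : 1;             /* one of several protection bits */
        } pte_t;

        /* Translate a logical address; returns false if the page is not present
         * (real hardware would raise a page fault at that point). */
        bool translate(const pte_t table[NUM_PAGES], uint32_t laddr, uint32_t *paddr)
        {
            uint32_t page   = laddr / PAGE_SIZE;
            uint32_t offset = laddr % PAGE_SIZE;

            if (page >= NUM_PAGES || !table[page].present)
                return false;                  /* page fault */

            *paddr = table[page].frame * PAGE_SIZE + offset;
            return true;
        }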

  • More Paging Hardware
    – Full page tables can be very large: a 4 GB space with 4 KB pages = 1 M entries
    – Some systems put page tables in the virtual address space
    – Multilevel page tables
    – Top-level page table has a Present bit to indicate that an entire range is not valid
    – Second-level table is only used if that part of the address space is used
    – Second-level tables can also be used for shared libraries
    – Figure 8.5, page 341 (a two-level walk is sketched below)
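    A sketch of a two-level page-table walk, assuming 32-bit addresses, 4 KB pages, and a 10+10+12 bit split. The top-level directory entry must be present before the second-level table is consulted at all; treating the directory entry's frame field as an index into an array of second-level tables is purely for illustration.

        #include <stdint.h>
        #include <stdbool.h>

        typedef struct {
            uint32_t frame   : 20;
            uint32_t present : 1;
        } entry_t;

        bool translate2(const entry_t dir[1024], entry_t tables[][1024],
                        uint32_t laddr, uint32_t *paddr)
        {
            uint32_t top    = (laddr >> 22) & 0x3FF;   /* bits 31..22 */
            uint32_t second = (laddr >> 12) & 0x3FF;   /* bits 21..12 */
            uint32_t offset =  laddr        & 0xFFF;   /* bits 11..0  */

            if (!dir[top].present)
                return false;                  /* whole 4 MB range unmapped */

            const entry_t *pt = tables[dir[top].frame];
            if (!pt[second].present)
                return false;                  /* page fault */

            *paddr = ((uint32_t)pt[second].frame << 12) | offset;
            return true;
        }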

  • More Paging Hardware
    – Inverted page tables: one entry per physical frame rather than one per virtual page
    – Uses a hash table on the page number (figure 8.6, page 342)

    – TLB (translation lookaside buffer): a cache of recently translated entries
    – The processor first looks in the TLB to see if the entry for this page is present
    – If not, it then goes to the page table structure in main memory (see the sketch below)
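    A sketch of the TLB fast path: check a small cache of recent translations before walking the page table. The tiny linear-scan TLB and the tlb_entry_t layout are illustrative assumptions; real TLBs are associative hardware.

        #include <stdint.h>
        #include <stdbool.h>

        #define TLB_ENTRIES 64

        typedef struct {
            uint32_t page;      /* virtual page number */
            uint32_t frame;     /* physical frame number */
            bool     valid;
        } tlb_entry_t;

        /* Returns true on a TLB hit and fills in *frame. On a miss the caller
         * falls back to the in-memory page table (e.g., translate() above). */
        bool tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES],
                        uint32_t page, uint32_t *frame)
        {
            for (int i = 0; i < TLB_ENTRIES; i++) {
                if (tlb[i].valid && tlb[i].page == page) {
                    *frame = tlb[i].frame;      /* hit: no memory access needed */
                    return true;
                }
            }
            return false;                       /* miss: walk the page table */
        }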

  • Paging
    – Page faults: the page is marked as not present
    – CPU determines the page isn't in memory
    – Interrupts the program and starts the O.S. page fault handler
    – O.S. verifies the reference is valid but not in memory; otherwise it reports an illegal address
    – Swap out a page if needed
    – Read the referenced page from disk
    – Update the page table entry
    – Resume the interrupted process (or possibly switch to another process); the handler's steps are sketched below
    – Page size: smaller pages mean more frames and less internal fragmentation, but larger tables
    – Common sizes: Table 8.2, page 348
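    A small, self-contained simulation of the page-fault steps above. The page counts, the trivial FIFO victim choice, and the printf calls standing in for disk I/O are all assumptions made so the sketch can actually run.

        #include <stdio.h>
        #include <stdbool.h>

        #define NUM_PAGES  8      /* virtual pages of the process (assumed) */
        #define NUM_FRAMES 3      /* physical frames available (assumed)    */

        typedef struct {
            bool present;
            bool modified;
            int  frame;           /* valid only when present */
        } pte_t;

        static pte_t table[NUM_PAGES];
        static int   frame_to_page[NUM_FRAMES];  /* -1 = free */
        static int   next_victim = 0;            /* trivial FIFO victim choice */

        /* Validate the reference, free a frame if necessary (writing a modified
         * victim back), read the page in, and update the page table entry. */
        static void handle_fault(int page)
        {
            if (page < 0 || page >= NUM_PAGES) {
                printf("illegal address: page %d\n", page);
                return;
            }
            int frame = -1;
            for (int f = 0; f < NUM_FRAMES; f++)
                if (frame_to_page[f] == -1) { frame = f; break; }

            if (frame == -1) {                       /* no free frame: replace one */
                frame = next_victim;
                next_victim = (next_victim + 1) % NUM_FRAMES;
                int old = frame_to_page[frame];
                if (table[old].modified)
                    printf("  write page %d to disk\n", old);  /* swap out if needed */
                table[old].present = false;
            }
            printf("  read page %d from disk into frame %d\n", page, frame);
            table[page] = (pte_t){ .present = true, .modified = false, .frame = frame };
            frame_to_page[frame] = page;
        }

        int main(void)
        {
            for (int f = 0; f < NUM_FRAMES; f++) frame_to_page[f] = -1;
            int refs[] = { 0, 1, 2, 3, 0 };          /* page 3, then 0 again, forces replacement */
            for (int i = 0; i < 5; i++) {
                int p = refs[i];
                if (!table[p].present) {
                    printf("page fault on page %d\n", p);
                    handle_fault(p);
                }
            }
            return 0;
        }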

  • Segmentation
    – Programmer sees memory as a set of multiple segments, each with a separate address space
    – Growing data structures are easier to handle: the O.S. can expand or shrink a segment
    – Can alter one segment without modifying other segments
    – Easy to share a library: share one segment among processes
    – Easy memory protection: can set protection values for the entire segment
    – Implementation: have a segment table for each process, similar to the one-level paging method
    – Entry info: present, modified, location, size (a translation sketch follows)
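    A sketch of segment-table translation: the segment's base (location) plus the offset, with a length check for protection. The seg_t layout is an illustrative assumption based on the entry info listed above.

        #include <stdint.h>
        #include <stdbool.h>

        typedef struct {
            bool     present;
            bool     modified;
            uint32_t base;      /* location of the segment in physical memory */
            uint32_t size;      /* length of the segment in bytes */
        } seg_t;

        /* Logical address = (segment number, offset). Returns false on a
         * segment fault (not present) or when the offset exceeds the size. */
        bool seg_translate(const seg_t *table, uint32_t nsegs,
                           uint32_t seg, uint32_t offset, uint32_t *paddr)
        {
            if (seg >= nsegs || !table[seg].present)
                return false;               /* segment fault */
            if (offset >= table[seg].size)
                return false;               /* beyond the end of the segment */
            *paddr = table[seg].base + offset;
            return true;
        }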

  • Paging + Segmentation
    – Paging: no external fragmentation; easier to manage memory since all items are the same size
    – Segmentation: easier to manage growing data structures and sharing
    – Some processors have both (the 386): each segment is broken up into pages
    – Address translation (figure 8.13, page 351): do the segment translation first, then translate that address using paging (see the combined sketch below)
    – Some internal fragmentation at the end of each segment
    – Windows 3.1 uses both
    – Windows 95 / Linux: use paging for virtual memory; use segments only for privilege level (one segment = the whole address space)
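    A sketch of the combined translation: segmentation first, producing a linear address, then paging on that address. It is not self-contained; it simply composes the seg_translate() and translate() sketches shown on the earlier slides (so it assumes seg_t, pte_t, and NUM_PAGES from those sketches).

        bool combined_translate(const seg_t *segs, uint32_t nsegs,
                                const pte_t pages[NUM_PAGES],
                                uint32_t seg, uint32_t offset, uint32_t *paddr)
        {
            uint32_t linear;
            if (!seg_translate(segs, nsegs, seg, offset, &linear))
                return false;                       /* segment-level fault */
            return translate(pages, linear, paddr); /* then page-level translation */
        }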

  • O.S. Policies
    – Decisions to make: support virtual memory? Paging, segmentation, or both? Which memory management algorithms?
    – Decisions about virtual memory:
    – Fetch policy: when to bring a page in; when needed, or in expectation of need?
    – Placement: where to put it
    – Replacement: what to remove to make room for a new page
    – Resident set management: how many pages to keep in memory; fixed or variable number of pages? Reassign pages to other processes?
    – Cleaning policy: when to write a page to disk
    – Load control: degree of multiprogramming

  • Fetch Policy
    – When to bring a page into memory
    – Demand paging: load the page when a process tries to reference it
    – Tends to produce a flurry of page faults early on, then settles down
    – Prepaging: bring in pages that are likely to be used in the near future
    – Tries to take advantage of disk characteristics: it is generally more efficient to load several consecutive sectors/pages than individual sectors, due to seek and rotational latency
    – Hard to correctly guess which pages will be referenced; easier to guess at program startup
    – May load unnecessary pages; usefulness is not clear

  • Policies
    – Placement policy: where to put the page
    – Trivial in a paging system; Best-fit, First-fit, or Next-fit can be used with segmentation
    – Is a concern with distributed systems
    – Replacement policy: which page to replace when a new page needs to be loaded
    – Tends to combine several things: how many page frames are allocated; whether to replace only a page in the current process or one from any process; and, from the pages being considered, selecting the one page to be replaced
    – The first two items are considered later (resident set management)

  • Replacement Policy
    – Frame locking: require a page to stay in memory
    – O.S. kernel and interrupt handlers, real-time processes, other key data structures
    – Implemented by a bit in the data structures
    – Basic algorithms: Optimal, Least Recently Used, First In First Out, Clock
    – Optimal: select the page that will not be referenced for the longest time in the future
    – Problem: no crystal ball; gives a standard against which to compare the other algorithms

  • Replacement Policy
    – Least Recently Used (LRU): locate the page that hasn't been referenced for the longest time
    – Nearly as good as the optimal policy in many cases
    – Difficult to implement fully: must keep an ordered list of pages, or a last-accessed time for every page
    – Does poorly on sequential scans
    – FIFO (First In, First Out): easy to implement
    – Assumes the page in memory longest has fallen into disuse; often wrong (victim selection for both is sketched below)
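    A sketch of victim selection for LRU and FIFO. It assumes each frame record keeps a last-referenced time and a load time, which is exactly the per-page bookkeeping that the slide notes makes full LRU expensive.

        #include <stdint.h>

        typedef struct {
            uint64_t last_ref;   /* time of most recent reference (for LRU)  */
            uint64_t loaded_at;  /* time the page was brought in (for FIFO) */
        } frame_info_t;

        /* LRU: evict the frame whose last reference is oldest. */
        int lru_victim(const frame_info_t *f, int n)
        {
            int v = 0;
            for (int i = 1; i < n; i++)
                if (f[i].last_ref < f[v].last_ref)
                    v = i;
            return v;
        }

        /* FIFO: evict the frame that has been resident the longest. */
        int fifo_victim(const frame_info_t *f, int n)
        {
            int v = 0;
            for (int i = 1; i < n; i++)
                if (f[i].loaded_at < f[v].loaded_at)
                    v = i;
            return v;
        }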

  • Replacement Policy
    – Clock policy: attempts to get the performance of LRU with the low overhead of FIFO
    – Include a use bit with each page
    – Think of the pages as a circular buffer, and keep a pointer into the buffer
    – Set the use bit when a page is loaded or used
    – When we need to remove a page: use the pointer to scan the available pages, looking for a page with use = 0, setting each use bit to 0 as we go by (see the sketch below)
    – See figure 8.16, page 359
    – Performance is close to LRU
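    A sketch of the basic clock policy: sweep a circular buffer of frames, clearing use bits until a frame with use == 0 is found. The frame_t layout and the static hand variable are illustrative assumptions.

        #include <stdbool.h>

        #define NFRAMES 8

        typedef struct {
            int  page;       /* which virtual page occupies this frame */
            bool use;        /* set on load and on every reference */
        } frame_t;

        static int hand = 0;     /* the clock pointer into the circular buffer */

        /* Returns the index of the frame to replace. Guaranteed to terminate:
         * after at most one full sweep every use bit has been cleared. */
        int clock_victim(frame_t frames[NFRAMES])
        {
            for (;;) {
                if (!frames[hand].use) {
                    int victim = hand;
                    hand = (hand + 1) % NFRAMES;   /* leave the pointer past the victim */
                    return victim;
                }
                frames[hand].use = false;          /* second chance: clear and move on */
                hand = (hand + 1) % NFRAMES;
            }
        }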

  • Replacement Policy
    – Clock policy variant: also examine the modified bit
    – Scan the buffer for a page with use = 0 and modified = 0
    – If none is found, scan the buffer for a page with use = 0 and modified = 1, clearing the use bit as we go by
    – Repeat if necessary (we know we will find a page this time, since we set use = 0)
    – Gives preference to pages that don't have to be written to disk (see figure 8.18, page 362, and the sketch below)
    – Page buffering: each process uses FIFO; removed pages are added to global lists (a modified list and an unmodified list)
    – Get a free page from the unmodified list first; otherwise write all modified pages to disk
    – The free lists give LRU-like behavior, and placement can influence cache operation
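    A sketch of the clock variant that also consults the modified bit, following the passes described above: first look for (use = 0, modified = 0) without touching any use bits, then for (use = 0, modified = 1) while clearing use bits, and repeat if both passes fail. The structure and names are assumptions.

        #include <stdbool.h>

        #define NFRAMES 8

        typedef struct {
            bool use;
            bool modified;   /* page must be written back before reuse */
        } cframe_t;

        static int chand = 0;    /* clock pointer */

        int clock_mod_victim(cframe_t f[NFRAMES])
        {
            for (;;) {
                /* Pass 1: unused and clean, without changing any use bits. */
                for (int i = 0; i < NFRAMES; i++) {
                    int j = (chand + i) % NFRAMES;
                    if (!f[j].use && !f[j].modified) {
                        chand = (j + 1) % NFRAMES;
                        return j;
                    }
                }
                /* Pass 2: unused but dirty, clearing use bits as we pass. */
                for (int i = 0; i < NFRAMES; i++) {
                    int j = (chand + i) % NFRAMES;
                    if (!f[j].use && f[j].modified) {
                        chand = (j + 1) % NFRAMES;
                        return j;
                    }
                    f[j].use = false;
                }
                /* Both passes failed: every use bit is now 0, so repeating
                 * from pass 1 is guaranteed to find a victim. */
            }
        }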

  • Resident Set Management
    – Size: how many pages to bring in
    – Smaller resident sets allow more processes in memory at one time
    – Larger sets reduce the number of page faults for each process, but each added page has less effect as the set grows
    – Fixed allocation: each process is assigned a fixed number of pages
    – Variable allocation: allow the number of pages for a given process to vary over time; requires the O.S. to assess the behavior of the processes
    – Scope: which pages are candidates for removal
    – Local: only look at that process
    – Global: look at all processes

  • Resident Set Management
    – Fixed allocation, local scope: the O.S. selects a page from that same process to replace
    – The number of pages is fixed in advance
    – Too small leads to thrashing; too big wastes memory and reduces the number of processes that can be in memory
    – Variable allocation, global scope: common
    – A page is added to whichever process experiences a page fault
    – Harder to determine who should lose a page; may remove a page from a process that needs it
    – Helped by the use of page buffering

  • Variable Allocation, Local Scope
    – Concept: when loading a process, allocate it a set of page frames
    – After a page fault, replace one of those pages
    – Periodically reevaluate the set size and adjust it to improve performance
    – Working set strategy: track the pages used in the last Δ time units the program has been running (figure 8.19, page 366)
    – The size of the set may vary over time
    – The O.S. monitors the working set of each process and removes pages no longer in the working set
    – A process runs only if its working set is in main memory (a tracking sketch follows)
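    A sketch of working-set tracking: a page is in the working set if it has been referenced within the last DELTA time units. The per-page last_ref timestamps, the DELTA value, and the virtual-time parameter are assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        #define DELTA 10000   /* working-set window in virtual time units (assumed) */

        typedef struct {
            bool     resident;
            uint64_t last_ref;    /* virtual time of the most recent reference */
        } ws_page_t;

        /* Remove resident pages that have fallen out of the working set. */
        void trim_working_set(ws_page_t *pages, int npages, uint64_t now)
        {
            for (int i = 0; i < npages; i++) {
                if (pages[i].resident && now - pages[i].last_ref > DELTA) {
                    pages[i].resident = false;   /* no longer in the working set */
                    /* in a real O.S. the page frame would be released here */
                }
            }
        }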

  • Working Set Strategy
    – Problems: the past may not match the future
    – Hard to measure the true working set
    – Can instead vary the set size based on the page fault frequency: if the rate is low, it is safe to remove pages; if the rate is high, add more pages
    – Page-Fault Frequency (PFF): look at the time since the last page fault
    – If it is less than a threshold, add the faulting page to the working set
    – If it is more than the threshold, discard pages with a use bit of 0 (see the sketch below)
    – May also include dead space in the middle
    – Doesn't handle transient periods well; requires some time before any page is removed
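    A sketch of the PFF decision made at each page fault: compare the time since the previous fault against a threshold. The constant, the variable names, and the way use bits are cleared are assumptions about one reasonable implementation, not the book's exact pseudocode.

        #include <stdbool.h>
        #include <stdint.h>

        #define THRESH 5000    /* inter-fault time threshold (assumed units) */

        typedef struct {
            bool resident;
            bool use;          /* hardware-set use bit */
        } pff_page_t;

        static uint64_t last_fault_time = 0;

        void pff_on_fault(pff_page_t *pages, int npages, int faulting, uint64_t now)
        {
            uint64_t gap = now - last_fault_time;
            last_fault_time = now;

            if (gap >= THRESH) {
                /* Faults are rare: shrink by discarding pages not used
                 * since the previous fault, and reset the use bits. */
                for (int i = 0; i < npages; i++) {
                    if (pages[i].resident && !pages[i].use)
                        pages[i].resident = false;
                    pages[i].use = false;
                }
            }
            /* Faults are frequent (gap < THRESH): just grow the set by
             * adding the faulting page, which happens in either case. */
            pages[faulting].resident = true;
            pages[faulting].use = true;
        }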

  • Variable-Interval Sampled Working Set
    – Algorithm: clear the use bits of resident pages at the beginning of an interval
    – During the interval, add newly faulted pages to the set
    – At the end of the interval, remove pages with use = 0
    – Parameters: M, the minimum duration of an interval; L, the maximum duration; Q, the number of page faults that can occur in an interval
    – Interval length: end the interval after time L, or after Q page faults; if Q faults occur but the elapsed time is still less than M, continue the process, else end the interval
    – Scans more often during transitions (see the sketch below)
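    A sketch of the interval decision, using the parameters M, L, and Q from the slide. It is expressed as small hooks called on each fault and at sampling points; the constants and all function names are assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        #define M_MIN 1000     /* minimum interval duration (assumed units) */
        #define L_MAX 8000     /* maximum interval duration */
        #define Q_MAX 16       /* page faults allowed per interval */

        static uint64_t interval_start = 0;
        static int      faults_in_interval = 0;

        void vsws_on_fault(void)
        {
            faults_in_interval++;   /* the newly faulted page joins the set elsewhere */
        }

        /* Called periodically and after each fault. Returns true when the interval
         * should end: the caller then removes pages with use == 0, clears all use
         * bits, and starts the next interval. */
        bool vsws_interval_over(uint64_t now)
        {
            uint64_t elapsed = now - interval_start;

            if (elapsed >= L_MAX)
                return true;                              /* hit the maximum length */
            if (faults_in_interval >= Q_MAX && elapsed >= M_MIN)
                return true;                              /* enough faults, past the minimum */
            return false;                                 /* otherwise keep going */
        }

        void vsws_start_interval(uint64_t now)
        {
            interval_start = now;
            faults_in_interval = 0;
            /* the caller clears the use bits of all resident pages here */
        }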

  • Policies
    – Cleaning policy: demand cleaning writes a page out only when it is selected for replacement
    – The faulting process may then have to wait for two I/O operations (write-out and read-in) before it can proceed
    – Precleaning: write pages out in batches
    – Done too soon, a page may be modified again before it is replaced; works well with page buffering
    – Load control: managing how many processes we keep in main memory
    – Too few, and all processes may be blocked, with a lot of swapping; too many can lead to thrashing
    – Denning's criterion: want the time between faults to equal the time to process a fault
    – With the clock algorithm, the rate at which we scan pages can be used to determine the proper load

  • Load Control
    – Suspending a process: used when we need to reduce the level of multiprogramming
    – Six possibilities:
    – Lowest-priority process (based on scheduling)
    – Faulting process: its working set is probably not resident, so it would probably block soon anyway
    – Last process activated: least likely to have its full working set resident
    – Process with the smallest resident set: least effort to suspend and reload
    – Largest process: makes the most frames available
    – Process with the largest remaining execution window

  • Memory Management
    – Two separate memory management schemes in UNIX SVR4 and Solaris
    – Paging system: allocates page frames; kernel memory allocator: allocates memory for the kernel
    – Paging system data structures: figure 8.22 and Table 8.5, pages 373-374
    – Page table: one entry for each page of virtual memory for that process
    – Page frame number: the physical frame number
    – Age: how long the page has been in memory without being referenced
    – Copy on write: are two processes sharing this page (e.g. after fork(), waiting for exec())?
    – Modify: has the page been modified?
    – Reference: set when the page is accessed
    – Valid: the page is in main memory
    – Protect: are we allowed to write to this page?

  • Paging Data Structures
    – Disk block descriptor:
    – Swap device number: the logical device; device block number: the block location
    – Type of storage: swap or executable; also indicates whether we should clear the page first
    – Page frame data table:
    – Page state: available, or in use (on the swap device, in an executable, or in transfer)
    – Reference count: number of processes using the page
    – Logical device: the device holding a copy; block number: its location on that device
    – Pfdata pointer: for a linked list of pages
    – Swap-use table:
    – Reference count: number of page table entries pointing to a page on the storage device
    – Page/storage unit number: the page identifier (struct sketches of these follow)
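    A sketch of the SVR4 paging data structures as C structs, using the fields listed on these two slides. The bit widths, field types, and exact layouts are illustrative assumptions, not the real kernel definitions.

        #include <stdint.h>

        struct pte {                        /* page table entry, one per virtual page */
            uint32_t frame         : 20;    /* page frame number */
            uint32_t age           : 4;     /* time in memory without a reference */
            uint32_t copy_on_write : 1;     /* shared after fork(), pending exec() */
            uint32_t modify        : 1;
            uint32_t reference     : 1;
            uint32_t valid         : 1;     /* page is in main memory */
            uint32_t protect       : 1;     /* writable? */
        };

        struct disk_block_desc {            /* where the page lives on disk */
            int      swap_device;           /* logical device number */
            uint32_t block;                 /* block location on that device */
            int      storage_type;          /* swap or executable; clear-first flag */
        };

        struct pfdata {                     /* page frame data table entry, one per frame */
            int            state;           /* available / on swap / in executable / in transfer */
            int            refcnt;          /* number of processes using the page */
            int            device;          /* logical device holding a copy */
            uint32_t       block;           /* location on that device */
            struct pfdata *next;            /* linked list of pages */
        };

        struct swap_use {                   /* swap-use table entry */
            int      refcnt;                /* page table entries pointing at this page */
            uint32_t unit;                  /* page / storage unit number */
        };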

  • SVR4 Page Replacement
    – A clock algorithm variant with two hands (figure 8.23)
    – Fronthand: clears the use bits
    – Backhand: checks the use bits; if use = 0, the page is prepared to be swapped out
    – Scanrate: how fast the hands move; a faster rate frees pages faster
    – Handspread: the gap between the hands; a smaller gap frees pages faster
    – The system adjusts both values based on the amount of free memory

  • SVR4 Kernel Allocation
    – Used for structures smaller than a page
    – Lazy buddy system: don't split/coalesce blocks as often
    – The kernel frequently allocates and releases memory, but the number of blocks in use tends to remain steady
    – Locally free: not coalesced; globally free: coalesce if possible
    – Want: # locally free ≤ # in use
    – Algorithm (figure 8.24, page 378); Di = (# in use) − (# locally free) for blocks of size 2^i
    – Allocate: get any free block of size 2^i; if it was locally free, add 2 to Di; if globally free, add 1 to Di
    – If none is free, split a larger block and mark the other half locally free (Di unchanged)
    – Free a block of size 2^i:
    – Di ≥ 2: free it locally, subtract 2 from Di
    – Di = 1: free it globally, subtract 1 from Di
    – Di = 0: free it globally, and also select a locally free block and free it globally (see the sketch below)
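    A sketch of the lazy-buddy bookkeeping for a single block size 2^i. Only the counters and the Di-based decisions are modeled; the actual free lists, block splitting, and coalescing are left out, so this shows the accounting rather than a working allocator.

        #include <stdio.h>

        static int in_use, locally_free, globally_free;
        static int Di;                    /* Di = in_use - locally_free */

        void lazy_alloc(void)
        {
            if (locally_free > 0)       { locally_free--;  Di += 2; }
            else if (globally_free > 0) { globally_free--; Di += 1; }
            else {
                /* split a larger block: one half is allocated, the other is marked
                 * locally free; Di for this size is unchanged (the larger size
                 * class would be updated by the recursive split, not modeled here) */
                locally_free++;
            }
            in_use++;
        }

        void lazy_free(void)
        {
            in_use--;
            if (Di >= 2)      { locally_free++;  Di -= 2; }   /* cheap: no coalescing */
            else if (Di == 1) { globally_free++; Di -= 1; }   /* coalesce if possible */
            else {            /* Di == 0 */
                globally_free++;                              /* coalesce if possible */
                if (locally_free > 0) {                       /* also push one locally */
                    locally_free--;                           /* free block to global  */
                    globally_free++;
                }
            }
        }

        int main(void)
        {
            for (int k = 0; k < 4; k++) lazy_alloc();
            for (int k = 0; k < 4; k++) lazy_free();
            printf("in use %d, locally free %d, globally free %d, Di %d\n",
                   in_use, locally_free, globally_free, Di);
            return 0;
        }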

  • Linux Memory Mgmt
    – Virtual memory addressing: supports 3-level page tables
    – Page directory: one page in size; must be in memory
    – Page middle directory: can span multiple pages; has size 1 on the Pentium
    – Page table: points to individual pages
    – Page allocation: uses a buddy system with block sizes of 1 to 32 pages
    – Page replacement: based on the clock algorithm, but uses an age variable
    – Age is incremented when the page is accessed and decremented as the scan passes over memory
    – When age = 0, the page may be replaced; this has the effect of a least-frequently-used method (see the sketch below)
    – Kernel memory allocation: uses a scheme called slab allocation, with blocks of size 32 through 4080 bytes
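    A sketch of the age-based variant described above: references bump a page's age (up to a cap), the periodic scan decays it, and pages whose age has reached 0 become replacement candidates. The constants and the linear scan are illustrative assumptions, not Linux's actual code.

        #include <stdint.h>

        #define AGE_MAX   20
        #define AGE_BUMP  3     /* added on each reference (assumed)  */
        #define AGE_DECAY 1     /* subtracted on each scan pass (assumed) */

        typedef struct {
            int age;
        } lpage_t;

        void on_reference(lpage_t *p)
        {
            p->age += AGE_BUMP;
            if (p->age > AGE_MAX)
                p->age = AGE_MAX;
        }

        /* One pass of the scan: decay every age and return the index of a page
         * with age == 0 (a replacement candidate), or -1 if none reached zero. */
        int scan_for_victim(lpage_t *pages, int n)
        {
            int victim = -1;
            for (int i = 0; i < n; i++) {
                if (pages[i].age > 0)
                    pages[i].age -= AGE_DECAY;
                if (pages[i].age == 0 && victim < 0)
                    victim = i;
            }
            return victim;
        }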

  • Win 2000 Memory Mgmt
    – Virtual address map (figure 8.25):
    – 00000000 to 00000FFF: reserved, to help catch NULL pointer uses
    – 00001000 to 7FFFEFFF: user space
    – 7FFFF000 to 7FFFFFFF: reserved, to help catch wild pointers
    – 80000000 to FFFFFFFF: system
    – Page states:
    – Available: not currently used
    – Reserved: set aside, but not counted against the memory quota (not in use); no disk swap space is allocated yet; a process can declare memory that can then be quickly allocated when it is needed
    – Committed: space is set aside in the paging file (in use by the process)

  • Win 2000 Resident Set Management
    – Uses variable allocation, local scope
    – When a page fault occurs, a page is selected from the process's own local set of pages
    – If main memory is plentiful, resident sets are allowed to grow as pages are brought into memory
    – If main memory is scarce, less recently accessed pages are removed from the resident sets