Operating Systems and Background

Upload: bharadwaj-santhosh

Post on 08-Apr-2018


  • 8/7/2019 Operating Systems and Background

    1/56

    1.INTRODUCTION TO OPERATING SYSTEMS AND

    BACKGROUND

An operating system (OS) is:

A software layer that abstracts away and manages the details of hardware resources.

A set of utilities to simplify application development.

All the code you didn't write in order to implement your application.

    The OS and Hardware

1. An OS mediates programs' access to hardware resources:

a. Computation (CPU)

b. Volatile storage (memory) and persistent storage (disk, etc.)

c. Network communications (TCP/IP stacks, Ethernet cards, etc.)

d. Input/output devices (keyboard, display, sound card, etc.)

2. The OS abstracts hardware into logical resources and well-defined interfaces to those resources:

a. Processes (CPU, memory)

b. Files (disk)

i. Programs (sequences of instructions)

c. Sockets (network)

    Why bother with an OS?

    1. Application benefits

a. Programming simplicity

i. See high-level abstractions (files) instead of low-level hardware details (device registers)

ii. Abstractions are reusable across many programs

b. Portability (across machine configurations or architectures)

i. Device independence: a 3Com card or an Intel card

2. User benefits

i. Safety

1. Each program sees its own virtual machine and thinks it owns the computer


    2. OS protects programs from each other.

    3. OS fairly multiplexes resources across programs

    ii. Efficiency (cost and speed)

    1. Share one computer across many users.

    2. Concurrent execution of multiple programs

    The major operating system issues

    1. Structure: how is the OS organized?

    2. Sharing: how are resources shared across users?

    3. Naming: how are resources named (by users or programs)?

    4. Security: how is the integrity of the OS and its resources ensured?

    5. Protection: how is one user/program protected from another?

    6. Performance: how do we make it all go fast?

7. Reliability: what happens if something goes wrong (either with hardware or with a program)?

8. Extensibility: can we add new features?

9. Communication: how do programs exchange information, including across a network?

10. Concurrency: how are parallel activities (computation and I/O) created and controlled?

11. Scale: what happens as demands or resources increase?

12. Persistence: how do you make data last longer than program execution?

13. Distribution: how do multiple computers interact with each other?

14. Accounting: how do we keep track of resource usage, and perhaps charge for it?

    Operating system goals

1. Execute user programs and make solving user problems easier.

    2. Make the computer system convenient to use.

    3. Use the computer hardware in an efficient manner.

    4. Ability to evolve (scalability)

    Computer system components

1. Hardware: provides basic computing resources (CPU, memory, I/O devices)

    2. Operating system: controls and coordinates the use of the hardware among the various application

    programs for the various users.


    3. Applications programs: define the ways in which the system resources are used to solve the

    computing problems of the users (compilers, database systems, video games, business programs)

    4. Users (people, machines, other computers)

    Abstract view of system components

    Operating system definitions

    1. Resource allocator: manages and allocates resources

    2. Control program: controls the execution of user programs and operations of I/O devices.

    3. Kernel: the one program running at all times (all else being application programs)

Evolution of operating systems

    Serial processing

    1. No operating system

2. Machines were run from a console with display lights and toggle switches, an input device, and a printer.

3. Users scheduled time on the machine.

4. Setup included loading the compiler and source program, saving the compiled program, and loading and linking.

    Simple Batch systems

    1. Monitors

    a. Software controls the running programs

b. Jobs are batched together

    c. Program branches back to monitor when finished

    d. Resident monitor is in main memory and available for execution

[Figure: abstract view of system components: users 1..n at the top; system and application programs (compilers, assemblers, text editors, database systems); the operating system; computer hardware at the bottom.]


    Uniprogramming

    1. Processor must wait for I/O instruction to complete before proceeding

    Multiprogramming

    1. When one job needs to wait for I/O, the processor can switch to the other job

    2. Several jobs are kept in main memory at the same time, and the CPU is multiplexed among

    them.

[Figure: with uniprogramming, a single program alternates Run and Wait periods; with multiprogramming, programs A, B, and C run during one another's I/O waits, keeping the processor busy. Memory holds the operating system plus jobs 1-4.]


    OS features needed for multiprogramming

1. I/O routines supplied by the system.

2. Memory management: the system must allocate memory to several jobs.

3. CPU scheduling: the system must choose among several jobs ready to run.

4. Allocation of devices.

    Time-sharing systems-Interactive computing

    1. Using multiprogramming to handle multiple interactive jobs.

2. The processor's time is shared among multiple users.

    3. Multiple users simultaneously access the system through terminals.

    4. The CPU is multiplexed among several jobs that are kept in memory and on disk (the CPU is

    allocated to a job only if the job is in memory).

5. Jobs are swapped in and out of memory to disk.

    Desktop systems

    1. Personal computers-computer system dedicated to a single user.

    2. I/O devices-keyboards, mice, display screens, small printers.

    3. User convenience and responsiveness.

4. Can adopt technology developed for larger operating systems.

5. Individuals often have sole use of the computer and do not need advanced CPU utilization or protection features.

a. May run several different types of operating systems (Windows, UNIX, Linux)

    Parallel systems

1. Multiprocessor systems with more than one CPU in close communication.

2. Tightly coupled system: processors share memory and a clock; communication usually takes place through the shared memory.

3. Advantages of parallel systems

a. Increased throughput

b. Economical

c. Increased reliability

i. Graceful degradation


    ii. Fail-soft systems

    Symmetric multiprocessing (SMP)

1. Each processor runs an identical copy of the operating system.

2. Many processes can run at once without performance deterioration.

3. Most modern operating systems support SMP.

    Asymmetric multiprocessing

1. Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.

    2. More common in extremely large systems.

    Distributed systems

    1. Distribute the computation among several physical processors.

2. Loosely coupled system: each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.

3. Advantages of distributed systems

a. Resource sharing

b. Computation speed-up (load sharing)

c. Reliability

d. Communication

e. Requires networking infrastructure

f. Local area networks (LAN) or wide area networks (WAN)

g. May be either client-server or peer-to-peer systems.

    Clustered systems

    1. Clustering allows two or more systems to share storage.

    2. Provides high reliability

3. Asymmetric clustering: one server runs the application while the other servers stand by.

    4. Symmetric clustering: all N hosts are running the application

    Real-time systems

    1. Often used as a control device in a dedicated application such as controlling scientific experiments,

    medical imaging systems, industrial control systems and some display systems.

    2. Well-defined fixed-time constraints.


    3. Real-time systems may be either hard or soft real-time.

    Hard real-time

    1. Secondary storage limited or absent, data stored in short term memory, or read-only memory (ROM)

2. Conflicts with time-sharing systems; not supported by general-purpose operating systems.

    Soft real-time

1. Limited utility in industrial control and robotics.

2. Useful in applications (multimedia, virtual reality) requiring advanced operating-system features.

    Handheld systems

    1. Personal digital assistants (PDAs)

    2. Cellular telephones

    3. Issues:

    a. Limited memory

    b. Slow processors

c. Small display screens

    Authorised By

    SANTOSH BHARADWAJ REDDY

    Email: [email protected]

    Engineeringpapers.blogspot.com


    2. OPERATING SYSTEM STRUCTURE

    Operating System Components

    1. Process Management

    2. Main Memory Management

    3. File Management

    4. I/O System Management

5. Secondary-Storage Management

    6. Networking

    7. Protection System

    8. Command-Interpreter System

    Process Management

    A process is a program in execution. A process needs certain resources, including CPU time,

    memory, files, and I/O devices, to accomplish its task.

    The operating system is responsible for the following activities in connection with process

    management.

    Process creation and deletion.

    Process suspension and resumption.

    Provision of mechanisms for:

    Process synchronization.

    Process communication.

    Main-Memory Management


    Memory is a large array of words or bytes, each with its own address. It is a repository of quickly

    accessible data shared by the CPU and I/O devices.

    Main memory is a volatile storage device. It loses its contents in the case of system failure.

    The operating system is responsible for the following activities in connections with memory

    management.

Keep track of which parts of memory are currently being used and by whom.

    Decide which processes to load when memory space becomes available.

    Allocate and deallocate memory space as needed.

    File Management

    A file is a collection of related information defined by its creator. Commonly, files represent

    programs (both source and object forms) and data.

    The operating system is responsible for the following activities in connections with file management.

    File creation and deletion.

    Directory creation and deletion.

    Support of primitives for manipulating files and directories.

    Mapping files onto secondary storage.

    File backup on stable (nonvolatile) storage.

    I/O System Management

    The I/O system consists of:

    A buffer caching system.

    A general device-driver interface.

    Drivers for specific hardware devices.

    Secondary-Storage Management


Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.

Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.

    The operating system is responsible for the following activities in connection with disk management:

    Free space management.

    Storage allocation.

    Disk scheduling.

    Networking (Distributed Systems)

A distributed system is a collection of processors that do not share memory or a clock. Each processor has its own local memory.

    The processors in the system are connected through a communication network.

    Communication takes place using a protocol.

    A distributed system provides user access to various system resources.

    Access to a shared resource allows:

    Computation speed-up.

    Increased availability.

    Enhanced reliability.

    Protection system

1. Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.

    2. The protection mechanism must:

    a. Distinguish between authorized and unauthorized usage.

    b. Specify the controls to be imposed.

    c. Provide a means of enforcement.

    Command-interpreter system


    Many commands are given to the operating system by control statements which deal with:

    1. Process creation and management

    2. I/O handling

    3. Secondary-storage management

    4. Main-memory management

    5. File-system access

6. Protection

7. Networking

    8. The program that reads and interprets control statements is called variously

    a. Command-line interpreter

    b. Shell (in UNIX)

    9. Its function is to get and execute the next command statement.

    Operating system services

1. Program execution: system capability to load a program into memory and to run it.

2. I/O operations: since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.

3. File-system manipulation: program capabilities to read, write, create, and delete files.

4. Communications: exchange of information between processes, executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.

5. Error detection: ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.

    Additional operating system functions

    Additional functions exist not for helping the user, but rather for ensuring efficient system operations.

    1. Resource allocation-allocating resources to multiple users or multiple jobs running at the same time.

    2. Accounting-keep track of and record which users use how much and what kinds of computer

    resources for account billing or for accumulating usage statistics.

    3. Protection-ensuring that all access to system resources is controlled.

    System calls

    1. System calls provide the interface between a running program and the operating system.

    2. Generally available as assembly-language instructions.

3. Languages designed to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++).

4. Three general methods are used to pass parameters between a running program and the operating system:

a. Pass parameters in registers.

b. Store the parameters in a table in memory, and pass the table's address as a parameter in a register.

c. Push (store) the parameters onto the stack by the program, and pop them off the stack by the operating system.
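As a concrete illustration of the program/OS boundary (not part of the original notes): on a POSIX system the C library exposes thin wrappers around system calls, and even a high-level language can invoke one directly. This sketch assumes a POSIX platform and uses Python's ctypes; the result of the call comes back in a register exactly as the calling convention dictates.

```python
import ctypes
import os

# Load the C library already linked into the running process (POSIX-specific).
libc = ctypes.CDLL(None)

# getpid() takes no parameters; its result is returned in a register
# per the platform's system-call convention.
pid = libc.getpid()

# Python's own wrapper around the same system call agrees:
assert pid == os.getpid()
```

The point is only that a "system call" is an ordinary-looking function call at the language level; the parameter-passing mechanism (registers, memory table, or stack) is hidden inside the wrapper.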

    Types of system calls

    1. Process control


    2. File management

    3. Device management

    4. Information maintenance

    5. Communications

    System programs

    1. System programs provide a convenient environment for program development and execution. They

    can be divided into:

    a. File manipulation

    b. Status information

    c. File modification

    d. Programming language support

    e. Program loading and execution

    f. Communications

    g. Application programs

Most users' view of the operating system is defined by system programs, not the actual system calls.

    MS-DOS system structure

1. MS-DOS was written to provide the most functionality in the least space.

2. It is not divided into modules.

3. Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.

    UNIX system structure

1. UNIX: limited by hardware functionality, the original UNIX operating system had limited structuring. The UNIX OS consists of two separable parts:

2. System programs

3. The kernel

a. Consists of everything below the system-call interface and above the physical hardware.

b. Provides the file system, CPU scheduling, memory management, and other operating-system functions; a large number of functions for one level.

    UNIX layered approach

1. The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

2. With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.

    Windows NT client-server structure

    Virtual machines

1. A virtual machine takes the layered approach to its logical conclusion. It treats the hardware and the operating system kernel as though they were all hardware.

2. A virtual machine provides an interface identical to the underlying bare hardware.

3. The operating system creates the illusion of multiple processes, each executing on its own processor with its own (virtual) memory.


1. OSs are designed to run on any machine of a class of machines; the system must be configured for each specific computer site.

2. The SYSGEN program obtains information concerning the specific configuration of the hardware system.

3. Booting: starting a computer by loading the kernel.

4. Bootstrap program: code stored in ROM that is able to locate the kernel, load it into memory, and start its execution.

3. PROCESS MANAGEMENT (process concept, states of a process, schedulers)

    The Process Concept

The process is the OS's abstraction for execution:

The unit of execution

The unit of scheduling

The dynamic (active) execution context (compare with a program: static, just a bunch of bytes)

A process is often called a job, task, or sequential process.


    Sequential process is a program in execution

    Defines the instruction-at-a-time execution of a program

An operating system executes a variety of programs:

Batch systems: jobs

Time-shared systems: user programs or tasks

A process is a program in execution; process execution must progress in a sequential fashion.

What's in a process?

    A process consists of (at least):

    An address space

    The code for the running program

    The data for the running program

    An execution stack and stack pointer (SP)

    Traces state of procedure calls made

    The program counter (PC), indicating the next instruction.

A set of general-purpose processor registers and their values

A set of OS resources:

Open files, network connections, sound channels

    The process is a container for all of this state

    A process is named by a process ID (PID)

    Just an integer (actually, typically a short)

A Process's Address Space

Process State

As a process executes, it changes state:

    1. New: The process is being created.

    2. Running: Instructions are being executed.

    3. Waiting: The process is waiting for some event to occur.

4. Ready: The process is waiting to be assigned to a processor.

    5. Terminated: The process has finished execution.
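The five states and their legal transitions can be captured in a small table. This is an illustrative sketch, not from the notes; the state names follow the list above.

```python
# Legal transitions of the five-state process model: each state maps to
# the set of states a process may move to next.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},                         # dispatched by the scheduler
    "running":    {"ready", "waiting", "terminated"},  # preempted / blocks / exits
    "waiting":    {"ready"},                           # awaited event completes
    "terminated": set(),
}

def step(state, next_state):
    """Move a process to next_state, rejecting illegal transitions."""
    if next_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state
```

For example, new -> ready -> running -> waiting -> ready is a legal path, while waiting -> running is not: a waiting process must re-enter the ready queue before it can be dispatched again.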

    Process State Transition Diagram

[Figure: a process's address space, from top to bottom: stack (dynamically allocated memory), heap (dynamically allocated memory), static data (data segment), and code (text segment).]

[Figure: process state transition diagram: new -> ready; ready -> running (scheduler dispatch); running -> waiting (I/O or event wait); waiting -> ready (event completion); running -> ready (preemption); running -> terminated (exit).]


    Process Scheduling Queues

Job queue: the set of all processes in the system.

Ready queue: the set of all processes residing in main memory, ready and waiting to execute.

Device queues: sets of processes waiting for an I/O device.

Processes migrate between the various queues.

    Ready Queue and Various I/O Device Queues

    Representation of Process Scheduling

[Figure: each queue is a linked list with head and tail pointers to process control blocks (PCBs); e.g., the ready-queue header links a chain of PCBs, and each device queue links the PCBs of processes waiting on that device.]

[Figure: queueing diagram of process scheduling: a process in the ready queue is dispatched to the CPU; it leaves the CPU on an I/O request (joining an I/O queue), on time-slice expiry (returning to the ready queue), by forking a child (waiting while the child executes), or by waiting for an interrupt to occur.]


Schedulers

The long-term scheduler (or job scheduler) selects which processes should be brought into the ready queue.

The short-term scheduler (or CPU scheduler) selects which process should be executed next and allocates the CPU.

Addition of Medium-Term Scheduling

The short-term scheduler is invoked very frequently (milliseconds), so it must be fast.

The long-term scheduler is invoked very infrequently (seconds, minutes), so it may be slow. The long-term scheduler controls the degree of multiprogramming.

    Processes can be described as either:

An I/O-bound process spends more time doing I/O than computation: many short CPU bursts.

A CPU-bound process spends more time doing computation: a few very long CPU bursts.

    Context Switch

When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process.

    Context-switch time is overhead; the system does no useful work while switching.

    Time dependent on hardware support.

    Process Creation

[Figure: medium-term scheduling: partially executed, swapped-out processes are swapped back into the ready queue; processes cycle between the ready queue, the CPU, and the I/O waiting queues.]


A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.

Resource sharing

Parent and children share all resources.

Children share a subset of the parent's resources.

Parent and child share no resources.

    Execution

    Parent and children execute concurrently.

    Parent waits until children terminate.

    Address space

Child is a duplicate of the parent.

Child has a program loaded into it.

UNIX examples

The fork system call creates a new process.

The exec system call is used after a fork to replace the process's memory space with a new program.
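The fork/exec pattern can be sketched in a few lines. This is a POSIX-only illustration; the helper name spawn and the use of Python are my own choices, not from the notes.

```python
import os
import sys

def spawn(program, args):
    """Fork a child, exec a new program in it, and wait for it to exit."""
    pid = os.fork()
    if pid == 0:
        # Child: exec replaces this process's memory space with `program`.
        os.execvp(program, [program] + args)
        os._exit(127)              # reached only if exec itself failed
    # Parent: wait until the child terminates, then return its exit code.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

# The child runs a fresh interpreter image and exits with code 7.
code = spawn(sys.executable, ["-c", "raise SystemExit(7)"])
```

Note the division of labor: fork duplicates the parent's address space, and exec then discards that duplicate in favor of the new program's image, which is exactly the "duplicate, then load a program into it" option described above.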

4. PROCESS MANAGEMENT: CPU SCHEDULING

Multiprogramming and Scheduling

Multiprogramming increases resource utilization and job throughput by overlapping I/O and CPU activity.

Scheduling decides which process to run, and for how long.

Schedulable entities are usually called jobs: processes, threads, people, disk arm movements.


    There are two time scales of scheduling the CPU:

    long term: determining the multiprogramming level

    how many jobs are loaded into primary memory

the act of loading in a new job (or swapping one out) is called swapping

    short-term: which job to run next to result in good service

    happens frequently, want to minimize context-switch overhead

    good service could mean many things

    The scheduler is the module that moves jobs from queue to queue

    The scheduling algorithm determines which job(s) are chosen to run next, and which queues they

    should wait on.

    The scheduler is typically run when:

    A job switches from running to waiting

    when an interrupt occurs

    especially a timer interrupt

when a job is created or terminated

    There are two major classes of scheduling systems

In preemptive systems, the scheduler can interrupt a running job and force a context switch.

In non-preemptive systems, the scheduler waits for the running job to explicitly (voluntarily) block.

    Scheduling Goals

    Scheduling algorithms can have many different goals (which sometimes conflict)

    maximize CPU utilization

    maximize job throughput (#job/s)

    minimize job turnaround time (Tfinish - Tstart)

    minimize job waiting time (Avg(Twait): average time spent on wait queue)

minimize response time (Avg(Tresp): average time from request submission until the first response is produced)

    Goals may depend on type of system

batch systems: strive to maximize job throughput and minimize turnaround time

    interactive systems: minimize response time of interactive jobs (such as editors or

    web browsers)

    Scheduler Non-goals

    Schedulers typically try to prevent starvation

    Starvation occurs when a process is prevented from making progress, because another

    process has a resource it needs

    A poor scheduling policy can cause starvation

-- e.g., if a high-priority process always prevents a low-priority process from running on the CPU

    Synchronization can also cause starvation

    Alternating Sequence of CPU and I/O Bursts


CPU Scheduler

Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

    CPU scheduling decisions may take place when a process:

    1. Switches from running to waiting state.

    2. Switches from running to ready state.

    3. Switches from waiting to ready.

    4. Terminates.

    Scheduling under 1 and 4 is non-preemptive.

    All other scheduling is preemptive.

    Dispatcher

The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:

    Switching context

    Switching to user mode

    Jumping to the proper location in the user program to restart that program

Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

    Scheduling Criteria

CPU utilization: keep the CPU as busy as possible.

Throughput: # of processes that complete their execution per time unit.

Turnaround time: amount of time to execute a particular process.

Waiting time: amount of time a process has been waiting in the ready queue.

Response time: amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).

    Optimization Criteria

    Max CPU utilization

Max throughput

    Min turnaround time

    Min waiting time

    Min response time

First-Come, First-Served (FCFS) Scheduling

[Figure: alternating sequence of CPU and I/O bursts: a process cycles through a CPU burst (load, store, add, read, write instructions), then waits for I/O, then begins its next CPU burst, and so on.]


Process  Burst Time
P1       24
P2       3
P3       3

Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is:

| P1 | P2 | P3 |
0    24   27   30

Waiting time for P1 = 0; P2 = 24; P3 = 27

Average waiting time: (0 + 24 + 27)/3 = 17

Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

| P2 | P3 | P1 |
0    3    6    30

Waiting time for P1 = 6; P2 = 0; P3 = 3

Average waiting time: (6 + 0 + 3)/3 = 3

Much better than the previous case.

Convoy effect: short processes stuck behind a long process.

Problems of FCFS:

Average response time and turnaround time can be large

E.g., small jobs waiting behind long ones

Results in high turnaround time

May lead to poor overlap of I/O and CPU
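The two FCFS orderings above can be checked with a few lines. The helper name is mine, and all jobs are assumed to arrive at time 0, as in the example.

```python
def fcfs_waits(bursts):
    """Waiting time of each job under FCFS, all jobs arriving at time 0."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # each job waits for everything queued ahead of it
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27; average 17.
# Order P2, P3, P1 (bursts 3, 3, 24): waits 0, 3, 6; average 3.
```

The code makes the convoy effect concrete: moving the 24-unit job to the back drops the average wait from 17 to 3 without changing total work.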

    Shortest-job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

Two schemes:

Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.

Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

SJF is optimal: it gives the minimum average waiting time for a given set of processes.

    Example of Non-Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (non-preemptive):

| P1 | P3 | P2 | P4 |
0    7    8    12   16


Average waiting time = (0 + 6 + 3 + 7)/4 = 4
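The non-preemptive schedule above can be reproduced with a short simulation. The helper name and the job encoding (name mapped to arrival and burst) are my own:

```python
def sjf_nonpreemptive(jobs):
    """jobs: {name: (arrival, burst)}. Returns each job's waiting time."""
    remaining = dict(jobs)
    clock, waits = 0, {}
    while remaining:
        ready = {n: ab for n, ab in remaining.items() if ab[0] <= clock}
        if not ready:                       # CPU idles until the next arrival
            clock = min(a for a, _ in remaining.values())
            continue
        name = min(ready, key=lambda n: ready[n][1])  # shortest next burst wins
        arrival, burst = remaining.pop(name)
        waits[name] = clock - arrival       # time spent in the ready queue
        clock += burst                      # runs to completion, no preemption
    return waits
```

Running it on the four jobs above yields waits P1 = 0, P3 = 3, P2 = 6, P4 = 7, i.e. the average of 4 stated in the example.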

Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

SJF (preemptive):

| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7    11   16

Average waiting time = (9 + 1 + 0 + 2)/4 = 3
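The preemptive variant (SRTF) can be simulated one time unit at a time; again, the helper name and job encoding are mine:

```python
def srtf_waits(jobs):
    """Preemptive SJF (SRTF). jobs: {name: (arrival, burst)};
    returns each job's waiting time (finish - arrival - burst)."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= clock]
        if not ready:
            clock += 1
            continue
        # Re-evaluate every tick: shortest remaining time wins, so a
        # newly arrived short job preempts the current one.
        name = min(ready, key=lambda n: remaining[n])
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    return {n: finish[n] - a - b for n, (a, b) in jobs.items()}
```

On the four jobs above this gives waits P1 = 9, P2 = 1, P3 = 0, P4 = 2, matching the average of 3 in the example.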

    Determining Length of Next CPU Burst

    Can only estimate the length.

    Can be done by using the length of previous

    CPU bursts, using exponential averaging.

1. t_n = actual length of the nth CPU burst

2. tau_{n+1} = predicted value for the next CPU burst

3. alpha, where 0 <= alpha <= 1

4. Define: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n

Examples of Exponential Averaging

alpha = 0:

tau_{n+1} = tau_n

Recent history does not count.

alpha = 1:

tau_{n+1} = t_n

Only the actual last CPU burst counts.

If we expand the formula, we get:

tau_{n+1} = alpha*t_n + (1-alpha)*alpha*t_{n-1} + ... + (1-alpha)^j * alpha * t_{n-j} + ... + (1-alpha)^{n+1} * tau_0

Since both alpha and (1 - alpha) are less than or equal to 1, each successive term has less weight than its predecessor.
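The recurrence can be evaluated directly over a burst history; the function name, the initial guess tau0, and the sample values below are illustrative choices, not from the notes.

```python
def predict(history, alpha, tau0):
    """Apply tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n over a list of
    actual burst lengths, returning the successive predictions."""
    tau = tau0
    out = []
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
        out.append(tau)
    return out

# alpha = 0 ignores measurements entirely; alpha = 1 tracks only the
# most recent burst; values in between blend history and recency.
```

For instance, with alpha = 0.5, tau0 = 10, and bursts 6 then 4, the predictions are 8.0 then 6.0.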

    Problems of SJF:

It is impossible to know the size of a future CPU burst

    from your theory class, equivalent to the halting problem

    Can you make a reasonable guess?

    yes, for instance looking at past as predictor of future

    But, might lead to starvation in some cases!

    Priority Scheduling

    A priority number (integer) is associated with each process

    The CPU is allocated to the process with the highest priority (smallest integer = highest priority).



    Preemptive

    Non-Preemptive

    SJF is a priority scheduling where priority is the predicted next CPU burst time.

Problem: starvation; low-priority processes may never execute.

Solution: aging; as time progresses, increase the priority of the process.
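Aging can be sketched as an effective priority that improves with waiting time. The encoding below (base priority minus time waited, smaller number wins) is an illustrative choice, not from the notes.

```python
def pick_next(ready, clock):
    """ready maps name -> (base_priority, arrival_time); a smaller number
    means higher priority. Aging: effective priority improves by one for
    every time unit a job has been waiting in the ready queue."""
    return min(ready, key=lambda n: ready[n][0] - (clock - ready[n][1]))

# A low-priority job that has waited long enough beats a freshly arrived
# high-priority job, so it cannot be starved forever.
```

With a base-10 job waiting since time 0 and a base-2 job arriving at time 20, the old job wins at time 20 (effective -10 vs 2), whereas a base-2 job arriving early still wins while the wait is short.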

Round Robin (RR)

Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.

If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.

Performance

q large: RR behaves like FIFO.

q small: q must still be large relative to context-switch time, otherwise overhead is too high.

Example of RR with Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

    Typically, higher average turnaround than SJF, but better response.
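The RR schedule above can be regenerated with a small simulation; the helper name and the assumption that all four jobs arrive at time 0 (as in the example) are mine.

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, burst), all arriving at time 0.
    Returns (gantt, completion): the (name, start, end) slices and
    each job's completion time."""
    queue = deque(jobs)
    clock, gantt, completion = 0, [], {}
    while queue:
        name, rem = queue.popleft()
        run = min(quantum, rem)
        gantt.append((name, clock, clock + run))
        clock += run
        if rem > run:
            queue.append((name, rem - run))   # preempted: back of the queue
        else:
            completion[name] = clock
    return gantt, completion
```

With bursts 53, 17, 68, 24 and quantum 20 it reproduces the chart above: P2 completes at 37, P4 at 121, P1 at 134, and P3 at 162.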

    Time Quantum and Context Switch Time

[Figure: for a process time of 10 time units, a quantum of 12 causes 0 context switches; a quantum of 6 causes 1; a quantum of 1 causes 9.]


    Turnaround Time Varies With the Time Quantum

    (Figure: average turnaround time plotted against time quanta 1-7.)

    Multilevel Queue

    Ready queue is partitioned into separate queues:

    Foreground (interactive)

    Background (batch)

    Each queue has its own scheduling algorithm,

    Foreground RR

    Background FCFS

    Scheduling must be done between the queues.

    Fixed priority scheduling (i.e., serve all from foreground, then from background): possibility

    of starvation.

    Time slice: each queue gets a certain amount of CPU time which it can schedule amongst

    its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

    Multilevel Queue Scheduling – example queues, from highest to lowest priority:

    Interactive processes
    Interactive editing processes
    Batch processes
    Student processes


    Multilevel Feedback Queue

    A process can move between the various queues; aging can be implemented this way.

    Multilevel-feedback-queue scheduler defined by the following parameters:

    number of queues

    scheduling algorithms for each queue

    method used to determine when to upgrade a process

    method used to determine when to demote a process

    method used to determine which queue a process will enter when that process needs service

    Example of Multilevel Feedback Queue

    Three queues:

    Q0 time quantum 8 milliseconds

    Q1 time quantum 16 milliseconds

    Q2 FCFS

    Scheduling: A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds.

    If it does not finish in 8 milliseconds, the job is moved to queue Q1.

    At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not

    complete, it is preempted and moved to queue Q2.
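    The three-queue policy can be sketched as a demotion rule (a minimal illustration; the process names and burst lengths below are made up):

    ```python
    from collections import deque

    def mlfq(bursts, quanta=(8, 16)):
        """Two RR levels with quanta 8 and 16 ms, then FCFS; returns finish order."""
        queues = [deque(bursts.items()), deque(), deque()]
        finished = []
        while any(queues):
            # Fixed priority between queues: always serve the highest non-empty one.
            level = next(i for i, q in enumerate(queues) if q)
            name, remaining = queues[level].popleft()
            run = remaining if level == 2 else min(quanta[level], remaining)
            remaining -= run
            if remaining == 0:
                finished.append(name)
            else:
                queues[level + 1].append((name, remaining))  # demote to next queue
        return finished

    print(mlfq({"A": 5, "B": 30, "C": 12}))  # ['A', 'C', 'B']
    ```

    Short jobs (A) finish in Q0, medium jobs (C) in Q1, and only long jobs (B) sink to the FCFS queue – the intended effect of the feedback scheme.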

    Multilevel Feedback Queues

    Multiple-processor Scheduling

    CPU scheduling is more complex when multiple CPUs are available.

    Homogeneous processors within a multiprocessor.

    Load sharing

    Asymmetric multiprocessing only one processor accesses the system data structures, alleviating the need

    for data sharing.

    Real-Time Scheduling

    Hard real-time systems required to complete a critical task within a guaranteed amount of time.

    Soft real-time computing requires that critical processes receive priority over less fortunate ones



    #define BUFFER_SIZE 10

    typedef struct {
        DATA data;
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;    /* location of next input to buffer */
    int out = 0;   /* location of next removal from buffer */

    /* PRODUCER */
    item nextProduced;
    while (1) {
        while (counter == BUFFER_SIZE)
            ;                              /* busy wait: buffer full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

    5. Process Synchronization

    Background

    Concurrent access to shared data may result in data inconsistency.

    Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating

    processes.

    Shared-memory solution to bounded-buffer problem allows at most n - 1 items in buffer at the same time.

    Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and

    incremented each time a new item is added to the buffer

    The Producer Consumer Problem

    A producer process produces information consumed by a consumer process. Here are the variables needed to define the problem.

    Consider the code segment given below: Does it work?

    Are all buffers utilized?

    Producer

    Consumer


    #define BUFFER_SIZE 10

    typedef struct {
        DATA data;
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;
    int out = 0;

    Algorithm 2 – a flag for each process gives its state:

    Each process maintains a flag indicating that it wants to get into the critical section. It checks the

    flag of the other process and doesn't enter the critical section if that other process wants to get in.

    Shared variables:

        boolean flag[2];        /* initially flag[0] = flag[1] = false */

    flag[i] = true means Pi is ready to enter its critical section.

    Are the three critical-section requirements met? (Mutual exclusion holds, but if both processes set their flags at the same time, each waits on the other forever – the progress requirement fails.)

    Algorithm 3 – flag to request entry:

    Each process sets a flag to request entry. Then each process toggles a bit to allow the other in first.

    This code is executed for each process i.

    Shared variables:

        boolean flag[2];        /* initially flag[0] = flag[1] = false */
        int turn;

    flag[i] = true means Pi is ready to enter its critical section.

    Are the three critical-section requirements met? (Yes – this is Peterson's algorithm.)

    Peterson's solution for achieving mutual exclusion:

    #define FALSE 0
    #define TRUE  1
    #define N     2          /* number of processes */

    int turn;                /* whose turn is it? */

    Algorithm 2, for process i (j denotes the other process):

        do {
            flag[i] = true;
            while (flag[j])
                ;                    /* wait while the other process wants in */
            /* critical section */
            flag[i] = false;
            /* remainder section */
        } while (1);

    Algorithm 3 (Peterson), for process i:

        do {
            flag[i] = true;
            turn = j;
            while (flag[j] && turn == j)
                ;
            /* critical section */
            flag[i] = false;
            /* remainder section */
        } while (1);


    int interested[N];                   /* all values initially 0 (FALSE) */

    void enter_region(int process)       /* process is 0 or 1 */
    {
        int other;                       /* number of the other process */
        other = 1 - process;             /* the opposite of process */
        interested[process] = TRUE;      /* show that you are interested */
        turn = process;                  /* set flag */
        while (turn == process && interested[other] == TRUE)
            ;                            /* null statement (busy wait) */
    }

    void leave_region(int process)       /* process: who is leaving */
    {
        interested[process] = FALSE;     /* indicate departure from critical region */
    }
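    The enter_region/leave_region pair can be exercised from Python (a sketch, not the notes' own code; it relies on CPython's effectively sequentially consistent thread scheduling – real C code would also need memory barriers):

    ```python
    import threading

    N = 2
    turn = 0
    interested = [False] * N
    counter = 0                       # shared variable the critical section protects

    def enter_region(process):
        global turn
        other = 1 - process
        interested[process] = True    # show that you are interested
        turn = process
        while turn == process and interested[other]:
            pass                      # busy wait

    def leave_region(process):
        interested[process] = False   # indicate departure from critical region

    def worker(process):
        global counter
        for _ in range(10000):
            enter_region(process)
            counter += 1              # critical section: non-atomic read-modify-write
            leave_region(process)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                    # 20000: no lost updates
    ```

    With mutual exclusion guaranteed, both threads' 10000 increments survive; removing enter_region/leave_region could lose updates.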

    Semaphores:

    PURPOSE:

    We want to be able to write more complex constructs, and so need a language to do so. We thus define semaphores, which we assume are atomic operations.

    As given here, these are not atomic as written in macro code. We define these operations,

    however, to be atomic (protected by a hardware lock).

    FORMAT (mutual exclusion; mutex initialized to 1):

        wait(mutex);
        /* CRITICAL SECTION */
        signal(mutex);
        /* REMAINDER */

    Semaphores can be used to force synchronization (precedence) if the preceding process does a signal at the

    end, and the follower does a wait at the beginning. For example, here we want P1's statement 1 to execute before P2's statement 2:

        P1:                 P2:
        statement 1;        wait(synch);
        signal(synch);      statement 2;

    We don't want to loop on busy waiting, so we suspend instead:

    Block when the semaphore is unavailable; wake up on a signal (the semaphore becomes available).

    There may be numerous processes waiting for the semaphore, so keep a list of blocked

    processes.

    Wake up one of the blocked processes upon getting a signal (the choice of which depends on

    strategy).

    (As busy-waiting macros these would be wait(S): while (S <= 0) ; S--;  and  signal(S): S++;.)

    To PREVENT looping, we redefine the semaphore structure as:

    typedef struct {


        int value;
        struct process *list;   /* linked list of PTBL entries waiting on S */
    } SEMAPHORE;

    It's critical that these be atomic – on uniprocessors we can disable interrupts, but on multiprocessors other

    mechanisms for atomicity are needed. Popular incarnations of semaphores are as event counts and lock managers.

    DEADLOCKS

    May occur when two or more processes try to get the same multiple resources at the same time:

        P1:             P2:
        wait(S);        wait(Q);
        wait(Q);        wait(S);
        ...             ...
        signal(S);      signal(Q);
        signal(Q);      signal(S);

    Classical IPC Problems

    The bounded buffer (producer/consumer) problem using sleep and wakeup.

    #define N 100                /* number of slots in the buffer */
    int count = 0;               /* number of items in the buffer */

    void producer(void)
    {
        int item;
        while (TRUE) {                        /* repeat forever */
            item = produce_item();            /* generate next item */
            if (count == N) sleep();          /* if buffer is full, go to sleep */
            insert_item(item);                /* put item in buffer */
            count = count + 1;                /* increment count of items in buffer */
            if (count == 1) wakeup(consumer); /* was buffer empty? */
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {                        /* repeat forever */
            if (count == 0) sleep();          /* if buffer is empty, go to sleep */
            item = remove_item();             /* take item out of buffer */

    SEMAPHORE s;

    wait(s) {
        s.value = s.value - 1;
        if (s.value < 0) {
            add this process to s.L;
            block;
        }
    }

    signal(s) {
        s.value = s.value + 1;
        if (s.value <= 0) {
            remove a process P from s.L;
            wakeup(P);
        }
    }


            count = count - 1;                    /* decrement count of items in buffer */
            if (count == N - 1) wakeup(producer); /* was buffer full? */
            consume_item(item);                   /* print item */
        }
    }

    The bounded buffer (producer/consumer) problem using semaphores.

    #define N 100                      /* number of slots in the buffer */
    typedef int semaphore;             /* semaphores are a special kind of int */
    semaphore mutex = 1;               /* controls access to critical region */
    semaphore empty = N;               /* counts empty buffer slots */
    semaphore full  = 0;               /* counts full buffer slots */

    void producer(void)
    {
        int item;
        while (TRUE) {                 /* repeat forever */
            item = produce_item();
            down(&empty);
            down(&mutex);
            insert_item(item);
            up(&mutex);
            up(&full);
        }
    }

    void consumer(void)
    {
        int item;
        while (TRUE) {                 /* repeat forever */
            down(&full);
            down(&mutex);
            item = remove_item();
            up(&mutex);
            up(&empty);
            consume_item(item);
        }
    }
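    The same down/up structure maps directly onto counting semaphores in Python's threading module (a sketch; the buffer size and item count are illustrative):

    ```python
    import threading
    from collections import deque

    N = 5
    buffer, results = deque(), []
    mutex = threading.Semaphore(1)   # controls access to critical region
    empty = threading.Semaphore(N)   # counts empty buffer slots
    full  = threading.Semaphore(0)   # counts full buffer slots

    def producer():
        for item in range(20):
            empty.acquire()          # down(&empty)
            with mutex:              # down(&mutex) ... up(&mutex)
                buffer.append(item)
            full.release()           # up(&full)

    def consumer():
        for _ in range(20):
            full.acquire()           # down(&full)
            with mutex:
                results.append(buffer.popleft())
            empty.release()          # up(&empty)

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start(); p.join(); c.join()
    print(results)                   # items arrive in FIFO order: [0, 1, ..., 19]
    ```

    `empty` and `full` handle the bounded-buffer counting while `mutex` guards the queue itself, mirroring the C version above.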

    Critical regions Regions referring to the same shared variable exclude each other in time.

    When a process tries to execute the region statement, the Boolean expression B isevaluated. If B is true, statement S is executed. If it is false, the process is delayed until B

    becomes true and no other process is in the region associated with v.

    Example – Bounded Buffer. Shared data:

        struct buffer {
            int pool[n];
            int count, in, out;
        };

    Bounded-buffer producer process – inserts nextp into the shared buffer:

        region buffer when (count < n) {


            pool[in] = nextp;
            in = (in + 1) % n;
            count++;
        }

    Bounded-buffer consumer process – removes an item from the shared buffer and puts it in nextc:

        region buffer when (count > 0) {
            nextc = pool[out];
            out = (out + 1) % n;
            count--;
        }

    To implement region x when B do S, associate with the shared variable x the following variables:

        semaphore mutex, first-delay, second-delay;
        int first-count, second-count;

    Mutually exclusive access to the critical section is provided by mutex.

    If a process cannot enter the critical section because the Boolean expression B is false, it initially waits on the first-delay semaphore and is moved to the second-delay semaphore before it

    is allowed to reevaluate B.

    Implementation: keep track of the number of processes waiting on first-delay and second-delay with

    first-count and second-count, respectively.

    The algorithm assumes a FIFO ordering in the queuing of processes for a semaphore.

    For an arbitrary queuing discipline, a more complicated implementation is required.

    Monitors

    High-level synchronization construct that allows the safe sharing of an abstract data

    type among concurrent processes.

    Monitor is a software module.

    Chief characteristics

    o Local data variables are accessible only by the monitor.

    o Process enters monitor by invoking one of its procedures.

    o Only one process may be executing in the monitor at a time.

    (Figure: a monitor module contains local data, condition variables, procedures 1 through N, and initialization code.)


    Syntax:

        monitor monitor-name
        {
            shared variable declarations

            procedure body P1 (...) { ... }
            procedure body P2 (...) { ... }
            procedure body Pn (...) { ... }

            {
                initialization code
            }
        }

    To allow a process to wait within the monitor, a condition variable must be declared, as: condition x, y;

    Condition variable can only be used with the operations wait and signal

    o The operation x.wait();

    Means that the process invoking this operation is suspended until anotherprocess invokes x.signal();

    o The x.signal operation resumes exactly one suspended process. If no process is

    suspended, then the signal operation has no effect.

    Schematic view of a monitor

    (Figure: an entry queue of waiting processes, the shared data, the operations, and the initialization code.)


    Monitor with condition variables

    Dining Philosophers example

        monitor dp
        {
            enum {thinking, hungry, eating} state[5];
            condition self[5];

            void pickup(int i);
            void putdown(int i);
            void test(int i);

            void init() {
                for (int i = 0; i < 5; i++)
                    state[i] = thinking;
            }
        }


        void test(int i) {
            if ((state[(i + 4) % 5] != eating) &&
                (state[i] == hungry) &&
                (state[(i + 1) % 5] != eating)) {
                state[i] = eating;
                self[i].signal();
            }
        }

    Monitor implementation using semaphores. Variables:

        semaphore mutex;      /* initially 1 */
        semaphore next;       /* initially 0 */
        int next-count = 0;

    Each external procedure F will be replaced by:

        wait(mutex);
        ...
        body of F;
        ...
        if (next-count > 0)
            signal(next);
        else
            signal(mutex);

    Mutual exclusion within a monitor is ensured.

    For each condition variable x, we have:

        semaphore x-sem;      /* initially 0 */
        int x-count = 0;

    The operation x.wait can be implemented as:

        x-count++;
        if (next-count > 0)
            signal(next);
        else
            signal(mutex);
        wait(x-sem);
        x-count--;

    The operation x.signal can be implemented as:

        if (x-count > 0) {
            next-count++;
            signal(x-sem);
            wait(next);
            next-count--;
        }

    Conditional-wait construct: x.wait(c);

    o c is an integer expression evaluated when the wait operation is executed.

    o The value of c (a priority number) is stored with the name of the process that is

    suspended.

    o When x.signal is executed, process with smallest associated priority number is

    resumed next.

    Two conditions must be checked to establish the correctness of the system:

    o User processes must always make their calls on the monitor in a correct

    sequence.

    o Must ensure that an uncooperative process does not ignore the mutual-exclusion

    gateway provided by the monitor and try to access the shared resource directly, without using the access protocols.


    6. DEADLOCKS

    System model

    Deadlock characterization

    Methods for handling deadlocks

    Deadlock prevention

    Deadlock avoidance

    Deadlock detection

    Recovery from deadlock

    The Deadlock Problem

    1. A set of blocked processes each holding a resource and waiting to acquire a resource held by

    another process in the set.

    2. Example

    a. System has 2 tape drives.

    b. P1 and P2 each hold one tape drive, and each needs another one.

    3. Example

    a. Semaphores A and B, initialized to 1:

        P0:           P1:
        wait(A);      wait(B);
        wait(B);      wait(A);
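    The circular wait in this example disappears if both processes acquire the resources in the same global order (a Python sketch using locks in place of the semaphores; names are illustrative):

    ```python
    import threading

    A, B = threading.Lock(), threading.Lock()
    log = []

    def worker(name):
        # Both processes acquire in the fixed order A then B,
        # so the circular wait of the P0/P1 example cannot form.
        with A:
            with B:
                log.append(name)

    t0 = threading.Thread(target=worker, args=("P0",))
    t1 = threading.Thread(target=worker, args=("P1",))
    t0.start(); t1.start(); t0.join(); t1.join()
    print(sorted(log))  # ['P0', 'P1'] – both complete, no deadlock
    ```

    Imposing a total order on resource acquisition is one standard way to break the circular-wait condition described below.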

    Bridge crossing example

    Traffic only in one direction

    Each section of a bridge can be viewed as a resource

    If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).

    Several cars may have to be backed up if a deadlock occurs.

    Starvation is possible

    System Model

    Resource types R1, R2, ..., Rm

    CPU cycles, memory space, I/O devices

    Each resource type Ri has Wi instances.

    Each process utilizes a resource as follows:

    request, use, release


    Deadlock Characterization

    Mutual exclusion: only one process at a time can use a resource.

    Hold and wait: a process holding at least one resource is waiting to acquire additional

    resources held by other processes.

    No preemption: a resource can be released only voluntarily by the process holding it, after

    that process has completed its task.

    Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by

    P2, ..., Pn-1 is waiting for a resource that is held by

    Pn, and Pn is waiting for a resource that is held by P0.

    Resource-Allocation Graph

    V is partitioned into two types:

    P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.

    R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

    Request edge – directed edge Pi → Rj

    Assignment edge – directed edge Rj → Pi

    Process: drawn as a circle. Resource type with 4 instances: drawn as a rectangle with four dots.

    Pi requests an instance of Rj: request edge Pi → Rj.

    Pi is holding an instance of Rj: assignment edge Rj → Pi.

    Example of a Resource Allocation Graph

    (Figure: an example resource-allocation graph over processes P1, P2, P3; it did not survive extraction.)


    Resource-Allocation Graph Algorithm

    Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a

    dashed line.

    Claim edge converts to request edge when a process requests a resource.

    When a resource is released by a process, assignment edge reconverts to a claim edge.

    Resources must be claimed a priori in the system.

    Resource-Allocation Graph for Deadlock Avoidance

    Unsafe State in a Resource-Allocation Graph

    Banker's Algorithm

    Multiple instances

    Each process must a priori claim maximum use.

    When a process requests a resource it may have to wait.

    When a process gets all its resources, it must return them in a finite amount of time.

    Data structures for the Banker's Algorithm

    Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

    Max: n×m matrix. If Max[i,j] = k, then process Pi may request at most k instances of

    resource type Rj.

    (Figures: resource-allocation graphs for deadlock avoidance – a safe state and an unsafe state over processes P1, P2 and resources R1, R2; they did not survive extraction.)


    Allocation: n×m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.

    Need: n×m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

    Need[i,j] = Max[i,j] - Allocation[i,j]

    Safety algorithm

    1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
           Work = Available
           Finish[i] = false for i = 1, 2, ..., n
    2. Find an i such that both:
           a. Finish[i] == false
           b. Needi <= Work
       If no such i exists, go to step 4.
    3. Work = Work + Allocationi
       Finish[i] = true
       Go to step 2.
    4. If Finish[i] == true for all i, then the system is in a safe state.
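    The safety algorithm translates directly into code (a sketch; the Available/Allocation/Need values below are a common textbook-style example, not taken from these notes):

    ```python
    def is_safe(available, allocation, need):
        """Banker's safety algorithm: return True if a safe sequence exists."""
        n, m = len(allocation), len(available)
        work = list(available)          # Step 1: Work = Available
        finish = [False] * n
        while True:
            # Step 2: find i with Finish[i] == false and Need_i <= Work
            for i in range(n):
                if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                    # Step 3: pretend Pi runs to completion, releasing its resources
                    for j in range(m):
                        work[j] += allocation[i][j]
                    finish[i] = True
                    break
            else:
                break                   # Step 4: no candidate found
        return all(finish)

    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
    print(is_safe([3, 3, 2], allocation, need))  # True: a safe sequence exists
    ```

    With Available = (3, 3, 2) the processes can finish in the order P1, P3, P4, P0, P2, so the state is safe; with nothing available the same matrices would be unsafe.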


    Contiguous allocation:

    1. Main memory usually into two partitions:

    a. Resident operating system. Usually held in low memory with interrupt vector.

    b. User processes then held in high memory.

    2. Single-partition allocation

    a. Relocation-register scheme used to protect user processes from each other, and from

    changing operating system code and data.

    b. Relocation register contains the value of the smallest physical address; the limit register contains the

    range of logical addresses – each logical address must be less than the limit register.

    Hardware support for relocation and limit registers:

    1. Fixed partitioning

    2. Equal-size partitions:

    a. Any process whose size is less than or equal to the partition size can be loaded into

    available partition.

    b. If all partitions are full, the operating system can swap a process out of a partition.

    c. A program may not fit in a partition. The programmer must design the program with

    overlays.

    (Figure: the CPU's logical address is checked against the limit register and then added to the relocation register to form the physical address.)


    3. Main memory use is inefficient. Any program, no matter how small, occupies an entire

    partition. This is called internal fragmentation.

    A) equal-size partitions B) unequal-size partitions

    Placement algorithm with partitions:

    1. Equal-size partitions

    a. Because all partitions are of equal size, it does not matter which partition is used.

    2. Unequal-size partitions

    a. Can assign each process to the smallest partition within which it will fit.

    b. Queue for each partition.

    c. Processes are assigned in such a way as to minimize wasted memory within a

    partition.

    (Figure: example of fixed partitioning of a 64-Mbyte memory – (a) equal-size partitions of 8M each; (b) unequal-size partitions of 16M, 12M, 8M, 6M, 4M, and 2M; each configuration is headed by the resident operating system.)


    a) One process queue per partition b) single process queue

    Dynamic partitioning:

    1. Partitions are of variable length and number

    2. Process is allocated exactly as much memory as required.

    3. Eventually get holes in the memory. This is called external fragmentation.

    4. Must use compaction to shift processes so they are contiguous and all free memory is in one

    block.

    The effect of dynamic partitioning

    Dynamic partitioning placement algorithm:

    1. Operating system must decide which free block to allocate to a process.

    2. Best-fit algorithm

    a. Chooses the block that is closest in size to the request.

    b. Worst performer overall.

    c. Since the smallest block is found for the process, the smallest amount of fragmentation is

    left; memory compaction must be done more often.

    3. First-fit algorithm

    a. Fastest

    b. May have many process loaded in the front end of memory that must be searched

    over when trying to find a free block.

    4. Next-fit

    a. More often allocates a block of memory at the end of memory, where the largest block is found.

    b. The largest block of memory is broken up into smaller blocks.

    (Figure: the effect of dynamic partitioning – starting from 56M free below an 8M operating system, Process 1 (20M) and Process 2 (14M) are loaded, leaving progressively smaller free holes.)


    c. Compaction is required to obtain a large block at the end of memory.
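    The placement policies can be contrasted on a free-block list (a sketch; the hole sizes below are illustrative):

    ```python
    def first_fit(free_blocks, request):
        """Return the index of the first block large enough, or None."""
        for i, size in enumerate(free_blocks):
            if size >= request:
                return i
        return None

    def best_fit(free_blocks, request):
        """Return the index of the smallest block large enough, or None."""
        candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
        return min(candidates)[1] if candidates else None

    free = [8, 12, 22, 18, 6, 14, 36]   # hole sizes in MB
    print(first_fit(free, 13))  # 2 – the first hole >= 13M is the 22M one
    print(best_fit(free, 13))   # 5 – the 14M hole leaves the least fragmentation
    ```

    First fit stops at the first workable hole (fast); best fit scans the whole list for the tightest hole, which minimizes the leftover fragment but tends to create many tiny, unusable holes.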

    (Figure: a memory configuration with free blocks of 8M, 12M, 22M, 18M, 8M, 14M, and 36M, used to compare the placement algorithms.)

    BUDDY SYSTEM:

    1. The entire space available is treated as a single block of size 2^U.

    2. If a request of size s is such that 2^(U-1) < s <= 2^U, the entire block is allocated. Otherwise, the block is split into two equal buddies; splitting continues until the smallest block greater than or equal to s is generated.


    Example of buddy system (1-Mbyte initial block): requests carve the space into power-of-two blocks, e.g. A=128K | C=64K | 64K free | 256K blocks for B and D. Releasing a block whose buddy is free coalesces the pair: after releasing E a free 512K block reappears next to D=256K and a free 256K, and releasing D finally restores the full 1M block.
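    The split rule amounts to rounding each request up to the next power of two (a minimal sketch that ignores coalescing on release):

    ```python
    def buddy_size(request_kb):
        """Smallest power-of-two block (in KB) that satisfies the request."""
        size = 1
        while size < request_kb:
            size *= 2      # conceptually: keep halving the big block until it fits
        return size

    # A 100K request in a 1M (1024K) space: split 1024 -> 512 -> 256 -> 128
    print(buddy_size(100))  # 128
    print(buddy_size(240))  # 256
    print(buddy_size(64))   # 64
    ```

    The gap between the request and the power-of-two block (e.g. 28K for a 100K request) is the internal fragmentation the buddy system trades for fast splitting and coalescing.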

    RELOCATION:

    1. When a program is loaded into memory, the actual memory locations are determined.

    2. A process may occupy different partitions, which means different absolute memory

    locations during execution.

    3. Compaction will also cause a program to occupy a different partition which means

    different absolute memory locations.

    Addresses:

    1. Logical

    a. Reference to a memory location independent of the current assignment of data to

    memory.

    b. Translation must be made to the physical address.

    2. Relative

    a. Address expressed as a location relative to some known point.

    3. Physical

    a. The absolute address or actual location in main memory

    (Figure: hardware support for relocation – the process image in memory holds the process control block, program, data, and stack; relative addresses pass through an adder holding the base address and a comparator checking the bounds.)


    Paging:

    1. Logical address space of a process can be noncontiguous; process is allocated physical

    memory whenever the latter is available.

    2. Divide physical memory into fixed-sized blocks called frames.

    3. Divide logical memory into blocks of same size called pages.

    4. Keep track of all free frames.

    5. To run a program of size n pages, need to find n free frames and load program.

    6. Setup a page table to translate logical to physical addresses.

    7. Internal fragmentation.

    Address translation scheme:

    1. Address generated by the CPU is divided into:

    a. Page number (p) – used as an index into a page table which contains the base

    address of each page in physical memory.

    b. Page offset (d) – combined with the base address to define the physical memory

    address that is sent to the memory unit.

    Address translation architecture:

    (Figure: the CPU's logical address (p, d) is translated via the page table to (f, d), which addresses physical memory.)
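    The split into page number and offset is just integer division by the page size (a minimal sketch; the page-table contents here are made up):

    ```python
    PAGE_SIZE = 4096                 # 4 KB pages -> 12 offset bits
    page_table = {0: 5, 1: 3, 2: 7}  # page number -> frame number (illustrative)

    def translate(logical_addr):
        p = logical_addr // PAGE_SIZE    # page number: high-order bits
        d = logical_addr % PAGE_SIZE     # page offset: low-order bits
        f = page_table[p]                # page-table lookup
        return f * PAGE_SIZE + d         # physical address = (f, d)

    print(translate(0))      # page 0, offset 0 -> frame 5 -> 20480
    print(translate(4100))   # page 1, offset 4 -> frame 3 -> 12292
    ```

    With a power-of-two page size the division and modulo reduce to shifting and masking, which is why the hardware can do this split for free.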


    (Figures: paging examples – (1) a four-page logical memory mapped through page-table entries 1, 4, 3, 7 into physical frames; (2) a 16-byte logical memory of bytes a-p in 4-byte pages mapped through page table 5, 6, 1, 2 into a 32-byte physical memory.)


    Implementation of Page Table:

    Page table is kept in main memory.

    Page-table base register (PTBR) points to the page table.

    Page-table length register (PTLR) indicates the size of the page table.

    In this scheme every data/instruction access requires two memory accesses. One for the

    page table and one for the data/instruction.

    The two memory access problem can be solved by the use of a special fast-lookup

    hardware cache called associative memory or translation look-aside buffers (TLBs)

    Associative Memory:

    Associative memory – parallel search:

        Page #   Frame #

    (Figure: free-frame list before and after allocating a new process – the new process's pages 0-3 are loaded into free frames 14, 13, 18, and 20, and the page table records the mapping.)


    Address translation

    If A is in an associative register, get the frame # out.

    Otherwise get the frame # from the page table in memory.

    Paging Hardware with TLB:

    Authorised By

    SANTOSH BHARADWAJ REDDY

    Email: [email protected]

    Engineeringpapers.blogspot.com

    More Papers and Presentations available on above site

    Effective Access Time:

    Associative lookup = ε time units.

    Assume the memory cycle time is 1 microsecond.

    (Figure: paging hardware with TLB – the CPU's logical address (p, d) is first checked against the TLB; on a miss, the page table in memory supplies frame f, and (f, d) addresses physical memory.)


    Hit ratio – percentage of times that a page number is found in the associative registers; the

    ratio is related to the number of associative registers.

    Hit ratio = α

    Effective Access Time (EAT)
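    With hit ratio α and TLB lookup time ε, EAT = (1 + ε)α + (2 + ε)(1 − α): a hit costs one memory access plus the lookup, a miss costs two. Plugging in illustrative numbers (not from the notes):

    ```python
    def eat(epsilon, alpha):
        """Effective access time in microseconds, memory cycle = 1 microsecond.
        TLB hit: 1 memory access + lookup; miss: 2 accesses + lookup."""
        return (1 + epsilon) * alpha + (2 + epsilon) * (1 - alpha)

    # 20-nanosecond TLB lookup (0.02 microsecond), 80% hit ratio:
    print(round(eat(0.02, 0.80), 3))  # 1.22 microseconds
    ```

    Raising the hit ratio to 98% drops the EAT to about 1.04 microseconds, which is why even a small TLB pays off.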

    Memory Protection:

    Memory protection implemented by associating protection bit with each frame

    Valid-invalid bit attached to each entry in the page table:

    Valid indicates that the associated page is in the process's logical address space, and is

    thus a legal page.

    Invalid indicates that the page is not in the process's logical address space.

    Hashed Page Table:

    Inverted Page Table:

    One entry for each real page of memory.

    Entry consists of the virtual address of the page stored in that real memory location; with

    information about the process that owns that page.

    Decreases memory needed to store each page table, but increases time needed to search

    the table when a page reference occurs.

    Use hash table to limit the search to one or at most a few page-table entries.

    EAT = (1 + ε)α + (2 + ε)(1 − α)

    (Figure: hashed page table – the virtual page number p is hashed into a table of chains; each element holds a (virtual page, frame) pair, and the entry matching p yields frame r for physical address (r, d).)


    Inverted Page Table Architecture:

    Shared Pages:

    Shared code

    One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers,

    window systems).

    Shared code must appear in same location in the logical address space of all processes.

    Private code and data:

    Each process keeps a separate copy of the code and data.

    The pages for the private code and data can appear anywhere in the logical address space.

    Shared Pages Examples:

    Segmentation

    (Figure: inverted page table architecture – the CPU issues (pid, p, d); the table is searched for the entry matching (pid, p), and its index i concatenated with d forms the physical address.)

    (Figure: shared-pages example – three processes run the same editor; pages ed 1, ed 2, ed 3 map to frames 3, 4, 6 in every page table, while each process's private data page maps to its own frame – 7, 1, and 2.)


    Memory-management scheme that supports user view of memory.

    A program is a collection of segments. A segment is a logical unit such as:

    main program,

    procedure,

    function,

    method, object,

    local variables, global variables,

    common block,

    stack,

    symbol table, arrays

    Users View of a Program:

    Logical View of Segmentation

    User Space

    Segmentation Architecture:

    Logical address consists of a two-tuple:

        <segment-number, offset>

    Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:

    (Figure: user's view of a program – logical segments such as the main program, subroutine, stack, sqrt, and symbol table scattered through user space.)


    base – contains the starting physical address where the segment resides in memory.

    limit – specifies the length of the segment.

    Segment-table base register (STBR) points to the segment table's location in memory.

    Segment-table length register (STLR) indicates the number of segments used by a program;

    segment number s is legal if s < STLR.

    Relocation: dynamic, by segment table.

    Sharing: shared segments; same segment number.

    Allocation: first fit/best fit; external fragmentation.

    Protection: with each entry in the segment table associate:

        validation bit = 0 means illegal segment

        read/write/execute privileges

    Protection bits are associated with segments; code sharing occurs at the segment level.

    Since segments vary in length, memory allocation is a dynamic storage-allocation problem.

    A segmentation example is shown in the following diagram

    Segmentation Hardware:

    (Figure: the CPU issues (s, d); the segment table supplies the limit and base for segment s; d is compared with the limit (trap on violation) and added to the base to address physical memory.)


    Example of segmentation:

    A program with five segments (subroutine, stack, symbol table, main program, sqrt) and the segment table:

        Segment   Limit   Base
        0         1000    1400
        1          400    6300
        2          400    4300
        3         1100    3200
        4         1000    4700

    Sharing of segments

    Two processes share one copy of the editor (segment 0) while each keeps a private data segment (segment 1):

        Process 1:                        Process 2:
        Segment      Limit   Base         Segment      Limit   Base
        0 (editor)   25286   43062        0 (editor)   25286   43062
        1 (data 1)    4425   68348        1 (data 2)    8850   90003
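    Translating a logical address (s, d) against such a table is a bounds check plus an add (a sketch; the limit/base pairs below are classic textbook example values):

    ```python
    # (limit, base) per segment – classic textbook example values
    segment_table = [(1000, 1400), (400, 6300), (400, 4300), (1100, 3200), (1000, 4700)]

    def translate(s, d):
        """Segmentation hardware: trap if d >= limit, else base + d."""
        limit, base = segment_table[s]
        if d >= limit:
            raise MemoryError("addressing error: offset beyond segment limit")
        return base + d

    print(translate(2, 53))   # segment 2 -> base 4300 + 53 = 4353
    print(translate(3, 852))  # segment 3 -> base 3200 + 852 = 4052
    ```

    A reference such as (0, 1222) would trap, since segment 0 is only 1000 bytes long – this is the per-segment protection described above.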


    Segmentation with Paging – MULTICS

    The MULTICS system solved the problems of external fragmentation and lengthy search times by

    paging the segments.

    The solution differs from pure segmentation in that the segment-table entry contains not the base address

    of the segment, but rather the base address of a page table for this segment.

    Segmentation with Paging Intel 386:

    As shown in the following diagram, the Intel 386 uses segmentation with paging for memory

    management with a two-level paging scheme.
