Process Management (Thread Scheduling)


  • 7/30/2019 Process Management(Thread Scheduling)

    1/21

    THREADS

    Single and Multithreaded Processes

    A thread is a fundamental unit of CPU utilization that forms the basis of
    multithreaded computer systems.

    Benefits

    Responsiveness

    Resource Sharing

    Economy

    Utilization of MP Architectures

    Multithreaded Server Architecture

    This architecture is comparatively inexpensive, because creating and
    switching among processes is costly.

    User & Kernel Threads

    User Threads: thread management done by a user-level threads library

    Three primary thread libraries: POSIX Pthreads, Windows threads, Java threads

    Kernel Threads: supported directly by the kernel

    Examples: Windows XP/2000, Solaris, Linux

    Threads vs. Processes

    Both threads and processes are methods of parallelizing an application.

    However, processes are independent execution units that contain their own

    state information, use their own address spaces, and only interact with each

    other via interprocess communication mechanisms.

    A single process may contain multiple threads; all threads within a process
    share the same state and the same memory space, and can communicate with
    each other directly, because they share the same variables.

    Applications are typically divided into processes during the design phase.
    Processes, in other words, are an architectural construct. By contrast, a
    thread is a coding construct that does not affect the architecture of an
    application.
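The shared-memory property described above can be shown directly. A minimal sketch, using Python's `threading` module for brevity: two threads append to the same list, something two processes could not do without an interprocess communication mechanism.

```python
import threading

shared = []  # one list, visible to every thread in the process

def worker(tag, n):
    # Each thread writes directly into the shared list -- no IPC needed,
    # because all threads of a process share one address space.
    for i in range(n):
        shared.append((tag, i))

t1 = threading.Thread(target=worker, args=("A", 100))
t2 = threading.Thread(target=worker, args=("B", 100))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))  # both threads' items ended up in the same list: 200
```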

    Advantages of Thread

    A process with multiple threads makes a great server, for example a printer
    server.

    Because threads can share common data, they do not need to use interprocess
    communication. By their very nature, threads can also take advantage of
    multiprocessors.

    Threads are economical in the sense that:

    They only need a stack and storage for registers; therefore, threads are
    cheap to create.

    Threads use very few resources of the operating system in which they are
    working. That is, threads do not need a new address space.

    Context switching is fast when working with threads. The reason is that we
    only have to save and/or restore the PC, SP, and registers.

    But this cheapness does not come for free: the biggest drawback is that
    there is no protection between threads.

    User Level Threads

    User-level threads are implemented in user-level libraries, rather than via
    system calls, so thread switching does not need to call the operating system
    or cause an interrupt to the kernel. In fact, the kernel knows nothing about
    user-level threads and manages them as if they were single-threaded
    processes.

    Advantages:

    The most obvious advantage of this technique is that a user-level threads
    package can be implemented on an operating system that does not support
    threads.

    User-level threads do not require modification to the operating system.

    Simple management: creating a thread, switching between threads, and
    synchronizing between threads can all be done without intervention of the
    kernel.

    Disadvantages:

    There is a lack of coordination between threads and the operating-system
    kernel. Therefore, the process as a whole gets one time slice, irrespective
    of whether it has one thread or 1000 threads within it.

    User-level threads require non-blocking system calls, i.e., a multithreaded
    kernel. Otherwise, the entire process blocks in the kernel, even if there
    are runnable threads left in the process. For example, if one thread causes
    a page fault, the whole process blocks.

    Kernel-Level Threads

    In this method, the kernel knows about and manages the threads.

    Advantages:

    Because the kernel has full knowledge of all threads, the scheduler may
    decide to give more time to a process with a large number of threads than to
    a process with a small number of threads.

    Kernel-level threads are especially good for applications that frequently block.

    Disadvantages:

    Kernel-level threads are comparatively slow and inefficient. For instance,
    thread operations can be hundreds of times slower than those of user-level
    threads.

    Advantages of Threads over Multiple Processes

    Context Switching: threads are very inexpensive to create and destroy. For
    example, they require space to store the PC, the SP, and the general-purpose
    registers, but they do not require space for memory-management information,
    information about open files or I/O devices in use, etc. With so little
    context, it is much faster to switch between threads; in other words, a
    context switch using threads is relatively cheap.

    Sharing: threads allow the sharing of many resources that cannot be shared
    between processes, for example the code section, the data section, and
    operating-system resources such as open files.

    Disadvantages of Threads over Multiple Processes

    Blocking: the major disadvantage is that if the kernel is single-threaded, a
    system call by one thread will block the whole process, and the CPU may be
    idle during the blocking period.

    Security: since there is extensive sharing among threads, there is a
    potential security problem. It is quite possible that one thread overwrites
    the stack of another thread (or damages shared data), although it is very
    unlikely, since threads are meant to cooperate on a single task.

    Application that Benefits from Threads

    A proxy server satisfying the requests of a number of computers on a LAN
    would benefit from a multithreaded process. In general, any program that has
    to do more than one task at a time could benefit from multithreading. For
    example, a program that reads input, processes it, and writes output could
    have three threads, one for each task.

    Application that cannot Benefit from Threads

    Any sequential process that cannot be divided into parallel tasks will not
    benefit from threads, as each would block until the previous one completes.
    For example, a program that displays the time of day would not benefit from
    multiple threads.

    Synchronization

    Background

    Concurrent access to shared data may result in data inconsistency (e.g.,
    due to race conditions).

    Maintaining data consistency requires mechanisms to ensure the orderly
    execution of cooperating processes.

    Suppose that we wanted to provide a solution to the producer-consumer
    problem that fills all the buffers. We can do so by having an integer count
    that keeps track of the number of full buffers.

    Initially, count is set to 0. It is incremented by the producer after
    producing a new buffer and decremented by the consumer after consuming a
    buffer.

    A race condition occurs when multiple processes access and manipulate the
    same data concurrently, and the outcome of the execution depends on the
    particular order in which the accesses take place.
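A hedged sketch of how such a race is avoided in practice: incrementing a shared count is a read-modify-write sequence, so a lock (here Python's `threading.Lock`, standing in for any mutual-exclusion primitive) ensures interleaved updates cannot be lost.

```python
import threading

count = 0
lock = threading.Lock()

def increment(n):
    global count
    for _ in range(n):
        # 'count += 1' is read-modify-write: without the lock, two threads
        # could read the same old value and one update would be lost.
        with lock:
            count += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

print(count)  # deterministic: 40000, every update ran under mutual exclusion
```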

    Critical Sections

    A critical section is a section of code, common to n cooperating processes,
    in which the processes may be accessing common variables.

    A critical-section environment contains:

    Entry Section: code requesting entry into the critical section.

    Critical Section: code in which only one process can execute at any one
    time.

    Exit Section: the end of the critical section, releasing or allowing others
    in.

    Remainder Section: the rest of the code AFTER the critical section.

    Solution to Critical-Section Problem

    Software

    Peterson's Algorithm: based on busy waiting

    Semaphores: general facility provided by operating system

    (e.g., OS/2)

    based on low-level techniques such as busy waiting or hardware

    assistance

    described in more detail below

    Monitors: programming language technique. Key Idea: Only

    one process may be active within the monitor at a time

    Hardware

    Test-and-Set: atomic machine-level instruction

    In computer science, the test-and-set instruction is an instruction used to
    write to a memory location and return its old value as a single atomic
    (i.e., non-interruptible) operation. If multiple processes may access the
    same memory, and a process is currently performing a test-and-set, no other
    process may begin another test-and-set until the first process is done.

    Swap: atomically swaps contents of two words
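Python has no direct test-and-set instruction, so the sketch below simulates one (the atomicity of `test_and_set` is faked with an internal lock, purely for illustration) and then builds a spinlock on it, mirroring the hardware technique described above.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so the spin loops resolve quickly

class SimulatedTAS:
    """Simulates atomic test-and-set: writes True and returns the old value."""
    def __init__(self):
        self._value = False
        self._guard = threading.Lock()  # stands in for hardware atomicity

    def test_and_set(self):
        with self._guard:               # one indivisible step, as on real hardware
            old = self._value
            self._value = True
            return old

    def clear(self):
        self._value = False

flag = SimulatedTAS()
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        while flag.test_and_set():      # entry section: spin (busy-wait) until
            pass                        # we observe the old value False
        counter += 1                    # critical section
        flag.clear()                    # exit section: release the lock word

threads = [threading.Thread(target=worker, args=(500,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 1500: no increments lost
```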

    Peterson's Algorithm

    A simple algorithm that can be run by two processes to ensure mutual
    exclusion for one resource (say, one variable or data structure).

    It does not require any special hardware; it uses busy waiting (a spinlock).
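Under the assumptions above (exactly two processes, busy waiting), Peterson's algorithm can be sketched as follows. Note that on real hardware with relaxed memory ordering, the flag and turn variables would need atomic operations or fences; CPython's interpreter lock papers over that in this illustrative version.

```python
import sys
import threading

sys.setswitchinterval(0.0005)  # switch threads often so busy waits resolve quickly

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0

def peterson_worker(me, iterations):
    global turn, count
    other = 1 - me
    for _ in range(iterations):
        # entry section
        flag[me] = True
        turn = other                        # politely let the other go first
        while flag[other] and turn == other:
            pass                            # busy wait (spinlock)
        # critical section
        count += 1
        # exit section
        flag[me] = False

t0 = threading.Thread(target=peterson_worker, args=(0, 200))
t1 = threading.Thread(target=peterson_worker, args=(1, 200))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # mutual exclusion preserved: 400
```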

    Semaphore

    A variable used for signalling between processes.

    The two main operations on a semaphore are:

    Wait for next process (or acquire)

    Signal to other process (or release)

    A resource such as a shared data structure is protected by a semaphore. You
    must acquire the semaphore before using the resource.

    Deadlock and Starvation

    Deadlock: two or more processes are waiting indefinitely for an event that
    can be caused by only one of the waiting processes.

    Let S and Q be two semaphores initialized to 1:

    P0:              P1:
    wait (S);        wait (Q);
    wait (Q);        wait (S);
    ...              ...
    signal (S);      signal (Q);
    signal (Q);      signal (S);

    Starvation: indefinite blocking. A process may never be removed from the
    semaphore queue in which it is suspended.
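One classic fix for the S/Q deadlock above is to impose a global acquisition order: if every process acquires S before Q, the circular wait cannot form. A minimal sketch, with Python `threading.Semaphore` objects standing in for the OS facility:

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
completed = []

def process(name):
    # Both processes acquire in the SAME order (S, then Q) -- unlike the
    # deadlocking pattern where P1 acquires Q first. No circular wait can form.
    S.acquire()
    Q.acquire()
    completed.append(name)   # work that needs both resources
    Q.release()
    S.release()

p0 = threading.Thread(target=process, args=("P0",))
p1 = threading.Thread(target=process, args=("P1",))
p0.start(); p1.start()
p0.join(); p1.join()
print(sorted(completed))  # both finish; no deadlock
```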

    Bounded Buffer problem

    Producer

    creates data and adds it to the buffer

    does not want to overflow the buffer

    Consumer

    removes data from the buffer (consumes it)

    does not want to get ahead of the producer
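The two constraints above map directly onto counting semaphores: `empty` counts free slots (the producer waits on it) and `full` counts filled slots (the consumer waits on it). A minimal sketch, assuming a buffer of size 5:

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots: producer can't overflow
full = threading.Semaphore(0)             # filled slots: consumer can't run ahead
mutex = threading.Lock()                  # protects the buffer itself
consumed = []

def producer(items):
    for item in items:
        empty.acquire()        # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()         # signal: one more filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal: one more free slot

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # all 20 items, in production order
```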

    Readers-Writers Problem

    A data set is shared among a number of concurrent processes:

    Readers only read the data set; they do not perform any updates.

    Writers can both read and write.

    Problem

    Allow multiple readers to read at the same time.

    Only one writer may access the shared data at a time.
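A common first solution keeps a reader count: the first reader in locks out writers, the last reader out lets them back in, and each writer takes the write lock exclusively. A sketch under those assumptions:

```python
import threading

read_count = 0
read_count_lock = threading.Lock()  # protects read_count itself
write_lock = threading.Lock()       # held by a writer, or by the reader group
data = 0
reads_seen = []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:         # first reader locks out writers
            write_lock.acquire()
    reads_seen.append(data)         # many readers may read here concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:         # last reader lets writers back in
            write_lock.release()

def writer(value):
    global data
    with write_lock:                # exclusive: no readers, no other writers
        data = value

threads = [threading.Thread(target=writer, args=(v,)) for v in (1, 2)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(len(reads_seen), data)
```

Note this variant can starve writers if readers keep arriving; writer-preference versions add a second semaphore to close the door on new readers.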

    Dining-Philosophers Problem

    The dining philosophers problem is summarized as five philosophers

    sitting at a table doing one of two things: eating or thinking. While

    eating, they are not thinking, and while thinking, they are not eating.

    The five philosophers sit at a circular table with a large bowl of

    spaghetti in the center. A fork is placed in between each pair of

    adjacent philosophers, and as such, each philosopher has one fork to

    his left and one fork to his right. As spaghetti is difficult to serve and

    eat with a single fork, it is assumed that a philosopher must eat with

    two forks. Each philosopher can only use the forks on his immediate

    left and immediate right.

    Shared data:

    Bowl of spaghetti (data set)

    Semaphore fork[5], each initialized to 1
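The fork-semaphore setup above deadlocks if all five philosophers grab their left fork at once. One standard remedy, used in this sketch, is resource ordering: each philosopher picks up the lower-numbered fork first, which breaks the circular wait.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Resource ordering: always take the lower-numbered fork first, so the
    # circular wait of the naive solution (everyone holding one fork) is impossible.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        forks[first].acquire()
        forks[second].acquire()
        meals[i] += 1              # eating (requires both forks)
        forks[second].release()
        forks[first].release()     # back to thinking

threads = [threading.Thread(target=philosopher, args=(i, 10)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher managed to eat all 10 rounds
```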