
OPERATING SYSTEM: CHAPTER 3 TASK

Processes and Threads, Symmetric Multiprocessing, Microkernels

BY:

• AMIRUL RAMZANI BIN RADZID

• MUHAMMAD NUR ILHAM BIN IBRAHIM

• MUHD AMIRUDDIN BIN ABAS

PROCESS

INTRODUCTION TO PROCESSES

• Also called a task

• Execution of an individual program

• Can be traced

- list the sequence of instructions that execute

MEANING OF PROCESS

A program in execution.

An instance of a program running on a computer.

The entity that can be assigned to and executed on a processor.

A PROCESS, UNLIKE A STATIC PROGRAM, INCLUDES:

Current value of the Program Counter (PC)

Contents of the processor's registers

Values of the variables

The process stack (pointed to by SP), which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables

A data section that contains global variables

While a program is executing, the process can be characterized by a number of elements.

The information in the preceding list is stored in a data structure, typically called a process control block (Figure 3.1), that is created and managed by the OS.

The key point about the process control block is that it contains sufficient information so that it is possible to interrupt a running process and later resume execution as if the interruption had not occurred.

Process = Program code + Associated data + PCB
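
The exact contents of a PCB are OS-specific, but as a rough illustration, a PCB covering the elements listed above might look like the C sketch below. All field names and sizes here are assumptions for illustration, not any real kernel's layout.

/* Illustrative sketch of a process control block (PCB).
 * Field names and sizes are assumptions, not a real kernel's layout. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, EXITED } proc_state_t;

typedef struct pcb {
    int          pid;              /* unique process identifier                    */
    proc_state_t state;            /* current process state                        */
    uint64_t     program_counter;  /* saved PC while the process is not running    */
    uint64_t     registers[16];    /* saved contents of the processor's registers  */
    void        *stack_pointer;    /* top of the process stack (SP)                */
    void        *data_section;     /* global variables of the program              */
    int          priority;         /* scheduling information                       */
    struct pcb  *next;             /* link used by OS queues (ready, blocked, ...) */
} pcb_t;

With this information saved, the dispatcher described next can interrupt a process and later resume it as if the interruption had not occurred.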

DISPATCHER

Small program that switches the processor from one process to another.

PROCESS STATES

PROCESS STATES

1) Process Creation

REASONS AND DESCRIPTIONS

New batch job: The OS is provided with a batch job control stream, usually on tape or disk. When the OS is prepared to take on new work, it will read the next sequence of job control commands.

Interactive log-on: A user at a terminal logs on to the system.

Created by OS to provide a service: The OS can create a process to perform a function on behalf of a user program, without the user having to wait (e.g., a process to control printing).

Spawned by existing process: For purposes of modularity or to exploit parallelism, a user program can dictate the creation of a number of processes.

PROCESS STATES

2) Process Termination

REASONS AND DESCRIPTIONS

Normal completion: The process executes an OS service call to indicate that it has completed running.

Time limit exceeded: The process has run longer than the specified total time limit. There are a number of possibilities for the type of time that is measured. These include total elapsed time ("wall clock time"), amount of time spent executing, and, in the case of an interactive process, the amount of time since the user last provided any input.

Memory unavailable: The process requires more memory than the system can provide.

Bounds violation: The process tries to access a memory location that it is not allowed to access.

PROCESS STATES

3) A Two-State Process Model

PROCESS STATES

4) A Five-State Model

PROCESS STATES

5) Suspended Processes

PROCESS DESCRIPTION

Operating System Control Structures

An OS maintains tables for managing processes and resources.

These are the tables:
• Memory tables
• I/O tables
• File tables
• Process tables

PROCESS DESCRIPTION

Process Control Structures

PROCESS CONTROL

Modes of Execution
• User mode
• System mode, control mode, or kernel mode

Process Creation

When a new process is to be created, the OS:

1) Assigns a unique process identifier to the new process
2) Allocates space for the process
3) Initializes the process control block
4) Sets the appropriate linkages
5) Creates or expands other data structures

(A user-program view of process creation is sketched below.)
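
The five steps above are internal to the OS. From an application's point of view on a POSIX system, process creation is typically requested with fork(); the following is a minimal illustration of that request and of normal termination.

/* Minimal POSIX sketch: asking the OS to create a new process with fork().
 * Internally the OS performs the five steps listed above (new PID, memory,
 * PCB initialization, linkages, other data structures). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();            /* request creation of a child process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        printf("child: my pid is %d\n", (int)getpid());
        _exit(0);                  /* normal completion (process termination) */
    }
    waitpid(pid, NULL, 0);         /* parent waits for the child to finish */
    printf("parent: child %d has terminated\n", (int)pid);
    return 0;
}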

Process Switching
• Interrupt
• Trap
• Supervisor call

THREAD

INTRODUCTION TO THREADS

Basic unit of execution

Single sequential flow of control within a program

Threads are bound to a single process

Each process may have multiple threads of control within it.

DIFFERENCE BETWEEN PROCESS AND THREAD

PROCESS: A process is heavyweight, or resource intensive.
THREAD: A thread is lightweight, taking fewer resources than a process.

PROCESS: Process switching needs interaction with the operating system.
THREAD: Thread switching does not need to interact with the operating system.

PROCESS: In multiple-processing environments, each process executes the same code but has its own memory and file resources.
THREAD: All threads can share the same set of open files and child processes.

PROCESS: If one process is blocked, then no other process can execute until the first process is unblocked.
THREAD: While one thread is blocked and waiting, a second thread in the same task can run.

PROCESS: Multiple processes without using threads use more resources.
THREAD: Multithreaded processes use fewer resources.

PROCESS: In multiple processes, each process operates independently of the others.
THREAD: One thread can read, write, or change another thread's data.

(A short example of threads sharing memory follows this comparison.)
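
In the POSIX threads example below, two threads of one process update the same global counter; two separate processes would each see their own private copy instead. This is a minimal illustration only.

/* Two threads of the same process share the global `counter`;
 * a mutex keeps their concurrent updates consistent. */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                                  /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                                       /* both threads update the same variable */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);                   /* prints 200000 */
    return 0;
}

(Build with -pthread.)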

MULTITHREADING

THREAD

Each thread has:
• An execution state (Running, Ready, etc.)
• A saved thread context when not running
• An execution stack
• Some per-thread static storage for local variables
• Access to the memory and resources of its process
  - All threads of a process share this
  - A file opened by one thread is available to the others

THREAD STATES

• Spawn: Typically, when a new process is spawned, a thread for that process is also spawned. Subsequently, a thread within a process may spawn another thread within the same process, providing an instruction pointer and arguments for the new thread. The new thread is provided with its own register context and stack space and placed on the ready queue.

• Block: When a thread needs to wait for an event, it will block (saving its user registers, program counter, and stack pointers). The processor may now turn to the execution of another ready thread in the same or a different process.

• Unblock: When the event for which a thread is blocked occurs, the thread is moved to the Ready queue.

• Finish: When a thread completes, its register context and stacks are deallocated.
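
The four operations above map naturally onto the POSIX threads API. The sketch below is illustrative only: pthread_create for Spawn, a condition variable wait for Block, a signal for Unblock, and pthread_join after the thread function returns for Finish.

/* Spawn   -> pthread_create
 * Block   -> pthread_cond_wait (thread sleeps until the event occurs)
 * Unblock -> pthread_cond_signal (event occurs, waiter becomes Ready again)
 * Finish  -> thread function returns; pthread_join reclaims it. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int event_ready = 0;

static void *waiter(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    while (!event_ready)
        pthread_cond_wait(&c, &m);            /* Block: wait for the event */
    pthread_mutex_unlock(&m);
    printf("waiter: event received, finishing\n");
    return NULL;                              /* Finish */
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);   /* Spawn */

    pthread_mutex_lock(&m);
    event_ready = 1;
    pthread_cond_signal(&c);                  /* Unblock the waiting thread */
    pthread_mutex_unlock(&m);

    pthread_join(t, NULL);                    /* reclaim register context and stack */
    return 0;
}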

THREAD STATES

Figure 4.3 shows a program that performs two remote procedure calls (RPCs) to two different hosts to obtain a combined result.

TYPES OF THREADS

1) User Level Threads
• All thread management is done by the application
• The kernel is not aware of the existence of threads
• The OS only schedules the process, not the threads within the process
• The programmer uses a thread library to manage threads (create, delete, schedule)

1) USER LEVEL THREADS

Advantages
• Thread switching does not require kernel mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application specific in the user-level thread library.
• User-level threads are fast to create and manage.

Disadvantages
• In a typical operating system, most system calls are blocking, so a blocking call made by one thread blocks the entire process.
• A multithreaded application cannot take advantage of multiprocessing.

(A sketch of user-level context switching follows.)
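
The example below uses the POSIX ucontext API (obsolescent, but still available on common Unix-like systems) to switch between two execution contexts entirely in user mode, with no kernel-level scheduling involved; this is roughly what a user-level thread library does internally.

/* User-level context switching: control moves between "main" and a
 * user-level thread without entering the kernel scheduler. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void) {
    printf("user-level thread: running\n");
    swapcontext(&task_ctx, &main_ctx);    /* yield back to main */
    printf("user-level thread: resumed, now finishing\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;     /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);    /* "schedule" the user-level thread */
    printf("main: thread yielded, scheduling it again\n");
    swapcontext(&main_ctx, &task_ctx);    /* resume it until it finishes */
    printf("main: thread finished\n");
    return 0;
}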

TYPES OF THREADS

2) Kernel Level Threads
• All thread management is done by the kernel
• The kernel maintains context information for the process and the threads
• No thread library, but an API to the kernel thread facility
• Switching between threads requires the kernel
• Scheduling is done on a thread basis
• Examples: W2K, Linux, and OS/2

2) KERNEL LEVEL THREADS

Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.

Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

USER-LEVEL AND KERNEL-LEVEL THREADS

Thread Advantages

Threads are memory efficient. Many threads can be contained within a single EXE, while each process incurs the overhead of an entire EXE.

Threads share a common program space, which, among other things, means that messages can be passed by queuing only a pointer to the message. Since processes do not share a common program space, the kernel must either copy the entire message from process A's program space to process B's program space (a tremendous disadvantage for large messages) or provide some mechanism by which process B can access the message.

Thread task switching time is faster, since a thread has less context to save than a process.

With threads, the kernel is linked in with the user code to create a single EXE. This means that all the kernel data structures, like the ready queue, are available for viewing with a debugger. This is not the case with a process, since the process is an autonomous application and the kernel is separate, which makes for a less flexible environment.

Economy: it is more economical to create and context-switch threads.

Utilization of multiprocessor architectures to a greater scale and efficiency.

Thread Disadvantages

Threads are typically not loadable. That is, to add a new thread, you must add the new thread to the source code, then compile and link to create the new executable. Processes are loadable, thus allowing a multi-tasking system to be characterized dynamically. For example, depending upon system conditions, certain processes can be loaded and run to characterize the system. However, the same can be accomplished with threads by linking in all the possible threads required by the system, but only activating those that are needed, given the conditions. The really big advantage of loadability is that the process concept allows processes (applications) to be developed by different companies and offered as tools to be loaded and used by others in their multi-tasking applications.

Threads can walk over the data space of other threads. This cannot happen with processes: if an attempt is made to access another process's memory, an exception error occurs.

Threading Issues

fork() creates a separate duplicate of the calling process, and exec() replaces the entire process with the program specified by its parameter. However, we need to consider what happens in a multithreaded process. exec() works in the same manner, replacing the entire process, including any threads (kernel threads are reclaimed by the kernel); but if a thread calls fork(), should all threads be duplicated, or is the new process single threaded?

Some UNIX systems work around this by having two versions of fork(): one that duplicates all threads and one that duplicates only the calling thread; which version is used depends entirely on the application. If exec() is called immediately after fork(), then duplicating all threads is not necessary, but if exec() is not called, all threads are copied (with the implied OS overhead of copying all the kernel threads as well). A fork-then-exec sketch follows.
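
Because exec() right after fork() makes duplicating the other threads unnecessary, the common pattern looks roughly like this; the program being executed ("ls") is only an illustrative choice.

/* fork-then-exec: the child immediately replaces its process image,
 * so copies of the parent's other threads would be wasted anyway. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                          /* POSIX: the child gets a single thread,
                                                    a replica of the calling thread */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the entire process */
        perror("execlp");                        /* reached only if exec fails */
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    return 0;
}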

SYMMETRIC MULTIPROCESSING


Traditionally, the computer has been viewed as a sequential machine.

• A processor executes instructions one at a time, in sequence
• Each instruction is a sequence of operations

Two popular approaches to providing parallelism:
• Symmetric Multiprocessors (SMPs)
• Clusters (Chapter 16)

CATEGORIES OF COMPUTER SYSTEMS

Single Instruction Single Data (SISD) stream: A single processor executes a single instruction stream to operate on data stored in a single memory.

Single Instruction Multiple Data (SIMD) stream: Each instruction is executed on a different set of data by the different processors.

Multiple Instruction Single Data (MISD) stream (never implemented): A sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence.

Multiple Instruction Multiple Data (MIMD) stream: A set of processors simultaneously executes different instruction sequences on different data sets.

SYMMETRIC MULTIPROCESSING

DEFINITION

• A computer architecture that provides fast performance by making multiple CPUs available to complete individual processes simultaneously.

SYMMETRIC MULTIPROCESSING

HOW IT WORKS?

1. There are multiple processors.
   - Each has access to a shared main memory and the I/O devices.

2. Main memory (MM) operates under a single OS with two or more homogeneous processors.
   - Each processor has its own cache memory (or cache):
     > to speed up MM data access
     > to reduce system bus traffic

3. Processors are interconnected using an interconnection mechanism:
   - buses, crossbar switches, or on-chip mesh networks.


4. The memory is often organized so that multiple simultaneous accesses to separate blocks of memory are possible.

5. SMP systems allow any processor to work on any task, no matter where the data for that task is located in memory.
   - Tasks can easily be moved between processors to balance the workload.
   - The user may view the system like a multiprogramming uniprocessor system.
   - The programmer can construct applications that use multiple processes without regard to whether a single processor or multiple processors will be available.

6. SMP provides all the functionality of a multiprogramming system, plus additional features to accommodate multiple processors.

(A sketch of a process spreading work across all available processors follows.)
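
As a small illustration of point 5, the code below asks how many processors are online and starts one worker thread per processor; on an SMP system the OS is free to run each worker on any of them. _SC_NPROCESSORS_ONLN is widely supported on Unix-like systems but not strictly standard, so treat that name as an assumption.

/* One worker thread per online processor; the OS balances them
 * across the processors of an SMP machine. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    printf("worker %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* number of online processors */
    if (ncpu < 1) ncpu = 1;
    if (ncpu > 64) ncpu = 64;                    /* keep the static array small */

    pthread_t tid[64];
    for (long i = 0; i < ncpu; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (long i = 0; i < ncpu; i++)
        pthread_join(tid[i], NULL);

    printf("%ld processors, %ld workers finished\n", ncpu, ncpu);
    return 0;
}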

SYMMETRIC MULTIPROCESSING

KEY DESIGN

• Simultaneous concurrent processes or threads
  > Kernel routines need to be re-entrant to allow several processors to execute the same kernel code simultaneously.
  > Kernel tables and management structures must be managed properly to avoid deadlock or invalid operations.

• Scheduling
  > Conflicts must be avoided.
  > If kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from the same process on multiple processors.

• Synchronization
  > Effective synchronization must be provided (see the spinlock sketch after this list).

• Memory management
  > Must deal with all the issues found on uniprocessor computers.
  > The OS needs to exploit the available hardware parallelism to achieve the best performance.
  > The paging mechanisms must be coordinated to enforce consistency.

• Reliability and fault tolerance
  > The OS should degrade gracefully in the face of processor failure.
  > It must recognize the loss of a processor and restructure its management tables accordingly.
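
For the synchronization item, a minimal sketch of the kind of primitive an SMP kernel relies on is a spinlock built from a C11 atomic flag. This is illustrative only, not any particular kernel's implementation; the shared kernel structure is simulated by a global counter and the two processors by two threads.

/* A spinlock from a C11 atomic flag: two execution paths that enter the
 * same critical section take turns instead of corrupting shared state. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

typedef struct { atomic_flag locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                                   /* busy-wait until the holder releases it */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

static spinlock_t lock = { ATOMIC_FLAG_INIT };
static long shared = 0;                     /* stands in for a shared kernel structure */

static void *cpu_path(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock(&lock);
        shared++;                           /* critical section stays consistent */
        spin_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, cpu_path, NULL);
    pthread_create(&b, NULL, cpu_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %ld\n", shared);       /* prints 200000 */
    return 0;
}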

SYMMETRIC MULTIPROCESSING

ADVANTAGES

1. In symmetric multiprocessing, any processor can run any type of thread. The processors communicate with each other through shared memory, whereas in ASMP the operating system typically sets aside one or more processors for its exclusive use.

2. SMP systems provide better load balancing and fault tolerance. Because operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. All processors are allowed to run a mixture of application and operating system code, whereas in ASMP the bottleneck problem is much more likely.

3. A processor failure in the SMP model only reduces the computing capacity of the system, whereas in the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.

SYMMETRIC MULTIPROCESSING

DISADVANTAGES

1. SMP systems are inherently more complex than ASMP systems.

2. A tremendous amount of coordination must take place within the operating system to keep everything synchronized, whereas ASMP uses a simpler processing model.

MICROKERNEL


DEFINITION

• A small OS core that provides the foundation for modular extensions.

• The microkernel was popularized by the Mach OS, which is now the core of the Macintosh Mac OS X operating system.

• The philosophy underlying the microkernel is that only absolutely essential core OS functions should be in the kernel.

KERNEL ARCHITECTURE

MICROKERNEL DESIGN

Message Exchange
• It validates messages.
• It passes messages between components.
• It grants access to the hardware.

MICROKERNEL DESIGN

Protection Function
- It prevents message passing unless the exchange is allowed.
- For example:
  1. If an application wishes to open a file, the microkernel sends a message to the file system server.
  2. If an application wishes to create a process or thread, the microkernel sends a message to the process server.

MICROKERNEL DESIGN

Memory Management
- Low-level memory management: mapping each virtual page to a physical page frame (a toy sketch follows).
- Most memory management tasks occur in user space.
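
The toy sketch below shows a page-table entry mapping a virtual page number to a physical page frame, assuming 4 KiB pages. The field names and the translation helper are assumptions for illustration, not any real kernel's data structures.

/* Toy page-table entry and address translation, assuming 4 KiB pages. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t frame_number;   /* physical page frame backing this virtual page */
    bool     present;        /* is the page currently in memory?              */
    bool     writable;       /* protection bit                                */
} pte_t;

/* Translate a virtual address using a flat table indexed by virtual page number. */
bool translate(const pte_t *table, uint64_t vaddr, uint64_t *paddr) {
    const pte_t *e = &table[vaddr >> 12];            /* virtual page number */
    if (!e->present)
        return false;                                /* would raise a page fault */
    *paddr = (e->frame_number << 12) | (vaddr & 0xFFF);
    return true;
}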

MICROKERNEL DESIGN

Interprocess Communication
- Communication between processes or threads in a microkernel OS is via messages.
- A message includes:
  • A header that identifies the sending and receiving processes, and
  • A body that contains direct data, a pointer to a block of data, or some control information about the process.

(A sketch of such a message layout follows.)
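
One possible C layout is sketched below; the names and sizes are illustrative assumptions, not taken from any particular microkernel.

/* Sketch of a microkernel IPC message: a header identifying sender and
 * receiver, plus a body holding one of the three kinds of content. */
#include <stdint.h>

typedef struct {
    int      sender_pid;            /* header: identifies the sending process   */
    int      receiver_pid;          /* header: identifies the receiving process */
    uint32_t type;                  /* header: what kind of request this is     */
} msg_header_t;

typedef struct {
    msg_header_t header;
    union {
        uint8_t  direct_data[64];   /* small payload carried inline             */
        void    *data_block;        /* pointer to a larger block of data        */
        uint32_t control;           /* control information about the process    */
    } body;
} message_t;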

MICROKERNEL DESIGN

I/O and Interrupt Management
- Within a microkernel it is possible to handle hardware interrupts as messages and to include I/O ports in address spaces.
- A particular user-level process is assigned to the interrupt, and the kernel maintains the mapping.

MICROKERNEL - ADVANTAGES

Uniform Interfaces

All services are provided by means of message passing.

Extensibility

Allows the addition of new services.

Flexibility

Not only can new features be added to the OS, but existing features can also be subtracted to produce a smaller, more efficient implementation.

Portability

Intel's near monopoly of many segments of the computer platform market is unlikely to be sustained indefinitely. Thus, portability becomes an attractive feature of an OS. Changes needed to port the system to a new processor are confined to the microkernel and not to the other services.


Reliability

The larger the size of a software product, the more difficult it is to ensure its reliability. Although modular design helps to enhance reliability, even greater gains can be achieved with a microkernel architecture. A small microkernel can be rigorously tested.

Distributed system support

When a message is sent from a client to a server process, the message must include an identifier of the requested service. If a distributed system (e.g., a cluster) is configured so that all processes and services have unique identifiers, then in effect there is a single system image at the microkernel level. A process can send a message without knowing on which computer the target service resides.

 

Support for object-oriented operating systems (OOOSS)

A number of microkernel design efforts are moving in the direction of object orientation.
