CHAPTER 4 - PROCESSES and CHAPTER 5 - THREADS



CGS 3763 - Operating System Concepts

UCF, Spring 2004


Process Concept

• An operating system executes a variety of programs:
  – Batch systems – jobs
  – Time-shared systems – user programs or tasks
• The textbook uses the terms job and process almost interchangeably.
  – For this class, assume a job is a program in executable form waiting to be brought into the computer system
  – A process is a program in execution and includes:
    • Process Control Block
    • Program Counter
  – A process is created when it is assigned memory and a PCB is created (by the OS) to hold its status (see the fork() sketch below)
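On POSIX systems this is what fork() does: it asks the OS to build a new process (new PCB, new copy of the address space). A minimal sketch, assuming a Unix-like system; it is an illustration, not part of the original slides:

    /* Minimal POSIX sketch: fork() asks the OS to create a new process
     * (a new PCB and a copy of the parent's address space). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();          /* OS allocates memory and a PCB for the child */

        if (pid < 0) {               /* fork failed: no new process was created */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {       /* child: a brand-new process in execution */
            printf("child  pid=%d\n", (int)getpid());
        } else {                     /* parent: pid holds the child's process id */
            waitpid(pid, NULL, 0);   /* wait for the child to terminate */
            printf("parent pid=%d reaped child %d\n", (int)getpid(), (int)pid);
        }
        return 0;
    }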


Process States

• As a process executes, it moves through "states" (a small sketch of these transitions follows the list):
  – new: The job is waiting to be turned into a process
    • during process creation, memory is allocated for the job's instruction and data segments and a PCB is populated
  – ready: The process is waiting to be assigned the CPU
  – running: Instructions are being executed by the CPU
    • the number of processes in the running state can be no greater than the number of processors (CPUs) in the system
  – waiting: The process is waiting for some event to occur
    • often associated with explicit requests for I/O operations
  – terminated: The process has finished execution
    • resources assigned to the process are reclaimed
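The five states and the transitions the lecture describes can be captured in a toy C sketch; this mirrors the state diagram only, not any real kernel's code:

    /* Toy sketch of the five process states and the transitions above. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    /* Is 'from' -> 'to' one of the legal transitions in the diagram? */
    static bool legal_transition(proc_state from, proc_state to)
    {
        switch (from) {
        case NEW:     return to == READY;            /* admitted by the long-term scheduler */
        case READY:   return to == RUNNING;          /* dispatched by the CPU scheduler */
        case RUNNING: return to == READY   ||        /* timer interrupt */
                             to == WAITING ||        /* SVC(I/O): waits for an event */
                             to == TERMINATED;       /* SVC(end) or trap/abend */
        case WAITING: return to == READY;            /* I/O completion interrupt */
        default:      return false;                  /* TERMINATED is final */
        }
    }

    int main(void)
    {
        printf("running -> waiting legal? %d\n", legal_transition(RUNNING, WAITING)); /* 1 */
        printf("waiting -> running legal? %d\n", legal_transition(WAITING, RUNNING)); /* 0 */
        return 0;
    }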


Diagram of Process State


Queue or State?

• Some states in the previous diagram are actually queues
  – New or job queue – set of all processes waiting to enter the system
  – Ready queue – set of all processes residing in main memory, ready and waiting to execute
  – Waiting – in this class, we have somewhat abstracted away the idea of a queue for this state; in reality, processes may be placed in a device queue to wait for access to a particular I/O device

• Processes are selected from queued states by schedulers


Process Schedulers

• Short-term scheduler (or CPU scheduler)
  – selects which process in the ready queue should be executed next and allocates the CPU
  – invoked very frequently (milliseconds), so it must be fast
• Long-term scheduler (or job scheduler)
  – selects which processes should be created and brought into the ready queue from the job queue
  – invoked infrequently (seconds, minutes), so it may be slow
  – controls the degree of multiprogramming
• Medium-term scheduler
  – helps manage the process mix by swapping processes in and out


Addition of Medium Term Scheduling


Process Mix

• Processes can be described as either:
  – I/O-bound process
    • spends more time doing I/O than computations
    • many short CPU bursts
  – CPU-bound process
    • spends more time doing computations
    • few very long CPU bursts
• Need to strike a balance between the two.
  – Otherwise either the CPU or the I/O devices are underutilized.


Moving Between States

• Events as well as schedulers can cause a process to move from one state to another
  – Trap – during execution, the process encounters an error; the OS traps the error and may abnormally end (abend) the process, moving it from running to terminated
  – SVC(end) – the process ends voluntarily and moves from running to terminated
  – SVC(I/O) – the process requests that the OS perform some I/O operation; if the I/O is synchronous, the process moves from running to waiting
  – I/O hardware interrupt – signals the completion of some I/O operation for which a process was waiting; the process can be moved from waiting to ready
  – Timer interrupt – signals that a process has used up its current time slice (timesharing systems); the process is returned from running to ready to await its next turn


Resource Allocation

• A process requires certain resources in order to execute
  – Memory, CPU time, I/O devices, etc.
• Resources can be allocated in one of two ways:
  – Static allocation – resources assigned at the start of the process, released at termination
    • Can cause a reduction in throughput
  – Dynamic allocation – resources assigned as needed while the process is running
    • Can cause deadlock
• Resources can be "shared" to reduce conflicts
  – Example: print spooling


Process Control Block (PCB)

• Saves the status of a process (see the struct sketch below)
  – Process state
  – Program counter
  – CPU registers
  – Scheduling information
    • e.g., priority
  – Memory-management information
    • e.g., base and limit registers
  – Accounting information
  – I/O status information
  – Pointer to next PCB
• PCB generated during process creation
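A sketch of what such a PCB might look like in C. Field names and sizes are illustrative assumptions, not taken from any real kernel; the "next" pointer is what lets the OS chain PCBs into the queues shown on the following slides:

    #include <stdint.h>
    #include <stdio.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Illustrative PCB layout; field names are hypothetical. */
    struct pcb {
        int              pid;              /* process identifier */
        enum proc_state  state;            /* process state */
        uint64_t         program_counter;  /* saved program counter */
        uint64_t         registers[16];    /* saved CPU registers */
        int              priority;         /* scheduling information */
        uint64_t         base, limit;      /* memory-management info: base and limit registers */
        uint64_t         cpu_time_used;    /* accounting information */
        int              open_files[16];   /* I/O status information */
        struct pcb      *next;             /* pointer to next PCB (OS chains PCBs into queues) */
    };

    /* Append a PCB to the tail of a queue such as the ready queue. */
    static void enqueue(struct pcb **head, struct pcb *p)
    {
        p->next = NULL;
        while (*head != NULL)
            head = &(*head)->next;
        *head = p;
    }

    int main(void)
    {
        struct pcb a = { .pid = 1, .state = READY }, b = { .pid = 2, .state = READY };
        struct pcb *ready_queue = NULL;

        enqueue(&ready_queue, &a);      /* ready queue: P1 */
        enqueue(&ready_queue, &b);      /* ready queue: P1 -> P2 */

        for (struct pcb *p = ready_queue; p != NULL; p = p->next)
            printf("pid %d waiting for the CPU\n", p->pid);
        return 0;
    }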


Process Control Block (PCB)


PCBs Stored by OS as Linked Lists


CPU Switches From Process to Process


Context Switching

• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process.
• A context switch involves more than one change in the program counter:
  – Process 1 is executing
  – The OS manages the switch
  – Process 2 starts executing
• Context-switch time is overhead; the system does no useful work while switching.
• The time required depends on hardware support (a schematic sketch follows).
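The bookkeeping a switch performs can be sketched in plain C. A real kernel saves and restores CPU state in assembly with interrupts disabled; here the "CPU" is just a global struct, so this is only a schematic, not how any OS actually does it:

    #include <stdio.h>
    #include <string.h>

    enum { READY = 1, RUNNING = 2 };

    struct cpu_context { unsigned long pc; unsigned long regs[4]; };

    struct pcb {
        int pid;
        int state;
        struct cpu_context ctx;             /* saved program counter and registers */
    };

    static struct cpu_context cpu;          /* stand-in for the real processor state */

    static void context_switch(struct pcb *prev, struct pcb *next)
    {
        memcpy(&prev->ctx, &cpu, sizeof cpu);   /* save the old process's state into its PCB */
        prev->state = READY;
        memcpy(&cpu, &next->ctx, sizeof cpu);   /* load the new process's saved state */
        next->state = RUNNING;                  /* no useful user work happened during the switch */
    }

    int main(void)
    {
        struct pcb p1 = { .pid = 1, .state = RUNNING, .ctx = { .pc = 0x1000 } };
        struct pcb p2 = { .pid = 2, .state = READY,   .ctx = { .pc = 0x2000 } };

        cpu = p1.ctx;                 /* P1 is currently executing */
        context_switch(&p1, &p2);     /* e.g., a timer interrupt: OS switches to P2 */
        printf("cpu now at pc=0x%lx (P%d running)\n", cpu.pc, p2.pid);
        return 0;
    }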


Threads (Lightweight Process)

• Used to reduce context-switching overhead
• Allows sharing of instructions, data, files and other resources among several related tasks
• Threads also share a common PCB
• Each thread has its own "thread descriptor":
  – Program counter
  – Register set
  – Stack
• Control of the CPU can be shared among threads associated with the same process without a full-blown context switch.
  – Only a change of PC and registers is required (see the Pthreads sketch below).
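Since POSIX Pthreads come up later in the chapter, here is a minimal Pthreads sketch: two threads of one process share a global array (same address space, same open files), while each thread runs on its own stack with its own registers. Names like "worker" are illustrative:

    /* Compile with: cc demo.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    static int results[2];                 /* shared data: visible to both threads */

    static void *worker(void *arg)
    {
        int id = *(int *)arg;              /* each thread has its own stack and locals */
        results[id] = id * 100;            /* each thread writes its own slot, so no race */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int ids[2] = { 0, 1 };

        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &ids[i]);   /* far cheaper than fork() */
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);                       /* wait for both threads */

        printf("results: %d %d\n", results[0], results[1]);
        return 0;
    }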


Single and Multithreaded Processes


Benefits of Threads

• Responsiveness
  – Faster due to reduced context-switching time
  – A process can continue doing useful work while waiting for some event (isn't blocked)
• Resource sharing (shared memory/code/data)
• Economy
  – Can get more done with the same processor
  – Less memory required
• Utilization of multiprocessor architectures
  – Different threads can run on different processors


Different Types of Threads

• User-level threads
  – Thread management done by a user-level library
  – e.g., POSIX Pthreads, Mach C-threads, Solaris threads
• Kernel-level threads
  – Supported by the kernel
  – e.g., Windows 95/98/NT/2000, Solaris, Linux


Multithreading Models

• Many-to-One
  – Many user-level threads mapped to a single kernel thread
  – Used on systems that do not support kernel threads
• One-to-One
  – Each user-level thread maps to a kernel thread
• Many-to-Many
  – Allows many user-level threads to be mapped to many kernel threads
  – Allows the operating system to create a sufficient number of kernel threads


Many-to-One Model


One-to-one Model


Many-to-Many Model


Cooperating Processes

• Independent processes cannot affect or be affected by the execution of another process.
• Dependent processes can affect or be affected by the execution of another process
  – a.k.a. cooperating processes
• Processes may cooperate for:
  – Information sharing
  – Computation speed-up (requires 2 or more CPUs)
  – Modularity
  – Convenience


Interprocess Communication (IPC)

• Mechanism needed for processes to communicate and to synchronize their actions.
  – Shared memory
    • Tightly coupled systems
    • Single-processor systems allowing overlapping base & limit registers
    • Multi-threaded systems (between threads associated with the same process)
  – Message passing
    • Processes communicate with each other without resorting to shared variables.
    • Uses send and receive operations to pass information (see the pipe sketch below)
    • Better for loosely coupled / distributed systems
  – Both mechanisms can be used on the same system
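A small message-passing sketch using a POSIX pipe (the pipe is an assumed mechanism here; the slides stay abstract): the child "sends" and the parent "receives" without any shared variables:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[64];

        pipe(fd);                                  /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                         /* child: the sender */
            close(fd[0]);
            const char *msg = "hello from child";
            write(fd[1], msg, strlen(msg) + 1);    /* send(parent, message) */
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                              /* parent: the receiver */
        read(fd[0], buf, sizeof buf);              /* receive(child, message): blocks until data arrives */
        close(fd[0]);
        wait(NULL);
        printf("parent received: %s\n", buf);
        return 0;
    }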


Message Passing

• If P and Q wish to communicate, they need to:
  – establish a communication link between them
  – exchange messages via send/receive
• Implementation of the communication link
  – physical (e.g., hardware bus, high-speed network)
  – logical (e.g., direct vs. indirect, other logical properties)
• Implementation questions
  – How are links established?
  – Can a link be associated with more than two processes?
  – How many links can there be between every pair of communicating processes?
  – Does the link support variable- or fixed-size messages?
  – Is a link unidirectional or bi-directional?


Direct Communication

• Processes must name each other explicitly:
  – send(P, message) – send a message to process P
  – receive(Q, message) – receive a message from process Q
• Properties of the communication link
  – Links are established automatically.
  – A link is associated with exactly one pair of communicating processes.
  – Between each pair there exists exactly one link.
  – The link may be unidirectional, but is usually bi-directional.
• Changing the name of a process causes problems
  – All references to the old name must be found and modified
  – Requires re-compilation of the affected programs


Indirect Communication

• Messages are directed to and received from mailboxes (also referred to as ports).
  – Each mailbox has a unique id.
  – Processes can communicate only if they share a mailbox.
• Properties of the communication link
  – A link is established only if the processes share a common mailbox.
  – A link may be associated with many processes.
  – Each pair of processes may share several communication links (requires multiple mailboxes).
  – A link may be unidirectional or bi-directional.
• No names to change, more modular


Indirect Communication Operations

• Create a new mailbox
  – A user/application can create mailboxes through shared memory
  – Otherwise, mailboxes are created by the OS at the request of a user/application process
• Send and receive messages through the mailbox (see the message-queue sketch below)
• Destroy a mailbox
  – A user/application process can destroy any mailbox created in shared memory
  – The OS can destroy a mailbox at the request of the mailbox owner (a user/application process)
  – The OS can destroy unused mailboxes during garbage collection
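These create / send / receive / destroy operations map fairly directly onto POSIX message queues. A sketch assuming a Linux-like system; the mailbox name "/lecture_mbox" is purely illustrative:

    /* Mailbox sketch using a POSIX message queue.  Any process that opens
     * the same name shares the mailbox.  Compile on Linux with: cc mbox.c -lrt */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };  /* bounded capacity */
        char buf[64];

        /* Create (or open) the mailbox; it has its own id, independent of any process. */
        mqd_t mbox = mq_open("/lecture_mbox", O_CREAT | O_RDWR, 0600, &attr);
        if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

        mq_send(mbox, "ping", 5, 0);                   /* send a message to the mailbox */
        mq_receive(mbox, buf, sizeof buf, NULL);       /* receive a message from the mailbox */
        printf("got: %s\n", buf);

        mq_close(mbox);
        mq_unlink("/lecture_mbox");                    /* destroy the mailbox */
        return 0;
    }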


Indirect Communication

• Mailbox sharing
  – P1, P2, and P3 share mailbox A.
  – P1 sends; P2 and P3 receive.
  – Who gets the message?
• Solutions
  – Allow a link to be associated with at most two processes.
  – Allow only one process at a time to execute a receive operation.
  – Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.


Message Synchronization

• Message passing may be either blocking or non-blocking.
• Blocking is considered synchronous
  – The process must wait until the send or receive is completed
  – Blocking send
  – Blocking receive
• Non-blocking is considered asynchronous
  – The process can continue executing while waiting for the send or receive to complete
  – Non-blocking send
  – Non-blocking receive (see the sketch below)
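The difference between a blocking and a non-blocking receive can be seen with a pipe and the O_NONBLOCK flag; this is an illustrative sketch, not something from the slides. With O_NONBLOCK set, read() returns immediately with EAGAIN instead of waiting for a message:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[16];

        pipe(fd);
        fcntl(fd[0], F_SETFL, O_NONBLOCK);           /* make the receive non-blocking */

        ssize_t n = read(fd[0], buf, sizeof buf);    /* nothing has been sent yet */
        if (n < 0 && errno == EAGAIN)
            printf("non-blocking receive: no message, process keeps running\n");

        write(fd[1], "hi", 3);                       /* a send */
        n = read(fd[0], buf, sizeof buf);            /* now a message is available */
        printf("received %zd bytes: %s\n", n, buf);
        return 0;
    }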


Buffering

• Message queues attached to the link can be implemented in one of three ways:
  – Zero capacity – 0 messages
    • Sender must wait for the receiver (rendezvous).
  – Bounded capacity – finite length of n messages
    • Sender must wait if the link's message queue is full.
  – Unbounded capacity – infinite number of messages
    • Sender never waits.