Process Management
Submitted by Digpal Singh Jhala
MSc IT Sem II, 2009-2011
Submitted to Ms. Namita Jain
• A process is a program in execution.
• It executes in a sequential manner: at any time, at most one instruction runs on its behalf.
• A process is more than the program text alone.
• It includes the current activity, as represented by the value of the program counter, and the process state, containing global variables.
• A program by itself is not a process.
• Each process provides the resources needed to execute a program.
• Each process provides the resources needed to execute a program
• A thread is the entity within a process that can be scheduled for execution.
• With preemptive multitasking, the operating system allocates processor time slices to threads and can interrupt a running thread so that every runnable thread gets a turn.
• A job object allows groups of processes to be managed as a unit. Job objects are namable, securable, sharable objects that control attributes of the processes associated with them.
• An application can use a thread pool to reduce the number of application threads and provide management of the worker threads.
• User-mode scheduling (UMS) is a lightweight mechanism that applications can use to schedule their own threads.
• A fiber is a unit of execution that must be manually scheduled by the application. Fibers run in the context of the threads that schedule them.
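The thread-pool bullet above can be sketched in a few lines. This is a minimal illustration of the idea, with Python's standard-library pool standing in for the Windows thread-pool API the bullet describes:

```python
from concurrent.futures import ThreadPoolExecutor

# A sketch of the thread-pool idea: a small fixed set of worker threads
# services many short tasks, so the application never creates one
# thread per task.
def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:   # 2 worker threads
    results = list(pool.map(square, range(5)))    # 5 tasks
print(results)  # [0, 1, 4, 9, 16]
```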
[Diagram: a UNIX process, and threads within a UNIX process]
Consider An Executing Program
• Sometimes we call a program in the system a job or a process
• We sometimes use the terms process and job interchangeably, but they are different
• A job is an activity submitted to the operating system to be done
• A process is an activity being worked on by the system
• Thus {processes} ⊆ {jobs}
Basic Program Execution
• When a program runs, the program counter keeps track of the next instruction to be executed
• Registers keep the values of current variables
– together, the program counter and the variables for a program are the program state
• Stack space in memory is used to keep the values of variables from subroutines that have to be returned to
Process States
• A process may go through a number of different states as it is executed
• When a process requests a resource, for example, it may have to wait for that resource to be granted by the OS
• In addition to I/O, memory, and the like, processes must share the CPU
– processes are “unaware” of each other's CPU usage
– virtualization allows them to share the CPU
The Five State Model
• States: NEW, READY, RUNNING, WAITING, TERMINATED
• Transitions (arc labels show typical events which cause each transition):
– NEW → READY: admitted
– READY → RUNNING: scheduler dispatch
– RUNNING → READY: time-out interrupt
– RUNNING → WAITING: OS service request (I/O or event wait)
– WAITING → READY: I/O or event completion
– RUNNING → TERMINATED: deallocate resources and exit
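The five-state model can be sketched as a small transition table. State names and arc labels follow the diagram; the label on the WAITING → READY arc ("I/O or event completion") is an assumption, since it did not survive extraction:

```python
# The five-state process model as a (state, event) -> next-state table.
TRANSITIONS = {
    ("NEW", "admitted"): "READY",
    ("READY", "scheduler dispatch"): "RUNNING",
    ("RUNNING", "time-out interrupt"): "READY",
    ("RUNNING", "I/O or event wait"): "WAITING",
    ("WAITING", "I/O or event completion"): "READY",  # assumed label
    ("RUNNING", "deallocate resources and exit"): "TERMINATED",
}

def step(state, event):
    """Return the next state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition {event!r} in state {state!r}")

# Walk one process through a typical lifetime.
s = "NEW"
for e in ["admitted", "scheduler dispatch", "I/O or event wait",
          "I/O or event completion", "scheduler dispatch",
          "deallocate resources and exit"]:
    s = step(s, e)
print(s)  # TERMINATED
```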
Which state is my job in?
• On some systems, you can look at the status of your jobs and even others’ jobs
– on UNIX, type ps at the prompt
– note the “nice” command in UNIX
– print jobs on Mac or Windows
• The number of states possible may differ from our model—after all, it is only a model
Context Switching
• Strictly speaking, only one process can run on the CPU at any given time
• To share the CPU, the operating system must save the state of one process, then load another’s program and data
• Hardware has to help with the stopping of programs (time-out interrupt)
• This context switch does no useful work—it is an overhead expense
Process Control Block (PCB)
• A PCB is a small data structure kept for each job submitted to the operating system
• Contents (four major parts):
– unique identification
– process status (i.e., where the process is in the 5-state model)
– process state (i.e., instruction counter, memory map, list of resources allocated, priority)
– other useful information (e.g., usage stats)
• Schedulers use the PCB instead of the entire process (why?)
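The four parts above can be sketched as a small record. This is a toy PCB; the field names are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

# A toy PCB mirroring the four major parts listed above.
@dataclass
class PCB:
    pid: int                      # unique identification
    status: str = "NEW"           # process status: where in the 5-state model
    program_counter: int = 0      # process state: next instruction
    memory_map: dict = field(default_factory=dict)
    resources: list = field(default_factory=list)
    priority: int = 0
    cpu_time_used: int = 0        # other useful information: usage stats

# The scheduler can manipulate this small record instead of the
# entire process image.
pcb = PCB(pid=42, priority=3)
pcb.status = "READY"
print(pcb.pid, pcb.status)  # 42 READY
```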
Threads versus Processes
• A thread is a single path of instruction in a program
– threads share a process!
• All threads in a process share memory space and other resources
• Each thread has its own CPU state (registers, program counter) and stack
• Threads may be scheduled by the process or by the kernel
• Threads are efficient, but lack protection from each other
[Diagram: two threads in one process — each thread has its own PC and stack, while both share one copy of the program code (note only one copy needed) and the shared data.]
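The sharing shown in the diagram can be demonstrated directly: two threads in one process write into the same data structure, while each has its own stack and program counter managed by the runtime. The lock is added precisely because threads lack protection from each other:

```python
import threading

# Two threads inside one process: both see the same shared list.
shared = []
lock = threading.Lock()

def worker(name, n):
    for i in range(n):
        with lock:                # serialize access to the shared data
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("T1", 3))
t2 = threading.Thread(target=worker, args=("T2", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 6: both threads wrote into the same memory
```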
Types of Threads
• User threads
– designed for applications to use
– managed by the application programmer
– may not get used!
• Kernel threads
– managed by the OS
– more overhead
CPU scheduling policies
• Many conflicting goals: maximum number of jobs processed, importance of the jobs, responsiveness to users, how long a job might be forced to wait, I/O utilization, CPU utilization, etc.
• Preemptive versus Nonpreemptive policies
Categories of policies
• Multi-user CPU scheduling policies
– variants of Round Robin (preemptive)
• Batch systems
– First Come, First Served (nonpreemptive)
– Shortest Job First (nonpreemptive)
– Shortest Remaining Time (preemptive)
– Priority Scheduling (preemptive)
Round Robin
• Almost all computers today use some form of time-sharing, even PCs
• Time is divided into CPU quanta (5-100 ms)
• What is a “proper” time quantum size? Two rules of thumb:
– at least 100 times larger than a context switch (if not, context switching takes too much time)
– at least 80% of CPU bursts should run to completion within one quantum (if not, RR degenerates towards FCFS)
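As a quick sanity check of the first rule of thumb (the 50 microsecond context-switch cost below is an assumed figure, not from the slides):

```python
# Rule of thumb: quantum >= 100x the context-switch cost.
context_switch_us = 50                     # assumed switch cost, microseconds
min_quantum_us = 100 * context_switch_us
print(min_quantum_us / 1000, "ms")         # 5.0 ms, at the low end of 5-100 ms
```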
So How Are Processes Scheduled?
• Let’s assume that we know the arrival times of the jobs, and how many CPU cycles each will take; for RR, assume quantum = 2
• Real processors don’t know arrival times and CPU cycles required in advance, so we can’t schedule on this basis!
• Let’s try out some of the algorithms…
Simple Example
Job:        A   B   C   D
Arrives:    0   4   5   7
CPU Cycles: 10  3   5   2
Round Robin (RR), First Stages
A(0-2), A(2-4)

Round Robin, Continued (2)
A(0-2), A(2-4), B(4-6), C(6-8)

Round Robin, Completed
A(0-2), A(2-4), B(4-6), C(6-8), D(8-10), A(10-12), B(12-13), C(13-15), A(15-17), C(17-18), A(18-20)
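The completed chart above can be reproduced with a short simulation. One convention is assumed (inferred from the chart, not stated in the slides): a job arriving at time t enters the front of the ready queue, ahead of any job whose quantum expired at the same instant:

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs: list of (name, arrival, cycles), sorted by arrival.
    Returns the schedule as a list of (name, start, end) slices."""
    remaining = {name: cyc for name, _, cyc in jobs}
    pending = deque(jobs)               # jobs that have not arrived yet
    ready = deque()                     # ready queue of job names
    schedule, t = [], 0

    def admit(now):
        batch = []
        while pending and pending[0][1] <= now:
            batch.append(pending.popleft()[0])
        for name in reversed(batch):
            ready.appendleft(name)      # new arrivals jump to the front

    while pending or ready:
        admit(t)
        if not ready:                   # CPU idle until the next arrival
            t = pending[0][1]
            continue
        job = ready.popleft()
        run = min(quantum, remaining[job])
        schedule.append((job, t, t + run))
        t += run
        remaining[job] -= run
        admit(t)                        # arrivals during/at end of the slice
        if remaining[job] > 0:
            ready.append(job)           # preempted job rejoins at the back
    return schedule

jobs = [("A", 0, 10), ("B", 4, 3), ("C", 5, 5), ("D", 7, 2)]
schedule = round_robin(jobs, quantum=2)
print(schedule[:4])  # [('A', 0, 2), ('A', 2, 4), ('B', 4, 6), ('C', 6, 8)]
```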
Simple Example
Job:        A   B   C   D
Arrives:    0   4   5   7
CPU Cycles: 10  3   5   2
Shortest Remaining Time (SRT) (also preemptive)
A(0-4), B(4-7), D(7-9), C(9-14), A(14-20)
Shortest Job Next (SJN) (non-preemptive)
A(0-10), D(10-12), B(12-15), C(15-20)
First Come, First Served (FCFS) (non-preemptive)
A(0-10), B(10-13), C(13-18), D(18-20)
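The non-preemptive charts above can be checked with a short simulation. This sketch implements SJN for the same job set: whenever the CPU goes idle, it picks the waiting job with the fewest CPU cycles:

```python
def sjn(jobs):
    """Non-preemptive Shortest Job Next.
    jobs: list of (name, arrival, cycles). Returns (name, start, end)."""
    waiting = sorted(jobs, key=lambda j: j[1])   # by arrival time
    schedule, t = [], 0
    while waiting:
        ready = [j for j in waiting if j[1] <= t]
        if not ready:                            # idle until next arrival
            t = waiting[0][1]
            continue
        job = min(ready, key=lambda j: j[2])     # shortest job first
        waiting.remove(job)
        name, _, cycles = job
        schedule.append((name, t, t + cycles))   # runs to completion
        t += cycles
    return schedule

jobs = [("A", 0, 10), ("B", 4, 3), ("C", 5, 5), ("D", 7, 2)]
print(sjn(jobs))  # [('A', 0, 10), ('D', 10, 12), ('B', 12, 15), ('C', 15, 20)]
```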
Turnaround Time
• Turnaround time is the time from job arrival to job completion:
T(ji) = Finish(ji) − Arrive(ji)
(and where might such information be stored?)
• We often want to know what the average turnaround time is for a given schedule
• Let’s calculate the turnaround time for the last schedule…
Calculating Average Turnaround Time for FCFS
Job:        A   B   C   D
Arrives:    0   4   5   7
Finishes:   10  13  18  20
Turnaround: 10  9   13  13

AVERAGE = 11.25
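The table above is just the definition applied job by job; a two-line computation confirms the average:

```python
# Turnaround time for the FCFS schedule: T(j) = Finish(j) - Arrive(j).
arrive = {"A": 0, "B": 4, "C": 5, "D": 7}
finish = {"A": 10, "B": 13, "C": 18, "D": 20}

turnaround = {j: finish[j] - arrive[j] for j in arrive}
average = sum(turnaround.values()) / len(turnaround)
print(turnaround)  # {'A': 10, 'B': 9, 'C': 13, 'D': 13}
print(average)     # 11.25
```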
Simple Example (Extended)
Job:        A   B   C   D
Arrives:    0   4   5   7
CPU Cycles: 10  3   5   2
Priority:   3   1   4   2

(here, higher priority values mean you go first)
Multilevel Priority
• Many different priority levels, 1, 2, 3, …n where 1 is lowest and n is highest
• A process is permanently assigned to one priority level
• A RR scheduler handles processes of equal priority within a level
Multilevel Priority (ML) (preemptive)
A(0-5), C(5-10), A(10-15), D(15-17), B(17-20)

Since each process was in a different queue, we didn’t have to calculate RR within any queue.
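The schedule above can be reproduced with a cycle-by-cycle sketch of preemptive priority scheduling for the extended example: at every cycle, the highest-priority ready job gets the CPU (RR within a level is unnecessary here, since all priorities are distinct):

```python
def preemptive_priority(jobs):
    """jobs: list of (name, arrival, cycles, priority);
    higher priority values run first. Returns (name, start, end) slices."""
    remaining = {n: c for n, a, c, p in jobs}
    arrival = {n: a for n, a, c, p in jobs}
    priority = {n: p for n, a, c, p in jobs}
    t, timeline = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if remaining[n] > 0 and arrival[n] <= t]
        if not ready:
            t += 1
            continue
        job = max(ready, key=lambda n: priority[n])  # highest priority wins
        timeline.append((t, job))
        remaining[job] -= 1
        t += 1
    # merge consecutive cycles of the same job into slices
    schedule = []
    for time, job in timeline:
        if schedule and schedule[-1][0] == job and schedule[-1][2] == time:
            schedule[-1] = (job, schedule[-1][1], time + 1)
        else:
            schedule.append((job, time, time + 1))
    return schedule

jobs = [("A", 0, 10, 3), ("B", 4, 3, 1), ("C", 5, 5, 4), ("D", 7, 2, 2)]
print(preemptive_priority(jobs))
# [('A', 0, 5), ('C', 5, 10), ('A', 10, 15), ('D', 15, 17), ('B', 17, 20)]
```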
Multilevel Feedback (MLF) (preemptive)
• There are several priority levels 1, 2, … n
• Processes start at the highest priority level n
• A maximum time T is associated with each level—if the process uses more than that amount of time, it moves to a lower priority level
• RR is used to schedule within each level
• Top levels should be close to pure RR (for good response time); bottom queues should be close to pure FCFS (for good throughput)
• Both can be achieved if all levels use SRR with varying parameters
Multilevel Feedback Time Limits
• The time limit at each priority level varies as a function of the level:
– T_P = T for P = n
– T_P = 2·T_(P+1) for 1 ≤ P < n
– usually T is set to some small, reasonable amount appropriate for the particular system
• If a process uses up the time available at level 1, it is assumed to be a runaway process and is terminated
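The doubling rule above gives each level twice the time limit of the level above it. A quick computation with example values (n = 4 levels, base T = 2 are assumptions for illustration):

```python
# Time limits per level: the top level (P = n) gets T, and the limit
# doubles at each level below (T_P = 2 * T_(P+1)).
def mlf_limits(n, T):
    limits = {n: T}
    for P in range(n - 1, 0, -1):
        limits[P] = 2 * limits[P + 1]
    return limits

print(mlf_limits(4, 2))  # {4: 2, 3: 4, 2: 8, 1: 16}
```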
Simple Example (Extended)
Job:        A   B   C   D
Arrives:    0   4   5   7
CPU Cycles: 10  3   5   2
Priority:   3   1   4   2

(here, higher priority values mean you go first)
Multilevel Feedback Schedule
Assumptions: T = 2; RR quantum = 2 at each level (in this chart, level 1 is the entry level with the smallest time limit, so the limits are 2, 4, 8 at levels 1, 2, 3)

Segment: A(0-2)  A(2-4)  B(4-6)  C(6-8)  D(8-10)  A(10-12)  B(12-13)  C(13-16)  A(16-20)
Level:   1       2       1       1       1        2         2         2         3
Thank you!