• Number of processes
• OS switches context from A to B
• main uses fork(), and the child process calls execvp()
• Can the compiler do something bad by adding privileged instructions?
Lecture 3: Scheduling
How to develop a scheduling policy
• What are the key assumptions?
• What metrics are important?
• What basic approaches were used in the earliest computer systems?
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since all jobs arrive at time 0, T_turnaround = T_completion
First In, First Out
• Works well under our assumptions
• Relax “Each job runs for the same amount of time”
• Convoy effect: short jobs queue up behind a long job
[Timelines, 0-120: FIFO with equal-length jobs; FIFO with one long job first (convoy effect)]
Shortest Job First
• SJF would be optimal under these assumptions
• Relax “All jobs arrive at the same time”
[Timelines, 0-120: SJF running A, B, C; B and C arrive after the long job A has already started]
Shortest Time-to-Completion First
• STCF is preemptive, aka PSJF (Preemptive Shortest Job First)
• Relax “Once started, each job runs to completion”
[Timeline, 0-120: STCF preempts A when B and C arrive; B and C run to completion, then A resumes]
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since all jobs arrive at time 0, T_turnaround = T_completion
• Performance: response time
T_response = T_firstrun − T_arrival
Turnaround time or response time?
• FIFO, SJF, or STCF: good turnaround time, poor response time
• Round robin: good response time, poor turnaround time
[Timelines, 0-120: the same jobs scheduled run-to-completion vs. round robin]
Conflicting criteria
• Minimizing response time
  • requires more context switches when there are many processes
  • incurs more scheduling overhead
  • decreases system throughput
  • increases turnaround time
• The scheduling algorithm depends on the nature of the system
  • Batch vs. interactive
• Designing a generic AND efficient scheduler is difficult
Incorporating I/O
• Poor use of resources: the CPU sits idle while a job waits on I/O
• Overlap allows better use of resources
[Timelines, 0-120, CPU and Disk rows: without overlap the CPU idles during A’s disk I/O; with overlap, B runs on the CPU while A’s I/O is in flight]
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
Multi-level feedback queue
• Goal
  • Optimize turnaround time without a priori knowledge of job run-times
  • Optimize response time for interactive users
[MLFQ diagram: queues Q1-Q6 holding jobs A, B, C, and D at different priority levels]
• Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
• Rule 2: If Priority(A) = Priority(B), A & B run in RR.
How to Change Priority
• Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
• Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
• Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
Example
[Example timeline, 0-120, queues Q2/Q1/Q0: long-running A is demoted queue by queue; B enters later at the topmost queue]
Example with I/O
[Example timeline with I/O, 0-120, queues Q2/Q1/Q0: B keeps giving up the CPU before its slice expires and stays at high priority, alternating with low-priority A]
• Problems:
  • Starvation
  • A program can game the scheduler
  • A program may change its behavior over time
Priority Boost
• Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
[Timelines, 0-120, queues Q2/Q1/Q0: without the boost, A starves at Q0 once interactive jobs arrive; with Rule 5, A is periodically moved back to the topmost queue and makes progress]
Gaming the scheduler
[Timelines, 0-120, queues Q2/Q1/Q0: a job that yields just before each slice expires stays at Q2 and dominates the CPU under Rule 4b; with better accounting it is demoted like any other job]
Better Accounting
• Old Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
• Old Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.
• New Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Tuning MLFQ And Other Issues
• How to parameterize?
• The system administrator configures it
• Default values are available: on Solaris, there are
  • 60 queues
  • time slices from 20 milliseconds (highest priority) to a few hundred milliseconds (lowest)
  • priorities boosted around every 1 second or so
• The user provides hints: command-line utility nice
Workload Assumptions
1. Each job runs for the same amount of time.
2. All jobs arrive at the same time.
3. Once started, each job runs to completion.
4. All jobs only use the CPU (i.e., they perform no I/O).
5. The run-time of each job is known.
MLFQ rules
• Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
• Rule 2: If Priority(A) = Priority(B), A & B run in RR.
• Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
• Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
• Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
Scheduling Metrics
• Performance: turnaround time
T_turnaround = T_completion − T_arrival
• Since all jobs arrive at time 0, T_turnaround = T_completion
• Performance: response time
T_response = T_firstrun − T_arrival
• CPU utilization• Throughput• Fairness
A proportional-share or fair-share scheduler
• Each job obtains a certain percentage of CPU time
• Lottery scheduling uses tickets
  • to represent the share of a resource that a process should receive
• If A holds 75 tickets and B holds 25, then A gets 75% and B gets 25% of the CPU (probabilistically)
Winning ticket: 63 85 70 39 76 17 29 41 36 39 10 99 68 83 63 62 43 0 49 49
Scheduled job:  A  B  A  A  B  A  A  A  A  A  A  B  A  B  A  A  A  A  A  A
• Higher priority => more tickets
Lottery Code
int counter = 0;
int winner = getrandom(0, totaltickets);
node_t *current = head;
while (current) {
    counter += current->tickets;
    if (counter > winner)
        break;
    current = current->next;
}
// current is the winner
Lottery Fairness Study
Ticket currency
User A 100 (global currency)-> 500 (A’s currency) to A1
-> 50 (global currency)-> 500 (A’s currency) to A2
-> 50 (global currency)User B 100 (global currency)
-> 10 (B’s currency) to B1-> 100 (global currency)
More on Lottery Scheduling• Ticket transfer• Ticket inflation• Compensation ticket
• How to assign tickets?
• Why not Deterministic?
Stride Scheduling: a deterministic fair-share scheduler
• Deterministic, but requires global state
• What if a new job enters in the middle?
Scheduling
• Workload assumptions
• Metrics
• MLFQ
• Lottery scheduling and stride scheduling
Next
• Work on PA0
• Reading: chapters 12-16
PA0
PA0. 0-1
0. Step 2: `cs-status | head -1 | sed 's/://g'`
   Step 6: cs-console, then (control-@) OR (control-spacebar)
1. .section .data
   .section .text
   .globl zfunction
   zfunction:
       pushl %ebp
       movl %esp, %ebp
       ...
       leave
       ret
Read http://en.wikibooks.org/wiki/X86_Assembly/GAS_Syntax
In C, we count from 0
PA0. 2,3, and 5
2. Try “man end” and see what you can get. Use “kprintf” for output.
3. Read “Print the address of the top of the run-time stack for whichever process you are currently in, right before and right after you get into the printos() function call.” carefully. You can use inline assembly; use ebp and esp.
5. syscallsummary_start() should clear all counts; syscallsummary_stop() should keep all counts.
Others
• https://vcl.ncsu.edu/help/files-data/where-save-my-files
• If you know how to use VirtualBox, feel free to share