COT 5611 Operating Systems Design Principles Spring 2012 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 5:00-6:00 PM


TRANSCRIPT

Page 1: COT 5611 Operating Systems Design Principles  Spring 2012

COT 5611 Operating Systems Design Principles Spring 2012

Dan C. Marinescu
Office: HEC 304
Office hours: M-Wd 5:00-6:00 PM

Page 2: COT 5611 Operating Systems Design Principles  Spring 2012

Lecture 21

Lecture 21 – Wednesday March 28, 2012

Reading assignment: Chapter 9 from the on-line text

Last time:
- Conditions for thread coordination: Safety, Liveness, Bounded-Wait, Fairness
- Critical sections: a solution to the critical-section problem
- Deadlocks
- Signals
- Semaphores
- Monitors
- Thread coordination with a bounded buffer: WAIT, NOTIFY, AWAIT

3/21/2012 2

Page 3: COT 5611 Operating Systems Design Principles  Spring 2012


Today

Today:
- ADVANCE, SEQUENCE, TICKET
- Events and coordination with events
- Virtual memory and multi-level memory management
- Atomic actions: all-or-nothing and before-or-after atomicity
- Applications of atomicity

Page 4: COT 5611 Operating Systems Design Principles  Spring 2012

Simultaneous conditions for deadlock

Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).

Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Page 5: COT 5611 Operating Systems Design Principles  Spring 2012

Wait for graphs

Processes are represented as nodes, and an edge from thread Ti to thread Tj implies that Tj is holding a resource that Ti needs, and thus Ti is waiting for Tj to release its lock on that resource. A deadlock exists if the graph contains any cycle.
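The cycle test on a wait-for graph can be run mechanically; here is a minimal Python sketch (the dictionary encoding of the graph is our assumption, not from the slides) using a three-color depth-first search:

```python
def has_cycle(graph):
    """Detect a cycle in a wait-for graph given as {thread: [threads it waits for]}.
    A cycle in the graph means the threads on it are deadlocked."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                # node is on the current DFS path
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:    # back edge: a cycle
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)
```

For example, `has_cycle({"T1": ["T2"], "T2": ["T1"]})` reports a deadlock, while a chain with no back edge does not.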

Page 6: COT 5611 Operating Systems Design Principles  Spring 2012

Semaphore: abstract data structure introduced by Dijkstra to reduce the complexity of thread coordination; it has two components:
- C: a count giving the status of the contention for the resource guarded by s
- L: a list of threads waiting for the semaphore s

Two operations:
- P (wait): decrements the value of the semaphore variable by 1. If the value becomes negative, the process executing wait() is blocked, i.e., added to the semaphore's queue.
- V (signal): increments the value of the semaphore variable by 1. If, after the increment, the value is zero or negative, it transfers a blocked process from the semaphore's queue to the ready queue.

Page 7: COT 5611 Operating Systems Design Principles  Spring 2012

Counting and binary semaphores

Counting semaphore: used for pools of resources (multiple units), e.g., cores. If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue. When another process increments the semaphore by performing a V operation and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities, the queue may be ordered by priority, so that the highest-priority process is taken from the queue first.

Binary semaphore: C is either 0 or 1.

Page 8: COT 5611 Operating Systems Design Principles  Spring 2012

The wait and signal operations

P(s) (wait) {
    if s.C > 0 then s.C--;
    else join s.L;
}

V(s) (signal) {
    if s.L is empty then s.C++;
    else release a process from s.L;
}
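The P/V pseudocode above can be sketched in Python; this is a minimal rendering (the class and method names are ours), with a condition variable playing the role of the wait list L:

```python
import threading

class Semaphore:
    """Counting semaphore following the P/V pseudocode above:
    P blocks while the count is zero; V wakes one waiter if any."""
    def __init__(self, count=1):
        self.C = count
        self.cond = threading.Condition()   # stands in for the wait list L

    def P(self):
        with self.cond:
            while self.C == 0:              # no units left: join the wait list
                self.cond.wait()
            self.C -= 1

    def V(self):
        with self.cond:
            self.C += 1
            self.cond.notify()              # release one waiting thread, if any
```

With `count=1` this behaves as a binary semaphore; with a larger initial count it guards a pool of identical units, as on the next slide.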

Page 9: COT 5611 Operating Systems Design Principles  Spring 2012

Monitors. Semaphores can be used incorrectly:
- multiple threads may be allowed to enter the critical section guarded by the semaphore
- they may cause deadlocks
- threads may access the shared data directly, without checking the semaphore.

Solution: encapsulate the shared data together with the access methods that operate on it. A monitor is an abstract data type that allows access to shared data only through specific methods that guarantee mutual exclusion.
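The encapsulation idea can be sketched as a toy monitor in Python (the class is our illustration): the shared datum is private, and every method acquires the monitor lock before touching it, so callers cannot bypass the mutual exclusion:

```python
import threading

class CounterMonitor:
    """Toy monitor: the shared counter is hidden and reachable only
    through methods that hold the monitor lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0              # shared data, private to the monitor

    def increment(self):
        with self._lock:             # mutual exclusion on every access
            self._count += 1

    def value(self):
        with self._lock:
            return self._count

# Four threads hammer the monitor concurrently; no update is lost.
m = CounterMonitor()
threads = [threading.Thread(target=lambda: [m.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Because the only way in is through the locked methods, the final count is exactly 4000, which a bare shared integer would not guarantee.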

Page 10: COT 5611 Operating Systems Design Principles  Spring 2012

Page 11: COT 5611 Operating Systems Design Principles  Spring 2012

Asynchronous events and signals

Signals, or software interrupts, were originally introduced in Unix to notify a process about the occurrence of a particular event in the system.

Signals are analogous to hardware I/O interrupts: when a signal arrives, control abruptly switches to the signal handler; when the handler finishes and returns, control goes back to where it came from.

After receiving a signal, the receiver reacts to it in a well-defined manner; a process can tell the system (OS) what it wants done when a signal arrives:
- Ignore it.
- Catch it. In this case the process must specify (register) the signal-handling procedure. This procedure resides in user space; the kernel calls it during signal handling, and control returns to the kernel when it is done.
- Kill the process (the default for most signals).

Examples: event - child exit, signal - to parent; control signal from the keyboard.
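The catch-it option can be shown in a few lines of Python on a Unix system: we register a handler for SIGUSR1 and then send that signal to ourselves (the `received` list is just our way of observing that the handler ran):

```python
import os
import signal

received = []

def handler(signum, frame):
    # Control jumps here when the signal arrives, then returns to the
    # interrupted code, mirroring a hardware interrupt.
    received.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the handler (catch it)
os.kill(os.getpid(), signal.SIGUSR1)     # send the signal to ourselves
```

Replacing the handler with `signal.SIG_IGN` implements the ignore option; leaving the default (`signal.SIG_DFL`) for most signals terminates the process.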

Page 12: COT 5611 Operating Systems Design Principles  Spring 2012

Signal states and implementation

A signal goes through the following states:
- Signal sent: a process can send a signal to one of the processes in its group (parent, siblings, children, and further descendants).
- Signal delivered: the signal bit is set.
- Signal pending: delivered but not yet received (no action has been taken yet).
- Signal lost: either ignored or overwritten.

Implementation: each process has a kernel-space structure (created by default) called the signal descriptor, with a bit for each signal. Setting a bit delivers the signal; resetting the bit indicates that the signal has been received. A signal can also be blocked/ignored; this requires an additional bit for each signal. Most signals are system-controlled signals.

Page 13: COT 5611 Operating Systems Design Principles  Spring 2012

Page 14: COT 5611 Operating Systems Design Principles  Spring 2012

NOTIFY could be sent before the WAIT, and this causes problems.

The NOTIFY should always be sent after the WAIT. If the sender and the receiver run on two different processors, there could be a race condition for the notempty event.

There is tension between modularity and locks. Several possible solutions: AWAIT/ADVANCE, semaphores, etc.

Page 15: COT 5611 Operating Systems Design Principles  Spring 2012

AWAIT/ADVANCE solution: a new state, WAITING, and two before-or-after actions that take a RUNNING thread into the WAITING state and back to the RUNNABLE state.

An eventcount is a variable with an integer value, shared between the threads and the thread manager; eventcounts are like events but have a value. A thread in the WAITING state waits for a particular value of the eventcount.

AWAIT(eventcount, value):
- If eventcount > value, control is returned to the thread calling AWAIT and this thread continues execution.
- If eventcount ≤ value, the state of the thread calling AWAIT is changed to WAITING and the thread is suspended.

ADVANCE(eventcount):
- increments the eventcount by one, then
- searches the thread_table for threads waiting on this eventcount;
- if it finds such a thread and the eventcount exceeds the value the thread is waiting for, the state of that thread is changed to RUNNABLE.
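A minimal Python sketch of these semantics (our rendering; the thread manager's thread_table is replaced by a condition variable, and `await_` is named with a trailing underscore to avoid the keyword):

```python
import threading

class EventCount:
    """Sketch of AWAIT/ADVANCE on an eventcount."""
    def __init__(self):
        self.count = 0
        self.cond = threading.Condition()

    def await_(self, value):
        # AWAIT: suspend the caller while eventcount <= value
        with self.cond:
            while self.count <= value:
                self.cond.wait()

    def advance(self):
        # ADVANCE: increment the eventcount and wake any thread whose
        # awaited value is now exceeded
        with self.cond:
            self.count += 1
            self.cond.notify_all()
```

In the bounded-buffer use on the following slides, the receiver calls `await_(n)` before consuming item n+1 and the sender calls `advance()` after producing, so a NOTIFY-before-WAIT race cannot lose a wakeup: the value is remembered in the count.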

Page 16: COT 5611 Operating Systems Design Principles  Spring 2012

Page 17: COT 5611 Operating Systems Design Principles  Spring 2012

Implementation of AWAIT and ADVANCE

Page 18: COT 5611 Operating Systems Design Principles  Spring 2012

Page 19: COT 5611 Operating Systems Design Principles  Spring 2012

Solution for a single sender and single receiver

Page 20: COT 5611 Operating Systems Design Principles  Spring 2012

Supporting multiple senders: the sequencer. A sequencer is a shared variable supporting thread sequence coordination; it allows threads to be ordered and is manipulated using two before-or-after actions:

TICKET(sequencer): returns a non-negative value that increases by one at each call. Two concurrent threads calling TICKET on the same sequencer receive different values; depending on the timing of the calls, the one calling first receives the smaller value.

READ(sequencer): returns the current value of the sequencer.
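A sequencer is a few lines of Python (our illustration): TICKET hands out strictly increasing values under a lock, which makes each call a before-or-after action, and READ reports the current value:

```python
import threading

class Sequencer:
    """TICKET/READ sequencer: orders competing threads by ticket number."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def ticket(self):
        with self._lock:            # before-or-after action: no two callers
            t = self._value         # can observe the same value
            self._value += 1
            return t

    def read(self):
        with self._lock:
            return self._value
```

A sender first takes a ticket, then AWAITs its turn, so multiple senders are serialized in ticket order, which is exactly what the modified SEND on the next slide relies on.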

Page 21: COT 5611 Operating Systems Design Principles  Spring 2012

Multiple sender solution; only the SEND must be modified

Page 22: COT 5611 Operating Systems Design Principles  Spring 2012

Virtual memory and multi-level memory management

Recall that there is tension between pipelining and VM management: the page targeted by a load or store instruction may not be in real memory, and we experience a page fault.

Multi-level memory management brings a page from the secondary device (e.g., disk) into a frame in main memory.

Virtual memory management performs dynamic address translation; it maps virtual addresses to physical addresses.

Modular design: separate
- multi-level memory management
- virtual memory management

Page 23: COT 5611 Operating Systems Design Principles  Spring 2012

Name resolution in multi-level memories. We consider pairs of layers:
- the upper level of the pair is the primary device
- the lower level of the pair is the secondary device

The top level is managed by the application, which generates LOAD and STORE instructions to/from CPU registers from/to named memory locations.

The processor issues READs/WRITEs to named memory locations. The name goes to the primary memory device located on the same chip as the processor, which searches the name space of the on-chip cache (the L1 cache, the primary device, with the L2 cache as the secondary device).

If the name is not found in the L1 cache name space, the Multi-Level Memory Manager (MLMM) looks at the L2 cache (off-chip cache), which becomes the primary device, with main memory as the secondary device.

If the name is not found in the L2 cache name space, the MLMM looks at the main memory name space. Now main memory is the primary device.

If the name is not found in the main memory name space, the Virtual Memory Manager is invoked.
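The cascade above can be sketched as a loop over (primary, secondary) pairs; this toy Python model (level labels, the dictionaries, and the `resolve` helper are all our invention) reports which level satisfied a lookup:

```python
def resolve(name, levels):
    """Walk the memory hierarchy from the innermost level outward,
    primary before secondary; a miss everywhere hands the name to
    the Virtual Memory Manager."""
    for label, store in levels:
        if name in store:
            return label, store[name]
    return "VMM", None   # not in main memory either: VMM is invoked

# Each level's name space is a superset of the one above it (inclusion).
levels = [
    ("L1",  {"x": 1}),
    ("L2",  {"x": 1, "y": 2}),
    ("RAM", {"x": 1, "y": 2, "z": 3}),
]
```

For instance, resolving "y" misses in L1, hits in L2, and never consults main memory, matching the slide's description of the MLMM's search order.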

Page 24: COT 5611 Operating Systems Design Principles  Spring 2012

The modular design

The VM manager attempts to translate the virtual memory address into a physical memory address. If the page is not in main memory, the VM manager generates a page-fault exception. The exception handler uses a SEND to send the page number to an MLMM port. The SEND invokes ADVANCE, which wakes up a thread of the MLMM. The MLMM invokes AWAIT on behalf of the thread interrupted by the page fault. The AWAIT releases the processor to the SCHEDULER thread.

Page 25: COT 5611 Operating Systems Design Principles  Spring 2012

[Figure: page-fault handling across application thread 1, the Virtual Memory Manager, the Exception Handler, the Scheduler, the Multi-Level Memory Manager, and application thread 2. Thread 1 translates (PC) into (Page#, Displacement) and asks whether the page is in primary storage. YES: compute the physical address of the instruction and continue. NO: page fault; save the PC, handle the page fault, identify the Page#, SEND(Page#), and issue AWAIT on behalf of thread 1. Thread 1 is now WAITING; the Scheduler loads the PC of thread 2, which becomes RUNNING. The MLMM finds a block in primary storage; if the block's dirty bit is ON it first writes the block to secondary storage, then fetches the block corresponding to the missing page. When the I/O operation completes, ADVANCE makes thread 1 RUNNING again and the PC of thread 1 is reloaded.]

Page 26: COT 5611 Operating Systems Design Principles  Spring 2012

Atomicity: the ability to carry out an action involving multiple steps as an indivisible action, hiding the structure of the action from an external observer.

All-or-nothing atomicity (AONA): to an external observer (e.g., the invoker), an atomic action appears as if it either completed or never took place.

Before-or-after atomicity (BOAA): allows several actions operating on the same resources (e.g., shared data) to act without interfering with one another; to an external observer (e.g., the invoker), the atomic actions appear as if they completed either before or after each other.

Atomicity:
- simplifies the description of the possible states of the system, as it hides the structure of a possibly complex atomic action
- allows us to treat systematically, and with the same strategy, two critical problems in system design and implementation: (a) recovery from failures and (b) coordination of concurrent activities.

Page 27: COT 5611 Operating Systems Design Principles  Spring 2012

Atomicity in computer systems:
1. Hardware: interrupt and exception handling (AONA) + register renaming (BOAA)
2. OS: SVCs (AONA) + non-sharable device (e.g., printer) queues (BOAA)
3. Applications: layered design (AONA) + process coordination (BOAA)
4. Databases: updating records (AONA) + sharing records (BOAA)

Example: exception handling when one of the following events occurs:
- hardware faults
- external events
- program exceptions
- fair-share scheduling
- preemptive scheduling when priorities are involved
- process termination to avoid deadlock
- user-initiated process termination

Register renaming avoids unnecessary serialization of program operations imposed by the reuse of registers by those operations. High-performance CPUs have more physical registers than may be named directly in the instruction set, so they rename registers in hardware to achieve additional parallelism. For example, the two sequences below both reuse r1 and are serialized unless r1 is renamed:

  r1 ← m(1000); r1 ← r1 + 5; m(1000) ← r1
  r1 ← m(2000); r1 ← r1 + 8; m(2000) ← r1

Page 28: COT 5611 Operating Systems Design Principles  Spring 2012

Atomicity in databases and application software. Recovery from system failures and coordination of multiple activities are not possible if actions are not atomic.

Database example: a procedure to transfer an amount from a debit account (A) to a credit account (B):

procedure TRANSFER(debit_account A, credit_account B, amount)
    GET(temp, A)
    temp ← temp - amount
    PUT(temp, A)
    GET(temp, B)
    temp ← temp + amount
    PUT(temp, B)

What if: (a) the system fails after the first PUT; (b) multiple transactions on the same account take place?

Layered application software example: a calendar program with three layers of interpreters: the calendar program, the JVM, and the physical layer.
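The before-or-after half of the TRANSFER problem (case (b), concurrent transactions) can be sketched in Python by making the whole GET/compute/PUT sequence one locked action; account names and amounts are our choices, and case (a), all-or-nothing recovery after a crash, would additionally need a log:

```python
import threading

accounts = {"A": 100, "B": 0}
lock = threading.Lock()

def transfer(debit, credit, amount):
    # The entire GET / compute / PUT sequence is one before-or-after
    # action, so concurrent transfers cannot interleave their reads
    # and writes on the same account.
    with lock:
        temp = accounts[debit]        # GET(temp, A)
        temp -= amount
        accounts[debit] = temp        # PUT(temp, A)
        temp = accounts[credit]       # GET(temp, B)
        temp += amount
        accounts[credit] = temp       # PUT(temp, B)

threads = [threading.Thread(target=transfer, args=("A", "B", 1)) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
```

Without the lock, two transfers could both GET the old balance of A and one update would be lost; with it, money is conserved no matter how the threads are scheduled.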

Page 29: COT 5611 Operating Systems Design Principles  Spring 2012

All-or-nothing atomicity (AONA) is required to:

(1) handle interrupts (e.g., a page fault in the middle of a pipelined instruction). We need to retrofit AONA at the machine-language interface: if every machine instruction is all-or-nothing, then the OS can save, as the next instruction, the one where the page fault occurred. There are additional complications with a user-supplied exception handler.

(2) handle supervisor calls (SVCs); an SVC requires a kernel action to change the PC, the mode bit (from user to kernel), and the code to carry out the required function. The SVC should appear as an extension of the hardware.

Design solutions for a typewriter driver activated by a user-issued SVC, READ:
- Implement the "nothing" option: blocking read; when no input is present, reissue the READ as the next instruction. This solution allows a user to supply their own exception handler.
- Implement the "all" option: non-blocking read; if no input is available, return control to the user program with a zero-length input.

Page 30: COT 5611 Operating Systems Design Principles  Spring 2012

Before-or-after atomicity. Two approaches to concurrent action coordination:
- Sequence coordination, e.g., "action A should occur before B": strict ordering.
- BOAA: the effect of A and B is the same whether A occurs before B or B before A: non-strict ordering. BOAA is more general than sequence coordination.

Example: two transactions operating on account A, each performing a GET and a PUT. There are six possible sequences of actions: (G1, P1, G2, P2), (G2, P2, G1, P1), (G1, G2, P1, P2), (G1, G2, P2, P1), (G2, G1, P1, P2), (G2, G1, P2, P1). Only the first two lead to correct results. Solution: each sequence Gi, Pi should be atomic.

Correctness condition for coordination: every possible result is guaranteed to be the same as if the actions were applied one after another in some order.

Before-or-after atomicity guarantees the correctness of coordination; indeed, it serializes the actions.

Stronger correctness requirements are sometimes necessary:
- External time consistency: e.g., in banking, transactions should be processed in the order they are issued.
- Sequential consistency: e.g., instruction reordering should not affect the result.

Page 31: COT 5611 Operating Systems Design Principles  Spring 2012

Common strategy and side-effects of atomicity

The common strategy for BOAA and AONA: hide the internal structure of a complex action; prevent an external observer from discovering the structure and the implementation of the atomic action.

Atomic actions can have "good" (benevolent) side effects:
- An audit log records the cause of a failure and the recovery steps for later analysis.
- Performance optimization: when adding a record to a file, the data management layer may restructure/reorganize the file to improve access time.
