
Page 1: Shared Memory Coordination

Shared Memory Coordination

• We will be looking at process coordination using shared memory and busy waiting.
  – So we don't send messages but read and write shared variables.
  – When we need to wait, we loop and don't context switch.
  – This can be wasteful of resources if we must wait a long time.

Page 2: Shared Memory Coordination

Shared Memory Coordination

– Context switching primitives normally use busy waiting in their implementation.

• Mutual Exclusion
  – Consider adding one to a shared variable V.
  – When compiled, on many machines this becomes three instructions:
    • load r1, V
    • add r1, r1+1
    • store r1, V

Page 3: Shared Memory Coordination

Mutual Exclusion

– Assume V is initially 10 and one process begins the 3-instruction sequence.
  • After the first instruction, a context switch occurs to another process.
    – Registers are of course saved.
  • The new process does all three instructions.
  • Context switch back.
    – Registers are of course restored.
  • The first process finishes.
  • V has been incremented twice but has only reached 11.
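Below is a minimal runnable sketch of this lost-update race, assuming POSIX threads: two threads each increment a shared variable with no coordination, and the final value usually falls short of the expected total. All names are illustrative.

    /* Lost-update race: two threads increment V with no mutual exclusion. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000
    static volatile long V = 0;        /* shared; unprotected on purpose */

    static void *incr(void *arg) {
        (void)arg;
        for (long i = 0; i < N; i++)
            V = V + 1;                 /* load, add, store - not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, incr, NULL);
        pthread_create(&t2, NULL, incr, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("V = %ld (expected %d)\n", V, 2 * N);   /* usually < 2*N */
        return 0;
    }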

Page 4: Shared Memory Coordination

Mutual Exclusion

– The problem is that the 3-instruction sequence must be atomic, i.e. it cannot be interleaved with another execution of these instructions.

– That is, one execution excludes the possibility of another. So they must exclude each other, i.e. we must have mutual exclusion.

– This was a race condition.
  • These are hard bugs to find since they are non-deterministic.

– Can in general involve more than two processes.

Page 5: Shared Memory Coordination

Mutual Exclusion

– The portion of code that requires mutual exclusion is often called a critical section.

• One approach is to prevent context switching.
  – We can do this for the kernel of a uniprocessor.
    • Mask interrupts.
  – Not feasible for user-mode processes.
  – Not feasible for multiprocessors.

Page 6: Shared Memory Coordination

Mutual Exclusion

– The Critical Section Problem is to implement:

        loop
            trying-part
            critical-section
            releasing-part
            non-critical-section
        end loop

– So that when many processes execute this, you never have more than one in the critical section.

– That is, you must write the trying-part and the releasing-part.

Page 7: Shared Memory Coordination

Mutual Exclusion

– Trivial solution:
  • Let the releasing-part be simply "halt".

– This shows we need to specify the problem better.

– Additional requirement:
  • Assume that if a process begins execution of its critical section and no other process enters the critical section, then the first process will eventually exit the critical section.

Page 8: Shared Memory Coordination

Mutual Exclusion

• Then the requirement is "If a process is executing its trying part, then some process will eventually enter the critical section".

• Software-only solutions to the CS problem.
  – We assume the existence of atomic loads and stores.
    • Only up to word length (i.e. not a whole page).
  – We start with the case of two processes.
  – Easy if we want tasks to alternate in the CS and we know which one goes first.

Page 9: Shared Memory Coordination

Mutual Exclusion

    shared int turn = 1

    Process 1:                  Process 2:
        loop                        loop
            while (turn == 2)           while (turn == 1)
            CS                          CS
            turn = 2                    turn = 1
            NCS                         NCS

Page 10: Shared Memory Coordination

Mutual Exclusion

– But always alternating does not satisfy the additional requirement above.

– Let NCS for process 1 be an infinite loop (or a halt).

• We will get to a point when process 2 is in its trying part but turn=1 and turn will not change.

• So some process enters its trying part but neither process will enter the CS.

Page 11: Shared Memory Coordination

Mutual Exclusion

• The first solution that worked was discovered by a mathematician named Dekker.
  – Now we will use turn only to resolve disputes.

Page 12: Shared Memory Coordination

Dekker’s Algorithm

    /* Variables are global and shared.  turn is initially 1. */
    for (;;) {   // process 1 - an infinite loop to show it enters
                 // the CS more than once
        p1wants = 1;
        while (p2wants == 1) {
            if (turn == 2) {
                p1wants = 0;
                while (turn == 2) { /* empty loop */ }
                p1wants = 1;
            }
        }
        critical_section();
        turn = 2;
        p1wants = 0;
        noncritical_section();
    }

Page 13: Shared Memory Coordination

Dekker’s Algorithm

    /* Variables are global and shared.  turn is initially 1. */
    for (;;) {   // process 2 - an infinite loop to show it enters
                 // the CS more than once
        p2wants = 1;
        while (p1wants == 1) {
            if (turn == 1) {
                p2wants = 0;
                while (turn == 1) { /* empty loop */ }
                p2wants = 1;
            }
        }
        critical_section();
        turn = 1;
        p2wants = 0;
        noncritical_section();
    }

Page 14: Shared Memory Coordination

Mutual Exclusion

– The winner-to-be just loops waiting for the loser to give up and then goes into the CS.

– The loser-to-be:
  • Gives up.
  • Waits to see that the winner has finished.
  • Starts over (knowing it will win).

– Dijkstra extended Dekker's solution for > 2 processes.

– Others improved the fairness of Dijkstra's algorithm.

Page 15: Shared Memory Coordination

Mutual Exclusion

– These complicated methods were the simplest known until 1981, when Peterson found a much simpler method.

– Keep Dekker's idea of using turn only to resolve disputes, but drop the complicated then-body of the if.

Page 16: Shared Memory Coordination

Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {   // process 1 - an infinite loop to show it enters
                 // the CS more than once
        p1wants = 1;
        turn = 2;
        while (p2wants && turn == 2) { /* empty loop */ }
        critical_section();
        p1wants = 0;
        noncritical_section();
    }

Page 17: Shared Memory Coordination

Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {   // process 2 - an infinite loop to show it enters
                 // the CS more than once
        p2wants = 1;
        turn = 1;
        while (p1wants && turn == 1) { /* empty loop */ }
        critical_section();
        p2wants = 0;
        noncritical_section();
    }
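For readers who want to run Peterson's algorithm, here is a minimal sketch using POSIX threads. On modern hardware plain loads and stores can be reordered, which breaks the algorithm, so the sketch uses C11 sequentially consistent atomics; the names and iteration count are illustrative.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdint.h>

    static atomic_int wants[2];   /* wants[i] = 1: thread i wants the CS */
    static atomic_int turn;       /* used only to resolve disputes       */
    static long counter;          /* the shared variable being protected */

    static void *worker(void *arg) {
        int me = (int)(intptr_t)arg, other = 1 - me;
        for (int i = 0; i < 100000; i++) {
            atomic_store(&wants[me], 1);               /* trying part */
            atomic_store(&turn, other);
            while (atomic_load(&wants[other]) &&
                   atomic_load(&turn) == other)
                ;                                      /* busy wait */
            counter++;                                 /* critical section */
            atomic_store(&wants[me], 0);               /* releasing part */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        pthread_create(&t[0], NULL, worker, (void *)(intptr_t)0);
        pthread_create(&t[1], NULL, worker, (void *)(intptr_t)1);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }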

Page 18: Shared Memory Coordination

Semaphores

– The trying and releasing parts are often called entry and exit, or wait and signal, or down and up, or P and V (the latter are from Dutch words, since Dijkstra was Dutch).

– Let's try to formalize the entry and exit parts.

– To get mutual exclusion we need to ensure that no more than one task can pass through P until a V has occurred. The idea is to keep trying to walk through the gate and, when you succeed, atomically close the gate behind you so that no one else can enter.

Page 19: Shared Memory Coordination

Semaphores

– Definition (not an implementation):
  • Let S be an enumerated type with values closed and open (like a gate).

        P(S) is
            while S = closed
            S ← closed

  • The failed test and the assignment are a single atomic action.

Page 20: Shared Memory Coordination

Semaphores

    P(S) is
      label:
        {[                -- begin atomic part
        if S = open
            S ← closed
        else
        }]                -- end atomic part
            goto label

    V(S) is
        S ← open

Page 21: Shared Memory Coordination

Semaphores

– Note that this P and V (not yet implemented) can be used to solve the critical section problem very easily.

• The entry part is P(S).
• The exit part is V(S).

– Note that Dekker and Peterson do not give us a P and V since each process has a unique entry and a unique exit.

– S is called a (binary) semaphore.

Page 22: Shared Memory Coordination

Semaphores

– To implement binary semaphores we need some help from our hardware friends.

        TestAndSet(X) is     -- X: Boolean, passed in out
            oldx ← X
            X ← true
            return oldx

– Note that the name is a good one: this function tests the value of X and sets it (i.e. sets it to true; to reset is to set it to false).

Page 23: Shared Memory Coordination

Semaphores

– Now P/V for binary semaphores is trivial.
  • S is a Boolean variable (false is open, true is closed).

        P(S) is
            while (TestAndSet(S))    -- spin until we close the open gate

        V(S) is
            S ← false

– This works fine no matter how many processes are involved.
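C11 happens to expose exactly this primitive as atomic_flag_test_and_set. A minimal sketch of the P/V above, with illustrative names:

    #include <stdatomic.h>

    static atomic_flag S = ATOMIC_FLAG_INIT;   /* clear = open, set = closed */

    static void P(atomic_flag *s) {
        while (atomic_flag_test_and_set(s))
            ;                          /* busy wait until the gate opens */
    }

    static void V(atomic_flag *s) {
        atomic_flag_clear(s);          /* reopen the gate */
    }

    /* Usage:  P(&S);  ...critical section...  V(&S); */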

Page 24: Shared Memory Coordination

Counting Semaphores

– Now we want to consider permitting a bounded number of processes into what might be called a semi-critical section.

        loop
            P(S)
            SCS     // at most k processes can be here simultaneously
            V(S)
            NCS

– A semaphore S with this property is called a counting semaphore.
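As an aside, POSIX exposes counting semaphores directly (semaphore.h); unlike the busy-waiting versions in these slides, sem_wait blocks the caller. A short sketch bounding concurrency to a hypothetical K:

    #include <semaphore.h>
    #include <pthread.h>

    #define K 3                 /* at most K threads in the SCS at once */
    static sem_t S;

    static void *worker(void *arg) {
        (void)arg;
        sem_wait(&S);           /* P(S): blocks if K threads are inside */
        /* semi-critical section: at most K threads here at once */
        sem_post(&S);           /* V(S) */
        /* non-critical section */
        return NULL;
    }

    /* In main: sem_init(&S, 0, K); then create the worker threads. */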

Page 25: Shared Memory Coordination

Counting Semaphores

– If k = 1 we get a binary semaphore, so the counting semaphore generalizes the binary semaphore.

– How can we implement a counting semaphore given binary semaphores?
  • S is a nonnegative integer.
  • Initialize S to k, the max number allowed in the SCS.
  • Use k = 1 to get a binary semaphore (hence the name binary).
  • We only ask for:
    – A limit of k in the SCS (the analogue of mutual exclusion).
    – Progress: if a process enters P and fewer than k are in the SCS, some process will enter the SCS.

Page 26: Shared Memory Coordination

Counting Semaphores

• We do not ask for fairness, and don't assume it (for the binary semaphore) either.

    binary semaphore q    // initially open
    binary semaphore r    // initially closed
    integer NS            // might be negative; keeps the value of S

    P(S) is                     V(S) is
        P(q)                        P(q)
        NS--                        NS++
        if NS < 0                   if NS <= 0
            V(q)                        V(r)
            P(r)                    V(q)
        else
            V(q)
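A sketch of this construction in C, reusing the atomic_flag-based P and V from the earlier sketch (the csem names are illustrative; as the slide says, no fairness is asked of it):

    #include <stdatomic.h>

    typedef struct {
        atomic_flag q;     /* binary semaphore, initially open   */
        atomic_flag r;     /* binary semaphore, initially closed */
        int NS;            /* may go negative; the value of S    */
    } csem;

    static void csem_init(csem *s, int k) {
        s->NS = k;
        atomic_flag_clear(&s->q);          /* q open */
        atomic_flag_test_and_set(&s->r);   /* r closed */
    }

    static void csem_P(csem *s) {
        P(&s->q);
        s->NS--;
        if (s->NS < 0) {
            V(&s->q);
            P(&s->r);      /* wait for a V to hand over a slot */
        } else {
            V(&s->q);
        }
    }

    static void csem_V(csem *s) {
        P(&s->q);
        s->NS++;
        if (s->NS <= 0)
            V(&s->r);      /* wake one waiter */
        V(&s->q);
    }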

Page 27: Shared Memory Coordination

Mutual Exclusion

– Try to do mutual exclusion without shared memory.

– Centralized approach
  • Pick a process as a coordinator (a mutual-exclusion server).
  • To get access to the critical section, send a message to the coordinator and await its reply.
  • When you leave the CS, send a message to the coordinator.
  • When the coordinator gets a message requesting the CS, it
    – replies if the CS is free;
    – otherwise enters the requester's name into the waiting queue.

Page 28: Shared Memory Coordination

Mutual Exclusion

• When the coordinator gets a message announcing departure from the CS, it
  – removes the head entry from the list of waiters and replies to it (a sketch of this logic follows).
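A minimal sketch of the coordinator's two message handlers, assuming a hypothetical send_ok() transport primitive and a fixed-size FIFO queue of waiters:

    #include <stdbool.h>

    #define MAXPROC 64

    static int  waiters[MAXPROC];   /* FIFO queue of waiting processes */
    static int  head, tail, nwaiting;
    static bool cs_busy;

    extern void send_ok(int pid);   /* hypothetical: reply "enter the CS" */

    void on_request(int pid) {      /* a process asks to enter the CS */
        if (!cs_busy) {
            cs_busy = true;
            send_ok(pid);           /* CS free: reply immediately */
        } else {                    /* CS busy: queue the requester */
            waiters[tail] = pid;
            tail = (tail + 1) % MAXPROC;
            nwaiting++;
        }
    }

    void on_release(int pid) {      /* a process announces it left the CS */
        (void)pid;
        if (nwaiting > 0) {         /* hand the CS to the next waiter */
            int next = waiters[head];
            head = (head + 1) % MAXPROC;
            nwaiting--;
            send_ok(next);
        } else {
            cs_busy = false;
        }
    }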

• This is the simplest solution and perhaps the best.

– Distributed solution
  • When you want to get into the CS:
    – Send a request message to everyone (except yourself).
      • Include a timestamp (a logical clock!).
    – Wait until you receive OK from everyone.
  • When you receive a request ...

Page 29: Shared Memory Coordination

Mutual Exclusion

• If you are not in the CS and don't want to be, say OK.
• If you are in the CS, put the requester's name on a list.
• If you are not in the CS but want to be:
  – If your timestamp is lower, put the requester's name on the list.
  – If your timestamp is higher, send OK.
• When you leave the CS, send OK to everyone on your list (a sketch of this rule follows).
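A sketch of the receiver's decision rule (this is essentially the Ricart-Agrawala scheme; tie-breaking on equal timestamps is omitted, and send_ok/defer are the same kind of hypothetical stand-ins as in the coordinator sketch):

    #include <stdbool.h>

    extern void send_ok(int pid);    /* hypothetical reply primitive     */
    extern void defer(int pid);      /* hypothetical: put pid on my list */

    static bool in_cs, want_cs;
    static long my_ts;               /* logical timestamp of my request  */

    void on_cs_request(int pid, long ts) {
        if (!in_cs && !want_cs)
            send_ok(pid);            /* not interested: OK immediately   */
        else if (in_cs)
            defer(pid);              /* in the CS: reply after leaving   */
        else if (my_ts < ts)
            defer(pid);              /* I asked first: make them wait    */
        else
            send_ok(pid);            /* they asked first: let them go    */
    }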

– Token-passing solution
  • Form a logical ring.
  • Pass a token around the ring.
  • When you have the token you can enter the CS (hold on to the token until you exit).

Page 30: Shared Memory Coordination

Mutual Exclusion

• Comparison
  – Centralized is best.
  – Distributed is of theoretical interest.
  – Token passing is good if the hardware is ring-based (e.g. a token ring LAN).