Chapter 6, Process Synchronization, Overheads, Part 2

Upload: gannon-carlton

Post on 15-Dec-2015




• Part 2, the second half of Chapter 6, consists of these sections:

• 6.7 Monitors
• 6.8 Java Synchronization


6.7 Monitors

• Monitors are an important topic for two reasons:
– As seen, the use of semaphores is fraught with difficulty, so overall, monitors might be a better concept to learn

– Monitors are worth understanding because Java synchronization is ultimately built on this more general construct


• A high level, O-O description of what a monitor is:

• It’s a class with (private) instance variables and (public) methods

• Mutual exclusion is enforced over all of the methods at the same time


• This means that no two threads can be in any of the methods at the same time

• In other words, all of the code belonging to the class is one giant critical section

• Because a monitor has mutual exclusion over all methods, there is no need for separate acquire() and release() methods


• The private instance variables (possibly >1) of a monitor, which in some sense may be thought of as locks or perhaps shared resources, are completely protected by definition

• There is no access to them except through the methods, and all of the methods have mutual exclusion enforced on them
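The O-O description above can be put into Java-like form. The class below is a hypothetical illustration (not from the book): private state, with mutual exclusion enforced across all methods at once.

```java
// A minimal monitor-style class (hypothetical example): private
// instance data, accessible only through methods, and mutual
// exclusion enforced over all of those methods at the same time.
class CounterMonitor {
    private int count = 0;   // protected shared state; no outside access

    // Marking every method synchronized approximates the monitor
    // rule that no two threads run in any method at the same time.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```

Because every access goes through mutually exclusive methods, concurrent increments cannot be lost, and there is no separate acquire()/release() for the caller to get wrong.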


The relationship of monitors to Java

• In Java there is a Monitor class, but that is just something made available in the API

• The monitor concept is a fundamental part of the structure and design of Java

• It is the monitor concept, not the Monitor class, that is the basis for all synchronization in Java


• The Object class in Java is the source of certain monitor (concept) methods that are available to its subclasses

• Java also has a Condition interface which corresponds to what is called a condition variable in the monitor concept

• The condition variable in a monitor is roughly analogous to the lock variable inside a semaphore


The entry set for a monitor

• After one thread has entered one method of a monitor, others may be scheduled and attempt to enter the critical section.

• The monitor has what is known as an entry set.

• It is essentially a scheduling queue for threads that want to get into the critical section.


Condition Variables (or Objects) in Monitors

• A monitor class can have Condition variables declared in it:

• private Condition x, y;

• A monitor class will also have two special methods:

• wait() and signal()


• In order to understand how these methods work, it’s helpful to have a concrete scenario

• Let there be two threads (or processes) P and Q

• Let those threads share a reference to a monitor object, m

• Let the monitor, m, have a condition variable x and a method monitorMethod()


• Both P and Q have the ability to call m.monitorMethod()—but because m is a monitor, only one of P or Q can be running in the code monitorMethod() at a time

• Suppose that the call to m.monitorMethod() was made within the code for Q

• Inside the code for monitorMethod() there may be a call

• x.wait();


• When the thread that was “running in the monitor” calls x.wait(), the thread is suspended

• In this scenario, the thread Q, which was the thread which made the call on the monitor object, m, is suspended

• Once it is suspended, it is no longer in the monitor


• If another thread makes a call to monitorMethod() (or any other monitor method) the new thread will be allowed into the monitor code

• The original, suspended thread, Q, will remain suspended until another thread, such as P, is running monitor code which makes a call such as this:

• x.signal()
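The P/Q scenario can be approximated with the Lock and Condition types that the Java API provides (the Condition interface was mentioned earlier). The names follow the scenario, but the code itself is a hypothetical sketch, not the book's:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A sketch of the monitor m: one lock for mutual exclusion, one
// condition variable x, and a flag standing in for the condition
// being waited on (the flag is my own addition for illustration).
class MonitorSketch {
    private final Lock lock = new ReentrantLock();   // mutual exclusion
    private final Condition x = lock.newCondition(); // condition variable x
    private boolean ready = false;

    // Roughly Q's side: suspend until another thread signals on x.
    public void monitorMethod() throws InterruptedException {
        lock.lock();
        try {
            while (!ready)
                x.await();   // like x.wait(): suspend, leave the monitor
        } finally {
            lock.unlock();
        }
    }

    // Roughly P's side: make the condition true and signal a waiter on x.
    public void signalMethod() {
        lock.lock();
        try {
            ready = true;
            x.signal();      // like x.signal(): resume one waiting thread
        } finally {
            lock.unlock();
        }
    }
}
```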


• In a primitive semaphore, if a resource is not available when a process calls acquire(), the process goes into a spin lock

• The logic of wait() improves on this

• A process can voluntarily step aside by calling x.wait()

• This allows another thread into the protected code


• wait() can be thought of as a tool that makes it possible for a process to take actions which affect its own execution order

• When considering what wait() does, it may be helpful to remember the concept of “politeness” from the Alphonse and Gaston example used earlier to explain concurrency control

• Making a wait() call allows other threads to go first


Monitors and waiting lists

• The thread that made the original x.wait() call voluntarily stepped aside

• It goes into the waiting list

• When another thread makes a call x.signal(), the thread making the signal() call is voluntarily stepping aside, and one thread in the waiting list will be resumed


• Let this scenario be given:

• Thread Q is waiting because it earlier called x.wait()

• Thread P is running and it calls x.signal()

• By definition, only one of P and Q can be running in the monitor at the same time


• What protocol should be used to allow Q to begin running in the monitor instead of P?

• This question is not one that has to be answered by the application programmer

• It is a question that confronts the designer of a particular monitor implementation


• In general, there are two alternatives:

• Signal and wait:

• P signals, and its call to signal() implicitly includes a call to wait(), which allows Q to take its turn immediately.

• After Q finishes, P resumes.


• Signal and continue:

• P signals and continues until it leaves the monitor.

• At that point Q can enter the monitor (or potentially may not, if prevented by some other condition)

• In other words, P does not immediately leave the monitor, and Q does not immediately enter it


• It would be possible to make calls to wait() on any of the potentially many condition variables in a monitor, x, y, etc.

• Each of the condition variables would have its own waiting list.

• These separate waiting lists are distinct from the entry set, which contains the threads waiting to enter the critical section for the first time
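The idea of several condition variables, each with its own waiting list, can be sketched with Java's Lock and Condition types. This is a hypothetical illustration of the concept, not code from the book; the names notFull and notEmpty are my own:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// One monitor lock with two condition variables; each condition
// keeps its own separate waiting list, distinct from the entry set.
class TwoConditionBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // waiters needing space
    private final Condition notEmpty = lock.newCondition(); // waiters needing items
    private final Object[] buffer = new Object[5];
    private int count = 0, in = 0, out = 0;

    public void insert(Object item) throws InterruptedException {
        lock.lock();
        try {
            while (count == buffer.length)
                notFull.await();      // join notFull's own waiting list
            buffer[in] = item;
            in = (in + 1) % buffer.length;
            ++count;
            notEmpty.signal();        // wakes a waiter from notEmpty's list only
        } finally {
            lock.unlock();
        }
    }

    public Object remove() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();     // join notEmpty's own waiting list
            Object item = buffer[out];
            out = (out + 1) % buffer.length;
            --count;
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```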


Monitors and the Dining Philosophers

• The book next tries to illustrate the use of monitors in order to solve the dining philosophers problem

• I am not going to cover this


6.8 Java Synchronization


The term thread safe

• Term: Thread safe

• Definition: Concurrent threads have been implemented so that they leave shared data in a consistent state


• Note:

• Much of the example code shown previously would not be thread safe.

• The compiler itself cannot detect this in general; at best, an IDE or static analysis tool may warn that threaded code written without synchronization syntax is not thread safe


• The most recent book examples, which used a semaphore class that did use the Java synchronization syntax internally, should not draw such a warning

• If code produces only a warning, meaning that it can still be run, it should be made emphatically clear that even though it runs, it IS NOT THREAD SAFE


• In other words, unsafe code may appear to run

• More accurately it may run and even give correct results some or most of the time

• But depending on the vagaries of thread scheduling, at completely unpredictable times, it will give incorrect results

• This idea was discussed in some detail earlier
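The point that unsafe code may run and often look correct can be demonstrated concretely. The class below is a hypothetical example (not from the book): two threads increment a shared counter without synchronization, and updates can be lost because count++ is a read-modify-write.

```java
// Hypothetical demonstration of unsafe code: an unsynchronized
// counter shared by two threads may lose updates, and the exact
// final value varies unpredictably from run to run.
class RacyCounter {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++)
                count++;    // not atomic: load, add, store
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Often less than the "correct" total of 200000.
        System.out.println("count = " + count);
    }
}
```

The program compiles and runs without complaint every time; only the result, on some runs, betrays the race.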


Preliminaries

• Keep in mind that the Java API supports synchronization syntax at the programmer level

• This is based on monitor concepts built into Java

• However, all synchronization is ultimately provided by something like a test-and-set instruction at the hardware level of the system that Java is running on
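The test-and-set idea can be imitated at the Java level with AtomicBoolean.compareAndSet(), which compiles down to an atomic hardware instruction. This is a sketch of the concept, not how the JVM actually implements its own locks:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of a spin lock built on an atomic
// test-and-set-style operation (compareAndSet).
class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void acquire() {
        // Atomically test whether the lock is free and claim it if so;
        // otherwise spin (busy wait) until it becomes free.
        while (!held.compareAndSet(false, true)) {
            Thread.yield();   // be slightly polite while spinning
        }
    }

    public void release() {
        held.set(false);
    }
}
```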


Spin locks

• The basic idea of a spin lock is that if one thread holds a resource, another thread wanting that resource will have to wait in some fashion

• In the illustrative, application level pseudo-code, this waiting took the form of sitting in a loop


Livelock

• Spin locks are wasteful and they can lead to livelock

• Livelock is not quite the same as deadlock

• In deadlock, two threads “can’t move” because each is waiting for an action that only the other can take

• In livelock, both threads are alive and scheduled, but they still don’t make any progress


• The book suggests this bounded buffer scenario:

• A producer has higher priority than a consumer

• The producer fills the shared buffer

• The producer remains alive, continuing to try to enter items into the buffer


• The consumer is alive, but having lower priority, it is never scheduled, so it can never remove a message from the buffer

• Thus, the producer can never enter a new message into the buffer

• The consumer can never remove one

• But they’re both alive, the producer frantically so, and the consumer slothfully so


Deadlock

• Using real syntax that correctly enforces mutual exclusion can lead to deadlock

• Deadlock is a real problem in the development of synchronized code, but it is not literally a problem of synchronization syntax

• In other words, you can write an example that synchronizes correctly but still has this problem


Java synchronization in two steps

• The book takes the introduction of synchronization syntax through two stages:

• Stage 1: You use Java synchronization and the Thread class yield() method to write code that enforces mutual exclusion and which is essentially a correct implementation of busy waiting.

• This is wasteful and livelock prone, but it is synchronized


• Stage 2: You use Java synchronization with the wait(), notify(), and notifyAll() methods of the Object class.

• Instead of busy waiting, this relies on the underlying monitor-like capabilities of Java to have threads wait in queues or lists.

• This is deadlock prone, but it deals with the wastefulness of spin locks and the potential, however obscure, for livelock


The synchronized Keyword in Java

• Java synchronization is based on the monitor concept, and this descends all the way from the Object class

• Every object in Java has a lock associated with it

• This lock is essentially like a simple monitor, or a monitor with just one condition variable

• Locking for the object is based on that single condition variable


• If you are not writing synchronized code—if you are not using the keyword synchronized— the object’s lock is completely immaterial

• It is a system supplied feature of the object which lurks in the background unused by you and having no effect on what you are doing


• In the monitor concept, mutual exclusion is enforced on all of the methods of a class at the same time

• Java is finer-grained.

• Inside the code of a class, some methods can be declared synchronized and some can be left unsynchronized


• However, if >1 method is declared synchronized in a class, then mutual exclusion is enforced across all of those methods at the same time for any threads trying to access the object

• If a method is synchronized and no thread holds the lock, the first thread that calls the method acquires the lock
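The claim that mutual exclusion spans all of an object's synchronized methods at once can be shown with a small invariant. The class below is a hypothetical example: update() modifies two fields together, and because consistent() is synchronized on the same object's lock, no thread can ever observe update() halfway done.

```java
// Hypothetical demonstration: both methods are synchronized, so a
// thread in consistent() can never run while another thread is in
// update(), even though they are different methods.
class PairHolder {
    private int a = 0, b = 0;

    public synchronized void update() {
        a++;
        // If consistent() could run here, it would see a != b.
        b++;
    }

    public synchronized boolean consistent() {
        return a == b;   // holds whenever no thread is mid-update
    }
}
```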


• Again, Java synchronization is monitor-like.

• There is an entry set for the lock

• If another thread calls a synchronized method and cannot acquire the lock, it is put into the entry set for that lock

• The entry set is a kind of waiting list, but it is not called a waiting list because that term is reserved for something else


• When the thread holding the lock finishes running whatever synchronized method it was in, it releases the lock

• At that point, if the entry set has threads in it, the JVM will schedule one

• FIFO scheduling may be done on the entry set, but the Java specifications don’t require it


• The first correctly synchronized snippets of sample code which the book offers will be given soon

• They enforce mutual exclusion on a shared buffer

• They accomplish this by using the Thread class yield() method to do busy waiting


• The Java API simply says this about the yield() method:

• “Causes the currently executing thread object to temporarily pause and allow other threads to execute.”

• We don’t know exactly how long the yield lasts

• The important point is that it lasts long enough for another thread to be scheduled, if there is one that wants to be scheduled


• The book doesn’t bother to give a complete set of classes for this solution because it is not a very good one

• Because it implements a kind of busy waiting, it’s wasteful, livelock prone, and deadlock prone

• However, it’s worth asking what it illustrates that the previous examples didn’t

• Additional comments will be made after the code on the following overheads


Synchronized insert() and remove() Methods for Producers and Consumers of a Bounded Buffer

public synchronized void insert(Object item)
{
    while (count == BUFFER_SIZE)
        Thread.yield();
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}


public synchronized Object remove()
{
    Object item;
    while (count == 0)
        Thread.yield();
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}


• In previous illustrations we showed spin locks which were pure busy waiting loops

• They had nothing in their bodies

• If a thread in one method was spinning, you would probably hope that another thread could run in the complementary method

• But the other thread would be locked out, due to synchronization

• You could literally make no progress at all


• In this latest code, both methods are synchronized—effectively on the same lock

• On the surface, you might think that you haven’t gained anything compared to the previous examples

• What is added by making calls to yield() in the bodies of the loops?


• With a call to yield(), a thread gives up the processor and is suspended

• Note carefully, though, that yield() does not release the object’s lock; the suspended thread is still “in” the synchronized code

• We also don’t know how long it will remain suspended, but the purpose is to remain suspended long enough for another thread to be scheduled


• The intent is that every time through the loop, the spinning thread gives another thread an opportunity to run in the complementary method

• However, because yield() does not release the lock, a thread spinning in insert() on a full buffer keeps the consumer locked out of remove() on the same object indefinitely

• This is precisely why the example was described as deadlock prone; yield() only avoids wasting the processor while spinning, it does not by itself guarantee progress


The yield() spin lock bounded buffer example vs. the semaphore example

• Note that in the fully semaphore-oriented pseudo-solution given before, there were three semaphores

• One handled the mutual exclusion which the keyword synchronized handles here

• The other two handled the cases where the buffer was empty or full


• There is no such thing as a synchronized “empty” or “full” variable in the code just given, so there are not two additional uses of synchronized in this example

• The handling of the empty and full cases goes all the way back to the original bounded buffer example

• The code depends on a count variable and modular arithmetic to keep track of when it’s possible to enter and remove


• The yield() example has a count variable

• This is part of what is protected by the synchronization

• The condition in the while loops for inserting and removing depends on the value of the count variable.

• For example:

    while (count == BUFFER_SIZE)
        Thread.yield();


Code with synchronized and wait(), notify(), and notifyAll()

• Java threads can call methods wait(), notify(), and notifyAll()

• These methods are similar to the monitor wait() and signal() concepts


• Let P and Q be threads

• Let M be a class which contains synchronized methods

• Every object in Java has one lock, which plays the role of a monitor with a single condition variable

• It is this lock that controls mutual exclusion


• Because there is just the one lock variable, its use can be hidden

• Threads do not have to know it by name

• Various thread calls can depend on the lock variable, and the system takes care of how that happens


• Let both P and Q have references to an instance m of class M

• Let P be running in a synchronized method inside m

• Inside that method, let there be a call wait()

• The implicit parameter of the wait() call is the object m, so the call is in effect m.wait()


• The thread call to wait() works because under the covers there is in effect a call to mLockObject.wait()

• What happens is that the calling thread is put onto the waiting list that belongs to the object’s lock variable


Entry sets and wait sets

• Each Java object has exactly one lock

• Each object has two sets associated with the lock, the entry set and the wait set

• These two sets together control concurrency among threads


The Entry Set

• The entry set is a kind of waiting list

• You can think of it as being implemented as a linked data structure containing the “PCBs” of threads

• Threads in the entry set are those which have reached the point in execution where they have called a synchronized method but can’t get in because another thread holds the lock


• A thread leaves the entry set and enters the synchronized method it wishes to run when the current lock holder releases the lock and the scheduling algorithm picks from the entry set one of the threads wanting the lock


The wait set

• The wait set is also a waiting list

• You can also think of this as a linked data structure containing the “PCBs” of threads

• The wait set is not the same as the entry set

• Suppose a thread holds a lock on an object

• That thread enters the wait set by calling the wait() method


• Entering the wait set means that the thread voluntarily releases the lock that it holds

• In the application code this would be triggered by a condition check, where some (non-lock-related) condition has been tested and it has been determined that, because of that condition, the thread can’t usefully continue executing anyway


• When a thread is in the wait set, it is blocked.

• It can’t be scheduled, but it’s not burning up resources because it’s not busy waiting


The Entry and Wait Sets Can Be Visualized in this Way


• By definition, threads in the wait set are not finished with the synchronized code

• Threads gain entry to the synchronized code through the entry set

• There has to be a mechanism for a thread in the wait set to get into the entry set


The Way to Move a Thread from the Wait Set to the Entry Set

• If, in the synchronized code, one or more calls to wait() have been made,

• Then at the end of the code for a synchronized method, put a call to notify()

• When the system handles the notify() call, it picks an arbitrary thread from the wait set and puts it into the entry set

• When the thread is moved from the wait set to the entry set, its state is changed from blocked to runnable
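The movement between the two sets can be sketched with a minimal wait()/notify() pair. The class and names below are invented for illustration, not from the book:

```java
// Hypothetical minimal example of wait()/notify(): one thread parks
// itself in the object's wait set; another thread moves it back out.
class Gate {
    private boolean open = false;

    public synchronized void pass() throws InterruptedException {
        while (!open)
            wait();    // release the lock, join the wait set, block
        // resumes here after notify() and after reacquiring the lock
    }

    public synchronized void open() {
        open = true;
        notify();      // move one waiting thread to the entry set
    }
}
```

The waiting thread becomes runnable when notify() moves it to the entry set, but it still has to reacquire the lock before it actually continues past wait().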


• The foregoing description should be sufficient for code that manages two threads

• As a consequence, it should provide enough tools for an implementation of the producer-consumer problem using Java synchronization


Preview of the Complete Producer-Consumer Code

• The BoundedBuffer class has two methods, insert() and remove()

• These two methods are synchronized

• Synchronization of the methods protects both the count variable and the buffer itself, since each of these things is only accessed and manipulated through these two methods


• Unlike with semaphores, the implementation is nicely parallel:

• You start both methods with a loop containing a call to wait() and end both with a call to notify()

• The call to wait() is in a loop rather than an if statement

• The call to wait() has to occur in a try block


• The use of the keyword synchronized enforces mutual exclusion

• The use of wait() and notify() has taken over the job of controlling whether a thread can insert or remove a message, depending on whether the buffer is full or empty

• The code follows. • This will be followed by further commentary


/**
 * BoundedBuffer.java
 *
 * This program implements the bounded buffer using Java synchronization.
 */

public class BoundedBuffer implements Buffer {
    private static final int BUFFER_SIZE = 5;

    private int count;  // number of items in the buffer
    private int in;     // points to the next free position in the buffer
    private int out;    // points to the next full position in the buffer

    private Object[] buffer;

    public BoundedBuffer() {
        // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;
        buffer = new Object[BUFFER_SIZE];
    }


    public synchronized void insert(Object item) {
        while (count == BUFFER_SIZE) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // add an item to the buffer
        ++count;
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;

        if (count == BUFFER_SIZE)
            System.out.println("Producer Entered " + item + " Buffer FULL");
        else
            System.out.println("Producer Entered " + item + " Buffer Size = " + count);

        notify();
    }


    // consumer calls this method
    public synchronized Object remove() {
        Object item;

        while (count == 0) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }

        // remove an item from the buffer
        --count;
        item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;

        if (count == 0)
            System.out.println("Consumer Consumed " + item + " Buffer EMPTY");
        else
            System.out.println("Consumer Consumed " + item + " Buffer Size = " + count);

        notify();

        return item;
    }
}


An example scenario showing how the calls to wait() and notify() work

• Assume that the lock is available but the buffer is full

• The producer calls insert()

• The lock is available, so it gets in

• The buffer is full, so it calls wait()

• The producer releases the lock, gets blocked, and is put in the wait set


• The consumer eventually calls remove()

• There is no problem because the lock is available

• At the end of removing, the consumer calls notify()

• The call to notify() removes the producer from the wait set, puts it into the entry set, and makes it runnable


• When the consumer exits the remove() method, it gives up the lock

• The producer can now be scheduled

• The producer thread begins execution at the line of code following the wait() call which caused it to be put into the wait set

• After inserting, the producer calls notify()

• This would allow any other waiting thread to run

• If nothing was waiting, it has no effect


• Why is the call to wait() in a loop rather than an if statement?

• When another thread calls notify() and the waiting thread is chosen to run, it has to check again what the contents of the buffer are

• Just because it’s been scheduled doesn’t mean that the buffer is ready for it to run

• The code contains a loop because the thread has to check whether or not it can run every time it is scheduled


• The rest of the code is given here so it’s close by for reference

• It is the same as the rest of the code for the previous examples, so it may not be necessary to look at it again


/**
 * An interface for buffers
 *
 */

public interface Buffer
{
    /**
     * insert an item into the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract void insert(Object item);

    /**
     * remove an item from the Buffer.
     * Note this may be either a blocking
     * or non-blocking operation.
     */
    public abstract Object remove();
}


/**
 * This is the producer thread for the bounded buffer problem.
 */

import java.util.*;

public class Producer implements Runnable {
    private Buffer buffer;

    public Producer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Producer napping");
            SleepUtilities.nap();

            // produce an item & enter it into the buffer
            message = new Date();
            System.out.println("Producer produced " + message);

            buffer.insert(message);
        }
    }
}


/**
 * This is the consumer thread for the bounded buffer problem.
 */

import java.util.*;

public class Consumer implements Runnable {
    private Buffer buffer;

    public Consumer(Buffer b) {
        buffer = b;
    }

    public void run() {
        Date message;

        while (true) {
            System.out.println("Consumer napping");
            SleepUtilities.nap();

            // consume an item from the buffer
            System.out.println("Consumer wants to consume.");

            message = (Date) buffer.remove();
        }
    }
}


/**
 * This creates the buffer and the producer and consumer threads.
 *
 */

public class Factory
{
    public static void main(String args[]) {
        Buffer server = new BoundedBuffer();

        // now create the producer and consumer threads
        Thread producerThread = new Thread(new Producer(server));
        Thread consumerThread = new Thread(new Consumer(server));

        producerThread.start();
        consumerThread.start();
    }
}


/**
 * Utilities for causing a thread to sleep.
 * Note, we should be handling interrupted exceptions
 * but choose not to do so for code clarity.
 */

public class SleepUtilities
{
    /**
     * Nap between zero and NAP_TIME seconds.
     */
    public static void nap() {
        nap(NAP_TIME);
    }

    /**
     * Nap between zero and duration seconds.
     */
    public static void nap(int duration) {
        int sleeptime = (int) (duration * Math.random());
        try { Thread.sleep(sleeptime * 1000); }
        catch (InterruptedException e) {}
    }

    private static final int NAP_TIME = 5;
}


Multiple Notifications

• A call to notify() picks one thread out of the wait set and puts it into the entry set

• What if there are >1 waiting threads?

• The book points out that using notify() alone can lead to deadlock

• Deadlock is an important problem, which motivates a discussion of notifyAll(), but it will not be covered in detail until the next chapter


• The general solution to any problems latent in calling notify() is to call notifyAll()

• This moves all of the waiting threads to the entry set

• At that point, which one runs next depends on the scheduler

• The selected one may immediately block


• However, if notifyAll() is always called, statistically, if there is at least one thread that can run, it will eventually be scheduled

• Any threads which depend on it could then run when they are scheduled, and progress will be made


• It actually seems like many problems could be avoided if you always just called notifyAll() instead of notify()

• But there must be cases where a call to notify() is preferable to a call to notifyAll()

• The next example illustrates the use of both kinds of calls

Page 90: Chapter 6, Process Synchronization, Overheads, Part 2 1

90

notifyAll() and the Readers-Writers Problem

• The book gives full code for this
• I will try to abstract their illustration without referring to the complete code
• Remember that a read lock is not exclusive
– Multiple reading threads are OK at the same time
– Only writers have to be blocked
• Write locks are exclusive
– Any one writer blocks all other readers and writers

Page 91: Chapter 6, Process Synchronization, Overheads, Part 2 1

91

Synopsis of Read Lock Code

acquireReadLock()
{
    while(there is a writer)
        wait();
    …
}

releaseReadLock()
{
    …
    notify();
}

Page 92: Chapter 6, Process Synchronization, Overheads, Part 2 1

92

• One writer will be notified when the readers are finished.
• By definition, no reader could be waiting, because a reader only waits while a writer is active.
• It does seem possible to call notifyAll(), in which case possibly >1 writer would contend to be scheduled, but it is sufficient to just ask the system to notify one waiting thread.
• In effect, the choice is between letting the wait set's notification algorithm pick one writer and letting the entry set's scheduling algorithm pick one.

Page 93: Chapter 6, Process Synchronization, Overheads, Part 2 1

93

Synopsis of Write Lock Code

acquireWriteLock()
{
    while(there is any reader or writer)
        wait();
    …
}

releaseWriteLock()
{
    …
    notifyAll();
}

Page 94: Chapter 6, Process Synchronization, Overheads, Part 2 1

94

• All readers will be notified when the writer finishes
• Any waiting writers would also be notified
• They would all go into the entry set and be eligible for scheduling
• The point is to make it possible to get all of the readers active, since they are all allowed to read concurrently
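The two synopses can be combined into a small runnable sketch. This is not the book's code; the class name `ReadWriteMonitor`, the counter fields, and the helper `activeReaders()` are assumptions made for illustration.

```java
// A minimal readers-writers monitor in the style of the synopses above.
// Names and fields are illustrative, not the book's actual code.
public class ReadWriteMonitor {
    private int readers = 0;         // number of active readers
    private boolean writing = false; // true while a writer holds the lock

    public synchronized void acquireReadLock() throws InterruptedException {
        while (writing)
            wait();
        readers++;
    }

    public synchronized void releaseReadLock() {
        readers--;
        if (readers == 0)
            notify();    // only writers can be waiting here, so one suffices
    }

    public synchronized void acquireWriteLock() throws InterruptedException {
        while (readers > 0 || writing)
            wait();
        writing = true;
    }

    public synchronized void releaseWriteLock() {
        writing = false;
        notifyAll();     // wake all waiters so every reader can run concurrently
    }

    public synchronized int activeReaders() {
        return readers;
    }

    public static void main(String[] args) throws InterruptedException {
        ReadWriteMonitor m = new ReadWriteMonitor();
        m.acquireReadLock();
        m.acquireReadLock();   // multiple readers may hold the lock at once
        m.releaseReadLock();
        m.releaseReadLock();
        m.acquireWriteLock();  // succeeds once no readers remain
        m.releaseWriteLock();
        System.out.println("done");
    }
}
```

Note that `releaseReadLock()` can use `notify()` while `releaseWriteLock()` uses `notifyAll()`, exactly as the synopses describe.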

Page 95: Chapter 6, Process Synchronization, Overheads, Part 2 1

95

Block Synchronization

• Lock scope definition:
– The time between when a lock is acquired and released
– This might also refer to the location in the code where the lock is in effect
• Declaring a method synchronized may lead to an unnecessarily long scope if large parts of the method don't access the shared resource

• Block synchronization refers to synchronizing subsets of methods.

Page 96: Chapter 6, Process Synchronization, Overheads, Part 2 1

96

• Block synchronization is based on the idea that every object has a lock

• You can construct an instance of the Object class and use it as the lock for a block of code

• In other words, you use the lock of that object as the lock for the block

• The lock applies to the block of code in the matched braces following the synchronized keyword

• Example code follows

Page 97: Chapter 6, Process Synchronization, Overheads, Part 2 1

97

Object mutexLock = new Object();
…
public void someMethod()
{
    nonCriticalSection();
    …
    synchronized(mutexLock)
    {
        criticalSection();
    }
    remainderSection();
    …
}
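A complete, compilable version of this pattern might look like the following. The class name, the counter field, and the thread workload are invented for illustration; only the increment touches shared state, so only it sits inside the synchronized block.

```java
// Block synchronization: the non-critical work runs without the lock,
// and only the statement touching the shared counter is guarded.
public class BlockSyncExample {
    private final Object mutexLock = new Object();
    private int counter = 0;   // the shared resource

    public void someMethod() {
        int local = 1;             // non-critical section: no lock needed
        synchronized (mutexLock) {
            counter += local;      // critical section: guarded by mutexLock
        }
        // remainder section: again no lock needed
    }

    public int getCounter() {
        synchronized (mutexLock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockSyncExample ex = new BlockSyncExample();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) ex.someMethod(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) ex.someMethod(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(ex.getCounter()); // 2000: no updates are lost
    }
}
```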

Page 98: Chapter 6, Process Synchronization, Overheads, Part 2 1

98

• Block synchronization also allows the use of wait() and notify() calls
• Example code follows
• Honestly, without a concrete example, this doesn't really show what you might use it for
• However, it does make the specific syntax clear

Page 99: Chapter 6, Process Synchronization, Overheads, Part 2 1

99

Object mutexLock = new Object();
…
synchronized(mutexLock)
{
    …
    try
    {
        mutexLock.wait();
    }
    catch(InterruptedException ie)
    {
        …
    }
    …
}

synchronized(mutexLock)
{
    mutexLock.notify();
}

Page 100: Chapter 6, Process Synchronization, Overheads, Part 2 1

100

Synchronization Rules: I.e., Rules Affecting the Use of the Keyword synchronized

• 1. A thread that owns the lock for an object can enter another synchronized method (or block) for the same object.
– This is known as a reentrant or recursive lock.
• 2. A thread can nest synchronized calls for different objects.
– One thread can hold the lock for >1 object at the same time.
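Rule 1 can be demonstrated with a short sketch; the class and method names here are invented for illustration.

```java
// Rule 1 in action: outer() is synchronized and calls inner(), which is
// also synchronized on the same object. Because the object lock is
// reentrant, the second acquisition by the same thread succeeds at once.
public class ReentrantDemo {
    private int depth = 0;

    public synchronized void outer() {
        depth++;
        inner();           // re-enters this object's lock
    }

    public synchronized void inner() {
        depth++;
    }

    public int getDepth() {
        return depth;
    }

    public static void main(String[] args) {
        ReentrantDemo d = new ReentrantDemo();
        d.outer();         // would deadlock here if the lock were not reentrant
        System.out.println(d.getDepth()); // 2
    }
}
```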

Page 101: Chapter 6, Process Synchronization, Overheads, Part 2 1

101

• 3. Some methods of a class may not be declared synchronized.
– A method that is not declared synchronized can be called regardless of lock ownership—that is, whether a thread is running in a synchronized method concurrently or not

• 4. If the wait set for an object is empty, a call to notify() or notifyAll() has no effect.

Page 102: Chapter 6, Process Synchronization, Overheads, Part 2 1

102

• 5. wait(), notify(), and notifyAll() can only be called from within synchronized methods or blocks.
– Otherwise, an IllegalMonitorStateException is thrown.
• 6. An additional note: For every class, in addition to the lock that every object of that class gets, there is also a class lock.
– That makes it possible to declare static methods, or blocks in static methods, to be synchronized
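Rule 6 can be sketched as follows; the class name and the instance counter are assumptions made for illustration.

```java
// Rule 6: static synchronized methods lock the Class object, not any
// instance, so all callers of the static methods exclude one another.
public class ClassLockDemo {
    private static int instances = 0;   // shared across all instances

    public static synchronized void register() {
        instances++;                    // guarded by ClassLockDemo.class's lock
    }

    public static synchronized int count() {
        return instances;
    }

    public void registerViaBlock() {
        synchronized (ClassLockDemo.class) {  // equivalent explicit form
            instances++;
        }
    }

    public static void main(String[] args) {
        new ClassLockDemo().registerViaBlock();
        ClassLockDemo.register();
        System.out.println(ClassLockDemo.count()); // 2
    }
}
```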

Page 103: Chapter 6, Process Synchronization, Overheads, Part 2 1

103

Handling the InterruptedException

• This almost feels like a step too far—what’s it all about and why is it necessary to discuss?

• However, the correct example code that has finally been given has required the use of try/catch blocks

• The question is, why are the blocks necessary and what do they accomplish?

Page 104: Chapter 6, Process Synchronization, Overheads, Part 2 1

104

• If you go back to chapter 4, you'll recall that the topic of asynchronous (immediate) and deferred thread cancellation (termination) came up
• Deferred cancellation was preferred.
• This meant that threads were cancelled by calling interrupt() rather than stop()

Page 105: Chapter 6, Process Synchronization, Overheads, Part 2 1

105

• The specifics can be recalled with a scenario
• Let thread1 have a reference to thread2
• Within the code for thread1, thread2 would be interrupted in this way:
• thread2.interrupt();

Page 106: Chapter 6, Process Synchronization, Overheads, Part 2 1

106

• Then in the code for thread2, thread2 can check its status with one of these two calls:
• Thread.interrupted(); (a static method, which also clears the interrupted flag)
• me.isInterrupted(); (an instance method, which leaves the flag set)
• thread2 can then do any needed housekeeping (preventing inconsistent state) before terminating itself

Page 107: Chapter 6, Process Synchronization, Overheads, Part 2 1

107

• In the context of Java synchronization, this is the question:

• Is it possible to interrupt (cancel or kill) a thread like thread2 that is in a wait set (is suspended or blocked)?

• A call to wait() has to occur in a try block as shown on the following overhead

Page 108: Chapter 6, Process Synchronization, Overheads, Part 2 1

108

try
{
    wait();
}
catch(InterruptedException ie)
{
    …
}

Page 109: Chapter 6, Process Synchronization, Overheads, Part 2 1

109

• If a thread calls wait(), it goes into the wait set and stops executing

• As explained up to this point, the thread can't resume, and it can't do anything at all, until notify() or notifyAll() is called and it is picked for scheduling

• This isn’t entirely true

Page 110: Chapter 6, Process Synchronization, Overheads, Part 2 1

110

• The wait() call is the last live call of the thread
• The system is set up so that thread1 might make a call like this while thread2 is in the wait set:
• thread2.interrupt();

Page 111: Chapter 6, Process Synchronization, Overheads, Part 2 1

111

• If such a call is made on thread2 while it’s in the wait set, the system will throw an exception back out where thread2 made the call to wait()

• At that point, thread2 is no longer blocked because it’s kicked out of the wait set

Page 112: Chapter 6, Process Synchronization, Overheads, Part 2 1

112

• This means that thread2 becomes runnable without a call to notify(), but its status is now interrupted

• If thread2 is scheduled, then execution begins at the top of the catch block

• If you choose to handle the exception, then what you should do is provide the housekeeping code which thread2 needs to run so that it will leave shared resources in a consistent state and then terminate itself
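This scenario can be sketched concretely. The class name, the `cleanedUp` flag, and the 100 ms pause are illustrative assumptions; the flag stands in for whatever housekeeping a real thread2 would do.

```java
// A waiting thread is interrupted out of the wait set: thread2 blocks in
// wait(), main interrupts it, and control resumes in the catch block,
// where housekeeping can run before the thread terminates.
public class InterruptWaitDemo {
    private final Object lock = new Object();
    volatile boolean cleanedUp = false;

    public void waitForever() {
        synchronized (lock) {
            try {
                lock.wait();        // no one will ever call notify()
            } catch (InterruptedException ie) {
                cleanedUp = true;   // housekeeping before terminating
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        InterruptWaitDemo demo = new InterruptWaitDemo();
        Thread thread2 = new Thread(demo::waitForever);
        thread2.start();
        Thread.sleep(100);          // give thread2 time to enter wait()
        thread2.interrupt();        // kicks thread2 out of the wait set
        thread2.join();
        System.out.println(demo.cleanedUp); // true
    }
}
```

Note that even if the interrupt arrives before thread2 reaches wait(), the call throws InterruptedException immediately because the interrupted status is already set, so the housekeeping still runs.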

Page 113: Chapter 6, Process Synchronization, Overheads, Part 2 1

113

• The foregoing can be summarized as follows:
• Java has this mechanism so that threads can be terminated even after they've disappeared into a wait set
• This can be useful because there should be no need for a thread to either waste time in the wait set or run any further if it is slated for termination anyway

Page 114: Chapter 6, Process Synchronization, Overheads, Part 2 1

114

• This is especially useful because it allows a thread which is slated for termination to release any locks or resources it might be holding.

• Why this is good will become even clearer in the following chapter, on deadlocks

Page 115: Chapter 6, Process Synchronization, Overheads, Part 2 1

115

Concurrency Features in Java—at this point it’s hard to say how useful this list is

• If you want to write synchronized code in Java, check the API documentation

• What follows is just a listing of the features—beyond what was just explained—with minimal explanation

Page 116: Chapter 6, Process Synchronization, Overheads, Part 2 1

116

• 1. As mentioned earlier, there is a class named Semaphore.

• Technically, the examples earlier were based on the authors’ hand-coded semaphore.

• If you want to use the Java Semaphore class, double check its behavior in the API
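As a quick sketch of what the API check would turn up, a binary java.util.concurrent.Semaphore can guard a critical section much like the hand-coded version; the class name and printed message below are illustrative.

```java
import java.util.concurrent.Semaphore;

// The library Semaphore in place of the book's hand-coded one: one permit
// gives mutual exclusion, with acquire()/release() in the roles of the
// hand-coded acquire() and release().
public class LibrarySemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(1);   // one permit = mutual exclusion

        sem.acquire();                      // blocks if no permit is available
        try {
            System.out.println("in the critical section");
        } finally {
            sem.release();                  // always give the permit back
        }
        System.out.println(sem.availablePermits()); // 1
    }
}
```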

Page 117: Chapter 6, Process Synchronization, Overheads, Part 2 1

117

• 2. There is a class named ReentrantLock.
• This supports functionality similar to the synchronized keyword (or a semaphore), with added features like enforcing fairness in scheduling threads waiting for locks
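A short sketch of ReentrantLock in use follows; the class name and counter are invented for illustration. Passing true to the constructor requests the fairness policy mentioned above.

```java
import java.util.concurrent.locks.ReentrantLock;

// ReentrantLock as an alternative to synchronized: lock() and unlock()
// bracket the critical section, and the fair variant hands the lock to
// the longest-waiting thread.
public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private int counter = 0;

    public void increment() {
        lock.lock();            // like entering a synchronized block
        try {
            counter++;
        } finally {
            lock.unlock();      // always release in a finally block
        }
    }

    public int get() {
        lock.lock();
        try { return counter; } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLockDemo d = new ReentrantLockDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(d.get()); // 2000
    }
}
```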

Page 118: Chapter 6, Process Synchronization, Overheads, Part 2 1

118

• 3. There is an interface named Condition, and this type can be used to declare condition variables associated with reentrant locks.
• They are related to the idea of condition variables in a monitor, and with reentrant locks they play the role that wait(), notify(), and notifyAll() play for synchronized code: the corresponding Condition methods are await(), signal(), and signalAll()
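A minimal sketch of a Condition paired with a ReentrantLock follows; the class name, the `flag` field, and the 100 ms pause are assumptions for illustration.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A Condition tied to a ReentrantLock: await() releases the lock and
// blocks like wait(), and signal() wakes one waiter like notify().
public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flag = false;

    public void waitUntilReady() throws InterruptedException {
        lock.lock();
        try {
            while (!flag)        // same loop-around-wait idiom as a monitor
                ready.await();
        } finally {
            lock.unlock();
        }
    }

    public void makeReady() {
        lock.lock();
        try {
            flag = true;
            ready.signal();      // counterpart of notify()
        } finally {
            lock.unlock();
        }
    }

    public boolean isReady() {
        lock.lock();
        try { return flag; } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        ConditionDemo demo = new ConditionDemo();
        Thread waiter = new Thread(() -> {
            try { demo.waitUntilReady(); } catch (InterruptedException ignored) {}
            System.out.println("released");
        });
        waiter.start();
        Thread.sleep(100);       // let the waiter block on the condition
        demo.makeReady();
        waiter.join();
    }
}
```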

Page 119: Chapter 6, Process Synchronization, Overheads, Part 2 1

119

• 6.9 Synchronization Examples: Solaris, XP, Linux, Pthreads. SKIP

• 6.10 Atomic Transactions: This is a fascinating topic that has as much to do with databases as operating systems… SKIP

• 6.11 Summary. SKIP

Page 120: Chapter 6, Process Synchronization, Overheads, Part 2 1

120

The End