
1.1 T5-multithreading SO-Grade 2013-2014-Q2



Page 1

1.1

T5-multithreading

SO-Grade 2013-2014-Q2

Page 2

1.2

Processes vs. Threads

Thread libraries

Communication based on shared memory

Race condition

Critical section

Mutual exclusion access

Index

Page 3

1.3

Until now… just one sequence of execution: just one program counter and just one stack. There is no support to execute different concurrent functions inside one process. But there may be independent functions that could exploit concurrency.

Processes vs. Threads

Page 4

1.4

Single-process server: the server cannot serve more than one client at the same time. It is not possible to exploit the advantages of concurrency and parallelism.

Multi-process server: one process per simultaneous client to be served. Concurrent and/or parallel execution. But… there is resource wasting: replication of data structures that hold the same information, replication of logical address spaces, inefficient communication mechanisms, …

Example: client-server application

Client 1..N: { … Send_request(); Wait_response(); Process_response(); … }

GLOBAL DATA
Server { while() { Wait_request(); Prepare_response(); Send_response(); } }

Page 5

1.5

Example: client-server application (multi-process server)

Client 1..N: { … Send_request(); Wait_response(); Process_response(); … }

One server process per client being served, each with its own copy of the GLOBAL DATA:
Server { while() { START_process Wait_request(); Prepare_response(); Send_response(); END_process } }
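The fork-based structure just sketched can be made concrete. The following is only a minimal illustration, assuming hypothetical get_request()/handle_request() helpers in place of real client handling; it is not the course's reference code.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical stand-ins for real request handling. */
static int get_request(void) { sleep(1); return 42; }
static void handle_request(int req) { printf("pid %d served request %d\n", getpid(), req); }

int main(void)
{
    while (1) {                      /* parent: accept loop */
        int req = get_request();
        pid_t pid = fork();          /* one process per simultaneous client */
        if (pid == 0) {
            handle_request(req);     /* child works on its own copy of all global data */
            exit(0);
        }
        waitpid(-1, NULL, WNOHANG);  /* reap finished children without blocking */
    }
}

Every child receives its own copy of the server's data and address space, which is exactly the resource wasting pointed out above.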

Page 6

1.6

Alternative: multithreaded server. Enable several concurrent executions associated with one process. What is necessary to describe one execution sequence?

Stack, program counter, values of the general-purpose registers.

The rest of the process characteristics can be shared (rest of the logical address space, information about devices, signal management, etc.)

Example: server application

Page 7

1.7

Most resources are assigned to processes. Characteristics/resources per thread:

Next instruction to execute (PC value), a memory region to hold its stack, the values of the general-purpose registers, and an identifier.

The scheduling unit is the thread (each thread requires a CPU). The rest of the resources/characteristics are shared by all threads in a process. A traditional process contains just one execution thread.

Processes vs. Threads

Page 8

1.8

Example: client-server application

Client 1..N: { … Send_request(); Wait_response(); Process_response(); … }

GLOBAL DATA (shared by all threads)
Server { while() {
START_thread Wait_request(); Prepare_response(); Send_response(); END_thread
} }

(One thread per client being served; all threads run inside the same server process.)
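For comparison, a minimal pthread sketch of the multithreaded loop shown above, again with hypothetical stub helpers; one detached thread is created per request and all threads share the server's global data.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int requests_served = 0;                    /* GLOBAL DATA, shared by every thread */

/* Hypothetical stand-ins for real request handling. */
static int get_request(void) { sleep(1); return 42; }
static void prepare_and_send_response(int req) { printf("response to request %d\n", req); }

static void *serve_client(void *arg)               /* the START_thread ... END_thread body */
{
    int req = *(int *)arg;
    free(arg);
    prepare_and_send_response(req);
    requests_served++;        /* shared variable: beware of race conditions (see below) */
    return NULL;
}

int main(void)
{
    while (1) {
        int *req = malloc(sizeof *req);
        *req = get_request();
        pthread_t tid;
        pthread_create(&tid, NULL, serve_client, req);  /* one thread per client */
        pthread_detach(tid);    /* no join needed; the thread is cleaned up when it ends */
    }
}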

Page 9

1.9

Internals: Processes vs. Threads

1 process with N threads: 1 PCB. N different code sequences can be executed concurrently. The PCB allocates space to store the execution context of all threads.

Address space:
– 1 code region
– 1 data region
– 1 heap region
+ N stack regions (1 per thread)

Page 10

1.11

Memory Sharing

Between processes: all process memory is private by default; no other process can access it (there are system calls to ask explicitly for memory shared between processes).

Between threads: all threads in a process can access the whole process address space. Some considerations:

– Each thread has its own stack region, to keep its local variables, parameters and the values that control its execution flow.

– However, all stack regions are also accessible by all threads in the process: variable/parameter scope is not the same thing as permission to access the memory.

Internals: Processes vs. Threads
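A small sketch of the point above, assuming nothing beyond the standard pthread API: the global variable lives in the shared data region and is visible to both threads, while each thread's local variable lives in its own stack region.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                  /* data region: one copy, visible to all threads */

static void *worker(void *arg)
{
    int local = 0;                       /* stack region: private to this execution flow */
    local++;
    shared_counter++;                    /* both threads update the same variable */
    printf("local=%d shared=%d\n", local, shared_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final shared_counter=%d\n", shared_counter);  /* unsynchronized: see race conditions */
    return 0;
}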

Page 11

1.12

Potential scenarios for multithreaded or multiprocess applications: exploiting parallelism and concurrency, improving modularity, I/O-bound applications (processes or threads dedicated just to implementing device accesses), and server applications.

Using concurrent applications

Page 12

1.13

Benefits from using threads compared to using processes: lower management costs (creation, destruction and context switch), better resource exploitation, and a very simple communication mechanism: shared memory.

Benefits from using threads

Page 13

1.14

There is no standard interface common to all OS kernels: applications using a kernel interface directly are not portable.

POSIX threads (Portable Operating System Interface, defined by IEEE): a user-level thread management interface covering creation and destruction, synchronization, and scheduling configuration.

It uses OS system calls as required. There exist implementations for all OSes: using this interface, applications become portable. The API is very complete, and for some OSes it is only partially implemented.

User level management: thread libraries

Page 14

1.15

Pthread management services

Creation: Processes: fork() | Threads: pthread_create(out thread_id, in NULL, in function_name, in param)

Identification: Processes: getpid() | Threads: pthread_self()

Ending: Processes: exit(exit_code) | Threads: pthread_exit(exit_code)

Synchronization with the end of execution: Processes: waitpid(pid, ending_status, FLAGS) | Threads: pthread_join(in thread_id, out exit_code)

Check the interfaces on the web (man pages are not installed in the labs).
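A compact sketch pairing the two columns of the table: the process calls on one side and the analogous pthread calls on the other. Printing a pthread_t as unsigned long is an assumption that holds on typical Linux systems but is not guaranteed by POSIX.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void *thread_work(void *arg)
{
    printf("thread %lu running\n", (unsigned long)pthread_self());  /* identification */
    pthread_exit((void *)0);                                        /* ending */
    return NULL;                                                    /* not reached */
}

int main(void)
{
    /* Processes: fork / getpid / exit / waitpid */
    pid_t pid = fork();
    if (pid == 0) { printf("child pid %d\n", getpid()); exit(0); }
    waitpid(pid, NULL, 0);

    /* Threads: pthread_create / pthread_self / pthread_exit / pthread_join */
    pthread_t tid;
    void *status;
    pthread_create(&tid, NULL, thread_work, NULL);
    pthread_join(tid, &status);
    return 0;
}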

Page 15

1.16

pthread_create: creates a new thread that will execute start_routine with the arg parameter.

#include <pthread.h>

int pthread_create(pthread_t *th, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);

th: will hold the thread identifier

attr: initial characteristics of the thread (if NULL, the thread starts execution with the default characteristics)

start_routine: address of the routine that the new thread will execute (in C, the name of a function represents its starting address). This routine can receive just one parameter, of type void*

arg: routine parameter

Returns 0 if creation succeeds, or an error code otherwise

Thread creation
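A minimal usage sketch of pthread_create, passing a single int through the void* argument; the names start_routine and param are illustrative, not part of the API.

#include <pthread.h>
#include <stdio.h>

static void *start_routine(void *arg)
{
    int value = *(int *)arg;               /* the single void* parameter */
    printf("new thread received %d\n", value);
    return NULL;
}

int main(void)
{
    pthread_t th;
    int param = 7;
    int ret = pthread_create(&th, NULL, start_routine, &param);
    if (ret != 0) {                         /* a non-zero return value is the error code */
        fprintf(stderr, "pthread_create failed: %d\n", ret);
        return 1;
    }
    pthread_join(th, NULL);                 /* wait, so &param stays valid while the thread runs */
    return 0;
}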

Page 16

1.17

pthread_self: returns the identifier of the thread that calls it.

#include <pthread.h>

pthread_t pthread_self(void);

Returns the calling thread's identifier

Thread identification
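A short sketch in which every thread prints its own identifier; again, casting pthread_t to unsigned long for printing is a Linux-style assumption.

#include <pthread.h>
#include <stdio.h>

static void *who_am_i(void *arg)
{
    printf("thread id: %lu\n", (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, who_am_i, NULL);
    pthread_create(&t2, NULL, who_am_i, NULL);
    printf("main thread id: %lu\n", (unsigned long)pthread_self());
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}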

Page 17

1.18

pthread_exit: executed by the thread that is ending its execution. Its parameter is the thread's ending code.

#include <pthread.h>

void pthread_exit(void *status);

status: thread return value (ending code), which can later be collected with pthread_join

This call does not return to the caller: the calling thread terminates

Thread destruction
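A minimal sketch combining pthread_exit with pthread_join to collect the ending code; packing the value 42 directly into the void* is an illustrative shortcut.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    pthread_exit((void *)42);     /* ending code of this thread */
    return NULL;                  /* not reached */
}

int main(void)
{
    pthread_t th;
    void *status;
    pthread_create(&th, NULL, worker, NULL);
    pthread_join(th, &status);    /* synchronize with the thread's end and collect its code */
    printf("thread ended with code %ld\n", (long)status);
    return 0;
}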

Page 18

1.20

Threads in a process can exchange information through memory (all memory is shared between all threads in a process) by accessing the same variables.

Risk: race condition. There is a race condition when the result of the execution depends on the relative execution order of the instructions of the threads (or processes).

Shared memory communication

Page 19

1.21

Example: race condition

int first = 1;  /* shared variable */

/* thread 1 and thread 2 execute the same code */
if (first) {
    first--;
    task1();
} else {
    task2();
}

Possible outcomes (which thread executes task1 / task2):
task1: Thread 1, task2: Thread 2
task1: Thread 2, task2: Thread 1
task1: Thread 1 and Thread 2, task2: nobody -- WRONG RESULT

Programmer goal: use the boolean first to distribute task1 and task2 between the two threads. But this is done with non-atomic operations!!!
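The same fragment assembled into a compilable program, with task1/task2 reduced to stubs; running it repeatedly may occasionally show task1 printed twice, the wrong result described above, because the test and the decrement of first are separate, non-atomic steps.

#include <pthread.h>
#include <stdio.h>

int first = 1;                               /* shared variable */

static void task1(void) { printf("task1 executed\n"); }
static void task2(void) { printf("task2 executed\n"); }

static void *do_task(void *arg)
{
    if (first) {                             /* read of first ... */
        first--;                             /* ... and its update are separate instructions */
        task1();
    } else {
        task2();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, do_task, NULL);
    pthread_create(&t2, NULL, do_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}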

Page 20

1.22

Assembler code

Do_task:
    pushl   %ebp
    movl    %esp, %ebp
    subl    $8, %esp
    movl    first, %eax      # the if test takes more than one instruction
    testl   %eax, %eax
    je      .L2
    movl    first, %eax      # the decrement of first also takes more than one instruction
    subl    $1, %eax
    movl    %eax, first
    call    task1
    jmp     .L5
.L2:
    call    task2            # else branch
.L5:
    leave
    ret

What happens if a context switch occurs right after executing the movl instruction in the if section?

Page 21

1.23

What happens? … %eax is already set to 1

Thread 1 and Thread 2 both execute the same Do_task code shown on the previous slide.

Thread 1 executes movl first, %eax, loading the value 1, and at that point a context switch occurs. Thread 2 then runs the complete if section: it also reads first == 1, decrements it to 0 and calls task1. On the next context switch, Thread 1 resumes with its saved %eax still holding 1, so the testl/je pair lets it enter the if branch as well: it decrements first again and calls task1. Both threads end up executing task1, which is the wrong result.

Page 22

1.24

Critical section: a sequence of code lines that contains race conditions that may cause wrong results; a sequence of code lines that accesses shared, changing variables.

Solution: mutual exclusion access to those code regions. Avoid context switching?

Critical section

Page 23

1.25

Mutual exclusion ensures that access to a critical section is sequential: only one thread can execute code in a critical section at a time (even if a context switch happens).

Programmer responsibilities: identify the critical sections in the code, and mark the starting point and ending point of each critical section using the tools provided by the OS.

The OS provides programmers with system calls to mark the starting point and ending point of a critical section:

Starting point: if no other thread has permission to access the critical section, this thread gets the permission and continues with the code execution. Otherwise, this thread waits until access to the critical section is released.

Ending point: the critical section is released and permission is given to one of the threads waiting to access it, if there is any.

Mutual exclusion access

Page 24

1.26

Mutual exclusion: pthread interface

To consider: each critical section is identified through a global variable of type pthread_mutex_t. It is necessary to define one variable per type of critical section.

It is necessary to initialize this variable before using it. Ideally, this initialization should be performed before creating the pthreads that will use it.

pthread_mutex_init: initializes a pthread_mutex_t variable

pthread_mutex_lock: requests access to a critical section (the caller waits if another thread holds it)

pthread_mutex_unlock: releases access to a critical section

Page 25

1.27

Example: Mutex

int first = 1;              // shared variable
pthread_mutex_t rc1;        // new shared variable

pthread_mutex_init(&rc1, NULL);   // INITIALIZE rc1 VARIABLE: JUST ONCE
…
pthread_mutex_lock(&rc1);         // BLOCK ACCESS
if (first) {
    first--;
    pthread_mutex_unlock(&rc1);   // RELEASE ACCESS
    task1();
} else {
    pthread_mutex_unlock(&rc1);   // RELEASE ACCESS
    task2();
}
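For completeness, the same example as a full program: a sketch with task1/task2 as stubs and the mutex initialized once in main before the threads that use it are created.

#include <pthread.h>
#include <stdio.h>

int first = 1;                               /* shared variable */
pthread_mutex_t rc1;                         /* protects the critical section on first */

static void task1(void) { printf("task1\n"); }
static void task2(void) { printf("task2\n"); }

static void *do_task(void *arg)
{
    pthread_mutex_lock(&rc1);                /* BLOCK ACCESS */
    if (first) {
        first--;
        pthread_mutex_unlock(&rc1);          /* RELEASE ACCESS before the (long) task */
        task1();
    } else {
        pthread_mutex_unlock(&rc1);          /* RELEASE ACCESS */
        task2();
    }
    return NULL;
}

int main(void)
{
    pthread_mutex_init(&rc1, NULL);          /* initialize just once, before creating threads */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, do_task, NULL);
    pthread_create(&t2, NULL, do_task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&rc1);
    return 0;
}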

Page 26

1.28

Programming considerations: critical sections should be as small as possible in order to maximize concurrency. Mutual exclusion access is driven by the identifier (variable) used at the starting and ending points; it is not necessary to have the same code in related critical sections. If there are several independent shared variables, it may be convenient to use different identifiers (mutexes) to protect them (see the sketch below).

Mutual exclusion: considerations
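As referenced above, a small sketch of that last point: two independent shared variables protected by two different mutexes (statically initialized here, an alternative to pthread_mutex_init), so threads touching only a never wait for threads touching only b. The variable names are illustrative.

#include <pthread.h>

int a = 0, b = 0;                                     /* independent shared variables */
pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER;  /* protects a */
pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER;  /* protects b */

void increment_a(void)
{
    pthread_mutex_lock(&mutex_a);                     /* critical section on a only */
    a++;
    pthread_mutex_unlock(&mutex_a);
}

void increment_b(void)
{
    pthread_mutex_lock(&mutex_b);                     /* critical section on b only */
    b++;
    pthread_mutex_unlock(&mutex_b);
}

int main(void) { increment_a(); increment_b(); return 0; }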