
Page 1: IS473 Distributed Systems CHAPTER 6 Operating System Support

IS473 Distributed Systems

CHAPTER 6

Operating System Support

Page 2: IS473 Distributed Systems CHAPTER 6 Operating System Support

OUTLINE

Figure: system layers; applications run above middleware, which runs above the operating system and the computer and network hardware (the OS and hardware together form the platform).

Page 3: IS473 Distributed Systems CHAPTER 6 Operating System Support


OUTLINE

Distributed Operating System.

Operating System Layer.

Processes and Threads.

Communication and Invocation.

Operating System Architecture.

Page 4: IS473 Distributed Systems CHAPTER 6 Operating System Support

Network and Distributed OS

Network operating system:
• Has networking capability to access remote resources.
• Retains autonomy in managing its own resources.
• Remote resource access is not always transparent.
• There is a separate system image on each node.

Distributed operating system:
• Single system image across multiple nodes.
• Resource access is completely transparent.
• Not in practical use, because:
• Compatibility with existing applications is required.
• Emulations offer very poor performance.
• Users prefer a degree of autonomy for their machines.

The combination of middleware and network operating systems provides an acceptable balance between the requirements of autonomy and network-transparent resource access.

Page 5: IS473 Distributed Systems CHAPTER 6 Operating System Support

Operating System Layer

Figure: system layers. Applications and services run above middleware; middleware runs above the OS (kernel, libraries & servers); beneath each OS is the platform of computer & network hardware. Each node (Node 1, Node 2) runs its own operating system (OS1, OS2), which provides processes, threads, communication, and so on.

Page 6: IS473 Distributed Systems CHAPTER 6 Operating System Support

Operating System Layer

User satisfaction is achieved if the middleware-OS combination has good performance.
The OS running at a node provides its own abstractions of local hardware resources for processing, storage and communication.
Middleware utilizes a combination of local resources to implement its mechanisms for remote invocations between objects or processes.
OSs provide support for the middleware layer to work effectively:
• Encapsulation: provide a transparent service interface to the resources of the computer.
• Protection: protect resources from illegitimate access.
• Concurrent processing: users/clients may share resources and access them concurrently.
• Provide the resources needed for (distributed) services and applications to complete their tasks: communication and scheduling.

Page 7: IS473 Distributed Systems CHAPTER 6 Operating System Support

Operating System Layer

Figure: core OS functionality. The process manager, thread manager, communication manager and memory manager sit above the supervisor.

Page 8: IS473 Distributed Systems CHAPTER 6 Operating System Support

Operating System Layer

The core OS components include the following:
Process manager:
• Handles the creation of, and operations upon, processes.
Thread manager:
• Handles thread creation, synchronization and scheduling.
Communication manager:
• Handles communication between threads attached to different processes on the same computer.
• Some OSs support communication between threads in remote processes.
Memory manager:
• Manages physical and virtual memory.
Supervisor:
• Dispatches interrupts, system call traps and other exceptions.

Page 9: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads

Process (a program in execution):
• The unit of resource management for the operating system.
• Its execution environment consists of:
• An address space.
• Thread synchronization and communication resources (e.g. semaphores).
• Computing resources (file systems, windows, etc.).
• Expensive to create and manage.

Threads (lightweight processes):
• Schedulable activities attached to processes.
• Arise from the need for concurrent activities to share resources within one process:
• Enable computation to overlap with input and output.
• Allow concurrent processing of client requests in servers; each request is handled by one thread.
• Easier to create and destroy than processes.

Page 10: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Address Spaces

An address space is a unit of management of a process's virtual memory. It is large and consists of one or more regions, separated by inaccessible areas of virtual memory to allow for growth.
A region is an area of contiguous virtual memory accessible by the threads of the owning process. Each region is specified by:
• Its lowest virtual address and its size.
• Read/write/execute permissions for the process's threads.
• Whether it can be grown upwards or downwards.
Gaps are left between regions to allow for growth, and regions can be overlapped when extended in size.
Data files can be mapped into the address space as an array of bytes in memory (a Java sketch follows).
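In Java, mapping a data file into the address space is exposed through java.nio; a minimal sketch (the file name and region size are illustrative, not from the slides):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    // Minimal sketch: map part of a data file into this process's
    // address space and access it as an array of bytes in memory.
    public class MappedFileDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("data.bin", "rw");
                 FileChannel channel = file.getChannel()) {
                // Map the first 4 KB of the file into a region of virtual memory.
                MappedByteBuffer region =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                byte first = region.get(0);  // reads go straight to the mapped region
                region.put(0, (byte) 42);    // writes are propagated back to the file
                System.out.println("first byte was " + first);
            }
        }
    }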

Page 11: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Address Spaces

Figure: an address space running from address 0 to 2^N, containing a Text region, a Heap, auxiliary regions, and a Stack.

There are at least three main regions:
• Text: a fixed, unmodifiable region containing program code.
• Heap: an extensible region, initialized by values stored in the program's binary file.
• Stack: a downwards-extensible region used by subroutines.
The need to support a separate stack for each thread is the main reason for allowing an indefinite number of regions.
Shared regions are regions of virtual memory mapped to identical physical memory for different processes, to enable inter-process communication.

Page 12: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: New Process Creation

Process creation is an indivisible operation provided by the operating system. The UNIX fork system call creates a process with an execution environment copied from the caller.
The creation of a new process in a distributed system, however, can be separated into two independent aspects:
• The choice of a target host.
• The creation of an execution environment.
Choice of process host: determine the node at which the new process will reside, according to transfer and location policies for sharing the processing load:
• The transfer policy determines whether to situate a new process locally or remotely.
• The location policy determines which node should host a new process.

Page 13: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: New Process Creation

Choice of process host (cont.):
Location policies may be static or adaptive:
• Static location policies operate without regard to the current state of the system; they are based on a mathematical analysis aimed at optimizing the whole system, and may be deterministic or probabilistic.
• Adaptive location policies apply heuristics to make the decision, based on unpredictable run-time factors on each node.
Load-sharing systems may be centralized, hierarchical or decentralized:
• In a centralized system, one manager component takes the decisions.
• In a hierarchical system, several managers are organized in a tree structure, and each manager makes decisions as far down the tree as it can.
• Nodes in a decentralized system exchange information with one another directly to make allocation decisions, using:
• Sender-initiated algorithms: the node that requires a new process to be created is responsible for initiating the transfer decision.
• Receiver-initiated algorithms: a node with a relatively low load advertises its existence to other nodes so that they can transfer work to it.

Page 14: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: New Process Creation

Creation of a new execution environment:
There are two approaches to defining and initializing the address space of a newly created process:
• The address space is of a statically defined format and is initialized with zeros.
• The address space is defined with respect to an existing execution environment.
• In the case of UNIX fork, the newly created child process shares the parent's text region and has its own heap and stack regions.
Copy-on-write approach:
• A general approach in which the child process inherits all regions of the parent process.
• An inherited region is logically copied from the parent's region by sharing its frames between the two address spaces.
• A page in the region is physically copied only when one or the other process attempts to modify it.

Page 15: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: New Process Creation

Figure: copy-on-write. (a) Before write: region RB in process B's address space is logically copied from region RA in process A's; both page tables point to shared frames in the kernel. (b) After write: the modified page has been physically copied, so A and B each have their own copy of it.

Page 16: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Performance

Consider a server that has a pool of one or more threads. Each thread repeatedly removes a client request from a queue of received requests and processes it.
Example (how multi-threading maximizes server throughput; the arithmetic is sketched below):
Request processing: 2 ms; I/O delay (no caching): 8 ms.
Single thread:
• 10 ms per request, 100 requests per second.
Two threads (no caching):
• 8 ms per request, 125 requests per second.
Two threads and caching:
• 75% hit rate.
• Mean I/O time per request: 0.75 * 0 + 0.25 * 8 ms = 2 ms.
• 500 requests per second.
• Increased processing time per request as a result of caching: 2.5 ms.
• 400 requests per second.
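A small sketch of the throughput arithmetic above (all constants are taken from the example; the bottleneck resource determines the rate):

    // Throughput arithmetic for the threaded-server example above.
    public class ThroughputExample {
        public static void main(String[] args) {
            double cpuMs = 2.0, ioMs = 8.0;
            // Single thread: requests are strictly sequential.
            System.out.println(1000 / (cpuMs + ioMs) + " req/s single-threaded"); // 100
            // Two threads, no caching: the 8 ms I/O is the bottleneck.
            System.out.println(1000 / ioMs + " req/s with two threads");          // 125
            // Two threads, 75% hit rate: mean I/O = 0.25 * 8 = 2 ms,
            // so the 2 ms of processing is now the bottleneck.
            double meanIoMs = 0.25 * ioMs;
            System.out.println(1000 / Math.max(cpuMs, meanIoMs) + " req/s cached"); // 500
            // Caching raises per-request processing time to 2.5 ms.
            System.out.println(1000 / 2.5 + " req/s with caching overhead");        // 400
        }
    }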

Page 17: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Performance

Figure: client and server with threads. In the client, thread 1 generates results while thread 2 makes requests to the server. In the server, a receipt-and-queuing thread places incoming requests on a queue, from which a pool of N threads removes and processes them, performing the input-output.

Page 18: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Multi-threaded Server Architectures

There are various ways of mapping requests to threads within a server. The threading architectures of various implementations are:
Worker pool architecture:
• A pool of server threads serves requests from a queue (a sketch follows after this list).
• It is possible to maintain priorities, with one queue per priority level.
Thread-per-request architecture:
• A thread lives only for the duration of request handling.
• Maximizes throughput (no queueing).
• Expensive overhead for thread creation and destruction.
Thread-per-connection / thread-per-object architecture:
• A compromise solution.
• No per-request overhead for creating and destroying threads.
• Requests may still block, hence throughput is not maximal.
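As an illustration of the worker pool architecture, a minimal sketch in Java (class and method names are illustrative, not from the slides):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Minimal worker-pool sketch: an I/O thread enqueues requests and a
    // fixed pool of worker threads drains the shared queue.
    public class WorkerPoolServer {
        private final BlockingQueue<Runnable> requests = new ArrayBlockingQueue<>(64);

        public WorkerPoolServer(int poolSize) {
            for (int i = 0; i < poolSize; i++) {
                Thread worker = new Thread(() -> {
                    while (true) {
                        try {
                            requests.take().run();  // block until a request arrives
                        } catch (InterruptedException e) {
                            return;                 // pool is being shut down
                        }
                    }
                });
                worker.start();
            }
        }

        // Called by the receipt-and-queuing thread for each incoming request.
        public void submit(Runnable request) throws InterruptedException {
            requests.put(request);
        }
    }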

Page 19: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Multi-threaded Server Architectures

Figure: alternative server threading architectures. (a) Thread-per-request: worker threads handle remote requests and I/O, invoking the objects directly. (b) Thread-per-connection: a per-connection thread handles all requests arriving on its connection. (c) Thread-per-object: a per-object thread serves each object, fed by per-object queues.

Page 20: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads vs. Multiple Processes

Why is the multi-threaded process model preferred to multiple single-threaded processes?
Creating a new thread within an existing process is cheaper than creating a process, by roughly 10-20 times (a rough measurement sketch follows):
• New process under UNIX: 11 ms; new thread under Topaz: 1 ms.
Switching to a different thread within the same process is cheaper than switching between threads in different processes, by roughly 5-50 times:
• Process switch in UNIX: 1.8 ms; thread switch in Topaz: 0.4 ms.
Threads within a process can share data and other resources more conveniently and efficiently (without copying or messages):
• No need for message passing.
• Communication via shared memory.
However, threads within a process are not protected from each other:
• One thread can access another thread's data, unless a type-safe programming language is used.
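The creation-cost difference is easy to observe on any JVM; a rough micro-benchmark sketch (absolute timings vary by platform and are unrelated to the UNIX/Topaz figures above):

    // Rough sketch: time thread creation, start and completion on this JVM.
    public class ThreadCost {
        public static void main(String[] args) throws InterruptedException {
            final int n = 1000;
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                Thread t = new Thread(() -> { });  // empty body: measure overhead only
                t.start();
                t.join();
            }
            long perThreadMicros = (System.nanoTime() - start) / n / 1000;
            System.out.println("~" + perThreadMicros + " microseconds per thread");
        }
    }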

Page 21: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Programming

Some languages provide direct support for concurrent programming with threads (e.g. Ada95, Modula-3 and Java).
Java provides the Thread class, which includes the following methods for creating, destroying and synchronizing threads (a usage sketch follows):
Thread(ThreadGroup group, Runnable target, String name): creates a new thread in the SUSPENDED state, belonging to group and identified as name; the thread will execute the run() method of target.
setPriority(int newPriority), getPriority(): set and return the thread's priority.
run(): a thread executes the run() method of its target object, if it has one, and otherwise its own run() method.
start(): changes the state of the thread from SUSPENDED to RUNNABLE.
sleep(int millisecs): causes the thread to enter the SUSPENDED state for the specified time.
destroy(): destroys the thread.
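A minimal usage sketch of this API. Note that SUSPENDED/RUNNABLE are the state names used above rather than java.lang.Thread.State constants, and that destroy() should not be called on modern JVMs, where it was never implemented and has been removed:

    // Minimal sketch: create a thread in a group, set its priority,
    // start it, and wait for it to finish.
    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            ThreadGroup group = new ThreadGroup("workers");
            Runnable target = () -> System.out.println("running in the worker thread");
            Thread worker = new Thread(group, target, "worker-1"); // created, not yet runnable
            worker.setPriority(Thread.NORM_PRIORITY + 1);
            worker.start();   // now RUNNABLE: the scheduler may execute target.run()
            worker.join();    // wait for the thread to end its life
        }
    }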

Page 22: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Java Thread Lifetimes

A new thread is created in the SUSPENDED state, on the same Java Virtual Machine (JVM) as its creator.
A thread executes the run() method after it is made RUNNABLE with the start() method.
Threads can be assigned a priority, and Java implementations will run a particular thread in preference to any thread with lower priority.
A thread ends its life when it returns from the run() method or when its destroy() method is called.
Programs can manage threads in groups:
• A thread's group is assigned at the time of its creation.
• Thread groups are useful to shield various applications running in parallel on one JVM.
• A thread in one group may not interrupt a thread in another group.
• Thread groups facilitate control of the relative priorities of threads.

Page 23: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Java Thread Synchronization

Each thread's local variables in methods are private to it.
An object can have synchronized and non-synchronized methods.
Example: synchronized addTo() and removeFrom() methods serialize requests in the worker pool example.
An object can be accessed through at most one invocation of any of its synchronized methods at a time.
Threads can be blocked and woken up via condition variables:
• A thread awaiting a certain condition calls an object's wait() method.
• Another thread calls notify() or notifyAll() to wake one or all blocked threads.
Example (a sketch of such a queue follows):
• When a worker thread finds there are no requests to be processed, it calls wait() on the instance of Queue.
• When the I/O thread adds a request to the queue, it calls the queue's notify() method to wake up a worker.
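A minimal sketch of the Queue described above, with synchronized addTo() and removeFrom() following the worker-pool example (the implementation details are illustrative):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch of the synchronized request queue described above:
    // workers block in removeFrom() until the I/O thread calls addTo().
    public class Queue {
        private final Deque<Runnable> requests = new ArrayDeque<>();

        public synchronized void addTo(Runnable request) {
            requests.addLast(request);
            notify();                    // wake one waiting worker thread
        }

        public synchronized Runnable removeFrom() throws InterruptedException {
            while (requests.isEmpty()) {
                wait();                  // release the lock and block
            }
            return requests.removeFirst();
        }
    }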

Page 24: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Java Thread Scheduling

A special yield() method is provided so that a thread can de-schedule itself and enable other threads to make progress.
There are two types of thread scheduling:
Preemptive scheduling:
• A thread may be suspended at any point to make way for another thread.
Non-preemptive scheduling:
• A thread runs until it makes a call to the threading system, which may then de-schedule it and schedule another thread to run.
• A code section that contains no threading system call is automatically a critical section.
• Threads run exclusively, and therefore cannot take advantage of a multiprocessor.
• Care must be taken with long-running code sections that do not contain threading system calls (an idiom sketch follows).
• Unsuitable for real-time applications, in which events must be processed at absolute times.
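Under non-preemptive scheduling, long-running code must de-schedule itself explicitly; a minimal sketch of the idiom (on preemptive JVM schedulers, Thread.yield() is merely a hint):

    // Sketch: a long-running computation that periodically offers to
    // de-schedule itself so other threads can make progress.
    public class CooperativeLoop implements Runnable {
        public void run() {
            for (long i = 0; i < 1_000_000_000L; i++) {
                // ... one step of the long-running computation ...
                if (i % 10_000 == 0) {
                    Thread.yield();  // hint: let another runnable thread execute
                }
            }
        }
    }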

Page 25: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Implementation

Many operating system kernels provide support for multi-threaded processes (e.g. Windows NT, Solaris, and Mach):
• They provide system calls for thread creation and management, and schedule individual threads.
Other operating systems have only a single-threaded process abstraction:
• Multi-threaded processes are then implemented by linking a user-level library of procedures to application programs.
• These user-level implementations suffer from the following problems:
• Threads within a process cannot take advantage of a multiprocessor.
• A thread that takes a page fault blocks the entire process and all its threads.
• Threads within different processes cannot be scheduled according to a single scheme of relative prioritization.
• But they have significant advantages:
• Thread creation operations are significantly less costly.
• The thread scheduling module can be customized, and more user-level threads can be supported, to suit particular application requirements.

Page 26: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Implementation

The advantages of user-level and kernel-level thread implementations can be combined:
• The Mach OS enables user-level code to provide scheduling hints to the kernel's thread scheduler.
• Solaris 2 adopts hierarchical scheduling that supports both kernel-level and user-level threads:
• A user-level scheduler assigns each user-level thread to a kernel-level thread.
• This takes advantage of a multiprocessor.
• Disadvantage: it still lacks flexibility; if a kernel-level thread is blocked, then all user-level threads assigned to it are also prevented from running.
Several research projects have developed hierarchical scheduling further to provide greater efficiency and flexibility, e.g. the FastThreads implementation of a hierarchic, event-based scheduling system:
• The main system components are a kernel, running on a computer with one or more processors, and a set of application programs running on it.
• Each application process contains a user-level scheduler to manage its threads.
• The kernel is responsible for allocating virtual processors to processes.

Page 27: IS473 Distributed Systems CHAPTER 6 Operating System Support

Processes and Threads: Threads Implementation

Figure: scheduler activations. (A) Assignment of virtual processors to processes: the kernel allocates virtual processors to processes A and B. (B) Events between the user-level scheduler and the kernel: P idle, P needed, P added, SA blocked, SA unblocked, SA preempted. Key: P = processor; SA = scheduler activation.

Page 28: IS473 Distributed Systems CHAPTER 6 Operating System Support

Communication and Invocation: Invocation Performance

The performance of RPC and RMI mechanisms is a critical factor for effective distributed systems. Clients and servers may make many millions of invocation-related operations in their lifetimes.
Software overheads often predominate over network overheads in invocation times, and invocation times have not decreased in proportion with increases in network bandwidth.
Each invocation mechanism executes code outside the scope of the calling procedure or object, and involves communicating the arguments to that code and returning data values to the caller.
The important performance-related distinctions between invocation mechanisms are:
• Whether they are synchronous or asynchronous.
• Whether they involve a domain transition.
• Whether they involve communication across a network.

Page 29: IS473 Distributed Systems CHAPTER 6 Operating System Support

Communication and Invocation: Invocation Performance

Figure: invocations between address spaces. (a) System call: control transfers from a user-level thread into the kernel via a trap instruction, crossing one protection domain boundary. (b) RPC/RMI within one computer: control transfers between the two user address spaces (User 1, User 2) through the kernel, via privileged instructions. (c) RPC/RMI between computers: thread 1 in User 1 communicates with thread 2 in User 2 across the network, through kernel 1 and kernel 2.

Page 30: IS473 Distributed Systems CHAPTER 6 Operating System Support

Communication and Invocation: Invocation Performance

A null invocation is an RPC (or RMI) without parameters that executes a null procedure and returns no values. It is important as a measure of the fixed overhead, the latency. Execution times for a null procedure call:
• Local procedure call: < 1 microsecond.
• Remote procedure call: ~10 milliseconds, about 10,000 times slower!
Much of the delay is taken by the actions of the operating system kernel and the middleware. The network time, involving about 100 bytes (the null invocation size) transferred at 100 megabits/second, accounts for only 0.01 millisecond (see the arithmetic sketch below).
Factors affecting remote invocation performance:
• Marshalling/unmarshalling, plus operation dispatch at the server.
• Data copying: application -> kernel space -> communication buffers.
• Thread scheduling and context switching.
• Protocol processing, for each protocol layer.
• Network access delays: connection setup, network latency.
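The 0.01 ms figure for network time is straightforward arithmetic; a small sketch:

    // Sketch: wire time for a ~100-byte null invocation at 100 Mbit/s.
    public class NullInvocationWireTime {
        public static void main(String[] args) {
            double bits = 100 * 8;                    // ~100 bytes on the wire
            double bitsPerSecond = 100e6;             // 100 megabits per second
            double millis = bits / bitsPerSecond * 1000;
            System.out.println("wire time: " + millis + " ms"); // 0.008 ms, i.e. ~0.01 ms
        }
    }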

Page 31: IS473 Distributed Systems CHAPTER 6 Operating System Support

Communication and Invocation: Invocation Performance

Shared regions may be used for rapid communication between a user process and the kernel, or between user processes. Data is communicated by writing to and reading from the shared region, without copying it to and from kernel address space.
The delay a client experiences during request-reply interactions over TCP is not necessarily worse than over UDP, and is sometimes better for large messages. The operating system's default buffering can be used to collect several small messages and send them together, rather than sending them in separate packets (a Java sketch follows).
Lightweight RPC (LRPC) is a more efficient invocation mechanism developed for the case of two processes on the same computer:
• Based on optimizations concerning data copying and thread scheduling.
• Uses shared regions for client-server communication, with a different private region (an A stack) between the server and each of its local clients.
• Each client and the server can pass arguments and return values directly via an A stack.
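In Java, relying on this default buffering corresponds to leaving Nagle's algorithm enabled on a TCP socket, which it is by default; a minimal sketch (host name and port are illustrative):

    import java.io.OutputStream;
    import java.net.Socket;

    // Sketch: several small writes that the OS may coalesce into one
    // TCP segment while Nagle's algorithm remains enabled (the default).
    public class SmallMessages {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("server.example.org", 9000)) {
                socket.setTcpNoDelay(false);   // keep Nagle's algorithm on
                OutputStream out = socket.getOutputStream();
                for (int i = 0; i < 10; i++) {
                    out.write(("msg " + i + "\n").getBytes()); // small message
                }
                out.flush();                   // hand the batch to TCP
            }
        }
    }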

Page 32: IS473 Distributed Systems CHAPTER 6 Operating System Support

Communication and Invocation: Invocation Performance

Figure: a lightweight remote procedure call. Steps: 1. Copy args (the client stub places the arguments on the A stack shared with the server); 2. Trap to kernel; 3. Upcall into the server; 4. Execute procedure and copy results; 5. Return (trap). Participants: client, client stub, kernel, server stub, server, and the shared A stack.