
Challenges of Process Allocation in Distributed Systems

Presentation 1
Group A4: Syeda Taib, Sean Hudson, Manasi Kapadia

OUTLINE

Definitions Related to Distributed Systems
Problem Definition
Allocation Strategies
Classifications of Process Allocation
Process Allocation Algorithms
Summary
References

Definitions Related to Distributed Systems

Multiprocessor Systems
A collection of tightly-coupled processors that share common memory.

Distributed Systems (the hardware perspective)
Several different, conflicting definitions exist in the literature.
A collection of loosely-coupled, independent processors that do not share common memory.
Linked by some type of communication network.
Processors can be geographically dispersed, e.g. across a LAN, or clustered tightly.
Sometimes called multi-computers.

Definitions Related to Distributed Systems (cont.)

Distributed Systems (the software perspective)
Software for these systems is called a Distributed Operating System (DOS).
A DOS presents a single, unified view of all system resources regardless of physical location.
Services provided by a DOS are completely transparent to users.

Problem Definition

In a collection of independent processor nodes, how do we decide on which processor to run a particular process?

Possible optimization goals (illustrated in the sketch below):
Maximize overall CPU utilization.
Minimize average response time.
Minimize the response-time ratio.

Assumptions:
All processors are the same (clock speeds may be different).
All processors are fully connected to each other.
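As a rough illustration of these metrics, here is a minimal Python sketch that computes average response time and an average response-time ratio for a batch of finished processes. The slides do not define the ratio; the sketch assumes the common definition of response time divided by pure service time, and the Proc record type is invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Proc:
        # Hypothetical record of one finished process (fields assumed for illustration).
        submit_time: float    # when the process arrived in the system
        finish_time: float    # when it completed
        service_time: float   # pure CPU time it needed

    def average_response_time(procs):
        # Response time = time from submission to completion.
        return sum(p.finish_time - p.submit_time for p in procs) / len(procs)

    def average_response_ratio(procs):
        # Assumed definition: response time divided by service time, i.e. how much
        # slower the process ran than it would have on an idle machine.
        return sum((p.finish_time - p.submit_time) / p.service_time for p in procs) / len(procs)

    if __name__ == "__main__":
        batch = [Proc(0.0, 5.0, 2.0), Proc(1.0, 4.0, 3.0)]
        print(average_response_time(batch))   # 4.0
        print(average_response_ratio(batch))  # (2.5 + 1.0) / 2 = 1.75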

Characteristics of Process Allocation Algorithms

Static vs. Dynamic
Non-migratory vs. Migratory
Deterministic vs. Heuristic
Centralized vs. Distributed
Optimal vs. Sub-optimal
Local vs. Global
Sender-initiated vs. Receiver-initiated

Static vs. Dynamic Allocation

Static Allocation: the initial processor assignment is fixed at compile time.

Dynamic Allocation: the initial processor assignment is determined at run time.

Non-migratory vs. Migratory Allocation

Non-migratory Allocation:
After a process is assigned to a processor, it cannot move until it terminates.
Simple to implement, but can lead to unfair load balancing.

Migratory Allocation:
Processes are allowed to move to other processors during execution.
Can be used to achieve better load balancing, but greatly adds complexity to the system design.

Deterministic vs. Heuristic Allocations

Deterministic Allocation:
Information about process behavior is known in advance.
This information can include a list of processes, computing requirements, file requirements, communication requirements, etc.
Provides an "optimal" assignment of processes.

Heuristic Allocation:
Realistically, process behavior is not known in advance because of unpredictable loads on the system.
Uses an approximation of the real load.
Also known as an ad-hoc technique.

Centralized vs. Distributed Allocations

Centralized Allocation:
Collects all necessary information in one central location.
The central location allows better decisions to be made.
Less robust, due to a single point of failure.
A performance bottleneck occurs under heavy loads.

Distributed Allocation:
Collects all necessary information locally.
Process assignment is made locally.
Increases communication overhead and complexity.

Optimal vs. Sub-optimal Allocations

Optimal Allocation: obvious; the best possible assignment under the chosen optimization goal.

Sub-optimal Allocation: an acceptable, approximate solution.

Local vs. Global Allocations

Relates to the transfer policy: the transfer policy determines when to transfer a process.

Local Allocation:
The transfer decision is made based on local load information.
If the machine's load is below some threshold, keep the process; otherwise transfer it somewhere else (see the sketch below).
Simple, but not an optimal solution.

Global Allocation:
The transfer decision is made based on global load information.
Provides slightly better results, but is expensive to implement.
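A minimal Python sketch of the local, threshold-based transfer policy described above. The threshold value and the local_load helper are assumptions for illustration; a real system would measure load as discussed later under "Some Design Issues".

    import os

    LOAD_THRESHOLD = 2.0   # assumed cut-off; tuned per system in practice

    def local_load() -> float:
        # Hypothetical load measure: 1-minute load average on Unix-like systems.
        return os.getloadavg()[0]

    def should_transfer_new_process() -> bool:
        # Local transfer policy: keep the process if this machine's load is
        # below the threshold, otherwise try to transfer it somewhere else.
        return local_load() >= LOAD_THRESHOLD

    if __name__ == "__main__":
        if should_transfer_new_process():
            print("overloaded: look for another node (the location policy decides where)")
        else:
            print("run the new process locally")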

Sender–Initiated vs. Receiver-Initiated Allocations

Relates to the location policy: the location policy determines where to transfer a process.

Sender-Initiated Allocation:
An overloaded (sender) machine sends out requests for help to other machines in order to offload its new processes.
The sender locates spare CPU cycles on other machines.

Sender–Initiated vs. Receiver-Initiated Allocations (cont.)

Receiver-Initiated Allocation:
An under-loaded (receiver) machine announces to other machines that it can take on extra work.

[Cartoon: a sender-initiated node calls out "I'm overloaded!", while a receiver-initiated node calls out "I'm free tonight!"]

Some Design Issues

How do we measure load?
The number of processes on each machine, but this does not include background processes, daemons, etc.
The number of running or ready processes.
CPU utilization, measured using timer interrupts.
(See the sketch below for simple versions of these measures.)

Is the communication cost justified?
Transferring the process may not improve performance.

Is complex better?
The performance of an algorithm does not necessarily increase with its complexity.
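A small Python sketch of the simplest load measures mentioned above: counting all processes versus counting only running ones, plus sampled CPU utilization. It assumes the third-party psutil package is available; a real DOS would use kernel-level counters or timer-interrupt sampling instead.

    import psutil  # third-party package, assumed installed (pip install psutil)

    def total_process_count() -> int:
        # Naive measure: every process counts, including daemons and
        # background processes, so it tends to overstate real load.
        return len(psutil.pids())

    def running_process_count() -> int:
        # Better measure: count only processes that actually want the CPU.
        return sum(1 for p in psutil.process_iter(['status'])
                   if p.info['status'] == psutil.STATUS_RUNNING)

    def cpu_utilization(sample_seconds: float = 1.0) -> float:
        # Sample CPU utilization over a short interval (a stand-in for the
        # timer-interrupt sampling mentioned on the slide).
        return psutil.cpu_percent(interval=sample_seconds)

    if __name__ == "__main__":
        print(total_process_count(), running_process_count(), cpu_utilization())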

Allocation Algorithms

Graph-Theoretic Deterministic Algorithm
Up-Down Algorithm
Sender-Initiated Distributed Heuristic Algorithm
Receiver-Initiated Distributed Heuristic Algorithm

Graph-Theoretic Deterministic Algorithm

Given:
CPU and memory requirements of each process.
The average amount of traffic between each pair of processes.

The system is represented as a weighted graph: each node represents a process, and each arc represents a flow of messages between a pair of processes.

Partition the graph into k disjoint sub-graphs, subject to constraints, e.g. limits on the CPU and memory requirements of each sub-graph.

Graph-Theoretic Deterministic Algorithm (cont.)

Arcs that go from one sub-graph to another represent network traffic.

The total network traffic is the sum of the weights of the arcs cut by the partition boundaries (e.g. 30 units in the figure below).

The goal is to find the partitioning that minimizes the network traffic while meeting all the constraints.

[Figure: a weighted graph of processes A through I partitioned among CPU 1, CPU 2, and CPU 3; arc weights give the message traffic between process pairs, and the arcs crossing the partition boundaries sum to the total network traffic.]

Graph-Theoretic Deterministic Algorithm (cont.)

Look for clusters of processes that are tightly coupled (high intra-cluster traffic flow) but which interact little with other clusters (low inter-cluster traffic flow).

Disadvantage: limited applicability, because the algorithm must know information about the system in advance.
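The following Python sketch illustrates the core computation: given arc weights between processes and a candidate assignment of processes to k processors, it sums the cross-partition traffic, then brute-forces all assignments to find the cheapest feasible one. The constraint check is a stand-in (a simple cap on processes per CPU), and the example graph and weights are invented, not the ones from the figure.

    from itertools import product

    # Message traffic between pairs of processes (undirected, invented example values).
    TRAFFIC = {
        ("A", "B"): 3, ("B", "C"): 2, ("C", "D"): 4,
        ("A", "E"): 1, ("E", "F"): 5, ("D", "F"): 2,
    }
    PROCESSES = sorted({p for pair in TRAFFIC for p in pair})

    def cross_traffic(assignment):
        # Sum the weights of arcs whose endpoints sit on different CPUs --
        # exactly the "arcs cut by the partition boundaries" on the slide.
        return sum(w for (a, b), w in TRAFFIC.items()
                   if assignment[a] != assignment[b])

    def feasible(assignment, k, max_per_cpu=3):
        # Stand-in constraint: no CPU may host more than max_per_cpu processes.
        return all(list(assignment.values()).count(cpu) <= max_per_cpu
                   for cpu in range(k))

    def best_partition(k=3):
        # Brute force over all k**n assignments; fine for a toy graph, far too
        # slow for real systems (the general partitioning problem is NP-hard).
        best = None
        for cpus in product(range(k), repeat=len(PROCESSES)):
            assignment = dict(zip(PROCESSES, cpus))
            if not feasible(assignment, k):
                continue
            cost = cross_traffic(assignment)
            if best is None or cost < best[0]:
                best = (cost, assignment)
        return best

    if __name__ == "__main__":
        cost, assignment = best_partition()
        print("minimum network traffic:", cost)
        print("assignment:", assignment)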

Up-Down Algorithm

Centralized usage table:
Maintained by a coordinator.
Each workstation w has an entry U(w) in the table, initially 0.
The table is updated whenever a significant event occurs.
While workstation w is running processes on another workstation, U(w) accumulates positive penalty points over time.
While workstation w has unsatisfied requests pending, U(w) accumulates negative penalty points over time.
When no requests are pending and processors are idle, U(w) is moved a certain number of points closer to zero, until it reaches zero. Hence, the score goes up and down.

Up-Down Algorithm (cont.)

Centralized usage table (cont.):
Positive points indicate that the workstation is a net user of system resources.
Negative points mean that it needs resources.
Zero is neutral.

Allocation decisions are based on the usage table whenever scheduling events occur.
When a process is created, its machine asks the usage table coordinator to allocate it a processor. If none is available, the request is temporarily denied and a note is made of the request.
When a processor becomes free, the pending request whose owner has the lowest score wins.

It allocates capacity fairly.
Disadvantage: the central node soon becomes a bottleneck.
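A minimal Python sketch of the usage-table bookkeeping described above. The penalty rates and the tick-based update are assumptions for illustration; the real algorithm defines the exact point values and update events differently.

    from collections import deque

    class UsageTable:
        # Coordinator-side bookkeeping for the up-down algorithm (sketch only).

        def __init__(self, workstations, penalty=1, decay=1):
            self.score = {w: 0 for w in workstations}        # U(w), initially 0
            self.remote_jobs = {w: 0 for w in workstations}  # processes w runs on other machines
            self.pending = deque()                           # noted, unsatisfied requests
            self.penalty = penalty                           # points added or subtracted per tick (assumed)
            self.decay = decay                               # drift toward zero per idle tick (assumed)

        def tick(self):
            # Periodic update; in this sketch a "significant event" is one clock tick.
            for w in self.score:
                if self.remote_jobs[w] > 0:
                    self.score[w] += self.penalty            # net user of system resources
                elif w in self.pending:
                    self.score[w] -= self.penalty            # has unsatisfied requests pending
                elif self.score[w] > 0:
                    self.score[w] = max(0, self.score[w] - self.decay)
                elif self.score[w] < 0:
                    self.score[w] = min(0, self.score[w] + self.decay)

        def request_processor(self, w, processor_free):
            # Called when workstation w creates a process and wants a remote processor.
            if processor_free:
                self.remote_jobs[w] += 1
                return True
            self.pending.append(w)                           # request denied: make a note of it
            return False

        def processor_freed(self):
            # The pending request whose owner has the lowest score wins the free processor.
            if not self.pending:
                return None
            winner = min(self.pending, key=lambda w: self.score[w])
            self.pending.remove(winner)
            self.remote_jobs[winner] += 1
            return winner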

Sender-Initiated Distributed Heuristic Algorithm

The process creator polls other nodes.
A process is transferred if the load on the polled node is below some threshold.
If no suitable node is found, the process runs on the originating node.

Random polling policy: avoid thrashing by imposing a limit on the number of probes per transfer.

Under high system loads, constant polling consumes CPU cycles.
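A short Python sketch of the sender-initiated policy: the overloaded creator randomly probes up to a fixed number of nodes and hands the new process to the first one whose load is below the threshold, otherwise it runs the process locally. The constants and the load_of callback are illustrative assumptions.

    import random

    PROBE_LIMIT = 3        # cap on probes per transfer, to avoid thrashing (assumed value)
    LOAD_THRESHOLD = 2     # a polled node accepts work only if its load is below this (assumed value)

    def place_new_process(local_node, other_nodes, load_of):
        # Sender-initiated location policy: the overloaded creator polls up to
        # PROBE_LIMIT randomly chosen nodes and transfers the new process to the
        # first one whose load is below the threshold; otherwise it runs locally.
        for node in random.sample(other_nodes, min(PROBE_LIMIT, len(other_nodes))):
            if load_of(node) < LOAD_THRESHOLD:
                return node
        return local_node

    if __name__ == "__main__":
        # Simulated cluster: load_of stands in for a real remote load query.
        loads = {"n1": 5, "n2": 1, "n3": 4, "n4": 0}
        print("run new process on:", place_new_process("me", list(loads), loads.get))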

Receiver-Initiated Distributed Heuristic Algorithm

After a process completes, the node polls other nodes for more work.
Transfer decisions are made at process-completion time, based on CPU queue length.
The node randomly polls a limited number of nodes until one is found that has an acceptable load level.
If the search fails, the node waits for some pre-determined time interval before polling again.
This scheme does not put extra load on the system at critical times.
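And a matching sketch of the receiver-initiated side: when a node finishes a process and is under-loaded, it randomly probes a few nodes and pulls work from the first one with a long enough CPU queue, backing off for a fixed interval if the search fails. All constants and the probe/back-off shape are illustrative assumptions.

    import random
    import time

    PROBE_LIMIT = 3          # number of nodes probed per search (assumed)
    QUEUE_THRESHOLD = 2      # a probed node gives up work only above this queue length (assumed)
    RETRY_INTERVAL = 5.0     # seconds to wait before polling again after a failed search (assumed)

    def find_work(other_nodes, queue_length_of):
        # Receiver-initiated location policy, run when a process completes and
        # this node is under-loaded: probe a few random nodes and take work
        # from the first one whose CPU queue is long enough.
        for node in random.sample(other_nodes, min(PROBE_LIMIT, len(other_nodes))):
            if queue_length_of(node) > QUEUE_THRESHOLD:
                return node            # pull a process from this overloaded node
        time.sleep(RETRY_INTERVAL)     # search failed: back off before trying again
        return None

    if __name__ == "__main__":
        queues = {"n1": 0, "n2": 4, "n3": 1}
        print("pull work from:", find_work(list(queues), queues.get))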

Summary

References

Casavant, Thomas and Singhal, Mukesh. Distributed Computing Systems. IEEE Press.

Mullender, Sape. Distributed Systems. ACM Press.

Stallings, William. Operating Systems.

Tanenbaum, Andrew. Distributed Operating Systems.