Communication and Synchronization in Distributed Systems


Page 1: Communication And Synchronization In Distributed Systems

Communication and synchronization in distributed systems

Johanna Ortiz, Diego Niño, Alejandro Velandia

Page 2: Communication And Synchronization In Distributed Systems

Communication in distributed systems

In a distributed system there is no shared memory, so the whole nature of interprocess communication must be reconsidered. To communicate, processes must adhere to agreed rules known as protocols.

For wide-area distributed systems, these protocols often take the form of several layers, each with its own goals and rules.

Messages can be exchanged in many ways, and there are numerous design options in this regard; an important one is the "remote procedure call".

It is also important to consider communication between groups of processes, not only between two processes.

Page 3: Communication And Synchronization In Distributed Systems

Client-server model

Two different roles in the interaction:
- Client: requests the service. Request: operation + data.
- Server: provides the service. Response: result.
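As a rough illustration of these two roles, the sketch below pairs one client request with one server response over a TCP socket in Python; the host, port, and the echo-style service are illustrative assumptions, not part of the slides.

    import socket

    def server(host="127.0.0.1", port=9000):
        # Server role: wait for a request and return a result.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen()
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)               # request = operation + data
                conn.sendall(b"result for " + request)  # response = result

    def client(host="127.0.0.1", port=9000):
        # Client role: send a request and read the response.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((host, port))
            cli.sendall(b"SUM 2 3")
            print(cli.recv(1024))                       # b'result for SUM 2 3'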

Page 4: Communication And Synchronization In Distributed Systems

RPC

The client-server model is a convenient way to structure a distributed operating system, but it has a flaw: the essential paradigm around which communication is built is input/output, since the send/receive primitives are dedicated to performing I/O. A different option was proposed by Birrell and Nelson: allow programs to call procedures located on other machines. When a process on machine "A" calls a procedure on machine "B": the calling process is suspended; the procedure is executed on "B"; information is carried from caller to callee in the parameters and comes back in the procedure's result. The programmer does not have to worry about message passing or I/O. This method is called Remote Procedure Call, or RPC. The calling procedure and the called procedure run on different machines, that is, in different address spaces.
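A minimal sketch of the idea using Python's standard xmlrpc module follows; the add() procedure, host, and port are assumptions for illustration, and the slides do not prescribe any particular RPC library.

    # Machine "B": registers a procedure that remote callers may invoke.
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        return a + b

    def serve():
        server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
        server.register_function(add, "add")
        server.serve_forever()

    # Machine "A": calls the remote procedure as if it were local.
    # The caller is suspended until "B" returns the result.
    #
    #   import xmlrpc.client
    #   proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    #   print(proxy.add(2, 3))   # -> 5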

Page 5: Communication And Synchronization In Distributed Systems

Proxy or cache model

Three different roles in the interaction:
- Client: requests the service.
- Server: provides the service.
- Proxy: intermediary (agent) between client and server; it may cache responses.
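A minimal sketch of the proxy role is shown below: the proxy acts as a server toward the client and as a client toward the real server, caching responses along the way. The backend callable is a hypothetical stand-in for the real server.

    class CachingProxy:
        def __init__(self, backend):
            self.backend = backend   # callable that plays the server role
            self.cache = {}

        def request(self, key):
            if key in self.cache:            # serve the reply from the cache
                return self.cache[key]
            response = self.backend(key)     # otherwise forward to the server
            self.cache[key] = response
            return response

    proxy = CachingProxy(lambda key: f"result for {key}")
    print(proxy.request("page1"))   # answered by the server
    print(proxy.request("page1"))   # answered from the proxy's cache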

Page 6: Communication And Synchronization In Distributed Systems

Multilayer model

A server can in turn be a client of another server. This is typical of web applications: presentation + business logic + data access.

Page 7: Communication And Synchronization In Distributed Systems

Peer-to-peer model

A dialogue protocol coordinates the entities among themselves; at the end of each stage the entities synchronize and exchange information.

Page 8: Communication And Synchronization In Distributed Systems

Communication characteristics

Blocking or non-blocking operation mode.

Sending:
- Blocking: the sender is blocked until the data has been successfully delivered to the destination.
- Non-blocking: the sender stores the data in a kernel buffer and resumes execution immediately.

Reception:
- Non-blocking: if data is available it is read by the receiver; otherwise the call indicates that there was no message.
- Blocking: if there is no data available, the receiver blocks until a message arrives.
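The sketch below contrasts blocking and non-blocking reception with a Python UDP socket; the address and port are illustrative assumptions.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 9999))

    # Blocking reception: recvfrom() does not return until a datagram arrives.
    # data, addr = sock.recvfrom(1024)

    # Non-blocking reception: recvfrom() raises BlockingIOError immediately
    # if no message is available.
    sock.setblocking(False)
    try:
        data, addr = sock.recvfrom(1024)
        print("received:", data)
    except BlockingIOError:
        print("no message available")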

Page 9: Communication And Synchronization In Distributed Systems

Reliability

Issues related to the reliability of communication:
- Ensuring that the message is received at the target node(s).
- Maintaining the order of message delivery.
- Flow control, to avoid "flooding" the receiving node.
- Fragmentation of messages, to remove limits on maximum message size.
If the communication system does not guarantee some of these aspects, the application must provide them itself.

Page 10: Communication And Synchronization In Distributed Systems

Communication in groups:

The destination of a message is a group of processes: multicast.
Possible applications in distributed systems:
- Updating multiple replicas of data.
- Use of replicated services.
- Collective operations in parallel computation.
The implementation depends on whether the network provides multicast; if it does not, multicast is implemented by sending N individual messages.
A process can belong to several groups, and there is a group address.
A group usually has a dynamic nature:
- Processes can be added to and removed from the group.
- Membership management must be coordinated with the communication.
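Where the network provides multicast, it can be used directly; the sketch below shows a UDP multicast sender and receiver in Python. The group address 224.1.1.1 and the port are illustrative assumptions.

    import socket
    import struct

    MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007

    # Receiver: join the group address, then wait for a message.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Sender: one sendto() reaches every process that joined the group.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    tx.sendto(b"replica update", (MCAST_GRP, MCAST_PORT))

    print(rx.recvfrom(1024))   # (b'replica update', (<sender address>, <port>))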

Page 11: Communication And Synchronization In Distributed Systems

Design aspects of group communication

Group models:
- Open group: an external process can send messages to the group. Typically used to replicate data or services.
- Closed group: only processes in the group can send messages to it. Commonly used in parallel processing (peer-to-peer model).
Atomicity: either all processes in the group receive the message or none does.

Page 12: Communication And Synchronization In Distributed Systems

Ordering of message reception:

Three options:
- FIFO ordering: messages from one sender reach every receiver in the order in which they were sent. There are no guarantees for messages from different senders.
- Causal ordering: if the messages sent by two senders have a possible "cause and effect" relationship, every process in the group first receives the "cause" message and then the "effect" message. If there is no such relationship, no delivery order is guaranteed. The definition of "causality" is discussed under "Synchronization".
- Total ordering: all messages (from any sender) sent to a group are received in the same order by all members.
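As a rough sketch of the FIFO option only (not of causal or total ordering), the receiver below delivers each sender's messages in sequence-number order, buffering anything that arrives early; the class and message format are assumptions for illustration.

    from collections import defaultdict

    class FifoReceiver:
        def __init__(self):
            self.next_seq = defaultdict(int)   # next expected number per sender
            self.pending = defaultdict(dict)   # buffered out-of-order messages

        def on_receive(self, sender, seq, payload):
            self.pending[sender][seq] = payload
            delivered = []
            # Deliver consecutive messages starting at the expected number.
            while self.next_seq[sender] in self.pending[sender]:
                delivered.append(self.pending[sender].pop(self.next_seq[sender]))
                self.next_seq[sender] += 1
            return delivered

    r = FifoReceiver()
    print(r.on_receive("P1", 1, "second"))   # [] - message 0 not seen yet
    print(r.on_receive("P1", 0, "first"))    # ['first', 'second']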

Page 13: Communication And Synchronization In Distributed Systems

SYNCHRONIZATION OF DISTRIBUTED SYSTEMS

Page 14: Communication And Synchronization In Distributed Systems

SYNCHRONIZATION OF DISTRIBUTED SYSTEMS

Algorithms for clock synchronization:
- Cristian's algorithm
- Berkeley algorithm
- Averaging algorithm
Algorithms for mutual exclusion:
- Centralized
- Distributed

Page 15: Communication And Synchronization In Distributed Systems

Cristian's algorithm

This algorithm is based on the use of Coordinated Universal Time (UTC), which is received by one computer in the distributed system. This machine, called the UTC receiver, in turn receives periodic time requests from the other machines and answers each one, as quickly as possible, with the requested UTC time, so that all machines can update their clocks and keep the whole system synchronized. The receiver obtains UTC through various means, including radio broadcasts and the Internet, among others. A major problem in this algorithm is that time must not run backwards: the UTC time reported by the receiver may be earlier than the current time of the machine that requested it, in which case that machine cannot simply set its clock back and must adjust it gradually. The UTC server must handle time requests as interrupts, which affects its response time. The round-trip time of the request and its reply must also be taken into account for synchronization: the propagation time is added to the server's time when the sender synchronizes on receiving the response.
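A minimal sketch of the client-side adjustment is shown below; get_utc_time() is an assumed helper standing in for the request to the UTC receiver.

    import time

    def cristian_offset(get_utc_time):
        t0 = time.time()                # local time when the request is sent
        server_time = get_utc_time()    # UTC time reported by the time server
        t1 = time.time()                # local time when the reply arrives
        # Estimate the one-way propagation delay as half the round trip and
        # add it to the server's timestamp before comparing with local time.
        estimated_now = server_time + (t1 - t0) / 2
        return estimated_now - t1       # offset to apply to the local clock

    # Trivial stand-in for the UTC receiver, for demonstration only:
    print(cristian_offset(lambda: time.time()))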

Page 16: Communication And Synchronization In Distributed Systems

Berkeley Algorithm

A distributed system based on the Berkeley algorithm has no source of Coordinated Universal Time (UTC); instead, the system manages its own time. To synchronize time across the system there is again a time server, but, unlike in Cristian's algorithm, it behaves proactively. This server periodically polls some of the machines in the system for their time, computes an average from the replies, and sends the result to all machines in the system so that they synchronize.
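The averaging step can be sketched as below; polling and message passing are abstracted away, and the clock values are illustrative.

    def berkeley_adjustments(server_time, client_times):
        """Return the clock adjustment each node should apply."""
        all_times = [server_time] + list(client_times)
        average = sum(all_times) / len(all_times)
        adjustments = {"server": average - server_time}
        for i, t in enumerate(client_times):
            adjustments[f"client{i}"] = average - t
        return adjustments

    print(berkeley_adjustments(1000.0, [1005.0, 995.0]))
    # {'server': 0.0, 'client0': -5.0, 'client1': 5.0}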

Page 17: Communication And Synchronization In Distributed Systems

Averaging algorithm

This algorithm has no server to control, centralize, and maintain time synchronization in the system. Instead, each machine announces its local time in the messages it exchanges with the other machine or machines of the system. Each machine also starts a local timer that measures a fixed-length interval; at the end of each interval it averages its own local time with the times reported by the machines that interact with it, and adopts the result as its new time.

Page 18: Communication And Synchronization In Distributed Systems

Algorithms for Mutual Exclusion

These algorithms are defined to guarantee mutual exclusion between processes that require access to a critical region of the system.

Centralized:

This algorithm imitates the mutual exclusion philosophy used in uniprocessor systems. One machine in the distributed system, called the coordinator, is responsible for controlling access to the various critical sections. Every process that requires access to a critical section must request it from the coordinator, which grants it if the critical section is available; otherwise the requesting process is placed in a queue. When a process that was granted access to a critical section finishes its task, it likewise informs the coordinator, so that access can be granted to the next requesting process waiting in the queue. This algorithm has a major limitation: the coordinator is a single point of control for access to all critical sections of the distributed system, and it therefore becomes a bottleneck that can affect the efficiency of the processes running in the system. Likewise, any failure of the coordinator results in the halting of those processes.
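A minimal sketch of the coordinator's bookkeeping follows; message transport and failure handling are left out, and the class is an assumption for illustration.

    from collections import deque

    class Coordinator:
        def __init__(self):
            self.holder = None     # process currently inside the critical section
            self.queue = deque()   # waiting requesters

        def request(self, pid):
            if self.holder is None:
                self.holder = pid
                return "GRANT"
            self.queue.append(pid)
            return "WAIT"          # the requester is queued

        def release(self, pid):
            assert pid == self.holder
            self.holder = self.queue.popleft() if self.queue else None
            return self.holder     # next process granted access, if any

    c = Coordinator()
    print(c.request("P1"))   # GRANT
    print(c.request("P2"))   # WAIT
    print(c.release("P1"))   # P2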

Page 19: Communication And Synchronization In Distributed Systems

Distributed

This algorithm was developed to eliminate the problem inherent in the centralized algorithm, so its approach avoids having a single coordinator control access to the critical sections of the distributed system. Each process that requires access to a critical section sends its request to all processes in the system, identifying the critical section it wishes to enter. Each receiving process answers the requester in one of the following ways:
- The critical section is not in use and not wanted by the receiver: reply OK.
- The critical section is in use by the receiver: no reply is sent; the receiver places the sender's request in its queue.
- The critical section is not in use but is also being requested by the receiver: reply OK if the sender's request is earlier than the receiver's own; otherwise no reply is sent and the sender's request is queued.
However, this algorithm also has a problem: if a process fails, it cannot answer a request from a sender, and the missing reply is interpreted as a denial of access, blocking every process that requests access to any critical section.
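The receiver's reply decision can be sketched as follows, in the style of Ricart and Agrawala; the state names and Lamport-style timestamps are assumptions consistent with the description above.

    def on_request(state, own_timestamp, own_pid, req_timestamp, req_pid):
        """state is 'RELEASED', 'HELD', or 'WANTED' for this critical section."""
        if state == "HELD":
            return "QUEUE"     # using the section: defer the reply
        if state == "WANTED":
            # Both want the section: the earlier request wins (ties by pid).
            if (req_timestamp, req_pid) < (own_timestamp, own_pid):
                return "OK"
            return "QUEUE"
        return "OK"            # RELEASED: not using it and not wanting it

    print(on_request("WANTED", own_timestamp=7, own_pid="P1",
                     req_timestamp=5, req_pid="P2"))   # OK - requester is earlier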

Page 20: Communication And Synchronization In Distributed Systems

Ring (Token Ring)

This algorithm establishes a logical, software-controlled ring of processes through which a token circulates from process to process. When a process receives the token, it may enter a critical section if it needs to, carry out its work there, leave the critical section, and pass the token to the next process in the ring. This cycle repeats continuously around the ring. When a process receives the token and does not need to enter a critical section, it passes the token on immediately to the next process. The algorithm has a weakness related to the possible loss of the token that controls access to the critical sections: if the token is lost, the processes in the system assume it is being used by some process that is inside a critical section.
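One circulation of the token can be simulated as below; the ring order and the set of processes that want the critical section are illustrative assumptions.

    def circulate_token(ring, wants_cs):
        """ring: process ids in ring order; wants_cs: ids needing the section."""
        for pid in ring:                   # the token visits each process in turn
            if pid in wants_cs:
                print(pid, "enters the critical section, works, leaves")
            # Otherwise the process passes the token on immediately.
        print("token returns to", ring[0])

    circulate_token(["P1", "P2", "P3"], wants_cs={"P2"})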

Page 21: Communication And Synchronization In Distributed Systems

Election

These algorithms are designed to elect a coordinator process. They guarantee that, once the election process concludes, all processes in the system agree on the choice of the new coordinator.

The bully algorithm (Garcia-Molina). This algorithm starts when a process notices that the coordinator is no longer answering its requests. That process then sends an election message to every process with a higher number than its own, which leads to one of the following scenarios:
- A process with a higher number than the sender answers OK and takes over the election; in the end the highest-numbered surviving process is elected coordinator of the system.
- No process answers the election message, in which case the sending process itself is elected as the coordinator.

Ring algorithm. This algorithm operates similarly to the bully algorithm, with the following differences:
- The election message is circulated to all processes in the system, not only to those with numbers higher than the issuer's.
- Each process adds its identifier to the message.
- Once the message has gone around the ring and returned to the sending process, that process designates the highest-numbered process as the new coordinator and circulates a new message around the ring announcing who the coordinator is.
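A minimal sketch of one round of the bully election is shown below; the "alive" set stands in for the processes that actually answer, which is an assumption of the example.

    def bully_election(initiator, processes, alive):
        """processes: ids where a larger id means higher priority."""
        higher = [p for p in processes if p > initiator and p in alive]
        if not higher:
            return initiator   # nobody higher answered: the initiator wins
        # A higher process answers OK and takes over the election; in the end
        # the highest-numbered live process becomes coordinator.
        return max(higher)

    print(bully_election(2, processes=[1, 2, 3, 4, 5], alive={1, 2, 3, 4}))  # 4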

Page 22: Communication And Synchronization In Distributed Systems

Atomic Transactions

This is a higher-level synchronization method which, unlike the methods reviewed so far, does not burden the developer with issues of mutual exclusion, deadlock prevention, and failure recovery. On the contrary, it directs the developer's effort toward the real, substantive problems of synchronizing distributed systems. The concept of atomic transactions is to guarantee that all the operations that make up a transaction are carried out completely and successfully. If one of them fails, the whole transaction fails, its effects are rolled back, and it is restarted.
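The all-or-nothing idea can be sketched as below: either every operation of the transaction is applied, or the state is rolled back to what it was before. The bank-account state and operations are illustrative assumptions.

    def run_transaction(state, operations):
        snapshot = dict(state)          # copy kept for rollback
        try:
            for op in operations:
                op(state)               # each operation mutates the shared state
            return True                 # commit: keep the new state
        except Exception:
            state.clear()
            state.update(snapshot)      # abort: restore the previous state
            return False

    def withdraw(s):
        s["balance"] -= 30

    def crash(s):
        raise RuntimeError("node failed")

    account = {"balance": 100}
    print(run_transaction(account, [withdraw, crash]), account)
    # False {'balance': 100} - the partial withdrawal was undone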

Page 23: Communication And Synchronization In Distributed Systems

Threads (lightweight processes)

Today's operating systems can support multiple threads of control within a single process. Two notable features of threads are that they share a single address space and that, within it, they simulate multiprogramming, as if they were separate processes running in parallel; only on a multiprocessor machine can they actually run in parallel. A thread can be in one of four states:
- Running: the thread is executing.
- Blocked: the thread is waiting for a critical resource.
- Ready: the thread can run again.
- Finished: its task has ended.

Implementing a thread package. There are two ways to implement threads:

In user space. With a user-level package the kernel does not need to know of the threads' existence, so the kernel handles only a single thread per process. The threads run on top of a runtime system, a group of procedures; whenever a thread must be suspended, the runtime system stores its registers in a table, looks for an unblocked thread, and reloads the machine registers with that thread's saved values. The main advantages are:
- Each process can have its own thread-scheduling algorithm.
- Thread switching is faster, since no kernel calls are involved.
- It scales better as the number of threads increases.
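A minimal sketch of several threads of control sharing one address space inside a single process, using Python's threading module (the counter and the thread count are illustrative):

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(n):
        global counter
        for _ in range(n):
            with lock:          # the counter lives in the shared address space,
                counter += 1    # so access to it is synchronized

    threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)              # 4000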

Page 24: Communication And Synchronization In Distributed Systems

In the kernel

Unlike the user-space implementation, the implementation in the kernel needs no runtime system: the kernel itself keeps a table with the threads of each process, even though this means a higher cost in resources and machine processing time. One of its most important advantages is that blocking system calls pose no problem, since they block only the calling thread rather than the whole process.