Inter-process Communication in Hadoop


Page 1

Inter-process Communication in Hadoop

Page 2

Different types of network interactions found in Hadoop (source: Cloudera)

1. Hadoop RPC calls – These are performed by clients using the Hadoop API, by MapReduce jobs, and among Hadoop services (JobTracker, TaskTrackers, NameNodes, DataNodes).
2. HDFS data transfer – Done when reading or writing data to HDFS, by clients using the Hadoop API, by MapReduce jobs, and among Hadoop services. HDFS data transfers use TCP/IP sockets directly.
3. MapReduce shuffle – The shuffle part of a MapReduce job is the process of transferring data from the Map tasks to the Reduce tasks. As this transfer is typically between different nodes in the cluster, the shuffle uses the HTTP protocol.
4. Web UIs – The Hadoop daemons provide Web UIs for users and administrators to monitor jobs and the status of the cluster. The Web UIs use the HTTP protocol.
5. FSImage operations – These are metadata transfers between the NameNode and the Secondary NameNode. They use the HTTP protocol.

This means that Hadoop uses three different network communication protocols: Hadoop RPC, direct TCP/IP, and HTTP.

Page 3

IPC in the Hadoop Distributed System

• IPC: inter-process communication
• Based on: "Using Hadoop IPC/RPC for distributed applications" by Kelvin Tan
• http://www.supermind.org/blog/520/using-hadoop-ipcrpc-for-distributed-applications
• Hadoop IPC is the underlying mechanism used whenever out-of-process applications need to communicate with one another.

Page 4

Hadoop IPC

• Hadoop IPC:
1. uses binary serialization via java.io.DataOutputStream and java.io.DataInputStream, unlike SOAP or XML-RPC;
2. is a simple, low-fuss RPC mechanism;
3. is unicast; it does not support multicast.
• Why use Hadoop IPC over RMI or java.io.Serializable?
• RMI: Remote Method Invocation
• RPC: Remote Procedure Call
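
To make point 1 concrete, here is a small sketch (not from the slides) that round-trips a Hadoop Writable through plain java.io.DataOutputStream and DataInputStream, the same kind of binary serialization Hadoop IPC uses on the wire; Text is used here simply as a convenient built-in Writable:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Text;

public class WritableRoundTrip {
  public static void main(String[] args) throws IOException {
    // Serialize a Writable to raw bytes with DataOutputStream.
    Text original = new Text("OK");
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    original.write(new DataOutputStream(bytes));

    // Deserialize it back with DataInputStream.
    Text copy = new Text();
    copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

    System.out.println(copy);   // prints: OK
  }
}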

Page 5

Server code

Configuration conf = new Configuration();
// Start an RPC server on localhost:16000
Server server = RPC.getServer(this, "localhost", 16000, conf);
server.start();

Page 6

Client code

Configuration conf = new Configuration();
// The server's InetSocketAddress
InetSocketAddress addr = new InetSocketAddress("localhost", 16000);
ClientProtocol client = (ClientProtocol) RPC.waitForProxy(
    ClientProtocol.class, ClientProtocol.versionID, addr, conf);
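
Once the proxy has been obtained, a remote call looks like a local method call. A short sketch of the follow-up (assuming the ClientProtocol interface and HeartbeatResponse class defined on the next slides):

// Invoke the remote heartbeat() through the proxy; Hadoop IPC serializes the request,
// sends it to localhost:16000, and deserializes the returned HeartbeatResponse.
HeartbeatResponse response = client.heartbeat();

// Release the proxy's underlying connection when the client is done.
RPC.stopProxy(client);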

Page 7

ClientProtocol Interface

interface ClientProtocol extends org.apache.hadoop.ipc.VersionedProtocol {
  public static final long versionID = 1L;

  HeartbeatResponse heartbeat();
}

Note: HeartbeatResponse is a class

Page 8

HeartbeatResponse

public class HeartbeatResponse implements org.apache.hadoop.io.Writable {
  String status;

  public void write(DataOutput out) throws IOException {
    UTF8.writeString(out, status);
  }

  public void readFields(DataInput in) throws IOException {
    this.status = UTF8.readString(in);
  }
}

Page 9

Summary

1. A server which implements an interface (which itself extends VersionedProtocol).
2. One or more clients which call the interface methods.
3. Any method arguments or objects returned by methods must implement org.apache.hadoop.io.Writable.
(A minimal end-to-end sketch follows below.)

Learn about RPC, RMI, TCP/IP, SOAP, REST, …
Also see: http://www.cse.buffalo.edu/~bina/cse486/spring2008/RMIJan30.ppt
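
The following minimal sketch ties the previous slides together, assuming the older (Hadoop 0.20/1.x era) org.apache.hadoop.ipc API used above; it is an illustration, not code from the slides. The HeartbeatServer class name is made up, and it is assumed to live in the same package as HeartbeatResponse so the status field is accessible.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

// Hypothetical server implementing the ClientProtocol interface from page 7.
public class HeartbeatServer implements ClientProtocol {

  // Required by VersionedProtocol: report which protocol version this server speaks.
  public long getProtocolVersion(String protocol, long clientVersion) throws IOException {
    return ClientProtocol.versionID;
  }

  // The remote method itself; its return type is a Writable (HeartbeatResponse, page 8).
  public HeartbeatResponse heartbeat() {
    HeartbeatResponse response = new HeartbeatResponse();
    response.status = "alive";   // assumes same package as HeartbeatResponse
    return response;
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Serve ClientProtocol calls over Hadoop IPC on localhost:16000, as on page 5.
    Server server = RPC.getServer(new HeartbeatServer(), "localhost", 16000, conf);
    server.start();
  }
}

A client built from the page 6 code can then call heartbeat() on its proxy and receive the HeartbeatResponse over Hadoop IPC.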

Page 10

Now to answer Troy’s and Saurab’s questions:

• The output of the mapper is buffered.
• When the buffer is 80% full, it is spilled to the local file system as "spill files", not to HDFS; no replication is needed since the data is, after all, local.
• "The map outputs are copied to the reduce tasktracker’s memory if they are small enough (the buffer’s size is controlled by mapred.job.shuffle.input.buffer.percent, which specifies the proportion of the heap to use for this purpose); otherwise, they are copied to disk. …" (T. White, Hadoop: The Definitive Guide)
• So the question is: how will you design/decide the size of the input split to the mapper? How many mappers? (A back-of-the-envelope sketch follows below.)
• Memory (RAM) size/heap size and the bandwidth between machines are important factors.
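
One way to reason about the split-size question is the split rule used by FileInputFormat-style input formats, splitSize = max(minSize, min(maxSize, blockSize)), with roughly one map task per split. The sketch below is a back-of-the-envelope illustration only; the class name and the concrete sizes are made up, not taken from the slides:

public class SplitSizeEstimate {
  // Mirrors the commonly documented FileInputFormat split-size rule.
  static long computeSplitSize(long minSize, long maxSize, long blockSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  public static void main(String[] args) {
    long blockSize = 64L * 1024 * 1024;          // 64 MB HDFS block (the old default)
    long minSize   = 1L;                          // e.g. mapred.min.split.size
    long maxSize   = Long.MAX_VALUE;              // e.g. mapred.max.split.size
    long inputSize = 10L * 1024 * 1024 * 1024;    // hypothetical 10 GB of input

    long splitSize  = computeSplitSize(minSize, maxSize, blockSize);
    long numMappers = (inputSize + splitSize - 1) / splitSize;  // one map task per split

    // With these numbers: 64 MB splits and about 160 mappers.
    System.out.println("split size = " + splitSize + " bytes, mappers = " + numMappers);
  }
}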

Page 11

There was a question about the partitioner

• Here is the code:

public interface Partitioner<K, V> extends JobConfigurable {
  int getPartition(K key, V value, int numPartitions);
}

The return value is the partition number, i.e. the reducer number (an example partitioner follows below).
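
As a concrete illustration (a sketch, not from the slides), here is a custom partitioner written against the old mapred API shown above; the class name and the first-letter partitioning rule are hypothetical:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Routes all keys that start with the same (lower-cased) first letter to the same reducer.
public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {

  public void configure(JobConf job) {
    // No per-job configuration needed for this example (required by JobConfigurable).
  }

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    if (key.getLength() == 0) {
      return 0;
    }
    int first = Character.toLowerCase(key.toString().charAt(0));
    // The value returned here is the partition number, i.e. the reducer number.
    return (first & Integer.MAX_VALUE) % numPartitions;
  }
}

It would be registered on the job with conf.setPartitionerClass(FirstLetterPartitioner.class).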

Page 12

Hadoop Task flow

[Diagram: HDFS → Map → Partition → Combine → buffer/cache, spilled to local disk → Shuffle → Reduce → HDFS. During the shuffle, data is copied roughly #mappers × #reducers times.]

What sort could the partition and shuffle use?

Page 13

Improvements?
• Parameters to tune in mapred-site.xml are:

• io.sort.mb – the output buffer of a mapper. When this buffer is full, the data is sorted and spilled to disk. Ideally you avoid too many spills. Note that this memory is part of the map task's heap size.

• mapred.map.child.java.opts – the heap size of a map task; the larger it is, the larger you can make the output buffer (io.sort.mb).

• In principle, the number of reduce tasks also influences shuffle speed. The number of reduce rounds (waves) is the total number of reduce tasks divided by the number of reduce slots. Note that the initial shuffle (during the map phase) only shuffles data to the active reducers, so mapred.reduce.tasks is also relevant.

• io.sort.factor – the maximum number of streams merged at once during the sort, on both the map and the reduce side.

• Compression also has a large impact: it speeds up the transfer from mapper to reducer, but the compression/decompression comes at a cost.

• mapred.job.shuffle.input.buffer.percent – the proportion of the reducer's heap used to store map output in memory.
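
As an illustration only, the knobs above could be set programmatically on a JobConf instead of in mapred-site.xml; every value below is a made-up example, not a recommendation:

import org.apache.hadoop.mapred.JobConf;

public class ShuffleTuning {
  public static JobConf tune(JobConf conf) {
    conf.set("io.sort.mb", "256");                               // larger map-side sort buffer
    conf.set("io.sort.factor", "50");                             // wider merges on map and reduce side
    conf.set("mapred.map.child.java.opts", "-Xmx1024m");          // the heap the sort buffer lives in
    conf.setNumReduceTasks(16);                                   // mapred.reduce.tasks
    conf.set("mapred.job.shuffle.input.buffer.percent", "0.70");  // reducer heap share for map outputs
    conf.setCompressMapOutput(true);                              // compress map output for the shuffle
    return conf;
  }
}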

Page 14

More info

• Increasing the sort buffer memory, and the performance gain from doing so, will depend on:
• a) the size and total number of the keys emitted by the mapper
• b) the nature of the mapper tasks (I/O-intensive vs. CPU-intensive)
• c) the available primary memory and map/reduce slots on the given node
• d) data skewness