Hadoop Distributed File System


DFS
Distributed File System

Share Files Easily in a Public Folder

What about this type of network?

What Is DFS in the Real World?

DFS allows administrators to consolidate file shares that may exist on multiple servers to appear as though they all live in the same location, so that users can access them from a single point on the network.

Example: shares hosted on different servers (say \\ServerA\Reports and \\ServerB\Archive) can be presented to users as a single tree such as \\Domain\Public.

Benefits of DFS

• Resource management – users access all resources through a single point
• Accessibility – users do not need to know the physical location of the shared folder; they can navigate to it through Explorer and the domain tree
• Fault tolerance – shares can be replicated, so if the server in Chicago goes down, resources will still be available to users
• Workload management – DFS allows administrators to distribute shared folders and workloads across several servers for more efficient use of network and server resources

Hadoop

Assumptions and Goals (1)

• An HDFS instance may consist of thousands of server machines
• With that many components, some component of HDFS is always non-functional, so failure is the norm rather than the exception
• Detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS

Assumptions and Goals (2)

• Applications that run on HDFS need streaming access to their data sets
• HDFS is designed for batch processing rather than interactive use by users
• HDFS supports large data sets, typically gigabytes to terabytes in size

Assumptions and Goals (3)

• Moving Computation is Cheaper Than Moving Data
• Portability Across Heterogeneous Hardware and Software Platforms

NameNode and DataNodes (1)

• Master/slave architecture
• An HDFS cluster consists of:
 - a single NameNode (a master server) that manages the file system namespace and regulates access to files by clients
 - a number of DataNodes, usually one per node in the cluster, which manage the storage attached to the nodes they run on

NameNode and DataNodes (2)

• Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes

• The NameNode executes file system namespace operations like opening, closing, and renaming files and directories

NameNode and DataNodes (3)

• The DataNodes are responsible for serving read and write requests from the file system’s clients

• The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode

NameNode and DataNodes (4)

NameNode and DataNodes (5)

• HDFS nodes typically run a GNU/Linux operating system (OS)

• HDFS is built using the Java language
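
Because both HDFS and its client API are Java, a small sketch can show what talking to HDFS looks like from an application: the NameNode resolves the file metadata while the DataNodes serve the actual bytes. This is a minimal illustration, not part of the original slides; the NameNode address hdfs://namenode:8020 and the path /demo/hello.txt are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsHello {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder address
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/demo/hello.txt");
            // Write: the NameNode allocates blocks, the DataNodes store the bytes
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeUTF("hello hdfs");
            }
            // Read: block locations come from the NameNode, data from the DataNodes
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
            fs.close();
        }
    }

The later sketches below reuse this fs handle and continue inside the same main method.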

File System NameSpace (1)

• HDFS supports a traditional hierarchical file organization

• HDFS does not yet implement user access permissions

• HDFS does not support hard links or soft links

• NameNode maintains the file system namespace
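
The namespace operations listed above map onto ordinary client calls. A rough sketch, continuing inside main() of the HdfsHello example and using made-up paths:

    Path dir = new Path("/projects/reports");          // hypothetical directory
    fs.mkdirs(dir);                                     // create a directory
    fs.rename(dir, new Path("/projects/reports-old"));  // rename within the namespace
    for (org.apache.hadoop.fs.FileStatus st : fs.listStatus(new Path("/projects"))) {
        System.out.println(st.getPath());               // list children of a directory
    }
    fs.delete(new Path("/projects/reports-old"), true); // recursive delete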

File System NameSpace (2)

• An application can specify the number of replicas of a file that should be maintained by HDFS

• The number of copies of a file is called the replication factor of that file
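
As a sketch of how an application might set the replication factor for a single file (same hypothetical fs handle; the path and the factor of 3 are example values only):

    Path important = new Path("/demo/important.dat");   // assumes this file already exists
    // Ask HDFS to keep 3 replicas of this file's blocks
    boolean accepted = fs.setReplication(important, (short) 3);
    System.out.println("replication change accepted: " + accepted);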

Data Replication (1)

• HDFS reliably stores very large files across machines in a large cluster
• It stores each file as a sequence of blocks
• All blocks except the last block are the same size
• The block size and replication factor are configurable per file
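
Both knobs can also be supplied when a file is created. A hedged sketch with example values only, again continuing from the HdfsHello code:

    Path big = new Path("/demo/big.bin");
    short replication = 2;                 // example replication factor
    long blockSize = 128L * 1024 * 1024;   // example 128 MB block size
    int bufferSize = 4096;
    try (FSDataOutputStream out =
             fs.create(big, true, bufferSize, replication, blockSize)) {
        out.write(new byte[1024]);         // placeholder payload
    }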

Data Replication (2)

• The NameNode makes all decisions regarding the replication of blocks
• It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster

Data Replication (3)

• Receipt of a Heartbeat implies that the DataNode is functioning properly.

• A Blockreport contains a list of all blocks on a DataNode
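
The Blockreport itself belongs to the internal DataNode-to-NameNode protocol, but a client can observe the resulting block-to-DataNode placement for a file. A small sketch, reusing the hypothetical fs handle and the /demo/big.bin path from above:

    org.apache.hadoop.fs.FileStatus status = fs.getFileStatus(new Path("/demo/big.bin"));
    org.apache.hadoop.fs.BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());
    for (org.apache.hadoop.fs.BlockLocation b : blocks) {
        // Each entry names the DataNodes that hold one block of the file
        System.out.println(b.getOffset() + " -> " + String.join(",", b.getHosts()));
    }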

Data Replication (4)

File System Metadata (1)

(Diagram: the NameNode's local file system holds two files, the FsImage and the EditLog)

File System Metadata (2)

• EditLog – records every change that occurs to the file system metadata
• FsImage – stores the block mapping and file system properties

File System Metadata (3)

• The NameNode keeps an image of the entire file system namespace and file Blockmap in memory.

• This key metadata is compact: 4 GB of RAM on the NameNode is enough to support a huge number of files and directories

• Checkpoint – when the NameNode starts up, it reads the FsImage and EditLog, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes this new version out to a new FsImage on disk. A checkpoint only occurs when the NameNode starts up.

• Blockreport – when a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to these local files, and sends this report to the NameNode

Robustness

• Cluster Rebalancing

• Data Integrity (checksums; see the sketch after this list)

• Metadata Disk Failure

• Snapshots
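
For the data-integrity point, the client API exposes checksum handling directly; verification of HDFS checksums is on by default when reading. A minimal sketch with the same hypothetical fs handle and path:

    fs.setVerifyChecksum(true);  // client-side checksum verification (default is on)
    org.apache.hadoop.fs.FileChecksum sum = fs.getFileChecksum(new Path("/demo/big.bin"));
    System.out.println(sum);     // algorithm name plus checksum bytes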

References:

1. http://www.maxi-pedia.com/what+is+DFS
2. http://www.apache.org
