Hadoop and HDFS, presented by Vijay Pratap Singh
Hadoop Distributed File System
(HDFS)
SEMINAR GUIDE: Mr. PRAMOD PAVITHRAN, HEAD OF DIVISION, COMPUTER SCIENCE & ENGINEERING, SCHOOL OF ENGINEERING, CUSAT
PRESENTED BY: VIJAY PRATAP SINGH, REG NO: 12110083, S7 CS-B, ROLL NO: 81
CONTENTS
WHAT IS HADOOP
PROJECT COMPONENTS IN HADOOP
MAP/REDUCE
HDFS
ARCHITECTURE
GOALS OF HADOOP
COMPARISON WITH OTHER SYSTEMS
CONCLUSION
REFERENCES
WHAT IS HADOOP…???
o Hadoop is an open-source software framework.
o Hadoop framework consists of two main layers:
o Distributed file system (HDFS)
o Execution engine (MapReduce)
o Supports data-intensive distributed applications.
o Licensed under the Apache v2 license.
o It enables applications to work with thousands of computation-independent computers and petabytes of data
WHY HADOOP…???
PROJECT COMPONENTS IN HADOOP
MAP/REDUCE
o Hadoop is the popular open-source implementation of map/reduce
o MapReduce is a programming model for processing large data sets
o MapReduce is typically used to do distributed computing on clusters of computers
o MapReduce can take advantage of locality of data, processing data on or near the storage assets to decrease transmission of data.
o The model is inspired by the map and reduce functions.
o "Map" step: The master node takes the input, divides it into smaller sub-problems, and distributes them to slave nodes. Each slave node processes its smaller problem and passes the answer back to the master node.
o "Reduce" step: The master node then collects the answers to all the sub-problems and combines them in some way to form the final output.
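The map and reduce steps above can be sketched with the classic word-count example. This is a minimal, single-process sketch in plain Python; in a real Hadoop cluster the framework performs the input splitting, the shuffle and the distribution across nodes:

```python
from collections import defaultdict

def map_step(chunk):
    # "Map": turn one input split into intermediate (key, value) pairs.
    return [(word, 1) for word in chunk.split()]

def reduce_step(word, counts):
    # "Reduce": combine all values collected for one key.
    return word, sum(counts)

def map_reduce(chunks):
    # Shuffle: group intermediate pairs by key (done by the framework in Hadoop).
    groups = defaultdict(list)
    for chunk in chunks:
        for word, count in map_step(chunk):
            groups[word].append(count)
    return dict(reduce_step(w, c) for w, c in groups.items())

print(map_reduce(["big data", "big clusters"]))  # {'big': 2, 'data': 1, 'clusters': 1}
```

Because each map call touches only its own chunk and each reduce call only one key's values, both phases can run in parallel on different machines, which is the whole point of the model.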
HDFS
Highly scalable file system
◦ 6,000 nodes and 120 PB
◦ Add commodity servers and disks to scale storage and I/O bandwidth
Supports parallel reading & processing of data
◦ Optimized for streaming reads/writes of large files
◦ Bandwidth scales linearly with the number of nodes and disks
Fault tolerant & easy to manage
◦ Built-in redundancy
◦ Tolerates disk and node failure
◦ Automatically manages addition/removal of nodes
◦ One operator per 3,000 nodes
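The "bandwidth scales linearly" claim can be illustrated with a back-of-the-envelope calculation. The per-disk throughput and disks-per-node figures below are illustrative assumptions, not numbers from the slides:

```python
def aggregate_bandwidth_gbps(nodes, disks_per_node, mb_per_s_per_disk):
    # Aggregate streaming-read bandwidth if every disk is read in parallel.
    total_mb_per_s = nodes * disks_per_node * mb_per_s_per_disk
    return total_mb_per_s / 1000  # convert MB/s to GB/s

# Assumed figures: 12 disks per node, ~100 MB/s sequential read per commodity disk.
print(aggregate_bandwidth_gbps(1000, 12, 100))  # 1200.0 GB/s across 1,000 nodes
print(aggregate_bandwidth_gbps(2000, 12, 100))  # doubles with the node count
```

Doubling the node count doubles the aggregate figure, which is exactly the linear scaling the slide describes: adding commodity servers adds both storage and I/O bandwidth.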
Scalable, Reliable & Manageable
ISSUES IN CURRENT SYSTEM
BIG DATA
INCREASING BIG DATA
HADOOP’S APPROACH
[Diagram: Big Data is split across parallel Computation tasks; their outputs are merged into a Combined Result]
ARCHITECTURE OF HADOOP
HADOOP MASTER/SLAVE ARCHITECTURE
MAP REDUCE ENGINE
ARCHITECTURE OF HDFS
CLIENT INTERACTION TO HADOOP
HDFS WRITE
[Diagram: The client asks the NameNode to write block A of file.txt. Using its rack-awareness map (Rack 1: DN 1; Rack 2: DN 7, 9), the NameNode replies "Write to DataNodes [1,7,9]". DataNodes 1, 7 and 9 signal Ready, and the client writes block A, which is replicated across the three nodes.]
HDFS WRITE (PIPELINED)
[Diagram: Block A is forwarded along the pipeline DN 1 to DN 7 to DN 9. Each DataNode reports "Block Received", the client receives "Success", and the NameNode records the metadata: file.txt = Blk A, DN: 1,7,9.]
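The pipelined write can be sketched as a toy simulation. The node names and replication factor of 3 follow the slides; the classes and method names are illustrative only and are not Hadoop's actual Java API:

```python
class DataNode:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data, pipeline):
        # Store the block locally, then forward it to the next node in the pipeline.
        self.blocks[block_id] = data
        if pipeline:
            pipeline[0].write(block_id, data, pipeline[1:])

class NameNode:
    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.metadata = {}  # filename -> block_id -> list of DataNode names

    def allocate(self, filename, block_id, replication=3):
        # Pick target DataNodes (real HDFS also applies rack awareness here).
        targets = self.datanodes[:replication]
        self.metadata.setdefault(filename, {})[block_id] = [d.name for d in targets]
        return targets

# Client writes block A of file.txt: it sends the data once, to the first
# DataNode, which forwards it down the pipeline to the other replicas.
dns = [DataNode(n) for n in ("DN1", "DN7", "DN9")]
nn = NameNode(dns)
pipeline = nn.allocate("file.txt", "A")
pipeline[0].write("A", b"block contents", pipeline[1:])
print(nn.metadata["file.txt"]["A"])       # ['DN1', 'DN7', 'DN9']
print(all("A" in d.blocks for d in dns))  # True
```

The pipeline is the key design choice: the client uploads each block only once, and the DataNodes replicate it among themselves, so client bandwidth does not grow with the replication factor.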
HDFS READ
[Diagram: The client asks the NameNode to read block A of file.txt. Consulting its rack-awareness map (Rack 1: DN 1; Rack 2: DN 7, 9), the NameNode replies "Available at nodes [1,7,9]", and the client reads the block from one of the replicas.]
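The read path can be sketched the same way. This toy model mirrors the slide's rack map; real HDFS sorts all replicas by network distance to the client rather than picking just one:

```python
# Rack-awareness map from the slides: DN 1 on rack 1, DN 7 and DN 9 on rack 2.
RACKS = {"DN1": "rack1", "DN7": "rack2", "DN9": "rack2"}

def choose_replica(block_locations, client_rack):
    # Prefer a replica on the client's own rack to avoid inter-rack traffic.
    same_rack = [dn for dn in block_locations if RACKS[dn] == client_rack]
    return same_rack[0] if same_rack else block_locations[0]

# NameNode reports block A of file.txt available at nodes [1,7,9].
locations = ["DN1", "DN7", "DN9"]
print(choose_replica(locations, "rack2"))  # DN7
print(choose_replica(locations, "rack1"))  # DN1
```

This is the "move computation to the data" idea from the goals slide in miniature: by exposing block locations, the NameNode lets readers (and MapReduce tasks) pick the nearest copy.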
GOALS OF HDFS
Very Large Distributed File System
◦ 10,000 nodes, 100 million files, 10 PB
Assumes Commodity Hardware
◦ Files are replicated to handle hardware failure
◦ Detects failures and recovers from them
Optimized for Batch Processing
◦ Data locations are exposed so that computations can move to where the data resides
◦ Provides very high aggregate bandwidth
SCALABILITY OF HADOOP
EASE TO PROGRAMMERS
HADOOP VS. OTHER SYSTEMS
HADOOP USERS
TO LEARN MORE
Source code
◦ http://hadoop.apache.org/version_control.html
◦ http://svn.apache.org/viewvc/hadoop/common/trunk/
Hadoop releases
◦ http://hadoop.apache.org/releases.html
Contribute to it
◦ http://wiki.apache.org/hadoop/HowToContribute
CONCLUSION
HDFS provides a reliable, scalable and manageable solution for working with huge amounts of data
Future-secure: HDFS has been deployed in clusters of 10 to 4,000 DataNodes
◦ Used in production at companies such as Yahoo!, Facebook, Twitter and eBay
◦ Many enterprises, including financial companies, use Hadoop
REFERENCES
[1] M. Zukowski, S. Heman, N. Nes, and P. Boncz. Cooperative Scans: Dynamic Bandwidth Sharing in a DBMS. In VLDB '07: Proceedings of the 33rd International Conference on Very Large Data Bases, pages 23–34, 2007.
[2] Tom White. Hadoop: The Definitive Guide. O'Reilly Media, Third Edition, May 2012.
[3] Jeffrey Shafer, Scott Rixner, and Alan L. Cox. The Hadoop Distributed Filesystem: Balancing Portability and Performance. Rice University, Houston, TX.
[4] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. The Hadoop Distributed File System. Yahoo!, Sunnyvale, California, USA.
[5] Jens Dittrich and Jorge-Arnulfo Quiané-Ruiz, Information Systems Group, Saarland University. Efficient Big Data Processing in Hadoop MapReduce.
Thank you…
Queries?