STORAGE-BASED CONVERGENCE BETWEEN HPC AND CLOUD TO HANDLE BIG DATA

Deliverable number: D2.1
Deliverable title: Intermediate Report
Big Data processing: state of the art
WP2 DATA SCIENCE
Editor: Gabriel Antoniu (Inria)
Main Authors: Alvaro Brandon (UPM), Ovidiu Marcu (Inria), Pierre Matri (UPM)
Grant Agreement number: 642963
Project ref. no: MSCA-ITN-2014-ETN-642963
Project acronym: BigStorage
Project full name: BigStorage: Storage-based convergence between HPC and Cloud to handle Big Data
Starting date (dur.): 1/1/2015 (48 months)
Ending date: 31/12/2018
Project website: http://www.bigstorage-project.eu
Coordinator: María S. Pérez
Address: Campus de Montegancedo sn. 28660 Boadilla del Monte, Madrid, Spain
Reply to: [email protected]
Phone: +34-91-336-7380




Executive Summary

This document provides an overview of the progress of the work done until M24 of the project BigStorage (from 01-01-2015 until 31-12-2016) in WP2 Data Science.

This report presents an overview of the major systems that have been studied by the Early Stage Researchers (ESRs). This covers the following topics:

• MapReduce, the de facto state-of-the-art programming model for Big Data processing, and its reference implementations (Google MapReduce and Hadoop);
• Performance-optimized MapReduce processing based on in-memory storage (Spark, Flink);
• More generic models for Big Data processing, supporting SQL-like queries, data streams, more generic workflows, and general graph processing;
• Systems leveraging machine learning for Big Data processing.

This report presents an overview of each of these topics, including a brief analysis of the state of the art in each case and a preliminary presentation of the specific problems to be addressed in the project.


Document Information

IST Project Number: MSCA-ITN-2014-ETN-642963
Acronym: BigStorage
Title: Storage-based convergence between HPC and Cloud to handle Big Data
Project URL: http://www.bigstorage-project.eu
Document URL: http://bigstorage-project.eu/index.php/deliverables
EU Project Officer: Mr. Szymon Sroda
Deliverable: D2.1 Intermediate Report on WP2
Work package: WP2 Data Science
Date of Delivery: Planned: 31.12.2016; Actual: 15.12.2016
Status: Version 1.0, final
Nature: report
Dissemination level: consortium
Distribution List: Consortium Partners
Document Location: http://bigstorage-project.eu/index.php/deliverables
Responsible Editor: Gabriel Antoniu (Inria), [email protected], Tel: +33 2 99 84 72 44
Authors (Partner): Alvaro Brandon (UPM), Ovidiu Marcu (Inria), Pierre Matri (UPM)
Reviewers: All advisors

Abstract (for dissemination):

This report presents an overview of the major systems that have been studied by the Early Stage Researchers (ESRs). This covers the following topics:

• MapReduce, the de facto state-of-the-art programming model for Big Data processing, and its reference implementations (Google MapReduce and Hadoop);
• Performance-optimized MapReduce processing based on in-memory storage (Spark, Flink);
• More generic models for Big Data processing, supporting SQL-like queries, data streams, more generic workflows, and general graph processing;
• Systems leveraging machine learning for Big Data processing.

This report presents an overview of each of these topics, including a brief analysis of the state of the art in each case and a preliminary presentation of the specific problems to be addressed in the project.

Keywords: Data processing, data science, Big Data processing, MapReduce, streaming, workflow processing, Spark, Flink

Version  Modification(s)                     Date        Author(s)
0.1      Initial template and structure      22.09.2016  Gabriel Antoniu, Inria
0.2      Sections from all authors           14.10.2016  All authors
0.3      Internal version for review         07.12.2016  Gabriel Antoniu, Inria


0.4      Comments from internal reviewers    12.12.2016  Internal reviewers
0.5      Final version to commission         15.12.2016  Gabriel Antoniu, Inria


Project Consortium Information

Participants and contacts:

• Universidad Politécnica de Madrid (UPM), Spain: María S. Pérez, Email: [email protected]
• Barcelona Supercomputing Center (BSC), Spain: Toni Cortes, Email: [email protected]
• Johannes Gutenberg University (JGU) Mainz, Germany: André Brinkmann, Email: [email protected]
• Inria, France: Gabriel Antoniu, Email: [email protected], [email protected]
• Foundation for Research and Technology - Hellas (FORTH), Greece: Angelos Bilas, Email: [email protected]
• Seagate, UK: Malcolm Muggeridge, Email: [email protected]
• DKRZ, Germany: Thomas Ludwig, Email: [email protected]
• CA Technologies Development Spain (CA), Spain: Victor Muntes, Email: [email protected]
• CEA, France: Jacques-Charles Lafoucriere, Email: [email protected]
• Fujitsu Technology Solutions GMBH, Germany: Sepp Stieger, Email: [email protected]


Table of Contents

Executive Summary ........................................................... 2
Document Information ........................................................ 3
Project Consortium Information .............................................. 5
Table of Contents ........................................................... 6
1 Introduction .............................................................. 7
  1.1 WP2 Overview .......................................................... 7
  1.2 ESR Participation and Contributions ................................... 8
2 MapReduce: the de facto state-of-the-art programming model for Big Data analytics ... 9
  2.1 Google MapReduce ...................................................... 9
  2.2 Hadoop: the "standard" open-source implementation of MapReduce ....... 10
3 Going faster: in-memory MapReduce processing ............................. 12
  3.1 Spark ................................................................ 12
  3.2 Flink ................................................................ 13
  3.3 Experimental framework comparisons ................................... 14
4 Beyond MapReduce: more generic programming models for data analytics ..... 16
  4.1 Supporting SQL-like queries .......................................... 16
  4.2 Processing Big Data streams .......................................... 19
  Streaming SQL ............................................................ 23
  4.3 Machine learning tools for Big Data processing ....................... 24
  4.4 Graph processing ..................................................... 26
  4.5 Workflows ............................................................ 26
  The Dataflow model ....................................................... 26
5 Big Data processing: what requirements for storage? ...................... 29
6 Limitations and goals .................................................... 32
  6.1 Identified limitations of the state of the art ....................... 32
  6.2 Goals ................................................................ 33
7 References ............................................................... 34


1 Introduction

1.1 WP2 Overview

This subsection provides an overview of WP2 as it was presented in the project proposal.

Data Science has emerged as the fourth paradigm for scientific discovery, based on data-intensive computing [Tan09]. Motivated by the emergence of Big Data applications, Data Science involves several data-centric aspects: storage, manipulation, analysis with statistics and machine learning, and decision-making, among others. In recent years, the MapReduce [Dea04] programming model (e.g., Amazon Elastic MapReduce, Hadoop on Azure-HDInsight) has emerged as the de facto standard for Big Data processing. However, many Data Science applications do not fit this model and require a more general data orchestration, independent of any programming model. Modeling them as workflows is an option [3, 4], but current cloud infrastructures lack specific support for efficient workflow data handling. State-of-the-art solutions rely on high-latency storage services (Azure Blobs, Amazon S3) or implement application-specific overlays that pipeline data from one task to another. However, in large-scale distributed environments consisting of multiple datacenters, such solutions suffer from high latencies when large remote datasets are accessed frequently.

Data Science involves unprecedented complexity in the Big Data management process, which is not addressed by existing cloud data handling services. This WP aims to investigate the new models, mechanisms and policies that are needed to support dynamic coordination for data sharing, dissemination, analysis and exploitation across widely distributed sites with reasonable QoS levels. Another major aspect regards energy, where some research work has explored power savings in MapReduce clusters through virtual machine placement [5], data compression [6], and static data layout techniques [7]. However, such efforts were limited to a single MapReduce application: substantial effort is needed to cope with shared platforms where multiple, diverse Data Science applications run concurrently. We plan to investigate such challenges in this WP.

WP2 is organized in 3 tasks, as follows.

Task 2.1. Big Data Processing Models

The objective of this task is to design next-generation data processing models for applications that require general data orchestration, independent of any programming model. Based on the use cases studied in WP1, we will focus on workflows composed of many tasks, linked via data- and control-flow dependencies, as well as on stream data processing. We will examine the limitations and bottlenecks of current NoSQL/MapReduce-based solutions; define a set of minimal requirements for efficient workflow and real-time data management; and design techniques for fast transfers between workflow nodes (inter- and intra-datacenter) that allow efficient stream processing.

Task 2.2. Green Big Data Analysis


This task will focus on investigating new techniques for energy-efficient Big Data analysis, reducing the data set input by the application of machine learning techniques on shared clusters and by designing new energy-aware job schedulers for MapReduce-like data analysis. More specifically, we will define: a data model for energy consumption for MapReduce-like data analysis on clouds; mechanisms for scheduling data analysis tasks across clouds to meet response-time requirements while reducing the overall energy consumption; and new data layout schemes that guarantee data availability and reduce response time while reducing energy consumption.

Task 2.3. Data-Driven Decision Making

Predictive insights and accurate predictive models built from data are essential in today's business applications. This task aims to train researchers in the expertise of data-driven decision making, connecting the low level (scalable data infrastructures) to the real needs of applications. We will: study the current data-driven decision approaches, identifying their limitations in scalability and performance; define a set of minimal requirements for efficient and scalable data-driven decision approaches; and design novel techniques for efficient and scalable data-driven decision-making processes.

1.2 ESR Participation and Contributions

The full list of ESRs in the current form of the project is:

ESR  Name                    Institution  Main advisor
1    Ovidiu-Cristian Marcu   INRIA        Gabriel Antoniu / Alexandru Costan
2    Alvaro Brandon          UPM          Maria S. Perez
3    Pierre Matri            UPM          Maria S. Perez
4    Muhammad Umar Hameed    Mainz        Andre Brinkmann
5    Rizkallah Touma         BSC          Anna Queralt / Toni Cortes
6    Fotios Papaodyssefs     Seagate      Malcolm Muggeridge
7    Linh Thuy Nguyen        INRIA        Adrien Lebre
8    Athanasios Kiatipis     Fujitsu      Sepp Stieger
9    Fotis Nikolaidis        CEA          Philippe Deniel
10   Dimitrios Ganosis       FORTH        Angelos Bilas / Manolis Marazakis
11   Nafiseh Moti            Mainz        Andre Brinkmann
12   Georgios Koloventzos    BSC          Ramon Nou / Toni Cortes
13   Mohammed-Yacine Taleb   INRIA        Shadi Ibrahim
14   Yevhen Alforov          DKRZ         Thomas Ludwig / Michael Kuhn
15   Michał Zasadziński      CA           Victor Muntes

The ESRs participating in WP2 are ESR1 (lead for Task 2.1), ESR2 (lead for Task 2.2) and ESR3 (lead for Task 2.3). They are all also involved in other WPs.

Note: This report aims to present the state of the art in Big Data processing by describing existing systems with their particular features. As WP5 is fully dedicated to energy-related aspects, to avoid redundancy, we decided not to detail the energy dimension in this report.


2 MapReduce: the de facto state-of-the-art programming model for Big Data analytics

2.1 Google MapReduce

Figure 1. MapReduce Conceptual Architecture (extracted from [Dea04]).

A first milestone in the evolution of data processing is the introduction of the MapReduce programming model by Google in 2004 [Dea04]. Among the motivations that triggered the proposal of the MapReduce model were two important challenges at the time: 1) providing effective scalability (hard to implement efficiently in practice with traditional database-oriented techniques); 2) ensuring that fault tolerance is implemented correctly, without generating high complexity for system developers or end users.

In its essence, a MapReduce job consists of a sequence of two processing stages (as illustrated in Figure 1): a Map stage, which filters the input data to produce intermediate data relevant to a particular data analysis request, and a Reduce stage, which aggregates the intermediate data to produce the final result. Each stage is (potentially) highly parallel: the initial input data set is split into an arbitrarily large number of subsets, each of which is typically processed by a separate Map task; the intermediate data produced by the Map tasks is shuffled and sorted, then processed by the Reduce tasks, again with full parallelism. Typically, both the input and the output data of the job are stored in a file system. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.
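As a concrete illustration, the two-stage model can be sketched as a word count in plain Python. This is a minimal, single-process sketch of the model's semantics only, not of Google's distributed implementation: splits, shuffling and task scheduling are simulated with ordinary data structures.

```python
from collections import defaultdict
from itertools import chain

# Map stage: one map task per input split, emitting (key, value) pairs.
def map_task(split):
    for line in split:
        for word in line.split():
            yield (word, 1)

# Shuffle/sort: group all intermediate values by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce stage: aggregate each key's values into the final result.
def reduce_task(key, values):
    return key, sum(values)

def mapreduce(splits):
    intermediate = chain.from_iterable(map_task(s) for s in splits)
    return dict(reduce_task(k, v) for k, v in shuffle(intermediate).items())

counts = mapreduce([["big data big"], ["data storage"]])
# counts == {'big': 2, 'data': 2, 'storage': 1}
```

In the real framework, each `map_task` and `reduce_task` invocation would run on a separate node, and the shuffle would move data over the network.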


Google MapReduce is the first reference implementation of the MapReduce model. Over the years it has been updated with multiple features and optimizations [Aki16a], proving its efficiency, for example, by sorting 10 PB of data in about 6.5 hours. The system was used at Google until 2015 [Aki16a].

2.2 Hadoop: the "standard" open-source implementation of MapReduce

Soon after the MapReduce model was published, the Apache Hadoop [Hada] open-source ecosystem was born and became the de facto standard for MapReduce processing, through its adoption by the main cloud computing providers. Hadoop currently consists of three main projects:

• Hadoop Distributed File System (HDFS), which provides high-throughput data access to applications;
• Hadoop YARN, a framework for resource and cluster management;
• Hadoop MapReduce, which implements the MapReduce processing model.

The MapReduce model and its open-source implementation Hadoop MapReduce were quickly and widely adopted by both industry and academia, mostly because of their simple yet powerful programming model. To understand how MapReduce works, its main components and its architecture, we can follow the simple tutorial from [Hadb], briefly summarized below.

In a typical implementation of MapReduce such as Hadoop, the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see the HDFS Architecture Guide) run on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where the data is already present, resulting in very high aggregate bandwidth across the cluster.

Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract classes. These, and other job parameters, comprise the job configuration.

The Hadoop job client then submits the job (jar/executable etc.) and configuration to a ResourceManager, which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, and providing status and diagnostic information to the job client.

Maps are individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs. All intermediate values are passed to the Reducer(s) to determine the final output. Users can control the grouping by specifying a Comparator. The number of maps is usually driven by the total size of the inputs, that is, the total number of blocks of the input files.


A Reducer has 3 primary phases:

• Shuffle: in this phase the framework fetches the relevant partition of the output of all the mappers and places it on the local storage of the reducer tasks that will process the partition;
• Sort: in this phase the framework groups Reducer inputs by key (since different mappers may have output the same key);
• Reduce: the actual reduce processing takes place.

The shuffle and sort phases occur simultaneously; as map outputs are fetched, they are merged.
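The sort-based grouping that connects these phases can be sketched in plain Python. This is a single-node simplification: a real reducer merges pre-sorted map-output runs fetched over the network rather than sorting a flat list from scratch.

```python
from itertools import groupby
from operator import itemgetter

# Intermediate (key, value) pairs fetched from several mappers; the same
# key may appear in the output of different map tasks.
fetched = [("b", 1), ("a", 1), ("b", 1), ("a", 1), ("c", 1)]

# Sort phase: ordering by key brings equal keys together...
fetched.sort(key=itemgetter(0))

# ...so the reduce function can be invoked once per key group.
def reduce_fn(key, values):
    return key, sum(values)

result = [reduce_fn(k, (v for _, v in grp))
          for k, grp in groupby(fetched, key=itemgetter(0))]
# result == [('a', 2), ('b', 2), ('c', 1)]
```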


3 Going faster: in-memory MapReduce processing

The very simple API of MapReduce comes with the important caveat that users are forced to express applications in terms of map and reduce functions. However, more and more applications have requirements that are independent of any programming model. For instance, iterative algorithms used in graph analytics and machine learning are not well served by the original MapReduce model. These limitations are addressed by a second generation of analytics platforms (e.g., Apache Spark [Spark], Apache Flink [Flink]) that emerged in an attempt to unify the landscape of Big Data processing.

Spark and Flink facilitate the development of multi-step data pipelines using directed acyclic graph (DAG) patterns. At a higher level, both engines implement a driver program that describes the high-level control flow of the application, which relies on two main parallel programming abstractions: (1) structures to describe the data and (2) parallel operations on these data.

While the data representations differ, both Flink and Spark implement similar dataflow operators (e.g., map, reduce, filter, distinct, collect, count, save, etc.), or the API can be used to obtain the same result by combining operators. For instance, Spark's reduceByKey operator (called on a dataset of key-value pairs to return a new dataset of key-value pairs where the value of each key is aggregated using the given reduce function) is equivalent to Flink's groupBy followed by the aggregate operators sum or reduce.
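This equivalence can be mimicked locally in plain Python. The sketch below captures the semantics only, ignoring partitioning and distribution; the function names are illustrative and are not the engines' actual APIs.

```python
from collections import defaultdict
from functools import reduce

# A small dataset of key-value pairs.
pairs = [("a", 1), ("b", 2), ("a", 3)]

# Spark-style reduceByKey: fold the values of each key with a reduce function.
def reduce_by_key(pairs, fn):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: reduce(fn, vs) for k, vs in groups.items()}

# Flink-style groupBy followed by sum: group first, then apply the aggregate.
def group_by_then_sum(pairs):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return {k: sum(vs) for k, vs in groups.items()}

# Both formulations yield {'a': 4, 'b': 2}.
assert reduce_by_key(pairs, lambda x, y: x + y) == group_by_then_sum(pairs)
```

In the real engines, the practical difference is where the aggregation happens: reduceByKey can combine values on the map side before shuffling, while a plain group-then-aggregate may ship all values across the network first.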

3.1 Spark

Apache Spark [Spark] introduced Resilient Distributed Datasets (RDDs) [Zah12a], a set of in-memory data structures able to cache intermediate data across a set of nodes, in order to efficiently support iterative algorithms. RDDs (read-only, resilient collections of objects partitioned across multiple nodes) hold provenance information (referred to as lineage) and can be rebuilt in case of failures by partial re-computation from ancestor RDDs.

Each RDD is by default lazy (i.e., computed only when needed) and ephemeral (i.e., once it actually gets materialized, it will be discarded from memory after its use). However, since RDDs might be repeatedly needed during computations, the user can explicitly mark them as persistent, which moves them into a dedicated cache for persistent objects.

The operations available on RDDs seem to emulate the expressivity of the MapReduce paradigm overall; however, there are two important differences:

1. Due to their lazy nature, maps will only be processed before a reduce, accumulating all computational steps in a single phase;
2. RDDs can be cached for later use, which greatly reduces the need to interact with the underlying distributed file system in more complex workflows that involve multiple reduce operations.
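Laziness and explicit persistence can be illustrated with a toy dataset class in plain Python. This is a deliberately simplified single-machine analogue: `LazyDataset`, its methods and the caching policy are invented for illustration and do not mirror Spark's actual API.

```python
class LazyDataset:
    """Toy analogue of an RDD: transformations are recorded, not executed."""

    def __init__(self, compute):
        self._compute = compute      # deferred computation (the "lineage")
        self._cache = None           # filled only if persist() was requested
        self._persistent = False

    def map(self, fn):
        # Record the transformation; nothing is computed yet (lazy).
        return LazyDataset(lambda: [fn(x) for x in self.collect()])

    def persist(self):
        # Mark the dataset as persistent instead of ephemeral.
        self._persistent = True
        return self

    def collect(self):
        # An action: materialize the dataset, reusing the cache if persisted.
        if self._cache is not None:
            return self._cache
        result = self._compute()
        if self._persistent:
            self._cache = result
        return result

calls = []
base = LazyDataset(lambda: calls.append("run") or [1, 2, 3])
doubled = base.persist().map(lambda x: 2 * x)
doubled.collect()
doubled.collect()
# The persisted base dataset was computed exactly once; an ephemeral
# dataset would have been recomputed on the second action.
assert calls == ["run"] and doubled.collect() == [2, 4, 6]
```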


Tachyon

At the core of Spark lies Tachyon [Hao14], later renamed Alluxio, an open-source in-memory distributed storage system that enables reliable data sharing at memory speed across cluster frameworks like Spark or Flink. Tachyon eliminates the bottleneck of using replication for fault tolerance by leveraging lineage, the well-known technique used in RDDs. An experimental study [Hao14] shows that Tachyon achieves 110x higher write throughput than in-memory HDFS.

DataFrame

Recently, the new DataFrame API was developed for Spark [Data]. It is an extension of the RDD API in which data is now organized into named columns. It is very similar to a relational database table or to the widely used model of data frames in R/Python. Its benefits come from knowing the structure of the data, which allows Spark to perform optimizations under the hood. Spark's Catalyst optimizer compiles operations into JVM bytecode with plans that involve intelligent actions like broadcasts or skipping irrelevant data for a particular operation.

Recently, the DataFrame API was merged with the Dataset API. This new unified API is supposed to be the single high-level API for Spark [Datb], consisting of:

• an untyped API, where a DataFrame is considered a Dataset of a generic untyped object "Row" (in other words, a Dataset[Row]);
• a strongly-typed API, where a Dataset is a collection of strongly typed JVM objects (in other words, a Dataset[T]).

The benefit of a Dataset over a DataFrame is that it is statically typed: errors are checked at compile time, making it safer at runtime. It also gives users a high-level abstraction and a view of the structured data at hand with a rich API. Lastly, because a Dataset is strongly typed, objects can be mapped to Tungsten's internal memory representation for efficient serialize/deserialize operations and compact bytecode, meaning superior performance.

3.2 Flink

Flink is built on top of DataSets (collections of elements of a specific type on which operations with an implicit type parameter are defined), Job Graphs and Parallelization Contracts (PACTs) [War09]. Job Graphs represent parallel data flows with arbitrary tasks that consume and produce data streams. PACTs are second-order functions that define properties on the input/output data of their associated user-defined (first-order) functions (UDFs); these properties are further used to parallelize the execution of UDFs and to apply optimization rules [Ale11].


In contrast to Flink, Spark users can control two very important aspects of RDDs: the persistence (i.e., in-memory or disk-based) and the partitioning scheme across the nodes [Zah12a]. This fine-grained control over the storage of intermediate data proves to be very useful for applications with varying I/O requirements.

Iteration handling

Another important difference relates to the handling of iterations. Spark implements iterations as regular for-loops and executes them by loop unrolling. This means that for each iteration a new set of tasks/operators is scheduled and executed. Each iteration operates on the result of the previous iteration, which is held in memory.

Flink executes iterations as cyclic data flows. This means that a dataflow program (and all its operators) is scheduled just once and the data is fed back from the tail of an iteration to its head. Basically, data flows in cycles around the operators within an iteration. Since operators are scheduled only once, they can maintain state across all iterations.

Flink's API offers two dedicated iteration operators to specify iterations:

1. bulk iterations, which are conceptually similar to loop unrolling, and
2. delta iterations, a special case of incremental iterations in which the solution set is modified by the step function instead of being fully recomputed; they can significantly speed up certain algorithms, since the work in each iteration decreases as the number of iterations goes on.
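The contrast between the two styles can be sketched in plain Python with a toy shortest-distance propagation over a small graph. This is a single-machine illustration of the two iteration patterns (full recomputation versus a shrinking workset), not Flink's API; the graph and function names are invented for the example.

```python
# Toy graph: node -> list of successor nodes. We propagate minimum
# hop distances from node 0.
edges = {0: [1], 1: [2], 2: [3], 3: []}
INF = float("inf")

def bulk_iteration(edges, source=0):
    # Bulk style: every iteration recomputes the whole solution set,
    # until a fixpoint is reached.
    dist = {n: (0 if n == source else INF) for n in edges}
    while True:
        new = {}
        for n in edges:
            candidates = [dist[n]] + [dist[m] + 1 for m in edges if n in edges[m]]
            new[n] = min(candidates)
        if new == dist:
            return new
        dist = new

def delta_iteration(edges, source=0):
    # Delta style: only elements in the workset are processed; the workset
    # shrinks as the solution converges, so later iterations do less work.
    dist = {n: (0 if n == source else INF) for n in edges}
    workset = {source}
    while workset:
        next_workset = set()
        for n in workset:
            for m in edges[n]:
                if dist[n] + 1 < dist[m]:
                    dist[m] = dist[n] + 1
                    next_workset.add(m)
        workset = next_workset
    return dist

assert bulk_iteration(edges) == delta_iteration(edges) == {0: 0, 1: 1, 2: 2, 3: 3}
```

Both variants reach the same fixpoint, but the delta variant touches only the nodes whose distance changed in the previous round, which is exactly why delta iterations speed up algorithms whose per-iteration work decreases.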

3.3 Experimental framework comparisons

"One size fits all" is a concept that rapidly proves to be wrong for Big Data processing frameworks.

Hadoopvs.Spark

In [Shi15] the authors analyze three major architectural components (shuffle, execution model and

caching)inHadoopandSparkandshowthatalthoughSparkismoreefficientthanHadoop,thereisonecase where for a Sort workload Hadoop MapReduce is twice faster than Spark, because of a moreefficient executionmodel for shuffling data:MapReduce can overlap the shuffle stagewith themap

stageinordertoeffectivelyhidethenetworkoverhead.

Spark vs. Flink

In the framework of our BigStorage project we performed another comparative study [Mar16], which evaluates the performance of Spark versus Flink in order to identify and explain the impact of the different architectural choices and parameter configurations on the perceived end-to-end performance.


Based on experimental evidence, the study points out that in Big Data processing there is no single framework for all data types, sizes and job patterns, and it emphasizes a set of design choices that play an important role in the behavior of a Big Data framework: memory management, pipelined execution, optimizations and ease of parameter configuration. What draws our attention is that a streaming engine (i.e., Flink) delivers better performance than a batch-based engine (i.e., Spark) in many benchmarks, showing that a more general Big Data architecture (treating batches as finite sets of streamed data) is feasible and may subsume both streaming and batch use cases.


4 Beyond MapReduce: more generic programming models for data analytics

4.1 Supporting SQL-like queries

Hive

Hive is a data-warehouse solution that is built on top of the Hadoop ecosystem [Ash09]. It allows users to write SQL queries that extract data from the several databases and file systems that integrate with Hadoop. These queries are then translated into MapReduce, Tez or Spark jobs. It was initially developed by Facebook, is now used by several companies, and is offered in services like Amazon Web Services. In fact, it was for a long time part of the core of Spark SQL.

The data model is divided into:

• Tables: they translate to HDFS directories. The data of each table is serialized and stored in files inside that directory. When a query is compiled or executed, the data is deserialized thanks to the information stored in the system catalog about the serialization format for that table.
• Partitions: they denote the distribution of the data inside the directory. This is done through subdirectories that are named with the specific values of the partition columns; e.g., for a partition on columns date and country, data with the particular values 20091112 and US will be stored in the subdirectory /date=20091112/country=US.
• Buckets: data in partitions is further split into files based on the hash of a column of the table.
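The partition-and-bucket layout described above can be sketched in Python. This is a simplified illustration of the naming scheme only: the `table_path` helper, the `/warehouse` prefix and the use of Python's built-in `hash` are assumptions for the example; Hive's actual bucketing hash and file naming are implementation-defined.

```python
def table_path(table, partition, bucket_column_value, num_buckets):
    # Partitions become nested subdirectories named column=value...
    partition_dirs = "/".join(f"{col}={val}" for col, val in partition)
    # ...and each row is assigned to one of num_buckets files by hashing
    # a bucketing column (hash function chosen here for illustration only).
    bucket = hash(bucket_column_value) % num_buckets
    return f"/warehouse/{table}/{partition_dirs}/bucket_{bucket:05d}"

p = table_path("status_updates",
               [("date", "20091112"), ("country", "US")],
               bucket_column_value="user42", num_buckets=32)
# p is of the form "/warehouse/status_updates/date=20091112/country=US/bucket_000NN"
```

Because the partition values are encoded in the directory names, a query filtered on date and country only needs to read the files under the matching subdirectory, which is the point of the scheme.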

The query language supports select, project, join, aggregate, union and subqueries. It also supports user-defined functions (UDFs) and aggregations (UDAFs) implemented in Java. Users can also create tables with specific serialization, partitioning and bucketing options, and perform load and insert operations. Moreover, users can apply map-reduce scripts to the rows of the tables.

The architecture of Hive, illustrated in Figure 2, is composed of:

• A Metastore: stores metadata for the tables and information to keep track of the datasets;
• A Driver: manages the execution of the SQL queries from beginning to end and communicates with the Hadoop ecosystem;
• A Compiler: invoked by the driver to transform the query into an execution plan;
• An Optimizer: transforms the execution plan to get the best DAG to be executed on the Hadoop platform;
• An Execution Engine: executes the map-reduce jobs produced by the compiler on the Hadoop platform;
• CLI, UI and Thrift Server: the interaction layer between users and the driver.


Figure 2. The Hive architecture [Ash09].

Spark support for SQL: Shark and Spark SQL

Shark uses the Spark engine to speed up SQL queries [Xin13]. It supports Hive queries with the advantages of an RDD: in-memory data and fine-grained fault tolerance. It also includes some optimizations to the RDD abstraction:

- in-memory columnar storage and columnar compression
- query optimization based on observed statistics


Figure 3. The Shark query execution plan [Lev13].

Shark used Hive's language, its metadata and its interfaces, and was able to work over unmodified Hive data warehouses. However, that Hive code base made it too difficult to maintain and optimize, and it was eventually replaced by Spark SQL.

Spark SQL is the evolution of Shark based on all the insights learnt from Hive [Rey14]. Again, it can work with any existing Hive data sources and use its metastore. It unifies the relational and procedural processing power of Spark, making it possible to mix SQL queries with normal procedural calls, even with machine learning. Moreover, it provides support for semi-structured data sources like JSON or Parquet and infers characteristics of the data automatically. It also uses Catalyst [Arm15], an optimizer based on programming-language features of Scala such as pattern matching and quasiquotes. This engine analyses the user's query to first create an optimal logical plan. This plan is then translated into several physical plans that are evaluated with a cost-based model. The best physical plan is finally converted into Java code that is run on the RDDs. The objective is to free the user from the burden of optimizing queries. The community can extend Catalyst with new rules for the cost model.
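The rule-driven rewriting at the heart of such an optimizer can be sketched in a few lines of Python. The toy plan representation and the single predicate-pushdown rule below are purely illustrative, not Catalyst's actual implementation:

```python
# Toy logical plan as nested tuples:
# ("filter", predicate, child) / ("project", cols, child) / ("scan", table)

def push_filter_below_project(plan):
    """One rewrite rule: Filter(Project(x)) -> Project(Filter(x)),
    valid here because the toy predicate only uses projected columns."""
    if plan[0] == "filter" and plan[2][0] == "project":
        _, pred, (_, cols, child) = plan
        return ("project", cols, ("filter", pred, child))
    return plan

def optimize(plan, rules, max_iters=10):
    """Apply the rules repeatedly until the plan stops changing
    (a fixed point), as rule-based optimizers do."""
    for _ in range(max_iters):
        new_plan = plan
        for rule in rules:
            new_plan = rule(new_plan)
        if new_plan == plan:
            break
        plan = new_plan
    return plan

plan = ("filter", "country = 'US'",
        ("project", ["country", "amount"], ("scan", "orders")))
optimized = optimize(plan, [push_filter_below_project])
```

Pushing the filter closer to the scan reduces the number of rows the projection has to touch, which is the general motivation behind this family of rules.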

Figure 4. Optimiser flow in Catalyst [Arm15].


Flink support for SQL: Table API

Apache Flink uses the Table API to allow SQL queries over batch or streamed data [Flia]. The queries can be written in an SQL syntax or in an expression syntax using calls from the API, e.g.:

Table in = tableEnv.fromDataSet(ds, "a, b, c");
Table result = in.where("b = 'red'");

or

Table result = tableEnv.sql(
    "SELECT product, amount FROM Orders WHERE product LIKE '%Rubber%'");

The architecture of the system relies on optimizations made with Apache Calcite. Queries are validated against the tables registered by the user and then converted into a Calcite logical plan. This plan is optimized by the engine depending on the data source, static or streaming, and the result is executed as a normal Flink program with all the usual optimizations. The main purpose of this API is to provide a single engine for both streaming and SQL-like queries.

Coping with approximate queries: BlinkDB

The main characteristic of BlinkDB is to allow fast queries over big amounts of data by using data samples [Aga13]. Different stratified samples of the data can be created from frequently used queries. These samples are later selected by a strategy that depends on a confidence interval and a time limit for a query, both defined by the user. The goal is to achieve fast query times at the cost of losing a small part of the information in the data.
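The sampling idea can be sketched in plain Python. The function below illustrates stratified sampling in general, not BlinkDB's actual sample-building code:

```python
import random

def stratified_sample(rows, strata_col, fraction, seed=0):
    """Take the same fraction from every stratum, so rare groups
    are not lost as they could be under plain uniform sampling."""
    rng = random.Random(seed)
    by_stratum = {}
    for row in rows:
        by_stratum.setdefault(row[strata_col], []).append(row)
    sample = []
    for group in by_stratum.values():
        k = max(1, int(len(group) * fraction))  # keep >= 1 row per stratum
        sample.extend(rng.sample(group, k))
    return sample

# A rare city ("SF") survives the 10% sample with at least one row.
rows = ([{"city": "NY", "v": i} for i in range(1000)]
        + [{"city": "SF", "v": i} for i in range(10)])
sample = stratified_sample(rows, "city", fraction=0.1)
```

Aggregates computed on such a sample come with an error bound that shrinks as the fraction grows, which is the correctness/latency trade-off BlinkDB exposes to the user.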

4.2 Processing Big Data Streams

Streaming systems deal with unbounded input, i.e. unordered data streams arriving at high rates, and are required to process large amounts of data with low latency. Real-time data processing has recently received a lot of attention in both academia and industry, with numerous open source frameworks now available to address various use cases: Apache Storm [Tos14, R12], Apache Spark Streaming [Zah12], Apache Flink [Flink2], Twitter Heron [Heron] and Apache Samza [Samza].


Stream processing in real time: Storm, Flink, Twitter Heron, Spark Streaming

Storm

Big Data stream processing started to take shape with the first ideas gathered in a project that would later become Apache Storm. To represent a stream as a distributed abstraction (to produce and process streams in parallel), two concepts were introduced [Nat05]: a spout produces new streams, and a bolt takes streams as input and produces other streams as output; spouts and bolts are handled in parallel, just like mappers and reducers in MapReduce; finally, a topology is a network of spouts and bolts.

In [Tos14] the authors describe the use of Storm at Twitter, its architecture and the way topologies are executed in Storm. A topology is a directed graph that represents computation through its vertices and the data flow between components through its edges. Vertices can be spouts or bolts. A typical spout pulls data from a queue such as Kafka. Bolts are responsible for processing incoming tuples and passing them to the next set of bolts.

The architecture can be summarized in the following actions: a client submits a topology to a master node, which is responsible for the distribution and execution of the topology. The master coordinates a set of worker nodes, which run one or more worker processes. Each worker process is mapped to a single topology and starts a JVM in which one or more executors run one or more tasks.
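The spout/bolt/topology vocabulary can be illustrated with a toy, single-process word-count pipeline in Python. Storm runs many parallel instances of each component across a cluster; the sketch below is only conceptual:

```python
def word_spout():
    """A spout emits a stream of tuples (here, from a fixed list)."""
    for sentence in ["big data storm", "storm topology"]:
        yield sentence

def split_bolt(stream):
    """A bolt consumes an input stream and emits another stream."""
    for sentence in stream:
        for word in sentence.split():
            yield word

def count_bolt(stream):
    """A terminal bolt aggregates incoming tuples into word counts."""
    counts = {}
    for word in stream:
        counts[word] = counts.get(word, 0) + 1
    return counts

# A topology wires spouts to bolts; chaining generators mimics the
# edges of the directed graph described above.
counts = count_bolt(split_bolt(word_spout()))
```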

Storm provides two types of processing semantics guarantees: at-least-once, meaning each tuple of a topology will be processed at least once; and at-most-once, meaning a tuple is either processed once or dropped in the case of a failure.

Heron

Pressed by the increased scale of data processing and the diversity of new use cases, along with many limitations of Storm [Kul15], Twitter developed Heron, a new stream processing framework which is compatible with Storm's API. Its main goals are performance predictability, improved developer productivity and ease of management.

One of the mechanisms that Heron implemented and which was missing in Storm is backpressure (handling spikes and congestion): when a receiver component is unable to handle incoming data/tuples, the sender in Storm will simply drop tuples. Heron's tuple processing semantics are similar to those of Storm; hence Heron does not implement exactly-once guarantees either, although the authors claim that the design allows it and that they are considering its implementation [Kul15]. Finally, the authors mention that at Twitter, Storm was replaced by Heron, showing large improvements in resource usage, throughput and latency.
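The difference between dropping tuples and applying backpressure can be sketched with a bounded buffer in Python; this is a toy model of the two policies, not Heron's actual implementation:

```python
from collections import deque

class BoundedChannel:
    """When the buffer is full, either drop the tuple (Storm-like
    behaviour) or refuse it so the producer must slow down and retry
    (backpressure, as added in Heron)."""
    def __init__(self, capacity, backpressure):
        self.buf = deque()
        self.capacity = capacity
        self.backpressure = backpressure
        self.dropped = 0

    def offer(self, tuple_):
        if len(self.buf) < self.capacity:
            self.buf.append(tuple_)
            return True          # accepted
        if self.backpressure:
            return False         # producer must retry later
        self.dropped += 1        # silently lost under load
        return True

drop_chan = BoundedChannel(capacity=2, backpressure=False)
bp_chan = BoundedChannel(capacity=2, backpressure=True)
accepted = sum(bp_chan.offer(i) for i in range(5))
for i in range(5):
    drop_chan.offer(i)
```

With backpressure the slow receiver never silently loses data; the cost is that the spike is absorbed by slowing the upstream components instead.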


Spark Streaming

Almost at the same time as Storm was released as an Apache open source project, Spark Streaming came into play with discretized streams, an efficient and fault-tolerant model for stream processing [Zah12]. Spark Streaming's takeaways are its high-level functional programming API, strong consistency (exactly-once semantics) and efficient fault recovery (avoiding traditional replication or upstream backup, where sent messages are buffered and replayed on a copy of the failed downstream node). It introduced the concept of D-Streams [Aki15]: "The key idea behind D-Streams is to treat a streaming computation as a series of deterministic batch computations on small time intervals", and it is based on RDDs. Spark Streaming's capabilities are limited to in-order stream processing, and it provides windowing semantics that are limited to tuple- or processing-time-based windows.

Samza

Apache Samza [Samza] is a distributed stream processing framework. It uses Apache Kafka for messaging and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management. It features several properties:

• Simple API: Unlike most low-level messaging system APIs, Samza provides a very simple callback-based "process message" API comparable to MapReduce.
• Managed state: Samza manages snapshotting and restoration of a stream processor's state. When the processor is restarted, Samza restores its state to a consistent snapshot. Samza is built to handle large amounts of state (many gigabytes per partition).
• Fault tolerance: Whenever a machine in the cluster fails, Samza works with YARN to transparently migrate your tasks to another machine.
• Durability: Samza uses Kafka to guarantee that messages are processed in the order they were written to a partition, and that no messages are ever lost.
• Scalability: Samza is partitioned and distributed at every level. Kafka provides ordered, partitioned, replayable, fault-tolerant streams. YARN provides a distributed environment for Samza containers to run in.
• Pluggable: Though Samza works out of the box with Kafka and YARN, it provides a pluggable API that lets you run Samza with other messaging systems and execution environments.
• Processor isolation: Samza works with Apache YARN, which supports Hadoop's security model and resource isolation through Linux CGroups.


Apex

Apex is a Hadoop YARN-native platform that unifies stream and batch processing. It processes Big Data in motion in a way that is highly scalable, highly performant, fault-tolerant, stateful, secure, distributed, and easily operable [Apex].

[DEC] provides a survey of the Kafka, Samza, Heron and Flink streaming systems.

Stream processing in Flink

Apache Flink provides one interface for batch processing (the DataSet API) and another one for distributed stream processing (the DataStream API). The latter is dedicated to unbounded streams and enables Java and Scala programmers to write programs that implement various transformations on data streams, which can be either stateless (simply looking at one individual element at a time) or stateful (operations that remember information across individual events, e.g. window operators).

Flink's basic building blocks are streams (intermediate results) and transformations (operations that take one or more streams as input and compute one or more result streams from them as output). When executed, Flink programs are mapped to streaming dataflows (which may resemble arbitrary directed acyclic graphs, DAGs), consisting of streams and transformation operators. Each dataflow starts with one or more sources and ends in one or more sinks.

Parallelism in Flink. Programs in Flink are inherently parallel and distributed. Streams are split into stream partitions, and operators are split into operator subtasks. The operator subtasks execute independently from each other, in different threads and on different machines or containers. The number of operator subtasks is the parallelism of that particular operator. The parallelism of a stream is always that of its producing operator. Different operators of the same program may have different parallelism.

One of the fundamental challenges of distributed stateful stream processing is providing processing guarantees under failures. To overcome the limitations of existing approaches, which rely on periodic global state snapshots that impact the overall computation, a lightweight snapshotting algorithm was proposed and implemented on top of Flink [Car15].

Flink's architecture for stream processing. The Flink runtime consists of two types of processes:

• The master processes (also called JobManagers) coordinate the distributed execution: they schedule tasks, coordinate checkpoints, coordinate recovery on failures, etc. High-availability setups include multiple master processes, one of which is always the leader while the others remain in standby.
• The worker processes (also called TaskManagers, at least one) execute the subtasks of a dataflow, and buffer and exchange the data streams. Each worker is a JVM process and is split into task slots, each task slot representing a fixed subset of the worker's resources.


The client is not part of the runtime and program execution, but is used to prepare and send a dataflow to the master. After that, the client can disconnect, or stay connected to receive progress reports.

Flink executes batch programs as a special case of streaming programs (a DataSet is treated internally as a stream of data) where the streams are bounded, i.e. have a finite number of elements [Flink2].

Stream data transfer and management

Kafka

Apache Kafka is a distributed streaming platform that provides durability and publish/subscribe functionality for data streams (making streaming data available to multiple consumers). It is the de facto open source solution (for data durability and availability) used in end-to-end pipelines together with streaming engines like Spark or Flink, which in turn provide data movement and computation.

A Kafka cluster is a set of one or more servers that store streams of records in categories called topics. Each topic can be split into multiple partitions, each managed by a Kafka broker, a service collocated with the node storing the partition, which in turn enables consumers and producers to access the topic's data.

Apache Kafka gets used by:

• Producer services that need to put data into a topic (Producer API).
• Consumer services that can subscribe to and process records from a topic (Consumer API).
• Applications (e.g. streaming engines) that can consume input streams from one or more topics and produce output streams to one or more output topics (Streams/Connector API).

More technical details about how Kafka handles data, and how Spark and Flink integrate with Kafka, can be further explored in [Kafka1, Kafka3, Kafka2].
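The topic/partition/offset model can be sketched in a few lines of Python. This is a toy in-memory model: real Kafka brokers persist partitions on disk, replicate them, and hash record keys with murmur2:

```python
def hash_key(key):
    # Deterministic toy hash over the key bytes (stand-in for murmur2).
    return sum(key.encode())

class Topic:
    """A topic is an append-only log split into partitions; records with
    the same key land in the same partition, preserving their order."""
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        p = hash_key(key) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, partition, offset):
        """Consumers track their own offset per partition, which is
        what makes the stream replayable."""
        return self.partitions[partition][offset:]

orders = Topic(num_partitions=3)
p = orders.produce("user-42", "created")
orders.produce("user-42", "paid")
events = orders.consume(p, offset=0)
```

Because the broker only appends and consumers only advance offsets, the same partition can feed several independent consumers at different positions.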

Streaming SQL

Open source systems for distributed stream processing are evolving fast, and the (STREAM)SQL interface for processing SQL queries on the fly as stream processing has recently started to be considered and implemented in both Spark [SparqS] and Flink [FlinkS].

The approach used so far has been to extend the current APIs and offer some basic operations, while promising to cover in the future many still-unsupported operations, e.g. multiple streaming aggregations, sorting operations, or joins between two or more (un)bounded datasets. It is not clear how the open


source community will drive and extend the streaming SQL semantics, API and execution frameworks, but it is one of the topics that has recently received increased attention [Jai08].

4.3 Machine Learning tools for Big Data processing

ML tools in the Spark stack: MLlib

The machine learning support in Spark consists of three components [MLB]:

- MLlib: Apache Spark's distributed machine learning library. It inherits from the previous components but is now considered a library on its own, outside the MLBase bounds. It is widely supported by the community and the most commonly used.
- ML Optimiser: optimises the whole machine learning pipeline.
- MLI: an API that facilitates feature extraction and algorithm development through an easy-to-use interface.

MLlib [MLLib] provides an API that uses the Apache Spark engine to execute common machine learning algorithms in a distributed setting [Spa]. It also uses Spark's in-memory data capabilities to speed up the computations. It is widely supported by the community and easy to use. It is also tightly integrated with other Spark components like Spark SQL or GraphX, allowing users to seamlessly move between them. Some of the main functionalities are:

- Several classification and regression algorithms
- Feature transformations: standardization, normalization, hashing, etc.
- Model evaluation and hyperparameter tuning
- ML Pipelines: the spark.ml package allows users to create sequences of machine learning steps such as data preprocessing, feature extraction, model fitting and validation
- Machine learning persistence: saving ML models for later use
- Distributed linear algebra: PCA, SVD
- Statistics: hypothesis testing

In addition, internal optimizations are applied to adapt these algorithms to a distributed setting in Spark. Some examples are reducing Java garbage collection times, using feature discretization to reduce communication costs, and using specialized C++ libraries in the worker nodes [Men16].
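The kind of distributed computation underlying such algorithms can be illustrated with a pure-Python sketch of standardization statistics computed from per-partition partial aggregates, the merge-based pattern Spark applies when data is split across workers (the function names are ours, not Spark API):

```python
import math

def partial_stats(partition):
    """Per-partition partial aggregate: (count, sum, sum of squares)."""
    return (len(partition), sum(partition), sum(x * x for x in partition))

def merge(a, b):
    """Merging partials is associative and commutative, so workers can
    combine them pairwise (or in a tree) before reaching the driver."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def mean_std(partitions):
    n, s, ss = (0, 0.0, 0.0)
    for part in partitions:
        n, s, ss = merge((n, s, ss), partial_stats(part))
    mean = s / n
    return mean, math.sqrt(ss / n - mean * mean)

partitions = [[1.0, 2.0], [3.0], [4.0, 5.0]]  # data split across workers
mean, std = mean_std(partitions)
```

Only the small (count, sum, sum-of-squares) triples cross the network, not the data itself, which is what keeps communication costs low at scale.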


Figure 5. MLlib sits on top of the Apache Spark Core and can work on data read from streaming, graph or SQL sources.

ML tools in the Flink stack: FlinkML

Similar to MLlib in Spark, FlinkML [Flib] provides a series of machine learning algorithms to execute on the Flink engine. The main objective is to provide not only algorithms but also means to extract information from datasets and to build maintainable machine learning pipelines. As of the current version, 1.2, it only allows applying these algorithms over batch data sources. There are also fewer algorithms available compared to Spark.

ML tools: Spark versus Flink

A short comparison of the ML-oriented features in Spark and Flink is summarized in the table below.

Characteristic         FlinkML   MLlib
Number of algorithms   3         20+
Data preprocessing     3         25+
Pipelines              Yes       Yes
Model selection        Yes       Yes
Parameter tuning       No        Yes
Basic statistics       No        Yes
ML over streaming      No        Yes


Conclusion: Spark not only has more algorithms and features, but it also unifies the streaming, SQL and machine learning capabilities of the framework so that they can be used together. The reason is that Spark always uses the RDD abstraction to analyze the data. At the moment, MLlib provides more functionality than its Flink counterpart.

4.4 Graph processing

GraphX on Spark

GraphX adapts graph processing algorithms and abstractions to a distributed computation environment like Spark [GraphX]. It is a thin layer that sits on top of Spark and has implementations of the most important graph algorithms, such as PageRank and Triangle Count, to name a few. It makes it easy to build graphs from tabular or unstructured data and to view them both as a normal collection and as a graph. The API provides graph operators like subgraph, joinVertices or aggregateMessages that facilitate operations for the user. It also partitions the data of the graph into a pair of vertex and edge collections that are RDDs [Gon14]. These collections provide special indexing and partitioning adapted to the graph. To achieve maximum efficiency, GraphX implements several optimizations that exploit graph-specific properties and operations.

Again, the main benefit is that it provides a unified environment where we can use SQL to build graphs, or apply machine learning techniques over the graph structure. GraphX uses the RDD abstraction, the same as MLlib, Spark Streaming and Spark SQL. Recently, GraphFrames has been introduced, making it easy to run queries about the graph vertices and edges and even to search for structural patterns in the data, such as triplets of nodes A, B, C where A is connected with B and B with C, but A is not connected with C. For these reasons, GraphX can easily be adopted in many scenarios and by users with a lot of domain knowledge.
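The structural pattern just described (A connected to B, B to C, but A not to C) can be sketched over a plain edge list in Python; this is only an illustration of the idea, which GraphFrames expresses as a motif query over DataFrames:

```python
def find_open_triplets(edges):
    """Find triplets (a, b, c) with edges a->b and b->c but no edge
    a->c, a pattern used e.g. for friend recommendation."""
    edge_set = set(edges)
    out = {}
    for src, dst in edges:
        out.setdefault(src, []).append(dst)
    triplets = []
    for a, b in edges:
        for c in out.get(b, []):
            if c != a and (a, c) not in edge_set:
                triplets.append((a, b, c))
    return triplets

edges = [("alice", "bob"), ("bob", "carol"),
         ("alice", "dave"), ("dave", "carol")]
open_triplets = find_open_triplets(edges)
```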

Gelly on Flink

Gelly is an API for Flink [Gelly] for creating graphs from a set of vertices and edges read from files or collections. This Graph abstraction has several properties and functions that allow the user to get information about the graph and to apply operations such as map, translate or filter. As in GraphX, it provides implementations of popular algorithms like PageRank or Triangle Count.

4.5 Workflows

The Dataflow Model

Massive-scale data processing is now shaped by three main factors:

• Datasets processed at global scale are unbounded and unordered.


• Consumers impose more sophisticated requirements, such as event-time ordering.
• Optimizations trade off between correctness, latency and cost.

With these aspects in mind, Google proposed a fundamental shift of approach [Aki15]: to "stop trying to groom unbounded datasets into finite pools of information that eventually become complete" and to work with principled abstractions that allow the final user to choose the "tradeoffs along the axes of interest: correctness, latency, and cost". In [Aki15] the authors propose one such approach, the Dataflow model, composed of:

• A windowing model to support unaligned event-time windows.
• A triggering model to bind output semantics to runtime characteristics of the pipeline.
• An incremental processing model to integrate updates and retractions into the windowing and triggering models.

Tyler Akidau, one of the authors of the Dataflow model, gives in [Bey16a] and [Bey16b] a high-level survey of modern data processing concepts: streaming, unbounded data (processing), event time vs. processing time, windows, sessions, watermarks, triggers, and accumulation.
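The window/watermark/trigger vocabulary can be illustrated with a toy Python sketch of tumbling event-time windows; this is purely conceptual, not the Dataflow implementation:

```python
def tumbling_windows(events, size):
    """Group (event_time, value) pairs into event-time windows
    [k*size, (k+1)*size), regardless of arrival order."""
    windows = {}
    for event_time, value in events:
        start = (event_time // size) * size
        windows.setdefault(start, []).append(value)
    return windows

def fired_windows(windows, size, watermark):
    """A window may fire once the watermark passes its end; events
    arriving later ("late data") would trigger a refinement of the
    already-emitted result."""
    return {start: sum(vals) for start, vals in windows.items()
            if start + size <= watermark}

# Out-of-order events as (event_time, value) pairs.
events = [(1, 10), (12, 1), (3, 5), (11, 2)]
wins = tumbling_windows(events, size=10)
fired = fired_windows(wins, size=10, watermark=10)
```

Moving the watermark earlier lowers latency but risks more late refinements; moving it later does the opposite, which is exactly the correctness/latency/cost trade-off the model exposes.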

Apache Beam

Apache Beam is the open source extension of the Google Dataflow Model, still a work in progress with no stable releases yet, that aims to propose a unified programming model for building pipelines that can then be executed by multiple runners, such as Apex, Flink, Spark or the commercial Google Cloud Dataflow engine.

The Beam capability matrix [Beam2] answers four questions:

• What is being computed? (transformations)
• Where in event time? (event-time windowing)
• When in processing time? (watermarks and triggers)
• How do refinements relate? (accumulating and discarding)

With Apache Beam, it is clear that the authors try to move the one-size-fits-all paradigm from the execution layer to the semantics (API) layer, building on top of many models in order to sustain efficient (un)bounded data processing. This work is highly influenced by Google, which is again reshaping the world of modern Big Data processing, allowing the final user not only to choose the tradeoffs of interest between correctness, latency and cost, but also to leverage various engine runners.


5 Big Data processing: what requirements for storage?

Big Data systems share the goals of traditional data warehousing systems (extracting value from the analysis of data), while aiming to cope with their limitations. Both kinds of systems significantly diverge in their analytics processes and in the organization of the source data. In practice, traditional data warehouses used to organize the data in repositories, collecting them from other source databases such as enterprise management systems, analytics engines, etc. Warehousing systems are poor at organizing and querying data streams like stream logs, sensor data, location data from mobile devices, etc. Big Data technologies aim to overcome these weaknesses of data warehousing systems by leveraging and harnessing new sources of data and new processing paradigms to facilitate scalable data analytics despite challenges due to unprecedented volume, velocity and variety of data.

The storage layer supports the processing of the data by providing core functionality such as data organization, access and retrieval. This layer indexes the data and organizes it over the distributed storage devices. The data is partitioned into blocks and organized onto the multiple repositories. The data to organize can take any one of several forms; hence, several tools and techniques are employed for effective organization and retrieval of the data. Examples include key/value pairs, column-oriented data, document databases, semi-structured XML data, raw formats, etc.

– File systems: The file system is responsible for the organization, storage, naming, sharing, and protection of files. Big Data file management is similar to that of a distributed file system; however, read/write performance, simultaneous data access, on-demand file system creation, and efficient techniques for file synchronization are major challenges for design and implementation. The goals in designing Big Data file systems should include a certain degree of transparency, as detailed below.

(a) Distributed access and location transparency: Clients are unaware that files are distributed and can access them in the same way local files are accessed. A consistent namespace encompasses local as well as remote files without any location information.

(b) Failure handling: The data processing layer should operate even with a few component failures in the system. This can be achieved with some level of replication and redundancy.

(c) Heterogeneity: File service should be provided across different hardware and operating system platforms.

(d) Support for fine-grained distribution of data: To optimize performance, individual objects should be located near the processes that use them.


Examples of file systems implementing the above features, and consequently suitable for Big Data analytics, include Apache HDFS [Hdfs] and Ceph [Ceph].

Distributed file systems can cope with large data volumes of various types. However, compared with a file system, a database allows far better and more efficient access to the data, primarily due to the query language it includes. Currently, the most widespread database is doubtlessly the relational database. Relational databases are perfectly suited to transaction processing of structured data of a limited size. They are optimized for record-wise parallel access by many users, as well as for inserting, updating and deleting records, while queries incur a higher time cost. Big Data exceeds these size limits, includes a lot of data which is not structured, and is about analyzing data, which involves frequent queries. All these aspects show that relational databases are not the best choice for Big Data.

Consequently, Big Data analytics usually runs atop data storage models that are best suited to these specific requirements:

– Key-Value Stores: Key-value pair (KVP) tables are used to provide persistence management for many NoSQL technologies. The table has only two columns: one is the key, the other is the value. The value can be a single value or a data block containing many values, the format of which is determined by program code. KVP tables may use indexing, hash tables or sparse arrays to provide rapid retrieval and insertion capability, depending on the need for fast lookup, fast insertion or efficient storage. KVP tables are best applied to simple data structures and to the Hadoop MapReduce environment. Examples of key-value data stores are Amazon's Dynamo and Oracle's BerkeleyDB.
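The two-column model can be sketched in a few lines of Python, with a dict standing in for the hash index; real stores add persistence, replication and richer indexing on top of this core:

```python
class KVTable:
    """A two-column table: key -> opaque value, with a dict acting as
    the hash index that gives fast lookup and insertion."""
    def __init__(self):
        self.index = {}

    def put(self, key, value):
        self.index[key] = value   # value format is up to program code

    def get(self, key, default=None):
        return self.index.get(key, default)

    def delete(self, key):
        return self.index.pop(key, None)

users = KVTable()
users.put("user:42", {"name": "Ada", "logins": 3})
record = users.get("user:42")
```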

– Document-Oriented Database: A document-oriented database is a database designed for storing, retrieving and managing document-oriented or semi-structured data. Its central concept is the notion of a document, whose contents are encapsulated or encoded in some standard format such as JavaScript Object Notation (JSON), Binary JSON (BSON) or XML. Examples of these databases are Apache's CouchDB [couch] and 10gen's MongoDB [mongo].

– Column family / BigTable database: Instead of storing key-values individually, they are grouped to create composite data, each column containing the corresponding row number as key and the data as


value. This type of storage is useful for streaming data such as web logs, or time-series data coming from several devices, sensors, etc. Examples are HBase [Hbase] and Google BigTable [Bigt].

– Graph database: A graph database uses graph structures (nodes, edges and properties) to store data and run semantic queries on it. In a graph database, every entity contains direct pointers to its adjacent elements, and index lookups are not required. A graph database is useful when large-scale, multi-level relationship traversals are common, and is desirable for processing complex many-to-many connections such as social networks. A graph may also be captured by a table store that supports recursive joins, such as BigTable [Bigt] and Cassandra [Cass]. Examples of graph databases include InfiniteGraph [igraph] from Objectivity and the Neo4j [neo4j] open source graph database.

A detailed description of state-of-the-art storage systems in cloud and HPC systems that can serve as a basis for converged infrastructures is developed in deliverable D3.1.


6 Limitations and goals

The previous sections presented a detailed survey of state-of-the-art systems for Big Data processing. In this section we summarize a set of gaps we identified in the state of the art and state a number of corresponding challenges that will be addressed in the second phase of the BigStorage project.

6.1 Identified limitations of the state of the art

• There is a lack of tools that sit between the parallel computing platforms for Big Data and the user. The goal is to be able to explain to the user the impact of changes in the cluster infrastructure (how many machines, what hardware, etc.) and in the framework configuration (a vast list of parameters and settings), and to provide good recommendations based on the use they make of their data.

• Unifying the management of a multisite infrastructure is also an open challenge. Allocating jobs depending on the data location, the load of the machines, job characteristics and data privacy is of great importance for many organizations nowadays.

• Fast decisions about the cluster infrastructure are also an interesting line of research. Deactivating machines to save energy depending on the current load, or detecting node failures and communication problems in the cluster in real time, can reduce costs for the organization.

• Big Data (streaming) frameworks are not architected for efficiently processing data that is not limited to a single geographical location. One work that tries to fill this gap is JetStream [Tud14], a high-performance batch-based streaming middleware for efficient transfers of events between cloud data centers.

• SQL streaming for Big Data processing is an important interface that needs multiple enhancements. The difficulty of providing new features in a short time span (which requires very fast development on top of existing frameworks) points to some limitations: no general consensus on semantics, the need for more general abstractions that can easily integrate new use cases, etc.

• RDDs are an efficient fault-tolerant immutable data structure for batch and stream processing. There is a need to efficiently define and implement a new abstraction, Hybrid Resilient Distributed Datasets (HRDD), that can manage data (stream) sources to which the user is provided with partially (or fully) random access, while considering that data can be both mutable and immutable.

• The data storage system used as a data source for Big Data analytics greatly impacts the performance (and sometimes the available features) of the upper-level data processing system. Transitioning from relational databases to Big Data-oriented systems led to the removal of some features, such as transaction processing. As a new generation of Big Data storage systems now offers such functionality, there may be some possible improvements,


optimizations or new capabilities that could be added to existing Big Data processing frameworks. This possibility should be further studied.

6.2 Goals

For the next steps of the project, we set the following goals for WP2:

• Based on the identified limitations and bottlenecks of current frameworks for general workflow/streaming Big Data applications, formalize the corresponding requirements in relation to the use cases studied in WP1.

• Propose processing models and techniques for optimized data transfers between workflow nodes and for efficient stream processing (WP2 Task 2.1).

• Research the efficient replication of data in multisite structures, its constraints (e.g. data privacy) and its automation (WP2 Task 2.1).

• Apply machine learning techniques to model optimal decisions when preparing Big Data analytics processes (WP2 Task 2.3 and Task 2.1).

• Find and extract features of parallel computing applications and workflows and model their impact on the infrastructure and machines (WP2 Task 2.3).

• Model the level of utilization and load in one cluster and estimate the efficiency of an incoming job in that cluster (WP2 Task 2.1 and Task 2.3).

• Study multicriteria DSS (decision support systems) that are able to take into account multiple aspects, including energy consumption, to help companies make decisions on how to build Big Data analytics processes (WP2 Task 2.2).

• Ultimately, design next-generation data processing models, independent of any programming model, with a focus on general data orchestration for workflows and stream data processing (WP2 Task 2.1).


7 References

[Aga13] Agarwal, S., Mozafari, B., Panda, A., Milner, H., Madden, S., & Stoica, I. (2013). BlinkDB: Queries with Bounded Errors and Bounded Response Times on Very Large Data. EuroSys '13, 29–42. http://doi.org/10.1145/2465351.2465355

[Aki15] Tyler Akidau, Robert Bradshaw, Craig Chambers, Slava Chernyak, Rafael J. Fernandez-Moctezuma, Reuven Lax, Sam McVeety, Daniel Mills, Frances Perry, Eric Schmidt, Sam Whittle. The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in Massive-Scale, Unbounded, Out-of-Order Data Processing. Proc. VLDB Endow., August 2015.

[Aki16a] Tyler Akidau. The Evolution of Massive-Scale Data Processing. https://docs.google.com/presentation/d/13YZy2trPugC8Zr9M8_TfSApSCZBGUDZdzi-WUz95JJw/present?slide=id.g63ca2a7cd_0_527

[Ale11] A. Alexandrov et al. "MapReduce and PACT - comparing data parallel programming models." In Proceedings of the 14th Conference on Database Systems for BTW 2011, Kaiserslautern, Germany, pp. 25–44.

[Apex] http://apex.apache.org/

[Arm15] Armbrust, M., Ghodsi, A., Zaharia, M., Xin, R. S., Lian, C., Huai, Y., … Franklin, M. J. (2015). Spark SQL. Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD '15), 1383–1394. http://doi.org/10.1145/2723372.2742797

[Ash09] Thusoo, A., Sarma, J. Sen, Jain, N., Shao, Z., Chakka, P., Anthony, S., … Murthy, R. (2009). Hive - A Warehousing Solution Over a Map-Reduce Framework. Proc. VLDB Endow., 2(2), 1626–1629. http://doi.org/10.1109/ICDE.2010.5447738

[Bdb] Michael A. Olson, Keith Bostic, and Margo Seltzer. 1999. Berkeley DB. In Proceedings of the Annual Conference on USENIX Annual Technical Conference (ATEC '99). USENIX Association, Berkeley, CA, USA, 43-43.

[Beam] http://beam.apache.org/
[Beam2] http://beam.apache.org/documentation/runners/capability-matrix/
[Bey16a] https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101
[Bey16b] https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102

[Bigt] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. 2008. Bigtable: A Distributed Storage System for Structured Data. ACM Trans. Comput. Syst. 26, 2, Article 4 (June 2008), 26 pages. DOI: http://dx.doi.org/10.1145/1365815.1365816

[Car15] Paris Carbone, Gyula Fora, Stephan Ewen, Seif Haridi, Kostas Tzoumas. Lightweight Asynchronous Snapshots for Distributed Dataflows. 2015. https://arxiv.org/pdf/1506.08603.pdf

[Cass] Avinash Lakshman and Prashant Malik. 2010. Cassandra: a decentralized structured storage system. SIGOPS Oper. Syst. Rev. 44, 2 (April 2010), 35-40. DOI: http://dx.doi.org/10.1145/1773912.1773922

[Ceph] Sage A. Weil, Scott A. Brandt, Ethan L. Miller, Darrell D. E. Long, and Carlos Maltzahn. 2006. Ceph: a scalable, high-performance distributed file system. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation (OSDI '06). USENIX Association, Berkeley, CA, USA, 307-320.

[Couch] J. Chris Anderson, Jan Lehnardt, and Noah Slater. CouchDB: The Definitive Guide. O'Reilly, first edition, 2010.

[Data] https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html


[Datb] https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html

[Dea04] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. OSDI 2004, San Francisco. http://dl.acm.org/citation.cfm?id=1251254.1251264

[DEC] http://sites.computer.org/debull/A15dec/A15DEC-CD.pdf

[Dynamo] Swaminathan Sivasubramanian. 2012. Amazon DynamoDB: a seamlessly scalable non-relational database service. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD '12). ACM, New York, NY, USA, 729-730. DOI: http://dx.doi.org/10.1145/2213836.2213945

[Flia] https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html
[Flib] https://cwiki.apache.org/confluence/display/FLINK/FlinkML%3A+Vision+and+Roadmap
[Flink] http://flink.apache.org/
[Flink2] https://ci.apache.org/projects/flink/flink-docs-release-1.2/concepts/index.html
[FlinkS] https://flink.apache.org/news/2016/05/24/stream-sql.html
[Gelly] https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/gelly/index.html

[Gon14] Gonzalez, J. E., Xin, R. S., Dave, A., Crankshaw, D., Franklin, M. J., & Stoica, I. (2014). GraphX: Graph Processing in a Distributed Dataflow Framework. 11th USENIX Symposium on Operating Systems Design and Implementation, 599–613. https://www.usenix.org/conference/osdi14/technical-sessions/presentation/gonzalez

[GraphX] http://spark.apache.org/graphx/
[Hada] http://hadoop.apache.org/
[Hadb] https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html

[Hao14] Haoyuan Li, Ali Ghodsi, Matei Zaharia, Scott Shenker, Ion Stoica. Tachyon: Reliable, Memory Speed Storage for Cluster Computing Frameworks. SoCC 2014, Seattle, USA, pp. 6:1-6:15. http://doi.acm.org/10.1145/2670979.2670985

[Hbase] HBase. https://hbase.apache.org/

[Hdfs] Konstantin Shvachko, Hairong Kuang, Sanjay Radia, and Robert Chansler. 2010. The Hadoop Distributed File System. In Proceedings of the 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST '10). IEEE Computer Society, Washington, DC, USA, 1-10. DOI: http://dx.doi.org/10.1109/MSST.2010.5496972

[Heron] https://blog.twitter.com/2015/flying-faster-with-twitter-heron
[Igraph] InfiniteGraph. http://www.objectivity.com/products/infinitegraph/

[Jai08] Namit Jain et al. Towards a Streaming SQL Standard. Proc. VLDB Endow., August 2008, pages 1379-1390. http://dx.doi.org/10.14778/1454159.1454179

[Kafka] https://kafka.apache.org/
[Kafka1] http://kafka.apache.org/intro
[Kafka2] http://data-artisans.com/kafka-flink-a-practical-how-to/
[Kafka3] https://spark.apache.org/docs/1.6.1/streaming-kafka-integration.html

[Kul15] Sanjeev Kulkarni et al. Twitter Heron: Stream Processing at Scale. SIGMOD 2015, Melbourne, Victoria, Australia, pages 239-250. http://doi.acm.org/10.1145/2723372.2742788

[Lev13] http://blogs.cisco.com/security/big-data-in-security-part-ii-the-amplab-stack

[Mar16] Ovidiu-Cristian Marcu, Alexandru Costan, Gabriel Antoniu, Maria S. Perez-Hernandez. Spark versus Flink: Understanding Performance in Big Data Analytics Frameworks. CLUSTER, Sept. 2016, Taipei, Taiwan. https://hal.inria.fr/hal-01347638v2

[Men16] Meng, X., Bradley, J., Sparks, E., et al. (2016). MLlib: Machine Learning in Apache Spark. Journal of Machine Learning Research, 17, 1–7.


[MLB] http://web.cs.ucla.edu/~ameet/mlbase_website/mlbase_website/index.html
[MLlib] http://spark.apache.org/mllib/
[Mongo] https://www.mongo.com
[Nat05] http://nathanmarz.com/blog/history-of-apache-storm-and-lessons-learned.html
[Neo4j] https://neo4j.com/
[Rey14] https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html
[Samza] http://samza.apache.org/

[Shi15] J. Shi et al. "Clash of the titans: MapReduce vs. Spark for large scale data analytics." Proc. VLDB Endow., vol. 8, pp. 2110–2121, Sep. 2015. http://dx.doi.org/10.14778/2831360.2831365

[Spark] http://spark.apache.org/
[SparqS] http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
[Storm] http://storm.apache.org/

[Tan09] Stewart Tansley, Kristin Michele Tolle. The Fourth Paradigm: Data-Intensive Scientific Discovery. Microsoft Research, 2009.

[Tos14] Ankit Toshniwal, Siddarth Taneja, Amit Shukla, Karthik Ramasamy, Jignesh M. Patel, Sanjeev Kulkarni, Jason Jackson, Krishna Gade, Maosong Fu, Jake Donham, Nikunj Bhagat, Sailesh Mittal, Dmitriy Ryaboy. Storm @Twitter. SIGMOD 2014, Snowbird, Utah, USA, pages 147-156. http://doi.acm.org/10.1145/2588555.2595641

[Tud14] Radu Tudoran, Alexandru Costan, Olivier Nano, Ivo Santos, Hakan Soncu, Gabriel Antoniu. JetStream: Enabling high throughput live event streaming on multi-site clouds. Future Generation Computer Systems, Elsevier, 54:274-291, 2016. DOI: 10.1016/j.future.2015.01.016

[War09] D. Warneke and O. Kao. "Nephele: Efficient parallel data processing in the cloud." In Proceedings of the 2nd Workshop on Many-Task Computing on Grids and Supercomputers 2009, Portland, Oregon, USA, 8:1-8:10. http://doi.acm.org/10.1145/1646468.1646476

[Xin13] Xin, R. S., Rosen, J., Zaharia, M., Franklin, M. J., Shenker, S., & Stoica, I. (2013). Shark: SQL and rich analytics at scale. Proceedings of the 2013 International Conference on Management of Data (SIGMOD '13), 13. http://doi.org/10.1145/2463676.2465288

[Zah12a] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. NSDI 2012, San Jose. http://dl.acm.org/citation.cfm?id=2228298.2228301

[Zah12] Matei Zaharia, Tathagata Das, Haoyuan Li, Scott Shenker, Ion Stoica. Discretized Streams: An Efficient and Fault-Tolerant Model for Stream Processing on Large Clusters. HotCloud, June 2012, Boston, MA. http://dl.acm.org/citation.cfm?id=2342763.2342773