# Accumulo Summit 2015: Building Aggregation Systems on Accumulo [Leveraging Accumulo]

Building Scalable Aggregation Systems. Accumulo Summit, April 28th, 2015. Gadalia O'Bryan and Bill Slacum.


## Transcript

Building Scalable Aggregation Systems
Accumulo Summit, April 28th, 2015
Gadalia O'Bryan and Bill Slacum, Koverse

Outline
• The Value of Aggregations
• Abstractions
• Systems
• Details
• Demo
• References/Additional Information

Aggregation provides a means of turning billions of pieces of raw data into condensed, human-consumable information. Examples include:
• Aggregation of Aggregations
• Time Series
• Set Size/Cardinality
• Top-K
• Quantiles
• Density/Heatmap

(Slide graphic: example aggregation dashboard reporting 16.3k unique users, with charts G1 and G2.)

Abstractions

1 + 2 + 3 + 4 = 10 (concept from [P1])

Grouping the work: (1 + 2) = 3, (3 + 4) = 7, and 3 + 7 = 10, so we can parallelize integer addition.

Associative + Commutative Operations
• Associative: 1 + (2 + 3) = (1 + 2) + 3
• Commutative: 1 + 2 = 2 + 1
• Allows us to parallelize our reduce (for instance, locally in combiners)
• Applies to many operations, not just integer addition
• Spoiler: this is the key to incremental aggregations
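The parallelizable reduce above can be sketched in a few lines of Java; `parallelStream` is a stand-in here for any distributed reduce, and it only gives the right answer because integer addition is associative and commutative:

```java
import java.util.List;

public class ParallelSum {
    public static void main(String[] args) {
        // Integer addition is associative and commutative, so the runtime
        // may split the list, reduce each part independently (in parallel,
        // or locally in a combiner), and combine the partial results:
        // (1 + 2) + (3 + 4) = 3 + 7 = 10.
        List<Integer> data = List.of(1, 2, 3, 4);
        int sum = data.parallelStream().reduce(0, Integer::sum);
        System.out.println(sum); // prints 10
    }
}
```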

The same works for sets: {a, b} + {b, c} = {a, b, c}, {a} + {a, c} = {a, c}, and {a, b, c} + {a, c} = {a, b, c}. We can also parallelize the addition of other types, like Sets, as set union is associative.

Monoid Interface

Abstract algebra provides a formal foundation for what we can casually observe.

Don't be thrown off by the name; just think of a monoid as another trait/interface. Monoids provide a critical abstraction for treating aggregations of different types in the same way.
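As a hypothetical sketch, the monoid idea fits in a plain Java interface (Algebird expresses the same shape as a Scala trait): an identity element plus an associative combine, which lets one generic reduce aggregate any conforming type:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal monoid interface: identity element + associative combine.
interface Monoid<T> {
    T zero();          // identity: plus(zero(), x) equals x
    T plus(T a, T b);  // associative combine
}

public class MonoidDemo {
    // Integer addition as a monoid.
    static final Monoid<Integer> INT_SUM = new Monoid<>() {
        public Integer zero() { return 0; }
        public Integer plus(Integer a, Integer b) { return a + b; }
    };

    // Set union as a monoid.
    static final Monoid<Set<String>> SET_UNION = new Monoid<>() {
        public Set<String> zero() { return new HashSet<>(); }
        public Set<String> plus(Set<String> a, Set<String> b) {
            Set<String> out = new HashSet<>(a);
            out.addAll(b);
            return out;
        }
    };

    // One generic reduce works for every monoid.
    static <T> T reduceAll(Monoid<T> m, Iterable<T> values) {
        T acc = m.zero();
        for (T v : values) acc = m.plus(acc, v);
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(reduceAll(INT_SUM, List.of(1, 2, 3, 4))); // prints 10
        System.out.println(reduceAll(SET_UNION,
            List.of(Set.of("a", "b"), Set.of("b", "c"), Set.of("a", "c"))));
    }
}
```

The same `reduceAll` drives both the counter and the set example from the slides, which is the point: aggregation code is written once against the interface.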

Many Monoid Implementations Already Exist

https://github.com/twitter/algebird/
• Long, String, Set, Seq, Map, etc.
• HyperLogLog: cardinality estimates
• QTree: quantile estimates
• SpaceSaver/HeavyHitters: approximate Top-K

It is also easy to add your own with libraries like stream-lib [C3].

Serialization

One additional trait we need our aggregatable types to have is that we can serialize/deserialize them. For 1 + 2 + 3 + 4 = 10, a staged pipeline interleaves the calls: 1) zero(), 2) plus(), 3) plus(), 4) serialize() in upstream stages, then 5) zero(), 6) deserialize(), 7) plus(), 8) deserialize(), 9) plus() downstream, with the partial sums 3 and 7 passing between stages in serialized form. These abstractions enable a small library of reusable code to aggregate data in many parts of your system.

Systems

SQL on Hadoop: Impala, Hive, SparkSQL
Online Incremental Systems: Twitter's Summingbird [PA1, C4], Google's Mesa [PA2], Koverse's Aggregation Framework

(Slide charts position each class of system along four axes: Query Latency (milliseconds to minutes), # of Users (few to many/large), Freshness (seconds to hours), and Data Size (thousands to billions); S, M, and K mark Summingbird, Mesa, and Koverse.)

Online Incremental Systems: Common Components
• Aggregations are computed/reduced incrementally via associative operations
• Results are mostly pre-computed, so queries are inexpensive
• Aggregations, keyed by dimensions, are stored in a low-latency, scalable key-value store

(Slide diagrams: a Summingbird program feeds data to both a Hadoop job over HDFS and a Storm topology over queues, reducing into batch and online key-value stores that a client library merges for the client. Mesa batches data on Colossus; a compaction worker reduces singleton updates (e.g. 61-70) into cumulatives (61-80, 61-90) over a base (0-60), and query servers reduce at read time. Koverse writes records to Apache Accumulo via the Koverse server, reduces in Hadoop jobs that produce aggregates, and reduces again in min/maj compaction iterators and a scan iterator.)

Details

Ingest (1/2)
• We bulk import RFiles rather than writing via a BatchWriter
• The failure case is simpler, as we can retry the whole batch if an aggregation job or a bulk import fails
• BatchWriters can be used, but code needs to be written to handle Mutations that are uncommitted, and there is no rollback for successful commits

Ingest (2/2)
• As a consequence of importing (usually small) RFiles, we will be compacting more
• In testing (20 nodes, 200+ jobs/day), we have not had to tweak compaction thresholds or strategies
• This can possibly be attributed to the relatively small amount of data being held at any given time due to reduction

Accumulo Iterator

Combiner Iterator: a SortedKeyValueIterator that combines the Values for different versions (timestamps) of a Key within a row into a single Value. The Combiner will replace one or more versions of a Key and their Values with the most recent Key and a Value which is the result of the reduce method.
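The serialize/deserialize requirement described earlier can be sketched with a long-sum monoid; this is a hypothetical illustration, not the framework's actual wire format. Partial sums are serialized between stages and deserialized before being combined again:

```java
import java.nio.ByteBuffer;

public class SerializedSum {
    static long zero() { return 0L; }
    static long plus(long a, long b) { return a + b; }
    // Fixed-width big-endian encoding of the aggregate.
    static byte[] serialize(long v) { return ByteBuffer.allocate(8).putLong(v).array(); }
    static long deserialize(byte[] b) { return ByteBuffer.wrap(b).getLong(); }

    public static void main(String[] args) {
        // One upstream stage reduces 1 + 2 to 3 and serializes it...
        byte[] left = serialize(plus(plus(zero(), 1L), 2L));
        // ...another reduces 3 + 4 to 7 and serializes it...
        byte[] right = serialize(plus(plus(zero(), 3L), 4L));
        // ...a downstream stage deserializes both partials and combines them.
        long total = plus(deserialize(left), deserialize(right));
        System.out.println(total); // prints 10
    }
}
```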

Our Combiner

We can re-use Accumulo's Combiner type here:

```scala
override def reduce(key: Key, values: Iterator[Value]): Value = {
  val sum = agg.reduceAll(values.map(v => agg.deserialize(v)))
  agg.serialize(sum)
}
```

Our function has to be commutative because major compactions will often pick smaller sets of files to combine, which means a single iterator invocation may only see a discrete subset of the data.

Accumulo Table Structure

| row | colf | colq | visibility | timestamp | value |
| --- | --- | --- | --- | --- | --- |
| field1Name\x1Ffield1Value\x1Ffield2Name\x1Ffield2Value... | AggregationType | relation | visibility | timestamp | Serialized aggregation results |

Examples:
• origin\x1FBWI count: [U] 6074
• origin\x1FBWI topk:destination [U] {DIA: 1}
• origin\x1FBWI\x1Fdate\x1F20150427 count: [U] 104
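The row keys above join dimension names and values with the unit-separator byte (\x1F). A minimal sketch of building such a key (the `AggKey` helper is hypothetical, not part of the framework):

```java
public class AggKey {
    // Unit separator, shown as \x1F in the examples above.
    static final char SEP = '\u001F';

    // Join alternating dimension names and values into a row key.
    static String row(String... fieldNameValuePairs) {
        return String.join(String.valueOf(SEP), fieldNameValuePairs);
    }

    public static void main(String[] args) {
        // "What is the count for origin=BWI on 2015-04-27?" becomes a single
        // point lookup, because the row key encodes the dimensions in order.
        String row = row("origin", "BWI", "date", "20150427");
        System.out.println(row.replace(SEP, '|')); // prints origin|BWI|date|20150427
    }
}
```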

Visibilities (1/2)
• Easy to store, a bit tougher to query
• Data can be stored at separate visibilities
• Combiner logic has no concept of visibility; it only loops over a given PartialKey.ROW_COLFAM_COLQUAL
• We know how to combine values (Longs, CountMinSketches), but how do we combine visibilities?

Visibilities (2/2)

Say we have some data on Facebook photo albums:
• facebook\x1falbum_size count: [public] 800
• facebook\x1falbum_size count: [private] 100

The combined value would be 900, but what should we return for the visibility of public + private? We need more context to properly interpret this value. Alternatively, we can just drop it.

Queries

This schema is geared towards point queries, and the order of fields matters.
• GOOD: What are the top-k destinations from BWI?
• NOT GOOD: What are all the dimensions and aggregations I have for BWI?

Demo

References

Presentations
• P1. Algebra for Analytics: https://speakerdeck.com/johnynek/algebra-for-analytics