Spark/Cassandra integration, theory and practice


TRANSCRIPT

Page 1: Spark cassandra integration, theory and practice

@doanduyhai

Spark/Cassandra integration Theory and Practice DuyHai DOAN, Technical Advocate

Page 2: Spark cassandra integration, theory and practice

@doanduyhai

Who Am I?!
Duy Hai DOAN, Cassandra technical advocate
•  talks, meetups, confs
•  open-source dev (Achilles, …)
•  OSS Cassandra point of contact

[email protected] ☞ @doanduyhai

2

Page 3: Spark cassandra integration, theory and practice

@doanduyhai

Datastax!
•  Founded in April 2010

•  We contribute a lot to Apache Cassandra™

•  400+ customers (25 of the Fortune 100), 200+ employees

•  Headquartered in the San Francisco Bay Area

•  EU headquarters in London, offices in France and Germany

•  Datastax Enterprise = OSS Cassandra + extra features

3

Page 4: Spark cassandra integration, theory and practice

Spark & Cassandra Presentation!

Spark & its eco-system!
Cassandra & token ranges!

Stand-alone cluster deployment!

Page 5: Spark cassandra integration, theory and practice

@doanduyhai

What is Apache Spark?!
Apache project since 2010
General data processing framework
Faster than Hadoop: in-memory processing
One-framework-many-components approach

5

Page 6: Spark cassandra integration, theory and practice

@doanduyhai

Spark code example!

Setup

val conf = new SparkConf(true)
  .setAppName("basic_example")
  .setMaster("local[3]")

val sc = new SparkContext(conf)

Data set (can be from text, CSV, JSON, Cassandra, HDFS, …)

val people = List(("jdoe", "John DOE", 33),
                  ("hsue", "Helen SUE", 24),
                  ("rsmith", "Richard Smith", 33))

6

Page 7: Spark cassandra integration, theory and practice

@doanduyhai

RDDs!
RDD = Resilient Distributed Dataset

val parallelPeople: RDD[(String, String, Int)] = sc.parallelize(people)

val extractAge: RDD[(Int, (String, String, Int))] = parallelPeople
  .map(tuple => (tuple._3, tuple))

val groupByAge: RDD[(Int, Iterable[(String, String, Int)])] = extractAge.groupByKey()

val countByAge: Map[Int, Long] = groupByAge.countByKey()

7

Page 8: Spark cassandra integration, theory and practice

@doanduyhai

RDDs!
RDD[A] = distributed collection of A
•  RDD[Person]
•  RDD[(String, Int)], …

RDD[A] split into partitions
Partitions distributed over n workers → parallel computing

8
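A tiny sketch of that split (a hypothetical local run; element and partition counts are illustrative):

val rdd = sc.parallelize(1 to 8, numSlices = 4)  // 8 elements spread over 4 partitions
println(rdd.partitions.length)                   // 4 — each partition can be computed in parallel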

Page 9: Spark cassandra integration, theory and practice

@doanduyhai

Direct transformations!

9

map(f: A => B): RDD[B] filter(f: A => Boolean): RDD[A] …
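As a minimal illustration, reusing the people RDD from the earlier example (field meanings assumed from that slide):

val ages: RDD[Int] = parallelPeople.map(tuple => tuple._3)  // extract the age field
val over30: RDD[Int] = ages.filter(age => age >= 30)        // keep ages >= 30 — still no shuffle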

Page 10: Spark cassandra integration, theory and practice

@doanduyhai

Transformations requiring shuffle!

10

groupByKey(): RDD[(K,V)] reduceByKey(f: (V,V) => V): RDD[(K,V)] join[W](otherRDD: RDD[(K,W)]): RDD[(K, (V,W))] …
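A hedged sketch continuing the same example: reduceByKey produces the per-age count while shuffling less data than groupByKey, because partial sums are merged locally before going over the network:

val countByAge2: RDD[(Int, Long)] = extractAge
  .map { case (age, _) => (age, 1L) }  // one (age, 1) pair per person
  .reduceByKey(_ + _)                  // shuffle only the partial sums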

Page 11: Spark cassandra integration, theory and practice

@doanduyhai

Actions!

11

collect(): Array[A] take(number: Int): Array[A] foreach(f: A => Unit): Unit …
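Again as a sketch on the running example — actions are what actually trigger the computation:

val all: Array[(Int, Long)] = countByAge2.collect()  // bring everything to the driver (small data only!)
val two: Array[(Int, Long)] = countByAge2.take(2)    // fetch just 2 elements
countByAge2.foreach { case (age, n) => println(s"$age -> $n") }  // the side effect runs on the workers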

Page 12: Spark cassandra integration, theory and practice

@doanduyhai

Partitions transformations!

12

[Diagram: the example pipeline viewed per RDD partition — map(tuple => (tuple._3, tuple)) is a direct transformation; groupByKey() forces a shuffle (expensive!); countByKey() is the final action collecting results from the partitions]

Page 13: Spark cassandra integration, theory and practice

@doanduyhai

Spark eco-system!

[Diagram: Spark eco-system — components Spark Streaming, MLLib, GraphX, Spark SQL running on the Spark Core Engine (Scala/Java/Python); Cluster Manager layer: Local, Standalone cluster, YARN, Mesos; Persistence layer: etc…]

13

Page 14: Spark cassandra integration, theory and practice

@doanduyhai

Spark eco-system!

[Diagram: same Spark eco-system stack as the previous slide]

14

Page 15: Spark cassandra integration, theory and practice

@doanduyhai

What is Apache Cassandra?!
Apache project since 2009
Distributed NoSQL database
Eventual consistency (A & P of the CAP theorem)
Distributed table abstraction

15

Page 16: Spark cassandra integration, theory and practice

@doanduyhai

Cassandra data distribution reminder!
Random: token = hash(#partition)
Hash range: ]-X, X], where X is a huge number (2⁶⁴/2 = 2⁶³)

[Ring diagram: 8 Cassandra nodes n1–n8]

16

Page 17: Spark cassandra integration, theory and practice

@doanduyhai

Cassandra token ranges!
A: ]0, X/8]
B: ]X/8, 2X/8]
C: ]2X/8, 3X/8]
D: ]3X/8, 4X/8]
E: ]4X/8, 5X/8]
F: ]5X/8, 6X/8]
G: ]6X/8, 7X/8]
H: ]7X/8, X]

Murmur3 hash function

[Ring diagram: nodes n1–n8, each owning one token range A–H]

17

Page 18: Spark cassandra integration, theory and practice

@doanduyhai

Linear scalability!

[Ring diagram: nodes n1–n8 owning ranges A–H; partition keys user_id1–user_id5 hashed onto the ring]

18

Page 19: Spark cassandra integration, theory and practice

@doanduyhai

Linear scalability!

[Ring diagram: same as the previous slide — nodes n1–n8 owning ranges A–H, partition keys user_id1–user_id5 hashed onto the ring]

19

Page 20: Spark cassandra integration, theory and practice

@doanduyhai

Cassandra Query Language (CQL)!

INSERT INTO users(login, name, age) VALUES('jdoe', 'John DOE', 33);

UPDATE users SET age = 34 WHERE login = 'jdoe';

DELETE age FROM users WHERE login = 'jdoe';

SELECT age FROM users WHERE login = 'jdoe';

20

Page 21: Spark cassandra integration, theory and practice

Spark & Cassandra Connector!

Spark Core API!
SparkSQL!

SparkStreaming!

Page 22: Spark cassandra integration, theory and practice

@doanduyhai

Spark/Cassandra connector architecture!
All Cassandra types supported and converted to Scala types
Server-side data filtering (SELECT … WHERE …)
Uses the Java driver underneath
Scala and Java support

22

Page 23: Spark cassandra integration, theory and practice

@doanduyhai

Connector architecture – Core API!
Cassandra tables exposed as Spark RDDs

Read from and write to Cassandra

Mapping of C* tables and rows to Scala objects
•  CassandraRDD and CassandraRow
•  Scala case class (object mapper)
•  Scala tuples

23
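A minimal sketch of the three mapping styles (keyspace, table and columns are hypothetical):

import com.datastax.spark.connector._

case class Person(login: String, name: String, age: Int)

// 1. Untyped access through CassandraRow
val rows = sc.cassandraTable("test_ks", "users")
val ages = rows.map(row => row.getInt("age"))

// 2. Object mapper onto a Scala case class
val persons = sc.cassandraTable[Person]("test_ks", "users")

// 3. Scala tuples, selecting only the needed columns
val pairs = sc.cassandraTable[(String, Int)]("test_ks", "users").select("login", "age")

// Writing back is symmetric
persons.saveToCassandra("test_ks", "users", SomeColumns("login", "name", "age"))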

Page 24: Spark cassandra integration, theory and practice

Spark Core

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 25: Spark cassandra integration, theory and practice

@doanduyhai

Connector architecture – Spark SQL!
Mapping of Cassandra tables to SchemaRDD
•  CassandraSQLRow → SparkRow
•  custom query plan
•  push predicates down to CQL for early filtering

SELECT * FROM user_emails WHERE login = 'jdoe';

25
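As a sketch, this is roughly how the example query could be issued through the connector's CassandraSQLContext of that era (the keyspace name is an assumption):

import org.apache.spark.sql.cassandra.CassandraSQLContext

val cc = new CassandraSQLContext(sc)
// The login predicate is pushed down to CQL for early, server-side filtering
val emails = cc.sql("SELECT * FROM test_ks.user_emails WHERE login = 'jdoe'")
emails.collect().foreach(println)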

Page 26: Spark cassandra integration, theory and practice

Spark SQL

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 27: Spark cassandra integration, theory and practice

@doanduyhai

Connector architecture – Spark Streaming!
Streaming data INTO Cassandra tables
•  trivial setup
•  be careful about your Cassandra data model when ingesting an infinite stream!

Streaming data OUT of Cassandra tables (CDC)?
•  work in progress …

27
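A minimal streaming-INTO-Cassandra sketch (socket source, keyspace and table are hypothetical), using the connector's streaming extension:

import org.apache.spark.streaming.{Seconds, StreamingContext}
import com.datastax.spark.connector._
import com.datastax.spark.connector.streaming._

val ssc = new StreamingContext(sc, Seconds(5))

ssc.socketTextStream("localhost", 9999)
   .map(word => (word, 1L))
   .reduceByKey(_ + _)
   .saveToCassandra("test_ks", "word_counts", SomeColumns("word", "count"))  // one write per micro-batch

ssc.start()
ssc.awaitTermination()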

Page 28: Spark cassandra integration, theory and practice

Spark Streaming

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 29: Spark cassandra integration, theory and practice

Q & A


Page 30: Spark cassandra integration, theory and practice

Spark/Cassandra best practices!

Data locality!
Failure handling!

Cross-region operations!

Page 31: Spark cassandra integration, theory and practice

@doanduyhai

Cluster deployment!

[Diagram: stand-alone cluster — 5 nodes, each co-locating a Cassandra process (C*) with a Spark worker (SparkW); one node also hosts the Spark master (SparkM)]

Stand-alone cluster

31

Page 32: Spark cassandra integration, theory and practice

@doanduyhai

Cluster deployment!

[Diagram: Driver Program → Spark Master → 4 Spark Workers, each launching an Executor, each co-located with a Cassandra process (C*)]

Cassandra – Spark placement: 1 Cassandra process ⟷ 1 Spark worker

32

Page 33: Spark cassandra integration, theory and practice

@doanduyhai

Remember token ranges?!
A: ]0, X/8]
B: ]X/8, 2X/8]
C: ]2X/8, 3X/8]
D: ]3X/8, 4X/8]
E: ]4X/8, 5X/8]
F: ]5X/8, 6X/8]
G: ]6X/8, 7X/8]
H: ]7X/8, X]

[Ring diagram: nodes n1–n8 owning token ranges A–H]

33

Page 34: Spark cassandra integration, theory and practice

@doanduyhai

Data Locality!

[Diagram: co-located cluster — each Spark RDD partition maps onto the Cassandra token range(s) of its local node]

34

Page 35: Spark cassandra integration, theory and practice

@doanduyhai

Data Locality!

[Diagram: same co-located cluster as the previous slide]

Use Murmur3Partitioner

35

Page 36: Spark cassandra integration, theory and practice

@doanduyhai

Read data locality!
Read from Cassandra

36

Page 37: Spark cassandra integration, theory and practice

@doanduyhai

Read data locality!
Spark shuffle operations

37

Page 38: Spark cassandra integration, theory and practice

@doanduyhai

Write to Cassandra without data locality!

Async batches fan-out writes to Cassandra

38

Because of shuffle, original data locality is lost

Page 39: Spark cassandra integration, theory and practice

@doanduyhai

Write to Cassandra with data locality!

Write to Cassandra

rdd.repartitionByCassandraReplica("keyspace","table")

39

Page 40: Spark cassandra integration, theory and practice

@doanduyhai

Write data locality!

40

•  either stream data in the Spark layer using repartitionByCassandraReplica()
•  or flush data to Cassandra with async batches (a write-tuning sketch follows below)
•  in any case, there will be data movement on the network (sorry, no magic)
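A hedged sketch of the async-batch path (table, columns and tuning values are assumptions; the exact WriteConf fields vary by connector version):

import com.datastax.spark.connector._
import com.datastax.spark.connector.writer.{RowsInBatch, WriteConf}

val users = sc.parallelize(Seq(("jdoe", "John DOE", 33)))  // toy data
users.saveToCassandra("test_ks", "users",
  SomeColumns("login", "name", "age"),
  WriteConf(batchSize = RowsInBatch(100)))  // group 100 rows per async batch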

Page 41: Spark cassandra integration, theory and practice

@doanduyhai

Joins with data locality!

41

CREATE TABLE artists(name text, style text, … PRIMARY KEY(name));

CREATE TABLE albums(title text, artist text, year int,… PRIMARY KEY(title));

val join: CassandraJoinRDD[(String,Int), (String,String)] =
  sc.cassandraTable[(String,Int)](KEYSPACE, ALBUMS)
    // Select only useful columns for join and processing
    .select("artist", "year")
    .as((_: String, _: Int))
    // Repartition RDD by the "artists" PK, which is "name"
    .repartitionByCassandraReplica(KEYSPACE, ARTISTS)
    // Join with the "artists" table, selecting only "name" and "country" columns
    .joinWithCassandraTable[(String,String)](KEYSPACE, ARTISTS, SomeColumns("name", "country"))
    .on(SomeColumns("name"))

Page 42: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

42

LOCAL READ FROM CASSANDRA

Page 43: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

43

SHUFFLE DATA WITH SPARK

Page 44: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

44

REPARTITION TO MAP CASSANDRA REPLICA

Page 45: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

45

JOIN WITH DATA LOCALITY

Page 46: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

46

ANOTHER ROUND OF SHUFFLING

Page 47: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

47

REPARTITION AGAIN FOR CASSANDRA

Page 48: Spark cassandra integration, theory and practice

@doanduyhai

Joins pipeline with data locality!

48

SAVE TO CASSANDRA WITH LOCALITY

Page 49: Spark cassandra integration, theory and practice

@doanduyhai

Perfect data locality scenario!

49

•  read locally from Cassandra
•  use operations that do not require a shuffle in Spark (map, filter, …)
•  repartitionByCassandraReplica() → to a table having the same partition key as the original table
•  save back into this Cassandra table
(a sketch of this pipeline follows below)

USE CASE: sanitize, validate, normalize, transform data
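A sketch of that pipeline under the stated assumptions (hypothetical users / users_clean tables sharing the login partition key, and a toy sanitizer):

import com.datastax.spark.connector._

def sanitize(name: String): String = name.trim.toLowerCase  // toy normalization step

sc.cassandraTable[(String, String, Int)]("test_ks", "users")        // local read from Cassandra
  .map { case (login, name, age) => (login, sanitize(name), age) }  // shuffle-free transform
  .repartitionByCassandraReplica("test_ks", "users_clean")          // route rows to their replicas
  .saveToCassandra("test_ks", "users_clean",                        // save back with locality
    SomeColumns("login", "name", "age"))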

Page 50: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!
Stand-alone cluster

50

[Diagram: stand-alone cluster — C* + Spark worker on every node; Spark master on one node]

Page 51: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!
What if 1 node goes down? What if 1 node is overloaded?

51

[Diagram: same stand-alone cluster]

Page 52: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!
What if 1 node goes down? What if 1 node is overloaded?
☞ the Spark master will re-assign the job to another worker

52

[Diagram: same stand-alone cluster, with the job re-assigned to another worker]

Page 53: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!

Oh no, my data locality !!!

53

Page 54: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!

54

Page 55: Spark cassandra integration, theory and practice

@doanduyhai

Data Locality Impl!

55

Remember the RDD interface?

abstract class RDD[T](…) {
  @DeveloperApi
  def compute(split: Partition, context: TaskContext): Iterator[T]

  protected def getPartitions: Array[Partition]

  protected def getPreferredLocations(split: Partition): Seq[String] = Nil
}

Page 56: Spark cassandra integration, theory and practice

@doanduyhai

Data Locality Impl!

56

def getPreferredLocations(split: Partition) returns the IP addresses of the Cassandra replicas owning the token range behind this Spark partition
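A schematic of that hook (not the connector's actual implementation — the partition class and replica wiring are invented for illustration):

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical partition: one token range plus the replicas that own it
case class TokenRangePartition(index: Int, replicas: Seq[String]) extends Partition

class LocalityAwareRDD(sc: SparkContext, ranges: Seq[TokenRangePartition])
  extends RDD[String](sc, Nil) {

  override protected def getPartitions: Array[Partition] = ranges.toArray[Partition]

  // The locality hook: Spark tries to schedule each task on one of these hosts
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    split.asInstanceOf[TokenRangePartition].replicas

  // Schematic compute: a real implementation would scan the token range via the Java driver
  override def compute(split: Partition, context: TaskContext): Iterator[String] =
    Iterator.empty
}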

Page 57: Spark cassandra integration, theory and practice

@doanduyhai

Failure handling!
If RF > 1, the Spark master chooses the next preferred location, which is a replica 😎

Tune parameters (a configuration sketch follows below):
①  spark.locality.wait
②  spark.locality.wait.process
③  spark.locality.wait.node

57

[Diagram: same stand-alone cluster — the re-assigned worker reads from a local replica]
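These are ordinary Spark properties; a configuration sketch (values in milliseconds, purely illustrative):

import org.apache.spark.SparkConf

val tunedConf = new SparkConf(true)
  .setAppName("locality_tuning")
  // how long to wait for a data-local slot before falling back to a less local one
  .set("spark.locality.wait", "3000")
  .set("spark.locality.wait.process", "3000")
  .set("spark.locality.wait.node", "3000")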

Page 58: Spark cassandra integration, theory and practice

@doanduyhai

Cross-DC operations!

val confDC1 = new SparkConf(true)
  .setAppName("data_migration")
  .setMaster("master_ip")
  .set("spark.cassandra.connection.host", "DC_1_hostnames")
  .set("spark.cassandra.connection.local_dc", "DC_1")

val confDC2 = new SparkConf(true)
  .setAppName("data_migration")
  .setMaster("master_ip")
  .set("spark.cassandra.connection.host", "DC_2_hostnames")
  .set("spark.cassandra.connection.local_dc", "DC_2")

val sc = new SparkContext(confDC1)

sc.cassandraTable[Performer](KEYSPACE, PERFORMERS)
  .map[Performer](???)
  .saveToCassandra(KEYSPACE, PERFORMERS)
    (CassandraConnector(confDC2), implicitly[RowWriterFactory[Performer]])

58

Page 59: Spark cassandra integration, theory and practice

@doanduyhai

Cross-cluster operations!

val confCluster1 = new SparkConf(true)
  .setAppName("data_migration")
  .setMaster("master_ip")
  .set("spark.cassandra.connection.host", "cluster_1_hostnames")

val confCluster2 = new SparkConf(true)
  .setAppName("data_migration")
  .setMaster("master_ip")
  .set("spark.cassandra.connection.host", "cluster_2_hostnames")

val sc = new SparkContext(confCluster1)

sc.cassandraTable[Performer](KEYSPACE, PERFORMERS)
  .map[Performer](???)
  .saveToCassandra(KEYSPACE, PERFORMERS)
    (CassandraConnector(confCluster2), implicitly[RowWriterFactory[Performer]])

59

Page 60: Spark cassandra integration, theory and practice

Spark/Cassandra use-cases!

Data cleaning!
Schema migration!

Analytics!

Page 61: Spark cassandra integration, theory and practice

@doanduyhai

Use Cases!

Load data from various sources

Analytics (join, aggregate, transform, …)

Sanitize, validate, normalize, transform data

Schema migration, Data conversion

61

Page 62: Spark cassandra integration, theory and practice

@doanduyhai

Data cleaning use-cases!

62

Bug in your application? Dirty input data? ☞ Spark job to clean it up! (perfect data locality)

Sanitize, validate, normalize, transform data

Page 63: Spark cassandra integration, theory and practice

Data Cleaning

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 64: Spark cassandra integration, theory and practice

@doanduyhai

Schema migration use-cases!

64

Business requirements change with time? Current data model no longer relevant? ☞ Spark job to migrate the data!

Schema migration, Data conversion

Page 65: Spark cassandra integration, theory and practice

Data Migration

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 66: Spark cassandra integration, theory and practice

@doanduyhai

Analytics use-cases!

66

Given existing tables of performers and albums, I want:

①  the top 10 most common music styles (pop, rock, R&B, …)

②  performer productivity (album count) by origin country and by decade

☞ Spark job to compute analytics! (a sketch for ① follows below)

Analytics (join, aggregate, transform, …)
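A sketch for question ① under an assumed schema (a performers table with name and style columns):

import com.datastax.spark.connector._

// Top 10 most common music styles
val topStyles = sc.cassandraTable[(String, String)]("music", "performers")
  .select("name", "style")
  .map { case (_, style) => (style, 1L) }  // one vote per performer
  .reduceByKey(_ + _)                      // count per style
  .map(_.swap)                             // (count, style) so we can sort by count
  .sortByKey(ascending = false)
  .take(10)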

Page 67: Spark cassandra integration, theory and practice

@doanduyhai

Analytics pipeline!

67

①  Read from production transactional tables

②  Perform aggregation with Spark

③  Save back data into dedicated tables for fast visualization

④  Repeat step ①

Page 68: Spark cassandra integration, theory and practice

Analytics

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 69: Spark cassandra integration, theory and practice

The “Big Data Platform”!

Page 70: Spark cassandra integration, theory and practice

@doanduyhai

Our vision!

70

We had a dream …

to provide a Big Data Platform … built for the performance & high availability demands of IoT, web & mobile applications

Page 71: Spark cassandra integration, theory and practice

@doanduyhai

Until now in Datastax Enterprise!
Cassandra + Solr in the same JVM
Unlock full text search power for Cassandra
CQL syntax extension

71

SELECT * FROM users WHERE solr_query = 'age:[33 TO *] AND gender:male';

SELECT * FROM users WHERE solr_query = 'lastname:*schwei?er';

Page 72: Spark cassandra integration, theory and practice

Cassandra + Solr

Page 73: Spark cassandra integration, theory and practice

@doanduyhai

Now with Spark!
Cassandra + Spark
Unlock full analytics power for Cassandra
Spark/Cassandra connector

73

Page 74: Spark cassandra integration, theory and practice

@doanduyhai

Tomorrow (DSE 4.7)!
Cassandra + Spark + Solr
Unlock full text search + analytics power for Cassandra

74

Page 75: Spark cassandra integration, theory and practice

@doanduyhai

Tomorrow (DSE 4.7)!
Cassandra + Spark + Solr
Unlock full text search + analytics power for Cassandra

75

Page 76: Spark cassandra integration, theory and practice

@doanduyhai

How to speed up Spark jobs?!
Brute-force solution for rich people:
•  add more RAM (100 GB, 200 GB, …)

76

Page 77: Spark cassandra integration, theory and practice

@doanduyhai

How to speed up Spark jobs?!
Smart solution for broke people:
•  filter the initial dataset as much as possible

77

Page 78: Spark cassandra integration, theory and practice

@doanduyhai

The idea!
①  Filter as much as possible with Cassandra and Solr

②  Fetch only a small data set into memory

③  Aggregate with Spark

☞ near-real-time interactive analytics queries become possible with restrictive criteria

78

Page 79: Spark cassandra integration, theory and practice

@doanduyhai

Datastax Enterprise 4.7!
With a 3rd component for full text search … how to preserve data locality?

79

Page 80: Spark cassandra integration, theory and practice

@doanduyhai

Stand-alone search cluster caveat!
The bad way v1: perform the search from the Spark "driver program"

80

Page 81: Spark cassandra integration, theory and practice

@doanduyhai

Stand-alone search cluster caveat!The bad way v2: search from Spark workers with restricted routing

81

Page 82: Spark cassandra integration, theory and practice

@doanduyhai

Stand-alone search cluster caveat!The bad way v3: search from Cassandra nodes with a connector to Solr

82

Page 83: Spark cassandra integration, theory and practice

@doanduyhai

Stand-alone search cluster caveat!
The ops team won't be your friend
•  3 clusters to manage: Spark, Cassandra & Solr/whatever
•  lots of moving parts

Impacts on Spark jobs
•  increased response time due to network latency
•  the 99.9 percentile latency can be very high

83

Page 84: Spark cassandra integration, theory and practice

@doanduyhai

Datastax Enterprise 4.7!
The right way: distributed "local search"

84

Page 85: Spark cassandra integration, theory and practice

@doanduyhai

val join: CassandraJoinRDD[(String,Int), (String,String)] =
  sc.cassandraTable[(String,Int)](KEYSPACE, ALBUMS)
    // Select only useful columns for join and processing
    .select("artist", "year")
    .where("solr_query = 'style:*rock* AND ratings:[3 TO *]'")
    .as((_: String, _: Int))
    .repartitionByCassandraReplica(KEYSPACE, ARTISTS)
    .joinWithCassandraTable[(String,String)](KEYSPACE, ARTISTS, SomeColumns("name", "country"))
    .on(SomeColumns("name"))
    .where("solr_query = 'age:[20 TO 30]'")

Datastax Enterprise 4.7!

①  compute Spark partitions using Cassandra token ranges
②  on each partition, use Solr for local data filtering (no distributed query!)
③  fetch the data back into Spark for aggregation

85

Page 86: Spark cassandra integration, theory and practice

@doanduyhai

Datastax Enterprise 4.7!

86

SELECT … FROM …
WHERE token(#partition) > 3X/8
AND token(#partition) <= 4X/8
AND solr_query = 'full text search expression';

[Diagram: advantages of same-JVM Cassandra + Solr integration for token range ]3X/8, 4X/8] — ① token-range-restricted CQL query with a solr_query predicate, ② single-pass local full text search (no fan-out), ③ data retrieval]

Page 87: Spark cassandra integration, theory and practice

@doanduyhai

Datastax Enterprise 4.7!
Scalable solution: ×2 volume → ×2 nodes → constant processing time

87

Page 88: Spark cassandra integration, theory and practice

Cassandra + Spark + Solr

https://github.com/doanduyhai/Cassandra-Spark-Demo

Page 89: Spark cassandra integration, theory and practice

Q & A


Page 90: Spark cassandra integration, theory and practice

Thank You @doanduyhai

[email protected]

https://academy.datastax.com/