Hazelcast and MongoDB at Cloud CMS


DESCRIPTION

This is a presentation given on October 24 by Michael Uzquiano of Cloud CMS (http://www.cloudcms.com) at the MongoDB Boston conference. In this presentation, we cover Hazelcast, an in-memory data grid that provides distributed object persistence across multiple nodes in a cluster. When backed by MongoDB, objects placed into Hazelcast are written through to MongoDB. The integration points are clean and easy to implement. We cover a few simple cases along with code samples to give the MongoDB community ideas for integrating Hazelcast into their own MongoDB Java applications.

TRANSCRIPT

Distributed Caching with Hazelcast + MongoDB

at Cloud CMS

Michael Uzquiano

uzi@cloudcms.com
@uzquiano

Agenda

• What is Cloud CMS?

• Why we chose MongoDB

• What is Hazelcast?

• Code Samples

• Implementation with MongoDB

http://www.cloudcms.com

• The fastest, easiest and most cost-effective way to build cloud-connected web and mobile applications.

• An application server in the cloud for cloud-connected applications

• Built to leverage the strengths of MongoDB

• Hosted or On-Premise

Mobile Apps

Touch Apps

Application Experiences

Consumer Experiences

Cloud CMS provides

• Content Management

• Users, Groups, Roles, Permissions

• Personalization (Behavioral Targeting)

• Analytics, Reporting

• Identity Management

• Integrated Services

• Email Campaigns, CRM, Integrated Billing

Silos

Keep things cost-effective

• iOS, Android, Windows Mobile

• JavaScript, Java, PHP, Ruby, Node.js

• jQuery, jQuery Mobile, Dojo, YUI, Sencha Touch, Titanium, PhoneGap

Mobile Ready

The right tool for the job

• JSON

• Query and Indexing

• Doesn’t overstep into application domain
• No transactions
• No referential integrity
• No triggers, foreign keys, procedures

• Gets out of the way so we can tackle these

Community + Momentum

• Really great language drivers

• Community contributors

• Frequent release schedule

• Exciting roadmap

Performance

• Very fast
• Anywhere from 2 to 10x faster than MySQL
• About 50 times faster than CouchDB
• Lots of benchmarks, but the point is, it’s fast!

• Sharding built-in, automatic, and *Just Works™
• *Just Works™ guarantee applies only if you have a cluster of shard replica sets with config servers and routing servers, and you define your own shard key(s) with appropriate uniformity and granularity

• Asynchronous replication for redundancy/failover

What is Hazelcast?

• In-Memory Data Grid (IMDG)
• Clustering and highly scalable data distribution solution for Java
• Distributed Data Structures for Java
• Distributed Hashtable (DHT) and more

What does Hazelcast do?

• Scale your application
• Share data across the cluster
• Partition your data
• Send/receive messages
• Balance load across the cluster
• Process in parallel on many JVMs

Advantages

• Open source (Apache License)
• Super light, simple, no-dependency
• Distributed/partitioned implementation of map, queue, set, list, lock and executor service
• Transactional (JCA support)
• Topic for pub/sub messaging
• Cluster info and membership events
• Dynamic clustering, backup, fail-over

Data Partitioning in a Cluster

If you have 5 million objects in your 5-node cluster, then each node will carry 1 million objects and 1 million backup objects.

(Diagram: data and backups partitioned across Server1 through Server5)

SuperClient in a Cluster

• -Dhazelcast.super.client=true
• As fast as any member in the cluster
• Holds no data

(Diagram: SuperClient connected to Server1 through Server5, holding no data)

Code Samples

Code Samples – Cluster Interface

import com.hazelcast.core.*;
import java.util.Set;

Cluster cluster = Hazelcast.getCluster();
cluster.addMembershipListener(listener);

Member localMember = cluster.getLocalMember();
System.out.println(localMember.getInetAddress());

Set setMembers = cluster.getMembers();
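The listener passed to addMembershipListener above is not shown on the slide. A minimal sketch of what it could look like (the class name and log messages are assumptions):

import com.hazelcast.core.MembershipEvent;
import com.hazelcast.core.MembershipListener;

public class LoggingMembershipListener implements MembershipListener {

    public void memberAdded(MembershipEvent event) {
        // called on every node when a member joins the cluster
        System.out.println("Member added: " + event.getMember());
    }

    public void memberRemoved(MembershipEvent event) {
        // called on every node when a member leaves the cluster
        System.out.println("Member removed: " + event.getMember());
    }
}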

Code Samples – Distributed Map

import com.hazelcast.core.Hazelcast;
import java.util.Map;

Map<String, User> map = Hazelcast.getMap("users");

map.put("1", user);

User user = map.get("1");
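Values placed into a distributed map are sent across the network, so they need to be serializable. A minimal sketch of the User class used above (this class is assumed; it is not shown on the slides):

import java.io.Serializable;

public class User implements Serializable {

    private String id;
    private String name;

    public User(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() { return id; }
    public String getName() { return name; }
}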

Code Samples – Distributed Queue

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<Task> queue = Hazelcast.getQueue("tasks");

queue.offer(task);

Task t = queue.poll();
Task t = queue.poll(5, TimeUnit.SECONDS);

Code Samples – Distributed Set

import com.hazelcast.core.Hazelcast;
import java.util.Set;

Set<Price> set = Hazelcast.getSet("IBM-Quote-History");

set.add(new Price(10, time1));
set.add(new Price(11, time2));
set.add(new Price(13, time3));

for (Price price : set) {
    // process price
}

Code Samples – Distributed Lock

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;

Lock mylock = Hazelcast.getLock(mylockobject);
mylock.lock();
try {
    // do something
} finally {
    mylock.unlock();
}

Code Samples – Distributed Topic

import com.hazelcast.core.*;

public class Sample implements MessageListener {

    public static void main(String[] args) {
        Sample sample = new Sample();
        Topic topic = Hazelcast.getTopic("default");
        topic.addMessageListener(sample);
        topic.publish("my-message-object");
    }

    public void onMessage(Object msg) {
        System.out.println("Got msg: " + msg);
    }
}

Code Samples – Distributed Events

import com.hazelcast.core.IMap;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.EntryListener;
import com.hazelcast.core.EntryEvent;

public class Sample implements EntryListener {

    public static void main(String[] args) {
        Sample sample = new Sample();
        IMap map = Hazelcast.getMap("default");
        map.addEntryListener(sample, true);
        map.addEntryListener(sample, "key");
    }

    public void entryAdded(EntryEvent event) {
        System.out.println("Added " + event.getKey() + ":" + event.getValue());
    }

    public void entryRemoved(EntryEvent event) {
        System.out.println("Removed " + event.getKey() + ":" + event.getValue());
    }

    public void entryUpdated(EntryEvent event) {
        System.out.println("Updated " + event.getKey() + ":" + event.getValue());
    }

    public void entryEvicted(EntryEvent event) {
        // required by EntryListener (see the interface later in the deck); not shown on the original slide
        System.out.println("Evicted " + event.getKey() + ":" + event.getValue());
    }
}

Code Samples – Executor Service

import com.hazelcast.core.DistributedTask;
import com.hazelcast.core.Hazelcast;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.FutureTask;

FutureTask<String> futureTask = new DistributedTask<String>(new Echo(input), member);

ExecutorService es = Hazelcast.getExecutorService();
es.execute(futureTask);

String result = futureTask.get();
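The Echo task is referenced above but not shown on the slide. A minimal sketch, assuming it simply returns its input; a distributed task has to be both Callable and Serializable so it can be shipped to the target member:

import java.io.Serializable;
import java.util.concurrent.Callable;

public class Echo implements Callable<String>, Serializable {

    private String input;

    public Echo(String input) {
        this.input = input;
    }

    public String call() {
        // executes on the member the DistributedTask was targeted at
        return input;
    }
}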

Sample Configuration

<hazelcast>
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <interface>192.168.1.2-5</interface>
                <hostname>istanbul.acme</hostname>
            </tcp-ip>
        </join>
        <interfaces enabled="false">
            <interface>10.3.17.*</interface>
        </interfaces>
    </network>
    <executor-service>
        <core-pool-size>16</core-pool-size>
        <max-pool-size>64</max-pool-size>
        <keep-alive-seconds>60</keep-alive-seconds>
    </executor-service>
    <queue name="tasks">
        <max-size-per-jvm>10000</max-size-per-jvm>
    </queue>
</hazelcast>
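A hazelcast.xml found on the classpath or in the working directory is picked up automatically. A configuration file at an explicit path can also be loaded programmatically, roughly like this (a sketch; the file path is an assumption):

import com.hazelcast.config.Config;
import com.hazelcast.config.XmlConfigBuilder;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// XmlConfigBuilder(String) throws FileNotFoundException if the file is missing
Config config = new XmlConfigBuilder("/etc/hazelcast/hazelcast.xml").build();
HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);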

Distributed Job Queue
Elasticity with Cloud CMS

Distributed Job Queue

Upload of a 20 page PDF

Write PDF to GridFS
Add 2 jobs to queue (each to build 10 PNGs)

Distributed Job Queue

Upload of a 20 page PDF

Write PDF to GridFS
Add 2 jobs to queue (each to build 10 PNGs)

Uh oh…

Distributed Job Queue

Upload of a 20 page PDF

Write PDF to GridFS
Add 2 jobs to queue (each to build 10 PNGs)

Distributed Job Queue

Upload of a 20 page PDF

Write PDF to GridFS
Add 2 jobs to queue (each to build 10 PNGs)

Let’s do this…

Hey there!

Distributed Job Queue

Job Scheduler determines which jobs get priority

Each worker picks a job from the queue and works on it

Jobs may run asynchronously (returns once the transaction is complete)
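A worker pulling jobs off the distributed queue could look roughly like this (a minimal sketch; the Job type, its execute() method, and the running flag are assumptions, not Cloud CMS code):

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<Job> jobs = Hazelcast.getQueue("jobs");

while (running) {
    // block for up to 5 seconds waiting for the next job
    Job job = jobs.poll(5, TimeUnit.SECONDS);
    if (job != null) {
        job.execute();   // e.g. render 10 pages of the PDF to PNG
    }
}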

Distributed Job Queue

Distributed Job Queue

Implementation with MongoDB

com.hazelcast.core.MapStore

MapStore

public interface MapStore<K,V> extends MapLoader<K,V> {
    void store(K k, V v);
    void storeAll(Map<K,V> kvMap);
    void delete(K k);
    void deleteAll(Collection<K> ks);
}

public interface MapLoader<K,V> {
    V load(K k);
    Map<K,V> loadAll(Collection<K> ks);
    Set<K> loadAllKeys();
}

MapLoaderLifecycleSupport

public interface MapLoaderLifecycleSupport {
    void init(HazelcastInstance hazelcastInstance, Properties properties, String mapName);
    void destroy();
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    private Mongo mongo;
    private DBCollection col;

    public void init(HazelcastInstance hazelcastInstance, Properties properties, String mapName) {

        this.mongo = new Mongo("localhost", 27017);

        String dbname = properties.getProperty("db");
        String cname = properties.getProperty("collection");

        DB db = this.mongo.getDB(dbname);
        this.col = db.getCollection(cname);
    }
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    private Mongo mongo;
    private DBCollection col;

    ...

    public void destroy() {
        this.mongo.close();
    }

    public Set loadAllKeys() {
        // returning null tells Hazelcast not to pre-load any keys
        return null;
    }
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    private Mongo mongo;
    private DBCollection col;

    ...

    public Set loadAllKeys() {
        Set keys = new HashSet();

        // project only the _id field of each document
        DBObject fields = new BasicDBObject("_id", 1);

        DBCursor cursor = this.col.find(new BasicDBObject(), fields);
        while (cursor.hasNext()) {
            keys.add(cursor.next().get("_id"));
        }

        return keys;
    }
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    ...

    public void store(Object k, Object v) {
        // convert(v) maps the value object to a DBObject
        DBObject dbObject = convert(v);
        dbObject.put("_id", k);

        this.col.save(dbObject);
    }

    public void delete(Object k) {
        DBObject dbObject = new BasicDBObject();
        dbObject.put("_id", k);

        this.col.remove(dbObject);
    }
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    ...

    public void storeAll(Map map) {
        for (Object key : map.keySet()) {
            store(key, map.get(key));
        }
    }

    public void deleteAll(Collection keys) {
        for (Object key : keys) {
            delete(key);
        }
    }
}

public class MongoUsersMapStore implements MapStore, MapLoaderLifecycleSupport {

    ...

    public void storeAll(Map map) {
        for (Object key : map.keySet()) {
            store(key, map.get(key));
        }
    }

    public void deleteAll(Collection keys) {
        BasicDBList dbo = new BasicDBList();
        for (Object key : keys) {
            dbo.add(new BasicDBObject("_id", key));
        }
        BasicDBObject dbb = new BasicDBObject("$or", dbo);
        this.col.remove(dbb);
    }
}
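The same batched delete can also be written with MongoDB's $in operator, which matches any document whose _id appears in the given list; a sketch of that alternative:

public void deleteAll(Collection keys) {
    // match every document whose _id appears in the key list
    BasicDBList idList = new BasicDBList();
    idList.addAll(keys);
    this.col.remove(new BasicDBObject("_id", new BasicDBObject("$in", idList)));
}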

Spring Config

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:hz="http://www.hazelcast.com/schema/spring">

    <hz:hazelcast id="hzInstance">
        <hz:config>
            <hz:map name="users"
                    backup-count="1"
                    max-size="2000"
                    eviction-percentage="25"
                    eviction-policy="LRU"
                    merge-policy="hz.LATEST_UPDATE">
                <hz:map-store enabled="true" write-delay-seconds="0" implementation="mymap" />
            </hz:map>
        </hz:config>
    </hz:hazelcast>

    <bean id="mymap" class="org.sample.MongoUsersMapStore" />

</beans>
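With write-delay-seconds set to 0 the map store runs in write-through mode, so a put is persisted to MongoDB before the call returns. A quick usage sketch (assuming the hzInstance bean is injected as a HazelcastInstance, and reusing the hypothetical User class from earlier):

IMap<String, User> users = hzInstance.getMap("users");

// put() invokes MongoUsersMapStore.store() before returning (write-through)
users.put("1", new User("1", "Michael"));

// a get() for a key not held in memory falls through to MongoUsersMapStore.load()
User loaded = users.get("1");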

Implementation with MongoDB

com.hazelcast.core.EntryListener

EntryListener

public interface EntryListener<K,V> {
    void entryAdded(EntryEvent<K, V> event);
    void entryUpdated(EntryEvent<K, V> event);
    void entryRemoved(EntryEvent<K, V> event);
    void entryEvicted(EntryEvent<K, V> event);
}

EntryListener

public class UserCacheEntryListener implements EntryListener<String, User> {

    private Map<String, User> cache;

    public User getUser(String key) {
        return cache.get(key);
    }

    public void entryAdded(EntryEvent<String, User> event) {
        cache.put(event.getKey(), event.getValue());
    }

    public void entryUpdated(EntryEvent<String, User> event) {
        cache.put(event.getKey(), event.getValue());
    }

    public void entryRemoved(EntryEvent<String, User> event) {
        cache.remove(event.getKey());
    }

    public void entryEvicted(EntryEvent<String, User> event) {
        cache.remove(event.getKey());
    }
}
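Registering the listener against the distributed map (a sketch, consistent with the earlier Distributed Events sample; the map name matches the Spring config above):

IMap<String, User> map = Hazelcast.getMap("users");

// true = deliver the entry value with each event, not just the key
map.addEntryListener(new UserCacheEntryListener(), true);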

EntryListener

(Diagram: Node 1, Node 2, and Node 3, each holding a local cache: Cache 1, Cache 2, Cache 3)

Spring Framework

• Spring Data for MongoDB
• http://www.springsource.org/spring-data/mongodb
• com.hazelcast.spring.mongodb.MongoMapStore

• Based on Spring API Template pattern

• com.hazelcast.spring.mongodb.MongoTemplate

• Easy to get started, base implementation

• You still might want to roll your own

Questions?

• Michael Uzquiano
• uzi@cloudcms.com
• @uzquiano

• Cloud CMS
• http://www.cloudcms.com
• @cloudcms

• Hazelcast
• http://www.hazelcast.com
• https://github.com/hazelcast/hazelcast
