Designing for Distributed Systems with Reactor and Reactive Streams

Post on 03-Aug-2015



Stephane  Maldini  @smaldini


WHY DISTRIBUTED SYSTEMS ARE THE NEW NORMAL?

SCALING MONOLITHIC APPS

Payments Adult

E-Learning Entertainment

Music Search

HOW SHOULD YOU SCALE?

Entertainment

E-Learning

Payments

Adult

Music

Search

Requests

MICROSERVICES

Dependencies ?

The  Speed  Of  Light

I wish I acted in Star Trek. Teleporting, anyone?

There's latency between each remote call

Let's use asynchronous processing

I shall not block incoming requests to keep serving

Thread Executor!

Vanilla background-processing

private ExecutorService threadPool = Executors.newFixedThreadPool(2);
final List<T> batches = new ArrayList<T>();

//…
Callable<T> t = new Callable<T>() {
    public T call() {                  // Callable defines call(), not run()
        T result = callDatabase();
        synchronized (batches) {
            batches.add(result);
            return result;
        }
    }
};

Future<T> f = threadPool.submit(t);
T result = f.get();                    // blocks until the task completes

New allocation by request

Queue-based message passing

What if this message fails?
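A minimal compilable version of the slide's sketch, assuming a hypothetical callDatabase() stand-in and String results in place of the generic T. It makes the slide's point concrete: even though the work runs on a pool thread, f.get() still blocks the caller.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class VanillaBackgroundProcessing {
    private static final ExecutorService threadPool = Executors.newFixedThreadPool(2);
    private static final List<String> batches = new ArrayList<>();

    // Hypothetical stand-in for a slow remote call
    static String callDatabase() { return "row"; }

    public static String fetchOne() throws Exception {
        Callable<String> task = () -> {
            String result = callDatabase();
            synchronized (batches) {        // guard the shared accumulator
                batches.add(result);
            }
            return result;
        };
        Future<String> f = threadPool.submit(task);
        return f.get();                     // blocks the calling thread until done
    }

    public static void shutdownPool() { threadPool.shutdown(); }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchOne());     // prints "row"
        shutdownPool();
    }
}
```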

Mixing latency with queue-based handoff

http://ferd.ca/queues-don-t-fix-overload.html

Queues must be bounded

Passing messages with little overhead

Tolerating some difference in consuming rates

Reactor!
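The three requirements above can be sketched with nothing more than a plain JDK bounded queue: a fixed capacity, cheap handoff, and a short offer timeout that absorbs small rate differences while refusing unbounded pile-up. A toy illustration (names and sizes are illustrative, not Reactor API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedHandoff {
    public static int acceptedOutOf(int offered, int capacity) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(capacity); // bounded
        int accepted = 0;
        for (int i = 0; i < offered; i++) {
            // Wait briefly for a free slot, then give up: the producer learns
            // about overload instead of queueing messages without limit
            if (queue.offer(i, 1, TimeUnit.MILLISECONDS)) accepted++;
        }
        return accepted;   // with no consumer draining, at most `capacity` fit
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acceptedOutOf(10, 4)); // prints 4
    }
}
```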

Pre-allocating slots -> The Ring Buffer

Publishing events to the right slot

dude!

Re-using threads too: the Event Loop

I'm an event loop, consuming messages in the right order
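How does an ever-growing sequence number land in a fixed set of pre-allocated slots? With a power-of-two buffer size, the slot index is a cheap bit mask of the sequence, a common ring-buffer implementation trick (the slot counts below are illustrative):

```java
public class RingSlots {
    // Maps a monotonically increasing sequence onto a fixed slot index;
    // bufferSize must be a power of two for the mask to be valid
    public static int slotFor(long sequence, int bufferSize) {
        return (int) (sequence & (bufferSize - 1));
    }

    public static void main(String[] args) {
        // 32 slots: sequences 0..31 fill the buffer, then 32 wraps to slot 0
        System.out.println(slotFor(0, 32));   // 0
        System.out.println(slotFor(31, 32));  // 31
        System.out.println(slotFor(32, 32));  // 0
        System.out.println(slotFor(45, 32));  // 13
    }
}
```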

//RingBufferProcessor with 32 slots by default
RingBufferProcessor<Integer> processor = RingBufferProcessor.create();

//Subscribe to receive events
processor.subscribe(
    //Create a subscriber from a lambda/method ref
    SubscriberFactory.unbounded((data, s) -> System.out.println(data))
);

//Dispatch data asynchronously
int i = 0;
while (i++ < 100000) processor.onNext(i);

//Terminate the processor
processor.shutdown();

//A second subscriber receives the same events in a distinct thread
processor.subscribe(SubscriberFactory.unbounded((data, s) -> {
    //A slow callback returning false when no longer interested in the data
    if (!sometimeSlow(data)) {
        //Shut down the consumer thread
        s.cancel();
    }
}));

Hold on! The guy said a bounded number of slots

So we still block when the buffer is full!

…So why send more requests?

Reactive Streams!

What is defined in Reactive Streams

Async non-blocking data sequence

Async non-blocking flow control

Minimal resource requirements

Interoperable protocol (Threads, Nodes…)

What is reactive flow control?

PUBLISHER <-> SUBSCRIPTION <-> SUBSCRIBER

Events: PUBLISHER -> SUBSCRIBER

Demand: SUBSCRIBER -> PUBLISHER (via the SUBSCRIPTION)

All of that in 4 interfaces!

And there's a TCK to verify implementations

No more unwanted requests

Need to propagate that demand upstream!
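The interface shapes below follow the published Reactive Streams 1.0 spec (normally in org.reactivestreams; reproduced here so the example is self-contained). RangePublisher is a toy single-threaded sketch, not a spec-compliant implementation: it just shows the core idea that the publisher emits only as much as the subscriber has requested.

```java
import java.util.ArrayList;
import java.util.List;

public class FlowControlDemo {

    // The Reactive Streams contract, minus Processor, as in the 1.0 spec
    interface Publisher<T> { void subscribe(Subscriber<? super T> s); }
    interface Subscriber<T> {
        void onSubscribe(Subscription s);
        void onNext(T t);
        void onError(Throwable t);
        void onComplete();
    }
    interface Subscription { void request(long n); void cancel(); }

    // Emits integers in [start, end), but only as demand is signalled
    public static List<Integer> consume(int start, int end, long demand) {
        List<Integer> received = new ArrayList<>();
        Publisher<Integer> range = sub -> sub.onSubscribe(new Subscription() {
            int next = start;
            public void request(long n) {
                while (n-- > 0 && next < end) sub.onNext(next++);
                if (next == end) sub.onComplete();
            }
            public void cancel() { next = end; }
        });
        range.subscribe(new Subscriber<Integer>() {
            public void onSubscribe(Subscription s) { s.request(demand); }
            public void onNext(Integer i) { received.add(i); }
            public void onError(Throwable t) { }
            public void onComplete() { }
        });
        return received;   // never more than `demand` elements were pushed
    }

    public static void main(String[] args) {
        System.out.println(consume(0, 10, 3)); // prints [0, 1, 2]
    }
}
```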


Some Smart Guys Involved

Doug Lea – SUNY Oswego

RingBufferProcessor<Integer> processor = RingBufferProcessor.create();
//Subscribe to receive events […]

//Data access gated by a Publisher with backpressure
PublisherFactory.forEach(
    sub -> {
        if (sub.context().hasNext())
            sub.onNext(sub.context().readInt());
        else
            sub.onComplete();
    },
    sub -> sqlContext(),
    context -> context.close()
).subscribe(processor);

//Terminate the processor […]

Connect the processor to this publisher and start requesting

For the newly connected processor, create an SQL context

Keep invoking this callback until there is no more pending demand

What about combining multiple asynchronous calls

And everything in a controlled fashion

Including errors and completion

Reactive Extensions!

FlatMap  and  Monads..  Nooooo  Please  No

Streams.just('doge').flatMap { name ->
    Streams.just(name)
        .observe { println 'so wow' }
        .map { 'much monad' }
}.consume { assert it == 'much monad' }

A publisher that only sends "doge" on request

Sub-stream definition

All sub-streams are merged into a single sequence

Scatter Gather and Fault Tolerance

Streams.merge(
    userService.filteredFind("Rick"),  // Stream of User
    userService.filteredFind("Morty")  // Stream of User
)
.buffer()                              // Accumulate all results in a List
.retryWhen( errors ->                  // Stream of Errors
    errors
        .zipWith(Streams.range(1, 3), t -> t.getT2())
        .flatMap( tries -> Streams.timer(tries) )
)
.consume(System.out::println);

Interleaved merge from 2 upstream publishers

Up to 3 tries

Delay retry
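The retryWhen/zipWith/timer combination above encodes "up to 3 tries, with a growing delay between them". The same idea in plain blocking Java, for comparison (method names and the flaky-call helper are illustrative, not Reactor API):

```java
import java.util.function.Supplier;

public class RetryDemo {
    // Retry a call up to maxTries times, sleeping longer before each retry
    public static <T> T retryWithBackoff(Supplier<T> call, int maxTries, long baseDelayMs)
            throws InterruptedException {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxTries; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;                                 // remember the failure
                if (attempt < maxTries)
                    Thread.sleep(baseDelayMs * attempt);  // delay grows per attempt
            }
        }
        throw last;                                       // exhausted all tries
    }

    // Helper that always fails, simulating a flaky remote call
    public static String throwBoom() { throw new IllegalStateException("boom"); }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt
        String result = retryWithBackoff(
                () -> (++calls[0] < 3) ? throwBoom() : "ok", 3, 1);
        System.out.println(result + " after " + calls[0] + " calls");
    }
}
```

The stream version is non-blocking where this sketch sleeps; that difference is the whole point of expressing the retry as a stream of errors.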

REACTOR 2™ = REACTOR + REACTIVE EXTENSIONS + REACTIVE STREAMS

reactor-core: Core Dispatchers & Processors; Functional artifacts (SAM components, tuples, timers); Fast Data [buffer, codec]

reactor-bus: Event Bus

reactor-streams: Streams and Promises

reactor-net: NetStreams [Clients/Server] [TCP, UDP, HTTP]

ASYNC IO?

NetStreams.<String, String>httpServer(spec ->
    spec.codec(StandardCodecs.STRING_CODEC).listen(3000)
).ws("/", channel -> {
    System.out.println("Connected a websocket client: " + channel.remoteAddress());

    return somePublisher
        .window(1000)
        .flatMap(s -> channel.writeWith(
            s.reduce(0f, (prev, trade) -> (trade.getPrice() + prev) / 2)
             .map(Object::toString)
        ));
}).start().await();

Listen on port 3000 and convert bytes to Strings inbound/outbound

Upgrade clients to websocket on the root URI

Flush data every 1000 elements with writeWith + window

Close the connection when flatMap completes, which is when all windows are done
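What window(1000) does, in miniature: cut a sequence into fixed-size batches so each batch can be flushed as a single write. A plain-Java sketch over a list (sizes here are tiny for illustration; the real operator works on an unbounded asynchronous stream):

```java
import java.util.ArrayList;
import java.util.List;

public class WindowSketch {
    // Split `source` into consecutive batches of at most `size` elements
    public static <T> List<List<T>> window(List<T> source, int size) {
        List<List<T>> windows = new ArrayList<>();
        for (int i = 0; i < source.size(); i += size) {
            windows.add(new ArrayList<>(
                source.subList(i, Math.min(i + size, source.size()))));
        }
        return windows;
    }

    public static void main(String[] args) {
        // The last window may be shorter, like a final partial flush
        System.out.println(window(List.of(1, 2, 3, 4, 5), 2)); // [[1, 2], [3, 4], [5]]
    }
}
```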

THE STATE OF THE ART

Now

reactive-streams 1.0.0

reactor-*.2.0.3.RELEASE - 2.0.4 around the corner

reactor-*.2.1.0.BUILD-SNAPSHOT - early access, no breaking changes

Guide 50% complete: http://projectreactor.io/docs/reference

Initial Reactor 2 support in: Spring Integration 4.2, Spring Messaging 4.2, Spring Boot 1.3, Spring XD 1.2, Grails 3.0

After Now

Spring Integration DSL + Reactive Streams: Dynamic Subscribers on Predefined Channels!

Best tool for each job: SI for integrating, Reactor for scaling up. Too fast to be true

SI Java DSL + Reactive Streams Preview

@Configuration
@EnableIntegration
public static class ContextConfiguration {

    @Autowired
    private TaskScheduler taskScheduler;

    @Bean
    public Publisher<Message<String>> reactiveFlow() {
        return IntegrationFlows
            .from("inputChannel")
            .split(String.class, p -> p.split(","))
            .toReactiveStreamsPublisher();
    }

    @Bean
    public Publisher<Message<Integer>> pollableReactiveFlow() {
        return IntegrationFlows
            .from("inputChannel")
            .split(e -> e.get().getT2().setDelimiters(","))
            .<String, Integer>transform(Integer::parseInt)
            .channel(Channels::queue)
            .toReactiveStreamsPublisher(this.taskScheduler);
    }
}

After Now

Reactor + Spring Cloud
• Annotation-driven FastData
• Async IO (proxy, client)
• Circuit breaker, Bus, …

Reactor + Spring XD
• Scale up any XD pipeline
• Reactive backpressure in XD

@EnableReactorModule(concurrency = 5)
public class PongMessageProcessor implements ReactiveProcessor<Message, Message> {

    @Override
    public void accept(Stream<Message> inputStream, ReactiveOutput<Message> output) {
        output.writeOutput(
            inputStream
                .map(simpleMap())
                .observe(simpleMessage())
        );
    }

    //…
}

Split the input XD channel across 5 threads with a blazing fast Processor

Register the sequence to write to the output channel after some operations

After

<3 Spring 5 + Reactor & Reactive Streams <3

Reactive IPC for the JVM: <3 RxNetty + reactor-net <3

Reactor 2.0.3+ already previews the concept and API flavor…

because REACTOR DOESN'T KNOW WAITING (LOL)

Future??

• RxJava 2.0 timeline (2016 and after?)
• Thanks to interop, Reactive Extensions and naming conventions, it can converge with reactor-streams
• https://github.com/ReactiveX/RxJava/wiki/Reactive-Streams

Take-away

• Distributed systems are the new cool and come at some cost; two big ones are latency and failure tolerance

• Asynchronous processing and error handling by design deal with these two problems -> Reactor, Reactive Extensions / Streams

• However, to fully operate, asynchronous processing should be bounded proactively (stop-read) -> Reactive Streams

REACTIVE

Extra

Take-away links

• http://reactive-streams.org
• http://projectreactor.io
• https://github.com/spring-projects/spring-integration-java-dsl/pull/31
• https://github.com/smaldini/spring-xd/tree/refresh-reactor-module/spring-xd-reactor/
• https://github.com/reactive-ipc/reactive-ipc-jvm
• https://github.com/smaldini/ReactiveStreamsExamples
