How to Bake Reactive Behavior into Your Java EE Applications


Upload: ondrej-mihalyi

Post on 21-Jan-2017


TRANSCRIPT


How To Bake Reactive Behavior Into Your Java EE Applications

Ondrej Mihályi

@omihalyi

[email protected]

Agenda

Reactive Support in Java EE 7

Java 8 goodness on Reactive Implementations

Payara Micro Additions

Live demo

Q&A

-ondrej -mert

Reactive Manifesto

Responsive

Elastic

Resilient

Message Driven

Application requirements have changed in recent years: we went from tens of servers and gigabytes of data to thousands of multi-core processors and petabytes of data. We now want applications that are more flexible, loosely coupled from each other, and able to scale by any factor. The idea behind the Reactive Manifesto is to address those needs by building responsive, elastic, resilient, message-driven applications.

Responsive means the system responds in a timely manner, which increases the usability of the application. With asynchronous implementation models, the user can get partial responses, and the executing code parts do not block each other, for the sake of responsiveness. The manifesto also states that the application should be resilient, meaning it stays responsive if a problem occurs. In order to be responsive, that is, to get back to the user quickly on every request, the application should be elastic: it should increase or decrease its resources according to the load. That is also the cost-effective way to implement the reactive approach. To keep components loosely coupled, reactive systems favor a message-driven approach, which allows non-blocking communication between the components.

Of course, applications are not inherently like this: there is a good deal of traditional blocking API, such as JDBC, heavily used inside applications. And monolithic architectures need to evolve in order to support the reactive approach, which means time and money. It is also hard to do reactive coding; you end up dealing with callbacks and threads spawning everywhere.

There are, of course, frameworks that embody the reactive approach from the ground up, but they are completely new frameworks like Vert.x; as a Java EE developer you would have to learn everything from scratch.

So, Java EE-wise, we'll be talking about where you can introduce the reactive approach into your existing code base, where it adds the most value. For that reason, we converted the famous cargo-tracker application into a reactive cargo-tracker ;)

Traditional approach

Request started

external resource required (DB, WS, files)

request external resource

external resource is slow

WHAT DO WE DO?

When dealing with incoming requests, the traditional approach is one thread per request; everything is sequential, and waiting for slow resources like a DB or external services adds to the request processing time.

-mert

Traditional approach

WE SIMPLY WAIT

Just a transition slide; the next slide automatically appears after 10 seconds.

-mert

Traditional approach

WE SIMPLY WAIT

Simple

Thread-safe

Often Sufficient (when waiting time is negligible)

But waiting is not bad if we can afford it. It is simple, as sequential code is easier than parallel code, and it is thread-safe. And we can afford it in many scenarios, so the additional complexity of avoiding blocking calls is not required in many cases.

But

-ondrej

At this point a warning finger is raised with a "But", and a small pause to watch the video.

If there are too many slow blocking calls involved, often many resources (threads, CPU) are being wasted during waiting.

But if waiting is too long

Thread resources are wasted

Processing may halt if all threads waiting

-ondrej

And what if all threads are waiting? You may guess the system is far from responsive. The application may stop processing new requests and even crash under high load. Being responsive is one of the key aspects of the Reactive Manifesto, and if our application fails to meet it too often, it becomes unusable and therefore not suitable for production. So let's look at how responsiveness can be improved.

Spawn a separate thread

Idea:
- Blocking call in a new thread
- Do something while waiting
- Retrieve results
- Fail after timeout vs. block infinitely

How can we improve on this? Let's introduce more threads, so that we can continue processing something else in the original thread while waiting. The `Future` API supports this concept in Java.

We gained more control over waiting. Now we can decide not to wait for a result after a timeout, releasing resources and returning an error response. But users don't like error messages, do they?
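The Future-based approach described above can be sketched in plain Java SE. The slow call and its 100 ms delay are hypothetical stand-ins for a blocking resource such as a database query:

```java
import java.util.concurrent.*;

public class FutureExample {

    // Stand-in for a slow blocking resource (e.g. a JDBC query).
    static String slowCall() throws InterruptedException {
        Thread.sleep(100);
        return "result";
    }

    // Run the blocking call in a separate thread, keep working,
    // then retrieve the result with a timeout instead of blocking forever.
    public static String fetchWithTimeout() {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<String> future = executor.submit(FutureExample::slowCall);
            // ... do something useful in the current thread while waiting ...
            return future.get(1, TimeUnit.SECONDS);  // fail after a timeout
        } catch (TimeoutException e) {
            return "timed out";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchWithTimeout());
    }
}
```

The `get(timeout, unit)` overload is what gives us the choice between failing after a timeout and blocking indefinitely.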

Drawbacks

Complexity

keep asking "Are you ready?"

Requires one more thread

blocked while waiting

There are also technical drawbacks to this approach. The API may become complex, requiring the caller to check for results iteratively, or to fall back to waiting indefinitely once further processing depends on the result. It is also not resource-efficient; it even consumes more threads now, just to give us more control over waiting.

Finish in a new thread!

Idea:

Blocking call in another thread

Transfer context to another thread

No need to wait for the blocking call

But new thread may still be blocked

If we let any thread finish the response, there's no need to wait for threads to finish. We may simply release threads as their work is done, and the task that finishes last completes the request. We only need to pass the context of the request between threads.

But since the blocking call is still involved, there is always at least one thread being blocked by it. We still need to add something more to the recipe to utilize resources effectively.

…and asynchronous calls

Idea:

Provide a callback

Call finishes immediately

Callback called in a new thread

Finish request in the callback

No need to wait at all

But, is it possible?

Ideally, we should not block any thread while waiting for a resource. We only need to handle the response from the resource in another thread. The catch is that the API to access the resource must provide a non-blocking call, which accepts a callback.

So the recipe for making the application responsive and resource-efficient is:
- the ability to transfer the request context to another thread
- an asynchronous API to avoid blocking

And the question is: how does Java EE support these two concepts?

Possible in Enterprise?

New tools and frameworks

High risks and costs

Fully reactive approach

High cost of development

Harder to avoid and track bugs

Advice:

reactive where it's worth it

leave the door open for future

@OMihalyi

-ondrej

Is it possible to build reactive applications in enterprise environment? And is it worth it?

Although Java EE is also suitable in other areas, it already provides a stable and robust platform for building applications in the enterprise. Building new enterprise applications on top of completely new frameworks introduces a great risk. But the real problem is that the majority of enterprise applications are not new, and rebuilding them from scratch on a new platform is not even an option. On the other hand, rebuilding a whole application to meet reactive expectations has its drawbacks too. The reactive approach is not natural for many developers and is not straightforward in most languages. Furthermore, its asynchronous aspect brings new challenges in making sure the code is correct and in tracking and fixing bugs.

Although Java EE might not be the best platform for building fully reactive applications, it provides enough means to significantly improve the reactiveness of existing applications and to build new reactive applications without the investment in learning a completely new framework. An important promise of Java EE is that it makes it simple and fast to build traditional enterprise applications, while leaving the door open for future enhancements and step-by-step refactoring to improve their reactiveness. You can thus start by making only the critical parts of your applications reactive and improve other parts later.

Java EE leaves the door open

Established and wide-spread

Built with resilience in mind (Transactions)

Messaging is first-class citizen (JMS)

Continuous improvements

Asynchronous API, thread-management

Scalability improvements (JCache)

Portable CDI extensions

@OMihalyi

Established and improving platform

small costs of upgrade or redesign

simplifications (EJB3, CDI, JMS 2.0 simplified API)

Improvements of the application where needed

turn many synchronous APIs to async

split the task into multiple threads - managed executor

easily extend with CDI extensions to provide custom reactive features

Asynchronous API in Java EE

Async Servlet request completion

Async JAX-RS request completion

Async JAX-RS client

Async IO in Servlet

Async EJB calls

Managed Executors

WebSockets

Java EE supports passing a request context and finishing the response in another thread:
- Servlet
- JAX-RS endpoint

For example, Servlet IO and the JAX-RS client API provide asynchronous calls with callbacks invoked when a response from a resource is ready. But the truth is that this is not possible with every Java EE API. In that case, you need to resort to wrapping a blocking call in a separate thread. But keep in mind that you should get that thread from a separate thread pool, so that it does not affect other processing while being blocked.
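Wrapping a blocking call in a separate thread pool can be sketched in plain Java SE like this; in a container you would typically inject a ManagedExecutorService instead of creating the pool yourself, and blockingLookup is a made-up stand-in for a blocking API without an async variant (such as JDBC):

```java
import java.util.concurrent.*;

public class WrapBlockingCall {

    // Dedicated pool for blocking work, separate from request threads.
    // Daemon threads so the pool does not keep the JVM alive.
    static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Hypothetical blocking call; the 50 ms delay simulates a slow resource.
    static String blockingLookup(String id) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "record-" + id;
    }

    // The blocking call runs in the dedicated pool; the caller's thread
    // is free to continue and can attach callbacks to the returned future.
    public static CompletableFuture<String> lookupAsync(String id) {
        return CompletableFuture.supplyAsync(() -> blockingLookup(id), BLOCKING_POOL);
    }

    public static void main(String[] args) {
        System.out.println(lookupAsync("42").join());
    }
}
```

The important design point is the second argument to supplyAsync: the blocking work is confined to its own pool, so request-processing threads are never held up by it.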

Async EJB calls provide a simple way to run tasks in separate threads. Managed executors are exactly what you need for finer-grained control over scheduling tasks in separate threads, even in various thread pools.

Finally, WebSockets are convenient to build highly responsive frontends, which are updated as soon as possible and not only after all the data is available.

Async API Example

JAX-RS AsyncResponse @Suspended

@GET
void get(@Suspended AsyncResponse response) {
    response.resume("OK");
}

The JAX-RS 2.0 specification added asynchronous HTTP support via two constructs: the @Suspended annotation and the AsyncResponse interface.

Having an AsyncResponse instance passed as a parameter to the method states that the HTTP request/response should be detached from the currently executing thread, and that the current thread should not try to automatically process the response.

AsyncResponse is a callback object: calling its resume method causes a response to be sent back to the client and also terminates the HTTP request.

-ondrej

Async API Example

JAX-RS async client

resourceTarget
    .request(MediaType.TEXT_PLAIN)
    .async()
    .get(new InvocationCallback<String>() { ... });

For the client side it's also possible to introduce async calls, with the help of the .async() method invocation; the InvocationCallback instance that we provide is the way to react to the server response. Here InvocationCallback is, as you would guess, a callback interface, offering the methods completed() and failed().

-ondrej

Java EE + Java 8

Future → CompletableFuture?

No, not compatible

Callbacks → CompletableFuture

callback triggers cf.complete()

Pass CF as additional parameter to complete it later

Where Java EE provides the means, Java 8 makes it easier to write reactive code, mainly with two new additions:
- lambdas, which make it very easy to build callbacks
- CompletableFuture, which makes it easy to handle asynchronous calls and chain callbacks

But how to combine them effectively, given that Java EE is built on top of Java 7 only? Ideally, CF or its interface counterpart CompletionStage could be used instead of Future. But that would require support in Java EE, which will come no sooner than Java EE 8. Therefore we need to adopt another approach.

We need to create a CF ourselves and pass it to an asynchronous call, either within the callback object or as a parameter to an asynchronous EJB method. The asynchronous call will then invoke either cf.complete() or cf.completeExceptionally() when it finishes.
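This pattern can be sketched as a runnable plain Java SE simulation; fakeAsyncGet is a made-up stand-in for a callback-based API such as the JAX-RS async client, not a real library call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class CompleteInCallback {

    // Simulated callback-based API: invokes the callback on another thread.
    static void fakeAsyncGet(Consumer<String> callback) {
        new Thread(() -> callback.accept("OK")).start();
    }

    // Create a CF ourselves and let the callback complete it later.
    public static CompletableFuture<String> get() {
        CompletableFuture<String> cf = new CompletableFuture<>();
        fakeAsyncGet(cf::complete);  // callback triggers cf.complete()
        return cf;
    }

    public static void main(String[] args) {
        System.out.println(get().join());
    }
}
```

The caller can now chain further processing onto the returned CF (e.g. with thenAccept) without any thread blocking on the call. In a real application, a failed() callback would call cf.completeExceptionally() instead.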

Complete CF in a callback

cf = new CompletableFuture();
webtarget.async()
    .get(new InvocationCallback() {
        public void completed(Object result) {
            cf.complete(result);
        }
    });
cf.thenAccept(result -> ...);

CompletableFuture

Chain callbacks (like promises)

thenRun(), thenApply(),

thenCompose()

Complete execution in any thread

compFuture.complete()

Why is CF so much better than Future? It allows a chain of callbacks executable in various threads, instead of just waiting for a thread to complete. It is also possible to chain CFs themselves using thenCompose, making it easy to chain multiple asynchronous calls.

Furthermore, it provides a standard way to complete the future, either with a value or with an exception.
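Chaining can be sketched in plain Java SE; both "service calls" below are hypothetical stand-ins that complete asynchronously:

```java
import java.util.concurrent.CompletableFuture;

public class CfChaining {

    // Hypothetical async lookup; derives an id from the name for the demo.
    static CompletableFuture<Integer> findCustomerId(String name) {
        return CompletableFuture.supplyAsync(() -> name.length());
    }

    // Hypothetical second async call that depends on the first result.
    static CompletableFuture<String> loadOrders(int customerId) {
        return CompletableFuture.supplyAsync(() -> "orders-of-" + customerId);
    }

    public static String pipeline(String name) {
        return findCustomerId(name)
                .thenApply(id -> id + 1)              // transform the result
                .thenCompose(CfChaining::loadOrders)  // chain another async call
                .join();
    }

    public static void main(String[] args) {
        System.out.println(pipeline("bob"));
    }
}
```

thenApply transforms a value within the chain, while thenCompose flattens a nested CompletableFuture, which is exactly what you want when one asynchronous call feeds the next.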

Concurrency utilities

ctxSvc.createContextualProxy(
    () -> { /* wrapped code */ }, Runnable.class)

Requests span multiple threads

Context must be retained

Managed threads (ExecutorService)

ContextService for non-managed

We usually want to access the same resources during request processing. A request spans multiple threads, and this causes some problems with traditional Java EE: references to resources are bound to a thread and have to be transferred to subsequent threads.

When new threads are provided using the managed executor service, it automatically passes the context to the new threads.

If threads are created using plain Java SE, the context needs to be transferred manually, using the ContextService. In a simple scenario, we can easily wrap a lambda in a contextual proxy using an appropriate functional interface. The proxy will then transfer the thread context before each method call.

Remember, the contextual proxy has to be created in a managed thread. Its methods can then be called in unmanaged threads. It would not work if we created the proxy inside a callback.

Live Demo

Revisit famous cargo-tracker in a reactive way

Available at:
https://github.com/OndrejM-demonstrations/Reactive-CargoTracker

-ondrej -mert

Traditional Design

-ondrej -mert

More Reactive Design

-ondrej -mert

Other parts of being Reactive

We've shown a responsive API

The other 3 reactive concepts:

Resilience

Messaging

Elasticity

-ondrej

There's more to being reactive than just responsiveness. We can continue to extend our analogy with the Kung-fu master to explain the other three concepts:
- we need to be resilient to problems and any damage received from our enemies and keep fighting -> Resilience
- when fighting together with our allies, we shout for help but cannot afford to wait for it and need to continue fighting; the help will come eventually -> hence Messaging for communication
- and of course Elasticity: it would be great if we could clone ourselves to fight more enemies, wouldn't it?

Java EE traditionally offers means to ensure resilience, using transactions and clustering, and provides messaging, mainly via JMS. In the bigger picture, it is usually extended with additional infrastructure to improve resilience and elasticity, e.g. load balancing and service discovery. But this is not enough; we often need to resort to additional features of a specific application server to fill in what is missing. An example of an application server providing more support for building and running reactive applications is Payara Micro.

Payara Micro

Application server as executable JAR

Runs WAR apps from command line

automatic and elastic clustering

spawn many micro services dynamically

replication using distributed cache

shared 60MB runtime, 40MB in heap

www.payara.fish

So Payara Micro enables you to run WAR files from the command line without any application server installation. It is small, around 70 MB, and you can execute it simply with java -jar. You can deploy a web application archive through the command line, and with the Hazelcast integration, each Payara Micro instance that is spawned will automagically cluster with other Payara Micro processes on the network, giving you an automatic and elastic clustering configuration.

If you haven't tried our Micro before, give it a try; we have a good deal of resources on payara.fish, and you can even run Micro on your Raspberry Pi!

Simple messaging and caching

@Inject @Outbound Event<MyMsg> ev;

// handle in a different JVM
void handle(@Observes @Inbound MyMsg ev) { }

Payara Micro Event Bus

events triggered on all nodes

Asynchronous micro services

No need for service registry

On top of CDI events

JCache API

Standard Java API for caching

Distributed cache

So dispatching events across JVMs is as simple as using CDI's outbound event mechanism: you wrap your instance type with Event, fire it, and have it observed in the handle method shown above. One thing to note here: this model behaves like the topic (publish-subscribe) approach of JMS.

So the Event Bus of Payara Micro delivers the events on any distributed node, which brings an asynchronous model to the implementation. Yay! We have reactive micro services now, right? There is no need for a UDDI / service registry definition, and all of this can be implemented with the help of two qualifiers on CDI events, @Outbound and @Inbound.

Payara Micro uses Hazelcast's distributed executor in the background to achieve this.

Dynamic scaling

Just run repeatedly

binds to a free port to avoid port collisions

All instances autoconnect to a cluster

Even across network (multicast)

java -jar payara-micro.jar
--deploy app.war --autoBindHttp

In order to spawn a new server, just run java -jar payara-micro.jar; with the --deploy parameter specify your WAR application, and the --autoBindHttp parameter enables the application server to auto-bind to a free HTTP port to prevent collisions. All spawned Micro instances will auto-connect to a cluster, even across networks.

Even More Reactive

-ondrej -mert

Live Demo

Enhancements to the Cargo Tracker app using Payara Micro features

-ondrej -mert

Fully reactive comes at a greater cost

Dealing with threads, hard to track origin, communication overhead

Don't over-engineer, but leave doors open

Java EE enables gradual improvement

General advice

-ondrej -mert

Questions?

Thank you

-ondrej -mert

@OMihalyi