
  • Adobe LiveCycle ES2 Technical Guide

    Adobe® LiveCycle® Data Services 3 Performance Brief

    LiveCycle Data Services 3 is a scalable, high performance, J2EE™ based server designed to help Java™ enterprise developers rapidly develop Flash based applications.

Introduction

User experience, especially application performance, is critical to Rich Internet Applications (RIAs). To deliver a high level of scalability and performance, Adobe LiveCycle Data Services 3 uses an asynchronous messaging infrastructure at its core to deliver these key capabilities: Server Push, Remoting, and Data Management. This is further enhanced by the use of RTMP (Real-Time Messaging Protocol) and an NIO (New I/O) based server. LiveCycle Data Services 3 adds new features such as the Edge Server, Adaptive Throttling, Reliable Communications, and Conflation that enable users to develop better-performing applications.

This paper reviews the performance characteristics of the LiveCycle Data Services Messaging and Remoting infrastructure, and also provides an overview of the high-performance aspects of the Adobe Flash Platform itself, including Flash Player and the binary AMF messaging protocol. The result is the fastest-performing platform for real-time RIA applications available today.

This PDF contains (as attachments) the actual load testing example, test assets, and instructions needed to run all test scenarios discussed. Customers are encouraged to use these examples to reproduce the results in their own environments or to perform "what-if" scenario testing with test parameters more representative of their own applications. Please refer to Appendix B for additional information.

LiveCycle Data Services 3 is extremely efficient at handling large numbers of messages with very low message latency: it can push up to 400,000 messages per second to 500 concurrent clients with an average latency of less than 15 milliseconds on a single dual-CPU machine.

Figure 1: LiveCycle Data Services can push up to 400,000 messages per second with an average latency under 15 ms

Download Adobe® LiveCycle® Data Services: http://www.adobe.com/go/trylivecycle_dataservices

Adobe® LiveCycle® Data Services Home Page: http://www.adobe.com/products/livecycle/dataservices/

Adobe® LiveCycle® Data Services Developer Center: http://www.adobe.com/devnet/livecycle/dataservices.html


Performance in LiveCycle Data Services 3

The messaging infrastructure is core to LiveCycle Data Services and is used by both Remoting and Data Management. Hence, the performance and scalability of the messaging infrastructure are reflective of the performance of the overall product.

The ability to push messages to the client improves the user experience of RIAs by providing access to data in real time, and LiveCycle Data Services provides such an infrastructure. Depending on the nature of the application, businesses may be interested in different metrics. For example, with a currency trading application the business will likely want latency to be very low. On the other hand, latency may not matter much for an inventory management application, which may be more concerned with data size and bandwidth usage.

We have run specific scenarios to help you understand the impact of different factors on server performance. While we recommend that you run tests specific to your application to determine actual performance, the information in this paper should help you better understand server performance.

We recommend reviewing the Glossary (Appendix C) for a better understanding of the terminology used.

    Impact of server throughput on average latency

    Same message scenario

In this scenario, all clients were subscribed to a single destination, and new messages from the message generator were published to this destination. This setup may be desirable if the client application does not need to selectively subscribe to messages published to a destination. If your application needs to selectively subscribe to messages, refer to the unique message scenario.

    This test was conducted with the following parameters:

    • Message Size: 128 bytes

    • Number of Clients: 500 (single destination, no subtopics were used)

    • Two machine configuration – client and server on separate machines

    LiveCycle Data Services was able to achieve a throughput of 400,000 messages/second with an average latency of 15 milliseconds.

Figure 2: LiveCycle Data Services can push up to 400,000 messages per second with an average latency under 15 ms
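For reference, the server-side message generator used in these tests publishes to a single messaging destination. A minimal Java sketch of such a publisher is shown below, using the flex.messaging server API that LiveCycle Data Services exposes to web application code; the destination id comes from the attached properties files, while the class name and payload are illustrative assumptions rather than the actual test harness.

import flex.messaging.MessageBroker;
import flex.messaging.messages.AsyncMessage;
import flex.messaging.util.UUIDUtils;

public class SameMessagePublisher {

    // Push one payload (for example, 128 bytes) to every client subscribed to the destination.
    public void publishOnce(byte[] payload) {
        MessageBroker broker = MessageBroker.getMessageBroker(null);
        AsyncMessage msg = new AsyncMessage();
        msg.setDestination("MyTopic_Subtopic");        // destination id used by the attached test properties
        msg.setClientId(UUIDUtils.createUUID());
        msg.setMessageId(UUIDUtils.createUUID());
        msg.setTimestamp(System.currentTimeMillis());  // message created time, used later for latency measurement
        msg.setBody(payload);
        broker.routeMessageToService(msg, null);       // the server fans the message out to all subscribers
    }
}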


    Unique message scenario

In this scenario, the clients selectively subscribed to specific subtopics on a destination, and new messages from the message generator were published to these subtopics. No two clients were subscribed to the same subtopic, ensuring that the messages were unique across clients.

    This setup may be desirable if the client application needs to selectively subscribe to messages published to a destination.

    This test was conducted with the following parameters:

    • Message Size: 128 bytes

    • Number of Clients: 500 (subtopics per client: 20)

    • Two machine configuration – client and server on separate machines

    LiveCycle Data Services was able to achieve a throughput of 150,000 messages/second with an average latency of 5ms. The mechanism to create a unique message per subtopic and route the message through a subtopic to the client adds overhead that reduces the maximum throughput achieved from 400,000 messages/second (same message scenario) to 150,000 messages/second.

Figure 3: LiveCycle Data Services can push 150,000 unique messages/sec with an average latency of 5 ms
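In the unique message scenario, the generator additionally targets a subtopic so that only the matching subscriber receives each message. Assuming the same publisher sketch shown earlier, the per-subtopic routing amounts to setting the subtopic header on the message before it is routed; the subtopic name below is illustrative and follows the PerfSubtopic.[0-20] pattern from the attached properties files.

// Route the message to a single subtopic instead of the whole destination.
msg.setHeader(AsyncMessage.SUBTOPIC_HEADER_NAME, "PerfSubtopic.7");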


Impact of concurrent clients on average latency

This test demonstrates the impact on message latency as we vary the number of clients. This test is similar to the same message scenario, except that we increase the number of clients and keep the send rate constant.

This test was conducted with the following parameters:

    • Message Size: 128 bytes

    • Send rate: 2 msg/sec

    • Two machine configuration – client and server on separate machines

A single LiveCycle Data Services server can handle up to 40,000 concurrent users. Compared to the same message scenario, supporting a large number of concurrent users results in a much higher average latency and a lower send rate. (Because the same message is fanned out to every subscriber, a send rate of 2 msg/sec to 40,000 clients corresponds to roughly 80,000 outgoing messages per second.) This may be acceptable in applications where scalability is more important than average latency.

    Figure 4: LiveCycle Data Services can support up to 40,000 concurrent users with an average latency of 400ms


Impact of message size on average latency

This test demonstrates the impact on message latency as we vary the size of the message received by each client. This test is similar to the same message scenario, except that the message size is increased.

    This test was conducted with the following parameters:

    • Number of Clients: 500

    • Send rate: 2 msg/sec

    • Two machine configuration – client and server on separate machines

Message size does impact the performance of the server. The LiveCycle Data Services server was able to send messages of 100 KB while maintaining a send rate of 2 msg/sec (server throughput: 1,000 msg/sec). Clearly, larger messages increase the average latency and reduce the server throughput.

    Figure 5: Average latency increases as message size is increased


    Impact of using NIO on scalability

LiveCycle Data Services supports NIO channels that can be used with HTTP and RTMP. NIO channels use non-blocking I/O and can handle more concurrent connections than a blocking I/O Servlet-based channel.

This test was conducted with the following parameters:

    • Message Size: 128 bytes

    • Send Rate: 1 msg/sec

    • Java Heap Size: -Xmx1024m

NIO channels support 10x more concurrent connections than the blocking Servlet channel. Applications that tend to have large numbers of concurrent, but intermittently active, clients can scale significantly better with NIO than with the Servlet channel.

Figure 6: Using NIO, LiveCycle Data Services can support 10x more concurrent connections than the blocking Servlet channel
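The scalability difference comes from the I/O model rather than anything specific to LiveCycle Data Services: a blocking Servlet channel ties up a thread per connection, while an NIO endpoint multiplexes many connections over a small number of threads. The generic java.nio sketch below illustrates that pattern; it is not the LCDS implementation, and the port number simply reuses the NIO AMF streaming port from the attached test configuration.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(2156));   // port borrowed from the perf-nio-amf-stream channel
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                                // one thread waits on all registered connections
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {                     // new client: register it, no dedicated thread needed
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // data from an intermittently active client
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) < 0) {            // client closed the connection
                        key.cancel();
                        client.close();
                    }
                }
            }
        }
    }
}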


    Impact on latency using LiveCycle Data Services Edge Server

The LiveCycle Data Services Edge Server, installed in the DMZ, proxies requests from the Flex client to the LiveCycle Data Services server installed in the application tier. The Edge Server supports NIO channels and can proxy RTMP(S) and HTTP(S) requests.

    Flex clients can access the LiveCycle Data Services server either directly or through the Edge Server. Accessing through the Edge Server does add latency, and this test measures the latency overhead.

    The test was conducted with the following parameters:

    • Message Size: 128 bytes

    • Number of Clients: 500

    • Send rate: 120 msg/sec per client (Same/Unique Message)

    • Throughput: 60,000 msg/sec (500 clients * 120 msg/sec per client send rate)

    • Channel: HTTP AMF Streaming

    • One machine configuration – client, server and Edge Server on the same machine

    The test shows that using an edge server adds about 0.4 milliseconds to the overall message latency.

    Figure 7: Extra message hop over the Edge Server increases message latency


    Impact of Concurrent Clients on Remoting throughput

    This test demonstrates the impact on Remoting throughput as we vary the number of concurrent clients with one and two LiveCycle Data Services server instances.

This test was conducted with the following parameters:

    • Response Size: 512 bytes

    • Test Duration: 120 seconds

    • Channel: RTMP

    • Number of LCDS instances: 2

    • Two machine configuration – client and server on separate machines

    The results show the LiveCycle Data Services server handling 30,000 concurrent Remoting clients.

    Figure 8: Server can handle 30,000 concurrent Remoting clients


    Impact of Response size on Remoting throughput

    This test demonstrates the impact on Remoting throughput as we vary the response size.

    This test was conducted with the following parameters:

    • Number of clients: 500

    • Channel: RTMP

    • Two machine configuration – client and server on separate machines

    A single LiveCycle Data Services server can handle response sizes of 200 KB per request for 500 concurrent clients.

    Figure 9: Server can handle response sizes of 200 KB per request for 500 concurrent clients
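The Remoting tests invoke a server-side method named getRandomBytes on a destination called PerfRemoteObject (see the attached remoting_rtmp.properties). The actual test class is not included in this brief; a hypothetical Java class backing such a destination could look like the sketch below, where the method simply returns a payload of the requested size. The signature and sizing logic are assumptions.

import java.util.Random;

public class PerfRemoteObject {

    private final Random random = new Random();

    // Return a random payload of the requested size, e.g. 512 bytes for the
    // ResponseSize_512b setting or 204800 bytes for ResponseSize_200kb.
    public byte[] getRandomBytes(int sizeInBytes) {
        byte[] payload = new byte[sizeInBytes];
        random.nextBytes(payload);
        return payload;
    }
}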


    Common questions

How many messages per second can Adobe Flash Player consume, and with what latency?

To answer that question, Christophe Coenraets, an Adobe Evangelist, built a Performance Console and feed generator. The Performance Console allows you to configure the throughput of the server-side feed generator as well as the client subscription details, and then measure the overall performance and health of the system.

In this case, a single Flash Player 10.1 instance is able to consume approximately 2,000 msg/sec on a single ThinkPad W500 laptop.


    Below are some of the tests run with the AIR version of the Performance Console. The first eight columns provide the test parameters. The last three columns provide the actual results of the test.

While it will be rare for an application to require sending 2,000 messages per second to its clients, it is nonetheless good to know that Adobe Flash Player is capable of handling this load level. Throttling messages using the LiveCycle Data Services messaging policy (Ignore, Buffer, Merge, or Custom) that is appropriate for your application is encouraged, to avoid unnecessarily consuming resources and bandwidth.

Note that the default Flash Player frame rate must be increased to process extremely high message volumes when using the RTMP channel. For more details regarding this scenario, and instructions to set up and run the test locally, visit the related blog entry: http://coenraets.org/blog/2010/04/performance-tuning-real-time-trader-desktop-with-flex-and-lcds/

How does the Adobe Flash/Flex and AMF combination perform versus alternative client and transport technologies?

James Ward, an Adobe Evangelist, has built a Performance Console to demonstrate the performance of some mainstream client and transport technologies, including Flex and AMF.

    In short, the combination of Flash, Flex and AMF is one of the fastest performing client and transport combinations available anywhere today, far outperforming most alternatives.

    The application can be found and run here: http://www.jamesward.com/census2/

    Sample results from running the application are below, with Flex and AMF displayed as the first bar:


Conclusion

User experience, especially application performance and responsiveness, is critical to successful Rich Internet Applications. To deliver the highest level of scalability and performance, Adobe LiveCycle Data Services 3, together with the Adobe Flash Platform, provides the best-performing and most scalable end-to-end solution for Rich Internet Applications available anywhere today.

    Performance and capacity planning depend on a number of factors that are unique to each application. This performance brief should help customers understand the performance of LiveCycle Data Services under specific test scenarios. Customers should be able to replicate these tests in their own environment.

    For an accurate representation of performance and capacity planning, Adobe recommends that customers conduct performance testing that is tailored to their application.

    Additional resources

Try LiveCycle Data Services: Download the software for free and see how you can streamline rich Internet application development. http://www.adobe.com/go/trylivecycle_dataservices

Documentation: LiveCycle ES2 and LiveCycle ES2.5 documentation. http://help.adobe.com/en_US/livecycle/9.0/documentation.html#task=0,1,2,3,4,5,6&module=5

Application modeling plug-in: Download the LiveCycle Application Modeling plug-in and begin creating your own user interface.

Developer center: Adobe LiveCycle Data Services. http://www.adobe.com/devnet/livecycle/dataservices.html

Capabilities: Capabilities of Adobe LiveCycle Data Services ES2. http://www.adobe.com/products/livecycle/dataservices/capabilities/

Frequently asked questions: Frequently asked questions about Adobe LiveCycle Data Services ES2. http://www.adobe.com/products/livecycle/dataservices/faq/


Appendix A: Test environment

This section describes the test environment used during the benchmarking tests.

Hardware

Benchmarking tests used the following configuration:

    • System model: HP ProLiant DL380 G6

    • Processor: Dual Quad-Core Intel® Xeon® processor 5500 sequence

    • Memory: 32 GB

Network hardware platform

All benchmark testing used a switched Gigabit Ethernet network fabric configuration to connect the various hardware components. The test network was isolated from any outside traffic.

Operating system: Suse Enterprise Linux 10 SP2

    Server socket optimizations

• The ulimit -n 204800 command was executed for server socket optimization.

    • The following lines were added to the /etc/security/limits.conf file on both the client and server machines to increase the per-process file descriptor limits:

    soft nofile 10240

    hard nofile 30480

    TCP Settings

    • TCP settings in /etc/sysctl.conf file:

    # Disable response to broadcasts.

    # You don’t want yourself becoming a Smurf amplifier.

    net.ipv4.icmp_echo_ignore_broadcasts = 1

    # enable route verification on all interfaces

    net.ipv4.conf.all.rp_filter = 1

    # enable ipV6 forwarding

    #net.ipv6.conf.all.forwarding = 1

    net.core.rmem_max = 16777216

    net.core.wmem_max = 16777216

    net.ipv4.tcp_rmem = 4096 87380 16777216

    net.ipv4.tcp_wmem = 4096 87380 16777216

    net.ipv4.tcp_no_metrics_save = 1

    net.ipv4.tcp_moderate_rcvbuf =1

    net.core.netdev_max_backlog = 2500

    • After the above changes, run the following commands:

    sysctl -p

    ifconfig eth0 txqueuelen 2000


    Clock Synchronization

If multiple machines are used for latency testing, the clocks on the test machines should be in sync.

    The time synchronization method used in our testing is NTP.

Java

• Apache Tomcat server 6.0.20

    • Sun JRE version 1.6.0_18 (build 1.6.0_18-ea-b05)

    • JRE settings for both Server and Load test tool:

    -Xms8192m -Xmx8192m -XX:ParallelGCThreads=8 -XX:GCTimeRatio=10

    -Xms8192m -Xmx8192m

Allocating a larger heap gives young objects more time to die before a minor collection starts. Otherwise, the objects get promoted to the old generation and are collected during a full GC. By preventing objects from being tenured to the old generation, full GC time is minimized. Setting the initial and maximum heap sizes to the same value prevents heap resizing, which eliminates the pause time caused by resizing.

    -XX:ParallelGCThreads=8

    Reduces the number of garbage collection threads. The default would be equal to the processor count, which would be unnecessarily high.

-XX:GCTimeRatio=10

A hint to the virtual machine that it is desirable that not more than 1/(1+GCTimeRatio) of the application execution time be spent in the collector.

For example, -XX:GCTimeRatio=10 sets a goal of spending roughly 1/11 (about 9%) of the total time in GC, with a throughput goal of about 91%. That is, the application should get 10 times as much time as the collector.

    By default the value is 99, meaning the application should get at least 99 times as much time as the collector. That is, the collector should run for not more than 1% of the total time. This was selected as a good choice for server applications. A value that is too high will cause the size of the heap to grow to its maximum.

Also, with a larger heap, a larger GCTimeRatio generally results in shorter pause times.

    Appendix B: Reproduce steps

Contents of the Performance Brief

• Performance Brief

    • Performance Brief Tests – Contains configuration and instructions for individual test cases.

How to reproduce performance brief scenarios

1. Set up the test environment

    • Ensure that the test environment is as specified in Appendix A

    • Unzip load-test-tool.zip (attached to this PDF portfolio)

    • Follow instructions in Load test tool readme document.

    2. Run Tests

    • Go to Performance Brief Tests folder.

• Follow the instructions in {test scenario}.pdf to run the desired test scenario.

  • © 2010 Adobe Systems Incorporated. All rights reserved. Printed in the USA. 8/10

Adobe, the Adobe logo, ActionScript, Flash, and LiveCycle are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. All other trademarks are the property of their respective owners.

Adobe Systems Incorporated, 345 Park Avenue, San Jose, CA 95110-2704, USA
www.adobe.com

For more information and additional product details: www.adobe.com/devnet/livecycle/

Appendix C: Glossary of terms

Send Rate

This is the rate at which the message generator generates messages to send to the LiveCycle Data Services server. In other words, the send rate is the incoming message rate at the LiveCycle Data Services server.

    Server Throughput

Server throughput is the outgoing message rate of the LiveCycle Data Services server. The server throughput may not be the same as the send rate: in some cases the server sends an incoming message to multiple clients, so more messages leave the server than enter it. For example, in the same message scenario a send rate of 100 msg/sec to a destination with 500 subscribed clients yields a server throughput of 50,000 msg/sec.

    Latency

    Latency is measured from the time a message is generated to the time that a client receives the message.

[Diagram: a message travels from the Message Generator through the Server to the Client; latency spans the message created time to the message received time.]

    Latency = Message Received Time – Message Created Time
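In the load-test driver this is a simple timestamp difference, which is why Appendix A stresses clock synchronization between the client and server machines. A sketch of the computation in Java; the message accessor mirrors the flex.messaging API, and the surrounding driver code is assumed:

// Latency for one received message; the timestamp was set by the message generator.
long messageCreatedTime = msg.getTimestamp();
long messageReceivedTime = System.currentTimeMillis();
long latencyMillis = messageReceivedTime - messageCreatedTime;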

    Client receive rate

The client receive rate is the rate at which messages are received by a client.

    Same message scenario

    This is the scenario where all clients subscribe to a destination (without subtopics) and messages are sent to the destination (without subtopics). In this case, every client receives every message.

    Unique message scenario

    This is the scenario where each client subscribes to its unique subtopic on the destination and each message is sent to a single subtopic. In this case, each message goes to a specific client only and not to all clients.

    Remoting throughput

Remoting throughput is the rate at which the LiveCycle Data Services server returns responses to Remoting requests.


Concurrent Clients

This test calculates the capacity of consumers using the Servlet-based HTTP AMF Streaming channel and the NIO HTTP AMF channel.

• Message Size: 128 bytes • Same message sent to all clients • Multiple machine configuration – increasing the number of machines to achieve the test load

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Change the JVM heap size to -Xms1024m -Xmx1024m, and maxThreads in Tomcat's server.xml to 60000

2. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

3. Copy  from PropertiesFiles (PerformanceBriefTests.zip) to <load-testing-tool location> on the different machines

4. Run LCDSDriver on the different machines • Update HOST =


     

Message Generator Form

7. Wait 2 minutes and click the Stop button on the Message Generator form. All drivers calculate their results based only on 2 minutes of the message feed. The 20m MESSAGE_RECEIVE_TIME is to ensure all consumers are subscribed.

8. Check the result after the test stops (the result is printed on the console) 9. Stop the application server 10. Delete

       


Test cases

Test Case 1: Using the Servlet-based HTTP Streaming channel with 5000 consumers, to ensure all consumers subscribe and receive messages. (Check the memory usage on the server machine. It fails to receive messages with 6000 consumers; it fails to subscribe with 7500 consumers.)

• Number of machines: 2 • Properties file: httpServletBased_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 2: Using the HTTP AMF Streaming channel with 50000 consumers, to ensure all consumers subscribe and receive messages.

• Number of machines: 20 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     


Edge Server vs. Non-Edge Server Latency test

This test calculates the average latency difference between the Edge Server and non-Edge Server configurations. This test was conducted with the following parameters:

• Message Size: 128 bytes • Number of Clients: 500 • Unique message sent to each client • Same machine configuration – client and server on the same machine

     

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

2. Start the application server on which perf-edge is deployed

3. Copy  from PropertiesFiles (PerformanceBriefTests.zip) to <load-testing-tool location>

4. Run LCDSDriver on a different machine. • Update PORT =  in the  • Update CONSUMERS =  in the  • Update MESSAGE_RECEIVE_TIME = 1m in the  • (Windows) rundriverexample.bat • (Unix/Linux) ./rundriverexample.sh

5. Wait until the driver prints "Virtual Customers are all subscribed and waiting for 1 minute" in the console.

6. Run the server-side message generator • http://hostname:port/perf/messaging/MessageGeneratorByteArray.html • Specify the message size in bytes:  • Enter the target send rate per second:  • Enter the suggested passes per second:  • Enter the number of generator threads to use:  • Enter the number of subtopics to send messages to:  • Enter the number of sub subtopics to send for each subtopic:  • Click the Start button


     

Message Generator Form

7. Check the result after the test stops (the result is printed on the console) 8. Click the Stop button on the Message Generator form 9. Stop the application servers 10. Delete

     

     


Test cases

Test Case 1: Using the HTTP AMF Streaming channel without the Edge Server to measure average message latency.

• Properties file: httpStream_UniqueMessageTypeTest.properties • PORT: 2156 • Consumers: 500 • Message Size: 128 • Target send rate: 20000 • Generator passes: 25 • Generator threads: 3 • Subtopic count: 500 • Sub subtopic count: 20

Test Case 2: Using the HTTP AMF Streaming channel through the Edge Server to measure average message latency.

• Properties file: httpStream_UniqueMessageTypeTest.properties • PORT: 52156 • Consumers: 500 • Message Size: 128 • Target send rate: 20000 • Generator passes: 25 • Generator threads: 3 • Subtopic count: 500 • Sub subtopic count: 20

     


Message Size vs. Throughput

This test calculates the impact of message size on server throughput. This test was conducted with the following parameters:

• Number of Clients: 1000 • Same message sent to all clients • Same machine configuration – client and server on the same machine

     

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

2. Copy  from the PropertiesFiles (PerformanceBriefTests.zip) folder to <load-testing-tool location>

3. Run LCDSDriver on the same machine. • Update CONSUMERS =  in the  • (Windows) rundriverexample.bat • (Unix/Linux) ./rundriverexample.sh

4. Wait until the driver prints "Virtual Customers are all subscribed and waiting for 1 minute" in the console

5. Run the server-side message generator • http://hostname:port/perf/messaging/MessageGeneratorByteArray.html • Specify the message size in bytes:  • Enter the target send rate per second:  • Enter the suggested passes per second:  • Enter the number of generator threads to use:  • Enter the number of subtopics to send messages to:  • Enter the number of sub subtopics to send for each subtopic:  • Click the Start button

             


Message Generator Form

6. Check the result after the test stops (the result is printed on the console) 7. Click the Stop button on the Message Generator form 8. Stop the application server 9. Delete

     

     

     

     

     

     

     

     

     

     

     


Test cases

Test Case 1: Using the HTTP AMF Streaming channel with a message size of 1 KB to measure the impact on server throughput

• Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 1k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 2: Using the HTTP AMF Streaming channel with a message size of 25 KB to measure the impact on server throughput

• Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 25k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 3: Using the HTTP AMF Streaming channel with a message size of 50 KB to measure the impact on server throughput

• Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 50k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     

     

     


     

Test Case 4: Using the HTTP AMF Streaming channel with a message size of 75 KB to measure the impact on server throughput

• Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 75k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 5: Using the HTTP AMF Streaming channel with a message size of 100 KB to measure the impact on server throughput

• Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 100k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 6: Using the RTMP channel with a message size of 1 KB to measure the impact on server throughput

• Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 1k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     

     

     

     


     

Test Case 7: Using the RTMP channel with a message size of 25 KB to measure the impact on server throughput

• Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 25k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 8: Using the RTMP channel with a message size of 50 KB to measure the impact on server throughput

• Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 50k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 9: Using the RTMP channel with a message size of 75 KB to measure the impact on server throughput

• Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 75k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     

     

     


     

     

Test Case 10: Using the RTMP channel with a message size of 100 KB to measure the impact on server throughput

• Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 1000 • Message Size: 100k • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0


Number of Clients vs. Throughput test

This test calculates the impact of the number of concurrent clients on server throughput. This test was conducted with the following parameters:

• Same message sent to all clients • Target send rate: 10 messages per second • Multiple machine configuration – increasing the number of client machines to achieve the test client count

     

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

2. Copy  from the PropertiesFiles (PerformanceBriefTests.zip) folder to <load-testing-tool location>

3. Run LCDSDriver on the different machines • Update HOST =


Message Generator Form

6. Wait 2 minutes and click the Stop button on the Message Generator form. All drivers calculate their results based only on 2 minutes of the message feed. The 20m MESSAGE_RECEIVE_TIME is to ensure all consumers are subscribed.

7. Check the result after the test stops (the result is printed on the console) 8. Stop the application server 9. Delete


Test cases

Test Case 1: Using the HTTP AMF Streaming channel with 5000 consumers to measure server throughput

• Number of machines: 2 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 2: Using the HTTP AMF Streaming channel with 10000 consumers to measure server throughput

• Number of machines: 4 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 3: Using the HTTP AMF Streaming channel with 15000 consumers to measure server throughput

• Number of machines: 6 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

       


Test Case 4: Using the HTTP AMF Streaming channel with 20000 consumers to measure server throughput

• Number of machines: 8 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 5: Using the HTTP AMF Streaming channel with 25000 consumers to measure server throughput

• Number of machines: 10 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 6: Using the HTTP AMF Streaming channel with 30000 consumers to measure server throughput

• Number of machines: 12 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 7: Using the HTTP AMF Streaming channel with 35000 consumers to measure server throughput

• Number of machines: 14 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 8: Using the HTTP AMF Streaming channel with 40000 consumers to measure server throughput

• Number of machines: 16 • Properties file: httpStream_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 9: Using the RTMP channel with 5000 consumers to measure server throughput

• Number of machines: 2 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 10: Using the RTMP channel with 10000 consumers to measure server throughput

• Number of machines: 4 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     


Test Case 11: Using the RTMP channel with 15000 consumers to measure server throughput

• Number of machines: 6 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 12: Using the RTMP channel with 20000 consumers to measure server throughput

• Number of machines: 8 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 13: Using the RTMP channel with 25000 consumers to measure server throughput

• Number of machines: 10 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     

     

     

     


     

Test Case 14: Using the RTMP channel with 30000 consumers to measure server throughput

• Number of machines: 12 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 15: Using the RTMP channel with 35000 consumers to measure server throughput

• Number of machines: 14 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

Test Case 16: Using the RTMP channel with 40000 consumers to measure server throughput

• Number of machines: 16 • Properties file: rtmp_SameMessageTypeTest.properties • Consumers: 2500 • Message Size: 128 • Target send rate: 10 • Generator passes: 1 • Generator threads: 1 • Subtopic count: 0 • Sub subtopic count: 0

     

    httpServletBased_SameMessageTestType.properties

CHANNEL_ID=perf-streaming-amf
CHANNEL_TYPE=streaming_amf
HOST=localhost
PORT=8400
ENDPOINTURI=/perf/messagebroker/streaming-amf
DESTINATION_ID=MyTopic_Subtopic
CONSUMERS=10
SET_CONSUMER_ID=false
MESSAGE_RECEIVE_COUNT=0
MESSAGE_RECEIVE_TIME=1m
MESSAGE_SEND_COUNT=0
REPORT_LATENCY=true
LOG_CATEGORY=AmfStreamTest
LOG_LEVEL=info

httpStream_SameMessageTestType.properties

CHANNEL_ID=perf-nio-amf-stream
CHANNEL_TYPE=streaming_amf
HOST=localhost
PORT=2156
ENDPOINTURI=/perf/nioamfstream
DESTINATION_ID=MyTopic_Subtopic
CONSUMERS=500
SET_CONSUMER_ID=false
MESSAGE_RECEIVE_COUNT=0
MESSAGE_RECEIVE_TIME=1m
REPORT_LATENCY=true
LOG_CATEGORY=NioAmfStreamTest
LOG_LEVEL=info

httpStream_UniqueMessageTestType.properties

CHANNEL_ID=perf-nio-amf-stream
CHANNEL_TYPE=streaming_amf
HOST=localhost
PORT=2156
ENDPOINTURI=/perf/nioamfstream
DESTINATION_ID=MyTopic_Subtopic
DESTINATION_SUBTOPIC=PerfSubtopic.[0-20]
CONSUMERS=500
SET_CONSUMER_ID=false
MESSAGE_RECEIVE_COUNT=0
MESSAGE_RECEIVE_TIME=1m
REPORT_LATENCY=true
LOG_CATEGORY=NioAmfStreamTest
LOG_LEVEL=info

remoting_rtmp.properties

CHANNEL_ID=perf-rtmp
CHANNEL_TYPE=rtmp
HOST=localhost
PORT=2155
DESTINATION_ID=PerfRemoteObject
CONSUMERS=500
REMOTEOBJECT_METHOD=getRandomBytes
RESPONSE_SIZE=ResponseSize_512b
#Response size:
# ResponseSize_128b
# ResponseSize_512b
# ResponseSize_1kb
# ResponseSize_10kb
# ResponseSize_50kb
# ResponseSize_100kb
# ResponseSize_200kb
# ResponseSize_500kb
# ResponseSize_750kb
# ResponseSize_1000kb
# ResponseSize_1050kb
# ResponseSize_1200kb
TESTING_TIME_IN_SECONDS=120
PRERUN_TIME_IN_SECONDS=30
STOP_BETWEEN_EACH_STAGE=true
LOG_CATEGORY=RtmpTest
LOG_LEVEL=debug

rtmp_SameMessageTestType.properties

CHANNEL_ID=perf-rtmp
CHANNEL_TYPE=rtmp
HOST=localhost
PORT=2155
ENDPOINTURI=/perf/rtmp
DESTINATION_ID=MyTopic_Subtopic
CONSUMERS=500
SET_CONSUMER_ID=false
MESSAGE_RECEIVE_COUNT=0
MESSAGE_RECEIVE_TIME=1m
REPORT_LATENCY=true
LOG_CATEGORY=RtmpTest
LOG_LEVEL=info

rtmp_UniqueMessageTestType.properties

CHANNEL_ID=perf-rtmp
CHANNEL_TYPE=rtmp
HOST=localhost
PORT=2155
ENDPOINTURI=/perf/rtmp
DESTINATION_ID=MyTopic_Subtopic
DESTINATION_SUBTOPIC=PerfSubtopic.[0-20]
CONSUMERS=500
SET_CONSUMER_ID=false
MESSAGE_RECEIVE_COUNT=0
MESSAGE_RECEIVE_TIME=1m
REPORT_LATENCY=true
LOG_CATEGORY=RtmpTest
LOG_LEVEL=info


Concurrent Clients on 1 LCDS server instance

This test demonstrates the impact on Remoting throughput, using the RTMP channel, as we vary the number of concurrent clients with a single LCDS server instance.

• Response Size: 512 bytes • All RemoteObject clients keep making remote calls right after their previous calls have finished • Multiple machine configuration – server on one machine and clients on an increasing number of machines to achieve the test load

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Change the JVM heap size to -Xms8192m -Xmx8192m

2. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

3. Copy  from PropertiesFiles (PerformanceBriefTests.zip) to <load-testing-tool location> on the different machines

4. Run LCDSRemotingDriver on the different machines • Update HOST =


Test cases

Test Case 1: Using the RTMP channel with 500 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 1 • Properties file: remoting_rtmp.properties • Consumers: 500 • Response Size: ResponseSize_512b (512 bytes)

Test Case 2: Using the RTMP channel with 1000 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 1 • Properties file: remoting_rtmp.properties • Consumers: 1000 • Response Size: ResponseSize_512b (512 bytes)

Test Case 3: Using the RTMP channel with 4000 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 1 • Properties file: remoting_rtmp.properties • Consumers: 4000 • Response Size: ResponseSize_512b (512 bytes)

Test Case 4: Using the RTMP channel with 10000 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 2 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

       


Test Case 5: Using the RTMP channel with 20000 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 4 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

Test Case 6: Using the RTMP channel with 30000 remote object clients to make concurrent remote procedure calls continually to a single LCDS server. After each result or fault returns, the recipient issues another call.

• Number of machines: 6 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

     

     


Concurrent Clients on 2 LCDS server instances

This test demonstrates the impact on Remoting throughput, using the RTMP channel, as we vary the number of concurrent clients with two LCDS server instances.

• Response Size: 512 bytes • All RemoteObject clients keep making remote calls right after their previous calls have finished • Multiple machine configuration – servers on one machine and clients on an increasing number of machines to achieve the test load

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Copy the tomcat folder and name it tomcatNode2 2. Change the Connector port in tomcatNode2/conf/server.xml to 8082 3. Change the Server port in tomcatNode2/conf/server.xml to 8087 4. Change the JVM heap size to -Xms8192m -Xmx8192m in (catalina.bat / catalina.sh) for both servers 5. Deploy the perf webapp on both servers 6. Modify the channel ports on tomcatNode2 (tomcatNode2/webapps/perf/WEB-INF/flex/services-config.xml) • Channel perf-nio-amf: 2158 • Channel perf-rtmp: 2159 • Channel perf-nio-amf-stream: 2160 • Channel perf-nio-amf-longpoll: 2161

7. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

8. Copy  from PropertiesFiles (PerformanceBriefTests.zip) to <load-testing-tool location> on the different machines

9. Run LCDSRemotingDriver on the different machines • Update HOST =


14. Check the test result on each machine. 15. Stop the application server 16. Delete tomcatNode2 (or leave it for another test case) 17. Delete

       


Test cases

Test Case 1: Using the RTMP channel with 500 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 2 • Properties file: remoting_rtmp.properties • Consumers: 250 • Response Size: ResponseSize_512b (512 bytes)

Test Case 2: Using the RTMP channel with 1000 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 2 • Properties file: remoting_rtmp.properties • Consumers: 500 • Response Size: ResponseSize_512b (512 bytes)

Test Case 3: Using the RTMP channel with 4000 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 2 • Properties file: remoting_rtmp.properties • Consumers: 200 • Response Size: ResponseSize_512b (512 bytes)

Test Case 4: Using the RTMP channel with 10000 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 2 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

       


Test Case 5: Using the RTMP channel with 20000 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 4 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

Test Case 6: Using the RTMP channel with 30000 remote object clients to make concurrent remote procedure calls continually to 2 LCDS servers. After each result or fault returns, the recipient issues another call.

• Number of machines: 6 • Properties file: remoting_rtmp.properties • Consumers: 5000 • Response Size: ResponseSize_512b (512 bytes)

     

     


Response Size

This test demonstrates the impact on Remoting throughput, using the RTMP channel, as we vary the response size.

• Number of Clients: 500 • All RemoteObject clients keep making remote calls right after their previous calls have finished • Two machine configuration – server and client on different machines

How to run the test cases?

The test procedure described below needs to be followed for each of the listed test cases.

Test procedure

1. Change the JVM heap size to -Xms8192m -Xmx8192m

2. Start the application server (perf webapp is deployed) • cd /tomcat/bin • (Windows) catalina.bat run • (Unix/Linux) ./catalina.sh run

3. Copy  from PropertiesFiles (PerformanceBriefTests.zip) to <load-testing-tool location>

4. Run LCDSRemotingDriver on the client machine • Update HOST =


    Test  cases  

    Test  Case  1:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  1  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

    • Properties  file:  remoting_rtmp.properties  • Response  size:  ResponseSize_1kb  

    Test  Case  2:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  5  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

    • Properties  file:  remoting_rtmp.properties  • Response  size:  ResponseSize_5kb  

    Test  Case  3:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  10  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

    • Properties  file:  remoting_rtmp.properties  • Response  size:  ResponseSize_10kb  

    Test  Case  4:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  50  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

    • Properties  file:  remoting_rtmp.properties  • Response  size:  ResponseSize_50kb  

    Test  Case  5:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  100  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

• Properties file: remoting_rtmp.properties
• Response size: ResponseSize_100kb

    Test  Case  6:  Using  RTMP  channel  with  500  remote  object  clients  to  make  concurrent  remote  procedure  calls  continually  with  200  kb  response  size.    After  each  result  or  fault  returns,  the  recipient  issues  another  call.    

• Properties file: remoting_rtmp.properties
• Response size: ResponseSize_200kb

     


    Server  Throughput  Test  

This test measures the impact of server throughput on average latency. It was conducted with the following parameters:

• Message Size: 128 bytes
• Number of Clients: 500
• Same message sent to all clients
• Two machine configuration (client and server on separate machines)

     

The server machine synchronized its system time to a public NTP server, and the second machine synchronized its system time to the server machine.
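With the clocks aligned this way, per-message latency can be computed directly from timestamps on the client machine. The sketch below is illustrative only and assumes the generator stamps each message with its send time; the field name sendTimeMillis is an assumption, not the test tool's actual message format.

// Hedged sketch: latency = client receive time minus the send timestamp carried in the
// message. Comparing times across machines is only meaningful because the server clock
// is NTP-synchronized and the client machine is synchronized to the server.
public final class LatencySample
{
    private long totalLatencyMillis;
    private long sampleCount;

    // sendTimeMillis is an assumed per-message timestamp written by the generator.
    public void record(long sendTimeMillis)
    {
        long latencyMillis = System.currentTimeMillis() - sendTimeMillis;
        totalLatencyMillis += latencyMillis;
        sampleCount++;
    }

    // Average latency over all messages received by the consumers on this driver.
    public double averageLatencyMillis()
    {
        return sampleCount == 0 ? 0.0 : (double) totalLatencyMillis / sampleCount;
    }
}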

    How  to  run  the  test  cases?  

The test procedure described below must be followed for each of the listed test cases.

    Test  procedure  

1. Start the application server (the perf webapp is deployed)
• cd /tomcat/bin
• (Windows) catalina.bat run
• (Unix/Linux) ./catalina.sh run

2. Copy  from the PropertiesFiles (PerformanceBriefTests.zip) folder to <load-testing-tool location>

3. Run LCDSDriver on a different machine.
• Update HOST =  in the 
• (Windows) rundriverexample.bat
• (Unix/Linux) ./rundriverexample.sh

    4. Wait  until  the  driver  prints  “Virtual  Customers  are  all  subscribed  and  waiting  for  1  minute”  in  the  console  

5. Run the server-side message generator
• http://hostname:port/perf/messaging/MessageGeneratorByteArray.html
• Specify the message size in bytes: 
• Enter the target send rate per second: 
• Enter the suggested passes per second: 
• Enter the number of generator threads to use: 
• Enter the number of subtopics to send messages to: 
• Enter the number of sub subtopics to send for each subtopic: 
• Click the Start button

     


     Message  Generator  Form  

     

       

6. Check the result after the test stops (the result is printed on the console)
7. Click the Stop button on the Message Generator form
8. Stop the application server
9. Delete 

     

       


    Test  Cases  

Test Case 1: Using RTMP channel with a throughput of 50,000 messages per second to measure average latency.

• Properties file: rtmp_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 25*
• Generator passes: 1
• Generator threads: 4*
• Subtopic count: 0
• Sub subtopic count: 0

*25 messages/s × 4 (generator threads) × 500 (consumers) = 50,000 messages per second
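The same arithmetic applies to the remaining same-message test cases, because every generated message fans out to all 500 subscribed clients. The helper below is illustrative only (it is not part of the test tool) and simply restates that relationship:

public final class ThroughputMath
{
    // Aggregate outbound rate = target send rate per thread x generator threads x consumers,
    // since each generated message is delivered to every one of the 500 subscribers.
    static long aggregateMessagesPerSecond(int targetSendRate, int generatorThreads, int consumers)
    {
        return (long) targetSendRate * generatorThreads * consumers;
    }

    public static void main(String[] args)
    {
        System.out.println(aggregateMessagesPerSecond(25, 4, 500));  //  50,000 (Test Case 1)
        System.out.println(aggregateMessagesPerSecond(50, 4, 500));  // 100,000 (Test Case 2)
        System.out.println(aggregateMessagesPerSecond(125, 4, 500)); // 250,000 (Test Case 3)
        System.out.println(aggregateMessagesPerSecond(100, 7, 500)); // 350,000 (Test Case 4)
        System.out.println(aggregateMessagesPerSecond(80, 10, 500)); // 400,000 (Test Case 5)
    }
}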

Test Case 2: Using RTMP channel with a throughput of 100,000 messages per second to measure average latency.

• Properties file: rtmp_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 50
• Generator passes: 1
• Generator threads: 4
• Subtopic count: 0
• Sub subtopic count: 0

Test Case 3: Using RTMP channel with a throughput of 250,000 messages per second to measure average latency.

• Properties file: rtmp_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 125
• Generator passes: 1
• Generator threads: 4
• Subtopic count: 0
• Sub subtopic count: 0

Test Case 4: Using RTMP channel with a throughput of 350,000 messages per second to measure average latency.

• Properties file: rtmp_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 100
• Generator passes: 1
• Generator threads: 7
• Subtopic count: 0
• Sub subtopic count: 0

     

Test Case 5: Using RTMP channel with a throughput of 400,000 messages per second to measure average latency.

• Properties file: rtmp_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 80
• Generator passes: 1
• Generator threads: 10
• Subtopic count: 0
• Sub subtopic count: 0

Test Case 6: Using HTTP AMF Streaming channel with a throughput of 50,000 messages per second to measure average latency.

• Properties file: httpStream_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 25
• Generator passes: 1
• Generator threads: 4
• Subtopic count: 0
• Sub subtopic count: 0

Test Case 7: Using HTTP AMF Streaming channel with a throughput of 100,000 messages per second to measure average latency.

• Properties file: httpStream_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 50
• Generator passes: 1
• Generator threads: 4
• Subtopic count: 0
• Sub subtopic count: 0

     

Test Case 8: Using HTTP AMF Streaming channel with a throughput of 250,000 messages per second to measure average latency.

• Properties file: httpStream_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 125
• Generator passes: 1
• Generator threads: 4
• Subtopic count: 0
• Sub subtopic count: 0

     

Test Case 9: Using HTTP AMF Streaming channel with a throughput of 350,000 messages per second to measure average latency.

• Properties file: httpStream_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 100
• Generator passes: 1
• Generator threads: 7
• Subtopic count: 0
• Sub subtopic count: 0

Test Case 10: Using HTTP AMF Streaming channel with a throughput of 400,000 messages per second to measure average latency.

• Properties file: httpStream_SameMessageTypeTest.properties
• Message Size: 128
• Target send rate: 80
• Generator passes: 1
• Generator threads: 10
• Subtopic count: 0
• Sub subtopic count: 0

     

     


    Server  Throughput  Test  for  Unique  Messages  

This test measures the impact of server throughput on average latency. It was conducted with the following parameters:

• Message Size: 128 bytes
• Number of Clients: 500
• Unique messages sent to each client
• Two machine configuration (client and server on separate machines)

     

    How  to  run  the  test  cases?  

The test procedure described below must be followed for each of the listed test cases.

     

    Test  procedure    

1. Start the application server (the perf webapp is deployed)
• cd /tomcat/bin
• (Windows) catalina.bat run
• (Unix/Linux) ./catalina.sh run

2. Copy  from the PropertiesFiles (PerformanceBriefTests.zip) folder to <load-testing-tool location> on the client machine.

3. Run LCDSDriver on a different machine.
• Update HOST =  in the 
• Update CONSUMERS =  in the 
• Update MESSAGE_RECEIVE_TIME = 1m in the 
• (Windows) rundriverexample.bat
• (Unix/Linux) ./rundriverexample.sh

    4. Wait  until  the  driver  prints  “Virtual  Customers  are  all  subscribed  and  waiting  for  1  minute”  in  the  console  

5. Run the server-side message generator
• http://hostname:port/perf/messaging/MessageGeneratorByteArray.html
• Specify the message size in bytes: 
• Enter the target send rate per second: 
• Enter the suggested passes per second: 
• Enter the number of generator threads to use: 
• Enter the number of subtopics to send messages to: 
• Enter the number of sub subtopics to send for each subtopic: 
• Click the Start button

  •  

      2  

     Message  Generator  Form  

       

6. Check the result after the test stops (the result is printed on the console)
7. Click the Stop button on the Message Generator form
8. Stop the application server
9. Delete 

     

       


    Test  cases  

    Test  Case  1:  Using  RTMP  channel  with  50,000  messages  per  second  throughput    to  measure  latency  

• Properties file: rtmp_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 25
• Generator threads: 3
• Subtopic count: 500
• Sub subtopic count: 20
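Note: for the unique-message tests there is no fan-out, since each message is addressed to a single subtopic and therefore reaches one client. Assuming the same arithmetic as the same-message tests, the aggregate rate is roughly target send rate × generator threads: 17,000 × 3 ≈ 50,000 messages per second here, 17,000 × 6 ≈ 100,000 for Test Case 2, and 17,000 × 9 ≈ 150,000 for Test Case 3.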

    Test  Case  2:  Using  RTMP  channel  with  100,000  messages  per  second  throughput  to  measure  latency  

• Properties file: rtmp_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 1
• Generator threads: 6
• Subtopic count: 500
• Sub subtopic count: 20

    Test  Case  3:  Using  RTMP  channel  with  150,000  messages  per  second  throughput  to  measure  latency  

• Properties file: rtmp_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 1
• Generator threads: 9
• Subtopic count: 500
• Sub subtopic count: 20

    Test  Case  4:  Using  HTTP  AMF  Streaming  channel  with  50,000  messages  per  second  throughput  to  measure  latency  

• Properties file: httpStream_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 25
• Generator threads: 3
• Subtopic count: 500
• Sub subtopic count: 20


    Test  Case  5:  Using  HTTP  AMF  Streaming  channel  with  100,000  messages  per  second  throughput  to  measure  latency  

• Properties file: httpStream_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 25
• Generator threads: 6
• Subtopic count: 500
• Sub subtopic count: 20

Test Case 6: Using HTTP AMF Streaming channel with 150,000 messages per second throughput to measure latency

• Properties file: httpStream_UniqueMessageTypeTest.properties
• Message Size: 128
• Target send rate: 17000
• Generator passes: 25
• Generator threads: 9
• Subtopic count: 500
• Sub subtopic count: 20

     

     

     

    JavaClientExamples/readme.txt

Dependencies:
javaclient.jar
nioload.jar
flex-messaging-common.jar
flex-messaging-core.jar
flex-messaging-data.jar
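These jars supply the NIO client channels and messaging classes used by the example drivers. As a rough sketch only, a minimal driver built on the BaseDriver class whose source follows might look like the code below; the MinimalDriver class is hypothetical (it is not one of the attached examples) and assumes the complete BaseDriver, including the loadProperties and logTestInformation methods referenced by setupTest, is on the classpath.

package javaclientexamples;

// Hypothetical minimal driver: takes the single properties-file argument required by
// BaseDriver.checkArguments and runs the setup/run/stop lifecycle defined in BaseDriver.
public class MinimalDriver extends BaseDriver
{
    public static void main(String[] args)
    {
        checkArguments(args);          // exits unless exactly one properties file is supplied
        MinimalDriver driver = new MinimalDriver();
        driver.setupTest(args[0]);     // loads properties, configures logging, starts the Connector
        driver.runTest();
        driver.stopTest();             // stops the Connector and shuts down the scheduler
    }
}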

    JavaClientExamples/src/javaclientexamples/BaseDriver.java


package javaclientexamples;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ScheduledThreadPoolExecutor;

import javaclient.messaging.Channel;
import javaclient.messaging.ChannelSet;
import javaclient.messaging.channels.NioAmfChannel;
import javaclient.messaging.channels.NioPollingAmfChannel;
import javaclient.messaging.channels.NioRtmpChannel;
import javaclient.messaging.channels.NioStreamingAmfChannel;
import javaclientexamples.util.DriverUtils;

import flex.messaging.log.Log;
import flex.messaging.log.Logger;

import nioload.Connector;

public class BaseDriver
{
    protected static final String AMF = "amf";
    protected static final String POLLING_AMF = "polling_amf";
    protected static final String RTMP = "rtmp";
    protected static final String STREAMING_AMF = "streaming_amf";

    protected static final String DEFAULT_ENDPOINTURI = "/nioamfpoll";
    protected static final String DEFAULT_LOG_CATEGORY = "PollingAmfTest";
    protected static final String DEFAULT_LOG_LEVEL = "info";
    protected static final String DEFAULT_CHANNEL_ID = "my-nio-amf-poll";
    protected static final String DEFAULT_CHANNEL_TYPE = "polling_amf";
    protected static final String DEFAULT_HOST = "localhost";
    protected static final int DEFAULT_PORT = 2880;
    protected static final String DEFAULT_DESTINATION_ID = "messaging_NIOAMF_Poll";

    protected String channelId = DEFAULT_CHANNEL_ID;
    protected String channelType = DEFAULT_CHANNEL_TYPE;
    protected String host = DEFAULT_HOST;
    protected int port = DEFAULT_PORT;
    protected String destinationId = DEFAULT_DESTINATION_ID;
    protected String endpointUri = DEFAULT_ENDPOINTURI;

    protected Logger logger;
    protected String logCategory = DEFAULT_LOG_CATEGORY;
    protected String logLevel = DEFAULT_LOG_LEVEL;

    protected Connector connector;
    protected ScheduledThreadPoolExecutor scheduler;

    public BaseDriver()
    {
        connector = new Connector();
        scheduler = new ScheduledThreadPoolExecutor(Runtime.getRuntime().availableProcessors());
    }

    protected static void checkArguments(String[] args)
    {
        if (args.length != 1)
        {
            System.err.println("Need to specify a properties file for the test.");
            System.exit(1);
        }
    }

    protected void setupTest(String propertiesFile)
    {
        loadProperties(propertiesFile);
        logger = DriverUtils.setupLogger(logCategory, logLevel);
        if (Log.isInfo())
            logger.info("Setting up the test.");
        logTestInformation();
        connector.start();
    }

    protected void runTest()
    {
        if (Log.isInfo())
            logger.info("Running the test.");
    }

    protected void stopTest()
    {
        if (Log.isInfo())
            logger.info("Stopping the test.");
        connector.stop();
        scheduler.shutdownNow();
    }

    protected Channel createChannel()
    {
        if (channelType == null)
            return new NioAmfChannel(channelId, host, port, endpointUri, connector, scheduler);

        if (RTMP.equals(channelType))
            return new NioRtmpChannel(channelId, host, port, connector);
        else if (STREAMING_AMF.equalsIgnoreCase(channelType))
            return new NioStreamingAmfChannel(channelId, host, port, endpointUri, connector, scheduler);
        else if (POLLING_AMF.equalsIgnoreCase(channelType))
            return new NioPollingAmfChannel(channelId, host, port, endpointUri, connector, scheduler);

        return new NioAmfChannel(channelId, host, port, endpointUri, connector, scheduler);
    }

    protected ChannelSet createChannelSet(Channel channel)
    {
        List

javadoc/nioload/Connection.ConnectionRequest.html

Field Summary
 ProtocolHandler protocolHandler
 int readBufferSize
 int writeBufferSize

     

    Constructor Summary

    Connection.ConnectionRequest(InetSocketAddress address, ProtocolHandler protocolHandler, int idleTimeoutMillis, int readBufferSize, int writeBufferSize)

               

     

    Method Summary

     Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

     

    Field Detail

    address

    public final InetSocketAddress address

    protocolHandler

    public final ProtocolHandler protocolHandler

    idleTimeoutMillis

    public final int idleTimeoutMillis

    readBufferSize

    public final int readBufferSize

    writeBufferSize

    public final int writeBufferSize

    connectAttempts

    public int connectAttempts

    Constructor Detail

    Connection.ConnectionRequest

    public Connection.ConnectionRequest(InetSocketAddress address, ProtocolHandler protocolHandler, int idleTimeoutMillis, int readBufferSize, int writeBufferSize)


    javadoc/nioload/Connection.ReadResult.html


    nioload

    Class Connection.ReadResult

    java.lang.Object nioload.Connection.ReadResult

    Enclosing class:

    Connection

    public static class Connection.ReadResult

    extends Object

    Simple class to return bytes read and an optional ByteBuffer containing the bytes for a read task.

    Field Summary

 int bytesRead
 ByteBuffer readBuffer

               

     

    Constructor Summary

    Connection.ReadResult(int bytesRead, ByteBuffer readBuffer)

               

     

    Method Summary

     Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

     

    Field Detail

    bytesRead

    public final int bytesRead

    readBuffer

    public final ByteBuffer readBuffer

    Constructor Detail

    Connection.ReadResult

    public Connection.ReadResult(int bytesRead, ByteBuffer readBuffer)


    javadoc/nioload/Connection.WriteResult.html


    nioload

    Class Connection.WriteResult

    java.lang.Object nioload.Connection.WriteResult

    Enclosing class:

    Connection

    public static class Connection.WriteResult

    extends Object

    Simple class to return bytes written and a flag indicating whether a write has been rescheduled.

    Field Summary

 int bytesWritten
 boolean writeRescheduled

               

     

    Constructor Summary

    Connection.WriteResult(int bytesWritten, boolean writeRescheduled)

               

     

    Method Summary

     Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

     

    Field Detail

    bytesWritten

    public final int bytesWritten

    writeRescheduled

    public final boolean writeRescheduled

    Constructor Detail

    Connection.WriteResult

    public Connection.WriteResult(int bytesWritten, boolean writeRescheduled)


    javadoc/nioload/Connection.html


    nioload

    Class Connection

    java.lang.Object nioload.Connection

    All Implemented Interfaces:

    Comparable, Delayed

    public class Connection

    extends Object

    implements Delayed

    Nested Class Summary

static class Connection.ConnectionRequest
          Helper class for the initial connect process.
static class Connection.ReadResult
          Simple class to return bytes read and an optional ByteBuffer containing the bytes for a read task.
static class Connection.WriteResult
          Simple class to return bytes written and a flag indicating whether a write has been rescheduled.

     

    Field Summary

protected static int CLOSED_STATE
protected static int CLOSING_STATE
protected Connection.ConnectionRequest connectionRequest
          The connect request that generated this connection.
protected static int HANDSHAKING_STATE
protected String id
protected long idleTimeoutMillis
          The idle timeout setting for this connection in milliseconds.
protected SelectionKey key
          The NIO SelectionKey.
protected static int OPEN_STATE
protected static int PREPARING_TO_CLOSE_STATE
protected ProtocolHandler protocolHandler
          The associated ProtocolHandler.
protected nioload.Reactor reactor
          The associated Reactor.
protected int readBufferSize
          The desired read buffer size in bytes.
protected nioload.Connection.ConnectionReader reader
          The helper to perform reads for the connection.
protected static int SHUTTING_DOWN_STATE
protected SocketChannel socketChannel
          The underlying SocketChannel.
protected static String[] STATE_DISPLAY_NAMES
protected int writeBufferSize
          The desired write buffer size in bytes.
protected nioload.Connection.ConnectionWriter writer
          The helper to perform writes for the connection.

     

    Constructor Summary

    Connection(nioload.Reactor reactor, SocketChannel socketChannel, SelectionKey key, ProtocolHandler protocolHandler, long idleTimeoutMillis, int readBufferSize, int writeBufferSize)

              Constructs a Connection.

     

    Method Summary

 void close()
          Requests that the connection close as soon as it has finished writing any pending application or connection level data.
 void closeImmediate()
          Forces the connection to close immediately, unlike close() which writes any pending application or connection level data before closing the underlying socket.
 int compareTo(Delayed other)
          Implements Comparable.compareTo(Object) as required by the Delayed interface.
protected Connection.ReadResult doRead()
          Reads bytes from the underlying socket into the supplied buffer.
protected int doWrite(ByteBuffer writeBuffer)
          Writes bytes from the supplied buffer to the underlying socket.
 Runnable getConnectionReader()
          Returns the Runnable that performs reads for the connection.
 Runnable getConnectionWriter()
          Returns the Runnable that performs writes for the connection.
 long getDelay(TimeUnit unit)
          Implements Delayed.getDelay(TimeUnit).
 String getId()
          Identifier for the connection.
 long getLastUse()
          Returns the last-use timestamp for the connection (based on System.currentTimeMillis()).
protected void handshake()
          Performs the initial handshake for the connection.
 void requestWrite()
          The associated ProtocolHandler must invoke this method when it desires to write to its connection.
protected void scheduleRead()
          Schedules a read interest for the connection with its reactor.
protected void scheduleWrite()
          Schedules a write interest for the connection with its reactor.
protected void setState(int value)
protected void shutdown()
          Performs the shutdown for the connection.
 void timeout()
          Hook method invoked when the connection is timed out.
protected void unscheduleRead()
          Unschedules a read interest for the connection with its reactor.
protected void unscheduleWrite()
          Unschedules a write interest for the connection with its reactor.
 void updateLastUse()
          Updates the last-use timestamp for the connection to the current system time.

     Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

     

    Field Detail

    HANDSHAKING_STATE

    protected static final int HANDSHAKING_STATE

    See Also:

    Constant Field Values

    OPEN_STATE

    protected static final int OPEN_STATE

    See Also:

    Constant Field Values

    SHUTTING_DOWN_STATE

    protected static final int SHUTTING_DOWN_STATE

See Also: