Initial deck on WebSphere eXtreme Scale with WebSphere Commerce Server
DESCRIPTION
This deck shows how IBM WebSphere eXtreme Scale improves the scalability of WebSphere Commerce Server by replacing private, per-JVM disk-based caches with a shared data-grid cache for page fragment caching.
TRANSCRIPT
2121: WebSphere eXtreme Scale and Distributed Caching in Commerce Solutions
2
Smarter Planet Solutions Require a Dynamic Application Infrastructure
• Scale quickly and efficiently
• Optimize workload performance
• Flexibly flow resources
• Avoid downtime
• Save energy
• Automate management tasks
Smart regions
Smart weather
Smart countries
Smart supply chains
Smart cities
Smart industries
3
Business Needs Adoption Patterns
“Meet business objectives consistently, nimbly, cost-effectively”
Application Foundation
“Enable applications to adapt to changing market conditions”
Intelligent Management
“Address extreme demands of clients & business models”
Extreme Transaction Processing
Dynamic Application Infrastructure Builds on Smart SOA
4
Dynacache Disk Offload
• This allows a JVM to have a private, disk-based cache.
• It is a feature heavily exploited by WebSphere Commerce Server and other stack products.
• It allows caches much larger than is possible with a conventional memory-only cache.
• This is a three-tier cache: the JVM has a small local cache, then the operating system's file system cache, and finally the disk itself.
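Dynacache's real disk offload is far more sophisticated, but the three-tier idea can be sketched as a small in-memory LRU tier that writes evicted entries to per-JVM files and reloads them on a memory miss. This is a minimal illustration, not the Dynacache implementation; all class and path names are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a per-JVM cache with disk offload: a small in-memory LRU tier
// whose evicted entries are written to this JVM's private offload directory.
public class DiskOffloadCache {
    private final Path diskDir;              // private to this JVM, like Dynacache's offload directory
    private final Map<String, String> memory;

    public DiskOffloadCache(Path diskDir, int memoryCapacity) {
        try {
            this.diskDir = Files.createDirectories(diskDir);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        // LinkedHashMap in access order gives a simple LRU eviction policy.
        this.memory = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > memoryCapacity) {
                    offloadToDisk(eldest.getKey(), eldest.getValue());
                    return true;             // evict from the memory tier
                }
                return false;
            }
        };
    }

    public void put(String key, String page) { memory.put(key, page); }

    public String get(String key) {
        String page = memory.get(key);
        if (page != null) return page;       // tier 1: JVM-local memory
        Path file = diskDir.resolve(key);
        try {
            if (Files.exists(file)) {        // tiers 2/3: file system cache, then disk
                page = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
                memory.put(key, page);       // promote back into memory
                return page;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return null;                         // cache miss: caller regenerates the page
    }

    private void offloadToDisk(String key, String page) {
        try {
            Files.write(diskDir.resolve(key), page.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note that every JVM owns its own `diskDir`, which is exactly why each cluster member must regenerate and store every entry separately under this model.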
5
Dynacache disk offload Server diagram
[Diagram: each application server JVM has its own private cache, backed by the file system cache and a per-JVM disk file.]
6
WebSphere eXtreme Scale
• Organizes the memory from a number of JVMs as a single logical shared cache.
• Clients can attach to the ‘cache’ using the network and can also have an in process cache to reduce trips to the remote cache when possible.
• No dependency on a large file system cache.
• No disk dependency; no SAN required.
• Cache is as large as the memory in the 'grid'.
• Each record is stored once in the grid and shared by all clients.
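The shared-grid idea above can be modeled in a few lines of plain Java. This is a toy sketch, not the WXS client API, and all names are hypothetical: one grid map holds each record exactly once, and each client keeps an in-process near cache that skips the network round trip on repeat reads.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the WXS topology: one shared grid holds each record once,
// and every client keeps a small in-process "near cache" that avoids a
// network round trip when the entry has been fetched before.
public class SharedGridSketch {
    // Stands in for the combined memory of the grid container JVMs.
    static class Grid {
        private final Map<String, String> data = new ConcurrentHashMap<>();
        final AtomicInteger remoteFetches = new AtomicInteger();
        void put(String k, String v) { data.put(k, v); }
        String fetch(String k) { remoteFetches.incrementAndGet(); return data.get(k); }
    }

    // A client JVM with its own near cache in front of the shared grid.
    static class Client {
        private final Grid grid;
        private final Map<String, String> nearCache = new HashMap<>();
        Client(Grid grid) { this.grid = grid; }
        String get(String k) {
            String v = nearCache.get(k);
            if (v != null) return v;         // near-cache hit: no network trip
            v = grid.fetch(k);               // remote fetch from the shared grid
            if (v != null) nearCache.put(k, v);
            return v;
        }
    }
}
```

Two clients reading the same page hit the grid once each at most; the record itself exists only once, in the grid, rather than once per JVM.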
7
WebSphere eXtreme Scale Server
[Diagram: application server JVMs, each with a WXS near cache, connect over the network to the WXS container JVMs that hold the shared grid.]
8
Test description
• WebSphere Application Server 6.1.0.26
• WebSphere eXtreme Scale V7.0
• Hardware: two-socket Unix box, 16GB RAM and conventional disk
• Gigabit Ethernet
• Servlet generates a 72-kbyte page.
• Dynacache is used to cache the servlet page.
• 20GB of data, 10% of which is 'hot'.
9
Topology of test
[Diagram: Rational load-driver boxes drive WebSphere ND application server boxes, which connect to the WXS grid boxes.]
All boxes are 2 socket with 16GB RAM
Network is Gigabit
10
Results using Dynacache disk offload
• File system cache too small:
– 273 pages/sec @ 730ms and 16% CPU
– 400 disk IOPS
• File system cache large enough to stop all disk I/O:
– 1620 pages/sec @ 121ms and 42% CPU
– Network bottlenecked on the HTTP side
11
Results: Remote WXS grid, no local cache AT ALL
• WXS:
– 1700 pages/sec @ 116ms and 73% CPU
– Network bottlenecked on the HTTP side
– No file system cache needed per JVM
– Data is compressed (2.5:1)
– Cost of fetching data from the grid is therefore: 73% - 42% = 31% of CPU
– Using a WXS near cache will eliminate this 'cost'.
12
WXS CPU usage
• The box running the WXS grid used 15% CPU at this load of 1700 page views/sec.
• This was with no near cache. A near cache will lower this CPU significantly.
• BUT, 1700 page views/sec is a lot of page views. One similar box can serve up 11k cached page views/sec, but that would require 10Gb Ethernet.
13
Scaling Disk offload versus WXS
• WXS runs on commodity boxes and manages them so that it is fully fault tolerant in software; it doesn't need expensive, highly reliable hardware to run reliably.
• WXS can be scaled incrementally, simply by adding another box while it's running. Perfect linear scaling.
• Disk offload almost always uses a SAN.
• A SAN has a per-gigabyte charge, and you can't incrementally scale a SAN; you replace it.
14
Cache warmup is faster and cheaper with WXS
• The cache is shared between all WAS servers.
• Each cached entry is only generated ONCE, not ONCE PER JVM as with disk offload.
• It's about 2x faster to load the WXS cache than a disk-offload-based cache.
15
Invalidate/update once versus invalidate/update all
• When the cache is invalidated with disk offload, the entry must be regenerated on EVERY JVM in the cluster.
• WXS invalidates the cache entry ONCE per cluster.
• Only one WAS JVM needs to update the invalidated entry for EVERY JVM, because the cache is shared!
• This allows more frequent invalidations while cutting CPU and disk I/O to 1/N of before.
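The 1/N regeneration saving can be made concrete with a small sketch. This is an illustrative model, not product code; the class and method names are hypothetical, and `regeneratePage` stands in for actually running the servlet.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the regeneration cost after one invalidation: N JVMs with
// private caches each rebuild the entry, while N JVMs sharing one grid
// rebuild it exactly once.
public class InvalidationSketch {
    static final AtomicInteger regenerations = new AtomicInteger();

    static String regeneratePage(String key) {       // stands in for running the servlet
        regenerations.incrementAndGet();
        return "<html>" + key + "</html>";
    }

    // Disk-offload model: each JVM owns a private cache, so each one misses
    // after the invalidation and regenerates the page itself.
    static int invalidateWithPrivateCaches(int jvms, String key) {
        regenerations.set(0);
        for (int i = 0; i < jvms; i++) {
            Map<String, String> privateCache = new HashMap<>();
            privateCache.put(key, regeneratePage(key));
        }
        return regenerations.get();
    }

    // Shared-grid model: the first JVM to miss regenerates the page once,
    // and the shared entry then serves every other JVM.
    static int invalidateWithSharedGrid(int jvms, String key) {
        regenerations.set(0);
        Map<String, String> sharedGrid = new HashMap<>();
        for (int i = 0; i < jvms; i++) {
            sharedGrid.computeIfAbsent(key, InvalidationSketch::regeneratePage);
        }
        return regenerations.get();
    }
}
```

With 8 JVMs the private-cache model pays for 8 regenerations per invalidation and the shared-grid model pays for 1, which is the 1/N cut in CPU and disk I/O claimed above.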
16
Benefits summary
• Invalidations can occur more frequently, as they cost less with WXS than with disk offload.
• No SAN costs, unlike disk offload; use the existing boxes' memory, CPU and network.
• Faster, more efficient warm-up and JVM instance starts, because the cache is shared rather than private.
• Modern, Web 2.0-like architecture.
17
Learn More About Dynamic Application Infrastructure!
Application Foundation: ibm.com/appfoundation
Intelligent Management: ibm.com/intellmgmt
Extreme Transaction Processing: ibm.com/xtp
ibm.com/appinfrastructure
18
Thank You for Attending. We Value Your Feedback!
• Please complete the session survey for this session by:
• Accessing the SmartSite on your smart phone or computer at: http://imp2010.confnav.com – Surveys / My Session Evaluations
• Visiting any onsite event kiosk– Surveys / My Session Evaluations
• Each completed survey increases your chance to win an Apple iPod Touch in a daily drawing sponsored by Alliance Tech
19
Questions?
20
Copyright and Trademarks
© IBM Corporation 2009. All rights reserved. IBM, the IBM logo, ibm.com and the globe design are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml. Other company, product, or service names may be trademarks or service marks of others.