
z/Linux Performance Configuration & Tuning for IBM® WebSphere® Compendium

Prepared By: TxMQ, Inc. – An enterprise solutions and IT staffing company.

ABSTRACT

Create a checklist and share best practices for the configuration and tuning of WebSphere on z/Linux.


Z/Linux Tuning Guide

©2014 TxMQ | 1430 B Millersport Highway, Williamsville, NY 14261 | 716-636-0070 | www.txmq.com

 


TABLE OF CONTENTS

NOTE ON SOURCES/DOCUMENT PURPOSE
SKILLS NEEDED
PROCESS
LPAR CONFIGURATION
  LPAR Weights
  Virtual Switch (VSWITCH) for All Linux Guest Systems
    z/VM VSWITCH LAN
    Guest LAN
  HiperSockets for LPAR-LPAR Communication
  Tips for Avoiding Eligible Lists
VM CONFIGURATION
  Memory Management and Allocation
  VM Scheduler Resource Settings
  Do I Need Paging Space on DASD?
  User Classes and Their Descriptions
  Mini-Disks
  VM Shared Kernel Support
  QUICKDSP
LINUX CONFIGURATION
  Linux Guests
    VCPU
    Memory
  Linux Swap – Where Should Linux Swap?
    Tips
    VDISK vs. DASD
    DCSS
    Dedicated Volume
    Traditional Minidisk
    VM T-disk
    VM V-disk
    XPRAM
WEBSPHERE CONFIGURATION
  GC Policy Settings
    optthruput
    optavgpause
    gencon
    subpool
  Any Form of Caching
  DynaCache Disk Off-Load
  Performance Tuning for WebSphere
PERFORMANCE MONITORING
  CP Commands
  Command-Line
  z/VM Performance Toolkit (Performance Toolkit for VM, SG24-6059)
  The was_appserver.pl Script
LINKS
  z/Linux Architecture, Tuning and Management
  z/Linux: RHEL WebSphere V7 Installation Information
  z/Linux: RHEL WebSphere V7 Installation Binaries

 

 


NOTE ON SOURCES/DOCUMENT PURPOSE

All guide sources come from well-documented IBM or IBM-partner reference material. The reason for this document is simple: take all relevant sources and put their salient points into a single, comprehensive document for reliable set-up and tuning of a z/Linux environment.

The ultimate point is to create a checklist and share best practices.

SKILLS NEEDED

Assemble a team that can address all aspects of the performance of the software stack. The following skills are usually required:

o Overall Project Coordinator
o VM Systems Programmer. This person sets up all the Linux guests in VM.
o Linux Administrator. This person installs and configures Linux.
o WebSphere Administrator.
o Lead Application Programmer. This person can answer questions about what the application does and how it does it.
o Network Administrator.

PROCESS

TIP: Start from the outside and work inward toward the application.

The environment surrounding the application causes about half of the potential performance problems; the application itself causes the other half. Start with the environment that the application runs in. This eliminates potential causes of performance problems. You can then work toward the application in the following manner.

1. LPAR. Things to look at: number of IFLs, weight, caps, total real memory, memory allocation between cstore and xstore.

2. VM. Things to look at: communications configuration between Linux guests and other LPARs, paging space, share settings.

3. Linux. Things to look at: virtual memory size, virtual CPUs, VM share and limits, swapping, swap file size, kernel tuning.

4. WebSphere. Things to look at: JVM heap size, connection pool sizes, use of caches, WebSphere application performance characteristics.


LPAR CONFIGURATION

Defining LPAR resource allocation for CPU, memory, DASD, and network connections.

LPAR WEIGHTS

o Adjust depending on the environment (prod, test, etc.).

A VIRTUAL SWITCH (VSWITCH) FOR ALL LINUX GUEST SYSTEMS IS A GOOD PRACTICE

With VSWITCH, the routing function is handled directly by the virtual machine's (VM's) Control Program instead of the TCP/IP machine. This can help eliminate most of the CPU time that was used by the VM router it replaces, resulting in a significant reduction in total system CPU time.

- When a TCP/IP VM router was replaced with VSwitch, decreases in total system CPU time ranging from 19% to 33% were observed.
- When a Linux router was replaced with VSwitch, decreases ranging from 46% to 70% were observed.

NOTE: The security of VSwitch is not equal to a dedicated firewall or an external router, so when high security is required of the router function, consider using those instead of VSwitch.

z/VM VSWITCH LAN

This configuration resulted in higher throughput than the Guest LAN feature.

GUEST LAN

Guest LAN is ring-based. It can be much simpler to configure and maintain.

HIPERSOCKETS FOR LPAR-LPAR COMMUNICATION

TIPS FOR AVOIDING ELIGIBLE LISTS

o Set each Linux machine's virtual-storage size only as large as it needs to be to let the desired Linux application(s) run. This suppresses the Linux guest's tendency to use its entire address space for file cache. If the Linux file system is hit largely by reads, you can make up for this with minidisk cache (MDC). Otherwise, turn MDC off, because it induces about an 11-percent instruction-path-length penalty on writes, consumes storage for the cached data, and pays off little because the read fraction isn't high enough.

o Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.

o Implement a one-to-one relationship between paging CHPIDs and paging volumes.


o Spread the paging volumes over as many DASD control units as possible.

o Turn on non-volatile storage (NVS) or DASD fast write (DASDFW) on the paging control units if they support it (applies to RAID devices).

o Provide at least twice as much DASD paging space (CP QUERY ALLOC PAGE) as the sum of the Linux guests' virtual storage sizes.

o Having at least one paging volume per Linux guest is a great thing. If the Linux guest is using synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest might be appropriate; one per active Linux application will serve the purpose.

o In queued direct I/O (QDIO)-intensive environments, plan that 1.25 MB per idling real QDIO adapter will be consumed out of CP below-2GB free storage, for CP control blocks (shadow queues). If the adapter is being driven very hard, this number could rise to as much as 40 MB per adapter. This tends to hit the below-2GB storage pretty hard. CP prefers to resolve below-2GB contention by using expanded storage (xstore). Consider configuring at least 2 GB to 3 GB of xstore to back up the below-2GB central storage, even if central storage is otherwise large.

o Try CP SET RESERVED to favor storage use toward specific Linux guests.
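The paging-space sizing rule above can be sketched as quick arithmetic; the guest sizes below are hypothetical examples, not recommendations:

```shell
# Hypothetical virtual storage sizes (MB) of the Linux guests on this system.
GUEST_SIZES_MB="2048 4096 1024"
TOTAL=0
for g in $GUEST_SIZES_MB; do
  TOTAL=$((TOTAL + g))
done
# Rule of thumb: provide at least 2x the sum as DASD paging space,
# then compare against what CP QUERY ALLOC PAGE reports.
echo "minimum DASD paging space: $((TOTAL * 2)) MB"
```

With these example sizes the minimum comes to 14336 MB; substitute your own guest definitions.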


VM CONFIGURATION

MEMORY MANAGEMENT AND ALLOCATION

o Add 200-256 MB for WebSphere overhead per guest.

o Configure 70% of real memory as central storage (cstore).

o Configure 30% of real memory as expanded storage (xstore). Without xstore, VM must page directly to DASD, which is much slower than paging to xstore.

o CP SET RESERVED. Consider reserving some memory pages for one particular Linux VM, at the expense of all others. This can be done with a z/VM command (CP SET RESERVED).

o If unsure, a good guess at VM size is the z/VM scheduler's assessment of the Linux guest's working set size.

o Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.

o Implement a one-to-one relationship between paging CHPIDs and paging volumes.

o Spread the paging volumes over as many DASD control units as you can.

o If the paging control units support NVS or DASDFW, turn them on (applies to RAID devices).

o CP QUERY ALLOC PAGE. Provide at least twice as much DASD paging space as the sum of the Linux guests' virtual storage sizes.

o Having at least one paging volume per Linux guest is beneficial. If the Linux guest is using synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest may be appropriate; one volume per active Linux application is realistic.

o In memory over-commitment tests with z/VM, over-commitment ratios of up to 3.2:1 were reached without any throughput degradation.

o Cooperative Memory Management (CMM1) and Collaborative Memory Management (CMM2) both regulate Linux memory requirements under z/VM. Both methods improve performance when z/VM hits a system memory constraint.

o Utilizing Named Saved Segments (NSS), the z/VM hypervisor makes operating system code in shared real memory pages available to z/VM guest virtual machines. With this update, multiple Red Hat Enterprise Linux guest operating systems on the z/VM can boot from the NSS and be run from a single copy of the Linux kernel in memory. (BZ#474646)

o Expanded storage for VM. Here are a few thoughts on why:

  □ While configuring some xstore may result in more paging, it often results in more consistent or better response time. The paging algorithms in VM evolved around having a hierarchy of paging devices. Expanded storage is the high-speed paging device and DASD the slower one where block paging is completed. This means expanded storage can act as a buffer for more active users as they switch slightly between working sets. These more active users do not compete with users coming from a completely paged-out scenario.
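The 70/30 cstore/xstore split described above reduces to simple arithmetic; REAL_MB below is a hypothetical LPAR memory size:

```shell
REAL_MB=16384                      # total real memory for the LPAR (hypothetical)
CSTORE_MB=$((REAL_MB * 70 / 100))  # 70% as central storage
XSTORE_MB=$((REAL_MB - CSTORE_MB)) # remaining 30% as expanded storage
echo "cstore=${CSTORE_MB}MB xstore=${XSTORE_MB}MB"
```

Remember to also budget the 200-256 MB per-guest WebSphere overhead noted above when sizing the guests themselves.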

 


  □ The central versus expanded storage issue is related to the different implementations of LRU algorithms used between stealing from central storage and expanded storage. In short, for real storage you use a reference bit, which gets reset fairly often, while in expanded storage you have the luxury of an exact timestamp of a block's last use. This allows you to do a better job of selecting pages to page out to DASD.

  □ In environments that page to DASD, the potential exists for transactions (as determined by CP) to break up with the paging I/O. This can cause a real-storage-only configuration to look like the throughput rate is lower.

  □ Also configure some expanded storage, if needed, for guest testing. OS/390, VM, and Linux can all use expanded storage.

VM SCHEDULER RESOURCE SETTINGS

Linux is a long-running virtual machine and VM, by default, is set up for short-running guests. This means that the following changes to the VM scheduler settings should be made. Linux is a Q3 virtual machine, so changing the third value in these commands is most important. Include these settings in the PROFILE EXEC for the operator machine or AUTOLOG1 machine:

o set srm storbuf=300,200,200
o set srm ldubuf=100,100,100
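As a sketch, the settings above could appear in the operator or AUTOLOG1 machine's PROFILE EXEC like this (hedged: verify the exact SET SRM operand syntax, including any percent signs, against your z/VM level's CP command reference before use):

```
/* PROFILE EXEC fragment (REXX) -- a sketch, syntax not verified */
'CP SET SRM STORBUF 300 200 200'
'CP SET SRM LDUBUF 100 100 100'
```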

DO I NEED PAGING SPACE ON DASD?

YES. One of the most common mistakes with new VM customers is ignoring paging space. The VM system, as shipped, contains enough page space to get the system installed and running some small trial work. However, you should add DASD page space to do real work. The planning and admin book has details on determining how much space is required.

Here are a few thoughts on page space:

If the system is not paging, you may not care where you put the page space. However, sooner or later the system grows to a point where it pages, and then you'll wish you had thought about it before this happens.

VM paging performs best when it has large, contiguous available space on volumes that are dedicated to paging. Therefore, do not mix page space with other space (user, t-disk, spool, etc.).

A rough starting point for page allocation is to add up the virtual machine sizes of the virtual servers running and multiply by 2. Keep an eye on the allocation percentage and the block read set size.

See:  Understanding  poor  performance  due  to  paging  increases  

 


USER CLASSES AND THEIR DESCRIPTIONS

If you have command privilege class E, issue the following CP command to view information about these classes of user:

INDICATE LOAD
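From a Linux guest itself, CP commands such as this can also be issued through the vmcp facility shipped with s390-tools, assuming the vmcp kernel module is available and the guest's user ID holds the needed privilege class; a sketch (z/VM guest only, run as root):

```
# Load the z/VM CP interface module, then issue the CP command.
modprobe vmcp
vmcp indicate load
```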


MINI-DISKS

A minimal Linux guest system fits onto a single 3390-3 DASD, and this is the recommended practice in the field. This practice requires that you do not use GNOME or KDE window managers in order to retain the small size of the installed system. (The example does not do this because we want to show the use of LVM and KDE.)

VM SHARED KERNEL SUPPORT

If your Linux distribution supports the "VM shared kernel support" configuration option, the Linux kernel can be generated as a shareable NSS (named saved system). Once this is done, any VM user can IPL LXSHR, and about 1.5 MB of the kernel is shared among all users. Obviously, the greater the number of Linux virtual machines running, the greater the benefit of using the shared system.

QUICKDSP

Makes a virtual machine exempt from being held back in an eligible list during scheduling when system memory and/or paging resources are constrained. Virtual machines with QUICKDSP set on go directly to the dispatch queue and are identified as Q0 users. We prefer that you control the formation of eligible lists by tuning the CP SRM values and allowing a reasonable over-commitment of memory and paging resources, rather than depending on QUICKDSP.


 

LINUX CONFIGURATION

LINUX GUESTS

VCPU

Each Linux guest is defined with an assigned number of virtual CPs, and a SHARE setting that determines each CP's share of the processor cycles available to z/VM.

MEMORY

When running WebSphere applications in Linux, you are typically able to over-commit memory at a 1.5:1 ratio. This means for every 1000 MB of virtual memory needed by a Linux guest, VM needs to have only 666 MB of real memory to back that up. This ratio is a starting point and needs to be adjusted based on experience with your workload.
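The 1.5:1 over-commitment ratio above works out as follows (VIRT_MB is a hypothetical guest size):

```shell
# Real memory needed to back a guest at a 1.5:1 over-commitment ratio.
VIRT_MB=1000
REAL_MB=$((VIRT_MB * 2 / 3))   # dividing by 1.5 equals multiplying by 2/3
echo "real memory to back ${VIRT_MB} MB virtual: ${REAL_MB} MB"
```

This reproduces the 666 MB figure in the text; rerun it with your own guest sizes as the ratio is tuned.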

LINUX SWAP – WHERE SHOULD LINUX SWAP?

TIPS

Try to avoid swapping in Linux whenever possible. It adds path length and causes a significant hit to response time. However, sometimes swapping is unavoidable. If you must swap, these are some pointers:

o Prefer swap devices over swap files.

o Do not enable MDC on Linux swap Mini-Disks. The read ratio is not high enough to overcome the write overhead.

o We recommend a swap device size of approximately 15% of the VM size of the Linux guest. For example, a 1 GB Linux VM should allocate 150 MB for the swap device.

o Consider multiple swap devices rather than a single, large VDISK swap device. Using multiple swap devices with different priorities can alleviate stress on the VM paging system when compared to a single, large VDISK.

Linux assigns priorities to swap extents. For example, you can set up a small VDISK with higher priority (higher numeric value) and it will be selected for swap as long as there is space on the VDISK to contain the process being swapped. Swap extents of equal priority are used in round-robin fashion. Equal prioritization can be used to spread swap I/O across CHPIDs and controllers, but if you are doing this, be careful not to put all the swap extents on Mini-Disks on the same physical DASD volume. If you do, you will not be accomplishing any spreading. Use swapon -p to set swap extent priorities.
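A sketch of the sizing rule and the priority mechanism; GUEST_MB and the device names are hypothetical:

```shell
# Swap device sized at ~15% of the guest's virtual machine size,
# per the rule of thumb above.
GUEST_MB=1024
SWAP_MB=$((GUEST_MB * 15 / 100))
echo "swap device size: ${SWAP_MB} MB"
# Hypothetical devices: the small, fast VDISK gets the higher priority so it
# is used first, with a slower DASD minidisk as overflow (run as root):
#   mkswap /dev/dasdb1 && swapon -p 10 /dev/dasdb1   # VDISK
#   mkswap /dev/dasdc1 && swapon -p 5  /dev/dasdc1   # DASD minidisk
```

As the text cautions, extents of equal priority rotate round-robin, so spreading only helps if the backing Mini-Disks are on different physical volumes.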

VDISK VS. DASD


The advantage of VDISK is that a very large swap area can be defined at very little expense. The VDISK is not allocated until the Linux server attempts to swap. Swapping to VDISK with the DIAGNOSE access method is faster than swapping to DASD or SCSI disk. In addition, when using a VDISK swap device, your z/VM performance management product can report swapping by a Linux guest.

 

DCSS

Swapping to DCSS is the fastest known method. As with VDISK, the solution requires memory; but lack of memory is the reason for swapping, so it is best used as a small, fast swap device for peak situations. The DCSS swap device should be the first in a cascade of swap devices, where the following devices can be bigger and slower (real disk). Swapping to DCSS adds complexity.

Create an EW/EN DCSS and configure the Linux guest to swap to the DCSS. This technique is useful for cases where the Linux guest is storage-constrained but the z/VM system is not. The technique lets the Linux guest dispose of the overhead associated with building channel programs to talk to the swap device. For one illustration of the use of swap-to-DCSS, see IBM's published paper on the technique.

DEDICATED VOLUME

If the storage load on your Linux guest is large, the guest might need a lot of room for swap. One way to accomplish this is simply to ATTACH or DEDICATE an entire volume to Linux for swapping. If you have the DASD to spare, this can be a simple and effective approach.

TRADITIONAL MINIDISK

Using a traditional Mini-Disk on physical DASD requires some setup and formatting the first time and whenever changes in size of swap space are required. However, the storage burden on z/VM to support Mini-Disk I/O is small, the controllers are well-cached, and I/O performance is generally very good. If you use a traditional Mini-Disk, you should disable z/VM Mini-Disk Cache (MDC) for that Mini-Disk (use the MINIOPT NOMDC statement in the user directory).
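A hedged sketch of the corresponding z/VM user directory entry; the user ID, device numbers, volume serial, and extents are all hypothetical, and MINIOPT NOMDC follows the MDISK statement it modifies:

```
USER LINUX01 PASSWORD 1024M 2048M G
  MDISK 0201 3390 0001 0500 VOLABC MR
  MINIOPT NOMDC
```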

VM T-DISK

A VM temporary disk (t-disk) could be used. This lets one define disks of various sizes with less consideration for placement (having to find 'x' contiguous cylinders by hand if you don't have DIRMAINT or a similar product). However, t-disk is temporary, so it needs to be configured (perhaps via PROFILE EXEC) whenever the Linux VM logs on. Storage and performance benefits of traditional Mini-Disk I/O apply. If you use a t-disk, you should disable Mini-Disk cache for that Mini-Disk.

VM V-DISK

A VM virtual disk in storage (VDISK) is transient like a t-disk is. However, VDISK is backed by a memory address space instead of by real DASD. While in use, VDISK blocks reside in central storage (which makes it very fast). When not in use, VDISK blocks can be paged out to expanded storage or paging DASD. The use of VDISK for swapping is sufficiently complex that it is covered on a separate tips page.

XPRAM
Attach expanded storage to the Linux guest and allow it to swap to this medium. This can give good performance if the Linux guest makes good use of the memory, but it can waste valuable memory if Linux uses it poorly or not at all. In general, this is not recommended for use in a z/VM environment.

 


 

WEBSPHERE  CONFIGURATION  

GC  POLICY  SETTINGS  

The  -­‐Xgcpolicy  options  have  these  effects:

OPTTHRUPUT
Disables concurrent mark. If you do not have pause-time problems (seen as erratic application response times), this option gives the best throughput. optthruput is the default setting.

OPTAVGPAUSE
Enables concurrent mark with its default values. If you are having problems with erratic application response times caused by normal garbage collections, you can reduce those problems, at the cost of some throughput, by using the optavgpause option.

GENCON  Requests  the  combined  use  of  concurrent  and  generational  GC  to  help  minimize  the  time  that  is  spent  in  any  garbage  collection  pause.  

SUBPOOL  Disables   concurrent   mark.   It   uses   an   improved   object   allocation   algorithm   to   achieve   better  performance  when   allocating   objects   on   the   heap.   This   option  might   improve   performance   on   SMP  systems   with   16   or   more   processors.   The  subpool  option   is   available   only   on   AIX®,   Linux®   PPC   and  zSeries®,  z/OS®,  and  i5/OS®.  
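As a concrete illustration, the policy is chosen through the JVM's generic arguments (set in the administrative console under the server's Java Virtual Machine properties, or in a startup script). The heap sizes below are placeholders for illustration, not recommendations:

```text
-Xgcpolicy:gencon -Xms512m -Xmx1024m -verbose:gc
```

The -verbose:gc output is what lets you see the pause-time behavior described above and judge whether a different policy is warranted.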

ANY  FORM  OF  CACHING  

Any form of caching produced a significant throughput improvement over the no-caching case, with Distributed Map caching generating the largest improvement.

DYNACACHE  DISK-­‐OFF  LOAD    

An interesting feature intended to significantly improve performance with small caches without additional CPU cost: entries evicted from the in-memory cache are off-loaded to disk instead of being discarded.

PERFORMANCE  TUNING  FOR  WEBSPHERE    

The following recommendations from the Washington Systems Center can improve the performance of your WebSphere applications:

o Use the same value for the StartServers, MaxClients, and MaxSpareServers parameters in the httpd.conf file. Identical values avoid starting additional servers as the workload increases. The HTTP server error log displays a message if the value is too low. Use 40 as an initial value.
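In httpd.conf terms (for the prefork server model that IHS releases of this vintage use), the recommendation amounts to the following sketch, using the suggested starting value of 40:

```apache
# httpd.conf -- keep all three equal so the server does not
# repeatedly spawn and reap child processes as load varies
StartServers     40
MaxClients       40
MaxSpareServers  40
```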


o Serve image content (JPG and GIF files) from the IBM HTTP Server (IHS) or Apache Web server; do not use the file-serving servlet in WebSphere. Use the DocumentRoot and <Directory> directives, or the Alias directive, to point to the image-file directory.
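A minimal httpd.conf sketch of this approach; the URL prefix and filesystem path are illustrative, and the access-control directives follow the Apache 2.2-style syntax used by IHS releases of this era:

```apache
# Serve /images/* straight from the web server, bypassing WebSphere
Alias /images/ "/opt/IBMHTTPServer/htdocs/images/"
<Directory "/opt/IBMHTTPServer/htdocs/images/">
    Order allow,deny
    Allow from all
</Directory>
```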

o Cache JSPs and servlets using the servletcache.xml file. A sample definition is provided in the servletcache.sample.xml file. The URI defined in servletcache.xml must match the URI found in the IHS access log: look for GET statements, and create a definition for each JSP or servlet to cache.

o Eliminate  servlet  reloading  in  production.  Specify  reloadingEnabled="false"  in  the  ibm-­‐web-­‐ext.xml  file  located  in  the  application’s  WEB-­‐INF  subdirectory.    
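A sketch of the relevant fragment, using the WAS V7-style descriptor format (namespace declarations are omitted for brevity, and older releases instead use a reloadingEnabled attribute in ibm-web-ext.xmi):

```xml
<!-- WEB-INF/ibm-web-ext.xml: stop polling for changed servlet classes -->
<web-ext>
    <enable-reloading value="false"/>
</web-ext>
```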

o Use  Resource  Analyzer  to  tune  parameter  settings.  Additionally,  examine  the  access,  error,  and  native  logs  to  verify  applications  are  functioning  correctly.    

o Reduce WebSphere queuing. To avoid flooding WebSphere queues, do not use an excessively large MaxClients value in the httpd.conf file. The Web container's Maximum Thread Size should be two-thirds the MaxClients value specified in httpd.conf, and the transport's Maximum Keep-Alive connections should be five more than the MaxClients value.
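Applying those ratios to the initial MaxClients value of 40 suggested above gives this worked example:

```text
MaxClients (httpd.conf)            = 40
Web container maximum thread size  = 27   (2/3 x 40, rounded up)
Maximum keep-alive connections     = 45   (40 + 5)
```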


 

PERFORMANCE  MONITORING    

CP  COMMANDS  

COMMAND-­‐LINE  

o vmstat
o sysstat package with sadc, sar, and iostat
o dasd statistics
o SCSI statistics
o netstat
o top

z/VM Performance Toolkit (see Performance Toolkit for VM, SG24-6059)

THE  WAS_APPSERVER.PL  SCRIPT  

This Perl script can help determine application memory usage. It displays the memory used by WebSphere as well as the memory usage of active WebSphere application servers. Using the Linux ps command, the script finds all processes containing the text "ActiveEJBServerProcess" (the WebSphere application server process), and from the RSS values of those processes it estimates the amount of memory used by WebSphere applications.
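The script itself is not reproduced here, but its core idea can be sketched with awk: sum the RSS of every process whose command line contains the marker string. The three input lines below simulate `ps -eo rss=,args=` output so the sketch is self-contained; on a live system, pipe the real ps output in instead:

```shell
# Sum RSS (in KB) of processes matching the marker string
awk -v m="ActiveEJBServerProcess" '
    index($0, m) { total += $1; count++ }   # field 1 is RSS in KB
    END { printf "%d servers, %d KB total RSS\n", count, total }
' <<'EOF'
102400 /opt/WebSphere/java/bin/java ActiveEJBServerProcess server1
204800 /opt/WebSphere/java/bin/java ActiveEJBServerProcess server2
51200 /usr/sbin/sshd
EOF
```

Note that RSS counts shared pages against every process that maps them, so the result is an upper-bound estimate rather than an exact figure.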

 


 LINKS  

Z/LINUX  ARCHITECTURE,  TUNING  AND  MANAGEMENT    

□ Resources for Linux on System z
□ VM performance tips

□ Tuning  hints  &  tips  

□ Linux  on  IBM  System  z:  Performance  Measurement  and  Tuning  

□ Linux  Performance  When  Running  Under  VM  (part  2)  

□ Linux  on  z/VM  Memory  Management  –  Velocity  Software  

□ How to Share a WebSphere Application Server V7 installation among many Linux for IBM System z systems

□ How to Share a WebSphere Application Server V6.1 installation among many Linux for IBM System z systems

□ z/VM  and  Linux  on  IBM  System  z  The  Virtualization  Cookbook  for  Red  Hat  Enterprise  Linux  5.2  

□ How  to  Architect  z/VM  and  Linux  for  WebSphere  V7  on  System  z  

□ WebSphere  Application  Server  6.1  Base  Performance  

□ Linux  on  IBM  zSeries  and  S/390:  z/VM  Configuration  for  WebSphere  Deployments  

□ How to Determine the Causes of Performance Problems with WAS Applications running on Linux for zSeries
□ Red Hat Enterprise Linux for System z
□ Tuning WebSphere Application Server Cluster with Caching

□ z/VM  and  Linux  Operations  for  z/OS  System  Programmers  

□ Understanding  Poor  Performance  Due  to  Paging  Increases  

□ Linux  on  IBM  eServer  zSeries  and  S/390:  Performance  Toolkit  for  VM  

□ Linux  on  System  z  standard  monitoring  tools  

□ Linux on System z Performance CMM
o http://www.vm.ibm.com/sysman/vmrm/vmrmcmm.html

Z/LINUX:  RHEL  WEBSPHERE  V7  INSTALLATION  INFORMATION    

□ System Requirements for WebSphere Application Server V7.0 for Linux on IBM System z
□ Preparing Red Hat Enterprise Linux 6 for installation

□ WebSphere v7 InfoCenter  

□ WebSphere Application Server V7.0 Technotes for RHEL 6    


Z/LINUX:  RHEL  WEBSPHERE  V7  INSTALLATION  BINARIES  

□ Download  WebSphere  Application  Server  Network  Deployment  Version  7.0  for  the  Linux  operating  system  

□ IBM  Update  Installer  V7.0.0.17  for  WebSphere  Software  for  AIX  

□ 7.0.0.17:  WebSphere  Application  Server  V7.0  Fix  Pack  17      

 


TERMS  AND  CONDITIONS  

This  whitepaper  and  all  the  information  it  contains,  including,  but  not  limited  to  trademarks,  trade  names,  service  marks  and  logos  (collectively,  the  "Content"),  is  the  property  of  TxMQ  and  is  protected  from  unauthorized  copying  and  dissemination  by  U.S.  Copyright  law,  trademark  law,  international  conventions,  and  other  intellectual  property  laws.  Nothing  contained  in  this  whitepaper  should  be  construed  as  granting,  by  implication,  estoppel,  or  otherwise,  any  license  or  right  to  use  this  Content  without  the  prior  written  permission  of  TxMQ  or  such  third  party  that  may  own  the  trademark  or  copyright  of  material  presented  in  this  whitepaper.    Subject  to  your  full  compliance  with  these  terms,  TxMQ  authorizes  you  to  view  the  Content,  make  a  single  copy  of  it,  and  print  that  copy,  but  only  for  your  own  lawful,  personal,  noncommercial  use,  provided  that  you  maintain  all  copyright,  trademark  and  other  intellectual  property  notices  contained  in  such  Content,  and  provided  that  the  Content,  or  any  part  thereof,  is  not  modified.  

All copyrights and trademarks are property of their respective owners. IBM and WebSphere are trademarks of International Business Machines Corporation.

LIMITATION OF LIABILITY
Under no circumstances will TxMQ be liable for any incidental, special, consequential or exemplary damages that result from the use of, or the inability to use, this whitepaper or the information contained within, even if TxMQ has been advised of the possibility of such damages. In no event shall TxMQ's total liability to you for all damages, losses, and causes of action - whether in contract, tort (including, but not limited to, negligence) or otherwise - exceed $1.

TRADEMARKS
TxMQ Information Systems is a trademark of TxMQ Information Systems, Inc. © TxMQ, 2014

The IBM name, IBM logo, IBM Premier Business Partner emblem and all IBM products are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. All other companies, products, service names, or product names are trademarks, registered trademarks or service marks of their respective owners. All forward-looking statements and product plans are subject to change based on numerous factors.