TRANSCRIPT
Oracle on Linux on System z Performance Experiences
Thomas Niewel, Oracle; Tom Kennelly, IBM
Agenda
• Monitoring Oracle on Linux on z
  – System Monitoring Tools (VM and Linux)
• Oracle Monitoring Tools
  – Using AWR and OEM to monitor performance
    • AWR – an overview
    • Examples of ASYNC IO options
    • Using Oracle Enterprise Manager Grid Control
• Guidelines for setting up a proof of concept project with Oracle on Linux on z
  – Sizing considerations
  – Proof of concept project considerations
  – Production Readiness
• General Tips
z/VM and Linux Monitoring Tools
• Performance Toolkit - IBM
• ESAMON – Velocity Software
• OMEGAMON - IBM
• VMStats - Linux
• Nmon - IBM
Agenda
• Oracle Monitoring Tools
  – Using AWR and OEM to monitor performance
    • AWR – an overview
    • Examples of ASYNC IO options
    • Using Oracle Enterprise Manager Grid Control
Oracle Monitoring Tools
• AWR
  – The Oracle performance warehouse. AWR allows the collection and analysis of performance data and offers more information than Statspack.
• Enterprise Manager - Grid Control
  – A graphical web-based console which provides a single, integrated solution for administration, monitoring, testing, deploying, operating, diagnosing, and resolving problems for Oracle and non-Oracle systems.
Automatic Workload Repository (AWR)
• Automatically collects database instance statistics
• Licensed in the Diagnostics Pack
• Captures statistical data
• Used by
  • AWR reports
  • Oracle database advisors
  • self-management features
• Coordinated across RAC instances
Automatic Workload Repository (AWR)
• Text and HTML reports available
• Reports can be generated / viewed by
  • Oracle Enterprise Manager
  • Scripts
    • awrrpt.sql
    • awrrpti.sql
    • ashrpt.sql (10.2)
    • awrddrpt.sql (10.2)
    • awrsqrpt.sql
• Contains the Statspack information, plus a lot more
Automatic Workload Repository (AWR)
[Architecture diagram: the MMON background process performs a light-weight capture of statistics from the SGA (V$ views) into the Workload Repository (DBA_HIST_xxx tables in the SYSAUX tablespace); the data is consumed by SQL Developer, SQL*Plus, OEM, the advisors, and ADDM]
• Base Statistics, Metrics, SQL Statistics, Active Session History
• Automatic Snapshots (default 1h)
• "Historic" Data (default 7 days)
• "Light-weight" capture
Automatic Workload Repository (AWR)
• Creating Snapshots
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
• Dropping Snapshots
  DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 22, high_snap_id => 32, dbid => 3310949047);
• Modifying Snapshot Settings
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 30, dbid => 3310949047);
• Dropping Baselines
  DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE(baseline_name => 'peak baseline', cascade => FALSE, dbid => 3310949047);
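The retention and interval arguments of MODIFY_SNAPSHOT_SETTINGS are given in minutes. A quick sanity check of the values used above (a minimal Python sketch; the helper name is made up for illustration):

```python
def awr_settings_in_days(retention_min: int, interval_min: int):
    """Convert AWR retention (minutes) to days; pass the interval through."""
    retention_days = retention_min / (24 * 60)
    return retention_days, interval_min

# The call above used retention => 43200 and interval => 30
days, interval = awr_settings_in_days(43200, 30)
# 43200 minutes of retention is 30 days, with a snapshot every 30 minutes
```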
AWR report
WORKLOAD REPOSITORY report for

DB Name      DB Id       Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
OSCAR1       2765581813  OSCAR11             2 10.2.0.4.0  YES ZL002

              Snap Id    Snap Time           Sessions Curs/Sess
            ---------    ------------------- -------- ---------
Begin Snap:       214    18-Feb-09 15:37:22        32       1.5
End Snap:         216    18-Feb-09 15:47:56        32       1.3
Elapsed:        10.56 (mins)
DB Time:        63.39 (mins)

Cache Sizes
~~~~~~~~~~~                  Begin        End
                        ---------- ----------
Buffer Cache:              11,008M    11,008M  Std Block Size:      8K
Shared Pool Size:           2,048M     2,048M  Log Buffer:     14,320K
Load Profile
~~~~~~~~~~~~                     Per Second     Per Transaction
                            ---------------     ---------------
Redo size:                         7,582.46          182,948.00
Logical reads:                   108,175.09        2,610,024.44
Block changes:                        43.51            1,049.89
Physical reads:                        0.71               17.11
Physical writes:                       0.58               14.11
User calls:                            0.28                6.67
Parses:                                0.79               19.00
Hard parses:                           0.04                1.00
Sorts:                                 0.25                6.00
Logons:                                0.01                0.33
Executes:                             21.32              514.33
Transactions:                          0.04

% Blocks changed per Read:   0.04    Recursive Call %:   100.00
Rollback per transaction %:  0.00    Rows per Sort:      163.81
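The two Load Profile columns are linked through the transaction rate: dividing a per-second value by transactions per second gives the per-transaction value. A hedged sketch using the redo figures above (the report rounds the transaction rate to two decimals, so the check is only approximate):

```python
def per_transaction(per_second: float, tx_per_second: float) -> float:
    """Derive the per-transaction column from the per-second column."""
    return per_second / tx_per_second

# Redo size from the report: 7,582.46 bytes/s at 0.04 transactions/s
approx = per_transaction(7582.46, 0.04)
# approx is about 189,561, close to the reported 182,948; the gap comes
# from the transaction rate being rounded to two decimals in the report
```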
AWR report
Load profile
• Contains a number of common ratios
• Allows characterization of the application
• Can point to problems
  – high hard parse rate
  – high I/O rate
  – high login rate
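The characterization above can be automated as a simple rule check over the Load Profile rates. This is a sketch; the thresholds are illustrative assumptions, not Oracle-documented limits:

```python
def flag_load_profile(hard_parses_per_s: float,
                      physical_reads_per_s: float,
                      logons_per_s: float) -> list:
    """Flag common problem patterns visible in an AWR Load Profile.
    Threshold values are illustrative assumptions only."""
    flags = []
    if hard_parses_per_s > 10:
        flags.append("high hard parse rate - check cursor sharing / bind variables")
    if physical_reads_per_s > 1000:
        flags.append("high I/O rate - check buffer cache and SQL access paths")
    if logons_per_s > 5:
        flags.append("high login rate - consider connection pooling")
    return flags

# The load profile shown earlier (0.04 hard parses/s, 0.71 physical
# reads/s, 0.01 logons/s) raises no flags
clean = flag_load_profile(0.04, 0.71, 0.01)
```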
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
             Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer Hit %:   99.95    In-memory Sort %:  100.00
               Library Hit %:  100.00        Soft Parse %:   97.12
          Execute to Parse %:   99.97         Latch Hit %:  100.00
 Parse CPU to Parse Elapsd %:   39.60     % Non-Parse CPU:   99.96

Shared Pool Statistics         Begin    End
                               ------ ------
              Memory Usage %:   89.21  89.04
     % SQL with executions>1:   94.19  93.84
   % Memory for SQL w/exec>1:   89.09  86.80
AWR report
Instance Efficiency
• Gives an overview of how the instance is performing
• Can also be used to compare to baseline
• Shared pool statistics help to identify cursor sharing problems
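One of the efficiency ratios above, Soft Parse %, can be recomputed directly from the Load Profile. A sketch (using the rounded per-second rates shown earlier, so the result differs slightly from the 97.12 the report derives from the unrounded counters):

```python
def soft_parse_pct(parses_per_s: float, hard_parses_per_s: float) -> float:
    """Soft Parse % as shown in the AWR Instance Efficiency section:
    the fraction of parses that did NOT require a hard parse."""
    return 100.0 * (1.0 - hard_parses_per_s / parses_per_s)

# Load Profile rates: 0.79 parses/s, 0.04 hard parses/s
pct = soft_parse_pct(0.79, 0.04)
# roughly 94.9 with the rounded rates
```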
• CPU time – real work
• Shows where Oracle sessions are waiting
• Use as basis for drilldown
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                               Waits   Time (s)  (ms) Time  Wait Class
------------------------------ ------------ --------- ---- ----- ----------
CPU time                                        2,330       60.9
log file sync                       240,286     1,550    6  40.5  Commit
db file sequential read             257,388     1,142    4  29.8  User I/O
direct path write                   826,666       648    1  16.9  User I/O
log file parallel write             130,204       417    3  10.9  System I/O
-------------------------------------------------------------
AWR report
AWR report
• AWR Wait Classes
  • Administration - backups, index rebuilds, ...
  • Application - row/table locks, user locks, ...
  • Cluster - RAC waits, ...
  • Commit - log file sync, ...
  • Concurrency - buffer busy, latches, ...
  • Configuration - free buffer waits, ...
  • Idle - rdbms ipc msg, smon timer, ...
  • Network - Oracle Net
  • Scheduler - resource manager, ...
  • System I/O - log file parallel write, ...
  • User I/O - reads, direct writes, ...
  • Other - miscellaneous waits
Time Model Statistics            DB/Inst: DB009/DE21001  Snaps: 6201-6202
-> ordered by Time (seconds) desc

                                                      Time     % Total
Statistic Name                                   (seconds)     DB Time
--------------------------------------------- -------------- -----------
DB time                                               584.47      100.00
sql execute elapsed time                              584.26       99.96
DB CPU                                                352.52       60.31
PL/SQL execution elapsed time                          31.14        5.33
background elapsed time                                11.57        1.98
background cpu time                                      .59         .10
connection management call elapsed time                  .06         .01
parse time elapsed                                       .06         .01
hard parse elapsed time                                  .04         .01
PL/SQL compilation elapsed time                          .03         .00
hard parse (sharing criteria) elapsed time               .00         .00
Java execution elapsed time                              .00         .00
hard parse (bind mismatch) elapsed time                  .00         .00
-------------------------------------------------------------
AWR report
SQL ordered by Elapsed Time      DB/Inst: OSCAR/OSCAR1  Snaps: 36343-36344
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
   into the Total Database Time multiplied by 100

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time SQL Id
---------- ---------- ------------ ---------- ------- -------------
     7,108      3,884            0        N/A    79.2 cht6f417fdcsg
Module: oracle@o1 (TNS V1-V3)
SELECT ENAME, DEPT FROM EMPLOYEE WHERE EMPNO=:1

Further report sections follow the same layout:
SQL ordered by CPU Time          DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Gets              DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Reads             DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Executions        DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Parse Calls       DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Sharable Memory   DB/Inst: DB009/DE21001  Snaps: 6201-6202
SQL ordered by Version Count     DB/Inst: DB009/DE21001  Snaps: 6201-6202
AWR report
Statistic                                     Total     per Second     per Trans
-------------------------------- ------------------ -------------- -------------
table scans (short tables)                    4,378           20.2         486.4
transaction rollbacks                             0            0.0           0.0
transaction tables consistent re                  0            0.0           0.0
transaction tables consistent re                  0            0.0           0.0
undo change vector size                     502,320        2,313.2      55,813.3
user calls                                       60            0.3           6.7
user commits                                      9            0.0           1.0
user rollbacks                                    0            0.0           0.0
user I/O wait time                               22            0.1           2.4
workarea executions - onepass                     0            0.0           0.0
workarea executions - optimal                    44            0.2           4.9
write clones created in foregrou                  0            0.0           0.0
Cached Commit SCN referenced                      0            0.0           0.0
Commit SCN cached                                 0            0.0           0.0
CPU used by this session                       3366          150.3           7.3
CPU used when call started                     3366          150.3           7.3
CR blocks created                                10            0.1           1.1
AWR report
Tablespace IO Stats              DB/Inst: OSCAR/OSCAR1  Snaps: 220-221
-> ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                        Buffer Av Buf
   Reads Reads/s Rd(ms) Blks/Rd   Writes Writes/s   Waits  Wt(ms)
-------- ------- ------ ------- -------- -------- ------- -------
L_LOBTAB
 246,891     394    4.5     1.0  892,498    1,426       0     0.0
D_LOBTAB
      36       0    4.4     1.4    3,861        6       0     0.0
UNDOTBS2
       4       0    0.0     1.0    2,957        5     157     0.1
SYSAUX
      51       0    4.9     3.0      365        1       0     0.0
SYSTEM
       4       0    0.0     1.0       13        0       0     0.0
TEST
       8       0    0.0     1.0        8        0       0     0.0
USERS
       4       0    0.0     1.0        4        0       0     0.0
-------------------------------------------------------------
AWR report
I/O rules of thumb
• db file sequential read: < 10 ms
• db file scattered read: 10-30 ms (dependent on I/O size)
• log file parallel write: < 5 ms (into disk cache)
• db file parallel write: < 5 ms (into disk cache)
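These rules of thumb can be applied mechanically to the average wait columns of an AWR report. A minimal sketch (the thresholds mirror the list above; the helper name is made up):

```python
# Rule-of-thumb upper bounds for average wait times, in milliseconds
THRESHOLDS_MS = {
    "db file sequential read": 10.0,
    "db file scattered read": 30.0,   # upper end; depends on I/O size
    "log file parallel write": 5.0,
    "db file parallel write": 5.0,
}

def check_io_latencies(avg_wait_ms: dict) -> list:
    """Return the events whose average wait exceeds the rule of thumb."""
    return [ev for ev, ms in avg_wait_ms.items()
            if ev in THRESHOLDS_MS and ms > THRESHOLDS_MS[ev]]

# Average waits from the Top 5 Timed Events shown earlier
slow = check_io_latencies({"db file sequential read": 4.0,
                           "log file parallel write": 3.0})
# slow == [] since both events are within the guidelines
```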
Buffer Pool Advisory             DB/Inst: OSCAR/OSCAR1  Snap: 12458
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate

                                         Est Phys
     Size for   Size      Buffers for    Read       Estimated
P    Est (M)    Factor    Estimate       Factor     Physical Reads
--- --------    ------    ------------   ------     --------------
D        624        .1         74,997       1.0          2,748,729
D      1,248        .2        149,994       1.0          2,733,377
D      1,872        .3        224,991       1.0          2,726,117
D      2,496        .4        299,988       1.0          2,724,175
D      3,120        .5        374,985       1.0          2,720,804
D      3,744        .6        449,982       1.0          2,714,829
D      4,368        .7        524,979       1.0          2,709,732
D      4,992        .8        599,976       1.0          2,704,917
D      5,616        .9        674,973       1.0          2,700,192
D      6,240       1.0        749,970       1.0          2,694,928
D      6,288       1.0        755,739       1.0          2,694,384
D      6,864       1.1        824,967       1.0          2,687,845
D      7,488       1.2        899,964       1.0          2,682,297
D      8,112       1.3        974,961       1.0          2,677,575
D      8,736       1.4      1,049,958       1.0          2,665,768
D      9,360       1.5      1,124,955       1.0          2,632,310
D      9,984       1.6      1,199,952       1.0          2,607,445
AWR report
AWR report
• AWR collects V$SEGSTAT statistics for hot segments (tables, indexes, etc.)
• Top segments are determined by
  • Logical and physical reads
  • Wait count (sum of ITL, row lock, buffer busy)
  • RAC interconnect activity
  • Size change over last snapshot period
  • Access of chained rows
Workload Repository Compare Report
Active Session History
• Part of AWR
• Helps to analyze
  – Short-term problems (minute history)
  – Isolation of the cause by SQL_ID, SESSION_ID, MODULE, etc.
  – Blocking sessions (enqueue, buffer busy wait)
• Called by
  – ASH Report (ashrpt.sql or Enterprise Manager)
  – Hang Analyze
AWR Report Example:
Oracle for Linux on z - Usage of Asynchronous I/O
AWR Report example – disk_asynch_io=false
Load Profile
~~~~~~~~~~~~                     Per Second     Per Transaction
                            ---------------     ---------------
Redo size:                     18,587,127.36           71,259.87
Logical reads:                     23,190.17               88.91
Block changes:                     14,015.06               53.73
Physical reads:                       249.35                0.96
Physical writes:                    2,159.14                8.28
User calls:                         2,597.78                9.96
Parses:                               521.76                2.00
Hard parses:                            0.05                0.00
Sorts:                                  1.64                0.01
Logons:                                 0.06                0.00
Executes:                             525.06                2.01
Transactions:                         260.84
AWR Report example – disk_asynch_io=false
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                               Waits   Time (s)  (ms) Time  Wait Class
------------------------------ ------------ --------- ---- ----- ----------
log file sync                       240,294     3,190   13  58.9  Commit
CPU time                                        2,044       37.7
db file sequential read             228,587       914    4  16.9  User I/O
log file parallel write             109,343       680    6  12.6  System I/O
SQL*Net more data from client     7,680,294        24    0   0.4  Network
---------------------------------------------------------
AWR Report example – disk_asynch_io=true
Load Profile
~~~~~~~~~~~~                     Per Second     Per Transaction
                            ---------------     ---------------
Redo size:                     27,257,205.71           71,735.04
Logical reads:                     36,037.37               94.84
Block changes:                     21,788.78               57.34
Physical reads:                       418.53                1.10
Physical writes:                    3,154.07                8.30
User calls:                         3,789.37                9.97
Parses:                               762.38                2.01
Hard parses:                            0.54                0.00
Sorts:                                  3.78                0.01
Logons:                                 0.07                0.00
Executes:                             769.40                2.02
Transactions:                         379.97
AWR Report example – disk_asynch_io=true
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                               Waits   Time (s)  (ms) Time  Wait Class
------------------------------ ------------ --------- ---- ----- ----------
CPU time                                        2,330       60.9
log file sync                       240,286     1,550    6  40.5  Commit
db file sequential read             257,388     1,142    4  29.8  User I/O
direct path write                   826,666       648    1  16.9  User I/O
log file parallel write             130,204       417    3  10.9  System I/O
-------------------------------------------------------------
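The effect of disk_asynch_io shows up directly in the transaction rate: 260.84 tx/s with it off versus 379.97 tx/s with it on, and the log file sync average wait dropped from 13 ms to 6 ms. A quick calculation with the numbers taken from the two load profiles above:

```python
def pct_improvement(before: float, after: float) -> float:
    """Relative throughput gain in percent."""
    return 100.0 * (after - before) / before

# Transactions per second from the two AWR load profiles
gain = pct_improvement(260.84, 379.97)
# roughly a 46% increase in transactions per second with
# disk_asynch_io=true on this workload
```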
Using Oracle Enterprise Manager Grid Control
Example:
How to identify problem areas
[Screenshots: Oracle Enterprise Manager Grid Control pages used to drill down into problem areas]
Guidelines for a Proof of Concept (PoC)
• Guidelines for setting up a proof of concept project with Oracle on Linux on z
  – Sizing considerations
  – Proof of concept project considerations
  – Production Readiness
Sizing – the most important step
For PoC or full production
Mainframe Sizing Tools
[Chart: sizing tools plotted by accuracy against customer data/methodology, from general to detailed: zPSG, CCL Sizer, SCON, SCON w/SURF, z/VM Planner, zPCR, RACEv, zCP3000, CP2KVMXT]
Mainframe Linux Server Consolidation - Sizing Process - SCON
[Diagram: gather data from the distributed servers (DB, http) via a questionnaire, input the data into the Server Consolidation Tool, perform the analysis, and produce the projected utilization on the mainframe as the result]
Type of questions:
- Server make & model
- Speed (MHz)
- Peak Average Utilization (%)
- Workload type (i.e. DB, Mail, http)
Mainframe Linux Server Consolidation - Sizing Process - SCON with SURF
[Diagram: as above, but measured data collected by SURF on the distributed servers (DB, Mail, http) feeds the Server Consolidation Tool SCON, which performs the analysis and projects the utilization on the mainframe]
Type of questions:
- Server make & model
- Speed (MHz)
- Peak Average Utilization (%)
- Workload type (i.e. DB, Mail, http)
[Chart: Total MIPS consumed for all servers, for 24 hours each day in 15-minute intervals; time of day on the x-axis, 0 to 2,500 MIPS on the y-axis; built from SURF measured data]
Oracle DB Memory sizing
• Obtain Oracle SGA and PGA sizes from all database instances
  – Prefer Advisory sizes from an AWR report
• Calculate guest(s) virtual storage size (in MB):
  (SGA + PGA) + 256 MB for ASM + 512 MB for Linux* **
• Assume the sum of all guest virtual sizes for production equals p and the sum of all guest virtual sizes for dev/qa/training equals t. Then:
  Real memory for guests = p/0.66 + t/0.33 for z/VM memory over-commit
  – Assumes multiple guests are involved; not correct for a single guest
• System z memory = real memory for guests + memory for z/VM and expanded storage

* Increase the estimate when the Oracle SGA is large and hundreds of dedicated server connections are expected
** A large overall virtual storage requirement may result in larger page tables in Linux, which require storage
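The sizing formula above can be sketched as a small calculation. The guest sizes below are hypothetical example figures, not recommendations:

```python
def guest_virtual_mb(sga_mb: float, pga_mb: float,
                     asm_mb: float = 256, linux_mb: float = 512) -> float:
    """Virtual storage for one Oracle guest: (SGA + PGA) + ASM + Linux."""
    return sga_mb + pga_mb + asm_mb + linux_mb

def real_memory_mb(prod_virtual_mb: float, devqa_virtual_mb: float) -> float:
    """Real memory with z/VM over-commit: p/0.66 for production,
    t/0.33 for dev/qa/training (assumes multiple guests)."""
    return prod_virtual_mb / 0.66 + devqa_virtual_mb / 0.33

# Hypothetical: one production guest (9 GB SGA + 3.5 GB PGA) and one
# dev guest (2 GB SGA + 1 GB PGA)
p = guest_virtual_mb(9216, 3584)   # 13,568 MB of production virtual storage
t = guest_virtual_mb(2048, 1024)   # 3,840 MB of dev/qa virtual storage
total = real_memory_mb(p, t)       # real memory for guests, before z/VM itself
```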
PGA Memory Advisory from an AWR report
It appears that the allocated memory of 7,168 MB is twice as large as required.
SGA Target Advisory from an AWR report
It appears that the allocated memory of 9,216 MB might be reasonable.
Threads for dedicated servers
Decide on the number of dedicated threads and multiply that by 4.5 MB for the required real memory to include in the guest sizing.
The current logons value below may give a hint about the number of threads.
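The thread rule above is a one-line calculation; a sketch with a hypothetical connection count:

```python
def dedicated_server_memory_mb(threads: int, mb_per_thread: float = 4.5) -> float:
    """Real memory to add to the guest sizing for dedicated server
    connections, at 4.5 MB per thread as per the rule above."""
    return threads * mb_per_thread

# e.g. 200 dedicated connections (hypothetical) need about 900 MB
extra = dedicated_server_memory_mb(200)
```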
Obvious comments for sizing
• Garbage in, garbage out.
• Choose appropriate time frames that represent reasonable capacity usage
• Do not make guesses about the sizing input
• We must get the CPU capacity, I/O subsystem, and the memory at the correct levels before any testing starts
• Engage a System z - Oracle specialist to assist with sizing
Proof of Concept (PoC)
PoC part 1
• Engage a System z - Oracle specialist to assist with PoC planning
• Attend education
• Obtain IFLs and memory as per the sizing process
  – No zIIPs, zAAPs, or CPs for this environment
  – Choose the I/O subsystem (ECKD or SCSI)
• Install z/VM and its performance tools
• Install Linux
  – Choose certified levels of SUSE or Red Hat
    http://www.oracle.com/technology/support/metalink/index.html
  – Verify required Oracle modules have been installed
• Use Orion to validate the I/O subsystem even before an Oracle database is installed
  – Performs Oracle-like I/O
Workshops – Washington Systems Center
• No charge, Client Team Registration
• Offered in various cities across North America
• 2.5 days; attendees responsible for travel expenses
• Combination of workshops and lab exercises

• Customizing Linux and the Mainframe for Oracle DB Applications (LXOR6)
  – For clients considering a move of Oracle to Linux on System z
  – Topics include hardware technologies, software components, best practices, performance and tuning, performance tools, Linux distributions, tools and services for sizing
  – Las Vegas (SIG) Apr. 20-21, 2010; Gaithersburg May 25-27, 2010

• Virtualization & Consolidation to Linux on System z (VC001)
  – Demonstrates the benefits of consolidating distributed servers onto Linux on z
  – Business seminar followed by a technical workshop. Builds the business case and demonstrates the benefits of consolidation & virtualization. Hands-on labs to perform consolidation of distributed apps to Linux on z, project and validate capacity requirements, review tools and project steps
  – Milwaukee May 4-6, 2010
Storage – Testing with ORION - 1
ORION simulates Oracle reads and writes without having to create a database and helps to isolate I/O issues. When a database is optimally configured you can expect to get up to 95% of the throughput of Orion.

./orion_zlinux -run oltp -testname mytest -num_disks 2 -duration 30 -simulate raid0

ORION VERSION 11.2.0.0.1
Commandline: -run oltp -testname mytest -num_disks 2 -duration 30 -simulate raid0
This maps to this test:
Test: mytest
Small IO size: 8 KB   Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0   Stripe Depth: 1024 KB
Write: 0%   Cache Size: Not Entered
Duration for each Data Point: 30 seconds
Small Columns: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
Large Columns: 0
Total Data Points: 22
Name: /dev/dasdq1 Size: 2461679616
Name: /dev/dasdr1 Size: 2461679616
2 FILEs found.
Maximum Small IOPS=5035 @ Small=40 and Large=0
Minimum Small Latency=0.55 @ Small=2 and Large=0
Storage – Testing with ORION - 2
-run oltp -testname mytest -num_disks 2 -duration 30 -simulate raid0

This maps to this test:
Test: mytest
Small IO size: 8 KB   Large IO size: 1024 KB
IO Types: Small Random IOs, Large Random IOs
Simulated Array Type: RAID 0   Stripe Depth: 1024 KB
Write: 0%
Cache Size: Not Entered
Duration for each Data Point: 30 seconds
Small Columns: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40
Large Columns: 0
Total Data Points: 22
Name: /dev/sda1 Size: 10737401856
Name: /dev/sdb1 Size: 10737401856
2 FILEs found.
Maximum Small IOPS=24945 @ Small=24 and Large=0
Minimum Small Latency=0.60 @ Small=12 and Large=0
Download - http://www.oracle.com/technology/software/tech/orion/index.html
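When comparing several Orion runs (e.g. ECKD versus SCSI), the headline figures can be pulled out of the summary automatically. A sketch: the regexes match the output format shown above, but treat them as an assumption, since the exact format can vary between Orion versions:

```python
import re

def parse_orion_summary(text: str) -> dict:
    """Extract the headline IOPS and latency figures from Orion output."""
    result = {}
    m = re.search(r"Maximum Small IOPS=(\d+)", text)
    if m:
        result["max_small_iops"] = int(m.group(1))
    m = re.search(r"Minimum Small Latency=([\d.]+)", text)
    if m:
        result["min_small_latency_ms"] = float(m.group(1))
    return result

# Sample lines taken from the ECKD run shown earlier
sample = ("Maximum Small IOPS=5035 @ Small=40 and Large=0\n"
          "Minimum Small Latency=0.55 @ Small=2 and Large=0")
stats = parse_orion_summary(sample)
# stats == {'max_small_iops': 5035, 'min_small_latency_ms': 0.55}
```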
Storage – Testing with Orion - 3
• Be careful of the options you choose. The writes are destructive.
• Perform Orion testing BEFORE installing the Oracle database to validate the I/O subsystem
Moving data is like moving water – must have adequate flow throughout
PoC part 2
• Install Oracle database – 10gR2
  – Consider starting with Oracle ASM versus LVM ext3 files
  – If using ext3, verify that the Oracle init.ora has
      filesystemio_options = setall
      disk_asynch_io = true
    to eliminate Linux double caching, which wastes storage and CPU resources
• Create appropriate disk multipathing
  – Different for SCSI and ECKD
  – Consider running Orion again to test multipathing
• Load database from another Oracle database source
  – Use transportable tablespace or database for metadata when endian formats are the same
    http://en.wikipedia.org/wiki/Endian
  – Additional steps, like rman conversions, are required for unlike endian formats
  – Import/export may be required when the source database is older than 10gR2
  – Recreate statistics for optimizer use
Endian formats
SQL> COLUMN PLATFORM_NAME FORMAT A32;
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                     ENDIAN_FORMAT
----------- --------------------------------- --------------
          1 Solaris[tm] OE (32-bit)           Big
          2 Solaris[tm] OE (64-bit)           Big
          7 Microsoft Windows IA (32-bit)     Little
         10 Linux IA (32-bit)                 Little
          6 AIX-Based Systems (64-bit)        Big
          3 HP-UX (64-bit)                    Big
          5 HP Tru64 UNIX                     Little
          4 HP-UX IA (64-bit)                 Big
         11 Linux IA (64-bit)                 Little
         15 HP Open VMS                       Little
          8 Microsoft Windows IA (64-bit)     Little
          9 IBM zSeries Based Linux           Big
         13 Linux x86 64-bit                  Little
         16 Apple Mac OS                      Big
         12 Microsoft Windows x86 64-bit      Little
         17 Solaris Operating System (x86)    Little
         18 IBM Power Based Linux             Big
         20 Solaris Operating System (x86-64) Little
         19 HP IA Open VMS                    Little
PoC part 3
• Run PoC testing
  – Collect performance data by enabling:
    • z/VM Performance Toolkit
      – Note that you must now think about virtualization versus dedicated resources
    • sar and iostat data from the Linux on z guest(s)
    • AWR reports from the Oracle database
  – Review performance reports
    • z/VM
      – Understand CPU, memory, and paging consumption for the LPAR
      – Review virtual machine consumption of resources
      – Evaluate I/O performance (ECKD only)
      – Verify VDISK usage
    • Linux, using sar and iostat
      – CPU, memory, swapping, and I/O performance for each guest
    • Oracle AWR report
      – I/O performance
      – SGA and PGA usage via automatic memory management (see previous chart)
      – Normal DBA tuning review
        – Review poorly performing SQL
        – Locking
  – Rerun the PoC if changes are made
  – Does the PoC validate the initial sizing?
PoC part 4
• Think in terms of virtualization – a different mind set
  – Does that Oracle database require all of the memory it has in the non-virtualized environment?
  – Should you have an active/passive setup in the same z/VM?
• Optimize use of resources
  – Did the guests get properly prioritized with respect to other guests?
  – What workloads are peaking at the same time?
    • CPU peak
    • Memory load
    • I/O subsystem
  – DBAs, Linux admins, and z/VM sysprogs must work as a team
AWR – I/O statistics
AWR – other statistics
Production Readiness
Production Readiness
• Did the PoC validate the initial sizing?
  – If not, attempt to resize or use the PoC information as the basis
• Did the PoC test the availability requirements established during the requirements phase (i.e., Oracle MAA)?
  – Standalone DB
  – Active/Passive
  – RAC with Active/Active
  – Use of multiple physical z10 machines
  – Data Guard for DR
• Is there sufficient IFL capacity, memory, and I/O for production?
  – Are you ready to measure capacity usage over the long term?
• Are the latest Oracle patches applied?