NetApp Recommendations for Oracle on AIX and JFS2
Jorge Costa
SDT - EMEA
© 2008 NetApp. All rights reserved.
Agenda
Recommendations for:
FlexVol Layout
LVM and FS Tuning
SMO
AIX tuning
Oracle tuning
IO stack tuning
TRs and additional reading
Best Practices
The next slides contain NetApp's best-practice recommendations for Oracle on AIX servers.
These recommendations address both performance and compatibility with the SnapManager line of products.
• AIX performs better with a larger number of small LUNs than with a single large LUN
• Tune queue depths
• Limit the AIX buffer cache and move the freed memory into the Oracle SGA
• Limit SGA paging activity
• Spread the DG, LV, and FS across every mapped LUN
• Tune each filesystem according to its use
• Enable Oracle to use the new I/O options
Best Practices
SnapManager requirements:
SMO does not take backups of the TEMP tablespaces
SMO does not back up the online redo logs
SMO takes backups of the datafiles, archive logs, and control files
Create dedicated FlexVols for: datafiles, redo logs, archive logs, control files, and temp tablespace
Do not mix LUNs from different FlexVols in the same LVM disk group
Storage:ORACLE:FlexVol Layout
Create the Oracle FlexVols: [layout recommendations for performance and SMO compatibility]
/vol/oradata (datafiles and indexes) [8-16 LUNs]
/vol/oralog (redo logs only) [2-4 LUNs]
/vol/orarch (archived redo logs) [2-4 LUNs]
/vol/controlfiles (small volume for control files) [2-4 LUNs]
/vol/oratemp (temp tablespace) [4-8 LUNs]
/vol/orabin (Oracle binaries) [1-2 LUNs]
Storage:ORACLE:LVM Layout
Create the Oracle LVM disk groups: [layout recommendations for performance and SMO compatibility]
• DGoradata (datafiles and indexes)
• DGoralog (redo logs only)
• DGorarch (archived redo logs)
• DGcontrolfiles (small DG for control files)
• DGoratemp (temp tablespace)
• DGorabin (Oracle binaries)
Storage:ORACLE:FlexVol Options
Set the Volume Options:
vol options <oraclevol>
nosnap=off, nosnapdir=off, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=off, maxdirsize=335462, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
* Do not set fractional_reserve=0 unless you are also using volume autosize or snap autodelete
Storage: FlexVol Space Management
Use Volume AutoGrow (and SnapAutodelete)
but understand the impact of using Fractional Reserve first.
The link below explains Fractional Reserve in plain language:
http://communities.netapp.com/blogs/ServiceBytes
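As a minimal sketch, the enable commands for autosize and snapshot autodelete can be generated for each FlexVol from the layout slide; this assumes the Data ONTAP 7-mode `vol autosize` and `snap autodelete` syntax, so verify it against your ONTAP release before applying:

```shell
# Sketch: print the Data ONTAP 7-mode commands that enable volume
# autosize and snapshot autodelete for each Oracle FlexVol.
# Volume names come from the FlexVol layout slide; the option syntax
# is an assumption to check against your ONTAP release.
for vol in oradata oralog orarch controlfiles oratemp orabin; do
  echo "vol autosize $vol on"
  echo "snap autodelete $vol on"
done
```

Printing the commands rather than running them keeps the sketch safe; grow increments and autodelete triggers can then be tuned per volume before execution on the controller.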
AIX:Tune the IO stack
Set the queue_depth per device:
For large servers:
chdev -l hdisk2 -a queue_depth=128
chdev -l hdisk3 -a queue_depth=128
...
chdev -l hdisk16 -a queue_depth=128
For small to medium servers:
chdev -l hdisk2 -a queue_depth=64
chdev -l hdisk3 -a queue_depth=64
...
chdev -l hdisk16 -a queue_depth=64
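Rather than typing one chdev per LUN, the per-disk commands can be generated in a loop. The hdisk range (2-16) and the large-server queue_depth of 128 are the example figures from this slide; adjust both for the actual host:

```shell
# Sketch: print the chdev command for each data LUN (hdisk2..hdisk16)
# with queue_depth=128. Review the output, then pipe it to sh on the
# AIX host to apply it.
depth=128
i=2
while [ "$i" -le 16 ]; do
  echo "chdev -l hdisk$i -a queue_depth=$depth"
  i=$((i + 1))
done
```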
AIX:Tune the IO stack
Set the queue_depth per HBA:
For large servers:
chdev -l fcs0 -a num_cmd_elems=1024
chdev -l fcs1 -a num_cmd_elems=1024
For small to medium servers:
chdev -l fcs0 -a num_cmd_elems=512
chdev -l fcs1 -a num_cmd_elems=512
AIX:Tune the IO stack
Check asynchronous I/O and fastpath configuration with:
ioo -L | grep aio
ioo -p -o aio_fsfastpath=1 (default setting)
AIX:Oracle SGA
Prevent the memory pages of the SGA from being paged out: (only if the application and DB run on the same AIX host and computational pages exceed 80%)
vmo -p -o v_pinshm=1
vmo -p -o maxpin%=[(SGA size * 100 / total mem) + 3]
(Leave maxpin% at the default of 80% unless the SGA exceeds 77% of real memory)
In Oracle, set:
LOCK_SGA=TRUE
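The maxpin% arithmetic can be sketched in shell. The formula follows the common IBM/Oracle guidance of pinning the SGA's share of real memory plus a 3% margin; the 64 GB memory and 50 GB SGA figures below are illustrative assumptions:

```shell
# Sketch: maxpin% = (SGA size * 100 / total real memory) + 3.
# The memory and SGA sizes are assumed example values.
total_mb=65536   # 64 GB real memory
sga_mb=51200     # 50 GB SGA
maxpin=$(( sga_mb * 100 / total_mb + 3 ))
echo "maxpin% = $maxpin"
```

Here the SGA is about 78% of real memory, above the 77% threshold, so maxpin% is raised from its default of 80 to 81.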
AIX:LVM
Create the disk groups: (distribute the volume group across all the LUNs; only use LUNs from the corresponding FlexVol)
mkvg -S -s 32m -y <VGname> \
  hdisk2 hdisk3 hdisk4 hdisk5 \
  hdisk6 hdisk7 hdisk8 hdisk9
* LUNs from FlexVol oradata go into VGoradata
AIX:LVM
Create a Logical Volume:(by spreading the LV across all the LUNs)
mklv -t jfs2 -e x -y <LVname> <VGname> <size>g
AIX:FS
Make JFS2 options:
If you create a JFS2 filesystem on a striped (or PP-spread) LV, use the INLINE logging option.
It avoids "hot spots" by placing the log inside the filesystem (which is striped) instead of on a single PP stored on one hdisk.
crfs -a logname=INLINE
AIX:FS
Use Concurrent IO:
Concurrent I/O (CIO), introduced with JFS2 in AIX 5.2 ML1:
Implicit use of Direct I/O
No inode locking: multiple threads can perform reads and writes on the same file at the same time
Performance achieved using CIO is comparable to raw devices
crfs -a options=cio
AIX:FS
Use Concurrent IO:
Benefits of CIO/DIO for Oracle:
Avoids double caching: the data is already cached in the SGA
Faster access path to disk, reducing CPU utilization
Removes inode-lock contention: several threads can read and write the same file
AIX:FS:ORACLE
Create the FS based on its usage case:
For Oracle datafiles:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=cio
For Oracle redo logs:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a agblksize=512 -a options=cio
[When using CIO, I/O must be aligned with the JFS2 block size to avoid demoted I/O (a fallback to normal I/O after a direct I/O failure). Redo logs are always written in 512-byte units, so set agblksize=512.]
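The alignment rule in the note above can be illustrated with a small check: a 512-byte redo write divides evenly into a 512-byte agblksize but not into the 4096-byte default, which is what triggers the demotion:

```shell
# Sketch: with CIO, an I/O whose size is not a multiple of agblksize
# is demoted to normal (buffered) I/O. Compare a 512-byte redo write
# against the two block sizes.
for agblk in 512 4096; do
  if [ $(( 512 % agblk )) -eq 0 ]; then
    echo "agblksize=$agblk: 512-byte redo write stays CIO"
  else
    echo "agblksize=$agblk: 512-byte redo write is demoted"
  fi
done
```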
AIX:FS:ORACLE
Create the FS based on its usage case:
For archive logs:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=rbrw
For control files:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=rw
AIX:FS:ORACLE
Create the FS based on its usage case:
Other filesystems (binaries/applications):
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE
AIX:ORACLE
Increase LGWR priority: renice # -p <LGWR PID>
Use lower values to increase the CPU scheduling priority of the LGWR Oracle process.
The goal is to give LGWR access to CPU resources whenever it needs to perform its log writes.
ORACLE
Adjust init.ora: (On AIX with Oracle 9i and 10g, the recommended settings for the database are a single DB writer process and async I/O)
disk_asynch_io=true
filesystemio_options=asynch
[Using asynch instead of setall allows buffered writes to be used on the archive log area in 9i and 10g]
db_file_multiblock_read_count=32-128
[Because data transfer bypasses the AIX buffer cache, JFS2 prefetch and write-behind cannot be used; sequential reads can be tuned by adjusting the parameter above]
db_writer_processes=1
lock_sga=true
NETAPP: Technologies for Oracle
PAM
PAM offers a new way to optimize the performance of NetApp storage by improving throughput and latency while reducing the number of disk spindles and shelves required, as well as power, cooling, and rack-space requirements.
It is an array-controller-resident, intelligent, 3/4-length PCIe card with 16 GB of DDR2 SDRAM used as a read cache. It is integrated with Data ONTAP via FlexScale, software that provides various tuning options and modes of operation.
ORACLE:PAM1
PAM Architecture: 16 GB Read Cache Card
ORACLE:PAM1
Improvements in response time when using PAM
TRs – additional reading
The NetApp Performance Acceleration Module in File Services Workloads
http://media.netapp.com/documents/tr-3744.pdf
NetApp Performance Acceleration Module Oracle OLTP Characterization
http://media.netapp.com/documents/tr-3753.pdf
Configuring and Tuning NetApp Storage Systems for High-Performance Random-Access Workloads
http://media.netapp.com/documents/tr-3647.pdf
Information Lifecycle Management with Oracle® Database 10g™ Release 2 and NetApp SnapLock
http://media.netapp.com/documents/tr-3534.pdf
Oracle Fusion Middleware DR Solution Using NetApp Storage
http://media.netapp.com/documents/tr-3672.pdf
TRs – additional reading
Simplified SAN Provisioning and Improved Space Utilization Using NetApp Provisioning Manager
http://media.netapp.com/documents/tr-3729.pdf
NetApp Storage Controllers and Fibre Channel Queue Depth
http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/Queue_Depth.pdf
http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/config_guide/setting_up/concept/c_oc_set_config-limits-overview.html#c_oc_set_config-limits-overview
Oracle 11g Release 1 Performance: Protocol Comparison on Red Hat Enterprise Linux 5 Update 1, ONTAP 7.2 and 7.3
http://media.netapp.com/documents/tr-3700.pdf
NetApp Technology Network
http://communities.netapp.com/