1 Overview
FusionSphere is a key technical platform in Huawei's cloud computing-based data center solutions. FusionSphere virtualizes physical resources, such as CPUs, memory, and storage, into a group of logical resources. The logical resources can be centrally managed, flexibly scheduled, and dynamically allocated. Multiple isolated VMs that run simultaneously can be created on a single physical server based on these logical resources. Database application software, such as Oracle Real Application Clusters (RAC) and SQL Server, deployed on the FusionSphere virtualization platform features high performance, high security, high reliability, and easy expansion.
This document describes how to optimize the performance of databases, such as Oracle RAC and SQL Server, deployed on the FusionSphere platform, and provides related configuration suggestions.
White Paper | Intel® Cloud Builders
FusionSphere Performance: Best Practices for Database Applications
Table of Contents
1 Overview
2 General Configurations
2.1 BIOS
2.2 Hardware-Assisted Virtualization
2.3 VM
3 Computing Resource Configuration
3.1 Introduction
3.2 Hyper-Threading
3.3 NUMA
3.4 x2APIC
3.5 Transparent Huge Memory Page
3.6 CPU QoS
3.7 Memory QoS
4 Storage Configuration
4.1 Introduction
4.2 RDM
5 Network Configuration
TABLE 2-1 BIOS CONFIGURATION ITEM LIST

Item | Value | Description
NUMA Support | Enabled | Enables the non-uniform memory access (NUMA) feature.
Hardware Prefetcher | Enabled | Enables the hardware prefetch function (the CPU prefetches upcoming instructions to improve system performance).
Adjacent Cache Line Prefetch | Enabled | Prefetches the adjacent cache line data.
Intel® HT Technology | Enabled | Enables the Intel hyper-threading technology.
Intel® Virtualization Technology | Enabled | Enables the Intel virtualization technology.
Intel® SpeedStep® Technology | Enabled | Enables the Intel SpeedStep technology to support dynamic frequency adjustment.
Intel® Turbo Mode Technology | Enabled | Enables the Intel turbo acceleration mode.
Intel C-State Technology | Disabled | Disables the Intel C-State power-saving function.
Intel® Virtualization Technology (Intel VT-d) | Enabled | Enables the Intel VT-d technology to support device passthrough.
VT Support | Enabled | Enables the Intel VT technology.
Local x2APIC | Enabled | Enables the Local x2APIC technology on condition that the virtualization layer and the guest operating system (OS) support this technology.
ACPI Selection | ACPI4.0 | The Local x2APIC technology requires ACPI 4.0 and does not support remapping.
2 General Configurations
2.1 BIOS
The FusionSphere platform supports multiple hardware platforms. The basic input/output system (BIOS) configurations may vary depending on the hardware in use. Table 2-1 lists recommended configurations for related BIOS configuration items.
2.2 Hardware-Assisted Virtualization
2.2.1 Configuration Suggestion
CPUs of the latest generation are recommended because they support CPU and memory management unit (MMU) virtualization, such as Intel® Virtualization Technology (Intel VT-x). FusionSphere supports hardware-assisted virtualization by default. Using CPUs that support the hardware-assisted virtualization feature can achieve optimal performance in the FusionSphere system.
2.2.2 Configuration Method
Most Intel CPUs have the hardware-assisted virtualization function, called Intel VT-x. The BIOS management interface of the server displays the configuration items shown in Figure 2-1. (The configuration method varies depending on hardware devices. The ATAE R3 board is used as an example in Figure 2-1.)
Perform the following steps to check whether the in-use CPUs support hardware-assisted virtualization:
Step 1 Check whether the CPUs support hardware-assisted virtualization on the Intel website:
Intel: http://ark.intel.com/zh-cn/Products/VirtualizationTechnology
Step 2 Run the cat /proc/cpuinfo | grep flags | head -n1 command on the server to view the CPU information and check whether flags contains vmx (Intel), as shown in Figure 2-2. If flags contains vmx, the CPUs support hardware-assisted virtualization. (The presence of vmx does not indicate whether the hardware-assisted virtualization function is enabled in the BIOS; switch to the BIOS management interface to check whether the function is enabled or disabled.)
Note: The hardware-assisted virtualization function is enabled by default. However, you are required to check this in the BIOS.
----End
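The flags check in Step 2 can be scripted. The sketch below tests a sample flags string for illustration; on a live server, the commented grep line would supply the real value:

```shell
# Minimal sketch: detect the vmx flag in a CPU-flags string.
# On a live server, obtain the string with: grep -m1 '^flags' /proc/cpuinfo
has_vmx() {
  case " $1 " in
    *" vmx "*) return 0 ;;   # vmx present: VT-x reported by the CPU
    *)         return 1 ;;
  esac
}

flags="fpu vme de pse tsc msr pae mce vmx sse sse2"  # sample value for illustration
if has_vmx "$flags"; then
  echo "hardware-assisted virtualization supported"
else
  echo "vmx flag not present"
fi
```

Remember that a positive result only shows the CPU capability; the BIOS switch must still be verified separately.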
2.3 VM
2.3.1 Configuration Suggestions
• Install the PV driver on VMs. The paravirtualized (PV) driver contains the VM disk driver, network interface card (NIC) driver, and balloon driver. After the PV driver is installed, the system can provide optimal performance in disk and network non-passthrough mode. Disable infrequently used services on the database VM, such as anacron, apmd, atd, autofs, cups, cupsconfig, gpm, isdn, iptables, kudzu, netfs, and portmap, based on site requirements to save VM CPU and memory resources.
• Perform operations such as scheduled tasks, backup, and anti-virus scans during off-peak hours to avoid excessive CPU usage. Otherwise, database performance will deteriorate.
Figure 2-1 BIOS configuration
Figure 2-2 CPU Information
2.3.2 Configuration Method
Disable irrelevant services running on the VM.
1. Run the following command to show services that have been enabled:
chkconfig --list | grep 3:on
2. Run the following commands to disable unwanted services and set them as default disabled services:
chkconfig atd off
chkconfig autofs off
service atd stop
service autofs stop
The same approach, disabling unneeded services, is also applicable to Windows VMs.
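The per-service commands above can be wrapped in a loop. This sketch only echoes the commands (a dry run), since the actual service list depends on site requirements; the names below are an assumed subset from the list in 2.3.1:

```shell
# Dry-run sketch: build the disable commands for a site-specific service list.
# Remove the echo prefixes to actually apply the changes (requires root).
services="atd autofs cups portmap"   # assumed example subset
for svc in $services; do
  echo "chkconfig $svc off"   # would remove the service from runlevel startup
  echo "service $svc stop"    # would stop the running service immediately
done
```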
3 Computing Resource Configuration
3.1 Introduction
Table 3-1 lists the CPU configuration for optimal performance in the FusionSphere system. For details, see the FusionSphere Performance Optimization Guide.
TABLE 3-1 COMPUTING RESOURCE CONFIGURATIONS

Item | Value
Hyper-threading (HT) | Enabled
Transparent huge page (THP) | Enabled
Host NUMA | Enabled
Guest NUMA | Enabled
x2APIC | Enabled
CPU QoS | Disabled
Memory overcommitment | Disabled
Memory reservation | All reserved
3.2 Hyper-Threading
3.2.1 Configuration Suggestion
Use hardware that supports the hyper-threading technology and enable the HT technology in the BIOS.
Note: The hyper-threading technology allows two threads to run on one physical CPU core. In this case, one physical CPU core functions as two logical cores, improving the CPU efficiency.
3.2.2 Configuration Method
Figure 3-1 Hyper-threading configuration
3.3 NUMA
3.3.1 Configuration Suggestions
NUMA is a memory management technology designed for symmetric multiprocessing (SMP) systems, in which memory access time depends on where the memory is located relative to the accessing CPU. With this feature enabled, a CPU can access its local memory faster than memory attached to another CPU or shared memory. The prerequisites for enabling the NUMA function in the FusionSphere system are that physical memory modules are symmetrically distributed (the physical memory model, size, and number of memory modules follow the symmetric-population suggestions provided by hardware vendors) and that the NUMA function has been enabled in the BIOS.
The NUMA architecture in the FusionSphere system contains host NUMA and guest NUMA.
Host NUMA automatically allocates VM CPU and memory resources to the same NUMA node and balances CPU workload among NUMA nodes. If the number of vCPUs on a VM is greater than that of CPU cores of a NUMA node, the host NUMA function is invalid.
Guest NUMA presents the memory and vCPU resources to the VM and shows the NUMA topology inside the VM, enabling VM application processes to preferentially use memory resources on one NUMA node and thereby improving memory access efficiency. If the number of vCPUs on a VM is less than the number of CPU cores of a single node, guest NUMA is invalid. If the number of vCPUs on a VM is a multiple of the number of CPU cores, guest NUMA evenly allocates the vCPUs to N nodes. If the number of vCPUs on a VM is greater than the number of CPU cores but is not a multiple of it, guest NUMA evenly allocates the vCPUs to each node of the physical server.
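The placement rules above can be summarized in a small helper. This is an illustrative sketch of the described behavior, not a FusionSphere interface; the core and node counts are assumed example values:

```shell
# Sketch of the guest-NUMA vCPU placement rules described above (illustrative only).
# Arguments: vCPU count, CPU cores per NUMA node, total NUMA nodes on the server.
guest_numa_placement() {
  vcpus=$1; cores_per_node=$2; total_nodes=$3
  if [ "$vcpus" -lt "$cores_per_node" ]; then
    echo "guest NUMA invalid"
  elif [ $((vcpus % cores_per_node)) -eq 0 ]; then
    echo "evenly allocated to $((vcpus / cores_per_node)) nodes"
  else
    echo "evenly allocated to all $total_nodes nodes"
  fi
}

guest_numa_placement 4 8 2    # fewer vCPUs than one node's cores
guest_numa_placement 16 8 2   # exact multiple of a node's cores
guest_numa_placement 10 8 2   # more than one node's cores, not a multiple
```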
Figure 3-2 Host NUMA configuration
Host NUMA is enabled by default. Guest NUMA is disabled by default.
• If the number of VM CPU cores is less than the number of cores on a single node of the server and the VM memory size is smaller than the single-node memory size of the server, enable only host NUMA.
• If the number of VM CPU cores is greater than the number of cores on a single node of the server and the VM memory size is larger than the single-node memory size of the server, enable both host NUMA and guest NUMA.
3.3.2 Configuration Method
Host NUMA takes effect only when NUMA Support is enabled in the BIOS. For details, see Figure 3-2.
Guest NUMA is disabled by default in the FusionSphere system. If you need to enable it, log in to FusionCompute and select Guest NUMA on the Basic Configuration page shown in Figure 3-3.
Figure 3-3 Guest NUMA configuration
3.4 x2APIC
3.4.1 Configuration Suggestion
The x2APIC feature can prevent performance deterioration caused by process scheduling in the hypervisor, improving computing virtualization performance.
The virtualization platform and VM OS must support the x2APIC feature.
TABLE 3-2 VM OSS THAT SUPPORT THE X2APIC FEATURE

VM OS | FusionSphere Virtualization Platform
Windows 7 and later | Supported
SUSE Linux Enterprise 11 Service Pack 2 (SP2) and later | Supported
The x2APIC feature is enabled by default in the FusionSphere virtualization platform.
3.4.2 Configuration Method
For a Linux VM
1. Run the following command on the VM host to check whether flags in the CPU information contains x2apic:
# cat /proc/cpuinfo | grep flags | head -n1
If flags contains x2apic, the CPU supports the x2APIC function.
2. Run the following command on the VM to check whether the configuration has taken effect:
# dmesg | grep -i x2apic
If Enabling x2apic or Enabled x2apic is displayed in the command output, the x2APIC feature is successfully enabled.
Figure 3-4 Checking whether the CPU supports the x2APIC function
Figure 3-5 Verifying whether the x2APIC function takes effect
For a Windows VM
The x2APIC feature is enabled in a Windows VM system by default.
Perform the following steps to check whether the x2APIC feature is enabled:
1. Log in to the host accommodating the Windows VM that runs the SQL Server.
2. Run the following command to view the VM configuration file:
virsh dumpxml VM name
3. Check whether the VM configuration file contains viridian and whether viridian is set to 1.
<viridian>1</viridian>
3.5 Transparent Huge Memory Page
3.5.1 Configuration Suggestion
You are advised to increase the memory page size to reduce the number of entries in the page mapping table, improving the CPU's address lookup efficiency.
The transparent huge page feature is enabled in the FusionSphere system by default to improve the memory access performance.
3.5.2 Configuration Method
For a Linux VM
An Oracle database is used as an example. The parameter values are provided only for reference. In practice, configure them based on specific service requirements.
1. Run the following command to check the size of the memory page:
# grep Hugepagesize /proc/meminfo
Information similar to the following is displayed:
Hugepagesize: 2048 kB
The preceding command output shows that the huge memory page size is 2048 KB.
2. Run the following command to check whether the huge memory page has been assigned:
# cat /proc/meminfo | grep HugePages_Total
Information similar to the following is displayed:
HugePages_Total: 0
The preceding command output shows that the huge memory page is not assigned.
3. Run the following commands to configure the number (nr_hugepages) of huge memory pages required to be assigned:
The nr_hugepages value can be calculated based on the following formula: nr_hugepages ≥ System global area size (MB)/Huge memory page size (MB)
Note: System global area (SGA) is a common Oracle data buffer area. For details, see the Oracle database guide.
# echo "vm.nr_hugepages=8192" >> /etc/sysctl.conf
# sysctl -p
Restart the VM and run the following command to check whether the huge memory page configuration takes effect:
grep HugePages_Total /proc/meminfo
Information similar to the following is displayed:
HugePages_Total: 8192
The preceding command output shows that the system has been assigned 8192 huge memory pages (8192 x 2048 KB = 16384 MB).
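The nr_hugepages formula in step 3 can be checked with shell arithmetic. The SGA size below is an assumed example value matching the 8192-page configuration above:

```shell
# Sketch: compute nr_hugepages from the formula
#   nr_hugepages >= SGA size (MB) / huge page size (MB)
sga_mb=16384          # assumed SGA size: 16 GB (example value)
hugepagesize_kb=2048  # from: grep Hugepagesize /proc/meminfo
hugepage_mb=$((hugepagesize_kb / 1024))
# Round up so the huge pages fully cover the SGA.
nr_hugepages=$(( (sga_mb + hugepage_mb - 1) / hugepage_mb ))
echo "vm.nr_hugepages=$nr_hugepages"   # -> vm.nr_hugepages=8192
```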
4. Open the /etc/security/limits.conf file using the vi editor and add the following two lines to the file to configure the memlock parameter of the Oracle user:
oracle soft memlock 16777216
oracle hard memlock 16777216
The memlock size can be calculated based on the following formula:
Memlock size ≥ Number of huge memory pages x 1024
Note: In this example, the memlock size is set to double the number of huge memory pages multiplied by 1024, namely, 16777216 (2 x 8192 x 1024).
Switch to the Oracle user and run the following command to check the memlock value:
ora_test@oracle[/home/oracle]> ulimit -l
Information similar to the following is displayed:
16777216
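The memlock value used in the limits.conf example follows from the note's formula; a quick arithmetic check with the same example values:

```shell
# Sketch: memlock (KB) set to double the number of huge pages x 1024,
# matching the 16777216 value used in the limits.conf lines above.
nr_hugepages=8192
memlock_kb=$((2 * nr_hugepages * 1024))
echo "oracle soft memlock $memlock_kb"   # -> oracle soft memlock 16777216
echo "oracle hard memlock $memlock_kb"   # -> oracle hard memlock 16777216
```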
Run the following commands to start the database:
ora_test@oracle[/home/oracle]> sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Jan 25 09:50:33 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
idle> startup
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 1218292 bytes
Variable Size 67111180 bytes
Database Buffers 92274688 bytes
Redo Buffers 7168000 bytes
Database mounted.
Database opened.
idle> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Run the following command to check whether the huge memory page has been used:
# grep HugePages_Free /proc/meminfo
Information similar to the following is displayed:
HugePages_Free: 5589
The number of free huge memory pages that is displayed is smaller than the total number of huge memory pages. This indicates that the Oracle database uses the huge memory page function.
To disable the AMM feature, set the following parameters, in which sga_target, memory_target, and memory_max_target must be set to 0.
Figure 3-6 CPU QoS configuration
ALTER SYSTEM SET sga_max_size=16g SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET=8g SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
CAUTION
For the Oracle database 11g, the automatic memory management (AMM) feature must be disabled. Otherwise, the huge memory page cannot be used. Configurations for other parameters are the same as those for the Oracle database 10g.
After the huge memory page is configured, restart the database to make the huge memory page feature take effect.
For a Windows VM
Perform the following operations to enable the huge memory page function on a Windows VM (only Windows Server 2003 and later are applicable):
1. Choose Control Panel > Administrative Tools > Local Security Policy.
2. On the Local Security Policy page, choose Local Policies > User Rights Assignment.
3. Double-click Lock pages in memory to add the user and group.
4. Restart the server.
On the Lock pages in memory page, check whether the added user is included in the user group.
3.6 CPU QoS
3.6.1 Configuration Suggestion
Disable the CPU QoS feature for database applications.
CPU QoS ensures optimal allocation of computing resources for VMs and prevents resource contention between VMs due to different service requirements. Therefore, CPU QoS can effectively increase resource utilization and reduce costs.
Set CPU QoS values during VM creation based on the planned VM services. Computing capabilities of VMs with different CPU QoS settings vary. The system ensures the VM CPU QoS by setting the minimum computing capability and the resource allocation priorities.
3.6.2 Configuration Method
Perform the following operations to set Properties during the VM creation:
In the CPU Resource Control area, drag the Reserved slider to the far right and select No limit.
3.7 Memory QoS
3.7.1 Configuration Suggestion
To achieve optimal performance, you are advised to disable the memory overcommitment policy and set the reserved memory of the VM to the actually assigned memory size.
3.7.2 Configuration Method
Perform the following operations to set Properties during the VM creation:
In the memory resource control area, drag the Reserved slider to the far right.
Figure 3-7 Memory QoS configuration
Figure 4-1 PVSCSI I/O path (SAN LUNs → Dom0 SCSI back-end driver and block/schedule layers → I/O ring → VM SCSI front-end driver → Oracle ASM data, redo log, and undo log disks)
4 Storage Configuration
4.1 Introduction
• Storage hardware is configured based on the applicable storage per-formance best practice documents provided by vendors.
• The Oracle database log disk and data disk are deployed on different redundant array of independent disks (RAID) groups. Multiple data disks are used for a large amount of data and are distributed across different RAID groups.
• Raw device mapping (RDM) is used as the VM storage. I/O queue depths are adjusted to prevent I/O from being blocked under heavy traffic.
4.2 RDM
4.2.1 Configuration Suggestion
The RDM feature allows VMs to identify Small Computer System Interface (SCSI) disks and issue SCSI commands to hosts, which pass the commands through to storage devices. Therefore, VMs can provide high performance for I/O-sensitive services, such as Oracle RAC and MSCS. Figure 4-1 shows the RDM I/O path.
4.2.2 Configuration Method
The RDM feature is supported by default in the FusionSphere system. This feature can be used by mounting a single logical unit number (LUN) on domain 0 to the VM.
Perform the following operations to configure the data store:
1. Create a single LUN on the IP storage area network (SAN) storage device and map the LUN to the host.
2. Add data stores to the host using RDM.
3. Create a disk on the data store and mount the disk to the VM.
Note: Only one disk can be created on the data store and shared by multiple VMs.
4. Run the following command on the CNA node to view the VM disk configuration:
# virsh dumpxml [vmid]
Information similar to the following is displayed:
<disk type='block' device='disk'>
<driver name='phy'/>
<source dev='/dev/disk/by-id/xxxxxx'/>
<target dev='sda' bus='scsi'/>
</disk>
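The dumpxml check in step 4 can be scripted. This sketch tests a sample disk element for the phy driver; on a live node, the XML would come from the virsh dumpxml output shown above:

```shell
# Sketch: verify that a disk definition uses the physical (phy) driver,
# which indicates an RDM-style passthrough disk. Sample XML assumed;
# live check: virsh dumpxml <vmid> | grep "driver name"
disk_xml="<disk type='block' device='disk'><driver name='phy'/><target dev='sda' bus='scsi'/></disk>"
case "$disk_xml" in
  *"driver name='phy'"*) echo "RDM disk: phy driver in use" ;;
  *)                     echo "not an RDM disk" ;;
esac
```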
5 Network Configuration
Configuration Suggestion
• The network interface card (NIC) performance must match the switch performance, and the network bandwidth must be sufficient. For example, if you use 10GE NICs, the switch must support 10GE bandwidth.
• The VM management network, service network, and heartbeat network of a database cluster are deployed in different network segments. The service network segment and heartbeat network segment should be separated from other services to avoid interference.
• Heartbeat network delay adversely impacts cluster performance. Therefore, the NIC serving the heartbeat network is deployed in passthrough mode to reduce network delay.
Figure 4-2 Selecting storage device
Disclaimers
Copyright © Huawei Technologies Co., Ltd. 2016. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.
Trademarks and Permissions: Huawei logo and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice: The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided “AS IS” without warranties, guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked “reserved” or “undefined.” Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel’s Web site at www.intel.com.
Copyright © 2016 Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
* Other names and brands may be claimed as the property of others. Printed in USA 0316/HDW/MM/PDF Please Recycle