
  • 7/31/2019 Dell Emc Sap Bestpractice

    1/56

    EMC CONFIDENTIAL

    Best Practices for Implementing

    SAP on Dell/EMC

    Part Number 300-003-347 REV A01


    Copyright 2006 EMC Corporation. All rights reserved.

    EMC believes the information in this publication is accurate as of its publication date.

    The information is subject to change without notice.

    THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY

    KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND

    SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY

    OR FITNESS FOR A PARTICULAR PURPOSE.

    Use, copying, and distribution of any EMC software described in this publication

    requires an applicable software license.

    For the most up-to-date listing of EMC product names, see EMC Corporation

    Trademarks on EMC.com.

    All other trademarks used herein are the property of their respective owners.

    THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY

    CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE

    CONTENT IS PROVIDED BY DELL "AS IS", WITHOUT EXPRESS OR IMPLIED

    WARRANTIES OF ANY KIND.

    Trademarks used in this text: Dell, the Dell Logo, PowerEdge, and PowerVault are

    trademarks of Dell Inc.

    Other trademarks and trade names may be used in this document to refer to either the

    entities claiming the marks and names or their products. Dell Inc. disclaims any

    proprietary interest in the trademarks and trade names other than its own.

    Copyright 2006 Dell Inc. All rights reserved. Reproduction in any manner

    whatsoever without the written permission of Dell Inc. is strictly forbidden.

    Best Practices for Implementing SAP on Dell/EMC

    Version 1

    ii Best Practices for Implementing SAP on Dell/EMC Version 1


    Contents

    Preface .............................................................................................................................. ix

    Chapter 1 Introduction .................................................................................................................... 1-1

    Managing complexity in SAP environments..................................................................1-2

    Sizing SAP .....................................................................................................................1-2

    Chapter 2 SAP NetWeaver..............................................................................................................2-1

    SAP architecture.............................................................................................................2-2

    The advent of SAP NetWeaver ...................................................................................... 2-4

    Chapter 3 Dell/EMC Software Solutions for SAP..........................................................................3-1

    Software Overview.........................................................................................................3-2

    EMC SnapView..............................................................................................................3-2

    EMC MirrorView ........................................................................................................... 3-4

    EMC MirrorView/S................................................................................................3-5

    EMC MirrorView/A ...............................................................................................3-5

    EMC Replication Manager Family.................................................................................3-6

    EMC PowerPath .............................................................................................................3-8

    EMC Navisphere ............................................................................................................3-9

    EMC Visual Products...................................................................................................3-10

    SAP Expert Monitor for EMC (SEME) ....................................................................... 3-11

    Chapter 4 Dell/EMC Storage Platform Considerations for SAP .................................................... 4-1

    CX-Series storage...........................................................................................................4-2

    RAID levels and performance ........................................................................................ 4-2

    When to use RAID 5...............................................................................................4-2

    When to use RAID 1/0 ...........................................................................................4-3

    When to use RAID 3...............................................................................................4-3

    When to use RAID 1...............................................................................................4-3

    Cache.............................................................................................................................4-3

    Read cache..............................................................................................................4-4

    Write cache .............................................................................................................4-4

    Fibre Channel drives.......................................................................................................4-7

    ATA drives ..................................................................................................................... 4-7


    ATA drives and RAID levels......................................................................................... 4-8

    RAID group partitioning and ATA drives ............................................................. 4-8

    ATA drives as mirror targets and BCVs ................................................................ 4-8

    Mixing drive types in an array ............................................................................... 4-9

    LUN Distribution ................................................................................................... 4-9

    Minimizing disk contention ................................................................................. 4-11

    Stripes and the stripe element size ....................................................................... 4-11

    RAID 5 stripe optimizations ................................................................................ 4-11

    Number of Drives per RAID group...................................................................... 4-12

    Large spindle counts ............................................................................................ 4-12

    How many disks to use in a storage system......................................................... 4-13

    RAID-level considerations........................................................................................... 4-14

    RAID 5............................................................................................................. 4-14

    RAID 1/0.............................................................................................................. 4-15

    RAID 3 ................................................................................................................. 4-15

    Binding RAID groups across buses and DAEs............................................................ 4-15

    Binding across DAEs ........................................................................................... 4-15

    Binding across Back-End Buses .......................................................................... 4-16

    Binding with DPE Drives..................................................................................... 4-16

    Chapter 5 Database Layout Considerations.................................................................................... 5-1

    Striped metaLUNs.......................................................................................................... 5-2

    Host-based striping ........................................................................................................ 5-2

    Log and BCV placement................................................................................................ 5-2

    Logical volume managers and datafile sizes.................................................................. 5-3

    PowerPath and device queue depth................................................................................ 5-3

    Snaps, snapshots, BCVs, and clones.............................................................................. 5-3

    Data access ............................................................................................................. 5-3

    Resource requirements ........................................................................................... 5-4

    Performance considerations ................................................................................... 5-5

    Appendix A References and Further Reading ................................................................... A-1


    Figures

    Figure 2-1. Two-tier SAP R/3 system configuration......................................................2-3

    Figure 2-2. Three-tier SAP R/3 system configuration....................................................2-3

    Figure 2-3. SAP NetWeaver...........................................................................................2-4

    Figure 3-1. MirrorView/S...............................................................................................3-5

    Figure 3-2. MirrorView/A..............................................................................................3-6

    Figure 3-3. Replication Manager user interface ............................................................. 3-7

    Figure 3-4. EMC Navisphere Analyzer........................................................................3-10

    Figure 3-5. SAP Expert Monitor for EMC array information ......................................3-11

    Figure 3-6. SAP Expert Monitor for EMC logical volume information ......................3-12

    Figure 4-1. Write cache auto-configuration ................................................................... 4-7


    Tables

    Table 3-1. Comparing SnapView performance and economics .....................................3-4

    Table 4-1. Random access performance of 5400 rpm ATA drives relative to 10 K rpm Fibre Channel drives.......................................................................................................4-8

    Table 4-2. Example of RAID group and LUN numbering...........................................4-10

    Table 4-3. System high-efficiency / high-performance drive counts ........................... 4-14

    Table 4-4. RAID Types and Relative Performance in Failure Scenarios.....................4-15


    Preface

    This document describes how to exploit Dell/EMC features and functionality in SAP

    environments. This document is intended to be a guide for making decisions in

    deploying the Dell/EMC family of storage products (EMC CLARiiON storage

    platforms). It covers the major topics in determining storage needs for an SAP rollout.

    As part of an effort to improve and enhance the performance and capabilities of their

    product lines, Dell and EMC from time to time release revisions of their hardware and

    software. Therefore, some functions described in this guide may not be supported by all

    revisions of the software or hardware currently in use. For the most up-to-date

    information on product features, refer to your product release notes.

    Audience

    This solutions guide is intended for SAP administrators, database and system

    administrators, system integrators, storage management personnel, and members of

    EMC Technical Global Services responsible for configuring and managing SAP systems

    on Windows, Linux, and UNIX platforms. The information in this document is based on

    SAP Version 4.0 and later.


    Chapter 1 Introduction

    This chapter presents these topics:

    Managing complexity in SAP environments..................................................................1-2

    Sizing SAP .....................................................................................................................1-2


    Managing complexity in SAP environments

    Today, optimizing large, complex environments is a continuing challenge. Not only are

    requirements changing on a daily basis, but a single proposed change may also

    adversely affect the rest of the environment. To improve IT staff productivity,

    raise application availability, and shorten the time to deploy new servers and associated storage, your SAP infrastructure needs to accommodate nondisruptive changes

    to your systems. Examples include backing up your data, creating copies of your

    instances, managing security, and monitoring overall system health.

    With enterprise infrastructures from Dell and EMC, SAP customers can reduce time,

    cost, and risk in their implementation projects. Administrators can use EMC's

    SnapView and MirrorView to create and store software copies of existing or legacy

    application data in a Dell/EMC storage platform. These copies act as backups of

    production data, as well as practice or rehearsal systems for consolidation,

    upgrade, and migration activities that move volumes of data from separate instances into a

    single instance. Rehearsing your procedures on a copy of the production database helps

    reduce risk, since administrators can estimate projected production downtime with greater certainty.

    Sizing SAP

    Sizing the architecture to support SAP solutions is critical to the success of an SAP

    project.

    The first step in implementing SAP with Dell is to correctly size the necessary platform of

    PowerEdge servers and Dell/EMC storage platforms. To enable maximum performance,

    Dell selects the appropriate configurations for a cost-effective solution.

    Sizing results are generated by qualified Dell and EMC personnel who understand the SAP architecture within the technical architecture of PowerEdge, PowerVault, and Dell/EMC products. Whether you are starting a new project, upgrading to a new version

    of SAP, or expanding your enterprise, Dell and EMC can assist in architecting the

    hardware necessary to support your organization's IT requirements.


    Chapter 2 SAP NetWeaver

    This chapter presents these topics:

    SAP architecture.............................................................................................................2-2

    The advent of SAP NetWeaver ...................................................................................... 2-4


    SAP architecture

    This document uses the SAP NetWeaver-based solution, Enterprise Core Component

    (follow-on to R/3), as the foundation for the information provided. There are many

    options when deploying SAP solutions, which could require additional storage. The

    general requirements are discussed in detail. For implementing optional components, the Competence Centers can assist in architecting the full solution's storage

    requirements through the formal sizing process.

    SAP NetWeaver-based solutions have a flexible two-tier or three-tier architecture:

    Central instance

    Database instance

    Dialog instances, if required

    Front-end GUI

    SAP offers the following types of standard configurations:

    Central system, in which the central instance and the database instance are on the

    same host

    Standalone database system, in which the central instance and the database instance

    are on different hosts

    The database server is the host on which the database is installed. In a two-tier

    configuration, this server can also accommodate the central instance (the SAP instance

    that includes the message server and enqueue server processes). If the central instance is

    installed on a separate application server, the configuration is three-tiered, and the database server is called a standalone database server. Dialog instances are SAP

    instances that include only dialog, batch, spool, or update processes; these run on hosts

    called application servers.
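The instance and host layout described above can be sketched as a small model. This is an illustrative sketch only; the names, role sets, and helper function below are invented for this example and are not part of any SAP tooling:

```python
# Hypothetical model of SAP instance types and configurations (illustrative
# names, not an SAP API). The central instance carries the message and
# enqueue processes; dialog instances carry only work processes.

CENTRAL_INSTANCE = {"message", "enqueue", "dialog", "batch", "spool", "update"}
DIALOG_INSTANCE = {"dialog", "batch", "spool", "update"}   # no message/enqueue

def classify(hosts):
    """Return the SAP configuration type for a mapping of host -> roles."""
    db_host = next(h for h, roles in hosts.items() if "database" in roles)
    if "central_instance" in hosts[db_host]:
        return "central system"            # two-tier: CI and DB share a host
    return "standalone database system"    # three-tier: DB on its own host

two_tier = {"host1": {"database", "central_instance"}}
three_tier = {
    "dbhost": {"database"},
    "apphost": {"central_instance"},
    "apphost2": {"dialog_instance"},
}
```

Classifying `two_tier` yields a central system, while `three_tier` yields a standalone database system, mirroring the two standard configurations above.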

    Each of these instance hosts (servers) requires internal storage to meet the needs of the

    operating system and associated swap area. The majority of the storage requirements are

    typically from whichever server or servers host the database functionality. Other servers

    can require external storage if serving as part of a cluster or using boot-from-SAN technology.

    Figure 2-1 on page 2-3 shows a traditional two-tier (SAP ECC) configuration in which

    the SAP central instance resides on the database server (also called a central system). This configuration is often used for sandbox, development, and small productive

    environments. A three-tier configuration should be considered to support a highly

    available solution.


    [Figure labels: Application Server with Central Instance; Database Server; SAP Instance DVEBMGS00]

    Figure 2-1. Two-tier SAP R/3 system configuration

    Figure 2-2 shows a three-tier distribution of the instances for a large SAP System (one that spans several computers and has a standalone database server).

    Figure 2-2. Three-tier SAP R/3 system configuration

    [Figure labels: SAP GUI front ends; Application Server with Central Instance (SAP Instance DVEBMGS00, DIA/UPD); Application Servers with Two Dialog Instances (SAP Instances D00 + D01, DIA/UPD); Standalone Database Server; Database]


    The configuration of the system is planned in advance of the installation together with

    the Dell|SAP Competence Center or other SAP-knowledgeable resources. The

    configuration is designed using both SAP's QuickSizer and Dell's SAP sizing tools on

    the basis of sizing information that reflects the system workload. Details such as the set

    of applications to be deployed, how intensively they are used, the number and type

    of users, IT practices around backup/restore, disaster recovery strategies, and system availability requirements are necessary to architect a solution that meets each customer's

    unique needs.

    The advent of SAP NetWeaver

    Since 2002, SAP has used the term SAP NetWeaver to refer to an overarching

    technological concept comprising the different SAP technology platforms. This effort is

    in a sense merging the technologies into one platform: a new platform that is the

    integration of people, information, and processes in one solution.

    According to SAP, SAP NetWeaver is a comprehensive integration and application

    platform that works with existing IT infrastructures to enable and manage change. SAP NetWeaver enables organizations to flexibly and rapidly design, build, implement, and

    execute new business strategies and processes as illustrated in Figure 2-3. SAP

    NetWeaver can also drive innovation throughout the organization by combining existing

    systems while maintaining a sustainable cost structure.

    Figure 2-3. SAP NetWeaver


    SAP NetWeaver is the technical foundation of mySAP Business Suite solutions, SAP

    Composite Applications, partner solutions, and customer custom-built applications. It

    also enables Enterprise Services Architecture, SAP's blueprint for service-oriented

    business solutions.

    More specific information on these topics can be found on SAP's Service

    Marketplace at http://service.sap.com.


    Chapter 3 Dell/EMC Software Solutions for SAP

    This chapter presents these topics:

    Software overview..........................................................................................................3-2

    EMC SnapView..............................................................................................................3-2

    EMC MirrorView ........................................................................................................... 3-4

    EMC Replication Manager Family.................................................................................3-6

    EMC PowerPath .............................................................................................................3-8

    EMC Navisphere ............................................................................................................3-9

    EMC Visual Products...................................................................................................3-10

    SAP Expert Monitor for EMC (SEME) ....................................................................... 3-11


    Software overview

    The family of Dell/EMC storage platforms (EMC CLARiiON) consists of high-performance, fully redundant, highly available storage platforms that provide nondisruptive component replacement and code upgrades. Dell/EMC storage platforms

    offer high levels of performance, data integrity, reliability, and availability. In addition to the hardware array, the following software products support SAP environments:

    EMC SnapView: EMC SnapView is a business continuance solution that allows

    customers to use special devices to create mirror images or snapshots of source

    devices. These business continuance volumes (BCVs), clones, or snapshots can be

    attached to the same or different hosts when they are not established with their

    source devices. The source devices remain online for regular I/O operation while the mirrors are created and mounted. SnapView can be used within a single storage

    array.

    EMC MirrorView: EMC MirrorView is a business continuance solution that

    allows specific source volumes to be mirrored to like remote target storage platforms. MirrorView is used across two storage arrays.

    EMC Navisphere: The Navisphere Management Suite is a set of integrated

    software tools that allows customers to manage, provision, monitor, and configure

    systems, as well as control all platform replication applications from an easy-to-use,

    secure, web-based management console. Navisphere-managed array applications

    include Navisphere Analyzer, SnapView, MirrorView, and SAN Copy.

    EMC Visual Products: VisualSAN and VisualSRM software integrate with

    Dell/EMC storage platforms to provide network, configuration, and performance

    management for mid-tier SANs.

    EMC Replication Manager Family: EMC Replication Manager makes it easy to

    create point-in-time replicas of databases and/or file systems residing on your

    existing storage arrays. Replicas can be stored on clones or snaps.

    SAP Expert Monitor for EMC (SEME): SEME is an SAP-owned and

    supported add-on that allows monitoring of the storage array's performance

    statistics from within a Basis transaction.

    The latest version availability of each of these solutions is provided in the latest EMC

    Support Matrix (ESM) at http://www.EMC.com/interoperability.

    EMC SnapView

    SnapView is a storage-system-based software application that allows you to create a

    copy of a LUN by using either clones or snapshots. A clone, also referred to as a

    business continuance volume (BCV), is an actual copy of a LUN and takes time to

    create, depending on the size of the source LUN. A snapshot is a virtual point-in-time

    copy of a LUN and takes only seconds to create.


    SnapView has the following important benefits:

    It allows full access to production data with modest to no impact on performance

    and without the risk of damaging the original data.

    For decision support or revision testing, it provides a coherent, readable and writable

    copy of real production data.

    For backup, it practically eliminates the time that production data spends offline or

    in hot backup mode, and it offloads the backup overhead from the production host to

    another host.

    A snapshot is a virtual LUN that allows a second host to view a point-in-time copy of a

    source LUN. You determine the point in time when you start a SnapView session. The

    session keeps track of how the source LUN looks at a particular point in time.

    SnapView also allows you to instantly restore a session's point-in-time data back to the

    source LUN, if the source LUN becomes corrupt or if a session's point-in-time

    data is desired as the source. You can do this by using SnapView's rollback feature.

    The advantage of the snapshot is that it is pointer-based and does not require the same

    capacity as the source data; it typically requires 20 percent of the source's capacity. The

    disadvantage is that it still places load on the production data, since it points to the source.

    The advantage of the clone is that it is independent of the production area and has its

    own dedicated space. Therefore, production is not interrupted by a backup of this area;

    the only time production is interrupted is when a clone needs to be rebuilt. The

    disadvantage is that it requires the same disk capacity as the source.
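As a rough illustration of the pointer-based behavior described above, here is a minimal copy-on-first-write sketch in Python. The class and method names are invented for this example; this is not the SnapView implementation:

```python
# Illustrative copy-on-first-write snapshot: a block's original contents are
# saved only the first time it changes after the session starts, which is why
# a snapshot needs only a fraction of the source capacity, while a clone is a
# full physical copy.

class SourceLUN:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class SnapSession:
    """Point-in-time view of a source LUN."""
    def __init__(self, source):
        self.source = source
        self.saved = {}                     # block index -> original contents

    def write_source(self, index, data):
        if index not in self.saved:         # first change since session start
            self.saved[index] = self.source.blocks[index]
        self.source.blocks[index] = data    # production write proceeds

    def read(self, index):
        # Reads of unchanged blocks point back to the source -- this is the
        # extra load a snapshot places on the production data.
        return self.saved.get(index, self.source.blocks[index])

lun = SourceLUN(["a", "b", "c", "d"])
snap = SnapSession(lun)
snap.write_source(1, "b-new")               # production keeps changing
```

After the write, `snap.read(1)` still returns the original `"b"` while the source holds `"b-new"`, and only one block of extra capacity has been consumed.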


    In general, use the guidelines in Table 3-1 when choosing whether to use clones/BCVs

    or snapshots.

    Table 3-1. Comparing SnapView performance and economics

                    SnapView Snap                      SnapView Clone/BCV

    Performance     Supports moderate I/O workloads    Supports high I/O workloads
                    and functionality requirements     and availability needs

    Economics       Space-saving virtual copy;         Full physical copy;
                    requires a fraction of the         requires 100% of the capacity
                    capacity of the source volume      of the source volume

    In SAP environments, SnapView allows you to refresh test instances with production

    data in minutes rather than days. You also can use SnapView to perform a split mirror

    backup (with or without BRBACKUP) that minimizes impact on the production

    database.

    Chapter 4, Dell/EMC Storage Platform Considerations for SAP, provides more

    detailed information on clones and snapshots.

    EMC MirrorView

    EMC MirrorView is a software application that maintains a copy image of a logical unit

    (LUN) at separate locations in order to provide for disaster recovery, that is, to let one

    image continue if a serious accident or natural disaster disables the other. MirrorView is

    typically used for creating a Disaster Recovery site of the SAP production environment.

    The production image (the one mirrored) is called the primary image; the copy image is called the secondary image. MirrorView supports up to two remote images, but since

    you operate on one image at a time, the examples in this manual show a single image.

    Each image resides on a separate storage system. The primary image receives I/O from a

    host called the production host; the secondary image is maintained by a separate storage

    system that can be a stand-alone storage array or connected to its own computer system.

    The same management station, which can promote the secondary image if the primary

    image becomes inaccessible, manages both storage systems.

    In SAP environments, MirrorView also allows you to refresh test instances with

    production data in minutes rather than days. You can use MirrorView to perform a split

    mirror backup (with or without BRBACKUP) that minimizes impact on the production

    database. The two implementation options for MirrorView, which depend on distance and bandwidth requirements, are:

    MirrorView/Synchronous

    MirrorView/Asynchronous


    EMC MirrorView/S

    MirrorView/S is primarily used in campus environments. It maintains a real-time mirror

    image of the production data at a remote site in mirrored volumes. MirrorView/S

    provides a consistent real-time view of the production data at the target site at all times

    as illustrated in Figure 3-1.

    Data on both the source and target volumes is always fully synchronized at the

    completion of an I/O sequence via a first-in-first-out (FIFO) queue model. All data

    movement is at the block level, with synchronized mirroring.

Figure 3-1. MirrorView/S (synchronous mirroring from a source array to a target array over a limited distance)

    The sequence of operations follows:

    1. An I/O write is received from the server into the source array.

    2. The I/O is transmitted to the target array using FLARE Consistency Assist.

    3. The target array sends a receipt acknowledgment back to the source array.

    4. An acknowledgment is presented to the server.
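The four-step sequence above can be sketched as a toy model (illustrative Python, not an EMC API): the host receives its acknowledgment only after the target array confirms receipt, which is what keeps both images fully synchronized.

```python
# Hypothetical sketch of the MirrorView/S write path: the host write is
# acknowledged only after the target confirms receipt, so the two images
# stay in lockstep. All names here are illustrative, not EMC interfaces.

class Array:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def receive(self, lba, data):
        self.blocks[lba] = data
        return "ack"                      # step 3: receipt acknowledgment

def synchronous_write(source, target, lba, data):
    source.blocks[lba] = data             # step 1: write lands on the source array
    ack = target.receive(lba, data)       # step 2: I/O transmitted to the target
    assert ack == "ack"                   # step 3: target acknowledges receipt
    return "ack-to-host"                  # step 4: only now is the host acknowledged

src, tgt = Array("source"), Array("target")
synchronous_write(src, tgt, lba=100, data=b"payload")
print(src.blocks == tgt.blocks)           # True -- images identical after every ack
```

The cost of this lockstep is that host response time includes the round trip to the target, which is why MirrorView/S is positioned for limited (campus) distances.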

    EMC MirrorView/A

    MirrorView/A is an asynchronous replication product based on delta set technology. It

    periodically captures the changes on the source LUN(s) in a delta set; the delta set is

    then applied to the target LUN(s) at the end of every period.

    MirrorView/A can replicate over extended distances. You can specify the duration of the

update period (from minutes to hours to days) to meet your required RPO. Because

MirrorView/A runs on the Dell/EMC storage platform, it does not use any host CPU cycles for replication. Unlike host-based asynchronous remote-replication solutions,

    MirrorView/A is independent of host operating systems, applications, and file systems.

Depending on recovery-point requirements and workload characteristics, MirrorView/A can absorb the peak-load bandwidth requirement for remote replication by buffering and

    replicating during times of inactivity and can operate on much lower bandwidth than that

    of the peak load.
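The buffering argument can be made concrete with a back-of-envelope calculation (all numbers here are hypothetical, for illustration only): an asynchronous link needs only enough bandwidth to move each delta set within its cycle, not enough to carry the write burst peak.

```python
# Hypothetical numbers illustrating why asynchronous replication can run on
# far less bandwidth than the peak write load: only the unique changed data
# per cycle has to cross the link, averaged over the whole cycle.

peak_write_rate_mbps = 200          # short write burst seen at the source LUNs
changed_mb_per_cycle = 3_000        # unique changed data captured in one delta set
cycle_seconds = 600                 # 10-minute update period (tied to the RPO)

required_async_mbps = changed_mb_per_cycle / cycle_seconds
print(required_async_mbps)          # 5.0 MB/s -- far below the 200 MB/s peak
```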


    MirrorView/A provides a consistent disk-based replica for fast restart at the remote

    site by creating a point-in-time gold copy of the secondary LUNs on the target system

    at the beginning of each cycle before applying the changes as shown in Figure 3-2. This

    provides a consistent restartable copy of source data on the target (at most, two cycle

    times behind the data on the source under nominal conditions) at all times.

Figure 3-2. MirrorView/A (a delta set is captured at the source and applied to the target over an extended distance, with a gold copy protecting the target)

    The sequence of operations is as follows:

    1. An I/O write is received from the server into the source array.

    2. The source array sends a receipt acknowledgment to the production server.

3. A delta set is created, and changes are tracked during the MirrorView/A replication cycle, using FLARE Consistency Assist.

4. A gold copy is created at the target site to ensure that a crash-recoverable copy is available at all times in case of link failure during delta set transport.

5. The delta set is transported and applied to the target disk, the gold copy is removed, and the delta set is cleared for the next cycle.
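Steps 3 through 5 can be sketched as a toy delta-set cycle (illustrative Python; the data structures and names are assumptions for clarity, not the MirrorView/A implementation):

```python
# Toy model of one asynchronous replication cycle: changes are tracked in a
# delta set, a gold copy protects the target while the delta set is applied,
# and only the final version of each changed block crosses the link.

def run_cycle(source, target, writes):
    delta_set = {}
    for lba, data in writes:          # step 3: track changes during the cycle
        source[lba] = data
        delta_set[lba] = data         # rewrites of the same block coalesce
    gold_copy = dict(target)          # step 4: crash-recoverable copy at target
    try:
        target.update(delta_set)      # step 5: transport and apply the delta set
    except Exception:
        target.clear()
        target.update(gold_copy)      # on link failure, fall back to the gold copy
    return target

source, target = {}, {}
run_cycle(source, target, [(0, "a"), (1, "b"), (0, "c")])
print(target)                         # {0: 'c', 1: 'b'} -- only final versions ship
```

Note how the two writes to block 0 collapse into one transfer; this coalescing is part of why the bandwidth requirement tracks changed data, not total write traffic.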

    EMC Replication Manager Family

    The Replication Manager family of software applications simplifies management and

    automation of local and remote replication technologies. The Replication Manager

    family includes two distinct offerings:

    Replication Manager/Local is a replication management solution for Dell/EMC and

    other storage platforms using EMC TimeFinder, EMC SnapView, and EMC SAN

Copy for automated tiered storage replication.

Replication Manager/SE is a replication management solution leveraging Microsoft's VSS technology for use with SnapView and SAN Copy in Windows

    environments.


    Replication Manager includes a set of specific benefits for improving availability and

    data protection. Some of those benefits include:

Automation of storage management tasks: EMC Replication Manager helps users automate tasks such as replication, mounting replicas to alternate hosts, storage management, and storage associations.

Downtime reduction techniques: Traditional restore requires the data to be read

    from linear tape to disk. With EMC disk-based replication technologies, the

    recovery and rollback can begin immediately, without having to wait until the

    restore is complete. Also, when restoring from a disk copy, the data can be tested

    first to ensure that you are restoring data that does not include the logical error. This

    is achieved by mounting the replica on an alternate host before performing the

restore. That way, you do not need to restore more than once because the first restore was taken from an incorrect backup.

Alternate uses for replicated data: Scheduled and on-demand replicas have other uses in addition to the most obvious data protection. Replicas can also assist with better management of onsite resources.

Ease of use: Replication Manager provides many features that make the product

    easy to use for IT professionals with little to no storage knowledge. Users do not

    have to be storage wizards to understand how to use this product.

    Figure 3-3. Replication Manager user interface

    As well as providing business continuance to local sites, Replication Manager uses

    SnapView to create and refresh copies of production data. During the implementation

    phase, using SnapView can:

    Reduce the risks of downtime and data corruption.


    Leverage scarce resources.

    Decrease the amount of time it takes to get your systems up, running, and stabilized.

Use second instances during SAP upgrades, and continue to use these instances beyond the upgrade, such as when adding new modules and functionality, testing new

    applications that snap-on or interface with SAP, and when new sites go live on the new

    version or new modules of SAP.

    As instances grow, some operations that users and administrators performed on separate

    instances impact performance of a consolidated instance. With local copies of real data

    and not just contrived test cases, Replication Manager allows trying out new products,

    functionality, and business processes in a controlled environment using automation.

    Testing is both iterative and destructive (that is, test the process until failure, and then

    repeat the process again and again). Replication Manager can greatly reduce the time to

    reestablish the test environment and refresh the entire test cycle.

    EMC PowerPath

    High availability and high performance are inherent requirements for mission-critical

    SAP applications. PowerPath provides consistent and improved service levels for large

and mission-critical database environments by increasing the server's ability to access

data on the storage array. PowerPath moves I/O workloads across multiple channels to ensure the fastest possible I/O speed through dynamic load balancing. If many I/O requests

    on one path cause an imbalance, PowerPath balances the load of requests across the

    paths to optimize performance.

    PowerPath understands the nature of I/O requests and automatically determines optimum

ways of distributing them. PowerPath allows for prioritizing storage device access. Device Priority allows one device to take priority over another. In this

    case, channels with low queues support the high-priority devices while the channels with

    the long queues support the low-priority devices.

    PowerPath offers policy-based dynamic path management that accelerates information

    access and provides high availability. In the rare instance of a path failure, PowerPath

    reissues I/O to an alternate channel maintaining data availability and ensuring

    optimization of information access. For instance, if a cable is mistakenly dislodged,

    PowerPath Auto Detect takes all existing I/O that was going down that particular path

and reroutes it to another active path. Once the cable is reattached, PowerPath's Auto Restore feature automatically restores path access, permitting data flow down the path

    again with no application interruption.

    Since PowerPath provides multipathing and dynamic path management, at a minimum

    the database hosts in SAP implementations should run PowerPath.
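The two PowerPath behaviors described above, dynamic load balancing and path failover, can be sketched as follows (a simplified model assuming a least-queue-depth policy; the real product offers several selectable policies):

```python
# Simplified sketch of multipath dispatch: each I/O goes down the least-busy
# live path, and a failed path is simply skipped so I/O continues on the
# survivors. Illustrative only -- not the PowerPath implementation.

class Path:
    def __init__(self, name):
        self.name, self.alive, self.queue_depth = name, True, 0

def dispatch(io, paths):
    live = [p for p in paths if p.alive]
    if not live:
        raise RuntimeError("all paths failed")
    best = min(live, key=lambda p: p.queue_depth)   # least-queued live path
    best.queue_depth += 1
    return best.name

paths = [Path("hba0"), Path("hba1")]
print(dispatch("io1", paths))   # hba0 (both queues empty; first wins the tie)
print(dispatch("io2", paths))   # hba1 (hba0 now has a queued I/O)
paths[1].alive = False          # cable pulled: hba1 fails
print(dispatch("io3", paths))   # hba0 -- I/O rerouted with no interruption
```

Restoring `paths[1].alive = True` models Auto Restore: the path simply rejoins the candidate set on the next dispatch.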


    EMC Navisphere

    The Navisphere Management Suite consists of three software offerings: Workgroup,

    Departmental, and Enterprise. Navisphere Management Suite discovers, monitors, and

    configures all Dell/EMC storage arrays via a single, easy-to-use management interface.

    It includes agent software for managing legacy arrays, centralized event monitoring, and

transferring host information to the array for display in Navisphere Manager. The command line interface (CLI) can be used to script and automate common storage

    management tasks. LUN masking is also provided to connect hosts properly into a SAN.

    Navisphere is web-based and allows for secure management of Dell/EMC storage

    platforms from anywhere, anytime. Navisphere is complemented by other EMC

    ControlCenter storage management products that provide storage network,

    performance, and resource management.

    Navisphere consists of the following products:

Manager: Allows graphical user interface (GUI) management and configuration of single or multiple storage platforms and is also the center for management and configuration of system-based access and protection software, including Access

    Logix, SnapView, and MirrorView applications.

Agent: Provides the management communication path to the system and enables

    CLI access.

Analyzer: Provides performance analysis for Dell/EMC storage arrays and components.

    Navisphere Analyzer provides extensive access to graphs and charts, enabling users to

    evaluate and fine-tune their storage performance. More than 60 different performance

    metrics are collected from disks, storage processors (SPs), LUNs, cache, and SnapView

    snapshot sessions. Navisphere Analyzer provides chart information at the summary and

    detail level, so you can drill down into the collected data at the level you choose as

    shown in Figure 3-4 on page 3-10.


    Figure 3-4. EMC Navisphere Analyzer

    Navisphere can be launched on its own or from the ControlCenter Console.

Additionally, Navisphere manages all array-based applications, such as Access Logix, MirrorView, SnapView, and SAN Copy.

    Navisphere runs on the array, which ensures high availability. High availability means

secure, fail-safe access to the storage array. For example, in the case of a storage-processor outage, failover takes over and maintains storage array uptime. Since the software is installed on the array, a workstation CPU failure does not affect storage access.

    EMC Visual Products

VisualSAN delivers an end-to-end view of all devices across the SAN, plus a storage-to-host provisioning wizard, to reduce management complexity and cost. VisualSRM is a

    web-based solution that discovers, reports, trends, and automates your storage

    environment with intelligent actions for file-level policies on mission-critical

    applications, from the host perspective.

With EMC VisualSAN and VisualSRM, administrators have a single tool set to apply to information management across the multiple platforms employed in their enterprise.

    Focusing on this single tool set decreases learning curves, increases skill levels by

    applying the tools more often, and results in fewer human errors and increased

    application availability.


    SAP Expert Monitor for EMC (SEME)

With the SAP Expert Monitor for EMC (SEME) plug-in to CCMS, Basis administrators

    can view configuration and monitoring information such as file systems, database

    objects, logical devices, physical components, and I/O rates within Dell/EMC storage

    platforms. The SEME is called from the standard CCMS interface using the RZ20

    transaction within SAP Basis.

    The SEME uses EMC Solutions Enabler and EMC Open Storage Resource Management

    (SRM) APIs to retrieve information regarding physical and logical storage components

    for file and file system mapping, data object-resolve functions, database mapping, and

    logical volume mapping. The SEME supports all SAP-supported RDBMS platforms for

    open systems only. The major benefits of the SEME include:

    Allowing Basis administrators to monitor multiple hosts and storage landscapes

    within the mySAP implementation.

Obtaining a quick overview as well as an in-depth analysis of the storage configuration, performance, and layout through use of a Basis transaction.

Simplifying the resolution of problems, should they occur, with SAP Active Global Support and EMC Customer Service.

    The SEME lets you view storage subsystem component configuration (as in Figure 3-5)

and performance information, and then sort this information depending on what the

    administrator requires. For example, the SEME allows sorting by data filename or

    searching by I/O load information as in Figure 3-6 on page 3-12. For more information,

    navigate to the /SEME alias at the SAP Service Marketplace.

    Figure 3-5. SAP Expert Monitor for EMC array information


    Figure 3-6. SAP Expert Monitor for EMC logical volume information

Chapter 4 Dell/EMC Storage Platform Considerations for SAP

    This chapter presents these topics:

CX-Series storage (4-2)

RAID levels and performance (4-2)

Cache (4-3)

Fibre Channel drives (4-7)

ATA drives (4-7)

ATA drives and RAID levels (4-8)

RAID-level considerations (4-14)

Binding RAID groups across buses and DAEs (4-15)


    CX-Series storage

    The family of Dell/EMC storage platforms consists of three older family members, the

    CX200, CX400, and the CX600, and three newer members, the CX300, CX500, and

    CX700.

    The CX700 has a storage processor enclosure (SPE) design. The CX700 offers a faster

    chipset and memory subsystem as compared to the CX500, as well as double the disk

    bandwidth (it has four redundant disk buses on the back end). Bandwidth and IOPS

performance of the CX700 is greater, disk for disk, than that of any other Dell/EMC storage

    array. The CX700 represents the best choice for the highest performance and greatest

    scalability.

    The CX500 uses a small form-factor DPE that includes dual storage processors and 15

    drives in 3 U of rack space. The CX500 SP offers dual CPU (versus the single CPU

    CX300 SP) and a chipset faster than that in the CX300.

In steady random I/O environments, the CX500 performs slightly below the CX700 up to its maximum complement of 120 drives. The CX500 has a smaller write cache than the CX700, and thus does not absorb as large a burst of host writes. The CX500 is a well-balanced performer. The CX500 provides much higher bandwidth than the CX300, offering near wire speed with large, sequential I/O and Fibre Channel drives.

    The CX300 shares similar hardware with the older CX400, but it has half the number of

disk ports. It performs as well as the CX400 in random environments up to its limit of 60 drives.

    However, due to its single back-end disk bus, its bandwidth performance is modest.

    RAID levels and performance

Dell/EMC storage arrays often use RAID 5 for data protection and performance. RAID 1/0 is used when appropriate, but the decision to use RAID 1/0 does not always depend

    on performance.

    When to use RAID 5

    RAID 5 is favored for messaging, data mining, medium-performance media serving, and

    RDBMS implementations in which the DBA is effectively using read-ahead and write-

    behind. If the host OS and HBA are capable of greater than 64 KB transfers, RAID 5 is a

    compelling choice.

    These application types are ideal for RAID 5:

    Random workloads with modest IOPS-per-gigabyte requirements

    High-performance random I/O where writes represent 30 percent or less of the

    workload

    A DSS database in which access is sequential (performing statistical analysis on

    sales records)


    Any RDBMS tablespace where record size is larger than 64 KB and access is

    random (personnel records with binary content, such as photographs)

    RDBMS log activity

    Messaging applications

    Video/Media

    When to use RAID 1/0

RAID 1/0 can outperform RAID 5 in workloads that use very small, random, and write-intensive I/O, where more than 30 percent of the workload is random writes. Some examples of random, small I/O workloads are:

    High-transaction-rate OLTP

    Large messaging installations

    Real-time data/brokerage records

    RDBMS data tables containing small records that are updated frequently (account

    balances)

    If random write performance is the paramount concern, RAID 1/0 should be used for

    these applications.
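The RAID 5 versus RAID 1/0 guidance above can be expressed as a rough heuristic (this encodes only the rules of thumb in this section, with assumed thresholds; it is a sketch, not an official sizing tool):

```python
# Rule-of-thumb RAID selector based on this section's guidance: parity RAID 5
# for large/sequential or read-mostly work; RAID 1/0 once small random writes
# exceed roughly 30 percent of the workload. Thresholds are the ones named in
# the text; everything else about this function is an illustrative assumption.

def suggest_raid_level(random_write_pct, io_size_kb, sequential):
    if sequential and io_size_kb >= 64:
        return "RAID 5"            # read-ahead/write-behind friendly
    if random_write_pct > 30 and io_size_kb < 64:
        return "RAID 1/0"          # small, random, write-intensive
    return "RAID 5"

print(suggest_raid_level(10, 128, sequential=True))    # RAID 5 (DSS-style scans)
print(suggest_raid_level(50, 4, sequential=False))     # RAID 1/0 (hot OLTP tables)
```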

    When to use RAID 3

    RAID 3 is a specialty solution. Only five-disk and nine-disk RAID group sizes are valid

    for RAID 3. The target profile for RAID 3 is large and/or sequential access.

    Since release 13, RAID 3 LUNs can use write cache. The restrictions previously made

for RAID 3 (single writer, perfect alignment with the RAID stripe) are no longer

    necessary, as the write cache aligns the data. RAID 3 is now more effective with

    multiple writing streams, smaller I/O sizes (such as 64 KB) and misaligned data.

    RAID 3 is particularly effective with ATA drives, bringing their bandwidth performance

    up to Fibre Channel levels.

    When to use RAID 1

    With the advent of 1+1 RAID 1/0 sets in release 16, there is no good reason to use RAID

    1. RAID 1/0 1+1 sets are expandable, whereas RAID 1 sets are not.

    Cache

In addition to choosing what RAID level to use, the array's cache must also be configured. The Dell/EMC storage array's cache is very flexible in how it can be configured.


    Read cache

    For systems with modest prefetch requirements (about 80 percent of installed systems),

    50 MB to 100 MB of read cache per SP is sufficient.

    For heavy sequential read environments (requests greater than 64 KB and sequential

reads from many LUNs expected over 300 MB/s), use up to 250 MB of read cache. For extremely heavy sequential read environments (120 or more drives reading in parallel),

    up to 1 GB of read cache can be effectively used by the CX600.

    Write cache

    Set the read cache as just explained, and then allocate the remaining memory to write

    cache.

    Caches on or off

    Most workloads benefit from both read and write cache; the default for both is on.

    To save a very small amount of service time (a fraction of a millisecond to check the

    caches when a read arrives), turn off read caching on LUNs that do not benefit from it.

    For example, LUNs with very random read environments (no sequential access) do not

    benefit from read cache. Use Navisphere CLI scripts to turn on read cache for LUNs

    when preparing to perform backups.

    Write caching is beneficial in all but the most extreme write environments. Deactivation

    of write cache is best done using the per-LUN write-aside setting discussed later in this

    section.

    Page size

    In cases where I/O size is very stable, you gain some benefit by setting the cache page

size to the request size seen by the storage system: the file system block size or, if raw

    partitions are used, application block size.

    In environments with varying I/O sizes, the 8 KB page size is optimal.

    Be careful when applying a 2 KB cache page size. Sequential writes to RAID groups

    with misaligned stripes and RAID 5 groups with more than eight drives may be affected.

    The HA Cache Vault option and write cache behavior

The HA Cache Vault option, found on the Cache page of the storage-system properties dialog box, is on (selected) by default. The default is for classic cache vault behavior as outlined in the CLARiiON Fibre Channel Fundamentals (on EMC's Powerlink support site).


    Several failures cause the write cache to disable and dump its contents to the vault. One

    type of failure is that of a vault drive. If the user clears the HA Cache Vault selection,

    then a vault disk failure does not cause write cache to disable. Since a disabled write

    cache significantly impacts host I/O, it is desirable to keep the write cache active as

    much as possible.

Clearing this selection exposes the user to the possibility of data loss in a triple-fault situation: if a drive fails, then power is lost, and then another drive fails during the

    dump, it is not possible to dump the cache to the vault. The user must make the decision

    based on the relative merit versus risk.

    Prefetch settings

    The default setting for prefetch (Variable, with segment and multiplier set to 4) causes

    efficient cache behavior for most workloads.

You should consider increasing the prefetch multiplier when both of the following conditions apply:

    I/O request sizes are small (less than 32 KB).

    Heavy sequential reads are expected.

    Decrease the prefetch multiplier when:

    Host sequentiality is broken up due to use of a striped volume on the host side.

    I/O sizes close to that of the maximum prefetch value are used.

    Navisphere Analyzer shows that prefetches are not being used.
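The interaction between request size and the prefetch multiplier can be illustrated with a toy calculation (assumed semantics: the amount prefetched scales with the observed request size; the cap value here is a hypothetical parameter, not the array's actual maximum prefetch setting):

```python
# Toy model of variable prefetch: the prefetch amount scales with request
# size via the multiplier, which is why small sequential reads may merit a
# larger multiplier, and why very large I/O makes prefetch moot. The cap is
# an illustrative stand-in for the maximum prefetch value.

def variable_prefetch_kb(request_kb, multiplier=4, max_prefetch_kb=512):
    return min(request_kb * multiplier, max_prefetch_kb)

print(variable_prefetch_kb(8))                # 32  -- small reads fetch little ahead
print(variable_prefetch_kb(8, multiplier=16)) # 128 -- a larger multiplier helps
print(variable_prefetch_kb(256))              # 512 -- large I/O hits the cap anyway
```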

    High and low watermarks and flushing

The Dell/EMC storage platform design has two global settings called watermarks (high and low) that work together to manage flushing. For most workloads, the defaults

    afford optimal behavior:

FC Series: High watermark of 60 percent and a low watermark of 40 percent.

CX Series: High watermark of 80 percent and a low watermark of 60 percent.

    Increase the high watermark only if Navisphere Analyzer data indicates an absence of

    forced flushes during a typical period of high utilization. Decrease the high watermark if

write bursts are causing enough forced flushes to impact host write workloads such that applications are affected. This reserves more cache pages to absorb bursts.

    The low watermark should be 20 percent lower than the high watermark.
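A minimal model of the watermark behavior, using the CX Series defaults (assumed semantics for illustration; the real flushing logic is more nuanced):

```python
# Sketch of watermark-driven flushing: above the high watermark the cache
# drains dirty pages toward the low watermark; a write arriving into a full
# cache is a forced flush, which the host feels as increased service time.

def cache_state(dirty_pct, high=80, low=60):
    if dirty_pct >= 100:
        return "forced flush"        # cache full: host writes stall on the drain
    if dirty_pct >= high:
        return "watermark flushing"  # drain toward the low watermark
    return "idle"

print(cache_state(50))                   # idle -- CX defaults: high 80, low 60
print(cache_state(85))                   # watermark flushing
print(cache_state(100))                  # forced flush
print(cache_state(65, high=60, low=40))  # FC Series defaults trigger earlier
```

Lowering the high watermark trades steady-state cache headroom for burst absorption, which is exactly the tuning tradeoff described above.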

    Write-aside size

    The write-aside size is a per-LUN setting. This setting specifies the largest write request

    that is cached. Larger I/O automatically bypasses write cache.


    Write-aside helps keep large I/O from taking up write cache mirroring bandwidth, and

    makes it possible for the system to exceed the write cache mirroring maximum

    bandwidth. The cost is that I/O that bypasses cache has a longer host response time than

    cached I/O.

    To exceed the write cache mirroring bandwidth, there must be sufficient drives to absorb

    the load. Furthermore, if parity RAID (RAID 5 or RAID 3) is used, ensure that:

    I/O is equal to or a multiple of the LUN stripe size and

    I/O is aligned to the stripe and

    The LUN stripe element size is 128 blocks or less.

    These conditions for parity RAID are crucial and cannot be stressed enough. Getting I/O

    to align for effective write-aside can be difficult. If in doubt, use write cache.

    The tradeoff for doing write-aside is as follows:

    The data written this way is not available in cache for a subsequent read.

    The response times for writes are longer than for cached writes.

    For CX Series users, it is suggested to change the write-aside size to 2048 blocks unless

    there is a clear need to use write-aside.
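The write-aside decision and the parity-RAID conditions listed earlier can be expressed as simple predicates (a sketch assuming 512-byte blocks; these functions are illustrative restatements of the rules in this section, not an EMC API):

```python
# Sketch of the per-LUN write-aside behavior: writes at or below the
# write-aside size go to write cache, larger writes bypass it. The second
# predicate checks the three parity-RAID conditions named in the text.
# Stripe/element sizes in the examples are hypothetical.

BLOCK = 512  # bytes per block (assumed)

def is_cached(io_blocks, write_aside_blocks=2048):
    # 2048 blocks = 1 MB: the suggested CX Series write-aside setting.
    return io_blocks <= write_aside_blocks

def write_aside_ok_for_parity(io_blocks, offset_blocks,
                              stripe_blocks, element_blocks):
    return (io_blocks % stripe_blocks == 0 and      # multiple of the stripe size
            offset_blocks % stripe_blocks == 0 and  # aligned to the stripe
            element_blocks <= 128)                  # element size <= 128 blocks

print(is_cached(2048))                          # True  -- 1 MB write is cached
print(is_cached(4096))                          # False -- 2 MB write bypasses cache
print(write_aside_ok_for_parity(2048, 0, 1024, 128))    # True  -- aligned multiple
print(write_aside_ok_for_parity(2048, 512, 1024, 128))  # False -- misaligned
```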

    The Navisphere CLI getlun command displays the write-aside size for a LUN.

    To change the write-aside size, use the Navisphere CLI chglun command with the -w

    option. In the following example, the -l 22 flag indicates the action is on LUN 22, and

    the write-aside is being adjusted so that I/Os of up to 1 MB are cached:

navicli -h ip_address chglun -l 22 -w 2048

    Note that if writes bypass the write cache, the host cannot get read hits from those

    requests. An interesting example is an RDBMS TEMP table. The TEMP data is written

    and then reread; if the writes bypass the cache, they take longer than if cached. Also,

    subsequent rereads have to go to disk (no possibility of a cache hit). Using requests that

    are small enough to ensure caching is best: writes hit the write cache and thus return

more quickly, and the reread can be serviced from data still in the write cache, much faster than going to disk. Pay attention to host file system buffering, which might

    coalesce TEMP writes into large requests.

    Balancing cache usage between SPs

    Lastly, ensure that the write cache usage is balanced between SPs. The amount of cache

    each SP is allocated is adjusted so that if more write I/O is coming through an SP, it gets

    more than half of the write cache as illustrated in Figure 4-1 on page 4-7. This

    adjustment is done every 10 minutes.


Figure 4-1. Write cache auto-configuration (each SP holds its local write cache, a mirror of its peer's write cache, and an unmirrored local read cache; at time 0 the allocation is balanced between SP A and SP B, and at time 1 the allocation has been increased for SP A)

    Balance the storage system by ensuring that each SP owns an equal number of LUNs

    using the write cache.
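The periodic adjustment can be modeled as a proportional split (an assumption for illustration; the source states only that a busier SP gets more than half of the write cache, with the adjustment made every 10 minutes):

```python
# Hypothetical model of write cache rebalancing between storage processors:
# each SP's share is assumed proportional to its recent write traffic.
# All numbers are illustrative.

def rebalance(total_cache_mb, writes_sp_a, writes_sp_b):
    total = writes_sp_a + writes_sp_b
    share_a = round(total_cache_mb * writes_sp_a / total)
    return share_a, total_cache_mb - share_a

print(rebalance(2048, 100, 100))   # (1024, 1024) -- balanced load, even split
print(rebalance(2048, 300, 100))   # (1536, 512)  -- SP A busier, gets more cache
```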

    Fibre Channel drives

    Fibre Channel drives are enterprise-class devices. They feature on-disk firmware capable

    of queue reordering, buffering, and advanced seek optimizations. Many refinements

    allow them to perform at levels not achievable by desktop-drive standards. Rotational

speed has a great effect on Fibre Channel drive performance, as these drives can

    leverage increased spindle speeds very effectively.

    For random read performance, a faster rotational speed means less latency, as the cache

    cannot buffer random reads. For random writes, which are absorbed by the SP write

    cache, the effect of rotational speed is seen in cache flushing. Faster drives can flush the

    cache faster than slower drives. A cache that flushes faster allows a higher rate of I/O to

    the storage system before watermark and forced flushing cause service times to increase.

    As a result, the 15 K rpm drives offer about a 20 to 30 percent real-world increase in

    maximum random load on a system. Sequential operations benefit some, but the effect is

    less.

    ATA drives

    ATA drives are not recommended for busy random-access environments. The ATA

    specification was not designed for a heavily random multithreading environment.

    The ATA drives have been used in random environments where the IOPS requirement

    was modest. In tests of raw speeds, with random I/O, the ATA drives have about one-

    third to one-fourth the ability to service I/O, with the greatest difference being with

    smaller I/O sizes and at higher thread counts as shown in Table 4-1 on page 4-8.


    The 7200 rpm drives perform incrementally better than the 5400 rpm drives. The

difference is not as great as with Fibre Channel drives, because the ATA drives' lack of command queuing restricts their random performance.

Table 4-1. Random access performance of 5400 rpm ATA drives relative to 10 K rpm Fibre Channel drives

    Threads per RAID group    2 KB to 8 KB I/O size    32 KB I/O size
    1                         50%                      50%
    16                        25%                      35%
    As mentioned in the section titled RAID levels and performance on page 4-2, in

    sequential operations using RAID 3 with large I/O sizes and modest thread counts (one

    to four threads per disk group), the ATA drives perform close to the Fibre Channel

    drives.

    ATA drives and RAID levels

    The ATA drives perform well in bandwidth applications with RAID 3. For random I/O

    and BCV (clone) of Fibre LUNs, use RAID 1/0. RAID 5 is not recommended for these

    applications due to the high disk load for random writes.

BCVs that are normally fractured, and then periodically synchronized, should use RAID 3, as synchronizing is a high-bandwidth operation.

    RAID group partitioning and ATA drives

    When partitioning a RAID group with ATA drives, assign all LUNs from that RAID

    group to the same SP. This improves throughput at the drive level. (This is not a

    requirement for Fibre Channel drives.)

The user should consider assigning all LUNs from each ATA group to a single host. Otherwise, a path-induced trespass on one host causes the ownership of its LUNs to

    conflict with others on the same RAID group. This approach should be considered if the

    drives are under consistent heavy load from multiple hosts at the same time.

    ATA drives as mirror targets and BCVs

ATA drives can be used as BCV and MirrorView targets. However, the performance impact must be considered with care.

    In a system that is not being stressed (for example, write cache not hitting forced

    flushes), the use of ATA drives compared to FC drives has no significant effect on

    performance.


    In a system that is already experiencing some forced flushes, a synchronization of a

    BCV, or the establishment of a BCV implemented on ATA drives, could cause the write

    cache to fill. This would cause forced flushes for other LUNs being written.

    Similarly, with ATAs as a synchronous mirror target, if the cache is flushing more

    slowly on the target than at the source (due to slower drives being used), the source

    cache can fill. The result is an increase in response time for mirrored writes.

    Mixing drive types in an array

    A mix of 10 K and 15 K rpm drives may be used within an enclosure. Keep drives the

    same within each group of five, and use a maximum of one speed change per enclosure.

    LUN Distribution

    For the purposes of this discussion:

Back-end bus refers to the redundant pair of Fibre Channel loops (one from each SP) by which all Dell/EMC storage arrays access disk drives. (Some Dell/EMC storage arrays have dual back-end buses, for a total of four fibre loops; some have four back-end buses.)

A RAID group partitioned into multiple LUNs, or a LUN from such a RAID group, is referred to as a partitioned RAID group or a partitioned LUN, respectively.

A RAID group with only one LUN is called a dedicated RAID group, and its LUN a dedicated LUN.

    For efficient distribution of I/O on Fibre Channel drives, distribute LUNs across RAID

    groups. When doing distribution planning, take the capacity of the LUN into account.

    Calculate the total GB of high-use storage, and distribute the capacity appropriately

    among the RAID groups.

    Additionally, balance load across storage processors. To do this, assign SP ownership:

    the default owner property for each LUN specifies the SP through which that LUN is

    normally accessed.
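As a hedged illustration of this balancing advice (the function name is hypothetical, not a Navisphere tool), one simple policy is to alternate default SP ownership across the LUN list so the load splits evenly between SP A and SP B:

```python
def assign_default_owners(luns):
    """Alternate default SP ownership across a list of LUN numbers so
    the load is split evenly between SP A and SP B. (Illustrative
    policy sketch; not an EMC or Navisphere tool.)"""
    return {lun: ("SP A" if i % 2 == 0 else "SP B")
            for i, lun in enumerate(luns)}

owners = assign_default_owners([100, 101, 200, 201])
# Half the LUNs default to each SP
assert list(owners.values()).count("SP A") == list(owners.values()).count("SP B")
```

Any policy works as long as the resulting default-owner assignments keep the two SPs evenly loaded in both LUN count and expected I/O.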

    When partitioning ATA-drive RAID groups, keep all LUNs from each RAID group owned by a

    single SP.

    Regarding the previous note: to avoid ownership conflicts affecting performance, it is

    useful (though not critical) to assign all LUNs from each ATA group to a single host.

    Otherwise, a path-induced trespass on one host causes the ownership of its LUNs to

    conflict with others on the same RAID group.

When planning for metaLUNs, note that all LUNs used for a metaLUN are trespassed to the SP that owns the base LUN; their original default owner characteristic is overwritten. Thus, when planning for metaLUNs, designating pools of SP A and SP B LUNs helps keep the balance of LUNs across SPs even.


    Vault and boot LUN effects

In CX Series systems, the first five drives in the base disk enclosure are used for several internal tasks.

Drives 0 through 4 are used for the cache vault. The cache vault is accessed only when the system is disabling the write cache (or re-enabling it after a fault). Thus, vault activity has no effect on host performance unless there is a fault.

The first four drives also hold the operating system boot image and the system configuration. Once the system has booted, there is very little activity from the FLARE operating system on these drives, so this does not affect host I/O.

    Navisphere uses the first three drives for caching NDU data. Heavy host I/O during an

    NDU can cause the NDU to time out, so it is recommended that before an NDU

    commences the host load be reduced to 100 IOPS per drive.

Also, very heavy host I/O on these four drives results in increased response times for Navisphere commands. Thus, for performance-planning purposes, treat these drives as already having a LUN assigned to them, and distribute the load accordingly; system access itself does not affect host I/O performance.

    Using LUN and RAID group numbering

    This suggestion does not help performance but does assist in the administration of a

    well-designed system. Use RAID group numbering and LUN numbering to your

    advantage. For example, number LUNs so that all LUNs owned by SP A are even

    numbered and LUNs owned by SP B are odd numbered.

    A scheme to extend this is to use predictable RAID group numbering, and extend the

    RAID group number into the LUN number. This facilitates selection of LUNs for

    metaLUNs. The RAID group number embedded in the LUN number allows you to select

    LUNs from multiple RAID groups as shown in Table 4-2.

Table 4-2. Example of RAID group and LUN numbering

RAID group    LUN    Default owner
10            100    SP A
10            101    SP B
20            200    SP A
20            201    SP B
30            300    SP A
30            301    SP B


For example, if selecting LUNs with which to extend FLARE LUN 101 into a metaLUN, choose LUNs 201 and 301. MetaLUN components are all trespassed to the same SP as the base LUN, so all three LUNs belong to the same SP. Also, the I/O for the new metaLUN 101 is now distributed across three RAID groups.
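The numbering scheme above can be sketched in a few lines. The helper names below are hypothetical; the scheme itself is the one shown in Table 4-2 (RAID group number embedded in the LUN number, even LUNs owned by SP A, odd LUNs by SP B):

```python
def lun_number(raid_group, index):
    """LUN number embedding the RAID group number, as in Table 4-2:
    RAID group 10 yields LUNs 100 and 101; group 20 yields 200, 201."""
    return raid_group * 10 + index

def default_owner(lun):
    """Even LUNs default to SP A, odd LUNs to SP B."""
    return "SP A" if lun % 2 == 0 else "SP B"

def metalun_components(base_lun, other_raid_groups):
    """Pick same-parity LUNs from other RAID groups, so every
    component already shares the base LUN's default SP."""
    parity = base_lun % 10
    return [lun_number(rg, parity) for rg in other_raid_groups]

# Extending LUN 101 (RAID group 10, owned by SP B) across groups 20 and 30
assert default_owner(101) == "SP B"
assert metalun_components(101, [20, 30]) == [201, 301]
```

With this scheme, a glance at a LUN number reveals both its RAID group and its default SP, which makes selecting metaLUN components mechanical.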

    Minimizing disk contention

    As drive sizes continue to increase, partitioned RAID groups are more common, and it

    becomes more difficult to optimize disk behavior. The Dell/EMC storage platform

    design is quite flexible and delivers good performance, even with a significant amount of

    disk contention. However, for high-performance environments, the following guidelines

    apply.

    Backup during production

Environments that require sequential reads (online backups) concurrent with production get very good results with RAID 1/0 groups, as the read load can be distributed across many spindles. RAID 5 can also deliver good read throughput under moderate load, such as in messaging applications. Such arrangements should be tested before deployment. Keep write loads from saturating the write cache while backing up; the higher priority given to cache flushes slows read access.

    Snapshot save areas and BCV LUNs

    It is not wise to place snapshot cache LUNs on the same drives as the source LUNs you

    snap. Write operations result in very high seek times and disappointing performance.

    The same holds true for BCV LUNs: put them on disk groups separate from the LUNs

    they are cloning.

    Stripes and the stripe element size

    The default stripe element size (128 blocks or 64 KB) is recommended, and should be

    used. Do not change this value unless instructed by Performance Engineering or an

    application-specific Best Practices white paper.

    RAID 5 stripe optimizations

The requirements for achieving modified RAID 3 (MR3) optimization for RAID 5 have been misunderstood for some time. Many EMC personnel believe that the RAID optimizations work only with a 4+1 or 8+1 stripe; this is not true, as MR3 can work for any size RAID 5 group.

    MR3 occurs when an I/O fills the RAID stripe, whether because it bypassed cache and

    was aligned on the stripe, or if sequential I/O is cached until it fills the stripe. However,

    the process has the following requirements:

    The cache page size is 4 KB or, for stripes of 512 KB and larger, 8 KB.

    The stripe element size is 64 KB (128 blocks) or smaller, and a multiple of 8.


For example, with a 12+1 RAID 5 group and a 64 KB stripe element, the stripe size is 12*64 KB = 768 KB. For MR3, a cache page size of 8 KB or larger must be used, as a 4 KB page is too small.

    When the cache is not in use, a disk group of 2+1, 4+1, or 8+1 is more likely to align the

    stripe size to common host I/O sizes and still maintain aligned stripe element sizes.

    Uncached writes, parity RAID, and MR3

    The write cache imposes a maximum write bandwidth that the system can sustain.

    Bypassing write cache allows the system to achieve higher write loads, providing

    enough disks are available to deliver the required performance.

    Uncached writes can make use of MR3 processing on parity RAID types. I/O of up to 1

    MB is buffered by the host-side storage array port. For MR3 to be effective, I/O must be

    aligned to the RAID stripe and must be a multiple of the RAID stripe size.

    Number of Drives per RAID group

    For bandwidth operations, it is more effective to maximize sequentiality on a small

    number of drives than to distribute a sequential load over many drives. Attempting to

    distribute high-bandwidth streams over too many drives results in:

    Less sequential access at the drives.

Longer synchronization times, as the processor waits for multiple devices to complete an I/O.

    These effects are more pronounced when write aside is in use: with cached I/O, the

    cache can insulate the host from increased synchronization times. When using write

    aside, the I/O cannot complete until all drives complete the transfer.

    For high bandwidth, very large disk groups (more than 10 drives) should typically be

    avoided because there is additional seek latency as all the disks align on the same stripe

    for a particular I/O. This is one reason why, under certain circumstances, a RAID 5 LUN

performs as well in writes as a RAID 1/0 LUN: it has fewer spindles to synchronize when writing the entire stripe. For writing large I/O (512 KB or greater) with RAID 5, 8+1 drives is the maximum for most users.

    Generally, large disk sets are more effective for random workloads than sequential

    workloads.

    Large spindle counts

    Distribution of data across many disks is effective for random-access workloads

    characterized by the following conditions:

    Many concurrent processes or threads

    Heavy asynchronous accesses


    A large disk count allows concurrent requests to execute independently. For workloads

    that are random and bursty, striped metaLUNs are ideal. MetaLUNs that share RAID

    groups ideally have their peaks at different times. For example, if several RDBMS

    servers share RAID groups, activities that cause checkpoints should not be scheduled to

    overlap.

    How many disks to use in a storage system

    Plateaus in performance exist where adding disks does not scale workload linearly. The

    following are some rough guidelines for strictly maximizing performance. Refer to the

    appropriate performance white papers for the CX300, CX500 and CX700 for details on

    their metrics. The drive counts presented in Table 4-3 on page 4-14 are for concurrently

    active drives, under constant and moderate to heavy load.


Table 4-3. System high-efficiency / high-performance drive counts

For absolute best performance, small I/O, random access (drives per system):
CX700           200
CX600           160
CX500           120
CX400           60
CX300           60
CX200           30

For absolute best performance, large I/O, sequential access (drives per system):
CX700           80
CX600           40
CX500           40
CX400           20
CX200, CX300    20

    These considerations are for customers whose absolute top priority is performance. As drives are

    added to systems, performance increases; however, the increase may not be linear.

    RAID-level considerations

    Most storage is implemented with RAID 1/0 or RAID 5 groups, as the redundant striped

    RAID types deliver the best performance and redundancy. RAID 3 is as redundant as

    RAID 5 (single parity disk).

    RAID 5

    RAID 5 is best implemented in four-to-nine-disk RAID groups. Smaller groups incur a

    high cost in capacity for parity usage. The main drawback to larger groups is the amount

of data affected during a rebuild. The time to complete a rebuild is also longer with a larger group, though binding large RAID 5 groups across two back-end buses can minimize the effect. Table 4-4 on page 4-15 provides detailed rebuild times. Also, a smaller group provides a higher level of availability, since it is less likely that two of five drives fail than two of ten drives.

    For systems where slowdowns due to disk failure could be critical, or where data

    integrity is critical, use a modest number of spindles per RAID group. Better yet, use

    RAID 1/0.
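A rough way to see why smaller parity groups are safer during a rebuild is to count the surviving drives that are exposed to a second failure. This is a deliberate simplification (it ignores the longer rebuild window of larger groups, which makes big groups worse still), and the helper name is hypothetical:

```python
def exposed_drives_during_rebuild(group_size):
    """After one drive fails in a parity RAID group, the remaining
    group_size - 1 drives are each exposed to a second, fatal
    failure until the rebuild completes. (Simplified model; it
    ignores the longer rebuild times of larger groups.)"""
    return group_size - 1

# A ten-drive RAID 5 group exposes 2.25x as many surviving drives
# during a rebuild as a five-drive group.
assert exposed_drives_during_rebuild(10) / exposed_drives_during_rebuild(5) == 2.25
```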


    RAID 1/0

    Use RAID 1/0 when availability and redundancy are paramount, which are typical

    requirements for SAP production systems. By nature, mirrored RAID is more redundant

than parity schemes. Furthermore, a RAID 1/0 group needs only two DAEs (one from each back-end bus) in order to afford the highest possible level of data availability.

    The advantages of RAID 1/0 to RAID 5 when under rebuild are illustrated in Table 4-4.

Table 4-4. RAID types and relative performance in failure scenarios

RAID type    Rebuild IOPS loss    Rebuild time                            Impact of second failure during rebuild
RAID 5       50 percent           15 to 50 percent slower than RAID 1/0   Loss of data
RAID 1/0     20 to 25 percent     15 to 50 percent faster than RAID 5     Loss of data 14 percent of the time in an eight-disk group (1/[n-1])
RAID 1       20 to 25 percent     15 to 50 percent faster than RAID 5     Loss of data
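The 1/[n-1] figure from Table 4-4 can be checked directly: in a RAID 1/0 group, a second failure during a rebuild loses data only if it happens to hit the failed drive's mirror partner, which is 1 of the n-1 surviving drives. (The helper name is hypothetical.)

```python
def raid10_second_failure_loss_fraction(n_disks):
    """Fraction of second-drive failures during a RAID 1/0 rebuild
    that cause data loss: only the failed drive's mirror partner
    (1 of the n-1 survivors) is fatal, per Table 4-4's 1/[n-1]."""
    return 1 / (n_disks - 1)

# Eight-disk RAID 1/0 group: 1/7, about 14 percent
assert round(raid10_second_failure_loss_fraction(8), 2) == 0.14
```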

    RAID 3

    RAID 3 groups can be built of either five or nine drives. The redundancy is equivalent to

    RAID 5. However, rebuilds should be a bit faster with release 16 as the rebuild code

    takes advantage of the large back-end request size that RAID 3 uses.

    Binding RAID groups across buses and DAEs

Engineers experienced with older SCSI-based systems expect to bind parity RAID groups with each disk in a separate enclosure. This was done when each enclosure was served by a separate SCSI bus. Availability considerations driven by SCSI failure semantics no longer apply. However, consider the following when binding disks.

    Binding across DAEs

    Few subjects cause as much concern and confusion in the field as the binding of disks

    across DAEs. Is there a performance advantage? Is there a redundancy advantage? In

    both cases, it depends on the RAID configuration, and in all cases the differences are

    slight.

    Parity groups (RAID 3, RAID 5)

    Binding parity RAID groups such that each drive is in a separate DAE does not impact

    performance. However, there is a small increase in data availability in this approach.

    Using a parity RAID type with the drives striped vertically increases availability to over

    99.999 percent. However, this is very unwieldy; if very high availability is required, use

    RAID 1/0.


    RAID 1/0 groups

There is no advantage in binding a RAID 1/0 group across more than two DAEs, but it is not harmful in any way.

    Binding across Back-End Buses

    All current Dell/EMC storage platforms except the CX300 have dual or quadruple

    redundant back-end buses with which they attach to the DAEs. A disk group can be

    made up of drives from one, both or all buses. The standard racking alternates the buses

    across adjacent DAEs, with the DPE or first DAE being bus 0, the next DAE bus 1, and

    so on.

    Parity groups (RAID 3, RAID 5)

    Parity groups of 10 drives or more benefit from binding across two buses, as this helps

    reduce rebuild times. For example, bind a ten-drive RAID 5 with five drives in one

    DAE, and another five drives in the next DAE above it.

    Mirrored groups (RAID 1, RAID 1/0)

    Binding mirrored RAID groups across two buses increases availability to over 99.999

    percent and keeps rebuild times lower. This technique ensures availability of data in two

    (rare) cases of double failure: an entire DAE or redundant back-end bus (dual-cable

    failure). Bind the drives so that the primary drives for each mirror group are on the first

    back-end bus, and the secondary (mirror) drives are on the second back-end bus.

    Binding across buses also has a minimal but positive impact on performance.

When creating the RAID group (or defining a dedicated LUN in the bind command), use Navisphere CLI to bind across buses. When designating the disks, Navisphere CLI uses the disk ordering given in the createrg or bind command to create Primary0, Mirror0, Primary1, Mirror1, and so on, in that order. Disks are designated in Bus_Enclosure_Disk notation. Here is an example of binding the first two drives from enclosure one of each bus:

navicli -h <ip-address> createrg 55 0_1_0 1_1_0 0_1_1 1_1_1
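The Primary0, Mirror0, Primary1, Mirror1 ordering can be sketched as follows. This hypothetical helper only models the documented ordering from a Bus_Enclosure_Disk list; it does not call Navisphere:

```python
def mirror_pairs(disks):
    """Model Navisphere CLI's createrg/bind disk ordering for
    mirrored RAID: disks are taken pairwise in the order given, as
    Primary0, Mirror0, Primary1, Mirror1, and so on."""
    return list(zip(disks[0::2], disks[1::2]))

pairs = mirror_pairs(["0_1_0", "1_1_0", "0_1_1", "1_1_1"])
# In the example above, each primary sits on bus 0 and its mirror on bus 1,
# so the group survives the loss of an entire back-end bus.
assert all(p.split("_")[0] == "0" and m.split("_")[0] == "1" for p, m in pairs)
```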

    Binding with DPE Drives

    In a total power-fail scenario, the SPS (standby power supply) supplies battery-backed

    power to the SPs and vault disks. This allows the storage system to save the contents of

    the write cache to disk.

    However, the power to the nonvault disk storage-system enclosures (DAEs) is not

    maintained. When the storage system reboots, LUNs that had I/O outstanding are

    checked, using the background verify process, to verify that no writes in progress

    resulted in partial completions. The background verify is a fairly low-intensity process.


    However, a LUN bound with some drives in the vault enclosure (DPE or first DAE,

    depending on the model) and with some drives outside of the vault enclosure may

    require a rebuild, which is a more disk-intensive process. This affects performance to

    some degree on reboot.

    To avoid a rebuild on boot, follow these steps:

    Do not split RAID 1 groups across the vault enclosure and another DAE.

    For parity RAID (RAID 5, RAID 3), make sure at least two drives are outside the

    vault enclosure.

    For RAID 1/0, make sure at least one mirror (both the primary and secondary drive

    in a pair) is outside the vault enclosure.

For RAID 1/0, you can use Navisphere CLI's disk ordering when using createrg, as explained earlier, to ensure at least one pair is outside the vault enclosure. Example:

navicli -h <ip-address> createrg 45 0_1_0 1_1_0 0_0_1 1_0_1 0_0_2 1_0_2

Note that the pair 0_1_0 and 1_1_0 is outside the vault enclosure. Alternatively, simply ensure that more than half of the drives in a RAID 1/0 group are outside the vault enclosure.
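The RAID 1/0 rule above can be expressed as a small check. The helper is hypothetical, and it assumes the vault is enclosure 0 on each bus and that disks are listed in createrg pair order (primary, mirror, primary, mirror, ...):

```python
def pairs_outside_vault(disks, vault_enclosure="0"):
    """Count RAID 1/0 primary/mirror pairs (taken in createrg order)
    in which both drives sit outside the vault enclosure. At least
    one such pair avoids a rebuild after a total power failure.
    Disks use Bus_Enclosure_Disk notation."""
    pairs = zip(disks[0::2], disks[1::2])
    return sum(
        1 for p, m in pairs
        if p.split("_")[1] != vault_enclosure
        and m.split("_")[1] != vault_enclosure
    )

disks = ["0_1_0", "1_1_0", "0_0_1", "1_0_1", "0_0_2", "1_0_2"]
# Only the 0_1_0 / 1_1_0 pair is entirely outside the vault enclosure,
# which is enough to satisfy the guideline.
assert pairs_outside_vault(disks) == 1
```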


Chapter 5  Database Layout Considerations

    This chapter presents these topics:

Striped metaLUNs .............................................. 5-2

Host-based striping ........................................... 5-2

Log and BCV placement ......................................... 5-2

Logical volume managers and datafile sizes .................... 5-3

PowerPath and device queue depth .............................. 5-3

Snaps, snapshots, BCVs, and clones ............................ 5-3

    Customers are strongly advised to read the SAP Support Notes for their

    platforms and database combinations.


    Striped metaLUNs

    SAP and database vendors recommend spreading data across many spindles

    and controllers for parallel I/O operations. Dell/EMC storage arrays support

36 GB, 73 GB, and 181 GB drives for delivering both performance and capacity. For random OLTP workloads such as SAP, the larger drives are as appropriate as the smaller drives, since both are high-performance 10,000 rpm disks and have demonstrated excellent performance both in the lab and at customer sites.

    To deliver more throughput than is possible from a single volume, Dell/EMC

    recommends using metaLUNs for volume sets up to 500 GB in size.

    MetaLUN performance is equal to or better than host volume stripe sets.

    MetaLUNs with PowerPath can scale linearly beyond the capacity of a single

    channel to service I/O requests.

During database layout, consider whether to store the database on raw devices or on cooked (file system) devices. Because raw devices do not use the host's file buffer cache, some implementations may see a slight improvement in I/O performance. In Oracle environments, base the decision on raw versus cooked devices on the expertise and preferences of the system and database administrators.

    In UDB DB2 environments, SAP recommends using DMS DEVICE

    containers for your large, fast-growing tables. In both of these database

    platforms, the database management tools monitor and manage on thetablespace (logical) level. To ease management of fast-growing tables, SAP

    and EMC recommend isolating these heavily accessed or growing tables into

    their own tablespaces.

    Host-based striping

    Logical volume managers offer host-based striping and allow administrators to

    reduce the stripe width from the metaLUN default if desired. This in effect

    produces double striping and has no negative impact on performance.

Customers have had success in R/3 environments with stripe widths of 128 KB and higher. If using host-based striping, ensure that the stripe width is a multiple of the database block size (8192 bytes by default).
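A minimal sketch of that alignment check (the helper is hypothetical, assuming the 8192-byte default database block size mentioned above):

```python
def valid_stripe_width(stripe_kb, db_block_bytes=8192):
    """True if a host-based stripe width (in KB) is a whole multiple
    of the database block size (8192 bytes by default)."""
    return (stripe_kb * 1024) % db_block_bytes == 0

assert valid_stripe_width(128)      # 128 KB = 16 blocks of 8 KB
assert not valid_stripe_width(12)   # 12 KB is not a multiple of 8 KB
```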

    Log and BCV placement

    Since the database logs are write-intensive, they traditionally are dedicated to

    their own physical disks. With the advent of large disks, it may not be

    practical to dedicate a physical disk to each log file. If you do not use

    metaLUNs and do not isolate logs on their own physical disks, place them on

    the same physical disks that are shared by the least busy files of the database.


    Logical volume managers and datafile sizes

    Use of striped metaLUNs often