
Page 1: Note 1730566 - Recommended Performance-related System Settings in SLO - DTL

21.01.2013 Page 1 of 13

SAP Note 1730566 - Recommended performance-related system settings in SLO - DTL

Note Language: English Version: 4 Validity: Valid Since 13.11.2012

Summary

Symptom
This note serves as a central reference point (composite SAP note) for performance-related recommendations in the context of the DMIS migration tools. These migration tools are used in different products and contexts, such as:

o TDMS (activities: data selection, data selection for header tables, data transfer)

o LT client transfer (activities: access plan calculation, data selection, data transfer)

o Customer-specific migration projects, possibly including an integrated upgrade

o NZDT (near zero downtime) projects

o Landscape Transformation Migration to SAP HANA (initial load, and delta replication)

It contains general recommendations with respect to

o optimal decisions concerning the different strategies and options offered by the migration tools

o system settings of the involved systems

For problems observed in currently running migration activities, the means to analyze them, and possible remedies, refer to note 1730438.

Other terms
Migration Workbench (MWB), LT client transfer, TDMS

Reason and Prerequisites
The DMIS migration tools offer multiple options to transfer data from a sender to a receiver system. It needs to be understood what the pros and cons of these options are, and which of them are preferable, depending on the specific scenario (transfer is executed in downtime, making use of all available system resources, or in uptime, competing with SAP standard applications for system resources) and other influencing factors. This note explains these pros and cons, and also the recommended system settings of the involved systems, depending on the respective scenario. In this context it is also necessary to consider if a certain system is used in multiple roles (e.g. being both sender and central system).

Solution

1. Choose the most efficient strategy to read the data in the sender system.

1.1 Reading types


The strategy to read data in the sender system is mainly determined by the so-called reading type, which is discussed here first. Seven reading types can be distinguished, but which of these can be used for a certain migration object depends on the properties of the respective migration object: structured object (with a header table and one or multiple child tables) or flat object (only one table); in the latter case, also on the table class (transparent, pool or cluster table). The restrictions, pros and cons of the reading types are described here:

1 - access plan calculation

Applicable for:

o structured migration objects

o flat migration objects with a transparent table

o flat migration objects referring to a physical cluster table

In this case, first, a so-called access plan is calculated. This means that one specific primary index field of the (only or top-level) table of the respective migration object is chosen as delimitation field. During the data transfer, each data portion to be transferred is defined by an upper and lower limit value for that field. If a high number of such data portions is determined, the transfer can be parallelized by "splitting the access plan", which means subdividing the set of data portions among multiple access plans, each of which can be processed by one dedicated transfer job.

An important prerequisite of this approach is that one single index field of the table can be identified for which data portions of reasonable size (at most around 100 MB, ideally less - see the considerations about the portion size in section "Choose the best strategy to write the data to the receiver system") can be calculated. Also, the field should ideally be at the start of the primary (or any secondary) index of the table, to ensure an efficient access. Otherwise, it might be necessary to create such a secondary index.

If these prerequisites can't be fulfilled, consider using any of the "cluster technique" reading types 4, 5, or 7. There are some more reasons why those reading types might be preferable: If the data transfer is done in parallel to the productive usage of the sender system, and the sender system uses an Oracle or IBM database, the data transfer using the access plan technique (or also reading type 3) will use much of the database buffer, thus impeding the productive applications. In case of variable-length fields (large object binaries) being part of any of the tables of a migration object, the portion size calculation might be incorrect, resulting in too much data being selected with one data portion, thus causing a memory overflow.

The only advantage of the access plan technique is that in case of downtime scenarios, the access plan calculation can be executed in uptime; only the data transfer itself needs to be done in downtime. If the cluster technique is used, already the activity that takes the data into the sender system cluster must be executed in downtime, which results in a longer overall downtime.
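The delimitation and splitting logic described above can be sketched as follows. This is a simplified Python illustration of the idea, not the actual ABAP implementation; the function names and the fixed portion size are assumptions for the example.

```python
# Illustrative sketch of access plan delimitation: given the sorted values
# of the chosen delimitation field, derive (lower, upper) boundary pairs so
# that each data portion stays below a maximum number of records.
def calculate_access_plan(sorted_key_values, max_rows_per_portion):
    portions = []
    for start in range(0, len(sorted_key_values), max_rows_per_portion):
        chunk = sorted_key_values[start:start + max_rows_per_portion]
        portions.append((chunk[0], chunk[-1]))  # lower and upper limit value
    return portions

# "Splitting the access plan": subdivide the set of data portions among
# multiple access plans, each processed by one dedicated transfer job.
def split_access_plan(portions, number_of_jobs):
    return [portions[i::number_of_jobs] for i in range(number_of_jobs)]

keys = list(range(1, 101))              # e.g. 100 document numbers
plan = calculate_access_plan(keys, 25)  # 4 portions of 25 records each
plans = split_access_plan(plan, 2)      # 2 access plans, 2 portions each
```

In the real tool, the portion limit is derived from the target portion size in bytes (at most around 100 MB, as stated above), not from a record count.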

2 - pool tables

Only applicable for flat migration objects referring to a logical pool table, and only in such cases when the data volume of the respective logical pool table is small (less than 100 MB). Otherwise, use reading type 4 or 5 for such tables. With this reading type, all data of the pool table are selected and transferred in one portion. Other than with reading type 1, we have here an access plan consisting of exactly one data portion for which no portion delimitation is maintained.

3 - cluster table (DB_SETGET)

Applicable for flat migration objects consisting of exactly one table. For logical cluster tables, an access plan calculation as described above is not possible. For such tables (but also for pool and transparent ones, in case of "flat" migration objects), it is possible to use the basis function module DB_SETGET. It takes in each DB access cycle 10,000 records from the source table, in the sort order of the primary key, and transfers these records. For large tables, it is also possible to parallelize this technique, which means multiple transfer jobs can run in parallel for the same table. In this case, however, it is necessary to prepare the parallelization, which is done in the "access plan calculation" step. Other than with reading type 1, this step does not calculate boundary values for small data portions but larger boundaries that allow multiple parallel-running jobs to transfer multiple portions in a certain range of primary key field values.

An important restriction of this reading type, compared to all the other reading types, is that no further selection criteria (restricting the result set to a subset of the data of the table) can be specified. Compared to the access plan option, this option can also be used for pool and cluster tables, and in such cases when for a transparent table no single key field can be used for data portioning. Another advantage, compared to both the access plan and the cluster technique, is that in case Oracle is used as DB system in the sender system, the likelihood of an Oracle 1555 exception is substantially reduced, as no long-running cursor selection is necessary.
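The read pattern of DB_SETGET (fixed-size batches in primary-key sort order, each cycle resuming after the last key transferred) can be modeled as follows. This is a simplified Python sketch of the access pattern only; the real DB_SETGET is an ABAP basis function module, and the in-memory "table" here is an assumption for illustration.

```python
# Model of the DB_SETGET-style read: fetch records in primary-key sort
# order, a fixed number per access cycle, resuming each cycle strictly
# after the last key already transferred.
def read_in_key_order(table, batch_size=10_000):
    """table: dict mapping primary key -> record. Yields record batches."""
    keys = sorted(table)          # primary-key sort order
    last_key = None
    while True:
        # continue strictly after the last key seen in the previous cycle
        start = 0 if last_key is None else keys.index(last_key) + 1
        batch_keys = keys[start:start + batch_size]
        if not batch_keys:
            break                 # all records transferred
        last_key = batch_keys[-1]
        yield [table[k] for k in batch_keys]
```

Because each cycle is an independent, short key-range access rather than one long-running cursor, this pattern avoids holding read consistency over the whole table, which is why the Oracle 1555 risk mentioned above is reduced.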

4 / 5 - cluster technique

This option can be used for any kind of migration object. Here, the access plan calculation activity does not calculate portion boundaries, but writes all data of a certain data portion into an INDX-like table (also called "cluster table", hence the term "cluster technique") in the sender system. Thus, in the data transfer activity, there is no need to access the original application tables a second time. Rather, the data are read from the INDX-like table, which is much more efficient. On the other hand, in case of downtime scenarios, it must be kept in mind that the activity to write the data to the INDX-like table must already be executed during the downtime.

Typically, the selection of the only or top-level table of the migration object will be executed as a full table scan. If reading type 5 is chosen, this is enforced by means of an optimizer hint. In case of reading type 4, the decision about the optimal access path is left to the DB optimizer. In most cases, reading type 5 is used. However, there are some cases when an index access to the data is preferable:

o Only a small amount of the data of the table is to be selected, and the WHERE condition of these data refers to the primary key fields

o The primary index is a GUID-type field. Then, reading the data in sort order allows for a quicker insertion in the receiver system, as it reduces the effort for the index access there


It must be noted that in the sender system, some space is required to store the data in the INDX-like table. Normally it is fair to assume a compression rate of 1:10 for the storage of the data. However, if the original table is an INDX-like table itself, no further compression can be assumed. By means of report DMC_CREATE_CLUSTER, the INDX-like table can be assigned to a specific tablespace (for Oracle and DB6 databases).
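The space estimate above amounts to a simple calculation, sketched here. The 1:10 compression rate is the rule of thumb stated in this note, not a guaranteed value, and the helper function is hypothetical.

```python
# Rough space estimate for the INDX-like table in the sender system,
# using the 1:10 compression rule of thumb from this note. If the
# original table is INDX-like itself, assume no further compression.
def estimated_cluster_space_mb(table_size_mb, already_indx_like=False):
    return table_size_mb if already_indx_like else table_size_mb / 10

# e.g. a 500 GB transparent table needs roughly 50 GB of cluster space
space = estimated_cluster_space_mb(512_000)
```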

Similar to the case of the access plan calculation, the data transfer of a certain migration object can be parallelized. In addition, it is also possible to parallelize the access plan calculation / data selection activity which puts the data into the cluster table.

In scenarios controlled by the process control layer (PCL), like TDMS or LT client transfer, this parallelization will be done for all migration objects itemized for the respective package in table CNVMBTPRECALCOBJ. For other scenarios, it is possible to proceed as follows: Execute report DMC_GENERATE_ACPLAN_DELIMITER in the central system. It generates and executes a report in the sender system to calculate the basis of the parallelization (that means, it determines key field values to be used as boundaries of the selections applied to the multiple parallel-running jobs). Once the batch job in the sender system is finished, report DMC_CREATE_PRECALC_ACP_W_DELIM can be started in the central system to retrieve the results. After that, the parallelized access plan calculation can be started.

6 - cluster technique; cluster table filled by external agent

Here, an external tool is responsible for selecting the data in the sender system, writing them to the INDX-like table, and storing the necessary administration data in the DTL tool. This is a specific approach of some TDMS scenarios which is not further discussed here.

7 - cluster technique, full table scan for the (only) child table of the structured migration object

Reading types 4 and 5 are usually very efficient for flat migration objects. In case of structured objects, the same kind of issues as in case of the access plan option might occur:

o We need to select all child table records for the corresponding header table records. This selection might be poorly supported by an appropriate index of the child table (for other considerations regarding the child table selection, see the section about the "FOR ALL ENTRIES" selection below).

o There might be, for a single record of the parent table, many thousands of child table records (possibly on the third or even higher hierarchy level), which makes a portion delimitation impossible.

In case of the access plan calculation, these problems cannot be adequately dealt with. If the cluster technique is used, and if there is a two-level hierarchy with only one header and child table, and the header table is "read-only", i.e. not really transferred, reading type 7 will help. In this case, the full table scan is executed not on the header table of the migration object, as usual, but on the child table. The header table records will all first be selected into an internal table. Then, while selecting the child table records, the relevance of the child table records can be checked by reading the corresponding record from the header table. This is normally highly efficient, but as a prerequisite, the header table must not be too large to put all its records (or those fulfilling the respective selection criteria) into the internal table. Otherwise, a parallelization might be considered which makes sure that each of the processes working in parallel on that object will only select a subset of the header table records that can be stored in an internal table. See section 1.3 for details.
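The relevance check of reading type 7 can be sketched as follows: the read-only header table is buffered once, and each child record from the full table scan is checked against it with a key lookup. This is a Python illustration under assumed key field names (MANDT/BELNR); it is not the generated ABAP selection logic.

```python
# Sketch of the reading type 7 logic: buffer the (read-only) header
# table records in an "internal table" keyed by the join fields, then
# scan the child table and keep only records with a matching header.
def select_relevant_child_records(header_records, child_records,
                                  key_fields=("MANDT", "BELNR")):
    # set of header keys, for O(1) relevance checks per child record
    header_keys = {tuple(h[f] for f in key_fields) for h in header_records}
    return [c for c in child_records
            if tuple(c[f] for f in key_fields) in header_keys]
```

The prerequisite stated above maps directly to this sketch: all header keys must fit into memory at once, otherwise the header set has to be partitioned among parallel processes.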

Along with the cluster technique (reading types 4 or 5), it is also possible to replace the selection logic that is normally generated by a specific logic which might be more efficient in certain cases. This usually applies to structured objects with exactly one header table that is "read-only", and one child table. One example is to read subsets of child and header table in parallel, each ordered by primary key, and to match the records of the subsets. In case of PCL-controlled scenarios like TDMS and LT client transfer, these specific includes (which are delivered with names prefixed by DMC_INCL_ACS) can be assigned to the migration objects in table CNVMBTINCL. Another example is the access path for logical pool table GLS1 where the data selection is done for the physical pool table GLSP. Otherwise, the access would always be done using the primary index, which would result in very poor performance.

1.2 Other parameters influencing the read access in the sender system

In addition to choosing the most expedient reading type, the following options can be helpful. Starting with DMIS 2011 SP3, all of these can be assigned to a certain migration object of a mass transfer in table DMC_PERF_OPTIONS:

o PACKAGE SIZE for FETCH NEXT CURSOR
Reading types 1, 4, 5, and 7 imply a cursor processing for at least one table (the header or only table in case of reading types 1, 4, or 5; the child table in case of reading type 7). The number of records read in each of the FETCH NEXT CURSOR loops is normally predefined (100 for reading type 1; for reading types 4 or 5, 5000 in the normal case, and 100 if the for-all-entries option described below is chosen; 5000 for reading type 7). In certain cases it might be useful to specify a different package size, which is possible for the cluster technique reading types by specifying the desired value for field PACKAGE_SIZE in table DMC_PERF_OPTIONS

o "For All Entries" (FAE) selection option
If any of the reading types 1, 4 or 5 is applied to structured migration objects, the default access to the child table records is to read, specifically for each header table record, the corresponding set of child table records. This is appropriate if for one header table record, a very large number of child table records might exist. Otherwise, it would be more efficient to read, for the complete set of header table records currently being processed, all corresponding child table records. This can be activated via the "FOR_ALL_ENTRIES" switch in DMC_PERF_OPTIONS. It might be considered to set, in this case, the number of header table records to be read per FETCH loop to an appropriate value (see above, field PACKAGE_SIZE)


In older support packages, you need to set this flag directly in transaction MWB, workstep "Sender Range: Edit Structures and Fields" (press button "access options"). If no MWB license is installed, and transaction MWB therefore can't be used, maintain a corresponding record for the migration object in table DMC_ACPL_FAE_SEL.

o MAX_IN_BLOCKING_FACTOR optimizer hint
This is only relevant if the FAE option described above has been activated. Then it can be controlled how many child table records should be selected per access operation on DB level. By default, only 5 records (or any other value globally set on DB level) will be selected. However, by means of an optimizer hint, a higher value can be requested. This is controlled by field MAX_IN_BLOCK. In older support packages, you find this field in table DMC_ACPL_FAE_SEL.

o Oracle only: PARALLEL hint
Another optimizer hint can be specified, but it is effective only for Oracle. The PARALLEL hint controls the number of DB processes applied for a certain select operation. Specifying a number of parallel processes might accelerate the select operation, but of course requires a higher amount of DB resources.

1.3 Parallelization of access plan calculation / data selection

In the TDMS context, this is controlled by table CNVMBTPRECALCOBJ where, for the corresponding package, all objects need to be specified for which the parallel processing should be done. In other contexts, especially the initial load into SAP HANA, the automatic parallelization can be controlled by maintaining table IUUC_PRECALC_OBJ. As a result, a preparation job runs in the sender system which determines primary key field values that will serve as delimitation values for the parallelization of the data selection. In both tables, the maximum number of data records to be processed in each of the parallel-running jobs must be specified. If it is better not to consider all primary key fields for the delimitation, the first up to three key fields of the table can be specified.
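The "maximum number of data records per job" setting implies a simple job-count calculation, sketched here. The table names above are from the note; the arithmetic itself is a simplifying assumption for illustration.

```python
import math

# Number of parallel-running jobs implied by the maximum number of data
# records to be processed in each job (as maintained in
# CNVMBTPRECALCOBJ / IUUC_PRECALC_OBJ).
def number_of_parallel_jobs(total_records, max_records_per_job):
    return math.ceil(total_records / max_records_per_job)

# e.g. 10 million records, at most 3 million per job -> 4 parallel jobs
jobs = number_of_parallel_jobs(10_000_000, 3_000_000)
```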

2. Choose the best strategy to write the data to the receiver system

The migration workbench offers multiple options for how the data can be written to the receiver system, such as

o INSERT FROM INTERNAL TABLE ("Array-Insert")

o single-record insert

o MODIFY FROM INTERNAL TABLE ("Array-Modify")

o dynamic single insert

INSERT FROM INTERNAL TABLE is usually the most effective write behavior.


Only if very many duplicates are to be expected, and if the existing data should be overwritten, Array-Modify (which first tries an update, and then an insert if the update failed) will be more efficient.

If duplicates are to be expected only for few data portions, and if existing data should be overwritten, the most efficient strategy is to start with Array-Insert, and later to restart the processing with Array-Modify.

If duplicates are to be expected but should not be overwritten, write behavior 6 ("dynamic single insert") might be the best choice. It first tries an Array-Insert and switches to single insert only for those data portions for which duplicates occurred.
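The fallback logic of write behavior 6 can be sketched as follows. This is a simplified Python model; the FakeDb class and all names are illustrative stand-ins for the real receiver database layer, not DMIS APIs.

```python
class DuplicateKeyError(Exception):
    pass

class FakeDb:
    """Minimal in-memory stand-in for the receiver database."""
    def __init__(self):
        self.rows = {}
    def array_insert(self, table, records):
        # INSERT FROM INTERNAL TABLE: the whole portion fails on any duplicate
        if any((table, r["key"]) in self.rows for r in records):
            raise DuplicateKeyError
        for r in records:
            self.rows[(table, r["key"])] = r
    def single_insert(self, table, record):
        if (table, record["key"]) in self.rows:
            raise DuplicateKeyError
        self.rows[(table, record["key"])] = record

# Write behavior 6 ("dynamic single insert"): try the array insert first;
# only for portions where duplicates occurred, switch to single-record
# inserts, skipping the duplicates (existing data are not overwritten).
def write_portion(db, table, records):
    try:
        db.array_insert(table, records)
    except DuplicateKeyError:
        for record in records:
            try:
                db.single_insert(table, record)
            except DuplicateKeyError:
                pass  # duplicate exists: keep existing data
```

Array-Modify would differ only in the fallback branch: it updates the existing record instead of skipping it.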

Yet another factor influencing the performance of a data transfer is the block size. As the transfer of each of the data portions creates some overhead, it is desirable to reduce the number of data portions by increasing the block size. On the other hand, there are two main limits to the block size:

o the required main memory (in the central system, this is double the size of a data portion)

o the time required to write the data in the receiver system. In case of tables with a small record size (say, 80 bytes), a block size of 16 MB means that the system tries to write 200,000 records in one data portion. The time required to write that many records might exceed the maximum runtime of a dialog process, which would result in a "time limit exceeded" exception.
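The arithmetic behind the two limits above, restated as a small sketch (the helper names are illustrative; the 16 MB / 80 bytes figures are the note's own round example, using decimal megabytes):

```python
# Records per data portion for a given block size and record size
# (decimal MB, matching the note's round example: 16 MB / 80 B = 200,000).
def records_per_portion(block_size_bytes, record_size_bytes):
    return block_size_bytes // record_size_bytes

# Main memory needed in the central system: double the portion size.
def central_system_memory_mb(block_size_mb):
    return 2 * block_size_mb
```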

3. TDMS only: Deactivate generic processing of the data

In order to reduce the number of runtime objects that need to be generated specifically for each migration object, in the TDMS context, the data are normally transferred between the different systems in a generic rather than a table-specific format, and read and written by means of generic rather than specifically generated function modules. This, however, creates some more network load, compared to the table-specific processing. In cases when this causes performance problems, the generic processing can be deactivated, if the TDMS for HCM solution is not used. For details, see note 1450173.

4. Load balancing

When defining the RFC destinations, set "Load Distribution" to "Yes" to ensure that load balancing is active. Otherwise, only the application server specified in the RFC destination would be used.

5. Recommended system settings in each of the involved systems:

The system setting recommendations below are a result of various experiences with data migrations in the past. Nevertheless, in a certain, specific project context, it might be advisable to deviate from these recommendations. Also, due to new experiences and changes in hardware, system software and database technology, these recommendations are subject to change. You should therefore check this note for changes from time to time.

Some recommendations are valid for all involved systems:

o The OS collector should be activated (in transaction ST06)

o Profile parameter rdisp/max_wprun_time should not be less than 900 (sec)

To avoid bottlenecks in RFC/Gateway processing due to high load, see note 384971. Adjust settings in all systems accordingly.

5.1 Sender system

The access plan calculation / data selection activity in the sender system is executed by means of synchronous RFC calls from the central system if the expected data volume of the respective table is small, and by means of batch jobs if the data volume is large. This is controlled by the "size category" attribute of the migration object. The data transfer uses asynchronous RFC calls to read the data from the sender system. Therefore it is necessary to define a sufficient number of both batch and dialog work processes in the sender system, according to the requested number of parallel-running access plan calculation and data transfer jobs that should be started in the central system.

System load in the sender system is usually high for the access plan calculation activity (unless an incremental processing based on logging tables is executed, or reading type 3 is used; in these cases, no real calculation in the sender system is necessary). If the cluster technique is used, table DMC_INDXCL should be assigned to a storage medium allowing quick write access. During the data transfer, system load on the sender system will be high if any of the reading types 1 - 3 is used, or in case of incremental objects based on logging tables, but will be low if the cluster technique reading types 4 - 7 are used.

Recommendations for profile parameters:
rdisp/btctime 60

Concerning the DB system settings, there is no need for particular database settings that differ from the standard operation. For DB6, however, table DMC_INDXCL should be defined as VOLATILE in the sender system.

5.2 Receiver system

The receiver system is only accessed by means of synchronous RFC calls, which requires a high number of dialog work processes. Load will be high only during the data transfer. It might be advisable, if the cluster technique is used, to reassign system resources from the sender to the receiver system once the data selection in the sender is finished and the data transfer is started.

Recommendations for profile parameters:
rec/client OFF
rdisp/btctime 60

For all larger tables in the receiver system, the switch "log data changes" in the technical settings in the SAP DDIC should not be active. Otherwise, writing the data requires additional resources, as the log table DBTABLOG will also be written to. Also, the database will then write the data records in single record mode. If it is not possible / not allowed to deactivate this logging switch, it can still be made sure that, specifically for the migration activities, the logging will not become effective. It is then necessary to set, before the runtime objects are generated, parameter SUPPRESS_DBTABLOG_UPDATE in parameter table DMC_RT_PARAMS to 'X' in the central system. For details, see note 1549208.

Recommendations concerning the DB system
Note that the parameter values mentioned above are meant to optimize the throughput of a data transfer. Concerning the normal use of an SAP system, you need to comply with the recommendations in the standard notes.

Oracle

Logging
The system should be run in NOARCHIVELOG mode.
The mirroring of the redo log files should be switched off (there should be only one member per redo log group).
At least 4 redo log groups are required.
The redo log files should have a size of at least 1 GB each, should be created on quick disks, and distributed to as many disks as possible. Also, for the log files, only disks should be used that do not also contain the data files.
The size per member depends on the number of parallel processes running and the database size. If the DB size is larger than 2 TB, the member size should be 2 GB. If the DB size is larger than 5 TB, the member size should be 5 GB.
The redo log buffer size should generally be 128 MB at least.

WORKAREA_SIZE_POLICY must be set to AUTO.
The UNDO tablespace size depends on the number of parallel running tasks and the size of the database:
For database sizes up to 2 TB, 256 GB are required.
For database sizes up to 5 TB, 512 GB are required.
For database sizes larger than 5 TB, 768 GB are required.
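The redo log member and UNDO tablespace sizing rules stated above can be expressed as a simple lookup, sketched here (the function name is illustrative; the threshold values are exactly those given in this note):

```python
# Oracle redo log member size and UNDO tablespace size (in GB) as a
# function of the database size in TB, per the thresholds in this note.
def oracle_sizing_gb(db_size_tb):
    if db_size_tb > 5:
        return {"redo_member_gb": 5, "undo_gb": 768}
    if db_size_tb > 2:
        return {"redo_member_gb": 2, "undo_gb": 512}
    # members should be at least 1 GB each; 256 GB UNDO up to 2 TB
    return {"redo_member_gb": 1, "undo_gb": 256}
```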

DB parameters
DB_FILE_MULTIBLOCK_READ_COUNT 128
DB_CACHE_ADVICE OFF
DB_BLOCK_CHECKING OFF
DB_BLOCK_CHECKSUM OFF
FAST_START_MTTR_TARGET 0
FILESYSTEMIO_OPTIONS SETALL
DISK_ASYNCH_IO TRUE
LOG_CHECKPOINT_INTERVAL 0
LOG_CHECKPOINT_TIMEOUT 0
COMMIT_WRITE BATCH, NOWAIT

DB_WRITER_PROCESSES depends on the size of the cache, the number of CPUs, and the size of the redo log files. It does not make sense to have more DB writer processes than data files. Normally, 20 should be a good value; in case of Oracle 11, for very large cache and redo log files, 32 can be specified.


DB_CACHE_SIZE should be set to the max available physical memory of the database server minus all other database memory requirements, minus 1-2 GB for the operating system, and minus the memory requirements of other components sharing the same hardware.
SHARED_POOL_SIZE: depending on the parallel degree and the available RAM at the DB server, 1G, 2G or 4G; see note 690241.

MaxDB

OVERWRITE mode should be activated (for details, see note 869267)

CacheMemorySize should be set to the max available physical memory of the database server minus all other database memory requirements, minus 1-2 GB for the operating system, and minus the memory requirements of other components sharing the same hardware.
MaxUserTasks: calculated by the number of SAP connections + MaxDB internal processes
LogQueueSize 20000
LOG OVERWRITE MODE ACTIVE
UseMirroredLog NO
LOGGING SWITCHED OFF
MaxSQLLocks 10000
UseDataCacheScanOptimization YES (Version 7.8 and above)
ReadAheadTableThreshold 1024 (Version 7.8 and above)

DB6 (aka DB2 LUW, DB2 UDB)

STMM (self-tuning memory manager) should be used, and INSTANCE_MEMORY set to the max available physical memory of the database server (minus 1-2 GB for the operating system and minus the memory requirements of other components sharing the same hardware).

Logging / journaling should be minimized

CHNGPGS_THRESH 40%
LOCKLIST AUTOMATIC, or > 40000 without STMM
LOGPRIMARY 60
LOGSECOND 0 (see note 1493587)
LOGRETAIN OFF
NUM_IOCLEANERS AUTOMATIC
NUM_IOSERVERS AUTOMATIC
SOFTMAX 300

MS SQL

Logging should be minimized by choosing recovery model "simple" (see note 421644).
Autogrow should be activated.
MAX = MIN Server Memory (FIXED), set to the max available physical memory of the database server (minus 1-2 GB for the operating system and minus the memory requirements of other components sharing the same hardware).
MAX DEGREE OF PARALLELISM = 1
AUTO_SHRINK = OFF (from Version 2005 upwards)
Trace Flag -t610 (from Version 2005 upwards)


5.3 Central system

Both access plan calculation and data transfer require batch processes inthe central system. Therefore it is necessary to provide a sufficientnumber of batch processes. Concerning the system resources required forthese processes, it is assumed that one processor core can manage two batchprocesses.

Recommendations for profile parameters:
rec/client OFF
rdisp/btctime 60

Logging should be minimized, as described in the receiver system settingsfor the specific DB systems.

6. Indexes in the receiver system

Writing data into the receiver system can be substantially accelerated if indexes are dropped before the data are transferred, and recreated with suitable DB system tools after the transfer is completed. A prerequisite for index dropping is that the respective index is not being used by applications (possibly in other clients) during the data transfer. It might even be possible to drop the primary index. This, however, is only possible if INSERT is the only write behavior to be used, and if strictly no SELECT is done for that table in parallel to the transfer. In the absence of the primary and all secondary indexes, any delete or modify access to the table would be executed as a full table scan.

If Oracle is used as DB system, it might also be considered to define a partitioning of a primary index by the client field. If the transfer only refers to a certain client whose data are not read in parallel to the transfer, it is helpful to drop only the index partition referring to that client, while keeping those partitions referring to the other clients.

In order to recreate the index(es) after the transfer is completed, specific DB tools can be used. In case of Oracle, it is recommendable to use the following options of the DDL statement:

o NOLOGGING - During index recreation, no logs will be written to the online redo logs

o PARALLEL - Parallel processing will be used (if required)

o ONLINE - The table will not be locked for parallel-running insert, update or delete operations during index recreation

In case duplicates have occurred, the index creation won't be possible. First, the duplicates need to be eliminated, which can be done by an Oracle statement as shown below for table COEP:

DELETE FROM /*+ full(COEP) parallel(COEP,12) */ COEP
WHERE rowid NOT IN
  (SELECT /*+ full(COEP) parallel(COEP,12) */ MIN(rowid)
   FROM COEP
   GROUP BY MANDT, KOKRS, BELNR, BUZEI)

If it is not allowed to drop indexes, it might still be possible, if Oracle is used as DB system, to set the index(es) to NOLOGGING (DDL command: ALTER INDEX <indexname> NOLOGGING). This makes sure that index changes won't be written into the log files.

Header Data

Release Status: Released for Customer
Released on: 13.11.2012 16:15:09
Master Language: English
Priority: Recommendations/additional info
Category: Performance
Primary Component: CA-LT-MIG Landscape Transformation Migration

Secondary Components:
CA-TDM-FRM-DTL Data Transfer Layer

CA-LT-FRM Landscape Transformation Frame

BC-HAN-LTR Landscape Transformation-based Replication

Valid Releases

Software Component   Release      From Release   To Release   and Subsequent
DMIS                 2006_1_46C   2006_1_46C     2006_1_46C
DMIS                 2006_1_620   2006_1_620     2006_1_620
DMIS                 2006_1_640   2006_1_640     2006_1_640
DMIS                 2006_1_700   2006_1_700     2006_1_700
DMIS                 2006_1_710   2006_1_710     2006_1_710
DMIS                 2010_1_46C   2010_1_46C     2010_1_46C
DMIS                 2010_1_620   2010_1_620     2010_1_620
DMIS                 2010_1_640   2010_1_640     2010_1_640
DMIS                 2010_1_700   2010_1_700     2010_1_700
DMIS                 2010_1_710   2010_1_710     2010_1_710
DMIS                 2011_1_620   2011_1_620     2011_1_620
DMIS                 2011_1_640   2011_1_640     2011_1_640
DMIS                 2011_1_700   2011_1_700     2011_1_700
DMIS                 2011_1_710   2011_1_710     2011_1_710
DMIS                 2011_1_730   2011_1_730     2011_1_730
DMIS                 2011_1_731   2011_1_731     2011_1_731

Related Notes

Number Short Text

1730438 How to analyze and solve performance problems in DTL

890797 SAP TDMS - required and recommended system settings