BW Data Load Process and Settings
Field Services SAP NetWeaver BI
Luc Vanrobays, SAP Brasil

Uploaded by luc-vanrobays on 28-Jan-2018

TRANSCRIPT

Page 1: BW Adjusting settings and monitoring data loads

BW Data Load Process and Settings

Field Services SAP NetWeaver BI

Luc Vanrobays SAP Brasil

Page 2: BW Adjusting settings and monitoring data loads

© SAP 2008 / Page 3

Disclaimer

This presentation outlines our general product direction and should not be relied on in making a purchase decision. This presentation is not subject to your license agreement or any other agreement with SAP. SAP has no obligation to pursue any course of business outlined in this presentation or to develop or release any functionality mentioned in this presentation. This presentation and SAP's strategy and possible future developments are subject to change and may be changed by SAP at any time for any reason without notice. This document is provided without a warranty of any kind, either express or implied, including but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. SAP assumes no responsibility for errors or omissions in this document, except if such damages were caused by SAP intentionally or grossly negligent.

Page 3: BW Adjusting settings and monitoring data loads


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Multiprovider Hints

V. Monitor Archiving Features

VI. Loading Master Data

VII. Loading Transactional Data

Agenda

Page 4: BW Adjusting settings and monitoring data loads

Technical Architecture of Source System Data Loads

[Figure: IDoc communication between the SAP source system and BW. BW sends an RSRQST (request) IDoc; the source system answers with RSINFO (info) IDocs with status 0 or 5, status 1, status 2 and status 9, signalling 'request received', 'data selection scheduled', 'selection running / no. of records' and 'data selection ended'; the data itself is then loaded in data packages.]

Page 5: BW Adjusting settings and monitoring data loads

Further analysis in case of resource problems when extracting data...

Double-check the ROIDOCPRMS settings.

If no entry is maintained, the data is transferred with the standard settings of 10,000 kByte per data packet, 1 info IDoc per data packet, and 100,000 lines.

Check the Transfer Parameter Settings (1) SBIW / ROIDOCPRMS Table

Page 6: BW Adjusting settings and monitoring data loads

For data loads from SAP R/3 you have to maintain these parameters directly in SAP R/3 (not in BW!).

For data marts (e.g. export DataSource scenarios) you have to maintain these parameters in BW (in the system that is used for extraction).

SBIW > General Settings > Maintain Control Parameters for Data Transfer

Maintain the transfer parameter settings via SBIW.

Please check the resource consumption in the SAP source system:

Extraction is usually done in a single batch process.

tRFC transfer runs with parallel dialog processes.

Page 7: BW Adjusting settings and monitoring data loads

Further analysis in case of resource problems when extracting data...

Check the Parameter Settings (2): reducing the standard settings in the InfoPackage

NOTE: The maximum settings cannot be overridden!

RSA12 > select InfoPackage > Shift+F8 (or menu Scheduler)

In the InfoPackage you can individually reduce the 'frequency' and 'max. size' for the extraction process.

It is not possible to override 'max. lines'.

Page 8: BW Adjusting settings and monitoring data loads

This report can be used on the SOURCE SYSTEM to change the settings for specific datasources in the table ROOSPRMS, to override ROIDOCPRMS settings

Check the Parameter Settings (3) – Maintenance of ROOSPRMS Table

The ROOSPRMS Table can be maintained via SE38: Report Z_CIGE_BW_MAINTAIN_ROOSPRMS

Important input parameters:
• DataSource: name of the DataSource / InfoSource
• DWH System: logical system name of the Data Warehouse (BW) system
• Source System: logical system name of the source system
• Update mode:
  F = transfer of all requested data
  D = transfer of the deltas since the last request
  I = transfer of an opening balance for non-cumulative values
  R = repetition of the transfer of a data packet
  C = initialization of the delta transfer
• Delete Record: delete the entry from the table

Override the ROIDOCPRMS Settings

Page 9: BW Adjusting settings and monitoring data loads

Extraction from SAP Source Systems

[Figure: a huge table in the SAP source system is split into data packages 1, 2, 3, ... for extraction to SAP BW; both the source and the BW system have a restricted amount of main memory.]

Typical situation:

Huge extraction (millions of records)

Problem:
• It makes no sense to transfer all data in one step, because of the restricted main memory on the source and target systems.

Recommendation:
• Split the data into several (small) packages that can be held in main memory while being processed in BW
-> less memory consumption, and parallel processing becomes possible.

Page 10: BW Adjusting settings and monitoring data loads

Splitting of Requests & Parallel processing

• Several requests in parallel
• Split up a large request using selection criteria
• Split up a large file into smaller ones and load them in parallel
• Loading with parallel InfoPackages is possible for InfoCubes and ODS objects

[Figure: batch (BTC) processes in the SAP source system feed several dialog (DIA) processes in BW.]

Page 11: BW Adjusting settings and monitoring data loads

Further analysis in case of resource problems when extracting data... (cf. 'How To Automate Data Loads with a Recursive InfoPackage')

Use selection criteria.

'Huge' data loads: split your load into smaller load jobs.

Page 12: BW Adjusting settings and monitoring data loads

Initial Load in Overall (1)

When the initial load still takes too long... you could:

1. At the pre-determined cutover point, take a copy of your source system to a "cutover" client.

2. Run an initialization for delta (without data transfer) from your source system. This creates the delta queue in the R/3 system so that delta activity is accumulated. Users can now resume normal production activity.

3. Run your set-up jobs for the initial load from the "cutover" client. Although the jobs may take hours or days to run, users can still be active in the production system because the delta activity is being captured there. Do your initial load from the set-up done in the "cutover" client.

4. Schedule periodic delta updates from the production system.

Page 13: BW Adjusting settings and monitoring data loads

Initial and Delta Load in Overall (2)

Restrict the data per Infopackage by using selection criteria.

You can apply multiple processes at one time to upload the data in parallel, based on e.g. different sales document numbers in the different InfoPackages.

To load larger amounts of data into the BW system, you should generally use several parallel data requests with different selection criteria. Group the data appropriately in the InfoPackage by selection.

You might want to create an index based on the selection criteria to enhance reading performance.

Parallelization by the use of selection criteria

[Figure: three parallel InfoPackages selecting document # 1-500, # 501-1000 and # 1001-1500.]
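The parallelization scheme above can be sketched as a small helper that cuts a document-number interval into equal selection ranges, one per InfoPackage. This is purely illustrative; the function and the range representation are ours, not an SAP API.

```python
# Split a document-number interval into equally sized selection ranges,
# one per parallel InfoPackage, as in the slide's example of documents
# 1-500, 501-1000 and 1001-1500.

def selection_ranges(low: int, high: int, packages: int):
    """Return (from, to) selection intervals covering low..high."""
    total = high - low + 1
    size = -(-total // packages)  # ceiling division
    ranges = []
    start = low
    while start <= high:
        ranges.append((start, min(start + size - 1, high)))
        start += size
    return ranges

print(selection_ranges(1, 1500, 3))  # [(1, 500), (501, 1000), (1001, 1500)]
```

Each resulting interval would become the selection of one InfoPackage, so the loads can run in parallel.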

Page 14: BW Adjusting settings and monitoring data loads

Source System Performance / Load Impact

Memory settings:

BW extractors need a lot of extended memory (depending strongly on the data package size).

Parallel extraction processes (= InfoPackages scheduled in parallel) consume a large amount of memory.

CPU:

Extraction runs in background processes. Extractors running in parallel can slow down the source system.

Extract data at times when there is less load on the system:
• Avoid extractions from OLTP during dialog time; at least don't extract with parallel InfoPackages during dialog time.
• Try to extract during 'off times' of the source system.

Page 15: BW Adjusting settings and monitoring data loads

Golden Rules

Avoid data loads with a large number of data packages per InfoPackage (rule of thumb: one load should contain fewer than 250 data packages).

BUT (depending on your hardware):

Take care of the maximum size of the data packages. Reduce the number of parallel processes for data packages with large sizes! (Too large values and too many processes lead to memory problems and short dumps.)

Split 'huge' data loads (e.g. 'inits') into several smaller InfoPackages.

Review / check the data package settings periodically.

On the other hand, if you don't split data into packages, you:
• run the risk of one erroneous record forcing the entire package to be rejected
• may face the need for large rollback segments.

A rollback segment is space in the database containing images of data ("before-images") prior to the current transaction being committed.

Page 16: BW Adjusting settings and monitoring data loads

Situation observed: system (load) performance decreases!

Small data packages in large requests often cause lock situations and a severe decrease in performance due to the monitor overhead.

Transaction SM66: 'Stopped ENQ'

Page 17: BW Adjusting settings and monitoring data loads

Observation: Datapackage Processing Runtimes increase during Data Loads

Data package 4:
• I -> II: 0:41:23 - 0:40:42 = 41 s
• I -> III: 0:43:24 - 0:40:42 = 2 min 42 s

Data package 2033:
• I -> II: 21:11:32 - 21:05:33 = 5 min 59 s
• I -> III: 21:23:27 - 21:05:33 = 17 min 44 s

E.g. comparing the update times for data package no. 4 vs. no. 2033: the runtimes increased by a factor of 6.5 to 8.7 compared to the beginning of the load.

[Figure: monitor timestamps I, II and III for data packages 4 and 2033.]

Page 18: BW Adjusting settings and monitoring data loads

How to check for SAP enqueues and wait situations?

• Monitor the current system-wide enqueue situation with SM66 or SM50.
• Run a Performance Trace (transaction ST05) with the option 'Enqueue Trace' switched on:
  – Display the Extended Trace List to see the timestamp for each statement.
  – Look for all exclusive locks and note when they are released, either explicitly or implicitly at the end of the SAP LUW.
  – Calculate the locking time.

Page 19: BW Adjusting settings and monitoring data loads

Locking prevents parallel processing

SAP enqueue locks mean that processes can block each other.

[Figure: timeline of work processes WP1, WP2 and WP3, each updating record 1 and committing; WP2 and WP3 incur wait time until the preceding lock is released.]

Page 20: BW Adjusting settings and monitoring data loads

SM12: large data package numbers and a long list of locks, especially on the RSMON* tables

Monitor SM12 for:
• data packages with large numbers
• wait situations and long locking times for ALL load processes, caused especially by the load with the large package number
• a long list of locks on the RSMON* tables, due to the high administration overhead for some data packages

Page 21: BW Adjusting settings and monitoring data loads

How to identify finished loads with a large number of data packages?

• SE16: table RSMONICDP
• Set the filters on DP_NR (e.g. '250' for loads with more than 249 data packages) and on TIMESTAMP
• RSRQ: review / check the requests
• Check the DTP / InfoPackage settings

Page 22: BW Adjusting settings and monitoring data loads

Recommendations for BI Consultants (1)

Monitor & detect the critical data load jobs.

CANCEL & restart specific data load jobs, BUT ONLY IF the data is reloadable without data loss, with NEW adjusted transfer / package settings.

Adjust the transfer parameters (e.g. sizes):
• in the InfoPackage (decrease)
• via report Z_CIGE_BW_MAINTAIN_ROOSPRMS (increase, or reset to maximum)

Global settings (via SBIW): changes should be made ONLY after extended test cases.

Page 23: BW Adjusting settings and monitoring data loads

Recommendations for BI Consultants (2)

• Double-check memory settings.
• Optimal settings also depend on the hardware (CPU and RAM) and the system load.
• Monitor the memory consumption (ST02, SM66) in your system.
• Schedule huge data loads carefully, taking the system load into account (see slide 15, 'Source System Performance / Load Impact').
• Avoid memory paging caused by:
  • too many parallel processes
  • too large package sizes
  • heavy system loads
• Enough physical memory should be accessible and left over for other processes.
• Avoid deadlocks caused by I/O overload due to a high number of parallel processes and/or competing read/update accesses.

Page 24: BW Adjusting settings and monitoring data loads

Most important ROIDOCPRMS settings

• Max. (kB) (= maximum package size in kB):
When you transfer data into BW, the individual data records are sent in packets of limited size. You can use this parameter to control how large a typical data packet is. If no entry is maintained, the data is transferred with a default setting of 10,000 kB per data packet. The memory requirement depends not only on the data packet settings, but also on the size of the transfer structure and the memory requirement of the relevant extractor. The unit of this value is kB.

• Max. lines (= maximum package size in number of records):
With large data packages, the memory consumption depends strongly on the number of records transferred in one package. The default value for Max. lines is 100,000 per data package. The maximum memory consumption per data package is around 2 * 'Max. lines' * 1000 bytes.

• Max. proc. (= maximum number of dialog work processes allocated to this data load for processing the extracted data in BW(!)):
Enter a number larger than 0. The maximum number of parallel processes is set to 2 by default. The ideal value depends on the configuration of the application server used for transferring data.
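The memory rule of thumb above can be expressed as a small calculation. This is a sketch: the 2 * 'Max. lines' * 1000 byte estimate comes from the slide, while the function name is ours.

```python
# Rough upper bound on memory per data package, per the rule of thumb:
# about 2 * 'Max. lines' * 1000 bytes. The 1000-byte record width is
# the slide's rough constant, not a measured value.

def package_memory_bytes(max_lines: int) -> int:
    """Estimate the worst-case memory footprint of one data package."""
    return 2 * max_lines * 1000

# With the default Max. lines = 100,000 this gives about 200 MB per package:
print(package_memory_bytes(100_000))       # 200000000

# Four packages processed in parallel would need roughly four times that:
print(4 * package_memory_bytes(100_000))   # 800000000
```

This illustrates why reducing 'Max. lines' (or the number of parallel processes) is the first lever when memory problems or short dumps occur.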

Page 25: BW Adjusting settings and monitoring data loads

Recommendations for BW Consultants (3)

• Calculate the size of the transfer structure; this information is required for the data package size calculation.
• See 'Example: How-To Calculate the Transfer Structure Size' (slide 28) in this presentation for more details.
• Communicate the information about large (several hundred kByte) transfer structure sizes to the BW admins.
• Huge data loads with large transfer structures should be monitored with special attention.
• Split large data loads into smaller pieces, e.g. do the init load with data plus several repair InfoPackages with selections.

Page 26: BW Adjusting settings and monitoring data loads

Recommendations for BW Consultants (4) – Data Package Settings

NOTE:

• In the InfoPackage settings it is only possible to reduce the control (SBIW / ROIDOCPRMS) parameters, not to increase them.
• The contents of table ROOSPRMS overrule the settings of ROIDOCPRMS.
• Increasing the packet size requires more memory than using smaller packets.
• The real package size depends on the data extractor, and there are extractors whose package size cannot be influenced. This is not an error or missing functionality, but depends on the structure of the data.

Page 27: BW Adjusting settings and monitoring data loads

Recommendations for BW Consultants (5) – Data Package Settings

NOTE:

• If you load a 50 MB package, then normally the work process handling this package needs 3 to 5 times that amount of memory (if you don't filter records or produce more records in the update or transfer rules).
• So if you load with 4 parallel processes, you need at least 4 times 300 MB of extended memory (each work process needs about 300 MB).
• If you load from a flat file, then the table RSADMINC (field IDOCPACKSIZE) controls the size of the data packages. You can change this setting using transaction RSCUSTV6. This setting can also be overridden by the settings in the ROOSPRMS table.

Page 28: BW Adjusting settings and monitoring data loads

Determination of the data package size

Facts:
• The data package size is limited by Max. (kB) or Max. lines for most standard extractors.
• The data package size is influenced by the transfer structure size.

Determination of the data package size:

'DP size calculated' = Max. (kB) * 1000 / transfer structure size

Two cases:
• DP size = 'DP size calculated', if 'DP size calculated' < Max. lines
• DP size = Max. lines, if 'DP size calculated' > Max. lines

Defaults: Max. (kB) = 10,000; Max. lines = 100,000. For details see Note 409641.

Note: In some applications the data package size is hard-coded and not influenced by the customizing settings in table ROIDOCPRMS. For details see Note 417307.

Page 29: BW Adjusting settings and monitoring data loads

Examples for Package Size Calculation (OSS Note 409641)

The general formula is: packet size = MAXSIZE * 1000 / transfer structure size, but not more than MAXLINES. E.g. if MAXLINES is smaller than the result of the formula, only MAXLINES records are transferred per packet into BW.

Here are 3 examples of generic data extraction, with a transfer structure size of 250 bytes:

Case | MAXSIZE (BW / OLTP) | MAXLINES (BW / OLTP) | Calculated package size            | Decision criterion              | Package size
1    | -     / 20000       | - / 60000            | MAXSIZE(OLTP) * 1000 / 250 = 80000 | MAXLINES = 60000 < 80000        | 60000
2    | -     / 20000       | - / -                | MAXSIZE(OLTP) * 1000 / 250 = 80000 | MAXLINES default 100000 > 80000 | 80000
3    | 10000 / 20000       | - / 60000            | MAXSIZE(BW) * 1000 / 250 = 40000   | MAXLINES = 60000 > 40000        | 40000
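The formula and the precedence shown in these cases can be sketched in code. This is a hedged illustration: the precedence of a maintained BW value over the OLTP value is inferred from case 3, and the function is ours, not an SAP API.

```python
# Sketch of the package size determination from OSS note 409641:
# packet size = MAXSIZE * 1000 / transfer structure size, capped at
# MAXLINES. Defaults and precedence (a maintained BW value taking
# priority over the OLTP value) follow the slide's three examples.

DEFAULT_MAXSIZE = 10_000    # kB
DEFAULT_MAXLINES = 100_000  # records

def package_size(structure_bytes, maxsize_bw=None, maxsize_oltp=None,
                 maxlines_bw=None, maxlines_oltp=None):
    # Effective settings: BW entry if maintained, else OLTP, else default.
    maxsize = maxsize_bw or maxsize_oltp or DEFAULT_MAXSIZE
    maxlines = maxlines_bw or maxlines_oltp or DEFAULT_MAXLINES
    calculated = maxsize * 1000 // structure_bytes
    return min(calculated, maxlines)

# The three cases from the table (transfer structure = 250 bytes):
print(package_size(250, maxsize_oltp=20_000, maxlines_oltp=60_000))  # 60000
print(package_size(250, maxsize_oltp=20_000))                        # 80000
print(package_size(250, maxsize_bw=10_000, maxsize_oltp=20_000,
                   maxlines_oltp=60_000))                            # 40000
```

Remember that some extractors hard-code their package size (Note 417307), in which case this calculation does not apply.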

Page 30: BW Adjusting settings and monitoring data loads

Example for Package Size Calculation

EXAMPLE:
• You want to keep the number of data packages at or below 200.
• You need to load 20,000,000 records.
• The transfer structure is 500 bytes.

CALCULATION:
• 20,000,000 records / 200 data packages = 100,000 records per package
• Limit MAXLINES: default value 100,000 => OK
• Limit for package size: MAXSIZE = 'packet size' * 'transfer structure size' / 1000
• MAXSIZE = 100,000 * 500 / 1,000 = 50,000

RESULT:
The settings MAXSIZE = 50,000 and MAXLINES = 100,000 would be a good proposal for this load (prerequisite: enough memory can be allocated).
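The inverse calculation above (deriving MAXSIZE from a target package count) can be written as a small helper. Function name and error handling are ours, added for illustration.

```python
# Given a total record count, a target number of data packages and the
# transfer structure size, derive the MAXSIZE (kB) setting, as in the
# worked example above.
import math

def maxsize_for_target(total_records, target_packages, structure_bytes,
                       maxlines=100_000):
    records_per_package = math.ceil(total_records / target_packages)
    if records_per_package > maxlines:
        raise ValueError("target needs more records per package than MAXLINES allows")
    # MAXSIZE (kB) = records per packet * structure size (bytes) / 1000
    return records_per_package * structure_bytes // 1000

# The slide's example: 20,000,000 records, 200 packages, 500-byte structure.
print(maxsize_for_target(20_000_000, 200, 500))  # 50000
```

As the slide notes, such a proposal only holds if enough memory can be allocated for packages of that size.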

Page 31: BW Adjusting settings and monitoring data loads


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Monitor Archiving Features

V. Loading Master Data

VI. Loading Transactional Data

Agenda

Page 32: BW Adjusting settings and monitoring data loads

Process Chain Basics

A process chain is a logical grouping of processes, jobs or steps.

Each step is an instance of a 'process type':
• A process type generally corresponds to a BW activity.
• Examples of process types are 'Load Data', 'Activate ODS', etc.

Process types are maintained in table RSPROCESSTYPES:
• To view the process type definitions, use transaction RSPC.
• Choose the menu option Settings > Maintain Process Types.
• As of SP14 of NW BI 7.0, SAP supplies 47 process types.

Page 33: BW Adjusting settings and monitoring data loads

Process Chain Details

Process variant: an instance of a process type; it contains the specific values / parameters to execute a process type (step).

To see the technical details of each step (the process type's variant), choose the menu option View > Detailed View On.

At scheduling time the system creates a batch job named BI_PROCESS_<process type name>:
• The job listens for the preceding step's event (except for the start variant).
• The batch job runs the program RSPROCESS.
• The process chain name, variant, wait time etc. are passed to the program.

Page 34: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Repairing and Repeating Instances

Resuming failed process types:

Repeating a process type:
• Repeat = starting the job with a new instance and a new request number
• The former 'restart' function

Repairing a process type:
• Resume the process within the same instance
• Some processes, like the new data transfer process, use this and offer features to repair broken instances

The code behind a process type's functionality is usually an ABAP OO class. It determines whether the process type sends back a success or an error message.

Page 35: BW Adjusting settings and monitoring data loads

BI 7.0 Features

Process chain features:
• Process status valuation (*)
• Execution user
• Copy process chain

New process types:
• Decision Between Multiple Alternatives (*)
• Construct Database Statistics
• Deletion of Requests from the Change Log (*)
• Execute Planning Sequence
• Switch Real-Time InfoCube to Plan Mode / Load Mode
• Close Request of Real-Time InfoPackage
• Interrupt Process
• Trigger Event Data Change (*)
• Archive Data from an InfoProvider
• Check Whether the Process Chain Is Already Active (*)
• Data Transfer Process

(*) Covered in detail in later sections

Page 36: BW Adjusting settings and monitoring data loads

Process Chain Settings - Background User

A background user runs a process chain. Scenarios:
• special users for load balancing
• being able to see which user scheduled a certain process chain and is in charge of it

Three options:
• BWREMOTE (default)
• the user actually scheduling the process chain
• a manually specified user

Page 37: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Handling Of Erroneous Processes

Process status valuation scenario:

In a meta chain, prevent the whole chain from being stopped due to errors when 'only' unimportant process steps of a sub-chain have failed. 'Unimportant' means that the subsequent process will run even if the previous step is in error, i.e. the dependency is 'Always' or 'With Errors'.

Valuate processes with errors as successful for the overall chain if:
• they have a succeeding event for treating errors, or
• they have a succeeding event that should be scheduled in any case.

Mailing and alerting are not influenced by this attribute. Select the check box in the subsequent pop-up.

Page 38: BW Adjusting settings and monitoring data loads

Is the Previous Run Still Active ?

Scenario: do not start the process chain if the previous run is still active. From NW BI 7.0 SP14 onwards, use the new process type 'Is the Previous Run in the Chain Still Active?'. As shown in the figure, this process type (PC_ACTIVE) can be found under 'General Services'.

Page 39: BW Adjusting settings and monitoring data loads

Is the Previous Run Still Active ? Cont…

How to use this process type:

Step 1) Drag the process type 'Is the Previous Run in the Chain Still Active?' into the design window.
Step 2) When you connect this process type to the next step, a pop-up appears. Here, select the radio button 'Successful'.
Step 3) Select the line 'Inactive' (value 1) as shown in the figure, and then press the green check mark button.

Page 40: BW Adjusting settings and monitoring data loads

Is the Previous Run Still Active ? Cont…

How to use this process type (continued):

Step 4) Create another process type for the action to take if the process chain is already active. An example of this step would be an ABAP program that sends an email alert.
Step 5) Connect the process type 'Is the Previous Run in the Chain Still Active?' to the process type created in step 4. This time, select the option 'Active' (value 2) and press the green check mark. The process chain evaluates this at the end of step 5.

Page 41: BW Adjusting settings and monitoring data loads

Integration to BEx Broadcasting

Scenario and solution: how can we execute reports just after finishing the data load?
• Use the process type 'Trigger Event Data Change' in the process chain.
• While scheduling the BEx Broadcaster setting, select the option 'Execute with Data Change in the InfoProvider'.

The steps to achieve this are listed below:

Step 1: Include the process type as a step after the data load (see figure).

Page 42: BW Adjusting settings and monitoring data loads

Integration to BEx Broadcasting Cont…

Solution continued:
Step 2: In the variant, specify the InfoProvider (Fig. 1).
Step 3: Select the option 'Execute with Data Change in the InfoProvider' (Fig. 2).

Note: if you schedule many broadcast settings based on the data change, this can affect your system performance.

[Fig. 1: Enter the InfoProvider in the variant. Fig. 2: Scheduling options in BEx Broadcaster.]

Page 43: BW Adjusting settings and monitoring data loads

Enhanced Administration Features – Polling Flag

The main process waits until the other processes have finished.

Use: If the chain contains distributed processes (for example, a loading process), you use this indicator to specify whether the starting background process should be held back until the actual work process is complete in other internal sessions, or whether it should be released immediately.

Setting the indicator has the following advantages:
• External scheduling tools that react only to the SAP-internal event 'Batch Process Complete' are also informed about the status of distributed processes.
• Very secure process runs: the starting background process asks about the status of the current processes every two minutes (polling).

Setting the indicator also has the following disadvantages:
• Increased demand on resources: although the CPU is not under pressure during the waiting time, it is during the status checks that run at two-minute intervals.
• You require one more background process.

Note: if you implement a distributed process and want to use this indicator, implement the interface IF_RSPC_GET_STATUS and refer to the documentation for the HOLD indicator.

Page 44: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Handling Of Erroneous Processes

Synchronous execution scenario: reducing the process consumption of 'small' process chains.
• Recommended for 'simple' upload processes; used for small volumes of data.
• The process chain is processed synchronously (occupying one single dialog process) and serially (loss of internal parallelism).
• Logs to monitor the execution of the chain are still created.
• The synchronous execution of a process chain can be started from the menu bar.

Page 45: BW Adjusting settings and monitoring data loads

How To Debug a Process Chain ?

Synchronous execution: how can we set a breakpoint in a process chain?

Step 1: In RSPC, select the process chain and switch to change mode.
Step 2: Select the process type that you want to debug, right-click and choose the 'Debug Loop' option.
Step 3: Enter a wait time greater than 0 seconds, and then activate the process chain.

Page 46: BW Adjusting settings and monitoring data loads

How To Debug a Process Chain ? Cont…

Step 4: Go to the menu option Execution > Execute Synchronous to Debugging.
Step 5: The debug screen now pops up.

Page 47: BW Adjusting settings and monitoring data loads

How To stop a Running Process Chain ?

Solution within a process chain: develop a program that uses the function module RSPC_API_CHAIN_INTERRUPT.
• Accept the process chain as a select-option variable.
• Call the function module for the selected chain(s).
• The parameter I_KILL = 'X' terminates data loading processes.

Alternate solution: using transaction RSPC, select the process chain, switch to change mode, go to the menu Execution, then select 'Remove from Schedule'.

Page 48: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Synchronous Execution - 2

Synchronous execution benefits: fast and slim execution of the processes within the chain.

[Figure: the design-time chain (Starter, Process1, Process2, AND, Process4) and the corresponding runtime sequence of steps 1 to 6 executed within one process.]

Page 49: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Check on Batch Process Consumption

Batch process consumption check scenario:
• Prevent a process chain from consuming more background processes than are available (which could lead to locking issues).
• Control of parallel background processes: when checking the process chain, the number of parallel processes that would be used in the optimal case is calculated (also recursively in the case of sub-chains).
• If it exceeds the number of batch processes available on the chosen server, the level which exceeds the resources is marked with a warning (message: 'Too many parallel batch processes for server XYZ', message no. RSPC118).
• Check OSS note 621400 for more information on the locking problem and the calculation of how many batch processes are needed to run a particular chain.

Page 50: BW Adjusting settings and monitoring data loads

Deadlocks with Process Chains (OSS Note 621400)

A deadlock can occur in job processing if the process chain starts a large number of parallel sub-chains, because an additional background process is briefly required when a sub-chain is started. Make sure that there are enough background processes in the system. If the minimum requirement cannot be met, we recommend that you adjust the chain to reduce the number of parallel sub-chains.

• Minimum (with this setting, the chain runs more or less serially):
number of parallel sub-chains at the widest part of the chain + 1. Sub-chains that start additional sub-chains must be expanded for the calculation.

• Optimal (higher parallelism):
number of parallel processes at the widest part of the chain + 1. Every sub-chain counts in this formula with the number of parallel processes at its widest point.

Example:

[Figure from the note: a trigger starts two chained load processes, a further load process and a sub-chain in parallel; they are joined by AND processes into a final load process. The number of processes at the successive widest points is 1P, 2P+1U, 3P, 2P (P = load process, U = sub-chain).]

In this case, you require at least 1U + 1 = 2 batch processes. The optimal setting, assuming the sub-chain has 3 load processes at its widest point, would be 2 (load processes) + 3 (load processes of the sub-chain) + 1 = 6 batch processes.
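The two sizing rules above can be written as a small arithmetic sketch. The function names and the simplified chain model (just the counts at the widest point) are ours, not from the note.

```python
# Sketch of the batch-process sizing arithmetic from OSS note 621400:
# we model a chain by the number of parallel load processes and
# parallel sub-chains at its widest point.

def minimum_batch_processes(parallel_subchains_at_widest: int) -> int:
    # Minimum: parallel sub-chains at the widest part of the chain + 1.
    return parallel_subchains_at_widest + 1

def optimal_batch_processes(parallel_loads_at_widest: int,
                            subchain_widest_loads: list) -> int:
    # Optimal: parallel processes at the widest part + 1, where every
    # sub-chain counts with the parallel processes at ITS widest point.
    return parallel_loads_at_widest + sum(subchain_widest_loads) + 1

# The note's example: one sub-chain (1U) -> minimum 2 batch processes;
# 2 parallel loads plus a sub-chain with 3 loads -> optimal 6.
print(minimum_batch_processes(1))        # 2
print(optimal_batch_processes(2, [3]))   # 6
```

Comparing these numbers against the batch processes actually configured on the server tells you whether the chain risks the RSPC118 warning described on the previous slide.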

Page 51: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Alerting Within Process Chains – 1

Alerting: trigger SAP alerts from process chains.
• Uses the alert framework.
• Alerts can be collected in the alert inbox of the user who scheduled the process chain (transaction ALRTINBOX).

Set-up:
• attribute of the process chain
• alert categories

Capabilities:
• An alert is sent for every process type that ends with an error.
• A standardized text is sent for every error, with the relevant data.
• Errors of batch management are also captured and alerts are triggered.

Page 52: BW Adjusting settings and monitoring data loads

Enhanced Administration Features - Alerting Within Process Chains – 2

Alerting with process chains: the alert category is derived by the system as follows.
• If there is an error in batch management, the standard category BWAC_PROCESS_CHAIN_FRAMEWORK is used.
• Otherwise the new table RSPC_ALERT_CAT is accessed to find an alert category for the process which resulted in an error.
• This table can be used by customers to assign their own alert categories or customer-defined classes to customer-specific and also to standard process types. In the second case no modification has to be made. If the 'No alert' flag is set, no message is sent to the user.
• If there is no relevant entry in this table, the standard alert category AC_PROCESS_CHAIN_ERROR is used.


Enhanced Administration Features - Alerting Within Process Chains – 3

User assignment

Properties of the alert category

Alerting with process chains: maintenance of alert categories

Alert categories can be maintained in transaction ALRTCATDEF


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Monitor Archiving Features

V. Loading Master Data

VI. Loading Transactional Data

Agenda


DTP DataLoads- Background Processing


DTP DataLoads- Background Processing


Control PARALLEL degree during activation

When you activate requests in an ODS object, the system does not use parallel processes to access the database table to be read (activation queue). This has a negative effect on performance.

With Support Package 14, the parameter for the "number of parallel processes" from transaction RSCUSTA2 is used.

As of Support Package 16, the "ORA_PARALLEL_DEGREE" parameter is read from table RSADMIN (see SAP Note 544521) and used for the table access.

0: DEFAULT (Oracle determines the PARALLEL degree)
1: no PARALLEL degree
2, 3, 4, ...: value of the PARALLEL degree


Further information about the parallel degree can be found in the reporting performance improvement documentation.
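The three parameter values above map to Oracle parallelism settings. The following Python sketch is purely illustrative of that mapping (the function name is ours, not SAP's or Oracle's):

```python
# Illustrative mapping of the RSADMIN ORA_PARALLEL_DEGREE value to the
# corresponding Oracle parallelism clause. Hypothetical helper, not SAP code.

def parallel_clause(degree: int) -> str:
    if degree == 0:
        return "PARALLEL"         # DEFAULT: Oracle determines the degree
    if degree == 1:
        return "NOPARALLEL"       # no parallel degree
    return f"PARALLEL {degree}"   # explicit degree 2, 3, 4, ...

print(parallel_clause(0))   # PARALLEL
print(parallel_clause(1))   # NOPARALLEL
print(parallel_clause(4))   # PARALLEL 4
```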


DSO Objects – Additional Information

/BI?/A<TECH_NAME>00 for the old and new A-Table

/BI?/A<TECH_NAME>40 for the new U(pdate)-Table which replaces the old M-Table

/BI?/BXXXXXXXX for the old and new Changelog (PSA-Table)

/BI?/A<TECH_NAME>50 will be generated like …40-Table

RSODSACTUPDTYPE Contains the update type

RSODSACTREQ Requests which are loaded to the ODS

RSREQICODS Monitor: Saving of the updated IC and ODS per request

Consistency Check / Repair of generated ODS programs: SAP Note 698576

Important tables


Loading transactional data into ODS objects

[Figure: the Staging Engine loads requests Req1-Req3 into the activation queue (key: ReqID, PackID, RecNo); activation writes them to the active data table (semantic key, e.g. document number) and to the change log.]


ODS Object Update Details

Upload to Activation queue

Data from different requests is uploaded to the activation queue in parallel.

Activation

During activation the data is sorted by the logical key of the active data plus the activation-queue key.

This guarantees the correct sequence of the records and allows inserts only, instead of updates.

Before and after images are written to the change log.

The request IDs in the activation queue and in the change log differ from each other.

After the update, the data in the activation queue is deleted.

Activation queue /BI*/A<ODSname>40 (Req.ID | Pack.ID | Rec.No | Doc.No | Value):

REQU1 | P 1 | Rec.1 | 4711 | 10
REQU2 | P 1 | Rec.1 | 4711 | 30

Change log /BI*/B<number>:

ODSRx | P 1 | Rec.1 | 4711 | 10
ODSRy | P 1 | Rec.1 | 4711 | -10
ODSRy | P 1 | Rec.2 | 4711 | +30

Active data /BI*/A<ODSname>00 (Doc.No | Value):

4711 | 10 (after the first activation), then 4711 | 30 (after the second)
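The before/after-image mechanics shown above can be simulated in a few lines. This is an illustrative Python sketch of the bookkeeping, not SAP code; the data structures are simplified (one key field, one key figure).

```python
# Illustrative simulation of standard DSO activation: records are merged
# into the active table by semantic key, and a before image (reversal of
# the old record) plus an after image are written to the change log.

def activate(active: dict, queue: list) -> list:
    """active: {doc_no: value}; queue: [(doc_no, value), ...] in load order.
    Returns the change-log rows produced by this activation."""
    change_log = []
    for doc_no, value in queue:
        if doc_no in active:
            change_log.append((doc_no, -active[doc_no]))  # before image
            change_log.append((doc_no, +value))           # after image
        else:
            change_log.append((doc_no, value))            # new key: after image only
        active[doc_no] = value
    return change_log

# Example from the slide: document 4711 loaded with 10, then overwritten with 30.
active = {}
log = activate(active, [(4711, 10)])
log += activate(active, [(4711, 30)])
print(active)   # {4711: 30}
print(log)      # [(4711, 10), (4711, -10), (4711, 30)]
```

The sum of the change-log values per key equals the current active value, which is what makes delta updates to downstream InfoCubes possible.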


Standard DataStore Object – Activation

Performance Aspects of activation process

Main process: full (sorted) table scan of the activation queue

Fetch Packages

Hand over package for SID creation

Store package in cluster

When finished: start parallel processes

Parallel Processes

Read and delete package from cluster

Check active table for existing values

Fill buffer tables for INS/DEL/UPD

Write buffer tables (active table/change log)



Activation: Determine SIDs


SIDs are required for reporting

Checking/creating SIDs (referential integrity with master data) is required when the BEx Reporting flag is set.

This is done at the beginning of the activation

Parallelization per InfoObject (on the same server only)

With the attribute-only flag, no SIDs are created.

Parallel SID determination


Activation With Unique Data Records

The unique flag means only one record for each key combination.

Performance benefits for activation process

No sorting necessary

No reading of existing records from active data

No before image in change log

(Mass) Inserts only

Set the unique flag only when you are sure there are no duplicates.
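Compared with the standard activation simulated earlier, the unique-records path reduces to a mass insert. An illustrative Python sketch (not SAP code) of why the flag is cheaper:

```python
# Illustrative sketch of activation with the "unique data records" flag:
# no sorting, no lookup of existing records, no before image in the
# change log — the activation queue is mass-inserted as-is.

def activate_unique(active: dict, queue: list) -> list:
    """Caller guarantees every doc_no in queue is new, so plain inserts suffice."""
    change_log = []
    for doc_no, value in queue:
        active[doc_no] = value
        change_log.append((doc_no, value))   # after images only
    return change_log

active = {}
log = activate_unique(active, [(1, 10), (2, 20)])
print(log)   # [(1, 10), (2, 20)] -- no before images
```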



Settings for Activation – SID – Rollback in tcode RSODSO_SETTINGS


Mass maintenance per Process Type with tcode RSBATCH


RSBATCH Mass maintenance per Process Type


How to identify chains that use a particular Process Type?

Scenario: identify the process chains that use the process type 'OS Command'.

Solution: use TCode RSPC2. In RSPC2, enter the process type value (e.g. COMMAND).


How can we incorporate decision branches in a Process Chain ?

Scenario examples: compression only if today is Sunday; update the cube only if data exists in the DSO change log.

Solution: use 'Decision Between Multiple Alternatives'. Define a custom formula that returns the value 1 if any data exists in the change log, and use that function in the formula editor of this process variant.
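The two decision criteria can be expressed as simple predicates. The following is a hedged Python sketch of the logic only, not the process-chain formula-editor syntax; the function names are ours.

```python
# Illustrative decision predicates for process-chain branching:
# return 1 when the branch should run, else 0.

import datetime

def compress_today(today: datetime.date) -> int:
    """Compress only if today is Sunday (weekday() == 6)."""
    return 1 if today.weekday() == 6 else 0

def update_cube(change_log_rows: int) -> int:
    """Update the cube only if the DSO change log contains data."""
    return 1 if change_log_rows > 0 else 0

print(compress_today(datetime.date(2008, 6, 1)))   # 2008-06-01 was a Sunday -> 1
print(update_cube(0))                              # empty change log -> 0
```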


Decision Between Multiple Alternatives?

Solution: maintain a Decision variant (Fig 1). While connecting the Decision variant to the subsequent step, a pop-up appears as shown in (Fig 2).

Final outlook of the process chain (Fig 3)


Process Chain Log Deletion - Maintenance

Problem:

Process chain logs grow over time, deteriorating monitoring response time.

Solution:

Use the program RSPC_LOG_DELETE to delete old logs, determining a retention policy (e.g. three months).

Note: the deletion program may deactivate some steps of the process chain it processes, requiring reactivation.


Process Chain Analysis and Monitoring

Transactions for Monitoring:

RSPC to display process chains

RSPC1 to display one process chain log at a time

RSPCM

Transactions for Runtime Analysis:

RSPC2

BWCCMS (similar to RSPCM but accessed in CCMS)

ST13 for monitoring and comparing response times


Process Chain Monitoring – RSPCM – RSM37

RSPCM - "One Stop Shop" to monitor critical process chain runs:

Requires selecting the chains defined as critical. Optional step: add an e-mail message.

RSM37 - because SM37 does not give much insight into process-chain-related jobs and makes it difficult to relate a background job to its parent process chain.

Can also enter a variant here

Enter the variant value here


Analysis and Service Tools Launch Pad – ST13

Choose BW-TOOLS, or access via SE38 with /SSA/BWT. With the PC or Process Type Compare Runtime feature:

- Per Process Chain ID
- Per Process Type

Runtime value

Process type details and runtime

Link to process log

Up to 5 PC selections to compare day-to-day runtime


Analysis and Service Tools Launch Pad – ST13 Cont…

Analyze Runtime Per Process Type

Process Type Details and Runtime


Analysis and Service Tools Launch Pad – ST13 Cont…

Compare Runtime Per Process Chain


Maintain Parallel Settings Per Process Type

Scenario: increase or decrease the level of parallelism per process type; alter the job class (priority).

Solution: transaction RSBATCH


Batch Management in SAP NetWeaver 2004s

Select process e.g. Initial Fill of Aggregate

Parameter settings:

Number of Processes (BATCH!)

Job class

Host / Server / Server Group (SM61)

These settings are the defaults but can be overridden for each process chain variant (table RSBATCHPARALLEL)

Transaction RSBATCH


Repair a terminated Load

To repair (partially finished) loads in DIA (above) or in Process Chains


DTP manually repaired

Process failed at 12:51:56, restarted at 14:58:02, successful at 15:00:37


DataStore Specific Transactions

RSODSO_SHOWLOG

Shows the log of requests which have been processed in a DataStore object

Detailed information about main process, data packages, runtime behaviour and runtime settings

Log also accessible via the manage screen


DataStore Specific Transactions

RSODSO_BRKPNT

Can be used to set breakpoints in DataStore application logic

Parameters:

USER

Holds only for the specified user

DATASTORE

Holds only for the specified DataStore

BREAKID

Holds at the specified operation

DATA PACKAGE

Holds only for the specified data package (0 means main process)

WAIT TIME IN SECONDS

Time spent waiting for the debugging request

Report: rssm_sleep_debug

For example: set breakpoint with id “6” to check the runtime of database statements


DataStore Specific Transactions

RSODSO_RUNTIME

Can be used to measure runtime of activation or SID creation for DataStore object

Parameters:

USER – measurement only for the specified user
CONTEXT – measurement context (see screenshot)
PARAMETER
START / STOP OF MEASUREMENT INTERVAL – can be used to enable measurement only for a specific time frame

              Activation                           SID creation
Parameter 1:  Technical name of DataStore object   Technical name of DataStore object
Parameter 2:  Request SID                          Data package
Parameter 3:  Data package                         InfoObject name


DataStore Specific Transactions

RSODSO_RUNTIME

Results can be found in table RSODSO_RUNTIME


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Multiprovider Hint

V. Monitor Archiving Features

VI. Loading Master Data

VII. Loading Transactional Data

Agenda


MultiProvider Hint (1)


An entry in RRKMULTIPROVHINT only makes sense if a few attributes of this characteristic (that is, only a few data slices) are affected in the majority of, or the most important, queries (SAP Note 911939; see also 954889 and 1156681).

Problem: to reduce the data volume in each InfoCube, data is partitioned, for example by time period. A query now has to search all InfoProviders to find the data (e.g. billing docs from 2007). This is very slow.

Solution: we can add "hints" to guide query execution. In the RRKMULTIPROVHINT table, you can specify one or several characteristics for each MultiProvider which are then used to partition the MultiProvider into BasicCubes.

If a query has restrictions on such a characteristic, the OLAP processor already checks which part cubes can return data for the query. The data manager can then completely ignore the remaining cubes.
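The pruning idea can be illustrated with a small simulation. This is a conceptual Python sketch, not the OLAP processor; cube names and the data layout are invented for illustration.

```python
# Illustrative part-cube pruning for a hint characteristic: only cubes
# whose dimension contains a requested value need to be read.

def prune(part_cubes: dict, hint_char: str, query_filter: dict) -> list:
    """part_cubes: {cube: {characteristic: set(values)}};
    query_filter: {characteristic: set(values)} from the query restrictions."""
    wanted = query_filter.get(hint_char)
    if wanted is None:
        return list(part_cubes)          # no restriction: read every part cube
    return [cube for cube, dims in part_cubes.items()
            if dims.get(hint_char, set()) & wanted]

# Hypothetical logically partitioned cubes, sliced by fiscal period.
cubes = {
    "SALES_2007": {"0FISCPER": {"2007001", "2007012"}},
    "SALES_2008": {"0FISCPER": {"2008001", "2008012"}},
}
print(prune(cubes, "0FISCPER", {"0FISCPER": {"2007001"}}))   # ['SALES_2007']
```

A query restricted to 2007 touches only the 2007 cube; without the hint, every part cube would be read.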


Multiprovider Hint (2)and Logical Partitioning

[Figure: a query on MultiProvider Y6CO1MC0 over logically partitioned InfoCubes AA0-AA5 and 2AA0, each with Org, Time, Mat and Cust dimensions]

YEDW_STEER (used for YGTIPROV):

Partition | Info_from | Info_To
WC1AA | Y5CO1AA0 | Y6CO1MC0
WC1AA | Y5CO1AA1 | Y6CO1MC0
WC1AA | Y5CO1AA2 | Y6CO1MC0
WC1AA | Y5CO1AA3 | Y6CO1MC0
WC1AA | Y5CO1AA4 | Y6CO1MC0
WC1AA | Y5CO1AA5 | Y6CO1MC0
WC1AA | Y5CO2AA0 | Y6CO1MC0

RRKMULTIPROVHINT:

Multiprov | InfoObject | Position
Y6CO1MC0 | 0CURTYPE | 1
Y6CO1MC0 | 0VTYPE | 2
Y6CO1MC0 | 0FISCPER | 3

1. Open query. Enter selections. Execute.

2. YGTIPROV provides filter values for 0INFOPROVIDER using YEDW_STEER

3. Read Hints table

4. Dimension tables are read to determine cubes with data for selection

5. Multiprovider execution plan is constructed


Multiprovider Hint (3)

Solution: table RRKMULTIPROVHINT provides additional fields:

POSIT – to suggest the order in which the Analytical Engine reads the InfoProviders
MAXOR – to define for each characteristic how many rows the WHERE condition may have
NOBUFFER – to deactivate buffering of the MultiProvider metadata definition

Restrictions:

Valid for time-INDEPENDENT navigational attributes (with SID condition values only)
Interval restrictions on compounded attributes cannot be checked
Time-dependent attributes are also ignored

Note: if data is deleted from an InfoProvider, do not forget to delete the dimension tables as well (RSRV). SAP Notes 1156681 and 1371598 are continuously updated.


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Monitor Archiving Features

V. Loading Master Data

VI. Loading Transactional Data

Agenda


Request Archiving – 1

Archiving of Request Monitor Data:
- Archive and delete old monitor data
- Reduces data in the operational database
- Optimizes performance
- Facilitates administration


Request Archiving – 2

Archiving of Request Monitor Data: create an archive file (via 'File') and enter a variant name


Request Archiving – 3

Archiving of Request Monitor Data: press the 'Create Archive' button, or archive via transaction SARA and archiving object BWREQARCH; tcode AOBJ (archiving objects)


Request Archiving – 4

Archiving of Request Monitor Data: create an archive file (via 'File') and enter a variant name


Request Archiving – 5

Archiving of Request Monitor Data: create an archive file (via 'File') and enter a variant name


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Monitor Archiving Features

V. Loading Master Data

VI. Loading Transactional Data

Agenda


Loading master data

General rules for master data loads:

- Buffer number ranges (SNRO)
- Switch off single record buffering in SE11
- Upload in parallel (see SAP Note 421419)
- Cost-based optimizer statistics
- Upload in the sequence: master data, texts, hierarchies
- Use delta handling in mySAP ERP


Put in the name of the characteristic and choose object version = A

See field NUMBRANR of E_S_VCHA for the number range to buffer. Add 'BIM' as a prefix to NUMBRANR.

E_T_ATR_NAV tells you the navigational attributes of your characteristic

Use function module RSD_IOBJ_GET

Number range buffering – Master data


Key Concept – NRIV – SID - DIMID

Number ranges can cause significant overhead when loading large volumes of data, as the system repeatedly accesses the number range table NRIV to establish a unique number. If a large volume of data is waiting for these values, the data loading process becomes slow.

One way to alleviate this bottleneck is for the system to take a packet of number range values from the table and place them into memory. This way, the system can read many of the new numbers from memory rather than repeatedly accessing and reading table NRIV. This speeds up the data load.

A bottleneck can occur with both master data and transaction data. When the system loads new master data, it establishes a surrogate ID (SID) value to uniquely identify each master data record. By contrast, when the system loads transactional data into an InfoCube, it must create unique dimension identifiers called DIM IDs. Like the SID values for master data, these DIM IDs establish a unique identifier of each transactional data value within the InfoCube dimension. The system gathers the last value from the NRIV table each time it adds an entry to an InfoCube dimension, and these repeated reads of the NRIV table slow large data loads.

You can set number range buffering for master data (SIDs), dimension tables (DIM IDs), or both, to increase performance. Number range buffering helps when loading large volumes of data, but adds little if any performance improvement on smaller master and transactional data loads. Typically, you should set it for initial large data loads and turn it off once the load is complete. You can leave buffering on indefinitely if high-volume loading occurs for all master or transactional data loads.
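The effect of packet buffering on NRIV round trips can be made concrete with a small simulation. This is an illustrative Python sketch of the access pattern, not SAP code; the class and counters are our own.

```python
# Illustrative simulation of number range buffering: instead of reading
# table NRIV once per new SID/DIM ID, a packet of numbers is fetched
# into memory and consumed from there.

class NumberRange:
    def __init__(self, packet_size: int = 1):
        self.packet_size = packet_size
        self.nriv_level = 0       # last value persisted in table NRIV
        self.buffer = []          # in-memory packet of free numbers
        self.nriv_reads = 0       # round trips to the NRIV table

    def next_id(self) -> int:
        if not self.buffer:
            self.nriv_reads += 1  # one round trip fetches a whole packet
            self.buffer = list(range(self.nriv_level + 1,
                                     self.nriv_level + 1 + self.packet_size))
            self.nriv_level += self.packet_size
        return self.buffer.pop(0)

unbuffered = NumberRange(packet_size=1)
buffered = NumberRange(packet_size=100)
for _ in range(1000):
    unbuffered.next_id()
    buffered.next_id()
print(unbuffered.nriv_reads, buffered.nriv_reads)   # 1000 10
```

With a packet size of 100, loading 1000 new records touches NRIV only 10 times instead of 1000; the trade-off is that unused numbers in a packet are lost if the buffer is discarded.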


OSS 857998 - Number range buffering for DIM IDs and SIDs

The number range object of a dimension table should be buffered if

a) the DIM table increases by a large number of data records for each request

b) the size of the DIM table levels out but at the beginning you expected or observed a significant increase

c) there are many accesses to the NRIV table with reference to the number range object of the dimension table

d) the InfoCube or the dimension table is deleted regularly and, therefore, the DIM table increases significantly in each period.

The number range object for the SIDs of an InfoObject should be buffered if

e) you regularly add many new data records to the SID table

f) you know before the initial load that there is a lot of master data to be loaded for this InfoObject

g) you delete the master data periodically and you always load many new records (this should be an exception).

How do you find number range objects of the dimensions and SIDs whose number range should be buffered?

h) You know in advance that the dimension table or the SID table will increase significantly per request.

i) You observe accesses to the NRIV table and you can determine the number range object directly in the current SQL statement.

j) A high number range level may indicate that there is a significant increase for these two BW objects (valid only for BID* and BIM* objects)


Find candidates for Number Range Objects Buffering

1. Check whether number range objects are already defined with buffering:

tcode SE16 > TNRO, with OBJECT like 'BI*' and BUFFER = 'X'

2. Check (pre-production) whether the occurrences for each object justify buffering:

tcode SE16 > NRIV, with OBJECT like 'BI*', sorted by NRLEVEL descending; consider objects with occurrences over 100,000 (for example)

3. Evaluate (early and post-production) data growth for SIDs and DIM IDs:

tcode SE38/SM37, scheduling program ZBC_NRIV_LOG at the start of BW production and subsequently every month or less

evaluate ODSO objects for a high number of changes


Switch-off single record buffering – SE11

You can change the buffering to "not buffered" or to "fully buffered", depending on how you access the data.


Additional remark regarding single record buffering

Symptom:

Because of the design of the single record buffer, there can be a performance problem during LOAD and READ SI accesses to this table buffer. You can easily find those reads by performing an ST05 buffer trace. In most cases the problem occurs during ODS data activation for a high volume of data.

Explanation:

The problem is not only the parallel access of the single key buffer. Instead, the problem is the high number of extents for the table within the buffer (e.g. over 50000). Thus, for entries in the highest extents, the access is slow. Accesses to entries that are found in the first extents are always fast - in parallel or serial access.

Normally, the buffer writes at system shutdown file TBXSTAT/TBXNEW into the DATA directory. This file contains the high water marks of all tables that are in the buffer at shutdown time. At buffer instantiation, this file is read. The high water marks are used to calculate proper extent sizes. The idea is that in a "balanced" system always proper extent sizes are taken.

Generally the size of the extents can be reset the following way:

1. Restart the WP1 in transaction SM50

2. Go to transaction AL12 → choose 'Edit' → 'Displ./Reorg' → Single record buffer, and reorganize the single record buffer

Basis development stated that there has been no change in any support package regarding this behaviour of the single record buffer. The performance issue, however, gets worse and worse as the size of the SID table increases further.


Additional indexes

Requirement:

You would like to load master data for a certain characteristic.

Problem:

Several attributes are not provided by the DataSource, you have to enrich the master data by routines.

Every time you have to read additional information via routines, additional indices on the accessed tables could be helpful.

In combination with transaction ST05 (SQL Trace) you can recognize where indices are missing.

Solution:

Create such indices with transaction SE11.

Depending on the data enrichment during the extraction, additional indices can be helpful on P-, X-, Y-, SID tables or on customer-defined tables


I. Monitoring 'Data Package'-Settings related Problems

II. Process Chains Settings

III. DTP and DSO Settings

IV. Monitor Archiving Features

V. Loading Master Data

VI. Loading Transactional Data

Agenda


Loading transactional data in InfoCubes and Buffering

Put in the name of the InfoCube and choose object version = A

See field NOBJECT of E_T_DIME for number range to buffer.

Use function module RSD_CUBE_GET

How to detect the Number Range Object attached to Dimensions.


Appendix – Buffering Tools

REPORT z_c onv _p uff _ta bl.

Te xt Par ame ters

P_ AL L Buffe rin g Allowed ?

P_ FL PUF F Buf feri ng Typ e

P_ OBJNAM Tab le Name

Te xt Symbol s

00 1 Do y ou Re ally Wa nt to Cha ng e Se ttin gs for Bu ffe ring ?

00 2 Setti ng Ch ang es to Buff eri ng can se riou sly impa ir pe rfo rma nce

00 3 Or le ad to data in con sis te nci es

00 4 L OG is AVAILABLE VI A SE3 8 -- > RADPROT A

REPORT z_c onv _p uff _ta bl.

* Upd ate Buf feri ng Ty pe and Re ac tiv ate tab le or v iew !

* Writ e L og DB - De tec t As well DDIC d epe nd enc ies

* I n c ase DDIC Obj ect s a re b uild up on Ta ble Re act iva ted

* T ABLES : d d09 l.

SELECT- OPT IONS: p_o bjn am FOR d d0 9l-t abn ame NO INTERVALS DEFAULT ' /BI 0/PMAT ERIAL' .

PARAMET ERS: p_ flpu ff T YPE p uffe run g DEFAULT '' ,

p_ all TYPE b ufa llo w.

TYPES:

l _ob j_n ame T YPE rs rang e.

DATA : re spo st a T YPE ch ar1 ,

t _ob j_n ame T YPE ST ANDARD TABLE OF l_ob j_ na me,

s _ob j_ na me T YPE l_ obj _n ame,

o bj_ typ e TYPE tb atg -ob jec t VALUE ' TABL',

i d_n ame LIKE dd 12 l-ind ex na me VAL UE '',

f ct T YPE tb atg -fc t,

g use r T YPE dd refs truc -tb at gus er VAL UE ' DDIC' ,

s ubr c T YPE s y-ta bix ,

p rid LIKE sy-s ub rc,

mcid LI KE dd2 3v -mc id,

p rotn ame LIKE ts trf0 1-fil e,

r ealn ame LIKE ts trf0 1-fil e,

p rot_ de v LIKE dd ref stru c-r sd evic e VAL UE ' T',

mode (1 ) VALUE 'N'. " new pro toc ol o r a pp end

DATA : l_ tab typ e TBATG-TABNAME.

INCLUDE rad bto um.

IF p_ flpu ff IS INITI AL.

MOVE 'N' TO p_ all.

ENDIF .

CALL FUNCT ION 'SALP_POP_ CONFIRM_ 4'

EXPORT ING

tex tlin e1 = tex t-0 01

tex tlin e2 = tex t-0 02

tex tlin e3 = tex t-0 03

tex tlin e4 = tex t-0 04

* TEXT LINE5 = ''

numbr_ of_ bu tto ns = '4 '

IMPORTI NG

ans we r = re sp ost a.

IF res po sta = 'J '.

READ T ABL E p _ob jna m I NT O s_ ob j_na me INDEX 1.

DELETE p_ obj nam WHERE low <> s_o bj_ na me .

APPEND s_ obj _na me TO t_ obj_ na me.

* r esp os ta J K N C

PERF ORM e xec ute _al ter _b uff T ABLES t_o bj _n ame USING p_f lpu ff p_a ll.

EL SEIF r esp ost a = 'K'.

LOOP AT p_ ob jnam INTO s_ obj _na me.

APPEND s _ob j_ name T O t _ob j_n ame.

ENDL OOP.

PERF ORM e xec ute _al ter _b uff T ABLES t_o bj _n ame USING p_f lpu ff p_a ll.

EL SEIF r esp ost a = 'C' OR re spo sta = 'N'.

LOOP AT p_ ob jnam.

WRITE:/ 'Alte raç ão na Bu fferi zaç ão da Ta be la ', p_ ob jna m-lo w , 'ca nce la da !'.

ENDL OOP.

ENDIF .

*&----- ----- ------ ----- ---- ----- ----- ----- ------ ----- ---- ----- ----- ----*

*& Fo rm e xe cu te_ alte r_b uff

*&----- ----- ------ ----- ---- ----- ----- ----- ------ ----- ---- ----- ----- ----*

* tex t

*-- ----- ----- ------ ----- ---- ----- ----- ----- ------ ----- ---- ----- ----- ---*

* - ->T _OBJ_NAME t ext

* - ->P_FL PUF F te xt

* - ->P_AL L t ext

*-- ----- ----- ------ ----- ---- ----- ----- ----- ------ ----- ---- ----- ----- ---*

FORM execute_alter_buff TABLES t_obj_name USING p_flpuff p_all.

  LOOP AT t_obj_name INTO s_obj_name.

    UPDATE dd09l
       SET pufferung = p_flpuff bufallow = p_all
     WHERE tabname = s_obj_name-low.

    IF sy-subrc <> 0.
      WRITE:/ 'Error in DDIC update !', sy-subrc, ' ', p_objnam-low.
    ELSE.
      COMMIT WORK.

      CALL FUNCTION 'DD_LOGNPROT_NAME_GET'
        EXPORTING
          task           = 'CNV'    "CNV means SE14 protocol
          single_or_mass = 'S'
          obj_type       = obj_type
          online_put     = 'O'
          obj_name       = s_obj_name-low
          ind_name       = id_name
        IMPORTING
          protname       = protname
*         jobname        =
        EXCEPTIONS
          input_error    = 01
          OTHERS         = 02.

      IF sy-subrc = 0.
        log_open>: prot_dev mode1 'set_style' protname ' ' realname prid.
      ELSE.
*       D0510: standard log name could not be determined
        MESSAGE ID 'D0' TYPE 'A' NUMBER '510'.
      ENDIF.

      smi2>: prid ' ' 'GT024' s_obj_name-low obj_type.

*     Delete view on DB
      CLEAR: l_tab.
      MOVE: 'CNV' TO fct, s_obj_name-low TO l_tab.

      CALL FUNCTION 'DD_DB_OPERATION'
        EXPORTING
          fct                  = fct
*         forced               = ' '
*         id_name              = ' '
          obj_name             = l_tab
          obj_type             = 'TABL'
          prid                 = prid
          status               = 'D'
          user                 = guser
        IMPORTING
          subrc                = subrc
        EXCEPTIONS
          unexpected_error     = 01
          unsupported_function = 02
          unsupported_obj_type = 03
          unsupported_status   = 04
          OTHERS               = 05.

      IF sy-subrc <> 0.
        WRITE:/ 'Error Activating Table ! > RADPROTA', subrc, ' ', s_obj_name-low.
        subrc = 1.
      ELSE.
        WRITE:/ 'Activation Successful - Log available in SE38 > RADPROTA', s_obj_name-low.
        MESSAGE ID 'D0' TYPE 'S' NUMBER '326' WITH protname.
        close>: prid.
      ENDIF.

    ENDIF.

  ENDLOOP.

ENDFORM.                    "execute_alter_buff
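As a quick cross-check after execute_alter_buff has run, the new buffering flags can be read back from DD09L. This is a minimal sketch, not part of the attached programs; the table name ZMYTABLE is hypothetical, while PUFFERUNG and BUFALLOW are the same DD09L fields updated above:

```
* Hypothetical check: read back the buffering settings written by execute_alter_buff.
DATA: ls_dd09l TYPE dd09l.

SELECT SINGLE * FROM dd09l INTO ls_dd09l
  WHERE tabname = 'ZMYTABLE'.        "hypothetical table name

IF sy-subrc = 0.
  WRITE:/ 'Buffering switch:', ls_dd09l-pufferung,
          'Buffering type:',   ls_dd09l-bufallow.
ENDIF.
```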

*&---------------------------------------------------------------------*
*& Report  ZBC_NRIV_BUFFERING_DEFAULTS
*&
*&---------------------------------------------------------------------*
*&
*&
*&---------------------------------------------------------------------*
REPORT ZBC_NRIV_BUFFERING_DEFAULTS.

tables: nriv, tnro.

data: lt_nriv type standard table of nriv
        with key CLIENT OBJECT SUBOBJECT NRRANGENR TOYEAR,
      ls_nriv type nriv,
      lt_tnro type hashed table of TNRO
        with unique key OBJECT,
      ls_tnro type tnro.

data: bwobject1 type /BIC/OIZCGTCNRBW.
data: bwobject2 type /BIC/OIZCGTCNRBW.
data: l_index type i.

parameters:
  p_value type i default 100,
  p_sid   type tablename,
  p_dim   type tablename,
  p_test  as checkbox default 'X'.

perform:
  select,
  find_linked_objects,
  update_tnro,
  display_list.

*************************************************************
*  SELECT                                                   *
*  Selecting number range objects from NRIV                 *
*************************************************************
form select.

  perform message using 5 'Retrieving objects...'.

  if p_sid is initial and p_dim is initial.
    select * from nriv into table lt_nriv
      where object like 'BID%' or object like 'BIM%'.
  else.
    select * from nriv into table lt_nriv
      where object = p_sid or object = p_dim.
  endif.

endform.

*************************************************************
*  FIND_LINKED_OBJECTS                                      *
*  Retrieving BW object name for number range object and    *
*  eliminating the ones that do not qualify (package dims, etc)
*************************************************************
form find_linked_objects.

  data: dtext(44).

  sort lt_nriv by object.

  loop at lt_nriv into ls_nriv.

    concatenate 'Linking NR object' ls_nriv-object
      into dtext separated by space.
    perform message using 77 dtext.

    clear bwobject1.

    case ls_nriv-object(3).

      when 'BID'.
        select single dimension from rsddimeloc
          into bwobject1 where numbranr = ls_nriv-object+3(7).
        if bwobject1 is initial.
          delete lt_nriv.
        else.
*         Filter out package dimensions
          move bwobject1 to bwobject2.
          l_index = strlen( bwobject2 ).
          subtract 1 from l_index.
          if bwobject2+l_index(1) = 'P'.
            delete lt_nriv.
          endif.
        endif.

      when 'BIM'.
        select single chabasnm from rsdchabasloc
          into bwobject1 where numbranr = ls_nriv-object+3(7).
        if bwobject1 is initial or bwobject1 cs 'REQUEST'.
          delete lt_nriv.
        endif.

      when others.

    endcase.

  endloop.

endform.

*************************************************************
*  UPDATE_TNRO                                              *
*  Updating database table TNRO with changes                *
*************************************************************
form update_tnro.

  field-symbols: <tnro> type tnro.

  perform message using 95 'Updating TNRO table...'.

  check p_test is initial.

  select * from tnro into table lt_tnro
    for all entries in lt_nriv
    where object = lt_nriv-object.

  loop at lt_tnro assigning <tnro>.
    if <tnro>-noivbuffer < p_value.
      move:
        'X'     to <tnro>-buffer,
        p_value to <tnro>-noivbuffer.
    endif.
  endloop.

  update tnro from table lt_tnro.
  if sy-subrc = 0.
    commit work.
  endif.

endform.

*************************************************************
*  DISPLAY_LIST                                             *
*  Displays list of updates                                 *
*************************************************************
form display_list.

  write: text-001.
  uline. skip. detail.

  loop at lt_nriv into ls_nriv.
    write:/ ls_nriv-object.
  endloop.

endform.

****************************************************
*  MESSAGE
****************************************************
form message using pct text.

  call function 'SAPGUI_PROGRESS_INDICATOR'
    exporting
      percentage = pct
      text       = text
    exceptions
      others     = 1.

  call function 'ABAP4_COMMIT_WORK'.

endform.

- SNRO number range objects are generated objects. To avoid having to open QA or PROD systems for manual changes, the attached programs can be used instead: ZBC_NRIV_BUFFERING_DEFAULTS maintains number range buffering (for example, for initial loads) and can reset it afterwards if necessary, and Change Table Buffer Settings changes a list of table buffer settings in one go.


ZBC_NRIV_BUFFERING_DEFAULTS

Change Table Buffer Settings
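The slide notes that the buffering raised by ZBC_NRIV_BUFFERING_DEFAULTS can be reset after the initial load, but no reset program is attached. A minimal sketch under that assumption (hypothetical report name; it reuses the TNRO fields and the BID%/BIM% object name patterns from the listing above):

```
* Hypothetical reset sketch - not one of the attached programs.
report zbc_nriv_buffering_reset.

parameters:
  p_value type i default 10.          "buffer size to restore after the load

data: lt_tnro type standard table of tnro.
field-symbols: <tnro> type tnro.

* Pick up only BW dimension/master data NR objects whose buffer was raised.
select * from tnro into table lt_tnro
  where ( object like 'BID%' or object like 'BIM%' )
    and noivbuffer > p_value.

loop at lt_tnro assigning <tnro>.
  <tnro>-noivbuffer = p_value.        "shrink the number range buffer again
endloop.

update tnro from table lt_tnro.
if sy-subrc = 0.
  commit work.
endif.
```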

Page 106: BW Adjusting settings and monitoring data loads

Appendix - Z_BW_MAINTAIN_ROOSPRMS

To reset ROOSPRMS

Page 107: BW Adjusting settings and monitoring data loads

Copyright 2009 SAP AG. All Rights Reserved

No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.

Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.

Microsoft, Windows, Outlook, and PowerPoint are registered trademarks of Microsoft Corporation.

IBM, DB2, DB2 Universal Database, OS/2, Parallel Sysplex, MVS/ESA, AIX, S/390, AS/400, OS/390, OS/400, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere, Netfinity, Tivoli, and Informix are trademarks or registered trademarks of IBM Corporation in the United States and/or other countries.

Oracle is a registered trademark of Oracle Corporation.

UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group.

Citrix, ICA, Program Neighborhood, MetaFrame, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems, Inc.

HTML, XML, XHTML and W3C are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology.

Java is a registered trademark of Sun Microsystems, Inc.

JavaScript is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape.

MaxDB is a trademark of MySQL AB, Sweden.

SAP, R/3, mySAP, mySAP.com, xApps, xApp, SAP NetWeaver and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.

These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.