
TADM70 SAP System: OS and DB Migration

Contents

Course Overview
  Course Goals
  Course Objectives
Unit 1: Introduction
  Introduction
Unit 2: The Migration Project
  The Migration Project
Unit 3: System Copy Methods
  System Copy Methods
Unit 4: SAP Migration Tools
  SAP Migration Tools
Unit 5: R3SETUP/SAPINST
  R3SETUP/SAPINST
Unit 6: Technical Background Knowledge
  Data Classes (TABARTs)
  Miscellaneous Background Information
Unit 7: R3LOAD & JLOAD Files
  R3LOAD Files
  JLOAD Files
Unit 8: Advanced Migration Techniques
  Time Consuming Steps during Export / Import
  MIGMON - Migration Monitor for R3LOAD
  MIGTIME & JMIGTIME - Time Analyzer
  Table Splitting for R3LOAD
  DISTMON - Distribution Monitor for R3LOAD
  JMIGMON - Migration Monitor for JLOAD
  Table Splitting for JLOAD
Unit 9: Performing the Migration
  Performing an ABAP System Migration
  Performing a JAVA System Migration
Unit 10: Troubleshooting
  Troubleshooting
Unit 11: Special Projects
  Special Projects

© 2012 SAP AG. All rights reserved.

Course Overview

This course offers detailed procedural and technical knowledge on homogeneous and heterogeneous system copies, which are performed using R3LOAD/JLOAD on SAP NetWeaver systems, with a focus on OS/DB migrations. The training content is mostly release independent and is based on information up to release 7.30. Previous releases, such as R/3 4.x, R/3 Enterprise 4.7 (Web AS 6.20), ERP 2004 / NetWeaver '04 (Web AS 6.40), and ECC 6.0 / NetWeaver '04S / NetWeaver 7.0x, are covered as well. Course attendance is the prerequisite for the OS/DB Migration certification test.

Course Duration Details

Unit 1: Introduction
  Introduction (90 Minutes)
  Exercise 1: Introduction (5 Minutes)
Unit 2: The Migration Project
  The Migration Project (90 Minutes)
  Exercise 2: The Migration Project (5 Minutes)
Unit 3: System Copy Methods
  System Copy Methods (30 Minutes)
  Exercise 3: System Copy Methods (5 Minutes)

Unit 4: SAP Migration Tools
  SAP Migration Tools (45 Minutes)
  Exercise 4: SAP Migration Tools (10 Minutes)
Unit 5: R3SETUP/SAPINST
  R3SETUP/SAPINST (15 Minutes)
  Exercise 5: R3SETUP/SAPINST (5 Minutes)
Unit 6: Technical Background Knowledge
  Data Classes (TABARTs) (30 Minutes)
  Miscellaneous Background Information (15 Minutes)
  Exercise 6: Technical Background Knowledge (10 Minutes)
Unit 7: R3LOAD & JLOAD Files
  R3LOAD Files (90 Minutes)
  JLOAD Files (15 Minutes)
  Exercise 7: R3LOAD & JLOAD Files (Part I) (20 Minutes)
  Exercise 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise) (25 Minutes)
Unit 8: Advanced Migration Techniques
  Time Consuming Steps during Export / Import (10 Minutes)
  MIGMON - Migration Monitor for R3LOAD (15 Minutes)
  MIGTIME & JMIGTIME - Time Analyzer (10 Minutes)
  Table Splitting for R3LOAD (15 Minutes)
  DISTMON - Distribution Monitor for R3LOAD (5 Minutes)
  JMIGMON - Migration Monitor for JLOAD (10 Minutes)
  Table Splitting for JLOAD (10 Minutes)
  Exercise 9: Advanced Migration Techniques (15 Minutes)
Unit 9: Performing the Migration
  Performing an ABAP System Migration (30 Minutes)
  Performing a JAVA System Migration (15 Minutes)
  Exercise 10: Performing the Migration (10 Minutes)
Unit 10: Troubleshooting
  Troubleshooting (30 Minutes)
  Exercise 11: Troubleshooting (10 Minutes)
Unit 11: Special Projects
  Special Projects (15 Minutes)
  Exercise 12: Special Projects (5 Minutes)

Unit 1: Introduction

Unit Overview
What is a homogeneous or heterogeneous system copy, which tools are available, what is the Going Live OS/DB Migration Check Service, and where to get information about the migration procedure.

Lesson Overview

Lesson Objectives
After completing this lesson, you will be able to:
• Distinguish between an SAP homogeneous system copy and an SAP OS/DB Migration
• Estimate the problems involved with a system copy or migration
• Understand the functions of the SAP OS/DB Migration Check

Business Example
You want to understand which system copy / migration tools are provided by SAP and what the difference is between a homogeneous and a heterogeneous system copy. Furthermore, you are interested in the scope of the OS/DB Migration Check service.

Figure 1: Definition of Terms
As the naming of SAP systems changes frequently, the term "SAP System" is used throughout the course material as a synonym for every SAP system type that can be copied with R3LOAD or JLOAD. Explain the difference between NetWeaver 7.00 and 7.02 (NetWeaver Enhancement Package 2).
Please note: Improved functionality was often introduced with new SAP Kernel versions. If the new SAP Kernel was backward compatible with older SAP releases, the new functionality was available for the older releases as well. Example: an SAP Web AS 6.20 running on SAP Kernel 6.40 can make use of R3LOAD 6.40 features.
Throughout the SAP documentation and SAP Notes, the terms NetWeaver '04S and NetWeaver 7.00 are used interchangeably, meaning the same release. The initial SAP service offering for OS/DB migrations was originally called "SAP OS/DB Migration Service", but was renamed to "SAP OS/DB Migration Check Service". Today, the term "SAP OS/DB Migration Service" is used for SAP fixed-price projects, in which SAP consultants migrate customer systems to a different database and/or operating system, mainly from remote.

Figure 2: Copying a SAP System
The only supported way to perform heterogeneous system copies is the R3LOAD/JLOAD method. Exceptions are SAPDB and DB2 UDB, where OS migrations with database means are possible. For every unsupported system copy, SAP support for all problems caused by the non-supported method can be billed. Make clear that the well-known Oracle EXP/IMP is no longer supported by SAP. Every EXP/IMP migration is done at the customer's own risk! Migrated SAP Systems are supported regardless of which method was used. However, if a system problem later turns out to be related to a non-supported migration method, the customer can be charged for SAP's fixing efforts.
A client transport is not a true SAP System copy or migration. The copy function cannot transport all of the system settings and data to the target system, nor is it intended to do so. This applies particularly to production systems. Client transports have no meaning for JAVA-based SAP Systems. For further reference, see SAP Note 96866 "DB copy by client transport not supported".
Databases can be duplicated by restoring a backup. In most cases, this is the fastest and easiest way to perform a homogeneous system copy. Some databases even allow a database backup to be restored on a different operating system platform (OS migration).

Note: Third-party database tools and methods suitable for switching the operating system (OS migration) or even the database (DB migration) are not supported by SAP unless explicitly mentioned in SAP documents or SAP Notes. Nevertheless, the usage of unsupported tools or methods is not forbidden in general (tool and method support must then be provided by the third-party organization). SAP cannot be held responsible for erroneous results. After the system copy, the migrated SAP system is still under maintenance, but efforts to fix problems caused by the unsupported tool or method can and will be charged to the customer!

SAP System copy tools can be used for system copies or migrations on any SAP-supported operating system and database combination as of R/3 Release 3.0D. Since NetWeaver '04 (6.40), JAVA-based systems can also be copied or migrated to any SAP-supported operating system and database combination using the SAP System copy tools.

Figure 3: SAP System Copy / Migration Tools (1)
The SAP System copy tools are used for homogeneous and heterogeneous system copies. SAP System copy tools used for heterogeneous system copies are called SAP Migration Tools. In the remainder of this document, the term SAP Migration Tools will be used.
The message is: There is no tool difference between homogeneous and heterogeneous system copies, only the procedure differs! Give a short overview of the tool purposes without going into details.

Figure 4: SAP System Copy Tools / Migration Tools (2)

BW functionality is part of the ABAP Web AS 6.40 standard. Since then, every SAP System can contain non-standard objects! Special pre- and post-migration activities are required for them.
The generated DDL statements of SMIGR_CREATE_DDL are used to tell R3LOAD how to create non-standard objects in the target database. The RS_BW_POST_MIGRATION program adapts the non-standard objects to the requirements of the target system.
The reports SMIGR_CREATE_DDL and RS_BW_POST_MIGRATION have been required since BW 3.0, and for all systems based on BW functionality (e.g. SCM/APO). They are also mandatory for NetWeaver '04 (Web AS 6.40) and later.
JLOAD has been available since NetWeaver '04 (Web AS 6.40). Earlier versions of the JAVA Web AS (e.g. Web AS 6.20) did not store data in a database. JSIZECHECK has been available since NetWeaver '04S / 7.00.
JLOAD and JSIZECHECK are JAVA programs which are called by SAPINST.

Figure 5: Support Tools for ABAP System Copies (1)
The PACKAGE SPLITTER is available in a JAVA and in a Perl implementation. R3SETUP uses the Perl PACKAGE SPLITTER. SAPINST provides the Perl and JAVA PACKAGE SPLITTERs, or the JAVA version only (release dependent). Two TABLE SPLITTERs exist: one is database independent and is called R3TA; the other is a PL/SQL script implementation and is available for Oracle only.
Table splitting has been supported since R3LOAD 6.40 in combination with MIGMON. MIGCHECK is implemented in JAVA.
Explain that the Migration Support Tools are intended to improve the export/import procedure. Describe the purpose of MIGMON and the signal files for a parallel export/import.
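To make the splitting idea concrete, here is a minimal Python sketch of how a table splitter can derive WHERE conditions from a list of key boundary values, so that several R3LOAD processes can each export one slice of a large table. The function name and the clause format are illustrative assumptions; the real R3TA output (*.WHR files) has its own syntax.

```python
def split_where_clauses(key_column, boundary_values):
    """Build WHERE clauses that partition a table into disjoint key ranges.

    boundary_values must be sorted ascending; n boundaries yield n+1 slices.
    """
    clauses = []
    prev = None
    for v in boundary_values:
        if prev is None:
            # First slice: everything up to and including the first boundary.
            clauses.append(f"{key_column} <= {v!r}")
        else:
            # Middle slices: half-open ranges between consecutive boundaries.
            clauses.append(f"{key_column} > {prev!r} AND {key_column} <= {v!r}")
        prev = v
    # Last slice: everything above the highest boundary.
    clauses.append(f"{key_column} > {prev!r}")
    return clauses
```

For example, two boundary values produce three slices, each of which could be exported by its own R3LOAD process in parallel.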

Figure 6: Support Tools for ABAP System Copies (2)
MIGMON and MIGTIME are implemented in JAVA. The JAVA-based tools are release independent and can be utilized on any SAP platform which supports the required JAVA version.
The Distribution Monitor can be used if the CPU load caused by R3LOAD should be distributed over several application servers. This can improve the database server performance significantly. It is often used in Unicode conversion scenarios. Normally the Distribution Monitor only makes sense if more than one application server is planned. It was developed to support system copies based on Web AS 6.x and later.
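The signal-file handshake behind a parallel export/import can be illustrated with a small Python sketch: the export side writes a signal file once a package is completely exported, and the import side only picks up packages whose signal file exists. The file names and the ".SGN" extension here are simplified assumptions, not the exact MIGMON conventions.

```python
import os


def export_package(name, export_dir):
    """Simulate an export: write the package dump, then its signal file."""
    with open(os.path.join(export_dir, name + ".dump"), "w") as f:
        f.write("...exported data...")
    # The empty signal file tells the import side this package is complete.
    open(os.path.join(export_dir, name + ".SGN"), "w").close()


def importable_packages(export_dir):
    """Return packages whose signal file exists, i.e. safe to import now."""
    return sorted(p[:-4] for p in os.listdir(export_dir) if p.endswith(".SGN"))
```

The point of the handshake is that the import side never reads a half-written dump: it polls for signal files instead of for the dumps themselves.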

Figure 7: Support Tools for JAVA System Copies
The JAVA Migration Support Tools are very similar to the ABAP versions, but are only available from a certain NetWeaver version onwards.
JPKGCTL (also called JSPLITTER) was developed to reduce the export/import run-time for large JAVA systems. A single JLOAD process exporting the whole database (as implemented in previous SAPINST versions) was often too slow once the database exceeded a certain size, so it was necessary to provide package and table splitting for JLOAD as for R3LOAD.
JMIGMON and JMIGTIME offer functionality similar to MIGMON and MIGTIME.

Figure 8: Possible Negative Consequences of a System Copy
The data of a productive system is one of the most valuable assets the customer owns. While doing a system copy, everything must be done to prevent loss of data or data corruption. The slide shows the reasons why SAP developed the OS/DB Migration Check and the OS/DB migration certification.
The goal of this training is to prevent problems, such as those mentioned above, by providing in-depth knowledge about each SAP System copy step and the tools involved. Following the SAP guidelines ensures a smooth migration project.

Figure 9: Definition: SAP Homogeneous System Copy
The point here is that R3SETUP and SAPINST will only be able to install the target system and load data into it if the operating system and the database are supported by the tools. In fact, R3LOAD/JLOAD can import into any SAP-released OS/DB combination, but R3SETUP/SAPINST can do this only if they are configured for that. For the target system, the same operating system can also mean an SAP-certified successor like Windows 2003 / Windows 2008.
Depending on the method used for executing the homogeneous system copy, it might be necessary to upgrade the database or the operating system of the source system first. On older SAP System releases, even an SAP System upgrade might be necessary. This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be copied.
New hardware on the target system might be supported by the latest operating system and database version only.
With or without assistance from a consultant, customers can execute a homogeneous system copy by themselves. If you plan to use a new hardware type or make major expansions to the hardware (such as changing the disk configuration), we recommend involving the hardware partner as well.

Figure 10: Reasons for Homogeneous System Copies
All the mentioned points are also valid for heterogeneous system copies, but there they are not the major reasons!
The term MCOD is used for SAP installations where [M]ultiple [C]omponents are stored in [O]ne [D]atabase.
If a system was installed with an SAP-reserved SAPSID, a homogeneous system copy can be used to change the SAPSID. To see if a change is required, check with SAP.

Figure 11: Definition: SAP Heterogeneous System Copy
This slide looks very much the same as the one for homogeneous system copies. The major differences are who can perform the system copy and the need for an SAP OS/DB Migration Check Service. Stress the fact that even if no productive system is involved, the migration must be done by a certified OS/DB migration consultant or a customer employee who also holds the certification!
An OS/DB migration is a complex process. Consultants are strongly advised to do all they can to minimize the risk with regard to the availability and performance of a production SAP System.
Depending on the method used for executing the heterogeneous system copy, it might be necessary to upgrade the database or the operating system of the source system first. On older SAP System releases, even an SAP System upgrade might be necessary. This can happen if the target platform requires a database or operating system version that was not backward released for the SAP System version that is to be migrated.
New hardware on the target system might be supported by the latest operating system and database version only.
The decisive factors for performance in an SAP System are the parameter settings in the database, the operating system, and the SAP System itself (which depends on the operating system and the database system). During an OS/DB migration, the old settings cannot simply be taken over unchanged. Determining the new parameter values requires an iterative process, during which the availability of the migrated system is restricted.

Figure 12: Common Heterogeneous System Copy Reasons
The points mentioned above are the primary reasons for changing an operating system or database. The reasons for homogeneous system copies partially apply as well, but they are not the major drivers here.

Figure 13: Frequently used SAP Terms
As the terms migration, system copy, etc. are used in any mix throughout the SAP documentation, this table should give a clear answer as to which term belongs to which system copy type.
The above table shows which term is used for which kind of SAP System copy. For example, changing the operating system is called an OS migration and is a heterogeneous system copy. Generally, the term heterogeneous system copy implies that it is some kind of OS and/or DB migration.
The term "SAP System copy" is used in a more unspecific way.

Figure 14: Homogeneous or Heterogeneous System Copy?
The table tries to give an overview of which system copy types are still homogeneous even if they look heterogeneous. Explain that a system copy is homogeneous (the same database type is expected on all involved systems) if the operating system is called the same on the source and target, e.g. Solaris (SPARC) → Solaris (x86), Linux (x86) → Linux (IA64). Please stress the fact that the term "homogeneous system copy" does not automatically mean that backup restores can be used to copy the database!
The table above is only valid when using R3LOAD or JLOAD. Homogeneous system copies using backup/restore require the same database version on the source and target system, or the database must be upgraded after the system copy.

Note: If the hardware architecture changes in a system copy but the operating system type stays the same, it is a homogeneous system copy. In other words, if the operating system is called the same on source and target, it is a homogeneous system copy. This does not automatically imply the possibility of a backup/restore to copy the database (e.g. a system copy from Solaris SPARC to Solaris Intel). It only points out that SAP treats it like a homogeneous system copy and no "SAP OS/DB Migration Check" is required. SAP assumes the operating system behavior will be the same regardless of the underlying platform. Please check the database documentation for details on available system copy procedures. Further examples are: HP-UX PA-RISC to HP-UX IA64, LINUX X86 to LINUX POWER, etc.
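The classification rule from the note above can be condensed into a small, hypothetical helper: only the operating system name and the database type decide whether a copy is homogeneous; the hardware architecture in parentheses is ignored. This is an illustrative sketch, not an official SAP decision tool.

```python
def copy_type(src_os, src_db, tgt_os, tgt_db):
    """Classify a system copy: only the OS *name* matters, not the
    architecture suffix, e.g. 'Solaris (SPARC)' vs. 'Solaris (x86)'."""
    os_same = src_os.split()[0].lower() == tgt_os.split()[0].lower()
    db_same = src_db.lower() == tgt_db.lower()
    if os_same and db_same:
        return "homogeneous system copy"
    parts = []
    if not os_same:
        parts.append("OS migration")
    if not db_same:
        parts.append("DB migration")
    return "heterogeneous system copy (" + " + ".join(parts) + ")"
```

So a move from Solaris SPARC to Solaris x86 with the same database classifies as homogeneous, while any change of OS name or database type makes the copy heterogeneous.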

Figure 15: SAP OS/DB Migration Check (1)
The SAP OS/DB Migration Check is always fee-based. Depending on the customer maintenance contract, not yet used services can be converted into an OS/DB Migration Check service. The SAP migration tools themselves are always free of charge, as the same tools are used for homogeneous system copies and standard installations.

The cost for the SAP OS/DB Migration Check is specific to the customer location and may differ from country to country.

The SAP OS/DB Migration Check is delivered as a remote service. In the "Remote Project Audit", SAP checks the OS/DB migration project planning. The tools required for homogeneous or heterogeneous system copies (installation software) are provided by SAP to customers free of charge. The software can be downloaded from the SAP Service Marketplace.
If there is a discussion along the lines of "Why should a customer pay a lot of money for a service that seems to be nothing other than a standard GoingLive service?", then the answer is: the cost of the OS/DB Migration Check service is small compared to the costs caused by consultancy, testing efforts (by the customer's own employees), and hardware / software license costs of the whole project. Every customer must order the OS/DB Migration Check service; there is no exception! Hardware vendors often bundle new hardware with a migration. In this case the customer pays only a single price for the new system and everything else is done by the hardware vendor. This hides the costs of an OS/DB migration service from the customer.

Figure 16: SAP OS/DB Migration Check (2)
Point out that the SAP OS/DB Migration Check service ensures the project's compliance with the SAP heterogeneous system copy procedure, and checks the performance. SAP will not verify the system against data loss or corruption!

Figure 17: Information on the SAP OS/DB MigrationSeveral sources exist to obtain information about OS/DB migrations.

Demonstration: System Demo
Purpose: Log on to the SAP Service Marketplace.
1. Show the aliases "osdbmigration" and "systemcopy".
2. Explain the available items (FAQs, OS/DB Migration Service documents, and so on).

Facilitated Discussion
When explaining the different methods to copy an SAP System (Slide: Copying a SAP System), the students should be asked: What is the task of a client transport, and which limitations apply compared to a system copy?

Exercise 1: Introduction
Exercise Duration: 5 Minutes

Exercise Objectives
After completing this exercise, you will be able to:
• Differentiate between homogeneous and heterogeneous system copies and know the procedural consequences for a migration project.

Business Example
In customer projects, you must know whether a system move or a database change is a homogeneous or heterogeneous system copy, and in which case it is necessary to order an SAP OS/DB Migration Check Service.

Solution 1: Introduction

Task 1:
A customer plans to invest in new and more powerful hardware for his ABAP-based SAP production system (no JAVA Web AS installed). As the operating system and database versions are not up to date, he also wants to change to the latest software versions in a single step while doing the system move.
Current system configuration: Oracle 10.2, AIX 6.1
Planned system configuration: Oracle 11.2, AIX 7.1
1. Is the planned system move a homogeneous system copy, a DB migration, or an OS migration? Describe your solution!
a) The system move will be a homogeneous system copy. Neither the database nor the operating system type will be changed. During a system copy, an upgrade to a new database or operating system software version is not a problem, as long as the operating system and database combinations are supported by the respective SAP System release and SAP kernel version.
2. If the SAP System copy tool R3LOAD is used, will it be necessary to perform an operating system or database upgrade after the move? Describe your solution!
a) Provided that the installation software is able to install on the target operating system version and also supports the installation of the target database release directly, no additional OS/DB software upgrade will be necessary after the R3LOAD import. In case the new target database is not supported by the installation software, a database upgrade will have to be done after the system copy.

Task 2:
An SAP implementation project must change the database system before going into production, because of strategic customer decisions. The customer system landscape was set up as a standard three-system landscape (development, quality assurance, production). Each system is configured as an ABAP Web AS with JAVA Add-In.
1. Is it necessary to order an SAP OS/DB Migration Check for the planned database change?
a) The system landscape contains pre-production systems only. In this case, no OS/DB Migration Check service is necessary, as it is intended to be used for productive systems only.
2. According to the SAP System copy rules, who must do the system copies?
a) The change of a database involves a heterogeneous system copy, which must be done by someone who is certified for OS/DB migrations. The fact that the systems are not productive is irrelevant.

Unit 2: The Migration Project

Business Example
You want to set up an OS/DB migration project. You need to know which steps are required and what a reasonable timeline to finish the tasks can be.

Figure 18: Project Schedule of an OS/DB Migration (1)
An introductory phase applies to new SAP products only. If mentioned in a system copy SAP Note, customers must register for the introductory phase before starting the OS/DB migration. In such a case, it was decided that this particular product can only be migrated under SAP's control (providing direct support from SAP development in case of problems). Usually the introductory phase is limited to a few months only.
Customer projects with required SAP involvement can be, for example, "Pilot Projects" or a "Minimized Downtime Service" (MDS) for very large databases. The standard OS/DB migration procedure also applies to heterogeneous system copies of ABAP Systems in "Introductory Phase Projects" or "Pilot Projects". The project-type-specific activities can be seen as something over and above the standard migration procedure.
Give a brief explanation of Introductory Phase Projects. Explain that some kinds of projects can only be done with SAP involvement. Example: Minimized Downtime Service projects (MDS).

Figure 19: Project Schedule of an OS/DB Migration (2)Prepare for the “SAP OS/DB Migration Check Analysis Session” as soon as possible. It runs on the productive SAP System (the source system) and must be performed before the final migration.Migration test-runs are iterative processes that are used to find the optimal configuration for the target system. In some cases, one test-run suffices, but several repeated runs are required in other cases.The same project procedure applies to both the operating system migration and the database migration.

Test and final migrations are mandatory for productive SAP Systems only. Most other SAP Systems like development, test or quality assurance are less critical. If the first test-run for those systems shows positive results, an additional migration-run (final migration) is not necessary. Nevertheless, the schedule defined in the “SAP OS/DB Migration Check Project Audit questionnaire” must reflect test-runs and final migrations for all SAP Systems of the customer landscape.The “SAP OS/DB Migration Check Analysis Session” will be performed on the production migration source system and the “SAP OS/DB Migration Check Verification Session” will run on the migrated production system after the final migration.

Figure 20: Time Schedule for Productive SAP Systems
You should begin planning a migration early. If you procure new hardware, there may be long delivery times.
The time necessary to do serious tests varies from system to system. Allow at least two weeks!
SAP recommends waiting six weeks before performing an SAP release upgrade on a migrated productive system! First get the system stable, then do the upgrade!
SAP will schedule the "SAP OS/DB Migration Check Analysis Session" only if the "Remote Project Audit Session" was completed successfully.
Stress the need to have at least two weeks between the test and the final migration of a productive system. This is not only for testing purposes; it also prevents too much time pressure in the project. The recommendation not to start an upgrade until six weeks after the final migration should make sure that there is enough time to stabilize the system.
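The two timing rules above (at least two weeks between test and final migration, at least six weeks before a release upgrade) can be sketched as a small date calculation. The function and its field names are illustrative only, not part of any SAP tool.

```python
from datetime import date, timedelta


def migration_milestones(final_migration):
    """Derive recommended dates from the final migration date:
    the test migration at least two weeks earlier, and a release
    upgrade no sooner than six weeks later."""
    return {
        "latest_test_migration": final_migration - timedelta(weeks=2),
        "earliest_release_upgrade": final_migration + timedelta(weeks=6),
    }
```

For a final migration planned on a given weekend, this immediately yields the latest acceptable test-migration date and the earliest safe upgrade date.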

Figure 21: Migration Partners

The above requirements refer to the technical implementation of the migration. Application-specific tests require knowledge of the applications. ABAP Dictionary knowledge is required for system copies based on R3LOAD, in order to understand the consequences of objects missing from the database and/or the SAP ABAP Dictionary.
As there are often situations where dictionary inconsistencies are a problem, the migration partner should be able to recognize and solve them. Explain transaction DB02: Diagnostics → Missing Tables and Indexes (or in older versions of DB02: ABAP DDIC/DB DDIC consistency check).
Point out that there is no tool available to verify that all tables in the *.STR files exist on the database. A 100% check would be a comparison of all table names in the *.STR files with the database catalog, but nothing like this exists. It is also recommended to do a test migration in the office before doing it the first time at the customer site. SAP cannot control what the migration partner and the customer are doing. Because of this, the responsibility for the whole project lies on the shoulders of the migration partner and the customer. SAP is responsible for the proper functionality of the migration tools only.
A method to verify that all tables in the R3LOAD structure files can be exported without problems would be to compare the table names from the structure files against those from the database catalog. The easier way is a test export.

Useful SAP Notes are:
• 9385 - What to do with QCM tables (conversion tables)
• 33814 - Warnings of inconsistencies between database & R/3 DDIC (DB02)
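The comparison described above (table names in the R3LOAD structure files versus the database catalog) can be sketched in a few lines of Python. The "tab:" line format is a simplification of the real *.STR file syntax, and the catalog list would in practice come from a database query; both are assumptions for illustration.

```python
def tables_in_str(str_text):
    """Collect table names from R3LOAD *.STR content
    (simplified: only 'tab:' entries are considered)."""
    return {
        line.split(":", 1)[1].strip()
        for line in str_text.splitlines()
        if line.startswith("tab:")
    }


def missing_on_db(str_text, db_catalog_tables):
    """Tables named in the structure file but absent from the DB catalog."""
    return sorted(tables_in_str(str_text) - set(db_catalog_tables))
```

An empty result means every table the export would touch actually exists on the database; any names returned would fail during the export, which is exactly what a test export reveals the hard way.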

Figure 22: Contractual Arrangements
Database- or operating-system-specific areas in the SAP Service Marketplace may not be visible to the customer until the contractual agreement regarding the new configuration is finalized with SAP.
The "SAP OS/DB Migration Check" is mandatory for each productive system, but not for development, quality assurance, or test systems.
A productive system can be a stand-alone ABAP system, but it can also be an ABAP Web AS with a JAVA Add-In, or an ABAP Web AS with a JAVA Web AS, each using its own database. The services check the parameters for ABAP- and JAVA-based systems.
A heterogeneous system copy of a stand-alone JAVA system means that no ABAP system is copied in the migration project. Explain that the OS/DB Migration Check service is needed for every productive system in the migrated landscape!

Figure 23: Hardware Procurement

For safety reasons, an OS/DB migration of productive SAP Systems must always be performed on a separate system. This way, should serious problems occur, you can always switch back to the old system. Retaining the old system also simplifies error analysis.

When you change the database, consider the new disk layout. Each database has its own specific hardware requirements. From a performance point of view, it might not be sufficient to provide a duplicate of the current system. Performing the migration of a productive system on separate hardware gives additional safety in case of problems, as there is always a fallback system. Differences in system behavior between source and target system can be checked easily.

Figure 24: Migrating an SAP System Landscape

Each productive system must be migrated twice (test and final migration)! Development, test, and quality assurance systems are less critical and can often be migrated in a single step. In many cases, the migration of a quality assurance system is not necessary, because it can be copied from the migrated production system. Explain that there is no right or wrong order, as long as the production system is migrated twice.

Figure 25: SAP OS/DB Migration Check Project Audit

The “SAP OS/DB Migration Check Project Audit Questionnaire” is automatically sent from SAP to the customer as soon as the “SAP OS/DB Migration Check” has been requested. The migration project time schedule should be created in consultation with the migration partner. For safety reasons, SAP cannot approve any migration of a production SAP System in which the source system is deleted after the data export in order to set up the target database. Make sure to include the dates for the test and final migration steps of every SAP System, not only of productive systems.

The migration project schedule must reflect correct estimates of the complexity of the conversion, its time schedule, and the planned effort. SAP checks for the following:
• Is the migration partner technology consultant SAP-certified for migrations?
• Does the migration project schedule meet the migration requirements?
• Technical feasibility: are hardware, operating system, SAP System, and database versions compatible with the migration tools, and is this combination supported for the target system?

The migration of an SAP System is a complex undertaking that can result in unexpected problems. For this reason, it is essential that SAP has remote access to the migrated system. Remote access is also a prerequisite for the “SAP OS/DB Migration Check”.

This check verifies that the migration partner is certified and the time schedule is appropriate (two weeks between test and final migration in the case of a production system). It also asks for technical details of the source and target systems.

Figure 26: SAP Migration Tools

The migration tools must match the SAP release and kernel in use. Only for SAP installations running old database or operating system versions (no longer supported by current installation software, 4.6D and below) may it be necessary to order the Migration CD set. Most questions regarding tool versions are answered in the SAP system copy notes and manuals. Also check the “Product Availability Matrix” (PAM) in the SAP Service Marketplace. Please open a call at the SAP Service Marketplace if in doubt about which tools to use with certain software combinations.

In some cases it is advisable to upgrade the operating system, database, or SAP release first, before performing the migration. In rare cases it can even be necessary to use intermediate systems. Explain that the content of the Migration CD is used to migrate “old” systems which are no longer under SAP maintenance. Systems under SAP maintenance can be migrated with the software versions (installation media) available on the Marketplace.

Figure 27: SAP OS/DB Migration Check Analysis

The “SAP OS/DB Migration Check Analysis Session” focuses on the special aspects involved in the platform or database change. It is performed on the production SAP System with regard to the target migration system environment. The results of the “SAP OS/DB Migration Check” are recorded in detail and provided to the customer through the SAP Service Marketplace. They also include recommendations for the migration target system. Both ABAP and JAVA-based SAP System components are checked. The Analysis Session only looks at performance – nothing else. The resulting recommendations are for the target system.

Figure 28: Required Source System Information (1)

It must be carefully checked that all software components can be migrated – in particular JAVA-based components! The exact version information of each software component is necessary to be able to download/order and use the right installation software. It could be the case that a certain Support Package Stack must be installed before an OS/DB migration can take place (i.e. certain target database features can be utilized only if the Support Packages are current). Updating Support Packages can be a serious problem in some customer environments because of modifications, system interdependencies, or fixed update schedules.

The current system landscape must be known to see the big picture. There may be OS/DB related dependencies between certain systems which must be analyzed first. The number of productive systems indicates the number of test and final migrations. Which systems should be migrated in which order? What is the customer's time schedule (deadlines)? The more the downtime is to be minimized, the greater the necessary tuning effort, and much more time must be spent on it. In the case of a hosting environment: will the consultant have access to the source system (and which limitations will apply)?

The slides show the minimum information which should be available before starting the migration. Explain the relevance of every item.

Figure 29: Required Source System Information (2)

The number of CPUs and information about the I/O subsystem can help in determining the best number of export processes. The size of the source database indicates how long the migration will take. Next to the database size itself, the size of the largest tables will influence the export significantly. For the first test migration, 10% - 15% of the source database size should be available as free space in the export file system. If large tables are stored in separate locations (i.e. table spaces), should this also be retained in the target database? On some databases this can increase performance or ease database administration.

MDMP or UNICODE system? In the case of AS/400 R/3 4.6C and below: is it an EBCDIC or ASCII based system?

Case 1: Table exists in the database but not in the ABAP Dictionary – the table will not be exported.
Case 2: Table exists in the ABAP Dictionary but not in the database – export errors are to be expected.

How are external files (spool files, archives, logs, transport system files, interfaces, etc.) handled? Which files must be copied to the target system? The migration support tools like MIGMON and the PACKAGE SPLITTER used by SAPINST need JAVA. The old Perl-based PACKAGE SPLITTER of R3SETUP needs Perl version 5. Because of strict software policies, customers might not allow the installation of additional software on productive systems. If source and target system are not in the same location – which media will be available to transport the dump files?
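The free-space rule of thumb above is simple arithmetic and can be captured in a small helper (the function name is hypothetical, for illustration only):

```python
def export_space_estimate_gb(db_size_gb):
    """Rule of thumb from the text: plan 10%-15% of the used source
    database size as free space in the export file system.

    Returns a (low, high) estimate in GB."""
    return (0.10 * db_size_gb, 0.15 * db_size_gb)
```

For the 500 GB example used later in this unit, this yields the quoted 50 GB - 75 GB range for the first test migration.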

Figure 30: Required Target System Information

Figure 31: Migration Test Run

Generating the target database:
• Size the target database generously, or set it to an auto-extensible mode (if possible); this will prevent load errors caused by insufficient space. An analysis of disk usage cannot be performed until after the data has been loaded.

Configuring the test environment:
• RFC connections
• External interfaces
• Transport environment
• Backup
• Printer
• Archiving
• etc.

This should be a short walk through the technical migration.

Figure 32: Final Migration

A cut-over plan should be created, including an activity checklist and a time schedule. Include plenty of reserve time. The migration of a production system is often performed under intense time pressure. Checklists will help you to keep track of what is to be done and when to do it. Not all of the tests and checks which were done during previous test runs must necessarily be done again in the final migration. In most cases it makes sense to have one cut-over plan for the technical migration and a separate one for application-related tasks. Explain the necessary preparations for the final migration. As a final migration runs under time pressure, every step should be planned in advance.

Figure 33: SAP OS/DB Migration Check Verification

The “SAP OS/DB Migration Check Verification Session” should be scheduled 4 weeks after the final migration of the productive SAP System. This is because several weeks are required to collect enough data for a performance analysis. The “old” production system should still be available. As this check runs 4 to 6 weeks after going live, it is not intended to verify that the migration was successful; it again looks only for performance issues. This is the last point at which the migration source system needs to be available; the old production system can be switched off or deleted afterwards. Both ABAP and JAVA-based SAP Systems are checked.

Exercise 2: The Migration Project
Exercise Duration: 5 Minutes

Exercise Objectives
After completing this exercise, you will be able to:
• Create a migration project plan and a time schedule that is compliant with SAP requirements.

Business Example
To plan a system copy project, you must know about the proper timing and the required test phases. The database size will influence the expected downtime. You should know about the tasks of each OS/DB Migration Check service component and

Solution 2: The Migration Project

Task 1:
The SAP heterogeneous system copy procedure for productive systems requires a test phase between test and final migration, and also recommends not performing an upgrade to the next SAP System release until at least 6 weeks after the final migration.

1. What is the minimum duration recommended for the test phase?
a) Two weeks is the minimum amount of time to be considered between the test and final migration of a productive system.

2. What should be done in the test phase, and who should perform it?
a) The test phase should be used to check the migrated system with regard to the most important customer tasks and business processes. End users who know their daily business very well should do the major part of the testing. Two weeks might be sufficient even in complex environments.

3. What is the reason for the recommended time between the final migration and the next upgrade?
a) Every time a system has been copied to a different operating system and/or database, it takes some time to get familiar with it and to establish a smooth-running production environment. If an upgrade immediately follows the migration, the direct cause of problems may be hard to identify. First get the system stable, and then do the upgrade!

Task 2:
A customer SAP System landscape is made up of several systems. All systems have to be migrated to a different database.
System set 1 (ERP): Development, Quality Assurance, Production.
System set 2 (BW): Development, Production.

1. How many SAP OS/DB Migration Checks must be ordered?
a) System sets 1 and 2 contain productive systems. Because of this, two separate SAP OS/DB Migration Checks must be ordered.

2. How many system copies are involved? (More than one answer can be right)
a) System set 1: 1 x Development, 1 x Quality Assurance, 2 x Production.
Alternative: 1 x Development, 2 x Production, plus a homogeneous system copy from Production to Quality Assurance.
System set 2: 1 x Development, 2 x Production.

Task 3:
The facts listed below are known from inspecting the source system of a migration (ABAP Web AS with JAVA Add-In). Please indicate for every item what the impact on the R3LOAD/JLOAD migration will be.

1. The total size of the database is 500 GB (used space).
a) With a database size of 500 GB, it can be expected that the R3LOAD/JLOAD export will need about 10% - 15% (50 GB - 75 GB) of local disk storage.

2. The sizes of the largest ABAP tables are 34 GB, 20 GB, 18 GB.
a) The largest ABAP tables will significantly influence the amount of time necessary to export or import the database. A dedicated R3LOAD process for each large table will improve the export and import time.

3. The sum of all table and index sizes of the JAVA schema does not exceed 2 GB.
a) Because the JAVA tables will need only a little time to export, this will not be critical for the overall export time.

4. Transaction DB02 shows two tables belonging to the ABAP schema user that only exist on the database, but not in the ABAP Dictionary.
a) R3LDCTL only reads the ABAP Dictionary. Tables that exist on the database, but not in the ABAP Dictionary, are ignored. As a consequence they are not inserted into any *.STR file. The same happens to tables that belong to the JAVA schema but are not defined in the JAVA Dictionary: they will not be exported.

Task 4:
The SAP OS/DB Migration Check sessions have three major topics. Please explain the main tasks of each session type.

1. Project Audit Session
a) Checks for technical feasibility, a certified migration partner, and the time schedule.
2. Analysis Session
a) Performance analysis on the source system. Returns configuration and parameter recommendations for the target system.
3. Verification Session
a) Performance verification on the target system after going live. Returns updated configuration and parameter recommendations.

Unit 3: System Copy Methods

Unit Overview
This unit gives an overview of the available SAP system copy methods. Most important is the information about SAP products which cannot be migrated the standard way, and the R3LOAD restrictions that apply if the PREPARE phase of an upgrade was run or an Incremental Table Conversion (ICNV) was not finished.

Lesson Overview
Contents
• Database-specific and -unspecific methods for SAP homogeneous or heterogeneous system copies (OS/DB Migrations)

Lesson Objectives
After completing this lesson, you will be able to:
• Evaluate the database-specific and -unspecific options for performing SAP homogeneous or heterogeneous system copies (OS/DB Migrations)

Mention that there are supported and unsupported methods to migrate an SAP System. This chapter discusses the supported ones.

Business Example
In a customer project, the best method to move a system from one platform to another must be determined. The right approach depends on the database involved and the type of operating system used.

Figure 34: Comment

Many of the methods students have used, such as Oracle EXP/IMP, are not supported by SAP. Make clear that any unsupported method is used at one's own risk! SAP support for problems on such system copies will be billed by SAP if a problem is clearly caused by the unsupported system copy method. However, even if a system was copied with an unsupported method, SAP will not deny further system support.

HP, IBM, Oracle, and others offer their own specific migration methods, for which they themselves are responsible. Any Hotline or Remote Consulting effort that results from the use of a copy or migration procedure that has not been officially approved by SAP will be billed.

Figure 35: R3LOAD Method

The message is: anything goes. R3LOAD might not be as fast as database-specific methods, but it is much more flexible with regard to different database versions on source and target system. For DB migrations or Unicode conversions there is no alternative. DB2 for LUW = DB2 for Linux, UNIX, and Windows. The above table shows that all SAP-supported database systems can be copied to each other using R3LOAD.

Note:
1. The database-specific methods might be faster than R3LOAD (if released by SAP).
2. The database-specific methods might be faster than R3LOAD for an OS migration (if released by SAP).

Figure 36: R3LOAD Restrictions (1)

Point out that the PREPARE phase is dangerous if specifically mentioned in the system copy guide and not revised in the system copy note (otherwise the upgrade must be done first). This is also true for homogeneous system copies! Give a brief explanation of database-specific objects (used by BW) and how they must be handled. Stress that BW 2.x systems must be upgraded first.

On earlier SAP releases, the PREPARE phase imports and implements ABAP Dictionary changes which cannot be unloaded consistently by R3LOAD. A complete reset of all PREPARE changes is not possible, and restarting the PREPARE phase on the migrated system will not help. If this applies to your SAP release, it is mentioned in the system copy guide and/or in a corresponding SAP Note. The Incremental Table Conversion implements database-specific methods which cannot be unloaded consistently by R3LOAD (danger of data loss). Before using R3LOAD, finish all table conversions! Transaction ICNV should not show any entries.

Figure 37: R3LOAD Restrictions (2)

For BW 3.0 and 3.1 R3LOAD system copies, the appropriate Support Package level must be applied and a certain patch level of R3LOAD and R3SZCHK is required (according to SAP Note 777024).

Related SAP Notes:
• 771209 “NetWeaver 04: System copy (supplementary note)”
• 777024 “BW 3.0 and BW 3.1 System copy (supplementary note)”
• 888210 “NetWeaver 7.**: System Copy (supplementary note)”

Figure 38: Database-Specific System Copy Methods (ABAP)

Point out that this is a list for Web AS Systems based on ABAP only. In the case of a JAVA Add-In, it must be checked that Backup/Restore is supported. In the case of Oracle, transportable table spaces can be a solution for an operating system migration as well. If RMAN is used, the endian type can be changed as well. Certain databases can even be migrated to other operating systems by a simple restore. However, heterogeneous system copies using database-specific methods must be approved by SAP. If in doubt, contact SAP before executing this kind of OS migration. The SAP OS/DB Migration Check is required anyway!

Notes on database-specific methods for ABAP-based systems (make sure that the method is also valid for JAVA Add-In installations):
1. DB2: Copy – database copy on the same host; Dump – database copy to another host.
2. DB4: SAVLIB/RSTLIB method, see SAP Note 585277.
3. DB6: Database director (redirected restore) or brdb6 tools.

4. DB6: Cross-platform restore since DB2 UDB version 8 (for AIX, HP-UX, Solaris), see SAP Note 628156.
5. HDB: Check http://help.sap.com/hana_appliance for the respective guide.
6. INF: Informix Level 0 Backup, see SAP Notes 89698, 173970.
7. ADA: Cross-platform restore if source and target OS are of the same endian type, see SAP Note 962019.
8. MSS: Detach/Attach database files, see SAP Notes 151603, 339912.
9. ORA: The SAPINST Backup/Restore method is released for all products, see SAP Notes 659509, 147243.
10. ORA: Transportable Tablespace / Database, see SAP Notes 1035051, 1003028, 1367451.
11. SYB: Backup/Restore, see SAP Note 1591387.
Operating system endian types: see SAP Note 552464.
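Whether a cross-platform restore is an option at all hinges on the endian types of source and target hosts matching (see SAP Note 552464 on operating system endian types). A minimal, hypothetical check for the host a script runs on:

```python
import sys

def host_endianness():
    """Report the byte order of the host this runs on.

    A cross-platform restore (e.g. MaxDB per SAP Note 962019) requires
    source and target hosts to agree on this value."""
    return sys.byteorder  # 'little' or 'big'

def same_endianness(source, target):
    """True if a same-endian-type restore is at least conceivable."""
    return source == target
```

Run once on each host and compare the results; a mismatch rules out the endian-dependent restore methods listed above, leaving R3LOAD (or RMAN, where released) as the path.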

Figure 39: Database-Specific System Copy Methods (JAVA)

It is not enough to copy the database content! There are many parameters which must be adapted on the target system. SAPINST will make sure that this is done the right way.

SAPINST runs an internal function called the “Migration Tool Kit” (“Migration Controller”) to adjust the SAP JAVA target system to the new instance name, instance number, host name, etc. If the time schedule allows it, this might be a good point to insert unit 11: Special Projects. It takes about 30 minutes. You can also insert this unit after unit 9: Advanced Migration Techniques on day 2.

Exercise 3: System Copy Methods

Business Example
For an SAP system move, the available options and their specific prerequisites should be known. R3LOAD is quite flexible, but needs more time for the export/import compared to a backup/restore scenario. Nevertheless, there can be good reasons to use R3LOAD anyway.

Solution 3: System Copy Methods

Task 1:
The homogeneous copy of an ABAP system performed with database-specific means is in most cases much faster than using the R3LOAD method.

1. What could be some of the reasons for using the R3LOAD method?
a) The source and target systems use the same operating system and database type, but different versions.
b) The target disk layout is completely different from the source system, and the database-specific copy method does not allow adapting to new disk layouts.
c) If the database storage unit names include the SAP SID, the installation of the target database according to the R3LOAD method will allow you to choose new names.
d) Data archiving is done in the source database, and the system copy to the target system should also be used to reduce the amount of required disk space.
e) Systems should be moved in or out of an MCOD database.

2. Which specific checks should be done before using R3LOAD to export the source system?
a) Make sure the PREPARE for the next SAP upgrade was not started (if this restriction applies to your SAP System release) and verify that the Incremental Table Conversion (ICNV) has completed.

Task 2:Some databases allow OS migrations of SAP systems using database specific means.1. Is it necessary in this case to order an SAP OS/DB Migration Check for productive systems?

a) It does not matter which method is used to perform a heterogeneous system copy of a productive SAP ABAP System; the SAP OS/DB Migration Check is required anyway.
2. Is a test and final migration required for productive systems?
a) A test and a final system migration are required when performing an SAP heterogeneous system copy.
3. Must one be certified in order to perform an OS/DB migration?
a) Yes, an OS/DB migration certification is required to perform the system copy.

Unit 4: SAP Migration Tools

Unit Overview
This unit describes the SAP migration tools in detail. It also describes the tasks of R3SETUP/SAPINST and the phases in which they call the migration tools. The R3LOAD and JLOAD export directory structure will be discussed.

SAP Migration Tools

Business Example
You want to know which SAP tools are executed during an export/import based system copy, and what the specific differences between the ABAP and JAVA system copy are.

Figure 40: Installation Programs R3SETUP and SAPINST

This slide is only used to show that the tasks and features of both programs are very similar. R3SETUP can run in character mode where no graphical environment is available. SAPINST requires JAVA and a graphical environment which it supports (Microsoft Windows or X-Windows).

Figure 41: ABAP DDIC Export and DB Object Size Calculation

It is also important to know that R3LDCTL contains SAP release-specific (hard-coded) table definitions. Explain which files are created, and why R3SZCHK does not exist for 3.1I and 4.0B. Stress the fact that table DDLOADD is filled by R3SZCHK to store table and index sizes, which will later be sorted and written to *.EXT files.

R3LDCTL reads the ABAP Dictionary to extract the database-independent table and index structures and writes them into *.STR files. Every version of R3LDCTL contains release-specific, built-in knowledge about the table and index structures of specific SAP internal tables, which cannot be retrieved from the ABAP Dictionary. R3LDCTL creates the DDL<DBS>.TPL files for every SAP-supported database. Since 6.40, additional DDL<DBS>_LRG.TPL files are generated to support system copies of large databases more easily.

As of version 4.5A, the size computation of tables and indexes was removed from R3LDCTL (R/3 Load Control) and implemented in a separate program called R3SZCHK (R/3 Size Check), which also creates the *.EXT files. R3LDCTL is still used for *.EXT file generation on 3.1I and 4.0B. R3LDCTL/R3SZCHK can only run as a single process (no parallelization is possible). The table DDLOADD is used to store the results of the table/index size calculation. R3SZCHK generates the target database size file DBSIZE.XML for SAPINST. The size calculation is limited to a maximum of 1.78 GB for each database object (table or index).
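The 1.78 GB per-object limit of the size calculation can be illustrated with a small sketch. The function name is hypothetical: R3SZCHK itself works on table DDLOADD, not on Python lists; this only shows the effect the cap has on a size estimate.

```python
MAX_OBJECT_GB = 1.78  # per-object ceiling of the R3SZCHK size calculation

def capped_sizes(object_sizes_gb):
    """Apply the per-object cap described in the text, as it would affect
    each table or index size before any target-database size estimate."""
    return [min(size, MAX_OBJECT_GB) for size in object_sizes_gb]
```

Consequently, a 34 GB table contributes only 1.78 GB to the computed target size, which is one reason the generated DBSIZE values should be treated as a lower bound and the target database sized generously.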

Figure 42: ABAP Data Export/Import

Understand that there is no checksum on the dump files created by R3LOAD 3.1I or 4.0B. File corruption may not be discovered at load time, and strange errors might happen. The compression runs on block level. There is no tool available to uncompress an R3LOAD dump file; this may answer questions regarding data security when transferring R3LOAD dump files over public networks. The restart capability has its limits where power failures or operating system crashes terminate the R3LOAD export or import phase. See unit 10: Troubleshooting.

The standard R3LOAD implementation contains an EBCDIC/ASCII conversion for LATIN-1 character sets only. Other translation tables are available upon request. Note that 4.6C is the last R/3 version which runs on EBCDIC. 4.6C SAP Systems running on AS/400 (iSeries) must be converted to ASCII before an upgrade to a higher release is possible. Character set conversions to Unicode have been implemented since R3LOAD 6.10. The conversion is done at export time, as additional information that is necessary for it is only available in the source system.

Before the data export/import, R3LOAD performs a syntax check on the *.STR files. This prevents unintended overlaps between field names in tables and R3LOAD key words, as well as other inconsistencies. If an R3LOAD process terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action. Special care must be taken on restarts after OS crashes, power failures, and out-of-space conditions on the export disk (see the troubleshooting section).

As of Release R/3 4.5A, R3LOAD writes information about the source system into the dump file. R3LOAD checks these entries when starting the import. If source and target OS or DB are different, R3LOAD needs a valid migration key to perform the import. The parallel export/import of single tables using multiple R3LOAD processes has been supported since R3LOAD 6.40.
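For restart monitoring it can be useful to summarize the status entries of the R3LOAD task files mentioned later in this unit. The sketch below assumes the commonly seen *.TSK line layout `<object type> <name> <action> <status>` (e.g. `T BKPF C ok`); verify the exact format for the release in use, and treat the helper as hypothetical.

```python
from collections import Counter

def tsk_status_summary(lines):
    """Summarize task-file entries for restart monitoring.

    Assumed line layout: '<object type> <name> <action> <status>'.
    Entries whose status is not 'ok' are the ones a restart would
    have to redo; malformed or empty lines are ignored."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) == 4:
            counts[parts[3]] += 1
    return dict(counts)
```

A summary with only 'ok' statuses indicates a package that completed; anything else marks where the restart function (or troubleshooting, after a crash) has to pick up.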

Figure 43: ABAP Migration Tools Compatibility

It is not possible to use, for example, an R3LOAD 4.6D on 4.0B, or an R3LOAD 6.40 on 6.20. The tool must always fit the kernel used and released by SAP. If R3SETUP 4.6D is used to install a 4.0B Oracle system, this does not mean that you can also use R3LOAD 4.6D! SAP can change the installation programs freely. For SAP migration tool version dependencies, see the relevant SAP Notes. For special considerations on migration tools for Release 3.x, see the relevant SAP Notes for 3.1I. From time to time, SAP provides updated installation software to support new operating system or database versions for the installation of older SAP releases directly. These updates might have new installation programs, but will still use the matching R3LDCTL, R3SZCHK, R3LOAD, and kernel versions for the SAP System release in question.

Figure 44: DDL Statements for Non-Standard DDIC Objects

The students should understand that since NetWeaver ’04, every SAP System can contain non-standard DDIC objects (BW objects)! Since then, SMIGR_CREATE_DDL should be executed for all system types. The report SMIGR_CREATE_DDL generates DDL statements for non-standard database objects and writes them into <TABART>.SQL files. The <TABART>.SQL file is used by R3LOAD to create the non-standard DB objects in the target database, bypassing the information in the <PACKAGE>.STR files. Non-standard objects use DB-specific features/storage parameters which are not part of the ABAP Dictionary (mainly BW objects). Since NetWeaver ’04, BW functionality is an integral part of the standard; now customers or SAP can decide to implement BW objects on any system. The report must run to make sure that no non-standard DB objects get the wrong storage parameters on the target system.

The report RS_BW_POST_MIGRATION performs the necessary adaptations for DB-specific objects in the target system (mainly BW objects). Required adaptations can be the regeneration of database-specific coding, maintenance of aggregate indexes, ABAP Dictionary adaptations, and many others. The program should run

independently, regardless of whether a <TABART>.SQL file was used or not. The reports above are not applicable to BW 2.x versions!

Figure 45: ABAP Web AS – Source System Tasks ≤ NW 04

This slide shows which tasks are executed by R3SETUP/SAPINST and what is done by R3LDCTL, R3SZCHK, and R3LOAD. Give a short explanation of the benefit of splitting *.STR files. Point out that the *.CMD files are created by R3SETUP/SAPINST. The *.TSK files are created by SAPINST by calling R3LOAD with special options. Make clear that the content of table DDLOADD is used to compute the size of the target database. MIGMON can run optionally; explain the difference between MIGMON server and client mode. Explain when to start SMIGR_CREATE_DDL and how to deal with the <TABART>.SQL file. Depending on the database, update statistics may or may not be required before the size calculation.

R3SETUP/SAPINST calls R3LDCTL and R3SZCHK to generate various control files for R3LOAD and to perform the size calculation for tables and indexes. R3LDCTL also does the size calculation for tables and indexes on R/3 releases before 4.5A.

Once the size of each table and index has been calculated, R3SETUP/R3SZCHK computes the required database size. R3SETUP generates a DBSIZE.TPL; R3SZCHK creates a DBSIZE.XML for SAPINST. Optionally, MIGMON can be used to reduce the unload and load time significantly. A special exit step to call MIGMON has been implemented in SAPINST since NetWeaver ’04. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented. R3SETUP/SAPINST/MIGMON generates R3LOAD command files for every *.STR file. SAPINST/MIGMON calls R3LOAD to generate task files for every *.STR file. The splitting of *.STR files improves unload/load times. For table splitting, the usage of MIGMON is mandatory (6.40 and later)!
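SAPINST generates the MIGMON property file from its dialogs; for a manual run, a minimal file can be written by hand. The key names below (exportDirs, installDir, orderBy, jobNum) follow the commonly documented export_monitor_cmd.properties layout, but they are assumptions here and must be checked against the Migration Monitor guide for the release in use.

```python
def write_migmon_properties(path, export_dirs, job_num=3):
    """Write a minimal, hypothetical MIGMON property file.

    Key names are assumptions based on the commonly documented
    export_monitor_cmd.properties layout; verify them against the
    Migration Monitor guide for your release."""
    lines = [
        f"exportDirs={export_dirs}",   # where the export dump is written
        "installDir=.",                # MIGMON working directory
        "orderBy=size",                # export the largest packages first
        f"jobNum={job_num}",           # parallel R3LOAD processes
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

Ordering packages by size and tuning the number of parallel jobs are exactly the levers the text credits MIGMON with for reducing unload/load time.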

Figure 46: ABAP Web AS – Target System Tasks ≤ NW 04

The *.CMD and *.TSK files are not copied from the source system! They are created again from scratch. The ABAP DDIC consistency check makes sure that the ABAP DDIC fits the DB DDIC (i.e. the table field order). A detailed explanation of this DDIC step should be given when discussing the example import. MIGMON can run optionally.

Explain the task of RS_BW_POST_MIGRATION. Depending on the database type, the database is installed with or without support from R3SETUP or SAPINST. Optionally, MIGMON can be used to reduce the unload and load time significantly. A special exit step to call MIGMON was implemented in SAPINST for NetWeaver ’04. Earlier versions of SAP systems can benefit from MIGMON as well; appropriate break-points must be implemented in the R3SETUP/SAPINST installation flow. After the data load, it is necessary to run update statistics to achieve the best possible performance.

Ensuring ABAP DDIC (Dictionary) consistency means that the program “dipgntab” is started to update the SAP System “active NAMETAB” from the database dictionary (the table field order). The last step in each migration process is to create database-specific objects by calling SAP programs via RFC. To be successful, the password of user DDIC in client 000 must be known. The report RS_BW_POST_MIGRATION is called as one of the post-migration activities required to bring the system to a proper state; it is required since ABAP Web AS 6.40 (NetWeaver ’04) and for all SAP Systems using BW functionality based on Web AS 6.20. For table splitting, the usage of MIGMON is mandatory (6.40 and later)!

Figure 47: ABAP Web AS – Source System Tasks ≥ NW 7.0

Explain that SAPINST calls MIGMON to handle the export. In newer SAPINST versions there is an option to skip the update statistics step. Since NetWeaver 7.0 (NetWeaver ’04S), some SAPINST functionality has been removed and MIGMON is called instead. The above slide shows that the whole R3LOAD handling is done by MIGMON. SAPINST implements the MIGMON parameter-related dialogs and generates the MIGMON property file. After the export is completed, MIGMON gives control back to SAPINST. Even if MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.

Figure 48: ABAP Web AS – Target System Tasks ≥ NW 7.0

Explain that SAPINST calls MIGMON to handle the import. This makes it possible to run the export and import in parallel.

SAPINST uses MIGMON for the import as well. The export and the import can run at the same time, as long as the target system has already been prepared. Even if MIGMON is configured automatically by SAPINST, it can still be configured and called manually for special purposes.

Figure 49: ABAP Web AS – Export Directories and Files

All files of the dump directory must be copied to the target system! Do not forget LABEL.ASC. The <TABLE>-#.WHR files only exist in case of table splitting. Since NetWeaver 7.00, SAPINST creates an ABAP and a JAVA subdirectory automatically. Below the two directories we find the well-known directory structures of the previous releases again.

R3SETUP and SAPINST automatically create the shown directory structure on the named dump file system. During the export procedure, the files are then copied to the specified directory structures.

Since NetWeaver 7.0, the dump directory contains an ABAP and/or a JAVA subdirectory to store the exports in one location, but separated by name. The *.STR, *.TOC, and the dump files are stored in the DATA directory. All *.EXT files are stored in the corresponding database subdirectory. Under UNIX, the directory names are case-sensitive.

The <TABART>.SQL and SQLFiles.LST (since 7.02) files exist only if the report SMIGR_CREATE_DDL created them and they were copied to the database subdirectory (automatically by SAPINST, or manually according to the system copy instructions). In most SAPINST implementations, the *.EXT files are only copied for Oracle to DB/<DBS>.

Example target database: Oracle
*.STR, *.TOC, and *.<nnn> files are stored in <dump directory>/DATA
*.EXT files and the target database size file DBSIZE.* are stored in <dump directory>/DB/ORA
The DDLORA.TPL file is stored in <dump directory>/DB

At import time, R3SETUP and SAPINST read the content of file LABEL.ASC to verify the dump directory location. The *.WHR files only exist if the optional table splitting was used.
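Under the rules above, a typical ABAP export dump directory for an Oracle target could be sketched like this (a summary of the file placements named in the text; the exact layout depends on the release):

```
<dump directory>/
    LABEL.ASC              dump directory label, checked at import time
    ABAP/
        DATA/              *.STR, *.TOC, *.<nnn> dump files, *.WHR (table splitting)
        DB/
            DDLORA.TPL
            ORA/           *.EXT, DBSIZE.*, <TABART>.SQL, SQLFiles.LST (since 7.02)
    JAVA/                  JAVA export (see the following figures)
```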

Figure 50: JAVA Data Export/Import

The JLOAD features should be compared with R3LOAD. As of NetWeaver ’04, JAVA data is stored in a database, but there are still JAVA applications storing persistent data in the file system. JLOAD deals with database data only; file system data is covered by SAPINST functionality.

JLOAD is not designed to be a stand-alone tool. For migrating a JAVA-based SAP system, SAPINST needs to perform additional steps which are specific to the version and the installed software components.

Unlike R3LOAD, which exports only table data, JLOAD can export both the dictionary definitions and the table data into dump files.

JLOAD writes its data in a format that is independent of database and platform. This format can be read and processed on all platforms supported by SAP. If JLOAD terminates with an error, a restart function allows the data export/import to be continued after the last successfully recorded action.

Before NetWeaver 7.02, one single JLOAD process did the whole export or import. Starting with 7.02, multiple JLOAD processes can run simultaneously. As of SAPINST for NetWeaver 7.02, package and table splitting is available for JLOAD.

Figure 51: JLOAD Job File Creation using JPKGCTL

JPKGCTL creates the JLOAD job files and supports package and table splitting. It was introduced with SAPINST 7.02 and can be switched on with a certain environment variable. Later versions might have this active by default.

In previous versions, JLOAD not only exported the table data, it also generated its own export/import job files. Starting with NetWeaver 7.02, JPKGCTL is used for this. Because of the need for faster exports and imports, package and table splitting was implemented. As a consequence, it was necessary to separate the metadata export from the table data export, to allow a separate table creation for split tables.

All JLOAD processes are now started by JMIGMON. The JLOAD package size information is stored in “sizes.xml”.

Figure 52: JAVA Target DB Size Calculation

It is important to know that JSIZECHECK does not provide any size information about tables or indexes. The output is a DBSIZE.XML file, which is stored in the DB subdirectories of the export file system. JSIZECHECK does not have the R3SZCHK limitation of 1.78 GB per table/index!

The size calculation is not limited to a certain object size (unlike R3SZCHK). Files containing “Initial Extents” (like the *.EXT file for R3LOAD) are not required for JLOAD.

In case of a database change during a heterogeneous system copy, the conversion weights for data and indexes are calculated using master data/index sizes. The export sizes are converted to import sizes using the conversion coefficients, and 20-30% additional space is added for safety reasons. If the computed size is less than certain default values (for example, 1 GB for Oracle), the default sizes are used in the output file.
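The conversion logic can be illustrated with invented numbers; the coefficient and the safety margin below are examples for the sketch, not values JSIZECHECK actually uses:

```shell
# Illustrative arithmetic only: convert an export size to a target size.
EXPORT_GB=10                          # exported data size
CONVERTED=$(( EXPORT_GB * 12 / 10 ))  # apply an example conversion coefficient of 1.2
TARGET=$(( CONVERTED * 125 / 100 ))   # add a 25% safety margin
echo "computed target size: ${TARGET} GB"
```

If such a computed value fell below the database default (for example, 1 GB for Oracle), the default would be written to DBSIZE.XML instead.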

Figure 53: Flow Diagram JAVA Add-In / JAVA System Copy

This diagram shows the export and import order. Explain that NW 04 exports the ABAP and JAVA stack separately (in case of Java Add-In). Since NW ’04S, SAPINST does this in a single step.

DB = Database
CI = Central Instance

Figure 54: JAVA Web AS – Source System Tasks NW 04 / 04S

It is important to understand that everything in the file system must be collected by SAPINST or by SDM. JLOAD reads the database content only.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed. The log files for JSIZECHECK can be found in the installation directory. For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives. The Software Deployment Manager (SDM) is called to put its file system components (including the SDM repository) into the SDMKIT.JAR file. JLOAD is called to export the JAVA meta and table data.

In NW 04, SAPINST must be called twice: once for the ABAP export and a second time for the JAVA part. Since NW 04S, SAPINST provides a selection for JAVA Add-In which exports the ABAP and the JAVA part in a single step.

Figure 55: JAVA Web AS – Target System Tasks NW 04 / 04S

JLOAD restores the database content only. SDM reapplies its repository. After the import, SAPINST adjusts certain table contents regarding the new environment (instance, hostname, etc.). This internal SAPINST functionality is called in the step "Java Migration Toolkit".

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

The database software installation is only required in cases where a JAVA Web AS is installed using its own database, as opposed to a JAVA Add-In installation into an existing ABAP database. JLOAD is called to load the database. SDM file system software components are re-installed (re-deployed). Application-specific data is restored from SAPCAR archives. Various post-migration tasks must be done to bring the system to a proper state.

Since NW 04S, SAPINST provides a selection for JAVA Add-In which imports the ABAP and the JAVA part in a single step.

Figure 56: JAVA Web AS – Source System Tasks – JPKGCTL

In all NetWeaver versions using JPKGCTL, JMIGMON is mandatory for the export. Package and table splitting is optional but recommended. Explain that the SDM is no longer used in 7.10 and later.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

JSIZECHECK is called to create the DBSIZE.XML files for all target databases where this file is needed. The log files for JSIZECHECK can be found in the installation directory. JPKGCTL distributes the JAVA tables to package files (job files) and can optionally split tables.

JMIGMON calls JLOAD to export the JAVA table data. For applications storing their persistent data in the file system, SAPINST collects the files into SAPCAR archives; since 7.10 this is no longer required. The Software Deployment Manager (SDM) is called to put its file system components (including the SDM repository) into the SDMKIT.JAR file; since 7.10 this is no longer required either.

The JPKGCTL/JMIGMON is active only if the environment variable “JAVA_MIGMON_ENABLED=true” was set before starting SAPINST 7.02. If the environment variable was not set, the export looks like in NW 04S. Later versions of SAPINST will use JPKGCTL/JMIGMON by default.
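The activation described above might look like this on UNIX; the sapinst call is commented out because the path to the installation master medium is installation-specific (the path shown is a hypothetical example):

```shell
# Enable JPKGCTL/JMIGMON before starting SAPINST 7.02.
export JAVA_MIGMON_ENABLED=true
# /sapcd/IM_LINUX_X86_64/sapinst   # hypothetical path to the SAPINST executable
echo "JAVA_MIGMON_ENABLED=$JAVA_MIGMON_ENABLED"
```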

Figure 57: JAVA Web AS – Target System Tasks – JPKGCTL

In all NetWeaver versions using JPKGCTL, JMIGMON is mandatory for the import.

Note: The above graphic describes general steps which are important for a JAVA Web AS system copy. The steps can vary in their order.

SDM file system software components are re-installed (re-deployed); since 7.10 this is no longer required. JLOAD is called to load the database. Application-specific data is restored from SAPCAR archives; since 7.10 this is no longer required either. Various post-migration tasks must be done to bring the system to a proper state.

The JPKGCTL/JMIGMON is active only if the environment variable “JAVA_MIGMON_ENABLED=true” was set before starting SAPINST 7.02. If the environment variable was not set, the import looks like in NW 04S. Later versions of SAPINST will use JMIGMON by default.

Figure 58: JAVA Web AS – Export Directories and Files

The APPS directory is empty if no JAVA applications are installed that store their data in the file system. The JLOAD *.STA files are stored outside the install directory, which may be confusing to the students.

Since NetWeaver 7.00, SAPINST creates an ABAP and a JAVA subdirectory automatically. Below the two directories we find the well-known directory structures of the previous releases again.

The JLOAD.LOG, *_<PACKAGE>.LOG, and *.STAT.XML files are created in the SAPINST installation directory. The *_<PACKAGE>.STA files are in the SAPINST installation directory or in /usr/sap/<SAP SID>/<instance>/j2ee/sltools. The “SOURCE.PROPERTIES” file contains information that is used to create the central instance on the target system.

Directories: Applications (APPS), DB, JLOAD Dump (JDMP), Software Deployment Manager (SDM)

The DB subdirectories contain the target database size files created by JSIZECHECK (since SAPINST for NetWeaver 7.0). The APPS directory holds archives from applications storing their persistent data in the file system. The subdirectories and files are only created if the application is installed and known by SAPINST; otherwise, application-specific instructions must be performed to copy the required files to the target system (see the respective SAP Notes). Examples of such applications are: ADS (Adobe Document Services), PORTAL (SAP Portal), KM (Content Management and Collaboration).

The APPS and SDM directories may disappear in future releases, as no JAVA-relevant persistent data is stored in the file system anymore.

Figure 59: Changes in NetWeaver 7.10 and later

For the students it will be hard to understand that SAPINST 7.02 offers more features than SAPINST 7.10. Explain the SAP release order.

Since NetWeaver 7.10, the Software Deployment Manager (SDM), which used a file-system-based repository, is not used anymore. The repository is now stored in the database and can be exported with JLOAD. JAVA applications were changed to store no persistent data in the file system. As a result, SAPINST no longer needs to collect application files for system copies.

As NetWeaver 7.10 (released for certain SAP products only) was available before SAPINST 7.02, JLOAD package and table splitting is not available for this version. Releases using SAPINST functionality based on 7.02 and higher may provide these features later on. Please check the system copy guides and SAP Notes for updates.

Figure 60: SL Toolset – ABAP/JAVA Dual Stack Split

Note: The ABAP/JAVA Dual Stack Split is intended to be used in a homogeneous system copy scenario, but not for heterogeneous migrations.

The name Software Logistics Toolset stands for a product-independent delivery channel which delivers up-to-date software logistics tools: http://service.sap.com/sltoolset

As of SAP NetWeaver 7.0 including Enhancement Package 3 and SAP Business Suite 7i2011, which is based on SAP NetWeaver 7.0 including Enhancement Package 3, the installation of SAP dual-stack systems is no longer supported. Furthermore, as of SAP Business Suite 7i2011, it will no longer be possible to upgrade an SAP dual-stack system to a higher release.

Related SAP Notes:
• 1655335 Use Cases for Splitting Dual-Stack Systems
• 1685432 Dual-Stack Split 2.0 SP1 for Systems Based on SAP NetWeaver
• 1563579 Central Release Note for Software Logistics Toolset 1.0

Definition of a Dual-Stack System

An SAP system that contains installations of both Application Server ABAP (AS ABAP) and Application Server Java (AS Java). A dual-stack system has the following characteristics:
• Common SID for all application servers and the database
• Common startup framework
• Common database (with different schemas for ABAP and Java)

Available options for splitting a dual-stack system that is based on SAP NetWeaver into one ABAP stack and one Java stack, each with its own system ID (the dual-stack system is reduced to an ABAP system and the Java system is reinstalled):

Move JAVA database: Export the JAVA stack and import it into a separate database. Remove the original JAVA stack.
Keep JAVA database: Export the JAVA stack and import it into the same database, but as an MCOD installation. Remove the original JAVA stack.
Remove JAVA stack: Similar to "Keep JAVA database", but without installation and import into a new system.

Stress the fact that the normal JLOAD procedure does not support the removal of the JAVA stack. Use the SL Toolset Dual-Stack Split instead. Please be aware that it is intended for homogeneous system copies only, not for heterogeneous migrations.

Demonstration:

Purpose: Demonstration of the export
1. Export the Java System DEJ
2. Export preparation of ABAP System DEV (generate DBSIZE.XML)
3. Table splitting preparation for ABAP System DEV
4. Export the ABAP System DEV

Remarks: In general, the DEV system can be exported and imported into QAS in parallel. If you decide to present a parallel export/import, run steps 1-3 above, and then start the target system import. After the database is created (~30 min), SAPINST will wait for the first MIGMON signal file (*.SGN). Now start the export of DEV. Do not run more than three R3LOADs for export or import, because more would overload the system.

Exercise 4: SAP Migration Tools

Solution 4: SAP Migration Tools

Task 1:
R3LDCTL reads the ABAP dictionary and writes database-independent table and index structures into *.STR files.
1. As the *.STR file only contains database-independent structures, how is R3LOAD able to assemble a CREATE TABLE SQL statement for the target database?
a) R3LDCTL creates DDL<DBS>.TPL template files, which contain all the necessary information to assemble a CREATE TABLE SQL statement for the target database. Information from the *.STR and *.EXT files is used to fill the table- or index-specific part of the statement.
2. Not all the tables within *.STR files can be found with transaction SE11 (table maintenance) in the SAP System. A look at the database dictionary confirms that these tables do exist. What is the reason?
a) Tables that make up the ABAP Dictionary itself, or are used by internal kernel functions, cannot be viewed with standard dictionary transactions. R3LDCTL has built-in knowledge about these tables and can write their structures directly into the *.STR files.
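To illustrate the answer to Task 1, a strongly simplified DDL<DBS>.TPL fragment for Oracle might look like the following. The placeholder syntax is a sketch recalled from typical templates and may differ in detail; consult a real DDLORA.TPL for the exact format:

```
cretab: CREATE TABLE &tab_name&
        ( /{ &fld_name& &fld_desc& /-, /} )
        TABLESPACE &tablespace&
        STORAGE (INITIAL &init& NEXT &next&)
```

R3LOAD fills the placeholders with the table and field definitions from the *.STR file and the initial extent size from the *.EXT file.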

Task 2:
The program R3SZCHK computes the size of each table, primary key, and index.
1. The target database of a system copy does not require INITIAL EXTENTs when creating a table. What else can be the purpose of the size computation?
a) The sizes of tables and indexes are used to compute the amount of disk space that will be required to create the target database. The Package Splitters rely on the size information from the *.EXT files.

Task 3:
Every R3LOAD process needs a command file to start a data export or import.
1. Which programs generate the command files?
a) The programs R3SETUP, SAPINST, or MIGMON create the command files.
2. How do the programs know how many command files to create if no table splitting is involved?
a) Command files are created for every *.STR file that can be found.
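The rule from Task 3 (one command file per *.STR file) can be mimicked with a small shell sketch; the file names are invented, and the real command file content is generated by R3SETUP, SAPINST, or MIGMON:

```shell
# Simulate the "one command file per STR file" rule with empty placeholder files.
mkdir -p /tmp/demo_export && cd /tmp/demo_export
touch SAPAPPL0.STR SAPAPPL1.STR SAPSSEXC.STR
for str in *.STR; do
  touch "${str%.STR}.cmd"   # a real *.cmd references the STR, TOC, and log files
done
ls *.cmd
```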

Task 4:
JLOAD is used to export the JAVA data, which is stored in the database.
1. How is JAVA Web AS-related file system data handled in NetWeaver 7.00?
a) The installed software components must be recognized by SAPINST 7.00 or by the tools which are called from it. Most of the file system data is collected in SAPCAR files, and the SDM data is stored inside the SDMKIT.JAR file. In addition, SAP system copy notes might give instructions on how to copy some files manually.

Unit 5
R3SETUP/SAPINST

Unit Overview
This unit describes the SAP installation programs R3SETUP and SAPINST. The control files will be explained. Emphasis is on how to implement user-defined break-points to stop R3SETUP/SAPINST before or after certain installation steps.

R3SETUP/SAPINST

Lesson Overview
Contents: The role of R3SETUP and SAPINST in the homogeneous or heterogeneous system copy process

Lesson Objectives
After completing this lesson, you will be able to:
• Understand how R3SETUP and SAPINST control the export and import processes of homogeneous or heterogeneous system copies, and how to influence their behavior.
• Recognize the structure of the R3SETUP *.R3S control files, and be able to adjust their contents if necessary.

There are no questions about this chapter in the examination.

Business Example
The export or import phase of an R3LOAD-based system copy should be improved. For that purpose, the installation tool R3SETUP/SAPINST must be stopped in certain phases. You need to know how to prepare the tools for that.

Figure 61: R3SETUP: *.R3S Files

Many of the students know R3SETUP from their own installations. Different versions of R3SETUP use different *.R3S files for doing the import. Because of this, the system copy manuals must be read to figure out what to use. The DBRELOAD.R3S file is only available for Oracle!

The command file DBEXPORT.R3S controls the database export of a homogeneous or heterogeneous system copy. CENTRAL.R3S calls other *.R3S files as selected. DBRELOAD.R3S is only used for re-loading an already finished installation (that is, after the test migration); it is available for Oracle only. Older *.R3S files are: CENTRDB.R3S for a combined installation of central instance and database, and CEDBMIG, used for a combined installation of central instance and database for homogeneous or heterogeneous system copies.

Figure 62: R3SETUP: *.R3S File Structure

It is important to understand the internal structure of *.R3S files, to be able to react to errors or to modify the contents of *.R3S files. Explain that the [EXE] section controls the execution order of the R3SETUP steps.

The command file consists of several sections. The beginning of a section is always indicated by the section name in square brackets. Each section contains a set of keys and corresponding parameter values.

The [EXE] section represents an installation roadmap with all of the steps listed in sequence. The steps are executed as listed (the step with the lowest number first). Some parameters are not written to the R3SETUP command file until runtime. Parameters which are preset by editing the *.R3S file will not be overwritten with default values.

After a section has been successfully executed, it receives the status OK. R3SETUP stops on error if a section cannot be executed; the section then receives the status ERROR. R3SETUP always reads the [EXE] section first to get the execution order, and then examines the status of each section. The first section with an ERROR status, or without any status, is executed next. Removing the OK status from a section forces R3SETUP to execute this section again.
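A minimal sketch of this structure (the step and section names are invented for illustration, not taken from a real installation kit):

```
[EXE]
10=EXTRACTKERNEL_IND_IND
20=DBCREATEDB_IND_ORA
30=DBR3LOADEXEC_IND_IND

[EXTRACTKERNEL_IND_IND]
STATUS=OK

[DBCREATEDB_IND_ORA]
STATUS=ERROR

[DBR3LOADEXEC_IND_IND]
```

On restart, R3SETUP would skip the first section (status OK), resume at the second (status ERROR), and then continue with the third, which has no status yet.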

Figure 63: R3SETUP: User-Defined Break-Points

It may not be obvious why break-points should be inserted into the R3SETUP execution flow, but this is a good point to explain it anyway. The emphasis should be on the fact that every break-point (exit step) will only be executed once.

Between the execution of two command sections in a *.R3S file, you may need to stop and make manual changes to the R3LOAD control files, modify database settings, or even call MIGMON. As shown in the graphic, R3SETUP can be forced to stop by implementing user-defined break-points.

The R/3 installation kits for Windows operating systems provide R3SEDIT.EXE for modifying *.R3S files in an easy way. SAP Note “118059 Storage parameter for system copy with R3load” describes how to implement user break-points. SAP Note “784118 System Copy Java Tools” explains how to find the MIGMON software on the SAP Service Marketplace. The MIGMON*.SAR archive contains a PDF document which shows how to use MIGMON with R3SETUP.

Figure 64: R3SETUP: LABEL.ASC

As different migration kits can be in charge when doing the export or import, the expected content of the LABEL.ASC file may differ. The slide shows how the expected LABEL.ASC content can be found inside a *.R3S file.

The content of the LABEL.ASC file in the export directory is compared against the expected string inside DBMIG.R3S, to make sure that the import is read from the right location. The same mechanism is used by SAPINST.
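The check can be reproduced manually with standard tools; the label string and paths below are invented for illustration:

```shell
# Recreate the comparison R3SETUP/SAPINST performs at import time.
mkdir -p /tmp/dump
echo "WEBAS620:EXPORT:1" > /tmp/dump/LABEL.ASC   # hypothetical label content
EXPECTED="WEBAS620:EXPORT:1"                     # as found inside DBMIG.R3S
if [ "$(cat /tmp/dump/LABEL.ASC)" = "$EXPECTED" ]; then
  echo "dump directory accepted"
else
  echo "dump directory rejected"
fi
```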

Figure 65: SAPINST: *.XML Files

The most important difference between R3SETUP and SAPINST is the usage of *.XML files. It is nearly impossible to change these files on an intuitive level, and a description of the files' content is not available. The expected LABEL.ASC content can be found in the “package.xml” file.

SAPINST records the installation progress in the “keydb.xml” file. SAPINST can continue the installation from a failed step, without having to repeat previous steps. The package.xml file contains the names of the installation media (CDs) and the expected LABEL.ASC content.

Figure 66: SAPINST: User-Defined Break-Points

Explain that everyone who makes use of the Step Browser functionality is individually responsible for the result. SAP does not support this feature! The current version of SAPINST can be checked by executing “SAPINST -v”. As long as the used SAPINST version does not provide a documented way to implement user break-points, the program must be forced to stop by intended error situations.

SAPINST, starting with 7.0 SR2, offers the possibility to manipulate step execution via a graphical user interface, the so-called “Step Browser”. The Step Browser shows the components and steps that make up an installation. You may manipulate the state of single steps, groups of steps, and even whole components and their sub-components. By invoking the context menu for a step and choosing “Insert Dialog Exit Step above Selection” or “Insert Dialog Exit Step below Selection”, you may stop an installation before or after a certain step. To activate the Step Browser, call SAPINST with the command line parameter “SAPINST_SET_STEPSTATE=true”.

Please be aware that the “Step Browser” functionality is not officially supported, so it is used at your own risk!

Show SAPINST Step Browser screen shots: D:\Additional_Files\TADM70\Templates+Doc\SAPINST_STEP_Editor SAPINST_STEP_EDITOR_screen_shots.pdf

Figure 67: Size of the Target Database

It is confusing that JSIZECHECK and R3SZCHK create files of the same name.

The values calculated for ABAP database storage units are estimates, like the values for the initial extents, and primarily serve as guidelines for sizing the target database. You will probably have to increase or decrease individual values during, or after, the first test migration.

ABAP tables and indexes that are larger than 1.78 GB will be normalized to an initial extent of 1.78 GB. The target database size calculation is based on estimates; adjust the database size manually if required. The JAVA DBSIZE calculation does not have table or index size limitations, but its result is based on estimates as well.

Exercise 5: R3SETUP/SAPINST

Solution 5: R3SETUP/SAPINST

Task 1:
The installation program R3SETUP is started with a command line containing the name of a “*.R3S” file to read (for example, “R3SETUP -f DBMIG.R3S”). The purpose of “*.R3S” files is not only to define installation steps; they are also used to store parameters and status information. R3SETUP sets the status of completed steps to OK and stops on error if a step cannot be executed successfully. An erroneous step gets the status “ERROR”. Every time R3SETUP is started, the “*.R3S” file is copied first, to have a backup of the original content. Next, R3SETUP begins the execution at the first step that has the status “ERROR”, or no status at all.

For repeated test migrations, or for the final migration of a production system, it would be helpful to have a DBMIG.R3S file that rebuilds the database without reinstalling the database software again. For this purpose, we need a “DBMIG.R3S” file which starts with the generation of an empty database.

1. What can be done to create such a “*.R3S” file? Different methods are possible.
a) Insert a break-point in the “*.R3S” file at the place where R3SETUP should stop. Copy the “*.R3S” file using a new name.
b) Remove the “STATUS=OK” lines from completed “*.R3S” files. Begin editing at the section where R3SETUP should start later on.

Caution: The step order is defined in the [EXE] section. If you reuse an already executed “*.R3S” file, be sure to remove the STATUS=OK lines from all sections following the [EXE] execution order. Do not skip steps. Use this method only if you want to repeat the installation exactly like it was done before!

2. What happens to R3SETUP parameters that were preset by hand?
a) R3SETUP does not overwrite preset parameters with default values. A description of each installation step and its related parameters can be found in the installation directory (sub-directory “doc”), or on the installation CD.

Task 2:
SAPINST stores all its installation information in “*.XML” files. As the file structure is neither easy to read nor documented, modifying the files can be risky, as it might cause unexpected side effects.

1. What can be done to force SAPINST to stop before a certain installation step?
a) Since SAPINST NetWeaver 7.0 SR2, the Step Browser can be used to insert an exit dialog before or after an installation step. Earlier SAPINST versions can only be stopped by forcing intended errors.

2. In the case where we need to repeat a system copy import, it would be useful to have a SAPINST that starts at a certain step. How could this be achieved without modifying the files?
a) Stop SAPINST before the step where you would like to start later on. Copy the entire installation directory as it is. Restore the saved installation directory to its original location to redo the installation.
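The save/restore approach from answer 2 can be sketched as follows; /tmp/demo_instdir stands in for the real SAPINST installation directory, and the keydb.xml content is invented to show the state change:

```shell
# Snapshot the installation directory at the break-point, then roll back later.
INSTDIR=/tmp/demo_instdir
mkdir -p "$INSTDIR" && echo "step 10 done" > "$INSTDIR/keydb.xml"
cp -rp "$INSTDIR" "${INSTDIR}.save"         # backup taken while SAPINST is stopped
echo "step 20 done" > "$INSTDIR/keydb.xml"  # SAPINST continues and records progress
rm -rf "$INSTDIR" && cp -rp "${INSTDIR}.save" "$INSTDIR"  # restore to redo the import
cat "$INSTDIR/keydb.xml"
```

After the restore, SAPINST sees the progress state recorded at the break-point and resumes from there.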

Unit 6
Technical Background Knowledge

Data Classes (TABARTs)

Lesson Overview
Purpose of Data Classes (TABARTs) in the ABAP DDIC and R3LOAD control files

Business Example
In the target database of a migration, some very large tables should be stored in customer-defined database storage units. For that purpose, you need to know how the ABAP data dictionary and R3LOAD deal with Data Classes/TABARTs.

Figure 68: Definition

Explain that the following slides are intended to be database-independent. To achieve this, the term database storage unit was chosen as a synonym for any database disk architecture.

By this definition, examples of database storage units are:
• Tablespaces (Oracle)
• Dataspaces (Informix)
• Tablespaces/containers (DB2 LUW)

The participants should understand that a TABART is an ordering criterion only. All tables belong to a certain TABART. Tables of a single TABART can be stored together somewhere in the database. The terms “TABART”, “data class”, and “table type” mean the same thing. TABART sounds more German, but it should be used throughout the training.

Figure 69: TABART – Table Types (1)

The table types are maintained in the ABAP Dictionary, regardless of the database used.

Figure 70: TABART – Table Types (2)

Tables in clusters or pools also contain TABART entries in their technical configuration. These entries do not become active unless the tables are converted to transparent tables.

Figure 71: TABART – Table Types (3)

Figure 72: TABART – Table Types (4)

Since NetWeaver ’04, the above TABARTs can be found in any SAP System based on Web AS 6.40 and later. Even if no BW InfoCube was created, some tables belonging to the TABARTs shown above do exist.

Figure 73: Tables DDART, DARTT, TS<DBS>

Explain that the TS<DBS> tables can even contain storage unit names from earlier SAP system versions. As long as they are not used, this is not a problem.

The TS<DBS> tables contain the list of all SAP-defined storage units in a database. Table DDART contains all the TABARTs that are known in the SAP System. Table DARTT contains short TABART descriptions in various languages.

Note: table TSDB2 may not exist in NetWeaver systems.

Figure 74: Assignment: TABART – Database Storage Unit

Show the content of at least TAORA and IAORA by using transaction SE11. Explain that even TABARTs which are mapped to non-existing storage units will cause no harm, as long as no tables are assigned to those TABARTs.

R3LDCTL reads tables TA<DBS> and IA<DBS>, and writes the assignments between TABARTs and database storage units into DDL<DBS>.TPL. Tables TA<DBS> and IA<DBS> only exist for databases with the appropriate architecture.

Figure 75: Technical Configuration – Table DD09L

In DD09L, tables are mapped to a TABART and a size category. Explain the consequences of changes in DD09L (tables end up in different *.STR files). The size category is a single value, which is the same for table and index. The size category in DD09L is used as the next extent value in *.STR files. The participants should understand that the next extent size is never computed or retrieved from the database!

DD09L: ABAP Dictionary, technical configuration of tables (TABART and TABKAT). R3LDCTL extracts the corresponding TABART and size category (TABKAT) for each table from table DD09L of the ABAP Dictionary. This information is written to the *.STR files.

Show the content of table DD09L by using transaction SE11. Select for table T000.

Figure 76: Table and Index Storage Parameters

Explain that the size category in DD09L is mapped in tables TG<DBS> and IG<DBS> to database-dependent values.

TG<DBS>/IG<DBS>: Assignment of size category (TABKAT = table category) to database storage parameters. Table TG<DBS> gives R3LDCTL the information (for example, for Oracle) about the size of “Default Initial Extent”, “Next Extent”, “Min Extent”, and “Max Extent”. This information is saved in the DDL<DBS>.TPL files. The assignment of a table to a specific table category is used to determine the “Next Extent Size” in *.STR. The “Initial Extent Size” actually used is calculated and saved in *.EXT.

Note: tables TGDB2 and IGDB2 may not exist in NetWeaver systems. Show the content of at least TGORA and IGORA by using transaction SE11.
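The chain of lookups described above can be summarized with invented example values (the tablespace name and extent sizes are illustrative):

```
DD09L : table T000 -> TABART APPL0, size category (TABKAT) 0
TAORA : TABART APPL0 -> tablespace PSAPSR3          (illustrative mapping)
TGORA : TABKAT 0     -> NEXT 16K, MINEXT 1, MAXEXT UNLIMITED
Result: SAPAPPL0.STR lists T000 with next extent category 0;
        the initial extent computed by R3SZCHK goes into SAPAPPL0.EXT
```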

Figure 77: Creating New TABARTs (1)

Slide 1 is an overview of which tables are involved; slide 2 shows how to implement a new TABART. The official way to apply a new TABART to DD09L is SE11, not a table update at database level! Otherwise the ABAP DDIC is not aware of the change, and the next update overwrites it.

If tables have been moved to customer-defined database storage units (that is, tablespaces) in the source database, these tables are only re-loaded into the correct storage units during a migration when the tables DARTT, DDART, TS<DBS>, IA<DBS>, TA<DBS>, and DD09L have been maintained correctly. The technical configuration of all tables (stored in DD09L) must include the correct TABART. After the tables have been unloaded, the files “DDL<Target DBS>.TPL” and “DDL<Source DBS>.TPL” should contain the customer-specific TABART and database storage unit names.

Note: change the content of DD09L by calling transaction SE11 (technical settings maintenance). This is a modification and is shown in SPDD later on. If you use database tools (for example, sqlplus) to update DD09L, the change is lost after an upgrade if the corresponding large table is an SAP-delivered one.

A fast check can be performed by calling R3LDCTL without parameters. R3LDCTL generates the files “SAP<TABART>.STR” and “DDL<DBS>.TPL” in the current directory. Duration: a few minutes.

See SAP Notes:
• 046272 Implement new data class in technical settings
• 163449 DB2/390: Implement new data class (TABART)

Figure 78: Creating New TABARTs (2)

For information on how to create a new TABART, see SAP Note 46272.

A customer TABART name must start with “Z” or “Y”, followed by four additional characters. If SAPDBA or BRSPACE was used to create additional tablespaces, TABART names like U####, USR##, and USER# can be seen as well. To prevent SAP upgrades from overwriting these definitions, the class for customer-created TABARTs must be “USR”. In the example above, the new tablespace will be used as the data and index storage location for table COEP.

It is recommended to name new database storage units after the TABART to identify their purpose, but this is not strictly necessary.

See SAP Notes:
• 046272 Implement new data class in technical settings
• 490365 Tablespace naming conventions

Figure 79: Moving Tables and Indexes Between SAP Releases

In the past, SAP has changed the mapping between tables and TABARTs in table DD09L several times. As a result, tables can move between different storage units when doing an R3LOAD system copy. This should not be a real problem unless the tables concerned require a special storage location.

During a homogeneous or heterogeneous R3LOAD system copy, tables may be moved unintentionally from one database storage unit to another. The reason for this could be that:
• Some tables were assigned to TABARTs of other database storage units, instead of the TABART of the storage unit where they were currently stored. R3LOAD always creates tables and indexes in locations obtained from the ABAP Dictionary.
• Older SAP System releases were installed with slightly different table locations than subsequent releases.
• ABAP Dictionary parameters were not properly maintained after the customer had re-distributed the tables to new database storage units.

If it is essential to have single tables stored in specific database storage units, check the *.STR files before starting an import. Table movement can significantly change the size of source and target database storage units.

If the Oracle reduced tablespace set is used for the target database, any considerations about table and index locations are irrelevant.

Lesson: Miscellaneous Background Information

Lesson Overview
Miscellaneous background information about table DBDIFF, R3LOAD/JLOAD data access, and R3SZCHK size computation.

Lesson Objectives
After completing this lesson, you will be able to:
• Explain the purpose of table DBDIFF
• Understand how the R3LOAD/JLOAD data access works
• Distinguish between the R3SZCHK behavior if the target database type is the same as or different from the source database type

The ABAP and the JAVA database access is done via the DBSL interface, which is the abstraction layer between physical and logical database access.

Business Example
You wonder why there are more tables in the *.STR files than are visible in the ABAP Dictionary transaction SE11, and why some objects are even defined differently. You also want to know how the ABAP data types are translated into database-specific data types.

Figure 80: Exception Table DBDIFF

Some tables might not be defined in the ABAP Dictionary, are database-specific, or otherwise need special treatment. Exceptions are listed in table DBDIFF. Tables that are not defined in the ABAP Dictionary are hard-coded in R3LDCTL.

R3LDCTL reserves special treatment for tables, views, and indexes contained in the exception table DBDIFF, since the ABAP Dictionary either does not contain information about these tables, or the data definitions intentionally vary from those in the database. Generally, this involves database-specific objects and the tables of the ABAP Dictionary itself.

Show the content of table DBDIFF by using transaction SE11. Show the field where the reasons are stored (show which options exist).

See related SAP Notes:
• 033814 DB02 reports inconsistencies between database & Dictionary
• 193201 Views PEUxxxxx and TEUxxxxx unknown in DDIC

Figure 81: Database Modeling of the ABAP Data Types

The DBSL interface is the translator between ABAP data types and database data types. If SAP decides to use a new data type on a database that better fits the ABAP data type, a change in the DBSL can do this, but it also means that the content of the involved tables must be converted, or exported and imported again.

The ABAP data types are modeled through the SAP database interface (DBSL) into the suitable data type for the database used. Refer to the ABAP Dictionary manual for further information. The SAP<TABART>.STR files contain the ABAP data types, not the data types of the database.

Different databases provide different data types and limitations to store binary or compressed data in long raw fields. If necessary, the DBSL interface stores the same amount of data in a different number of rows, depending on the database type. R3LOAD uses the interface to read/write data to/from the database.

Figure 82: R3LOAD – ABAP Data Access

This slide should make clear that R3LOAD does not need to know how to read or write database data. Everything database-specific is done by the DBSL interface, which is basically a library that is linked to every SAP program accessing the database (e.g., tp, R3trans, disp+work, R3up, …).

Figure 83: Consistency Check: ABAP DDIC – DB and Runtime

Transaction SE11 can be used to verify the consistency of tables. The participants should understand that the “active nametab” contains the activated dictionary objects, which are used to access table data.

Transaction SE11 can be used to check the consistency of individual tables or views. In this process, the system checks whether the table or view definitions in the ABAP Dictionary (DDIC) agree with the runtime object or database object. The data in the database is accessed via the runtime object of the active NAMETAB. Changes to the ABAP Dictionary are not written (and therefore are not effective) until they are activated in the NAMETAB.

The ABAP Dictionary should be OK in a standard SAP System. If you suspect that any tables are inconsistent, you can check them individually using transaction SE11. Sometimes tables exist in the active NAMETAB but not in the database. In this case, R3LOAD will stop the export on error. Fix the NAMETAB problem with appropriate methods, or mark the table entry as a comment in the *.STR file.

Figure 84: ABAP Size Computation: Tables, Indexes, Database

When performing a DB migration, the ABAP Dictionary information is used to calculate the size of tables and indexes (R3SZCHK option -s DD). No database-specific data is used, as the computation is for a different database! The sizing formulas are hard-coded inside R3SZCHK. If the DB stays the same, the table/index size information from the DB is used (R3SZCHK option -s DB).

In the case of a database change, the sizing information from the source database cannot be used to size the target database, since the data types and storage methods differ. In the case of a homogeneous system copy, the size values can be taken from the source database. Tables that have a large number of extents can be given a sufficiently large initial extent in the target database.

To determine the correct size values, the database statistics (update statistics and so on) must be current.

Figure 85: JAVA Data Dictionary

The exclude list defines which tables must not be exported (because they are views!). The name of the Java Data Dictionary table is BC_DDDBTABLERT. The name should only be mentioned if explicitly asked for.

The JAVA Web AS table and index definitions are stored as XML documents in the dictionary table. Exclude lists tell JLOAD and JPKGCTL (JSPLITTER) which objects must not be exported and which objects need special treatment during the export (e.g., removal of trailing blanks). A catalog reader (JAVA Dictionary browser) will be available with 7.10.

Note: The JAVA Dictionary table will only be filled with the XML documents describing the tables and indexes if JLOAD is used! Do not mix a JLOAD import with other methods (e.g., database-specific import tools).

Figure 86: JLOAD – Data Access

The participants should understand that the SAP JDBC interface is special and cannot be compared with a standard JDBC interface as delivered by database vendors.

The SAP JDBC interface implements specific extensions to the JDBC standard, which are used by SAP Open SQL (e.g., the SAP JAVA DDIC, SAP transaction logic, SAP Open SQL compatibility). JLOAD uses SAP Open SQL to access database data.

Exercise 6: Technical Background Knowledge
Exercise Duration: 10 Minutes

Business Example
You need to know how to handle customer-specific data classes/TABARTs, and you are interested in information about how the ABAP and JAVA data types are converted to database-specific data types.

Solution 6: Technical Background Knowledge

Task 1:
The OS migration of a large Oracle database was utilized to move the heavily used customer table ZTR1 to a separate tablespace. For that purpose, the necessary tasks were done in the ABAP Dictionary: TABART ZZTR1 was created and the tablespace name PSAPSR3ZZTR1 was defined.
1. Which changes were done to the ABAP Dictionary of the source system? Which tables were involved? Note the table entries.
a) Define the new TABART ZZTR1 in tables DDART and DARTT.
b) Add the new tablespace name to TSORA.

c) Map TABART ZZTR1 to tablespace PSAPSR3ZZTR1 in tables TAORA and IAORA.d) Change the TABART entry for table ZTR1 to ZZTR1 in table DD09L.Note: Table and index data can also be stored in the same tablespace.

Task 2:
A customer database was exported using R3LOAD. A look into the export directory shows that no additional *.STR files exist for tables that were stored in separate Oracle tablespaces. The ABAP Dictionary tables that are used to define additional TABARTs, containing the list of tablespaces and the mapping between TABART and tablespace, were properly maintained.
1. What is the reason that no additional *.STR files were created besides the standard ones?
a) The technical settings (table DD09L) of the involved objects were not changed. The existence of customer TABARTs does not cause the creation of additional *.STR files if no tables have been mapped to them.
2. What can be done in advance to check the proper creation of an *.STR file before starting a time-consuming export? Which steps are necessary?
a) R3LDCTL can be executed stand-alone as the <sapsid>adm user. If no command line parameters are provided, R3LDCTL will create *.STR and DDL<DBS>.TPL files in the current directory. It will take a few minutes. The created files can then be checked for proper content.
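The file check from the answer above can be scripted. The sketch below mocks the generated structure file, because the R3LDCTL run itself requires an SAP system; the "tab:" line layout and the file name SAPZZTR1.STR are illustrative assumptions, not guaranteed output.

```shell
# Mock of a TABART-specific structure file as R3LDCTL would generate it
# (illustrative layout; in a real check, run R3LDCTL without parameters
# as <sapsid>adm first and inspect the files it writes).
cat > SAPZZTR1.STR <<'EOF'
tab: ZTR1
EOF

# Verify that the custom table landed in the TABART-specific *.STR file.
if grep -q '^tab: ZTR1' SAPZZTR1.STR; then
  echo "ZTR1 found in SAPZZTR1.STR"
fi
```

In a real migration, the same grep across all SAP*.STR files shows quickly which package a table was assigned to.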

Task 3:
The *.STR files contain database-independent data type definitions as used in the ABAP Dictionary.
1. How is R3LOAD able to convert database-independent into database-specific data types?
a) R3LOAD does not need specific knowledge about the data types of the target database, because it calls the database interface (DBSL), which knows how to handle them.

Task 4:
Every database vendor provides a JDBC interface for easy database access.
1. Why is SAP using its own JDBC interface?
a) Standard JDBC interfaces do not provide features required by SAP applications. Important JDBC extensions are the usage of the SAP JAVA Dictionary and the implementation of the SAP transaction mechanism.

Unit 7
R3LOAD & JLOAD Files

This is the most important unit of the entire course. It describes every R3LOAD and JLOAD control file in every aspect. After the participants have heard this unit, they should know very well how to influence the R3LOAD and JLOAD behavior by manipulating the control file contents. They should also know what must never be touched.

Lesson: R3LOAD Files

Lesson Overview
Purpose, contents, and structure of the R3LOAD control and data files.

Figure 87: Overview: R3LOAD Control and Data Files

R3LOAD writes <PACKAGE>*.XML files during Unicode conversions. They contain the primary key of each row that cannot be properly translated to Unicode. The content is used by transaction SUMG to fix the problems in the target system. These files are not discussed in this course.

Figure 88: R3LOAD: DDL<DBS>.TPL

Figure 89: DDL<DBS>.TPL: Description

The “DDL<DBS>.TPL” files contain the database-specific description of the create table/index statements. R3LOAD uses these descriptions to generate the tables and indexes.

Depending on the database used, the primary key or secondary indexes are generated either before or after the data is loaded. Normally the R3LOAD-based data export is done sorted by primary key. This default behavior can be switched on and off in the DDL<DBS>.TPL file.

A negative list can be used to exclude tables, views, or indexes from the load process. Typical examples include tables LICHECK and MLICHECK. The assignment of TABART and data/index storage is made here for databases that support the distribution of data among database storage units. “Next Extent Size” classes are defined separately for tables and indexes, provided this is supported by the target database. Database-specific drop, delete, and truncate data SQL statements can be defined for better performance in R3LOAD restart situations.

Figure 90: DDL<DBS>.TPL: Naming Conventions

Explain that the DDL<DBS>_LRG.TPL files are used to support unsorted exports and, in the case of Oracle, parallel index creation.

The “DDL<DBS>.TPL” files are generated by R3LDCTL. Since R3LDCTL 6.40, “DDL<DBS>_LRG.TPL” files are created to support unsorted exports (where it makes sense). For Oracle, parallel index creation was added. You may also see a DDLMYS.TPL file, but this is not used.

Starting with NW 7.02 SP9 or ERP 6.0 EHP 2, migrations to Sybase ASE are supported. The Sybase ASE-related template file is called DDLSYB.TPL.

SAP Note: 1591424 SYB: Heterogeneous system copy with target ASE

Figure 91: DDL<DBS>.TPL: Internal Structure ≤ 4.6D

Function / Section names:
• Create primary index order, sorted / unsorted export: prikey
• Create secondary index order: seckey
• Create table: cretab
• Create primary key: crepkey
• Create secondary index: creind
• Do not create and load table: negtab
• Do not create index: negind
• Do not create view: negvie
• Do not compress table: negcpr
• Storage location: loc
• Storage parameters: sto

Figure 92: DDL<DBS>.TPL: Structure – Create Table

The DDL files are templates used by R3LOAD to generate database-specific SQL statements for creating tables, primary keys, and secondary indexes. Variables are indicated by “&” and filled with values from the *.STR and *.EXT files, and from the storage sections of the DDL<DBS>.TPL file itself.

Figure 93: DDL<DBS>.TPL: Structure − Create Index

Secondary indexes can be unique or non-unique. Primary keys are always unique.

Figure 94: DDL<DBS>.TPL: Structure − Negative List

The negative list can be used to prevent tables, indexes, and views from being loaded. The entries are separated by blanks and can be inserted into a single line.
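To make the negative list concrete, a minimal illustrative excerpt is shown below. The section name "negtab" is taken from the internal-structure overview in this lesson, and the entries are the typical examples named in the text; the exact file layout may vary by release.

```
negtab: LICHECK MLICHECK
```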

Figure 95: DDL<DBS>.TPL: Structure − Table Storage

The default initial extent is only used when no <PACKAGE>.EXT exists or when it does not contain the table.

New TABARTs for additional storage units (e.g., tablespaces for Oracle) can be added to the DDL<DBS>.TPL by changing the table and index storage parameters. If you do this, change the *.STR files and the corresponding create database templates for R3SETUP (DBSIZE.TPL) or SAPINST (DBSIZE.XML). It is easier to change the ABAP Dictionary before the export than to change the R3LOAD control files.

If R3LOAD cannot find a specific table or index entry in the <PACKAGE>.EXT file, the missing entry is ignored and default values are used.

Figure 96: DDL<DBS>.TPL: Structure − Index Storage

The default initial extent is only used when no “<PACKAGE>.EXT” exists, or when it does not contain the index. The same index storage parameters are used for primary and secondary indexes.

Figure 97: DDL<DBS>.TPL: Structure − Second Example

A less complex example from MaxDB.

Figure 98: DDL<DBS>.TPL: Internal Structure ≥ 6.10

Do not change the sections marked “do not change” unless explicitly asked to do so in an SAP Note or by SAP support.

Function / Section names:
• Create primary key order, sorted / unsorted export: prikey
• Create secondary index order: seckey
• Create table: cretab, drop table: drptab
• Create primary key: crepky, drop primary key: drppky
• Create secondary index: creind, drop secondary index: drpind
• Create view: crevie, drop view: drpvie
• Truncate data: trcdat
• Delete data: deldat
• Do not create table: negtab
• Do not load data: negdat
• Do not create index: negind
• Do not create view: negvie
• Do not compress table: negcpr
• Storage location: loc
• Storage parameters: sto

Figure 99: DDL<DBS>.TPL: Structure − DROP/DELETE Data

Above are the templates for dropping objects and deleting/truncating table data. The “&where&” condition is used when restarting the import of split tables. All other sections are similar to 4.6D and below. Some functions apply to specific database types or database releases only.

Figure 100: R3LOAD: <PACKAGE>.EXT

Figure 101: <PACKAGE>.EXT: Initial Extent (1)

The <PACKAGE>.EXT files are created for all database types, because the extent values are used to compute the size of the target database (DBSIZE.TPL/DBSIZE.XML) and for package splitting.

Figure 102: <PACKAGE>.EXT: Initial Extent (2)

The size of the “initial extent” is based on assumptions about the expected space requirements of a table. Factors such as the number and average length of the data records, compression, and the data type used play an important role. In the case of Oracle dictionary-managed tablespaces, the values for the “initial extent” can be increased or decreased as required. Observe database-specific limitations for maximum “initial extent” sizes.

R3SZCHK limits the maximum initial extent to a value of 1.78 GB. This was implemented to prevent data load errors of very large tables caused by insufficient contiguous space in a single storage unit; otherwise small tables or indexes could easily block the storage unit. Today’s database releases handle storage more flexibly, making this mechanism obsolete. Even if the maximum size of a table is limited to 1.78 GB (more precisely 1700 MB), this information is accurate enough for package splitting.

If R3LOAD cannot find a specific table or index entry in the <PACKAGE>.EXT file, the missing entry is ignored and default values are used.

Typical warnings in R3SZCHK.log when reaching the size limit:
• WARNING: REPOLOAD in SLEXC: initial extent reduced to 1782579200
• WARNING: /BLUESKY/FECOND in APPL0: initial extent reduced to 1782579200
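The byte value in the warnings above matches the 1700 MB limit exactly, as a quick arithmetic check shows:

```shell
# 1700 MB expressed in bytes (1 MB = 1024 * 1024 bytes)
echo $((1700 * 1024 * 1024))   # prints 1782579200
```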

Figure 103: R3LOAD: <PACKAGE>.STR

Figure 104: <PACKAGE>.STR: Description (1)

The term “package” is used as a synonym for R3LOAD structure files (*.STR). The data of tables in SAP0000.STR will never be exported. ABAP report loads must be regenerated on the target system.

The ABAP Nametab tables DDNTF / DDNTT (and since 6.x DDNTF_CONV_UC / DDNTT_CONV_UC for Unicode conversions) require a certain import order. The JAVA-based Package Splitter makes sure that the Nametab tables are always put into the same file (SAPNTAB.STR).

If R3LOAD cannot find a specific table or index entry in the <PACKAGE>.EXT file, the missing entry is ignored and default values are used.

Figure 105: <PACKAGE>.STR: Description (2)

The buffer flag is used for OS/390 migrations (as of Release 4.5A). It indicates how to buffer tables in an OS/390 DB2 database.

Table type (conversion type with code page change):
• C = Cluster table
• D = Dynpro (screen) table
• N = Nametab (active ABAP Dictionary)
• P = Pooled table
• Q = Unicode conversion related purpose
• R = Report table
• T = Transparent table
• X = Unicode conversion related purpose

R3LOAD activity:
• all = Create table/index and load data
• data = Load data only (table must be created manually)

• struct = Create table/index, but do not load any data

For tables that are marked with “struct”, R3LOAD will not create a data export or import row inside the task file. This prevents the export or import of unwanted table data. Comments are indicated by a “#” character in the first column.

Figure 106: <PACKAGE>.STR: Object Structure (1)

The total of the field lengths is the offset of the next data record to read in the “<PACKAGE>.<nnn>” file.

Figure 107: <PACKAGE>.STR: Object Structure (2)

The “dbs:” list specifies databases for which the object should be created. A leading “!” means the opposite. In the above example, the index MLST~1 will be created on all databases except ADA and MSS. The index MLST~1AD will be created on ADA and MSS only. The “dbs:” list was implemented starting with R3LOAD 6.40.

Figure 108: SAPVIEW.STR: View Structure

Views are not generated in the target system until all of the tables and data have been imported. The corresponding “SAPVIEW.EXT” file does not contain any entries, or does not even exist, since views do not require any storage space other than for their definition in the DB Data Dictionary.

Figure 109: R3LOAD: <PACKAGE>.TOC

Figure 110: <PACKAGE>.TOC: Description

The content of the <PACKAGE>.TOC file is used by R3LOAD version 4.6D and below to restart an interrupted export. As of R3LOAD 6.10, the <PACKAGE>.TSK file is used for the restart.

Figure 111: <PACKAGE>.TOC: Internal Structure ≤ 4.6D

Figure 112: <PACKAGE>.TOC: R3LOAD Restart Export

The above restart description is only valid for R3LOAD 4.6D or lower! A restart without option “-r” will force R3LOAD to begin the export at the very first table of the *.STR file concerned. The existing export *.LOG file will be automatically renamed to *.SAV and the existing *.TOC file will be reused, but not cleared. It is recommended to delete the related *.LOG, *.TOC, and dump files before repeating a complete export of a single *.STR file or of the whole database.

If R3LOAD export processes are interrupted due to a system crash or a power failure, the *.TOC file may list more exported tables than the dump file really contains (since the operating system was not able to write all the dump file buffers to disk). In this case, a restart can be dangerous, as it starts after the last *.TOC entry, which might not be valid. This can lead to missing data or duplicate keys later on. See the troubleshooting chapter for details on how to prevent this situation.

R3SETUP adds the “-r” command line option automatically when restarting R3LOAD.

Figure 113: <PACKAGE>.TOC: Internal Structure ≥ 6.10

Since R3LOAD 6.10, the *.TSK file is used to restart a terminated data export! The *.TOC file is read to find the last write position only.

Figure 114: <PACKAGE>.TOC: Internal Structure ≥ 6.40

In the case of split tables, the WHERE condition used during the export is written into the respective *.TOC file. Before starting the import, R3LOAD compares the WHERE condition in the *.TOC file against the WHERE condition in the *.TSK file. R3LOAD assumes a problem if they do not match and stops on error. If there is an error during data load and R3LOAD must be restarted, the WHERE condition is used for selective deletion of already imported data.

Unicode code pages: 4102 Big Endian, 4103 Little Endian. Non-Unicode code pages: 1100, MDMP (for exports of MDMP systems).

Figure 115: R3LOAD: <PACKAGE>.<nnn>

Figure 116: <PACKAGE>.<nnn>: Description

Depending on the source database used for the export, a data compression ratio of between 1:4 and 1:10 or more can be achieved. The compression is performed at block level, so the file cannot be decompressed as a whole.

Some versions of R3SETUP/SAPINST ask for the maximum dump file size (other versions use different defaults - check the *.CMD file for the used value). Each additional dump file (for the same *.STR file) is assigned a new number (such as SAPAPPL1.001 or SAPAPPL1.002). The files of a PACKAGE are all generated in the same directory (if not specified differently in the *.CMD file – ≥ 6.10 only!). Make sure that the available disk space is sufficient.

A checksum calculation at block level is implemented as of R3LOAD 4.5A to ensure data integrity. R3LOAD versions 4.5A and above compare source system information obtained from the dump file against the actual system information. If R3LOAD detects a difference in OS or DB, a migration key is necessary to perform the import (see GSI section in export log file).

Figure 117: <PACKAGE>.<nnn>: Internal Dump File Structure

R3LOAD reads a certain amount of database data into an internal buffer and compresses it. The number of written blocks (group) depends on the compression result and the block size used. This figure is also written into the dump, to tell R3LOAD how many blocks to read later on.

Since 4.5A, a header block is used to identify heterogeneous system copies and to verify the migration key. Also implemented with 4.5A: every group of compressed data blocks has its own checksum. Before a checksum can be verified, all blocks of a group must be read by R3LOAD. If a dump file has been corrupted during a file transfer, typical R3LOAD read errors are: RFF (cannot read from file), RFB (cannot read from buffer), or “cannot allocate buffer of size ...”. For more details, see unit “Troubleshooting”.

Figure 118: R3LOAD: <PACKAGE>.LOG

Figure 119: <PACKAGE>.LOG: Export Log ≤ 4.6D

The header entries (GSI) of the export *.LOG file show important information which can be useful for the migration key generation.

Figure 120: <PACKAGE>.LOG: Export Log ≥ 6.10

Since R3LOAD 6.10, the installation number of the exported system has been added.

Figure 121: <PACKAGE>.LOG: Import Log ≤ 4.6D

Figure 122: <PACKAGE>.LOG: R3LOAD Restart Import

Up to R3LOAD 4.6D, the restart point for an interrupted import is read from the <PACKAGE>.LOG file. The restart performs a delete data (DELETE FROM) or a drop table/index. A restart without option “-r” will force R3LOAD to begin at the very first table of the *.STR. The existing import *.LOG file will be automatically renamed to *.SAV. The import process will terminate on error, as the database objects already exist. R3SETUP adds the “-r” option automatically when restarting R3LOAD.

Figure 123: <PACKAGE>.LOG: Import Log ≥ 6.10 and < 6.40

Since R3LOAD 6.10, only the *.TSK file is used to restart an interrupted import! The restart point for the data load is the first entry in the *.TSK file with status error (err) or execute (xeq).

Figure 124: <PACKAGE>.LOG: Import Log ≥ 6.40

As of R3LOAD 6.40, separate time stamps for create table, load data, and create index are implemented. This allows much better load analysis than previous releases.

Figure 125: R3LOAD: <PACKAGE>.CMD

Figure 126: <PACKAGE>.CMD: Description

Command files are automatically generated by the SAP installation programs R3SETUP, SAPINST, and MIGMON.

Figure 127: <PACKAGE>.CMD: Internal Structure ≤ 4.6D

The “<PACKAGE>.CMD” files contain the names and paths of the files from which R3LOAD retrieves its instructions. The name of the “<PACKAGE>.CMD” file must be supplied on the R3LOAD command line. R3LOAD dump files can be redirected to different file systems by adapting the “dat:” entry. The default maximum size (fs) of a dump file is often 1000M (1000 MB).

Possible units:
• B = Byte
• K = Kilobyte
• M = Megabyte
• G = Gigabyte

Do not change the block size (bs).

Meaning of section names:
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location

• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)

The DDL<DBS>.TPL file is often read from the installation directory. In this case, R3SETUP/SAPINST copied it from the export directory. This is done to allow adapting storage locations and so on.

Figure 128: <PACKAGE>.CMD: Internal Structure ≥ 6.10

Meaning of section names:
• tsk: Task file
• icf: Independent control file
• dcf: Database dependent control file
• dat: Data dump file location (up to 16 different locations)
• dir: Directory file (table of contents)
• ext: Extent file (not required at export time)

In the above example, the first dump file SAPPOOL.001 will be written to /migration/DATA, the second dump file SAPPOOL.002 to /mig1/DATA, and so on. The 4th, 5th, and subsequent dump files will be stored in the last defined dump location. If more than one PACKAGE is mentioned in a *.CMD file, a single R3LOAD will execute them in sequential order. This might be useful in certain cases.
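As a rough illustration of the section names listed above, a ≥ 6.10 command file for the SAPPOOL package could look as follows. The paths are hypothetical, and the exact syntax (quoting, the bs/fs options on the "dat:" lines) may differ by R3LOAD release; always compare with a file generated by SAPINST/MIGMON.

```
tsk: /migration/DATA/SAPPOOL.TSK
icf: /migration/DATA/SAPPOOL.STR
dcf: /migration/install/DDLORA.TPL
dat: /migration/DATA/ bs=1k fs=1000M
dat: /mig1/DATA/ bs=1k fs=1000M
dir: /migration/DATA/SAPPOOL.TOC
ext: /migration/DATA/SAPPOOL.EXT
```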

Figure 129: R3LOAD: <PACKAGE>.STA

Figure 130: <PACKAGE>.STA: Description

The values are estimates and serve primarily to display the load progress. The generation of statistics files is switched off by default. Use R3LOAD option -s <stat file> to make use of the statistics feature.

Figure 131: R3LOAD: <PACKAGE>.TSK

Figure 132: <PACKAGE>.TSK: Description

Since R3LOAD uses task files, the restart points are no longer read from *.LOG or *.TOC files. Complex restart situations with manual user interventions are minimized or easier to handle. Objects or data can easily be omitted from the import process by simply changing the status of the corresponding <PACKAGE>.TSK row. See SAP Note 455195 “R3LOAD: Purpose of TSK Files” for further reference.

Figure 133: <PACKAGE>.TSK: Internal Structure for Export

The slide above shows the initial <PACKAGE>.TSK file content after it was created by R3LOAD. Please check unit 8 “Advanced Migration Techniques” for the table split case.

Figure 134: <PACKAGE>.TSK: Internal Structure for Import

The above <PACKAGE>.TSK file shows the content after R3LOAD has stopped on error. The corresponding <PACKAGE>.LOG file contains the error description/reason. Please check unit 8 “Advanced Migration Techniques” for the table split case.

Figure 135: <PACKAGE>.TSK: Syntax Elements

The “<PACKAGE>.TSK” files are used to define which objects have to be created and which data has to be exported/imported by R3LOAD. They are also used to find the right restart position after a termination.

Status:
• xeq = Task not yet processed.
• ok = Task successfully processed.
• err = Failure occurred while processing the task. The next run will drop the object or delete/truncate data before re-doing the task.
• ign = Ignore task, do nothing.

The status “ign” can be used to omit a task action and to document it as well. Setting a task manually to “ok” will have the same result, but it is not visible for later checks. There is also an action “D” which can be used to delete objects with R3LOAD, but it is used in exceptional cases only.
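An illustrative task file excerpt for a hypothetical customer table ZTR1 is shown below. The assumed column layout is: object type (e.g., T = table, D = table data, P = primary key, I = index), object name, action, status; the status values are the ones defined above. See SAP Note 455195 for the authoritative format.

```
T ZTR1 C ok
D ZTR1 I err
P ZTR1~0 C xeq
```

Read as: the table was created, the data load failed and will be re-done on restart, and the primary key has not been processed yet.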

Figure 136: <PACKAGE>.TSK: Create Task File

R3LOAD creates the “<PACKAGE>.TSK” files from existing “<PACKAGE>.STR” files.

Example: Create *.TSK file for export
R3LOAD -ctf E SAPAPPL0.STR DDLORA.TPL SAPAPPL0.TSK ORA -l SAPAPPL0.log

After starting the database export or import, R3LOAD renames <PACKAGE>.TSK to <PACKAGE>.TSK.BCK and inserts line by line from <PACKAGE>.TSK.BCK into a new <PACKAGE>.TSK as soon as a task (create, export, import, ignore) has finished successfully (status: ok) or unsuccessfully (status: err). R3LOAD automatically deletes <PACKAGE>.TSK.BCK after each run. In the case of a restart, R3LOAD searches the <PACKAGE>.TSK for uncompleted tasks with status “err” or “xeq”, and executes them. In the case of table splitting, the content of the WHERE file (*.WHR) is added to the task file.
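The restart search described above — find the first task with status "err" or "xeq" — can be sketched outside R3LOAD as a one-line scan of the task file. The mock file below uses an illustrative four-column layout (object type, name, action, status); the real format is documented in SAP Note 455195.

```shell
# Mock task file after a failed data import (illustrative layout).
cat > SAPAPPL0.TSK <<'EOF'
T ZTR1 C ok
D ZTR1 I err
P ZTR1~0 C xeq
EOF

# Print the first uncompleted task, i.e. the restart position.
awk '$4 == "err" || $4 == "xeq" { print; exit }' SAPAPPL0.TSK
```

Here the scan stops at the failed data load of ZTR1, which is exactly where a restarted R3LOAD would resume.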

Figure 137: <PACKAGE>.TSK: Merge Option (1)

In rare cases, it may be necessary to rebuild an already used <PACKAGE>.TSK file after a hard termination caused by operating system crashes, power failures, etc. This must be done by merging the file <PACKAGE>.TSK.BCK with <PACKAGE>.TSK.

Note: If more than one R3LOAD is executing the same task file by accident, one of the processes will find an existing <PACKAGE>.TSK.BCK file and then stop on error. This should prevent running parallel R3LOAD processes against the same database objects.

The “-merge_bck” option can only be used in combination with “-e” or “-i”. The export or import will start immediately after the merge is finished! The merge option “-merge_only” merges the <PACKAGE>.TSK.BCK into the <PACKAGE>.TSK files, but does not start an export or import.

Figure 138: <PACKAGE>.TSK: Merge Option (2)

R3LOAD stops on error if a <PACKAGE>.TSK.BCK file is found, as it is not clear how to proceed. For example, a power failure interrupted the import processes and R3LOAD was not able to clean up the <PACKAGE>.TSK.BCK and <PACKAGE>.TSK files. The current content of both files is shown above.

Figure 139: <PACKAGE>.TSK: Merge Option (3)

After R3LOAD has been restarted with option "-merge_bck", the content of <PACKAGE>.TSK.BCK is compared against <PACKAGE>.TSK, and the missing lines are copied to <PACKAGE>.TSK. At this stage, it is not known whether objects not listed in <PACKAGE>.TSK already exist in the database. R3LOAD solves this problem by changing the status of each "xeq" line to "err", to force a "DROP" or "DELETE" statement before repeating an import task.
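The effect of "-merge_bck" can be sketched with standard text tools; the actual merge is of course done by R3LOAD itself, and the file contents below are invented:

```shell
# P.TSK.BCK holds the full task list from before the crash;
# P.TSK holds only what had been rewritten up to the termination point.
cat > P.TSK.BCK <<'EOF'
T TAB01 C ok
D TAB01 I xeq
P TAB01~0 C xeq
EOF
cat > P.TSK <<'EOF'
T TAB01 C ok
EOF

# Copy the lines missing from P.TSK, turning "xeq" into "err" so that
# a DROP/DELETE is forced before each copied task is repeated:
grep -Fxv -f P.TSK P.TSK.BCK | sed 's/ xeq$/ err/' >> P.TSK
rm P.TSK.BCK
cat P.TSK
```

After the merge, P.TSK again lists every task, and the formerly pending ones carry status "err", matching the behavior described above.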

Figure 140: <PACKAGE>.TSK: Merge Option (4)

After the task file merge is completed, R3LOAD attempts to drop each object before creating it. Errors caused by the drop statements are ignored.

Figure 141: <PACKAGE>.TSK: R3LOAD Restart Behavior

No special R3LOAD restart option is necessary! The rare exceptions are hard terminations caused by power failures and operating system crashes.

Export write order: dump data, <PACKAGE>.TOC, <PACKAGE>.TSK

Figure 142: R3LOAD: <TABART>.SQL

Figure 143: Why Do BW Objects Need Special Handling?

In the case of BW non-standard database objects, the ABAP Dictionary contains table and index definitions that are not sufficient to describe all object properties. The missing information is held in the BW meta data (e.g. partition information, bitmapped indexes, ...). R3LDCTL reads the ABAP Dictionary only; additional information from the BW meta data cannot be inserted into *.STR files. The *.STR file content is enough to export and import BW data via R3LOAD, but it is insufficient to create the BW objects in the target system.

To overcome these limitations of R3LDCTL and R3LOAD, the report SMIGR_CREATE_DDL was developed, which writes database-specific DDL statements into *.SQL files. R3LOAD was extended to switch between the normal way of creating tables and indexes and the direct execution of DDL statements from a *.SQL file. This makes it possible to create non-standard database objects and to load data into them using R3LOAD.

Figure 144: <TABART>.SQL: File Generation

The report SMIGR_CREATE_DDL is mandatory for all systems using non-standard database objects (mainly BW objects). Since NetWeaver 7.02, SMIGR_CREATE_DDL inserts the list of created <TABART>.SQL files into the file SQLFiles.LST.

Figure 145: <TABART>.SQL: Content Variants

Example 1: R3LOAD creates table /BI0/B0000103000 using the supplied CREATE TABLE statement. Depending on the DDL<DBS>.TPL, content data is loaded before or after the creation of the primary key /BI0/B0000103000~0.

Example 2: R3LOAD creates table /BI0/B0000106000 and primary key /BI0/B0000106000~0 in a single step. Afterwards, the data is loaded. As the /BI0/B0000106000~0 SQL section is empty, R3LOAD will not try to create /BI0/B0000106000~0 again. This configuration is used to make sure that table and index are always created together, independently of the DDL<DBS>.TPL entries.

Example 3: R3LOAD creates table /BI0/B0000108000 and loads data into it. As the index /BI0/B0000108000~0 has no SQL section, no further action is required. The table will not have a primary key.

Empty SQL sections are used to prevent R3LOAD from creating objects according to the *.STR file content.

Figure 146: <TABART>.SQL: Example Content (1)

The example above combines a CREATE TABLE and a CREATE UNIQUE INDEX statement, forcing R3LOAD to load data after the index creation (which can be useful for some table types). The variable &APPL0& is replaced according to the DDL<DBS>.TPL content for the TABART.
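The placeholder substitution can be mimicked with sed (the CREATE TABLE fragment and the tablespace name are invented; the real mapping comes from DDL<DBS>.TPL):

```shell
# Hypothetical SQL fragment as found in a <TABART>.SQL file:
cat > APPL0.SQL.sample <<'EOF'
CREATE TABLE "/BI0/B0000103000" ( ... ) TABLESPACE &APPL0&
EOF

# R3LOAD substitutes &APPL0& based on the DDL<DBS>.TPL entry for the
# TABART; here a sample value stands in for the real template lookup:
sed 's/&APPL0&/PSAPSR3/' APPL0.SQL.sample
```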

Figure 147: <TABART>.SQL: Example Content (2)

Figure 148: R3LOAD: Execution of External SQL Statements

Since R3LOAD 7.20, the file SQLFiles.LST is examined and the existence of the <TABART>.SQL files mentioned in it is verified. The SQLFiles.LST is searched for in the current directory first, and then in the export DB directory. SAPINST takes care that the <TABART>.SQL files and the SQLFiles.LST are put into the right place.

Before R3LOAD assembles the first SQL statement, it searches for a <TABART>.SQL file that matches the TABART of the first object. The <TABART>.SQL file is searched for in the current directory first, and then in the DB/<DBS> directory. All object names in the <TABART>.SQL file are added to an internal list (index). R3LOAD then scans the internal list for a matching object name before assembling a create object statement. If a match is found, the SQL statement from the <TABART>.SQL file is used instead of building a statement according to the DDL<DBS>.TPL content.

R3LOAD can only read one <TABART>.SQL file per *.STR file! Before R3LOAD 7.20, the usage of a <TABART>.SQL file may not be mentioned in the import log file.

Figure 149: Import Log Showing the <TABART>.SQL Search

The SQLFiles.LST is read by R3LOAD to retrieve the *.SQL file names. R3LOAD aborts if a <TABART>.SQL file mentioned in the list cannot be found. This was implemented as an additional safety mechanism. Independent of the SQLFiles.LST content, R3LOAD searches for the <TABART>.SQL files based on the TABART in the respective <PACKAGE>.STR file. Even if a <TABART>.SQL file is not in the SQLFiles.LST, it is used if R3LOAD finds it.
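The existence check that R3LOAD applies to SQLFiles.LST can be sketched as follows (file names are invented):

```shell
# Hypothetical SQLFiles.LST plus the files it refers to:
cat > SQLFiles.LST <<'EOF'
APPL1.SQL
ODS.SQL
EOF
touch APPL1.SQL ODS.SQL

# Verify that every file mentioned in SQLFiles.LST exists, as R3LOAD
# does before assembling the first SQL statement; abort on a miss:
while read -r f; do
  [ -f "$f" ] || { echo "missing: $f"; exit 1; }
done < SQLFiles.LST
echo "all listed SQL files found"
```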

Figure 150: Common R3LOAD Command Line Options (1)

Increasing the commit count can improve database performance if a database monitor shows that the database is slowed down by the number of commits rather than by the loading of the data. Changing the value can also decrease performance, so load tests are recommended. The default commit count corresponds to approximately 1 commit per 10,000 rows.

The "-k" or "-K" option is not valid for R3LOAD below 4.5A. For additional R3LOAD options, see "R3LOAD -h".

The option "-continue_on_error" is dangerous for the export! On MDMP systems, R3LOAD 6.x automatically uses a dummy code page called "MDMP", which indicates "do no conversion". The MDMP code page entry can be seen in the *.TOC file. For the conversion of MDMP systems to Unicode, see unit 11 "Special Projects".

Figure 151: Common R3LOAD Command Line Options (2)

The statistics data file is useful to watch the load progress of large data dump files.

Option "-o" can be combined with option "-ctf" (create task file), "-e" (export), and "-i" (import). In combination with "-ctf", the corresponding tasks are not inserted into the *.TSK file. The "-o" option is used, for example, in the case of the import of split tables:

• R3LOAD -ctf I → resulting task file content:
T TAB01 C xeq
D TAB01 I xeq
P TAB01~0 I xeq
• R3LOAD -o D -ctf I → resulting task file content:
T TAB01 C xeq
P TAB01 C xeq

Since R3LOAD 6.40, the "-v" command line option shows the program compile time, to make it easier to identify patch levels.

Database-specific load options can be listed with "-h". These options are used to speed up the R3LOAD import by bypassing database mechanisms which are not required for a system copy load. If in doubt about which options are recommended, check what R3SETUP or SAPINST is using.
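The effect of "-o D" on the generated task file can be reproduced with a plain text filter (the task lines are illustrative):

```shell
# Full import task list, as plain "-ctf I" would generate it:
cat > TAB01.TSK <<'EOF'
T TAB01 C xeq
D TAB01 I xeq
P TAB01~0 I xeq
EOF

# "-o D" omits the data (D) tasks from the task file;
# the resulting file corresponds to:
grep -v '^D ' TAB01.TSK > TAB01_noD.TSK
cat TAB01_noD.TSK
```

Only the table-create and primary-key tasks remain, which is the variant used when loading split tables.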

Figure 152: DB Specific R3LOAD Option: Load Procedure Fast

SAP Notes:
1. 0905614 DB6: R3load -loadprocedure fast COMPRESS
   1058427 DB6: R3load options for compact installation
2. 1464560 FAQ R3load in MaxDB
   1014782 MaxDB: FAQ System Copy
3. 1054852 Recommendations for migrations to MS SQL Server
4. 1045847 Oracle direct path load support in R3load
   1046103 Oracle direct path load support in R3load 7.00 and later
5. 1591424 SYB: 7.02 Heterogeneous system copy with target Sybase ASE
   1672367 SYB: 7.30 Heterogeneous system copy with target Sybase ASE

Figure 153: R3LOAD Command Line Examples

JLOAD Files

Lesson Overview
This lesson explains the purpose, content, and structure of the JLOAD control and data files.

Lesson Objectives
After completing this lesson, you will be able to:
• Understand the purpose, contents, and structure of the JLOAD control and data files

This lesson describes the JLOAD control and data files in every aspect.

Business Example
Problems occurred during a JLOAD system copy. For troubleshooting, you need to know the purpose of all the various control files created.

Figure 154: Overview: JLOAD Control and Data Files

SAPINST 7.02 and its improvements to JLOAD (JPKGCTL, JMIGMON) were first implemented for NetWeaver 7.02. Other versions such as NetWeaver 7.10 have a higher version number, but were released earlier. This leads to a situation where a lower NetWeaver version (7.02) provides more (advanced) JLOAD functionality than a higher NetWeaver version (e.g. 7.10). In general: if no JPKGCTL was used or is available, JLOAD behaves as in NetWeaver 7.00; if JPKGCTL was run, the behavior is similar to the NetWeaver 7.02 examples.

Figure 155: JLOAD: Job Files

Figure 156: Export Job File 6.40 / 7.00

Job files are used to specify JLOAD actions. SAPINST in NetWeaver '04 SR1 and NetWeaver 04S starts a single JLOAD process, which exports the whole JAVA schema (meta data and table data). The default data dump file name is "EXPDUMP". JLOAD can create the EXPORT.XML and IMPORT.XML files by itself.

The job file can also contain a maximum data dump file size. Without such a parameter, the default size is set to 2 GB. For example: <export file="EXPDUMP" size="100MB">

Future versions may contain additional object types, such as database views (which are not used yet).
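A minimal export job file with an explicit dump file size limit could look like the following sketch (the exact schema varies between releases; the file content here is illustrative):

```shell
# Write a hypothetical JLOAD export job file with a 100 MB dump limit:
cat > EXPORT.XML <<'EOF'
<export file="EXPDUMP" size="100MB">
</export>
EOF

# The size attribute overrides the 2 GB default:
grep -o 'size="[^"]*"' EXPORT.XML
```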

Figure 157: Export Job Files Created by JPKGCTL 7.02

Starting with SAPINST 7.02, JPKGCTL can be used to create the JLOAD job files. The meta data describing a table or index (EXPORT_METADATA.XML / EXPORT_POSTPROCESS.XML) is separated from the data export (EXPORT_<PACKAGE>.XML). This allows multiple JLOAD export and import processes. For table splitting, it is necessary to create the table first, then load the data, and create the indexes afterwards (post-processing).

Figure 158: Export Job Files - JPKGCTL 7.30

In 7.30, there is one meta data export, several package exports, and for each package its own post-process export job file.

Figure 159: Import Job File 6.40 / 7.00

Job files are used to specify JLOAD actions. In NetWeaver 04 SR1 and NetWeaver 04S, SAPINST starts a single JLOAD process, which imports the entire JAVA schema.

Figure 160: Import Job Files Created by JPKGCTL 7.02

For table splitting, it is necessary to create the table first, then load the data, and create the indexes afterwards (post-processing).

Figure 161: Import Job Files - JPKGCTL 7.30

In 7.30, there is one general meta data import, multiple package imports, and for each package its own post-process import job file.

Figure 162: JLOAD: Status Files

Figure 163: Export Status File 6.40 / 7.00

The above *.STA file contains the export status. As soon as an item is exported, a new line is added to the *.STA file. The content of the *.STA file is used to identify where to proceed in case of a restart. The status can either be "OK" for successful, or "ERR" for failed.

In NetWeaver 04 SR1, the "EXPORT.STA" file can be found under: /usr/sap/<SAP SID>/<Instance>/j2ee/sltools. Check the SAPINST log file for the location in other versions.
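Restart handling based on a *.STA file can be sketched as follows (the line format is simplified and the item names are invented):

```shell
# Hypothetical, simplified status file: one line per processed item.
cat > EXPORT.STA <<'EOF'
TABLE1 OK
TABLE2 OK
TABLE3 ERR
EOF

# Items flagged ERR (plus anything not yet listed) are the
# candidates a restarted run has to process again:
awk '$2 == "ERR" { print $1 }' EXPORT.STA
```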

Figure 164: Export Status Files 7.02The meta data export is separated from the table data export.

Figure 165: Import Status File 6.40 / 7.00

The above *.STA file contains the import status. As soon as an item is imported, a new line is added to the *.STA file. The content of the *.STA file is used to identify where to proceed in case of a restart. The status can either be "OK" for successful, or "ERR" for failed.

Figure 166: Import Status Files 7.02

First the meta data is applied (create table, primary key), then the data import takes place (insert), and finally the secondary indexes are generated (post-processing).

Figure 167: JLOAD: *.LOG Files

Figure 168: Export Log

The existence of a matching export *.STA file identifies a restart situation; otherwise the export starts from scratch. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: EXPORT_METADATA.XML.LOG, EXPORT_<PACKAGE>.XML.LOG, and EXPORT_POSTPROCESS.XML.LOG. It separates the meta data export from the table data export.

Figure 169: Import Log

The existence of a matching import *.STA file identifies a restart situation; otherwise the import starts from the first data dump file entry. NetWeaver 7.02 JLOAD writes log files with the following naming conventions: IMPORT_METADATA.XML.LOG, IMPORT_<PACKAGE>.XML.LOG, and IMPORT_POSTPROCESS.XML.LOG. It separates the meta data import from the table data import and post-processing.

Figure 170: JLOAD: Export / Import Statistic Files

Figure 171: Export / Import Statistic Files 7.02

The "*_PACKAGE.STAT.XML" files are read by JMIGTIME to create the corresponding export/import time lists and HTML graphics.

Figure 172: JLOAD: Data Dump File

Figure 173: Data Dump File Structure 6.40 / 7.00

If not otherwise specified in the export job file, a dump file can grow up to 2 GB before an additional file is automatically created (i.e. <DUMP>.001, <DUMP>.002, ...). Because the length of each data block can be found in the respective header, JLOAD can easily search for a certain location inside the data dump file.
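The rollover into numbered dump files can be imitated with GNU split, using tiny sizes in place of 2 GB (file names and sizes are illustrative only):

```shell
# Create 10 KB of sample data and cut it into 4 KB "dump" pieces
# named EXPDUMP.001, EXPDUMP.002, ... similar to JLOAD's rollover:
head -c 10240 /dev/zero > sample.bin
split -b 4096 --numeric-suffixes=1 -a 3 sample.bin EXPDUMP.
ls EXPDUMP.*
```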

Figure 174: Data Dump File Structures for Separated Meta Data

If JPKGCTL was used, meta data and table data are put into separate dump files.

Figure 175: JPKGCTL (JSPLITTER): Package Sizes

Figure 176: Size Information for JMIGMON

After the package splitting has completed, JPKGCTL writes the "sizes.xml" file containing the expected package sizes. This helps JMIGMON to identify large packages, which should be exported first.
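The largest-first scheduling derived from the size information can be sketched with a plain sort (package names and sizes are invented; the real sizes.xml is an XML file):

```shell
# Simplified size list: package name and expected size in MB.
cat > sizes.txt <<'EOF'
SAPAPPL0 1200
SAPAPPL1 5400
SAPCLUST 600
EOF

# Start the biggest packages first to maximize parallel throughput:
sort -k2,2 -n -r sizes.txt
```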

Figure 177: Common JLOAD Command Line Options

Parameters:
• url = URL of the database to connect to
• driver = JDBC database driver
• auth = database logon

If no job file is specified, the complete database is exported by default. In addition, suitable "EXPORT.XML" and "IMPORT.XML" files are generated. The default log file name is "JLOAD.LOG", unless a job file is specified; in this case, the log file gets the same name as the job file, with *.XML replaced by *.LOG.

Figure 178: Common JLOAD Command Line Options

Exercise 7: R3LOAD & JLOAD Files (Part I)

Solution 7: R3LOAD & JLOAD Files (Part I)

Task 1:
In a DB migration of a large database to Oracle, it was decided to move the heavily-used customer table ZTR1 (TABART APPL1) to a separate tablespace. No changes were made to the ABAP Dictionary in advance. The export was executed the normal way.

1. What changes should be made to the R3LOAD files to create table ZTR1 and its indexes in tablespace PSAPSR3ZZTR1 and to load data into it?

Fragment of SAPAPPL1.STR:
tab: ZTR1
att: APPL1 4 ?? T all ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP

a) To create additional tablespaces on the target database, the files DBSIZE.TPL or DBSIZE.XML must be adapted. A new TABART / tablespace assignment must be added in the DDLORA.TPL file:

# table storage parameters
ZZTR1 PSAPSR3ZZTR1
# index storage parameters
ZZTR1 PSAPSR3ZZTR1

... and the original TABART in the SAPAPPL1.STR file has to be changed from:

tab: ZTR1
att: APPL1 4 ?? T all ZTR1~0 APPL1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 APPL1 4 not_unique
fld: MANDT
fld: TSTMP

to:

tab: ZTR1
att: ZZTR1 4 ?? T all ZTR1~0 ZZTR1 4
fld: MANDT CLNT 3 0 0 not_null 1
fld: MBLNR CHAR 10 0 0 not_null 2
fld: TSTMP FLTP 8 0 16 not_null 0
ind: ZTR1~PSP
att: ZTR1 ZZTR1 4 not_unique
fld: MANDT
fld: TSTMP

2. After the import is finished, which dictionary maintenance tasks should be done?

a) After the import is finished, the ABAP Dictionary should be maintained for table ZTR1 (update tables DDART, DARTT, TSORA, TAORA, IAORA and DD09L).

Task 2:
An Informix export of a heterogeneous system copy with R3LOAD 6.x is short on disk space. None of the available file systems is large enough to store the expected amount of dump data. All TABARTs will fit into the "sapreorg" file system, except TABART CLUST, which has a size of 600 MB.

File system A: /tools/exp_1 ~ 400 MB free
File system B: /oracle/C11/sapreorg/exp ~ 4500 MB free
File system C: /usr/sap/trans/exp_2 ~ 350 MB free

1. Which SAPCLUST.cmd file content would allow an export without any manual intervention?

tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

a) Original SAPCLUST.cmd:

tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/oracle/C11/sapreorg/exp/DATA/" bs=1k fs=1000M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

Modified SAPCLUST.cmd:

tsk: "/oracle/C11/sapreorg/install/SAPCLUST.TSK"
icf: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.STR"
dcf: "/oracle/C11/sapreorg/install/DDLINF.TPL"
dat: "/tools/exp_1/DATA/" bs=1k fs=300M
dat: "/usr/sap/trans/exp_2/DATA/" bs=1k fs=300M
dir: "/oracle/C11/sapreorg/exp/DATA/SAPCLUST.TOC"

2. Which other solutions are possible with more or less manual intervention?

a) Move the dump files of small packages out of the export directory as soon as they are completed. It can also be helpful to reduce the dump file size, in order to move completed dump files of large packages sooner.

Task 3:
While doing an export, R3LOAD stops on error because an expected table does not exist. This seems to be an inconsistency between the ABAP Dictionary and the database dictionary. As most of the tables are already exported, it does not make sense to restart the SAP instance to fix the problem and to repeat the export afterwards.

1. How can R3LOAD 4.x be forced to skip the export of the table?

a) R3LOAD 4.x: In the *.STR file, the definitions of the non-existing table (and its indexes) can be marked as comments by placing a "#" at the beginning of each line. Deleting the entries would also work, but then the change would not be visible to others who might be searching for errors. Restart R3LOAD.

2. How can R3LOAD 6.x be forced to skip the export of the table?

a) R3LOAD 6.x: Change the status of the table entry inside the export *.TSK file to "ign" (ignore). This fixes the export problem, but for the import, you still have to change the *.STR file (see R3LOAD 4.x). Restart R3LOAD.

Task 4:
During a heterogeneous system copy, because of a mistake while cleaning up some tables in an Oracle database, the content of table ATAB was accidentally deleted. The SAP System was not started yet, but the load of all tables is already finished.

1. R3LOAD 4.x: What can be done to load the content of table ATAB without re-creating the table or an index? At least two solutions are possible. Which files must be created, and what should the R3LOAD command line look like? Table ATAB belongs to TABART POOL.

SAPPOOL.cmd:
icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDLDBS.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/DBS/SAPPOOL.EXT

Note: Check for R3LOAD command line options at the end of Unit 7!

a) Copy SAPPOOL.STR to ATAB.STR
b) Remove everything from ATAB.STR that doesn't belong to table ATAB.
c) Inside ATAB.STR, change the action field from "all" to "data".
d) Copy SAPPOOL.cmd to ATAB.cmd
e) Change the content of ATAB.cmd from:

icf: /exp/DATA/SAPPOOL.STR
dcf: /install/DDL<DBS>.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/<DBS>/SAPPOOL.EXT

to:

icf: /<directory path>/ATAB.STR
dcf: /install/DDL<DBS>.TPL
dat: /exp/DATA/ bs=1k fs=1000M
dir: /exp/DATA/SAPPOOL.TOC
ext: /exp/DB/<DBS>/SAPPOOL.EXT

f) R3load -i ATAB.cmd -p ATAB.log -k <migration key>

2. R3LOAD 6.x: Which R3LOAD 6.x features and command line options can be used to load table ATAB again?

a) As in solution 1, but skip step c). The R3load command line looks different:
R3load -o TIVP -i ATAB.cmd -p ATAB.log -k <migration key>

Task 5:
In an Oracle OS migration, the database must be installed with dictionary-managed tablespaces for certain reasons. After the test import, some large tables and indexes show a huge number of extents.

1. The customer adapted the next extent values in the source database on a regular basis. What are the reasons for so many extents in the target database?

a) The next extent values used by R3LOAD are obtained from the size categories of the ABAP Dictionary. These size categories are part of the technical settings of tables and are not updated by any external database administration tool. If R3SZCHK computes the initial extent of tables smaller than needed, the number of next extents increases, because the size category values are often too small.

2. What can be done to reduce the number of extents in the next test run?

a) The initial or next extent values of the involved tables should be increased by modifying the *.EXT or *.STR file.

Exercise 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise)
Exercise Duration: 25 Minutes

Exercise Objectives
After completing this exercise, you will be able to:
• Use R3LOAD standalone.
• Manually create *.TSK and *.CMD files.

Business Example
You want to execute R3LOAD standalone, to fix problems or to make use of specific settings not possible in the standard setup, i.e. with SAPINST.

Solution 8: R3LOAD & JLOAD Files (Part II, Hands-On Exercise)

Task 1:
This is a hands-on exercise, for which you must log on to the source system of the example migration.

Hostname: ________________ Group-ID: ________________
Telnet user: ________________ Password: ________________
Hostname: ________________ Instance #: ________________
SAP user: ________________ Password: ________________
Client #: ________________

If there is a unique group number on your workstation monitor, please use this number as your Group-ID. Depending on the training setup, use the Windows Remote Desktop Connection or Telnet to log on to system DEV (logon method, user, password, and hostname as supplied by the trainer).

Note: You will log on as an administrator! Please do not make any changes to the system, except for those explained in this exercise.

1. Perform the following preparation steps:
Change to the drive and the directory as supplied by the trainer.
Copy the whole directory "TEMPLATE" to your work directory. Use the name work<group-id> (i.e. "xcopy TEMPLATE work00").
Execute "env.bat" in your work directory. It will set required environment variables. Repeat this step after each logon!
Change to your work<group-id> directory (i.e. work00).
Use the editor "notepad" for the Windows Remote Desktop Connection or "xvi" for Telnet to perform the following modifications:
In ZZADDRESS.STR, change the table and primary key name:
ZZADDRESS to ZZADDRESS<group-id> (i.e. ZZADDRESS00)

ZZADDRESS~0 to ZZADDRESS<group-id>~0 (i.e. ZZADDRESS00~0)

In ZZADDRESS.EXT, change the table and primary key name:
ZZADDRESS to ZZADDRESS<group-id>
ZZADDRESS~0 to ZZADDRESS<group-id>~0

In ZZADDRESS.TOC, change the table name:
ZZADDRESS to ZZADDRESS<group-id>

Hint: Edit Notes - "xvi" survival guide
The editor "xvi" is a "vi" implementation for Windows systems, which works very well in telnet sessions.
Insert mode: press "i"; end insert mode: press "Escape"
Delete character under cursor: press "x"
Delete character while in insert mode: press "Backspace"
Save file: enter ":wq" (write and quit); if it doesn't work, press "Escape" and try again
Do not use cursor keys while in insert mode; press "Escape" first

a) ZZADDRESS.STR:
tab: ZZADDRESS00
att: SSEXC 0 XX T all ZZADDRESS00~0 USER 0
fld: NAME CHAR 30 0 0 not_null 1
fld: CITY CHAR 30 0 0 not_null 0

ZZADDRESS.EXT:
ZZADDRESS00 16384
ZZADDRESS00~0 16384

ZZADDRESS.TOC:
vn: R6.40/V1.4
id: adb1c36e00000046
cp: 4103
data_with_checksum
tab: [HEADER]
fil: ZZADDRESS.001 1024
1 1
eot: #0 rows 20050606141758
tab: ZZADDRESS00
fil: ZZADDRESS.001 1024
2 2
eot: #20 rows 20050606141758
eof: #20050606141758

Task 2:
1. Which fields belong to the primary key of table ZZADDRESS<group-id>?

a) The primary key uses field NAME only.

Task 3:
1. Log on to SAP System DEV and verify the ABAP Dictionary against the DB dictionary in transaction DB02. Save the shown output to a file.

a) Check for tables that only exist in the database, but not in the ABAP Dictionary.

Task 4:
1. Use R3LOAD to create the import task file ZZADDRESS.TSK.

a) R3load -ctf I ZZADDRESS.STR DDLMSS.TPL ZZADDRESS.TSK MSS -l ZZADDRESS.log

ZZADDRESS.TSK:
T ZZADDRESS00 C xeq
P ZZADDRESS00~0 C xeq
D ZZADDRESS00 I xeq

Task 5:
1. Use an editor to create an R3LOAD command file, which can be used to import table ZZADDRESS<group-id>.

Note: R3load for Windows recognizes both "\" and "/" as path separators.

a) tsk: ZZADDRESS.TSK
icf: ZZADDRESS.STR
dcf: DDLMSS.TPL
dat: .\ bs=1K fs=1000M
dir: ZZADDRESS.TOC
ext: ZZADDRESS.EXT

Alternate notation:
tsk: ZZADDRESS.TSK
icf: ZZADDRESS.STR
dcf: DDLMSS.TPL
dat: ./ bs=1K fs=1000M
dir: ZZADDRESS.TOC
ext: ZZADDRESS.EXT

Task 6:
1. Import table ZZADDRESS<group-id> with R3LOAD and check the content of table ZZADDRESS<group-id> by using the MSS command line utility "osql". The command line is case-sensitive!

osql -E -Q "SELECT * FROM dev.ZZADDRESS<group-id>"

a) Import table ZZADDRESS00:
R3load -dbcodepage 4103 -i ZZADDRESS.cmd -l ZZADDRESS.log

ZZADDRESS.log:
(IMP) INFO: import of ZZADDRESS00 completed (20 rows)

ZZADDRESS.TSK:
T ZZADDRESS00 C ok
P ZZADDRESS00~0 C ok
D ZZADDRESS00 I ok

osql -E -Q "SELECT * FROM dev.ZZADDRESS00"
Wattenberg Muenchen
Werle Offenbach
(20 rows affected)

Note: As the dump was created on a little-endian Unicode system (see ZZADDRESS.TOC), the import must be performed with dbcodepage "4103". For more information on "osql", see the document "MSS_osql.txt" in your work directory.

Ignore R3LOAD messages starting with "sapparam":
sapparam: sapargv( argc, argv) has not been called.
sapparam(1c): No Profile used.
sapparam: SAPSYSTEMNAME neither in Profile nor in Commandline

Task 7:
Repeat the verification of the ABAP Dictionary against the DB dictionary (do a refresh!).

1. Does the output look different than before? Are there new entries?

a) Transaction DB02 will show your imported table ZZADDRESS00. If not, refresh the display (tables of your student neighbors might be visible as well).

Task 8:
Try to load table ZZADDRESS<group-id> again by changing the ZZADDRESS<group-id>.TSK file:
D ZZADDRESS<group-id> I ok → D ZZADDRESS<group-id> I xeq

1. What happened? What is the content of ZZADDRESS.TSK and ZZADDRESS.log?

a) Because of the primary key on field "NAME", it is impossible to insert two identical names. R3LOAD returns an error (rc=26). The ZZADDRESS.TSK file contains:
D ZZADDRESS00 I err

ZZADDRESS.log:
(IMP) ERROR: DbSlEndModify failed
rc = 26, table "ZZADDRESS00"

2. Try the import again. What happens now?

a) The import works the second time, as the status "err" in ZZADDRESS.TSK forces R3LOAD to delete the table content before starting the import.

ZZADDRESS.log:
(IMP) INFO: import of ZZADDRESS00 completed (20 rows)

Task 9:
1. Create a new sub-directory in your work directory and name it "export". Create a task and a command file to export ZZADDRESS<group-id>. Export table ZZADDRESS<group-id> and compare the number of exported rows against the number of table rows in the database.

osql -E -Q "select count(*) from dev.ZZADDRESS<group-id>"

Note: Make sure not to overwrite your existing ZZADDRESS.TOC and ZZADDRESS.001 files!

a) Create the directory "export" and copy ZZADDRESS.CMD into it. Change to directory "export". Edit the copied command file:

ZZADDRESS.CMD:
tsk: ZZADDRESS.TSK
icf: ..\ZZADDRESS.STR
dcf: ..\DDLMSS.TPL
dat: .\ bs=1K fs=1000M
dir: ZZADDRESS.TOC

Alternate notation:
tsk: ZZADDRESS.TSK
icf: ../ZZADDRESS.STR
dcf: ../DDLMSS.TPL
dat: ./ bs=1K fs=1000M
dir: ZZADDRESS.TOC

Create the task file ZZADDRESS.TSK containing the following line (you can use R3LOAD or an editor):
R3load -ctf E ..\ZZADDRESS.STR ..\DDLMSS.TPL ZZADDRESS.TSK MSS -l ZZADDRESS.log

ZZADDRESS.TSK:
D ZZADDRESS00 E xeq

Start the export:
R3load -datacodepage 4103 -e ZZADDRESS.CMD -l ZZADDRESS.log

There should be 20 rows in the database, and the same number should be mentioned in the *.TOC file.

Unit 8: Advanced Migration Techniques

This unit describes advanced techniques which can be utilized to speed up the export and the import phase of a migration.

Lesson: Time Consuming Steps during Export / Import

Lesson Overview
How to identify, minimize, or avoid time consuming steps during the export/import phases.

Lesson Objectives
After completing this lesson, you will be able to:
• Identify the time consuming steps during export / import
• Minimize the downtime by applying appropriate measures

For the students, it is important to know the time consuming migration steps. As a conclusion, they will be able to execute them as early as possible and to avoid them during the downtime (if possible).

Business ExampleYou need to know the long running OS/DB Migration steps to estimate the time schedule in a cut-over plan.

Figure 179: General Remarks

Please take into account that in the case of a Unicode conversion, the number of rows of ABAP cluster tables will differ between source and target system because of their compressed content. For comparable results, use an SQL statement like this: SELECT COUNT(*) FROM CDCLS WHERE PAGENO='0'.

Figure 180: Technical View: Time Consuming Export Steps (1)

It depends on the SAPINST version and the database whether the above tasks are available or not. Different databases have different space requirements for storing the data. The programs R3LDCTL/R3SZCHK compute the INITIAL EXTENT of all tables and indexes for the target database. The sum of all of these provides the estimated size of the target database.

Depending on the database, table splitting can be a time consuming process, which should run before the export; in most cases there is not enough time for it during the export/import downtime. The computed WHERE conditions are defined in such a way that data added or deleted afterwards doesn't matter: the conditions will fetch all data in the table. If possible, large data updates should be avoided after creating the WHERE conditions (or the conditions should be computed again). If the Oracle PL/SQL table splitter is used, special considerations apply to ROWID splitting; more information can be found in SAP Note 1043380.

SAPINST Export Preparation: You want to build the target system up to the point where the database load starts, before the export of the source system has finished. Export and import processes should run in parallel during the system copy process.

SAPINST Table Splitting Preparation: Optional step for preparing the table splitting before starting the export of an SAP System based on ABAP. If some of the tables are very large, the downtime can be decreased by splitting the large tables into several smaller packages, which can then be processed in parallel.

Figure 181: Technical View: Time Consuming Export Steps (2)

The most important way to tune export performance is to optimize the use of parallel export processes. Transportable storage devices can be DVDs, external USB disks, laptops, or tapes.

When R3LOAD stores the exported data into dump files, it uses a very efficient compression algorithm. You do not need to compress these files again (you may even find that the resulting file is larger than before).

To save time when copying very large amounts of dump data to the target media/system, it can be useful to set the dump file size to a small value, like 300 MB. As soon as a dump file is completed, the copy can be started. Note: the MIGRATION MONITOR waits until all dump files of a package have been completed.

Figure 182: Technical View: Time Consuming Import Steps

If a parallel export / import using R3LOAD is planned, the database must be ready for the import when the export starts. Normally, the first database update statistics run is started directly after the database import. If short on time, the update statistics run can be postponed to a later point in time, where it can run in parallel with other activities.

Figure 183: Saving Time on Import - After Load Errors

R3SETUP or SAPINST starts a one-time R3LOAD process for each package. If R3LOAD processes terminate with an error condition, R3SETUP/SAPINST will stop after all R3LOAD processes are finished. The execution of R3SETUP/SAPINST must be repeated until all R3LOAD processes are successful.

If you know that the cause of an R3LOAD error termination is fixed, you can save time by starting R3LOAD alongside R3SETUP or SAPINST. Your own R3LOAD process must be started with the same parameter set as was used by R3SETUP or SAPINST before. The parameters can be obtained from the corresponding "<PACKAGE>.LOG" file.

Log on as the operating system user who owns the "<PACKAGE>.log" files in the installation directory (for example: <sapsid>adm) and change into the installation directory. Only start R3LOAD processes for the *.STR files that have already been processed by the current run of R3SETUP or SAPINST.

Never restart R3SETUP while your own R3LOAD processes are running. This would cause competition between your R3LOAD process and the R3LOAD processes started by R3SETUP to process the same *.STR file. In the case of SAPINST, the second R3LOAD process will be stopped automatically, as a backup task file already exists.

R3SETUP/SAPINST must be restarted after all data is loaded, to execute the remaining steps of the installation/migration.

Figure 184: R3LOAD Parameters from Import *.LOG File
Do not forget to add the restart parameter “-r” to the command line of R3LOAD (4.6D and below only). If you are starting R3LOAD manually, make sure that your current working directory is the installation directory!

Figure 185: Export / Import Time Diagram
In customer databases, most of the transaction data is stored in tables belonging to only a few TABARTs. This causes long-running R3LOAD export and import processes for these TABARTs. To save time and optimize the parallelism of R3LOAD processes, you can split package files (*.STR) into several smaller files, or separate large tables into additional package files.

Figure 186: Optimizing the Export / Import Process
Export and import times are reduced by splitting package files (*.STR) and creating additional package files for large tables. Always try to export/import large tables first; this ensures the maximum parallelism of R3LOAD processes. Very large tables should be exported/imported with multiple R3LOAD processes (table splitting). Optimizing the database parameters speeds up the export or import process and can prevent time-consuming errors caused by bottlenecks. Reduce the CPU load on the database server by running R3LOAD on a different system. In a fast and stable network environment, the use of the R3LOAD socket method can save time.
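Why exporting large tables first maximizes parallelism can be seen with a small scheduling simulation. This is purely illustrative (the durations are invented); it compares the total elapsed time when the largest package is scheduled first versus last:

```python
import heapq

def makespan(durations, workers):
    """Simulate parallel R3LOAD jobs: each free process takes the next
    package in the given order; return the total elapsed time."""
    finish = [0.0] * workers
    heapq.heapify(finish)
    for d in durations:
        t = heapq.heappop(finish)        # earliest-free process
        heapq.heappush(finish, t + d)
    return max(finish)

sizes = [90, 10, 10, 10, 10, 10, 10, 10]   # one huge table, many small ones
print(makespan(sorted(sizes, reverse=True), workers=4))  # largest first: 90.0
print(makespan(sorted(sizes), workers=4))                # largest last: 100.0
```

With the largest package started last, the whole run is stretched by almost its full duration, which is exactly the effect the largest-first rule avoids.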

Figure 187: JAVA-Based Package Splitter
The JAVA Package Splitter can also be used for releases earlier than Web AS 6.40. The term package is used as a synonym for *.STR files. SAPINST 6.40 for NetWeaver ’04 can call the JAVA or the PERL Package Splitter (depending on the selected option). Starting with NetWeaver 04S, only the JAVA Splitter is used.

The Splitter analyzes the content of *.EXT files to find the best splitting points. Fine-tuning of *.STR files can be done after a test migration. Splitting *.STR files is even possible without *.EXT files, if the tables are named in a provided input file. Package file names for the split *.STR files are generated automatically. The documentation is provided as a PDF file together with the splitting tool. If no JAVA JRE 1.4.x or higher is installed, the *.STR files can be transported to another system and split there.

Figure 188: PERL-Based Package Splitter
The Perl script SPLITSTR.PL may also be used for earlier Release 4.x migrations. Do not use the Perl splitter for Unicode conversions! The DBEXPORT.R3S command file for R3SETUP releases since 4.6 and SAPINST call SPLITSTR.PL if the option has been selected. SPLITSTR.PL analyzes the content of *.EXT files to find the best split points. Fine-tuning of *.STR files can be done after a test migration. Package file names for split *.STR files are generated automatically. The Perl script is self-explanatory: calling SPLITSTR.PL without parameters or with the “-help” option displays a help text. Do not use SPLITSTR.PL on already split files, as this can lead to problems. Always split from the original files, thus preserving them. The SPLITSTR.PL script is not intended to be used on 3.x *.STR files; the results are erroneous! A Perl version is available for every operating system. The installed version of Perl can be checked with “perl –v”. If no Perl is installed, the *.STR files can be transported to another system and split there.

Figure 189: R3LOAD Export/Import Using Sockets
Socket connections are released for R3LOAD 6.40 and later (do not try to use an earlier version, even if R3LOAD provides the socket option). R3LOAD writes directly to the opened socket and does not need any dump or table of contents (*.TOC) file. Error situations are handled as with conventional exports and imports: the exporting and importing processes use their respective task files for restart. Network interruptions terminate the export and import process immediately. Make sure that the export or import process does not fail because of database resource bottlenecks; R3LOAD restarts can make the import more time consuming than expected. The R3LOAD import process has to be started first, because the export process must connect to an existing socket; otherwise the process fails. The Migration Monitor supports socket connections in an easy-to-configure way.
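The requirement that the importing side must be started first can be illustrated with a minimal socket sketch (hypothetical and unrelated to the R3LOAD implementation): the "importer" opens a listening socket, and only then can the "exporter" connect and stream data.

```python
import socket
import threading

received = []
port_box = {}
ready = threading.Event()

def importer():
    # The importing side must listen first, like the R3LOAD import process.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))           # pick any free port
    port_box["port"] = srv.getsockname()[1]
    srv.listen(1)
    ready.set()                          # now the exporter may connect
    conn, _ = srv.accept()
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        received.append(chunk)
    conn.close()
    srv.close()

def exporter():
    # The exporting side connects to the already-existing socket.
    s = socket.socket()
    s.connect(("127.0.0.1", port_box["port"]))
    s.sendall(b"row1;row2;row3")
    s.close()

t = threading.Thread(target=importer)
t.start()
ready.wait()     # starting the export first would fail: nothing is listening yet
exporter()
t.join()
print(b"".join(received))
```

If the exporter ran before the importer listens, the connect call would raise a connection error, which mirrors the R3LOAD start-order rule stated above.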

Figure 190: R3LOAD Socket Connections – Technical View
The same files are used as in standard R3LOAD scenarios, but no dump or *.TOC files are created. The R3LOAD control files must be accessible as usual, on the source and target system.

Figure 191: <PACKAGE>.CMD – Socket Connection ≥ 6.40
R3LOAD must be started with the “-socket” command line option. The importing process must be invoked before the export process can be started. The socket port can be any free number between 1024 and 65535 on the import host.
Meaning of the section names:
• tsk: Task file
• icf: Independent control file
• dcf: Database dependent control file
• dat: Socket port number and name or IP address of the import host
• ext: Extent file (not required at export time)
The “dir” section is not required because no <PACKAGE>.TOC file will be created.
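Putting the sections together, a socket-mode command file might look like the following sketch. All paths, the host name, and the port number are invented for illustration, and the exact value syntax of a real <PACKAGE>.CMD file may differ; refer to the figure for the authoritative layout.

```
tsk: /inst/SAPAPPL1.TSK
icf: /exp/DATA/SAPAPPL1.STR
dcf: /inst/DDLORA.TPL
dat: importhost:50000
ext: /exp/DB/ORA/SAPAPPL1.EXT
```

Note the absence of a “dir” section: with sockets there is no dump directory and no <PACKAGE>.TOC file to reference.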

Figure 192: <PACKAGE>.LOG – R3LOAD Socket Logs ≥ 6.40
The importing process listens on the specified port and waits for the exporting process to connect.

Figure 193: Migration / Table Checker (MIGCHECK) – Features
The JAVA-based “Migration Checker” was developed to check that there is a log file for each package file (option: -checkPackages); this is an indicator that R3LOAD ran for them. The second check verifies that each action in the task file completed successfully (option: -checkObjects). Unsuccessful tasks are listed in an output file. These two features are used by SAPINST for NetWeaver 04S and later to check the import completeness. Database- and table-dependent exceptions are handled automatically. The “Table Checker” feature is used to check the number of table rows. It can be used to make sure that tables contain the right number of rows after the import. As this is a long-running task, it can only be started manually.

Figure 194: SAPINST: Complete / Modify Package Table (6.40)
SAPINST allows for custom export/import order definitions and even the change of individual parameters for each single package.

ORDER: defines the sequence in which the packages are to be loaded. The load starts with the lowest values first. Negative values are also allowed.
PKGID: Identifier <SAPSID>_<DBSID>.
PKGNAME: Name of the package (*.STR file).
PKGFILESIZE: Size of the data dump file.
PKGDIR: Path where the DATA and DB sub-directories for the package reside.
PKGDDLFILE: The name of the DDL<DBS>.TPL file.
PKGCMDFILE: The name of the command file for this package (generated automatically, but if you want to use your own, you may enter its name).
PKGLOADOPTIONS: Additional DB-specific R3LOAD options that will be applied when this package is imported.
As NetWeaver 04S uses MIGMON to start R3LOAD, the advanced features of the Migration Monitor are used instead of the mechanism above.

Figure 195: Unsorted Export
Before starting an unsorted export, please read SAP Note 954268 “Optimization of export: Unsorted unloading”! By default, the system unloads the data sorted. This is controlled by the following entry in the DDL<DBS>.TPL file: prikey: .... ORDER_BY_PKEY. Sorting takes time and needs a large temporary storage; if it can be omitted, the export will be faster. Be aware of the consequences in the target system (performance impact). If you use MaxDB as the target database, you must export all of the tables sorted. If you use MaxDB as the source database, you can unload sorted data only; do not override this option when you export from MaxDB. If you use MSSQL as the target database, you should export all of the tables sorted, so that you can avoid performance problems during the import. If you have to unload the tables unsorted and you use MSSQL as the target database, refer to SAP Note 1054852. Certain table types are not allowed to be exported in an unsorted way; SAP Note 954268 explains the release- and code-page-dependent considerations. Since NetWeaver 04, R3LDCTL generates DDL<DBS>_LRG.TPL files to simplify unsorted exports.

Figure 196: Changing R3LOAD Table Load Sequence in *.STR
Do not re-order tables in *.STR files after the export. If more than one dump file exists for a single *.STR file and the table order in the *.STR file was changed after the export, R3LOAD will not be able to read table data from, for example, file *.002 and the data of the next table from file *.001.

Figure 197: Initial Extent Larger than Consecutive DB Storage
The situation above can be a problem with Oracle dictionary-managed tablespaces, but does not apply to locally managed tablespaces. Customer databases can contain tables and indexes that require a larger “initial extent” than the maximum possible in a single data container. In such cases, reduce the “initial extent” in the *.EXT file and adapt the “next extent” size class in the relevant *.STR file. The new “initial extent” size should be slightly less than the maximum available space in the data container. This gives the database some space for internal administration data.

Lesson: MIGMON – Migration Monitor for R3LOAD
Business Example
You need to know the appropriate MIGMON configuration scenario for specific customer SAP System landscapes.

Figure 198: Migration Monitor (MIGMON) – Features (1)
SAP Note 784118 “System Copy JAVA Tools” also describes how to download the software from the SAP Marketplace. The export server mode applies where R3SETUP/SAPINST is replaced for the export. Even if MIGMON is not used for the import, the advanced control features for the export processes can help to save time. Already existing *.TSK or *.CMD files are not overwritten, but used.

Figure 199: Migration Monitor (MIGMON) – Features (2)
The export client mode applies where R3SETUP/SAPINST performs the database export, and MIGMON is used for the import. The client MIGMON is used to transfer the files to the target host and to signal the importing MIGMON that a package is ready to load. Even if MIGMON was not used to perform the export, the import can still benefit from the advanced MIGMON R3LOAD control features.

Figure 200: Migration Monitor (MIGMON) – Parameters
The number of export and import processes can be different. In the case of socket usage, the number of export and import processes is the same; the export job number is ignored, because the Export Monitor requests the job number from the Import Monitor during startup. Groups of packages can be assigned to different DDL*.TPL files.
Data transfer configuration variants:

• FTP: File transfer via FTP between source and target system
• Network: Export directory is shared between source and target system
• Socket: R3LOAD will use sockets (requires R3LOAD 6.40 or higher). This can be combined with FTP to copy the R3LOAD control files to the target system.
• Stand-alone: MIGMON runs stand-alone, i.e. the export is provided on transportable media only (possibly no fast network connection to the source system is available).
The FTP parameters contain the logon password. To hide the FTP password on the command line (visible using the “ps –ef” command on UNIX, or various Windows tools), the export_monitor_secure.sh/bat files should be used. The usage of FTP might be a security risk, but it is a reliable method of data transfer.

Figure 201: Migration Monitor – Net Configuration Variant
The Migration Monitor Net Configuration Variant is useful in environments where file systems can be shared. For consistency reasons, exports should always be done to local file systems! In the example above, the export directory and the network exchange directory are shared from the exporting to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) is created in the network exchange directory. The importing Migration Monitor then starts an R3LOAD process to load the dump from the shared export directory. The file “export_statistics.properties” is generated by the exporting Migration Monitor before it exits and is used to inform the importing Monitor about the total number of packages and how many of them are erroneous. If all export packages are OK, the importing Migration Monitor stops looking for new packages in the exchange directory. After the successful load of all packages, it starts the load of the SAPVIEW.STR.

Figure 202: Migration Monitor – FTP Configuration Variant

The Migration Monitor FTP Configuration Variant is useful in environments where file systems cannot be shared, but an FTP file transfer is possible. In the above example, the export and import directories are located on different hosts. The FTP exchange directory is on the target system. As soon as a package is successfully exported, the corresponding files are transferred to the importing system. After success, the signal file (*.SGN) is created in the FTP exchange directory. Then the importing Migration Monitor starts an R3LOAD process to load the dump from the import directory. The “export_statistics.properties” file is used in the same way as in Net mode. Pay attention to the FTP time-out settings. FTP servers may have certain default settings which limit the amount of data that can be copied in a single session. In the case of unclear FTP transfer problems it is very important to check the FTP server logs and settings, because the returned error information will sometimes not provide a sufficient description of the FTP problem.

Figure 203: Migration Monitor – Socket Configuration Variant
In theory, the Migration Monitor socket method is the fastest way to export and import data, but it requires a stable network, and the exporting and importing databases must always have enough resources to serve the R3LOAD processes. A network share, a manual file copy, or the Migration Monitor FTP file transfer (option –ftpCopy) can be used to copy the R3LOAD control files to the target system. The importing Migration Monitor must be started first. The exporting Migration Monitor connects to the importing Monitor using the provided socket port. The socket port numbers are incremented one by one for each R3LOAD process started. The communication between the export and import Monitor ensures that the right port numbers are written into the corresponding *.CMD files. No port number is used twice; unusable port numbers are skipped (they may be in use by others). If a firewall is between the source and target system, make sure that a whole port range (base port + number of R3LOAD packages + safety margin) is released for the duration of the migration.
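The port-increment scheme described above can be sketched as a small allocator. This is an illustrative simplification, not MIGMON's actual code: it counts upward from a base port, skips ports known to be unusable, and never hands out the same port twice.

```python
def assign_ports(base_port, n_processes, unusable=frozenset()):
    """Assign one socket port per R3LOAD process, counting upward from
    the base port and skipping ports that are already in use."""
    ports, candidate = [], base_port
    while len(ports) < n_processes:
        if candidate > 65535:
            raise RuntimeError("port range exhausted")
        if candidate not in unusable:
            ports.append(candidate)
        candidate += 1
    return ports

# one port in the range is taken by another application
print(assign_ports(50000, 4, unusable={50001}))
# [50000, 50002, 50003, 50004]
```

The sketch also shows why the firewall rule above asks for a whole range with a safety margin: a skipped port pushes the highest assigned port number upward.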

Figure 204: Migration Monitor – Stand-Alone Configuration
The Migration Monitor Stand-Alone Configuration Variant is useful in environments where source and target systems do not have a network connection, or the existing connection is too slow for a file transfer. In the above example, the export and import directories are located on different hosts in different locations. The Migration Monitor is used to start R3LOAD processes only. The file transfer between the source and target system is done using transportable media.

Figure 205: Migration Monitor – Control Files
The export/import state or the file transfer state can be changed from minus (“-”) to zero (“0”) to restart R3LOAD or a file transfer. Sockets only: MIGMON for NetWeaver ’04 cannot restart the R3LOAD process by changing the state only (future versions will support this). In the case of a file transfer restart, all dump files of a package are copied again.
Example: import_state.properties
SAPAPPL1=0 (not started yet)
COEP=? (running)
SWW_CONT-1=+ (finished; part 1 of split table)
SWW_CONT-2=+ (finished; part 2 of split table)
SWW_CONT-3=- (error; part 3 of split table)
SWW_CONT-4=0 (not started yet; part 4 of split table)
SWW_CONT-5=0 (not started yet; part 5 of split table)
SWW_CONT-post=0 (not started yet; secondary index creation, post-processing)
SWW_CONT-pre=+ (finished; table and primary key creation, pre-processing)
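During a migration it is handy to summarize such a state file at a glance. The following is a hypothetical helper, not part of MIGMON, that groups package names by their one-character state code:

```python
STATES = {"0": "not started", "?": "running", "+": "finished", "-": "error"}

def parse_state(lines):
    """Group package names from an import_state.properties-style file
    by their state code (0 = not started, ? = running, + = finished,
    - = error)."""
    result = {label: [] for label in STATES.values()}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        pkg, _, code = line.partition("=")
        result[STATES.get(code, "error")].append(pkg)
    return result

state = parse_state(["SAPAPPL1=0", "COEP=?", "SWW_CONT-1=+", "SWW_CONT-3=-"])
print(state["error"])       # ['SWW_CONT-3']
```

Packages listed under "error" are the candidates whose state you would reset from “-” to “0” before restarting the import.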

Figure 206: MIGMON Installation Tool Integration (1)
The MIGMON server mode for pre-NetWeaver 04 SR1 versions can only be used if SAPINST has been forced to stop, e.g. by provoking an intended error situation.

Figure 207: MIGMON Installation Tool Integration (2)
The MIGMON server mode for NetWeaver 04 SR1 can only be used if SAPINST has been forced to stop, e.g. by provoking an intended error situation. SAPINST for NetWeaver 04S requires a manual start of MIGMON if the socket mode is used.

Figure 208: Summary: R3LOAD Unload/Load Order by Tool
The MIGRATION MONITOR unload/load process order can be defined in the respective properties file. In addition, a file can be provided that contains a list of packages used to define the unload/load order. If the file does not contain all existing packages, the remaining packages are unloaded in alphabetical order and loaded by size, starting with the largest package (nothing will be lost). SAPINST allows you to select different orders for unloading or loading the database. The feature of customizing the execution order of each *.STR file gives good control over the unload or load process. SAPINST for NetWeaver 04S uses MIGMON to start R3LOAD processes; the MIGMON R3LOAD start features are integrated into the SAPINST dialogs.

Figure 209: MIGMON Export / Import Order
In the above example, the largest tables should be exported first. For that purpose, the tables were split from their standard *.STR files into package files containing one table only. The package names were inserted into “export_order.txt”. The Migration Monitor exports the packages exactly in the order defined in “export_order.txt”; afterwards it exports the remaining packages in alphabetical order. On the target system, the packages are imported as specified in “import_order.txt”. If no package mentioned in “import_order.txt” is available for import (still exporting), the package with the next largest dump file is used instead. Often two different export and import order files make sense, e.g. if some tables have a lot of indexes but are small compared to the largest tables. In this case the overall run time of a smaller table can be much longer than for the larger table, because of the index creation time. In the above example the tables GLPCA and MSEG are big, but not the biggest. For the import it was decided to give them top priority, because they have a lot of indexes and so the index creation times will exceed even the import time of the largest table SOFFCONT1.

Figure 210: Advanced MIGMON DDL*.TPL File Usage
The Migration Monitor can be used to export or import selected packages with specific DDL<DBS>.TPL files. The above export example shows how to export three packages unsorted (DDLORA_LRG.TPL) and the majority of all tables the standard way (DDLORA.TPL).

The import example utilizes a special Oracle feature to parallelize the index creation. For that purpose two different DDL<DBS>.TPL files were generated, to import two packages with index creation parallel degree 2 (DDLORA_par_2.TPL) and the other two packages with index creation parallel degree 4 (DDLORA_par_4.TPL). The remaining packages are imported as usual (DDLORA.TPL).

Lesson: MIGTIME & JMIGTIME – Time Analyzer
Business Example
You need to analyze the export/import behavior in an OS/DB Migration to minimize the downtime for the final migration of a productive system.

Figure 211: Time Analyzer (MIGTIME / JMIGTIME) – Features
SAP Note 784118 “System Copy JAVA Tools” also describes how to download the software from the SAP Marketplace. Over time, the content of the R3LOAD *.LOG and *.TOC files has been improved by adding more and more information; the Time Analyzer can handle all existing formats. R3LOAD 6.40 writes separate time stamps for data load and index creation (earlier versions did not!). MIGTIME obtains the export/import time information from the *.TOC and *.LOG files. JMIGTIME retrieves the time information from the JLOAD <PACKAGE>.STAT.XML files.

Figure 212: Time Analyzer – Output Based on Export Files (1)
The list output shows the start/end date and the export duration of each package, and additionally provides run-time information for the longest-running tables, as seen above.

Figure 213: Time Analyzer – Output Based on Export Files (2)
The HTML output gives a quick overview of the package run-time distribution.

Figure 214: Time Analyzer – Output Based on Import Files (1)
The list output shows the start/end date and the import duration of each package. If the R3LOAD version used (e.g. 6.40) provides time stamps for each table import and primary key/index creation, the output list can distinguish between data load and index creation time.

The list of long-running tables can be generated for pre-6.40 R3LOAD releases too, but it does not contain data and index columns, only a time column. The log file contains time information for the end of each data load; therefore the time for tables in the old R3LOAD releases is not 100% correct: table time = table load time + index/primary key creation time of the previous table (if the index/primary key is created after the data load). From R3LOAD 6.40, the table time is determined correctly, because the create table/index times are present in the log files.

Figure 215: Time Analyzer – Output Based on Import Files (2)

Figure 216: Time Analyzer – Time Join

Lesson: Table Splitting for R3LOAD
Lesson Overview
Explanation of the table splitting procedure for R3LOAD

Business Example
You need to know how R3LOAD table splitting works and how to troubleshoot problems.

Figure 217: R3TA Table Splitter
R3TA analyzes a given table and returns a set of WHERE conditions that each select approximately the same number of rows. For each WHERE condition, one R3LOAD can be started. The parallel export not only reduces the export time, it also allows an earlier start of the import. Because of the complex handling of split tables, the use of MIGMON is mandatory. The resulting “<table>.WHR” file requires further splitting into the “<table>-n.WHR” format (WHERE SPLITTER). If the parallel import into a single table is not possible on a particular database type, a sequential import of split tables can be forced by defining MIGMON load groups. Please check the respective system copy manual and related notes for current limitations. Even if the parallel import into a single table is not supported on your database, the overall time saving from the parallel export itself is significant enough. SAP Note 952514: Using the table splitting feature
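The idea of turning one table into several roughly equal WHERE ranges can be sketched for the simplest case of an evenly distributed numeric key. This is only an illustration of the principle; R3TA itself analyzes the real data distribution and handles arbitrary key types. The column name and range below are invented:

```python
def where_conditions(column, lo, hi, n):
    """Split the numeric key range [lo, hi] into n WHERE conditions
    that each select roughly the same number of rows, assuming an
    even key distribution."""
    step = (hi - lo + 1) // n
    bounds = [lo + i * step for i in range(n)] + [hi + 1]
    return [
        f'"{column}" >= {bounds[i]} AND "{column}" < {bounds[i + 1]}'
        for i in range(n)
    ]

for cond in where_conditions("BELNR", 1, 1000000, 4):
    print(cond)
```

Each printed condition corresponds to one R3LOAD process; together the ranges cover the whole table exactly once, which is the property the restart safety checks described later depend on.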

Figure 218: Oracle PL/SQL Table Splitter
The PL/SQL table splitter analyzes a given table and returns a set of WHERE conditions that each select approximately the same number of rows. For each WHERE condition, one R3LOAD can be started. Normally the PL/SQL script is faster than R3TA, as it uses Oracle-specific features. The resulting *.WHR files can be used without further splitting (no WHERE SPLITTER required).
SAP Note 1043380: Efficient Table Splitting for Oracle Databases (the current PL/SQL table splitter script is attached to the note)
Specific ROWID table splitting limitations:
• ROWID table splitting MUST be performed during downtime of the SAP system. No table changes are allowed for ROWID-split tables after the ranges have been calculated and the export was completed. Any table change before the export requires a recalculation of the ROWID ranges.
• ROWID-split tables MUST be imported with the “-loadprocedure fast” option of R3LOAD.
• ROWID table splitting works only for transparent and non-partitioned tables.
• ROWID table splitting CANNOT be used if the target database is a non-Oracle database.

Figure 219: Table Splitting in SAPINST ≥ NW04
Table splitting is a task that is done before the export. The “split_input.txt” file must specify the tables to split and into how many parts. Note the different input formats for R3TA and the Oracle PL/SQL table splitter; check the corresponding system copy guide. The “R3ta_hints.txt” file contains predefined split fields for the most common large tables. More tables and fields can be inserted with an editor. The file has to be located in the directory in which R3TA will be started. If “R3ta_hints.txt” is found and contains the table to split, the predefined field is used; otherwise R3TA analyzes each field of the primary key to find the best matching one. “R3ta_hints.txt” is part of the R3TA archive, which can be downloaded from the SAP Marketplace if it is not already on the installation media.

CAUTION: When doing a system copy with a change of the code page (non-Unicode to Unicode; 4102 to 4103; 4103 to 4102), make sure not to use a WHERE condition that includes the PAGENO column for cluster tables (e.g. CDCLS, RFBLG, …).
The resulting “*.WHR” files are written into the sub-directory DATA of the specified export directory. Table splitting will take place if the specified export directory is the same as the one used for the R3LOAD export later on. The “whr.txt” file contains the names of the split tables. It can be used as an input file for the package splitter to make sure that each split table has its own *.STR file. It depends on the SAPINST release whether a database type can be selected or not. SAPINST 7.02 can make use of the Oracle PL/SQL splitter if the database type Oracle was selected; radio buttons allow you to choose between the R3TA and the PL/SQL table splitter.

Figure 220: Example of an R3TA Based Table Splitting
The above example shows the R3TA WHERE file creation for an Oracle database. The CKIS.STR file is provided on the command line to tell R3TA which fields belong to the primary key. R3TA generates a CKIS.WHR file containing the computed R3LOAD WHERE conditions, a set of files to create a temporary index, and a further set of files to drop the temporary index. It must be decided on an individual basis whether it makes sense to create an additional index or not.

Figure 221: R3TA Example: Create Temporary Index (Optional)
Depending on the database type, database optimizer behavior, table type, table field, or table size, a temporary index can improve the R3LOAD data selection considerably. To find out whether a temporary index makes sense, a SQL EXPLAIN statement can help to check the database optimizer cost factor for the data to select. Indexes should be checked, for example, on a copy of the productive system.

The corresponding system copy guide describes how to create or delete R3TA related indexes.

Figure 222: R3TA Example: Drop Temporary Index
If the temporary index does not improve the R3LOAD export, it can be dropped using the predefined files or with SQL commands directly.

Figure 223: R3TA Example: WHERE Condition File CKIS.WHR
R3TA writes all WHERE conditions for a table into one single file. It must be split into pieces to utilize a parallel export with MIGMON. If exactly the requested number of splits cannot be achieved, more or fewer WHERE conditions may be created. In the example above, 10 splits were requested but R3TA created 11.

Figure 224: R3TA Example: CKIS.WHR Splitting
Each WHERE condition must be put into a separate file, otherwise the MIGMON mechanism to support table splitting would not work as intended. The WHERE splitter is part of the JAVA package splitter archive. When SAPINST is used, it is called automatically; if R3TA was called directly, the WHERE splitter must be called manually. A description of the WHERE splitter usage is available in the splitter archive.
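The mechanics of the one-condition-per-file step can be sketched as follows. This is a hypothetical simplification that assumes one WHERE condition per line in the input file; the real *.WHR format may carry additional header lines, so treat it as an illustration of the naming scheme only:

```python
import os
import tempfile

def split_whr(whr_path, out_dir):
    """Write each WHERE condition from <table>.WHR into its own
    <table>-<n>.WHR file, one condition per file, as MIGMON's
    table-splitting mechanism expects."""
    table = os.path.splitext(os.path.basename(whr_path))[0]
    with open(whr_path) as f:
        conditions = [line.strip() for line in f if line.strip()]
    names = []
    for i, cond in enumerate(conditions, start=1):
        name = f"{table}-{i}.WHR"
        with open(os.path.join(out_dir, name), "w") as out:
            out.write(cond + "\n")
        names.append(name)
    return names

# demo: two conditions become CKIS-1.WHR and CKIS-2.WHR
work = tempfile.mkdtemp()
src = os.path.join(work, "CKIS.WHR")
with open(src, "w") as f:
    f.write('"MANDT" < \'200\'\n"MANDT" >= \'200\'\n')
print(split_whr(src, work))
```

The generated <table>-n.WHR names match the package names MIGMON later uses for the per-split *.TSK and *.CMD files.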

Figure 225: Example of an Oracle PL/SQL Based Table Splitting
The above example shows the PL/SQL script based WHERE file creation for an Oracle database. As split strategy, you can choose between field and ROWID splitting. ROWID splitting can be used if the target database is Oracle (“-loadprocedure fast” must be used for the import). Unlike R3TA, the PL/SQL splitter creates *.WHR files directly usable by MIGMON.

Figure 226: Example: MIGMON Export Processing (1)
As soon as MIGMON finds “*.WHR” files, it generates the necessary “*.TSK” and “*.CMD” files automatically. The “*.TSK” files are created with the special option “-where” to put the WHERE condition into them. Make sure to have a separate “*.STR” file for each split table.

Figure 227: Example: MIGMON Export Processing (2)
For each “*.TSK” file a corresponding “*.CMD” file will be created.

Figure 228: Example: MIGMON Export Processing (3)
R3LOAD inserts the used WHERE condition into the *.TOC file, so it is easy to find out which part of a table is stored in which dump file. Furthermore, this information is used for a safety mechanism to make sure that the import runs with the same WHERE conditions as the export did (otherwise it could lead to a potential data loss in import restart situations). In the case of a mismatch, R3LOAD stops with an error.

Figure 229: Example: Directory Content after Export
To simplify the graphic above, no deep directory structures are shown (like SAPINST creates) and the files under “<export_dir>/DB” are not explicitly mentioned. R3LOAD is assumed to run in “/inst” and the export directory is named “/exp”. The “/inst/split” directory is used to run R3TA some days or hours before the database export. The R3TA WHERE file was split and the results were copied into “/exp/DATA”. In the case of the Oracle PL/SQL splitter, the WHERE files can be put directly into “/exp/DATA”. The export log file information of R3LOAD 7.20: "(DB) INFO: Read hintfile: D:\EXPORT\ABAP\DATA\CKIS-1.WHR" means that the respective “*.WHR” file is scanned for an optional database hint to be utilized during the data export (currently implemented for Oracle only, directing the optimizer to choose a certain execution plan).

Figure 230: Example: MIGMON Import Processing (1)
MIGMON automatically ensures that the “*.TSK” and “*.CMD” files for the table creation are generated before the data import. After successfully creating the table, the data load processes are started. This preparation phase is marked in the MIGMON “import_state.properties” file as “<table>-pre=+”. For databases that require a primary key before the import, it is created together with the table.

Figure 231: Example: MIGMON Import Processing (2)
After the table has been created successfully, multiple “*.TSK” files are generated, one for each WHERE condition. The “*.TSK” files are created with the special option “-where” to put the WHERE condition into them.

Figure 232: Example: MIGMON Import Processing (3)
For each “*.TSK” file, the corresponding “*.CMD” file is generated. Before starting the import, R3LOAD compares the WHERE conditions between the “*.TOC” and “*.TSK” files. R3LOAD stops with an error in the case of a mismatch.

Figure 233: Example: MIGMON Import Processing (4)
After starting, R3LOAD compares the WHERE conditions between the “*.TOC” and “*.TSK” files and terminates with an error in the case of a mismatch. A successful import is only possible if the WHERE condition used for the export is identical to the one used during the import; otherwise a possible restart would delete more or less data from a table, which can result in a data loss. In the case of the Oracle “-loadprocedure fast”, R3LOAD does not commit data until the import has finished successfully.

Figure 234: Example: MIGMON Import Processing (5)
After all parallel import processes for the split table have finished, the remaining tasks can be started: creating the primary key and the secondary indexes. This post-import phase is marked in the MIGMON “import_state.properties” file as “<table>-post=+”. For databases that create the primary index before the import, the remaining task is the secondary index generation only.

Figure 235: Example: Force Sequential Import of CKIS Splits
If the target database does not allow importing into the same table with multiple R3LOAD processes (because of performance or locking issues), MIGMON can be instructed to use a single R3LOAD process for a specified list of packages. In the above example, the file “import_order.txt” is read by MIGMON to set the import order. All packages belonging to group [CKIS], that is CKIS-1 to CKIS-11, are imported using one single R3LOAD process (jobNum = 1). This does not guarantee that CKIS-1 is imported before CKIS-2, but it makes sure that no two R3LOAD processes import into CKIS. A group can have any name, but it makes sense to name it after the table concerned. Besides the number of R3LOAD processes (jobNum=), the R3LOAD arguments for task file generation (taskArgs=) and import (loadArgs=) can be defined individually for each group. The total number of running R3LOAD processes is the sum of the number of processes specified in “import_monitor_cmd.properties” and the number of processes defined in “import_order.txt”.
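The process arithmetic from the last sentence can be stated as a one-line sketch (illustrative only; the jobNum values below are made up):

```python
def total_r3load_jobs(main_job_num, group_job_nums):
    """Maximum number of parallel R3LOAD import processes: the jobNum
    from import_monitor_cmd.properties plus the jobNum of every group
    defined in the order file."""
    return main_job_num + sum(group_job_nums)

# e.g. jobNum=6 in the properties file plus one [CKIS] group with jobNum=1
print(total_r3load_jobs(6, [1]))   # 7
```

This matters when sizing the target database: the group processes run in addition to, not instead of, the globally configured import processes.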

Figure 236: Example: Directory Content after Import
For Oracle:
• CKIS__DPI.TSK: create the table, but do not create the primary key or indexes, and do not load data
• CKIS__TPI.TSK: load data, but do not create the table, primary key, or indexes
• CKIS__DT.TSK: create the primary key and indexes, but do not create the table or load data
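Conceptually, each *.TSK file lists one task per line: an object type, the object name, an action, and a status. The lines below are an invented sketch of what the remaining task file for CKIS might contain; the real grammar differs per R3LOAD version:

```
T CKIS C ok     # table creation already done in the DPI step
D CKIS I ok     # data import already done in the TPI steps
P CKIS~0 C xeq  # create primary key: still to be executed
I CKIS~A C xeq  # create secondary index: still to be executed
```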

Lesson: DISTMON – Distribution Monitor for R3LOAD
Business Example
In a test run of a Unicode conversion project, the CPU load on the database server was identified as the bottleneck of the R3LOAD export. Running R3LOAD on a separate server would solve the problem. If more than one R3LOAD server is planned, it makes sense to utilize the Distribution Monitor.

Figure 237: DISTMON – Distribution Monitor
To distribute the R3LOAD CPU load across different systems, various types of application servers can be used, e.g. a mix of two 4-CPU systems and one 8-CPU system, or even systems running on different operating systems. As long as the operating systems and database client libraries are supported by the respective SAP release, a wide range of system combinations is possible. Nevertheless, from an administrative point of view, a homogeneous operating system landscape is easier to handle, because file system sharing can otherwise be complex.

Figure 238: DISTMON – Restrictions

DISTMON makes use of R3LOAD features that are not available in releases below 6.40. DISTMON can only handle the ABAP data export; JAVA stacks must be exported using JLOAD.

Figure 239: DISTMON Server Layout
The communication directory is used to share configuration and status information among the servers. It is physically mounted on one of the involved systems and shared with the other application servers.
Control files (*.STR, DDL*.TPL, export_monitor_cmd.properties, and import_monitor_cmd.properties) are generated here and distributed during the preparation phase.
For safety reasons, the export of each application server is written to locally mounted disks and not to NFS-mounted file systems.

Figure 240: DISTMON Distribution Process
Each MIGMON is started locally on the respective application server by DISTMON. That means each application server can run one MIGMON for the export and a second one for the import. Each MIGMON runs independently and does not know about the other MIGMONs in the case of a parallel export/import on the same server.
The status monitor allows monitoring of all application servers from a single user interface. The status information is read from the shared communication directory.

Lesson: JMIGMON – Migration Monitor for JLOAD
Business Example
You need to know the appropriate JMIGMON configuration scenario for a specific customer SAP System landscape.

Figure 241: JAVA Migration Monitor (JMIGMON) – Features
The very first implementation came with SAPINST 7.02.
The JLOAD package files must be created with JPKGCTL before starting the export or import.
The parallel export/import makes use of “*.SGN” files as in the MIGMON implementation.
JPKGCTL creates a “sizes.xml” file containing the package sizes to support an ordered export with the largest packages first.
Failed JLOAD processes can be restarted by changing the content of the file “export/import.jmigmon.states”.

Figure 242: JMIGMON – Net Configuration
The JMIGMON network configuration is useful in environments where file systems can be shared between the source and the target system. For consistency reasons, exports should always be written to local file systems/directories!
In the example above, the export directory and the network exchange directory are shared from the exporting system to the importing system. As soon as a package is successfully exported, the corresponding signal file (*.SGN) is created in the network exchange directory. Then the importing JMIGMON starts a JLOAD process to load the dump from the shared export directory.
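The *.SGN handshake can be pictured with a few lines of shell. The directory and package names are made up; the point is only that the exporter signals completion by creating an empty file, and the importer acts on that signal:

```shell
EXCHANGE=/tmp/net_exchange          # shared network exchange directory (example)
mkdir -p "$EXCHANGE"

# Exporting side: after a package is completely written, drop its signal file.
touch "$EXCHANGE/EXPORT_0.SGN"

# Importing side: only packages with an existing signal file are loaded.
for sgn in "$EXCHANGE"/*.SGN; do
  pkg=$(basename "$sgn" .SGN)
  echo "starting JLOAD import for package $pkg"
done
```

The empty signal file is enough because its mere existence marks the dump as complete; the dump itself is never read before the signal appears.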

Figure 243: JMIGMON – Stand-Alone Configuration
The JMIGMON “Stand-Alone Configuration” is useful in environments where the source and target systems do not have a network connection, or where the existing connection is too slow for a file transfer.
In the above example, the export and import directories are located on separate hosts in different locations. JMIGMON is used to start JLOAD processes only. The file transfer between the source and target system is done using transportable media.

Figure 244: JMIGMON – Control and Output Files
The “jmigmon.console.log” should be inspected in the case of export or import errors. More detailed information can be found in the respective job log.
The JMIGMON state files record which packages are already exported, currently in use, or terminated on error. Changing a package state from minus (“-”) to zero (“0”) forces JMIGMON to restart the job.
Example export.jmigmon.states:
EXPORT_METADATA.XML=+ finished
EXPORT_13_J2EE_CONFIGENTRY.XML=+ finished (split table)
EXPORT_14_J2EE_CONFIGENTRY.XML=+ finished (split table)
EXPORT_0.XML=+ finished
Example import.jmigmon.states:
IMPORT_METADATA.XML=+ finished
IMPORT_13_J2EE_CONFIGENTRY.XML=+ finished (split table)
IMPORT_14_J2EE_CONFIGENTRY.XML=? running (split table)
IMPORT_0.XML=- failed

Lesson: Table Splitting for JLOAD
Business Example
You need to know how JLOAD table splitting works and how to troubleshoot problems.

Figure 245: JPKGCTL – Package and Table Splitting
The “split” parameter defines the size limit for JLOAD packages. JPKGCTL adds tables to a package until the size limit is reached. The number of packages is related to the size limit parameter: a small size results in a large number of package files, whereas a large size creates only a few packages. If a table is equal to or larger than the given size, the package file will contain this single table only.
The “splitrulesfile” is only required if table splitting is planned. It can contain entries in three different formats. If only the number of splits is specified, all fields of the primary key are checked for the highest selectivity. If a single field is explicitly given, only this field is used for splitting. If multiple fields are provided, the most selective field is used.
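The three entry forms can be sketched roughly as follows. The syntax shown is purely illustrative (table and column names are invented); consult the JPKGCTL documentation for the real format:

```
# 1. number of splits only: the split column is picked from the primary key
J2EE_CONFIGENTRY%8

# 2. explicit column: exactly this field is used for splitting
BC_SLD_INST:INST_ID%4

# 3. several candidate columns: the most selective one is used
XI_AF_MSG:MSG_ID,SEQ_NO%6
```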

Figure 246: JPKGCTL (JSPLITTER) – Workflow
The “jsplitter_cmd.properties” file is generated by SAPINST according to user input. JPKGCTL connects to the database, reads the database object definitions, and calculates the sizes of the items to be exported. The tables are distributed to the JLOAD job files (packages). The distribution criterion is the package size as provided in the “jsplitter_cmd.properties” file.

After all packages are created, the “sizes.xml” file containing the expected export size of each package is written. JMIGMON will use the content to start the export in the package size order.
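The largest-first scheduling that JMIGMON derives from “sizes.xml” amounts to a descending sort over the package sizes. A shell sketch with invented package names and byte counts:

```shell
# Invented package list: "name bytes", one entry per line.
sizes='EXPORT_0.XML 1048576
EXPORT_13_J2EE_CONFIGENTRY.XML 734003200
EXPORT_14_J2EE_CONFIGENTRY.XML 734003200
EXPORT_METADATA.XML 4096'

# Sort by size, largest first; -s keeps the input order for equal sizes.
order=$(printf '%s\n' "$sizes" | sort -s -k2,2nr | awk '{print $1}')
echo "$order"
```

The two 700 MB packages start first, and the tiny metadata package is exported last, so the long-running exports overlap with as many of the short ones as possible.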

Figure 247: JPKGCTL (JSPLITTER) – Table Split Strategy
Table splitting is an optional task. It makes sense for large tables that significantly influence the export time. JPKGCTL is able to find a useful split column automatically, but then it will only check the fields of the primary key. If a different field should be used, it must be explicitly named in a split rules file. If the requested number of splits cannot be achieved, the number of splits is automatically reduced. If even this does not result in useful WHERE conditions, JPKGCTL gives up and no table splitting takes place.

Figure 248: EXPORT<PACKAGE>.XML of a Split Table
The WHERE condition is used to select the data of a specified range. For each job file of a split table, a separate JLOAD process is started.

Exercise 9: Advanced Migration Techniques
Solution 9: Advanced Migration Techniques
Task 1:
A customer database of an ABAP SAP System has 10 very large tables that are between 2 and 20 GB in size and some other large tables ranging from 500 to 2,000 MB. After the JAVA- or Perl-based Package Splitter was executed with option “-top 10” (move the 10 largest tables to separate *.STR files), 10 additional *.STR files exist, but they contain other tables than expected.
1. What can be the reason for this behavior?
Hint: Which file is read to get the table size? What happens to large tables?

a) R3SZCHK limits the computed table sizes to a maximum of 1.78 GB. Because of this, the package splitter catches the first 10 largest tables found in the *.EXT files. A 20 GB table will have the same *.EXT entry as a 2,000 MB table.

Task 2:
In preparation for an R3LOAD heterogeneous system copy, the customer was asked to install Perl 5 or a JAVA JDK on his Windows production system, but he declined because of restrictive software installation policies.
1. Nevertheless, what can be done to improve the export time?
a) The *.STR files can be split manually using an editor, or they can be transferred to another system where Perl or JAVA is available to perform the split. To do this, the export must be stopped after R3SZCHK has run, so that the *.STR files are available.
Caution: If the split is done in advance, be sure that no new changes have been made to the ABAP Dictionary since the initial creation of the *.STR files! Otherwise, you risk inconsistencies.

Task 3:
The Migration Monitor has a client and a server export mode.
1. What are the benefits of using the client mode?
a) No changes to the standard R3SETUP or SAPINST export process are required.
b) Automatic file transfer to the target system is possible.
c) The data load can be started automatically as soon as the first package is signaled to be ready.

Unit 9
Performing the Migration
Lesson: Performing an ABAP System Migration
Business Example
You need a quick overview of the steps executed in an ABAP System Migration.

Figure 249: Technical Migration Steps (ABAP-Based System)
Many migration steps can be performed in parallel in the source and target systems.
After step 3 (generate templates for DB sizes) has been performed in the source system, you should be prepared to start step 8 (create the database in the target system). Once step 6 (file transfer) is complete, steps 7 and 8 should already have been performed in the target system.
In the case where MIGMON is used for a concurrent export/import, steps 4, 5, 6, 9, and 10 run in parallel.

Figure 250: Technical Migration Preparation (1)
Just before you start the migration, check all the migration-related SAP Notes for updates.

Figure 251: Technical Migration Preparation (2)
To reduce the time required to unload and load the database, minimize the amount of data in the migration source system. Before the migration, make sure to de-schedule all jobs in the source system. This avoids jobs failing directly after the first start of the migrated SAP System. The reports BTCTRNS1 (set jobs into suspend mode) and BTCTRNS2 (reactivate jobs) can be helpful. Check the corresponding SAP Notes and SAP System upgrade guides for further reference.
If the target system has a new SAP SID, release all corrections and repairs before starting the export.
If the database contains tables that are not in the ABAP Dictionary, check whether some of these tables also have to be migrated.

Figure 252: Technical Migration Preparation (3)

The execution of the report “SMIGR_CREATE_DDL” is mandatory for all SAP systems using non-standard database objects (BI/BW, SCM/APO). For NetWeaver 04 and later, the execution of “SMIGR_CREATE_DDL” is a must! Make sure not to make any changes to the non-standard objects after “SMIGR_CREATE_DDL” has been called!
If no database-specific objects exist, no <TABART>.SQL files will be generated. As long as the report terminates with the status “successfully”, everything is OK.
The “Installation Directory” can be any file system location. Copy the <TABART>.SQL files to the SAPINST export install directory or directly into the “<export_dir>/DB/<target_DBS>” directory. Follow the guidelines in the homogeneous/heterogeneous system copy manual.
Depending on the target database, additional options might be available, which can be selected in the field “Database Version”.
“Optional Parameters” allows the creation of a single <TABART>.SQL file for a certain TABART, or for a specific table only. The resulting <TABART>.SQL file will always have the name of the TABART. If the selected TABART or table is not a BW object, no <TABART>.SQL file will be created.
See SAP Notes:
• 771209 “NetWeaver 04: System copy (supplementary note)”
• 888210 “NetWeaver 7.00/7.10: System Copy (supplementary note)”

Figure 253: Generate *.EXT and *.STR Files
R3SETUP/SAPINST calls R3LDCTL and R3SZCHK. The runtime of R3SZCHK depends on the version, the size of the database, and the database type.
DBSIZE.TPL is created by R3SETUP from the information computed by R3SZCHK and stored in table DDLOADD.

Figure 254: Split *.STR Files and Tables
The generated *.STR and *.EXT files are split into smaller units to improve the unload/load times. R3SETUP calls the Perl script to split *.STR files. Depending on the version, SAPINST uses the JAVA- or the Perl-based Package Splitter. On large databases, table splitting will reduce the export/import runtime significantly.

Figure 255: Generate Export *.CMD and *.TSK Files

SAPINST/MIGMON calls R3LOAD to create task files. R3SETUP/SAPINST/MIGMON generates command files. If WHERE files exist, the WHERE conditions are inserted into the *.TSK files.

Figure 256: Export Database with R3LOAD
R3SETUP/SAPINST/MIGMON starts a number of R3LOAD processes. A separate R3LOAD process is started for each command file. The R3LOAD processes write the dump files to disk.
As soon as an R3LOAD process terminates (whether successfully or due to an error), R3SETUP/SAPINST/MIGMON starts a new R3LOAD process for the next command file.
Do not use NFS file systems as an export target for the dump files! Dump files can be unnoticeably damaged and cause data corruption!

Figure 257: Manual File Transfer (1)
EBCDIC R3LOAD control files created on AS/400 systems must be transferred in ASCII mode if the target system is to run on an ASCII-based platform.
In cases where dump files must be copied to transportable media, make sure that the files are copied correctly. It is better to spend additional time verifying the copied files against the original files than to spend several hours or even days transporting them to the target system, only to discover that some files were corrupted by the copy procedure used. Appropriate checksum tools are available for every operating system.
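A minimal checksum round trip, sketched for a Unix-like system (paths are examples; any checksum tool such as `sha256sum` or `cksum` works the same way): record checksums on the source side, copy the files, and re-verify them on the target side.

```shell
SRC=/tmp/exp_src   # export directory on the source side (example path)
DST=/tmp/exp_dst   # copy on the transport medium / target side (example path)
mkdir -p "$SRC" "$DST"
printf 'dump data\n' > "$SRC/SAPAPPL1.001"   # stand-in for a real dump file

# Source side: one checksum line per dump file.
( cd "$SRC" && md5sum ./* > /tmp/export.md5 )

# Transfer (here just a local copy).
cp "$SRC/SAPAPPL1.001" "$DST/"

# Target side: verify every copied file against the recorded checksums.
( cd "$DST" && md5sum -c /tmp/export.md5 ) && echo "transfer verified"
```

If any file was damaged in transit, `md5sum -c` reports it as FAILED and exits non-zero, so the problem surfaces before the import is started.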

Figure 258: Manual File Transfer (2)

The file “LABEL.ASC” is generated during the export of the source database. R3SETUP/SAPINST uses its content to determine whether the load data is read from the correct directory.Since 7.02, the SQLFiles.LST is generated by SMIGR_CREATE_DDL together with the *.SQL files.The *.CMD and *.TSK files are generated separately for export and import. Therefore, do not copy them!

Figure 259: Get Migration Key (1)
The migration key request must be made by the customer, because the customer has to accept the displayed migration key license agreement.
Check the migration key as soon as possible! All entries are case sensitive. Before opening a problem call, see SAP Note 338372.

Figure 260: Get Migration Key (2)
Since 4.6D, the migration key is identical for different SAP Systems of the same installation number.
The migration key must match the R3LOAD version. If asked for the SAP R/3 release, enter the release version of the R3LOAD used. If in doubt, check the log files.
Some systems use several different hostnames (e.g. in a cluster environment). The node name shown by “uname -a” or “hostname” should be the “DB Server Hostname”. Starting with 4.5A, always generate the migration key from the node name that is listed in the “(GSI) INFO” section of the R3LOAD export log (source system) and MIGKEY.log (target system). In some installations, the System ID can even be in lower-case letters, because it is obtained from the first three characters of “(GSI) INFO: dbname”! The R3LOAD log files of 3.1I and 4.0B do not contain information about the source system, as versions 4.5A and above do.
R3SETUP and SAPINST test the migration key by calling R3LOAD -K (upper-case K). The file “MIGKEY.log” contains the check results.
The migration key in NetWeaver 7.00 systems with the “old” SAP license installed (upgraded system) is different from that for the “new” SAP license. Check the corresponding system copy note for details.
See SAP Note 338372 “Migration key does not work” for further reference.
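To find the node name to enter in the key request, it helps to grep the export log for the “(GSI) INFO” section. The log excerpt below is fabricated for illustration; real R3LOAD logs differ in spacing and detail:

```shell
# Fabricated excerpt of an R3LOAD export log.
cat > /tmp/SAPAPPL1.log <<'EOF'
(GSI) INFO: dbname   = "PRD"
(GSI) INFO: vname    = "ORACLE"
(GSI) INFO: hostname = "dbsrv01"
EOF

# Pull the quoted hostname out of the (GSI) INFO section.
node=$(grep '(GSI) INFO: hostname' /tmp/SAPAPPL1.log | awk -F'"' '{print $2}')
echo "generate the migration key for host: $node"
```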

Figure 261: Install SAP and Database Software

Figure 262: Create Database
The size values for the target database that are calculated from the source database serve only as starting points. Generally, some values will be too large and others will be too small. Therefore, be generous in your database sizing during the first migration test run. The experience gained through the test migration is better than any advance estimate you could calculate, and you can always adjust the values in subsequent tests.

Figure 263: Generate Import *.CMD and *.TSK Files
SAPINST/MIGMON calls R3LOAD to create task files. R3SETUP/SAPINST/MIGMON generates command files. If WHERE files exist, the WHERE conditions are inserted into the *.TSK files.

Figure 264: Import Data with R3LOADR3SETUP/SAPINST/MIGMON starts the import R3LOAD processes.

Figure 265: Technical Post-Migration Activities (1)
The general follow-up activities are described in the homogeneous and heterogeneous system copy guides and their respective SAP Notes.
Before the copied system is started for the first time, the consistency between the ABAP Dictionary and the database dictionary is checked and updated. R3SETUP/SAPINST starts the program “dipgntab” for this purpose. All updates of the active NAMETAB are logged in the file “dipgntab<SAP SID>.log”. The summary at the end of this file should not report any errors!

Figure 266: Technical Post-Migration Activities (2)
In many cases, the change of the database system also includes a change in the backup mechanism. Make sure to become familiar with the changed or new backup and restore procedures.
After the migration, the SAP System statistics and the backup information of the source system can be deleted from the target database. For a list of the tables, see the system copy guide.

Figure 267: Technical Post-Migration Activities (3)
Report RADDBDIF creates database-specific objects (tables, views, indexes). RADDBDIF is usually called by R3SETUP/SAPINST via RFC (user DDIC, client 000) after the data is loaded.
After approval from the customer side, the SAP jobs that were set to suspend mode via report BTCTRNS1 can be rescheduled with BTCTRNS2.

Figure 268: Technical Post-Migration Activities (4)
The non-standard database objects (mainly BW objects) that were identified on the source system and were recreated and imported into the target system need some adjustments. The report RS_BW_POST_MIGRATION does this. For further reference, check SAP Note 777024 “BW 3.0 and BW 3.1 System copy” and/or read the corresponding chapter “Final Activities” in the homogeneous and heterogeneous system copy guide for 6.40 or higher. The report should be run regardless of whether a *.SQL file was used.
The data source system connection can be checked in transaction RSA1. The RFC parameters can be changed in transaction SM59.

Figure 269: Technical Post-Migration Activities (5)
The report variants SAP&POSTMGRDB and SAP&POSTMGR are pre-defined for system copies that change or do not change the database system, respectively. Run the report in the background, because the execution can take a while.
Invalidate Generated Programs: Generated programs can be database specific. To make sure that every program is re-generated according to the needs of the new database, the already generated programs are invalidated.
Adapt DBDIFF to New DB: Depending on the database type, more or fewer indexes are required. Table DBDIFF is adapted accordingly. No missing BW objects are shown in transaction DB02 afterwards.
Adapt Aggregate Indexes: runs CHECK_INDEX_STATE
Adapt Basis Cube Indexes: runs CHECK_INDEX_STATE
Generate New PSA Version: runs RS_TRANSTRU_ACTIVATE_ALL
Delete Temporary Tables: runs SAP_DROP_TMPTABLES
Repair Fact View: runs SAP_FACTVIEWS_RECREATE
L_DBMISC: Database-specific tasks (if defined for the current database)
Restriction to One Cube: restricts CHECK_INDEX_STATE to a single cube only

Figure 270: Technical Post-Migration Activities (6)
The tables in the SAP0000.STR file contain the generated ABAP programs (ABAP loads) of the SAP System. These loads are no longer valid after a hardware migration. For this reason, R3SETUP/SAPINST does not load these tables.
Each ABAP load is generated automatically the next time the program is called. The system will be slow until all commonly used programs have been regenerated. Use transaction SGEN (starting with Release 4.6B) to regenerate all ABAP loads. On versions before 4.6B, run transaction SAMT or report RDDGENLD. The report RDDGENLD requires the file REPLIST in the SAP instance work directory. To create the file REPLIST in the source system, call report RDDLDTC2.

Figure 271: Post-Migration Test Activities
Take care when setting up the test environment. To prevent unwanted data communication with external systems, isolate the system. External systems do not distinguish between migration tests and production access.

To develop a cutover plan, an existing checklist from a previous upgrade or migration can be a valuable source of ideas. To identify any differences between the original and the migrated system, involve end users as soon as possible.

Lesson: Performing a JAVA System Migration
Business Example
You need a quick overview of the steps executed in a JAVA System Migration.

Figure 272: Technical Migration Steps (JAVA-Based System)

Figure 273: Technical Migration Preparation
Just before you start the migration, check all the migration-related SAP Notes for updates.

Figure 274: Generate Template for Target DB Size

Figure 275: Collect Application Data from File System
If SAPINST does not recognize the installed application and its related files, no archives will be created. Make sure to use the right version of the installation CD, as mentioned in the appropriate SAP Notes regarding homogeneous and heterogeneous system copies.

Applications that are not recognized by SAPINST may require operating-system-specific commands to copy the respective directories and files to the target system. If this is the case, the corresponding SAP Notes will give instructions on how to deal with it. Copying other applications might require the installation of a certain support stack and a matching SAPINST.

Figure 276: Collect SDM Data
In SAP releases below 7.10, the SDM repository itself is installed in the file system and will be redeployed into the target system from the SDMKIT.JAR file.

Figure 277: JPKGCTL (JSPLITTER) only
JPKGCTL has been optionally used since SAPINST 7.02.
Packaged job files containing multiple tables are named “EXPORT_<n>.XML”, and job files for a single table only are named “EXPORT_<n>_<TABLE>.XML”. If a table was split, the resulting job files have the same name but different numbers.
For every export job file, an import job file is generated with the same name, but with “EXPORT” replaced by “IMPORT”.
The “sizes.xml” file contains the expected export size of each generated packaged job file. It helps JMIGMON to export the packages by size. The largest package is exported first, then the next smaller packages, and so on.

Figure 278: Export Database with JLOAD
Note: In versions without JPKGCTL, JLOAD generates the EXPORT.XML and IMPORT.XML itself.

Figure 279: File Transfer
The “LABELIDX.ASC” file and the “LABEL.ASC” files are generated during the export of the source database. SAPINST uses their content to determine whether the load data is read from the correct directory.
The “<export_dir>/APPS” directory might be empty if no applications are installed, that is, applications that keep their data in the file system. It is also possible that the application is not known to SAPINST.

Future releases may be able to create additional directories/files in the export directory, which are named “Others” in the above slide.

Figure 280: Install DB/SAP Software and Extend Database
In SAPINST versions below NetWeaver 04S, the database size is set using a default value.

Figure 281: Deploy SDM File System Data
The Software Deployment Manager holds its repository in the file system. For a re-installation, the content of SDMKIT.JAR is used, which contains the necessary file system components as collected in the source system. This step is no longer necessary in NetWeaver 7.10 and later.

Figure 282: Import Database with JLOAD
In versions without JPKGCTL, JLOAD generates the EXPORT.XML and IMPORT.XML itself.

Figure 283: Restore Application Data to File System
More and more JAVA-based software components will be integrated into the SAPINST system copy procedure. In the meantime, SAP Notes will give instructions on how to copy or extract data manually, if required.
Since 7.10, well-behaved JAVA programs should not write persistent data to the file system.

Figure 284: Technical Post-Migration Activities
The general follow-up activities are described in the homogeneous and heterogeneous system copy guides and their respective SAP Notes.
The license key of the source system is not valid for the target system. You will be required to provide a new one.
JAVA systems that include components that connect to an ABAP back end using the SAP JAVA Connector (SAP JCo), for example SAP BW or SAP Enterprise Portal, need to have their RFC destinations maintained.
After the system copy, the public-key certificates are invalid on the target system. You will need to reconfigure them.
Component-specific follow-up activities exist for SAP BW, Adobe Document Services, SAP Knowledge Warehouse, SAP ERP, and SAP CRM.

Figure 285: Post-Migration Test Activities
Take care when setting up the test environment. To prevent unwanted data communication with external systems, isolate the system. External systems do not distinguish between migration tests and production access. To identify any differences between the original and the migrated system, involve end users as soon as possible.

Exercise 10: Performing the Migration
Business Example
You need to request an OS/DB Migration Key from SAP. For that purpose, you need to know the right installation number for the key request.

Solution 10: Performing the Migration
Task 1:
In preparation for an R3LOAD system copy, the customer was asked about scheduled backups, scheduled SAP System jobs, external programs or interfaces that write directly to the database, and the planned SAP SID of the target system.
1. Why is it important to know which jobs are scheduled (SAP System or external jobs)?

a) While the export is running, only the SAP instance is shut down. External programs or interfaces that write directly into the database while the export is running can cause inconsistencies!
b) Scheduled database backups can shut down the database or decrease the export performance.
c) After the migration target system is started for the first time, scheduled jobs may be executed immediately; this can be harmful or harmless, depending on their nature. At this stage, the target system has not been properly configured. Jobs that need to be run for verification purposes may be executable only once.
Hint: The BTCTRNS1 and BTCTRNS2 reports are available to set all scheduled jobs to “suspended” status on the source system and to invoke them again on the target system. On earlier SAP System versions, critical jobs should be set to “planned” status before starting the export. If this is not possible, take suitable precautions in the target system.
2. In the case that the SAP SID will be changed during the system copy, which actions should be taken before the export?
a) If the SAP SID is changed, all open corrections and repairs should be released in the source system. If not, the transport system initialization (SE06) will close them without releasing the transports to the file system.

Task 2:
The export of a heterogeneous system copy runs on the central instance of an SAP System with the following configuration:
Source system: installation number 012004711
Standalone database server hostname: “dbsrv01”
Central instance server hostname: “cisrv01”
Target system: installation number 012000815
Central instance with database, hostname: “cidb001”
1. What would be the correct installation number to request the migration key?
a) The installation number of the source system is always used to generate the migration key. In this case, “012004711”.
2. Which hostnames must be provided when filling in the migration key request form in the SAP Service Marketplace?
a) The hostname of the system that R3LOAD is running on is of great importance. As the export has to be done on the central instance, the following combination of hostnames is needed to request the migration key:
Export system: “cisrv01”
Import system: “cidb001”
3. Where can information about the R3LOAD version and the hostnames used for export/import (especially in environments where systems have many network controllers and IP addresses, or are even clustered) be found?
a) The R3LOAD export and import log files contain the hostnames used. In the case of a migration key mismatch, always check these entries.
Hint: When requesting the migration key, make sure to enter the version of R3LOAD and not the version of the SAP (base) System. The R3LOAD version is mentioned in the log files. The versioning can be confusing on SAP releases running on a backward-compatible kernel.

Unit 10
Troubleshooting
This unit discusses special error situations. Some of them are quite rare, but they nevertheless occur! Note that the most important prerequisite for troubleshooting is a thorough understanding of the restart behavior of R3LOAD and JLOAD.

Unit Overview
Lesson: Troubleshooting

Business Example
Strange error situations sometimes occur that are not easy to understand. You want to understand their causes and how to avoid them.

Figure 286: R3LOAD – Unload Terminations
Are the migration tools compatible with the current SAP System or database version? Are there password problems? Are changes to the root user or the environment necessary before starting R3SETUP/SAPINST?
The active object definition in the ABAP Dictionary differs from the object definition in the database, or the tables do not exist in the database at all. For some tables, this is intentional and is caught by a special handling routine in R3LDCTL. These tables are usually recorded in the exception table DBDIFF and do not cause errors. Therefore, if an error occurs, it must involve a table whose data definition is unintentionally wrong. This situation must be corrected. Often, QCM* tables are involved.

Not enough space in the dump file system to unload the data. As a rule of thumb, the export can be started to a file system that has about 10%-15% of the database size. If no additional disk space is available, copy already finished dump files to a different location while the export is running.
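The rule of thumb translates into a one-line estimate; the database size below is an example value:

```shell
DB_SIZE_GB=500                     # example source database size in GB
LOW=$(( DB_SIZE_GB * 10 / 100 ))   # ~10% lower bound for the dump file system
HIGH=$(( DB_SIZE_GB * 15 / 100 ))  # ~15% upper bound
echo "plan ${LOW}-${HIGH} GB of dump space for a ${DB_SIZE_GB} GB database"
```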

For the Perl Package Splitter: Is at least Perl version 5 installed? Does the default search path point to the right Perl executable? Does the first line of the SPLITSTR.PL script contain the right path to Perl on your system? For the JAVA Package Splitter: Is the right JAVA version installed?
R3LOAD exports data in primary key order. Depending on the database used, more temporary database disk space may be required for sorting. Increase the database storage units that are used for sorting (e.g. PSAPTEMP).
Related SAP Note: 9385 “What to do with QCM tables”

Figure 287: R3LOAD – Load Terminations
Not enough temporary database disk space for index creation (sorting): increase the database storage units that are used for sorting (e.g. PSAPTEMP), or reduce the number of parallel running R3LOAD processes.
Oracle rollback segment problems: restart R3SETUP/SAPINST and try again. Alternatively, you can reduce the number of parallel running R3LOAD processes, or implement the necessary measures in the database (Oracle). Older installation software may not activate Oracle undo management.

Are the migration tools compatible with the current SAP System and database version? Are changes to the root user environment necessary before starting R3SETUP/SAPINST?
Make sure that the database user can access the directories and files of the import file system.

Figure 288: Useful R3LOAD Environment Variables (1)
The R3LOAD warning level can have any value; it only matters that something is set.
The R3LOAD trace level is forwarded to the DBSL interface only. Useful values are between 1 and 4; the higher the value, the more output is written. Most of the output can only be interpreted by developers, but in the case of database connection problems, the trace can give valuable hints for troubleshooting.

Figure 289: Useful R3LOAD Environment Variables (2)
The R3LOAD warning level is useful in cases where files cannot be found or opened without an obvious reason. The contained list of environment variables can assist you in the analysis of database connection problems caused by wrong environment settings. In general, much more information is written than normal.

Figure 290: R3LOAD – Load Termination Example
Initial situation:
1. Table ATAB has been created successfully.
2. DB2 SQL error 551 occurs during an INSERT into table ATAB.
3. Error text: database user SAPR3 is not authorized to perform the INSERT.
R3LOAD response:

The R3LOAD process that is processing the file “SAPPOOL.CMD” cannot continue due to the SQL error; a termination occurs. R3SETUP/SAPINST receives a negative return code and starts a new R3LOAD process for the next command file.
Correcting the problem: grant access authorization for table ATAB to user SAPR3.

Figure 291: R3LOAD – Restart Example R3LOAD ≤ 4.6D
Situation after R3LOAD has been started again:
1. R3LOAD reads the last entry in the “SAPPOOL.LOG” file. Because the error occurred during the load, and not while creating the table, the table contents must be deleted first. To do this, R3LOAD executes the SQL statement “DELETE FROM”. In the above case, no data has been loaded yet, which explains the SQL error 100 (row not found).
2. Restart complete. Data will be loaded.

Figure 292: R3LOAD – Restart Example R3LOAD ≥ 6.10
Situation after R3LOAD has been started again:
1. R3LOAD reads the first task with status “err” or “xeq” from the “SAPPOOL.TSK” file. Because the error occurred during the load, and not while creating the table, the table contents must be deleted first. To do this, R3LOAD executes the truncate/delete SQL statement that is defined in the DDL<DBS>.TPL file.
2. Restart complete. Data will be loaded.
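The restart logic of R3LOAD ≥ 6.10 can be sketched as a scan for the first unfinished task. This is a minimal model, assuming a simplified four-column task line (object type, name, action, status); the real *.TSK file format is SAP-internal and may carry additional columns.

```python
def first_restart_task(tsk_lines):
    """Return the first task whose status is 'err' or 'xeq', i.e. the
    point where R3LOAD >= 6.10 resumes work (simplified model)."""
    for line in tsk_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[3] in ("err", "xeq"):
            return tuple(parts[:4])
    return None  # all tasks are 'ok': nothing left to do

tasks = [
    "T ATAB C ok",     # table ATAB created successfully
    "D ATAB I err",    # data load for ATAB failed -> restart here
    "P ATAB~0 C xeq",  # primary key creation not yet attempted
]
restart = first_restart_task(tasks)
```

Because the failed task is a data load, the restart first empties the table with the truncate/delete statement from DDL<DBS>.TPL before loading again.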

Figure 293: Duplicate Key at Import Time
In most cases, duplicate keys are caused by unexpected terminations at export or import time. The following pages provide reasons and solutions. Some tables might have no primary key in the source database but do have one in the ABAP Dictionary. This may or may not be intentional; ask the customer for the reasons and/or check for SAP Notes.

No *.SQL file was found by R3LOAD, or SMIGR_CREATE_DDL was not called on the source system.
In the source database, silently corrupted primary keys can lead to data records being exported twice. In this case, the number of records in the *.TOC file will be larger than the number of rows reported by a SELECT COUNT(*) statement. Verify and repair the primary key, then export the table again.
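The comparison described above can be expressed as a simple check. This is an illustrative helper of our own; it assumes you have already extracted the record count from the *.TOC file and the row count from a SELECT COUNT(*) on the source table.

```python
def toc_vs_db(toc_record_count, db_row_count):
    """Compare the record count from the *.TOC file with the row count
    reported by the database. More TOC records than rows hints at
    doubly exported data caused by a corrupted primary key."""
    if toc_record_count > db_row_count:
        return "suspect: records exported twice, check the primary key"
    if toc_record_count < db_row_count:
        return "suspect: rows missing from the export"
    return "ok"

status = toc_vs_db(toc_record_count=100_123, db_row_count=100_000)
```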

SAP ABAP Systems do not write data with trailing blanks into table fields, but if external programs write directly into SAP System tables, trailing blanks can be inserted. R3LOAD always exports table data without trailing blanks! If similar data exists in the source database both with and without trailing blanks (for example, because part of the data was modified by the SAP System), this will cause duplicate key errors at import time. Clean up the source table and export again.
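The mechanism can be illustrated in a few lines: two key values that differ only by a trailing blank collapse to the same value once trailing blanks are stripped, which is exactly what produces the duplicate key at import time (simplified illustration, not actual R3LOAD code).

```python
# Two rows whose key field differs only by a trailing blank:
rows = ["0001CUSTOMER ", "0001CUSTOMER"]

# R3LOAD exports table data without trailing blanks, so both rows
# collapse to the same key value in the dump:
exported = [r.rstrip() for r in rows]

# True -> the import will fail with a duplicate key error
has_duplicates = len(exported) != len(set(exported))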

Figure 294: Power Failure / OS Crash at Export Time (1)
In the case of an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the open files, or what happened to the file as a whole. The slide above describes a rare situation, but it can happen.
Exports into a network-mounted file system can lead to similar symptoms if the network connection breaks.
Abnormal terminations are caused by external events; they are not terminations caused by a database error or file permission problems.

Figure 295: Power Failure / OS Crash at Export Time (2)
In the above example, a power failure or operating system crash occurred. As the operating system could not flush all of the file buffers to disk, the result was a mismatch between the dump file and the *.TOC or *.TSK file content. R3LOAD had already exported TABLE08 and the *.TOC or *.TSK file was updated, but the data was not yet written by the operating system into the dump file. The *.TOC file records block 48 as the last data block in the dump file.
R3LOAD until 4.6D: After restarting the export process, R3LOAD looks for the last exported table in the *.TOC file, which is TABLE08. R3LOAD now opens the dump file and seeks to the next write position, block 49 (which is beyond the end of file in this case). The next table to be read from the *.STR file is TABLE09, which will be stored at block 49 and so on. The gap between block 42 (last physical write) and block 49 contains random data. R3LOAD versions since 4.5A create a block checksum (CRC) to identify corrupted data; earlier versions will try to load the data and will usually stop with an error while uncompressing the data, or stop with duplicate key errors.
R3LOAD 6.10 and above: As it is not clear whether the *.TSK file contains more or fewer entries than the corresponding dump file, the use of the “merge” option may not be 100% safe and can thus lead to the same problems as in earlier R3LOAD versions.
This problem is rare, and it usually only happens to very small tables that are exported completely, where only a few blocks had to be flushed to the dump file and the *.TOC or *.TSK entry had already been written.
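The per-block checksum that lets R3LOAD versions since 4.5A detect such gaps of random data can be sketched as follows. This is a simplified model of our own, using CRC32; the actual dump file format and checksum algorithm are SAP-internal.

```python
import zlib

def write_block(data: bytes) -> bytes:
    """Prepend a CRC32 checksum to a data block (simplified model of
    per-block checksums in a dump file)."""
    return zlib.crc32(data).to_bytes(4, "big") + data

def read_block(block: bytes) -> bytes:
    """Verify the checksum before accepting the block's payload."""
    stored, payload = int.from_bytes(block[:4], "big"), block[4:]
    if zlib.crc32(payload) != stored:
        raise ValueError("wrong checksum - invalid data")
    return payload

good = write_block(b"table data")

# A gap of random bytes left by an incomplete flush keeps the old
# checksum but different payload, so verification fails:
corrupted = good[:4] + b"\x00" * len(b"table data")
```

Reading `corrupted` with `read_block` raises the checksum error instead of silently loading garbage, which is the behavior described for the import side.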

Figure 296: Power Failure / OS Crash at Export Time (3)
In the above example, a power failure or operating system crash occurred. As the operating system could not flush all of the file buffers to disk, the result is a mismatch between the dump file and the *.TOC or *.TSK file content. R3LOAD had already exported the large TABLE02 and the *.TOC or *.TSK file was updated, but the last 8 data blocks were not yet written by the operating system into the dump file. The *.TOC file records block 320,000 as the last valid data block in the dump file.
R3LOAD until 4.6D:

After restarting the export process, R3LOAD looks for the last exported table in the *.TOC file, which is TABLE02. R3LOAD now opens the dump file and seeks to the next write position, block 320,001 (which is beyond the end of file in this case). The next table to be read from the *.STR file is TABLE03, which will be stored at block 320,001 and so on. The gap between block 319,992 (last physical write) and block 320,001 contains random data. R3LOAD versions since 4.5A create a block checksum (CRC) to identify corrupted data; earlier versions will try to load the data and will usually stop with an error while uncompressing the data, or stop with duplicate key errors.
R3LOAD 6.10 and above:
As it is not clear whether the *.TSK file contains more or fewer entries than the corresponding dump file, the use of the “merge” option may not be 100% safe and can thus lead to the same problems as in earlier R3LOAD versions.
This problem is rare, and it can happen to tables that are exported completely where only the last blocks had to be flushed to the dump file and the *.TOC or *.TSK entry had already been written.

Figure 297: Power Failure / OS Crash at Export Time (4)
Remove only the files of packages that were not completed at the time of the system crash.
If the *.TSK file was updated before all data was flushed to the dump file by the operating system, its content can be misleading.
Task files can be re-created by using the same R3LOAD command line as shown in the *.LOG file.
In the described case, it is not recommended to use the “merge” option to restart without repeating the export of the involved R3LOAD processes from scratch.

Figure 298: R3LOAD – Export Error due to Space Shortage (1)
In the described case, it is not recommended to use the “merge” option to restart without repeating the export of the involved R3LOAD processes from scratch.
The same scenario can happen if an export process is writing to an NFS file system while a network error occurs.
RFF: cannot read from file error.
RFB: cannot read from buffer error.

Figure 299: R3LOAD – Export Error due to Space Shortage (2)
SAP Note: 769476 “Danger of inconsistencies”
The above export log shows unload terminations due to a shortage of space in the file system. After increasing the space of the file system, the export will restart and finish without further problems. At import time, R3LOAD will stop with a checksum error.

Figure 300: Export Rules for R3SETUP / SAPINST / MIGMON
The R3LOAD export process is very sensitive, so be sure to prevent any disturbance.
If the R3LOAD export was done by someone else and you are responsible for the import only, make sure you receive the export logs along with the export dump files. In the case of import errors caused by dump corruption, the export logs should be examined for troubleshooting.

Figure 301: Power Failure / OS Crash at Import Time (1)

Figure 302: Power Failure / OS Crash at Import Time (2)
In the case of an OS crash or power failure, it is difficult to ascertain which OS buffers were flushed to the open files, or what happened to the file as a whole. In the above example, a power failure or operating system crash occurred. As the operating system could not flush all of the file buffers to disk, the result is a mismatch between the database content and the *.LOG or *.TSK file. R3LOAD had already imported TABLE08, but the *.LOG or *.TSK file contained only the information that TABLE05 and its primary key were created.
R3LOAD until 4.6D:
The restart will try to create TABLE06, but it already exists, so R3LOAD stops with an error. A restart beginning with the data load can result in duplicate keys. Another possible scenario is that the first table is handled correctly and the problem first arises with the second table; this depends on the *.LOG file content.
R3LOAD 6.10 and above:

The merge of the *.TSK and *.BCK files is necessary. All entries that are not marked as successfully executed will be set to error (“err”), and R3LOAD will drop or delete objects and data first before restarting a task.
This usually only happens to small tables. As databases are designed to write data in a safe way, the R3LOAD import is less critical in the case of power failures or operating system crashes than the export of data.
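The "repeat anything uncertain" principle of the *.TSK / *.BCK merge can be sketched as follows. This is a simplified model of our own, representing task files as dictionaries of task name to status; the real merge logic is internal to R3LOAD.

```python
def merge_tsk_bck(tsk, bck):
    """Merge current (*.TSK) and backup (*.BCK) task states: take the
    union of both and downgrade every task that is not recorded as
    finished ('ok') to 'err', so that the object is dropped/deleted
    and the task repeated (illustrative model only)."""
    merged = dict(bck)   # backup copy: last known state
    merged.update(tsk)   # current file may be newer but unreliable
    return {task: ("ok" if status == "ok" else "err")
            for task, status in merged.items()}

tsk = {"ATAB:create": "ok", "ATAB:load": "xeq"}
bck = {"ATAB:create": "ok"}
result = merge_tsk_bck(tsk, bck)
```

Here the interrupted load of ATAB ends up as "err", so the restart deletes the table contents and loads the data again, while the completed create step is not repeated.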

Figure 303: Power Failure / OS Crash at Import Time (3)

Figure 304: Duplicate Key Problem after Restarting Import (1)
In a restart scenario where the import of a large table fails and the SQL command that deletes the entire table content has also failed with an error, R3LOAD assumes that the table has been emptied and restarts the import. The creation of the primary key then stops with a duplicate key error. This kind of problem was often caused by Oracle “Snapshot too old” situations. As R3LOAD 6.x uses the Oracle “TRUNCATE” statement to delete data, the described error should not happen any more. Nevertheless, other database errors can lead to similar erroneous restart situations.

Figure 305: Duplicate Key Problem after Restarting Import (2)
Database-specific delete commands (for example, the Oracle command TRUNCATE) can work faster than the standard SQL command DELETE. When manipulating the “<PACKAGE>.LOG” files, be very cautious and examine the results carefully. Do not modify the *.LOG files unless absolutely necessary. For verification, count the table rows after the import.

Figure 306: Duplicate Key Problem after Restarting Import (3)
For verification, count the table rows after the import.
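The row-count verification can be sketched with a small script. SQLite stands in here for the target database, and the table name and expected count are invented for the example; in practice you would run SELECT COUNT(*) on the real target table and compare it with the record count from the export side (for example, the *.TOC file).

```python
import sqlite3

# Expected row count taken from the export side (hypothetical value).
expected_rows = 3

conn = sqlite3.connect(":memory:")  # stand-in for the target database
conn.execute("CREATE TABLE atab (k INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO atab VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

imported_rows = conn.execute("SELECT COUNT(*) FROM atab").fetchone()[0]
match = (imported_rows == expected_rows)  # True -> import looks complete
```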

Figure 307: Corrupted R3LOAD Dump Files (1)
RFF = read from file
RFB = read from buffer

(RFF) ERROR: SAPxxx.TOC is not from same export as SAPxxx.001
The dump file is corrupted, or files of different exports have been accidentally mixed.
(RFF) ERROR: buffer (... KB) too small (the figure is larger than 10,000)
R3LOAD read an invalid buffer size from the dump file. The buffer is used to load a certain number of data blocks for decompression. Typical buffer sizes are only a few MB.
(RFB) ERROR: CsDecompr rc = -1
The buffer data cannot be decompressed.
(RFB) ERROR: wrong checksum – invalid data
The checksum of the loaded data blocks is wrong.
See SAP Notes:
143272 R3LOAD: (RFF) ERROR: buffer (xxxxx kB) to small
438932 R3LOAD: Error during restart (export)

Figure 308: Corrupted R3LOAD Dump Files (2)
Analysis: check the export log files. Did anything unusual take place during the export? Was there a restart situation?
Use different checksum tools to compare original and copied files, or use different algorithms, if possible.
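Comparing the original and copied dump files with more than one checksum algorithm, as suggested above, can be sketched like this. For simplicity the example hashes in-memory byte strings; in practice you would read the files in chunks.

```python
import hashlib

def file_digests(data: bytes):
    """Compute two independent checksums over the same content; if one
    tool or algorithm misbehaves, the second comparison still catches
    a difference."""
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

original = b"dump file content"
copy = b"dump file c0ntent"   # a single changed byte after the copy

same = file_digests(original) == file_digests(copy)  # False -> bad copy
```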

Figure 309: SICK – System Installation Check Failed
Transaction SICK detects errors that generally indicate an incorrect SAP Basis installation.

Figure 310: DB02 – ABAP DDIC / Database Consistency Check

Some required tables are created by external programs, such as SAPDBA/BRCONNECT.
Some database-specific objects are created in the last step of the system copy via RFC programs. Did they really finish successfully?
If tables or indexes are missing, check whether there are SAP Notes related to these issues.
The only 100% reliable way to check whether a table from a package file exists in the database is to compare the database dictionary with the content of the *.STR files.
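The comparison of *.STR content against the database dictionary can be sketched as a set difference. This assumes the common "tab: <NAME>" entry syntax in *.STR files as a simplification; real *.STR files contain further sections (fields, keys, and so on), and the dictionary table set here is a hypothetical example.

```python
def tables_in_str(str_lines):
    """Collect table names from *.STR content, assuming 'tab: <NAME>'
    entries (simplified view of the real *.STR format)."""
    return {line.split()[1] for line in str_lines if line.startswith("tab:")}

str_file = ["tab: ATAB", "fld: TABKEY", "tab: DOKTL"]
db_dictionary = {"ATAB"}      # tables actually found in the database

# Tables listed in the package but missing from the database:
missing = tables_in_str(str_file) - db_dictionary
```

A non-empty `missing` set points at tables that were never imported and need investigation.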

Figure 311: R3LOAD – Load Termination due to Space Shortage
Be generous with database space during the first test migration (as a rule of thumb, add 20% extra space, for instance). Adjustments for the production migration can be derived from the results of the test migrations.

Figure 312: Nametab Import Problem (Unicode Conversion)

Caution: Never use the Perl-based Package Splitter when performing a Unicode Conversion!
Problem: Executing “R3trans -d” returns “No TADIR in this system”, and the trans.log file contains the warning “The HEAD entry in TADIR is missing”. When using the Perl- or JAVA-based Package Splitter (version 1), the Active Nametab tables may be separated into different *.STR files. During the Unicode Conversion, the Active Nametab tables must be specially treated by R3LOAD. This is possible only if the tables are imported in a certain order in the same *.STR file. Normally, SAPSDIC.STR contains the tables in the right order, but after a split, SAPSDIC.STR does not necessarily contain all Active Nametab tables anymore.
In NetWeaver 04S, the JAVA-based Package Splitter has been improved to write the ABAP Nametab tables into a single package called “SAPNTAB.STR”, making sure that the right import order is used. The revised JAVA package splitter is also available from the SAP Marketplace.
For more information, see SAP Note: 833946 “Splitting of STR files”.

Figure 313: Data/Index in Wrong Database Storage Unit

In many cases, tables and indexes have been moved to database storage units created by customers without maintaining the appropriate ABAP Dictionary tables. The result is that the files “DDL<DBS>.TPL” and *.STR contain the original SAP TABART settings.

The installation programs R3SETUP and SAPINST can change the content of the DDL<DBS>.TPL file before starting the import. Check the *.CMD files to find out which DDL<DBS>.TPL file has been used.
Oracle databases running on the “old” tablespace layout will automatically be installed with the reduced tablespace set on the target system.

Figure 314: JLOAD Export Restart Behavior
Because it does not make sense to continue with another table if a previous table failed to export, JLOAD stops if an error occurs. In NetWeaver 04 SR1, the “EXPORT.STA” file can be found under “/usr/sap/<SAPSID>/<Instance>/j2ee/sltools”; check the SAPINST log file for the file location in other versions.
In NetWeaver ’04 and NetWeaver 7.00, there is only a single EXPORT.STA file. Since NetWeaver 7.02, there can be multiple JLOAD job files (packages).

Figure 315: JLOAD Import Restart Behavior
Even if a table fails to import, it makes sense to proceed with other tables (thus saving time); a later restart will only deal with the erroneous objects. If the last unsuccessful action was loading data, the restart action will be to delete the data. If the last unsuccessful action was creating an object (table or index), the restart action will be to drop the table.
In NetWeaver ’04 SR1, the “IMPORT.STA” file can be found under “/usr/sap/<SAPSID>/<Instance>/j2ee/sltools”; check the SAPINST log file for the file location in other versions.
In NetWeaver ’04 and NetWeaver 7.00, there is only a single IMPORT.STA file. Since NetWeaver 7.02, there can be multiple JLOAD job files (packages).
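The mapping from the last failed action to the JLOAD restart action described above can be written down directly. The status values and action names are a simplified model of our own; the real *.STA file format is not documented here.

```python
def restart_action(last_action, status):
    """Derive the JLOAD import restart action: a failed data load is
    undone with 'delete data', a failed object creation with
    'drop table' (simplified model of the behavior in the text)."""
    if status != "err":
        return None                  # task finished, nothing to repeat
    if last_action == "load data":
        return "delete data"
    if last_action == "create object":
        return "drop table"
    return "repeat"                  # fallback for other task types

action = restart_action("load data", "err")
```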

Exercise 11: Troubleshooting
Business Example
You need to know which files are read during a restart and which precautions are required, depending on the cause of an error.

Solution 11: Troubleshooting
Task 1:
During the import of a table, R3LOAD 6.x stopped because the database returned an error. After the problem was fixed, R3LOAD was started again.
1. Which restart action will be automatically executed by R3LOAD?
a) R3LOAD will delete all data from the table before loading it again.
2. What is the exact database statement that R3LOAD will use for the restart?
a) R3LOAD will use the database command that is defined as the truncate table statement (section “trcdat:”) in the DDL<DBS>.TPL file. If nothing is defined, DELETE will be used as the default.
Note: The first guess might be wrong! Think about which R3LOAD files are read.

Task 2:
R3LOAD provides restart functionality on export and import.
1. Why could it be dangerous to restart a terminated export caused by “out of space” in the export file system?
a) It is not certain which R3LOAD file was written last. If the dump file contains fewer blocks than recorded in the *.TOC file, a restart will start beyond the end of file. The blocks between the end of file and the new write position contain random data.
2. Why isn’t it a problem to restart a terminated import caused by “out of space” in the database?
a) If R3LOAD stops because of a database import error, it can still close all files properly (that means the files have a consistent state). Afterwards, R3LOAD is able to perform the right restart activities.

Task 3:
JLOAD can restart a terminated export or import which failed because of errors.
1. Which files are read to find the restart position in the dump file?

a) JLOAD reads the content of the EXPORT[_<PACKAGE>].STA file to identify already exported tables. The dump file is opened and the blocks of already exported tables are skipped. JLOAD proceeds with writing from the position found. The next table to export is read from the export job file EXPORT[_<PACKAGE>].XML.
2. Which files are read to restart an import?
a) JLOAD reads the content of the IMPORT[_<PACKAGE>].STA file for erroneous entries. As a “continue-on-error” strategy is used, error-flagged tables of the IMPORT[_<PACKAGE>].STA file will be repeated, and the remaining tables will be read from the job file IMPORT[_<PACKAGE>].XML.

Unit 11: Special Projects
This unit can be inserted between any lessons after completing the first four units.

Lesson: Special Projects
Lesson Duration: 15 Minutes

Lesson Overview
Contents
• Special considerations for NZDT/MDS system copies and Unicode Conversions
• Technical description of the NZDT/MDS method

Business Example

Figure 316: NZDT – A Minimized Downtime Service (MDS)
The Near Zero Downtime method (NZDT) is an SAP Minimized Downtime Service (MDS) using an incremental migration approach, which has been developed to copy very large databases. Compared to the standard system copy procedure, it can reduce the technical system copy downtime significantly, to a few hours or even less. At the moment (May 2012), the NZDT method can be applied by specially trained SAP consultants only (this might change with future NZDT versions or procedures). It is suitable for heterogeneous system copies and Unicode Conversions (or a combination of both). SAP usually delivers this type of project for a fixed price.
In BW and SCM systems, structure changes of tables and indexes are quite common. Please discuss with SAP whether a customer-specific NZDT project is possible. Furthermore, other technical maintenance events such as upgrades or updates can be performed in the course of this type of Near Zero Downtime procedure.

Figure 317: NZDT/MDS – Features
The NZDT Workbench runs on a NetWeaver system with the DMIS Add-On installed. It is used to configure and control the migration process between the source and the target system. During the table synchronization, the data stream runs through the NZDT Workbench; in the case of a Unicode Conversion, it also performs the data translation to Unicode. Depending on the migration scenario, the NZDT Workbench is a separate system or is installed on the target system.
To log table changes, insert, update, and delete table triggers are created, which fire as soon as the content of the table is changed. The primary key of the changed record is written into a log table.
A freeze trigger is used to force a short dump if a transaction tries to change the content of a record; this sets the respective table to a read-only mode. Freeze triggers are created for tables where no change is expected, or where it was agreed with the customer to allow no changes during the NZDT migration process. This implies some system usage restrictions, but reduces the number of tables which must be synchronized during the online or offline delta replay, which helps to shorten the downtime.
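The trigger-and-log-table mechanism described above can be illustrated with SQLite triggers. The table and column names are invented for the example, and SQLite syntax stands in for the source database; this only demonstrates the principle of recording the primary key and change type of every insert, update, and delete, not actual NZDT code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab01 (pk INTEGER PRIMARY KEY, val TEXT);
CREATE TABLE tab01_log (pk INTEGER, op TEXT,
                        ts DATETIME DEFAULT CURRENT_TIMESTAMP);

-- record the primary key and the type of change, as the NZDT
-- logging tables do (illustrative triggers only)
CREATE TRIGGER tab01_ins AFTER INSERT ON tab01
BEGIN INSERT INTO tab01_log (pk, op) VALUES (NEW.pk, 'I'); END;
CREATE TRIGGER tab01_upd AFTER UPDATE ON tab01
BEGIN INSERT INTO tab01_log (pk, op) VALUES (NEW.pk, 'U'); END;
CREATE TRIGGER tab01_del AFTER DELETE ON tab01
BEGIN INSERT INTO tab01_log (pk, op) VALUES (OLD.pk, 'D'); END;
""")

conn.execute("INSERT INTO tab01 VALUES (1, 'a')")
conn.execute("UPDATE tab01 SET val = 'b' WHERE pk = 1")
conn.execute("DELETE FROM tab01 WHERE pk = 1")

log = conn.execute("SELECT pk, op FROM tab01_log ORDER BY rowid").fetchall()
```

Each change to `tab01` leaves one entry in `tab01_log`, which is exactly the information the synchronization jobs later use to transmit changed records.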

Figure 318: NZDT/MDS Scenario: Export Remaining Tables
The table insert, update, and delete triggers are usually implemented for the 100–200 largest tables, to cover about 90% of the database data for the online delta replay. The replay transfers nearly all data of the triggered tables with the incremental transfer mechanism before the downtime. During the downtime, a final synchronization takes place, transferring the records which were not already synchronized or which changed during the ramp-down. Usually, this takes only a few minutes. A table which was synchronized with the delta replay does not need to be exported by R3LOAD. R3LOAD is used to export the clone system, which is created after the triggers were established on the source system.
After completing the last delta replay, the remaining tables are exported/imported using R3LOAD during the downtime.
The technical downtime depends mainly on the source system database performance and on the amount/size of the remaining tables. The achievable downtime is quite small compared to a conventional database export.
The table structure of the selected tables must not change during the NZDT process. As a consequence, all transports into the source system must be examined for dangerous objects. This means that transports which would modify triggered tables must be postponed until the NZDT procedure is completed.

Figure 319: Prepare NZDT Workbench and Source System (S1)
Setup:
• Install a separate NetWeaver system with the DMIS Add-On, running the NZDT Workbench (based on SLO Migration Workbench technology).
• Install the DMIS Add-On on the source system.
Using the NZDT Workbench, the selected 100–200 large tables in the source system can now be prepared. Triggers and logging tables are created to record inserts, updates, and deletes in the source database. There must be enough free space for the logging tables.

Figure 320: Create Clone and Target System (S1)
After the triggers were established in the source system, the system is cloned (copied via backup/restore or advanced storage copy techniques). All tables are exported via R3LOAD and imported into the target system. The SAP System of the clone is never started; it is used for the export only. The target system is isolated before it is started for the first time, to avoid any data change. The database triggers and log tables are not active in the target system.
In cases where the NZDT method is used for a homogeneous system copy, it might be possible to use the clone system directly as the target system.

Figure 321: Online Delta Replay – Synchronize Table Data (S1)
The online delta replay table synchronization can be scheduled individually for each table, thus balancing the batch load on the source system. On heavily used systems, it might be advisable to install a separate application server for running the synchronization batch processes.
The logging tables contain the primary keys of changed records, additional information about the type of change (insert, update, delete), a time stamp, and the process status. Every time a row is inserted, updated, or deleted, a database trigger fires to update the logging table. Changes to the same record are optimized in such a way that only the last recorded change is transmitted.
The synchronization jobs scan the logging tables (here TAB01’ and TAB02’) for unprocessed records. These records are transmitted via the NZDT Workbench to the target system. A safe protocol makes sure that only those records are marked as completed (processed) which have been successfully updated in the target database. In the case of a Unicode Conversion, the translation to Unicode is done in the NZDT Workbench.
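The optimization "only the last recorded change per record is transmitted" can be sketched as a last-change-wins collapse of the log. The entry layout (timestamp, primary key, operation) is a simplified model of our own; in the real procedure, the current record content is fetched from the source table using the logged primary key.

```python
def latest_changes(log_entries):
    """Collapse a change log so that only the last recorded change per
    primary key remains, as described for the NZDT delta replay
    (simplified entries: (timestamp, primary_key, op))."""
    latest = {}
    for ts, pk, op in sorted(log_entries):   # process in time order
        latest[pk] = (ts, op)                # later entries overwrite
    return latest

log = [
    (1, "4711", "I"),
    (2, "4711", "U"),   # supersedes the earlier insert for key 4711
    (3, "0815", "D"),
]
to_send = latest_changes(log)
```

Only two transmissions remain for the three logged changes, which is what keeps the replay volume small.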

Figure 322: System Ramp Down (S1)
After the online replay is finished, the source system must be ramped down for the final offline delta replay. That means there must be no system activity anymore: users are locked out, no jobs are running, interfaces are stopped, and so on, to avoid further data changes.

Figure 323: Offline Delta Replay and Target Cleanup (S1)
After the source system was ramped down, the offline (final) delta replay takes place, which transfers the data of those tables where the online replay was not 100% complete, or where data was changed during the ramp-down. Afterwards, the source and target systems are stopped. The remaining tables are deleted from the target system to prepare for the R3LOAD import.

Figure 324: Export/Import Remaining Tables (S1)
The remaining tables are exported/imported by R3LOAD. The amount of data in the remaining tables must not be larger than what can be exported and imported within the customer-defined maximum technical downtime (for example, a few hours). After the import is completed, the technical migration is finished. The resulting target system can now be prepared for productive operation.

Figure 325: NZDT/MDS Scenario: Offline Delta Replay
All tables of the system must be classified in order to apply either insert, update, delete triggers or a freeze trigger to them. For tables with insert, update, delete triggers, it must be specified whether they should be synchronized during the online delta replay or the offline delta replay. The insert, update, delete triggers are usually implemented for thousands of tables; the rest get freeze triggers.
R3LOAD is used to export the clone system, which is created after the triggers were established in the source system. Tables with freeze triggers already have their final state after the import.
After completing the online delta replay, the remaining unsynchronized table data is transferred during the downtime offline delta replay.
The technical downtime depends mainly on the source system database performance and on the amount of delta data which must be synchronized offline. The achievable downtime is very small compared to a conventional database export, and even smaller than in the NZDT scenario where the remaining tables are exported by R3LOAD. Table structures must not be changed during the NZDT process. As a consequence, there must be strict transport rules in place, allowing only emergency transports into the source system. Because almost all tables have triggers (and many of them have freeze triggers), transports are difficult to manage. Most transports must be postponed until the NZDT procedure is completed.

Figure 326: Prepare NZDT Workbench and Source System (S2)
Setup:
• Install a separate NetWeaver system with the DMIS Add-On, running the NZDT Workbench (based on SLO Migration Workbench technology).
• Install the DMIS Add-On on the source system.
On the NZDT Workbench, the tables of the source system can now be classified, and the triggers and logging tables are created. There must be enough free space for the logging tables.

Figure 327: Create Clone and Target System (S2)
After the triggers were established in the source system, the system is cloned (copied via backup/restore or advanced storage copy techniques). All tables are exported via R3LOAD and imported into the target system. The SAP System of the clone is never started; it is used for the export only. The target system is isolated before it is started for the first time, to avoid any data change. The database triggers and log tables are not active in the target system.
In cases where the NZDT method is used for a homogeneous system copy, it might be possible to use the clone system directly as the target system.

Figure 328: Online Delta Replay – Synchronize Table Data (S2)
The online delta replay table synchronization can be scheduled individually for each table, thus balancing the batch load on the source system. On heavily used systems, it might be advisable to install a separate application server for running the synchronization batch processes.
The logging tables contain the primary keys of changed records, additional information about the type of change, and the synchronization status (insert, update, delete, processed). Every time a row is inserted, updated, or deleted, a database trigger fires to update the logging table. Changes to the same record are optimized in such a way that only the last recorded change is transmitted.
The synchronization jobs scan the logging tables (here TAB01’ and TAB02’) for unprocessed records. These records are transmitted via the NZDT Workbench to the target system. A safe protocol makes sure that only those records are marked as completed (processed) which have been successfully updated in the target database. In the case of a Unicode Conversion, the translation to Unicode is done in the NZDT Workbench.

Figure 329: System Ramp Down (S2)
After the online replay is finished, the source system must be ramped down for the offline delta replay. That means there must be no system activity anymore: users are locked out, no jobs are running, interfaces are stopped, and so on, to avoid further data changes.

Figure 330: Offline Delta Replay – Synchronize Remaining (S2)
After the source system is ramped down, the offline (final) delta replay takes place. It transfers the changes of those tables where the online delta replay was not 100% finished or where data was changed during the ramp-down, and of tables which could not be transferred online (for example, USR02). The offline delta replay is typically completed in less than one hour and finishes the technical migration. The resulting target system can now be prepared for productive operation.
Scenario 2 synchronizes almost all tables using the delta replay, except the ones with a freeze trigger. Because of the freeze triggers, the number of tables needing a transfer is smaller than in scenario 1, where only the largest tables are synchronized and the rest must be exported/imported (transferred) using R3LOAD.

Figure 331: R3LOAD Unicode Conversions

Unicode SAP systems require SAP Web AS 6.20 or above. The Unicode conversion is only applicable if a minimum Support Package level is installed; check the Unicode conversion SAP Notes for more information. The “OS/DB Migration Check” applies to Unicode conversions that also change the operating system or database system.

R3LOAD converts the data to Unicode while the export is running. Because it is not sufficient for R3LOAD to simply read the raw data, additional features are implemented regarding the data context. Since this context is only available in the source system, a Unicode conversion at import time is not supported for customer systems. A reverse conversion from Unicode to non-Unicode is not possible.

During the Unicode export, R3LOAD writes XML files which are used for final data adjustments in the target system (transaction SUMG).

In MDMP systems, not all tables provide a language identifier for their data. For this data, a vocabulary must be created to allow an automated conversion. The creation and maintenance of this vocabulary is a time-consuming task; the involvement of experienced consultants will shorten the whole process significantly.

Related SAP Notes:
0548016 Conversion to Unicode
1319517 Unicode Collection Note
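The reason the conversion needs the data context can be shown with a minimal Python example: the same raw byte from a non-Unicode table represents different characters depending on the code page it was written with, so the byte value alone is ambiguous.

```python
# The same non-Unicode byte decodes to different Unicode characters
# depending on the assumed source code page.
raw = b"\xc4"                        # one byte from a non-Unicode table

latin1 = raw.decode("latin-1")       # ISO 8859-1 -> 'Ä'
cyrillic = raw.decode("iso8859-5")   # ISO 8859-5 -> 'Ф'

# Without knowing the language context, there is no way to tell
# which of the two interpretations is correct.
assert latin1 != cyrillic
```

This is exactly why the context (language information) available only in the source system is required, and why the conversion happens at export time rather than at import time.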

Figure 332: Unicode Conversion Challenges

Very large databases, short downtime windows, and slow hardware might require an incremental system copy approach.

ABAP code must be reviewed using UCCHECK. Byte-offset programming or dynamic programming (data types determined dynamically during program execution) can require a lot of effort. UCCHECK is available as of Web AS 6.20.

The Unicode conversion of MDMP systems requires more effort than that of single code page systems (increasing with the number of installed code pages). MDMP/Unicode interfaces cause high effort; the recommendation is to minimize the number of language-specific interfaces, if possible. See SAP Note 745030 (MDMP - Unicode Interfaces: Solution Overview). For non-SAP interfaces, a detailed analysis of all existing interfaces is necessary.

Third-party products that are “SAP certified” are not automatically Unicode compliant as well. Therefore, the vendors need to be contacted. A specific Unicode certification is available; certified products are listed on the SAP Marketplace.
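The vocabulary idea for MDMP data without a language key can be illustrated with a small Python sketch. This is a simplified, hypothetical model (the structure and names are invented, not the SAP implementation): each word found in language-key-less tables is assigned a source code page, which then drives the automated conversion; words not in the vocabulary need manual post-processing.

```python
# Hypothetical MDMP vocabulary: raw word (bytes) -> assumed source code page.
vocabulary = {
    b"Stra\xdfe": "latin-1",            # German word, ISO 8859-1
    b"\xc5\xbb\xb5\xb1": "iso8859-5",   # Russian word, ISO 8859-5
}

def to_unicode(word: bytes) -> str:
    """Convert a word using its vocabulary entry; fail if no entry exists."""
    codepage = vocabulary.get(word)
    if codepage is None:
        # In a real conversion, such data would be flagged for manual
        # adjustment in the target system (transaction SUMG).
        raise LookupError("word not in vocabulary; needs manual maintenance")
    return word.decode(codepage)
```

The sketch also hints at why building the vocabulary is time-consuming: every distinct word must be classified before the automated conversion can handle it.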

Exercise 12: Special Projects

Business Example

You want to know how the SAP NZDT/MDS migration procedure technically synchronizes the table data between the source and the target system. You are also interested in the general Unicode conversion approach.

Solution 12: Special Projects

Task 1:

The SAP NZDT/MDS migration procedure is used to export large customer tables while the source system is online. Database triggers are implemented to allow table synchronization between the source and target system.

1. What is the task of the implemented database trigger during a table update?

a) As soon as the database has inserted, deleted, or updated a record, the trigger adds the primary key to the log table and appends information about the change: the operation (insert, update, or delete) and the process status.

2. How do the synchronization jobs recognize which records to update in the target database, and how does the synchronization work?

a) The synchronization job scans the log table for records having the status “unprocessed”. The found primary key is used to read the table data that must be transferred to the target system. After the transfer has been successfully completed and the receiving system has signaled “update successful”, the record status is changed to “processed” in the log table of the source system.

Task 2:

Unicode conversions of SAP MDMP systems (using multiple code pages) are more difficult than those of non-MDMP systems (using a single code page only).

1. What is the reason?

a) In a single code page system, every character has a unique related Unicode character. In an MDMP system, the meaning of a character depends on its language context. As this context is not always properly maintained, or is even unknown, time-consuming export preparation tasks are required to map the data to the right language.

2. Why is the Unicode conversion done at export time?

a) The table data in an R3LOAD dump file does not provide enough information for a customer system Unicode conversion. R3LOAD reads certain tables for additional Unicode conversion information during the export.

Feedback

SAP AG has made every effort in the preparation of this course to ensure the accuracy and completeness of the materials. If you have any corrections or suggestions for improvement, please record them in the appropriate place in the course evaluation.